TFLearn Manual  http://tflearn.org

I. Layers

(1) Core

1. Input_data
tflearn.layers.core.input_data (shape=None, placeholder=None, dtype=tf.float32, data_preprocessing=None, data_augmentation=None, name='InputData')

Used to feed data into the network. It takes a shape and creates a new placeholder, or uses an existing placeholder. The output is a placeholder Tensor with the given shape.

Arguments
shape: list of int. An array or tuple describing the shape of the input data. Required when no placeholder is provided. The first element of shape should be None (the batch size); if it is missing, it is added automatically.
placeholder: an existing placeholder to feed this layer (optional).
dtype: tf.type. Data type of the placeholder (optional). Default: float32.
data_preprocessing: A DataPreprocessing subclass object to manage real-time data pre-processing when training and predicting (such as zero-centering, std normalization...).
data_augmentation: DataAugmentation. A DataAugmentation subclass object to manage real-time data augmentation while training (such as random image crop, random image flip, random sequence reverse...).
name: str. A name for this layer.
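A minimal usage sketch, not taken from the manual: the 28x28x1 image shape and the ImagePreprocessing step are illustrative assumptions.

import tflearn
from tflearn.layers.core import input_data

# Optional real-time preprocessing, applied while training and predicting.
img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center()

# Placeholder for batches of 28x28 grayscale images. The leading None is the
# batch dimension; if omitted from `shape`, it is added automatically.
net = input_data(shape=[None, 28, 28, 1],
                 data_preprocessing=img_prep,
                 name='input')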
2. Fully_connected (fully connected layer)
tflearn.layers.core.fully_connected (incoming, n_units, activation='linear', bias=True, weights_init='truncated_normal', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='FullyConnected')

Input
2-D Tensor [samples, input dim]. If not 2-D, the input will be flattened first.

Output
2-D Tensor [samples, n_units].

Arguments
incoming: the incoming Tensor.
n_units: int. Number of units in this layer.
activation: str (name) or function. Activation applied to this layer. Default: 'linear' (linear activation).
bias: bool. If True, a bias is used.
weights_init: str (name) or Tensor. Weights initialization. Default: 'truncated_normal'.
bias_init: str (name) or Tensor. Bias initialization. Default: 'zeros'.
regularizer: str (name) or Tensor. Adds a regularizer to this layer's weights. Default: None.
weight_decay: float. Regularizer decay parameter. Default: 0.001.
trainable: bool. If True, the weights are trainable.
restore: bool. If True, this layer's weights are restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables are reused (shared).
scope: str. Defines this layer's scope (optional). A scope can be used to share variables between layers. Note that scope overrides name.
name: A name for this layer (optional). Default: 'FullyConnected'.

Attributes
scope: Scope. This layer's scope.
W: Tensor. Variable representing the weights.
b: Tensor. Variable representing the bias.
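A sketch of stacking fully_connected layers on an input placeholder; the 784-dim input, the layer widths and the regularizer choice are assumptions made for illustration.

import tflearn
from tflearn.layers.core import input_data, fully_connected

# Flat 784-dim input (e.g. flattened 28x28 images).
net = input_data(shape=[None, 784])
# Hidden layer: 64 ReLU units with L2 weight regularization.
net = fully_connected(net, 64, activation='relu',
                      regularizer='L2', weight_decay=0.001)
# Output layer: 10-way softmax.
net = fully_connected(net, 10, activation='softmax')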
3. Dropout
tflearn.layers.core.dropout (incoming, keep_prob, noise_shape=None, name='Dropout')

With probability keep_prob each element is kept and scaled up by 1 / keep_prob, otherwise it is set to 0. Used to prevent overfitting.

Arguments
incoming: the incoming Tensor.
keep_prob: float. The probability that each element is kept.
noise_shape: 1-D int Tensor. The shape of the randomly generated keep/drop flags.
name: A name for this layer (optional).

4. Custom layer
tflearn.layers.core.custom_layer (incoming, custom_fn, **kwargs)

A custom layer can apply any operation to the incoming Tensor or list of Tensors. The custom function is passed in as an argument, together with its own parameters (see the sketch after this item).

Arguments
incoming: A Tensor or list of Tensors. Incoming tensor.
custom_fn: A custom function, to apply some ops on the incoming tensor.
**kwargs: Some custom parameters that the custom function might need.
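A sketch combining dropout and custom_layer; scale_layer and its scale argument are made up here purely to show how extra keyword arguments are forwarded to the custom function.

import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout, custom_layer

# A user-defined op applied to the incoming tensor.
def scale_layer(incoming, scale=1.0):
    return incoming * scale

net = input_data(shape=[None, 784])
net = fully_connected(net, 64, activation='relu')
# Keep each unit with probability 0.8 during training.
net = dropout(net, 0.8)
# scale=0.5 is passed through **kwargs to scale_layer.
net = custom_layer(net, scale_layer, scale=0.5)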
5. Reshape
tflearn.layers.core.reshape (incoming, new_shape, name='Reshape')

Reshapes the incoming Tensor into the desired shape.

Arguments
incoming: A Tensor. The incoming tensor.
new_shape: A list of int. The desired shape.
name: A name for this layer (optional).

6. Flatten
tflearn.layers.core.flatten (incoming, name='Flatten')

Flattens the incoming Tensor to one dimension per sample.

Input
(2+)-D Tensor.

Output
2-D Tensor [batch, flatten_dims].

Arguments
incoming: Tensor. The incoming tensor.

7. Activation
tflearn.layers.core.activation (incoming, activation='linear', name='activation')

Applies an activation function to the incoming Tensor.

Arguments
incoming: A Tensor. The incoming tensor.
activation: str (name) or function (returning a Tensor). Activation applied to this layer. Default: 'linear'.
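A sketch chaining reshape, flatten and a standalone activation; the 28x28 input and the 128-unit layer are assumptions for illustration.

import tflearn
from tflearn.layers.core import input_data, reshape, flatten, fully_connected, activation

net = input_data(shape=[None, 28, 28])
# Add a channel dimension: [batch, 28, 28] -> [batch, 28, 28, 1].
net = reshape(net, new_shape=[-1, 28, 28, 1])
# Collapse back to [batch, 784] before a dense layer.
net = flatten(net)
# A linear fully_connected layer followed by a separate ReLU activation.
net = fully_connected(net, 128)
net = activation(net, activation='relu')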
8. Single unit
tflearn.layers.core.single_unit (incoming, activation='linear', bias=True, trainable=True, restore=True, reuse=False, scope=None, name='Linear')

A single (linear) unit.

Input
1-D Tensor [samples]. If not 1-D, the input will be flattened.

Output
1-D Tensor [samples].

Arguments
incoming: Tensor. Incoming Tensor.
activation: str (name) or function. Activation applied to this layer (see tflearn.activations). Default: 'linear'.
bias: bool. If True, a bias is used.
trainable: bool. If True, weights will be trainable.
restore: bool. If True, this layer's weights will be restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
scope: str. Defines this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
name: A name for this layer (optional). Default: 'Linear'.

Attributes
W: Tensor. Variable representing the weight.
b: Tensor. Variable representing the bias.
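A sketch of the layer in a toy linear regression; the data values are invented, and tflearn.regression / tflearn.DNN are TFLearn's usual training wrappers rather than part of the layer itself.

import tflearn
from tflearn.layers.core import input_data, single_unit

# Toy 1-D data: Y is roughly 2 * X.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.0, 9.8]

net = input_data(shape=[None])
# One linear unit: y = w * x + b.
net = single_unit(net)
net = tflearn.regression(net, optimizer='sgd', loss='mean_square',
                         metric='R2', learning_rate=0.01)
model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=1000, show_metric=True)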
9. Fully connected highway
tflearn.layers.core.highway (incoming, n_units, activation='linear', transform_dropout=None, weights_init='truncated_normal', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='FullyConnectedHighway')

Input
(2+)-D Tensor [samples, input dim]. If not 2-D, the input will be flattened.

Output
2-D Tensor [samples, n_units].

Arguments
incoming: Tensor. Incoming (2+)-D Tensor.
n_units: int. Number of units for this layer.
activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
transform_dropout: float. Keep probability on the highway transform gate.
weights_init: str (name) or Tensor. Weights initialization (see tflearn.initializations). Default: 'truncated_normal'.
bias_init: str (name) or Tensor. Bias initialization (see tflearn.initializations). Default: 'zeros'.
regularizer: str (name) or Tensor. Adds a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
weight_decay: float. Regularizer decay parameter. Default: 0.001.
trainable: bool. If True, weights will be trainable.
restore: bool. If True, this layer's weights will be restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
scope: str. Defines this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
name: A name for this layer (optional). Default: 'FullyConnectedHighway'.

Attributes
scope: Scope. This layer's scope.
W: Tensor. Variable representing the units' weights.
W_t: Tensor. Variable representing the units' weights for the transform gate.
b: Tensor. Variable representing the biases.
b_t: Tensor. Variable representing the biases for the transform gate.
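A sketch stacking a few highway blocks; the depth, the 64-unit width and the transform_dropout value are arbitrary illustrative choices.

import tflearn
from tflearn.layers.core import input_data, fully_connected, highway

net = input_data(shape=[None, 784])
# Project to the working width first; each highway block then keeps the same
# width so its carry gate can pass the input through unchanged.
net = fully_connected(net, 64, activation='elu')
for _ in range(3):
    net = highway(net, 64, activation='elu', transform_dropout=0.8)
net = fully_connected(net, 10, activation='softmax')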
10. One hot encoding
tflearn.layers.core.one_hot_encoding (target, n_classes, on_value=1.0, off_value=0.0, name='OneHotEncoding')

Converts integer class labels into binary (one-hot) vectors.

Input
The labels Placeholder.

Output
2-D Tensor, the encoded labels.

Arguments
target: Placeholder. The labels placeholder.
n_classes: int. Total number of classes.
on_value: scalar. A scalar defining the on-value.
off_value: scalar. A scalar defining the off-value.
name: A name for this layer (optional). Default: 'OneHotEncoding'.
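A small sketch of encoding integer labels inside the graph; the label values and the 10-class assumption are illustrative, and the snippet uses the TensorFlow 1.x session API that TFLearn builds on.

import tensorflow as tf
from tflearn.layers.core import one_hot_encoding

# Placeholder holding integer class ids.
labels = tf.placeholder(tf.int32, shape=[None], name='labels')
encoded = one_hot_encoding(labels, n_classes=10)

with tf.Session() as sess:
    print(sess.run(encoded, feed_dict={labels: [0, 3, 9]}))
    # Three rows of length 10, each with 1.0 at position 0, 3 and 9.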
11. Time distributed
tflearn.layers.core.time_distributed (incoming, fn, args=None, scope=None)

This layer applies a function to every timestep of the incoming Tensor. The first parameter of the user-defined function must be the input tensor for one timestep. Additional parameters for the function may be specified as a list in 'args'.

Examples
# Applying a fully_connected layer at every timestep
x = time_distributed(input_tensor, fully_connected, [64])
# Using a conv layer at every timestep with a scope
x = time_distributed(input_tensor, conv_2d, [64, 3], scope='tconv')

Input
(3+)-D Tensor [samples, timestep, input_dim].

Output
(3+)-D Tensor [samples, timestep, output_dim].

Arguments
incoming: Tensor. The incoming tensor.
fn: function. A function to apply at every timestep. This function's first parameter must be the input tensor per timestep. Additional parameters may be specified in the 'args' argument.
args: list. A list of parameters to use with the provided function.
scope: str. A scope to give to each timestep tensor. Useful when sharing weights.