tf.compat.v1.layers.experimental.keras_style_scope Use Keras-style variable management. @tf_contextlib.contextmanager tf.compat.v1.layers.experimental.keras_style_scope() All tf.layers and tf RNN cells created in this scope use Keras-style variable management. Creating such layers with a scope= argument is disallowed, and reuse=True is disallowed. The purpose of this scope is to allow users of existing layers to slowly transition to a Keras layers API without breaking existing functionality. One example of this is when using TensorFlow's RNN classes with Keras Models or Networks. Because Keras models do not properly set variable scopes, users of RNNs may either accidentally share scopes between two different models, or get errors about variables that already exist. Example: class RNNModel(tf.keras.Model): def __init__(self, name): super(RNNModel, self).__init__(name=name) self.rnn = tf.compat.v1.nn.rnn_cell.MultiRNNCell( [tf.compat.v1.nn.rnn_cell.LSTMCell(64) for _ in range(2)]) def call(self, input, state): return self.rnn(input, state) model_1 = RNNModel("model_1") model_2 = RNNModel("model_2") # OK output_1, next_state_1 = model_1(input, state) # Raises an error about trying to create an already existing variable. output_2, next_state_2 = model_2(input, state) The solution is to wrap the model construction and execution in a keras-style scope: with keras_style_scope(): model_1 = RNNModel("model_1") model_2 = RNNModel("model_2") # model_1 and model_2 are guaranteed to create their own variables. output_1, next_state_1 = model_1(input, state) output_2, next_state_2 = model_2(input, state) assert len(model_1.weights) > 0 assert len(model_2.weights) > 0 assert(model_1.weights != model_2.weights) Yields A keras layer style scope.
tensorflow.compat.v1.layers.experimental.keras_style_scope
tf.compat.v1.layers.experimental.set_keras_style Use Keras-style variable management. tf.compat.v1.layers.experimental.set_keras_style() All tf.layers and tf RNN cells created after Keras style has been enabled use Keras-style variable management. Creating such layers with a scope= argument is disallowed, and reuse=True is disallowed. The purpose of this function is to allow users of existing layers to slowly transition to the Keras layers API without breaking existing functionality. For more details, see the documentation for keras_style_scope. Note, once keras style has been set, it is set globally for the entire program and cannot be unset. Example: set_keras_style() model_1 = RNNModel(name="model_1") model_2 = RNNModel(name="model_2") # model_1 and model_2 are guaranteed to create their own variables. output_1, next_state_1 = model_1(input, state) output_2, next_state_2 = model_2(input, state) assert len(model_1.weights) > 0 assert len(model_2.weights) > 0 assert(model_1.weights != model_2.weights)
tensorflow.compat.v1.layers.experimental.set_keras_style
tf.compat.v1.layers.Flatten Flattens an input tensor while preserving the batch axis (axis 0). Inherits From: Flatten, Layer, Layer, Module tf.compat.v1.layers.Flatten( data_format=None, **kwargs ) Arguments data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, ..., channels) while channels_first corresponds to inputs with shape (batch, channels, ...). Examples: x = tf.compat.v1.placeholder(shape=(None, 4, 4), dtype='float32') y = Flatten()(x) # now `y` has shape `(None, 16)` x = tf.compat.v1.placeholder(shape=(None, 3, None), dtype='float32') y = Flatten()(x) # now `y` has shape `(None, None)` Attributes graph scope_name
tensorflow.compat.v1.layers.flatten
tf.compat.v1.layers.Layer Base layer class. Inherits From: Layer, Module tf.compat.v1.layers.Layer( trainable=True, name=None, dtype=None, **kwargs ) It is considered legacy, and we recommend the use of tf.keras.layers.Layer instead. Arguments trainable Boolean, whether the layer's variables should be trainable. name String name of the layer. dtype Default dtype of the layer's weights (default of None means use the type of the first input). Read-only properties: name: The name of the layer (string). dtype: Default dtype of the layer's weights (default of None means use the type of the first input). trainable_variables: List of trainable variables. non_trainable_variables: List of non-trainable variables. variables: List of all variables of this layer, trainable and non-trainable. updates: List of update ops of this layer. losses: List of losses added by this layer. trainable_weights: List of variables to be included in backprop. non_trainable_weights: List of variables that should not be included in backprop. weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order). Mutable properties: trainable: Whether the layer should be trained (boolean). input_spec: Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer. Attributes graph scope_name
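To make the properties above concrete, here is a minimal sketch (not from the original docs) of a custom layer subclassing tf.compat.v1.layers.Layer; the class name MyDense, the unit count, and the initializer choice are illustrative assumptions.

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

class MyDense(tf.layers.Layer):
    def __init__(self, units, **kwargs):
        super(MyDense, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Variables created via add_weight are tracked in trainable_weights.
        self.kernel = self.add_weight(
            name='kernel',
            shape=[int(input_shape[-1]), self.units],
            initializer=tf.glorot_uniform_initializer())
        super(MyDense, self).build(input_shape)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

x = tf.placeholder(tf.float32, [None, 8])
y = MyDense(4)(x)  # y has shape (None, 4); the layer's weights list holds the kernel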
tensorflow.compat.v1.layers.layer
tf.compat.v1.layers.MaxPooling1D Max Pooling layer for 1D inputs. Inherits From: MaxPool1D, Layer, Layer, Module tf.compat.v1.layers.MaxPooling1D( pool_size, strides, padding='valid', data_format='channels_last', name=None, **kwargs ) Arguments pool_size An integer or tuple/list of a single integer, representing the size of the pooling window. strides An integer or tuple/list of a single integer, specifying the strides of the pooling operation. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). name A string, the name of the layer. Attributes graph scope_name
tensorflow.compat.v1.layers.maxpooling1d
tf.compat.v1.layers.MaxPooling2D Max pooling layer for 2D inputs (e.g. images). Inherits From: MaxPool2D, Layer, Layer, Module tf.compat.v1.layers.MaxPooling2D( pool_size, strides, padding='valid', data_format='channels_last', name=None, **kwargs ) Arguments pool_size An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions. strides An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). name A string, the name of the layer. Attributes graph scope_name
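A short usage sketch (input shape chosen for illustration): with the default channels_last format and 'valid' padding, a 2x2 window with stride 2 halves each spatial dimension.

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 28, 28, 3])      # (batch, height, width, channels)
pool = tf.layers.MaxPooling2D(pool_size=2, strides=2)  # 2x2 window, stride 2
y = pool(x)                                            # y has shape (None, 14, 14, 3)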
tensorflow.compat.v1.layers.maxpooling2d
tf.compat.v1.layers.MaxPooling3D Max pooling layer for 3D inputs (e.g. volumes). Inherits From: MaxPool3D, Layer, Layer, Module tf.compat.v1.layers.MaxPooling3D( pool_size, strides, padding='valid', data_format='channels_last', name=None, **kwargs ) Arguments pool_size An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions. strides An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width). name A string, the name of the layer. Attributes graph scope_name
tensorflow.compat.v1.layers.maxpooling3d
tf.compat.v1.layers.max_pooling1d Max Pooling layer for 1D inputs. tf.compat.v1.layers.max_pooling1d( inputs, pool_size, strides, padding='valid', data_format='channels_last', name=None ) Arguments inputs The tensor over which to pool. Must have rank 3. pool_size An integer or tuple/list of a single integer, representing the size of the pooling window. strides An integer or tuple/list of a single integer, specifying the strides of the pooling operation. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). name A string, the name of the layer. Returns The output tensor, of rank 3. Raises ValueError if eager execution is enabled.
tensorflow.compat.v1.layers.max_pooling1d
tf.compat.v1.layers.max_pooling2d Max pooling layer for 2D inputs (e.g. images). tf.compat.v1.layers.max_pooling2d( inputs, pool_size, strides, padding='valid', data_format='channels_last', name=None ) Arguments inputs The tensor over which to pool. Must have rank 4. pool_size An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions. strides An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). name A string, the name of the layer. Returns Output tensor. Raises ValueError if eager execution is enabled.
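For comparison with the class form, a brief sketch of the functional interface (the 32x32x16 input shape is illustrative): with padding='same' and stride 2, spatial dimensions are halved, rounding up.

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 32, 32, 16])
y = tf.layers.max_pooling2d(x, pool_size=[2, 2], strides=2, padding='same')
# y has shape (None, 16, 16, 16); pooling leaves the channel count unchanged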
tensorflow.compat.v1.layers.max_pooling2d
tf.compat.v1.layers.max_pooling3d Max pooling layer for 3D inputs (e.g. volumes). tf.compat.v1.layers.max_pooling3d( inputs, pool_size, strides, padding='valid', data_format='channels_last', name=None ) Arguments inputs The tensor over which to pool. Must have rank 5. pool_size An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions. strides An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions. padding A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width). name A string, the name of the layer. Returns Output tensor. Raises ValueError if eager execution is enabled.
tensorflow.compat.v1.layers.max_pooling3d
tf.compat.v1.layers.SeparableConv1D Depthwise separable 1D convolution. Inherits From: SeparableConv1D, Layer, Layer, Module tf.compat.v1.layers.SeparableConv1D( filters, kernel_size, strides=1, padding='valid', data_format='channels_last', dilation_rate=1, depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer=None, pointwise_initializer=None, bias_initializer=tf.zeros_initializer(), depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, trainable=True, name=None, **kwargs ) This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output. Arguments filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). kernel_size A single integer specifying the spatial dimensions of the filters. strides A single integer specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). dilation_rate A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. depth_multiplier The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. activation Activation function. Set it to None to maintain a linear activation. use_bias Boolean, whether the layer uses a bias. depthwise_initializer An initializer for the depthwise convolution kernel. pointwise_initializer An initializer for the pointwise convolution kernel. bias_initializer An initializer for the bias vector. If None, the default initializer will be used. depthwise_regularizer Optional regularizer for the depthwise convolution kernel. pointwise_regularizer Optional regularizer for the pointwise convolution kernel. bias_regularizer Optional regularizer for the bias vector. activity_regularizer Optional regularizer function for the output. depthwise_constraint Optional projection function to be applied to the depthwise kernel after being updated by an Optimizer (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. pointwise_constraint Optional projection function to be applied to the pointwise kernel after being updated by an Optimizer. bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer. 
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). name A string, the name of the layer. Attributes graph scope_name
tensorflow.compat.v1.layers.separableconv1d
tf.compat.v1.layers.SeparableConv2D Depthwise separable 2D convolution. Inherits From: SeparableConv2D, Layer, Layer, Module tf.compat.v1.layers.SeparableConv2D( filters, kernel_size, strides=(1, 1), padding='valid', data_format='channels_last', dilation_rate=(1, 1), depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer=None, pointwise_initializer=None, bias_initializer=tf.zeros_initializer(), depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, trainable=True, name=None, **kwargs ) This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output. Arguments filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). kernel_size A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions. strides A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). dilation_rate An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. depth_multiplier The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. activation Activation function. Set it to None to maintain a linear activation. use_bias Boolean, whether the layer uses a bias. depthwise_initializer An initializer for the depthwise convolution kernel. pointwise_initializer An initializer for the pointwise convolution kernel. bias_initializer An initializer for the bias vector. If None, the default initializer will be used. depthwise_regularizer Optional regularizer for the depthwise convolution kernel. pointwise_regularizer Optional regularizer for the pointwise convolution kernel. bias_regularizer Optional regularizer for the bias vector. activity_regularizer Optional regularizer function for the output. depthwise_constraint Optional projection function to be applied to the depthwise kernel after being updated by an Optimizer (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). 
Constraints are not safe to use when doing asynchronous distributed training. pointwise_constraint Optional projection function to be applied to the pointwise kernel after being updated by an Optimizer. bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer. trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). name A string, the name of the layer. Attributes graph scope_name
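As a sketch of the channel arithmetic (the input shape and filter counts are illustrative): the depthwise stage produces in_channels * depth_multiplier intermediate channels, and the pointwise 1x1 stage mixes them down to filters output channels.

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 64, 64, 8])
y = tf.layers.SeparableConv2D(
    filters=32, kernel_size=3, depth_multiplier=2,
    activation=tf.nn.relu)(x)
# depthwise: 8 * 2 = 16 intermediate channels; pointwise 1x1 conv maps 16 -> 32
# with 'valid' padding, y has shape (None, 62, 62, 32)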
tensorflow.compat.v1.layers.separableconv2d
tf.compat.v1.layers.separable_conv1d Functional interface for the depthwise separable 1D convolution layer. tf.compat.v1.layers.separable_conv1d( inputs, filters, kernel_size, strides=1, padding='valid', data_format='channels_last', dilation_rate=1, depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer=None, pointwise_initializer=None, bias_initializer=tf.zeros_initializer(), depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, trainable=True, name=None, reuse=None ) This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output. Arguments inputs Input tensor. filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). kernel_size A single integer specifying the spatial dimensions of the filters. strides A single integer specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). dilation_rate A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. depth_multiplier The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. activation Activation function. Set it to None to maintain a linear activation. use_bias Boolean, whether the layer uses a bias. depthwise_initializer An initializer for the depthwise convolution kernel. pointwise_initializer An initializer for the pointwise convolution kernel. bias_initializer An initializer for the bias vector. If None, the default initializer will be used. depthwise_regularizer Optional regularizer for the depthwise convolution kernel. pointwise_regularizer Optional regularizer for the pointwise convolution kernel. bias_regularizer Optional regularizer for the bias vector. activity_regularizer Optional regularizer function for the output. depthwise_constraint Optional projection function to be applied to the depthwise kernel after being updated by an Optimizer (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. pointwise_constraint Optional projection function to be applied to the pointwise kernel after being updated by an Optimizer. bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer. 
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). name A string, the name of the layer. reuse Boolean, whether to reuse the weights of a previous layer by the same name. Returns Output tensor. Raises ValueError if eager execution is enabled.
tensorflow.compat.v1.layers.separable_conv1d
tf.compat.v1.layers.separable_conv2d Functional interface for the depthwise separable 2D convolution layer. tf.compat.v1.layers.separable_conv2d( inputs, filters, kernel_size, strides=(1, 1), padding='valid', data_format='channels_last', dilation_rate=(1, 1), depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer=None, pointwise_initializer=None, bias_initializer=tf.zeros_initializer(), depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, trainable=True, name=None, reuse=None ) This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output. Arguments inputs Input tensor. filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). kernel_size A tuple or list of 2 integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions. strides A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). dilation_rate An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. depth_multiplier The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. activation Activation function. Set it to None to maintain a linear activation. use_bias Boolean, whether the layer uses a bias. depthwise_initializer An initializer for the depthwise convolution kernel. pointwise_initializer An initializer for the pointwise convolution kernel. bias_initializer An initializer for the bias vector. If None, the default initializer will be used. depthwise_regularizer Optional regularizer for the depthwise convolution kernel. pointwise_regularizer Optional regularizer for the pointwise convolution kernel. bias_regularizer Optional regularizer for the bias vector. activity_regularizer Optional regularizer function for the output. depthwise_constraint Optional projection function to be applied to the depthwise kernel after being updated by an Optimizer (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). 
Constraints are not safe to use when doing asynchronous distributed training. pointwise_constraint Optional projection function to be applied to the pointwise kernel after being updated by an Optimizer. bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer. trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). name A string, the name of the layer. reuse Boolean, whether to reuse the weights of a previous layer by the same name. Returns Output tensor. Raises ValueError if eager execution is enabled.
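Because the functional interface takes name and reuse, two calls can share one set of kernels; a minimal sketch (the tensor shapes and the layer name 'sep' are illustrative):

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

x1 = tf.placeholder(tf.float32, [None, 64, 64, 3])
x2 = tf.placeholder(tf.float32, [None, 64, 64, 3])
y1 = tf.layers.separable_conv2d(x1, filters=32, kernel_size=3, name='sep')
y2 = tf.layers.separable_conv2d(x2, filters=32, kernel_size=3, name='sep',
                                reuse=True)
# y1 and y2 are computed with the same depthwise/pointwise kernels and bias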
tensorflow.compat.v1.layers.separable_conv2d
Module: tf.compat.v1.linalg Operations for linear algebra. Modules experimental module: Public API for tf.linalg.experimental namespace. Classes class LinearOperator: Base class defining a [batch of] linear operator[s]. class LinearOperatorAdjoint: LinearOperator representing the adjoint of another operator. class LinearOperatorBlockDiag: Combines one or more LinearOperators into a block-diagonal matrix. class LinearOperatorBlockLowerTriangular: Combines LinearOperators into a blockwise lower-triangular matrix. class LinearOperatorCirculant: LinearOperator acting like a circulant matrix. class LinearOperatorCirculant2D: LinearOperator acting like a block circulant matrix. class LinearOperatorCirculant3D: LinearOperator acting like a nested block circulant matrix. class LinearOperatorComposition: Composes one or more LinearOperators. class LinearOperatorDiag: LinearOperator acting like a [batch] square diagonal matrix. class LinearOperatorFullMatrix: LinearOperator that wraps a [batch] matrix. class LinearOperatorHouseholder: LinearOperator acting like a [batch] of Householder transformations. class LinearOperatorIdentity: LinearOperator acting like a [batch] square identity matrix. class LinearOperatorInversion: LinearOperator representing the inverse of another operator. class LinearOperatorKronecker: Kronecker product between two LinearOperators. class LinearOperatorLowRankUpdate: Perturbs a LinearOperator with a rank-K update. class LinearOperatorLowerTriangular: LinearOperator acting like a [batch] square lower triangular matrix. class LinearOperatorPermutation: LinearOperator acting like a [batch] of permutation matrices. class LinearOperatorScaledIdentity: LinearOperator acting like a scaled [batch] identity matrix A = c I. class LinearOperatorToeplitz: LinearOperator acting like a [batch] of Toeplitz matrices. class LinearOperatorTridiag: LinearOperator acting like a [batch] square tridiagonal matrix. class LinearOperatorZeros: LinearOperator acting like a [batch] zero matrix. Functions adjoint(...): Transposes the last two dimensions of, and conjugates, tensor matrix. band_part(...): Copies a tensor, setting everything outside a central band in each innermost matrix to zero. cholesky(...): Computes the Cholesky decomposition of one or more square matrices. cholesky_solve(...): Solves systems of linear equations A X = RHS, given Cholesky factorizations. cross(...): Computes the pairwise cross product. det(...): Computes the determinant of one or more square matrices. diag(...): Returns a batched diagonal tensor with given batched diagonal values. diag_part(...): Returns the batched diagonal part of a batched tensor. eigh(...): Computes the eigendecomposition of a batch of self-adjoint matrices. eigvalsh(...): Computes the eigenvalues of one or more self-adjoint matrices. einsum(...): Tensor contraction over specified indices and outer product. expm(...): Computes the matrix exponential of one or more square matrices. eye(...): Constructs an identity matrix, or a batch of matrices. global_norm(...): Computes the global norm of multiple tensors. inv(...): Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes). l2_normalize(...): Normalizes along dimension axis using an L2 norm. (deprecated arguments) logdet(...): Computes the log of the determinant of a Hermitian positive definite matrix. logm(...): Computes the matrix logarithm of one or more square matrices. lstsq(...): Solves one or more linear least-squares problems.
lu(...): Computes the LU decomposition of one or more square matrices. lu_matrix_inverse(...): Computes the inverse given the LU decomposition(s) of one or more matrices. lu_reconstruct(...): Reconstructs one or more matrices from their LU decomposition(s). lu_solve(...): Solves systems of linear equations A X = RHS, given LU factorizations. matmul(...): Multiplies matrix a by matrix b, producing a * b. matrix_rank(...): Computes the matrix rank of one or more matrices. matrix_transpose(...): Transposes the last two dimensions of tensor a. matvec(...): Multiplies matrix a by vector b, producing a * b. norm(...): Computes the norm of vectors, matrices, and tensors. (deprecated arguments) normalize(...): Normalizes tensor along dimension axis using specified norm. pinv(...): Computes the Moore-Penrose pseudo-inverse of one or more matrices. qr(...): Computes the QR decompositions of one or more matrices. set_diag(...): Returns a batched matrix tensor with new batched diagonal values. slogdet(...): Computes the sign and the log of the absolute value of the determinant of one or more square matrices. solve(...): Solves systems of linear equations. sqrtm(...): Computes the matrix square root of one or more square matrices. svd(...): Computes the singular value decompositions of one or more matrices. tensor_diag(...): Returns a diagonal tensor with given diagonal values. tensor_diag_part(...): Returns the diagonal part of the tensor. tensordot(...): Tensor contraction of a and b along specified axes and outer product. trace(...): Computes the trace of a tensor x. transpose(...): Transposes the last two dimensions of tensor a. triangular_solve(...): Solves systems of linear equations with upper or lower triangular matrices. tridiagonal_matmul(...): Multiplies a tridiagonal matrix by a matrix. tridiagonal_solve(...): Solves tridiagonal systems of equations.
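As one worked example of how these functions compose (matrix values chosen for illustration), cholesky factors a self-adjoint positive-definite matrix and cholesky_solve uses that factor to solve A X = RHS:

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

a = tf.constant([[4., 1.], [1., 3.]])     # symmetric positive definite
rhs = tf.constant([[1.], [2.]])
chol = tf.linalg.cholesky(a)              # lower-triangular L with A = L L^T
x = tf.linalg.cholesky_solve(chol, rhs)   # solves a @ x = rhs

with tf.Session() as sess:
    print(sess.run(x))                    # approximately [[0.0909], [0.6364]]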
tensorflow.compat.v1.linalg
Module: tf.compat.v1.linalg.experimental Public API for tf.linalg.experimental namespace. Functions conjugate_gradient(...): Conjugate gradient solver.
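A hedged sketch of conjugate_gradient (operator values are illustrative; the solver expects a self-adjoint positive-definite LinearOperator, and this sketch assumes the returned state exposes the solution as result.x):

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

a = tf.constant([[4., 1.], [1., 3.]])
operator = tf.linalg.LinearOperatorFullMatrix(
    a, is_self_adjoint=True, is_positive_definite=True)
rhs = tf.constant([1., 2.])
result = tf.linalg.experimental.conjugate_gradient(operator, rhs)
# result.x approximates the solution of a @ x = rhs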
tensorflow.compat.v1.linalg.experimental
tf.compat.v1.linalg.l2_normalize Normalizes along dimension axis using an L2 norm. (deprecated arguments) Compat aliases: tf.compat.v1.math.l2_normalize, tf.compat.v1.nn.l2_normalize tf.compat.v1.linalg.l2_normalize( x, axis=None, epsilon=1e-12, name=None, dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead. For a 1-D tensor with axis = 0, computes output = x / sqrt(max(sum(x**2), epsilon)) For x with more dimensions, independently normalizes each 1-D slice along dimension axis. Args x A Tensor. axis Dimension along which to normalize. A scalar or a vector of integers. epsilon A lower bound value for the norm. Will use sqrt(epsilon) as the divisor if norm < sqrt(epsilon). name A name for this operation (optional). dim Deprecated alias for axis. Returns A Tensor with the same shape as x.
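A tiny worked example of the formula above (values illustrative): for x = [3, 4], sum(x**2) = 25, so the divisor is sqrt(max(25, 1e-12)) = 5.

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

x = tf.constant([3., 4.])
y = tf.math.l2_normalize(x, axis=0)
# y evaluates to [0.6, 0.8], i.e. x / 5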
tensorflow.compat.v1.linalg.l2_normalize
Module: tf.compat.v1.lite Public API for tf.lite namespace. Modules constants module: Public API for tf.lite.constants namespace. experimental module: Public API for tf.lite.experimental namespace. Classes class Interpreter: Interpreter interface for TensorFlow Lite Models. class OpHint: A class that helps build tflite function invocations. class OpsSet: Enum class defining the sets of ops available to generate TFLite models. class Optimize: Enum defining the optimizations to apply when generating tflite graphs. class RepresentativeDataset: Representative dataset to evaluate optimizations. class TFLiteConverter: Convert a TensorFlow model into output_format. class TargetSpec: Specification of target device. class TocoConverter: Convert a TensorFlow model into output_format using TOCO. Functions toco_convert(...): Convert a model using TOCO. (deprecated)
tensorflow.compat.v1.lite
Module: tf.compat.v1.lite.constants Public API for tf.lite.constants namespace. Other Members FLOAT tf.dtypes.DType FLOAT16 tf.dtypes.DType GRAPHVIZ_DOT 3 INT16 tf.dtypes.DType INT32 tf.dtypes.DType INT64 tf.dtypes.DType INT8 tf.dtypes.DType QUANTIZED_UINT8 tf.dtypes.DType STRING tf.dtypes.DType TFLITE 2
tensorflow.compat.v1.lite.constants
Module: tf.compat.v1.lite.experimental Public API for tf.lite.experimental namespace. Modules nn module: Public API for tf.lite.experimental.nn namespace. Functions convert_op_hints_to_stubs(...): Converts a graphdef with LiteOp hints into stub operations. get_potentially_supported_ops(...): Returns operations potentially supported by TensorFlow Lite. load_delegate(...): Returns loaded Delegate object.
tensorflow.compat.v1.lite.experimental
tf.compat.v1.lite.experimental.convert_op_hints_to_stubs Converts a graphdef with LiteOp hints into stub operations. tf.compat.v1.lite.experimental.convert_op_hints_to_stubs( session=None, graph_def=None, write_callback=(lambda graph_def, comments: None) ) This is used to prepare for toco conversion of complex intrinsic usages. Note: only one of session or graph_def should be used, not both. Args session A TensorFlow session that contains the graph to convert. graph_def A graph def that we should convert. write_callback A function pointer that can be used to write intermediate steps of graph transformation (optional). Returns A new graphdef with all ops contained in OpHints being replaced by a single op call with the right parameters. Raises ValueError If both session and graph_def are provided.
tensorflow.compat.v1.lite.experimental.convert_op_hints_to_stubs
tf.compat.v1.lite.experimental.get_potentially_supported_ops Returns operations potentially supported by TensorFlow Lite. tf.compat.v1.lite.experimental.get_potentially_supported_ops() The potentially supported list contains ops that are partially or fully supported; it is derived by simply scanning op names to check whether they can be handled without real conversion and specific parameters. Given that some ops may be only partially supported, the optimal way to determine whether a model's operations are supported is to convert it with the TensorFlow Lite converter. Returns A list of SupportedOp.
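A minimal sketch of calling it (assuming, as the return description suggests, that each entry is a SupportedOp record carrying an op name):

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

ops = tf.lite.experimental.get_potentially_supported_ops()
print(len(ops))   # number of potentially supported ops
print(ops[0])     # one SupportedOp entry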
tensorflow.compat.v1.lite.experimental.get_potentially_supported_ops
Module: tf.compat.v1.lite.experimental.nn Public API for tf.lite.experimental.nn namespace. Classes class TFLiteLSTMCell: Long short-term memory unit (LSTM) recurrent network cell. class TfLiteRNNCell: The most basic RNN cell. Functions dynamic_rnn(...): Creates a recurrent neural network specified by RNNCell cell.
tensorflow.compat.v1.lite.experimental.nn
tf.compat.v1.lite.experimental.nn.dynamic_rnn Creates a recurrent neural network specified by RNNCell cell. tf.compat.v1.lite.experimental.nn.dynamic_rnn( cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=True, scope=None ) Performs fully dynamic unrolling of inputs. Example: # create a BasicRNNCell rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size) # 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size] # defining initial state initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32) # 'state' is a tensor of shape [batch_size, cell_state_size] outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data, initial_state=initial_state, dtype=tf.float32) # create 2 LSTMCells rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]] # create a RNN cell composed sequentially of a number of RNNCells multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers) # 'outputs' is a tensor of shape [batch_size, max_time, 256] # 'state' is a N-tuple where N is the number of LSTMCells containing a # tf.nn.rnn_cell.LSTMStateTuple for each cell outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell, inputs=data, dtype=tf.float32) Args cell An instance of RNNCell. inputs The RNN inputs. If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements. If time_major == True, this must be a Tensor of shape: [max_time, batch_size, ...], or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to cell at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to cell at each time step will be a Tensor or (possibly nested) tuple of Tensors each with dimensions [batch_size, ...]. sequence_length (optional) An int32/int64 vector sized [batch_size]. Used to copy-through state and zero-out outputs when past a batch element's sequence length. So it's more for performance than correctness. initial_state (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size. dtype (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype. parallel_iterations (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. time_major The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. 
Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. scope VariableScope for the created subgraph; defaults to "rnn". Returns A pair (outputs, state) where: outputs The RNN output Tensor. If time_major == False (default), this will be a Tensor shaped: [batch_size, max_time, cell.output_size]. If time_major == True, this will be a Tensor shaped: [max_time, batch_size, cell.output_size]. Note, if cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then outputs will be a tuple having the same structure as cell.output_size, containing Tensors having shapes corresponding to the shape data in cell.output_size. state The final state. If cell.state_size is an int, this will be shaped [batch_size, cell.state_size]. If it is a TensorShape, this will be shaped [batch_size] + cell.state_size. If it is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes. If cells are LSTMCells state will be a tuple containing a LSTMStateTuple for each cell. Raises TypeError If cell is not an instance of RNNCell. ValueError If inputs is None or an empty list. RuntimeError If not using control flow v2.
tensorflow.compat.v1.lite.experimental.nn.dynamic_rnn
tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell Long short-term memory unit (LSTM) recurrent network cell. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.lite.experimental.nn.TFLiteLSTMCell( num_units, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None, forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None, name=None, dtype=None ) This is used only for TfLite; it provides hints, and it also puts the variables in the format desired by the tflite ops (transposed and separated). The default non-peephole implementation is based on: https://pdfs.semanticscholar.org/1154/0131eae85b2e11d53df7f1360eeb6476e7f4.pdf Felix Gers, Jurgen Schmidhuber, and Fred Cummins. "Learning to forget: Continual prediction with LSTM." IET, 850-855, 1999. The peephole implementation is based on: https://research.google.com/pubs/archive/43905.pdf Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014. The class uses optional peephole connections, optional cell clipping, and an optional projection layer. Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU, or tf.contrib.rnn.LSTMBlockCell and tf.contrib.rnn.LSTMBlockFusedCell for better performance on CPU. Args num_units int, the number of units in the LSTM cell. use_peepholes bool, set True to enable diagonal/peephole connections. cell_clip (optional) A float value; if provided, the cell state is clipped by this value prior to the cell output activation. initializer (optional) The initializer to use for the weight and projection matrices. num_proj (optional) int, the output dimensionality for the projection matrices. If None, no projection is performed. proj_clip (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip]. num_unit_shards Deprecated, will be removed by Jan. 2017. Use a variable_scope partitioner instead. num_proj_shards Deprecated, will be removed by Jan. 2017. Use a variable_scope partitioner instead. forget_bias Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of training. Must be set manually to 0.0 when restoring from CudnnLSTM-trained checkpoints. state_is_tuple If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated. activation Activation function of the inner states. Default: tanh. reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised. name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. When restoring from CudnnLSTM-trained checkpoints, use CudnnCompatibleLSTMCell instead. Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size Size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes.
Methods get_initial_state get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
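To tie the cell to the tflite-aware dynamic_rnn documented above, a hedged sketch (the unit count and time-major input shape are illustrative; note that this dynamic_rnn defaults to time_major=True):

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

cell = tf.lite.experimental.nn.TFLiteLSTMCell(num_units=64)
inputs = tf.placeholder(tf.float32, [20, 8, 32])  # (max_time, batch, depth)
outputs, state = tf.lite.experimental.nn.dynamic_rnn(
    cell, inputs, dtype=tf.float32)
# outputs has shape (20, 8, 64); state is an LSTMStateTuple (state_is_tuple=True)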
tensorflow.compat.v1.lite.experimental.nn.tflitelstmcell
tf.compat.v1.lite.experimental.nn.TfLiteRNNCell The most basic RNN cell. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.lite.experimental.nn.TfLiteRNNCell( num_units, activation=None, reuse=None, name=None, dtype=None, **kwargs ) This is used only for TfLite; it provides hints, and it also puts the variables in the format desired by the tflite ops. Args num_units int, the number of units in the RNN cell. activation Nonlinearity to use. Default: tanh. It can also be a string matching a Keras activation function name. reuse (optional) Python boolean describing whether to reuse variables in an existing scope. Raises an error if not True and the existing scope already has the given variables. name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. **kwargs Dict, keyword named properties for common layer attributes, like trainable etc., when constructing the cell from configs of get_config(). Raises ValueError If the existing scope already has the given variables. Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size Size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes. Methods get_initial_state get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
tensorflow.compat.v1.lite.experimental.nn.tfliternncell
tf.compat.v1.lite.OpHint A class that helps build tflite function invocations. tf.compat.v1.lite.OpHint( function_name, level=1, children_inputs_mappings=None, **kwargs ) It allows you to take a bunch of TensorFlow ops and annotate the construction such that toco knows how to convert it to tflite. This embeds a pseudo function in a TensorFlow graph. This allows embedding high-level API usage information in a lower-level TensorFlow implementation so that an alternative implementation can be substituted later. Essentially, any "input" into this pseudo op is fed into an identity, and attributes are added to that input before being used by the constituent ops that make up the pseudo op. A similar process is done to any output that is to be exported from the current op. Args function_name Name of the function (the custom op name in tflite). level OpHint level. children_inputs_mappings Children OpHint inputs/outputs mapping. children_inputs_mappings should look like below: "parent_first_child_input": [{"parent_input_index": num, "child_input_index": num}, ...] "parent_last_child_output": [{"parent_output_index": num, "child_output_index": num}, ...] "internal_children_input_output": [{"child_input_index": num, "child_output_index": num}, ...] **kwargs Keyword arguments of any constant attributes for the function. Child Classes class OpHintArgumentTracker Methods add_input add_input( *args, **kwargs ) Add a wrapped input argument to the hint. Args *args The input tensor. **kwargs "name" label. "tag" a tag to group multiple arguments that will be aggregated, i.e. a string like 'cool_input'. Basically multiple inputs can be added to the same hint for parallel operations that will eventually be combined. An example would be static_rnn, which creates multiple copies of state or inputs. "aggregate" aggregation strategy that is valid only when tag is not None. Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST, and OpHint.AGGREGATE_STACK. "index_override" the global index to use. This corresponds to the argument order in the final stub that will be generated. Returns The wrapped input tensor. add_inputs add_inputs( *args, **kwargs ) Add a sequence of inputs to the function invocation. Args *args List of inputs to be converted (should be tf.Tensor). **kwargs This allows 'names', which should be a list of names. Returns Wrapped inputs (identity standins that have additional metadata). These are also tf.Tensors. add_output add_output( *args, **kwargs ) Add a wrapped output argument to the hint. Args *args The output tensor. **kwargs "name" label. "tag" a tag to group multiple arguments that will be aggregated, i.e. a string like 'cool_input'. Basically multiple outputs can be added to the same hint for parallel operations that will eventually be combined. An example would be static_rnn, which creates multiple copies of state or inputs. "aggregate" aggregation strategy that is valid only when tag is not None. Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST, and OpHint.AGGREGATE_STACK. "index_override" the global index to use. This corresponds to the argument order in the final stub that will be generated. Returns The wrapped output tensor. add_outputs add_outputs( *args, **kwargs ) Add a sequence of outputs to the function invocation. Args *args List of outputs to be converted (should be tf.Tensor). **kwargs This allows 'names', which should be a list of names. Returns Wrapped outputs (identity standins that have additional metadata). These are also tf.Tensors.
Class Variables AGGREGATE_FIRST 'first' AGGREGATE_LAST 'last' AGGREGATE_STACK 'stack' CHILDREN_INPUTS_MAPPINGS '_tflite_children_ophint_inputs_mapping' FUNCTION_AGGREGATE_ATTR '_tflite_function_aggregate' FUNCTION_INPUT_INDEX_ATTR '_tflite_function_input_index' FUNCTION_LEVEL_ATTR '_tflite_ophint_level' FUNCTION_NAME_ATTR '_tflite_function_name' FUNCTION_OUTPUT_INDEX_ATTR '_tflite_function_output_index' FUNCTION_SORT_INDEX_ATTR '_tflite_function_sort_index' FUNCTION_UUID_ATTR '_tflite_function_uuid' TFLITE_INPUT_INDICES '_tflite_input_indices'
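A hedged sketch of the intended wrapping pattern (the function name 'cool_activation' and the op body are illustrative): wrap every input with add_input and every output with add_output so toco can later swap the annotated region for a single fused tflite op.

import tensorflow.compat.v1 as tf  # tf is tf.compat.v1 throughout
tf.disable_v2_behavior()

def cool_activation(x):
    hint = tf.lite.OpHint("cool_activation")
    x = hint.add_input(x, name="input", index_override=0)
    y = tf.sigmoid(x) * x  # the TensorFlow implementation to be stubbed
    return hint.add_output(y, name="output", index_override=0)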
tensorflow.compat.v1.lite.ophint
tf.compat.v1.lite.OpHint.OpHintArgumentTracker Conceptually tracks indices of arguments of "OpHint functions". tf.compat.v1.lite.OpHint.OpHintArgumentTracker( function_name, unique_function_id, node_name_prefix, attr_name, level=1, children_inputs_mappings=None ) The inputs and arguments of these functions both use an instance of the class so they can have independent numbering. Args function_name Name of the function that this tracks arguments for. unique_function_id UUID of the function that this tracks arguments for. node_name_prefix How identities that are created are named. attr_name Name of the attribute to use to store the index for this hint, i.e. FUNCTION_INPUT_INDEX or FUNCTION_OUTPUT_INDEX. level Hierarchical level of the OpHint node, a number. children_inputs_mappings Inputs/outputs mapping for children hints. Methods add add( arg, tag=None, name=None, aggregate=None, index_override=None ) Return a wrapped tensor of an input tensor as an argument. Args arg A TensorFlow tensor that should be considered an argument. tag String tag to identify arguments that should be packed. name Name of the argument. This is included in the Identity hint op names. aggregate Strategy to aggregate. Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST, and OpHint.AGGREGATE_STACK. Note, aggregate is only valid if tag is specified. index_override Specify what input/output index this should be in the final stub, i.e. add(arg0, index_override=1); add(arg1, index_override=0) will make the final stub be stub_func(inputs=[arg1, arg0], outputs=[]) rather than the default call-order-based ordering. Returns A tensor representing the wrapped argument. Raises ValueError When indices are not consistent.
tensorflow.compat.v1.lite.ophint.ophintargumenttracker
tf.compat.v1.lite.TFLiteConverter Convert a TensorFlow model into output_format. tf.compat.v1.lite.TFLiteConverter( graph_def, input_tensors, output_tensors, input_arrays_with_shape=None, output_arrays=None, experimental_debug_info_func=None ) This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras model into either a TFLite FlatBuffer or graph visualization. Example usage: # Converting a GraphDef from session. converter = tf.compat.v1.lite.TFLiteConverter.from_session( sess, in_tensors, out_tensors) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) # Converting a GraphDef from file. converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph( graph_def_file, input_arrays, output_arrays) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) # Converting a SavedModel. converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model( saved_model_dir) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) # Converting a tf.keras model. converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file( keras_model) tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) Args graph_def Frozen TensorFlow GraphDef. input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype. output_tensors List of output tensors (only .name is used from this). input_arrays_with_shape Tuple of strings representing input tensor names and lists of integers representing input shapes (e.g., [("foo", [1, 16, 16, 3])]). Use only when graph cannot be loaded into TensorFlow and when input_tensors and output_tensors are None. (default None) output_arrays List of output tensors to freeze graph with. Use only when graph cannot be loaded into TensorFlow and when input_tensors and output_tensors are None. (default None) experimental_debug_info_func An experimental function to retrieve the graph debug info for a set of nodes from the graph_def. Raises ValueError Invalid arguments. Attributes inference_type Target data type of real-number arrays in the output file. Must be {tf.float32, tf.uint8}. If optimizations are provided, this parameter is ignored. (default tf.float32) inference_input_type Target data type of real-number input arrays. Allows for a different type for input arrays. If an integer type is provided and optimizations are not used, quantized_input_stats must be provided. If inference_type is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained input model, then inference_input_type defaults to tf.uint8. In all other cases, inference_input_type defaults to tf.float32. Must be {tf.float32, tf.uint8, tf.int8}. inference_output_type Target data type of real-number output arrays. Allows for a different type for output arrays. If inference_type is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained output model, then inference_output_type defaults to tf.uint8. In all other cases, inference_output_type must be tf.float32; an error will be thrown otherwise. Must be {tf.float32, tf.uint8, tf.int8}. output_format Output file format. Currently must be {TFLITE, GRAPHVIZ_DOT}. (default TFLITE) quantized_input_stats Dict of strings representing input tensor names mapped to tuples of floats representing the mean and standard deviation of the training data (e.g., {"foo" : (0., 1.)}). Only needed if inference_input_type is QUANTIZED_UINT8.
real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default {}) default_ranges_stats Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None) drop_control_dependency Boolean indicating whether to drop control dependencies silently. This is due to TFLite not supporting control dependencies. (default True) reorder_across_fake_quant Boolean indicating whether to reorder FakeQuant nodes in unexpected locations. Used when the location of the FakeQuant nodes is preventing graph transformations necessary to convert the graph. Results in a graph that differs from the quantized training graph, potentially causing differing arithmetic behavior. (default False) change_concat_input_ranges Boolean to change behavior of min/max ranges for inputs and outputs of the concat operator for quantized models. Changes the ranges of concat operator overlap when true. (default False) allow_custom_ops Boolean indicating whether to allow custom operations. When false any unknown operation is an error. When true, custom ops are created for any op that is unknown. The developer will need to provide these to the TensorFlow Lite runtime with a custom resolver. (default False) post_training_quantize Deprecated. Please specify [Optimize.DEFAULT] for optimizations instead. Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False) dump_graphviz_dir Full filepath of folder to dump the graphs at various stages of processing GraphViz .dot files. Preferred over --output_format=GRAPHVIZ_DOT in order to keep the requirements of the output file. (default None) dump_graphviz_video Boolean indicating whether to dump the graph after every graph transformation. (default False) conversion_summary_dir A string indicating the path to the generated conversion logs. target_ops Deprecated. Please specify target_spec.supported_ops instead. Set of OpsSet options indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS])) target_spec Experimental flag, subject to change. Specification of target device. optimizations Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. [Optimize.DEFAULT] representative_dataset A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations. experimental_new_converter Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True) Methods convert View source convert() Converts a TensorFlow GraphDef based on instance variables. Returns The converted data in serialized format. Either a TFLite Flatbuffer or a Graphviz graph depending on value in output_format. Raises ValueError Input shape is not specified. None value for dimension in input_tensor. from_frozen_graph View source @classmethod from_frozen_graph( graph_def_file, input_arrays, output_arrays, input_shapes=None ) Creates a TFLiteConverter class from a file containing a frozen GraphDef. Args graph_def_file Full filepath of file containing frozen GraphDef. input_arrays List of input tensors to freeze graph with. output_arrays List of output tensors to freeze graph with. 
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) Returns TFLiteConverter class. Raises IOError File not found. Unable to parse input file. ValueError The graph is not frozen. input_arrays or output_arrays contains an invalid tensor name. input_shapes is not correctly defined when required. from_keras_model_file View source @classmethod from_keras_model_file( model_file, input_arrays=None, input_shapes=None, output_arrays=None, custom_objects=None ) Creates a TFLiteConverter class from a tf.keras model file. Args model_file Full filepath of HDF5 file containing the tf.keras model. input_arrays List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. (default None) input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) output_arrays List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. (default None) custom_objects Dict mapping names (strings) to custom classes or functions to be considered during model deserialization. (default None) Returns TFLiteConverter class. from_saved_model View source @classmethod from_saved_model( saved_model_dir, input_arrays=None, input_shapes=None, output_arrays=None, tag_set=None, signature_key=None ) Creates a TFLiteConverter class from a SavedModel. Args saved_model_dir SavedModel directory to convert. input_arrays List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. (default None) input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None) output_arrays List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. (default None) tag_set Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. (default set("serve")) signature_key Key identifying SignatureDef containing inputs and outputs. (default DEFAULT_SERVING_SIGNATURE_DEF_KEY) Returns TFLiteConverter class. from_session View source @classmethod from_session( sess, input_tensors, output_tensors ) Creates a TFLiteConverter class from a TensorFlow Session. Args sess TensorFlow Session. input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype. output_tensors List of output tensors (only .name is used from this). Returns TFLiteConverter class. get_input_arrays View source get_input_arrays() Returns a list of the names of the input tensors. Returns List of strings.
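As a supplementary illustration (not part of the original API docs), here is a hedged sketch of post-training weight quantization with this converter; the file name "model.pb" and the tensor names and shapes are hypothetical:

import tensorflow.compat.v1 as tf

# Hypothetical frozen graph; file and tensor names are assumptions.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "model.pb",
    input_arrays=["input"],
    output_arrays=["output"],
    input_shapes={"input": [1, 224, 224, 3]})
# Request post-training weight quantization via `optimizations`
# (preferred over the deprecated `post_training_quantize` flag).
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)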
tensorflow.compat.v1.lite.tfliteconverter
tf.compat.v1.lite.TocoConverter Convert a TensorFlow model into output_format using TOCO. This class has been deprecated. Please use lite.TFLiteConverter instead. Methods from_frozen_graph View source @classmethod from_frozen_graph( graph_def_file, input_arrays, output_arrays, input_shapes=None ) Creates a TocoConverter class from a file containing a frozen graph. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use lite.TFLiteConverter.from_frozen_graph instead. from_keras_model_file View source @classmethod from_keras_model_file( model_file, input_arrays=None, input_shapes=None, output_arrays=None ) Creates a TocoConverter class from a tf.keras model file. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use lite.TFLiteConverter.from_keras_model_file instead. from_saved_model View source @classmethod from_saved_model( saved_model_dir, input_arrays=None, input_shapes=None, output_arrays=None, tag_set=None, signature_key=None ) Creates a TocoConverter class from a SavedModel. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use lite.TFLiteConverter.from_saved_model instead. from_session View source @classmethod from_session( sess, input_tensors, output_tensors ) Creates a TocoConverter class from a TensorFlow Session. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use lite.TFLiteConverter.from_session instead.
tensorflow.compat.v1.lite.tococonverter
tf.compat.v1.lite.toco_convert Convert a model using TOCO. (deprecated) tf.compat.v1.lite.toco_convert( input_data, input_tensors, output_tensors, *args, **kwargs ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use lite.TFLiteConverter instead. Typically this function is used to convert from TensorFlow GraphDef to TFLite. Conversion can be customized by providing arguments that are forwarded to build_toco_convert_protos (see documentation for details). This function has been deprecated. Please use lite.TFLiteConverter instead. Args input_data Input data (i.e., often sess.graph_def). input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype. output_tensors List of output tensors (only .name is used from this). *args See build_toco_convert_protos. **kwargs See build_toco_convert_protos. Returns The converted data. For example, if TFLite was the destination, then this will be a TFLite FlatBuffer in a bytes array. Raises Defined in build_toco_convert_protos.
tensorflow.compat.v1.lite.toco_convert
tf.compat.v1.LMDBReader A Reader that outputs the records from an LMDB file. Inherits From: ReaderBase tf.compat.v1.LMDBReader( name=None, options=None ) See ReaderBase for supported methods. Args name A name for the operation (optional). options An LMDBRecordOptions object (optional). Eager Compatibility Readers are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes reader_ref Op that implements the reader. supports_serialize Whether the Reader implementation can serialize its state. Methods num_records_produced View source num_records_produced( name=None ) Returns the number of records this reader has produced. This is the same as the number of Read executions that have succeeded. Args name A name for the operation (optional). Returns An int64 Tensor. num_work_units_completed View source num_work_units_completed( name=None ) Returns the number of work units this reader has finished processing. Args name A name for the operation (optional). Returns An int64 Tensor. read View source read( queue, name=None ) Returns the next record (key, value) pair produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. name A name for the operation (optional). Returns A tuple of Tensors (key, value). key A string scalar Tensor. value A string scalar Tensor. read_up_to View source read_up_to( queue, num_records, name=None ) Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return fewer than num_records even before the last batch. Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records Number of records to read. name A name for the operation (optional). Returns A tuple of Tensors (keys, values). keys A 1-D string Tensor. values A 1-D string Tensor. reset View source reset( name=None ) Restore a reader to its initial clean state. Args name A name for the operation (optional). Returns The created Operation. restore_state View source restore_state( state, name=None ) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args state A string Tensor. Result of a SerializeState of a Reader with matching type. name A name for the operation (optional). Returns The created Operation. serialize_state View source serialize_state( name=None ) Produce a string tensor that encodes the state of a reader. Not all Readers support being serialized, so this can produce an Unimplemented error. Args name A name for the operation (optional). Returns A string Tensor.
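Because readers are queue-based and graph-only, a minimal usage sketch might look as follows; the database path "data.mdb" is a placeholder, and the queue-runner scaffolding is the usual compat.v1 input-pipeline boilerplate:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

reader = tf.LMDBReader()
# Work items for the reader are filenames from a queue.
filename_queue = tf.train.string_input_producer(["data.mdb"])
key, value = reader.read(filename_queue)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run([key, value]))  # one (key, value) record
    coord.request_stop()
    coord.join(threads)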
tensorflow.compat.v1.lmdbreader
tf.compat.v1.load_file_system_library Loads a TensorFlow plugin containing a file system implementation. (deprecated) tf.compat.v1.load_file_system_library( library_filename ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.load_library instead. Pass library_filename to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here. Args library_filename Path to the plugin. Relative or absolute filesystem path to a dynamic library file. Returns None. Raises RuntimeError when unable to load the library.
tensorflow.compat.v1.load_file_system_library
tf.compat.v1.local_variables Returns local variables. tf.compat.v1.local_variables( scope=None ) Local variables are per-process variables, usually not saved/restored to checkpoint and used for temporary or intermediate values. For example, they can be used as counters for metrics computation or number of epochs this machine has read data. The tf.contrib.framework.local_variable() function automatically adds the new variable to GraphKeys.LOCAL_VARIABLES. This convenience function returns the contents of that collection. An alternative to local variables is global variables; see tf.compat.v1.global_variables. Args scope (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. Returns A list of local Variable objects.
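As a quick illustration (a minimal sketch, not part of the original docs), a variable placed in GraphKeys.LOCAL_VARIABLES shows up in this collection but not among the global variables:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A counter that should live per process, not in checkpoints.
epochs = tf.get_variable(
    "epochs_read", shape=[], dtype=tf.int64,
    initializer=tf.zeros_initializer(),
    collections=[tf.GraphKeys.LOCAL_VARIABLES])

print(tf.local_variables())   # [<tf.Variable 'epochs_read:0' ...>]
print(tf.global_variables())  # [] -- not in the global collection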
tensorflow.compat.v1.local_variables
tf.compat.v1.local_variables_initializer Returns an Op that initializes all local variables. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.local_variables tf.compat.v1.local_variables_initializer() This is just a shortcut for variables_initializer(local_variables()) Returns An Op that initializes all local variables in the graph.
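A minimal graph-mode sketch (an illustrative addition, not from the original docs): run the returned op before reading any local variable.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

counter = tf.get_variable(
    "counter", shape=[], initializer=tf.zeros_initializer(),
    collections=[tf.GraphKeys.LOCAL_VARIABLES])

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    print(sess.run(counter))  # 0.0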
tensorflow.compat.v1.local_variables_initializer
Module: tf.compat.v1.logging Logging and Summary Operations. Functions TaskLevelStatusMessage(...) debug(...) error(...) fatal(...) flush(...) get_verbosity(...): Return how much logging output will be produced. info(...) log(...) log_every_n(...): Log 'msg % args' at level 'level' once per 'n' times. log_first_n(...): Log 'msg % args' at level 'level' only first 'n' times. log_if(...): Log 'msg % args' at level 'level' only if condition is fulfilled. set_verbosity(...): Sets the threshold for what messages will be logged. vlog(...) warn(...) warning(...) Other Members DEBUG 10 ERROR 40 FATAL 50 INFO 20 WARN 30
tensorflow.compat.v1.logging
tf.compat.v1.logging.debug tf.compat.v1.logging.debug( msg, *args, **kwargs )
tensorflow.compat.v1.logging.debug
tf.compat.v1.logging.error tf.compat.v1.logging.error( msg, *args, **kwargs )
tensorflow.compat.v1.logging.error
tf.compat.v1.logging.fatal tf.compat.v1.logging.fatal( msg, *args, **kwargs )
tensorflow.compat.v1.logging.fatal
tf.compat.v1.logging.flush tf.compat.v1.logging.flush()
tensorflow.compat.v1.logging.flush
tf.compat.v1.logging.get_verbosity Return how much logging output will be produced. tf.compat.v1.logging.get_verbosity()
tensorflow.compat.v1.logging.get_verbosity
tf.compat.v1.logging.info tf.compat.v1.logging.info( msg, *args, **kwargs )
tensorflow.compat.v1.logging.info
tf.compat.v1.logging.log tf.compat.v1.logging.log( level, msg, *args, **kwargs )
tensorflow.compat.v1.logging.log
tf.compat.v1.logging.log_every_n Log 'msg % args' at level 'level' once per 'n' times. tf.compat.v1.logging.log_every_n( level, msg, n, *args ) Logs the 1st call, (N+1)st call, (2N+1)st call, etc. Not threadsafe. Args level The level at which to log. msg The message to be logged. n The number of times this should be called before it is logged. *args The args to be substituted into the msg.
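For example, to report progress once every 100 iterations of a loop (a small usage sketch):

import tensorflow.compat.v1 as tf
tf.logging.set_verbosity(tf.logging.INFO)

for step in range(1000):
    # Logs at steps 0, 100, 200, ... (the 1st, 101st, 201st call).
    tf.logging.log_every_n(tf.logging.INFO, "processed step %d", 100, step)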
tensorflow.compat.v1.logging.log_every_n
tf.compat.v1.logging.log_first_n Log 'msg % args' at level 'level' only first 'n' times. tf.compat.v1.logging.log_first_n( level, msg, n, *args ) Not threadsafe. Args level The level at which to log. msg The message to be logged. n The number of times this should be called before it is logged. *args The args to be substituted into the msg.
tensorflow.compat.v1.logging.log_first_n
tf.compat.v1.logging.log_if Log 'msg % args' at level 'level' only if condition is fulfilled. tf.compat.v1.logging.log_if( level, msg, condition, *args )
tensorflow.compat.v1.logging.log_if
tf.compat.v1.logging.set_verbosity Sets the threshold for what messages will be logged. tf.compat.v1.logging.set_verbosity( v )
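For instance, to surface info-level messages (a minimal sketch):

import tensorflow.compat.v1 as tf

tf.logging.set_verbosity(tf.logging.INFO)
tf.logging.info("visible: INFO (20) meets the threshold")
tf.logging.debug("suppressed: DEBUG (10) is below the threshold")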
tensorflow.compat.v1.logging.set_verbosity
tf.compat.v1.logging.TaskLevelStatusMessage tf.compat.v1.logging.TaskLevelStatusMessage( msg )
tensorflow.compat.v1.logging.tasklevelstatusmessage
tf.compat.v1.logging.vlog tf.compat.v1.logging.vlog( level, msg, *args, **kwargs )
tensorflow.compat.v1.logging.vlog
tf.compat.v1.logging.warn tf.compat.v1.logging.warn( msg, *args, **kwargs )
tensorflow.compat.v1.logging.warn
tf.compat.v1.logging.warning tf.compat.v1.logging.warning( msg, *args, **kwargs )
tensorflow.compat.v1.logging.warning
tf.compat.v1.LogMessage A ProtocolMessage Attributes level Level level message string message Class Variables DEBUGGING 10 ERROR 40 FATAL 50 INFO 20 Level UNKNOWN 0 WARN 30
tensorflow.compat.v1.logmessage
Module: tf.compat.v1.lookup Public API for tf.lookup namespace. Modules experimental module: Public API for tf.lookup.experimental namespace. Classes class KeyValueTensorInitializer: Table initializers given keys and values tensors. class StaticHashTable: A generic hash table that is immutable once initialized. class StaticVocabularyTable: String to Id table that assigns out-of-vocabulary keys to hash buckets. class TextFileIndex: The key and value content to get from each line. class TextFileInitializer: Table initializers from a text file.
tensorflow.compat.v1.lookup
Module: tf.compat.v1.lookup.experimental Public API for tf.lookup.experimental namespace. Classes class DatasetInitializer: Creates a table initializer from a tf.data.Dataset. class DenseHashTable: A generic mutable hash table implementation using tensors as backing store.
tensorflow.compat.v1.lookup.experimental
tf.compat.v1.lookup.StaticHashTable A generic hash table that is immutable once initialized. Inherits From: StaticHashTable tf.compat.v1.lookup.StaticHashTable( initializer, default_value, name=None ) When running in graph mode, you must evaluate the tensor returned by tf.tables_initializer() before evaluating the tensor returned by this class's lookup() method. Example usage in graph mode: keys_tensor = tf.constant([1, 2]) vals_tensor = tf.constant([3, 4]) input_tensor = tf.constant([1, 5]) table = tf.lookup.StaticHashTable( tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1) out = table.lookup(input_tensor) with tf.Session() as sess: sess.run(tf.tables_initializer()) print(sess.run(out)) In eager mode, no special code is needed to initialize the table. Example usage in eager mode: tf.enable_eager_execution() keys_tensor = tf.constant([1, 2]) vals_tensor = tf.constant([3, 4]) input_tensor = tf.constant([1, 5]) table = tf.lookup.StaticHashTable( tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1) print(table.lookup(input_tensor)) Args initializer The table initializer to use. See HashTable kernel for supported key and value types. default_value The value to use if a key is missing in the table. name A name for the operation (optional). Attributes default_value The default value of the table. initializer key_dtype The table key dtype. name The name of the table. resource_handle Returns the resource handle associated with this Resource. value_dtype The table value dtype. Methods export View source export( name=None ) Returns tensors of all keys and values in the table. Args name A name for the operation (optional). Returns A pair of tensors with the first tensor containing all keys and the second tensor containing all values in the table. lookup View source lookup( keys, name=None ) Looks up keys in a table, outputs the corresponding values. The default_value is used for keys not present in the table. Args keys Keys to look up. May be either a SparseTensor or dense Tensor. name A name for the operation (optional). Returns A SparseTensor if keys are sparse, a RaggedTensor if keys are ragged, otherwise a dense Tensor. Raises TypeError when keys or default_value doesn't match the table data types. size View source size( name=None ) Compute the number of elements in this table. Args name A name for the operation (optional). Returns A scalar tensor containing the number of elements in this table. __getitem__ View source __getitem__( keys ) Looks up keys in a table, outputs the corresponding values.
tensorflow.compat.v1.lookup.statichashtable
tf.compat.v1.lookup.StaticVocabularyTable String to Id table that assigns out-of-vocabulary keys to hash buckets. Inherits From: StaticVocabularyTable tf.compat.v1.lookup.StaticVocabularyTable( initializer, num_oov_buckets, lookup_key_dtype=None, name=None ) For example, if an instance of StaticVocabularyTable is initialized with a string-to-id initializer that maps: init = tf.lookup.KeyValueTensorInitializer( keys=tf.constant(['emerson', 'lake', 'palmer']), values=tf.constant([0, 1, 2], dtype=tf.int64)) table = tf.lookup.StaticVocabularyTable( init, num_oov_buckets=5) The Vocabulary object will perform the following mapping: emerson -> 0 lake -> 1 palmer -> 2 <other term> -> bucket_id, where bucket_id will be between 3 and 3 + num_oov_buckets - 1 = 7, calculated by: hash(<term>) % num_oov_buckets + vocab_size If input_tensor is: input_tensor = tf.constant(["emerson", "lake", "palmer", "king", "crimson"]) table[input_tensor].numpy() array([0, 1, 2, 6, 7]) If initializer is None, only out-of-vocabulary buckets are used. Example usage: num_oov_buckets = 3 vocab = ["emerson", "lake", "palmer", "crimson"] import tempfile f = tempfile.NamedTemporaryFile(delete=False) f.write('\n'.join(vocab).encode('utf-8')) f.close() init = tf.lookup.TextFileInitializer( f.name, key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE, value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER) table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets) table.lookup(tf.constant(["palmer", "crimson", "king", "tarkus", "black", "moon"])).numpy() array([2, 3, 5, 6, 6, 4]) The hash function used for generating out-of-vocabulary bucket IDs is Fingerprint64. Args initializer A TableInitializerBase object that contains the data used to initialize the table. If None, then we only use out-of-vocab buckets. num_oov_buckets Number of buckets to use for out-of-vocabulary keys. Must be greater than zero. lookup_key_dtype Data type of keys passed to lookup. Defaults to initializer.key_dtype if initializer is specified, otherwise tf.string. Must be string or integer, and must be castable to initializer.key_dtype. name A name for the operation (optional). Raises ValueError when num_oov_buckets is not positive. TypeError when lookup_key_dtype or initializer.key_dtype are not integer or string. Also when initializer.value_dtype != int64. Attributes initializer key_dtype The table key dtype. name The name of the table. resource_handle Returns the resource handle associated with this Resource. value_dtype The table value dtype. Methods lookup View source lookup( keys, name=None ) Looks up keys in the table, outputs the corresponding values. It assigns out-of-vocabulary keys to buckets based on their hashes. Args keys Keys to look up. May be either a SparseTensor or dense Tensor. name Optional name for the op. Returns A SparseTensor if keys are sparse, a RaggedTensor if keys are ragged, otherwise a dense Tensor. Raises TypeError when keys doesn't match the table key data type. size View source size( name=None ) Compute the number of elements in this table. __getitem__ View source __getitem__( keys ) Looks up keys in a table, outputs the corresponding values.
tensorflow.compat.v1.lookup.staticvocabularytable
Module: tf.compat.v1.losses Loss operations for use in neural networks. Note: All the losses are added to the GraphKeys.LOSSES collection by default. Classes class Reduction: Types of loss reduction. Functions absolute_difference(...): Adds an Absolute Difference loss to the training procedure. add_loss(...): Adds an externally defined loss to the collection of losses. compute_weighted_loss(...): Computes the weighted loss. cosine_distance(...): Adds a cosine-distance loss to the training procedure. (deprecated arguments) get_losses(...): Gets the list of losses from the loss_collection. get_regularization_loss(...): Gets the total regularization loss. get_regularization_losses(...): Gets the list of regularization losses. get_total_loss(...): Returns a tensor whose value represents the total loss. hinge_loss(...): Adds a hinge loss to the training procedure. huber_loss(...): Adds a Huber Loss term to the training procedure. log_loss(...): Adds a Log Loss term to the training procedure. mean_pairwise_squared_error(...): Adds a pairwise-errors-squared loss to the training procedure. mean_squared_error(...): Adds a Sum-of-Squares loss to the training procedure. sigmoid_cross_entropy(...): Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. softmax_cross_entropy(...): Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2. sparse_softmax_cross_entropy(...): Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.
tensorflow.compat.v1.losses
tf.compat.v1.losses.absolute_difference Adds an Absolute Difference loss to the training procedure. tf.compat.v1.losses.absolute_difference( labels, predictions, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a Tensor of shape [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. If the shape of weights matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weights. Args labels The ground truth output tensor, same dimensions as 'predictions'. predictions The predicted outputs. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). scope The scope for the operations performed in computing the loss. loss_collection collection to which this loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If the shape of predictions doesn't match that of labels or if the shape of weights is invalid or if labels or predictions is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
tensorflow.compat.v1.losses.absolute_difference
tf.compat.v1.losses.add_loss Adds an externally defined loss to the collection of losses. tf.compat.v1.losses.add_loss( loss, loss_collection=tf.GraphKeys.LOSSES ) Args loss A loss Tensor. loss_collection Optional collection to add the loss to.
tensorflow.compat.v1.losses.add_loss
tf.compat.v1.losses.compute_weighted_loss Computes the weighted loss. tf.compat.v1.losses.compute_weighted_loss( losses, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) Args losses Tensor of shape [batch_size, d1, ... dN]. weights Optional Tensor whose rank is either 0, or the same rank as losses, and must be broadcastable to losses (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). scope the scope for the operations performed in computing the loss. loss_collection the loss will be added to these collections. reduction Type of reduction to apply to loss. Returns Weighted loss Tensor of the same type as losses. If reduction is NONE, this has the same shape as losses; otherwise, it is scalar. Raises ValueError If weights is None or the shape is not compatible with losses, or if the number of dimensions (rank) of either losses or weights is missing. Note: When calculating the gradient of a weighted loss contributions from both losses and weights are considered. If your weights depend on some model parameters but you do not want this to affect the loss gradient, you need to apply tf.stop_gradient to weights before passing them to compute_weighted_loss. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
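A small numeric sketch (an illustrative addition) of the default SUM_BY_NONZERO_WEIGHTS reduction: zero-weight samples are excluded from both the numerator and the denominator.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

losses = tf.constant([1.0, 2.0, 3.0, 4.0])
weights = tf.constant([1.0, 0.0, 1.0, 0.0])  # mask out samples 1 and 3
loss = tf.losses.compute_weighted_loss(losses, weights)

with tf.Session() as sess:
    print(sess.run(loss))  # (1.0 + 3.0) / 2 nonzero weights = 2.0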
tensorflow.compat.v1.losses.compute_weighted_loss
tf.compat.v1.losses.cosine_distance Adds a cosine-distance loss to the training procedure. (deprecated arguments) tf.compat.v1.losses.cosine_distance( labels, predictions, axis=None, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS, dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead Note that the function assumes that predictions and labels are already unit-normalized. Args labels Tensor whose shape matches 'predictions' predictions An arbitrary matrix. axis The dimension along which the cosine distance is computed. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). scope The scope for the operations performed in computing the loss. loss_collection collection to which this loss will be added. reduction Type of reduction to apply to loss. dim The old (deprecated) name for axis. Returns Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If predictions shape doesn't match labels shape, or axis, labels, predictions or weights is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
tensorflow.compat.v1.losses.cosine_distance
tf.compat.v1.losses.get_losses Gets the list of losses from the loss_collection. tf.compat.v1.losses.get_losses( scope=None, loss_collection=tf.GraphKeys.LOSSES ) Args scope An optional scope name for filtering the losses to return. loss_collection Optional losses collection. Returns a list of loss tensors.
tensorflow.compat.v1.losses.get_losses
tf.compat.v1.losses.get_regularization_loss Gets the total regularization loss. tf.compat.v1.losses.get_regularization_loss( scope=None, name='total_regularization_loss' ) Args scope An optional scope name for filtering the losses to return. name The name of the returned tensor. Returns A scalar regularization loss.
tensorflow.compat.v1.losses.get_regularization_loss
tf.compat.v1.losses.get_regularization_losses Gets the list of regularization losses. tf.compat.v1.losses.get_regularization_losses( scope=None ) Args scope An optional scope name for filtering the losses to return. Returns A list of regularization losses as Tensors.
tensorflow.compat.v1.losses.get_regularization_losses
tf.compat.v1.losses.get_total_loss Returns a tensor whose value represents the total loss. tf.compat.v1.losses.get_total_loss( add_regularization_losses=True, name='total_loss', scope=None ) In particular, this adds any losses you have added with tf.add_loss() to any regularization losses that have been added by regularization parameters on layer constructors, e.g., tf.layers. Be very sure to use this if you are constructing a loss_op manually. Otherwise regularization arguments on tf.layers methods will not function. Args add_regularization_losses A boolean indicating whether or not to use the regularization losses in the sum. name The name of the returned tensor. scope An optional scope name for filtering the losses to return. Note that this filters the losses added with tf.add_loss() as well as the regularization losses to that scope. Returns A Tensor whose value represents the total loss. Raises ValueError if losses is not iterable.
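A hedged sketch of the pattern described above: a regularized tf.layers layer contributes to the regularization collection, a tf.losses call contributes to GraphKeys.LOSSES, and get_total_loss() sums both.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])

# kernel_regularizer adds an L2 term to GraphKeys.REGULARIZATION_LOSSES.
pred = tf.layers.dense(
    x, 1, kernel_regularizer=tf.keras.regularizers.l2(0.01))
# Added to GraphKeys.LOSSES automatically.
tf.losses.mean_squared_error(y, pred)

total_loss = tf.losses.get_total_loss()  # data loss + L2 penalty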
tensorflow.compat.v1.losses.get_total_loss
tf.compat.v1.losses.hinge_loss Adds a hinge loss to the training procedure. tf.compat.v1.losses.hinge_loss( labels, logits, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) Args labels The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. Internally the {0,1} labels are converted to {-1,1} when calculating the hinge loss. logits The logits, a float tensor. Note that logits are assumed to be unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive (resp. negative) binary prediction. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). scope The scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If the shapes of logits and labels don't match or if labels or logits is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
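A small numeric sketch: with the {0,1} labels converted internally to {-1,1}, each element contributes max(0, 1 - y * logit), and the default reduction averages over the elements.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([[1.0], [0.0]])
logits = tf.constant([[0.3], [-1.2]])
loss = tf.losses.hinge_loss(labels, logits)

with tf.Session() as sess:
    # max(0, 1 - 1*0.3) = 0.7 and max(0, 1 - (-1)*(-1.2)) = 0.0
    print(sess.run(loss))  # (0.7 + 0.0) / 2 = 0.35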
tensorflow.compat.v1.losses.hinge_loss
tf.compat.v1.losses.huber_loss Adds a Huber Loss term to the training procedure. tf.compat.v1.losses.huber_loss( labels, predictions, weights=1.0, delta=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) For each value x in error = labels - predictions, the following is calculated: 0.5 * x^2 if |x| <= d; 0.5 * d^2 + d * (|x| - d) if |x| > d, where d is delta. weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. If the shape of weights matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weights. Args labels The ground truth output tensor, same dimensions as 'predictions'. predictions The predicted outputs. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). delta float, the point where the Huber loss function changes from quadratic to linear. scope The scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If the shape of predictions doesn't match that of labels or if the shape of weights is invalid. Also if labels or predictions is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
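A numeric sketch of the piecewise definition above with delta = 1.0:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([0.0, 0.0])
predictions = tf.constant([0.5, 2.0])
loss = tf.losses.huber_loss(labels, predictions, delta=1.0)

with tf.Session() as sess:
    # |x| = 0.5 <= 1: 0.5 * 0.5^2 = 0.125 (quadratic branch)
    # |x| = 2.0 >  1: 0.5 * 1^2 + 1 * (2 - 1) = 1.5 (linear branch)
    print(sess.run(loss))  # (0.125 + 1.5) / 2 = 0.8125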
tensorflow.compat.v1.losses.huber_loss
tf.compat.v1.losses.log_loss Adds a Log Loss term to the training procedure. tf.compat.v1.losses.log_loss( labels, predictions, weights=1.0, epsilon=1e-07, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. If the shape of weights matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weights. Args labels The ground truth output tensor, same dimensions as 'predictions'. predictions The predicted outputs. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). epsilon A small increment to add to avoid taking a log of zero. scope The scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If the shape of predictions doesn't match that of labels or if the shape of weights is invalid. Also if labels or predictions is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
tensorflow.compat.v1.losses.log_loss
tf.compat.v1.losses.mean_pairwise_squared_error Adds a pairwise-errors-squared loss to the training procedure. tf.compat.v1.losses.mean_pairwise_squared_error( labels, predictions, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES ) Unlike mean_squared_error, which is a measure of the differences between corresponding elements of predictions and labels, mean_pairwise_squared_error is a measure of the differences between pairs of corresponding elements of predictions and labels. For example, if labels=[a, b, c] and predictions=[x, y, z], there are three pairs of differences that are summed to compute the loss: loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3 Note that since the inputs are of shape [batch_size, d0, ... dN], the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if predictions represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images. weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. Args labels The ground truth output tensor, whose shape must match the shape of predictions. predictions The predicted outputs, a tensor of size [batch_size, d0, .. dN] where N+1 is the total number of dimensions in predictions. weights Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions. scope The scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. Returns A scalar Tensor that returns the weighted loss. Raises ValueError If the shape of predictions doesn't match that of labels or if the shape of weights is invalid. Also if labels or predictions is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
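A numeric check of the three-pair formula above, using a single batch sample:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([[4.0, 8.0, 12.0]])
predictions = tf.constant([[1.0, 2.0, 3.0]])
loss = tf.losses.mean_pairwise_squared_error(labels, predictions)

with tf.Session() as sess:
    # ((4-8)-(1-2))^2 = 9, ((4-12)-(1-3))^2 = 36, ((8-12)-(2-3))^2 = 9
    print(sess.run(loss))  # (9 + 36 + 9) / 3 = 18.0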
tensorflow.compat.v1.losses.mean_pairwise_squared_error
tf.compat.v1.losses.mean_squared_error Adds a Sum-of-Squares loss to the training procedure. tf.compat.v1.losses.mean_squared_error( labels, predictions, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. If the shape of weights matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weights. Args labels The ground truth output tensor, same dimensions as 'predictions'. predictions The predicted outputs. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). scope The scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If the shape of predictions doesn't match that of labels or if the shape of weights is invalid. Also if labels or predictions is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
tensorflow.compat.v1.losses.mean_squared_error
tf.compat.v1.losses.Reduction Types of loss reduction. Contains the following values: NONE: Un-reduced weighted losses with the same shape as input. SUM: Scalar sum of weighted losses. MEAN: Scalar SUM divided by sum of weights. DEPRECATED. SUM_OVER_BATCH_SIZE: Scalar SUM divided by number of elements in losses. SUM_OVER_NONZERO_WEIGHTS: Scalar SUM divided by number of non-zero weights. DEPRECATED. SUM_BY_NONZERO_WEIGHTS: Same as SUM_OVER_NONZERO_WEIGHTS. DEPRECATED. Methods all View source @classmethod all() validate View source @classmethod validate( key ) Class Variables MEAN 'weighted_mean' NONE 'none' SUM 'weighted_sum' SUM_BY_NONZERO_WEIGHTS 'weighted_sum_by_nonzero_weights' SUM_OVER_BATCH_SIZE 'weighted_sum_over_batch_size' SUM_OVER_NONZERO_WEIGHTS 'weighted_sum_by_nonzero_weights'
tensorflow.compat.v1.losses.reduction
tf.compat.v1.losses.sigmoid_cross_entropy Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. tf.compat.v1.losses.sigmoid_cross_entropy( multi_class_labels, logits, weights=1.0, label_smoothing=0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of shape [batch_size], then the loss weights apply to each corresponding sample. If label_smoothing is nonzero, smooth the labels towards 1/2: new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing Args multi_class_labels [batch_size, num_classes] target integer labels in {0, 1}. logits Float [batch_size, num_classes] logits outputs of the network. weights Optional Tensor whose rank is either 0, or the same rank as multi_class_labels, and must be broadcastable to multi_class_labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension). label_smoothing If greater than 0 then smooth the labels. scope The scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss Tensor of the same type as logits. If reduction is NONE, this has the same shape as logits; otherwise, it is scalar. Raises ValueError If the shape of logits doesn't match that of multi_class_labels or if the shape of weights is invalid, or if weights is None. Also if multi_class_labels or logits is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
tensorflow.compat.v1.losses.sigmoid_cross_entropy
tf.compat.v1.losses.softmax_cross_entropy Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2. tf.compat.v1.losses.softmax_cross_entropy( onehot_labels, logits, weights=1.0, label_smoothing=0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of shape [batch_size], then the loss weights apply to each corresponding sample. If label_smoothing is nonzero, smooth the labels towards 1/num_classes: new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes Note that onehot_labels and logits must have the same shape, e.g. [batch_size, num_classes]. The shape of weights must be broadcastable to loss, whose shape is decided by the shape of logits. In case the shape of logits is [batch_size, num_classes], loss is a Tensor of shape [batch_size]. Args onehot_labels One-hot-encoded labels. logits Logits outputs of the network. weights Optional Tensor that is broadcastable to loss. label_smoothing If greater than 0 then smooth the labels. scope the scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss Tensor of the same type as logits. If reduction is NONE, this has shape [batch_size]; otherwise, it is scalar. Raises ValueError If the shape of logits doesn't match that of onehot_labels or if the shape of weights is invalid or if weights is None. Also if onehot_labels or logits is None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
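A small sketch of label smoothing toward 1/num_classes:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

onehot_labels = tf.constant([[1.0, 0.0, 0.0]])
logits = tf.constant([[2.0, 0.5, 0.5]])
# With label_smoothing=0.1 and 3 classes, the effective labels become
# [1, 0, 0] * 0.9 + 0.1 / 3 = [0.9333..., 0.0333..., 0.0333...].
loss = tf.losses.softmax_cross_entropy(
    onehot_labels, logits, label_smoothing=0.1)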
tensorflow.compat.v1.losses.softmax_cross_entropy
tf.compat.v1.losses.sparse_softmax_cross_entropy Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits. tf.compat.v1.losses.sparse_softmax_cross_entropy( labels, logits, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS ) weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of shape [batch_size], then the loss weights apply to each corresponding sample. Args labels Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU. logits Unscaled log probabilities of shape [d_0, d_1, ..., d_{r-1}, num_classes] and dtype float16, float32 or float64. weights Coefficients for the loss. This must be scalar or broadcastable to labels (i.e. same rank and each dimension is either 1 or the same). scope the scope for the operations performed in computing the loss. loss_collection collection to which the loss will be added. reduction Type of reduction to apply to loss. Returns Weighted loss Tensor of the same type as logits. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar. Raises ValueError If the shapes of logits, labels, and weights are incompatible, or if any of them are None. Eager Compatibility The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.
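Unlike softmax_cross_entropy, labels here are integer class indices rather than one-hot vectors (a minimal sketch):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([0, 2])  # class indices in [0, num_classes)
logits = tf.constant([[2.0, 0.5, 0.3],
                      [0.1, 0.2, 3.0]])
loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)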
tensorflow.compat.v1.losses.sparse_softmax_cross_entropy
tf.compat.v1.make_template Given an arbitrary function, wrap it so that it does variable sharing. tf.compat.v1.make_template( name_, func_, create_scope_now_=False, unique_name_=None, custom_getter_=None, **kwargs ) This wraps func_ in a Template and partially evaluates it. Templates are functions that create variables the first time they are called and reuse them thereafter. In order for func_ to be compatible with a Template it must have the following properties: The function should create all trainable variables and any variables that should be reused by calling tf.compat.v1.get_variable. If a trainable variable is created using tf.Variable, then a ValueError will be thrown. Variables that are intended to be locals can be created by specifying tf.Variable(..., trainable=False). The function may use variable scopes and other templates internally to create and reuse variables, but it shouldn't use tf.compat.v1.global_variables to capture variables that are defined outside of the scope of the function. Internal scopes and variable names should not depend on any arguments that are not supplied to make_template. In general you will get a ValueError telling you that you are trying to reuse a variable that doesn't exist if you make a mistake. In the following example, both z and w will be scaled by the same y. It is important to note that if we didn't assign scalar_name and used a different name for z and w, a ValueError would be thrown because the variable couldn't be reused. def my_op(x, scalar_name): var1 = tf.compat.v1.get_variable(scalar_name, shape=[], initializer=tf.compat.v1.constant_initializer(1)) return x * var1 scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z = scale_by_y(input1) w = scale_by_y(input2) As a safeguard, the returned function will raise a ValueError after the first call if trainable variables are created by calling tf.Variable. If all of these are true, then two properties are enforced by the template: Calling the same template multiple times will share all non-local variables. Two different templates are guaranteed to be unique, unless you reenter the same variable scope as the initial definition of a template and redefine it. An example of this exception: def my_op(x, scalar_name): var1 = tf.compat.v1.get_variable(scalar_name, shape=[], initializer=tf.compat.v1.constant_initializer(1)) return x * var1 with tf.compat.v1.variable_scope('scope') as vs: scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z = scale_by_y(input1) w = scale_by_y(input2) # Creates a template that reuses the variables above. with tf.compat.v1.variable_scope(vs, reuse=True): scale_by_y2 = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y') z2 = scale_by_y2(input1) w2 = scale_by_y2(input2) Depending on the value of create_scope_now_, the full variable scope may be captured either at the time of first call or at the time of construction. If this option is set to True, then all Tensors created by repeated calls to the template will have an extra trailing _N+1 to their name, as the first time the scope is entered in the Template constructor no Tensors are created. Note: name_, func_ and create_scope_now_ have a trailing underscore to reduce the likelihood of collisions with kwargs. Args name_ A name for the scope created by this template. If necessary, the name will be made unique by appending _N to the name. func_ The function to wrap. 
create_scope_now_ Boolean controlling whether the scope should be created when the template is constructed or when the template is called. Default is False, meaning the scope is created when the template is called. unique_name_ When used, it overrides name_ and is not made unique. If a template of the same scope/unique_name already exists and reuse is false, an error is raised. Defaults to None. custom_getter_ Optional custom getter for variables used in func_. See the tf.compat.v1.get_variable custom_getter documentation for more information. **kwargs Keyword arguments to apply to func_. Returns A function to encapsulate a set of variables which should be created once and reused. An enclosing scope will be created either when make_template is called or when the result is called, depending on the value of create_scope_now_. Regardless of the value, the first time the template is called it will enter the scope with no reuse, and call func_ to create variables, which are guaranteed to be unique. All subsequent calls will re-enter the scope and reuse those variables. Raises ValueError if name_ is None.
tensorflow.compat.v1.make_template
Module: tf.compat.v1.manip Operators for manipulating tensors. Functions batch_to_space_nd(...): BatchToSpace for N-D tensors of type T. gather_nd(...): Gather slices from params into a Tensor with shape specified by indices. reshape(...): Reshapes a tensor. reverse(...): Reverses specific dimensions of a tensor. roll(...): Rolls the elements of a tensor along an axis. scatter_nd(...): Scatter updates into a new tensor according to indices. space_to_batch_nd(...): SpaceToBatch for N-D tensors of type T. tile(...): Constructs a tensor by tiling a given tensor.
tensorflow.compat.v1.manip
tf.compat.v1.map_fn Transforms elems by applying fn to each element unstacked on axis 0. (deprecated arguments) tf.compat.v1.map_fn( fn, elems, dtype=None, parallel_iterations=None, back_prop=True, swap_memory=False, infer_shape=True, name=None, fn_output_signature=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dtype). They will be removed in a future version. Instructions for updating: Use fn_output_signature instead See also tf.scan. map_fn unstacks elems on axis 0 to obtain a sequence of elements; calls fn to transform each element; and then stacks the transformed values back together. Mapping functions with single-Tensor inputs and outputs If elems is a single tensor and fn's signature is tf.Tensor->tf.Tensor, then map_fn(fn, elems) is equivalent to tf.stack([fn(elem) for elem in tf.unstack(elems)]). E.g.: tf.map_fn(fn=lambda t: tf.range(t, t + 3), elems=tf.constant([3, 5, 2])) <tf.Tensor: shape=(3, 3), dtype=int32, numpy= array([[3, 4, 5], [5, 6, 7], [2, 3, 4]], dtype=int32)> map_fn(fn, elems).shape = [elems.shape[0]] + fn(elems[0]).shape. Mapping functions with multi-arity inputs and outputs map_fn also supports functions with multi-arity inputs and outputs: If elems is a tuple (or nested structure) of tensors, then those tensors must all have the same outer-dimension size (num_elems); and fn is used to transform each tuple (or structure) of corresponding slices from elems. E.g., if elems is a tuple (t1, t2, t3), then fn is used to transform each tuple of slices (t1[i], t2[i], t3[i]) (where 0 <= i < num_elems). If fn returns a tuple (or nested structure) of tensors, then the result is formed by stacking corresponding elements from those structures. Specifying fn's output signature If fn's input and output signatures are different, then the output signature must be specified using fn_output_signature. (The input and output signatures differ if their structures, dtypes, or tensor types do not match). E.g.: tf.map_fn(fn=tf.strings.length, # input & output have different dtypes elems=tf.constant(["hello", "moon"]), fn_output_signature=tf.int32) <tf.Tensor: shape=(2,), dtype=int32, numpy=array([5, 4], dtype=int32)> tf.map_fn(fn=tf.strings.join, # input & output have different structures elems=[tf.constant(['The', 'A']), tf.constant(['Dog', 'Cat'])], fn_output_signature=tf.string) <tf.Tensor: shape=(2,), dtype=string, numpy=array([b'TheDog', b'ACat'], dtype=object)> fn_output_signature can be specified using any of the following: A tf.DType or tf.TensorSpec (to describe a tf.Tensor) A tf.RaggedTensorSpec (to describe a tf.RaggedTensor) A tf.SparseTensorSpec (to describe a tf.sparse.SparseTensor) A (possibly nested) tuple, list, or dict containing the above types. RaggedTensors map_fn supports tf.RaggedTensor inputs and outputs. In particular: If elems is a RaggedTensor, then fn will be called with each row of that ragged tensor. If elems has only one ragged dimension, then the values passed to fn will be tf.Tensors. If elems has multiple ragged dimensions, then the values passed to fn will be tf.RaggedTensors with one fewer ragged dimension. If the result of map_fn should be a RaggedTensor, then use a tf.RaggedTensorSpec to specify fn_output_signature. If fn returns tf.Tensors with varying sizes, then use a tf.RaggedTensorSpec with ragged_rank=0 to combine them into a single ragged tensor (which will have ragged_rank=1). If fn returns tf.RaggedTensors, then use a tf.RaggedTensorSpec with the same ragged_rank. 
# Example: RaggedTensor input rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]]) tf.map_fn(tf.reduce_sum, rt, fn_output_signature=tf.int32) <tf.Tensor: shape=(4,), dtype=int32, numpy=array([6, 0, 9, 6], dtype=int32)> # Example: RaggedTensor output elems = tf.constant([3, 5, 0, 2]) tf.map_fn(tf.range, elems, fn_output_signature=tf.RaggedTensorSpec(shape=[None], dtype=tf.int32)) <tf.RaggedTensor [[0, 1, 2], [0, 1, 2, 3, 4], [], [0, 1]]> Note: map_fn should only be used if you need to map a function over the rows of a RaggedTensor. If you wish to map a function over the individual values, then you should use: tf.ragged.map_flat_values(fn, rt) (if fn is expressible as TensorFlow ops) rt.with_flat_values(map_fn(fn, rt.flat_values)) (otherwise) E.g.: rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]]) tf.ragged.map_flat_values(lambda x: x + 2, rt) <tf.RaggedTensor [[3, 4, 5], [], [6, 7], [8]]> SparseTensors map_fn supports tf.sparse.SparseTensor inputs and outputs. In particular: If elems is a SparseTensor, then fn will be called with each row of that sparse tensor. In particular, the value passed to fn will be a tf.sparse.SparseTensor with one fewer dimension than elems. If the result of map_fn should be a SparseTensor, then use a tf.SparseTensorSpec to specify fn_output_signature. The individual SparseTensors returned by fn will be stacked into a single SparseTensor with one more dimension. # Example: SparseTensor input st = tf.sparse.SparseTensor([[0, 0], [2, 0], [2, 1]], [2, 3, 4], [4, 4]) tf.map_fn(tf.sparse.reduce_sum, st, fn_output_signature=tf.int32) <tf.Tensor: shape=(4,), dtype=int32, numpy=array([2, 0, 7, 0], dtype=int32)> # Example: SparseTensor output tf.sparse.to_dense( tf.map_fn(tf.sparse.eye, tf.constant([2, 3]), fn_output_signature=tf.SparseTensorSpec(None, tf.float32))) <tf.Tensor: shape=(2, 3, 3), dtype=float32, numpy= array([[[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]], [[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]], dtype=float32)> Note: map_fn should only be used if you need to map a function over the rows of a SparseTensor. If you wish to map a function over the nonzero values, then you should use: If the function is expressible as TensorFlow ops, use: tf.sparse.SparseTensor(st.indices, fn(st.values), st.dense_shape) Otherwise, use: tf.sparse.SparseTensor(st.indices, tf.map_fn(fn, st.values), st.dense_shape) map_fn vs. vectorized operations map_fn will apply the operations used by fn to each element of elems, resulting in O(elems.shape[0]) total operations. This is somewhat mitigated by the fact that map_fn can process elements in parallel. However, a transform expressed using map_fn is still typically less efficient than an equivalent transform expressed using vectorized operations. map_fn should typically only be used if one of the following is true: It is difficult or expensive to express the desired transform with vectorized operations. fn creates large intermediate values, so an equivalent vectorized transform would take too much memory. Processing elements in parallel is more efficient than an equivalent vectorized transform. Efficiency of the transform is not critical, and using map_fn is more readable. 
E.g., the example given above that maps fn=lambda t: tf.range(t, t + 3) across elems could be rewritten more efficiently using vectorized ops: elems = tf.constant([3, 5, 2]) tf.range(3) + tf.expand_dims(elems, 1) <tf.Tensor: shape=(3, 3), dtype=int32, numpy= array([[3, 4, 5], [5, 6, 7], [2, 3, 4]], dtype=int32)> In some cases, tf.vectorized_map can be used to automatically convert a function to a vectorized equivalent. Eager execution When executing eagerly, map_fn does not execute in parallel even if parallel_iterations is set to a value > 1. You can still get the performance benefits of running a function in parallel by using the tf.function decorator: fn=lambda t: tf.range(t, t + 3) @tf.function def func(elems): return tf.map_fn(fn, elems, parallel_iterations=3) func(tf.constant([3, 5, 2])) <tf.Tensor: shape=(3, 3), dtype=int32, numpy= array([[3, 4, 5], [5, 6, 7], [2, 3, 4]], dtype=int32)> Note: if you use the tf.function decorator, any non-TensorFlow Python code that you may have written in your function won't get executed. See tf.function for more details. The recommendation is to debug without tf.function, then switch to it to get the performance benefits of running map_fn in parallel. Args fn The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as fn_output_signature if one is provided; otherwise it must have the same structure as elems. elems A tensor or (possibly nested) sequence of tensors, each of which will be unstacked along their first dimension. fn will be applied to the nested sequence of the resulting slices. elems may include ragged and sparse tensors. elems must consist of at least one tensor. dtype Deprecated: Equivalent to fn_output_signature. parallel_iterations (optional) The number of iterations allowed to run in parallel. When graph building, the default value is 10. While executing eagerly, the default value is set to 1. back_prop (optional) False disables support for back propagation. swap_memory (optional) True enables GPU-CPU memory swapping. infer_shape (optional) False disables tests for consistent output shapes. name (optional) Name prefix for the returned tensors. fn_output_signature The output signature of fn. Must be specified if fn's input and output signatures are different (i.e., if their structures, dtypes, or tensor types do not match). fn_output_signature can be specified using any of the following: A tf.DType or tf.TensorSpec (to describe a tf.Tensor) A tf.RaggedTensorSpec (to describe a tf.RaggedTensor) A tf.SparseTensorSpec (to describe a tf.sparse.SparseTensor) A (possibly nested) tuple, list, or dict containing the above types. Returns A tensor or (possibly nested) sequence of tensors. Each tensor stacks the results of applying fn to tensors unstacked from elems along the first dimension, from first to last. The result may include ragged and sparse tensors. Raises TypeError if fn is not callable or the structure of the output of fn and fn_output_signature do not match. ValueError if the lengths of the output of fn and fn_output_signature do not match, or if elems does not contain any tensor. 
Examples: elems = np.array([1, 2, 3, 4, 5, 6]) tf.map_fn(lambda x: x * x, elems) <tf.Tensor: shape=(6,), dtype=int64, numpy=array([ 1, 4, 9, 16, 25, 36])> elems = (np.array([1, 2, 3]), np.array([-1, 1, -1])) tf.map_fn(lambda x: x[0] * x[1], elems, fn_output_signature=tf.int64) <tf.Tensor: shape=(3,), dtype=int64, numpy=array([-1, 2, -3])> elems = np.array([1, 2, 3]) tf.map_fn(lambda x: (x, -x), elems, fn_output_signature=(tf.int64, tf.int64)) (<tf.Tensor: shape=(3,), dtype=int64, numpy=array([1, 2, 3])>, <tf.Tensor: shape=(3,), dtype=int64, numpy=array([-1, -2, -3])>)
tensorflow.compat.v1.map_fn
Module: tf.compat.v1.math Math Operations. Note: Functions taking Tensor arguments can also take anything accepted by tf.convert_to_tensor. Note: Elementwise binary operations in TensorFlow follow numpy-style broadcasting. TensorFlow provides a variety of math functions including: Basic arithmetic operators and trigonometric functions. Special math functions (like: tf.math.igamma and tf.math.zeta) Complex number functions (like: tf.math.imag and tf.math.angle) Reductions and scans (like: tf.math.reduce_mean and tf.math.cumsum) Segment functions (like: tf.math.segment_sum) See: tf.linalg for matrix and tensor functions. About Segmentation TensorFlow provides several operations that you can use to perform common math computations on tensor segments. Here a segmentation is a partitioning of a tensor along the first dimension, i.e. it defines a mapping from the first dimension onto segment_ids. The segment_ids tensor should be the size of the first dimension, d0, with consecutive IDs in the range 0 to k, where k<d0. In particular, a segmentation of a matrix tensor is a mapping of rows to segments. For example: c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) tf.math.segment_sum(c, tf.constant([0, 0, 1])) # ==> [[0 0 0 0] # [5 6 7 8]] The standard segment_* functions assert that the segment indices are sorted. If you have unsorted indices use the equivalent unsorted_segment_ function. These functions take an additional argument num_segments so that the output tensor can be efficiently allocated. c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) tf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2) # ==> [[ 6, 8, 10, 12], # [-1, -2, -3, -4]] Modules special module: Public API for tf.math.special namespace. Functions abs(...): Computes the absolute value of a tensor. accumulate_n(...): Returns the element-wise sum of a list of tensors. acos(...): Computes acos of x element-wise. acosh(...): Computes inverse hyperbolic cosine of x element-wise. add(...): Returns x + y element-wise. add_n(...): Adds all input tensors element-wise. angle(...): Returns the element-wise argument of a complex (or real) tensor. argmax(...): Returns the index with the largest value across axes of a tensor. (deprecated arguments) argmin(...): Returns the index with the smallest value across axes of a tensor. (deprecated arguments) asin(...): Computes the trigonometric inverse sine of x element-wise. asinh(...): Computes inverse hyperbolic sine of x element-wise. atan(...): Computes the trigonometric inverse tangent of x element-wise. atan2(...): Computes arctangent of y/x element-wise, respecting signs of the arguments. atanh(...): Computes inverse hyperbolic tangent of x element-wise. bessel_i0(...): Computes the Bessel i0 function of x element-wise. bessel_i0e(...): Computes the Bessel i0e function of x element-wise. bessel_i1(...): Computes the Bessel i1 function of x element-wise. bessel_i1e(...): Computes the Bessel i1e function of x element-wise. betainc(...): Compute the regularized incomplete beta integral \(I_x(a, b)\). bincount(...): Counts the number of occurrences of each value in an integer array. ceil(...): Return the ceiling of the input, element-wise. confusion_matrix(...): Computes the confusion matrix from predictions and labels. conj(...): Returns the complex conjugate of a complex number. cos(...): Computes cos of x element-wise. cosh(...): Computes hyperbolic cosine of x element-wise. count_nonzero(...): Computes number of nonzero elements across dimensions of a tensor. 
(deprecated arguments) cumprod(...): Compute the cumulative product of the tensor x along axis. cumsum(...): Compute the cumulative sum of the tensor x along axis. cumulative_logsumexp(...): Compute the cumulative log-sum-exp of the tensor x along axis. digamma(...): Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise. divide(...): Computes Python style division of x by y. divide_no_nan(...): Computes a safe divide which returns 0 if y is zero. equal(...): Returns the truth value of (x == y) element-wise. erf(...): Computes the Gauss error function of x element-wise. erfc(...): Computes the complementary error function of x element-wise. erfcinv(...): Computes the inverse of complementary error function. erfinv(...): Compute inverse error function. exp(...): Computes exponential of x element-wise. \(y = e^x\). expm1(...): Computes exp(x) - 1 element-wise. floor(...): Returns element-wise largest integer not greater than x. floordiv(...): Divides x / y elementwise, rounding toward the most negative integer. floormod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is true, the result follows Python semantics and is consistent with a flooring divide. greater(...): Returns the truth value of (x > y) element-wise. greater_equal(...): Returns the truth value of (x >= y) element-wise. igamma(...): Compute the lower regularized incomplete Gamma function P(a, x). igammac(...): Compute the upper regularized incomplete Gamma function Q(a, x). imag(...): Returns the imaginary part of a complex (or real) tensor. in_top_k(...): Says whether the targets are in the top K predictions. invert_permutation(...): Computes the inverse permutation of a tensor. is_finite(...): Returns which elements of x are finite. is_inf(...): Returns which elements of x are Inf. is_nan(...): Returns which elements of x are NaN. is_non_decreasing(...): Returns True if x is non-decreasing. is_strictly_increasing(...): Returns True if x is strictly increasing. l2_normalize(...): Normalizes along dimension axis using an L2 norm. (deprecated arguments) lbeta(...): Computes \(ln(|Beta(x)|)\), reducing along the last dimension. less(...): Returns the truth value of (x < y) element-wise. less_equal(...): Returns the truth value of (x <= y) element-wise. lgamma(...): Computes the log of the absolute value of Gamma(x) element-wise. log(...): Computes natural logarithm of x element-wise. log1p(...): Computes natural logarithm of (1 + x) element-wise. log_sigmoid(...): Computes log sigmoid of x element-wise. log_softmax(...): Computes log softmax activations. (deprecated arguments) logical_and(...): Logical AND function. logical_not(...): Returns the truth value of NOT x element-wise. logical_or(...): Returns the truth value of x OR y element-wise. logical_xor(...): Logical XOR function. maximum(...): Returns the max of x and y (i.e. x > y ? x : y) element-wise. minimum(...): Returns the min of x and y (i.e. x < y ? x : y) element-wise. mod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is true, the result follows Python semantics and is consistent with a flooring divide. multiply(...): Returns an element-wise x * y. multiply_no_nan(...): Computes the product of x and y and returns 0 if y is zero, even if x is NaN or infinite. ndtri(...): Compute quantile of Standard Normal. negative(...): Computes numerical negative value element-wise. nextafter(...): Returns the next representable value of x1 in the direction of x2, element-wise. not_equal(...): Returns the truth value of (x != y) element-wise. polygamma(...): Compute the polygamma function \(\psi^{(n)}(x)\). 
polyval(...): Computes the elementwise value of a polynomial. pow(...): Computes the power of one value to another. real(...): Returns the real part of a complex (or real) tensor. reciprocal(...): Computes the reciprocal of x element-wise. reciprocal_no_nan(...): Performs a safe reciprocal operation, element wise. reduce_all(...): Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) reduce_any(...): Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) reduce_euclidean_norm(...): Computes the Euclidean norm of elements across dimensions of a tensor. reduce_logsumexp(...): Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) reduce_max(...): Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) reduce_mean(...): Computes the mean of elements across dimensions of a tensor. reduce_min(...): Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) reduce_prod(...): Computes the product of elements across dimensions of a tensor. (deprecated arguments) reduce_std(...): Computes the standard deviation of elements across dimensions of a tensor. reduce_sum(...): Computes the sum of elements across dimensions of a tensor. (deprecated arguments) reduce_variance(...): Computes the variance of elements across dimensions of a tensor. rint(...): Returns element-wise integer closest to x. round(...): Rounds the values of a tensor to the nearest integer, element-wise. rsqrt(...): Computes reciprocal of square root of x element-wise. scalar_mul(...): Multiplies a scalar times a Tensor or IndexedSlices object. segment_max(...): Computes the maximum along segments of a tensor. segment_mean(...): Computes the mean along segments of a tensor. segment_min(...): Computes the minimum along segments of a tensor. segment_prod(...): Computes the product along segments of a tensor. segment_sum(...): Computes the sum along segments of a tensor. sigmoid(...): Computes sigmoid of x element-wise. sign(...): Returns an element-wise indication of the sign of a number. sin(...): Computes sine of x element-wise. sinh(...): Computes hyperbolic sine of x element-wise. sobol_sample(...): Generates points from the Sobol sequence. softmax(...): Computes softmax activations. (deprecated arguments) softplus(...): Computes softplus: log(exp(features) + 1). softsign(...): Computes softsign: features / (abs(features) + 1). sqrt(...): Computes element-wise square root of the input tensor. square(...): Computes square of x element-wise. squared_difference(...): Returns conj(x - y)(x - y) element-wise. subtract(...): Returns x - y element-wise. tan(...): Computes tan of x element-wise. tanh(...): Computes hyperbolic tangent of x element-wise. top_k(...): Finds values and indices of the k largest entries for the last dimension. truediv(...): Divides x / y elementwise (using Python 3 division operator semantics). unsorted_segment_max(...): Computes the maximum along segments of a tensor. unsorted_segment_mean(...): Computes the mean along segments of a tensor. unsorted_segment_min(...): Computes the minimum along segments of a tensor. unsorted_segment_prod(...): Computes the product along segments of a tensor. unsorted_segment_sqrt_n(...): Computes the sum along segments of a tensor divided by the sqrt(N). unsorted_segment_sum(...): Computes the sum along segments of a tensor. xdivy(...): Returns 0 if x == 0, and x / y otherwise, elementwise. 
xlog1py(...): Compute x * log1p(y). xlogy(...): Returns 0 if x == 0, and x * log(y) otherwise, elementwise. zero_fraction(...): Returns the fraction of zeros in value. zeta(...): Compute the Hurwitz zeta function \(\zeta(x, q)\).
tensorflow.compat.v1.math
tf.compat.v1.math.in_top_k Says whether the targets are in the top K predictions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.in_top_k tf.compat.v1.math.in_top_k( predictions, targets, k, name=None ) This outputs a batch_size bool array, an entry out[i] is true if the prediction for the target class is finite (not inf, -inf, or nan) and among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k. More formally, let \(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, \(out_i\) be the output for example i, $$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ Args predictions A Tensor of type float32. A batch_size x classes tensor. targets A Tensor. Must be one of the following types: int32, int64. A batch_size vector of class ids. k An int. Number of top elements to look at for computing precision. name A name for the operation (optional). Returns A Tensor of type bool. Computed Precision at k as a bool Tensor.
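For illustration, a minimal sketch with made-up score values; example 0's target class has the highest score while example 1's does not:

import tensorflow as tf

# Hypothetical scores for a batch of 2 examples over 3 classes.
predictions = tf.constant([[0.1, 0.8, 0.1],
                           [0.6, 0.3, 0.1]])
targets = tf.constant([1, 2])
# Example 0: class 1 has the highest score -> True.
# Example 1: class 2 is not the top-1 prediction -> False.
result = tf.compat.v1.math.in_top_k(predictions, targets, k=1)
# result == [True, False]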
tensorflow.compat.v1.math.in_top_k
tf.compat.v1.math.log_softmax Computes log softmax activations. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.log_softmax tf.compat.v1.math.log_softmax( logits, axis=None, name=None, dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead For each batch i and class j we have logsoftmax = logits - log(reduce_sum(exp(logits), axis)) Args logits A non-empty Tensor. Must be one of the following types: half, float32, float64. axis The dimension softmax would be performed on. The default is -1 which indicates the last dimension. name A name for the operation (optional). dim Deprecated alias for axis. Returns A Tensor. Has the same type as logits. Same shape as logits. Raises InvalidArgumentError if logits is empty or axis is beyond the last dimension of logits.
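A minimal numerical sketch (input values chosen arbitrarily); exponentiating the result recovers the softmax probabilities, which sum to 1:

import tensorflow as tf

logits = tf.constant([-1.0, 0.0, 1.0])
tf.compat.v1.math.log_softmax(logits)
# ==> approximately [-2.4076059, -1.4076059, -0.4076059]
# Equivalent to logits - tf.reduce_logsumexp(logits);
# tf.exp of the result gives [0.09003057, 0.24472848, 0.66524094].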
tensorflow.compat.v1.math.log_softmax
tf.compat.v1.math.softmax Computes softmax activations. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.softmax tf.compat.v1.math.softmax( logits, axis=None, name=None, dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead This function performs the equivalent of softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis) See: https://en.wikipedia.org/wiki/Softmax_function Example usage: tf.nn.softmax([-1, 0., 1.]) <tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.09003057, 0.24472848, 0.66524094], dtype=float32)> Args logits A non-empty Tensor, or an object whose type has a registered Tensor conversion function. Must be one of the following types: half, float32, float64. See also convert_to_tensor. axis The dimension softmax would be performed on. The default is -1 which indicates the last dimension. name A name for the operation (optional). dim Deprecated alias for axis. Returns A Tensor. Has the same type and shape as logits. Raises InvalidArgumentError if logits is empty or axis is beyond the last dimension of logits. TypeError If no conversion function is registered for logits to Tensor. RuntimeError If a registered conversion function returns an invalid value.
tensorflow.compat.v1.math.softmax
Module: tf.compat.v1.math.special Public API for tf.math.special namespace. Functions bessel_i0(...): Computes the Bessel i0 function of x element-wise. bessel_i0e(...): Computes the Bessel i0e function of x element-wise. bessel_i1(...): Computes the Bessel i1 function of x element-wise. bessel_i1e(...): Computes the Bessel i1e function of x element-wise. bessel_j0(...): Computes the Bessel j0 function of x element-wise. bessel_j1(...): Computes the Bessel j1 function of x element-wise. bessel_k0(...): Computes the Bessel k0 function of x element-wise. bessel_k0e(...): Computes the Bessel k0e function of x element-wise. bessel_k1(...): Computes the Bessel k1 function of x element-wise. bessel_k1e(...): Computes the Bessel k1e function of x element-wise. bessel_y0(...): Computes the Bessel y0 function of x element-wise. bessel_y1(...): Computes the Bessel y1 function of x element-wise. dawsn(...): Computes Dawson's integral of x element-wise. expint(...): Computes the Exponential integral of x element-wise. fresnel_cos(...): Computes Fresnel's cosine integral of x element-wise. fresnel_sin(...): Computes Fresnel's sine integral of x element-wise. spence(...): Computes Spence's integral of x element-wise.
tensorflow.compat.v1.math.special
tf.compat.v1.MetaGraphDef A ProtocolMessage Attributes asset_file_def repeated AssetFileDef asset_file_def collection_def repeated CollectionDefEntry collection_def graph_def GraphDef graph_def meta_info_def MetaInfoDef meta_info_def object_graph_def SavedObjectGraph object_graph_def saver_def SaverDef saver_def signature_def repeated SignatureDefEntry signature_def Child Classes class CollectionDefEntry class MetaInfoDef class SignatureDefEntry
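Although this page only lists the message's fields, a minimal sketch of how a MetaGraphDef is typically produced and inspected may help (graph-mode workflow; the variable name is hypothetical):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # MetaGraphDefs come from graph-mode workflows

# Build a trivial graph so there is something to export.
v = tf.compat.v1.get_variable("v", shape=[1])  # hypothetical variable
meta_graph_def = tf.compat.v1.train.export_meta_graph()

# Inspect a few of the fields listed above.
print(meta_graph_def.meta_info_def.tensorflow_version)
print(list(meta_graph_def.collection_def.keys()))  # e.g. ['trainable_variables', 'variables']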
tensorflow.compat.v1.metagraphdef
tf.compat.v1.MetaGraphDef.CollectionDefEntry A ProtocolMessage Attributes key string key value CollectionDef value
tensorflow.compat.v1.metagraphdef.collectiondefentry
tf.compat.v1.MetaGraphDef.MetaInfoDef A ProtocolMessage Attributes any_info Any any_info function_aliases repeated FunctionAliasesEntry function_aliases meta_graph_version string meta_graph_version stripped_default_attrs bool stripped_default_attrs stripped_op_list OpList stripped_op_list tags repeated string tags tensorflow_git_version string tensorflow_git_version tensorflow_version string tensorflow_version Child Classes class FunctionAliasesEntry
tensorflow.compat.v1.metagraphdef.metainfodef
tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry A ProtocolMessage Attributes key string key value string value
tensorflow.compat.v1.metagraphdef.metainfodef.functionaliasesentry
tf.compat.v1.MetaGraphDef.SignatureDefEntry A ProtocolMessage Attributes key string key value SignatureDef value
tensorflow.compat.v1.metagraphdef.signaturedefentry
Module: tf.compat.v1.metrics Evaluation-related metrics. Functions accuracy(...): Calculates how often predictions match labels. auc(...): Computes the approximate AUC via a Riemann sum. (deprecated) average_precision_at_k(...): Computes average precision@k of predictions with respect to sparse labels. false_negatives(...): Computes the total number of false negatives. false_negatives_at_thresholds(...): Computes false negatives at provided threshold values. false_positives(...): Sum the weights of false positives. false_positives_at_thresholds(...): Computes false positives at provided threshold values. mean(...): Computes the (weighted) mean of the given values. mean_absolute_error(...): Computes the mean absolute error between the labels and predictions. mean_cosine_distance(...): Computes the cosine distance between the labels and predictions. mean_iou(...): Calculate per-step mean Intersection-Over-Union (mIOU). mean_per_class_accuracy(...): Calculates the mean of the per-class accuracies. mean_relative_error(...): Computes the mean relative error by normalizing with the given values. mean_squared_error(...): Computes the mean squared error between the labels and predictions. mean_tensor(...): Computes the element-wise (weighted) mean of the given tensors. percentage_below(...): Computes the percentage of values less than the given threshold. precision(...): Computes the precision of the predictions with respect to the labels. precision_at_k(...): Computes precision@k of the predictions with respect to sparse labels. precision_at_thresholds(...): Computes precision values for different thresholds on predictions. precision_at_top_k(...): Computes precision@k of the predictions with respect to sparse labels. recall(...): Computes the recall of the predictions with respect to the labels. recall_at_k(...): Computes recall@k of the predictions with respect to sparse labels. recall_at_thresholds(...): Computes various recall values for different thresholds on predictions. recall_at_top_k(...): Computes recall@k of top-k predictions with respect to sparse labels. root_mean_squared_error(...): Computes the root mean squared error between the labels and predictions. sensitivity_at_specificity(...): Computes the sensitivity at a given specificity. sparse_average_precision_at_k(...): Renamed to average_precision_at_k, please use that method instead. (deprecated) sparse_precision_at_k(...): Renamed to precision_at_k, please use that method instead. (deprecated) specificity_at_sensitivity(...): Computes the specificity at a given sensitivity. true_negatives(...): Sum the weights of true_negatives. true_negatives_at_thresholds(...): Computes true negatives at provided threshold values. true_positives(...): Sum the weights of true_positives. true_positives_at_thresholds(...): Computes true positives at provided threshold values.
tensorflow.compat.v1.metrics
tf.compat.v1.metrics.accuracy Calculates how often predictions match labels. tf.compat.v1.metrics.accuracy( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) The accuracy function creates two local variables, total and count, that are used to compute the frequency with which predictions match labels. This frequency is ultimately returned as accuracy: an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the accuracy. Internally, an is_correct operation computes a Tensor with elements 1.0 where the corresponding elements of predictions and labels match and 0.0 otherwise. Then update_op increments total with the reduced sum of the product of weights and is_correct, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose shape matches predictions. predictions The predicted values, a Tensor of any shape. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that accuracy should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns accuracy A Tensor representing the accuracy, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches accuracy. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
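A minimal graph-mode sketch with made-up values (these v1 metrics raise RuntimeError under eager execution, so eager is disabled first):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([1, 0, 1, 1])
predictions = tf.constant([1, 0, 0, 1])
acc, update_op = tf.compat.v1.metrics.accuracy(labels, predictions)

with tf.compat.v1.Session() as sess:
  # The local variables `total` and `count` must be initialized.
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)   # total = 3.0, count = 4.0
  print(sess.run(acc))  # 0.75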
tensorflow.compat.v1.metrics.accuracy
tf.compat.v1.metrics.auc Computes the approximate AUC via a Riemann sum. (deprecated) tf.compat.v1.metrics.auc( labels, predictions, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, curve='ROC', name=None, summation_method='trapezoidal', thresholds=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: The value of AUC returned by this may race with the update so this is deprecated. Please use tf.keras.metrics.AUC instead. The auc function creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall. This value is ultimately returned as auc, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The num_thresholds variable controls the degree of discretization with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on num_thresholds. For best results, predictions should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting summation_method to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. The thresholds parameter can be used to manually specify thresholds which split the predictions more evenly. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the auc. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor whose shape matches predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). num_thresholds The number of thresholds to use when discretizing the roc curve. metrics_collections An optional list of collections that auc should be added to. updates_collections An optional list of collections that update_op should be added to. curve Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall-curve. name An optional variable_scope name. summation_method Specifies the Riemann summation method used (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that applies the trapezoidal rule; 'careful_interpolation', a variant of it differing only by a more correct interpolation scheme for PR-AUC - interpolating (true/false) positives but not the ratio that is precision; 'minoring' that applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' that does the opposite. 
Note that 'careful_interpolation' is strictly preferred to 'trapezoidal' (to be deprecated soon) as it applies the same method for ROC, and a better one (see Davis & Goadrich 2006 for details) for the PR curve. thresholds An optional list of floating point values to use as the thresholds for discretizing the curve. If set, the num_thresholds parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1. Returns auc A scalar Tensor representing the current area-under-curve. update_op An operation that increments the true_positives, true_negatives, false_positives and false_negatives variables appropriately and whose value matches auc. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
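A minimal graph-mode sketch with made-up scores; for these values the exact ROC AUC is 0.75, and the Riemann-sum approximation with the default 200 thresholds should land close to that:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1])
predictions = tf.constant([0.1, 0.4, 0.35, 0.8])
auc_value, update_op = tf.compat.v1.metrics.auc(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)  # accumulates the four confusion-matrix variables
  print(sess.run(auc_value))  # approximately 0.75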
tensorflow.compat.v1.metrics.auc
tf.compat.v1.metrics.average_precision_at_k Computes average precision@k of predictions with respect to sparse labels. tf.compat.v1.metrics.average_precision_at_k( labels, predictions, k, weights=None, metrics_collections=None, updates_collections=None, name=None ) average_precision_at_k creates two local variables, average_precision_at_<k>/total and average_precision_at_<k>/max, that are used to compute the frequency. This frequency is ultimately returned as average_precision_at_<k>: an idempotent operation that simply divides average_precision_at_<k>/total by average_precision_at_<k>/max. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the average_precision_at_<k>. Internally, a top_k operation computes a Tensor indicating the top k predictions. Set operations applied to top_k and labels calculate the true positives and false positives weighted by weights. Then update_op increments true_positive_at_<k> and false_positive_at_<k> using these values. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range are ignored. predictions Float Tensor with shape [D1, ... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1, ... DN] must match labels. k Integer, k for @k metric. This will calculate an average precision for range [1,k], as documented above. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. Returns mean_average_precision Scalar float64 Tensor with the mean average precision values. update Operation that increments variables appropriately, and whose value matches metric. Raises ValueError if k is invalid. RuntimeError If eager execution is enabled.
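A minimal graph-mode sketch with made-up values: the single relevant class (2) also has the highest logit, so it is retrieved at rank 1 and the average precision is 1.0:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2]], dtype=tf.int64)    # one relevant class per example
predictions = tf.constant([[0.1, 0.2, 0.7]])   # logits for 3 classes
ap, update_op = tf.compat.v1.metrics.average_precision_at_k(
    labels, predictions, k=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(ap))  # 1.0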
tensorflow.compat.v1.metrics.average_precision_at_k
tf.compat.v1.metrics.false_negatives Computes the total number of false negatives. tf.compat.v1.metrics.false_negatives( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that the metric value variable should be added to. updates_collections An optional list of collections that the metric update ops should be added to. name An optional variable_scope name. Returns value_tensor A Tensor representing the current value of the metric. update_op An operation that accumulates the error from a batch of data. Raises ValueError If weights is not None and its shape doesn't match values, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
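A minimal graph-mode sketch with made-up values; only the first example is a false negative (label true, prediction false):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([True, True, False, False])
predictions = tf.constant([False, True, False, True])
fn_value, update_op = tf.compat.v1.metrics.false_negatives(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(fn_value))  # 1.0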
tensorflow.compat.v1.metrics.false_negatives
tf.compat.v1.metrics.false_negatives_at_thresholds Computes false negatives at provided threshold values. tf.compat.v1.metrics.false_negatives_at_thresholds( labels, predictions, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor whose shape matches predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. thresholds A python list or tuple of float thresholds in [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that false_negatives should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns false_negatives A float Tensor of shape [len(thresholds)]. update_op An operation that updates the false_negatives variable and returns its current value. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
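A minimal graph-mode sketch with made-up values; the result has one entry per threshold, and raising the threshold turns more positive-labeled examples into false negatives:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([1, 1, 0, 1])
predictions = tf.constant([0.9, 0.3, 0.2, 0.6])
fn_values, update_op = tf.compat.v1.metrics.false_negatives_at_thresholds(
    labels, predictions, thresholds=[0.25, 0.5, 0.75])

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(fn_values))  # [0. 1. 2.]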
tensorflow.compat.v1.metrics.false_negatives_at_thresholds
tf.compat.v1.metrics.false_positives Sum the weights of false positives. tf.compat.v1.metrics.false_positives( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that the metric value variable should be added to. updates_collections An optional list of collections that the metric update ops should be added to. name An optional variable_scope name. Returns value_tensor A Tensor representing the current value of the metric. update_op An operation that accumulates the error from a batch of data. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
tensorflow.compat.v1.metrics.false_positives
tf.compat.v1.metrics.false_positives_at_thresholds Computes false positives at provided threshold values. tf.compat.v1.metrics.false_positives_at_thresholds( labels, predictions, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor whose shape matches predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. thresholds A python list or tuple of float thresholds in [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that false_positives should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns false_positives A float Tensor of shape [len(thresholds)]. update_op An operation that updates the false_positives variable and returns its current value. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
tensorflow.compat.v1.metrics.false_positives_at_thresholds
tf.compat.v1.metrics.mean Computes the (weighted) mean of the given values. tf.compat.v1.metrics.mean( values, weights=None, metrics_collections=None, updates_collections=None, name=None ) The mean function creates two local variables, total and count, that are used to compute the average of values. This average is ultimately returned as mean which is an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean. update_op increments total with the reduced sum of the product of values and weights, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args values A Tensor of arbitrary dimensions. weights Optional Tensor whose rank is either 0, or the same rank as values, and must be broadcastable to values (i.e., all dimensions must be either 1, or the same as the corresponding values dimension). metrics_collections An optional list of collections that mean should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns mean A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches mean. Raises ValueError If weights is not None and its shape doesn't match values, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
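A minimal graph-mode sketch with made-up values, showing how a weight of 0 masks a value:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

values = tf.constant([2.0, 4.0, 6.0, 8.0])
weights = tf.constant([1.0, 1.0, 0.0, 1.0])  # mask out the 6.0
mean_value, update_op = tf.compat.v1.metrics.mean(values, weights)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mean_value))  # (2 + 4 + 8) / 3, approximately 4.6667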
tensorflow.compat.v1.metrics.mean
tf.compat.v1.metrics.mean_absolute_error Computes the mean absolute error between the labels and predictions. tf.compat.v1.metrics.mean_absolute_error( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) The mean_absolute_error function creates two local variables, total and count, that are used to compute the mean absolute error. This average is weighted by weights, and it is ultimately returned as mean_absolute_error: an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_absolute_error. Internally, an absolute_errors operation computes the absolute value of the differences between predictions and labels. Then update_op increments total with the reduced sum of the product of weights and absolute_errors, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of the same shape as predictions. predictions A Tensor of arbitrary shape. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that mean_absolute_error should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns mean_absolute_error A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches mean_absolute_error. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
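A minimal graph-mode sketch with made-up values:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([1.0, 2.0, 3.0])
predictions = tf.constant([1.5, 2.0, 2.0])
mae, update_op = tf.compat.v1.metrics.mean_absolute_error(labels, predictions)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mae))  # (0.5 + 0.0 + 1.0) / 3 = 0.5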
tensorflow.compat.v1.metrics.mean_absolute_error
tf.compat.v1.metrics.mean_cosine_distance Computes the cosine distance between the labels and predictions. tf.compat.v1.metrics.mean_cosine_distance( labels, predictions, dim, weights=None, metrics_collections=None, updates_collections=None, name=None ) The mean_cosine_distance function creates two local variables, total and count that are used to compute the average cosine distance between predictions and labels. This average is weighted by weights, and it is ultimately returned as mean_distance, which is an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_distance. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of arbitrary shape. predictions A Tensor of the same shape as labels. dim The dimension along which the cosine distance is computed. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). Also, dimension dim must be 1. metrics_collections An optional list of collections that the metric value variable should be added to. updates_collections An optional list of collections that the metric update ops should be added to. name An optional variable_scope name. Returns mean_distance A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
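A minimal graph-mode sketch with made-up, unit-normalized rows (unit-norm inputs are assumed here, since the metric is built from the mean dot product along dim):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])
predictions = tf.constant([[1.0, 0.0], [1.0, 0.0]])
dist, update_op = tf.compat.v1.metrics.mean_cosine_distance(
    labels, predictions, dim=1)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  # Cosine similarities are 1 and 0, so the mean distance is
  # 1 - (1 + 0) / 2 = 0.5.
  print(sess.run(dist))  # 0.5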
tensorflow.compat.v1.metrics.mean_cosine_distance
tf.compat.v1.metrics.mean_iou Calculate per-step mean Intersection-Over-Union (mIOU). tf.compat.v1.metrics.mean_iou( labels, predictions, num_classes, weights=None, metrics_collections=None, updates_collections=None, name=None ) Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by weights, and mIOU is then calculated from it. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_iou. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of ground truth labels with shape [batch size] and of type int32 or int64. The tensor will be flattened if its rank > 1. predictions A Tensor of prediction results for semantic labels, whose shape is [batch size] and type int32 or int64. The tensor will be flattened if its rank > 1. num_classes The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that mean_iou should be added to. updates_collections An optional list of collections update_op should be added to. name An optional variable_scope name. Returns mean_iou A Tensor representing the mean intersection-over-union. update_op An operation that increments the confusion matrix. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
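A minimal graph-mode sketch with made-up values: class 0 has IOU 1/2 and class 1 has IOU 2/3, so the mean is about 0.583:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1])
predictions = tf.constant([0, 1, 1, 1])
miou, update_op = tf.compat.v1.metrics.mean_iou(labels, predictions, num_classes=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)   # accumulates the 2x2 confusion matrix
  print(sess.run(miou))  # approximately 0.5833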
tensorflow.compat.v1.metrics.mean_iou
tf.compat.v1.metrics.mean_per_class_accuracy Calculates the mean of the per-class accuracies. tf.compat.v1.metrics.mean_per_class_accuracy( labels, predictions, num_classes, weights=None, metrics_collections=None, updates_collections=None, name=None ) Calculates the accuracy for each class, then takes the mean of that. For estimation of the metric over a stream of data, the function creates an update_op operation that updates the accuracy of each class and returns them. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of ground truth labels with shape [batch size] and of type int32 or int64. The tensor will be flattened if its rank > 1. predictions A Tensor of prediction results for semantic labels, whose shape is [batch size] and type int32 or int64. The tensor will be flattened if its rank > 1. num_classes The possible number of labels the prediction task can have. This value must be provided, since two variables with shape = [num_classes] will be allocated. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that mean_per_class_accuracy should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns mean_accuracy A Tensor representing the mean per class accuracy. update_op An operation that updates the accuracy tensor. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
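A minimal graph-mode sketch with made-up values: class 0 is predicted correctly half the time and class 1 always, so the mean per-class accuracy is 0.75:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

labels = tf.constant([0, 0, 1, 1])
predictions = tf.constant([0, 1, 1, 1])
acc, update_op = tf.compat.v1.metrics.mean_per_class_accuracy(
    labels, predictions, num_classes=2)

with tf.compat.v1.Session() as sess:
  sess.run(tf.compat.v1.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(acc))  # (0.5 + 1.0) / 2 = 0.75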
tensorflow.compat.v1.metrics.mean_per_class_accuracy