tf.compat.v1.metrics.mean_relative_error Computes the mean relative error by normalizing with the given values. tf.compat.v1.metrics.mean_relative_error( labels, predictions, normalizer, weights=None, metrics_collections=None, updates_collections=None, name=None ) The mean_relative_error function creates two local variables, total and count that are used to compute the mean relative absolute error. This average is weighted by weights, and it is ultimately returned as mean_relative_error: an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_relative_error. Internally, a relative_errors operation divides the absolute value of the differences between predictions and labels by the normalizer. Then update_op increments total with the reduced sum of the product of weights and relative_errors, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of the same shape as predictions. predictions A Tensor of arbitrary shape. normalizer A Tensor of the same shape as predictions. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that mean_relative_error should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns mean_relative_error A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches mean_relative_error.
Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
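The total/count bookkeeping described above can be sketched in plain Python (a hypothetical helper for illustration, not the TensorFlow implementation):

```python
def mean_relative_error_update(total, count, labels, predictions,
                               normalizer, weights=None):
    """One update_op step over a batch; returns the new (total, count)."""
    if weights is None:
        weights = [1.0] * len(labels)
    for y, p, n, w in zip(labels, predictions, normalizer, weights):
        total += w * abs(p - y) / n   # weighted relative_errors
        count += w
    return total, count

total, count = mean_relative_error_update(
    0.0, 0.0, labels=[2.0, 4.0], predictions=[1.0, 6.0],
    normalizer=[2.0, 4.0])
mean_relative_error = total / count  # (0.5 + 0.5) / 2 = 0.5
```

Calling the helper again with a later batch and the accumulated total and count mirrors how update_op streams the metric.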
tensorflow.compat.v1.metrics.mean_relative_error
tf.compat.v1.metrics.mean_squared_error Computes the mean squared error between the labels and predictions. tf.compat.v1.metrics.mean_squared_error( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) The mean_squared_error function creates two local variables, total and count that are used to compute the mean squared error. This average is weighted by weights, and it is ultimately returned as mean_squared_error: an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_squared_error. Internally, a squared_error operation computes the element-wise square of the difference between predictions and labels. Then update_op increments total with the reduced sum of the product of weights and squared_error, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of the same shape as predictions. predictions A Tensor of arbitrary shape. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that mean_squared_error should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns mean_squared_error A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches mean_squared_error. 
Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
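A minimal plain-Python sketch of the streaming accumulation described above (illustrative only, not the TensorFlow implementation):

```python
def mean_squared_error_update(total, count, labels, predictions, weights=None):
    """One update_op step; total accumulates weighted squared errors."""
    if weights is None:
        weights = [1.0] * len(labels)
    for y, p, w in zip(labels, predictions, weights):
        total += w * (p - y) ** 2
        count += w
    return total, count

# Stream two batches through the same local variables.
total, count = mean_squared_error_update(0.0, 0.0, [0.0, 2.0], [1.0, 4.0])
total, count = mean_squared_error_update(total, count, [3.0], [3.0])
mse = total / count  # (1 + 4 + 0) / 3
```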
tensorflow.compat.v1.metrics.mean_squared_error
tf.compat.v1.metrics.mean_tensor Computes the element-wise (weighted) mean of the given tensors. tf.compat.v1.metrics.mean_tensor( values, weights=None, metrics_collections=None, updates_collections=None, name=None ) In contrast to the mean function which returns a scalar with the mean, this function returns an average tensor with the same shape as the input tensors. The mean_tensor function creates two local variables, total_tensor and count_tensor that are used to compute the average of values. This average is ultimately returned as mean which is an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean. update_op increments total with the reduced sum of the product of values and weights, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args values A Tensor of arbitrary dimensions. weights Optional Tensor whose rank is either 0, or the same rank as values, and must be broadcastable to values (i.e., all dimensions must be either 1, or the same as the corresponding values dimension). metrics_collections An optional list of collections that mean should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns mean A float Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches mean. Raises ValueError If weights is not None and its shape doesn't match values, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
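The element-wise accumulation can be sketched with plain Python lists standing in for the total_tensor and count_tensor variables (a hypothetical helper, not the TensorFlow implementation):

```python
def mean_tensor_update(total, count, values, weight=1.0):
    """Element-wise update: total and count have the same shape as values."""
    for i, v in enumerate(values):
        total[i] += weight * v
        count[i] += weight
    return total, count

total, count = [0.0, 0.0], [0.0, 0.0]
for batch in ([1.0, 2.0], [3.0, 6.0]):
    total, count = mean_tensor_update(total, count, batch)
mean = [t / c for t, c in zip(total, count)]  # element-wise: [2.0, 4.0]
```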
tensorflow.compat.v1.metrics.mean_tensor
tf.compat.v1.metrics.percentage_below Computes the percentage of values less than the given threshold. tf.compat.v1.metrics.percentage_below( values, threshold, weights=None, metrics_collections=None, updates_collections=None, name=None ) The percentage_below function creates two local variables, total and count that are used to compute the percentage of values that fall below threshold. This rate is weighted by weights, and it is ultimately returned as percentage which is an idempotent operation that simply divides total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the percentage. If weights is None, weights default to 1. Use weights of 0 to mask values. Args values A numeric Tensor of arbitrary size. threshold A scalar threshold. weights Optional Tensor whose rank is either 0, or the same rank as values, and must be broadcastable to values (i.e., all dimensions must be either 1, or the same as the corresponding values dimension). metrics_collections An optional list of collections that the metric value variable should be added to. updates_collections An optional list of collections that the metric update ops should be added to. name An optional variable_scope name. Returns percentage A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately. Raises ValueError If weights is not None and its shape doesn't match values, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
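The total/count logic above can be sketched in plain Python (illustrative helper name and signature, not the TensorFlow implementation):

```python
def percentage_below_update(total, count, values, threshold, weights=None):
    """total accumulates the weight of values that fall below threshold."""
    if weights is None:
        weights = [1.0] * len(values)
    for v, w in zip(values, weights):
        if v < threshold:
            total += w
        count += w
    return total, count

total, count = percentage_below_update(0.0, 0.0, [1.0, 2.0, 3.0, 4.0], 3.0)
percentage = total / count  # 2 of 4 values fall below 3.0
```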
tensorflow.compat.v1.metrics.percentage_below
tf.compat.v1.metrics.precision Computes the precision of the predictions with respect to the labels. tf.compat.v1.metrics.precision( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) The precision function creates two local variables, true_positives and false_positives, that are used to compute the precision. This value is ultimately returned as precision, an idempotent operation that simply divides true_positives by the sum of true_positives and false_positives. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the precision. update_op weights each prediction by the corresponding value in weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that precision should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns precision Scalar float Tensor with the value of true_positives divided by the sum of true_positives and false_positives. update_op Operation that increments true_positives and false_positives variables appropriately and whose value matches precision. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
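A plain-Python sketch of the true_positives/false_positives bookkeeping (hypothetical helper, not the TensorFlow implementation):

```python
def precision_update(tp, fp, labels, predictions, weights=None):
    """Counts weighted true/false positives; labels and predictions are bools."""
    if weights is None:
        weights = [1.0] * len(labels)
    for y, p, w in zip(labels, predictions, weights):
        if p:            # only predicted positives affect precision
            if y:
                tp += w
            else:
                fp += w
    return tp, fp

tp, fp = precision_update(0.0, 0.0,
                          labels=[True, False, True, False],
                          predictions=[True, True, False, False])
precision = tp / (tp + fp)  # 1 correct of 2 positive predictions
```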
tensorflow.compat.v1.metrics.precision
tf.compat.v1.metrics.precision_at_k Computes precision@k of the predictions with respect to sparse labels. tf.compat.v1.metrics.precision_at_k( labels, predictions, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) If class_id is specified, we calculate precision by considering only the entries in the batch for which class_id is in the top-k highest predictions, and computing the fraction of them for which class_id is indeed a correct label. If class_id is not specified, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry. precision_at_k creates two local variables, true_positive_at_<k> and false_positive_at_<k>, that are used to compute the precision@k frequency. This frequency is ultimately returned as precision_at_<k>: an idempotent operation that simply divides true_positive_at_<k> by total (true_positive_at_<k> + false_positive_at_<k>). For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the precision_at_<k>. Internally, a top_k operation computes a Tensor indicating the top k predictions. Set operations applied to top_k and labels calculate the true positives and false positives weighted by weights. Then update_op increments true_positive_at_<k> and false_positive_at_<k> using these values. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. 
Values outside this range are ignored. predictions Float Tensor with shape [D1, ... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1, ... DN] must match labels. k Integer, k for @k metric. class_id Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. Returns precision Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_positives. update_op Operation that increments true_positives and false_positives variables appropriately, and whose value matches precision. Raises ValueError If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
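The unweighted case with class_id unspecified can be sketched in plain Python, using sets of correct class ids per entry in place of the sparse labels (an illustrative helper, not the TensorFlow implementation):

```python
def precision_at_k_update(tp, fp, labels, logits, k):
    """labels: one set of correct class ids per batch entry;
    logits: one score list per batch entry."""
    for label_set, scores in zip(labels, logits):
        # indices of the k highest-scoring classes (the top_k operation)
        top_k = sorted(range(len(scores)),
                       key=lambda c: scores[c], reverse=True)[:k]
        for c in top_k:
            if c in label_set:
                tp += 1
            else:
                fp += 1
    return tp, fp

tp, fp = precision_at_k_update(
    0, 0,
    labels=[{0, 2}, {1}],
    logits=[[0.9, 0.1, 0.5], [0.8, 0.2, 0.6]],
    k=2)
precision_at_2 = tp / (tp + fp)  # 2 correct of 4 top-2 slots
```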
tensorflow.compat.v1.metrics.precision_at_k
tf.compat.v1.metrics.precision_at_thresholds Computes precision values for different thresholds on predictions. tf.compat.v1.metrics.precision_at_thresholds( labels, predictions, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None ) The precision_at_thresholds function creates four local variables, true_positives, true_negatives, false_positives and false_negatives for various values of thresholds. precision[i] is defined as the total weight of values in predictions above thresholds[i] whose corresponding entry in labels is True, divided by the total weight of values in predictions above thresholds[i] (true_positives[i] / (true_positives[i] + false_positives[i])). For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the precision. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. thresholds A python list or tuple of float thresholds in [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that auc should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns precision A float Tensor of shape [len(thresholds)]. update_op An operation that increments the true_positives, true_negatives, false_positives and false_negatives variables that are used in the computation of precision. 
Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
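The per-threshold definition above can be sketched in plain Python for the unweighted case (an illustrative helper; the TensorFlow op also guards the division with a small epsilon):

```python
def precision_at_thresholds(labels, predictions, thresholds):
    """precision[i] over predictions above thresholds[i] (unweighted sketch)."""
    out = []
    for t in thresholds:
        tp = sum(1 for y, p in zip(labels, predictions) if p > t and y)
        fp = sum(1 for y, p in zip(labels, predictions) if p > t and not y)
        out.append(tp / (tp + fp) if tp + fp else 0.0)
    return out

prec = precision_at_thresholds(
    labels=[True, False, True],
    predictions=[0.9, 0.6, 0.3],
    thresholds=[0.5, 0.8])
# At 0.5 the positives are 0.9 (True) and 0.6 (False); at 0.8 only 0.9 (True).
```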
tensorflow.compat.v1.metrics.precision_at_thresholds
tf.compat.v1.metrics.precision_at_top_k Computes precision@k of the predictions with respect to sparse labels. tf.compat.v1.metrics.precision_at_top_k( labels, predictions_idx, k=None, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) Differs from sparse_precision_at_k in that predictions must be in the form of top k class indices, whereas sparse_precision_at_k expects logits. Refer to sparse_precision_at_k for more details. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range are ignored. predictions_idx Integer Tensor with shape [D1, ... DN, k] where N >= 1. Commonly, N=1 and predictions has shape [batch size, k]. The final dimension contains the top k predicted class indices. [D1, ... DN] must match labels. k Integer, k for @k metric. Only used for the default op name. class_id Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. 
Returns precision Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_positives. update_op Operation that increments true_positives and false_positives variables appropriately, and whose value matches precision. Raises ValueError If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
tensorflow.compat.v1.metrics.precision_at_top_k
tf.compat.v1.metrics.recall Computes the recall of the predictions with respect to the labels. tf.compat.v1.metrics.recall( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) The recall function creates two local variables, true_positives and false_negatives, that are used to compute the recall. This value is ultimately returned as recall, an idempotent operation that simply divides true_positives by the sum of true_positives and false_negatives. For estimation of the metric over a stream of data, the function creates an update_op that updates these variables and returns the recall. update_op weights each prediction by the corresponding value in weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that recall should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns recall Scalar float Tensor with the value of true_positives divided by the sum of true_positives and false_negatives. update_op Operation that increments true_positives and false_negatives variables appropriately and whose value matches recall. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
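A plain-Python sketch of the true_positives/false_negatives bookkeeping (hypothetical helper, not the TensorFlow implementation):

```python
def recall_update(tp, fn, labels, predictions, weights=None):
    """Counts weighted true positives and false negatives."""
    if weights is None:
        weights = [1.0] * len(labels)
    for y, p, w in zip(labels, predictions, weights):
        if y:            # only actual positives affect recall
            if p:
                tp += w
            else:
                fn += w
    return tp, fn

tp, fn = recall_update(0.0, 0.0,
                       labels=[True, True, False, True],
                       predictions=[True, False, False, False])
recall = tp / (tp + fn)  # 1 of 3 actual positives was predicted
```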
tensorflow.compat.v1.metrics.recall
tf.compat.v1.metrics.recall_at_k Computes recall@k of the predictions with respect to sparse labels. tf.compat.v1.metrics.recall_at_k( labels, predictions, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) If class_id is specified, we calculate recall by considering only the entries in the batch for which class_id is in the label, and computing the fraction of them for which class_id is in the top-k predictions. If class_id is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k predictions. sparse_recall_at_k creates two local variables, true_positive_at_<k> and false_negative_at_<k>, that are used to compute the recall_at_k frequency. This frequency is ultimately returned as recall_at_<k>: an idempotent operation that simply divides true_positive_at_<k> by total (true_positive_at_<k> + false_negative_at_<k>). For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the recall_at_<k>. Internally, a top_k operation computes a Tensor indicating the top k predictions. Set operations applied to top_k and labels calculate the true positives and false negatives weighted by weights. Then update_op increments true_positive_at_<k> and false_negative_at_<k> using these values. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range always count towards false_negative_at_<k>. predictions Float Tensor with shape [D1, ... 
DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. The final dimension contains the logit values for each class. [D1, ... DN] must match labels. k Integer, k for @k metric. class_id Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. Returns recall Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_negatives. update_op Operation that increments true_positives and false_negatives variables appropriately, and whose value matches recall. Raises ValueError If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
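The unweighted case with class_id unspecified can be sketched in plain Python, with one set of correct class ids per entry standing in for the sparse labels (an illustrative helper, not the TensorFlow implementation):

```python
def recall_at_k_update(tp, fn, labels, logits, k):
    """Each correct class either appears in the top-k predictions
    (true positive) or does not (false negative)."""
    for label_set, scores in zip(labels, logits):
        top_k = set(sorted(range(len(scores)),
                           key=lambda c: scores[c], reverse=True)[:k])
        for c in label_set:
            if c in top_k:
                tp += 1
            else:
                fn += 1
    return tp, fn

tp, fn = recall_at_k_update(
    0, 0, labels=[{0, 2}], logits=[[0.9, 0.1, 0.05]], k=1)
recall_at_1 = tp / (tp + fn)  # class 0 is found, class 2 is missed
```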
tensorflow.compat.v1.metrics.recall_at_k
tf.compat.v1.metrics.recall_at_thresholds Computes various recall values for different thresholds on predictions. tf.compat.v1.metrics.recall_at_thresholds( labels, predictions, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None ) The recall_at_thresholds function creates four local variables, true_positives, true_negatives, false_positives and false_negatives for various values of thresholds. recall[i] is defined as the total weight of values in predictions above thresholds[i] whose corresponding entry in labels is True, divided by the total weight of True values in labels (true_positives[i] / (true_positives[i] + false_negatives[i])). For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the recall. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. thresholds A python list or tuple of float thresholds in [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that recall should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns recall A float Tensor of shape [len(thresholds)]. update_op An operation that increments the true_positives, true_negatives, false_positives and false_negatives variables that are used in the computation of recall. 
Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
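The per-threshold definition above can be sketched in plain Python for the unweighted case (an illustrative helper; the TensorFlow op also guards the division with a small epsilon):

```python
def recall_at_thresholds(labels, predictions, thresholds):
    """recall[i]: True labels predicted above thresholds[i], over all
    True labels (unweighted sketch)."""
    out = []
    for t in thresholds:
        tp = sum(1 for y, p in zip(labels, predictions) if p > t and y)
        fn = sum(1 for y, p in zip(labels, predictions) if p <= t and y)
        out.append(tp / (tp + fn) if tp + fn else 0.0)
    return out

rec = recall_at_thresholds(
    labels=[True, False, True],
    predictions=[0.9, 0.6, 0.3],
    thresholds=[0.5, 0.8])
# At both thresholds only 0.9 of the two True labels clears the bar.
```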
tensorflow.compat.v1.metrics.recall_at_thresholds
tf.compat.v1.metrics.recall_at_top_k Computes recall@k of top-k predictions with respect to sparse labels. tf.compat.v1.metrics.recall_at_top_k( labels, predictions_idx, k=None, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) Differs from recall_at_k in that predictions must be in the form of top k class indices, whereas recall_at_k expects logits. Refer to recall_at_k for more details. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range always count towards false_negative_at_<k>. predictions_idx Integer Tensor with shape [D1, ... DN, k] where N >= 1. Commonly, N=1 and predictions has shape [batch size, k]. The final dimension contains the top k predicted class indices. [D1, ... DN] must match labels. k Integer, k for @k metric. Only used for the default op name. class_id Integer class ID for which we want binary metrics. This should be in range [0, num_classes), where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. 
Returns recall Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_negatives. update_op Operation that increments true_positives and false_negatives variables appropriately, and whose value matches recall. Raises ValueError If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple.
tensorflow.compat.v1.metrics.recall_at_top_k
tf.compat.v1.metrics.root_mean_squared_error Computes the root mean squared error between the labels and predictions. tf.compat.v1.metrics.root_mean_squared_error( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) The root_mean_squared_error function creates two local variables, total and count that are used to compute the root mean squared error. This average is weighted by weights, and it is ultimately returned as root_mean_squared_error: an idempotent operation that takes the square root of the division of total by count. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the root_mean_squared_error. Internally, a squared_error operation computes the element-wise square of the difference between predictions and labels. Then update_op increments total with the reduced sum of the product of weights and squared_error, and it increments count with the reduced sum of weights. If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor of the same shape as predictions. predictions A Tensor of arbitrary shape. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that root_mean_squared_error should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns root_mean_squared_error A Tensor representing the current mean, the value of total divided by count. update_op An operation that increments the total and count variables appropriately and whose value matches root_mean_squared_error. 
Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
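The accumulators are the same as for mean_squared_error; only the final square root differs. A plain-Python sketch (hypothetical helper, not the TensorFlow implementation):

```python
import math

def rmse_update(total, count, labels, predictions, weights=None):
    """Same total/count accumulation as mean_squared_error."""
    if weights is None:
        weights = [1.0] * len(labels)
    for y, p, w in zip(labels, predictions, weights):
        total += w * (p - y) ** 2
        count += w
    return total, count

total, count = rmse_update(0.0, 0.0, [0.0, 0.0], [3.0, 4.0])
rmse = math.sqrt(total / count)  # sqrt((9 + 16) / 2)
```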
tensorflow.compat.v1.metrics.root_mean_squared_error
tf.compat.v1.metrics.sensitivity_at_specificity Computes the sensitivity at a given specificity. tf.compat.v1.metrics.sensitivity_at_specificity( labels, predictions, specificity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None ) The sensitivity_at_specificity function creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the sensitivity. update_op increments the true_positives, true_negatives, false_positives and false_negatives counts with the weight of each case found in the predictions and labels. If weights is None, weights default to 1. Use weights of 0 to mask values. For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. specificity A scalar value in range [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). num_thresholds The number of thresholds to use for matching the given specificity. metrics_collections An optional list of collections that sensitivity should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns sensitivity A scalar Tensor representing the sensitivity at the given specificity value. 
update_op An operation that increments the true_positives, true_negatives, false_positives and false_negatives variables appropriately and whose value matches sensitivity. Raises ValueError If predictions and labels have mismatched shapes, if weights is not None and its shape doesn't match predictions, or if specificity is not between 0 and 1, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
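The threshold-matching idea can be sketched in plain Python for the unweighted case: sweep a grid of thresholds, pick the one whose specificity is closest to the target, and report the sensitivity there (an illustrative sketch; the TensorFlow threshold grid and tie-breaking may differ):

```python
def sensitivity_at_specificity(labels, predictions, target_specificity,
                               num_thresholds=200):
    """Return the sensitivity at the threshold whose specificity is
    closest to target_specificity (unweighted sketch)."""
    best_sens, best_gap = 0.0, float("inf")
    for i in range(num_thresholds):
        t = i / (num_thresholds - 1)
        tp = sum(1 for y, p in zip(labels, predictions) if y and p >= t)
        fn = sum(1 for y, p in zip(labels, predictions) if y and p < t)
        tn = sum(1 for y, p in zip(labels, predictions) if not y and p < t)
        fp = sum(1 for y, p in zip(labels, predictions) if not y and p >= t)
        spec = tn / (tn + fp) if tn + fp else 0.0
        sens = tp / (tp + fn) if tp + fn else 0.0
        gap = abs(spec - target_specificity)
        if gap < best_gap:
            best_gap, best_sens = gap, sens
    return best_sens

sens = sensitivity_at_specificity(
    labels=[True, True, False, False],
    predictions=[0.9, 0.6, 0.4, 0.1],
    target_specificity=1.0)
# Here a threshold just above 0.4 rejects both negatives while keeping
# both positives, so full specificity is reached with full sensitivity.
```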
tf.compat.v1.metrics.sparse_average_precision_at_k Renamed to average_precision_at_k, please use that method instead. (deprecated) tf.compat.v1.metrics.sparse_average_precision_at_k( labels, predictions, k, weights=None, metrics_collections=None, updates_collections=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use average_precision_at_k instead
tf.compat.v1.metrics.sparse_precision_at_k Renamed to precision_at_k, please use that method instead. (deprecated) tf.compat.v1.metrics.sparse_precision_at_k( labels, predictions, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use precision_at_k instead
tf.compat.v1.metrics.specificity_at_sensitivity Computes the specificity at a given sensitivity. tf.compat.v1.metrics.specificity_at_sensitivity( labels, predictions, sensitivity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None ) The specificity_at_sensitivity function creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the specificity at the given sensitivity value. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity. For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the specificity. update_op increments the true_positives, true_negatives, false_positives and false_negatives counts with the weight of each case found in the predictions and labels. If weights is None, weights default to 1. Use weights of 0 to mask values. For additional information about specificity and sensitivity, see the following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. sensitivity A scalar value in range [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). num_thresholds The number of thresholds to use for matching the given sensitivity. metrics_collections An optional list of collections that specificity should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns specificity A scalar Tensor representing the specificity at the given sensitivity value. 
update_op An operation that increments the true_positives, true_negatives, false_positives and false_negatives variables appropriately and whose value matches specificity. Raises ValueError If predictions and labels have mismatched shapes, if weights is not None and its shape doesn't match predictions, or if sensitivity is not between 0 and 1, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
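The mirror-image search, specificity at a given sensitivity, can be sketched the same way in plain Python. Again this is a simplified, unweighted illustration rather than the TensorFlow implementation, and the tie-breaking (preferring the higher specificity among equally close thresholds) is an assumption:

```python
# Sketch: evaluate sensitivity at evenly spaced thresholds, pick the
# threshold whose sensitivity is closest to the target, and report the
# specificity at that threshold.
def specificity_at_sensitivity(labels, predictions, sensitivity, num_thresholds=200):
    thresholds = [i / (num_thresholds - 1) for i in range(num_thresholds)]
    best_spec, best_gap = 0.0, float("inf")
    for t in thresholds:
        tp = sum(1 for l, p in zip(labels, predictions) if l and p >= t)
        fn = sum(1 for l, p in zip(labels, predictions) if l and p < t)
        tn = sum(1 for l, p in zip(labels, predictions) if not l and p < t)
        fp = sum(1 for l, p in zip(labels, predictions) if not l and p >= t)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        gap = abs(sens - sensitivity)
        if gap < best_gap or (gap == best_gap and spec > best_spec):
            best_gap, best_spec = gap, spec
    return best_spec
```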
tf.compat.v1.metrics.true_negatives Sum the weights of true_negatives. tf.compat.v1.metrics.true_negatives( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that the metric value variable should be added to. updates_collections An optional list of collections that the metric update ops should be added to. name An optional variable_scope name. Returns value_tensor A Tensor representing the current value of the metric. update_op An operation that accumulates the error from a batch of data. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
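The counting rule is simple enough to sketch in plain Python. This is a one-shot, non-streaming illustration (not the TensorFlow op, which accumulates across batches): both inputs are cast to bool and each true negative contributes its weight, with a default weight of 1 and a weight of 0 masking a value:

```python
# Sketch of the weighted true-negative count.
def true_negatives(labels, predictions, weights=None):
    if weights is None:
        weights = [1.0] * len(labels)
    return sum(w for l, p, w in zip(labels, predictions, weights)
               if not bool(l) and not bool(p))
```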
tf.compat.v1.metrics.true_negatives_at_thresholds Computes true negatives at provided threshold values. tf.compat.v1.metrics.true_negatives_at_thresholds( labels, predictions, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor whose shape matches predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. thresholds A python list or tuple of float thresholds in [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that true_negatives should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns true_negatives A float Tensor of shape [len(thresholds)]. update_op An operation that updates the true_negatives variable and returns its current value. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
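The per-threshold counting can be sketched in plain Python. This is a one-shot, non-streaming illustration, and the boundary convention (a prediction counts as positive only when strictly greater than the threshold) is an assumption here:

```python
# Sketch: one weighted true-negative count per threshold, returned as a
# list of length len(thresholds).
def true_negatives_at_thresholds(labels, predictions, thresholds, weights=None):
    if weights is None:
        weights = [1.0] * len(labels)
    return [sum(w for l, p, w in zip(labels, predictions, weights)
                if not bool(l) and p <= t)
            for t in thresholds]
```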
tf.compat.v1.metrics.true_positives Sum the weights of true_positives. tf.compat.v1.metrics.true_positives( labels, predictions, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool. predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that the metric value variable should be added to. updates_collections An optional list of collections that the metric update ops should be added to. name An optional variable_scope name. Returns value_tensor A Tensor representing the current value of the metric. update_op An operation that accumulates the error from a batch of data. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
tf.compat.v1.metrics.true_positives_at_thresholds Computes true positives at provided threshold values. tf.compat.v1.metrics.true_positives_at_thresholds( labels, predictions, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None ) If weights is None, weights default to 1. Use weights of 0 to mask values. Args labels A Tensor whose shape matches predictions. Will be cast to bool. predictions A floating point Tensor of arbitrary shape and whose values are in the range [0, 1]. thresholds A python list or tuple of float thresholds in [0, 1]. weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that true_positives should be added to. updates_collections An optional list of collections that update_op should be added to. name An optional variable_scope name. Returns true_positives A float Tensor of shape [len(thresholds)]. update_op An operation that updates the true_positives variable and returns its current value. Raises ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
tf.compat.v1.min_max_variable_partitioner Partitioner to allocate minimum size per slice. tf.compat.v1.min_max_variable_partitioner( max_partitions=1, axis=0, min_slice_size=(256 << 10), bytes_per_string_element=16 ) Returns a partitioner that partitions the variable of given shape and dtype such that each partition has a minimum of min_slice_size slice of the variable. The maximum number of such partitions (upper bound) is given by max_partitions. Args max_partitions Upper bound on the number of partitions. Defaults to 1. axis Axis along which to partition the variable. Defaults to 0. min_slice_size Minimum size of the variable slice per partition. Defaults to 256K. bytes_per_string_element If the Variable is of type string, this provides an estimate of how large each scalar in the Variable is. Returns A partition function usable as the partitioner argument to variable_scope and get_variable.
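The slicing arithmetic can be sketched as follows. This is a simplified reading of the rule, not the exact TensorFlow code: use as many partitions as possible while each slice keeps at least min_slice_size bytes, capped by max_partitions and by the length of the partitioned axis:

```python
# Sketch of the partition-count arithmetic behind
# min_max_variable_partitioner (simplified assumption, not TF source).
def num_partitions(shape, bytes_per_element, axis=0,
                   max_partitions=1, min_slice_size=256 << 10):
    total_bytes = bytes_per_element
    for dim in shape:
        total_bytes *= dim
    # Most partitions possible while each slice holds >= min_slice_size bytes.
    from_size = max(1, total_bytes // min_slice_size)
    # Also bounded by max_partitions and by the partitioned axis length.
    return max(1, min(max_partitions, from_size, shape[axis]))
```

For a float32 variable of shape [1024, 1024] (4 MiB total), the 256 KiB minimum slice size allows at most 16 partitions, further capped by max_partitions.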
Module: tf.compat.v1.mixed_precision Public API for tf.mixed_precision namespace. Modules experimental module: Public API for tf.mixed_precision.experimental namespace. Classes class DynamicLossScale: Loss scale that dynamically adjusts itself. class FixedLossScale: Loss scale with a fixed value. class LossScale: Base class for all TF1 loss scales. class MixedPrecisionLossScaleOptimizer: An optimizer that applies loss scaling. Functions disable_mixed_precision_graph_rewrite(...): Disables the mixed precision graph rewrite. enable_mixed_precision_graph_rewrite(...): Enable mixed precision via a graph rewrite.
tf.compat.v1.mixed_precision.disable_mixed_precision_graph_rewrite Disables the mixed precision graph rewrite. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.experimental.disable_mixed_precision_graph_rewrite tf.compat.v1.mixed_precision.disable_mixed_precision_graph_rewrite() After this is called, the mixed precision graph rewrite will no longer run for new Sessions, and so float32 operations will no longer be converted to float16 in such Sessions. However, any existing Sessions will continue to have the graph rewrite enabled if they were created after enable_mixed_precision_graph_rewrite was called but before disable_mixed_precision_graph_rewrite was called. This does not undo the effects of loss scaling. Any optimizers wrapped with a LossScaleOptimizer will continue to do loss scaling, although this loss scaling will no longer be useful if the optimizer is used in new Sessions, as the graph rewrite no longer converts the graph to use float16. This function is useful for unit testing. A unit test can exercise the mixed precision graph rewrite, then disable it so that future unit tests continue using float32. If this is done, unit tests should not share a single session, as enable_mixed_precision_graph_rewrite and disable_mixed_precision_graph_rewrite have no effect on existing sessions.
tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite Enable mixed precision via a graph rewrite. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite( opt, loss_scale='dynamic' ) Mixed precision is the use of both float32 and float16 data types when training a model to improve performance. This is achieved via a graph rewrite operation and a loss-scale optimizer. Performing arithmetic operations in float16 takes advantage of specialized processing units, such as NVIDIA Tensor Cores, for much higher arithmetic throughput. However, due to the smaller representable range, performing the entire training with float16 can result in gradient underflow, that is, small gradient values becoming zeroes. Instead, performing only select arithmetic operations in float16 results in higher throughput and decreased training time when using compatible hardware accelerators while also reducing memory usage, typically without sacrificing model accuracy. Note: While the mixed precision rewrite changes the datatype of various layers throughout the model, the same accuracy reached in float32 is expected. If a NaN gradient occurs with dynamic loss scaling, the model update for that batch is skipped. In this case, the global step count is not incremented, and the LossScaleOptimizer attempts to decrease the loss scaling value to avoid NaN values in subsequent iterations. This approach has been shown to achieve the same accuracy as float32 and, in most cases, better training throughput. 
Example: model = tf.keras.models.Sequential([ tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(64, activation='softmax'), ]) opt = tf.keras.optimizers.SGD() opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) model.compile(loss="mse", optimizer=opt) x_train = np.random.random((1024, 64)) y_train = np.random.random((1024, 64)) model.fit(x_train, y_train) Calling enable_mixed_precision_graph_rewrite(opt) enables the graph rewrite operation before computing gradients. The function additionally returns an Optimizer (opt) wrapped with a LossScaleOptimizer. This prevents underflow in the float16 tensors during the backward pass. An optimizer of type tf.train.Optimizer or tf.keras.optimizers.Optimizer must be passed to this function, which will then be wrapped to use loss scaling. The graph rewrite operation changes the dtype of certain operations in the graph from float32 to float16. There are several categories of operations that are either included or excluded by this rewrite operation. The following categories of Ops are defined inside corresponding functions under the class AutoMixedPrecisionLists in auto_mixed_precision_lists.h: ClearList: Ops that do not have numerically significant adverse effects. E.g. ArgMax and Floor. AllowList: Ops that are considered numerically safe for execution in float16, and thus are always converted. E.g. Conv2D. DenyList: Ops that are numerically unsafe to execute in float16 and can negatively affect downstream nodes. E.g. Softmax. GrayList: Ops that are considered numerically safe for execution in float16 unless downstream from a DenyList Op. E.g. Add and AvgPool. When this function is used, gradients should only be computed and applied with the returned optimizer, either by calling opt.minimize() or opt.compute_gradients() followed by opt.apply_gradients(). Gradients should not be computed with tf.gradients or tf.GradientTape. 
This is because the returned optimizer will apply loss scaling, and tf.gradients or tf.GradientTape will not. If you do directly use tf.gradients or tf.GradientTape, your model may not converge due to float16 underflow problems. When eager execution is enabled, the mixed precision graph rewrite is only enabled within tf.functions, as outside tf.functions, there is no graph. For NVIDIA GPUs with Tensor cores, as a general performance guide, dimensions (such as batch size, input size, output size, and channel counts) should be powers of two if under 256, or otherwise divisible by 8 if above 256. For more information, check out the NVIDIA Deep Learning Performance Guide. Currently, mixed precision is only enabled on NVIDIA Tensor Core GPUs with Compute Capability 7.0 and above (Volta, Turing, or newer architectures). The parts of the graph on CPUs and TPUs are untouched by the graph rewrite. Raises ValueError, if the tf.keras.mixed_precision API is also used by calling tf.keras.mixed_precision.experimental.set_policy. Only one mixed precision API can be used. Args opt An instance of a tf.keras.optimizers.Optimizer or a tf.train.Optimizer. loss_scale Either an int/float, the string "dynamic", or an instance of a tf.mixed_precision.experimental.LossScale. The loss scale to use. It is recommended to keep this as its default value of "dynamic", which will adjust the scaling automatically to prevent Inf or NaN values. Returns A version of opt that will use loss scaling to prevent underflow.
Module: tf.compat.v1.mixed_precision.experimental Public API for tf.mixed_precision.experimental namespace. Classes class DynamicLossScale: Loss scale that dynamically adjusts itself. class FixedLossScale: Loss scale with a fixed value. class LossScale: Base class for all TF1 loss scales.
tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer An optimizer that applies loss scaling. Inherits From: Optimizer View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer( opt, loss_scale ) Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is: loss = ... loss *= loss_scale grads = gradients(loss, vars) grads /= loss_scale Mathematically, loss scaling has no effect, but can help avoid numerical underflow in intermediate gradients when float16 tensors are used for mixed precision training. By multiplying the loss, each intermediate gradient will have the same multiplier applied. The loss scale can either be a fixed constant, chosen by the user, or be dynamically determined. Dynamically determining the loss scale is convenient as a loss scale does not have to be explicitly chosen. However it reduces performance. This optimizer wraps another optimizer and applies loss scaling to it via a LossScale. Loss scaling is applied whenever gradients are computed, such as through minimize(). Args use_locking Bool. If True apply use locks to prevent concurrent updates to variables. name A non-empty string. The name to use for accumulators created for the optimizer. Raises ValueError If name is malformed. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that conditionally applies gradients if all gradient values are finite. Otherwise no update is performed (nor is global_step incremented). Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). 
global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that conditionally applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=optimizer.Optimizer.GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This adjusts the dynamic range of the gradient evaluation by scaling up the loss value. The gradient values are then scaled back down by the reciprocal of the loss scale. This is useful in reduced precision training where small gradient values would otherwise underflow the representable range. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. 
For example, Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. 
Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() Returns the variables of the Optimizer. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
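The conditional-update behavior described above pairs with a dynamic loss scale: the scale grows after a run of all-finite steps and shrinks (with the update skipped) when a non-finite gradient appears. The following plain-Python sketch illustrates that update rule only; the specific defaults here (initial scale 2**15, 2000-step growth interval, factor 2) are illustrative assumptions, not a statement of TensorFlow's exact values:

```python
# Sketch of a dynamic loss-scale update rule: double after a run of
# all-finite steps, halve and skip the update on a non-finite gradient.
class DynamicLossScaleSketch:
    def __init__(self, initial=2.0 ** 15, growth_interval=2000, factor=2.0):
        self.scale = initial
        self.growth_interval = growth_interval
        self.factor = factor
        self.good_steps = 0

    def update(self, grads_finite):
        if grads_finite:
            self.good_steps += 1
            if self.good_steps >= self.growth_interval:
                self.scale *= self.factor
                self.good_steps = 0
            return True   # apply this update
        self.scale = max(1.0, self.scale / self.factor)
        self.good_steps = 0
        return False      # skip this update (global step not incremented)
```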
Module: tf.compat.v1.mlir Public API for tf.mlir namespace. Modules experimental module: Public API for tf.mlir.experimental namespace.
Module: tf.compat.v1.mlir.experimental Public API for tf.mlir.experimental namespace. Functions convert_function(...): Import a ConcreteFunction and convert it to a textual MLIR module. convert_graph_def(...): Import a GraphDef and convert it to a textual MLIR module.
tf.compat.v1.model_variables Returns all variables in the MODEL_VARIABLES collection. tf.compat.v1.model_variables( scope=None ) Args scope (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. Returns A list of Variable objects in the MODEL_VARIABLES collection.
tf.compat.v1.moving_average_variables Returns all variables that maintain their moving averages. tf.compat.v1.moving_average_variables( scope=None ) If an ExponentialMovingAverage object is created and the apply() method is called on a list of variables, these variables will be added to the GraphKeys.MOVING_AVERAGE_VARIABLES collection. This convenience function returns the contents of that collection. Args scope (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. Returns A list of Variable objects.
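The collection above is populated by ExponentialMovingAverage.apply(). The average each tracked variable maintains follows the standard EMA recurrence, sketched here in plain Python:

```python
# Standard exponential-moving-average update:
# shadow <- decay * shadow + (1 - decay) * value
def ema_update(shadow, value, decay=0.99):
    return decay * shadow + (1.0 - decay) * value
```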
tf.compat.v1.multinomial Draws samples from a multinomial distribution. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.multinomial tf.compat.v1.multinomial( logits, num_samples, seed=None, name=None, output_dtype=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.random.categorical instead. Example: # samples has shape [1, 5], where each value is either 0 or 1 with equal # probability. samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5) Args logits 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log-probabilities for all classes. num_samples 0-D. Number of independent samples to draw for each row slice. seed A Python integer. Used to create a random seed for the distribution. See tf.random.set_seed for behavior. name Optional name for the operation. output_dtype integer type to use for the output. Defaults to int64. Returns The drawn samples of shape [batch_size, num_samples].
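The per-row sampling that the recommended replacement, tf.random.categorical, performs can be sketched in plain Python: normalize the unnormalized log-probabilities with a softmax, then draw by inverse-CDF. This is an illustration of the math, not TensorFlow's kernel:

```python
import math
import random

# Sketch: draw num_samples class indices from one row of unnormalized
# log-probabilities (softmax, then inverse-CDF sampling).
def categorical(logits, num_samples, rng=random):
    m = max(logits)                       # subtract max for numerical stability
    probs = [math.exp(l - m) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]
    samples = []
    for _ in range(num_samples):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                samples.append(i)
                break
        else:                             # guard against rounding at acc ~ 1.0
            samples.append(len(probs) - 1)
    return samples
```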
tf.compat.v1.NameAttrList A ProtocolMessage Attributes attr repeated AttrEntry attr name string name Child Classes class AttrEntry
tf.compat.v1.NameAttrList.AttrEntry A ProtocolMessage Attributes key string key value AttrValue value
Module: tf.compat.v1.nest Public API for tf.nest namespace. Functions assert_same_structure(...): Asserts that two structures are nested in the same way. flatten(...): Returns a flat list from a given nested structure. is_nested(...): Returns true if its input is a collections.abc.Sequence (except strings). map_structure(...): Applies func to each entry in structure and returns a new structure. pack_sequence_as(...): Returns a given flattened sequence packed into a given structure.
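The flatten/pack round trip at the heart of this module can be sketched for plain Python structures: depth-first flattening of nested lists, tuples, and dicts (dicts traversed by sorted key, as tf.nest does), and repacking a flat list into the same shape. A simplified sketch, not the tf.nest implementation:

```python
# Sketch of tf.nest.flatten: depth-first leaves, dicts by sorted key.
def flatten(structure):
    if isinstance(structure, dict):
        return [x for k in sorted(structure) for x in flatten(structure[k])]
    if isinstance(structure, (list, tuple)):
        return [x for item in structure for x in flatten(item)]
    return [structure]

# Sketch of tf.nest.pack_sequence_as: consume a flat sequence in the
# same traversal order to rebuild the reference structure's shape.
def pack_sequence_as(structure, flat):
    it = iter(flat)
    def pack(s):
        if isinstance(s, dict):
            return {k: pack(s[k]) for k in sorted(s)}
        if isinstance(s, (list, tuple)):
            return type(s)(pack(item) for item in s)
        return next(it)
    return pack(structure)
```

Note that pack_sequence_as(structure, flatten(structure)) reproduces the original structure.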
Module: tf.compat.v1.nn Wrappers for primitive Neural Net (NN) Operations. Modules rnn_cell module: Module for constructing RNN Cells. Functions all_candidate_sampler(...): Generate the set of all classes. atrous_conv2d(...): Atrous convolution (a.k.a. convolution with holes or dilated convolution). atrous_conv2d_transpose(...): The transpose of atrous_conv2d. avg_pool(...): Performs the average pooling on the input. avg_pool1d(...): Performs the average pooling on the input. avg_pool2d(...): Performs the average pooling on the input. avg_pool3d(...): Performs the average pooling on the input. avg_pool_v2(...): Performs the avg pooling on the input. batch_norm_with_global_normalization(...): Batch normalization. batch_normalization(...): Batch normalization. bias_add(...): Adds bias to value. bidirectional_dynamic_rnn(...): Creates a dynamic version of bidirectional recurrent neural network. (deprecated) collapse_repeated(...): Merge repeated labels into single labels. compute_accidental_hits(...): Compute the position ids in sampled_candidates matching true_classes. compute_average_loss(...): Scales per-example losses with sample_weights and computes their average. conv1d(...): Computes a 1-D convolution of input with rank >=3 and a 3-D filter. (deprecated argument values) (deprecated argument values) conv1d_transpose(...): The transpose of conv1d. conv2d(...): Computes a 2-D convolution given 4-D input and filter tensors. conv2d_backprop_filter(...): Computes the gradients of convolution with respect to the filter. conv2d_backprop_input(...): Computes the gradients of convolution with respect to the input. conv2d_transpose(...): The transpose of conv2d. conv3d(...): Computes a 3-D convolution given 5-D input and filter tensors. conv3d_backprop_filter(...): Computes the gradients of 3-D convolution with respect to the filter. conv3d_backprop_filter_v2(...): Computes the gradients of 3-D convolution with respect to the filter. 
conv3d_transpose(...): The transpose of conv3d.
conv_transpose(...): The transpose of convolution.
convolution(...): Computes sums of N-D convolutions (actually cross-correlation).
crelu(...): Computes Concatenated ReLU.
ctc_beam_search_decoder(...): Performs beam search decoding on the logits given in input.
ctc_beam_search_decoder_v2(...): Performs beam search decoding on the logits given in input.
ctc_greedy_decoder(...): Performs greedy decoding on the logits given in input (best path).
ctc_loss(...): Computes the CTC (Connectionist Temporal Classification) Loss.
ctc_loss_v2(...): Computes CTC (Connectionist Temporal Classification) loss.
ctc_unique_labels(...): Get unique labels and indices for batched labels for tf.nn.ctc_loss.
depth_to_space(...): DepthToSpace for tensors of type T.
depthwise_conv2d(...): Depthwise 2-D convolution.
depthwise_conv2d_backprop_filter(...): Computes the gradients of depthwise convolution with respect to the filter.
depthwise_conv2d_backprop_input(...): Computes the gradients of depthwise convolution with respect to the input.
depthwise_conv2d_native(...): Computes a 2-D depthwise convolution.
depthwise_conv2d_native_backprop_filter(...): Computes the gradients of depthwise convolution with respect to the filter.
depthwise_conv2d_native_backprop_input(...): Computes the gradients of depthwise convolution with respect to the input.
dilation2d(...): Computes the grayscale dilation of 4-D input and 3-D filter tensors.
dropout(...): Computes dropout. (deprecated arguments)
dynamic_rnn(...): Creates a recurrent neural network specified by RNNCell cell. (deprecated)
elu(...): Computes exponential linear: exp(features) - 1 if < 0, features otherwise.
embedding_lookup(...): Looks up embeddings for the given ids from a list of tensors.
embedding_lookup_sparse(...): Looks up embeddings for the given ids and weights from a list of tensors.
erosion2d(...): Computes the grayscale erosion of 4-D value and 3-D kernel tensors.
fixed_unigram_candidate_sampler(...): Samples a set of classes using the provided (fixed) base distribution.
fractional_avg_pool(...): Performs fractional average pooling on the input. (deprecated)
fractional_max_pool(...): Performs fractional max pooling on the input. (deprecated)
fused_batch_norm(...): Batch normalization.
in_top_k(...): Says whether the targets are in the top K predictions.
l2_loss(...): L2 Loss.
l2_normalize(...): Normalizes along dimension axis using an L2 norm. (deprecated arguments)
leaky_relu(...): Compute the Leaky ReLU activation function.
learned_unigram_candidate_sampler(...): Samples a set of classes from a distribution learned during training.
local_response_normalization(...): Local Response Normalization.
log_poisson_loss(...): Computes log Poisson loss given log_input.
log_softmax(...): Computes log softmax activations. (deprecated arguments)
log_uniform_candidate_sampler(...): Samples a set of classes using a log-uniform (Zipfian) base distribution.
lrn(...): Local Response Normalization.
max_pool(...): Performs the max pooling on the input.
max_pool1d(...): Performs the max pooling on the input.
max_pool2d(...): Performs the max pooling on the input.
max_pool3d(...): Performs the max pooling on the input.
max_pool_v2(...): Performs the max pooling on the input.
max_pool_with_argmax(...): Performs max pooling on the input and outputs both max values and indices.
moments(...): Calculate the mean and variance of x.
nce_loss(...): Computes and returns the noise-contrastive estimation training loss.
normalize_moments(...): Calculate the mean and variance based on the sufficient statistics.
pool(...): Performs an N-D pooling operation.
quantized_avg_pool(...): Produces the average pool of the input tensor for quantized types.
quantized_conv2d(...): Computes a 2D convolution given quantized 4D input and filter tensors.
quantized_max_pool(...): Produces the max pool of the input tensor for quantized types.
quantized_relu_x(...): Computes Quantized Rectified Linear X: min(max(features, 0), max_value)
raw_rnn(...): Creates an RNN specified by RNNCell cell and loop function loop_fn.
relu(...): Computes rectified linear: max(features, 0).
relu6(...): Computes Rectified Linear 6: min(max(features, 0), 6).
relu_layer(...): Computes Relu(x * weight + biases).
safe_embedding_lookup_sparse(...): Lookup embedding results, accounting for invalid IDs and empty features.
sampled_softmax_loss(...): Computes and returns the sampled softmax training loss.
scale_regularization_loss(...): Scales the sum of the given regularization losses by number of replicas.
selu(...): Computes scaled exponential linear: scale * alpha * (exp(features) - 1)
separable_conv2d(...): 2-D convolution with separable filters.
sigmoid(...): Computes sigmoid of x element-wise.
sigmoid_cross_entropy_with_logits(...): Computes sigmoid cross entropy given logits.
silu(...): Computes the SiLU or Swish activation function: x * sigmoid(x).
softmax(...): Computes softmax activations. (deprecated arguments)
softmax_cross_entropy_with_logits(...): Computes softmax cross entropy between logits and labels. (deprecated)
softmax_cross_entropy_with_logits_v2(...): Computes softmax cross entropy between logits and labels. (deprecated arguments)
softplus(...): Computes softplus: log(exp(features) + 1).
softsign(...): Computes softsign: features / (abs(features) + 1).
space_to_batch(...): SpaceToBatch for 4-D tensors of type T.
space_to_depth(...): SpaceToDepth for tensors of type T.
sparse_softmax_cross_entropy_with_logits(...): Computes sparse softmax cross entropy between logits and labels.
static_bidirectional_rnn(...): Creates a bidirectional recurrent neural network. (deprecated)
static_rnn(...): Creates a recurrent neural network specified by RNNCell cell. (deprecated)
static_state_saving_rnn(...): RNN that accepts a state saver for time-truncated RNN calculation. (deprecated)
sufficient_statistics(...): Calculate the sufficient statistics for the mean and variance of x.
swish(...): Computes the SiLU or Swish activation function: x * sigmoid(x).
tanh(...): Computes hyperbolic tangent of x element-wise.
top_k(...): Finds values and indices of the k largest entries for the last dimension.
uniform_candidate_sampler(...): Samples a set of classes using a uniform base distribution.
weighted_cross_entropy_with_logits(...): Computes a weighted cross entropy. (deprecated arguments)
weighted_moments(...): Returns the frequency-weighted mean and variance of x.
with_space_to_batch(...): Performs op on the space-to-batch representation of input.
xw_plus_b(...): Computes matmul(x, weights) + biases.
zero_fraction(...): Returns the fraction of zeros in value.
tensorflow.compat.v1.nn
tf.compat.v1.nn.avg_pool Performs the average pooling on the input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.avg_pool2d tf.compat.v1.nn.avg_pool( value, ksize, strides, padding, data_format='NHWC', name=None, input=None ) Each entry in output is the mean of the corresponding size ksize window in value. Args value A 4-D Tensor of shape [batch, height, width, channels] and type float32, float64, qint8, quint8, or qint32. ksize An int or list of ints that has length 1, 2 or 4. The size of the window for each dimension of the input tensor. strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of the input tensor. padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details. data_format A string. 'NHWC' and 'NCHW' are supported. name Optional name for the operation. input Alias for value. Returns A Tensor with the same type as value. The average pooled output tensor.
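As a minimal sketch (shapes and values here are illustrative, assuming an eager TF 2.x runtime with the v1 compat layer), averaging non-overlapping 2x2 windows of a 4x4 image:

```python
import numpy as np
import tensorflow as tf

# A single 4x4 single-channel image in the default NHWC layout.
x = tf.constant(np.arange(1, 17, dtype=np.float32).reshape(1, 4, 4, 1))

# 2x2 window, stride 2, no padding: each output entry is the mean of a
# non-overlapping 2x2 block of the input.
y = tf.compat.v1.nn.avg_pool(x, ksize=2, strides=2, padding='VALID')

print(y.numpy().reshape(2, 2))
# [[ 3.5  5.5]
#  [11.5 13.5]]
```

Passing a scalar `ksize`/`strides` replicates the value across the H and W dimensions, as described in the Args table above.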
tf.compat.v1.nn.batch_norm_with_global_normalization Batch normalization. tf.compat.v1.nn.batch_norm_with_global_normalization( t=None, m=None, v=None, beta=None, gamma=None, variance_epsilon=None, scale_after_normalization=None, name=None, input=None, mean=None, variance=None ) This op is deprecated. See tf.nn.batch_normalization. Args t A 4D input Tensor. m A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof. v A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof. beta A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor. gamma A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor. variance_epsilon A small float number to avoid dividing by 0. scale_after_normalization A bool indicating whether the resulting tensor needs to be multiplied with gamma. name A name for this operation (optional). input Alias for t. mean Alias for m. variance Alias for v. Returns A batch-normalized t. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf)
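Since this op is deprecated, a sketch of the same computation via the recommended tf.nn.batch_normalization (shapes here are illustrative): the input is normalized as (t - m) / sqrt(v + eps), then scaled by gamma and shifted by beta.

```python
import tensorflow as tf

t = tf.random.normal([2, 3, 3, 4])
mean, var = tf.nn.moments(t, axes=[0, 1, 2])  # per-channel moments over N, H, W
beta = tf.zeros([4])                          # offset added after normalization
gamma = tf.ones([4])                          # scale applied after normalization
out = tf.nn.batch_normalization(t, mean, var, beta, gamma,
                                variance_epsilon=1e-5)

# Normalizing a tensor by its own moments leaves it approximately zero-mean.
print(float(tf.reduce_mean(out)))
```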
tf.compat.v1.nn.bidirectional_dynamic_rnn Creates a dynamic version of bidirectional recurrent neural network. (deprecated) tf.compat.v1.nn.bidirectional_dynamic_rnn( cell_fw, cell_bw, inputs, sequence_length=None, initial_state_fw=None, initial_state_bw=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.Bidirectional(keras.layers.RNN(cell)), which is equivalent to this API. Takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given. Args cell_fw An instance of RNNCell, to be used for forward direction. cell_bw An instance of RNNCell, to be used for backward direction. inputs The RNN inputs. If time_major == False (default), this must be a tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements. If time_major == True, this must be a tensor of shape: [max_time, batch_size, ...], or a nested tuple of such elements. sequence_length (optional) An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences in the batch. If not provided, all batch entries are assumed to be full sequences; and time reversal is applied from time 0 to max_time for each sequence. initial_state_fw (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size.
initial_state_bw (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw. dtype (optional) The data type for the initial states and expected output. Required if initial_states are not provided or RNN states have a heterogeneous dtype. parallel_iterations (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. time_major The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. scope VariableScope for the created subgraph; defaults to "bidirectional_rnn" Returns A tuple (outputs, output_states) where: outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output Tensor. If time_major == False (default), output_fw will be a Tensor shaped: [batch_size, max_time, cell_fw.output_size] and output_bw will be a Tensor shaped: [batch_size, max_time, cell_bw.output_size]. If time_major == True, output_fw will be a Tensor shaped: [max_time, batch_size, cell_fw.output_size] and output_bw will be a Tensor shaped: [max_time, batch_size, cell_bw.output_size]. It returns a tuple instead of a single concatenated Tensor, unlike in the bidirectional_rnn. 
If the concatenated one is preferred, the forward and backward outputs can be concatenated as tf.concat(outputs, 2). output_states: A tuple (output_state_fw, output_state_bw) containing the forward and the backward final states of bidirectional rnn. Raises TypeError If cell_fw or cell_bw is not an instance of RNNCell.
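A graph-mode sketch of the call (the cell type, sizes, and zero inputs are illustrative; this API predates eager execution, so the example disables it):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # this API builds a graph

batch, max_time, depth, units = 2, 5, 3, 4
inputs = tf.compat.v1.placeholder(tf.float32, [batch, max_time, depth])
cell_fw = tf.compat.v1.nn.rnn_cell.LSTMCell(units)
cell_bw = tf.compat.v1.nn.rnn_cell.LSTMCell(units)

(out_fw, out_bw), _ = tf.compat.v1.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, inputs, dtype=tf.float32)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    fw, bw = sess.run(
        [out_fw, out_bw],
        {inputs: np.zeros([batch, max_time, depth], np.float32)})

# Forward and backward outputs come back separately, not concatenated.
print(fw.shape, bw.shape)  # (2, 5, 4) (2, 5, 4)
```

To get the concatenated form described above, apply tf.concat([out_fw, out_bw], 2).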
tf.compat.v1.nn.conv1d Computes a 1-D convolution of input with rank >=3 and a 3-D filter. (deprecated argument values) tf.compat.v1.nn.conv1d( value=None, filters=None, stride=None, padding=None, use_cudnn_on_gpu=None, data_format=None, name=None, input=None, dilations=None ) Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (data_format='NCHW'). They will be removed in a future version. Instructions for updating: NCHW for data_format is deprecated, use NCW instead. Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (data_format='NHWC'). They will be removed in a future version. Instructions for updating: NHWC for data_format is deprecated, use NWC instead. Given an input tensor of shape batch_shape + [in_width, in_channels] if data_format is "NWC", or batch_shape + [in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation. Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if data_format does not start with "NC", a tensor of shape batch_shape + [in_width, in_channels] is reshaped to batch_shape + [1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to batch_shape + [out_width, out_channels] (where out_width is a function of the stride and padding as in conv2d) and returned to the caller. Args value A Tensor of rank at least 3. Must be of type float16, float32, or float64. filters A Tensor of rank at least 3. Must have the same type as value. stride An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step. padding 'SAME' or 'VALID'. use_cudnn_on_gpu An optional bool. Defaults to True. data_format An optional string from "NWC", "NCW".
Defaults to "NWC", the data is stored in the order of batch_shape + [in_width, in_channels]. The "NCW" format stores data as batch_shape + [in_channels, in_width]. name A name for the operation (optional). input Alias for value. dilations An int or list of ints that has length 1 or 3 which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1. Returns A Tensor. Has the same type as input. Raises ValueError if data_format is invalid.
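As a minimal sketch (signal and filter values are illustrative; assumes an eager TF 2.x runtime), a width-3 averaging filter over a length-5 signal:

```python
import numpy as np
import tensorflow as tf

# A length-5 signal with one channel, in the default NWC layout.
x = tf.constant(np.arange(1.0, 6.0, dtype=np.float32).reshape(1, 5, 1))

# A width-3 filter of 1/3s: a 3-point moving average.
w = tf.constant(np.full([3, 1, 1], 1.0 / 3.0, dtype=np.float32))

y = tf.compat.v1.nn.conv1d(x, w, stride=1, padding='VALID')
print(y.numpy().ravel())  # approximately [2. 3. 4.]
```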
tf.compat.v1.nn.conv2d Computes a 2-D convolution given 4-D input and filter tensors. tf.compat.v1.nn.conv2d( input, filter=None, strides=None, padding=None, use_cudnn_on_gpu=True, data_format='NHWC', dilations=[1, 1, 1, 1], name=None, filters=None ) Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following: Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels]. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels]. For each patch, right-multiplies the filter matrix and the image patch vector. In detail, with the default NHWC format, output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor. The dimension order is interpreted according to the value of data_format, see below for details. filter A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels] strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details. padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. 
When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. use_cudnn_on_gpu An optional bool. Defaults to True. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width]. dilations An int or list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. For a 4-d tensor, dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). filters Alias for filter. Returns A Tensor. Has the same type as input.
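The patch-sum formula above can be checked with a tiny worked example (values are illustrative; assumes an eager TF 2.x runtime): an all-ones 2x2 filter makes each output entry the sum of the 2x2 input patch under it.

```python
import numpy as np
import tensorflow as tf

# One 3x3 single-channel image in NHWC layout.
x = tf.constant(np.arange(1.0, 10.0, dtype=np.float32).reshape(1, 3, 3, 1))

# A 2x2 all-ones filter: each output entry sums a 2x2 input patch.
w = tf.constant(np.ones([2, 2, 1, 1], dtype=np.float32))

y = tf.compat.v1.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')
print(y.numpy().reshape(2, 2))
# [[12. 16.]
#  [24. 28.]]
```

For instance, the top-left output is 1 + 2 + 4 + 5 = 12, matching the sum in the formula above.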
tf.compat.v1.nn.conv2d_backprop_filter Computes the gradients of convolution with respect to the filter. tf.compat.v1.nn.conv2d_backprop_filter( input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu=True, data_format='NHWC', dilations=[1, 1, 1, 1], name=None ) Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape [batch, in_height, in_width, in_channels]. filter_sizes A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, out_channels] tensor. out_backprop A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. use_cudnn_on_gpu An optional bool. Defaults to True. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
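A small worked example (values illustrative; assumes an eager TF 2.x runtime): with an all-ones upstream gradient, each filter tap accumulates the input values it touched across all output positions.

```python
import numpy as np
import tensorflow as tf

# Same 3x3 input as a VALID 2x2 convolution would see.
x = tf.constant(np.arange(1.0, 10.0, dtype=np.float32).reshape(1, 3, 3, 1))
filter_sizes = tf.constant([2, 2, 1, 1], dtype=tf.int32)

# Pretend the loss gradient w.r.t. every output entry is 1.
grad_out = tf.constant(np.ones([1, 2, 2, 1], dtype=np.float32))

grad_w = tf.compat.v1.nn.conv2d_backprop_filter(
    x, filter_sizes, grad_out, strides=[1, 1, 1, 1], padding='VALID')
print(grad_w.numpy().reshape(2, 2))
# [[12. 16.]
#  [24. 28.]]
```

Each tap's gradient is the sum of the input values under it over the four 2x2 windows, e.g. tap (0,0) sees 1 + 2 + 4 + 5 = 12.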
tf.compat.v1.nn.conv2d_backprop_input Computes the gradients of convolution with respect to the input. tf.compat.v1.nn.conv2d_backprop_input( input_sizes, filter=None, out_backprop=None, strides=None, padding=None, use_cudnn_on_gpu=True, data_format='NHWC', dilations=[1, 1, 1, 1], name=None, filters=None ) Args input_sizes A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor. filter A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape [filter_height, filter_width, in_channels, out_channels]. out_backprop A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. use_cudnn_on_gpu An optional bool. Defaults to True. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). filters Alias for filter. Returns A Tensor. Has the same type as filter.
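A small worked example (values illustrative; assumes an eager TF 2.x runtime): with an all-ones filter and all-ones upstream gradient, each input pixel's gradient is simply the number of output windows that covered it.

```python
import numpy as np
import tensorflow as tf

# Reconstruct the gradient w.r.t. a 1x3x3x1 input of a VALID 2x2 convolution.
input_sizes = tf.constant([1, 3, 3, 1], dtype=tf.int32)
w = tf.constant(np.ones([2, 2, 1, 1], dtype=np.float32))
grad_out = tf.constant(np.ones([1, 2, 2, 1], dtype=np.float32))

grad_in = tf.compat.v1.nn.conv2d_backprop_input(
    input_sizes, w, grad_out, strides=[1, 1, 1, 1], padding='VALID')
print(grad_in.numpy().reshape(3, 3))
# [[1. 2. 1.]
#  [2. 4. 2.]
#  [1. 2. 1.]]
```

Corner pixels are covered by one 2x2 window, edges by two, and the center by all four, which is exactly the pattern printed above.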
tf.compat.v1.nn.conv2d_transpose The transpose of conv2d. tf.compat.v1.nn.conv2d_transpose( value=None, filter=None, output_shape=None, strides=None, padding='SAME', data_format='NHWC', name=None, input=None, filters=None, dilations=None ) This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of conv2d rather than an actual deconvolution. Args value A 4-D Tensor of type float and shape [batch, height, width, in_channels] for NHWC data format or [batch, in_channels, height, width] for NCHW data format. filter A 4-D Tensor with the same type as value and shape [height, width, output_channels, in_channels]. filter's in_channels dimension must match that of value. output_shape A 1-D Tensor representing the output shape of the deconvolution op. strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details. padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details. data_format A string. 'NHWC' and 'NCHW' are supported. name Optional name for the returned tensor. input Alias for value. filters Alias for filter. dilations An int or list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. For a 4-d tensor, dilations in the batch and depth dimensions must be 1. Returns A Tensor with the same type as value.
Raises ValueError If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'. References: Deconvolutional Networks: Zeiler et al., 2010 (pdf)
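A minimal upsampling sketch (shapes illustrative; assumes an eager TF 2.x runtime): with stride 2 and a 2x2 kernel, each input pixel scatters the kernel into a disjoint 2x2 block, doubling the spatial size.

```python
import numpy as np
import tensorflow as tf

# A 2x2 all-ones feature map, upsampled to 4x4.
x = tf.constant(np.ones([1, 2, 2, 1], dtype=np.float32))

# Note the transposed filter layout: [height, width, output_channels, in_channels].
w = tf.constant(np.ones([2, 2, 1, 1], dtype=np.float32))

y = tf.compat.v1.nn.conv2d_transpose(
    x, w, output_shape=[1, 4, 4, 1], strides=[1, 2, 2, 1], padding='VALID')
print(y.shape)  # (1, 4, 4, 1)
```

Because the stride equals the kernel size, the scattered blocks do not overlap and the output is all ones here.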
tf.compat.v1.nn.conv3d Computes a 3-D convolution given 5-D input and filter tensors. tf.compat.v1.nn.conv3d( input, filter=None, strides=None, padding=None, data_format='NDHWC', dilations=[1, 1, 1, 1, 1], name=None, filters=None ) In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. Our Conv3D implements a form of cross-correlation. Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Shape [batch, in_depth, in_height, in_width, in_channels]. filter A Tensor. Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels]. in_channels must match between input and filter. strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
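As a minimal sketch (shapes illustrative; assumes an eager TF 2.x runtime), a 2x2x2 all-ones kernel over an all-ones volume sums each 2x2x2 block:

```python
import numpy as np
import tensorflow as tf

# A 3x3x3 all-ones volume in the default NDHWC layout.
x = tf.constant(np.ones([1, 3, 3, 3, 1], dtype=np.float32))

# A 2x2x2 all-ones kernel: each output entry sums a 2x2x2 block (= 8).
w = tf.constant(np.ones([2, 2, 2, 1, 1], dtype=np.float32))

y = tf.compat.v1.nn.conv3d(x, w, strides=[1, 1, 1, 1, 1], padding='VALID')
print(y.shape)  # (1, 2, 2, 2, 1); every entry is 8.0
```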
tf.compat.v1.nn.conv3d_backprop_filter Computes the gradients of 3-D convolution with respect to the filter. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.conv3d_backprop_filter_v2 tf.compat.v1.nn.conv3d_backprop_filter( input, filter_sizes, out_backprop, strides, padding, data_format='NDHWC', dilations=[1, 1, 1, 1, 1], name=None ) Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Shape [batch, depth, rows, cols, in_channels]. filter_sizes A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 5-D [filter_depth, filter_height, filter_width, in_channels, out_channels] tensor. out_backprop A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels]. strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
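A small sketch of the 3-D filter gradient (values illustrative; assumes an eager TF 2.x runtime): with an all-ones input and all-ones upstream gradient, each filter tap accumulates one unit per output position.

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.ones([1, 3, 3, 3, 1], dtype=np.float32))
filter_sizes = tf.constant([2, 2, 2, 1, 1], dtype=tf.int32)

# All-ones gradient over the 2x2x2 = 8 output positions of a VALID convolution.
grad_out = tf.constant(np.ones([1, 2, 2, 2, 1], dtype=np.float32))

grad_w = tf.compat.v1.nn.conv3d_backprop_filter(
    x, filter_sizes, grad_out, strides=[1, 1, 1, 1, 1], padding='VALID')
print(grad_w.shape)  # (2, 2, 2, 1, 1); every entry is 8.0
```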
tf.compat.v1.nn.conv3d_transpose The transpose of conv3d. tf.compat.v1.nn.conv3d_transpose( value, filter=None, output_shape=None, strides=None, padding='SAME', data_format='NDHWC', name=None, input=None, filters=None, dilations=None ) This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of conv3d rather than an actual deconvolution. Args value A 5-D Tensor of type float and shape [batch, depth, height, width, in_channels]. filter A 5-D Tensor with the same type as value and shape [depth, height, width, output_channels, in_channels]. filter's in_channels dimension must match that of value. output_shape A 1-D Tensor representing the output shape of the deconvolution op. strides A list of ints. The stride of the sliding window for each dimension of the input tensor. padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details. data_format A string, either 'NDHWC' or 'NCDHW' specifying the layout of the input and output tensors. Defaults to 'NDHWC'. name Optional name for the returned tensor. input Alias of value. filters Alias of filter. dilations An int or list of ints that has length 1, 3 or 5, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the D, H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. For a 5-d tensor, dilations in the batch and depth dimensions must be 1. Returns A Tensor with the same type as value. Raises ValueError If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'. References: Deconvolutional Networks: Zeiler et al., 2010 (pdf)
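A minimal 3-D upsampling sketch (shapes illustrative; assumes an eager TF 2.x runtime), mirroring the 2-D transpose case:

```python
import numpy as np
import tensorflow as tf

# A single voxel, upsampled to a 2x2x2 volume with stride 2.
x = tf.constant(np.ones([1, 1, 1, 1, 1], dtype=np.float32))

# Transposed filter layout: [depth, height, width, output_channels, in_channels].
w = tf.constant(np.ones([2, 2, 2, 1, 1], dtype=np.float32))

y = tf.compat.v1.nn.conv3d_transpose(
    x, w, output_shape=[1, 2, 2, 2, 1], strides=[1, 2, 2, 2, 1],
    padding='VALID')
print(y.shape)  # (1, 2, 2, 2, 1)
```

The lone input voxel scatters the whole kernel into the output, so every entry here is 1.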
tf.compat.v1.nn.convolution Computes sums of N-D convolutions (actually cross-correlation). tf.compat.v1.nn.convolution( input, filter, padding, strides=None, dilation_rate=None, name=None, data_format=None, filters=None, dilations=None ) This also supports either output striding via the optional strides parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional dilation_rate parameter. Currently, however, output striding is not supported for atrous convolutions. Specifically, in the case that data_format does not start with "NC", given a rank (N+2) input Tensor of shape [num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels], a rank (N+2) filter Tensor of shape [spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels], an optional dilation_rate tensor of shape N specifying the filter upsampling/input downsampling rate, and an optional list of N strides (defaulting [1]*N), this computes for each N-D spatial output position (x[0], ..., x[N-1]): output[b, x[0], ..., x[N-1], k] = sum_{z[0], ..., z[N-1], q} filter[z[0], ..., z[N-1], q, k] * padded_input[b, x[0]*strides[0] + dilation_rate[0]*z[0], ..., x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1], q] where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, padded_input is obtained by zero padding the input using an effective spatial filter shape of (spatial_filter_shape-1) * dilation_rate + 1 and output striding strides as described in the comment here. 
In the case that data_format does start with "NC", the input and output (but not the filter) are simply transposed as follows: convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) It is required that 1 <= N <= 3. Args input An (N+2)-D Tensor of type T, of shape [batch_size] + input_spatial_shape + [in_channels] if data_format does not start with "NC" (default), or [batch_size, in_channels] + input_spatial_shape if data_format starts with "NC". filter An (N+2)-D Tensor with the same type as input and shape spatial_filter_shape + [in_channels, out_channels]. padding A string, either "VALID" or "SAME". The padding algorithm. "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. strides Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1. dilation_rate Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called input stride or dilation. The effective filter size used for the convolution will be spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1), obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1. name Optional name for the returned tensor. data_format A string or None. Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), or the second dimension (if data_format starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". 
For N=3, the valid values are "NDHWC" (default) and "NCDHW". Returns A Tensor with the same type as input of shape [batch_size] + output_spatial_shape + [out_channels] if data_format is None or does not start with "NC", or [batch_size, out_channels] + output_spatial_shape if data_format starts with "NC", where output_spatial_shape depends on the value of padding. If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]). Raises ValueError If input/output depth does not match filter shape, if padding is other than "VALID" or "SAME", or if data_format is invalid.
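A 1-D sketch of the dilation_rate behavior (values illustrative; assumes an eager TF 2.x runtime): a width-2 filter with dilation 2 combines input[i] and input[i+2], an effective receptive field of 3.

```python
import numpy as np
import tensorflow as tf

# Length-6 signal, one channel, NWC layout (data_format default for N=1).
x = tf.constant(np.arange(1.0, 7.0, dtype=np.float32).reshape(1, 6, 1))

# Width-2 all-ones filter; with dilation_rate=[2] its taps sit 2 apart.
w = tf.constant(np.ones([2, 1, 1], dtype=np.float32))

y = tf.compat.v1.nn.convolution(x, w, padding='VALID', dilation_rate=[2])
print(y.numpy().ravel())  # [ 4.  6.  8. 10.] -- input[i] + input[i+2]
```

The output length matches the VALID formula above: 6 - (2 - 1) * 2 = 4.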
tensorflow.compat.v1.nn.convolution
tf.compat.v1.nn.crelu Computes Concatenated ReLU. tf.compat.v1.nn.crelu( features, name=None, axis=-1 ) Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al. Args features A Tensor with type float, double, int32, int64, uint8, int16, or int8. name A name for the operation (optional). axis The axis that the output values are concatenated along. Default is -1. Returns A Tensor with the same type as features. References: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units: Shang et al., 2016 (pdf)
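The depth-doubling behavior can be sketched in NumPy (a minimal illustration; `crelu_sketch` is a hypothetical helper, not part of TensorFlow):

```python
import numpy as np

def crelu_sketch(features, axis=-1):
    # Concatenate the ReLU of the positive part with the ReLU of the
    # negative part; the output depth is twice the input depth.
    return np.concatenate(
        [np.maximum(features, 0), np.maximum(-features, 0)], axis=axis)

x = np.array([[-1.0, 2.0], [3.0, -4.0]])
print(crelu_sketch(x))
# [[0. 2. 1. 0.]
#  [3. 0. 0. 4.]]
```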
tensorflow.compat.v1.nn.crelu
tf.compat.v1.nn.ctc_beam_search_decoder Performs beam search decoding on the logits given in input. tf.compat.v1.nn.ctc_beam_search_decoder( inputs, sequence_length, beam_width=100, top_paths=1, merge_repeated=True ) Note: The ctc_greedy_decoder is a special case of the ctc_beam_search_decoder with top_paths=1 and beam_width=1 (but that decoder is faster for this special case). If merge_repeated is True, merge repeated classes in the output beams. This means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the sequence is A B B * B * B (where '*' is the blank label), the return value is: A B if merge_repeated = True. A B B B if merge_repeated = False. Args inputs 3-D float Tensor, size [max_time x batch_size x num_classes]. The logits. sequence_length 1-D int32 vector containing sequence lengths, having size [batch_size]. beam_width An int scalar >= 0 (beam search beam width). top_paths An int scalar >= 0, <= beam_width (controls output size). merge_repeated Boolean. Default: True. Returns A tuple (decoded, log_probabilities) where decoded A list of length top_paths, where decoded[j] is a SparseTensor containing the decoded outputs: decoded[j].indices: Indices matrix (total_decoded_outputs[j] x 2) The rows store: [batch, time]. decoded[j].values: Values vector, size (total_decoded_outputs[j]). The vector stores the decoded classes for beam j. decoded[j].dense_shape: Shape vector, size (2). The shape values are: [batch_size, max_decoded_length[j]]. log_probability A float matrix (batch_size x top_paths) containing sequence log-probabilities.
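The `merge_repeated` semantics above can be sketched in plain Python (`decode_beam` is a hypothetical helper for illustration, not the actual decoder):

```python
def decode_beam(sequence, blank='*', merge_repeated=True):
    # Standard CTC collapse: merge consecutive duplicates, then drop blanks.
    collapsed = [s for i, s in enumerate(sequence) if i == 0 or s != sequence[i - 1]]
    labels = [s for s in collapsed if s != blank]
    if merge_repeated:
        # Additionally merge repeats that were separated by blanks.
        labels = [s for i, s in enumerate(labels) if i == 0 or s != labels[i - 1]]
    return labels

seq = ['A', 'B', 'B', '*', 'B', '*', 'B']
print(decode_beam(seq, merge_repeated=True))   # ['A', 'B']
print(decode_beam(seq, merge_repeated=False))  # ['A', 'B', 'B', 'B']
```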
tensorflow.compat.v1.nn.ctc_beam_search_decoder
tf.compat.v1.nn.ctc_loss Computes the CTC (Connectionist Temporal Classification) Loss. tf.compat.v1.nn.ctc_loss( labels, inputs=None, sequence_length=None, preprocess_collapse_repeated=False, ctc_merge_repeated=True, ignore_longer_outputs_than_inputs=False, time_major=True, logits=None ) This op implements the CTC loss as presented in (Graves et al., 2006). Input requirements: sequence_length(b) <= time for all b max(labels.indices(labels.indices[:, 1] == b, 2)) <= sequence_length(b) for all b. Notes: This class performs the softmax operation for you, so inputs should be e.g. linear projections of outputs by an LSTM. The inputs Tensor's innermost dimension size, num_classes, represents num_labels + 1 classes, where num_labels is the number of true labels, and the largest value (num_classes - 1) is reserved for the blank label. For example, for a vocabulary containing 3 labels [a, b, c], num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank: 3}. Regarding the arguments preprocess_collapse_repeated and ctc_merge_repeated: If preprocess_collapse_repeated is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions. If ctc_merge_repeated is set False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC. Here is a table of the (roughly) expected first order behavior: preprocess_collapse_repeated=False, ctc_merge_repeated=True Classical CTC behavior: Outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder. 
preprocess_collapse_repeated=True, ctc_merge_repeated=False Never learns to output repeated classes, as they are collapsed in the input labels before training. preprocess_collapse_repeated=False, ctc_merge_repeated=False Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes. preprocess_collapse_repeated=True, ctc_merge_repeated=True Untested. Very likely will not learn to output repeated classes. The ignore_longer_outputs_than_inputs option allows specifying the behavior of the CTCLoss when dealing with sequences that have longer outputs than inputs. If true, the CTCLoss will simply return zero gradient for those items, otherwise an InvalidArgument error is returned, stopping training. Args labels An int32 SparseTensor. labels.indices[i, :] == [b, t] means labels.values[i] stores the id for (batch b, time t). labels.values[i] must take on values in [0, num_labels). See core/ops/ctc_ops.cc for more details. inputs 3-D float Tensor. If time_major == False, this will be a Tensor shaped: [batch_size, max_time, num_classes]. If time_major == True (default), this will be a Tensor shaped: [max_time, batch_size, num_classes]. The logits. sequence_length 1-D int32 vector, size [batch_size]. The sequence lengths. preprocess_collapse_repeated Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation. ctc_merge_repeated Boolean. Default: True. ignore_longer_outputs_than_inputs Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored. time_major The shape format of the inputs Tensors. If True, these Tensors must be shaped [max_time, batch_size, num_classes]. If False, these Tensors must be shaped [batch_size, max_time, num_classes]. Using time_major = True (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation.
However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form. logits Alias for inputs. Returns A 1-D float Tensor, size [batch], containing the negative log probabilities. Raises TypeError If labels is not a SparseTensor. References: Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: Graves et al., 2006 (pdf)
tensorflow.compat.v1.nn.ctc_loss
tf.compat.v1.nn.ctc_loss_v2 Computes CTC (Connectionist Temporal Classification) loss. tf.compat.v1.nn.ctc_loss_v2( labels, logits, label_length, logit_length, logits_time_major=True, unique=None, blank_index=None, name=None ) This op implements the CTC loss as presented in (Graves et al., 2006). Notes: Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss setting of preprocess_collapse_repeated=False, ctc_merge_repeated=True. Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor. On TPU and GPU: Only dense padded labels are supported. On CPU: Caller may use SparseTensor or dense padded labels but calling with a SparseTensor will be significantly faster. Default blank label is 0 rather than num_classes - 1, unless overridden by blank_index. Args labels tensor of shape [batch_size, max_label_seq_length] or SparseTensor logits tensor of shape [frames, batch_size, num_labels], if logits_time_major == False, shape is [batch_size, frames, num_labels]. label_length tensor of shape [batch_size], None if labels is a SparseTensor. Length of reference label sequence in labels. logit_length tensor of shape [batch_size] Length of input sequence in logits. logits_time_major (optional) If True (default), logits is shaped [time, batch, logits]. If False, shape is [batch, time, logits]. unique (optional) Unique label indices as computed by ctc_unique_labels(labels). If supplied, enables a faster, memory-efficient implementation on TPU. blank_index (optional) Set the class index to use for the blank label. Negative values will start from num_classes, i.e., -1 will reproduce the ctc_loss behavior of using num_classes - 1 for the blank symbol. There is some memory/performance overhead to switching from the default of 0 as an additional shifted copy of the logits may be created. name A name for this Op. Defaults to "ctc_loss_dense". Returns loss tensor of shape [batch_size], negative log probabilities.
References: Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: Graves et al., 2006 (pdf)
tensorflow.compat.v1.nn.ctc_loss_v2
tf.compat.v1.nn.depthwise_conv2d Depthwise 2-D convolution. tf.compat.v1.nn.depthwise_conv2d( input, filter, strides, padding, rate=None, name=None, data_format=None, dilations=None ) Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape [filter_height, filter_width, in_channels, channel_multiplier] containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels. In detail, with the default NHWC format, output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k] Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. If any value in rate is greater than 1, we perform atrous depthwise convolution, in which case all values in the strides tensor must be equal to 1. Usage Example: x = np.array([ [1., 2.], [3., 4.], [5., 6.] ], dtype=np.float32).reshape((1, 3, 2, 1)) kernel = np.array([ [1., 2.], [3., 4] ], dtype=np.float32).reshape((2, 1, 1, 2)) tf.compat.v1.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1], padding='VALID').numpy() array([[[[10., 14.], [14., 20.]], [[18., 26.], [22., 32.]]]], dtype=float32) tf.compat.v1.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1], padding=[[0, 0], [1, 0], [1, 0], [0, 0]] ).numpy() array([[[[ 0., 0.], [ 3., 4.], [ 6., 8.]], [[ 0., 0.], [10., 14.], [14., 20.]], [[ 0., 0.], [18., 26.], [22., 32.]]]], dtype=float32) Args input 4-D with shape according to data_format. filter 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier]. strides 1-D of size 4. The stride of the sliding window for each dimension of input. 
padding Controls how to pad the image before applying the convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. rate 1-D of size 2. The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1. name A name for this operation (optional). data_format The data format for input. Either "NHWC" (default) or "NCHW". dilations Alias of rate. Returns A 4-D Tensor with shape according to data_format. E.g., for "NHWC" format, shape is [batch, out_height, out_width, in_channels * channel_multiplier].
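The summation formula above (default "NHWC" format, stride 1, VALID padding, no dilation) can be reproduced naively in NumPy; this sketch recomputes the first usage example and is not the actual implementation:

```python
import numpy as np

def depthwise_conv2d_sketch(x, w):
    # Naive VALID, stride-1 depthwise conv per the formula above.
    # x: [batch, H, W, in_channels]; w: [fh, fw, in_channels, channel_multiplier].
    b, H, W, C = x.shape
    fh, fw, _, M = w.shape
    out = np.zeros((b, H - fh + 1, W - fw + 1, C * M), dtype=x.dtype)
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            for k in range(C):
                for q in range(M):
                    patch = x[:, i:i + fh, j:j + fw, k]
                    # output[b, i, j, k*M + q] = sum_{di,dj} filter * input
                    out[:, i, j, k * M + q] = np.sum(patch * w[:, :, k, q], axis=(1, 2))
    return out

x = np.arange(1.0, 7.0).reshape(1, 3, 2, 1)       # same x as the usage example
kernel = np.arange(1.0, 5.0).reshape(2, 1, 1, 2)  # same kernel
print(depthwise_conv2d_sketch(x, kernel))
# [[[[10. 14.]
#    [14. 20.]]
#   [[18. 26.]
#    [22. 32.]]]]
```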
tensorflow.compat.v1.nn.depthwise_conv2d
tf.compat.v1.nn.depthwise_conv2d_native Computes a 2-D depthwise convolution. tf.compat.v1.nn.depthwise_conv2d_native( input, filter, strides, padding, data_format='NHWC', dilations=[1, 1, 1, 1], name=None ) Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels. for k in 0..in_channels-1 for q in 0..channel_multiplier-1 output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * filter[di, dj, k, q] Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. filter A Tensor. Must have the same type as input. strides A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input. padding Controls how to pad the image before applying the convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data.
With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tensorflow.compat.v1.nn.depthwise_conv2d_native
tf.compat.v1.nn.dilation2d Computes the grayscale dilation of 4-D input and 3-D filter tensors. tf.compat.v1.nn.dilation2d( input, filter=None, strides=None, rates=None, padding=None, name=None, filters=None, dilations=None ) The input tensor has shape [batch, in_height, in_width, depth] and the filter tensor has shape [filter_height, filter_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format. In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with conv2d, we use unmirrored filters): output[b, y, x, c] = max_{dy, dx} input[b, strides[1] * y + rates[1] * dy, strides[2] * x + rates[2] * dx, c] + filter[dy, dx, c] Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros. Note on duality: The dilation of input by the filter is equal to the negation of the erosion of -input by the reflected filter. Args input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, in_height, in_width, depth]. filter A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth]. strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1]. rates A list of ints that has length >= 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1]. padding A string from: "SAME", "VALID". The type of padding algorithm to use. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
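The max-sum correlation can be sketched naively in NumPy (stride 1, rate 1, VALID padding; `dilation2d_sketch` is a hypothetical helper, not the actual op). Using an all-zero filter also demonstrates the max-pooling special case noted above:

```python
import numpy as np

def dilation2d_sketch(x, f):
    # Naive VALID, stride-1, rate-1 grayscale dilation per the formula:
    # output[b, y, x, c] = max over the patch of (input + filter).
    b, H, W, C = x.shape
    fh, fw, _ = f.shape
    out = np.full((b, H - fh + 1, W - fw + 1, C), -np.inf)
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            for c in range(C):
                patch = x[:, i:i + fh, j:j + fw, c]
                out[:, i, j, c] = np.max(patch + f[:, :, c], axis=(1, 2))
    return out

x = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]]).reshape(1, 3, 3, 1)
f = np.zeros((2, 2, 1))  # all-zero filter: dilation reduces to 2x2 max-pooling
print(dilation2d_sketch(x, f)[0, :, :, 0])
# [[5. 6.]
#  [8. 9.]]
```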
tensorflow.compat.v1.nn.dilation2d
tf.compat.v1.nn.dropout Computes dropout. (deprecated arguments) tf.compat.v1.nn.dropout( x, keep_prob=None, noise_shape=None, seed=None, name=None, rate=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_prob). They will be removed in a future version. Instructions for updating: Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob. For each element of x, with probability rate, outputs 0, and otherwise scales up the input by 1 / (1-rate). The scaling is such that the expected sum is unchanged. By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together. Args x A floating point tensor. keep_prob (deprecated) A deprecated alias for (1-rate). noise_shape A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags. seed A Python integer. Used to create random seeds. See tf.random.set_seed for behavior. name A name for this operation (optional). rate A scalar Tensor with the same type as x. The probability that each element of x is discarded. Returns A Tensor of the same shape as x. Raises ValueError If rate is not in [0, 1) or if x is not a floating point tensor.
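The drop/scale behavior can be sketched in NumPy (an illustration of the semantics, not the actual op; `dropout_sketch` is a hypothetical name):

```python
import numpy as np

def dropout_sketch(x, rate, rng):
    # Zero each element with probability `rate`; scale survivors by
    # 1 / (1 - rate) so the expected sum is unchanged.
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones((4, 4))
y = dropout_sketch(x, rate=0.5, rng=rng)
# Every entry is either 0.0 (dropped) or 2.0 (kept and scaled by 1/(1-0.5)).
print(y)
```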
tensorflow.compat.v1.nn.dropout
tf.compat.v1.nn.dynamic_rnn Creates a recurrent neural network specified by RNNCell cell. (deprecated) tf.compat.v1.nn.dynamic_rnn( cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.RNN(cell), which is equivalent to this API Performs fully dynamic unrolling of inputs. Example: # create a BasicRNNCell rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size) # 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size] # defining initial state initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32) # 'state' is a tensor of shape [batch_size, cell_state_size] outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data, initial_state=initial_state, dtype=tf.float32) # create 2 LSTMCells rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]] # create a RNN cell composed sequentially of a number of RNNCells multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers) # 'outputs' is a tensor of shape [batch_size, max_time, 256] # 'state' is a N-tuple where N is the number of LSTMCells containing a # tf.nn.rnn_cell.LSTMStateTuple for each cell outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell, inputs=data, dtype=tf.float32) Args cell An instance of RNNCell. inputs The RNN inputs. If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements. If time_major == True, this must be a Tensor of shape: [max_time, batch_size, ...], or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. 
In this case, input to cell at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken). The input to cell at each time step will be a Tensor or (possibly nested) tuple of Tensors each with dimensions [batch_size, ...]. sequence_length (optional) An int32/int64 vector sized [batch_size]. Used to copy-through state and zero-out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness. initial_state (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size. dtype (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype. parallel_iterations (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. time_major The shape format of the inputs and outputs Tensors. If true, these Tensors must be shaped [max_time, batch_size, depth]. If false, these Tensors must be shaped [batch_size, max_time, depth]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. 
However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. scope VariableScope for the created subgraph; defaults to "rnn". Returns A pair (outputs, state) where: outputs The RNN output Tensor. If time_major == False (default), this will be a Tensor shaped: [batch_size, max_time, cell.output_size]. If time_major == True, this will be a Tensor shaped: [max_time, batch_size, cell.output_size]. Note, if cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then outputs will be a tuple having the same structure as cell.output_size, containing Tensors having shapes corresponding to the shape data in cell.output_size. state The final state. If cell.state_size is an int, this will be shaped [batch_size, cell.state_size]. If it is a TensorShape, this will be shaped [batch_size] + cell.state_size. If it is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes. If cells are LSTMCells state will be a tuple containing a LSTMStateTuple for each cell. Raises TypeError If cell is not an instance of RNNCell. ValueError If inputs is None or an empty list.
tensorflow.compat.v1.nn.dynamic_rnn
tf.compat.v1.nn.embedding_lookup Looks up embeddings for the given ids from a list of tensors. tf.compat.v1.nn.embedding_lookup( params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None ) This function is used to perform parallel lookups on the list of tensors in params. It is a generalization of tf.gather, where params is interpreted as a partitioning of a large embedding tensor. params may be a PartitionedVariable as returned by using tf.compat.v1.get_variable() with a partitioner. If len(params) > 1, each element id of ids is partitioned between the elements of params according to the partition_strategy. In all strategies, if the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id. If partition_strategy is "mod", we assign each id to partition p = id % len(params). For instance, 13 ids are split across 5 partitions as: [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]] If partition_strategy is "div", we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]] If the input ids are ragged tensors, partition variables are not supported and the partition strategy and the max_norm are ignored. The results of the lookup are concatenated into a dense tensor. The returned tensor has shape shape(ids) + shape(params)[1:]. Args params A single tensor representing the complete embedding tensor, or a list of P tensors all of same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a PartitionedVariable, created by partitioning along dimension 0. Each element must be appropriately sized for the given partition_strategy. ids A Tensor or a 'RaggedTensor' with type int32 or int64 containing the ids to be looked up in params. partition_strategy A string specifying the partitioning strategy, relevant if len(params) > 1. 
Currently "div" and "mod" are supported. Default is "mod". name A name for the operation (optional). validate_indices DEPRECATED. If this operation is assigned to CPU, values in indices are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error. max_norm If not None, each embedding is clipped if its l2-norm is larger than this value. Returns A Tensor or a 'RaggedTensor', depending on the input, with the same type as the tensors in params. Raises ValueError If params is empty.
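The two partition strategies can be sketched in plain Python; `partition_ids` is a hypothetical helper that reproduces the 13-ids-across-5-partitions examples above:

```python
def partition_ids(num_ids, num_partitions, strategy="mod"):
    if strategy == "mod":
        # id goes to partition p = id % num_partitions.
        return [list(range(p, num_ids, num_partitions)) for p in range(num_partitions)]
    # "div": contiguous ranges; the first (num_ids % num_partitions)
    # partitions each receive one extra id.
    size, extra = divmod(num_ids, num_partitions)
    parts, start = [], 0
    for p in range(num_partitions):
        end = start + size + (1 if p < extra else 0)
        parts.append(list(range(start, end)))
        start = end
    return parts

print(partition_ids(13, 5, "mod"))
# [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
print(partition_ids(13, 5, "div"))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
```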
tensorflow.compat.v1.nn.embedding_lookup
tf.compat.v1.nn.embedding_lookup_sparse Looks up embeddings for the given ids and weights from a list of tensors. tf.compat.v1.nn.embedding_lookup_sparse( params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner=None, max_norm=None ) This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order. sp_ids and sp_weights (if not None) are SparseTensors with rank of 2. Embeddings are always aggregated along the last dimension. It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0. Args params A single tensor representing the complete embedding tensor, or a list of tensors all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a PartitionedVariable, created by partitioning along dimension 0. Each element must be appropriately sized for the given partition_strategy. sp_ids N x M SparseTensor of int64 ids where N is typically batch size and M is arbitrary. sp_weights either a SparseTensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, sp_weights must have exactly the same shape and indices as sp_ids. partition_strategy A string specifying the partitioning strategy, relevant if len(params) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name Optional name for the op. combiner A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights. Defaults to "mean".
max_norm If not None, each embedding is clipped if its l2-norm is larger than this value, before combining. Returns A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified. In other words, if shape(combined params) = [p0, p1, ..., pm] and shape(sp_ids) = shape(sp_weights) = [d0, d1] then shape(output) = [d0, p1, ..., pm]. For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are [0, 0]: id 1, weight 2.0 [0, 1]: id 3, weight 0.5 [1, 0]: id 0, weight 1.0 [2, 3]: id 1, weight 3.0 with combiner="mean", then the output will be a 3x20 matrix where output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5) output[1, :] = (params[0, :] * 1.0) / 1.0 output[2, :] = (params[1, :] * 3.0) / 3.0 Raises TypeError If sp_ids is not a SparseTensor, or if sp_weights is neither None nor SparseTensor. ValueError If combiner is not one of {"mean", "sqrtn", "sum"}.
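The three combiners can be sketched in NumPy for a single row of sp_ids (`combine` is a hypothetical helper; the params values here are made up for illustration):

```python
import numpy as np

def combine(embeddings, weights, combiner="mean"):
    # Combine one row's looked-up embeddings as described above.
    weighted = np.sum(embeddings * weights[:, None], axis=0)
    if combiner == "sum":
        return weighted
    if combiner == "mean":
        return weighted / np.sum(weights)
    return weighted / np.sqrt(np.sum(weights ** 2))  # "sqrtn"

params = np.array([[1., 1.], [2., 4.], [0., 0.], [10., 20.]])
# One row of sp_ids holding ids [1, 3] with weights [2.0, 0.5].
emb, w = params[[1, 3]], np.array([2.0, 0.5])
print(combine(emb, w, "mean"))  # (params[1]*2.0 + params[3]*0.5) / (2.0 + 0.5)
```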
tensorflow.compat.v1.nn.embedding_lookup_sparse
tf.compat.v1.nn.erosion2d Computes the grayscale erosion of 4-D value and 3-D kernel tensors. tf.compat.v1.nn.erosion2d( value, kernel, strides, rates, padding, name=None ) The value tensor has shape [batch, in_height, in_width, depth] and the kernel tensor has shape [kernel_height, kernel_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format. In detail, the grayscale morphological 2-D erosion is given by: output[b, y, x, c] = min_{dy, dx} value[b, strides[1] * y - rates[1] * dy, strides[2] * x - rates[2] * dx, c] - kernel[dy, dx, c] Duality: The erosion of value by the kernel is equal to the negation of the dilation of -value by the reflected kernel. Args value A Tensor. 4-D with shape [batch, in_height, in_width, depth]. kernel A Tensor. Must have the same type as value. 3-D with shape [kernel_height, kernel_width, depth]. strides A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1]. rates A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1]. padding A string from: "SAME", "VALID". The type of padding algorithm to use. name A name for the operation (optional). If not specified, "erosion2d" is used. Returns A Tensor. Has the same type as value. 4-D with shape [batch, out_height, out_width, depth]. Raises ValueError If the value depth does not match the kernel's shape, or if padding is other than 'VALID' or 'SAME'.
tensorflow.compat.v1.nn.erosion2d
tf.compat.v1.nn.fractional_avg_pool Performs fractional average pooling on the input. (deprecated) tf.compat.v1.nn.fractional_avg_pool( value, pooling_ratio, pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: seed2 and deterministic args are deprecated. Use fractional_avg_pool_v2. This is a deprecated version of fractional_avg_pool. Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region. Args value A Tensor. 4-D with shape [batch, height, width, channels]. pooling_ratio A list of floats that has length >= 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratios on the height and width dimensions respectively. pseudo_random An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper (Graham, 2015) for the difference between pseudorandom and random. overlapping An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: index 0 1 2 3 4 value 20 5 16 3 7 If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [41/3, 26/3] for fractional avg pooling. deterministic An optional bool. Deprecated; use fractional_avg_pool_v2 instead. seed An optional int. Defaults to 0. If set to be non-zero, the random number generator is seeded by the given seed.
Otherwise it is seeded by a random seed. seed2 An optional int. Deprecated; use fractional_avg_pool_v2 instead. name A name for the operation (optional). Returns A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: Output Tensor after fractional avg pooling. Has the same type as value. row_pooling_sequence: A Tensor of type int64. col_pooling_sequence: A Tensor of type int64. References: Fractional Max-Pooling: Graham, 2015 (pdf)
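The overlapping example in the args above can be checked directly in numpy; this is a 1-D illustration of the described cell semantics, not the TF kernel (the sequence and values come from the example table).

```python
import numpy as np

# 1-D sketch of overlapping fractional average pooling. With pooling
# sequence [0, 2, 4] and overlapping=True, each cell covers the closed
# interval [seq[i], seq[i+1]], so index 2 (value 16) is used by both cells.
values = np.array([20.0, 5.0, 16.0, 3.0, 7.0])
seq = [0, 2, 4]

def overlapping_avg_pool_1d(values, seq):
    # Average each closed interval [a, b].
    return [values[a:b + 1].mean() for a, b in zip(seq[:-1], seq[1:])]

pooled = overlapping_avg_pool_1d(values, seq)
# mean(20, 5, 16) = 41/3 and mean(16, 3, 7) = 26/3
```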
tensorflow.compat.v1.nn.fractional_avg_pool
tf.compat.v1.nn.fractional_max_pool Performs fractional max pooling on the input. (deprecated) tf.compat.v1.nn.fractional_max_pool( value, pooling_ratio, pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: seed2 and deterministic args are deprecated. Use fractional_max_pool_v2. This is a deprecated version of fractional_max_pool. Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer. The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries. First we define the following: input_row_length : the number of rows from the input set output_row_length : which will be smaller than the input alpha = input_row_length / output_row_length : our reduction ratio K = floor(alpha) row_pooling_sequence : this is the result list of pool boundary rows Then, row_pooling_sequence should satisfy: a[0] = 0 : the first value of the sequence is 0 a[end] = input_row_length : the last value of the sequence is the size K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size length(row_pooling_sequence) = output_row_length+1 Args value A Tensor. 4-D with shape [batch, height, width, channels]. pooling_ratio A list of floats that has length >= 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. 
The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively. pseudo_random An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check (Graham, 2015) for difference between pseudorandom and random. overlapping An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: index 0 1 2 3 4 value 20 5 16 3 7 If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [20, 16] for fractional max pooling. deterministic An optional bool. Deprecated; use fractional_max_pool_v2 instead. seed An optional int. Defaults to 0. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed. seed2 An optional int. Deprecated; use fractional_max_pool_v2 instead. name A name for the operation (optional). Returns A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: Output Tensor after fractional max pooling. Has the same type as value. row_pooling_sequence: A Tensor of type int64. col_pooling_sequence: A Tensor of type int64. References: Fractional Max-Pooling: Graham, 2015 (pdf)
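The overlapping example above, for the max case, can be sketched the same way in numpy (an illustration of the described semantics, not the TF kernel):

```python
import numpy as np

# With pooling sequence [0, 2, 4] and overlapping=True, the two cells are
# values[0:3] and values[2:5]; index 2 (value 16) is used by both, and the
# max of each cell gives [20, 16], matching the example above.
values = np.array([20.0, 5.0, 16.0, 3.0, 7.0])
seq = [0, 2, 4]
pooled = [values[a:b + 1].max() for a, b in zip(seq[:-1], seq[1:])]
```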
tensorflow.compat.v1.nn.fractional_max_pool
tf.compat.v1.nn.fused_batch_norm Batch normalization. tf.compat.v1.nn.fused_batch_norm( x, scale, offset, mean=None, variance=None, epsilon=0.001, data_format='NHWC', is_training=True, name=None, exponential_avg_factor=1.0 ) See Source: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy. Args x Input Tensor of 4 or 5 dimensions. scale A Tensor of 1 dimension for scaling. offset A Tensor of 1 dimension for bias. mean A Tensor of 1 dimension for population mean. The shape and meaning of this argument depend on the values of is_training and exponential_avg_factor as follows: is_training == False (inference): Mean must be a Tensor of the same shape as scale containing the estimated population mean computed during training. is_training == True and exponential_avg_factor == 1.0: Mean must be None. is_training == True and exponential_avg_factor != 1.0: Mean must be a Tensor of the same shape as scale containing the exponential running mean. variance A Tensor of 1 dimension for population variance. The shape and meaning of this argument depend on the values of is_training and exponential_avg_factor as follows: is_training == False (inference): Variance must be a Tensor of the same shape as scale containing the estimated population variance computed during training. is_training == True and exponential_avg_factor == 1.0: Variance must be None. is_training == True and exponential_avg_factor != 1.0: Variance must be a Tensor of the same shape as scale containing the exponential running variance. epsilon A small float number added to the variance of x. data_format The data format for x. Supports "NHWC" (default) or "NCHW" for 4D tensors and "NDHWC" or "NCDHW" for 5D tensors. is_training A bool value to specify if the operation is used for training or inference. name A name for this operation (optional). 
exponential_avg_factor A float number (usually between 0 and 1) used for controlling the decay of the running population average of mean and variance. If set to 1.0, the current batch average is returned. Returns y A 4D or 5D Tensor for the normalized, scaled, offset x. running_mean A 1D Tensor for the exponential running mean of x. The output value is (1 - exponential_avg_factor) * mean + exponential_avg_factor * batch_mean, where batch_mean is the mean of the current batch in x. running_var A 1D Tensor for the exponential running variance of x. The output value is (1 - exponential_avg_factor) * variance + exponential_avg_factor * batch_variance, where batch_variance is the variance of the current batch in x. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf)
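The running-average update described in the Returns section reduces to a one-line formula; a minimal sketch in plain Python (this mirrors the documented formula, it is not the fused kernel itself):

```python
# running_stat = (1 - f) * old_stat + f * batch_stat, where f is
# exponential_avg_factor. f == 1.0 simply returns the batch statistic.
def update_running(old_stat, batch_stat, exponential_avg_factor):
    f = exponential_avg_factor
    return (1.0 - f) * old_stat + f * batch_stat

assert update_running(0.0, 10.0, 1.0) == 10.0   # f == 1.0: current batch average
running = update_running(0.0, 10.0, 0.1)        # keeps 90% of the old running mean
```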
tensorflow.compat.v1.nn.fused_batch_norm
tf.compat.v1.nn.max_pool Performs the max pooling on the input. tf.compat.v1.nn.max_pool( value, ksize, strides, padding, data_format='NHWC', name=None, input=None ) Args value A 4-D Tensor of the format specified by data_format. ksize An int or list of ints that has length 1, 2 or 4. The size of the window for each dimension of the input tensor. strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of the input tensor. padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. When using explicit padding, the size of the paddings cannot be greater than the sliding window size. data_format A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported. name Optional name for the operation. input Alias for value. Returns A Tensor of format specified by data_format. The max pooled output tensor.
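As a sketch of what the window and stride arguments mean, here is a naive NHWC max pool with "VALID" padding in numpy (illustrative only, not the tf.nn.max_pool implementation):

```python
import numpy as np

# Naive NHWC max pooling with "VALID" padding: slide a kh x kw window over
# the spatial dims with strides (sh, sw) and take the max of each window.
def max_pool_valid(value, ksize, strides):
    n, h, w, c = value.shape
    kh, kw = ksize
    sh, sw = strides
    oh = (h - kh) // sh + 1
    ow = (w - kw) // sw + 1
    out = np.empty((n, oh, ow, c), dtype=value.dtype)
    for i in range(oh):
        for j in range(ow):
            window = value[:, i * sh:i * sh + kh, j * sw:j * sw + kw, :]
            out[:, i, j, :] = window.max(axis=(1, 2))
    return out

x = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)
y = max_pool_valid(x, ksize=(2, 2), strides=(2, 2))   # shape (1, 2, 2, 1)
```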
tensorflow.compat.v1.nn.max_pool
tf.compat.v1.nn.max_pool_with_argmax Performs max pooling on the input and outputs both max values and indices. tf.compat.v1.nn.max_pool_with_argmax( input, ksize, strides, padding, data_format='NHWC', Targmax=None, name=None, output_dtype=None, include_batch_in_index=False ) The indices in argmax are flattened, so that a maximum value at position [b, y, x, c] becomes flattened index: (y * width + x) * channels + c if include_batch_in_index is False; ((b * height + y) * width + x) * channels + c if include_batch_in_index is True. The indices returned are always in [0, height) x [0, width) before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening. Args input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, height, width, channels]. Input to pool over. ksize A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding A string from: "SAME", "VALID". The type of padding algorithm to use. Targmax An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. include_batch_in_index An optional bool. Defaults to False. Whether to include batch dimension in flattened index of argmax. name A name for the operation (optional). Returns A tuple of Tensor objects (output, argmax). output A Tensor. Has the same type as input. argmax A Tensor of type Targmax.
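The two flattening formulas above can be written out directly; a plain-Python sketch (the example shape is made up):

```python
# Flattened argmax index for a maximum at position [b, y, x, c].
def flatten_index(b, y, x, c, height, width, channels, include_batch_in_index):
    if include_batch_in_index:
        return ((b * height + y) * width + x) * channels + c
    return (y * width + x) * channels + c

# A max at [b=1, y=2, x=3, c=0] in a height=4, width=5, channels=2 input:
idx = flatten_index(1, 2, 3, 0, 4, 5, 2, include_batch_in_index=False)   # 26
idx_b = flatten_index(1, 2, 3, 0, 4, 5, 2, include_batch_in_index=True)  # 66
```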
tensorflow.compat.v1.nn.max_pool_with_argmax
tf.compat.v1.nn.moments Calculate the mean and variance of x. tf.compat.v1.nn.moments( x, axes, shift=None, name=None, keep_dims=None, keepdims=None ) The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector. Note: shift is currently not used; the true mean is computed and used. When using these moments for batch normalization (see tf.nn.batch_normalization): for so-called "global normalization", used with convolutional filters with shape [batch, height, width, depth], pass axes=[0, 1, 2]. for simple batch normalization pass axes=[0] (batch only). Args x A Tensor. axes Array of ints. Axes along which to compute mean and variance. shift Not used in the current implementation name Name used to scope the operations that compute the moments. keep_dims produce moments with the same dimensionality as the input. keepdims Alias to keep_dims. Returns Two Tensor objects: mean and variance.
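For the batch case mentioned above (axes=[0]), the moments are just the mean and the biased variance aggregated across those axes; a numpy sketch of the semantics:

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])       # shape [batch, depth]
mean = x.mean(axis=0)            # axes=[0]: per-depth mean over the batch
variance = x.var(axis=0)         # biased variance, as tf.nn.moments computes
```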
tensorflow.compat.v1.nn.moments
tf.compat.v1.nn.nce_loss Computes and returns the noise-contrastive estimation training loss. tf.compat.v1.nn.nce_loss( weights, biases, labels, inputs, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy='mod', name='nce_loss' ) A common use case is to use this method for training, and calculate the full sigmoid loss for evaluation or inference. In this case, you must set partition_strategy="div" for the two losses to be consistent, as in the following example: if mode == "train": loss = tf.nn.nce_loss( weights=weights, biases=biases, labels=labels, inputs=inputs, ..., partition_strategy="div") elif mode == "eval": logits = tf.matmul(inputs, tf.transpose(weights)) logits = tf.nn.bias_add(logits, biases) labels_one_hot = tf.one_hot(labels, n_classes) loss = tf.nn.sigmoid_cross_entropy_with_logits( labels=labels_one_hot, logits=logits) loss = tf.reduce_sum(loss, axis=1) Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see tf.random.log_uniform_candidate_sampler. Note: In the case where num_true > 1, we assign to each target class the target probability 1 / num_true so that the target probabilities sum to 1 per-example. Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class. Args weights A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings. biases A Tensor of shape [num_classes]. The class biases. labels A Tensor of type int64 and shape [batch_size, num_true]. The target classes. 
inputs A Tensor of shape [batch_size, dim]. The forward activations of the input network. num_sampled An int. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch. num_classes An int. The number of possible classes. num_true An int. The number of target classes per training example. sampled_values a tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. (if None, we default to log_uniform_candidate_sampler) remove_accidental_hits A bool. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to True, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our Candidate Sampling Algorithms Reference (pdf). Default is False. partition_strategy A string specifying the partitioning strategy, relevant if len(weights) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name A name for the operation (optional). Returns A batch_size 1-D tensor of per-example NCE losses. References: Noise-contrastive estimation - A new estimation principle for unnormalized statistical models: Gutmann et al., 2010 (pdf)
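The "eval" branch in the example above computes a full sigmoid cross-entropy over all classes; the same math in numpy (a sketch of the formula, not tf.nn.sigmoid_cross_entropy_with_logits, and all shapes here are made up):

```python
import numpy as np

# Numerically stable sigmoid cross-entropy per class:
# max(x, 0) - x*z + log(1 + exp(-|x|)), with x = logit, z = 0/1 label.
def sigmoid_xent(labels_one_hot, logits):
    return (np.maximum(logits, 0) - logits * labels_one_hot
            + np.log1p(np.exp(-np.abs(logits))))

rng = np.random.default_rng(0)
n_classes, dim, batch = 5, 3, 2
weights = rng.normal(size=(n_classes, dim))     # class embeddings
biases = rng.normal(size=(n_classes,))
inputs = rng.normal(size=(batch, dim))
labels = np.array([0, 3])

logits = inputs @ weights.T + biases            # full logits, all classes
labels_one_hot = np.eye(n_classes)[labels]
loss = sigmoid_xent(labels_one_hot, logits).sum(axis=1)   # per-example loss
```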
tensorflow.compat.v1.nn.nce_loss
tf.compat.v1.nn.pool Performs an N-D pooling operation. tf.compat.v1.nn.pool( input, window_shape, pooling_type, padding, dilation_rate=None, strides=None, name=None, data_format=None, dilations=None ) In the case that data_format does not start with "NC", computes for 0 <= b < batch_size, 0 <= x[i] < output_spatial_shape[i], 0 <= c < num_channels: output[b, x[0], ..., x[N-1], c] = REDUCE_{z[0], ..., z[N-1]} input[b, x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0], ... x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1], c], where the reduction function REDUCE depends on the value of pooling_type, and pad_before is defined based on the value of padding as described in the "returns" section of tf.nn.convolution. The reduction never includes out-of-bounds positions. In the case that data_format starts with "NC", the input and output are simply transposed as follows: pool(input, data_format, **kwargs) = tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) Args input Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC". Pooling happens over the spatial dimensions only. window_shape Sequence of N ints >= 1. pooling_type Specifies pooling operation, must be "AVG" or "MAX". padding The padding algorithm, must be "SAME" or "VALID". See the "returns" section of tf.nn.convolution for details. dilation_rate Optional. Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1. strides Optional. Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1. name Optional. Name of the op. data_format A string or None. 
Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), or the second dimension (if data_format starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW". dilations Alias for dilation_rate. Returns Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels] if data_format is None or does not start with "NC", or [batch_size, num_channels] + output_spatial_shape if data_format starts with "NC", where output_spatial_shape depends on the value of padding: If padding = "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) If padding = "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i]) / strides[i]). Raises ValueError If arguments are invalid.
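The two output-shape formulas above, written out in plain Python (a sketch):

```python
import math

# output_spatial_shape[i] for one spatial dimension i.
def out_size(in_size, window, dilation, stride, padding):
    if padding == "SAME":
        return math.ceil(in_size / stride)
    # "VALID"
    return math.ceil((in_size - (window - 1) * dilation) / stride)

same = out_size(10, 3, 1, 2, "SAME")    # ceil(10 / 2) = 5
valid = out_size(10, 3, 1, 2, "VALID")  # ceil((10 - 2) / 2) = 4
```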
tensorflow.compat.v1.nn.pool
tf.compat.v1.nn.quantized_avg_pool Produces the average pool of the input tensor for quantized types. tf.compat.v1.nn.quantized_avg_pool( input, min_input, max_input, ksize, strides, padding, name=None ) Args input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. 4-D with shape [batch, height, width, channels]. min_input A Tensor of type float32. The float value that the lowest quantized input value represents. max_input A Tensor of type float32. The float value that the highest quantized input value represents. ksize A list of ints. The size of the window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input. strides A list of ints. The stride of the sliding window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input. padding A string from: "SAME", "VALID". The type of padding algorithm to use. name A name for the operation (optional). Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor. Has the same type as input. min_output A Tensor of type float32. max_output A Tensor of type float32.
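min_input and max_input give the quantized values their real-number meaning: the lowest representable quantized value maps to min_input and the highest to max_input, linearly in between. A numpy dequantization sketch under that assumption (shown for an unsigned 8-bit range; the exact per-dtype mapping is an assumption here, not taken from the op's documentation):

```python
import numpy as np

# Linear dequantization: qmin maps to min_input, qmax maps to max_input.
def dequantize(q, min_input, max_input, qmin=0, qmax=255):
    scale = (max_input - min_input) / (qmax - qmin)
    return min_input + (q.astype(np.float64) - qmin) * scale

q = np.array([0, 255], dtype=np.uint8)
real = dequantize(q, min_input=-1.0, max_input=1.0)   # endpoints of the range
```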
tensorflow.compat.v1.nn.quantized_avg_pool
tf.compat.v1.nn.quantized_conv2d Computes a 2D convolution given quantized 4D input and filter tensors. tf.compat.v1.nn.quantized_conv2d( input, filter, min_input, max_input, min_filter, max_filter, strides, padding, out_type=tf.dtypes.qint32, dilations=[1, 1, 1, 1], name=None ) The inputs are quantized tensors where the lowest value represents the real number of the associated minimum, and the highest represents the maximum. This means that you can only interpret the quantized output in the same way, by taking the returned minimum and maximum values into account. Args input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. filter's input_depth dimension must match input's depth dimensions. min_input A Tensor of type float32. The float value that the lowest quantized input value represents. max_input A Tensor of type float32. The float value that the highest quantized input value represents. min_filter A Tensor of type float32. The float value that the lowest quantized filter value represents. max_filter A Tensor of type float32. The float value that the highest quantized filter value represents. strides A list of ints. The stride of the sliding window for each dimension of the input tensor. padding A string from: "SAME", "VALID". The type of padding algorithm to use. out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). 
Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type. min_output A Tensor of type float32. max_output A Tensor of type float32.
tensorflow.compat.v1.nn.quantized_conv2d
tf.compat.v1.nn.quantized_max_pool Produces the max pool of the input tensor for quantized types. tf.compat.v1.nn.quantized_max_pool( input, min_input, max_input, ksize, strides, padding, name=None ) Args input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The 4D (batch x rows x cols x depth) Tensor to MaxReduce over. min_input A Tensor of type float32. The float value that the lowest quantized input value represents. max_input A Tensor of type float32. The float value that the highest quantized input value represents. ksize A list of ints. The size of the window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input. strides A list of ints. The stride of the sliding window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input. padding A string from: "SAME", "VALID". The type of padding algorithm to use. name A name for the operation (optional). Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor. Has the same type as input. min_output A Tensor of type float32. max_output A Tensor of type float32.
tensorflow.compat.v1.nn.quantized_max_pool
tf.compat.v1.nn.quantized_relu_x Computes Quantized Rectified Linear X: min(max(features, 0), max_value) tf.compat.v1.nn.quantized_relu_x( features, max_value, min_features, max_features, out_type=tf.dtypes.quint8, name=None ) Args features A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. max_value A Tensor of type float32. min_features A Tensor of type float32. The float value that the lowest quantized value represents. max_features A Tensor of type float32. The float value that the highest quantized value represents. out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.quint8. name A name for the operation (optional). Returns A tuple of Tensor objects (activations, min_activations, max_activations). activations A Tensor of type out_type. min_activations A Tensor of type float32. max_activations A Tensor of type float32.
tensorflow.compat.v1.nn.quantized_relu_x
tf.compat.v1.nn.raw_rnn Creates an RNN specified by RNNCell cell and loop function loop_fn. tf.compat.v1.nn.raw_rnn( cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None ) Note: This method is still in testing, and the API may change. This function is a more primitive version of dynamic_rnn that provides more direct access to the inputs each iteration. It also provides more control over when to start and finish reading the sequence, and what to emit for the output. For example, it can be used to implement the dynamic decoder of a seq2seq model. Instead of working with Tensor objects, most operations work with TensorArray objects directly. The operation of raw_rnn, in pseudo-code, is basically the following: time = tf.constant(0, dtype=tf.int32) (finished, next_input, initial_state, emit_structure, loop_state) = loop_fn( time=time, cell_output=None, cell_state=None, loop_state=None) emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype) state = initial_state while not all(finished): (output, cell_state) = cell(next_input, state) (next_finished, next_input, next_state, emit, loop_state) = loop_fn( time=time + 1, cell_output=output, cell_state=cell_state, loop_state=loop_state) # Emit zeros and copy forward state for minibatch entries that are finished. state = tf.where(finished, state, next_state) emit = tf.where(finished, tf.zeros_like(emit_structure), emit) emit_ta = emit_ta.write(time, emit) # If any new minibatch entries are marked as finished, mark these. finished = tf.logical_or(finished, next_finished) time += 1 return (emit_ta, state, loop_state) with the additional properties that output and state may be (possibly nested) tuples, as determined by cell.output_size and cell.state_size, and as a result the final state and emit_ta may themselves be tuples. 
A simple implementation of dynamic_rnn via raw_rnn looks like this: inputs = tf.compat.v1.placeholder(shape=(max_time, batch_size, input_depth), dtype=tf.float32) sequence_length = tf.compat.v1.placeholder(shape=(batch_size,), dtype=tf.int32) inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time) inputs_ta = inputs_ta.unstack(inputs) cell = tf.compat.v1.nn.rnn_cell.LSTMCell(num_units) def loop_fn(time, cell_output, cell_state, loop_state): emit_output = cell_output # == None for time == 0 if cell_output is None: # time == 0 next_cell_state = cell.zero_state(batch_size, tf.float32) else: next_cell_state = cell_state elements_finished = (time >= sequence_length) finished = tf.reduce_all(elements_finished) next_input = tf.cond( finished, lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32), lambda: inputs_ta.read(time)) next_loop_state = None return (elements_finished, next_input, next_cell_state, emit_output, next_loop_state) outputs_ta, final_state, _ = raw_rnn(cell, loop_fn) outputs = outputs_ta.stack() Args cell An instance of RNNCell. loop_fn A callable that takes inputs (time, cell_output, cell_state, loop_state) and returns the tuple (finished, next_input, next_cell_state, emit_output, next_loop_state). Here time is an int32 scalar Tensor, cell_output is a Tensor or (possibly nested) tuple of tensors as determined by cell.output_size, and cell_state is a Tensor or (possibly nested) tuple of tensors, as determined by the loop_fn on its first call (and should match cell.state_size). The outputs are: finished, a boolean Tensor of shape [batch_size], next_input: the next input to feed to cell, next_cell_state: the next state to feed to cell, and emit_output: the output to store for this iteration. Note that emit_output should be a Tensor or (possibly nested) tuple of tensors which is aggregated in the emit_ta inside the while_loop. 
For the first call to loop_fn, the emit_output corresponds to the emit_structure which is then used to determine the size of the zero_tensor for the emit_ta (defaults to cell.output_size). For the subsequent calls to the loop_fn, the emit_output corresponds to the actual output tensor that is to be aggregated in the emit_ta. The parameter cell_state and output next_cell_state may be either a single or (possibly nested) tuple of tensors. The parameter loop_state and output next_loop_state may be either a single or (possibly nested) tuple of Tensor and TensorArray objects. This last parameter may be ignored by loop_fn and the return value may be None. If it is not None, then the loop_state will be propagated through the RNN loop, for use purely by loop_fn to keep track of its own state. The next_loop_state parameter returned may be None. The first call to loop_fn will be time = 0, cell_output = None, cell_state = None, and loop_state = None. For this call: The next_cell_state value should be the value with which to initialize the cell's state. It may be a final state from a previous RNN or it may be the output of cell.zero_state(). It should be a (possibly nested) tuple structure of tensors. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a TensorShape, this must be a Tensor of appropriate type and shape [batch_size] + cell.state_size. If cell.state_size is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes. The emit_output value may be either None or a (possibly nested) tuple structure of tensors, e.g., (tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1)). If this first emit_output return value is None, then the emit_ta result of raw_rnn will have the same structure and dtypes as cell.output_size. 
Otherwise emit_ta will have the same structure, shapes (prepended with a batch_size dimension), and dtypes as emit_output. The actual values returned for emit_output at this initializing call are ignored. Note, this emit structure must be consistent across all time steps. parallel_iterations (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. scope VariableScope for the created subgraph; defaults to "rnn". Returns A tuple (emit_ta, final_state, final_loop_state) where: emit_ta: The RNN output TensorArray. If loop_fn returns a (possibly nested) set of Tensors for emit_output during initialization, (inputs time = 0, cell_output = None, and loop_state = None), then emit_ta will have the same structure, dtypes, and shapes as emit_output instead. If loop_fn returns emit_output = None during this call, the structure of cell.output_size is used: If cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then emit_ta will be a tuple having the same structure as cell.output_size, containing TensorArrays whose elements' shapes correspond to the shape data in cell.output_size. final_state: The final cell state. If cell.state_size is an int, this will be shaped [batch_size, cell.state_size]. If it is a TensorShape, this will be shaped [batch_size] + cell.state_size. If it is a (possibly nested) tuple of ints or TensorShape, this will be a tuple having the corresponding shapes. final_loop_state: The final loop state as returned by loop_fn. 
Raises TypeError If cell is not an instance of RNNCell, or loop_fn is not a callable.
tensorflow.compat.v1.nn.raw_rnn
tf.compat.v1.nn.relu_layer Computes relu(x * weights + biases). tf.compat.v1.nn.relu_layer( x, weights, biases, name=None ) Args x a 2D tensor. Dimensions typically: batch, in_units weights a 2D tensor. Dimensions typically: in_units, out_units biases a 1D tensor. Dimensions: out_units name A name for the operation (optional). If not specified, "nn_relu_layer" is used. Returns A 2-D Tensor computing relu(matmul(x, weights) + biases). Dimensions typically: batch, out_units.
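The computation itself is one line; a numpy sketch of relu(matmul(x, weights) + biases) with tiny made-up shapes:

```python
import numpy as np

x = np.array([[1.0, -2.0]])          # [batch, in_units]
weights = np.array([[1.0, 0.0],      # [in_units, out_units]
                    [0.0, 1.0]])
biases = np.array([0.5, 0.5])        # [out_units]
# relu clamps the negative pre-activation (-2.0 + 0.5) to zero.
out = np.maximum(x @ weights + biases, 0.0)
```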
tensorflow.compat.v1.nn.relu_layer
Module: tf.compat.v1.nn.rnn_cell Module for constructing RNN Cells. Classes class BasicLSTMCell: DEPRECATED: Please use tf.compat.v1.nn.rnn_cell.LSTMCell instead. class BasicRNNCell: The most basic RNN cell. class DeviceWrapper: Operator that ensures an RNNCell runs on a particular device. class DropoutWrapper: Operator adding dropout to inputs and outputs of the given cell. class GRUCell: Gated Recurrent Unit cell. class LSTMCell: Long short-term memory unit (LSTM) recurrent network cell. class LSTMStateTuple: Tuple used by LSTM Cells for state_size, zero_state, and output state. class MultiRNNCell: RNN cell composed sequentially of multiple simple cells. class RNNCell: Abstract object representing an RNN cell. class ResidualWrapper: RNNCell wrapper that ensures cell inputs are added to the outputs.
tensorflow.compat.v1.nn.rnn_cell
tf.compat.v1.nn.rnn_cell.BasicLSTMCell DEPRECATED: Please use tf.compat.v1.nn.rnn_cell.LSTMCell instead. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.BasicLSTMCell( num_units, forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None, name=None, dtype=None, **kwargs ) Basic LSTM recurrent network cell. The implementation is based on http://arxiv.org/abs/1409.2329. We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting in the beginning of the training. It does not allow cell clipping or a projection layer, and does not use peep-hole connections: it is the basic baseline. For advanced models, please use the full tf.compat.v1.nn.rnn_cell.LSTMCell that follows. Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU, or tf.contrib.rnn.LSTMBlockCell and tf.contrib.rnn.LSTMBlockFusedCell for better performance on CPU. Args num_units int, The number of units in the LSTM cell. forget_bias float, The bias added to forget gates (see above). Must be set to 0.0 manually when restoring from CudnnLSTM-trained checkpoints. state_is_tuple If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated. activation Activation function of the inner states. Default: tanh. It could also be a string from within the Keras activation function names. reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised. name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. 
**kwargs Dict, keyword named properties for common layer attributes, like trainable, etc., when constructing the cell from the configs of get_config(). When restoring from CudnnLSTM-trained checkpoints, you must use CudnnCompatibleLSTMCell instead. Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
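As a rough illustration of the cell math, here is a NumPy sketch of one BasicLSTMCell step. This is a hedged sketch, not the library's implementation: the fused-kernel gate ordering i, j, f, o matches the TF convention, but the function and variable names here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def basic_lstm_step(x, c, h, kernel, bias, forget_bias=1.0):
    """One basic LSTM step (illustrative sketch).

    kernel has shape [input_dim + num_units, 4 * num_units]; its four
    blocks are the input gate (i), new input (j), forget gate (f), and
    output gate (o). forget_bias is added to the forget gate only.
    """
    gates = np.concatenate([x, h], axis=-1) @ kernel + bias
    i, j, f, o = np.split(gates, 4, axis=-1)
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_c, new_h

# Tiny demo: batch of 2, input_dim 3, num_units 4.
rng = np.random.default_rng(0)
batch, input_dim, num_units = 2, 3, 4
kernel = rng.normal(size=(input_dim + num_units, 4 * num_units))
bias = np.zeros(4 * num_units)
c = h = np.zeros((batch, num_units))
x = rng.normal(size=(batch, input_dim))
new_c, new_h = basic_lstm_step(x, c, h, kernel, bias)
```

With state_is_tuple=True the cell would return (new_c, new_h) as an LSTMStateTuple rather than a concatenated tensor.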
tensorflow.compat.v1.nn.rnn_cell.basiclstmcell
tf.compat.v1.nn.rnn_cell.BasicRNNCell The most basic RNN cell. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.BasicRNNCell( num_units, activation=None, reuse=None, name=None, dtype=None, **kwargs ) Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnRNNTanh for better performance on GPU. Args num_units int, The number of units in the RNN cell. activation Nonlinearity to use. Default: tanh. It can also be a string matching a Keras activation function name. reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised. name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. **kwargs Dict, keyword named properties for common layer attributes, like trainable, etc., when constructing the cell from the configs of get_config(). Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
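The recurrence of this cell is simply output = new_state = activation(W @ [input, state] + bias). A minimal NumPy sketch (illustrative names and shapes, not the library code):

```python
import numpy as np

def basic_rnn_step(x, h, kernel, bias, activation=np.tanh):
    """One basic RNN step: the output and the next state are the same tensor."""
    new_h = activation(np.concatenate([x, h], axis=-1) @ kernel + bias)
    return new_h, new_h

# Tiny demo: batch of 2, input_dim 3, num_units 4.
x = np.ones((2, 3))
h = np.zeros((2, 4))
kernel = np.full((3 + 4, 4), 0.1)   # [input_dim + num_units, num_units]
bias = np.zeros(4)
output, new_state = basic_rnn_step(x, h, kernel, bias)
```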
tensorflow.compat.v1.nn.rnn_cell.basicrnncell
tf.compat.v1.nn.rnn_cell.DeviceWrapper Operator that ensures an RNNCell runs on a particular device. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.DeviceWrapper( *args, **kwargs ) Args cell An instance of RNNCell. device A device string or function, for passing to tf.device. **kwargs dict of keyword arguments for base layer. Attributes graph output_size scope_name state_size Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype )
tensorflow.compat.v1.nn.rnn_cell.devicewrapper
tf.compat.v1.nn.rnn_cell.DropoutWrapper Operator adding dropout to inputs and outputs of the given cell. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.DropoutWrapper( *args, **kwargs ) Args cell an RNNCell, a projection to output_size is added to it. input_keep_prob unit Tensor or float between 0 and 1, input keep probability; if it is constant and 1, no input dropout will be added. output_keep_prob unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. state_keep_prob unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. State dropout is performed on the outgoing states of the cell. Note the state components to which dropout is applied when state_keep_prob is in (0, 1) are also determined by the argument dropout_state_filter_visitor (e.g. by default dropout is never applied to the c component of an LSTMStateTuple). variational_recurrent Python bool. If True, then the same dropout pattern is applied across all time steps per run call. If this parameter is set, input_size must be provided. input_size (optional) (possibly nested tuple of) TensorShape objects containing the depth(s) of the input tensors expected to be passed in to the DropoutWrapper. Required and used iff variational_recurrent = True and input_keep_prob < 1. dtype (optional) The dtype of the input, state, and output tensors. Required and used iff variational_recurrent = True. seed (optional) integer, the randomness seed. dropout_state_filter_visitor (optional), default: (see below). Function that takes any hierarchical level of the state and returns a scalar or depth=1 structure of Python booleans describing which terms in the state should be dropped out. In addition, if the function returns True, dropout is applied across this sublevel. If the function returns False, dropout is not applied across this entire sublevel. 
Default behavior: perform dropout on all terms except the memory (c) state of LSTMCellState objects, and don't try to apply dropout to TensorArray objects:

def dropout_state_filter_visitor(s):
  if isinstance(s, LSTMCellState):
    # Never perform dropout on the c state.
    return LSTMCellState(c=False, h=True)
  elif isinstance(s, TensorArray):
    return False
  return True

**kwargs dict of keyword arguments for base layer. Raises TypeError if cell is not an RNNCell, or keep_state_fn is provided but not callable. ValueError if any of the keep_probs are not between 0 and 1. Attributes graph output_size scope_name state_size wrapped_cell Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype )
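The key idea behind variational_recurrent=True is that one dropout mask is sampled per run call and then reused at every time step, rather than resampled each step. A NumPy sketch of that behavior (illustrative only; not the wrapper's implementation):

```python
import numpy as np

def variational_mask(shape, keep_prob, rng):
    """Sample a single inverted-dropout mask to reuse across all time steps."""
    return (rng.random(shape) < keep_prob) / keep_prob

rng = np.random.default_rng(0)
keep_prob = 0.5
mask = variational_mask((2, 4), keep_prob, rng)   # one mask per run call
inputs = [np.ones((2, 4)) for _ in range(3)]      # 3 time steps
dropped = [x * mask for x in inputs]              # identical pattern every step
```

Per-step (non-variational) dropout would instead call variational_mask once per time step, giving a different pattern each step.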
tensorflow.compat.v1.nn.rnn_cell.dropoutwrapper
tf.compat.v1.nn.rnn_cell.GRUCell Gated Recurrent Unit cell. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.GRUCell( num_units, activation=None, reuse=None, kernel_initializer=None, bias_initializer=None, name=None, dtype=None, **kwargs ) Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnGRU for better performance on GPU, or tf.contrib.rnn.GRUBlockCellV2 for better performance on CPU. Args num_units int, The number of units in the GRU cell. activation Nonlinearity to use. Default: tanh. reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised. kernel_initializer (optional) The initializer to use for the weight and projection matrices. bias_initializer (optional) The initializer to use for the bias. name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. **kwargs Dict, keyword named properties for common layer attributes, like trainable etc when constructing the cell from configs of get_config(). References: Learning Phrase Representations using RNN Encoder Decoder for Statistical Machine Translation: Cho et al., 2014 (pdf) Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. 
Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
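The cell follows the standard GRU equations from the Cho et al. reference above: reset gate r, update gate u, and candidate state c. A NumPy sketch of one step (illustrative weight shapes and names, not the library's internals):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, gate_kernel, gate_bias, cand_kernel, cand_bias):
    """One GRU step; the output and the next state are the same tensor."""
    ru = sigmoid(np.concatenate([x, h], axis=-1) @ gate_kernel + gate_bias)
    r, u = np.split(ru, 2, axis=-1)   # reset and update gates
    c = np.tanh(np.concatenate([x, r * h], axis=-1) @ cand_kernel + cand_bias)
    return u * h + (1.0 - u) * c      # interpolate old state and candidate

# Tiny demo: batch of 2, input_dim 3, num_units 4.
rng = np.random.default_rng(1)
batch, input_dim, num_units = 2, 3, 4
gate_kernel = rng.normal(size=(input_dim + num_units, 2 * num_units))
cand_kernel = rng.normal(size=(input_dim + num_units, num_units))
h = np.zeros((batch, num_units))
x = rng.normal(size=(batch, input_dim))
new_h = gru_step(x, h, gate_kernel, np.zeros(2 * num_units),
                 cand_kernel, np.zeros(num_units))
```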
tensorflow.compat.v1.nn.rnn_cell.grucell
tf.compat.v1.nn.rnn_cell.LSTMCell Long short-term memory unit (LSTM) recurrent network cell. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.LSTMCell( num_units, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None, forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None, name=None, dtype=None, **kwargs ) The default non-peephole implementation is based on (Gers et al., 1999). The peephole implementation is based on (Sak et al., 2014). The class uses optional peep-hole connections, optional cell clipping, and an optional projection layer. Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU, or tf.contrib.rnn.LSTMBlockCell and tf.contrib.rnn.LSTMBlockFusedCell for better performance on CPU. References: Long short-term memory recurrent neural network architectures for large scale acoustic modeling: Sak et al., 2014 (pdf) Learning to forget: Gers et al., 1999 (pdf) Long Short-Term Memory: Hochreiter et al., 1997 (pdf) Args num_units int, The number of units in the LSTM cell. use_peepholes bool, set True to enable diagonal/peephole connections. cell_clip (optional) A float value, if provided the cell state is clipped by this value prior to the cell output activation. initializer (optional) The initializer to use for the weight and projection matrices. num_proj (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed. proj_clip (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip]. num_unit_shards Deprecated, will be removed by Jan. 2017. Use a variable_scope partitioner instead. num_proj_shards Deprecated, will be removed by Jan. 2017. Use a variable_scope partitioner instead. 
forget_bias Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of training. Must be set manually to 0.0 when restoring from CudnnLSTM-trained checkpoints. state_is_tuple If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. This latter behavior will soon be deprecated. activation Activation function of the inner states. Default: tanh. It can also be a string matching a Keras activation function name. reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised. name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. **kwargs Dict, keyword named properties for common layer attributes, like trainable, etc., when constructing the cell from the configs of get_config(). When restoring from CudnnLSTM-trained checkpoints, use CudnnCompatibleLSTMCell instead. Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. 
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
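A NumPy sketch of how the optional cell_clip, num_proj, and proj_clip arguments modify the cell's outputs at each step. This is an illustrative helper under assumed shapes, not the library's implementation:

```python
import numpy as np

def apply_cell_clip_and_projection(new_c, new_h, cell_clip=None,
                                   proj_kernel=None, proj_clip=None):
    """Sketch of the optional cell_clip / num_proj / proj_clip behavior."""
    if cell_clip is not None:
        # Cell state is clipped elementwise before the output activation.
        new_c = np.clip(new_c, -cell_clip, cell_clip)
    if proj_kernel is not None:
        # Projection to num_proj output dimensions.
        new_h = new_h @ proj_kernel
        if proj_clip is not None:
            # Projected values clipped to [-proj_clip, proj_clip].
            new_h = np.clip(new_h, -proj_clip, proj_clip)
    return new_c, new_h

# Tiny demo: num_units 3, num_proj 2.
new_c = np.array([[5.0, -5.0, 0.5]])
new_h = np.ones((1, 3))
proj_kernel = np.ones((3, 2))
c2, h2 = apply_cell_clip_and_projection(new_c, new_h, cell_clip=1.0,
                                        proj_kernel=proj_kernel, proj_clip=2.0)
```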
tensorflow.compat.v1.nn.rnn_cell.lstmcell
tf.compat.v1.nn.rnn_cell.LSTMStateTuple Tuple used by LSTM Cells for state_size, zero_state, and output state. tf.compat.v1.nn.rnn_cell.LSTMStateTuple( c, h ) Stores two elements, (c, h), in that order, where c is the hidden state and h is the output. Only used when state_is_tuple=True. Attributes c h dtype
tensorflow.compat.v1.nn.rnn_cell.lstmstatetuple
tf.compat.v1.nn.rnn_cell.MultiRNNCell RNN cell composed sequentially of multiple simple cells. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.MultiRNNCell( cells, state_is_tuple=True ) Example:

num_units = [128, 64]
cells = [BasicLSTMCell(num_units=n) for n in num_units]
stacked_rnn_cell = MultiRNNCell(cells)

Args cells list of RNNCells that will be composed in this order. state_is_tuple If True, accepted and returned states are n-tuples, where n = len(cells). If False, the states are all concatenated along the column axis. This latter behavior will soon be deprecated. Raises ValueError if cells is empty (not allowed), or at least one of the cells returns a state tuple but the flag state_is_tuple is False. Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
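Functionally, the stack feeds each cell's output into the next cell and collects the per-cell states into a tuple (the state_is_tuple=True behavior). A NumPy sketch with toy tanh cells, where all names are illustrative:

```python
import numpy as np

def make_tanh_cell(kernel, bias):
    """A toy RNN cell: output == next state == tanh(W @ [x, h] + b)."""
    def cell(x, h):
        new_h = np.tanh(np.concatenate([x, h], axis=-1) @ kernel + bias)
        return new_h, new_h
    return cell

def multi_rnn_step(x, states, cells):
    """One step of a stacked cell: each cell consumes the previous cell's output."""
    new_states = []
    cur = x
    for cell, state in zip(cells, states):
        cur, new_state = cell(cur, state)
        new_states.append(new_state)
    return cur, tuple(new_states)

# Two stacked cells of sizes 5 and 4, batch of 2, input_dim 3.
rng = np.random.default_rng(0)
cells = [make_tanh_cell(rng.normal(size=(3 + 5, 5)), np.zeros(5)),
         make_tanh_cell(rng.normal(size=(5 + 4, 4)), np.zeros(4))]
states = (np.zeros((2, 5)), np.zeros((2, 4)))
x = rng.normal(size=(2, 3))
output, new_states = multi_rnn_step(x, states, cells)
```

Note the overall output is the last cell's output, so its width matches the last cell's num_units.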
tensorflow.compat.v1.nn.rnn_cell.multirnncell
tf.compat.v1.nn.rnn_cell.ResidualWrapper RNNCell wrapper that ensures cell inputs are added to the outputs. Inherits From: RNNCell, Layer, Layer, Module tf.compat.v1.nn.rnn_cell.ResidualWrapper( *args, **kwargs ) Args cell An instance of RNNCell. residual_fn (Optional) The function to map raw cell inputs and raw cell outputs to the actual cell outputs of the residual network. Defaults to calling nest.map_structure on (lambda i, o: i + o), inputs and outputs. **kwargs dict of keyword arguments for base layer. Attributes graph output_size scope_name state_size Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype )
tensorflow.compat.v1.nn.rnn_cell.residualwrapper
tf.compat.v1.nn.rnn_cell.RNNCell Abstract object representing an RNN cell. Inherits From: Layer, Layer, Module tf.compat.v1.nn.rnn_cell.RNNCell( trainable=True, name=None, dtype=None, **kwargs ) Every RNNCell must have the properties below and implement call with the signature (output, next_state) = call(input, state). The optional third input argument, scope, is allowed for backwards compatibility purposes, but should be left off for new subclasses. This definition of cell differs from the definition used in the literature. In the literature, 'cell' refers to an object with a single scalar output. This definition refers to a horizontal array of such units. An RNN cell, in the most abstract setting, is anything that has a state and performs some operation that takes a matrix of inputs. This operation results in an output matrix with self.output_size columns. If self.state_size is an integer, this operation also results in a new state matrix with self.state_size columns. If self.state_size is a (possibly nested tuple of) TensorShape object(s), then it should return a matching structure of Tensors having shape [batch_size].concatenate(s) for each s in self.state_size. Attributes graph output_size Integer or TensorShape: size of outputs produced by this cell. scope_name state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) zero_state View source zero_state( batch_size, dtype ) Return zero-filled state tensor(s). Args batch_size int, float, or unit Tensor representing the batch size. dtype the data type to use for the state. Returns If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. 
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
tensorflow.compat.v1.nn.rnn_cell.rnncell
tf.compat.v1.nn.safe_embedding_lookup_sparse Lookup embedding results, accounting for invalid IDs and empty features. tf.compat.v1.nn.safe_embedding_lookup_sparse( embedding_weights, sparse_ids, sparse_weights=None, combiner='mean', default_id=None, name=None, partition_strategy='div', max_norm=None ) The partitioned embeddings in embedding_weights must all be the same shape except for the first dimension. The first dimension is allowed to vary because the vocabulary size is not necessarily a multiple of P. embedding_weights may be a PartitionedVariable as returned by using tf.compat.v1.get_variable() with a partitioner. Invalid IDs (< 0) are pruned from input IDs and weights, as are any IDs with a non-positive weight. For an entry with no features, the embedding vector for default_id is returned, or the 0-vector if default_id is not supplied. The ids and weights may be multi-dimensional. Embeddings are always aggregated along the last dimension. Args embedding_weights A single tensor representing the complete embedding tensor, or a list of tensors all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a PartitionedVariable, created by partitioning along dimension 0. Each element must be appropriately sized for the given partition_strategy. sparse_ids SparseTensor of shape [d_0, d_1, ..., d_n] containing the ids. d_0 is typically batch size. sparse_weights SparseTensor of the same shape as sparse_ids, containing float weights corresponding to sparse_ids, or None if all weights are assumed to be 1.0. combiner A string specifying how to combine embedding results for each entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default. default_id The id to use for an entry with no features. name A name for this operation (optional). partition_strategy A string specifying the partitioning strategy. Currently "div" and "mod" are supported. Default is "div". 
max_norm If not None, all embeddings are l2-normalized to max_norm before combining. Returns A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified. In other words, if shape(combined embedding_weights) = [p0, p1, ..., pm] and shape(sparse_ids) = shape(sparse_weights) = [d0, d1, ..., dn] then shape(output) = [d0, d1, ... dn-1, p1, ..., pm]. For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id -1, weight 1.0
[2, 3]: id 1, weight 3.0

and default_id is 0, then with combiner="mean" the output will be a 3x20 matrix where

output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0

Raises ValueError if embedding_weights is empty.
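The pruning / default_id / weighted-mean behavior in the example above can be sketched densely in NumPy. This is a toy re-implementation for combiner="mean" only; the function name and the (row, id, weight) entry encoding are illustrative assumptions, not the op's API:

```python
import numpy as np

def safe_lookup_rows(params, entries, num_rows, default_id=None):
    """Weighted-mean combine per row, pruning ids < 0 and non-positive weights.

    entries is a list of (row, id, weight) triples, standing in for the
    SparseTensor inputs; params is the dense embedding matrix.
    """
    out = np.zeros((num_rows, params.shape[1]))
    for row in range(num_rows):
        kept = [(i, w) for (r, i, w) in entries
                if r == row and i >= 0 and w > 0]
        if not kept:
            # Entry with no (valid) features: default_id's vector, else zeros.
            if default_id is not None:
                out[row] = params[default_id]
            continue
        total_w = sum(w for _, w in kept)
        out[row] = sum(params[i] * w for i, w in kept) / total_w
    return out

# Reproduce the documented example: params 10x20, default_id 0.
rng = np.random.default_rng(0)
params = rng.normal(size=(10, 20))
entries = [(0, 1, 2.0), (0, 3, 0.5), (1, -1, 1.0), (2, 1, 3.0)]
out = safe_lookup_rows(params, entries, num_rows=3, default_id=0)
```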
tensorflow.compat.v1.nn.safe_embedding_lookup_sparse
tf.compat.v1.nn.sampled_softmax_loss Computes and returns the sampled softmax training loss. tf.compat.v1.nn.sampled_softmax_loss( weights, biases, labels, inputs, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, partition_strategy='mod', name='sampled_softmax_loss', seed=None ) This is a faster way to train a softmax classifier over a huge number of classes. This operation is for training only. It is generally an underestimate of the full softmax loss. A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. In this case, you must set partition_strategy="div" for the two losses to be consistent, as in the following example:

if mode == "train":
  loss = tf.nn.sampled_softmax_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...,
      partition_strategy="div")
elif mode == "eval":
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, n_classes)
  loss = tf.nn.softmax_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)

See our Candidate Sampling Algorithms Reference (pdf). Also see Section 3 of (Jean et al., 2014) for the math. Args weights A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings. biases A Tensor of shape [num_classes]. The class biases. labels A Tensor of type int64 and shape [batch_size, num_true]. The target classes. Note that this format differs from the labels argument of nn.softmax_cross_entropy_with_logits. inputs A Tensor of shape [batch_size, dim]. The forward activations of the input network. num_sampled An int. The number of classes to randomly sample per batch. num_classes An int. The number of possible classes. num_true An int. The number of target classes per training example. 
sampled_values a tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. (if None, we default to log_uniform_candidate_sampler) remove_accidental_hits A bool. whether to remove "accidental hits" where a sampled class equals one of the target classes. Default is True. partition_strategy A string specifying the partitioning strategy, relevant if len(weights) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details. name A name for the operation (optional). seed random seed for candidate sampling. Default to None, which doesn't set the op-level random seed for candidate sampling. Returns A batch_size 1-D tensor of per-example sampled softmax losses. References: On Using Very Large Target Vocabulary for Neural Machine Translation: Jean et al., 2014 (pdf)
tensorflow.compat.v1.nn.sampled_softmax_loss
tf.compat.v1.nn.separable_conv2d 2-D convolution with separable filters. tf.compat.v1.nn.separable_conv2d( input, depthwise_filter, pointwise_filter, strides, padding, rate=None, name=None, data_format=None, dilations=None ) Performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions [1, 2] and 3, not spatial separability between dimensions 1 and 2. In detail, with the default NHWC format, output[b, i, j, k] = sum_{di, dj, q, r} input[b, strides[1] * i + di, strides[2] * j + dj, q] * depthwise_filter[di, dj, q, r] * pointwise_filter[0, 0, q * channel_multiplier + r, k] strides controls the strides for the depthwise convolution only, since the pointwise convolution has implicit strides of [1, 1, 1, 1]. Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. If any value in rate is greater than 1, we perform atrous depthwise convolution, in which case all values in the strides tensor must be equal to 1. Args input 4-D Tensor with shape according to data_format. depthwise_filter 4-D Tensor with shape [filter_height, filter_width, in_channels, channel_multiplier]. Contains in_channels convolutional filters of depth 1. pointwise_filter 4-D Tensor with shape [1, 1, channel_multiplier * in_channels, out_channels]. Pointwise filter to mix channels after depthwise_filter has convolved spatially. strides 1-D of size 4. The strides for the depthwise convolution for each dimension of input. padding Controls how to pad the image before applying the depthwise convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a Python list indicating the explicit paddings at the start and end of each dimension. 
When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. rate 1-D of size 2. The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1. name A name for this operation (optional). data_format The data format for input. Either "NHWC" (default) or "NCHW". dilations Alias of rate. Returns A 4-D Tensor with shape according to 'data_format'. For example, with data_format="NHWC", shape is [batch, out_height, out_width, out_channels].
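The two-stage computation (depthwise convolution, then 1x1 pointwise convolution) can be sketched in NumPy for the NHWC, stride-1, VALID case. This is an illustrative reference loop under those assumptions, not an efficient implementation:

```python
import numpy as np

def separable_conv2d_valid(x, dw, pw):
    """Depthwise then pointwise conv: NHWC, stride 1, VALID padding.

    x:  [B, H, W, C]
    dw: [fh, fw, C, M]      (M = channel_multiplier)
    pw: [1, 1, C * M, O]
    """
    B, H, W, C = x.shape
    fh, fw, _, M = dw.shape
    oh, ow = H - fh + 1, W - fw + 1
    depth = np.zeros((B, oh, ow, C * M))
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i:i + fh, j:j + fw, :]            # [B, fh, fw, C]
            # Depthwise: each input channel convolved with its own M filters.
            prod = np.einsum('bijc,ijcm->bcm', patch, dw)  # [B, C, M]
            # Channel-major flatten matches the q * channel_multiplier + r layout.
            depth[:, i, j, :] = prod.reshape(B, C * M)
    # Pointwise 1x1 conv mixes the C*M channels into O output channels.
    return np.einsum('bijk,ko->bijo', depth, pw[0, 0])

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 5, 5, 3))
dw = rng.normal(size=(2, 2, 3, 2))
pw = rng.normal(size=(1, 1, 6, 4))
y = separable_conv2d_valid(x, dw, pw)
```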
tensorflow.compat.v1.nn.separable_conv2d
tf.compat.v1.nn.sigmoid_cross_entropy_with_logits Computes sigmoid cross entropy given logits. tf.compat.v1.nn.sigmoid_cross_entropy_with_logits( _sentinel=None, labels=None, logits=None, name=None ) Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time. For brevity, let x = logits, z = labels. The logistic loss is

  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))

For x < 0, to avoid overflow in exp(-x), we reformulate the above as

  x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))

Hence, to ensure stability and avoid overflow, the implementation uses this equivalent formulation

max(x, 0) - x * z + log(1 + exp(-abs(x)))

logits and labels must have the same type and shape. Args _sentinel Used to prevent positional parameters. Internal, do not use. labels A Tensor of the same type and shape as logits. logits A Tensor of type float32 or float64. name A name for the operation (optional). Returns A Tensor of the same shape as logits with the componentwise logistic losses. Raises ValueError If logits and labels do not have the same shape.
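The stable formulation from the derivation is easy to check numerically. A NumPy sketch comparing it against the naive loss on moderate logits (where the naive form does not yet overflow):

```python
import numpy as np

def sigmoid_xent(logits, labels):
    """Stable form: max(x, 0) - x * z + log(1 + exp(-|x|))."""
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Naive form: fine for moderate x, but overflow-prone for large |x|.
logits = np.array([-3.0, -0.5, 0.0, 2.0, 4.0])
labels = np.array([0.0, 1.0, 0.5, 1.0, 0.0])
s = 1.0 / (1.0 + np.exp(-logits))
naive = -labels * np.log(s) - (1.0 - labels) * np.log(1.0 - s)
stable = sigmoid_xent(logits, labels)
```

Unlike the naive form, the stable form stays finite even for extreme logits such as x = 1000.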
tensorflow.compat.v1.nn.sigmoid_cross_entropy_with_logits
tf.compat.v1.nn.softmax_cross_entropy_with_logits Computes softmax cross entropy between logits and labels. (deprecated) tf.compat.v1.nn.softmax_cross_entropy_with_logits( _sentinel=None, labels=None, logits=None, dim=-1, name=None, axis=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See tf.nn.softmax_cross_entropy_with_logits_v2. Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both. Note: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect. If using exclusive labels (wherein one and only one class is true at a time), see sparse_softmax_cross_entropy_with_logits. Warning: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results. A common use case is to have logits and labels of shape [batch_size, num_classes], but higher dimensions are supported, with the dim argument specifying the class dimension. Backpropagation will happen only into logits. To calculate a cross entropy loss that allows backpropagation into both logits and labels, see tf.nn.softmax_cross_entropy_with_logits_v2. Note that to avoid confusion, it is required to pass only named arguments to this function. Args _sentinel Used to prevent positional parameters. Internal, do not use. labels Each vector along the class dimension should hold a valid probability distribution e.g. 
for the case in which labels are of shape [batch_size, num_classes], each row of labels[i] must be a valid probability distribution. logits Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities. dim The class dimension. Defaulted to -1 which is the last dimension. name A name for the operation (optional). axis Alias for dim. Returns A Tensor that contains the softmax cross entropy loss. Its type is the same as logits and its shape is the same as labels except that it does not have the last dimension of labels.
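The loss itself is the cross entropy between labels and softmax(logits) along the class dimension. A NumPy sketch using a shift-stabilized log-softmax (illustrative reference math, not the op's kernel):

```python
import numpy as np

def softmax_xent(labels, logits, axis=-1):
    """loss = -sum(labels * log_softmax(logits)) along the class axis."""
    shifted = logits - logits.max(axis=axis, keepdims=True)  # for stability
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
    return -(labels * log_softmax).sum(axis=axis)

# Two rows: a one-hot label and a soft (uniform) label distribution.
labels = np.array([[1.0, 0.0], [0.5, 0.5]])
logits = np.array([[0.0, 0.0], [1.0, 1.0]])
loss = softmax_xent(labels, logits)
```

Note the result has the shape of labels minus its class dimension, matching the Returns description, and is invariant to adding a constant to the logits.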
tensorflow.compat.v1.nn.softmax_cross_entropy_with_logits
tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2 Computes softmax cross entropy between logits and labels. (deprecated arguments) tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2( labels, logits, axis=None, name=None, dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: dim is deprecated, use axis instead Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both. Note: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect. If using exclusive labels (wherein one and only one class is true at a time), see sparse_softmax_cross_entropy_with_logits. Warning: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results. A common use case is to have logits and labels of shape [batch_size, num_classes], but higher dimensions are supported, with the axis argument specifying the class dimension. logits and labels must have the same dtype (either float16, float32, or float64). Backpropagation will happen into both logits and labels. To disallow backpropagation into labels, pass label tensors through tf.stop_gradient before feeding it to this function. Note that to avoid confusion, it is required to pass only named arguments to this function. Args labels Each vector along the class dimension should hold a valid probability distribution e.g. for the case in which labels are of shape [batch_size, num_classes], each row of labels[i] must be a valid probability distribution. 
logits Unscaled log probabilities. axis The class dimension. Defaults to -1, the last dimension. name A name for the operation (optional). dim Deprecated alias for axis. Returns A Tensor that contains the softmax cross entropy loss. Its type is the same as logits and its shape is the same as labels except that it does not have the last dimension of labels.
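The per-example loss this op computes is -sum(labels * log_softmax(logits)) along the class axis. A minimal pure-Python sketch of that math for a single example (illustrative only, not the TensorFlow kernel):

```python
import math

def softmax_cross_entropy(labels, logits):
    """Per-example loss: -sum(labels * log_softmax(logits))."""
    # Subtract the max logit before exponentiating for numerical stability.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    log_softmax = [x - log_sum_exp for x in logits]
    return -sum(l * ls for l, ls in zip(labels, log_softmax))

# A uniform label distribution over equal logits gives loss log(num_classes).
loss = softmax_cross_entropy([0.25] * 4, [1.0, 1.0, 1.0, 1.0])  # log(4)
```

Note the sketch takes the logits raw, as the warning above requires: applying softmax first and then feeding the result in would double-apply the normalization.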
tensorflow.compat.v1.nn.softmax_cross_entropy_with_logits_v2
tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits Computes sparse softmax cross entropy between logits and labels. tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits( _sentinel=None, labels=None, logits=None, name=None ) Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both. Note: For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see softmax_cross_entropy_with_logits_v2. Warning: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results. A common use case is to have logits of shape [batch_size, num_classes] and have labels of shape [batch_size], but higher dimensions are supported, in which case the dim-th dimension is assumed to be of size num_classes. logits must have the dtype of float16, float32, or float64, and labels must have the dtype of int32 or int64. Note that to avoid confusion, it is required to pass only named arguments to this function. Args _sentinel Used to prevent positional parameters. Internal, do not use. labels Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU. 
logits Per-label activations (typically a linear output) of shape [d_0, d_1, ..., d_{r-1}, num_classes] and dtype float16, float32, or float64. These activation energies are interpreted as unnormalized log probabilities. name A name for the operation (optional). Returns A Tensor of the same shape as labels and of the same type as logits with the softmax cross entropy loss. Raises ValueError If logits are scalars (need to have rank >= 1) or if the rank of the labels is not equal to the rank of the logits minus one.
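With a hard class index the dense sum collapses to a single term, -log_softmax(logits)[label]. A minimal pure-Python sketch for one minibatch entry (illustrative only, not the TensorFlow kernel):

```python
import math

def sparse_softmax_cross_entropy(label, logits):
    """Cross entropy for a single hard class index: -log_softmax(logits)[label]."""
    # log_softmax(logits)[label] == logits[label] - log_sum_exp(logits).
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - logits[label]

# Uniform logits: every class is equally likely, so the loss is log(num_classes).
loss = sparse_softmax_cross_entropy(0, [0.0, 0.0, 0.0])  # log(3)
```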
tensorflow.compat.v1.nn.sparse_softmax_cross_entropy_with_logits
tf.compat.v1.nn.static_bidirectional_rnn Creates a bidirectional recurrent neural network. (deprecated) tf.compat.v1.nn.static_bidirectional_rnn( cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True)), which is equivalent to this API Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given. Args cell_fw An instance of RNNCell, to be used for forward direction. cell_bw An instance of RNNCell, to be used for backward direction. inputs A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements. initial_state_fw (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size. initial_state_bw (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw. dtype (optional) The data type for the initial state. Required if either of the initial states are not provided. sequence_length (optional) An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences. 
scope VariableScope for the created subgraph; defaults to "bidirectional_rnn" Returns A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length T list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn. Raises TypeError If cell_fw or cell_bw is not an instance of RNNCell. ValueError If inputs is None or an empty list.
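The unroll-and-concatenate behavior described above can be sketched in plain Python. This is a toy sketch, not the TF implementation; sum_cell is a hypothetical accumulator cell, and Python list concatenation stands in for depth-concatenation:

```python
def static_bidirectional_rnn(cell_fw, cell_bw, inputs, init_fw, init_bw):
    """Run a forward and a backward unroll, then depth-concatenate outputs."""
    fw_outputs, state = [], init_fw
    for x in inputs:
        out, state = cell_fw(x, state)
        fw_outputs.append(out)
    state_fw = state
    bw_outputs, state = [], init_bw
    for x in reversed(inputs):
        out, state = cell_bw(x, state)
        bw_outputs.append(out)
    state_bw = state
    bw_outputs.reverse()  # re-align with forward time order before concatenating
    # List "+" plays the role of depth-concatenation along the feature axis.
    outputs = [f + b for f, b in zip(fw_outputs, bw_outputs)]
    return outputs, state_fw, state_bw

def sum_cell(x, state):  # hypothetical toy cell: output and state are a running sum
    new = state + sum(x)
    return [new], new

outputs, state_fw, state_bw = static_bidirectional_rnn(
    sum_cell, sum_cell, [[1.0], [2.0], [3.0]], 0.0, 0.0)
```

Each outputs[t] holds cell_fw.output_size + cell_bw.output_size features, matching the [time][batch][fw + bw] format described above (batch omitted here for brevity).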
tensorflow.compat.v1.nn.static_bidirectional_rnn
tf.compat.v1.nn.static_rnn Creates a recurrent neural network specified by RNNCell cell. (deprecated) tf.compat.v1.nn.static_rnn( cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.RNN(cell, unroll=True), which is equivalent to this API The simplest form of RNN network generated is: state = cell.zero_state(...) outputs = [] for input_ in inputs: output, state = cell(input_, state) outputs.append(output) return (outputs, state) However, a few other options are available: An initial state can be provided. If the sequence_length vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output. The dynamic calculation performed is, at time t for batch row b, (output, state)(b, t) = (t >= sequence_length(b)) ? (zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1)) Args cell An instance of RNNCell. inputs A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements. initial_state (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size. dtype (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype. sequence_length Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size [batch_size], values in [0, T). scope VariableScope for the created subgraph; defaults to "rnn". 
Returns A pair (outputs, state) where: outputs is a length T list of outputs (one for each input), or a nested tuple of such elements. state is the final state. Raises TypeError If cell is not an instance of RNNCell. ValueError If inputs is None or an empty list, or if the input depth (column size) cannot be inferred from inputs via shape inference.
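The sequence_length behavior described above (zero outputs past an example's length, with its last valid state carried through) can be sketched in plain Python. A toy sketch under simplifying assumptions, not the TF implementation; sum_cell is a hypothetical scalar accumulator cell and the batch is a list of scalars:

```python
def static_rnn(cell, inputs, initial_state, sequence_length):
    """Unroll cell over a length-T list of per-batch-row inputs.

    Past row b's sequence_length[b], emit zeros and freeze that row's state,
    mirroring the dynamic calculation formula in the docs above.
    """
    state = list(initial_state)  # one state entry per batch row
    outputs = []
    for t, x_t in enumerate(inputs):
        step_out = []
        for b, x in enumerate(x_t):
            if t >= sequence_length[b]:
                step_out.append(0.0)  # zeros(cell.output_size); state untouched
            else:
                out, state[b] = cell(x, state[b])
                step_out.append(out)
        outputs.append(step_out)
    return outputs, state

def sum_cell(x, state):  # hypothetical toy cell: running sum
    new = state + x
    return new, new

# Row 0 runs for all 3 steps; row 1 stops after 2, so its final state is 30.0.
outputs, final_state = static_rnn(
    sum_cell, [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]], [0.0, 0.0], [3, 2])
```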
tensorflow.compat.v1.nn.static_rnn
tf.compat.v1.nn.static_state_saving_rnn RNN that accepts a state saver for time-truncated RNN calculation. (deprecated) tf.compat.v1.nn.static_state_saving_rnn( cell, inputs, state_saver, state_name, sequence_length=None, scope=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.RNN(cell, stateful=True), which is equivalent to this API Args cell An instance of RNNCell. inputs A length T list of inputs, each a Tensor of shape [batch_size, input_size]. state_saver A state saver object with methods state and save_state. state_name Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., cell.state_size is a tuple) then state_name should be a tuple of strings having the same length as cell.state_size. Otherwise it should be a single string. sequence_length (optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length. scope VariableScope for the created subgraph; defaults to "rnn". Returns A pair (outputs, state) where: outputs is a length T list of outputs (one for each input). state is the final state. Raises TypeError If cell is not an instance of RNNCell. ValueError If inputs is None or an empty list, or if the arity and type of state_name does not match that of cell.state_size.
tensorflow.compat.v1.nn.static_state_saving_rnn
tf.compat.v1.nn.sufficient_statistics Calculate the sufficient statistics for the mean and variance of x. tf.compat.v1.nn.sufficient_statistics( x, axes, shift=None, keep_dims=None, name=None, keepdims=None ) These sufficient statistics are computed using the one pass algorithm on an input that's optionally shifted. See: https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data For example: t = [[1, 2, 3], [4, 5, 6]] sufficient_statistics(t, [1]) (<tf.Tensor: shape=(), dtype=int32, numpy=3>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([ 6, 15], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([14, 77], dtype=int32)>, None) sufficient_statistics(t, [-1]) (<tf.Tensor: shape=(), dtype=int32, numpy=3>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([ 6, 15], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([14, 77], dtype=int32)>, None) Args x A Tensor. axes Array of ints. Axes along which to compute mean and variance. As in Python, the axes can also be negative numbers. A negative axis is interpreted as counting from the end of the rank, i.e., axis + rank(values)-th dimension. shift A Tensor containing the value by which to shift the data for numerical stability, or None if no shift is to be performed. A shift close to the true mean provides the most numerically stable results. keep_dims If true, produce statistics with the same dimensionality as the input. name Name used to scope the operations that compute the sufficient stats. keepdims Alias for keep_dims. Returns Four Tensor objects of the same type as x: the count (number of elements to average over). the (possibly shifted) sum of the elements in the array. the (possibly shifted) sum of squares of the elements in the array. the shift by which the mean must be corrected or None if shift is None.
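The example above reduces t = [[1, 2, 3], [4, 5, 6]] over its last axis. A minimal pure-Python sketch of the four returned statistics for that single-axis, 2-D case (illustrative only, not the TF op):

```python
def sufficient_statistics(x, shift=None):
    """Count, (possibly shifted) sum, (possibly shifted) sum of squares, shift.

    x is a list of rows; statistics are reduced over the last axis.
    """
    count = len(x[0])  # number of elements averaged over, per row
    if shift is None:
        sums = [sum(row) for row in x]
        squares = [sum(v * v for v in row) for row in x]
        return count, sums, squares, None
    # Shifted one-pass form: accumulate (v - shift) and (v - shift)**2.
    sums = [sum(v - shift for v in row) for row in x]
    squares = [sum((v - shift) ** 2 for v in row) for row in x]
    return count, sums, squares, shift

# Matches the documented example: (3, [6, 15], [14, 77], None).
stats = sufficient_statistics([[1, 2, 3], [4, 5, 6]])
```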
tensorflow.compat.v1.nn.sufficient_statistics
tf.compat.v1.nn.weighted_cross_entropy_with_logits Computes a weighted cross entropy. (deprecated arguments) tf.compat.v1.nn.weighted_cross_entropy_with_logits( labels=None, logits=None, pos_weight=None, name=None, targets=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (targets). They will be removed in a future version. Instructions for updating: targets is deprecated, use labels instead This is like sigmoid_cross_entropy_with_logits() except that pos_weight allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error. The usual cross-entropy cost is defined as: labels * -log(sigmoid(logits)) + (1 - labels) * -log(1 - sigmoid(logits)) A value pos_weight > 1 decreases the false negative count, hence increasing the recall. Conversely, setting pos_weight < 1 decreases the false positive count and increases the precision. This can be seen from the fact that pos_weight is introduced as a multiplicative coefficient for the positive labels term in the loss expression: labels * -log(sigmoid(logits)) * pos_weight + (1 - labels) * -log(1 - sigmoid(logits)) For brevity, let x = logits, z = labels, q = pos_weight. The loss is: qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)) = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x))) = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x))) = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))) = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x)) = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)) Setting l = (1 + (q - 1) * z), to ensure stability and avoid overflow, the implementation uses (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0)) logits and labels must have the same type and shape. Args labels A Tensor of the same type and shape as logits. logits A Tensor of type float32 or float64. pos_weight A coefficient to use on the positive examples. name A name for the operation (optional).
targets Deprecated alias for labels. Returns A Tensor of the same shape as logits with the componentwise weighted logistic losses. Raises ValueError If logits and labels do not have the same shape.
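The stable form at the end of the derivation above can be checked element-wise in plain Python. A minimal sketch (illustrative only, not the TF kernel) for one scalar label z, logit x, and weight q:

```python
import math

def weighted_cross_entropy_with_logits(z, x, q):
    """Numerically stable form from the derivation:
    (1 - z) * x + l * (log(1 + exp(-|x|)) + max(-x, 0)), with l = 1 + (q - 1) * z.
    """
    l = 1.0 + (q - 1.0) * z
    # log1p(exp(-|x|)) never overflows; max(-x, 0) restores the sign-dependent term.
    return (1.0 - z) * x + l * (math.log1p(math.exp(-abs(x))) + max(-x, 0.0))

# Stays finite even for a huge logit, where the naive form would overflow.
loss = weighted_cross_entropy_with_logits(0.0, 1000.0, 2.0)
```

For moderate logits this agrees with the naive expression qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)), which is the point of the algebra above.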
tensorflow.compat.v1.nn.weighted_cross_entropy_with_logits
tf.compat.v1.nn.weighted_moments Returns the frequency-weighted mean and variance of x. tf.compat.v1.nn.weighted_moments( x, axes, frequency_weights, name=None, keep_dims=None, keepdims=None ) Args x A tensor. axes 1-d tensor of int32 values; these are the axes along which to compute mean and variance. frequency_weights A tensor of positive weights which can be broadcast with x. name Name used to scope the operation. keep_dims Produce moments with the same dimensionality as the input. keepdims Alias of keep_dims. Returns Two tensors: weighted_mean and weighted_variance.
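A frequency-weighted mean and variance treat each weight as a repetition count for its element. A minimal pure-Python sketch for a flat list of values (illustrative only, not the TF op):

```python
def weighted_moments(x, frequency_weights):
    """Frequency-weighted mean and variance of a flat list of values."""
    total_w = sum(frequency_weights)
    mean = sum(w * v for w, v in zip(frequency_weights, x)) / total_w
    variance = sum(w * (v - mean) ** 2
                   for w, v in zip(frequency_weights, x)) / total_w
    return mean, variance

# Weights [1, 1, 2] behave like the dataset [1, 2, 3, 3].
mean, variance = weighted_moments([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])
```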
tensorflow.compat.v1.nn.weighted_moments
tf.compat.v1.nn.xw_plus_b Computes matmul(x, weights) + biases. tf.compat.v1.nn.xw_plus_b( x, weights, biases, name=None ) Args x a 2D tensor. Dimensions typically: batch, in_units weights a 2D tensor. Dimensions typically: in_units, out_units biases a 1D tensor. Dimensions: out_units name A name for the operation (optional). If not specified "xw_plus_b" is used. Returns A 2-D Tensor computing matmul(x, weights) + biases. Dimensions typically: batch, out_units.
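The computation is a single matrix multiply plus a broadcast bias. A minimal pure-Python sketch over nested lists (illustrative only, not the TF op):

```python
def xw_plus_b(x, weights, biases):
    """matmul(x, weights) + biases for plain nested lists.

    x: [batch, in_units]; weights: [in_units, out_units]; biases: [out_units].
    """
    return [
        [sum(xi * wij for xi, wij in zip(row, col)) + b
         for col, b in zip(zip(*weights), biases)]  # iterate weight columns
        for row in x
    ]

# Identity weights just add the biases to each row.
out = xw_plus_b([[1.0, 2.0]], [[1.0, 0.0], [0.0, 1.0]], [10.0, 20.0])
```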
tensorflow.compat.v1.nn.xw_plus_b
tf.compat.v1.NodeDef A ProtocolMessage Attributes attr repeated AttrEntry attr device string device experimental_debug_info ExperimentalDebugInfo experimental_debug_info input repeated string input name string name op string op Child Classes class AttrEntry class ExperimentalDebugInfo
tensorflow.compat.v1.nodedef