tf.compat.v1.NodeDef.AttrEntry A ProtocolMessage Attributes key string key value AttrValue value
tensorflow.compat.v1.nodedef.attrentry
tf.compat.v1.NodeDef.ExperimentalDebugInfo A ProtocolMessage Attributes original_func_names repeated string original_func_names original_node_names repeated string original_node_names
tensorflow.compat.v1.nodedef.experimentaldebuginfo
tf.compat.v1.norm Computes the norm of vectors, matrices, and tensors. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.linalg.norm tf.compat.v1.norm( tensor, ord='euclidean', axis=None, keepdims=None, name=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead. This function can compute several different vector norms (the 1-norm, the Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and matrix norms (Frobenius, 1-norm, 2-norm and inf-norm). Args tensor Tensor of types float32, float64, complex64, complex128 ord Order of the norm. Supported values are 'fro', 'euclidean', 1, 2, np.inf and any positive real number yielding the corresponding p-norm. Default is 'euclidean', which is equivalent to the Frobenius norm if tensor is a matrix and equivalent to the 2-norm for vectors. Some restrictions apply: a) The Frobenius norm 'fro' is not defined for vectors, b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', 1, 2, np.inf are supported. See the description of axis on how to compute norms for a batch of vectors or matrices stored in a tensor. axis If axis is None (the default), the input is considered a vector and a single vector norm is computed over the entire set of values in the tensor, i.e. norm(tensor, ord=ord) is equivalent to norm(reshape(tensor, [-1]), ord=ord). If axis is a Python integer, the input is considered a batch of vectors, and axis determines the axis in tensor over which to compute vector norms. If axis is a 2-tuple of Python integers, the input is considered a batch of matrices, and axis determines the axes in tensor over which to compute a matrix norm. Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass axis=[-2,-1] instead of axis=None to make sure that matrix norms are computed. keepdims If True, the axes indicated in axis are kept with size 1. Otherwise, the dimensions in axis are removed from the output shape. name The name of the op. keep_dims Deprecated alias for keepdims. Returns output A Tensor of the same type as tensor, containing the vector or matrix norms. If keepdims is True then the rank of output is equal to the rank of tensor. Otherwise, if axis is None the output is a scalar; if axis is an integer, the rank of output is one less than the rank of tensor; if axis is a 2-tuple, the rank of output is two less than the rank of tensor. Raises ValueError If ord or axis is invalid. Numpy Compatibility Mostly equivalent to numpy.linalg.norm. Not supported: ord <= 0, 2-norm for matrices, nuclear norm. Other differences: a) If axis is None, treats the flattened tensor as a vector regardless of rank. b) Explicitly supports 'euclidean' norm as the default, including for higher order tensors.
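A minimal sketch (not from the original docs) of the two common cases, per-row vector norms and a whole-matrix Frobenius norm; the tensor x and the commented values are illustrative:

import tensorflow as tf

x = tf.constant([[3.0, 4.0], [6.0, 8.0]])
row_norms = tf.compat.v1.norm(x, ord='euclidean', axis=1)  # per-row 2-norms
fro = tf.compat.v1.norm(x, ord='fro', axis=[-2, -1])       # Frobenius norm of the matrix
with tf.compat.v1.Session() as sess:
    print(sess.run(row_norms))  # [ 5. 10.]
    print(sess.run(fro))        # ~11.18, i.e. sqrt(125)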
tensorflow.compat.v1.norm
tf.compat.v1.no_regularizer Use this function to prevent regularization of variables. tf.compat.v1.no_regularizer( _ )
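A hedged sketch of the usual pattern: a variable scope installs a regularizer for everything it creates, and tf.compat.v1.no_regularizer opts a nested scope out. The hand-rolled l2 helper and the scope/variable names are illustrative assumptions, not part of the API:

import tensorflow as tf

l2 = lambda t: 0.01 * tf.reduce_sum(tf.square(t))  # simple hand-rolled L2 penalty

with tf.compat.v1.variable_scope('layer', regularizer=l2):
    w = tf.compat.v1.get_variable('w', shape=[4, 4])  # picks up the scope's regularizer
    with tf.compat.v1.variable_scope('bias', regularizer=tf.compat.v1.no_regularizer):
        b = tf.compat.v1.get_variable('b', shape=[4])  # regularization suppressed

# Only w's penalty should land in the REGULARIZATION_LOSSES collection.
losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)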
tensorflow.compat.v1.no_regularizer
tf.compat.v1.ones_like Creates a tensor with all elements set to 1. tf.compat.v1.ones_like( tensor, dtype=None, name=None, optimize=True ) See also tf.ones. Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to 1. Optionally, you can specify a new type (dtype) for the returned tensor. For example: tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) tf.ones_like(tensor) # [[1, 1, 1], [1, 1, 1]] Args tensor A Tensor. dtype A type for the returned Tensor. Must be float32, float64, int8, uint8, int16, uint16, int32, int64, complex64, complex128 or bool. name A name for the operation (optional). optimize if true, attempt to statically determine the shape of 'tensor' and encode it as a constant. Returns A Tensor with all elements set to 1.
tensorflow.compat.v1.ones_like
tf.compat.v1.OptimizerOptions A ProtocolMessage Attributes do_common_subexpression_elimination bool do_common_subexpression_elimination do_constant_folding bool do_constant_folding do_function_inlining bool do_function_inlining global_jit_level GlobalJitLevel global_jit_level max_folded_constant_in_bytes int64 max_folded_constant_in_bytes opt_level Level opt_level Class Variables DEFAULT 0 GlobalJitLevel L0 -1 L1 0 Level OFF -1 ON_1 1 ON_2 2
tensorflow.compat.v1.optimizeroptions
tf.compat.v1.op_scope DEPRECATED. Same as name_scope above, just different argument order. @tf_contextlib.contextmanager tf.compat.v1.op_scope( values, name, default_name=None )
tensorflow.compat.v1.op_scope
tf.compat.v1.pad Pads a tensor. tf.compat.v1.pad( tensor, paddings, mode='CONSTANT', name=None, constant_values=0 ) This operation pads a tensor according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of tensor. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of tensor in that dimension, and paddings[D, 1] indicates how many values to add after the contents of tensor in that dimension. If mode is "REFLECT" then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D) - 1. If mode is "SYMMETRIC" then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D). The padded size of each dimension D of the output is: paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1] For example: t = tf.constant([[1, 2, 3], [4, 5, 6]]) paddings = tf.constant([[1, 1,], [2, 2]]) # 'constant_values' is 0. # rank of 't' is 2. tf.pad(t, paddings, "CONSTANT") # [[0, 0, 0, 0, 0, 0, 0], # [0, 0, 1, 2, 3, 0, 0], # [0, 0, 4, 5, 6, 0, 0], # [0, 0, 0, 0, 0, 0, 0]] tf.pad(t, paddings, "REFLECT") # [[6, 5, 4, 5, 6, 5, 4], # [3, 2, 1, 2, 3, 2, 1], # [6, 5, 4, 5, 6, 5, 4], # [3, 2, 1, 2, 3, 2, 1]] tf.pad(t, paddings, "SYMMETRIC") # [[2, 1, 1, 2, 3, 3, 2], # [2, 1, 1, 2, 3, 3, 2], # [5, 4, 4, 5, 6, 6, 5], # [5, 4, 4, 5, 6, 6, 5]] Args tensor A Tensor. paddings A Tensor of type int32. mode One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) name A name for the operation (optional). constant_values In "CONSTANT" mode, the scalar pad value to use. Must be same type as tensor. Returns A Tensor. Has the same type as tensor. Raises ValueError When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC".
tensorflow.compat.v1.pad
tf.compat.v1.parse_example Parses Example protos into a dict of tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.io.parse_example tf.compat.v1.parse_example( serialized, features, name=None, example_names=None ) Parses a number of serialized Example protos given in serialized. We refer to serialized as a batch with batch_size many entries of individual Example protos. example_names may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not None, example_names must be the same length as serialized. This op parses serialized examples into a dictionary mapping keys to Tensor, SparseTensor, and RaggedTensor objects. features is a dict from keys to VarLenFeature, SparseFeature, RaggedFeature, and FixedLenFeature objects. Each VarLenFeature and SparseFeature is mapped to a SparseTensor; each FixedLenFeature is mapped to a Tensor; and each RaggedFeature is mapped to a RaggedTensor. Each VarLenFeature maps to a SparseTensor of the specified type representing a ragged matrix. Its indices are [batch, index] where batch identifies the example in serialized, and index is the value's index in the list of values associated with that feature and example. Each SparseFeature maps to a SparseTensor of the specified type representing a Tensor of dense_shape [batch_size] + SparseFeature.size. Its values come from the feature in the examples with key value_key. Value values[i] comes from position k in the feature of the example at batch entry batch. This positional information is recorded in indices[i] as [batch, index_0, index_1, ...] where index_j is the k-th value of the feature in the example with key SparseFeature.index_key[j]. In other words, we split the indices (except the first index indicating the batch entry) of a SparseTensor by dimension into different features of the Example. Due to its complexity, a VarLenFeature should be preferred over a SparseFeature whenever possible. Each FixedLenFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(),) + df.shape. FixedLenFeature entries with a default_value are optional. With no default value, we will fail if that Feature is missing from any example in serialized. Each FixedLenSequenceFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(), None) + df.shape. All examples in serialized will be padded with default_value along the second dimension. Each RaggedFeature maps to a RaggedTensor of the specified type. It is formed by stacking the RaggedTensor for each example, where the RaggedTensor for each individual example is constructed using the tensors specified by RaggedTensor.values_key and RaggedTensor.partition. See the tf.io.RaggedFeature documentation for details and examples.
Examples: For example, if one expects a tf.float32 VarLenFeature ft and three serialized Examples are provided: serialized = [ features { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } }, features { }, features { feature { key: "ft" value { float_list { value: [3.0] } } } } ] then the output will look like: {"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]], values=[1.0, 2.0, 3.0], dense_shape=(3, 2)) } If instead a FixedLenSequenceFeature with default_value = -1.0 and shape=[] is used then the output will look like: {"ft": [[1.0, 2.0], [-1.0, -1.0], [3.0, -1.0]]} Given two Example input protos in serialized: [ features { feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } } feature { key: "gps" value { float_list { value: [] } } } }, features { feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } } feature { key: "dank" value { int64_list { value: [ 42 ] } } } feature { key: "gps" value { } } } ] And arguments example_names: ["input0", "input1"], features: { "kw": VarLenFeature(tf.string), "dank": VarLenFeature(tf.int64), "gps": VarLenFeature(tf.float32), } Then the output is a dictionary: { "kw": SparseTensor( indices=[[0, 0], [0, 1], [1, 0]], values=["knit", "big", "emmy"], dense_shape=[2, 2]), "dank": SparseTensor( indices=[[1, 0]], values=[42], dense_shape=[2, 1]), "gps": SparseTensor( indices=[], values=[], dense_shape=[2, 0]), } For dense results in two serialized Examples: [ features { feature { key: "age" value { int64_list { value: [ 0 ] } } } feature { key: "gender" value { bytes_list { value: [ "f" ] } } } }, features { feature { key: "age" value { int64_list { value: [] } } } feature { key: "gender" value { bytes_list { value: [ "f" ] } } } } ] We can use arguments: example_names: ["input0", "input1"], features: { "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), "gender": FixedLenFeature([], dtype=tf.string), } And the expected output is: { "age": [[0], [-1]], "gender": [["f"], ["f"]], } An alternative to VarLenFeature to obtain a SparseTensor is SparseFeature. For example, given two Example input protos in serialized: [ features { feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } } feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } } }, features { feature { key: "val" value { float_list { value: [ 0.0 ] } } } feature { key: "ix" value { int64_list { value: [ 42 ] } } } } ] And arguments example_names: ["input0", "input1"], features: { "sparse": SparseFeature( index_key="ix", value_key="val", dtype=tf.float32, size=100), } Then the output is a dictionary: { "sparse": SparseTensor( indices=[[0, 3], [0, 20], [1, 42]], values=[0.5, -1.0, 0.0], dense_shape=[2, 100]), } See the tf.io.RaggedFeature documentation for examples showing how RaggedFeature can be used to obtain RaggedTensors. Args serialized A vector (1-D Tensor) of strings, a batch of binary serialized Example protos. features A dict mapping feature keys to FixedLenFeature, VarLenFeature, SparseFeature, and RaggedFeature values. example_names A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch. name A name for this operation (optional). Returns A dict mapping feature keys to Tensor, SparseTensor, and RaggedTensor values. Raises ValueError if any feature is invalid.
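To make the first example above concrete, a runnable sketch; the make_example helper is an illustrative assumption, not part of the API:

import tensorflow as tf

def make_example(values):
    # Build a serialized Example with one variable-length float feature "ft".
    return tf.train.Example(features=tf.train.Features(feature={
        'ft': tf.train.Feature(float_list=tf.train.FloatList(value=values))
    })).SerializeToString()

serialized = [make_example([1.0, 2.0]), make_example([]), make_example([3.0])]
parsed = tf.compat.v1.parse_example(
    serialized, {'ft': tf.compat.v1.VarLenFeature(tf.float32)})
with tf.compat.v1.Session() as sess:
    # Prints a SparseTensorValue matching the first example above.
    print(sess.run(parsed['ft']))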
tensorflow.compat.v1.parse_example
tf.compat.v1.parse_single_example Parses a single Example proto. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.io.parse_single_example tf.compat.v1.parse_single_example( serialized, features, name=None, example_names=None ) Similar to parse_example, except: For dense tensors, the returned Tensor is identical to the output of parse_example, except there is no batch dimension, the output shape is the same as the shape given in dense_shape. For SparseTensors, the first (batch) column of the indices matrix is removed (the indices matrix is a column vector), the values vector is unchanged, and the first (batch_size) entry of the shape vector is removed (it is now a single element vector). One might see performance advantages by batching Example protos with parse_example instead of using this function directly. Args serialized A scalar string Tensor, a single serialized Example. features A dict mapping feature keys to FixedLenFeature or VarLenFeature values. name A name for this operation (optional). example_names (Optional) A scalar string Tensor, the associated name. Returns A dict mapping feature keys to Tensor and SparseTensor values. Raises ValueError if any feature is invalid.
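A minimal, self-contained sketch of the difference from parse_example (the Example contents are illustrative):

import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    'ft': tf.train.Feature(float_list=tf.train.FloatList(value=[1.0, 2.0]))
})).SerializeToString()
parsed = tf.compat.v1.parse_single_example(
    example, {'ft': tf.compat.v1.VarLenFeature(tf.float32)})
# parsed['ft'] is a 1-D SparseTensor with indices [[0], [1]] and values
# [1.0, 2.0]: no batch dimension, unlike parse_example.
with tf.compat.v1.Session() as sess:
    print(sess.run(parsed['ft']))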
tensorflow.compat.v1.parse_single_example
tf.compat.v1.placeholder Inserts a placeholder for a tensor that will be always fed. tf.compat.v1.placeholder( dtype, shape=None, name=None ) Key Point: This tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run(). For example: x = tf.compat.v1.placeholder(tf.float32, shape=(1024, 1024)) y = tf.matmul(x, x) with tf.compat.v1.Session() as sess: print(sess.run(y)) # ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) # Will succeed. Args dtype The type of elements in the tensor to be fed. shape The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name A name for the operation (optional). Returns A Tensor that may be used as a handle for feeding a value, but not evaluated directly. Raises RuntimeError if eager execution is enabled Eager Compatibility Placeholders are not compatible with eager execution.
tensorflow.compat.v1.placeholder
tf.compat.v1.placeholder_with_default A placeholder op that passes through input when its output is not fed. tf.compat.v1.placeholder_with_default( input, shape, name=None ) Args input A Tensor. The default value to produce when output is not fed. shape A tf.TensorShape or list of ints. The (possibly partial) shape of the tensor. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
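For example, a minimal sketch (values are illustrative):

import tensorflow as tf

default = tf.constant([[1.0, 2.0]])
x = tf.compat.v1.placeholder_with_default(default, shape=[None, 2])
y = x * 2.0
with tf.compat.v1.Session() as sess:
    print(sess.run(y))                                # default used: [[2. 4.]]
    print(sess.run(y, feed_dict={x: [[3.0, 4.0]]}))   # fed value:    [[6. 8.]]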
tensorflow.compat.v1.placeholder_with_default
tf.compat.v1.Print Prints a list of tensors. (deprecated) tf.compat.v1.Print( input_, data, message=None, first_n=None, summarize=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-08-20. Instructions for updating: Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode: sess = tf.compat.v1.Session() with sess.as_default(): tensor = tf.range(10) print_op = tf.print(tensor) with tf.control_dependencies([print_op]): out = tf.add(tensor, tensor) sess.run(out) This is an identity op (behaves like tf.identity) with the side effect of printing data when evaluating. Note: This op prints to the standard error. It is not currently compatible with jupyter notebook (printing to the notebook server's output, not into the notebook). Args input_ A tensor passed through this op. data A list of tensors to print out when op is evaluated. message A string, prefix of the error message. first_n Only log first_n number of times. Negative numbers log always; this is the default. summarize Only print this many entries of each tensor. If None, then a maximum of 3 elements are printed per input tensor. name A name for the operation (optional). Returns A Tensor. Has the same type and contents as input_.
tensorflow.compat.v1.print
Module: tf.compat.v1.profiler Public API for tf.profiler namespace. Classes class AdviceProto: A ProtocolMessage class GraphNodeProto: A ProtocolMessage class MultiGraphNodeProto: A ProtocolMessage class OpLogProto: A ProtocolMessage class ProfileOptionBuilder: Option Builder for Profiling API. class Profiler: TensorFlow multi-step profiler. Functions advise(...): Auto profile and advise. profile(...): Profile model. write_op_log(...): Log provided 'op_log', and add additional model information below.
tensorflow.compat.v1.profiler
tf.compat.v1.profiler.AdviceProto A ProtocolMessage Attributes checkers repeated CheckersEntry checkers Child Classes class Checker class CheckersEntry
tensorflow.compat.v1.profiler.adviceproto
tf.compat.v1.profiler.AdviceProto.Checker A ProtocolMessage Attributes reports repeated string reports
tensorflow.compat.v1.profiler.adviceproto.checker
tf.compat.v1.profiler.AdviceProto.CheckersEntry A ProtocolMessage Attributes key string key value Checker value
tensorflow.compat.v1.profiler.adviceproto.checkersentry
tf.compat.v1.profiler.advise Auto profile and advise. tf.compat.v1.profiler.advise( graph=None, run_meta=None, options=_DEFAULT_ADVISE_OPTIONS ) Builds profiles and automatically checks anomalies of various aspects. For more details: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/README.md Args graph tf.Graph. If None and eager execution is not enabled, use default graph. run_meta optional tensorflow.RunMetadata proto. It is necessary to support run time information profiling, such as time and memory. options see ALL_ADVICE example above. Default checks everything. Returns Returns AdviceProto proto
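A hedged end-to-end sketch, assuming a tiny graph built just for illustration; only the advise call itself is the point here:

import numpy as np
import tensorflow as tf

x = tf.compat.v1.placeholder(tf.float32, [32, 64])
w = tf.compat.v1.get_variable('w', [64, 8])
y = tf.matmul(x, w)
run_meta = tf.compat.v1.RunMetadata()
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # Collect run-time statistics into run_meta, then ask for advice.
    sess.run(y, feed_dict={x: np.ones([32, 64], np.float32)},
             options=tf.compat.v1.RunOptions(
                 trace_level=tf.compat.v1.RunOptions.FULL_TRACE),
             run_metadata=run_meta)
    tf.compat.v1.profiler.advise(sess.graph, run_meta=run_meta)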
tensorflow.compat.v1.profiler.advise
tf.compat.v1.profiler.GraphNodeProto A ProtocolMessage Attributes accelerator_exec_micros int64 accelerator_exec_micros children repeated GraphNodeProto children cpu_exec_micros int64 cpu_exec_micros devices repeated string devices exec_micros int64 exec_micros float_ops int64 float_ops input_shapes repeated InputShapesEntry input_shapes name string name output_bytes int64 output_bytes parameters int64 parameters peak_bytes int64 peak_bytes requested_bytes int64 requested_bytes residual_bytes int64 residual_bytes run_count int64 run_count shapes repeated TensorShapeProto shapes tensor_value TFProfTensorProto tensor_value total_accelerator_exec_micros int64 total_accelerator_exec_micros total_cpu_exec_micros int64 total_cpu_exec_micros total_definition_count int64 total_definition_count total_exec_micros int64 total_exec_micros total_float_ops int64 total_float_ops total_output_bytes int64 total_output_bytes total_parameters int64 total_parameters total_peak_bytes int64 total_peak_bytes total_requested_bytes int64 total_requested_bytes total_residual_bytes int64 total_residual_bytes total_run_count int64 total_run_count Child Classes class InputShapesEntry
tensorflow.compat.v1.profiler.graphnodeproto
tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry A ProtocolMessage Attributes key int32 key value TensorShapeProto value
tensorflow.compat.v1.profiler.graphnodeproto.inputshapesentry
tf.compat.v1.profiler.MultiGraphNodeProto A ProtocolMessage Attributes accelerator_exec_micros int64 accelerator_exec_micros children repeated MultiGraphNodeProto children cpu_exec_micros int64 cpu_exec_micros exec_micros int64 exec_micros float_ops int64 float_ops graph_nodes repeated GraphNodeProto graph_nodes name string name output_bytes int64 output_bytes parameters int64 parameters peak_bytes int64 peak_bytes requested_bytes int64 requested_bytes residual_bytes int64 residual_bytes total_accelerator_exec_micros int64 total_accelerator_exec_micros total_cpu_exec_micros int64 total_cpu_exec_micros total_exec_micros int64 total_exec_micros total_float_ops int64 total_float_ops total_output_bytes int64 total_output_bytes total_parameters int64 total_parameters total_peak_bytes int64 total_peak_bytes total_requested_bytes int64 total_requested_bytes total_residual_bytes int64 total_residual_bytes
tensorflow.compat.v1.profiler.multigraphnodeproto
tf.compat.v1.profiler.OpLogProto A ProtocolMessage Attributes id_to_string repeated IdToStringEntry id_to_string log_entries repeated OpLogEntry log_entries Child Classes class IdToStringEntry
tensorflow.compat.v1.profiler.oplogproto
tf.compat.v1.profiler.OpLogProto.IdToStringEntry A ProtocolMessage Attributes key int64 key value string value
tensorflow.compat.v1.profiler.oplogproto.idtostringentry
tf.compat.v1.profiler.profile Profile model. tf.compat.v1.profiler.profile( graph=None, run_meta=None, op_log=None, cmd='scope', options=_DEFAULT_PROFILE_OPTIONS ) Tutorials and examples can be found in: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/profiler/g3doc/python_api.md Args graph tf.Graph. If None and eager execution is not enabled, use default graph. run_meta optional tensorflow.RunMetadata proto. It is necessary to support run time information profiling, such as time and memory. op_log tensorflow.tfprof.OpLogProto proto. User can assign "types" to graph nodes with op_log. "types" allow users to flexibly group and account profiles using options['accounted_type_regexes']. cmd string. Either 'op', 'scope', 'graph' or 'code'. 'op' view organizes profile using operation type. (e.g. MatMul) 'scope' view organizes profile using graph node name scope. 'graph' view organizes profile using graph node inputs/outputs. 'code' view organizes profile using Python call stack. options A dict of options. See core/profiler/g3doc/options.md. Returns If cmd is 'scope' or 'graph', returns GraphNodeProto proto. If cmd is 'op' or 'code', returns MultiGraphNodeProto proto. Side effect: stdout/file/timeline.json depending on options['output']
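For instance, a small sketch profiling trainable-variable parameters in the 'scope' view; the variable is an illustrative assumption:

import tensorflow as tf

w = tf.compat.v1.get_variable('w', [64, 8])  # 512 trainable parameters
opts = tf.compat.v1.profiler.ProfileOptionBuilder.trainable_variables_parameter()
stats = tf.compat.v1.profiler.profile(
    tf.compat.v1.get_default_graph(), cmd='scope', options=opts)
print(stats.total_parameters)  # GraphNodeProto field; 512 for this graph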
tensorflow.compat.v1.profiler.profile
tf.compat.v1.profiler.ProfileOptionBuilder Option Builder for Profiling API. tf.compat.v1.profiler.ProfileOptionBuilder( options=None ) For tutorial on the options, see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md # Users can use pre-built options: opts = ( tf.profiler.ProfileOptionBuilder.trainable_variables_parameter()) # Or, build your own options: opts = (tf.compat.v1.profiler.ProfileOptionBuilder() .with_max_depth(10) .with_min_micros(1000) .select(['accelerator_micros']) .with_stdout_output() .build()) # Or customize the pre-built options: opts = (tf.compat.v1.profiler.ProfileOptionBuilder( tf.profiler.ProfileOptionBuilder.time_and_memory()) .with_displaying_options(show_name_regexes=['.*rnn.*']) .build()) # Finally, profiling with the options: _ = tf.compat.v1.profiler.profile(tf.compat.v1.get_default_graph(), run_meta=run_meta, cmd='scope', options=opts) Args options Optional initial option dict to start with. Methods account_displayed_op_only View source account_displayed_op_only( is_true ) Whether to account only the statistics of displayed profiler nodes. Args is_true If true, only account statistics of nodes eventually displayed by the outputs. Otherwise, a node's statistics are accounted by its parents as long as its types match 'account_type_regexes', even if it is hidden from the output, say, by hide_name_regexes. Returns self build View source build() Build a profiling option. Returns A dict of profiling options. float_operation View source @staticmethod float_operation() Options used to profile float operations. Please see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/profile_model_architecture.md on the caveats of calculating float operations. Returns A dict of profiling options. order_by View source order_by( attribute ) Order the displayed profiler nodes based on an attribute. Supported attributes include micros, bytes, occurrence, params, etc. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md Args attribute An attribute the profiler node has. Returns self select View source select( attributes ) Select the attributes to display. See https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md for supported attributes. Args attributes A list of attributes the profiler node has. Returns self time_and_memory View source @staticmethod time_and_memory( min_micros=1, min_bytes=1, min_accelerator_micros=0, min_cpu_micros=0, min_peak_bytes=0, min_residual_bytes=0, min_output_bytes=0 ) Show operation time and memory consumptions. Args min_micros Only show profiler nodes with execution time no less than this. It sums accelerator and cpu times. min_bytes Only show profiler nodes requested to allocate no less bytes than this. min_accelerator_micros Only show profiler nodes that spend no less than this time on the accelerator (e.g. GPU). min_cpu_micros Only show profiler nodes that spend no less than this time on CPU. min_peak_bytes Only show profiler nodes using no less than this many bytes at peak (high watermark). For profiler nodes consisting of multiple graph nodes, it sums the graph nodes' peak_bytes. min_residual_bytes Only show profiler nodes that have no less than this many bytes not de-allocated after Compute() ends. For profiler nodes consisting of multiple graph nodes, it sums the graph nodes' residual_bytes. min_output_bytes Only show profiler nodes that have no less than this many bytes of output.
The output is not necessarily allocated by these profiler nodes. Returns A dict of profiling options. trainable_variables_parameter View source @staticmethod trainable_variables_parameter() Options used to profile trainable variable parameters. Normally used together with 'scope' view. Returns A dict of profiling options. with_accounted_types View source with_accounted_types( account_type_regexes ) Selectively counting statistics based on node types. Here, 'types' means the profiler nodes' properties. The profiler by default considers device name (e.g. /job:xx/.../device:GPU:0) and operation type (e.g. MatMul) as profiler nodes' properties. Users can also associate customized 'types' with profiler nodes through the OpLogProto proto. For example, users can select profiler nodes placed on gpu:0 with: account_type_regexes=['.*gpu:0.*'] If none of a node's properties match the specified regexes, the node is neither displayed nor accounted. Args account_type_regexes A list of regexes specifying the types. Returns self. with_empty_output View source with_empty_output() Do not generate side-effect outputs. with_file_output View source with_file_output( outfile ) Print the result to a file. with_max_depth View source with_max_depth( max_depth ) Set the maximum depth of display. The depth depends on profiling view. For 'scope' view, it's the depth of name scope hierarchy (tree), for 'op' view, it's the number of operation types (list), etc. Args max_depth Maximum depth of the data structure to display. Returns self with_min_execution_time View source with_min_execution_time( min_micros=0, min_accelerator_micros=0, min_cpu_micros=0 ) Only show profiler nodes consuming no less than 'min_micros'. Args min_micros Only show profiler nodes with execution time no less than this. It sums accelerator and cpu times. min_accelerator_micros Only show profiler nodes that spend no less than this time on the accelerator (e.g. GPU). min_cpu_micros Only show profiler nodes that spend no less than this time on CPU. Returns self with_min_float_operations View source with_min_float_operations( min_float_ops ) Only show profiler nodes consuming no less than 'min_float_ops'. Please see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/profile_model_architecture.md on the caveats of calculating float operations. Args min_float_ops Only show profiler nodes with float operations no less than this. Returns self with_min_memory View source with_min_memory( min_bytes=0, min_peak_bytes=0, min_residual_bytes=0, min_output_bytes=0 ) Only show profiler nodes consuming no less than 'min_bytes'. Args min_bytes Only show profiler nodes requested to allocate no less bytes than this. min_peak_bytes Only show profiler nodes using no less than this many bytes at peak (high watermark). For profiler nodes consisting of multiple graph nodes, it sums the graph nodes' peak_bytes. min_residual_bytes Only show profiler nodes that have no less than this many bytes not de-allocated after Compute() ends. For profiler nodes consisting of multiple graph nodes, it sums the graph nodes' residual_bytes. min_output_bytes Only show profiler nodes that have no less than this many bytes of output. The output is not necessarily allocated by these profiler nodes. Returns self with_min_occurrence View source with_min_occurrence( min_occurrence ) Only show profiler nodes including no less than 'min_occurrence' graph nodes. A "node" means a profiler output node, which can be a python line (code view), an operation type (op view), or a graph node (graph/scope view).
A python line includes all graph nodes created by that line, while an operation type includes all graph nodes of that type. Args min_occurrence Only show nodes including no less than this. Returns self with_min_parameters View source with_min_parameters( min_params ) Only show profiler nodes holding no less than 'min_params' parameters. 'Parameters' normally refers to the weights of TensorFlow variables. It reflects the 'capacity' of models. Args min_params Only show profiler nodes holding no fewer parameters than this. Returns self with_node_names View source with_node_names( start_name_regexes=None, show_name_regexes=None, hide_name_regexes=None, trim_name_regexes=None ) Regular expressions used to select profiler nodes to display. After 'with_accounted_types' is evaluated, 'with_node_names' are evaluated as follows: For a profile data structure, the profiler first finds the profiler nodes matching 'start_name_regexes', and starts displaying profiler nodes from there. Then, if a node matches 'show_name_regexes' and doesn't match 'hide_name_regexes', it's displayed. If a node matches 'trim_name_regexes', the profiler stops further searching that branch. Args start_name_regexes list of node name regexes to start displaying. show_name_regexes list of node name regexes to display. hide_name_regexes list of node name regexes that should be hidden. trim_name_regexes list of node name regexes from where to stop. Returns self with_pprof_output View source with_pprof_output( pprof_file ) Generate a pprof profile gzip file. To use the pprof file: pprof -png --nodecount=100 --sample_index=1 <pprof_file> Args pprof_file filename for output, usually suffixed with .pb.gz. Returns self. with_stdout_output View source with_stdout_output() Print the result to stdout. with_step View source with_step( step ) Which profile step to use for profiling. The 'step' here refers to the step defined by Profiler.add_step() API. Args step When multiple steps of profiles are available, select which step's profile to use. If -1, use average of all available steps. Returns self with_timeline_output View source with_timeline_output( timeline_file ) Generate a timeline json file.
tensorflow.compat.v1.profiler.profileoptionbuilder
tf.compat.v1.profiler.Profiler TensorFlow multi-step profiler. tf.compat.v1.profiler.Profiler( graph=None, op_log=None ) https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/README.md Typical use case: # Currently we are only allowed to create 1 profiler per process. profiler = Profiler(sess.graph) for i in xrange(total_steps): if i % 10000 == 0: run_meta = tf.compat.v1.RunMetadata() _ = sess.run(..., options=tf.compat.v1.RunOptions( trace_level=tf.RunOptions.FULL_TRACE), run_metadata=run_meta) profiler.add_step(i, run_meta) # Profile the parameters of your model. profiler.profile_name_scope(options=(option_builder.ProfileOptionBuilder .trainable_variables_parameter())) # Or profile the timing of your model operations. opts = option_builder.ProfileOptionBuilder.time_and_memory() profiler.profile_operations(options=opts) # Or you can generate a timeline: opts = (option_builder.ProfileOptionBuilder( option_builder.ProfileOptionBuilder.time_and_memory()) .with_step(i) .with_timeline_output(filename).build()) profiler.profile_graph(options=opts) else: _ = sess.run(...) # Auto detect problems and generate advice. profiler.advise() Args graph tf.Graph. If None and eager execution is not enabled, use default graph. op_log optional. tensorflow::tfprof::OpLogProto proto. Used to define extra op types. Methods add_step View source add_step( step, run_meta ) Add statistics of a step. Args step int, An id used to group one or more different run_meta together. When profiling with the profile_xxx APIs, user can use the step id in the options to profile these run_meta together. run_meta RunMetadata proto that contains statistics of a session run. advise View source advise( options ) Automatically detect problems and generate reports. Args options A dict of options. See ALL_ADVICE example above. Returns An Advise proto that contains the reports from all checkers. profile_graph View source profile_graph( options ) Profile the statistics of graph nodes, organized by dataflow graph. Args options A dict of options. See core/profiler/g3doc/options.md. Returns a GraphNodeProto that records the results. profile_name_scope View source profile_name_scope( options ) Profile the statistics of graph nodes, organized by name scope. Args options A dict of options. See core/profiler/g3doc/options.md. Returns a GraphNodeProto that records the results. profile_operations View source profile_operations( options ) Profile the statistics of the Operation types (e.g. MatMul, Conv2D). Args options A dict of options. See core/profiler/g3doc/options.md. Returns a MultiGraphNodeProto that records the results. profile_python View source profile_python( options ) Profile the statistics of the Python code. By default, it shows the call stack from root. To avoid redundant output, you may use options to filter as below: options['show_name_regexes'] = ['.*my_code.py.*'] Args options A dict of options. See core/profiler/g3doc/options.md. Returns a MultiGraphNodeProto that records the results. serialize_to_string View source serialize_to_string() Serialize the ProfileProto to a binary string. Users can write it to file for offline analysis by the tfprof command line or graphical interface. Returns ProfileProto binary string.
tensorflow.compat.v1.profiler.profiler
tf.compat.v1.profiler.write_op_log Log provided 'op_log', and add additional model information below. tf.compat.v1.profiler.write_op_log( graph, log_dir, op_log=None, run_meta=None, add_trace=True ) The API also assigns ops in tf.compat.v1.trainable_variables() an op type called '_trainable_variables'. The API also logs 'flops' statistics for ops with op.RegisterStatistics() defined. flops calculation depends on Tensor shapes defined in 'graph', which might not be complete. 'run_meta', if provided, completes the shape information with best effort. Args graph tf.Graph. If None and eager execution is not enabled, use default graph. log_dir directory to write the log file. op_log (Optional) OpLogProto proto to be written. If not provided, a new one is created. run_meta (Optional) RunMetadata proto that helps flops computation using run time shape information. add_trace Whether to add python code trace information. Used to support "code" view.
tensorflow.compat.v1.profiler.write_op_log
Module: tf.compat.v1.python_io Python functions for directly manipulating TFRecord-formatted files. Classes class TFRecordCompressionType: The type of compression for the record. class TFRecordOptions: Options used for manipulating TFRecord files. class TFRecordWriter: A class to write records to a TFRecords file. Functions tf_record_iterator(...): An iterator that reads the records from a TFRecords file. (deprecated)
tensorflow.compat.v1.python_io
tf.compat.v1.py_func Wraps a python function and uses it as a TensorFlow op. tf.compat.v1.py_func( func, inp, Tout, stateful=True, name=None ) Given a python function func, which takes numpy arrays as its arguments and returns numpy arrays as its outputs, wrap this function as an operation in a TensorFlow graph. The following snippet constructs a simple TensorFlow graph that invokes the np.sinh() NumPy function as an operation in the graph: def my_func(x): # x will be a numpy array with the contents of the placeholder below return np.sinh(x) input = tf.compat.v1.placeholder(tf.float32) y = tf.compat.v1.py_func(my_func, [input], tf.float32) Note: The tf.compat.v1.py_func() operation has the following known limitations: The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment. The operation must run in the same address space as the Python program that calls tf.compat.v1.py_func(). If you are using distributed TensorFlow, you must run a tf.distribute.Server in the same process as the program that calls tf.compat.v1.py_func() and you must pin the created operation to a device in that server (e.g. using with tf.device():). Args func A Python function, which accepts ndarray objects as arguments and returns a list of ndarray objects (or a single ndarray). This function must accept as many arguments as there are tensors in inp, and these argument types will match the corresponding tf.Tensor objects in inp. The returned ndarrays must match the number and types defined in Tout. Important Note: Input and output numpy ndarrays of func are not guaranteed to be copies. In some cases their underlying memory will be shared with the corresponding TensorFlow tensors. In-place modification or storing func input or return values in python datastructures without explicit (np.)copy can have non-deterministic consequences. inp A list of Tensor objects. Tout A list or tuple of tensorflow data types or a single tensorflow data type if there is only one, indicating what func returns. stateful (Boolean.) If True, the function should be considered stateful. If a function is stateless, when given the same input it will return the same output and have no observable side effects. Optimizations such as common subexpression elimination are only performed on stateless operations. name A name for the operation (optional). Returns A list of Tensor or a single Tensor which func computes.
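Completing the snippet above into a runnable sketch (the fed values and printed output are illustrative):

import numpy as np
import tensorflow as tf

def my_func(x):
    return np.sinh(x)  # runs as ordinary Python/NumPy inside the graph

inp = tf.compat.v1.placeholder(tf.float32)
y = tf.compat.v1.py_func(my_func, [inp], tf.float32)
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={inp: [0.0, 1.0]}))  # [0. 1.1752012]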
tensorflow.compat.v1.py_func
Module: tf.compat.v1.quantization Public API for tf.quantization namespace. Functions dequantize(...): Dequantize the 'input' tensor into a float or bfloat16 Tensor. fake_quant_with_min_max_args(...): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. fake_quant_with_min_max_args_gradient(...): Compute gradients for a FakeQuantWithMinMaxArgs operation. fake_quant_with_min_max_vars(...): Fake-quantize the 'inputs' tensor of type float via global float scalars fake_quant_with_min_max_vars_gradient(...): Compute gradients for a FakeQuantWithMinMaxVars operation. fake_quant_with_min_max_vars_per_channel(...): Fake-quantize the 'inputs' tensor of type float via per-channel floats fake_quant_with_min_max_vars_per_channel_gradient(...): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. quantize(...): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. quantize_and_dequantize(...): Quantizes then dequantizes a tensor. (deprecated) quantize_and_dequantize_v2(...): Quantizes then dequantizes a tensor. quantized_concat(...): Concatenates quantized tensors along one dimension.
tensorflow.compat.v1.quantization
tf.compat.v1.quantize_v2 Please use tf.quantization.quantize instead. tf.compat.v1.quantize_v2( input, min_range, max_range, T, mode='MIN_COMBINED', name=None, round_mode='HALF_AWAY_FROM_ZERO', narrow_range=False, axis=None, ensure_minimum_range=0.01 )
tensorflow.compat.v1.quantize_v2
Module: tf.compat.v1.queue Public API for tf.queue namespace. Classes class FIFOQueue: A queue implementation that dequeues elements in first-in first-out order. class PaddingFIFOQueue: A FIFOQueue that supports batching variable-sized tensors by padding. class PriorityQueue: A queue implementation that dequeues elements in prioritized order. class QueueBase: Base class for queue implementations. class RandomShuffleQueue: A queue implementation that dequeues elements in a random order.
tensorflow.compat.v1.queue
Module: tf.compat.v1.ragged Ragged Tensors. This package defines ops for manipulating ragged tensors (tf.RaggedTensor), which are tensors with non-uniform shapes. In particular, each RaggedTensor has one or more ragged dimensions, which are dimensions whose slices may have different lengths. For example, the inner (column) dimension of rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is ragged, since the column slices (rt[0, :], ..., rt[4, :]) have different lengths. For a more detailed description of ragged tensors, see the tf.RaggedTensor class documentation and the Ragged Tensor Guide. Additional ops that support RaggedTensor Arguments that accept RaggedTensors are marked in bold. tf.batch_gather(params, indices, name=None) tf.bitwise.bitwise_and(x, y, name=None) tf.bitwise.bitwise_or(x, y, name=None) tf.bitwise.bitwise_xor(x, y, name=None) tf.bitwise.invert(x, name=None) tf.bitwise.left_shift(x, y, name=None) tf.bitwise.right_shift(x, y, name=None) tf.cast(x, dtype, name=None) tf.clip_by_value(t, clip_value_min, clip_value_max, name=None) tf.concat(values, axis, name='concat') tf.debugging.check_numerics(tensor, message, name=None) tf.dtypes.complex(real, imag, name=None) tf.dtypes.saturate_cast(value, dtype, name=None) tf.dynamic_partition(data, partitions, num_partitions, name=None) tf.expand_dims(input, axis=None, name=None, dim=None) tf.gather_nd(params, indices, name=None, batch_dims=0) tf.gather(params, indices, validate_indices=None, name=None, axis=None, batch_dims=0) tf.identity(input, name=None) tf.io.decode_base64(input, name=None) tf.io.decode_compressed(bytes, compression_type='', name=None) tf.io.encode_base64(input, pad=False, name=None) tf.math.abs(x, name=None) tf.math.acos(x, name=None) tf.math.acosh(x, name=None) tf.math.add_n(inputs, name=None) tf.math.add(x, y, name=None) tf.math.angle(input, name=None) tf.math.asin(x, name=None) tf.math.asinh(x, name=None) tf.math.atan2(y, x, name=None) tf.math.atan(x, name=None) tf.math.atanh(x, name=None) tf.math.ceil(x, name=None) tf.math.conj(x, name=None) tf.math.cos(x, name=None) tf.math.cosh(x, name=None) tf.math.digamma(x, name=None) tf.math.divide_no_nan(x, y, name=None) tf.math.divide(x, y, name=None) tf.math.equal(x, y, name=None) tf.math.erf(x, name=None) tf.math.erfc(x, name=None) tf.math.erfinv(x, name=None) tf.math.exp(x, name=None) tf.math.expm1(x, name=None) tf.math.floor(x, name=None) tf.math.floordiv(x, y, name=None) tf.math.floormod(x, y, name=None) tf.math.greater_equal(x, y, name=None) tf.math.greater(x, y, name=None) tf.math.imag(input, name=None) tf.math.is_finite(x, name=None) tf.math.is_inf(x, name=None) tf.math.is_nan(x, name=None) tf.math.less_equal(x, y, name=None) tf.math.less(x, y, name=None) tf.math.lgamma(x, name=None) tf.math.log1p(x, name=None) tf.math.log_sigmoid(x, name=None) tf.math.log(x, name=None) tf.math.logical_and(x, y, name=None) tf.math.logical_not(x, name=None) tf.math.logical_or(x, y, name=None) tf.math.logical_xor(x, y, name='LogicalXor') tf.math.maximum(x, y, name=None) tf.math.minimum(x, y, name=None) tf.math.multiply(x, y, name=None) tf.math.ndtri(x, name=None) tf.math.negative(x, name=None) tf.math.not_equal(x, y, name=None) tf.math.pow(x, y, name=None) tf.math.real(input, name=None) tf.math.reciprocal(x, name=None) tf.math.reduce_all(input_tensor, axis=None, keepdims=False, name=None) tf.math.reduce_any(input_tensor, axis=None, keepdims=False, name=None) tf.math.reduce_max(input_tensor, axis=None, keepdims=False, name=None) tf.math.reduce_mean(input_tensor, axis=None, keepdims=False, 
name=None) tf.math.reduce_min(input_tensor, axis=None, keepdims=False, name=None) tf.math.reduce_prod(input_tensor, axis=None, keepdims=False, name=None) tf.math.reduce_sum(input_tensor, axis=None, keepdims=False, name=None) tf.math.rint(x, name=None) tf.math.round(x, name=None) tf.math.rsqrt(x, name=None) tf.math.sign(x, name=None) tf.math.sin(x, name=None) tf.math.sinh(x, name=None) tf.math.sqrt(x, name=None) tf.math.square(x, name=None) tf.math.squared_difference(x, y, name=None) tf.math.subtract(x, y, name=None) tf.math.tan(x, name=None) tf.math.truediv(x, y, name=None) tf.math.unsorted_segment_max(data, segment_ids, num_segments, name=None) tf.math.unsorted_segment_mean(data, segment_ids, num_segments, name=None) tf.math.unsorted_segment_min(data, segment_ids, num_segments, name=None) tf.math.unsorted_segment_prod(data, segment_ids, num_segments, name=None) tf.math.unsorted_segment_sqrt_n(data, segment_ids, num_segments, name=None) tf.math.unsorted_segment_sum(data, segment_ids, num_segments, name=None) tf.nn.dropout(x, keep_prob=None, noise_shape=None, seed=None, name=None, rate=None) tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None) tf.ones_like(tensor, dtype=None, name=None, optimize=True) tf.print(*inputs, **kwargs) tf.rank(input, name=None) tf.realdiv(x, y, name=None) tf.reverse(tensor, axis, name=None) tf.size(input, name=None, out_type=tf.int32) tf.squeeze(input, axis=None, name=None, squeeze_dims=None) tf.stack(values, axis=0, name='stack') tf.strings.as_string(input, precision=-1, scientific=False, shortest=False, width=-1, fill='', name=None) tf.strings.format(template, inputs, placeholder='{}', summarize=3, name=None) tf.strings.join(inputs, separator='', name=None) tf.strings.length(input, name=None, unit='BYTE') tf.strings.reduce_join(inputs, axis=None, keepdims=False, separator='', name=None) tf.strings.regex_full_match(input, pattern, name=None) tf.strings.regex_replace(input, pattern, rewrite, replace_global=True, name=None) tf.strings.strip(input, name=None) tf.strings.substr(input, pos, len, name=None, unit='BYTE') tf.strings.to_hash_bucket_fast(input, num_buckets, name=None) tf.strings.to_hash_bucket_strong(input, num_buckets, key, name=None) tf.strings.to_hash_bucket(input, num_buckets, name=None) tf.strings.to_number(input, out_type=tf.float32, name=None) tf.strings.unicode_script(input, name=None) tf.tile(input, multiples, name=None) tf.truncatediv(x, y, name=None) tf.truncatemod(x, y, name=None) tf.where(condition, x=None, y=None, name=None) tf.zeros_like(tensor, dtype=None, name=None, optimize=True) Classes class RaggedTensorValue: Represents the value of a RaggedTensor. Functions boolean_mask(...): Applies a boolean mask to data without flattening the mask dimensions. constant(...): Constructs a constant RaggedTensor from a nested Python list. constant_value(...): Constructs a RaggedTensorValue from a nested Python list. cross(...): Generates feature cross from a list of tensors. cross_hashed(...): Generates hashed feature cross from a list of tensors. map_flat_values(...): Applies op to the values of one or more RaggedTensors. placeholder(...): Creates a placeholder for a tf.RaggedTensor that will always be fed. range(...): Returns a RaggedTensor containing the specified sequences of numbers. row_splits_to_segment_ids(...): Generates the segmentation corresponding to a RaggedTensor row_splits.
segment_ids_to_row_splits(...): Generates the RaggedTensor row_splits corresponding to a segmentation. stack(...): Stacks a list of rank-R tensors into one rank-(R+1) RaggedTensor. stack_dynamic_partitions(...): Stacks dynamic partitions of a Tensor or RaggedTensor.
tensorflow.compat.v1.ragged
tf.compat.v1.ragged.constant_value Constructs a RaggedTensorValue from a nested Python list. tf.compat.v1.ragged.constant_value( pylist, dtype=None, ragged_rank=None, inner_shape=None, row_splits_dtype='int64' ) Warning: This function returns a RaggedTensorValue, not a RaggedTensor. If you wish to construct a constant RaggedTensor, use ragged.constant(...) instead. Example: tf.compat.v1.ragged.constant_value([[1, 2], [3], [4, 5, 6]]) tf.RaggedTensorValue(values=array([1, 2, 3, 4, 5, 6]), row_splits=array([0, 2, 3, 6])) All scalar values in pylist must have the same nesting depth K, and the returned RaggedTensorValue will have rank K. If pylist contains no scalar values, then K is one greater than the maximum depth of empty lists in pylist. All scalar values in pylist must be compatible with dtype. Args pylist A nested list, tuple or np.ndarray. Any nested element that is not a list or tuple must be a scalar value compatible with dtype. dtype numpy.dtype. The type of elements for the returned RaggedTensor. If not specified, then a default is chosen based on the scalar values in pylist. ragged_rank An integer specifying the ragged rank of the returned RaggedTensorValue. Must be nonnegative and less than K. Defaults to max(0, K - 1) if inner_shape is not specified. Defaults to max(0, K - 1 - len(inner_shape)) if inner_shape is specified. inner_shape A tuple of integers specifying the shape for individual inner values in the returned RaggedTensorValue. Defaults to () if ragged_rank is not specified. If ragged_rank is specified, then a default is chosen based on the contents of pylist. row_splits_dtype data type for the constructed RaggedTensorValue's row_splits. One of numpy.int32 or numpy.int64. Returns A tf.RaggedTensorValue or numpy.array with rank K and the specified ragged_rank, containing the values from pylist. Raises ValueError If the scalar values in pylist have inconsistent nesting depth; or if ragged_rank or inner_shape are incompatible with pylist.
tensorflow.compat.v1.ragged.constant_value
tf.compat.v1.ragged.placeholder Creates a placeholder for a tf.RaggedTensor that will always be fed. tf.compat.v1.ragged.placeholder( dtype, ragged_rank, value_shape=None, name=None ) Key Point: This ragged tensor will produce an error if evaluated. Its value must be fed using the feed_dict optional argument to Session.run(), Tensor.eval(), or Operation.run(). Args dtype The data type for the RaggedTensor. ragged_rank The ragged rank for the RaggedTensor. value_shape The shape for individual flat values in the RaggedTensor. name A name for the operation (optional). Returns A RaggedTensor that may be used as a handle for feeding a value, but not evaluated directly. Raises RuntimeError if eager execution is enabled Eager Compatibility Placeholders are not compatible with eager execution.
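A sketch of the intended feed pattern, assuming (as the docs state) that a RaggedTensorValue can be passed through feed_dict; tf.reduce_sum is one of the ops listed above as supporting RaggedTensors:

import numpy as np
import tensorflow as tf

rt = tf.compat.v1.ragged.placeholder(tf.float32, ragged_rank=1)
total = tf.reduce_sum(rt)
with tf.compat.v1.Session() as sess:
    value = tf.compat.v1.ragged.constant_value([[1.0, 2.0], [3.0]],
                                               dtype=np.float32)
    print(sess.run(total, feed_dict={rt: value}))  # 6.0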
tensorflow.compat.v1.ragged.placeholder
tf.compat.v1.ragged.RaggedTensorValue Represents the value of a RaggedTensor. tf.compat.v1.ragged.RaggedTensorValue( values, row_splits ) Warning: RaggedTensorValue should only be used in graph mode; in eager mode, the tf.RaggedTensor class contains its value directly. See tf.RaggedTensor for a description of ragged tensors. Args values A numpy array of any type and shape; or a RaggedTensorValue. row_splits A 1-D int32 or int64 numpy array. Attributes dtype The numpy dtype of values in this tensor. flat_values The innermost values array for this ragged tensor value. nested_row_splits The row_splits for all ragged dimensions in this ragged tensor value. ragged_rank The number of ragged dimensions in this ragged tensor value. row_splits The split indices for the ragged tensor value. shape A tuple indicating the shape of this RaggedTensorValue. values The concatenated values for all rows in this tensor. Methods to_list View source to_list() Returns this ragged tensor value as a nested Python list.
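A small sketch constructing a value by hand; row_splits [0, 4, 4, 6] delimits rows values[0:4], values[4:4] (empty), and values[4:6]:

import numpy as np
import tensorflow as tf

rtv = tf.compat.v1.ragged.RaggedTensorValue(
    values=np.array([3, 1, 4, 1, 5, 9]),
    row_splits=np.array([0, 4, 4, 6], dtype=np.int64))
print(rtv.ragged_rank)  # 1
print(rtv.to_list())    # [[3, 1, 4, 1], [], [5, 9]]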
tensorflow.compat.v1.ragged.raggedtensorvalue
Module: tf.compat.v1.random Public API for tf.random namespace. Modules experimental module: Public API for tf.random.experimental namespace. Classes class Algorithm: An enumeration. class Generator: Random-number generator. Functions all_candidate_sampler(...): Generate the set of all classes. categorical(...): Draws samples from a categorical distribution. create_rng_state(...): Creates a RNG state from an integer or a vector. fixed_unigram_candidate_sampler(...): Samples a set of classes using the provided (fixed) base distribution. gamma(...): Draws shape samples from each of the given Gamma distribution(s). get_global_generator(...): Retrieves the global generator. get_seed(...): Returns the local seeds an operation should use given an op-specific seed. learned_unigram_candidate_sampler(...): Samples a set of classes from a distribution learned during training. log_uniform_candidate_sampler(...): Samples a set of classes using a log-uniform (Zipfian) base distribution. multinomial(...): Draws samples from a multinomial distribution. (deprecated) normal(...): Outputs random values from a normal distribution. poisson(...): Draws shape samples from each of the given Poisson distribution(s). set_global_generator(...): Replaces the global generator with another Generator object. set_random_seed(...): Sets the graph-level random seed for the default graph. shuffle(...): Randomly shuffles a tensor along its first dimension. stateless_binomial(...): Outputs deterministic pseudorandom values from a binomial distribution. stateless_categorical(...): Draws deterministic pseudorandom samples from a categorical distribution. stateless_gamma(...): Outputs deterministic pseudorandom values from a gamma distribution. stateless_multinomial(...): Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated) stateless_normal(...): Outputs deterministic pseudorandom values from a normal distribution. stateless_parameterized_truncated_normal(...): Outputs random values from a truncated normal distribution. stateless_poisson(...): Outputs deterministic pseudorandom values from a Poisson distribution. stateless_truncated_normal(...): Outputs deterministic pseudorandom values, truncated normally distributed. stateless_uniform(...): Outputs deterministic pseudorandom values from a uniform distribution. truncated_normal(...): Outputs random values from a truncated normal distribution. uniform(...): Outputs random values from a uniform distribution. uniform_candidate_sampler(...): Samples a set of classes using a uniform base distribution.
tensorflow.compat.v1.random
Module: tf.compat.v1.random.experimental Public API for tf.random.experimental namespace. Classes class Algorithm: An enumeration. class Generator: Random-number generator. Functions create_rng_state(...): Creates a RNG state from an integer or a vector. get_global_generator(...): Retrieves the global generator. set_global_generator(...): Replaces the global generator with another Generator object. stateless_fold_in(...): Folds in data to an RNG seed to form a new RNG seed. stateless_split(...): Splits an RNG seed into num new seeds by adding a leading axis.
tensorflow.compat.v1.random.experimental
tf.compat.v1.random.stateless_multinomial Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated) tf.compat.v1.random.stateless_multinomial( logits, num_samples, seed, output_dtype=tf.dtypes.int64, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.random.stateless_categorical instead. This is a stateless version of tf.random.categorical: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. Example: # samples has shape [1, 5], where each value is either 0 or 1 with equal # probability. samples = tf.random.stateless_categorical( tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17]) Args logits 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log-probabilities for all classes. num_samples 0-D. Number of independent samples to draw for each row slice. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) output_dtype integer type to use for the output. Defaults to int64. name Optional name for the operation. Returns The drawn samples of shape [batch_size, num_samples].
tensorflow.compat.v1.random.stateless_multinomial
tf.compat.v1.random_normal_initializer Initializer that generates tensors with a normal distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.random_normal tf.compat.v1.random_normal_initializer( mean=0.0, stddev=1.0, seed=None, dtype=tf.dtypes.float32 ) Args mean a python scalar or a scalar tensor. Mean of the random values to generate. stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior. dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, partition_info=None ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. If not provided use the initializer dtype. partition_info Optional information about the possible partitioning of a tensor.
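A minimal usage sketch (hypothetical variable name and shape), assuming a TF1-style graph where the initializer is handed to tf.compat.v1.get_variable:

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
init = tf.random_normal_initializer(mean=0.0, stddev=0.05, seed=1)
w = tf.get_variable("w", shape=[3, 4], initializer=init)  # filled on initialization

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # draws the normal samples
    print(sess.run(w))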
tensorflow.compat.v1.random_normal_initializer
tf.compat.v1.random_poisson Draws shape samples from each of the given Poisson distribution(s). View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.poisson tf.compat.v1.random_poisson( lam, shape, dtype=tf.dtypes.float32, seed=None, name=None ) lam is the rate parameter describing the distribution(s). Example: samples = tf.random.poisson([0.5, 1.5], [10]) # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents # the samples drawn from each distribution samples = tf.random.poisson([12.2, 3.3], [7, 5]) # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] # represents the 7x5 samples drawn from each of the two distributions Args lam A Tensor or Python value or N-D array of type dtype. lam provides the rate parameter(s) describing the Poisson distribution(s) to sample. shape A 1-D integer Tensor or Python array. The shape of the output samples to be drawn per "rate"-parameterized distribution. dtype The type of the output: float16, float32, float64, int32 or int64. seed A Python integer. Used to create a random seed for the distributions. See tf.random.set_seed for behavior. name Optional name for the operation. Returns samples a Tensor of shape tf.concat([shape, tf.shape(lam)], axis=0) with values of type dtype.
tensorflow.compat.v1.random_poisson
tf.compat.v1.random_uniform_initializer Initializer that generates tensors with a uniform distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.random_uniform tf.compat.v1.random_uniform_initializer( minval=0, maxval=None, seed=None, dtype=tf.dtypes.float32 ) Args minval A python scalar or a scalar tensor. Lower bound of the range of random values to generate. maxval A python scalar or a scalar tensor. Upper bound of the range of random values to generate. Defaults to 1 for float types. seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior. dtype Default data type, used if no dtype argument is provided when calling the initializer. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, partition_info=None ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. If not provided use the initializer dtype. partition_info Optional information about the possible partitioning of a tensor.
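A minimal sketch (hypothetical bounds and shape) of calling the initializer directly; __call__ returns a tensor drawn uniformly from [minval, maxval), and for float types omitting maxval defaults the upper bound to 1:

import tensorflow.compat.v1 as tf

init = tf.random_uniform_initializer(minval=-0.1, maxval=0.1, seed=1)
t = init(shape=[2, 2])                     # tensor with values in [-0.1, 0.1)
default = tf.random_uniform_initializer()  # float range defaults to [0, 1)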
tensorflow.compat.v1.random_uniform_initializer
Module: tf.compat.v1.raw_ops Public API for tf.raw_ops namespace. Functions Abort(...): Raise an exception to abort the process when called. Abs(...): Computes the absolute value of a tensor. AccumulateNV2(...): Returns the element-wise sum of a list of tensors. AccumulatorApplyGradient(...): Applies a gradient to a given accumulator. AccumulatorNumAccumulated(...): Returns the number of gradients aggregated in the given accumulators. AccumulatorSetGlobalStep(...): Updates the accumulator with a new value for global_step. AccumulatorTakeGradient(...): Extracts the average gradient in the given ConditionalAccumulator. Acos(...): Computes acos of x element-wise. Acosh(...): Computes inverse hyperbolic cosine of x element-wise. Add(...): Returns x + y element-wise. AddManySparseToTensorsMap(...): Add an N-minibatch SparseTensor to a SparseTensorsMap, return N handles. AddN(...): Add all input tensors element-wise. AddSparseToTensorsMap(...): Add a SparseTensor to a SparseTensorsMap, return its handle. AddV2(...): Returns x + y element-wise. AdjustContrast(...): Deprecated. Disallowed in GraphDef version >= 2. AdjustContrastv2(...): Adjust the contrast of one or more images. AdjustHue(...): Adjust the hue of one or more images. AdjustSaturation(...): Adjust the saturation of one or more images. All(...): Computes the "logical and" of elements across dimensions of a tensor. AllCandidateSampler(...): Generates labels for candidate sampling with a learned unigram distribution. AllToAll(...): An Op to exchange data across TPU replicas. Angle(...): Returns the argument of a complex number. AnonymousIterator(...): A container for an iterator resource. AnonymousIteratorV2(...): A container for an iterator resource. AnonymousMemoryCache(...) AnonymousMultiDeviceIterator(...): A container for a multi device iterator resource. AnonymousRandomSeedGenerator(...) AnonymousSeedGenerator(...) Any(...): Computes the "logical or" of elements across dimensions of a tensor. ApplyAdaMax(...): Update '*var' according to the AdaMax algorithm. ApplyAdadelta(...): Update '*var' according to the adadelta scheme. ApplyAdagrad(...): Update '*var' according to the adagrad scheme. ApplyAdagradDA(...): Update '*var' according to the proximal adagrad scheme. ApplyAdagradV2(...): Update '*var' according to the adagrad scheme. ApplyAdam(...): Update '*var' according to the Adam algorithm. ApplyAddSign(...): Update '*var' according to the AddSign update. ApplyCenteredRMSProp(...): Update '*var' according to the centered RMSProp algorithm. ApplyFtrl(...): Update '*var' according to the Ftrl-proximal scheme. ApplyFtrlV2(...): Update '*var' according to the Ftrl-proximal scheme. ApplyGradientDescent(...): Update '*var' by subtracting 'alpha' * 'delta' from it. ApplyMomentum(...): Update '*var' according to the momentum scheme. ApplyPowerSign(...): Update '*var' according to the PowerSign update. ApplyProximalAdagrad(...): Update '*var' and '*accum' according to FOBOS with Adagrad learning rate. ApplyProximalGradientDescent(...): Update '*var' as FOBOS algorithm with fixed learning rate. ApplyRMSProp(...): Update '*var' according to the RMSProp algorithm. ApproximateEqual(...): Returns the truth value of abs(x-y) < tolerance element-wise. ArgMax(...): Returns the index with the largest value across dimensions of a tensor. ArgMin(...): Returns the index with the smallest value across dimensions of a tensor. AsString(...): Converts each entry in the given tensor to strings.
Asin(...): Computes the trigonometric inverse sine of x element-wise. Asinh(...): Computes inverse hyperbolic sine of x element-wise. Assert(...): Asserts that the given condition is true. AssertCardinalityDataset(...) AssertNextDataset(...): A transformation that asserts which transformations happen next. Assign(...): Update 'ref' by assigning 'value' to it. AssignAdd(...): Update 'ref' by adding 'value' to it. AssignAddVariableOp(...): Adds a value to the current value of a variable. AssignSub(...): Update 'ref' by subtracting 'value' from it. AssignSubVariableOp(...): Subtracts a value from the current value of a variable. AssignVariableOp(...): Assigns a new value to a variable. Atan(...): Computes the trigonometric inverse tangent of x element-wise. Atan2(...): Computes arctangent of y/x element-wise, respecting signs of the arguments. Atanh(...): Computes inverse hyperbolic tangent of x element-wise. AudioSpectrogram(...): Produces a visualization of audio data over time. AudioSummary(...): Outputs a Summary protocol buffer with audio. AudioSummaryV2(...): Outputs a Summary protocol buffer with audio. AutoShardDataset(...): Creates a dataset that shards the input dataset. AvgPool(...): Performs average pooling on the input. AvgPool3D(...): Performs 3D average pooling on the input. AvgPool3DGrad(...): Computes gradients of average pooling function. AvgPoolGrad(...): Computes gradients of the average pooling function. BandedTriangularSolve(...) Barrier(...): Defines a barrier that persists across different graph executions. BarrierClose(...): Closes the given barrier. BarrierIncompleteSize(...): Computes the number of incomplete elements in the given barrier. BarrierInsertMany(...): For each key, assigns the respective value to the specified component. BarrierReadySize(...): Computes the number of complete elements in the given barrier. BarrierTakeMany(...): Takes the given number of completed elements from a barrier. Batch(...): Batches all input tensors nondeterministically. BatchCholesky(...) BatchCholeskyGrad(...) BatchDataset(...): Creates a dataset that batches batch_size elements from input_dataset. BatchDatasetV2(...): Creates a dataset that batches batch_size elements from input_dataset. BatchFFT(...) BatchFFT2D(...) BatchFFT3D(...) BatchFunction(...): Batches all the input tensors to the computation done by the function. BatchIFFT(...) BatchIFFT2D(...) BatchIFFT3D(...) BatchMatMul(...): Multiplies slices of two tensors in batches. BatchMatMulV2(...): Multiplies slices of two tensors in batches. BatchMatrixBandPart(...) BatchMatrixDeterminant(...) BatchMatrixDiag(...) BatchMatrixDiagPart(...) BatchMatrixInverse(...) BatchMatrixSetDiag(...) BatchMatrixSolve(...) BatchMatrixSolveLs(...) BatchMatrixTriangularSolve(...) BatchNormWithGlobalNormalization(...): Batch normalization. BatchNormWithGlobalNormalizationGrad(...): Gradients for batch normalization. BatchSelfAdjointEig(...) BatchSelfAdjointEigV2(...) BatchSvd(...) BatchToSpace(...): BatchToSpace for 4-D tensors of type T. BatchToSpaceND(...): BatchToSpace for N-D tensors of type T. BesselI0(...) BesselI0e(...) BesselI1(...) BesselI1e(...) BesselJ0(...) BesselJ1(...) BesselK0(...) BesselK0e(...) BesselK1(...) BesselK1e(...) BesselY0(...) BesselY1(...) Betainc(...): Compute the regularized incomplete beta integral \(I_x(a, b)\). BiasAdd(...): Adds bias to value. BiasAddGrad(...): The backward operation for "BiasAdd" on the "bias" tensor. BiasAddV1(...): Adds bias to value.
Bincount(...): Counts the number of occurrences of each value in an integer array. Bitcast(...): Bitcasts a tensor from one type to another without copying data. BitwiseAnd(...): Elementwise computes the bitwise AND of x and y. BitwiseOr(...): Elementwise computes the bitwise OR of x and y. BitwiseXor(...): Elementwise computes the bitwise XOR of x and y. BlockLSTM(...): Computes the LSTM cell forward propagation for all the time steps. BlockLSTMGrad(...): Computes the LSTM cell backward propagation for the entire time sequence. BlockLSTMGradV2(...): Computes the LSTM cell backward propagation for the entire time sequence. BlockLSTMV2(...): Computes the LSTM cell forward propagation for all the time steps. BoostedTreesAggregateStats(...): Aggregates the summary of accumulated stats for the batch. BoostedTreesBucketize(...): Bucketize each feature based on bucket boundaries. BoostedTreesCalculateBestFeatureSplit(...): Calculates gains for each feature and returns the best possible split information for the feature. BoostedTreesCalculateBestFeatureSplitV2(...): Calculates gains for each feature and returns the best possible split information for each node. However, if no split is found, then no split information is returned for that node. BoostedTreesCalculateBestGainsPerFeature(...): Calculates gains for each feature and returns the best possible split information for the feature. BoostedTreesCenterBias(...): Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering. BoostedTreesCreateEnsemble(...): Creates a tree ensemble model and returns a handle to it. BoostedTreesCreateQuantileStreamResource(...): Create the Resource for Quantile Streams. BoostedTreesDeserializeEnsemble(...): Deserializes a serialized tree ensemble config and replaces current tree BoostedTreesEnsembleResourceHandleOp(...): Creates a handle to a BoostedTreesEnsembleResource BoostedTreesExampleDebugOutputs(...): Debugging/model interpretability outputs for each example. BoostedTreesFlushQuantileSummaries(...): Flush the quantile summaries from each quantile stream resource. BoostedTreesGetEnsembleStates(...): Retrieves the tree ensemble resource stamp token, number of trees and growing statistics. BoostedTreesMakeQuantileSummaries(...): Makes the summary of quantiles for the batch. BoostedTreesMakeStatsSummary(...): Makes the summary of accumulated stats for the batch. BoostedTreesPredict(...): Runs multiple additive regression ensemble predictors on input instances and BoostedTreesQuantileStreamResourceAddSummaries(...): Add the quantile summaries to each quantile stream resource. BoostedTreesQuantileStreamResourceDeserialize(...): Deserialize bucket boundaries and ready flag into current QuantileAccumulator. BoostedTreesQuantileStreamResourceFlush(...): Flush the summaries for a quantile stream resource. BoostedTreesQuantileStreamResourceGetBucketBoundaries(...): Generate the bucket boundaries for each feature based on accumulated summaries. BoostedTreesQuantileStreamResourceHandleOp(...): Creates a handle to a BoostedTreesQuantileStreamResource. BoostedTreesSerializeEnsemble(...): Serializes the tree ensemble to a proto. BoostedTreesSparseAggregateStats(...): Aggregates the summary of accumulated stats for the batch. BoostedTreesSparseCalculateBestFeatureSplit(...): Calculates gains for each feature and returns the best possible split information for the feature. 
BoostedTreesTrainingPredict(...): Runs multiple additive regression ensemble predictors on input instances and BoostedTreesUpdateEnsemble(...): Updates the tree ensemble by either adding a layer to the last tree being grown BoostedTreesUpdateEnsembleV2(...): Updates the tree ensemble by adding a layer to the last tree being grown BroadcastArgs(...): Return the shape of s0 op s1 with broadcast. BroadcastGradientArgs(...): Return the reduction indices for computing gradients of s0 op s1 with broadcast. BroadcastTo(...): Broadcast an array for a compatible shape. Bucketize(...): Bucketizes 'input' based on 'boundaries'. BytesProducedStatsDataset(...): Records the bytes size of each element of input_dataset in a StatsAggregator. CSRSparseMatrixComponents(...): Reads out the CSR components at batch index. CSRSparseMatrixToDense(...): Convert a (possibly batched) CSRSparseMatrix to dense. CSRSparseMatrixToSparseTensor(...): Converts a (possibly batched) CSRSparseMatrix to a SparseTensor. CSVDataset(...) CSVDatasetV2(...) CTCBeamSearchDecoder(...): Performs beam search decoding on the logits given in input. CTCGreedyDecoder(...): Performs greedy decoding on the logits given in inputs. CTCLoss(...): Calculates the CTC Loss (log probability) for each batch entry. Also calculates CTCLossV2(...): Calculates the CTC Loss (log probability) for each batch entry. Also calculates CacheDataset(...): Creates a dataset that caches elements from input_dataset. CacheDatasetV2(...) Case(...): An n-way switch statement which calls a single branch function. Cast(...): Cast x of type SrcT to y of DstT. Ceil(...): Returns element-wise smallest integer not less than x. CheckNumerics(...): Checks a tensor for NaN and Inf values. CheckNumericsV2(...): Checks a tensor for NaN, -Inf and +Inf values. Cholesky(...): Computes the Cholesky decomposition of one or more square matrices. CholeskyGrad(...): Computes the reverse mode backpropagated gradient of the Cholesky algorithm. ChooseFastestBranchDataset(...) ChooseFastestDataset(...) ClipByValue(...): Clips tensor values to a specified min and max. CloseSummaryWriter(...) CollectiveBcastRecv(...): Receives a tensor value broadcast from another device. CollectiveBcastSend(...): Broadcasts a tensor value to one or more other devices. CollectiveGather(...): Mutually accumulates multiple tensors of identical type and shape. CollectiveGatherV2(...): Mutually accumulates multiple tensors of identical type and shape. CollectivePermute(...): An Op to permute tensors across replicated TPU instances. CollectiveReduce(...): Mutually reduces multiple tensors of identical type and shape. CollectiveReduceV2(...): Mutually reduces multiple tensors of identical type and shape. CombinedNonMaxSuppression(...): Greedily selects a subset of bounding boxes in descending order of score, CompareAndBitpack(...): Compare values of input to threshold and pack resulting bits into a uint8. Complex(...): Converts two real numbers to a complex number. ComplexAbs(...): Computes the complex absolute value of a tensor. CompressElement(...): Compresses a dataset element. ComputeAccidentalHits(...): Computes the ids of the positions in sampled_candidates that match true_labels. ComputeBatchSize(...): Computes the static batch size of a dataset sans partial batches. Concat(...): Concatenates tensors along one dimension. ConcatOffset(...): Computes offsets of concat inputs within its output. ConcatV2(...): Concatenates tensors along one dimension.
ConcatenateDataset(...): Creates a dataset that concatenates input_dataset with another_dataset. ConditionalAccumulator(...): A conditional accumulator for aggregating gradients. ConfigureDistributedTPU(...): Sets up the centralized structures for a distributed TPU system. ConfigureTPUEmbedding(...): Sets up TPUEmbedding in a distributed TPU system. Conj(...): Returns the complex conjugate of a complex number. ConjugateTranspose(...): Shuffle dimensions of x according to a permutation and conjugate the result. Const(...): Returns a constant tensor. ConsumeMutexLock(...): This op consumes a lock created by MutexLock. ControlTrigger(...): Does nothing. Serves as a control trigger for scheduling. Conv2D(...): Computes a 2-D convolution given 4-D input and filter tensors. Conv2DBackpropFilter(...): Computes the gradients of convolution with respect to the filter. Conv2DBackpropInput(...): Computes the gradients of convolution with respect to the input. Conv3D(...): Computes a 3-D convolution given 5-D input and filter tensors. Conv3DBackpropFilter(...): Computes the gradients of 3-D convolution with respect to the filter. Conv3DBackpropFilterV2(...): Computes the gradients of 3-D convolution with respect to the filter. Conv3DBackpropInput(...): Computes the gradients of 3-D convolution with respect to the input. Conv3DBackpropInputV2(...): Computes the gradients of 3-D convolution with respect to the input. Copy(...): Copy a tensor from CPU-to-CPU or GPU-to-GPU. CopyHost(...): Copy a tensor to host. Cos(...): Computes cos of x element-wise. Cosh(...): Computes hyperbolic cosine of x element-wise. CountUpTo(...): Increments 'ref' until it reaches 'limit'. CreateSummaryDbWriter(...) CreateSummaryFileWriter(...) CropAndResize(...): Extracts crops from the input image tensor and resizes them. CropAndResizeGradBoxes(...): Computes the gradient of the crop_and_resize op wrt the input boxes tensor. CropAndResizeGradImage(...): Computes the gradient of the crop_and_resize op wrt the input image tensor. Cross(...): Compute the pairwise cross product. CrossReplicaSum(...): An Op to sum inputs across replicated TPU instances. CudnnRNN(...): An RNN backed by cuDNN. CudnnRNNBackprop(...): Backprop step of CudnnRNN. CudnnRNNBackpropV2(...): Backprop step of CudnnRNN. CudnnRNNBackpropV3(...): Backprop step of CudnnRNNV3. CudnnRNNCanonicalToParams(...): Converts CudnnRNN params from canonical form to usable form. CudnnRNNCanonicalToParamsV2(...): Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM. CudnnRNNParamsSize(...): Computes size of weights that can be used by a Cudnn RNN model. CudnnRNNParamsToCanonical(...): Retrieves CudnnRNN params in canonical form. CudnnRNNParamsToCanonicalV2(...): Retrieves CudnnRNN params in canonical form. It supports the projection in LSTM. CudnnRNNV2(...): An RNN backed by cuDNN. CudnnRNNV3(...): An RNN backed by cuDNN. Cumprod(...): Compute the cumulative product of the tensor x along axis. Cumsum(...): Compute the cumulative sum of the tensor x along axis. CumulativeLogsumexp(...): Compute the cumulative log-sum-exp of the tensor x along axis. DataFormatDimMap(...): Returns the dimension index in the destination data format given the one in DataFormatVecPermute(...): Permute input tensor from src_format to dst_format. DataServiceDataset(...): Creates a dataset that reads data from the tf.data service. DatasetCardinality(...): Returns the cardinality of input_dataset. DatasetFromGraph(...): Creates a dataset from the given graph_def.
DatasetToGraph(...): Returns a serialized GraphDef representing input_dataset. DatasetToGraphV2(...): Returns a serialized GraphDef representing input_dataset. DatasetToSingleElement(...): Outputs the single element from the given dataset. DatasetToTFRecord(...): Writes the given dataset to the given file using the TFRecord format. Dawsn(...) DebugGradientIdentity(...): Identity op for gradient debugging. DebugGradientRefIdentity(...): Identity op for gradient debugging. DebugIdentity(...): Provides an identity mapping of the non-Ref type input tensor for debugging. DebugIdentityV2(...): Debug Identity V2 Op. DebugNanCount(...): Debug NaN Value Counter Op. DebugNumericSummary(...): Debug Numeric Summary Op. DebugNumericSummaryV2(...): Debug Numeric Summary V2 Op. DecodeAndCropJpeg(...): Decode and Crop a JPEG-encoded image to a uint8 tensor. DecodeBase64(...): Decode web-safe base64-encoded strings. DecodeBmp(...): Decode the first frame of a BMP-encoded image to a uint8 tensor. DecodeCSV(...): Convert CSV records to tensors. Each column maps to one tensor. DecodeCompressed(...): Decompress strings. DecodeGif(...): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. DecodeImage(...): Function for decode_bmp, decode_gif, decode_jpeg, and decode_png. DecodeJSONExample(...): Convert JSON-encoded Example records to binary protocol buffer strings. DecodeJpeg(...): Decode a JPEG-encoded image to a uint8 tensor. DecodePaddedRaw(...): Reinterpret the bytes of a string as a vector of numbers. DecodePng(...): Decode a PNG-encoded image to a uint8 or uint16 tensor. DecodeProtoV2(...): The op extracts fields from a serialized protocol buffers message into tensors. DecodeRaw(...): Reinterpret the bytes of a string as a vector of numbers. DecodeWav(...): Decode a 16-bit PCM WAV file to a float tensor. DeepCopy(...): Makes a copy of x. DeleteIterator(...): A container for an iterator resource. DeleteMemoryCache(...) DeleteMultiDeviceIterator(...): A container for an iterator resource. DeleteRandomSeedGenerator(...) DeleteSeedGenerator(...) DeleteSessionTensor(...): Delete the tensor specified by its handle in the session. DenseBincount(...): Counts the number of occurrences of each value in an integer array. DenseCountSparseOutput(...): Performs sparse-output bin counting for a tf.tensor input. DenseToCSRSparseMatrix(...): Converts a dense tensor to a (possibly batched) CSRSparseMatrix. DenseToDenseSetOperation(...): Applies set operation along last dimension of 2 Tensor inputs. DenseToSparseBatchDataset(...): Creates a dataset that batches input elements into a SparseTensor. DenseToSparseSetOperation(...): Applies set operation along last dimension of Tensor and SparseTensor. DepthToSpace(...): DepthToSpace for tensors of type T. DepthwiseConv2dNative(...): Computes a 2-D depthwise convolution given 4-D input and filter tensors. DepthwiseConv2dNativeBackpropFilter(...): Computes the gradients of depthwise convolution with respect to the filter. DepthwiseConv2dNativeBackpropInput(...): Computes the gradients of depthwise convolution with respect to the input. Dequantize(...): Dequantize the 'input' tensor into a float or bfloat16 Tensor. DeserializeIterator(...): Converts the given variant tensor to an iterator and stores it in the given resource. DeserializeManySparse(...): Deserialize and concatenate SparseTensors from a serialized minibatch. DeserializeSparse(...): Deserialize SparseTensor objects. DestroyResourceOp(...): Deletes the resource specified by the handle. 
DestroyTemporaryVariable(...): Destroys the temporary variable and returns its final value. DeviceIndex(...): Return the index of the device the op runs on. Diag(...): Returns a diagonal tensor with given diagonal values. DiagPart(...): Returns the diagonal part of the tensor. Digamma(...): Computes Psi, the derivative of Lgamma (the log of the absolute value of Dilation2D(...): Computes the grayscale dilation of 4-D input and 3-D filter tensors. Dilation2DBackpropFilter(...): Computes the gradient of morphological 2-D dilation with respect to the filter. Dilation2DBackpropInput(...): Computes the gradient of morphological 2-D dilation with respect to the input. DirectedInterleaveDataset(...): A substitute for InterleaveDataset on a fixed list of N datasets. Div(...): Returns x / y element-wise. DivNoNan(...): Returns 0 if the denominator is zero. DrawBoundingBoxes(...): Draw bounding boxes on a batch of images. DrawBoundingBoxesV2(...): Draw bounding boxes on a batch of images. DummyIterationCounter(...) DummyMemoryCache(...) DummySeedGenerator(...) DynamicPartition(...): Partitions data into num_partitions tensors using indices from partitions. DynamicStitch(...): Interleave the values from the data tensors into a single tensor. EagerPyFunc(...): Eagerly executes a python function to compute func(input)->output. The EditDistance(...): Computes the (possibly normalized) Levenshtein Edit Distance. Eig(...): Computes the eigen decomposition of one or more square matrices. Einsum(...): Tensor contraction according to Einstein summation convention. Elu(...): Computes exponential linear: exp(features) - 1 if < 0, features otherwise. EluGrad(...): Computes gradients for the exponential linear (Elu) operation. Empty(...): Creates a tensor with the given shape. EmptyTensorList(...): Creates and returns an empty tensor list. EncodeBase64(...): Encode strings into web-safe base64 format. EncodeJpeg(...): JPEG-encode an image. EncodeJpegVariableQuality(...): JPEG encode input image with provided compression quality. EncodePng(...): PNG-encode an image. EncodeProto(...): The op serializes protobuf messages provided in the input tensors. EncodeWav(...): Encode audio data using the WAV file format. EnqueueTPUEmbeddingIntegerBatch(...): An op that enqueues a list of input batch tensors to TPUEmbedding. EnqueueTPUEmbeddingRaggedTensorBatch(...): Eases the porting of code that uses tf.nn.embedding_lookup(). EnqueueTPUEmbeddingSparseBatch(...): An op that enqueues TPUEmbedding input indices from a SparseTensor. EnqueueTPUEmbeddingSparseTensorBatch(...): Eases the porting of code that uses tf.nn.embedding_lookup_sparse(). EnsureShape(...): Ensures that the tensor's shape matches the expected shape. Enter(...): Creates or finds a child frame, and makes data available to the child frame. Equal(...): Returns the truth value of (x == y) element-wise. Erf(...): Computes the Gauss error function of x element-wise. Erfc(...): Computes the complementary error function of x element-wise. Erfinv(...) EuclideanNorm(...): Computes the euclidean norm of elements across dimensions of a tensor. Exit(...): Exits the current frame to its parent frame. Exp(...): Computes exponential of x element-wise. \(y = e^x\). ExpandDims(...): Inserts a dimension of 1 into a tensor's shape. ExperimentalAssertNextDataset(...) ExperimentalAutoShardDataset(...): Creates a dataset that shards the input dataset. ExperimentalBytesProducedStatsDataset(...): Records the bytes size of each element of input_dataset in a StatsAggregator.
ExperimentalCSVDataset(...) ExperimentalChooseFastestDataset(...) ExperimentalDatasetCardinality(...): Returns the cardinality of input_dataset. ExperimentalDatasetToTFRecord(...): Writes the given dataset to the given file using the TFRecord format. ExperimentalDenseToSparseBatchDataset(...): Creates a dataset that batches input elements into a SparseTensor. ExperimentalDirectedInterleaveDataset(...): A substitute for InterleaveDataset on a fixed list of N datasets. ExperimentalGroupByReducerDataset(...): Creates a dataset that computes a group-by on input_dataset. ExperimentalGroupByWindowDataset(...): Creates a dataset that computes a windowed group-by on input_dataset. ExperimentalIgnoreErrorsDataset(...): Creates a dataset that contains the elements of input_dataset ignoring errors. ExperimentalIteratorGetDevice(...): Returns the name of the device on which resource has been placed. ExperimentalLMDBDataset(...) ExperimentalLatencyStatsDataset(...): Records the latency of producing input_dataset elements in a StatsAggregator. ExperimentalMapAndBatchDataset(...): Creates a dataset that fuses mapping with batching. ExperimentalMapDataset(...): Creates a dataset that applies f to the outputs of input_dataset. ExperimentalMatchingFilesDataset(...) ExperimentalMaxIntraOpParallelismDataset(...): Creates a dataset that overrides the maximum intra-op parallelism. ExperimentalNonSerializableDataset(...) ExperimentalParallelInterleaveDataset(...): Creates a dataset that applies f to the outputs of input_dataset. ExperimentalParseExampleDataset(...): Transforms input_dataset containing Example protos as vectors of DT_STRING into a dataset of Tensor or SparseTensor objects representing the parsed features. ExperimentalPrivateThreadPoolDataset(...): Creates a dataset that uses a custom thread pool to compute input_dataset. ExperimentalRandomDataset(...): Creates a Dataset that returns pseudorandom numbers. ExperimentalRebatchDataset(...): Creates a dataset that changes the batch size. ExperimentalScanDataset(...): Creates a dataset that successively reduces f over the elements of input_dataset. ExperimentalSetStatsAggregatorDataset(...) ExperimentalSleepDataset(...) ExperimentalSlidingWindowDataset(...): Creates a dataset that passes a sliding window over input_dataset. ExperimentalSqlDataset(...): Creates a dataset that executes a SQL query and emits rows of the result set. ExperimentalStatsAggregatorHandle(...): Creates a statistics manager resource. ExperimentalStatsAggregatorSummary(...): Produces a summary of any statistics recorded by the given statistics manager. ExperimentalTakeWhileDataset(...): Creates a dataset that stops iteration when predicate is false. ExperimentalThreadPoolDataset(...): Creates a dataset that uses a custom thread pool to compute input_dataset. ExperimentalThreadPoolHandle(...): Creates a dataset that uses a custom thread pool to compute input_dataset. ExperimentalUnbatchDataset(...): A dataset that splits the elements of its input into multiple elements. ExperimentalUniqueDataset(...): Creates a dataset that contains the unique elements of input_dataset. Expint(...) Expm1(...): Computes exp(x) - 1 element-wise. ExtractGlimpse(...): Extracts a glimpse from the input tensor. ExtractGlimpseV2(...): Extracts a glimpse from the input tensor. ExtractImagePatches(...): Extract patches from images and put them in the "depth" output dimension. ExtractJpegShape(...): Extract the shape information of a JPEG-encoded image.
ExtractVolumePatches(...): Extract patches from input and put them in the "depth" output dimension. 3D extension of extract_image_patches. FFT(...): Fast Fourier transform. FFT2D(...): 2D fast Fourier transform. FFT3D(...): 3D fast Fourier transform. FIFOQueue(...): A queue that produces elements in first-in first-out order. FIFOQueueV2(...): A queue that produces elements in first-in first-out order. Fact(...): Output a fact about factorials. FakeParam(...): This op is used as a placeholder in If branch functions. It doesn't provide a FakeQuantWithMinMaxArgs(...): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. FakeQuantWithMinMaxArgsGradient(...): Compute gradients for a FakeQuantWithMinMaxArgs operation. FakeQuantWithMinMaxVars(...): Fake-quantize the 'inputs' tensor of type float via global float scalars FakeQuantWithMinMaxVarsGradient(...): Compute gradients for a FakeQuantWithMinMaxVars operation. FakeQuantWithMinMaxVarsPerChannel(...): Fake-quantize the 'inputs' tensor of type float via per-channel floats FakeQuantWithMinMaxVarsPerChannelGradient(...): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. FakeQueue(...): Deprecated. Do not use. Fill(...): Creates a tensor filled with a scalar value. FilterByLastComponentDataset(...): Creates a dataset containing elements of first component of input_dataset having true in the last component. FilterDataset(...): Creates a dataset containing elements of input_dataset matching predicate. Fingerprint(...): Generates fingerprint values. FixedLengthRecordDataset(...): Creates a dataset that emits the records from one or more binary files. FixedLengthRecordDatasetV2(...) FixedLengthRecordReader(...): A Reader that outputs fixed-length records from a file. FixedLengthRecordReaderV2(...): A Reader that outputs fixed-length records from a file. FixedUnigramCandidateSampler(...): Generates labels for candidate sampling with a learned unigram distribution. FlatMapDataset(...): Creates a dataset that applies f to the outputs of input_dataset. Floor(...): Returns element-wise largest integer not greater than x. FloorDiv(...): Returns x // y element-wise. FloorMod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is FlushSummaryWriter(...) For(...): output = input; for i in range(start, limit, delta): output = body(i, output). FractionalAvgPool(...): Performs fractional average pooling on the input. FractionalAvgPoolGrad(...): Computes gradient of the FractionalAvgPool function. FractionalMaxPool(...): Performs fractional max pooling on the input. FractionalMaxPoolGrad(...): Computes gradient of the FractionalMaxPool function. FresnelCos(...) FresnelSin(...) FusedBatchNorm(...): Batch normalization. FusedBatchNormGrad(...): Gradient for batch normalization. FusedBatchNormGradV2(...): Gradient for batch normalization. FusedBatchNormGradV3(...): Gradient for batch normalization. FusedBatchNormV2(...): Batch normalization. FusedBatchNormV3(...): Batch normalization. FusedPadConv2D(...): Performs a padding as a preprocess during a convolution. FusedResizeAndPadConv2D(...): Performs a resize and padding as a preprocess during a convolution. GRUBlockCell(...): Computes the GRU cell forward propagation for 1 time step. GRUBlockCellGrad(...): Computes the GRU cell back-propagation for 1 time step. Gather(...): Gather slices from params according to indices. GatherNd(...): Gather slices from params into a Tensor with shape specified by indices. GatherV2(...): Gather slices from params axis axis according to indices.
GenerateBoundingBoxProposals(...): This op produces Regions of Interest from given bounding boxes (bbox_deltas) encoded w.r.t. anchors, according to eq. 2 in arXiv:1506.01497 GenerateVocabRemapping(...): Given a path to new and old vocabulary files, returns a remapping Tensor of GeneratorDataset(...): Creates a dataset that invokes a function to generate elements. GetSessionHandle(...): Store the input tensor in the state of the current session. GetSessionHandleV2(...): Store the input tensor in the state of the current session. GetSessionTensor(...): Get the value of the tensor specified by its handle. Greater(...): Returns the truth value of (x > y) element-wise. GreaterEqual(...): Returns the truth value of (x >= y) element-wise. GroupByReducerDataset(...): Creates a dataset that computes a group-by on input_dataset. GroupByWindowDataset(...): Creates a dataset that computes a windowed group-by on input_dataset. GuaranteeConst(...): Gives a guarantee to the TF runtime that the input tensor is a constant. HSVToRGB(...): Convert one or more images from HSV to RGB. HashTable(...): Creates a non-initialized hash table. HashTableV2(...): Creates a non-initialized hash table. HistogramFixedWidth(...): Return histogram of values. HistogramSummary(...): Outputs a Summary protocol buffer with a histogram. IFFT(...): Inverse fast Fourier transform. IFFT2D(...): Inverse 2D fast Fourier transform. IFFT3D(...): Inverse 3D fast Fourier transform. IRFFT(...): Inverse real-valued fast Fourier transform. IRFFT2D(...): Inverse 2D real-valued fast Fourier transform. IRFFT3D(...): Inverse 3D real-valued fast Fourier transform. Identity(...): Return a tensor with the same shape and contents as the input tensor or value. IdentityN(...): Returns a list of tensors with the same shapes and contents as the input IdentityReader(...): A Reader that outputs the queued work as both the key and value. IdentityReaderV2(...): A Reader that outputs the queued work as both the key and value. If(...): output = cond ? then_branch(input) : else_branch(input) Igamma(...): Compute the lower regularized incomplete Gamma function P(a, x). IgammaGradA(...): Computes the gradient of igamma(a, x) wrt a. Igammac(...): Compute the upper regularized incomplete Gamma function Q(a, x). IgnoreErrorsDataset(...): Creates a dataset that contains the elements of input_dataset ignoring errors. Imag(...): Returns the imaginary part of a complex number. ImageProjectiveTransformV2(...): Applies the given transform to each of the images. ImageProjectiveTransformV3(...): Applies the given transform to each of the images. ImageSummary(...): Outputs a Summary protocol buffer with images. ImmutableConst(...): Returns immutable tensor from memory region. ImportEvent(...) InTopK(...): Says whether the targets are in the top K predictions. InTopKV2(...): Says whether the targets are in the top K predictions. InfeedDequeue(...): A placeholder op for a value that will be fed into the computation. InfeedDequeueTuple(...): Fetches multiple values from infeed as an XLA tuple. InfeedEnqueue(...): An op which feeds a single Tensor value into the computation. InfeedEnqueuePrelinearizedBuffer(...): An op which enqueues prelinearized buffer into TPU infeed. InfeedEnqueueTuple(...): Feeds multiple Tensor values into the computation as an XLA tuple. InitializeTable(...): Table initializer that takes two tensors for keys and values respectively. InitializeTableFromDataset(...) InitializeTableFromTextFile(...): Initializes a table from a text file.
InitializeTableFromTextFileV2(...): Initializes a table from a text file. InitializeTableV2(...): Table initializer that takes two tensors for keys and values respectively. InplaceAdd(...): Adds v into specified rows of x. InplaceSub(...): Subtracts v from specified rows of x. InplaceUpdate(...): Updates specified rows 'i' with values 'v'. InterleaveDataset(...): Creates a dataset that applies f to the outputs of input_dataset. Inv(...): Computes the reciprocal of x element-wise. InvGrad(...): Computes the gradient for the inverse of x wrt its input. Invert(...): Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010. InvertPermutation(...): Computes the inverse permutation of a tensor. IsBoostedTreesEnsembleInitialized(...): Checks whether a tree ensemble has been initialized. IsBoostedTreesQuantileStreamResourceInitialized(...): Checks whether a quantile stream has been initialized. IsFinite(...): Returns which elements of x are finite. IsInf(...): Returns which elements of x are Inf. IsNan(...): Returns which elements of x are NaN. IsVariableInitialized(...): Checks whether a tensor has been initialized. IsotonicRegression(...): Solves a batch of isotonic regression problems. Iterator(...): A container for an iterator resource. IteratorFromStringHandle(...): Converts the given string representing a handle to an iterator to a resource. IteratorFromStringHandleV2(...) IteratorGetDevice(...): Returns the name of the device on which resource has been placed. IteratorGetNext(...): Gets the next output from the given iterator. IteratorGetNextAsOptional(...): Gets the next output from the given iterator as an Optional variant. IteratorGetNextSync(...): Gets the next output from the given iterator. IteratorToStringHandle(...): Converts the given resource_handle representing an iterator to a string. IteratorV2(...) L2Loss(...): L2 Loss. LMDBDataset(...): Creates a dataset that emits the key-value pairs in one or more LMDB files. LMDBReader(...): A Reader that outputs the records from an LMDB file. LRN(...): Local Response Normalization. LRNGrad(...): Gradients for Local Response Normalization. LSTMBlockCell(...): Computes the LSTM cell forward propagation for 1 time step. LSTMBlockCellGrad(...): Computes the LSTM cell backward propagation for 1 timestep. LatencyStatsDataset(...): Records the latency of producing input_dataset elements in a StatsAggregator. LeakyRelu(...): Computes rectified linear: max(features, features * alpha). LeakyReluGrad(...): Computes rectified linear gradients for a LeakyRelu operation. LearnedUnigramCandidateSampler(...): Generates labels for candidate sampling with a learned unigram distribution. LeftShift(...): Elementwise computes the bitwise left-shift of x and y. LegacyParallelInterleaveDatasetV2(...): Creates a dataset that applies f to the outputs of input_dataset. Less(...): Returns the truth value of (x < y) element-wise. LessEqual(...): Returns the truth value of (x <= y) element-wise. Lgamma(...): Computes the log of the absolute value of Gamma(x) element-wise. LinSpace(...): Generates values in an interval. ListDiff(...): Computes the difference between two lists of numbers or strings. LoadAndRemapMatrix(...): Loads a 2-D (matrix) Tensor with name old_tensor_name from the checkpoint LoadDataset(...) LoadTPUEmbeddingADAMParameters(...): Load ADAM embedding parameters. LoadTPUEmbeddingADAMParametersGradAccumDebug(...): Load ADAM embedding parameters with debug support.
LoadTPUEmbeddingAdadeltaParameters(...): Load Adadelta embedding parameters. LoadTPUEmbeddingAdadeltaParametersGradAccumDebug(...): Load Adadelta parameters with debug support. LoadTPUEmbeddingAdagradParameters(...): Load Adagrad embedding parameters. LoadTPUEmbeddingAdagradParametersGradAccumDebug(...): Load Adagrad embedding parameters with debug support. LoadTPUEmbeddingCenteredRMSPropParameters(...): Load centered RMSProp embedding parameters. LoadTPUEmbeddingFTRLParameters(...): Load FTRL embedding parameters. LoadTPUEmbeddingFTRLParametersGradAccumDebug(...): Load FTRL embedding parameters with debug support. LoadTPUEmbeddingMDLAdagradLightParameters(...): Load MDL Adagrad Light embedding parameters. LoadTPUEmbeddingMomentumParameters(...): Load Momentum embedding parameters. LoadTPUEmbeddingMomentumParametersGradAccumDebug(...): Load Momentum embedding parameters with debug support. LoadTPUEmbeddingProximalAdagradParameters(...): Load proximal Adagrad embedding parameters. LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug(...): Load proximal Adagrad embedding parameters with debug support. LoadTPUEmbeddingProximalYogiParameters(...) LoadTPUEmbeddingProximalYogiParametersGradAccumDebug(...) LoadTPUEmbeddingRMSPropParameters(...): Load RMSProp embedding parameters. LoadTPUEmbeddingRMSPropParametersGradAccumDebug(...): Load RMSProp embedding parameters with debug support. LoadTPUEmbeddingStochasticGradientDescentParameters(...): Load SGD embedding parameters. LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug(...): Load SGD embedding parameters. Log(...): Computes natural logarithm of x element-wise. Log1p(...): Computes natural logarithm of (1 + x) element-wise. LogMatrixDeterminant(...): Computes the sign and the log of the absolute value of the determinant of LogSoftmax(...): Computes log softmax activations. LogUniformCandidateSampler(...): Generates labels for candidate sampling with a log-uniform distribution. LogicalAnd(...): Returns the truth value of x AND y element-wise. LogicalNot(...): Returns the truth value of NOT x element-wise. LogicalOr(...): Returns the truth value of x OR y element-wise. LookupTableExport(...): Outputs all keys and values in the table. LookupTableExportV2(...): Outputs all keys and values in the table. LookupTableFind(...): Looks up keys in a table, outputs the corresponding values. LookupTableFindV2(...): Looks up keys in a table, outputs the corresponding values. LookupTableImport(...): Replaces the contents of the table with the specified keys and values. LookupTableImportV2(...): Replaces the contents of the table with the specified keys and values. LookupTableInsert(...): Updates the table to associate keys with values. LookupTableInsertV2(...): Updates the table to associate keys with values. LookupTableRemoveV2(...): Removes keys and their associated values from a table. LookupTableSize(...): Computes the number of elements in the given table. LookupTableSizeV2(...): Computes the number of elements in the given table. LoopCond(...): Forwards the input to the output. LowerBound(...): Applies lower_bound(sorted_search_values, values) along each row. Lu(...): Computes the LU decomposition of one or more square matrices. MakeIterator(...): Makes a new iterator from the given dataset and stores it in iterator. MapAndBatchDataset(...): Creates a dataset that fuses mapping with batching. MapClear(...): Op removes all elements in the underlying container.
MapDataset(...): Creates a dataset that applies f to the outputs of input_dataset. MapDefun(...): Maps a function on the list of tensors unpacked from arguments on dimension 0. MapIncompleteSize(...): Op returns the number of incomplete elements in the underlying container. MapPeek(...): Op peeks at the values at the specified key. If the MapSize(...): Op returns the number of elements in the underlying container. MapStage(...): Stage (key, values) in the underlying container which behaves like a hashtable. MapUnstage(...): Op removes and returns the values associated with the key MapUnstageNoKey(...): Op removes and returns a random (key, value) MatMul(...): Multiply the matrix "a" by the matrix "b". MatchingFiles(...): Returns the set of files matching one or more glob patterns. MatchingFilesDataset(...) MatrixBandPart(...): Copy a tensor setting everything outside a central band in each innermost matrix to zero. MatrixDeterminant(...): Computes the determinant of one or more square matrices. MatrixDiag(...): Returns a batched diagonal tensor with a given batched diagonal values. MatrixDiagPart(...): Returns the batched diagonal part of a batched tensor. MatrixDiagPartV2(...): Returns the batched diagonal part of a batched tensor. MatrixDiagPartV3(...): Returns the batched diagonal part of a batched tensor. MatrixDiagV2(...): Returns a batched diagonal tensor with given batched diagonal values. MatrixDiagV3(...): Returns a batched diagonal tensor with given batched diagonal values. MatrixExponential(...): Deprecated, use python implementation tf.linalg.matrix_exponential. MatrixInverse(...): Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes). MatrixLogarithm(...): Computes the matrix logarithm of one or more square matrices: MatrixSetDiag(...): Returns a batched matrix tensor with new batched diagonal values. MatrixSetDiagV2(...): Returns a batched matrix tensor with new batched diagonal values. MatrixSetDiagV3(...): Returns a batched matrix tensor with new batched diagonal values. MatrixSolve(...): Solves systems of linear equations. MatrixSolveLs(...): Solves one or more linear least-squares problems. MatrixSquareRoot(...): Computes the matrix square root of one or more square matrices: MatrixTriangularSolve(...): Solves systems of linear equations with upper or lower triangular matrices by backsubstitution. Max(...): Computes the maximum of elements across dimensions of a tensor. MaxIntraOpParallelismDataset(...): Creates a dataset that overrides the maximum intra-op parallelism. MaxPool(...): Performs max pooling on the input. MaxPool3D(...): Performs 3D max pooling on the input. MaxPool3DGrad(...): Computes gradients of 3D max pooling function. MaxPool3DGradGrad(...): Computes second-order gradients of the maxpooling function. MaxPoolGrad(...): Computes gradients of the maxpooling function. MaxPoolGradGrad(...): Computes second-order gradients of the maxpooling function. MaxPoolGradGradV2(...): Computes second-order gradients of the maxpooling function. MaxPoolGradGradWithArgmax(...): Computes second-order gradients of the maxpooling function. MaxPoolGradV2(...): Computes gradients of the maxpooling function. MaxPoolGradWithArgmax(...): Computes gradients of the maxpooling function. MaxPoolV2(...): Performs max pooling on the input. MaxPoolWithArgmax(...): Performs max pooling on the input and outputs both max values and indices. Maximum(...): Returns the max of x and y (i.e. x > y ? x : y) element-wise. 
Mean(...): Computes the mean of elements across dimensions of a tensor. Merge(...): Forwards the value of an available tensor from inputs to output. MergeSummary(...): Merges summaries. MergeV2Checkpoints(...): V2 format specific: merges the metadata files of sharded checkpoints. The Mfcc(...): Transforms a spectrogram into a form that's useful for speech recognition. Min(...): Computes the minimum of elements across dimensions of a tensor. Minimum(...): Returns the min of x and y (i.e. x < y ? x : y) element-wise. MirrorPad(...): Pads a tensor with mirrored values. MirrorPadGrad(...): Gradient op for MirrorPad op. This op folds a mirror-padded tensor. Mod(...): Returns element-wise remainder of division. This emulates C semantics in that ModelDataset(...): Identity transformation that models performance. Mul(...): Returns x * y element-wise. MulNoNan(...): Returns x * y element-wise. Returns zero if y is zero, even if x is infinite or NaN. MultiDeviceIterator(...): Creates a MultiDeviceIterator resource. MultiDeviceIteratorFromStringHandle(...): Generates a MultiDeviceIterator resource from its provided string handle. MultiDeviceIteratorGetNextFromShard(...): Gets next element for the provided shard number. MultiDeviceIteratorInit(...): Initializes the multi device iterator with the given dataset. MultiDeviceIteratorToStringHandle(...): Produces a string handle for the given MultiDeviceIterator. Multinomial(...): Draws samples from a multinomial distribution. MutableDenseHashTable(...): Creates an empty hash table that uses tensors as the backing store. MutableDenseHashTableV2(...): Creates an empty hash table that uses tensors as the backing store. MutableHashTable(...): Creates an empty hash table. MutableHashTableOfTensors(...): Creates an empty hash table. MutableHashTableOfTensorsV2(...): Creates an empty hash table. MutableHashTableV2(...): Creates an empty hash table. MutexLock(...): Locks a mutex resource. The output is the lock. So long as the lock tensor MutexV2(...): Creates a Mutex resource that can be locked by MutexLock. NcclAllReduce(...): Outputs a tensor containing the reduction across all input tensors. NcclBroadcast(...): Sends input to all devices that are connected to the output. NcclReduce(...): Reduces input from num_devices using reduction to a single device. Ndtri(...) Neg(...): Computes numerical negative value element-wise. NextAfter(...): Returns the next representable value of x1 in the direction of x2, element-wise. NextIteration(...): Makes its input available to the next iteration. NoOp(...): Does nothing. Only useful as a placeholder for control edges. NonDeterministicInts(...): Non-deterministically generates some integers. NonMaxSuppression(...): Greedily selects a subset of bounding boxes in descending order of score, NonMaxSuppressionV2(...): Greedily selects a subset of bounding boxes in descending order of score, NonMaxSuppressionV3(...): Greedily selects a subset of bounding boxes in descending order of score, NonMaxSuppressionV4(...): Greedily selects a subset of bounding boxes in descending order of score, NonMaxSuppressionV5(...): Greedily selects a subset of bounding boxes in descending order of score, NonMaxSuppressionWithOverlaps(...): Greedily selects a subset of bounding boxes in descending order of score, NonSerializableDataset(...) NotEqual(...): Returns the truth value of (x != y) element-wise. NthElement(...): Finds values of the n-th order statistic for the last dimension. OneHot(...): Returns a one-hot tensor.
OneShotIterator(...): Makes a "one-shot" iterator that can be iterated only once. OnesLike(...): Returns a tensor of ones with the same shape and type as x. OptimizeDataset(...): Creates a dataset by applying optimizations to input_dataset. OptimizeDatasetV2(...): Creates a dataset by applying related optimizations to input_dataset. OptionalFromValue(...): Constructs an Optional variant from a tuple of tensors. OptionalGetValue(...): Returns the value stored in an Optional variant or raises an error if none exists. OptionalHasValue(...): Returns true if and only if the given Optional variant has a value. OptionalNone(...): Creates an Optional variant with no value. OrderedMapClear(...): Op removes all elements in the underlying container. OrderedMapIncompleteSize(...): Op returns the number of incomplete elements in the underlying container. OrderedMapPeek(...): Op peeks at the values at the specified key. If the OrderedMapSize(...): Op returns the number of elements in the underlying container. OrderedMapStage(...): Stage (key, values) in the underlying container which behaves like an ordered OrderedMapUnstage(...): Op removes and returns the values associated with the key OrderedMapUnstageNoKey(...): Op removes and returns the (key, value) element with the smallest OutfeedDequeue(...): Retrieves a single tensor from the computation outfeed. OutfeedDequeueTuple(...): Retrieve multiple values from the computation outfeed. OutfeedDequeueTupleV2(...): Retrieve multiple values from the computation outfeed. Device ordinal is a OutfeedDequeueV2(...): Retrieves a single tensor from the computation outfeed. Device ordinal is a OutfeedEnqueue(...): Enqueue a Tensor on the computation outfeed. OutfeedEnqueueTuple(...): Enqueue multiple Tensor values on the computation outfeed. Pack(...): Packs a list of N rank-R tensors into one rank-(R+1) tensor. Pad(...): Pads a tensor with zeros. PadV2(...): Pads a tensor. PaddedBatchDataset(...): Creates a dataset that batches and pads batch_size elements from the input. PaddedBatchDatasetV2(...): Creates a dataset that batches and pads batch_size elements from the input. PaddingFIFOQueue(...): A queue that produces elements in first-in first-out order. PaddingFIFOQueueV2(...): A queue that produces elements in first-in first-out order. ParallelConcat(...): Concatenates a list of N tensors along the first dimension. ParallelDynamicStitch(...): Interleave the values from the data tensors into a single tensor. ParallelInterleaveDataset(...): Creates a dataset that applies f to the outputs of input_dataset. ParallelInterleaveDatasetV2(...): Creates a dataset that applies f to the outputs of input_dataset. ParallelInterleaveDatasetV3(...): Creates a dataset that applies f to the outputs of input_dataset. ParallelInterleaveDatasetV4(...): Creates a dataset that applies f to the outputs of input_dataset. ParallelMapDataset(...): Creates a dataset that applies f to the outputs of input_dataset. ParallelMapDatasetV2(...): Creates a dataset that applies f to the outputs of input_dataset. ParameterizedTruncatedNormal(...): Outputs random values from a normal distribution. The parameters may each be a ParseExample(...): Transforms a vector of brain.Example protos (as strings) into typed tensors. ParseExampleDataset(...): Transforms input_dataset containing Example protos as vectors of DT_STRING into a dataset of Tensor or SparseTensor objects representing the parsed features.
ParseExampleDatasetV2(...): Transforms input_dataset containing Example protos as vectors of DT_STRING into a dataset of Tensor or SparseTensor objects representing the parsed features. ParseExampleV2(...): Transforms a vector of tf.Example protos (as strings) into typed tensors. ParseSequenceExample(...): Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors. ParseSequenceExampleV2(...): Transforms a vector of tf.io.SequenceExample protos (as strings) into typed tensors. ParseSingleExample(...): Transforms a tf.Example proto (as a string) into typed tensors. ParseSingleSequenceExample(...): Transforms a scalar brain.SequenceExample proto (as a string) into typed tensors. ParseTensor(...): Transforms a serialized tensorflow.TensorProto proto into a Tensor. PartitionedCall(...): Returns f(inputs), where f's body is placed and partitioned. Placeholder(...): A placeholder op for a value that will be fed into the computation. PlaceholderV2(...): A placeholder op for a value that will be fed into the computation. PlaceholderWithDefault(...): A placeholder op that passes through input when its output is not fed. Polygamma(...): Compute the polygamma function \(\psi^{(n)}(x)\). PopulationCount(...): Computes element-wise population count (a.k.a. popcount, bitsum, bitcount). Pow(...): Computes the power of one value to another. PrefetchDataset(...): Creates a dataset that asynchronously prefetches elements from input_dataset. Prelinearize(...): An op which linearizes one Tensor value to an opaque variant tensor. PrelinearizeTuple(...): An op which linearizes multiple Tensor values to an opaque variant tensor. PreventGradient(...): An identity op that triggers an error if a gradient is requested. Print(...): Prints a list of tensors. PrintV2(...): Prints a string scalar. PriorityQueue(...): A queue that produces elements sorted by the first component value. PriorityQueueV2(...): A queue that produces elements sorted by the first component value. PrivateThreadPoolDataset(...): Creates a dataset that uses a custom thread pool to compute input_dataset. Prod(...): Computes the product of elements across dimensions of a tensor. PyFunc(...): Invokes a python function to compute func(input)->output. PyFuncStateless(...): A stateless version of PyFunc. Qr(...): Computes the QR decompositions of one or more matrices. QuantizeAndDequantize(...): Use QuantizeAndDequantizeV2 instead. QuantizeAndDequantizeV2(...): Quantizes then dequantizes a tensor. QuantizeAndDequantizeV3(...): Quantizes then dequantizes a tensor. QuantizeAndDequantizeV4(...): Quantizes then dequantizes a tensor. QuantizeAndDequantizeV4Grad(...): Returns the gradient of QuantizeAndDequantizeV4. QuantizeDownAndShrinkRange(...): Convert the quantized 'input' tensor into a lower-precision 'output', using the actual distribution of the values to maximize the usage of the lower bit depth. QuantizeV2(...): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. QuantizedAdd(...): Returns x + y element-wise, working on quantized buffers. QuantizedAvgPool(...): Produces the average pool of the input tensor for quantized types. QuantizedBatchNormWithGlobalNormalization(...): Quantized Batch normalization. QuantizedBiasAdd(...): Adds Tensor 'bias' to Tensor 'input' for Quantized types. QuantizedConcat(...): Concatenates quantized tensors along one dimension. QuantizedConv2D(...): Computes a 2D convolution given quantized 4D input and filter tensors. QuantizedConv2DAndRelu(...) QuantizedConv2DAndReluAndRequantize(...) QuantizedConv2DAndRequantize(...) 
QuantizedConv2DPerChannel(...): Computes QuantizedConv2D per channel. QuantizedConv2DWithBias(...) QuantizedConv2DWithBiasAndRelu(...) QuantizedConv2DWithBiasAndReluAndRequantize(...) QuantizedConv2DWithBiasAndRequantize(...) QuantizedConv2DWithBiasSignedSumAndReluAndRequantize(...) QuantizedConv2DWithBiasSumAndRelu(...) QuantizedConv2DWithBiasSumAndReluAndRequantize(...) QuantizedDepthwiseConv2D(...): Computes quantized depthwise Conv2D. QuantizedDepthwiseConv2DWithBias(...): Computes quantized depthwise Conv2D with Bias. QuantizedDepthwiseConv2DWithBiasAndRelu(...): Computes quantized depthwise Conv2D with Bias and Relu. QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize(...): Computes quantized depthwise Conv2D with Bias, Relu and Requantize. QuantizedInstanceNorm(...): Quantized Instance normalization. QuantizedMatMul(...): Performs a quantized matrix multiplication of a by the matrix b. QuantizedMatMulWithBias(...): Performs a quantized matrix multiplication of a by the matrix b with bias add. QuantizedMatMulWithBiasAndDequantize(...) QuantizedMatMulWithBiasAndRelu(...): Performs a quantized matrix multiplication of a by the matrix b with bias add and relu fusion. QuantizedMatMulWithBiasAndReluAndRequantize(...): Performs a quantized matrix multiplication of a by the matrix b with bias add and relu and requantize fusion. QuantizedMatMulWithBiasAndRequantize(...) QuantizedMaxPool(...): Produces the max pool of the input tensor for quantized types. QuantizedMul(...): Returns x * y element-wise, working on quantized buffers. QuantizedRelu(...): Computes Quantized Rectified Linear: max(features, 0) QuantizedRelu6(...): Computes Quantized Rectified Linear 6: min(max(features, 0), 6) QuantizedReluX(...): Computes Quantized Rectified Linear X: min(max(features, 0), max_value) QuantizedReshape(...): Reshapes a quantized tensor as per the Reshape op. QuantizedResizeBilinear(...): Resize quantized images to size using quantized bilinear interpolation. QueueClose(...): Closes the given queue. QueueCloseV2(...): Closes the given queue. QueueDequeue(...): Dequeues a tuple of one or more tensors from the given queue. QueueDequeueMany(...): Dequeues n tuples of one or more tensors from the given queue. QueueDequeueManyV2(...): Dequeues n tuples of one or more tensors from the given queue. QueueDequeueUpTo(...): Dequeues n tuples of one or more tensors from the given queue. QueueDequeueUpToV2(...): Dequeues n tuples of one or more tensors from the given queue. QueueDequeueV2(...): Dequeues a tuple of one or more tensors from the given queue. QueueEnqueue(...): Enqueues a tuple of one or more tensors in the given queue. QueueEnqueueMany(...): Enqueues zero or more tuples of one or more tensors in the given queue. QueueEnqueueManyV2(...): Enqueues zero or more tuples of one or more tensors in the given queue. QueueEnqueueV2(...): Enqueues a tuple of one or more tensors in the given queue. QueueIsClosed(...): Returns true if queue is closed. QueueIsClosedV2(...): Returns true if queue is closed. QueueSize(...): Computes the number of elements in the given queue. QueueSizeV2(...): Computes the number of elements in the given queue. RFFT(...): Real-valued fast Fourier transform. RFFT2D(...): 2D real-valued fast Fourier transform. RFFT3D(...): 3D real-valued fast Fourier transform. RGBToHSV(...): Converts one or more images from RGB to HSV. RaggedBincount(...): Counts the number of occurrences of each value in an integer array. RaggedCountSparseOutput(...): Performs sparse-output bin counting for a ragged tensor input. 
RaggedCross(...): Generates a feature cross from a list of tensors, and returns it as a RaggedTensor. RaggedGather(...): Gather ragged slices from params axis 0 according to indices. RaggedRange(...): Returns a RaggedTensor containing the specified sequences of numbers. RaggedTensorFromVariant(...): Decodes a variant Tensor into a RaggedTensor. RaggedTensorToSparse(...): Converts a RaggedTensor into a SparseTensor with the same values. RaggedTensorToTensor(...): Create a dense tensor from a ragged tensor, possibly altering its shape. RaggedTensorToVariant(...): Encodes a RaggedTensor into a variant Tensor. RaggedTensorToVariantGradient(...): Helper used to compute the gradient for RaggedTensorToVariant. RandomCrop(...): Randomly crop image. RandomDataset(...): Creates a Dataset that returns pseudorandom numbers. RandomGamma(...): Outputs random values from the Gamma distribution(s) described by alpha. RandomGammaGrad(...): Computes the derivative of a Gamma random sample w.r.t. alpha. RandomPoisson(...): Use RandomPoissonV2 instead. RandomPoissonV2(...): Outputs random values from the Poisson distribution(s) described by rate. RandomShuffle(...): Randomly shuffles a tensor along its first dimension. RandomShuffleQueue(...): A queue that randomizes the order of elements. RandomShuffleQueueV2(...): A queue that randomizes the order of elements. RandomStandardNormal(...): Outputs random values from a normal distribution. RandomUniform(...): Outputs random values from a uniform distribution. RandomUniformInt(...): Outputs random integers from a uniform distribution. Range(...): Creates a sequence of numbers. RangeDataset(...): Creates a dataset with a range of values. Corresponds to python's xrange. Rank(...): Returns the rank of a tensor. ReadFile(...): Reads and outputs the entire contents of the input filename. ReadVariableOp(...): Reads the value of a variable. ReaderNumRecordsProduced(...): Returns the number of records this Reader has produced. ReaderNumRecordsProducedV2(...): Returns the number of records this Reader has produced. ReaderNumWorkUnitsCompleted(...): Returns the number of work units this Reader has finished processing. ReaderNumWorkUnitsCompletedV2(...): Returns the number of work units this Reader has finished processing. ReaderRead(...): Returns the next record (key, value pair) produced by a Reader. ReaderReadUpTo(...): Returns up to num_records (key, value) pairs produced by a Reader. ReaderReadUpToV2(...): Returns up to num_records (key, value) pairs produced by a Reader. ReaderReadV2(...): Returns the next record (key, value pair) produced by a Reader. ReaderReset(...): Restore a Reader to its initial clean state. ReaderResetV2(...): Restore a Reader to its initial clean state. ReaderRestoreState(...): Restore a reader to a previously saved state. ReaderRestoreStateV2(...): Restore a reader to a previously saved state. ReaderSerializeState(...): Produce a string tensor that encodes the state of a Reader. ReaderSerializeStateV2(...): Produce a string tensor that encodes the state of a Reader. Real(...): Returns the real part of a complex number. RealDiv(...): Returns x / y element-wise for real types. RebatchDataset(...): Creates a dataset that changes the batch size. RebatchDatasetV2(...): Creates a dataset that changes the batch size. Reciprocal(...): Computes the reciprocal of x element-wise. ReciprocalGrad(...): Computes the gradient for the inverse of x wrt its input. RecordInput(...): Emits randomized records. 
Recv(...): Receives the named tensor from send_device on recv_device. RecvTPUEmbeddingActivations(...): An op that receives embedding activations on the TPU. ReduceDataset(...): Reduces the input dataset to a singleton using a reduce function. ReduceJoin(...): Joins a string Tensor across the given dimensions. RefEnter(...): Creates or finds a child frame, and makes data available to the child frame. RefExit(...): Exits the current frame to its parent frame. RefIdentity(...): Return the same ref tensor as the input ref tensor. RefMerge(...): Forwards the value of an available tensor from inputs to output. RefNextIteration(...): Makes its input available to the next iteration. RefSelect(...): Forwards the indexth element of inputs to output. RefSwitch(...): Forwards the ref tensor data to the output port determined by pred. RegexFullMatch(...): Check if the input matches the regex pattern. RegexReplace(...): Replaces matches of the pattern regular expression in input with the replacement string provided in rewrite. RegisterDataset(...): Registers a dataset with the tf.data service. Relu(...): Computes rectified linear: max(features, 0). Relu6(...): Computes rectified linear 6: min(max(features, 0), 6). Relu6Grad(...): Computes rectified linear 6 gradients for a Relu6 operation. ReluGrad(...): Computes rectified linear gradients for a Relu operation. RemoteCall(...): Runs function f on a remote device indicated by target. RepeatDataset(...): Creates a dataset that emits the outputs of input_dataset count times. RequantizationRange(...): Computes a range that covers the actual values present in a quantized tensor. RequantizationRangePerChannel(...): Computes requantization range per channel. Requantize(...): Converts the quantized input tensor into a lower-precision output. RequantizePerChannel(...): Requantizes input with min and max values known per channel. Reshape(...): Reshapes a tensor. ResizeArea(...): Resize images to size using area interpolation. ResizeBicubic(...): Resize images to size using bicubic interpolation. ResizeBicubicGrad(...): Computes the gradient of bicubic interpolation. ResizeBilinear(...): Resize images to size using bilinear interpolation. ResizeBilinearGrad(...): Computes the gradient of bilinear interpolation. ResizeNearestNeighbor(...): Resize images to size using nearest neighbor interpolation. ResizeNearestNeighborGrad(...): Computes the gradient of nearest neighbor interpolation. ResourceAccumulatorApplyGradient(...): Applies a gradient to a given accumulator. ResourceAccumulatorNumAccumulated(...): Returns the number of gradients aggregated in the given accumulators. ResourceAccumulatorSetGlobalStep(...): Updates the accumulator with a new value for global_step. ResourceAccumulatorTakeGradient(...): Extracts the average gradient in the given ConditionalAccumulator. ResourceApplyAdaMax(...): Update '*var' according to the AdaMax algorithm. ResourceApplyAdadelta(...): Update '*var' according to the adadelta scheme. ResourceApplyAdagrad(...): Update '*var' according to the adagrad scheme. ResourceApplyAdagradDA(...): Update '*var' according to the proximal adagrad scheme. ResourceApplyAdagradV2(...): Update '*var' according to the adagrad scheme. ResourceApplyAdam(...): Update '*var' according to the Adam algorithm. ResourceApplyAdamWithAmsgrad(...): Update '*var' according to the Adam algorithm. ResourceApplyAddSign(...): Update '*var' according to the AddSign update. ResourceApplyCenteredRMSProp(...): Update '*var' according to the centered RMSProp algorithm. 
ResourceApplyFtrl(...): Update '*var' according to the Ftrl-proximal scheme. ResourceApplyFtrlV2(...): Update '*var' according to the Ftrl-proximal scheme. ResourceApplyGradientDescent(...): Update '*var' by subtracting 'alpha' * 'delta' from it. ResourceApplyKerasMomentum(...): Update '*var' according to the momentum scheme. ResourceApplyMomentum(...): Update '*var' according to the momentum scheme. ResourceApplyPowerSign(...): Update '*var' according to the AddSign update. ResourceApplyProximalAdagrad(...): Update 'var' and 'accum' according to FOBOS with Adagrad learning rate. ResourceApplyProximalGradientDescent(...): Update '*var' as FOBOS algorithm with fixed learning rate. ResourceApplyRMSProp(...): Update '*var' according to the RMSProp algorithm. ResourceConditionalAccumulator(...): A conditional accumulator for aggregating gradients. ResourceCountUpTo(...): Increments variable pointed to by 'resource' until it reaches 'limit'. ResourceGather(...): Gather slices from the variable pointed to by resource according to indices. ResourceGatherNd(...) ResourceScatterAdd(...): Adds sparse updates to the variable referenced by resource. ResourceScatterDiv(...): Divides sparse updates into the variable referenced by resource. ResourceScatterMax(...): Reduces sparse updates into the variable referenced by resource using the max operation. ResourceScatterMin(...): Reduces sparse updates into the variable referenced by resource using the min operation. ResourceScatterMul(...): Multiplies sparse updates into the variable referenced by resource. ResourceScatterNdAdd(...): Applies sparse addition to individual values or slices in a Variable. ResourceScatterNdMax(...) ResourceScatterNdMin(...) ResourceScatterNdSub(...): Applies sparse subtraction to individual values or slices in a Variable. ResourceScatterNdUpdate(...): Applies sparse updates to individual values or slices within a given variable according to indices. ResourceScatterSub(...): Subtracts sparse updates from the variable referenced by resource. ResourceScatterUpdate(...): Assigns sparse updates to the variable referenced by resource. ResourceSparseApplyAdadelta(...): Updates relevant entries in 'var' and 'accum' according to the adadelta scheme. ResourceSparseApplyAdagrad(...): Update relevant entries in 'var' and 'accum' according to the adagrad scheme. ResourceSparseApplyAdagradDA(...): Update entries in 'var' and 'accum' according to the proximal adagrad scheme. ResourceSparseApplyAdagradV2(...): Update relevant entries in 'var' and 'accum' according to the adagrad scheme. ResourceSparseApplyCenteredRMSProp(...): Update '*var' according to the centered RMSProp algorithm. ResourceSparseApplyFtrl(...): Update relevant entries in '*var' according to the Ftrl-proximal scheme. ResourceSparseApplyFtrlV2(...): Update relevant entries in '*var' according to the Ftrl-proximal scheme. ResourceSparseApplyKerasMomentum(...): Update relevant entries in 'var' and 'accum' according to the momentum scheme. ResourceSparseApplyMomentum(...): Update relevant entries in 'var' and 'accum' according to the momentum scheme. ResourceSparseApplyProximalAdagrad(...): Sparse update entries in 'var' and 'accum' according to FOBOS algorithm. ResourceSparseApplyProximalGradientDescent(...): Sparse update '*var' as FOBOS algorithm with fixed learning rate. ResourceSparseApplyRMSProp(...): Update '*var' according to the RMSProp algorithm. ResourceStridedSliceAssign(...): Assign value to the sliced l-value reference of ref. Restore(...): Restores a tensor from checkpoint files. RestoreSlice(...): Restores a tensor from checkpoint files. 
RestoreV2(...): Restores tensors from a V2 checkpoint. RetrieveTPUEmbeddingADAMParameters(...): Retrieve ADAM embedding parameters. RetrieveTPUEmbeddingADAMParametersGradAccumDebug(...): Retrieve ADAM embedding parameters with debug support. RetrieveTPUEmbeddingAdadeltaParameters(...): Retrieve Adadelta embedding parameters. RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug(...): Retrieve Adadelta embedding parameters with debug support. RetrieveTPUEmbeddingAdagradParameters(...): Retrieve Adagrad embedding parameters. RetrieveTPUEmbeddingAdagradParametersGradAccumDebug(...): Retrieve Adagrad embedding parameters with debug support. RetrieveTPUEmbeddingCenteredRMSPropParameters(...): Retrieve centered RMSProp embedding parameters. RetrieveTPUEmbeddingFTRLParameters(...): Retrieve FTRL embedding parameters. RetrieveTPUEmbeddingFTRLParametersGradAccumDebug(...): Retrieve FTRL embedding parameters with debug support. RetrieveTPUEmbeddingMDLAdagradLightParameters(...): Retrieve MDL Adagrad Light embedding parameters. RetrieveTPUEmbeddingMomentumParameters(...): Retrieve Momentum embedding parameters. RetrieveTPUEmbeddingMomentumParametersGradAccumDebug(...): Retrieve Momentum embedding parameters with debug support. RetrieveTPUEmbeddingProximalAdagradParameters(...): Retrieve proximal Adagrad embedding parameters. RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug(...): Retrieve proximal Adagrad embedding parameters with debug support. RetrieveTPUEmbeddingProximalYogiParameters(...) RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebug(...) RetrieveTPUEmbeddingRMSPropParameters(...): Retrieve RMSProp embedding parameters. RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug(...): Retrieve RMSProp embedding parameters with debug support. RetrieveTPUEmbeddingStochasticGradientDescentParameters(...): Retrieve SGD embedding parameters. RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug(...): Retrieve SGD embedding parameters with debug support. Reverse(...): Reverses specific dimensions of a tensor. ReverseSequence(...): Reverses variable length slices. ReverseV2(...): Reverses specific dimensions of a tensor. RightShift(...): Elementwise computes the bitwise right-shift of x and y. Rint(...): Returns element-wise integer closest to x. RngReadAndSkip(...): Advance the counter of a counter-based RNG. RngSkip(...): Advance the counter of a counter-based RNG. Roll(...): Rolls the elements of a tensor along an axis. Round(...): Rounds the values of a tensor to the nearest integer, element-wise. Rsqrt(...): Computes reciprocal of square root of x element-wise. RsqrtGrad(...): Computes the gradient for the rsqrt of x wrt its input. SampleDistortedBoundingBox(...): Generate a single randomly distorted bounding box for an image. SampleDistortedBoundingBoxV2(...): Generate a single randomly distorted bounding box for an image. SamplingDataset(...): Creates a dataset that takes a Bernoulli sample of the contents of another dataset. Save(...): Saves the input tensors to disk. SaveDataset(...) SaveSlices(...): Saves input tensors slices to disk. SaveV2(...): Saves tensors in V2 checkpoint format. ScalarSummary(...): Outputs a Summary protocol buffer with scalar values. ScaleAndTranslate(...) ScaleAndTranslateGrad(...) ScanDataset(...): Creates a dataset that successively reduces f over the elements of input_dataset. ScatterAdd(...): Adds sparse updates to a variable reference. ScatterDiv(...): Divides a variable reference by sparse updates. 
ScatterMax(...): Reduces sparse updates into a variable reference using the max operation. ScatterMin(...): Reduces sparse updates into a variable reference using the min operation. ScatterMul(...): Multiplies sparse updates into a variable reference. ScatterNd(...): Scatter updates into a new tensor according to indices. ScatterNdAdd(...): Applies sparse addition to individual values or slices in a Variable. ScatterNdMax(...): Computes element-wise maximum. ScatterNdMin(...): Computes element-wise minimum. ScatterNdNonAliasingAdd(...): Applies sparse addition to input using individual values or slices from updates according to indices. ScatterNdSub(...): Applies sparse subtraction to individual values or slices in a Variable. ScatterNdUpdate(...): Applies sparse updates to individual values or slices within a given variable according to indices. ScatterSub(...): Subtracts sparse updates from a variable reference. ScatterUpdate(...): Applies sparse updates to a variable reference. SdcaFprint(...): Computes fingerprints of the input strings. SdcaOptimizer(...): Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. SdcaOptimizerV2(...): Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. SdcaShrinkL1(...): Applies L1 regularization shrink step on the parameters. SegmentMax(...): Computes the maximum along segments of a tensor. SegmentMean(...): Computes the mean along segments of a tensor. SegmentMin(...): Computes the minimum along segments of a tensor. SegmentProd(...): Computes the product along segments of a tensor. SegmentSum(...): Computes the sum along segments of a tensor. Select(...): Selects elements from x or y, depending on condition. SelectV2(...) SelfAdjointEig(...): Computes the Eigen Decomposition of a batch of square self-adjoint matrices. SelfAdjointEigV2(...): Computes the eigen decomposition of one or more square self-adjoint matrices. Selu(...): Computes scaled exponential linear: scale * alpha * (exp(features) - 1) SeluGrad(...): Computes gradients for the scaled exponential linear (Selu) operation. Send(...): Sends the named tensor from send_device to recv_device. SendTPUEmbeddingGradients(...): Performs gradient updates of embedding tables. SerializeIterator(...): Converts the given resource_handle representing an iterator to a variant tensor. SerializeManySparse(...): Serialize an N-minibatch SparseTensor into an [N, 3] Tensor object. SerializeSparse(...): Serialize a SparseTensor into a [3] Tensor object. SerializeTensor(...): Transforms a Tensor into a serialized TensorProto proto. SetSize(...): Number of unique elements along last dimension of input set. SetStatsAggregatorDataset(...) Shape(...): Returns the shape of a tensor. ShapeN(...): Returns shape of tensors. ShardDataset(...): Creates a Dataset that includes only 1/num_shards of this dataset. ShardedFilename(...): Generate a sharded filename. The filename is printf formatted as %s-%05d-of-%05d (basename, shard, num_shards). ShardedFilespec(...): Generate a glob pattern matching all sharded file names. ShuffleAndRepeatDataset(...): Creates a dataset that shuffles and repeats elements from input_dataset pseudorandomly. ShuffleAndRepeatDatasetV2(...) ShuffleDataset(...): Creates a dataset that shuffles elements from input_dataset pseudorandomly. ShuffleDatasetV2(...) ShuffleDatasetV3(...) ShutdownDistributedTPU(...): Shuts down a running distributed TPU system. Sigmoid(...): Computes sigmoid of x element-wise. SigmoidGrad(...): Computes the gradient of the sigmoid of x wrt its input. Sign(...): Returns an element-wise indication of the sign of a number. 
Sin(...): Computes sine of x element-wise. Sinh(...): Computes hyperbolic sine of x element-wise. Size(...): Returns the size of a tensor. SkipDataset(...): Creates a dataset that skips count elements from the input_dataset. SleepDataset(...) Slice(...): Return a slice from 'input'. SlidingWindowDataset(...): Creates a dataset that passes a sliding window over input_dataset. Snapshot(...): Returns a copy of the input tensor. SnapshotDataset(...): Creates a dataset that will write to / read from a snapshot. SnapshotDatasetV2(...): Creates a dataset that will write to / read from a snapshot. SobolSample(...): Generates points from the Sobol sequence. Softmax(...): Computes softmax activations. SoftmaxCrossEntropyWithLogits(...): Computes softmax cross entropy cost and gradients to backpropagate. Softplus(...): Computes softplus: log(exp(features) + 1). SoftplusGrad(...): Computes softplus gradients for a softplus operation. Softsign(...): Computes softsign: features / (abs(features) + 1). SoftsignGrad(...): Computes softsign gradients for a softsign operation. SpaceToBatch(...): SpaceToBatch for 4-D tensors of type T. SpaceToBatchND(...): SpaceToBatch for N-D tensors of type T. SpaceToDepth(...): SpaceToDepth for tensors of type T. SparseAccumulatorApplyGradient(...): Applies a sparse gradient to a given accumulator. SparseAccumulatorTakeGradient(...): Extracts the average sparse gradient in a SparseConditionalAccumulator. SparseAdd(...): Adds two SparseTensor objects to produce another SparseTensor. SparseAddGrad(...): The gradient operator for the SparseAdd op. SparseApplyAdadelta(...): Updates relevant entries in 'var' and 'accum' according to the adadelta scheme. SparseApplyAdagrad(...): Update relevant entries in 'var' and 'accum' according to the adagrad scheme. SparseApplyAdagradDA(...): Update entries in 'var' and 'accum' according to the proximal adagrad scheme. SparseApplyAdagradV2(...): Update relevant entries in 'var' and 'accum' according to the adagrad scheme. SparseApplyCenteredRMSProp(...): Update '*var' according to the centered RMSProp algorithm. SparseApplyFtrl(...): Update relevant entries in '*var' according to the Ftrl-proximal scheme. SparseApplyFtrlV2(...): Update relevant entries in '*var' according to the Ftrl-proximal scheme. SparseApplyMomentum(...): Update relevant entries in 'var' and 'accum' according to the momentum scheme. SparseApplyProximalAdagrad(...): Sparse update entries in 'var' and 'accum' according to FOBOS algorithm. SparseApplyProximalGradientDescent(...): Sparse update '*var' as FOBOS algorithm with fixed learning rate. SparseApplyRMSProp(...): Update '*var' according to the RMSProp algorithm. SparseBincount(...): Counts the number of occurrences of each value in an integer array. SparseConcat(...): Concatenates a list of SparseTensor along the specified dimension. SparseConditionalAccumulator(...): A conditional accumulator for aggregating sparse gradients. SparseCountSparseOutput(...): Performs sparse-output bin counting for a sparse tensor input. SparseCross(...): Generates sparse cross from a list of sparse and dense tensors. SparseCrossHashed(...): Generates sparse cross from a list of sparse and dense tensors. SparseCrossV2(...): Generates sparse cross from a list of sparse and dense tensors. SparseDenseCwiseAdd(...): Adds up a SparseTensor and a dense Tensor. SparseDenseCwiseDiv(...): Component-wise divides a SparseTensor by a dense Tensor. SparseDenseCwiseMul(...): Component-wise multiplies a SparseTensor by a dense Tensor. 
SparseFillEmptyRows(...): Fills empty rows in the input 2-D SparseTensor with a default value. SparseFillEmptyRowsGrad(...): The gradient of SparseFillEmptyRows. SparseMatMul(...): Multiply matrix "a" by matrix "b". SparseMatrixAdd(...): Sparse addition of two CSR matrices, C = alpha * A + beta * B. SparseMatrixMatMul(...): Matrix-multiplies a sparse matrix with a dense matrix. SparseMatrixMul(...): Element-wise multiplication of a sparse matrix with a dense tensor. SparseMatrixNNZ(...): Returns the number of nonzeroes of sparse_matrix. SparseMatrixOrderingAMD(...): Computes the Approximate Minimum Degree (AMD) ordering of input. SparseMatrixSoftmax(...): Calculates the softmax of a CSRSparseMatrix. SparseMatrixSoftmaxGrad(...): Calculates the gradient of the SparseMatrixSoftmax op. SparseMatrixSparseCholesky(...): Computes the sparse Cholesky decomposition of input. SparseMatrixSparseMatMul(...): Sparse-matrix-multiplies two CSR matrices a and b. SparseMatrixTranspose(...): Transposes the inner (matrix) dimensions of a CSRSparseMatrix. SparseMatrixZeros(...): Creates an all-zeros CSRSparseMatrix with shape dense_shape. SparseReduceMax(...): Computes the max of elements across dimensions of a SparseTensor. SparseReduceMaxSparse(...): Computes the max of elements across dimensions of a SparseTensor. SparseReduceSum(...): Computes the sum of elements across dimensions of a SparseTensor. SparseReduceSumSparse(...): Computes the sum of elements across dimensions of a SparseTensor. SparseReorder(...): Reorders a SparseTensor into the canonical, row-major ordering. SparseReshape(...): Reshapes a SparseTensor to represent values in a new dense shape. SparseSegmentMean(...): Computes the mean along sparse segments of a tensor. SparseSegmentMeanGrad(...): Computes gradients for SparseSegmentMean. SparseSegmentMeanWithNumSegments(...): Computes the mean along sparse segments of a tensor. SparseSegmentSqrtN(...): Computes the sum along sparse segments of a tensor divided by the sqrt of N. SparseSegmentSqrtNGrad(...): Computes gradients for SparseSegmentSqrtN. SparseSegmentSqrtNWithNumSegments(...): Computes the sum along sparse segments of a tensor divided by the sqrt of N. SparseSegmentSum(...): Computes the sum along sparse segments of a tensor. SparseSegmentSumWithNumSegments(...): Computes the sum along sparse segments of a tensor. SparseSlice(...): Slice a SparseTensor based on the start and size. SparseSliceGrad(...): The gradient operator for the SparseSlice op. SparseSoftmax(...): Applies softmax to a batched N-D SparseTensor. SparseSoftmaxCrossEntropyWithLogits(...): Computes softmax cross entropy cost and gradients to backpropagate. SparseSparseMaximum(...): Returns the element-wise max of two SparseTensors. SparseSparseMinimum(...): Returns the element-wise min of two SparseTensors. SparseSplit(...): Split a SparseTensor into num_split tensors along one dimension. SparseTensorDenseAdd(...): Adds up a SparseTensor and a dense Tensor, producing a dense Tensor. SparseTensorDenseMatMul(...): Multiply SparseTensor (of rank 2) "A" by dense matrix "B". SparseTensorSliceDataset(...): Creates a dataset that splits a SparseTensor into elements row-wise. SparseTensorToCSRSparseMatrix(...): Converts a SparseTensor to a (possibly batched) CSRSparseMatrix. SparseToDense(...): Converts a sparse representation into a dense tensor. SparseToSparseSetOperation(...): Applies set operation along last dimension of 2 SparseTensor inputs. Spence(...) 
Split(...): Splits a tensor into num_split tensors along one dimension. SplitV(...): Splits a tensor into num_split tensors along one dimension. SqlDataset(...): Creates a dataset that executes a SQL query and emits rows of the result set. Sqrt(...): Computes square root of x element-wise. SqrtGrad(...): Computes the gradient for the sqrt of x wrt its input. Square(...): Computes square of x element-wise. SquaredDifference(...): Returns conj(x - y)(x - y) element-wise. Squeeze(...): Removes dimensions of size 1 from the shape of a tensor. Stack(...): Deprecated, use StackV2. StackClose(...): Deprecated, use StackCloseV2. StackCloseV2(...): Delete the stack from its resource container. StackPop(...): Deprecated, use StackPopV2. StackPopV2(...): Pop the element at the top of the stack. StackPush(...): Deprecated, use StackPushV2. StackPushV2(...): Push an element onto the stack. StackV2(...): A stack that produces elements in first-in last-out order. Stage(...): Stage values similar to a lightweight Enqueue. StageClear(...): Op removes all elements in the underlying container. StagePeek(...): Op peeks at the values at the specified index. If the underlying container does not contain sufficient elements, this op will block until it does. StageSize(...): Op returns the number of elements in the underlying container. StatefulPartitionedCall(...): Returns f(inputs), where f's body is placed and partitioned. StatefulRandomBinomial(...) StatefulStandardNormal(...): Outputs random values from a normal distribution. This op is deprecated in favor of op 'StatefulStandardNormalV2'. StatefulStandardNormalV2(...): Outputs random values from a normal distribution. StatefulTruncatedNormal(...): Outputs random values from a truncated normal distribution. StatefulUniform(...): Outputs random values from a uniform distribution. StatefulUniformFullInt(...): Outputs random integers from a uniform distribution. StatefulUniformInt(...): Outputs random integers from a uniform distribution. StatelessCase(...): An n-way switch statement which calls a single branch function. StatelessIf(...): output = cond ? then_branch(input) : else_branch(input) StatelessMultinomial(...): Draws samples from a multinomial distribution. StatelessParameterizedTruncatedNormal(...) StatelessRandomBinomial(...): Outputs deterministic pseudorandom numbers from a binomial distribution. StatelessRandomGammaV2(...): Outputs deterministic pseudorandom numbers from a gamma distribution. StatelessRandomGetKeyCounterAlg(...): Picks the best algorithm based on device, and scrambles seed into key and counter. StatelessRandomNormal(...): Outputs deterministic pseudorandom values from a normal distribution. StatelessRandomNormalV2(...): Outputs deterministic pseudorandom values from a normal distribution. StatelessRandomPoisson(...): Outputs deterministic pseudorandom numbers from a Poisson distribution. StatelessRandomUniform(...): Outputs deterministic pseudorandom values from a uniform distribution. StatelessRandomUniformFullInt(...): Outputs deterministic pseudorandom integers from a uniform distribution. StatelessRandomUniformFullIntV2(...): Outputs deterministic pseudorandom integers from a uniform distribution. StatelessRandomUniformInt(...): Outputs deterministic pseudorandom integers from a uniform distribution. StatelessRandomUniformIntV2(...): Outputs deterministic pseudorandom integers from a uniform distribution. StatelessRandomUniformV2(...): Outputs deterministic pseudorandom values from a uniform distribution. 
StatelessSampleDistortedBoundingBox(...): Generate a randomly distorted bounding box for an image deterministically. StatelessTruncatedNormal(...): Outputs deterministic pseudorandom values from a truncated normal distribution. StatelessTruncatedNormalV2(...): Outputs deterministic pseudorandom values from a truncated normal distribution. StatelessWhile(...): output = input; While (Cond(output)) { output = Body(output) } StaticRegexFullMatch(...): Check if the input matches the regex pattern. StaticRegexReplace(...): Replaces the match of pattern in input with rewrite. StatsAggregatorHandle(...): Creates a statistics manager resource. StatsAggregatorHandleV2(...) StatsAggregatorSetSummaryWriter(...): Set a summary_writer_interface to record statistics using given stats_aggregator. StatsAggregatorSummary(...): Produces a summary of any statistics recorded by the given statistics manager. StopGradient(...): Stops gradient computation. StridedSlice(...): Return a strided slice from input. StridedSliceAssign(...): Assign value to the sliced l-value reference of ref. StridedSliceGrad(...): Returns the gradient of StridedSlice. StringFormat(...): Formats a string template using a list of tensors. StringJoin(...): Joins the strings in the given list of string tensors into one tensor, using the given separator (default is an empty separator). StringLength(...): String lengths of input. StringLower(...): Converts all uppercase characters into their respective lowercase replacements. StringNGrams(...): Creates ngrams from ragged string data. StringSplit(...): Split elements of input based on delimiter into a SparseTensor. StringSplitV2(...): Split elements of source based on sep into a SparseTensor. StringStrip(...): Strip leading and trailing whitespaces from the Tensor. StringToHashBucket(...): Converts each string in the input Tensor to its hash mod by a number of buckets. StringToHashBucketFast(...): Converts each string in the input Tensor to its hash mod by a number of buckets. StringToHashBucketStrong(...): Converts each string in the input Tensor to its hash mod by a number of buckets. StringToNumber(...): Converts each string in the input Tensor to the specified numeric type. StringUpper(...): Converts all lowercase characters into their respective uppercase replacements. Sub(...): Returns x - y element-wise. Substr(...): Return substrings from Tensor of strings. Sum(...): Computes the sum of elements across dimensions of a tensor. SummaryWriter(...) Svd(...): Computes the singular value decompositions of one or more matrices. Switch(...): Forwards data to the output port determined by pred. SymbolicGradient(...): Computes the gradient function for function f via backpropagation. TFRecordDataset(...): Creates a dataset that emits the records from one or more TFRecord files. TFRecordReader(...): A Reader that outputs the records from a TensorFlow Records file. TFRecordReaderV2(...): A Reader that outputs the records from a TensorFlow Records file. TPUCompilationResult(...): Returns the result of a TPU compilation. TPUEmbeddingActivations(...): An op enabling differentiation of TPU Embeddings. TPUOrdinalSelector(...): A TPU core selector Op. TPUPartitionedCall(...): Calls a function placed on a specified TPU device. TPUReplicateMetadata(...): Metadata indicating how the TPU computation should be replicated. TPUReplicatedInput(...): Connects N inputs to an N-way replicated TPU computation. TPUReplicatedOutput(...): Connects N outputs from an N-way replicated TPU computation. 
TakeDataset(...): Creates a dataset that contains count elements from the input_dataset. TakeManySparseFromTensorsMap(...): Read SparseTensors from a SparseTensorsMap and concatenate them. TakeWhileDataset(...): Creates a dataset that stops iteration when predicate is false. Tan(...): Computes tan of x element-wise. Tanh(...): Computes hyperbolic tangent of x element-wise. TanhGrad(...): Computes the gradient for the tanh of x wrt its input. TemporaryVariable(...): Returns a tensor that may be mutated, but only persists within a single step. TensorArray(...) TensorArrayClose(...) TensorArrayCloseV2(...): Deprecated. Use TensorArrayCloseV3. TensorArrayCloseV3(...): Delete the TensorArray from its resource container. TensorArrayConcat(...) TensorArrayConcatV2(...): Deprecated. Use TensorArrayConcatV3. TensorArrayConcatV3(...): Concat the elements from the TensorArray into one value. TensorArrayGather(...) TensorArrayGatherV2(...): Deprecated. Use TensorArrayGatherV3. TensorArrayGatherV3(...): Gather specific elements from the TensorArray into output value. TensorArrayGrad(...) TensorArrayGradV2(...): Deprecated. Use TensorArrayGradV3. TensorArrayGradV3(...): Creates a TensorArray for storing the gradients of values in the given handle. TensorArrayGradWithShape(...): Creates a TensorArray for storing multiple gradients of values in the given handle. TensorArrayPack(...) TensorArrayRead(...) TensorArrayReadV2(...): Deprecated. Use TensorArrayReadV3. TensorArrayReadV3(...): Read an element from the TensorArray into output value. TensorArrayScatter(...) TensorArrayScatterV2(...): Deprecated. Use TensorArrayScatterV3. TensorArrayScatterV3(...): Scatter the data from the input value into specific TensorArray elements. TensorArraySize(...) TensorArraySizeV2(...): Deprecated. Use TensorArraySizeV3. TensorArraySizeV3(...): Get the current size of the TensorArray. TensorArraySplit(...) TensorArraySplitV2(...): Deprecated. Use TensorArraySplitV3. TensorArraySplitV3(...): Split the data from the input value into TensorArray elements. TensorArrayUnpack(...) TensorArrayV2(...): Deprecated. Use TensorArrayV3. TensorArrayV3(...): An array of Tensors of given size. TensorArrayWrite(...) TensorArrayWriteV2(...): Deprecated. Use TensorArrayWriteV3. TensorArrayWriteV3(...): Push an element onto the tensor_array. TensorDataset(...): Creates a dataset that emits components as a tuple of tensors once. TensorListConcat(...): Concats all tensors in the list along the 0th dimension. TensorListConcatLists(...) TensorListConcatV2(...): Concats all tensors in the list along the 0th dimension. TensorListElementShape(...): The shape of the elements of the given list, as a tensor. TensorListFromTensor(...): Creates a TensorList which, when stacked, has the value of tensor. TensorListGather(...): Creates a Tensor by indexing into the TensorList. TensorListGetItem(...): Returns the item in the list with the given index. TensorListLength(...): Returns the number of tensors in the input tensor list. TensorListPopBack(...): Returns the last element of the input list as well as a list with all but that element. TensorListPushBack(...): Returns a list which has the passed-in Tensor as last element and the other elements of the given list in input_handle. TensorListPushBackBatch(...) TensorListReserve(...): List of the given size with empty elements. TensorListResize(...): Resizes the list. TensorListScatter(...): Creates a TensorList by indexing into a Tensor. 
TensorListScatterIntoExistingList(...): Scatters tensor at indices in an input list. TensorListScatterV2(...): Creates a TensorList by indexing into a Tensor. TensorListSetItem(...): Sets the index-th position of the list to contain the given tensor. TensorListSplit(...): Splits a tensor into a list. TensorListStack(...): Stacks all tensors in the list. TensorScatterAdd(...): Adds sparse updates to an existing tensor according to indices. TensorScatterMax(...) TensorScatterMin(...) TensorScatterSub(...): Subtracts sparse updates from an existing tensor according to indices. TensorScatterUpdate(...): Scatter updates into an existing tensor according to indices. TensorSliceDataset(...): Creates a dataset that emits each dim-0 slice of components once. TensorStridedSliceUpdate(...): Assign value to the sliced l-value reference of input. TensorSummary(...): Outputs a Summary protocol buffer with a tensor. TensorSummaryV2(...): Outputs a Summary protocol buffer with a tensor and per-plugin data. TextLineDataset(...): Creates a dataset that emits the lines of one or more text files. TextLineReader(...): A Reader that outputs the lines of a file delimited by '\n'. TextLineReaderV2(...): A Reader that outputs the lines of a file delimited by '\n'. ThreadPoolDataset(...): Creates a dataset that uses a custom thread pool to compute input_dataset. ThreadPoolHandle(...): Creates a custom thread pool handle for use by ThreadPoolDataset. ThreadUnsafeUnigramCandidateSampler(...): Generates labels for candidate sampling with a learned unigram distribution. Tile(...): Constructs a tensor by tiling a given tensor. TileGrad(...): Returns the gradient of Tile. Timestamp(...): Provides the time since epoch in seconds. ToBool(...): Converts a tensor to a scalar predicate. TopK(...): Finds values and indices of the k largest elements for the last dimension. TopKV2(...): Finds values and indices of the k largest elements for the last dimension. Transpose(...): Shuffle dimensions of x according to a permutation. TridiagonalMatMul(...): Calculate product with tridiagonal matrix. TridiagonalSolve(...): Solves tridiagonal systems of equations. TruncateDiv(...): Returns x / y element-wise for integer types. TruncateMod(...): Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. TruncatedNormal(...): Outputs random values from a truncated normal distribution. Unbatch(...): Reverses the operation of Batch for a single output Tensor. UnbatchDataset(...): A dataset that splits the elements of its input into multiple elements. UnbatchGrad(...): Gradient of Unbatch. UncompressElement(...): Uncompresses a compressed dataset element. UnicodeDecode(...): Decodes each string in input into a sequence of Unicode code points. UnicodeDecodeWithOffsets(...): Decodes each string in input into a sequence of Unicode code points. UnicodeEncode(...): Encode a tensor of ints into unicode strings. UnicodeScript(...): Determine the script codes of a given tensor of Unicode integer code points. UnicodeTranscode(...): Transcode the input text from a source encoding to a destination encoding. UniformCandidateSampler(...): Generates labels for candidate sampling with a uniform distribution. Unique(...): Finds unique elements in a 1-D tensor. UniqueDataset(...): Creates a dataset that contains the unique elements of input_dataset. UniqueV2(...): Finds unique elements along an axis of a tensor. UniqueWithCounts(...): Finds unique elements in a 1-D tensor. 
UniqueWithCountsV2(...): Finds unique elements along an axis of a tensor. Unpack(...): Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors. UnravelIndex(...): Converts an array of flat indices into a tuple of coordinate arrays. UnsortedSegmentJoin(...): Joins the elements of inputs based on segment_ids. UnsortedSegmentMax(...): Computes the maximum along segments of a tensor. UnsortedSegmentMin(...): Computes the minimum along segments of a tensor. UnsortedSegmentProd(...): Computes the product along segments of a tensor. UnsortedSegmentSum(...): Computes the sum along segments of a tensor. Unstage(...): Op is similar to a lightweight Dequeue. UnwrapDatasetVariant(...) UpperBound(...): Applies upper_bound(sorted_search_values, values) along each row. VarHandleOp(...): Creates a handle to a Variable resource. VarIsInitializedOp(...): Checks whether a resource handle-based variable has been initialized. Variable(...): Use VariableV2 instead. VariableShape(...): Returns the shape of the variable pointed to by resource. VariableV2(...): Holds state in the form of a tensor that persists across steps. Where(...): Returns locations of nonzero / true values in a tensor. While(...): output = input; While (Cond(output)) { output = Body(output) } WholeFileReader(...): A Reader that outputs the entire contents of a file as a value. WholeFileReaderV2(...): A Reader that outputs the entire contents of a file as a value. WindowDataset(...): Combines (nests of) input elements into a dataset of (nests of) windows. WorkerHeartbeat(...): Worker heartbeat op. WrapDatasetVariant(...) WriteAudioSummary(...): Writes an audio summary. WriteFile(...): Writes contents to the file at input filename. Creates the file and recursively creates the directory if it does not exist. WriteGraphSummary(...): Writes a graph summary. WriteHistogramSummary(...): Writes a histogram summary. WriteImageSummary(...): Writes an image summary. WriteRawProtoSummary(...): Writes a serialized proto summary. WriteScalarSummary(...): Writes a scalar summary. WriteSummary(...): Writes a tensor summary. Xdivy(...): Returns 0 if x == 0, and x / y otherwise, elementwise. Xlog1py(...): Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise. Xlogy(...): Returns 0 if x == 0, and x * log(y) otherwise, elementwise. ZerosLike(...): Returns a tensor of zeros with the same shape and type as x. Zeta(...): Compute the Hurwitz zeta function \(\zeta(x, q)\). ZipDataset(...): Creates a dataset that zips together input_datasets.
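Each op above is a thin wrapper around the corresponding graph operation and accepts keyword arguments only. A minimal usage sketch (illustrative; Relu stands in for any op in the list, and the module is also reachable as tf.raw_ops):
import tensorflow as tf
x = tf.constant([-1.0, 2.0])
y = tf.raw_ops.Relu(features=x)  # [0.0, 2.0]; passing arguments positionally raises a TypeError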
tensorflow.compat.v1.raw_ops
tf.compat.v1.ReaderBase Base class for different Reader types that produce a record every step. tf.compat.v1.ReaderBase( reader_ref, supports_serialize=False ) Conceptually, Readers convert string 'work units' into records (key, value pairs). Typically the 'work units' are filenames and the records are extracted from the contents of those files. We want a single record produced per step, but a work unit can correspond to many records. Therefore we introduce some decoupling using a queue. The queue contains the work units, and the Reader dequeues from the queue when it is asked to produce a record (via Read()) but has finished the last work unit. Args reader_ref The operation that implements the reader. supports_serialize True if the reader implementation can serialize its state. Raises RuntimeError If eager execution is enabled. Eager Compatibility Readers are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes reader_ref Op that implements the reader. supports_serialize Whether the Reader implementation can serialize its state. Methods num_records_produced View source num_records_produced( name=None ) Returns the number of records this reader has produced. This is the same as the number of Read executions that have succeeded. Args name A name for the operation (optional). Returns An int64 Tensor. num_work_units_completed View source num_work_units_completed( name=None ) Returns the number of work units this reader has finished processing. Args name A name for the operation (optional). Returns An int64 Tensor. read View source read( queue, name=None ) Returns the next record (key, value) pair produced by a reader. Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. name A name for the operation (optional). Returns A tuple of Tensors (key, value). key A string scalar Tensor. value A string scalar Tensor. read_up_to View source read_up_to( queue, num_records, name=None ) Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return fewer than num_records even before the last batch. Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records Number of records to read. name A name for the operation (optional). Returns A tuple of Tensors (keys, values). keys A 1-D string Tensor. values A 1-D string Tensor. reset View source reset( name=None ) Restore a reader to its initial clean state. Args name A name for the operation (optional). Returns The created Operation. restore_state View source restore_state( state, name=None ) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args state A string Tensor. Result of a SerializeState of a Reader with matching type. name A name for the operation (optional). Returns The created Operation. serialize_state View source serialize_state( name=None ) Produce a string tensor that encodes the state of a reader. Not all Readers support being serialized, so this can produce an Unimplemented error. Args name A name for the operation (optional). Returns A string Tensor.
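A minimal graph-mode sketch of the queue/Reader pattern described above, using TFRecordReader, a concrete ReaderBase subclass (illustrative only: data.tfrecords is a placeholder filename, and eager execution must be disabled since Readers do not support it):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
filename_queue = tf.train.string_input_producer(["data.tfrecords"])  # queue of work units (filenames)
reader = tf.TFRecordReader()  # a ReaderBase subclass
key, value = reader.read(filename_queue)  # one record per Read() execution
with tf.Session() as sess:
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
  print(sess.run([key, value]))  # a (record key, serialized record) pair
  coord.request_stop()
  coord.join(threads)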
tensorflow.compat.v1.readerbase
tf.compat.v1.reduce_all Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_all tf.compat.v1.reduce_all( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. For example: x = tf.constant([[True, True], [False, False]]) tf.reduce_all(x) # False tf.reduce_all(x, 0) # [False, False] tf.reduce_all(x, 1) # [True, False] Args input_tensor The boolean tensor to reduce. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.all
tensorflow.compat.v1.reduce_all
tf.compat.v1.reduce_any Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_any tf.compat.v1.reduce_any( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. For example: x = tf.constant([[True, True], [False, False]]) tf.reduce_any(x) # True tf.reduce_any(x, 0) # [True, True] tf.reduce_any(x, 1) # [True, False] Args input_tensor The boolean tensor to reduce. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.any
tensorflow.compat.v1.reduce_any
tf.compat.v1.reduce_join Joins all strings into a single string, or joins along an axis. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.strings.reduce_join tf.compat.v1.reduce_join( inputs, axis=None, keep_dims=None, separator='', name=None, reduction_indices=None, keepdims=None ) tf.strings.reduce_join([['abc','123'], ['def','456']]).numpy() b'abc123def456' tf.strings.reduce_join([['abc','123'], ['def','456']], axis=-1).numpy() array([b'abc123', b'def456'], dtype=object) tf.strings.reduce_join([['abc','123'], ['def','456']], axis=-1, separator=" ").numpy() array([b'abc 123', b'def 456'], dtype=object) Args inputs A tf.string tensor. axis Which axis to join along. The default behavior is to join all elements, producing a scalar. keepdims If true, retains reduced dimensions with length 1. separator A string added between each string being joined. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns A tf.string tensor.
tensorflow.compat.v1.reduce_join
tf.compat.v1.reduce_logsumexp Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_logsumexp tf.compat.v1.reduce_logsumexp( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned. This function is more numerically stable than log(sum(exp(input))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs. For example: x = tf.constant([[0., 0., 0.], [0., 0., 0.]]) tf.reduce_logsumexp(x) # log(6) tf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)] tf.reduce_logsumexp(x, 1) # [log(3), log(3)] tf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]] tf.reduce_logsumexp(x, [0, 1]) # log(6) Args input_tensor The tensor to reduce. Should have numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor.
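A quick check of the stability claim (illustrative values, assuming float32): x = tf.constant([1000., 1000.]) tf.reduce_logsumexp(x) # ~1000.6931, i.e. log(2) + 1000 tf.math.log(tf.reduce_sum(tf.exp(x))) # inf, because exp(1000) overflows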
tensorflow.compat.v1.reduce_logsumexp
tf.compat.v1.reduce_max Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_max tf.compat.v1.reduce_max( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. Args input_tensor The tensor to reduce. Should have real numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.max
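For example, an illustrative sketch in the style of the reduce_any example above (the values are made up for this sketch):
x = tf.constant([[1., 4.], [3., 2.]])
tf.reduce_max(x)     # 4.0
tf.reduce_max(x, 0)  # [3., 4.]
tf.reduce_max(x, 1)  # [4., 3.]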
tensorflow.compat.v1.reduce_max
tf.compat.v1.reduce_mean Computes the mean of elements across dimensions of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_mean tf.compat.v1.reduce_mean( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Reduces input_tensor along the dimensions given in axis by computing the mean of elements across the dimensions in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. For example: x = tf.constant([[1., 1.], [2., 2.]]) tf.reduce_mean(x) <tf.Tensor: shape=(), dtype=float32, numpy=1.5> tf.reduce_mean(x, 0) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.5, 1.5], dtype=float32)> tf.reduce_mean(x, 1) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)> Args input_tensor The tensor to reduce. Should have numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.mean. Please note that np.mean has a dtype parameter that could be used to specify the output type. By default this is dtype=float64. On the other hand, tf.reduce_mean has an aggressive type inference from input_tensor, for example: x = tf.constant([1, 0, 1, 0]) tf.reduce_mean(x) <tf.Tensor: shape=(), dtype=int32, numpy=0> y = tf.constant([1., 0., 1., 0.]) tf.reduce_mean(y) <tf.Tensor: shape=(), dtype=float32, numpy=0.5>
tensorflow.compat.v1.reduce_mean
tf.compat.v1.reduce_min Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_min tf.compat.v1.reduce_min( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. Args input_tensor The tensor to reduce. Should have real numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.min
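For example, an illustrative sketch in the style of the reduce_any example above (the values are made up for this sketch):
x = tf.constant([[1., 4.], [3., 2.]])
tf.reduce_min(x)     # 1.0
tf.reduce_min(x, 0)  # [1., 2.]
tf.reduce_min(x, 1)  # [1., 2.]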
tensorflow.compat.v1.reduce_min
tf.compat.v1.reduce_prod Computes the product of elements across dimensions of a tensor. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_prod tf.compat.v1.reduce_prod( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. Args input_tensor The tensor to reduce. Should have numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.prod
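For example, an illustrative sketch in the style of the reduce_any example above (the values are made up for this sketch):
x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_prod(x)     # 24.0
tf.reduce_prod(x, 0)  # [3., 8.]
tf.reduce_prod(x, 1)  # [2., 12.]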
tensorflow.compat.v1.reduce_prod
tf.compat.v1.reduce_sum Computes the sum of elements across dimensions of a tensor. (deprecated arguments) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.reduce_sum tf.compat.v1.reduce_sum( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. For example: x = tf.constant([[1, 1, 1], [1, 1, 1]]) tf.reduce_sum(x) # 6 tf.reduce_sum(x, 0) # [2, 2, 2] tf.reduce_sum(x, 1) # [3, 3] tf.reduce_sum(x, 1, keepdims=True) # [[3], [3]] tf.reduce_sum(x, [0, 1]) # 6 Args input_tensor The tensor to reduce. Should have numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor, of the same dtype as the input_tensor. Numpy Compatibility Equivalent to np.sum, apart from the fact that numpy upcasts uint8 and int32 to int64 while tensorflow returns the same dtype as the input.
tensorflow.compat.v1.reduce_sum
tf.compat.v1.report_uninitialized_variables Adds ops to list the names of uninitialized variables. tf.compat.v1.report_uninitialized_variables( var_list=None, name='report_uninitialized_variables' ) When run, it returns a 1-D tensor containing the names of uninitialized variables if there are any, or an empty array if there are none. Args var_list List of Variable objects to check. Defaults to the value of global_variables() + local_variables(). name Optional name of the Operation. Returns A 1-D tensor containing names of the uninitialized variables, or an empty 1-D tensor if there are no variables or no uninitialized variables. Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
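A minimal graph-mode sketch (the variable name "v" is illustrative):
v = tf.compat.v1.get_variable("v", shape=[2])
uninitialized = tf.compat.v1.report_uninitialized_variables()
with tf.compat.v1.Session() as sess:
    print(sess.run(uninitialized))  # [b'v'] -- v is not yet initialized
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(uninitialized))  # [] -- nothing left to initialize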
tensorflow.compat.v1.report_uninitialized_variables
tf.compat.v1.reset_default_graph Clears the default graph stack and resets the global default graph. tf.compat.v1.reset_default_graph() Note: The default graph is a property of the current thread. This function applies only to the current thread. Calling this function while a tf.compat.v1.Session or tf.compat.v1.InteractiveSession is active will result in undefined behavior. Using any previously created tf.Operation or tf.Tensor objects after calling this function will result in undefined behavior. Raises: AssertionError: If this function is called within a nested graph.
tensorflow.compat.v1.reset_default_graph
Module: tf.compat.v1.resource_loader Resource management library. Functions get_data_files_path(...): Get a direct path to the data files colocated with the script. get_path_to_datafile(...): Get the path to the specified file in the data dependencies. get_root_dir_with_all_resources(...): Get a root directory containing all the data attributes in the build rule. load_resource(...): Load the resource at given path, where path is relative to tensorflow/. readahead_file_path(...): Readahead files not implemented; simply returns given path.
tensorflow.compat.v1.resource_loader
tf.compat.v1.resource_loader.get_data_files_path Get a direct path to the data files colocated with the script. tf.compat.v1.resource_loader.get_data_files_path() Returns The directory where files specified in data attribute of py_test and py_binary are stored.
tensorflow.compat.v1.resource_loader.get_data_files_path
tf.compat.v1.resource_loader.get_path_to_datafile Get the path to the specified file in the data dependencies. tf.compat.v1.resource_loader.get_path_to_datafile( path ) The path is relative to tensorflow/. Args path A string resource path relative to tensorflow/. Returns The path to the specified file present in the data attribute of py_test or py_binary. Raises IOError If the path is not found, or the resource can't be opened.
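An illustrative sketch; "data/vocab.txt" is a hypothetical data dependency declared in the build rule:
path = tf.compat.v1.resource_loader.get_path_to_datafile("data/vocab.txt")
with open(path) as f:
    vocab = f.read().splitlines()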
tensorflow.compat.v1.resource_loader.get_path_to_datafile
tf.compat.v1.resource_loader.get_root_dir_with_all_resources Get a root directory containing all the data attributes in the build rule. tf.compat.v1.resource_loader.get_root_dir_with_all_resources() Returns The path to the specified file present in the data attribute of py_test or py_binary. Falls back to returning the same as get_data_files_path if it fails to detect a bazel runfiles directory.
tensorflow.compat.v1.resource_loader.get_root_dir_with_all_resources
tf.compat.v1.resource_loader.load_resource Load the resource at given path, where path is relative to tensorflow/. tf.compat.v1.resource_loader.load_resource( path ) Args path A string resource path relative to tensorflow/. Returns The contents of that resource. Raises IOError If the path is not found, or the resource can't be opened.
tensorflow.compat.v1.resource_loader.load_resource
tf.compat.v1.resource_loader.readahead_file_path Readahead files not implemented; simply returns given path. tf.compat.v1.resource_loader.readahead_file_path( path, readahead='128M' )
tensorflow.compat.v1.resource_loader.readahead_file_path
tf.compat.v1.resource_variables_enabled Returns True if resource variables are enabled. tf.compat.v1.resource_variables_enabled() Resource variables are improved versions of TensorFlow variables with a well-defined memory model. Accessing a resource variable reads its value, and all ops which access a specific read value of the variable are guaranteed to see the same value for that tensor. Writes which happen after a read (by having a control or data dependency on the read) are guaranteed not to affect the value of the read tensor, and similarly writes which happen before a read are guaranteed to affect the value. No guarantees are made about unordered read/write pairs. Calling tf.compat.v1.enable_resource_variables() lets you opt in to this TensorFlow 2.0 feature.
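A minimal sketch (whether resource variables are already enabled depends on the TensorFlow version and on prior enable/disable calls):
tf.compat.v1.enable_resource_variables()
assert tf.compat.v1.resource_variables_enabled()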
tensorflow.compat.v1.resource_variables_enabled
tf.compat.v1.reverse_sequence Reverses variable length slices. (deprecated arguments) tf.compat.v1.reverse_sequence( input, seq_lengths, seq_axis=None, batch_axis=None, name=None, seq_dim=None, batch_dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (seq_dim, batch_dim). They will be removed in a future version. Instructions for updating: seq_dim is deprecated, use seq_axis instead; batch_dim is deprecated, use batch_axis instead. This op first slices input along the dimension batch_axis, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_axis. The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_axis], and seq_lengths must be a vector of length input.dims[batch_axis]. The output slice i along dimension batch_axis is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_axis reversed. Example usage: seq_lengths = [7, 2, 3, 5] input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0], [1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]] output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0) output <tf.Tensor: shape=(4, 8), dtype=int32, numpy= array([[0, 0, 5, 4, 3, 2, 1, 0], [2, 1, 0, 0, 0, 0, 0, 0], [3, 2, 1, 4, 0, 0, 0, 0], [5, 4, 3, 2, 1, 6, 7, 8]], dtype=int32)> Args input A Tensor. The input to reverse. seq_lengths A Tensor. Must be one of the following types: int32, int64. 1-D with length input.dims(batch_axis) and max(seq_lengths) <= input.dims(seq_axis). seq_axis An int. The dimension which is partially reversed. batch_axis An optional int. Defaults to 0. The dimension along which reversal is performed. name A name for the operation (optional). seq_dim Deprecated alias for seq_axis. batch_dim Deprecated alias for batch_axis. Returns A Tensor. Has the same type as input.
tensorflow.compat.v1.reverse_sequence
tf.compat.v1.RunMetadata A ProtocolMessage Attributes cost_graph CostGraphDef cost_graph function_graphs repeated FunctionGraphs function_graphs partition_graphs repeated GraphDef partition_graphs step_stats StepStats step_stats Child Classes class FunctionGraphs
tensorflow.compat.v1.runmetadata
tf.compat.v1.RunMetadata.FunctionGraphs A ProtocolMessage Attributes partition_graphs repeated GraphDef partition_graphs post_optimization_graph GraphDef post_optimization_graph pre_optimization_graph GraphDef pre_optimization_graph
tensorflow.compat.v1.runmetadata.functiongraphs
tf.compat.v1.RunOptions A ProtocolMessage Attributes debug_options DebugOptions debug_options experimental Experimental experimental inter_op_thread_pool int32 inter_op_thread_pool output_partition_graphs bool output_partition_graphs report_tensor_allocations_upon_oom bool report_tensor_allocations_upon_oom timeout_in_ms int64 timeout_in_ms trace_level TraceLevel trace_level Child Classes class Experimental Class Variables FULL_TRACE 3 HARDWARE_TRACE 2 NO_TRACE 0 SOFTWARE_TRACE 1 TraceLevel
tensorflow.compat.v1.runoptions
tf.compat.v1.RunOptions.Experimental A ProtocolMessage Attributes collective_graph_key int64 collective_graph_key run_handler_pool_options RunHandlerPoolOptions run_handler_pool_options use_run_handler_pool bool use_run_handler_pool Child Classes class RunHandlerPoolOptions
tensorflow.compat.v1.runoptions.experimental
tf.compat.v1.RunOptions.Experimental.RunHandlerPoolOptions A ProtocolMessage Attributes priority int64 priority
tensorflow.compat.v1.runoptions.experimental.runhandlerpooloptions
Module: tf.compat.v1.saved_model Public API for tf.saved_model namespace. Modules builder module: SavedModel builder. constants module: Constants for SavedModel save and restore operations. experimental module: Public API for tf.saved_model.experimental namespace. loader module: Loader functionality for SavedModel with hermetic, language-neutral exports. main_op module: SavedModel main op. signature_constants module: Signature constants for SavedModel save and restore operations. signature_def_utils module: SignatureDef utility functions. tag_constants module: Common tags used for graphs in SavedModel. utils module: SavedModel utility functions. Classes class Asset: Represents a file asset to hermetically include in a SavedModel. class Builder: Builds the SavedModel protocol buffer and saves variables and assets. class SaveOptions: Options for saving to SavedModel. Functions build_signature_def(...): Utility function to build a SignatureDef protocol buffer. build_tensor_info(...): Utility function to build TensorInfo proto from a Tensor. (deprecated) classification_signature_def(...): Creates classification signature from given examples and predictions. contains_saved_model(...): Checks whether the provided export directory could contain a SavedModel. get_tensor_from_tensor_info(...): Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated) is_valid_signature(...): Determine whether a SignatureDef can be served by TensorFlow Serving. load(...): Loads the model from a SavedModel as specified by tags. (deprecated) load_v2(...): Load a SavedModel from export_dir. main_op_with_restore(...): Returns a main op to init variables, tables and restore the graph. (deprecated) maybe_saved_model_directory(...): Checks whether the provided export directory could contain a SavedModel. predict_signature_def(...): Creates prediction signature from given inputs and outputs. regression_signature_def(...): Creates regression signature from given examples and predictions. save(...): Exports the Trackable object obj to SavedModel format. simple_save(...): Convenience function to build a SavedModel suitable for serving. (deprecated) Other Members ASSETS_DIRECTORY 'assets' ASSETS_KEY 'saved_model_assets' CLASSIFY_INPUTS 'inputs' CLASSIFY_METHOD_NAME 'tensorflow/serving/classify' CLASSIFY_OUTPUT_CLASSES 'classes' CLASSIFY_OUTPUT_SCORES 'scores' DEBUG_DIRECTORY 'debug' DEBUG_INFO_FILENAME_PB 'saved_model_debug_info.pb' DEFAULT_SERVING_SIGNATURE_DEF_KEY 'serving_default' GPU 'gpu' LEGACY_INIT_OP_KEY 'legacy_init_op' MAIN_OP_KEY 'saved_model_main_op' PREDICT_INPUTS 'inputs' PREDICT_METHOD_NAME 'tensorflow/serving/predict' PREDICT_OUTPUTS 'outputs' REGRESS_INPUTS 'inputs' REGRESS_METHOD_NAME 'tensorflow/serving/regress' REGRESS_OUTPUTS 'outputs' SAVED_MODEL_FILENAME_PB 'saved_model.pb' SAVED_MODEL_FILENAME_PBTXT 'saved_model.pbtxt' SAVED_MODEL_SCHEMA_VERSION 1 SERVING 'serve' TPU 'tpu' TRAINING 'train' VARIABLES_DIRECTORY 'variables' VARIABLES_FILENAME 'variables'
tensorflow.compat.v1.saved_model
Module: tf.compat.v1.saved_model.builder SavedModel builder. Builds a SavedModel that can be saved to storage, is language neutral, and enables systems to produce, consume, or transform TensorFlow Models. Classes class SavedModelBuilder: Builds the SavedModel protocol buffer and saves variables and assets.
tensorflow.compat.v1.saved_model.builder
tf.compat.v1.saved_model.build_signature_def Utility function to build a SignatureDef protocol buffer. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.signature_def_utils.build_signature_def tf.compat.v1.saved_model.build_signature_def( inputs=None, outputs=None, method_name=None ) Args inputs Inputs of the SignatureDef defined as a proto map of string to tensor info. outputs Outputs of the SignatureDef defined as a proto map of string to tensor info. method_name Method name of the SignatureDef as a string. Returns A SignatureDef protocol buffer constructed based on the supplied arguments.
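A hedged graph-mode sketch; the tensor names "x" and "y" are illustrative, and build_tensor_info requires graph mode (it raises under eager execution):
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name="x")
y = tf.identity(x, name="y")
signature = tf.compat.v1.saved_model.build_signature_def(
    inputs={"x": tf.compat.v1.saved_model.build_tensor_info(x)},
    outputs={"y": tf.compat.v1.saved_model.build_tensor_info(y)},
    method_name=tf.compat.v1.saved_model.signature_constants.PREDICT_METHOD_NAME)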
tensorflow.compat.v1.saved_model.build_signature_def
tf.compat.v1.saved_model.build_tensor_info Utility function to build TensorInfo proto from a Tensor. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.utils.build_tensor_info tf.compat.v1.saved_model.build_tensor_info( tensor ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info. Args tensor Tensor or SparseTensor whose name, dtype and shape are used to build the TensorInfo. For SparseTensors, the names of the three constituent Tensors are used. Returns A TensorInfo protocol buffer constructed based on the supplied argument. Raises RuntimeError If eager execution is enabled.
tensorflow.compat.v1.saved_model.build_tensor_info
tf.compat.v1.saved_model.classification_signature_def Creates classification signature from given examples and predictions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.signature_def_utils.classification_signature_def tf.compat.v1.saved_model.classification_signature_def( examples, classes, scores ) This function produces signatures intended for use with the TensorFlow Serving Classify API (tensorflow_serving/apis/prediction_service.proto), and so constrains the input and output types to those allowed by TensorFlow Serving. Args examples A string Tensor, expected to accept serialized tf.Examples. classes A string Tensor. Note that the ClassificationResponse message requires that class labels are strings, not integers or anything else. scores A float Tensor. Returns A classification-flavored signature_def. Raises ValueError If examples is None.
tensorflow.compat.v1.saved_model.classification_signature_def
Module: tf.compat.v1.saved_model.constants Constants for SavedModel save and restore operations. Other Members ASSETS_DIRECTORY 'assets' ASSETS_KEY 'saved_model_assets' DEBUG_DIRECTORY 'debug' DEBUG_INFO_FILENAME_PB 'saved_model_debug_info.pb' LEGACY_INIT_OP_KEY 'legacy_init_op' MAIN_OP_KEY 'saved_model_main_op' SAVED_MODEL_FILENAME_PB 'saved_model.pb' SAVED_MODEL_FILENAME_PBTXT 'saved_model.pbtxt' SAVED_MODEL_SCHEMA_VERSION 1 VARIABLES_DIRECTORY 'variables' VARIABLES_FILENAME 'variables'
tensorflow.compat.v1.saved_model.constants
tf.compat.v1.saved_model.contains_saved_model Checks whether the provided export directory could contain a SavedModel. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.loader.maybe_saved_model_directory, tf.compat.v1.saved_model.maybe_saved_model_directory tf.compat.v1.saved_model.contains_saved_model( export_dir ) Note that the method does not load any data by itself. If the method returns false, the export directory definitely does not contain a SavedModel. If the method returns true, the export directory may contain a SavedModel but provides no guarantee that it can be loaded. Args export_dir Absolute string path to possible export location. For example, '/my/foo/model'. Returns True if the export directory contains SavedModel files, False otherwise.
tensorflow.compat.v1.saved_model.contains_saved_model
Module: tf.compat.v1.saved_model.experimental Public API for tf.saved_model.experimental namespace. Classes class VariablePolicy: Enum defining options for variable handling when saving. Functions save(...): Exports the Trackable object obj to SavedModel format.
tensorflow.compat.v1.saved_model.experimental
tf.compat.v1.saved_model.get_tensor_from_tensor_info Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info tf.compat.v1.saved_model.get_tensor_from_tensor_info( tensor_info, graph=None, import_scope=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info. Args tensor_info A TensorInfo proto describing a Tensor or SparseTensor or CompositeTensor. graph The tf.Graph in which tensors are looked up. If None, the current default graph is used. import_scope If not None, names in tensor_info are prefixed with this string before lookup. Returns The Tensor or SparseTensor or CompositeTensor in graph described by tensor_info. Raises KeyError If tensor_info does not correspond to a tensor in graph. ValueError If tensor_info is malformed.
tensorflow.compat.v1.saved_model.get_tensor_from_tensor_info
tf.compat.v1.saved_model.is_valid_signature Determine whether a SignatureDef can be served by TensorFlow Serving. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.signature_def_utils.is_valid_signature tf.compat.v1.saved_model.is_valid_signature( signature_def )
tensorflow.compat.v1.saved_model.is_valid_signature
tf.compat.v1.saved_model.load Loads the model from a SavedModel as specified by tags. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.loader.load tf.compat.v1.saved_model.load( sess, tags, export_dir, import_scope=None, **saver_kwargs ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0. Args sess The TensorFlow session to restore the variables. tags Set of string tags to identify the required MetaGraphDef. These should correspond to the tags used when saving the variables using the SavedModel save() API. export_dir Directory in which the SavedModel protocol buffer and variables to be loaded are located. import_scope Optional string -- if specified, prepend this string followed by '/' to all loaded tensor names. This scope is applied to tensor instances loaded into the passed session, but it is not written through to the static MetaGraphDef protocol buffer that is returned. **saver_kwargs Optional keyword arguments passed through to Saver. Returns The MetaGraphDef protocol buffer loaded in the provided session. This can be used to further extract signature-defs, collection-defs, etc. Raises RuntimeError MetaGraphDef associated with the tags cannot be found.
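A minimal loading sketch, assuming export_dir points at a SavedModel that was saved with the serve tag (the path is illustrative):
export_dir = "/tmp/saved_model"  # illustrative path
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.compat.v1.saved_model.load(
        sess, [tf.compat.v1.saved_model.tag_constants.SERVING], export_dir)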
tensorflow.compat.v1.saved_model.load
Module: tf.compat.v1.saved_model.loader Loader functionality for SavedModel with hermetic, language-neutral exports. Load and restore capability for a SavedModel, which may include multiple meta graph defs. Each SavedModel is associated with a single checkpoint. Each meta graph def is saved with one or more tags, which are used to identify the exact meta graph def to load. The load operation requires the session in which to restore the graph definition and variables, the tags used to identify the meta graph def to load, and the location of the SavedModel. Upon a load, the subset of variables and assets supplied as part of the specific meta graph def will be restored into the supplied session. The values of the variables, though, will correspond to the saved values from the first meta graph added to the SavedModel using add_meta_graph_and_variables(...) in builder.py. Typical usage: ... builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir) with tf.compat.v1.Session(graph=tf.Graph()) as sess: ... builder.add_meta_graph_and_variables(sess, ["foo-tag"], signature_def_map=foo_signatures, assets_collection=foo_assets) ... with tf.compat.v1.Session(graph=tf.Graph()) as sess: ... builder.add_meta_graph(["bar-tag", "baz-tag"], assets_collection=bar_baz_assets) ... builder.save() ... with tf.compat.v1.Session(graph=tf.Graph()) as sess: tf.compat.v1.saved_model.loader.load(sess, ["foo-tag"], export_dir) ... Functions load(...): Loads the model from a SavedModel as specified by tags. (deprecated) maybe_saved_model_directory(...): Checks whether the provided export directory could contain a SavedModel.
tensorflow.compat.v1.saved_model.loader
Module: tf.compat.v1.saved_model.main_op SavedModel main op. Builds a main op that defines the sequence of ops to be run as part of the SavedModel load/restore operations. Functions main_op(...): Returns a main op to init variables and tables. (deprecated) main_op_with_restore(...): Returns a main op to init variables, tables and restore the graph. (deprecated)
tensorflow.compat.v1.saved_model.main_op
tf.compat.v1.saved_model.main_op.main_op Returns a main op to init variables and tables. (deprecated) tf.compat.v1.saved_model.main_op.main_op() Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op.main_op. Returns the main op, including the group of ops that initializes all variables, initializes local variables, and initializes all tables. Returns The set of ops to be run as part of the main op upon the load operation.
tensorflow.compat.v1.saved_model.main_op.main_op
tf.compat.v1.saved_model.main_op_with_restore Returns a main op to init variables, tables and restore the graph. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.main_op.main_op_with_restore tf.compat.v1.saved_model.main_op_with_restore( restore_op_name ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op_with_restore or tf.compat.v1.saved_model.main_op.main_op_with_restore. Returns the main op, including the group of ops that initializes all variables, initializes local variables, initializes all tables, and the op that restores the graph from the given restore op name. Args restore_op_name Name of the op to use to restore the graph. Returns The set of ops to be run as part of the main op upon the load operation.
tensorflow.compat.v1.saved_model.main_op_with_restore
tf.compat.v1.saved_model.predict_signature_def Creates prediction signature from given inputs and outputs. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.signature_def_utils.predict_signature_def tf.compat.v1.saved_model.predict_signature_def( inputs, outputs ) This function produces signatures intended for use with the TensorFlow Serving Predict API (tensorflow_serving/apis/prediction_service.proto). This API imposes no constraints on the input and output types. Args inputs dict of string to Tensor. outputs dict of string to Tensor. Returns A prediction-flavored signature_def. Raises ValueError If inputs or outputs is None.
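A minimal graph-mode sketch; the tensor names are illustrative:
x = tf.compat.v1.placeholder(tf.float32, shape=[None], name="x")
y = 2.0 * x
signature = tf.compat.v1.saved_model.predict_signature_def(
    inputs={"x": x}, outputs={"y": y})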
tensorflow.compat.v1.saved_model.predict_signature_def
tf.compat.v1.saved_model.regression_signature_def Creates regression signature from given examples and predictions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.saved_model.signature_def_utils.regression_signature_def tf.compat.v1.saved_model.regression_signature_def( examples, predictions ) This function produces signatures intended for use with the TensorFlow Serving Regress API (tensorflow_serving/apis/prediction_service.proto), and so constrains the input and output types to those allowed by TensorFlow Serving. Args examples A string Tensor, expected to accept serialized tf.Examples. predictions A float Tensor. Returns A regression-flavored signature_def. Raises ValueError If examples is None.
tensorflow.compat.v1.saved_model.regression_signature_def
Module: tf.compat.v1.saved_model.signature_constants Signature constants for SavedModel save and restore operations. Other Members CLASSIFY_INPUTS 'inputs' CLASSIFY_METHOD_NAME 'tensorflow/serving/classify' CLASSIFY_OUTPUT_CLASSES 'classes' CLASSIFY_OUTPUT_SCORES 'scores' DEFAULT_SERVING_SIGNATURE_DEF_KEY 'serving_default' PREDICT_INPUTS 'inputs' PREDICT_METHOD_NAME 'tensorflow/serving/predict' PREDICT_OUTPUTS 'outputs' REGRESS_INPUTS 'inputs' REGRESS_METHOD_NAME 'tensorflow/serving/regress' REGRESS_OUTPUTS 'outputs'
tensorflow.compat.v1.saved_model.signature_constants
Module: tf.compat.v1.saved_model.signature_def_utils SignatureDef utility functions. Utility functions for building and inspecting SignatureDef protos. Classes class MethodNameUpdater: Updates the method name(s) of the SavedModel stored in the given path. Functions build_signature_def(...): Utility function to build a SignatureDef protocol buffer. classification_signature_def(...): Creates classification signature from given examples and predictions. is_valid_signature(...): Determine whether a SignatureDef can be served by TensorFlow Serving. predict_signature_def(...): Creates prediction signature from given inputs and outputs. regression_signature_def(...): Creates regression signature from given examples and predictions.
tensorflow.compat.v1.saved_model.signature_def_utils
tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater Updates the method name(s) of the SavedModel stored in the given path. tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater( export_dir ) The MethodNameUpdater class provides the functionality to update the method name field in the signature_defs of the given SavedModel. For example, it can be used to replace the predict method_name with regress. Typical usages of the MethodNameUpdater ... updater = tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater( export_dir) # Update all signature_defs with key "foo" in all meta graph defs. updater.replace_method_name(signature_key="foo", method_name="regress") # Update a single signature_def with key "bar" in the meta graph def with # tags ["serve"] updater.replace_method_name(signature_key="bar", method_name="classify", tags="serve") updater.save(new_export_dir) Note: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.builder.MethodNameUpdater. Args export_dir Directory containing the SavedModel files. Raises IOError If the saved model file does not exist, or cannot be successfully parsed. Methods replace_method_name View source replace_method_name( signature_key, method_name, tags=None ) Replaces the method_name in the specified signature_def. This will match and replace multiple signature_defs iff tags is None (i.e. when multiple MetaGraphs have a signature_def with the same key). If tags is not None, this will only replace a single signature_def in the MetaGraph with matching tags. Args signature_key Key of the signature_def to be updated. method_name New method_name to replace the existing one. tags A tag or sequence of tags identifying the MetaGraph to update. If None, all meta graphs will be updated. Raises ValueError if signature_key or method_name are not defined or if no metagraphs were found with the associated tags or if no meta graph has a signature_def that matches signature_key. save View source save( new_export_dir=None ) Saves the updated SavedModel. Args new_export_dir Path where the updated SavedModel will be saved. If None, the input SavedModel will be overridden with the updates. Raises errors.OpError If there are errors during the file save operation.
tensorflow.compat.v1.saved_model.signature_def_utils.methodnameupdater
tf.compat.v1.saved_model.simple_save Convenience function to build a SavedModel suitable for serving. (deprecated) tf.compat.v1.saved_model.simple_save( session, export_dir, inputs, outputs, legacy_init_op=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.simple_save. In many common cases, saving models for serving will be as simple as: simple_save(session, export_dir, inputs={"x": x, "y": y}, outputs={"z": z}) Although in many cases it's not necessary to understand all of the many ways to configure a SavedModel, this method has a few practical implications: It will be treated as a graph for inference / serving (i.e. uses the tag saved_model.SERVING) The SavedModel will load in TensorFlow Serving and supports the Predict API. To use the Classify, Regress, or MultiInference APIs, please use either tf.Estimator or the lower level SavedModel APIs. Some TensorFlow ops depend on information on disk or other information called "assets". These are generally handled automatically by adding the assets to the GraphKeys.ASSET_FILEPATHS collection. Only assets in that collection are exported; if you need more custom behavior, you'll need to use the SavedModelBuilder. More information about SavedModel and signatures can be found here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md Args session The TensorFlow session from which to save the meta graph and variables. export_dir The path to which the SavedModel will be stored. inputs dict mapping string input names to tensors. These are added to the SignatureDef as the inputs. outputs dict mapping string output names to tensors. These are added to the SignatureDef as the outputs. legacy_init_op Legacy support for op or group of ops to execute after the restore op upon a load.
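A minimal end-to-end sketch, assuming graph mode; the variable, tensors, and export path are illustrative:
x = tf.compat.v1.placeholder(tf.float32, shape=[None], name="x")
w = tf.compat.v1.get_variable("w", initializer=3.0)
z = w * x
with tf.compat.v1.Session() as session:
    session.run(tf.compat.v1.global_variables_initializer())
    tf.compat.v1.saved_model.simple_save(
        session, "/tmp/simple_model", inputs={"x": x}, outputs={"z": z})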
tensorflow.compat.v1.saved_model.simple_save
Module: tf.compat.v1.saved_model.tag_constants Common tags used for graphs in SavedModel. Other Members GPU 'gpu' SERVING 'serve' TPU 'tpu' TRAINING 'train'
tensorflow.compat.v1.saved_model.tag_constants
Module: tf.compat.v1.saved_model.utils SavedModel utility functions. Utility functions to assist with setup and construction of the SavedModel proto. Functions build_tensor_info(...): Utility function to build TensorInfo proto from a Tensor. (deprecated) get_tensor_from_tensor_info(...): Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated)
tensorflow.compat.v1.saved_model.utils
tf.compat.v1.scalar_mul Multiplies a scalar times a Tensor or IndexedSlices object. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.scalar_mul tf.compat.v1.scalar_mul( scalar, x, name=None ) Intended for use in gradient code which might deal with IndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors. Args scalar A 0-D scalar Tensor. Must have known shape. x A Tensor or IndexedSlices to be scaled. name A name for the operation (optional). Returns scalar * x of the same type (Tensor or IndexedSlices) as x. Raises ValueError if scalar is not a 0-D scalar.
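For example, a minimal sketch (the values are illustrative):
scalar = tf.constant(2.0)
x = tf.constant([1., 2., 3.])
y = tf.compat.v1.scalar_mul(scalar, x)  # [2., 4., 6.]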
tensorflow.compat.v1.scalar_mul
tf.compat.v1.scan scan on the list of tensors unpacked from elems on dimension 0. tf.compat.v1.scan( fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, reverse=False, name=None ) See also tf.map_fn. The simplest version of scan repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer. Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [len(values)] + fn(initializer, values[0]).shape. If reverse=True, it's fn(initializer, values[-1]).shape. This method also allows multi-arity elems and accumulator. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The second argument of fn must match the structure of elems. If no initializer is provided, the output structure and dtypes of fn are assumed to be the same as its input; and in this case, the first argument of fn must match the structure of elems. If an initializer is provided, then the output of fn must have the same structure as initializer; and the first argument of fn must match this structure. For example, if elems is (t1, [t2, t3]) and initializer is [i1, i2] then an appropriate signature for fn in python2 is fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]): ..., and fn must return a list, [acc_n1, acc_n2]. An alternative correct signature for fn, and the one that works in python3, is fn = lambda a, t: ..., where a and t correspond to the input tuples. Args fn The callable to be performed. It accepts two arguments. The first will have the same structure as initializer if one is provided, otherwise it will have the same structure as elems. The second will have the same (possibly nested) structure as elems. Its output must have the same structure as initializer if one is provided, otherwise it must have the same structure as elems. elems A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn. initializer (optional) A tensor or (possibly nested) sequence of tensors, initial value for the accumulator, and the expected output type of fn. parallel_iterations (optional) The number of iterations allowed to run in parallel. back_prop (optional) True enables support for back propagation. swap_memory (optional) True enables GPU-CPU memory swapping. infer_shape (optional) False disables tests for consistent output shapes. reverse (optional) True scans the tensor last to first (instead of first to last). name (optional) Name prefix for the returned tensors. Returns A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, and the previous accumulator value(s), from first to last (or last to first, if reverse=True). Raises TypeError if fn is not callable or the structure of the output of fn and initializer do not match. ValueError if the lengths of the output of fn and initializer do not match.
Examples: elems = np.array([1, 2, 3, 4, 5, 6]) sum = scan(lambda a, x: a + x, elems) # sum == [1, 3, 6, 10, 15, 21] sum = scan(lambda a, x: a + x, elems, reverse=True) # sum == [21, 20, 18, 15, 11, 6] elems = np.array([1, 2, 3, 4, 5, 6]) initializer = np.array(0) sum_one = scan( lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer) # sum_one == [1, 2, 3, 4, 5, 6] elems = np.array([1, 0, 0, 0, 0, 0]) initializer = (np.array(0), np.array(1)) fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer) # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])
tensorflow.compat.v1.scan
tf.compat.v1.scatter_add Adds sparse updates to the variable referenced by ref. tf.compat.v1.scatter_add( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] += updates[...] # Vector indices (for each i) ref[indices[i], ...] += updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] += updates[i, ..., j, ...] This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add. Requires updates.shape = indices.shape + ref.shape[1:]. Args ref A Variable. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to store in ref. use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
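A minimal graph-mode sketch mirroring the scatter_nd_add example further below (the values are illustrative):
ref = tf.compat.v1.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([4, 3, 1, 7])
updates = tf.constant([9, 10, 11, 12])
add = tf.compat.v1.scatter_add(ref, indices, updates)
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(add))  # [ 1 13  3 14 14  6  7 20]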
tensorflow.compat.v1.scatter_add
tf.compat.v1.scatter_div Divides a variable reference by sparse updates. tf.compat.v1.scatter_div( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] /= updates[...] # Vector indices (for each i) ref[indices[i], ...] /= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...] This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of values that ref is divided by. use_locking An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.compat.v1.scatter_div
tf.compat.v1.scatter_max Reduces sparse updates into a variable reference using the max operation. tf.compat.v1.scatter_max( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] = max(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32, int64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref. use_locking An optional bool. Defaults to False. If True, the update will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.compat.v1.scatter_max
tf.compat.v1.scatter_min Reduces sparse updates into a variable reference using the min operation. tf.compat.v1.scatter_min( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] = min(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32, int64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref. use_locking An optional bool. Defaults to False. If True, the update will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.compat.v1.scatter_min
tf.compat.v1.scatter_mul Multiplies sparse updates into a variable reference. tf.compat.v1.scatter_mul( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] *= updates[...] # Vector indices (for each i) ref[indices[i], ...] *= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...] This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to multiply to ref. use_locking An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.compat.v1.scatter_mul
tf.compat.v1.scatter_nd_add Applies sparse addition to individual values or slices in a Variable. tf.compat.v1.scatter_nd_add( ref, indices, updates, use_locking=False, name=None ) ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be integer tensor, containing indices into ref. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]] For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this: ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) add = tf.compat.v1.scatter_nd_add(ref, indices, updates) with tf.compat.v1.Session() as sess: sess.run(tf.compat.v1.global_variables_initializer()) print(sess.run(add)) The resulting update to ref would look like this: [1, 13, 3, 14, 14, 6, 7, 20] See tf.scatter_nd for more details about how to make updates to slices. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to add to ref. use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.compat.v1.scatter_nd_add
tf.compat.v1.scatter_nd_sub Applies sparse subtraction to individual values or slices in a Variable. tf.compat.v1.scatter_nd_sub( ref, indices, updates, use_locking=False, name=None ) ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be integer tensor, containing indices into ref. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]] For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this: ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) op = tf.compat.v1.scatter_nd_sub(ref, indices, updates) with tf.compat.v1.Session() as sess: sess.run(tf.compat.v1.global_variables_initializer()) print(sess.run(op)) The resulting update to ref would look like this: [1, -9, 3, -6, -4, 6, 7, -4] See tf.scatter_nd for more details about how to make updates to slices. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref. use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.compat.v1.scatter_nd_sub