tf.compat.v1.WholeFileReader A Reader that outputs the entire contents of a file as a value. Inherits From: ReaderBase tf.compat.v1.WholeFileReader( name=None ) To use, enqueue filenames in a Queue. The output of Read will be a filename (key) and the contents of that file (value). See ReaderBase for supported methods. Args name A name for the operation (optional). Eager Compatibility Readers are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes reader_ref Op that implements the reader. supports_serialize Whether the Reader implementation can serialize its state. Methods num_records_produced View source num_records_produced( name=None ) Returns the number of records this reader has produced. This is the same as the number of Read executions that have succeeded. Args name A name for the operation (optional). Returns An int64 Tensor. num_work_units_completed View source num_work_units_completed( name=None ) Returns the number of work units this reader has finished processing. Args name A name for the operation (optional). Returns An int64 Tensor. read View source read( queue, name=None ) Returns the next record (key, value) pair produced by a reader. Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. name A name for the operation (optional). Returns A tuple of Tensors (key, value). key A string scalar Tensor. value A string scalar Tensor. read_up_to View source read_up_to( queue, num_records, name=None ) Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than num_records even before the last batch. Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records Number of records to read. name A name for the operation (optional). Returns A tuple of Tensors (keys, values). keys A 1-D string Tensor. values A 1-D string Tensor. reset View source reset( name=None ) Restore a reader to its initial clean state. Args name A name for the operation (optional). Returns The created Operation. restore_state View source restore_state( state, name=None ) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args state A string Tensor. Result of a SerializeState of a Reader with matching type. name A name for the operation (optional). Returns The created Operation. serialize_state View source serialize_state( name=None ) Produce a string tensor that encodes the state of a reader. Not all Readers support being serialized, so this can produce an Unimplemented error. Args name A name for the operation (optional). Returns A string Tensor.
tensorflow.compat.v1.wholefilereader
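Putting the reader methods above together, the sketch below shows the classic TF 1.x queue pattern for WholeFileReader. It is a minimal, hedged example: the file names a.txt and b.txt are placeholders, and graph mode (eager execution disabled) is assumed.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Hypothetical file list; replace with real paths.
filename_queue = tf.train.string_input_producer(["a.txt", "b.txt"], num_epochs=1)
reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)  # (filename, file contents)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # num_epochs uses a local counter
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            fname, contents = sess.run([key, value])
            print(fname, len(contents))
    except tf.errors.OutOfRangeError:
        pass  # queue exhausted after one epoch
    finally:
        coord.request_stop()
        coord.join(threads)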
tf.compat.v1.wrap_function Wraps the TF 1.x function fn into a graph function. tf.compat.v1.wrap_function( fn, signature, name=None ) The python function fn will be called once with symbolic arguments specified in the signature, traced, and turned into a graph function. Any variables created by fn will be owned by the object returned by wrap_function. The resulting graph function can be called with tensors which match the signature. def f(x, do_add): v = tf.Variable(5.0) if do_add: op = v.assign_add(x) else: op = v.assign_sub(x) with tf.control_dependencies([op]): return v.read_value() f_add = tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), True]) assert float(f_add(1.0)) == 6.0 assert float(f_add(1.0)) == 7.0 # Can call tf.compat.v1.wrap_function again to get a new trace, a new set # of variables, and possibly different non-template arguments. f_sub= tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), False]) assert float(f_sub(1.0)) == 4.0 assert float(f_sub(1.0)) == 3.0 Both tf.compat.v1.wrap_function and tf.function create a callable TensorFlow graph. But while tf.function runs all stateful operations (e.g. tf.print) and sequences operations to provide the same semantics as eager execution, wrap_function is closer to the behavior of session.run in TensorFlow 1.x. It will not run any operations unless they are required to compute the function's outputs, either through a data dependency or a control dependency. Nor will it sequence operations. Unlike tf.function, wrap_function will only trace the Python function once. As with placeholders in TF 1.x, shapes and dtypes must be provided to wrap_function's signature argument. Since it is only traced once, variables and state may be created inside the function and owned by the function wrapper object. Args fn python function to be wrapped signature the placeholder and python arguments to be passed to the wrapped function name Optional. The name of the function. Returns the wrapped graph function.
tensorflow.compat.v1.wrap_function
Module: tf.compat.v1.xla Public API for tf.xla namespace. Modules experimental module: Public API for tf.xla.experimental namespace.
tensorflow.compat.v1.xla
Module: tf.compat.v1.xla.experimental Public API for tf.xla.experimental namespace. Functions compile(...): Builds an operator that compiles and runs computation with XLA. (deprecated) jit_scope(...): Enable or disable JIT compilation of operators within the scope.
tensorflow.compat.v1.xla.experimental
tf.compat.v1.zeros_like Creates a tensor with all elements set to zero. tf.compat.v1.zeros_like( tensor, dtype=None, name=None, optimize=True ) See also tf.zeros. Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to zero. Optionally, you can use dtype to specify a new type for the returned tensor. Examples: tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) tf.zeros_like(tensor) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[0, 0, 0], [0, 0, 0]], dtype=int32)> tf.zeros_like(tensor, dtype=tf.float32) <tf.Tensor: shape=(2, 3), dtype=float32, numpy= array([[0., 0., 0.], [0., 0., 0.]], dtype=float32)> Args tensor A Tensor. dtype A type for the returned Tensor. Must be float16, float32, float64, int8, uint8, int16, uint16, int32, int64, complex64, complex128, bool or string. (optional) name A name for the operation (optional). optimize if True, attempt to statically determine the shape of tensor and encode it as a constant. (optional, defaults to True) Returns A Tensor with all elements set to zero.
tensorflow.compat.v1.zeros_like
tf.concat View source on GitHub Concatenates tensors along one dimension. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.concat tf.concat( values, axis, name='concat' ) See also tf.tile, tf.stack, tf.repeat. Concatenates the list of tensors values along dimension axis. If values[i].shape = [D0, D1, ... Daxis(i), ...Dn], the concatenated result has shape [D0, D1, ... Raxis, ...Dn] where Raxis = sum(Daxis(i)) That is, the data from the input tensors is joined along the axis dimension. The number of dimensions of the input tensors must match, and all dimensions except axis must be equal. For example: t1 = [[1, 2, 3], [4, 5, 6]] t2 = [[7, 8, 9], [10, 11, 12]] tf.concat([t1, t2], 0) <tf.Tensor: shape=(4, 3), dtype=int32, numpy= array([[ 1, 2, 3], [ 4, 5, 6], [ 7, 8, 9], [10, 11, 12]], dtype=int32)> tf.concat([t1, t2], 1) <tf.Tensor: shape=(2, 6), dtype=int32, numpy= array([[ 1, 2, 3, 7, 8, 9], [ 4, 5, 6, 10, 11, 12]], dtype=int32)> As in Python, the axis can also be negative. Negative axes are interpreted as counting from the end of the rank, i.e., the axis + rank(values)-th dimension. For example: t1 = [[[1, 2], [2, 3]], [[4, 4], [5, 3]]] t2 = [[[7, 4], [8, 4]], [[2, 10], [15, 11]]] tf.concat([t1, t2], -1) <tf.Tensor: shape=(2, 2, 4), dtype=int32, numpy= array([[[ 1, 2, 7, 4], [ 2, 3, 8, 4]], [[ 4, 4, 2, 10], [ 5, 3, 15, 11]]], dtype=int32)> Note: If you are concatenating along a new axis consider using stack. E.g. tf.concat([tf.expand_dims(t, axis) for t in tensors], axis) can be rewritten as tf.stack(tensors, axis=axis) Args values A list of Tensor objects or a single Tensor. axis 0-D int32 Tensor. Dimension along which to concatenate. Must be in the range [-rank(values), rank(values)). As in Python, indexing for axis is 0-based. A positive axis in the range [0, rank(values)) refers to the axis-th dimension, and a negative axis refers to the axis + rank(values)-th dimension. name A name for the operation (optional). Returns A Tensor resulting from concatenation of the input tensors.
tensorflow.concat
tf.cond View source on GitHub Return true_fn() if the predicate pred is true else false_fn(). tf.cond( pred, true_fn=None, false_fn=None, name=None ) true_fn and false_fn both return lists of output tensors. true_fn and false_fn must have the same non-zero number and type of outputs. Warning: Any Tensors or Operations created outside of true_fn and false_fn will be executed regardless of which branch is selected at runtime. Although this behavior is consistent with the dataflow model of TensorFlow, it has frequently surprised users who expected a lazier semantics. Consider the following simple program: z = tf.multiply(a, b) result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y)) If x < y, the tf.add operation will be executed and tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.multiply operation is always executed, unconditionally. Note that cond calls true_fn and false_fn exactly once (inside the call to cond, and not at all during Session.run()). cond stitches together the graph fragments created during the true_fn and false_fn calls with some additional graph nodes to ensure that the right branch gets executed depending on the value of pred. tf.cond supports nested structures as implemented in tensorflow.python.util.nest. Both true_fn and false_fn must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by true_fn and/or false_fn, they are implicitly unpacked to single values. Note: It is illegal to "directly" use tensors created inside a cond branch outside it, e.g. by storing a reference to a branch tensor in the python state. If you need to use a tensor created in a branch function you should return it as an output of the branch function and use the output from tf.cond instead. Args pred A scalar determining whether to return the result of true_fn or false_fn. true_fn The callable to be performed if pred is true. false_fn The callable to be performed if pred is false. name Optional name prefix for the returned tensors. Returns Tensors returned by the call to either true_fn or false_fn. If the callables return a singleton list, the element is extracted from the list. Raises TypeError if true_fn or false_fn is not callable. ValueError if true_fn and false_fn do not return the same number of tensors, or return tensors of different types. Example: x = tf.constant(2) y = tf.constant(5) def f1(): return tf.multiply(x, 17) def f2(): return tf.add(y, 23) r = tf.cond(tf.less(x, y), f1, f2) # r is set to f1(). # Operations in f2 (e.g., tf.add) are not executed.
tensorflow.cond
Module: tf.config Public API for tf.config namespace. Modules experimental module: Public API for tf.config.experimental namespace. optimizer module: Public API for tf.config.optimizer namespace. threading module: Public API for tf.config.threading namespace. Classes class LogicalDevice: Abstraction for a logical device initialized by the runtime. class LogicalDeviceConfiguration: Configuration class for a logical device. class PhysicalDevice: Abstraction for a locally visible physical device. Functions experimental_connect_to_cluster(...): Connects to the given cluster. experimental_connect_to_host(...): Connects to a single machine to enable remote execution on it. experimental_functions_run_eagerly(...): Returns the value of the experimental_run_functions_eagerly setting. (deprecated) experimental_run_functions_eagerly(...): Enables / disables eager execution of tf.functions. (deprecated) functions_run_eagerly(...): Returns the value of the run_functions_eagerly setting. get_logical_device_configuration(...): Get the virtual device configuration for a tf.config.PhysicalDevice. get_soft_device_placement(...): Get if soft device placement is enabled. get_visible_devices(...): Get the list of visible physical devices. list_logical_devices(...): Return a list of logical devices created by runtime. list_physical_devices(...): Return a list of physical devices visible to the host runtime. run_functions_eagerly(...): Enables / disables eager execution of tf.functions. set_logical_device_configuration(...): Set the logical device configuration for a tf.config.PhysicalDevice. set_soft_device_placement(...): Set if soft device placement is enabled. set_visible_devices(...): Set the list of visible devices.
tensorflow.config
Module: tf.config.experimental Public API for tf.config.experimental namespace. Classes class ClusterDeviceFilters: Represent a collection of device filters for the remote workers in cluster. class VirtualDeviceConfiguration: Configuration class for a logical device. Functions disable_mlir_bridge(...): Disables experimental MLIR-Based TensorFlow Compiler Bridge. disable_mlir_graph_optimization(...): Disables experimental MLIR-Based TensorFlow Compiler Optimizations. enable_mlir_bridge(...): Enables experimental MLIR-Based TensorFlow Compiler Bridge. enable_mlir_graph_optimization(...): Enables experimental MLIR-Based TensorFlow Compiler Optimizations. enable_tensor_float_32_execution(...): Enable or disable the use of TensorFloat-32 on supported hardware. get_device_details(...): Returns details about a physical device. get_device_policy(...): Gets the current device policy. get_memory_growth(...): Get if memory growth is enabled for a PhysicalDevice. get_memory_usage(...): Get the memory usage, in bytes, for the chosen device. get_synchronous_execution(...): Gets whether operations are executed synchronously or asynchronously. get_virtual_device_configuration(...): Get the virtual device configuration for a tf.config.PhysicalDevice. get_visible_devices(...): Get the list of visible physical devices. list_logical_devices(...): Return a list of logical devices created by runtime. list_physical_devices(...): Return a list of physical devices visible to the host runtime. set_device_policy(...): Sets the current thread device policy. set_memory_growth(...): Set if memory growth should be enabled for a PhysicalDevice. set_synchronous_execution(...): Specifies whether operations are executed synchronously or asynchronously. set_virtual_device_configuration(...): Set the logical device configuration for a tf.config.PhysicalDevice. set_visible_devices(...): Set the list of visible devices. tensor_float_32_execution_enabled(...): Returns whether TensorFloat-32 is enabled.
tensorflow.config.experimental
tf.config.experimental.ClusterDeviceFilters Represent a collection of device filters for the remote workers in cluster. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.ClusterDeviceFilters tf.config.experimental.ClusterDeviceFilters() Note: this is an experimental API and subject to changes. Set device filters for selective jobs and tasks. For each remote worker, the device filters are a list of strings. When any filters are present, the remote worker will ignore all devices which do not match any of its filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc. Note that a device is always visible to the worker it is located on. For example, to set the device filters for a parameter server cluster: cdf = tf.config.experimental.ClusterDeviceFilters() for i in range(num_workers): cdf.set_device_filters('worker', i, ['/job:ps']) for i in range(num_ps): cdf.set_device_filters('ps', i, ['/job:worker']) tf.config.experimental_connect_to_cluster(cluster_def, cluster_device_filters=cdf) The device filters can be partially specified. For remote tasks that do not have device filters specified, all devices will be visible to them. Methods set_device_filters View source set_device_filters( job_name, task_index, device_filters ) Set the device filters for given job name and task id.
tensorflow.config.experimental.clusterdevicefilters
tf.config.experimental.disable_mlir_bridge Disables experimental MLIR-Based TensorFlow Compiler Bridge. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.disable_mlir_bridge tf.config.experimental.disable_mlir_bridge()
tensorflow.config.experimental.disable_mlir_bridge
tf.config.experimental.disable_mlir_graph_optimization Disables experimental MLIR-Based TensorFlow Compiler Optimizations. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.disable_mlir_graph_optimization tf.config.experimental.disable_mlir_graph_optimization()
tensorflow.config.experimental.disable_mlir_graph_optimization
tf.config.experimental.enable_mlir_bridge Enables experimental MLIR-Based TensorFlow Compiler Bridge. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.enable_mlir_bridge tf.config.experimental.enable_mlir_bridge() DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT. Note: MLIR-Based TensorFlow Compiler is under active development and has missing features; please refrain from using it. This API exists for development and testing only. TensorFlow Compiler Bridge (TF Bridge) is responsible for translating parts of the TensorFlow graph into a form that can be accepted as an input by a backend compiler such as XLA.
tensorflow.config.experimental.enable_mlir_bridge
tf.config.experimental.enable_mlir_graph_optimization Enables experimental MLIR-Based TensorFlow Compiler Optimizations. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.enable_mlir_graph_optimization tf.config.experimental.enable_mlir_graph_optimization() DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT. Note: MLIR-Based TensorFlow Compiler is under active development and has missing features; please refrain from using it. This API exists for development and testing only. TensorFlow Compiler Optimizations are responsible for general graph-level optimizations that, in the current stack, are mostly done by the Grappler graph optimizers.
tensorflow.config.experimental.enable_mlir_graph_optimization
tf.config.experimental.enable_tensor_float_32_execution Enable or disable the use of TensorFloat-32 on supported hardware. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.enable_tensor_float_32_execution tf.config.experimental.enable_tensor_float_32_execution( enabled ) TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere GPUs. TensorFloat-32 execution causes certain float32 ops, such as matrix multiplications and convolutions, to run much faster on Ampere GPUs but with reduced precision. This reduced precision should not impact convergence of deep learning models in practice. TensorFloat-32 is enabled by default. TensorFloat-32 is only supported on Ampere GPUs, so all other hardware will use the full float32 precision regardless of whether TensorFloat-32 is enabled or not. If you want to use the full float32 precision on Ampere, you can disable TensorFloat-32 execution with this function. For example: x = tf.fill((2, 2), 1.0001) y = tf.fill((2, 2), 1.) # TensorFloat-32 is enabled, so matmul is run with reduced precision print(tf.linalg.matmul(x, y)) # [[2., 2.], [2., 2.]] tf.config.experimental.enable_tensor_float_32_execution(False) # Matmul is run with full precision print(tf.linalg.matmul(x, y)) # [[2.0002, 2.0002], [2.0002, 2.0002]] To check whether TensorFloat-32 execution is currently enabled, use tf.config.experimental.tensor_float_32_execution_enabled. If TensorFloat-32 is enabled, float32 inputs of supported ops, such as tf.linalg.matmul, will be rounded from 23 bits of precision to 10 bits of precision in most cases. This allows the ops to execute much faster by utilizing the GPU's tensor cores. TensorFloat-32 has the same dynamic range as float32, meaning it is no more likely to underflow or overflow than float32. Ops still use float32 accumulation when TensorFloat-32 is enabled. Enabling or disabling TensorFloat-32 only affects Ampere GPUs and subsequent GPUs that support TensorFloat-32. Note: TensorFloat-32 is not always used in supported ops, as only inputs of certain shapes are supported. Support for more input shapes and more ops may be added in the future. As a result, precision of float32 ops may decrease in minor versions of TensorFlow. TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32 is used in fewer cases for complex64 than it is for float32. Args enabled Bool indicating whether to enable TensorFloat-32 execution.
tensorflow.config.experimental.enable_tensor_float_32_execution
tf.config.experimental.get_device_details Returns details about a physical device. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_device_details tf.config.experimental.get_device_details( device ) This API takes in a tf.config.PhysicalDevice returned by tf.config.list_physical_devices. It returns a dict with string keys containing various details about the device. Each key is only supported by a subset of devices, so you should not assume the returned dict will have any particular key. gpu_devices = tf.config.list_physical_devices('GPU') if gpu_devices: details = tf.config.experimental.get_device_details(gpu_devices[0]) details.get('device_name', 'Unknown GPU') Currently, details are only returned for GPUs. This function returns an empty dict if passed a non-GPU device. The returned dict may have the following keys: 'device_name': A human-readable name of the device as a string, e.g. "Titan V". Unlike tf.config.PhysicalDevice.name, this will be the same for multiple devices if each device is the same model. Currently only available for GPUs. 'compute_capability': The compute capability of the device as a tuple of two ints, in the form (major_version, minor_version). Only available for NVIDIA GPUs. Note: This is similar to tf.sysconfig.get_build_info in that both functions can return information relating to GPUs. However, this function returns run-time information about a specific device (such as a GPU's compute capability), while tf.sysconfig.get_build_info returns compile-time information about how TensorFlow was built (such as what version of CUDA TensorFlow was built for). Args device A tf.config.PhysicalDevice returned by tf.config.list_physical_devices or tf.config.get_visible_devices. Returns A dict with string keys.
tensorflow.config.experimental.get_device_details
tf.config.experimental.get_device_policy View source on GitHub Gets the current device policy. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_device_policy tf.config.experimental.get_device_policy() The device policy controls how operations requiring inputs on a specific device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1). This function only gets the device policy for the current thread. Any subsequently started thread will again use the default policy. Returns Current thread device policy
tensorflow.config.experimental.get_device_policy
tf.config.experimental.get_memory_growth View source on GitHub Get if memory growth is enabled for a PhysicalDevice. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_memory_growth tf.config.experimental.get_memory_growth( device ) If memory growth is enabled for a PhysicalDevice, the runtime initialization will not allocate all memory on the device. For example: physical_devices = tf.config.list_physical_devices('GPU') try: tf.config.experimental.set_memory_growth(physical_devices[0], True) assert tf.config.experimental.get_memory_growth(physical_devices[0]) except: # Invalid device or cannot modify virtual devices once initialized. pass Args device PhysicalDevice to query Returns A boolean indicating the memory growth setting for the PhysicalDevice. Raises ValueError Invalid PhysicalDevice specified.
tensorflow.config.experimental.get_memory_growth
tf.config.experimental.get_memory_usage Get the memory usage, in bytes, for the chosen device. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_memory_usage tf.config.experimental.get_memory_usage( device ) See https://www.tensorflow.org/api_docs/python/tf/device for specifying device strings. For example: gpu_devices = tf.config.list_physical_devices('GPU') if gpu_devices: tf.config.experimental.get_memory_usage('GPU:0') Does not work for CPU. Args device Device string to get the bytes in use for. Returns Total memory usage in bytes. Raises ValueError Non-existent or CPU device specified.
tensorflow.config.experimental.get_memory_usage
tf.config.experimental.get_synchronous_execution View source on GitHub Gets whether operations are executed synchronously or asynchronously. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_synchronous_execution tf.config.experimental.get_synchronous_execution() TensorFlow can execute operations synchronously or asynchronously. If asynchronous execution is enabled, operations may return "non-ready" handles. Returns Current thread execution mode
tensorflow.config.experimental.get_synchronous_execution
tf.config.experimental.set_device_policy View source on GitHub Sets the current thread device policy. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.set_device_policy tf.config.experimental.set_device_policy( device_policy ) The device policy controls how operations requiring inputs on a specific device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1). When using the default, an appropriate policy will be picked automatically. The default policy may change over time. This function only sets the device policy for the current thread. Any subsequently started thread will again use the default policy. Args device_policy A device policy. Valid values: None: Switch to a system default. 'warn': Copies the tensors which are not on the right device and logs a warning. 'explicit': Raises an error if the placement is not as required. 'silent': Silently copies the tensors. Note that this may hide performance problems as there is no notification provided when operations are blocked on the tensor being copied between devices. 'silent_for_int32': silently copies int32 tensors, raising errors on the other ones. Raises ValueError If an invalid device_policy is passed.
tensorflow.config.experimental.set_device_policy
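A minimal sketch of setting and reading back the per-thread device policy, using a value from the list above; 'explicit' makes cross-device input copies raise an error instead of being performed silently.

import tensorflow as tf

# Raise an error if an op's inputs are not already on the required device.
tf.config.experimental.set_device_policy('explicit')
print(tf.config.experimental.get_device_policy())  # 'explicit'

# Switch back to the system default for this thread.
tf.config.experimental.set_device_policy(None)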
tf.config.experimental.set_memory_growth View source on GitHub Set if memory growth should be enabled for a PhysicalDevice. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.set_memory_growth tf.config.experimental.set_memory_growth( device, enable ) If memory growth is enabled for a PhysicalDevice, the runtime initialization will not allocate all memory on the device. Memory growth cannot be configured on a PhysicalDevice with virtual devices configured. For example: physical_devices = tf.config.list_physical_devices('GPU') try: tf.config.experimental.set_memory_growth(physical_devices[0], True) except: # Invalid device or cannot modify virtual devices once initialized. pass Args device PhysicalDevice to configure enable (Boolean) Whether to enable or disable memory growth Raises ValueError Invalid PhysicalDevice specified. RuntimeError Runtime is already initialized.
tensorflow.config.experimental.set_memory_growth
tf.config.experimental.set_synchronous_execution View source on GitHub Specifies whether operations are executed synchronously or asynchronously. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.set_synchronous_execution tf.config.experimental.set_synchronous_execution( enable ) TensorFlow can execute operations synchronously or asynchronously. If asynchronous execution is enabled, operations may return "non-ready" handles. When enable is set to None, an appropriate value will be picked automatically. The value picked may change between TensorFlow releases. Args enable Whether operations should be dispatched synchronously. Valid values: None: sets the system default. True: executes each operation synchronously. False: executes each operation asynchronously.
tensorflow.config.experimental.set_synchronous_execution
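The sketch below pairs the setter with its getter; it assumes eager execution (the TF2 default), where asynchronous dispatch simply defers the work until a result is read.

import tensorflow as tf

# Dispatch ops asynchronously; the value is materialized only when read.
tf.config.experimental.set_synchronous_execution(False)
print(tf.config.experimental.get_synchronous_execution())  # False

x = tf.matmul(tf.ones((2, 2)), tf.ones((2, 2)))
print(x.numpy())  # reading the result waits for the pending op

# Restore synchronous dispatch.
tf.config.experimental.set_synchronous_execution(True)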
tf.config.experimental.tensor_float_32_execution_enabled Returns whether TensorFloat-32 is enabled. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.tensor_float_32_execution_enabled tf.config.experimental.tensor_float_32_execution_enabled() By default, TensorFloat-32 is enabled, but this can be changed with tf.config.experimental.enable_tensor_float_32_execution. Returns True if TensorFloat-32 is enabled (the default) and False otherwise
tensorflow.config.experimental.tensor_float_32_execution_enabled
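A short illustrative pairing of this query with the enable/disable call documented above; on non-Ampere hardware the flag still toggles but has no numerical effect.

import tensorflow as tf

print(tf.config.experimental.tensor_float_32_execution_enabled())  # True by default
tf.config.experimental.enable_tensor_float_32_execution(False)     # force full float32
print(tf.config.experimental.tensor_float_32_execution_enabled())  # False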
tf.config.experimental_connect_to_cluster View source on GitHub Connects to the given cluster. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental_connect_to_cluster tf.config.experimental_connect_to_cluster( cluster_spec_or_resolver, job_name='localhost', task_index=0, protocol=None, make_master_device_default=True, cluster_device_filters=None ) Will make devices on the cluster available to use. Note that calling this more than once will work, but will invalidate any tensor handles on the old remote devices. If the given local job name is not present in the cluster specification, it will be automatically added, using an unused port on the localhost. Device filters can be specified to isolate groups of remote tasks to avoid undesired accesses between workers. Workers accessing resources or launching ops / functions on filtered remote devices will result in errors (unknown devices). For any remote task, if no device filter is present, all cluster devices will be visible; if any device filter is specified, it can only see devices matching at least one filter. Devices on the task itself are always visible. Device filters can be partially specified. For example, for a cluster set up for parameter server training, the following device filters might be specified: cdf = tf.config.experimental.ClusterDeviceFilters() # For any worker, only the devices on PS nodes and itself are visible for i in range(num_workers): cdf.set_device_filters('worker', i, ['/job:ps']) # Similarly for any ps, only the devices on workers and itself are visible for i in range(num_ps): cdf.set_device_filters('ps', i, ['/job:worker']) tf.config.experimental_connect_to_cluster(cluster_def, cluster_device_filters=cdf) Args cluster_spec_or_resolver A ClusterSpec or ClusterResolver describing the cluster. job_name The name of the local job. task_index The local task index. protocol The communication protocol, such as "grpc". If unspecified, will use the default from python/platform/remote_utils.py. make_master_device_default If True and a cluster resolver is passed, will automatically enter the master task device scope, which indicates the master becomes the default device to run ops. It won't do anything if a cluster spec is passed. Will throw an error if the caller is currently already in some device scope. cluster_device_filters an instance of tf.train.experimental/ClusterDeviceFilters that specifies device filters to the remote tasks in cluster.
tensorflow.config.experimental_connect_to_cluster
tf.config.experimental_connect_to_host View source on GitHub Connects to a single machine to enable remote execution on it. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental_connect_to_host tf.config.experimental_connect_to_host( remote_host=None, job_name='worker' ) Will make devices on the remote host available to use. Note that calling this more than once will work, but will invalidate any tensor handles on the old remote devices. Using the default job_name of worker, you can schedule ops to run remotely as follows: # When eager execution is enabled, connect to the remote host. tf.config.experimental_connect_to_host("exampleaddr.com:9876") with ops.device("job:worker/replica:0/task:1/device:CPU:0"): # The following tensors should be resident on the remote device, and the op # will also execute remotely. x1 = array_ops.ones([2, 2]) x2 = array_ops.ones([2, 2]) y = math_ops.matmul(x1, x2) Args remote_host a single remote server address, or a list of addresses, in host-port format. job_name The job name under which the new server will be accessible. Raises ValueError if remote_host is None.
tensorflow.config.experimental_connect_to_host
tf.config.experimental_functions_run_eagerly Returns the value of the experimental_run_functions_eagerly setting. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental_functions_run_eagerly tf.config.experimental_functions_run_eagerly() Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.config.functions_run_eagerly instead of the experimental version.
tensorflow.config.experimental_functions_run_eagerly
tf.config.experimental_run_functions_eagerly View source on GitHub Enables / disables eager execution of tf.functions. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental_run_functions_eagerly tf.config.experimental_run_functions_eagerly( run_eagerly ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.config.run_functions_eagerly instead of the experimental version. Calling tf.config.experimental_run_functions_eagerly(True) will make all invocations of tf.function run eagerly instead of running as a traced graph function. See tf.config.run_functions_eagerly for an example. Note: This flag has no effect on functions passed into tf.data transformations as arguments. tf.data functions are never executed eagerly and are always executed as a compiled Tensorflow Graph. Args run_eagerly Boolean. Whether to run functions eagerly.
tensorflow.config.experimental_run_functions_eagerly
tf.config.functions_run_eagerly Returns the value of the run_functions_eagerly setting. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.functions_run_eagerly tf.config.functions_run_eagerly()
tensorflow.config.functions_run_eagerly
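A minimal sketch pairing this getter with tf.config.run_functions_eagerly, e.g. for toggling eager execution of tf.function while debugging.

import tensorflow as tf

@tf.function
def square(x):
    return x * x

print(tf.config.functions_run_eagerly())  # False: square runs as a traced graph
tf.config.run_functions_eagerly(True)     # run tf.functions eagerly for debugging
print(tf.config.functions_run_eagerly())  # True
square(tf.constant(3))
tf.config.run_functions_eagerly(False)    # restore graph execution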
tf.config.get_logical_device_configuration Get the virtual device configuration for a tf.config.PhysicalDevice. View aliases Main aliases tf.config.experimental.get_virtual_device_configuration Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_virtual_device_configuration, tf.compat.v1.config.get_logical_device_configuration tf.config.get_logical_device_configuration( device ) Returns the list of tf.config.LogicalDeviceConfiguration objects previously configured by a call to tf.config.set_logical_device_configuration. For example: physical_devices = tf.config.list_physical_devices('CPU') assert len(physical_devices) == 1, "No CPUs found" configs = tf.config.get_logical_device_configuration( physical_devices[0]) try: assert configs is None tf.config.set_logical_device_configuration( physical_devices[0], [tf.config.LogicalDeviceConfiguration(), tf.config.LogicalDeviceConfiguration()]) configs = tf.config.get_logical_device_configuration( physical_devices[0]) assert len(configs) == 2 except: # Cannot modify virtual devices once initialized. pass Args device PhysicalDevice to query Returns List of tf.config.LogicalDeviceConfiguration objects or None if no virtual device configuration has been set for this physical device.
tensorflow.config.get_logical_device_configuration
tf.config.get_soft_device_placement View source on GitHub Get if soft device placement is enabled. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.get_soft_device_placement tf.config.get_soft_device_placement() If enabled, an op will be placed on CPU if any of the following are true: there is no GPU implementation for the op; no GPU devices are known or registered; the op needs to co-locate with reftype input(s) which are from CPU. Returns If soft placement is enabled.
tensorflow.config.get_soft_device_placement
tf.config.get_visible_devices Get the list of visible physical devices. View aliases Main aliases tf.config.experimental.get_visible_devices Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.get_visible_devices, tf.compat.v1.config.get_visible_devices tf.config.get_visible_devices( device_type=None ) Returns the list of PhysicalDevices currently marked as visible to the runtime. A visible device will have at least one LogicalDevice associated with it once the runtime is initialized. The following example verifies all visible GPUs have been disabled: physical_devices = tf.config.list_physical_devices('GPU') try: # Disable all GPUS tf.config.set_visible_devices([], 'GPU') visible_devices = tf.config.get_visible_devices() for device in visible_devices: assert device.device_type != 'GPU' except: # Invalid device or cannot modify virtual devices once initialized. pass Args device_type (optional string) Only include devices matching this device type. For example "CPU" or "GPU". Returns List of visible PhysicalDevices
tensorflow.config.get_visible_devices
tf.config.list_logical_devices Return a list of logical devices created by runtime. View aliases Main aliases tf.config.experimental.list_logical_devices Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.list_logical_devices, tf.compat.v1.config.list_logical_devices tf.config.list_logical_devices( device_type=None ) Logical devices may correspond to physical devices or remote devices in the cluster. Operations and tensors may be placed on these devices by using the name of the tf.config.LogicalDevice. Calling tf.config.list_logical_devices triggers the runtime to configure any tf.config.PhysicalDevice visible to the runtime, thereby preventing further configuration. To avoid runtime initialization, call tf.config.list_physical_devices instead. For example: logical_devices = tf.config.list_logical_devices('GPU') if len(logical_devices) > 0: # Allocate on GPU:0 with tf.device(logical_devices[0].name): one = tf.constant(1) # Allocate on GPU:1 with tf.device(logical_devices[1].name): two = tf.constant(2) Args device_type (optional string) Only include devices matching this device type. For example "CPU" or "GPU". Returns List of initialized LogicalDevices
tensorflow.config.list_logical_devices
tf.config.list_physical_devices Return a list of physical devices visible to the host runtime. View aliases Main aliases tf.config.experimental.list_physical_devices Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.list_physical_devices, tf.compat.v1.config.list_physical_devices tf.config.list_physical_devices( device_type=None ) Physical devices are hardware devices present on the host machine. By default all discovered CPU and GPU devices are considered visible. This API allows querying the physical hardware resources prior to runtime initialization, thus giving an opportunity to call any additional configuration APIs. This is in contrast to tf.config.list_logical_devices, which triggers runtime initialization in order to list the configured devices. The following example lists the number of visible GPUs on the host. physical_devices = tf.config.list_physical_devices('GPU') print("Num GPUs:", len(physical_devices)) Num GPUs: ... However, the number of GPUs available to the runtime may change during runtime initialization due to marking certain devices as not visible or configuring multiple logical devices. Args device_type (optional string) Only include devices matching this device type. For example "CPU" or "GPU". Returns List of discovered tf.config.PhysicalDevice objects
tensorflow.config.list_physical_devices
tf.config.LogicalDevice Abstraction for a logical device initialized by the runtime. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.LogicalDevice tf.config.LogicalDevice( name, device_type ) A tf.config.LogicalDevice corresponds to an initialized logical device on a tf.config.PhysicalDevice or a remote device visible to the cluster. Tensors and operations can be placed on a specific logical device by calling tf.device with a specified tf.config.LogicalDevice. Fields: name: The fully qualified name of the device. Can be used for Op or function placement. device_type: String declaring the type of device such as "CPU" or "GPU". Attributes name device_type
tensorflow.config.logicaldevice
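A small sketch of placing an op on a logical device by name; the printed device name (e.g. '/device:CPU:0') is indicative only and depends on the host.

import tensorflow as tf

cpus = tf.config.list_logical_devices('CPU')
print(cpus[0].name, cpus[0].device_type)  # e.g. '/device:CPU:0' 'CPU'

# Pin a computation to that logical device.
with tf.device(cpus[0].name):
    x = tf.constant([1.0, 2.0]) * 2.0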
tf.config.LogicalDeviceConfiguration Configuration class for a logical device. View aliases Main aliases tf.config.experimental.VirtualDeviceConfiguration Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.LogicalDeviceConfiguration, tf.compat.v1.config.experimental.VirtualDeviceConfiguration tf.config.LogicalDeviceConfiguration( memory_limit=None, experimental_priority=None ) The class specifies the parameters to configure a tf.config.PhysicalDevice as it is initialized to a tf.config.LogicalDevice during runtime initialization. Not all fields are valid for all device types. See tf.config.get_logical_device_configuration and tf.config.set_logical_device_configuration for usage examples. Fields: memory_limit: (optional) Maximum memory (in MB) to allocate on the virtual device. Currently only supported for GPUs. experimental_priority: (optional) Priority to assign to a virtual device. Lower values have higher priorities and 0 is the default. Within a physical GPU, the GPU scheduler will prioritize ops on virtual devices with higher priority. Currently only supported for Nvidia GPUs. Attributes memory_limit experimental_priority
tensorflow.config.logicaldeviceconfiguration
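As a brief illustration of the fields above, the sketch below splits one GPU into two 1 GB logical devices; it must run before the runtime initializes, and the 1024 MB limits are arbitrary example values.

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    # The split GPU now appears as two logical devices.
    print(len(tf.config.list_logical_devices('GPU')))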
Module: tf.config.optimizer Public API for tf.config.optimizer namespace. Functions get_experimental_options(...): Get experimental optimizer options. get_jit(...): Get if JIT compilation is enabled. set_experimental_options(...): Set experimental optimizer options. set_jit(...): Set if JIT compilation is enabled.
tensorflow.config.optimizer
tf.config.optimizer.get_experimental_options View source on GitHub Get experimental optimizer options. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.optimizer.get_experimental_options tf.config.optimizer.get_experimental_options() Refer to tf.config.optimizer.set_experimental_options for a list of current options. Note that optimizations are only applied in graph mode (within tf.function). In addition, as these are experimental options, the list is subject to change. Returns Dictionary of configured experimental optimizer options
tensorflow.config.optimizer.get_experimental_options
tf.config.optimizer.get_jit View source on GitHub Get if JIT compilation is enabled. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.optimizer.get_jit tf.config.optimizer.get_jit() Note that optimizations are only applied to code that is compiled into a graph. In eager mode, which is the TF2 API default, that means only code that is defined under a tf.function decorator. Returns If JIT compilation is enabled.
tensorflow.config.optimizer.get_jit
tf.config.optimizer.set_experimental_options View source on GitHub Set experimental optimizer options. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.optimizer.set_experimental_options tf.config.optimizer.set_experimental_options( options ) Note that optimizations are only applied in graph mode (within tf.function). In addition, as these are experimental options, the list is subject to change. Args options Dictionary of experimental optimizer options to configure. Valid keys: layout_optimizer: Optimize tensor layouts, e.g. this will try to use NCHW layout on GPU, which is faster. constant_folding: Fold constants. Statically infer the value of tensors when possible, and materialize the result using constants. shape_optimization: Simplify computations made on shapes. remapping: Remap subgraphs onto more efficient implementations. arithmetic_optimization: Simplify arithmetic ops with common sub-expression elimination and arithmetic simplification. dependency_optimization: Control dependency optimizations. Remove redundant control dependencies, which may enable other optimizations. This optimizer is also essential for pruning Identity and NoOp nodes. loop_optimization: Loop optimizations. function_optimization: Function optimizations and inlining. debug_stripper: Strips debug-related nodes from the graph. disable_model_pruning: Disable removal of unnecessary ops from the graph. scoped_allocator_optimization: Try to allocate some independent Op outputs contiguously in order to merge or eliminate downstream Ops. pin_to_host_optimization: Force small ops onto the CPU. implementation_selector: Enable the swap of kernel implementations based on the device placement. auto_mixed_precision: Change certain float32 ops to float16 on Volta GPUs and above. Without the use of loss scaling, this can cause numerical underflow (see keras.mixed_precision.experimental.LossScaleOptimizer). disable_meta_optimizer: Disable the entire meta optimizer. min_graph_nodes: The minimum number of nodes in a graph to optimize. For smaller graphs, optimization is skipped.
tensorflow.config.optimizer.set_experimental_options
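A minimal sketch using two of the keys listed above and reading the configuration back with tf.config.optimizer.get_experimental_options.

import tensorflow as tf

tf.config.optimizer.set_experimental_options(
    {'constant_folding': True, 'debug_stripper': True})
print(tf.config.optimizer.get_experimental_options())
# e.g. {'constant_folding': True, 'debug_stripper': True}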
tf.config.optimizer.set_jit View source on GitHub Set if JIT compilation is enabled. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.optimizer.set_jit tf.config.optimizer.set_jit( enabled ) Note that optimizations are only applied to code that is compiled into a graph. In eager mode, which is the TF2 API default, that means only code that is defined under a tf.function decorator. Args enabled Whether to enable JIT compilation.
tensorflow.config.optimizer.set_jit
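A short sketch pairing set_jit with get_jit; the exact value returned by get_jit may vary between TensorFlow versions, so only its truthiness is checked here.

import tensorflow as tf

tf.config.optimizer.set_jit(True)           # enable XLA JIT for graph code
print(bool(tf.config.optimizer.get_jit()))  # True
tf.config.optimizer.set_jit(False)          # disable it again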
tf.config.PhysicalDevice Abstraction for a locally visible physical device. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.PhysicalDevice tf.config.PhysicalDevice( name, device_type ) TensorFlow can utilize various devices such as the CPU or multiple GPUs for computation. Before initializing a local device for use, the user can customize certain properties of the device such as its visibility or memory configuration. Once a visible tf.config.PhysicalDevice is initialized, one or more tf.config.LogicalDevice objects are created. Use tf.config.set_visible_devices to configure the visibility of a physical device and tf.config.set_logical_device_configuration to configure multiple tf.config.LogicalDevice objects for a tf.config.PhysicalDevice. This is useful when separation between models is needed or to simulate a multi-device environment. Fields: name: Unique identifier for device. device_type: String declaring the type of device such as "CPU" or "GPU". Attributes name device_type
tensorflow.config.physicaldevice
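A minimal sketch listing the physical devices before runtime initialization; the printed names are examples and depend on the host's hardware.

import tensorflow as tf

for dev in tf.config.list_physical_devices():
    print(dev.name, dev.device_type)
# e.g. /physical_device:CPU:0 CPU
#      /physical_device:GPU:0 GPU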
tf.config.run_functions_eagerly Enables / disables eager execution of tf.functions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.run_functions_eagerly tf.config.run_functions_eagerly( run_eagerly ) Calling tf.config.run_functions_eagerly(True) will make all invocations of tf.function run eagerly instead of running as a traced graph function. This can be useful for debugging. def my_func(a): print("Python side effect") return a + a a_fn = tf.function(my_func) # A side effect the first time the function is traced a_fn(tf.constant(1)) Python side effect <tf.Tensor: shape=(), dtype=int32, numpy=2> # No further side effect, as the traced function is called a_fn(tf.constant(2)) <tf.Tensor: shape=(), dtype=int32, numpy=4> # Now, switch to eager running tf.config.run_functions_eagerly(True) # Side effect, as the function is called directly a_fn(tf.constant(2)) Python side effect <tf.Tensor: shape=(), dtype=int32, numpy=4> # Turn this back off tf.config.run_functions_eagerly(False) Note: This flag has no effect on functions passed into tf.data transformations as arguments. tf.data functions are never executed eagerly and are always executed as a compiled Tensorflow Graph. Args run_eagerly Boolean. Whether to run functions eagerly.
tensorflow.config.run_functions_eagerly
tf.config.set_logical_device_configuration Set the logical device configuration for a tf.config.PhysicalDevice. View aliases Main aliases tf.config.experimental.set_virtual_device_configuration Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.set_virtual_device_configuration, tf.compat.v1.config.set_logical_device_configuration tf.config.set_logical_device_configuration( device, logical_devices ) A visible tf.config.PhysicalDevice will by default have a single tf.config.LogicalDevice associated with it once the runtime is initialized. Specifying a list of tf.config.LogicalDeviceConfiguration objects allows multiple devices to be created on the same tf.config.PhysicalDevice. The following example splits the CPU into 2 logical devices: physical_devices = tf.config.list_physical_devices('CPU') assert len(physical_devices) == 1, "No CPUs found" # Specify 2 virtual CPUs. Note currently memory limit is not supported. try: tf.config.set_logical_device_configuration( physical_devices[0], [tf.config.LogicalDeviceConfiguration(), tf.config.LogicalDeviceConfiguration()]) logical_devices = tf.config.list_logical_devices('CPU') assert len(logical_devices) == 2 tf.config.set_logical_device_configuration( physical_devices[0], [tf.config.LogicalDeviceConfiguration(), tf.config.LogicalDeviceConfiguration(), tf.config.LogicalDeviceConfiguration(), tf.config.LogicalDeviceConfiguration()]) except: # Cannot modify logical devices once initialized. pass The following example splits the GPU into 2 logical devices with 100 MB each: physical_devices = tf.config.list_physical_devices('GPU') try: tf.config.set_logical_device_configuration( physical_devices[0], [tf.config.LogicalDeviceConfiguration(memory_limit=100), tf.config.LogicalDeviceConfiguration(memory_limit=100)]) logical_devices = tf.config.list_logical_devices('GPU') assert len(logical_devices) == len(physical_devices) + 1 tf.config.set_logical_device_configuration( physical_devices[0], [tf.config.LogicalDeviceConfiguration(memory_limit=10), tf.config.LogicalDeviceConfiguration(memory_limit=10)]) except: # Invalid device or cannot modify logical devices once initialized. pass Args device The PhysicalDevice to configure. logical_devices (optional) List of tf.config.LogicalDeviceConfiguration objects to allocate for the specified PhysicalDevice. If None, the default configuration will be used. Raises ValueError If argument validation fails. RuntimeError Runtime is already initialized.
tensorflow.config.set_logical_device_configuration
tf.config.set_soft_device_placement View source on GitHub Set if soft device placement is enabled. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.set_soft_device_placement tf.config.set_soft_device_placement( enabled ) If enabled, an op will be placed on CPU if any of the following are true: there is no GPU implementation for the op; no GPU devices are known or registered; the op needs to co-locate with reftype input(s) which are from CPU. Args enabled Whether to enable soft placement.
tensorflow.config.set_soft_device_placement
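A small sketch pairing the setter with its getter; with soft placement on, an op without a GPU kernel quietly falls back to the CPU instead of raising an error.

import tensorflow as tf

tf.config.set_soft_device_placement(True)
print(tf.config.get_soft_device_placement())  # True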
tf.config.set_visible_devices Set the list of visible devices. View aliases Main aliases tf.config.experimental.set_visible_devices Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.experimental.set_visible_devices, tf.compat.v1.config.set_visible_devices tf.config.set_visible_devices( devices, device_type=None ) Specifies which PhysicalDevice objects are visible to the runtime. TensorFlow will only allocate memory and place operations on visible physical devices, as otherwise no LogicalDevice will be created on them. By default all discovered devices are marked as visible. The following example demonstrates disabling the first GPU on the machine. physical_devices = tf.config.list_physical_devices('GPU') try: # Disable first GPU tf.config.set_visible_devices(physical_devices[1:], 'GPU') logical_devices = tf.config.list_logical_devices('GPU') # Logical device was not created for first GPU assert len(logical_devices) == len(physical_devices) - 1 except: # Invalid device or cannot modify virtual devices once initialized. pass Args devices List of PhysicalDevices to make visible device_type (optional) Only configure devices matching this device type. For example "CPU" or "GPU". Other devices will be left unaltered. Raises ValueError If argument validation fails. RuntimeError Runtime is already initialized.
tensorflow.config.set_visible_devices
Module: tf.config.threading Public API for tf.config.threading namespace. Functions get_inter_op_parallelism_threads(...): Get number of threads used for parallelism between independent operations. get_intra_op_parallelism_threads(...): Get number of threads used within an individual op for parallelism. set_inter_op_parallelism_threads(...): Set number of threads used for parallelism between independent operations. set_intra_op_parallelism_threads(...): Set number of threads used within an individual op for parallelism.
tensorflow.config.threading
tf.config.threading.get_inter_op_parallelism_threads View source on GitHub Get number of threads used for parallelism between independent operations. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.threading.get_inter_op_parallelism_threads tf.config.threading.get_inter_op_parallelism_threads() Determines the number of threads used by independent non-blocking operations. 0 means the system picks an appropriate number. Returns Number of parallel threads
tensorflow.config.threading.get_inter_op_parallelism_threads
tf.config.threading.get_intra_op_parallelism_threads View source on GitHub Get number of threads used within an individual op for parallelism. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.threading.get_intra_op_parallelism_threads tf.config.threading.get_intra_op_parallelism_threads() Certain operations like matrix multiplication and reductions can utilize parallel threads for speed ups. A value of 0 means the system picks an appropriate number. Returns Number of parallel threads
tensorflow.config.threading.get_intra_op_parallelism_threads
tf.config.threading.set_inter_op_parallelism_threads View source on GitHub Set number of threads used for parallelism between independent operations. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.threading.set_inter_op_parallelism_threads tf.config.threading.set_inter_op_parallelism_threads( num_threads ) Determines the number of threads used by independent non-blocking operations. 0 means the system picks an appropriate number. Args num_threads Number of parallel threads
tensorflow.config.threading.set_inter_op_parallelism_threads
tf.config.threading.set_intra_op_parallelism_threads View source on GitHub Set number of threads used within an individual op for parallelism. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.config.threading.set_intra_op_parallelism_threads tf.config.threading.set_intra_op_parallelism_threads( num_threads ) Certain operations like matrix multiplication and reductions can utilize parallel threads for speed ups. A value of 0 means the system picks an appropriate number. Args num_threads Number of parallel threads
tensorflow.config.threading.set_intra_op_parallelism_threads
tf.constant View source on GitHub Creates a constant tensor from a tensor-like object. tf.constant( value, dtype=None, shape=None, name='Const' ) Note: All eager tf.Tensor values are immutable (in contrast to tf.Variable). There is nothing especially constant about the value returned from tf.constant. This function is not fundamentally different from tf.convert_to_tensor. The name tf.constant comes from the value being embedded in a Const node in the tf.Graph. tf.constant is useful for asserting that the value can be embedded that way. If the argument dtype is not specified, then the type is inferred from the type of value. # Constant 1-D Tensor from a python list. tf.constant([1, 2, 3, 4, 5, 6]) <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)> # Or a numpy array a = np.array([[1, 2, 3], [4, 5, 6]]) tf.constant(a) <tf.Tensor: shape=(2, 3), dtype=int64, numpy= array([[1, 2, 3], [4, 5, 6]])> If dtype is specified the resulting tensor values are cast to the requested dtype. tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64) <tf.Tensor: shape=(6,), dtype=float64, numpy=array([1., 2., 3., 4., 5., 6.])> If shape is set, the value is reshaped to match. Scalars are expanded to fill the shape: tf.constant(0, shape=(2, 3)) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[0, 0, 0], [0, 0, 0]], dtype=int32)> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[1, 2, 3], [4, 5, 6]], dtype=int32)> tf.constant has no effect if an eager Tensor is passed as the value; it even transmits gradients: v = tf.Variable([0.0]) with tf.GradientTape() as g: loss = tf.constant(v + v) g.gradient(loss, v).numpy() array([2.], dtype=float32) But, since tf.constant embeds the value in the tf.Graph, this fails for symbolic tensors: with tf.compat.v1.Graph().as_default(): i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32) t = tf.constant(i) Traceback (most recent call last): TypeError: ... tf.constant will always create CPU (host) tensors. In order to create tensors on other devices, use tf.identity. (If the value is an eager Tensor, however, the tensor will be returned unmodified as mentioned above.) Related Ops: tf.convert_to_tensor is similar but: It has no shape argument. Symbolic tensors are allowed to pass through. with tf.compat.v1.Graph().as_default(): i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32) t = tf.convert_to_tensor(i) tf.fill differs in a few ways: tf.constant supports arbitrary constants, not just uniform scalar Tensors like tf.fill. tf.fill creates an Op in the graph that is expanded at runtime, so it can efficiently represent large tensors. Since tf.fill does not embed the value, it can produce dynamically sized outputs. Args value A constant value (or list) of output type dtype. dtype The type of the elements of the resulting tensor. shape Optional dimensions of resulting tensor. name Optional name for the tensor. Returns A Constant Tensor. Raises TypeError if shape is incorrectly specified or unsupported. ValueError if called on a symbolic tensor.
tensorflow.constant
tf.constant_initializer Initializer that generates tensors with constant values. tf.constant_initializer( value=0 ) Initializers allow you to pre-specify an initialization strategy, encoded in the Initializer object, without knowing the shape and dtype of the variable being initialized. tf.constant_initializer returns an object which, when called, returns a tensor populated with the value specified in the constructor. This value must be convertible to the requested dtype. The argument value can be a scalar constant value, or a list of values. Scalars broadcast to whichever shape is requested from the initializer. If value is a list, then the length of the list must be equal to the number of elements implied by the desired shape of the tensor. If the total number of elements in value is not equal to the number of elements required by the tensor shape, the initializer will raise a TypeError. Examples: def make_variables(k, initializer): return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) v1, v2 = make_variables(3, tf.constant_initializer(2.)) v1 <tf.Variable ... shape=(3,) ... numpy=array([2., 2., 2.], dtype=float32)> v2 <tf.Variable ... shape=(3, 3) ... numpy= array([[2., 2., 2.], [2., 2., 2.], [2., 2., 2.]], dtype=float32)> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) (<tf.Variable...shape=(4,) dtype=float32...>, <tf.Variable...shape=(4, 4) ... value = [0, 1, 2, 3, 4, 5, 6, 7] init = tf.constant_initializer(value) # Fitting shape tf.Variable(init(shape=[2, 4], dtype=tf.float32)) <tf.Variable ... array([[0., 1., 2., 3.], [4., 5., 6., 7.]], dtype=float32)> # Larger shape tf.Variable(init(shape=[3, 4], dtype=tf.float32)) Traceback (most recent call last): TypeError: ...value has 8 elements, shape is (3, 4) with 12 elements... # Smaller shape tf.Variable(init(shape=[2, 3], dtype=tf.float32)) Traceback (most recent call last): TypeError: ...value has 8 elements, shape is (2, 3) with 6 elements... Args value A Python scalar, list or tuple of values, or an N-dimensional numpy array. All elements of the initialized variable will be set to the corresponding value in the value argument. Raises TypeError If the input value is not one of the expected types. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. If not provided, the dtype of the tensor created will be the type of the initial value. **kwargs Additional keyword arguments. Raises TypeError If the initializer cannot create a tensor of the requested dtype.
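As a small illustrative sketch (not one of the original examples), the get_config / from_config round trip shown above for RandomUniform works the same way for tf.constant_initializer:
init = tf.constant_initializer(2.0)
config = init.get_config()                      # e.g. {'value': 2.0}
restored = tf.constant_initializer.from_config(config)
print(restored(shape=(2,), dtype=tf.float32))   # tf.Tensor([2. 2.], ...)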
tensorflow.constant_initializer
tf.control_dependencies View source on GitHub Wrapper for Graph.control_dependencies() using the default graph. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.control_dependencies tf.control_dependencies( control_inputs ) See tf.Graph.control_dependencies for more details. Note: In TensorFlow 2 with eager and/or Autograph, you should not require this method, as code executes in the expected order. Only use tf.control_dependencies when working with v1-style code or in a graph context such as inside Dataset.map. When eager execution is enabled, any callable object in the control_inputs list will be called. Args control_inputs A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies. If eager execution is enabled, any callable object in the control_inputs list will be called. Returns A context manager that specifies control dependencies for all operations constructed within the context.
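A minimal v1-style sketch (not from the original page) of how a control dependency forces an assignment to run before a read inside a graph; the variable name v is only a placeholder for this illustration:
with tf.compat.v1.Graph().as_default():
  v = tf.compat.v1.get_variable("v", initializer=0.0, use_resource=True)
  update = v.assign_add(1.0)
  with tf.control_dependencies([update]):
    # `value` is only computed after `update` has run.
    value = v.read_value()
  with tf.compat.v1.Session() as sess:
    sess.run(v.initializer)
    print(sess.run(value))  # 1.0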
tensorflow.control_dependencies
tf.convert_to_tensor View source on GitHub Converts the given value to a Tensor. tf.convert_to_tensor( value, dtype=None, dtype_hint=None, name=None ) This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example: import numpy as np def my_func(arg): arg = tf.convert_to_tensor(arg, dtype=tf.float32) return arg # The following calls are equivalent. value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]])) print(value_1) tf.Tensor( [[1. 2.] [3. 4.]], shape=(2, 2), dtype=float32) value_2 = my_func([[1.0, 2.0], [3.0, 4.0]]) print(value_2) tf.Tensor( [[1. 2.] [3. 4.]], shape=(2, 2), dtype=float32) value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)) print(value_3) tf.Tensor( [[1. 2.] [3. 4.]], shape=(2, 2), dtype=float32) This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects. Note: This function diverges from default Numpy behavior for float and string types when None is present in a Python list or scalar. Rather than silently converting None values, an error will be thrown. Args value An object whose type has a registered Tensor conversion function. dtype Optional element type for the returned tensor. If missing, the type is inferred from the type of value. dtype_hint Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so dtype_hint can be used as a soft preference. If the conversion to dtype_hint is not possible, this argument has no effect. name Optional name to use if a new Tensor is created. Returns A Tensor based on value. Raises TypeError If no conversion function is registered for value to dtype. RuntimeError If a registered conversion function returns an invalid value. ValueError If the value is a tensor not of given dtype in graph mode.
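A short hedged sketch (not among the original examples) of how dtype_hint acts as a soft preference rather than a requirement:
# Python ints would default to int32; the hint nudges them to float32.
t1 = tf.convert_to_tensor([1, 2, 3], dtype_hint=tf.float32)
print(t1.dtype)  # <dtype: 'float32'>

# A string cannot be converted to float32, so the hint is silently ignored.
t2 = tf.convert_to_tensor("hello", dtype_hint=tf.float32)
print(t2.dtype)  # <dtype: 'string'>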
tensorflow.convert_to_tensor
tf.CriticalSection View source on GitHub Critical section. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.CriticalSection tf.CriticalSection( name=None, shared_name=None, critical_section_def=None, import_scope=None ) A CriticalSection object is a resource in the graph which executes subgraphs in serial order. A common example of a subgraph one may wish to run exclusively is the one given by the following function: v = resource_variable_ops.ResourceVariable(0.0, name="v") def count(): value = v.read_value() with tf.control_dependencies([value]): with tf.control_dependencies([v.assign_add(1)]): return tf.identity(value) Here, a snapshot of v is captured in value; and then v is updated. The snapshot value is returned. If multiple workers or threads all execute count in parallel, there is no guarantee that access to the variable v is atomic at any point within any thread's calculation of count. In fact, even implementing an atomic counter that guarantees that the user will see each value 0, 1, ..., is currently impossible. The solution is to ensure any access to the underlying resource v is only processed through a critical section: cs = CriticalSection() f1 = cs.execute(count) f2 = cs.execute(count) output = f1 + f2 session.run(output) The functions f1 and f2 will be executed serially, and updates to v will be atomic. NOTES All resource objects, including the critical section and any captured variables of functions executed on that critical section, will be colocated to the same device (host and cpu/gpu). When using multiple critical sections on the same resources, there is no guarantee of exclusive access to those resources. This behavior is disallowed by default (but see the kwarg exclusive_resource_access). For example, running the same function in two separate critical sections will not ensure serial execution: v = tf.compat.v1.get_variable("v", initializer=0.0, use_resource=True) def accumulate(up): x = v.read_value() with tf.control_dependencies([x]): with tf.control_dependencies([v.assign_add(up)]): return tf.identity(x) ex1 = CriticalSection().execute( lambda: accumulate(1.0), exclusive_resource_access=False) ex2 = CriticalSection().execute( lambda: accumulate(1.0), exclusive_resource_access=False) bad_sum = ex1 + ex2 sess.run(v.initializer) sess.run(bad_sum) # May return 0.0 Attributes name Methods execute View source execute( fn, exclusive_resource_access=True, name=None ) Execute function fn() inside the critical section. fn should not accept any arguments. To add extra arguments when calling fn in the critical section, create a lambda: critical_section.execute(lambda: fn(*my_args, **my_kwargs)) Args fn The function to execute. Must return at least one tensor. exclusive_resource_access Whether the resources required by fn should be exclusive to this CriticalSection. Default: True. You may want to set this to False if you will be accessing a resource in read-only mode in two different CriticalSections. name The name to use when creating the execute operation. Returns The tensors returned from fn(). Raises ValueError If fn attempts to lock this CriticalSection in any nested or lazy way that may cause a deadlock. ValueError If exclusive_resource_access == True and another CriticalSection has an execution requesting the same resources as fn. Note, even if exclusive_resource_access is True, if another execution in another CriticalSection was created without exclusive_resource_access=True, a ValueError will be raised.
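An eager-mode sketch (not part of the original examples) of the same counter pattern using tf.Variable; the names v and count are placeholders of this illustration, and the printed values assume the two executions run back to back:
v = tf.Variable(0.0)

def count():
  value = v.read_value()
  with tf.control_dependencies([value]):
    with tf.control_dependencies([v.assign_add(1.0)]):
      return tf.identity(value)

cs = tf.CriticalSection()
f1 = cs.execute(count)   # snapshots 0.0, then increments v to 1.0
f2 = cs.execute(count)   # snapshots 1.0, then increments v to 2.0
print(f1.numpy(), f2.numpy())  # expected: 0.0 1.0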
tensorflow.criticalsection
tf.custom_gradient View source on GitHub Decorator to define a function with a custom gradient. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.custom_gradient tf.custom_gradient( f=None ) This decorator allows fine grained control over the gradients of a sequence of operations. This may be useful for multiple reasons, including providing a more efficient or numerically stable gradient for a sequence of operations. For example, consider the following function that commonly occurs in the computation of cross entropy and log likelihoods: def log1pexp(x): return tf.math.log(1 + tf.exp(x)) Due to numerical instability, the gradient of this function evaluated at x=100 is NaN. For example: x = tf.constant(100.) y = log1pexp(x) dy = tf.gradients(y, x) # Will be NaN when evaluated. The gradient expression can be analytically simplified to provide numerical stability: @tf.custom_gradient def log1pexp(x): e = tf.exp(x) def grad(dy): return dy * (1 - 1 / (1 + e)) return tf.math.log(1 + e), grad With this definition, the gradient at x=100 will be correctly evaluated as 1.0. The variable dy is defined as the upstream gradient, i.e. the gradient from all the layers or functions originating from this layer. By chain rule we know that dy/dx = dy/dx_0 * dx_0/dx_1 * ... * dx_i/dx_i+1 * ... * dx_n/dx In this case the gradient of our current function is defined as dx_i/dx_i+1 = (1 - 1 / (1 + e)). The upstream gradient dy would be dx_i+1/dx_i+2 * dx_i+2/dx_i+3 * ... * dx_n/dx. The upstream gradient multiplied by the current gradient is then passed downstream. In case the function takes multiple variables as input, the grad function must also return the same number of variables. We take the function z = x * y as an example. @tf.custom_gradient def bar(x, y): def grad(upstream): dz_dx = y dz_dy = x return upstream * dz_dx, upstream * dz_dy z = x * y return z, grad x = tf.constant(2.0, dtype=tf.float32) y = tf.constant(3.0, dtype=tf.float32) with tf.GradientTape(persistent=True) as tape: tape.watch(x) tape.watch(y) z = bar(x, y) z <tf.Tensor: shape=(), dtype=float32, numpy=6.0> tape.gradient(z, x) <tf.Tensor: shape=(), dtype=float32, numpy=3.0> tape.gradient(z, y) <tf.Tensor: shape=(), dtype=float32, numpy=2.0> Nesting custom gradients can lead to unintuitive results. The default behavior does not correspond to n-th order derivatives. For example @tf.custom_gradient def op(x): y = op1(x) @tf.custom_gradient def grad_fn(dy): gdy = op2(x, y, dy) def grad_grad_fn(ddy): # Not the 2nd order gradient of op w.r.t. x. return op3(x, y, dy, ddy) return gdy, grad_grad_fn return y, grad_fn The function grad_grad_fn will be calculating the first order gradient of grad_fn with respect to dy, which is used to generate forward-mode gradient graphs from backward-mode gradient graphs, but is not the same as the second order gradient of op with respect to x. Instead, wrap nested @tf.custom_gradients in another function: @tf.custom_gradient def op_with_fused_backprop(x): y, x_grad = fused_op(x) def first_order_gradient(dy): @tf.custom_gradient def first_order_custom(unused_x): def second_order_and_transpose(ddy): return second_order_for_x(...), gradient_wrt_dy(...) return x_grad, second_order_and_transpose return dy * first_order_custom(x) return y, first_order_gradient Additional arguments to the inner @tf.custom_gradient-decorated function control the expected return values of the innermost function.
See also tf.RegisterGradient which registers a gradient function for a primitive TensorFlow operation. tf.custom_gradient on the other hand allows for fine grained control over the gradient computation of a sequence of operations. Note that if the decorated function uses Variables, the enclosing variable scope must be using ResourceVariables. Args f function f(*x) that returns a tuple (y, grad_fn) where: x is a sequence of (nested structures of) Tensor inputs to the function. y is a (nested structure of) Tensor outputs of applying TensorFlow operations in f to x. grad_fn is a function with the signature g(*grad_ys) which returns a list of Tensors the same size as (flattened) x - the derivatives of Tensors in y with respect to the Tensors in x. grad_ys is a sequence of Tensors the same size as (flattened) y holding the initial value gradients for each Tensor in y. In a pure mathematical sense, a vector-argument vector-valued function f's derivatives should be its Jacobian matrix J. Here we are expressing the Jacobian J as a function grad_fn which defines how J will transform a vector grad_ys when left-multiplied with it (grad_ys * J, the vector-Jacobian product, or VJP). This functional representation of a matrix is convenient to use for chain-rule calculation (in e.g. the back-propagation algorithm). If f uses Variables (that are not part of the inputs), i.e. through get_variable, then grad_fn should have signature g(*grad_ys, variables=None), where variables is a list of the Variables, and return a 2-tuple (grad_xs, grad_vars), where grad_xs is the same as above, and grad_vars is a list<Tensor> with the derivatives of Tensors in y with respect to the variables (that is, grad_vars has one Tensor per variable in variables). Returns A function h(x) which returns the same value as f(x)[0] and whose gradient (as calculated by tf.gradients) is determined by f(x)[1].
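A hedged sketch (not from the original text) of the variables-aware signature described above: when the decorated function reads a tf.Variable that is not one of its inputs, grad_fn accepts a variables keyword argument and returns a (grad_xs, grad_vars) pair. The names scale and linear are placeholders for this illustration:
scale = tf.Variable(2.0)

@tf.custom_gradient
def linear(x):
  y = scale * x
  def grad(upstream, variables=None):
    grad_x = upstream * scale        # dy/dx
    grad_vars = [upstream * x]       # dy/d(scale), one entry per variable
    return grad_x, grad_vars
  return y, grad

x = tf.constant(3.0)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = linear(x)
print(tape.gradient(y, [x, scale]))  # expected: gradients 2.0 and 3.0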
tensorflow.custom_gradient
Module: tf.data tf.data.Dataset API for input pipelines. See Importing Data for an overview. Modules experimental module: Experimental API for building input pipelines. Classes class Dataset: Represents a potentially large set of elements. class DatasetSpec: Type specification for tf.data.Dataset. class FixedLengthRecordDataset: A Dataset of fixed-length records from one or more binary files. class Iterator: Represents an iterator of a tf.data.Dataset. class IteratorSpec: Type specification for tf.data.Iterator. class Options: Represents options for tf.data.Dataset. class TFRecordDataset: A Dataset comprising records from one or more TFRecord files. class TextLineDataset: A Dataset comprising lines from one or more text files. Other Members AUTOTUNE -1 INFINITE_CARDINALITY -1 UNKNOWN_CARDINALITY -2
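A brief end-to-end sketch (not part of the module listing) of the source / transform / iterate pattern that the classes above support; the numbers are arbitrary:
dataset = tf.data.Dataset.range(10)          # source
dataset = dataset.map(lambda x: x * 2)       # transform
dataset = dataset.batch(4)
for batch in dataset:                        # iterate
  print(batch.numpy())
# [0 2 4 6]
# [ 8 10 12 14]
# [16 18]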
tensorflow.data
tf.data.Dataset View source on GitHub Represents a potentially large set of elements. tf.data.Dataset( variant_tensor ) The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern: Create a source dataset from your input data. Apply dataset transformations to preprocess the data. Iterate over the dataset and process the elements. Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory. Source Datasets: The simplest way to create a dataset is to create it from a python list: dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) To process lines from files, use tf.data.TextLineDataset: dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"]) To process records written in the TFRecord format, use TFRecordDataset: dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"]) To create a dataset of all files matching a pattern, use tf.data.Dataset.list_files: dataset = tf.data.Dataset.list_files("/path/*.txt") # doctest: +SKIP See tf.data.FixedLengthRecordDataset and tf.data.Dataset.from_generator for more ways to create datasets. Transformations: Once you have a dataset, you can apply transformations to prepare the data for your model: dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.map(lambda x: x*2) list(dataset.as_numpy_iterator()) [2, 4, 6] Common Terms: Element: A single output from calling next() on a dataset iterator. Elements may be nested structures containing multiple components. For example, the element (1, (3, "apple")) has one tuple nested in another tuple. The components are 1, 3, and "apple". Component: The leaf in the nested structure of an element. Supported types: Elements can be nested structures of tuples, named tuples, and dictionaries. Note that Python lists are not treated as nested structures of components. Instead, lists are converted to tensors and treated as components. For example, the element (1, [1, 2, 3]) has only two components; the tensor 1 and the tensor [1, 2, 3]. Element components can be of any type representable by tf.TypeSpec, including tf.Tensor, tf.data.Dataset, tf.sparse.SparseTensor, tf.RaggedTensor, and tf.TensorArray. a = 1 # Integer element b = 2.0 # Float element c = (1, 2) # Tuple element with 2 components d = {"a": (2, 2), "b": 3} # Dict element with 3 components Point = collections.namedtuple("Point", ["x", "y"]) # doctest: +SKIP e = Point(1, 2) # Named tuple # doctest: +SKIP f = tf.data.Dataset.range(10) # Dataset element Args variant_tensor A DT_VARIANT tensor that represents the dataset. Attributes element_spec The type specification of an element of this dataset. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) Methods apply View source apply( transformation_func ) Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset. dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] Args transformation_func A function that takes one Dataset argument and returns a Dataset. 
Returns Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source as_numpy_iterator() Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] as_numpy_iterator() will preserve the nested structure of dataset elements. dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays. Raises TypeError if an element contains a non-Tensor value. RuntimeError if eager execution is not enabled. batch View source batch( batch_size, drop_remainder=False ) Combines consecutive elements of this dataset into batches. dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. cache View source cache( filename='' ) Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. 
Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed. dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache. Args filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. Returns Dataset A Dataset. cardinality View source cardinality() Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively. concatenate View source concatenate( dataset ) Creates a Dataset by concatenating the given dataset with this dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have the same # nested structures and output types. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> Args dataset Dataset to be concatenated. Returns Dataset A Dataset. enumerate View source enumerate( start=0 ) Enumerates the elements of this dataset. It is similar to python's enumerate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) # The nested structure of the input dataset determines the structure of # elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) Args start A tf.int64 scalar tf.Tensor, representing the start value for enumeration. Returns Dataset A Dataset. filter View source filter( predicate ) Filters this dataset according to predicate. 
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] Args predicate A function mapping a dataset element to a boolean. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source flat_map( map_func ) Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1) Args map_func A function mapping a dataset element to a dataset. Returns Dataset A Dataset. from_generator View source @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None ) Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from output_signature argument: def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] There is also a deprecated way to call from_generator, by passing either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes. Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator(). Args generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args. output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator. output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator. args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments. output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator. Returns Dataset A Dataset. from_tensor_slices View source @staticmethod from_tensor_slices( tensors ) Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element, with each component having the same size in the first dimension. Returns Dataset A Dataset. from_tensors View source @staticmethod from_tensors( tensors ) Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead. dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element. Returns Dataset A Dataset. interleave View source interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None ) Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently: # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. 
In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example: dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined. Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to a dataset. cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism. block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. list_files View source @staticmethod list_files( file_pattern, shuffle=None, seed=None ) A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems. Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order. 
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the directory, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py Args file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns Dataset A Dataset of strings corresponding to file names. map View source map( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] The input signature of map_func is determined by the structure of each element in this dataset. dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) The value or values returned by map_func determine the structure of each element in the returned dataset. dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) map_func can accept as arguments and return any type of dataset element. 
Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] 3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to another dataset element. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. options View source options() Returns the options for this dataset and its inputs. Returns A tf.data.Options object representing the dataset options. padded_batch View source padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False ) Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank. padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component. drop_remainder (Optional.) 
A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. Raises ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source prefetch( buffer_size ) Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each). dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. Returns Dataset A Dataset. range View source @staticmethod range( *args, **kwargs ) Creates a Dataset of a step-separated range of values. list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] Args *args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. **kwargs output_type: Its expected dtype. (Optional, default: tf.int64). Returns Dataset A RangeDataset. Raises ValueError if len(args) == 0. reduce View source reduce( initial_state, reduce_func ) Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result. tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 Args initial_state An element representing the initial state of the transformation. reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state. The structure of new_state must match the structure of initial_state. Returns A dataset element corresponding to the final state of the transformation. repeat View source repeat( count=None ) Repeats this dataset so each original value is seen count times. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements. Args count (Optional.)
A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely. Returns Dataset A Dataset. shard View source shard( num_shards, index ) Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i. A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Args num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel. index A tf.int64 scalar tf.Tensor, representing the worker index. Returns Dataset A Dataset. Raises InvalidArgumentError if num_shards or index are illegal values. Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) shuffle View source shuffle( buffer_size, seed=None, reshuffle_each_iteration=None ) Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch.
In TF 1.X, the idiomatic way to create epochs was through the repeat transformation: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 0, 2] In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.) Returns Dataset A Dataset. skip View source skip( count ) Creates a Dataset that skips count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset. Returns Dataset A Dataset. take View source take( count ) Creates a Dataset with at most count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset. Returns Dataset A Dataset. unbatch View source unbatch() Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...]. elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch. Returns A Dataset. window View source window( size, shift=None, stride=1, drop_remainder=False ) Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). 
The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the spacing between the input elements that are retained within each window. For example: dataset = tf.data.Dataset.range(7).window(2) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1] [2, 3] [4, 5] [6] dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [2, 3, 4] [4, 5, 6] dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows. nested = ([1, 2, 3, 4], [5, 6, 7, 8]) dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print(tuple(to_numpy(component) for component in window)) ([1, 2], [5, 6]) ([3, 4], [7, 8]) dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) dataset = dataset.window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print({'a': to_numpy(window['a'])}) {'a': [1, 2]} {'a': [3, 4]} Args size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive. shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive. stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size. Returns Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source with_options( options ) Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.experimental_deterministic = False ds = ds.with_options(options) Args options A tf.data.Options that identifies the options to use. Returns Dataset A Dataset with the given options. Raises ValueError when an option is set more than once to a non-default value. zip View source @staticmethod zip( datasets ) Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects. # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] Args datasets A nested structure of datasets. Returns Dataset A Dataset. __bool__ View source __bool__() __iter__ View source __iter__() Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. Returns A tf.data.Iterator for the elements of this dataset. Raises RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source __len__() Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead. Returns An integer representing the length of the dataset. Raises RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source __nonzero__()
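A minimal sketch of the relationship between __len__ and cardinality described above (the dataset values are illustrative assumptions, not part of the original reference):

import tensorflow as tf

# A finite dataset with a statically known size supports len() in eager mode.
finite = tf.data.Dataset.range(4)
print(len(finite))  # 4

# len() raises for an infinite or unknown-length dataset; fall back to
# cardinality() and compare against the named constants instead.
infinite = finite.repeat()
print((infinite.cardinality() == tf.data.INFINITE_CARDINALITY).numpy())  # True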
tensorflow.data.dataset
tf.data.DatasetSpec View source on GitHub Type specification for tf.data.Dataset. Inherits From: TypeSpec View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.DatasetSpec, tf.compat.v1.data.experimental.DatasetStructure tf.data.DatasetSpec( element_spec, dataset_shape=() ) See tf.TypeSpec for more information about TensorFlow type specifications. dataset = tf.data.Dataset.range(3) tf.data.DatasetSpec.from_value(dataset) DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([])) Attributes value_type The Python type for values that are compatible with this TypeSpec. In particular, all values that are compatible with this TypeSpec must be an instance of this type. Methods from_value View source @staticmethod from_value( value ) Creates a DatasetSpec for the given tf.data.Dataset value. is_compatible_with View source is_compatible_with( spec_or_value ) Returns true if spec_or_value is compatible with this TypeSpec. most_specific_compatible_type View source most_specific_compatible_type( other ) Returns the most specific TypeSpec compatible with self and other. Args other A TypeSpec. Raises ValueError If there is no TypeSpec that is compatible with both self and other. __eq__ View source __eq__( other ) Return self==value. __ne__ View source __ne__( other ) Return self!=value.
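As a brief, hedged sketch of how a DatasetSpec is commonly used: since tf.data.Dataset values carry a type specification, a DatasetSpec can appear in a tf.function input signature so the traced function accepts any dataset with a compatible element_spec. The element spec and function below are assumptions for illustration.

import tensorflow as tf

spec = tf.data.DatasetSpec(tf.TensorSpec(shape=(), dtype=tf.int64))

@tf.function(input_signature=[spec])
def total(ds):
  # Sum the elements of any compatible scalar-int64 dataset.
  return ds.reduce(tf.constant(0, tf.int64), lambda state, x: state + x)

print(total(tf.data.Dataset.range(4)).numpy())  # 6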
tensorflow.data.datasetspec
Module: tf.data.experimental Experimental API for building input pipelines. This module contains experimental Dataset sources and transformations that can be used in conjunction with the tf.data.Dataset API. Note that the tf.data.experimental API is not subject to the same backwards compatibility guarantees as tf.data, but we will provide deprecation advice in advance of removing existing functionality. See Importing Data for an overview. Modules service module: API for using the tf.data service. Classes class AutoShardPolicy: Represents the type of auto-sharding we enable. class CheckpointInputPipelineHook: Checkpoints input pipeline state every N steps or seconds. class CsvDataset: A Dataset comprising lines from one or more CSV files. class DistributeOptions: Represents options for distributed data processing. class MapVectorizationOptions: Represents options for the MapVectorization optimization. class OptimizationOptions: Represents options for dataset optimizations. class Optional: Represents a value that may or may not be present. class RandomDataset: A Dataset of pseudorandom values. class Reducer: A reducer is used for reducing a set of elements. class SqlDataset: A Dataset consisting of the results from a SQL query. class StatsAggregator: A stateful resource that aggregates statistics from one or more iterators. class StatsOptions: Represents options for collecting dataset stats using StatsAggregator. class TFRecordWriter: Writes a dataset to a TFRecord file. class ThreadingOptions: Represents options for dataset threading. Functions Counter(...): Creates a Dataset that counts from start in steps of size step. assert_cardinality(...): Asserts the cardinality of the input dataset. bucket_by_sequence_length(...): A transformation that buckets elements in a Dataset by length. bytes_produced_stats(...): Records the number of bytes produced by each element of the input dataset. cardinality(...): Returns the cardinality of dataset, if known. choose_from_datasets(...): Creates a dataset that deterministically chooses elements from datasets. copy_to_device(...): A transformation that copies dataset elements to the given target_device. dense_to_ragged_batch(...): A transformation that batches ragged elements into tf.RaggedTensors. dense_to_sparse_batch(...): A transformation that batches ragged elements into tf.sparse.SparseTensors. enumerate_dataset(...): A transformation that enumerates the elements of a dataset. (deprecated) from_variant(...): Constructs a dataset from the given variant and structure. get_next_as_optional(...): Returns a tf.experimental.Optional with the next element of the iterator. (deprecated) get_single_element(...): Returns the single element in dataset as a nested structure of tensors. get_structure(...): Returns the type signature for elements of the input dataset / iterator. group_by_reducer(...): A transformation that groups elements and performs a reduction. group_by_window(...): A transformation that groups windows of elements by key and reduces them. ignore_errors(...): Creates a Dataset from another Dataset and silently ignores any errors. latency_stats(...): Records the latency of producing each element of the input dataset. load(...): Loads a previously saved dataset. make_batched_features_dataset(...): Returns a Dataset of feature dictionaries from Example protos. make_csv_dataset(...): Reads CSV files into a dataset. make_saveable_from_iterator(...): Returns a SaveableObject for saving/restoring iterator state using Saver. 
(deprecated) map_and_batch(...): Fused implementation of map and batch. (deprecated) parallel_interleave(...): A parallel version of the Dataset.interleave() transformation. (deprecated) parse_example_dataset(...): A transformation that parses Example protos into a dict of tensors. prefetch_to_device(...): A transformation that prefetches dataset values to the given device. rejection_resample(...): A transformation that resamples a dataset to achieve a target distribution. sample_from_datasets(...): Samples elements at random from the datasets in datasets. save(...): Saves the content of the given dataset. scan(...): A transformation that scans a function across an input dataset. shuffle_and_repeat(...): Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) snapshot(...): API to persist the output of the input dataset. take_while(...): A transformation that stops dataset iteration based on a predicate. to_variant(...): Returns a variant representing the given dataset. unbatch(...): Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) unique(...): Creates a Dataset from another Dataset, discarding duplicates. Other Members AUTOTUNE -1 INFINITE_CARDINALITY -1 UNKNOWN_CARDINALITY -2
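A small illustration of the AUTOTUNE member listed above (a sketch, not part of the original reference): the constant is passed wherever a degree of parallelism or a buffer size is expected, letting the tf.data runtime pick a value.

import tensorflow as tf

dataset = tf.data.Dataset.range(100)
# Let the runtime choose the number of parallel map calls and the
# prefetch buffer size.
dataset = dataset.map(lambda x: x * 2,
                      num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)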
tensorflow.data.experimental
tf.data.experimental.assert_cardinality Asserts the cardinality of the input dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.assert_cardinality tf.data.experimental.assert_cardinality( expected_cardinality ) Note: The following assumes that "examples.tfrecord" contains 42 records. dataset = tf.data.TFRecordDataset("examples.tfrecord") cardinality = tf.data.experimental.cardinality(dataset) print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) True dataset = dataset.apply(tf.data.experimental.assert_cardinality(42)) print(tf.data.experimental.cardinality(dataset).numpy()) 42 Args expected_cardinality The expected cardinality of the input dataset. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. Raises FailedPreconditionError The assertion is checked at runtime (when iterating the dataset) and an error is raised if the actual and expected cardinality differ.
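A self-contained variant of the example above (a sketch that avoids the external TFRecord file): filter discards the statically known cardinality, and assert_cardinality restores it when the true size is known from context.

import tensorflow as tf

dataset = tf.data.Dataset.range(10).filter(lambda x: True)
print((dataset.cardinality() == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())  # True

# The assertion is only checked while iterating; a wrong value raises
# FailedPreconditionError at that point.
dataset = dataset.apply(tf.data.experimental.assert_cardinality(10))
print(tf.data.experimental.cardinality(dataset).numpy())  # 10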
tensorflow.data.experimental.assert_cardinality
tf.data.experimental.AutoShardPolicy Represents the type of auto-sharding we enable. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.AutoShardPolicy Please see the DistributeOptions.auto_shard_policy documentation for more information on each type of autosharding. Class Variables AUTO <AutoShardPolicy.AUTO: 0> DATA <AutoShardPolicy.DATA: 2> FILE <AutoShardPolicy.FILE: 1> OFF <AutoShardPolicy.OFF: -1>
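A hedged sketch of where the policy is set in practice: the value is assigned to the auto_shard_policy field of the distribute options and applied with Dataset.with_options (see the DistributeOptions documentation referenced above). The pipeline below is an illustrative assumption.

import tensorflow as tf

dataset = tf.data.Dataset.range(100).batch(10)

options = tf.data.Options()
# Shard by data (each worker sees a subset of elements) rather than by file.
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA)
dataset = dataset.with_options(options)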
tensorflow.data.experimental.autoshardpolicy
tf.data.experimental.bucket_by_sequence_length View source on GitHub A transformation that buckets elements in a Dataset by length. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.bucket_by_sequence_length tf.data.experimental.bucket_by_sequence_length( element_length_func, bucket_boundaries, bucket_batch_sizes, padded_shapes=None, padding_values=None, pad_to_bucket_boundary=False, no_padding=False, drop_remainder=False ) Elements of the Dataset are grouped together by length and then are padded and batched. This is useful for sequence tasks in which the elements have variable length. Grouping together elements that have similar lengths reduces the total fraction of padding in a batch which increases training step efficiency. Args element_length_func function from element in Dataset to tf.int32, determines the length of the element, which will determine the bucket it goes into. bucket_boundaries list<int>, upper length boundaries of the buckets. bucket_batch_sizes list<int>, batch size per bucket. Length should be len(bucket_boundaries) + 1. padded_shapes Nested structure of tf.TensorShape to pass to tf.data.Dataset.padded_batch. If not provided, will use dataset.output_shapes, which will result in variable length dimensions being padded out to the maximum length in each batch. padding_values Values to pad with, passed to tf.data.Dataset.padded_batch. Defaults to padding with 0. pad_to_bucket_boundary bool, if False, will pad dimensions with unknown size to maximum length in batch. If True, will pad dimensions with unknown size to bucket boundary minus 1 (i.e., the maximum length in each bucket), and caller must ensure that the source Dataset does not contain any elements with length longer than max(bucket_boundaries). no_padding bool, indicates whether to pad the batch features (features need to be either of type tf.sparse.SparseTensor or of same shape). drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. Raises ValueError if len(bucket_batch_sizes) != len(bucket_boundaries) + 1.
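A minimal sketch on variable-length integer sequences (the element values, bucket boundaries, and batch sizes are illustrative assumptions):

import tensorflow as tf

elements = [[0], [1, 2, 3, 4], [5, 6, 7], [7, 8, 9, 10, 11], [13]]
dataset = tf.data.Dataset.from_generator(
    lambda: elements, tf.int64, output_shapes=[None])

# Sequences of length < 3, [3, 5), and >= 5 go into separate buckets,
# each padded and batched in groups of 2.
dataset = dataset.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda elem: tf.shape(elem)[0],
        bucket_boundaries=[3, 5],
        bucket_batch_sizes=[2, 2, 2]))
for batch in dataset.as_numpy_iterator():
    print(batch)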
tensorflow.data.experimental.bucket_by_sequence_length
tf.data.experimental.bytes_produced_stats View source on GitHub Records the number of bytes produced by each element of the input dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.bytes_produced_stats tf.data.experimental.bytes_produced_stats( tag ) To consume the statistics, associate a StatsAggregator with the output dataset. Args tag String. All statistics recorded by the returned transformation will be associated with the given tag. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
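A hedged sketch of how the recorded statistics might be consumed, assuming the experimental stats options and the StatsAggregator class described elsewhere in this module are available in your TF build; the tag name is arbitrary.

import tensorflow as tf

aggregator = tf.data.experimental.StatsAggregator()

dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(
    tf.data.experimental.bytes_produced_stats("range_bytes"))

# Route the recorded statistics to the aggregator via the dataset options.
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
dataset = dataset.with_options(options)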
tensorflow.data.experimental.bytes_produced_stats
tf.data.experimental.cardinality View source on GitHub Returns the cardinality of dataset, if known. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.cardinality tf.data.experimental.cardinality( dataset ) The operation returns the cardinality of dataset. The operation may return tf.data.experimental.INFINITE_CARDINALITY if dataset contains an infinite number of elements or tf.data.experimental.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in dataset (e.g. when the dataset source is a file). dataset = tf.data.Dataset.range(42) print(tf.data.experimental.cardinality(dataset).numpy()) 42 dataset = dataset.repeat() cardinality = tf.data.experimental.cardinality(dataset) print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = tf.data.experimental.cardinality(dataset) print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy()) True Args dataset A tf.data.Dataset for which to determine cardinality. Returns A scalar tf.int64 Tensor representing the cardinality of dataset. If the cardinality is infinite or unknown, the operation returns the named constants INFINITE_CARDINALITY and UNKNOWN_CARDINALITY, respectively.
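When the result is UNKNOWN_CARDINALITY, an exact count can only be obtained by iterating. A sketch of one way to do this (suitable only for small datasets, since it consumes one full pass):

import tensorflow as tf

dataset = tf.data.Dataset.range(42).filter(lambda x: x % 2 == 0)
if tf.data.experimental.cardinality(dataset) == tf.data.experimental.UNKNOWN_CARDINALITY:
    # Count the elements by reducing over the dataset.
    n = dataset.reduce(tf.constant(0, tf.int64), lambda count, _: count + 1)
    print(n.numpy())  # 21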
tensorflow.data.experimental.cardinality
tf.data.experimental.CheckpointInputPipelineHook View source on GitHub Checkpoints input pipeline state every N steps or seconds. Inherits From: SessionRunHook View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.CheckpointInputPipelineHook tf.data.experimental.CheckpointInputPipelineHook( estimator, external_state_policy='fail' ) This hook saves the state of the iterators in the Graph so that when training is resumed the input pipeline continues from where it left off. This could potentially avoid overfitting in certain pipelines where the number of training steps per eval is small compared to the dataset size, or if the training pipeline is pre-empted. Differences from CheckpointSaverHook: Saves only the input pipelines in the "iterators" collection and not the global variables or other saveable objects. Does not write the GraphDef and MetaGraphDef to the summary. Example of checkpointing the training pipeline: est = tf.estimator.Estimator(model_fn) while True: est.train( train_input_fn, hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)], steps=train_steps_per_eval) # Note: We do not pass the hook here. metrics = est.evaluate(eval_input_fn) if should_stop_the_training(metrics): break This hook should be used if the input pipeline state needs to be saved separate from the model checkpoint. Doing so may be useful for a few reasons: The input pipeline checkpoint may be large, if there are large shuffle or prefetch buffers for instance, and may bloat the checkpoint size. If the input pipeline is shared between training and validation, restoring the checkpoint during validation may override the validation input pipeline. For saving the input pipeline checkpoint alongside the model weights use tf.data.experimental.make_saveable_from_iterator directly to create a SaveableObject and add to the SAVEABLE_OBJECTS collection. Note, however, that you will need to be careful not to restore the training iterator during eval. You can do that by not adding the iterator to the SAVEABLE_OBJECTS collection when building the eval graph. Args estimator Estimator. external_state_policy A string that identifies how to handle input pipelines that depend on external state. Possible values are 'ignore': The external state is silently ignored. 'warn': The external state is ignored, logging a warning. 'fail': The operation fails upon encountering external state. By default we set it to 'fail'. Raises ValueError One of save_steps or save_secs should be set. ValueError At most one of saver or scaffold should be set. ValueError If external_state_policy is not one of 'warn', 'ignore' or 'fail'. Methods after_create_session View source after_create_session( session, coord ) Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session. Args session A TensorFlow Session that has been created. coord A Coordinator object which keeps track of all threads. after_run View source after_run( run_context, run_values ) Called after each call to run(). The run_values argument contains results of requested ops/tensors by before_run(). The run_context argument is the same one sent to the before_run call.
run_context.request_stop() can be called to stop the iteration. If session.run() raises any exceptions then after_run() is not called. Args run_context A SessionRunContext object. run_values A SessionRunValues object. before_run View source before_run( run_context ) Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors and the TensorFlow Session. At this point the graph is finalized and you cannot add ops. Args run_context A SessionRunContext object. Returns None or a SessionRunArgs object. begin View source begin() Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of begin() on the same graph should not change the graph. end View source end( session ) Called at the end of the session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises an exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called. Args session A TensorFlow Session that will soon be closed.
tensorflow.data.experimental.checkpointinputpipelinehook
tf.data.experimental.choose_from_datasets View source on GitHub Creates a dataset that deterministically chooses elements from datasets. tf.data.experimental.choose_from_datasets( datasets, choice_dataset ) For example, given the following datasets: datasets = [tf.data.Dataset.from_tensors("foo").repeat(), tf.data.Dataset.from_tensors("bar").repeat(), tf.data.Dataset.from_tensors("baz").repeat()] # Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`. choice_dataset = tf.data.Dataset.range(3).repeat(3) result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset) The elements of result will be: "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz" Args datasets A list of tf.data.Dataset objects with compatible structure. choice_dataset A tf.data.Dataset of scalar tf.int64 tensors between 0 and len(datasets) - 1. Returns A dataset that interleaves elements from datasets according to the values of choice_dataset. Raises TypeError If the datasets or choice_dataset arguments have the wrong type.
tensorflow.data.experimental.choose_from_datasets
tf.data.experimental.copy_to_device View source on GitHub A transformation that copies dataset elements to the given target_device. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.copy_to_device tf.data.experimental.copy_to_device( target_device, source_device='/cpu:0' ) Args target_device The name of a device to which elements will be copied. source_device The original device on which input_dataset will be placed. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
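A hedged sketch of the usual pairing with prefetching on the target device (assumes a visible GPU named "/gpu:0"):

import tensorflow as tf

dataset = tf.data.Dataset.range(10)
# Copy elements to the GPU, then keep a small prefetch buffer there.
dataset = dataset.apply(tf.data.experimental.copy_to_device("/gpu:0"))
with tf.device("/gpu:0"):
    dataset = dataset.prefetch(1)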
tensorflow.data.experimental.copy_to_device
tf.data.experimental.Counter View source on GitHub Creates a Dataset that counts from start in steps of size step. tf.data.experimental.Counter( start=0, step=1, dtype=tf.dtypes.int64 ) For example: Counter() == [0, 1, 2, ...) Counter(2) == [2, 3, ...) Counter(2, 5) == [2, 7, 12, ...) Counter(0, -1) == [0, -1, -2, ...) Counter(10, -1) == [10, 9, ...) Args start (Optional.) The starting value for the counter. Defaults to 0. step (Optional.) The step size for the counter. Defaults to 1. dtype (Optional.) The data type for counter elements. Defaults to tf.int64. Returns A Dataset of scalar dtype elements.
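Because the counter never terminates on its own, bound it (for example with take) before materializing. A short sketch:

import tensorflow as tf

dataset = tf.data.experimental.Counter(start=1, step=2)
print(list(dataset.take(4).as_numpy_iterator()))  # [1, 3, 5, 7]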
tensorflow.data.experimental.counter
tf.data.experimental.CsvDataset View source on GitHub A Dataset comprising lines from one or more CSV files. Inherits From: Dataset tf.data.experimental.CsvDataset( filenames, record_defaults, compression_type=None, buffer_size=None, header=False, field_delim=',', use_quote_delim=True, na_value='', select_cols=None, exclude_cols=None ) Args filenames A tf.string tensor containing one or more filenames. record_defaults A list of default values for the CSV fields. Each item in the list is either a valid CSV DType (float32, float64, int32, int64, string), or a Tensor object with one of the above types. One per column of CSV data, with either a scalar Tensor default value for the column if it is optional, or DType or empty Tensor if required. If both this and select_columns are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index. If both this and 'exclude_cols' are specified, the sum of lengths of record_defaults and exclude_cols should equal the total number of columns in the CSV file. compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". Defaults to no compression. buffer_size (Optional.) A tf.int64 scalar denoting the number of bytes to buffer while reading files. Defaults to 4MB. header (Optional.) A tf.bool scalar indicating whether the CSV file(s) have header line(s) that should be skipped when parsing. Defaults to False. field_delim (Optional.) A tf.string scalar containing the delimiter character that separates fields in a record. Defaults to ",". use_quote_delim (Optional.) A tf.bool scalar. If False, treats double quotation marks as regular characters inside of string fields (ignoring RFC 4180, Section 2, Bullet 5). Defaults to True. na_value (Optional.) A tf.string scalar indicating a value that will be treated as NA/NaN. select_cols (Optional.) A sorted list of column indices to select from the input data. If specified, only this subset of columns will be parsed. Defaults to parsing all columns. At most one of select_cols and exclude_cols can be specified. exclude_cols (Optional.) A sorted list of column indices to exclude from the input data. If specified, only the complement of this set of column will be parsed. Defaults to parsing all columns. At most one of select_cols and exclude_cols can be specified. Raises InvalidArgumentError If exclude_cols is not None and len(exclude_cols) + len(record_defaults) does not match the total number of columns in the file(s) Attributes element_spec The type specification of an element of this dataset. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) Methods apply View source apply( transformation_func ) Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset. dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] Args transformation_func A function that takes one Dataset argument and returns a Dataset. Returns Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source as_numpy_iterator() Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. 
To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] as_numpy_iterator() will preserve the nested structure of dataset elements. dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays. Raises TypeError if an element contains a non-Tensor value. RuntimeError if eager execution is not enabled. batch View source batch( batch_size, drop_remainder=False ) Combines consecutive elements of this dataset into batches. dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. cache View source cache( filename='' ) Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed. 
dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache. Args filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. Returns Dataset A Dataset. cardinality View source cardinality() Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively. concatenate View source concatenate( dataset ) Creates a Dataset by concatenating the given dataset with this dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have the same # nested structures and output types. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> Args dataset Dataset to be concatenated. Returns Dataset A Dataset. enumerate View source enumerate( start=0 ) Enumerates the elements of this dataset. It is similar to python's enumerate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) # The nested structure of the input dataset determines the structure of # elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) Args start A tf.int64 scalar tf.Tensor, representing the start value for enumeration. Returns Dataset A Dataset. filter View source filter( predicate ) Filters this dataset according to predicate. 
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] Args predicate A function mapping a dataset element to a boolean. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source flat_map( map_func ) Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1). Args map_func A function mapping a dataset element to a dataset. Returns Dataset A Dataset. from_generator View source @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None ) Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument: def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] There is also a deprecated way to call from_generator, using either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes that are either unknown or defined by output_shapes. Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator(). Args generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args. output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator. output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator. args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments. output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator. Returns Dataset A Dataset. from_tensor_slices View source @staticmethod from_tensor_slices( tensors ) Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element, with each component having the same size in the first dimension. Returns Dataset A Dataset. from_tensors View source @staticmethod from_tensors( tensors ) Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead. dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element. Returns Dataset A Dataset. interleave View source interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None ) Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently: # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. 
In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example: dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined. Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to a dataset. cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism. block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. list_files View source @staticmethod list_files( file_pattern, shuffle=None, seed=None ) A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems. Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order. 
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the directory, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py Args file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns Dataset A Dataset of strings corresponding to file names. map View source map( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] The input signature of map_func is determined by the structure of each element in this dataset. dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) The value or values returned by map_func determine the structure of each element in the returned dataset. dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) map_func can accept as arguments and return any type of dataset element. 
Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] 3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to another dataset element. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. options View source options() Returns the options for this dataset and its inputs. Returns A tf.data.Options object representing the dataset options. padded_batch View source padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False ) Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank. padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component. drop_remainder (Optional.) 
A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. Raises ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source prefetch( buffer_size ) Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each). dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. Returns Dataset A Dataset. range View source @staticmethod range( *args, **kwargs ) Creates a Dataset of a step-separated range of values. list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] Args *args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. **kwargs output_type: Its expected dtype. (Optional, default: tf.int64). Returns Dataset A RangeDataset. Raises ValueError if len(args) == 0. reduce View source reduce( initial_state, reduce_func ) Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result. tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 Args initial_state An element representing the initial state of the transformation. reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state. The structure of new_state must match the structure of initial_state. Returns A dataset element corresponding to the final state of the transformation. repeat View source repeat( count=None ) Repeats this dataset so each original value is seen count times. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements. Args count (Optional.)
A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely. Returns Dataset A Dataset. shard View source shard( num_shards, index ) Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i. A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Args num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel. index A tf.int64 scalar tf.Tensor, representing the worker index. Returns Dataset A Dataset. Raises InvalidArgumentError if num_shards or index are illegal values. Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) shuffle View source shuffle( buffer_size, seed=None, reshuffle_each_iteration=None ) Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch.
In TF 1.X, the idiomatic way to create epochs was through the repeat transformation: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 0, 2] In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.) Returns Dataset A Dataset. skip View source skip( count ) Creates a Dataset that skips count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset. Returns Dataset A Dataset. take View source take( count ) Creates a Dataset with at most count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset. Returns Dataset A Dataset. unbatch View source unbatch() Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...]. elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch. Returns A Dataset. window View source window( size, shift=None, stride=1, drop_remainder=False ) Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). 
The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the spacing between the input elements that are selected for each window; a stride of 1 selects consecutive input elements. For example: dataset = tf.data.Dataset.range(7).window(2) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1] [2, 3] [4, 5] [6] dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [2, 3, 4] [4, 5, 6] dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows. nested = ([1, 2, 3, 4], [5, 6, 7, 8]) dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print(tuple(to_numpy(component) for component in window)) ([1, 2], [5, 6]) ([3, 4], [7, 8]) dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) dataset = dataset.window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print({'a': to_numpy(window['a'])}) {'a': [1, 2]} {'a': [3, 4]} Args size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive. shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive. stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size. Returns Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source with_options( options ) Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.experimental_deterministic = False ds = ds.with_options(options) Args options A tf.data.Options that identifies the options to use. Returns Dataset A Dataset with the given options. Raises ValueError when an option is set more than once to a non-default value. zip View source @staticmethod zip( datasets ) Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects. # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] Args datasets A nested structure of datasets. Returns Dataset A Dataset. __bool__ View source __bool__() __iter__ View source __iter__() Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. Returns An tf.data.Iterator for the elements of this dataset. Raises RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source __len__() Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead. Returns An integer representing the length of the dataset. Raises RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source __nonzero__()
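For illustration (not part of the original reference), a brief sketch of how the iterator and length protocols above behave under eager execution:
import tensorflow as tf

dataset = tf.data.Dataset.range(10)

# __iter__: datasets are Python iterables in eager mode.
for element in dataset.take(3):
    print(element.numpy())                      # 0, 1, 2

# __len__: only valid when the length is known and finite.
print(len(dataset))                             # 10

# When the length may be unknown or infinite, use cardinality() instead.
print(dataset.repeat().cardinality().numpy())   # -1, i.e. infinite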
tensorflow.data.experimental.csvdataset
tf.data.experimental.dense_to_ragged_batch A transformation that batches ragged elements into tf.RaggedTensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.dense_to_ragged_batch tf.data.experimental.dense_to_ragged_batch( batch_size, drop_remainder=False, row_splits_dtype=tf.dtypes.int64 ) This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes: If an input element is a tf.Tensor whose static tf.TensorShape is fully defined, then it is batched as normal. If an input element is a tf.Tensor whose static tf.TensorShape contains one or more axes with unknown size (i.e., shape[i]=None), then the output will contain a tf.RaggedTensor that is ragged up to any of such dimensions. If an input element is a tf.RaggedTensor or any other type, then it is batched as normal. Example: dataset = tf.data.Dataset.from_tensor_slices(np.arange(6)) dataset = dataset.map(lambda x: tf.range(x)) dataset.element_spec.shape TensorShape([None]) dataset = dataset.apply( tf.data.experimental.dense_to_ragged_batch(batch_size=2)) for batch in dataset: print(batch) <tf.RaggedTensor [[], [0]]> <tf.RaggedTensor [[0, 1], [0, 1, 2]]> <tf.RaggedTensor [[0, 1, 2, 3], [0, 1, 2, 3, 4]]> Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. row_splits_dtype The dtype that should be used for the row_splits of any new ragged tensors. Existing tf.RaggedTensor elements do not have their row_splits dtype changed. Returns Dataset A Dataset.
tensorflow.data.experimental.dense_to_ragged_batch
tf.data.experimental.dense_to_sparse_batch View source on GitHub A transformation that batches ragged elements into tf.sparse.SparseTensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.dense_to_sparse_batch tf.data.experimental.dense_to_sparse_batch( batch_size, row_shape ) Like Dataset.padded_batch(), this transformation combines multiple consecutive elements of the dataset, which might have different shapes, into a single element. The resulting element has three components (indices, values, and dense_shape), which comprise a tf.sparse.SparseTensor that represents the same data. The row_shape represents the dense shape of each row in the resulting tf.sparse.SparseTensor, to which the effective batch size is prepended. For example: # NOTE: The following examples use `{ ... }` to represent the # contents of a dataset. a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] } a.apply(tf.data.experimental.dense_to_sparse_batch( batch_size=2, row_shape=[6])) == { ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices ['a', 'b', 'c', 'a', 'b'], # values [2, 6]), # dense_shape ([[0, 0], [0, 1], [0, 2], [0, 3]], ['a', 'b', 'c', 'd'], [1, 6]) } Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. row_shape A tf.TensorShape or tf.int64 vector tensor-like object representing the equivalent dense shape of a row in the resulting tf.sparse.SparseTensor. Each element of this dataset must have the same rank as row_shape, and must have size less than or equal to row_shape in each dimension. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
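The { ... } notation above is schematic. A minimal runnable sketch of the same idea (the element values here are illustrative, not from the original docs):
import tensorflow as tf

# Variable-length rows, batched into a tf.sparse.SparseTensor with dense row shape [6].
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).map(
    lambda x: tf.fill([x], x))
dataset = dataset.apply(
    tf.data.experimental.dense_to_sparse_batch(batch_size=2, row_shape=[6]))

for sparse_batch in dataset:
    print(sparse_batch.indices.numpy())
    print(sparse_batch.values.numpy())
    print(sparse_batch.dense_shape.numpy())     # [2 6] for each batch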
tensorflow.data.experimental.dense_to_sparse_batch
tf.data.experimental.DistributeOptions View source on GitHub Represents options for distributed data processing. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.DistributeOptions tf.data.experimental.DistributeOptions() You can set the distribution options of a dataset through the experimental_distribute property of tf.data.Options; the property is an instance of tf.data.experimental.DistributeOptions. options = tf.data.Options() options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF dataset = dataset.with_options(options) Attributes auto_shard_policy The type of sharding that auto-shard should attempt. If this is set to FILE, then we will attempt to shard by files (each worker will get a set of files to process). If we cannot find a set of files to shard for at least one file per worker, we will error out. When this option is selected, make sure that you have enough files so that each worker gets at least one file. There will be a runtime error thrown if there are insufficient files. If this is set to DATA, then we will shard by elements produced by the dataset, and each worker will process the whole dataset and discard the portion that is not for itself. If this is set to OFF, then we will not autoshard, and each worker will receive a copy of the full dataset. This option is set to AUTO by default, AUTO will attempt to first shard by FILE, and fall back to sharding by DATA if we cannot find a set of files to shard. num_devices The number of devices attached to this input pipeline. This will be automatically set by MultiDeviceIterator. Methods __eq__ View source __eq__( other ) Return self==value. __ne__ View source __ne__( other ) Return self!=value.
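A slightly fuller sketch of the snippet above, spelling out the AutoShardPolicy path (the input dataset here is just a placeholder):
import tensorflow as tf

dataset = tf.data.Dataset.range(8)   # placeholder input pipeline

options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)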
tensorflow.data.experimental.distributeoptions
tf.data.experimental.enumerate_dataset View source on GitHub A transformation that enumerates the elements of a dataset. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.enumerate_dataset tf.data.experimental.enumerate_dataset( start=0 ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.enumerate(). It is similar to python's enumerate. For example: # NOTE: The following examples use `{ ... }` to represent the # contents of a dataset. a = { 1, 2, 3 } b = { (7, 8), (9, 10) } # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a.apply(tf.data.experimental.enumerate_dataset(start=5)) => { (5, 1), (6, 2), (7, 3) } b.apply(tf.data.experimental.enumerate_dataset()) => { (0, (7, 8)), (1, (9, 10)) } Args start A tf.int64 scalar tf.Tensor, representing the start value for enumeration. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
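Although deprecated, the transformation is still callable. A minimal sketch, alongside the recommended tf.data.Dataset.enumerate replacement (values chosen for illustration):
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30])

# Deprecated transformation.
enumerated = dataset.apply(tf.data.experimental.enumerate_dataset(start=5))
print(list(enumerated.as_numpy_iterator()))            # [(5, 10), (6, 20), (7, 30)]

# Recommended replacement.
print(list(dataset.enumerate(start=5).as_numpy_iterator()))  # same result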
tensorflow.data.experimental.enumerate_dataset
tf.data.experimental.from_variant View source on GitHub Constructs a dataset from the given variant and structure. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.from_variant tf.data.experimental.from_variant( variant, structure ) Args variant A scalar tf.variant tensor representing a dataset. structure A tf.data.experimental.Structure object representing the structure of each element in the dataset. Returns A tf.data.Dataset instance.
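A minimal round-trip sketch, assuming the companion helpers tf.data.experimental.to_variant and tf.data.experimental.get_structure:
import tensorflow as tf

dataset = tf.data.Dataset.range(3)

variant = tf.data.experimental.to_variant(dataset)
structure = tf.data.experimental.get_structure(dataset)

restored = tf.data.experimental.from_variant(variant, structure)
print(list(restored.as_numpy_iterator()))   # [0, 1, 2]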
tensorflow.data.experimental.from_variant
tf.data.experimental.get_next_as_optional View source on GitHub Returns a tf.experimental.Optional with the next element of the iterator. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.get_next_as_optional tf.data.experimental.get_next_as_optional( iterator ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Iterator.get_next_as_optional() instead. If the iterator has reached the end of the sequence, the returned tf.experimental.Optional will have no value. Args iterator A tf.data.Iterator. Returns A tf.experimental.Optional object which either contains the next element of the iterator (if it exists) or no value.
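A short sketch of the behavior at and past the end of an iterator (shown with the deprecated function; tf.data.Iterator.get_next_as_optional behaves the same way):
import tensorflow as tf

iterator = iter(tf.data.Dataset.range(1))

first = tf.data.experimental.get_next_as_optional(iterator)
print(first.has_value().numpy())      # True
print(first.get_value().numpy())      # 0

past_end = tf.data.experimental.get_next_as_optional(iterator)
print(past_end.has_value().numpy())   # False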
tensorflow.data.experimental.get_next_as_optional
tf.data.experimental.get_single_element View source on GitHub Returns the single element in dataset as a nested structure of tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.get_single_element tf.data.experimental.get_single_element( dataset ) This function enables you to use a tf.data.Dataset in a stateless "tensor-in tensor-out" expression, without creating an iterator. This can be useful when your preprocessing transformations are expressed as a Dataset, and you want to use the transformation at serving time. For example: def preprocessing_fn(input_str): # ... return image, label input_batch = ... # input batch of BATCH_SIZE elements dataset = (tf.data.Dataset.from_tensor_slices(input_batch) .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE) .batch(BATCH_SIZE)) image_batch, label_batch = tf.data.experimental.get_single_element(dataset) Args dataset A tf.data.Dataset object containing a single element. Returns A nested structure of tf.Tensor objects, corresponding to the single element of dataset. Raises TypeError if dataset is not a tf.data.Dataset object. InvalidArgumentError (at runtime): if dataset does not contain exactly one element.
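The preprocessing example above is schematic. A minimal concrete sketch with a one-element dataset (values chosen for illustration):
import tensorflow as tf

dataset = tf.data.Dataset.from_tensors((tf.constant(42), tf.constant("x")))
value, label = tf.data.experimental.get_single_element(dataset)
print(value.numpy(), label.numpy())   # 42 b'x'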
tensorflow.data.experimental.get_single_element
tf.data.experimental.get_structure View source on GitHub Returns the type signature for elements of the input dataset / iterator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.get_structure tf.data.experimental.get_structure( dataset_or_iterator ) Args dataset_or_iterator A tf.data.Dataset or a tf.data.Iterator. Returns A nested structure of tf.TypeSpec objects matching the structure of an element of dataset_or_iterator and specifying the type of individual components. Raises TypeError If input is not a tf.data.Dataset or a tf.data.Iterator object.
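An illustrative sketch (the element structure here is chosen for illustration):
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [[1.0], [2.0]]})
print(tf.data.experimental.get_structure(dataset))
# {'a': TensorSpec(shape=(), dtype=tf.int32, name=None),
#  'b': TensorSpec(shape=(1,), dtype=tf.float32, name=None)}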
tensorflow.data.experimental.get_structure
tf.data.experimental.group_by_reducer View source on GitHub A transformation that groups elements and performs a reduction. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.group_by_reducer tf.data.experimental.group_by_reducer( key_func, reducer ) This transformation maps each element of a dataset to a key using key_func and groups the elements by key. The reducer is used to process each group; its init_func is used to initialize state for each group when it is created, the reduce_func is used to update the state every time an element is mapped to the matching group, and the finalize_func is used to map the final state to an output value. Args key_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.int64 tensor. reducer An instance of Reducer, which captures the reduction logic using the init_func, reduce_func, and finalize_func functions. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
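No example is given above, so here is a hedged sketch that counts how many elements fall under each key; the order in which the per-key results are emitted is not guaranteed:
import numpy as np
import tensorflow as tf

# Count the elements in each parity group.
reducer = tf.data.experimental.Reducer(
    init_func=lambda key: np.int64(0),
    reduce_func=lambda state, element: state + 1,
    finalize_func=lambda state: state)

dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(
    tf.data.experimental.group_by_reducer(
        key_func=lambda x: x % 2, reducer=reducer))
print(list(dataset.as_numpy_iterator()))   # e.g. [5, 5]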
tensorflow.data.experimental.group_by_reducer
tf.data.experimental.group_by_window View source on GitHub A transformation that groups windows of elements by key and reduces them. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.group_by_window tf.data.experimental.group_by_window( key_func, reduce_func, window_size=None, window_size_func=None ) This transformation maps each consecutive element in a dataset to a key using key_func and groups the elements by key. It then applies reduce_func to at most window_size_func(key) elements matching the same key. All except the final window for each key will contain window_size_func(key) elements; the final window may be smaller. You may provide either a constant window_size or a window size determined by the key through window_size_func. Args key_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.int64 tensor. reduce_func A function mapping a key and a dataset of up to window_size consecutive elements matching that key to another dataset. window_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to reduce_func. Mutually exclusive with window_size_func. window_size_func A function mapping a key to a tf.int64 scalar tf.Tensor, representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to reduce_func. Mutually exclusive with window_size. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. Raises ValueError if neither or both of {window_size, window_size_func} are passed.
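A minimal sketch (element values chosen for illustration; the order in which groups are emitted is not guaranteed):
import tensorflow as tf

# Batch elements with the same parity together, up to three at a time.
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(
    tf.data.experimental.group_by_window(
        key_func=lambda x: x % 2,
        reduce_func=lambda key, window: window.batch(3),
        window_size=3))
for batch in dataset:
    print(batch.numpy())
# e.g. [0 2 4], [1 3 5], [6 8], [7 9]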
tensorflow.data.experimental.group_by_window
tf.data.experimental.ignore_errors View source on GitHub Creates a Dataset from another Dataset and silently ignores any errors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.ignore_errors tf.data.experimental.ignore_errors( log_warning=False ) Use this transformation to produce a dataset that contains the same elements as the input, but silently drops any elements that caused an error. For example: dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.]) # Computing `tf.debugging.check_numerics(1. / 0.)` will raise an InvalidArgumentError. dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error")) # Using `ignore_errors()` will drop the element that causes an error. dataset = dataset.apply(tf.data.experimental.ignore_errors()) # ==> {1., 0.5, 0.25} Args log_warning (Optional.) A tf.bool scalar indicating whether ignored errors should be logged to stderr. Defaults to False. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
tensorflow.data.experimental.ignore_errors
tf.data.experimental.latency_stats View source on GitHub Records the latency of producing each element of the input dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.latency_stats tf.data.experimental.latency_stats( tag ) To consume the statistics, associate a StatsAggregator with the output dataset. Args tag String. All statistics recorded by the returned transformation will be associated with the given tag. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
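A sketch of how the recorded statistics might be consumed, assuming the experimental StatsAggregator and tf.data.Options.experimental_stats APIs of this release:
import tensorflow as tf

aggregator = tf.data.experimental.StatsAggregator()

dataset = tf.data.Dataset.range(100)
dataset = dataset.apply(tf.data.experimental.latency_stats("record_latency"))

options = tf.data.Options()
options.experimental_stats.aggregator = aggregator   # associate the aggregator
dataset = dataset.with_options(options)

for _ in dataset:
    pass   # iterating populates the "record_latency" statistic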
tensorflow.data.experimental.latency_stats
tf.data.experimental.load Loads a previously saved dataset. tf.data.experimental.load( path, element_spec, compression=None, reader_func=None ) Example usage: import tempfile path = os.path.join(tempfile.gettempdir(), "saved_data") # Save a dataset dataset = tf.data.Dataset.range(2) tf.data.experimental.save(dataset, path) new_dataset = tf.data.experimental.load(path, tf.TensorSpec(shape=(), dtype=tf.int64)) for elem in new_dataset: print(elem) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) Note that to load a previously saved dataset, you need to specify element_spec -- a type signature of the elements of the saved dataset, which can be obtained via tf.data.Dataset.element_spec. This requirement exists so that shape inference of the loaded dataset does not need to perform I/O. If the default option of sharding the saved dataset was used, the element order of the saved dataset will be preserved when loading it. The reader_func argument can be used to specify a custom order in which elements should be loaded from the individual shards. The reader_func is expected to take a single argument -- a dataset of datasets, each containing elements of one of the shards -- and return a dataset of elements. For example, the order of shards can be shuffled when loading them as follows: def custom_reader_func(datasets): datasets = datasets.shuffle(NUM_SHARDS) return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE) dataset = tf.data.experimental.load( path="/path/to/data", ..., reader_func=custom_reader_func) Args path Required. A path pointing to a previously saved dataset. element_spec Required. A nested structure of tf.TypeSpec objects matching the structure of an element of the saved dataset and specifying the type of individual element components. compression Optional. The algorithm to use to decompress the data when reading it. Supported options are GZIP and NONE. Defaults to NONE. reader_func Optional. A function to control how to read data from shards. If present, the function will be traced and executed as graph computation. Returns A tf.data.Dataset instance.
tensorflow.data.experimental.load
tf.data.experimental.make_batched_features_dataset View source on GitHub Returns a Dataset of feature dictionaries from Example protos. tf.data.experimental.make_batched_features_dataset( file_pattern, batch_size, features, reader=None, label_key=None, reader_args=None, num_epochs=None, shuffle=True, shuffle_buffer_size=10000, shuffle_seed=None, prefetch_buffer_size=None, reader_num_threads=None, parser_num_threads=None, sloppy_ordering=False, drop_final_batch=False ) If the label_key argument is provided, returns a Dataset of tuples comprising a feature dictionary and a label. Example: serialized_examples = [ features { feature { key: "age" value { int64_list { value: [ 0 ] } } } feature { key: "gender" value { bytes_list { value: [ "f" ] } } } feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } } }, features { feature { key: "age" value { int64_list { value: [] } } } feature { key: "gender" value { bytes_list { value: [ "f" ] } } } feature { key: "kws" value { bytes_list { value: [ "sports" ] } } } } ] We can use arguments: features: { "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), "gender": FixedLenFeature([], dtype=tf.string), "kws": VarLenFeature(dtype=tf.string), } And the expected output is: { "age": [[0], [-1]], "gender": [["f"], ["f"]], "kws": SparseTensor( indices=[[0, 0], [0, 1], [1, 0]], values=["code", "art", "sports"], dense_shape=[2, 2]), } Args file_pattern List of files or patterns of file paths containing Example records. See tf.io.gfile.glob for pattern rules. batch_size An int representing the number of records to combine in a single batch. features A dict mapping feature keys to FixedLenFeature or VarLenFeature values. See tf.io.parse_example. reader A function or class that can be called with a filenames tensor and (optional) reader_args and returns a Dataset of Example tensors. Defaults to tf.data.TFRecordDataset. label_key (Optional) A string corresponding to the key under which labels are stored in the tf.Examples. If provided, it must be one of the features keys; otherwise a ValueError results. reader_args Additional arguments to pass to the reader class. num_epochs Integer specifying the number of times to read through the dataset. If None, cycles through the dataset forever. Defaults to None. shuffle A boolean, indicates whether the input should be shuffled. Defaults to True. shuffle_buffer_size Buffer size of the ShuffleDataset. A large capacity ensures better shuffling but would increase memory usage and startup time. shuffle_seed Randomization seed to use for shuffling. prefetch_buffer_size Number of feature batches to prefetch in order to improve performance. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. reader_num_threads Number of threads used to read Example records. If >1, the results will be interleaved. Defaults to 1. parser_num_threads Number of threads to use for parsing Example tensors into a dictionary of Feature tensors. Defaults to 2. sloppy_ordering If True, reading performance will be improved at the cost of non-deterministic ordering. If False, the order of elements produced is deterministic prior to shuffling (elements are still randomized if shuffle=True. Note that if the seed is set, then order of elements after shuffling is deterministic). Defaults to False. drop_final_batch If True, and the batch size does not evenly divide the input dataset size, the final smaller batch will be dropped. Defaults to False.
Returns A dataset of dict elements, or a tuple of dict elements and label. Each dict maps feature keys to Tensor or SparseTensor objects. Raises TypeError If reader is of the wrong type. ValueError If label_key is not one of the features keys.
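A usage sketch matching the feature spec in the example above; the TFRecord file pattern is hypothetical:
import tensorflow as tf

dataset = tf.data.experimental.make_batched_features_dataset(
    file_pattern="/path/to/examples-*.tfrecord",   # hypothetical files
    batch_size=2,
    features={
        "age": tf.io.FixedLenFeature([], dtype=tf.int64, default_value=-1),
        "gender": tf.io.FixedLenFeature([], dtype=tf.string),
        "kws": tf.io.VarLenFeature(dtype=tf.string),
    },
    num_epochs=1,
    shuffle=False)

for batch in dataset.take(1):
    print(batch["age"], batch["gender"], batch["kws"])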
tensorflow.data.experimental.make_batched_features_dataset
tf.data.experimental.make_csv_dataset View source on GitHub Reads CSV files into a dataset. tf.data.experimental.make_csv_dataset( file_pattern, batch_size, column_names=None, column_defaults=None, label_name=None, select_columns=None, field_delim=',', use_quote_delim=True, na_value='', header=True, num_epochs=None, shuffle=True, shuffle_buffer_size=10000, shuffle_seed=None, prefetch_buffer_size=None, num_parallel_reads=None, sloppy=False, num_rows_for_inference=100, compression_type=None, ignore_errors=False ) Reads CSV files into a dataset, where each element is a (features, labels) tuple that corresponds to a batch of CSV rows. The features dictionary maps feature column names to Tensors containing the corresponding feature data, and labels is a Tensor containing the batch's label data. Args file_pattern List of files or patterns of file paths containing CSV records. See tf.io.gfile.glob for pattern rules. batch_size An int representing the number of records to combine in a single batch. column_names An optional list of strings that corresponds to the CSV columns, in order. One per column of the input record. If this is not provided, infers the column names from the first row of the records. These names will be the keys of the features dict of each dataset element. column_defaults An optional list of default values for the CSV fields. One item per selected column of the input record. Each item in the list is either a valid CSV dtype (float32, float64, int32, int64, or string), or a Tensor with one of the aforementioned types. The tensor can either be a scalar default value (if the column is optional), or an empty tensor (if the column is required). If a dtype is provided instead of a tensor, the column is also treated as required. If this list is not provided, tries to infer types based on reading the first num_rows_for_inference rows of files specified, and assumes all columns are optional, defaulting to 0 for numeric values and "" for string values. If both this and select_columns are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index. label_name An optional string corresponding to the label column. If provided, the data for this column is returned as a separate Tensor from the features dictionary, so that the dataset complies with the format expected by a tf.Estimator.train or tf.Estimator.evaluate input function. select_columns An optional list of integer indices or string column names that specifies a subset of columns of CSV data to select. If column names are provided, these must correspond to names provided in column_names or inferred from the file header lines. When this argument is specified, only a subset of CSV columns will be parsed and returned, corresponding to the columns specified. Using this results in faster parsing and lower memory usage. If both this and column_defaults are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index. field_delim An optional string. Defaults to ",". Char delimiter to separate fields in a record. use_quote_delim An optional bool. Defaults to True. If false, treats double quotation marks as regular characters inside of the string fields. na_value Additional string to recognize as NA/NaN. header A bool that indicates whether the first rows of provided CSV files correspond to header lines with column names, and should not be included in the data.
num_epochs An int specifying the number of times this dataset is repeated. If None, cycles through the dataset forever. shuffle A bool that indicates whether the input should be shuffled. shuffle_buffer_size Buffer size to use for shuffling. A large buffer size ensures better shuffling, but increases memory usage and startup time. shuffle_seed Randomization seed to use for shuffling. prefetch_buffer_size An int specifying the number of feature batches to prefetch for performance improvement. Recommended value is the number of batches consumed per training step. Defaults to auto-tune. num_parallel_reads Number of threads used to read CSV records from files. If >1, the results will be interleaved. Defaults to 1. sloppy If True, reading performance will be improved at the cost of non-deterministic ordering. If False, the order of elements produced is deterministic prior to shuffling (elements are still randomized if shuffle=True. Note that if the seed is set, then order of elements after shuffling is deterministic). Defaults to False. num_rows_for_inference Number of rows of a file to use for type inference if record_defaults is not provided. If None, reads all the rows of all the files. Defaults to 100. compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". Defaults to no compression. ignore_errors (Optional.) If True, ignores errors with CSV file parsing, such as malformed data or empty lines, and moves on to the next valid CSV record. Otherwise, the dataset raises an error and stops processing when encountering any invalid records. Defaults to False. Returns A dataset, where each element is a (features, labels) tuple that corresponds to a batch of batch_size CSV rows. The features dictionary maps feature column names to Tensors containing the corresponding column data, and labels is a Tensor containing the column data for the label column specified by label_name. Raises ValueError If any of the arguments is malformed.
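A self-contained sketch that writes a tiny CSV file and reads it back (file contents chosen for illustration):
import os
import tempfile
import tensorflow as tf

path = os.path.join(tempfile.gettempdir(), "example.csv")
with open(path, "w") as f:
    f.write("feature_a,feature_b,label\n")
    f.write("1.0,2.0,0\n")
    f.write("3.0,4.0,1\n")

dataset = tf.data.experimental.make_csv_dataset(
    path, batch_size=2, label_name="label", num_epochs=1, shuffle=False)

for features, labels in dataset:
    print(features["feature_a"].numpy(), labels.numpy())   # e.g. [1. 3.] [0 1]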
tensorflow.data.experimental.make_csv_dataset
tf.data.experimental.make_saveable_from_iterator View source on GitHub Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.make_saveable_from_iterator tf.data.experimental.make_saveable_from_iterator( iterator, external_state_policy='fail' ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: make_saveable_from_iterator is intended for use in TF1 with tf.compat.v1.Saver. In TF2, use tf.train.Checkpoint instead. Args iterator Iterator. external_state_policy A string that identifies how to handle input pipelines that depend on external state. Possible values are 'ignore': The external state is silently ignored. 'warn': The external state is ignored, logging a warning. 'fail': The operation fails upon encountering external state. By default we set it to 'fail'. Returns A SaveableObject for saving/restoring iterator state using Saver. Raises ValueError If iterator does not support checkpointing. ValueError If external_state_policy is not one of 'warn', 'ignore' or 'fail'. For example: with tf.Graph().as_default(): ds = tf.data.Dataset.range(10) iterator = ds.make_initializable_iterator() # Build the iterator SaveableObject. saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator) # Add the SaveableObject to the SAVEABLE_OBJECTS collection so # it can be automatically saved using Saver. tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj) saver = tf.compat.v1.train.Saver() while continue_training: ... Perform training ... if should_save_checkpoint: saver.save() Note: When restoring the iterator, the existing iterator state is completely discarded. This means that any changes you may have made to the Dataset graph will be discarded as well! This includes the new Dataset graph that you may have built during validation. So, while running validation, make sure to run the initializer for the validation input pipeline after restoring the checkpoint. Note: Not all iterators support checkpointing yet. Attempting to save the state of an unsupported iterator will throw an error.
tensorflow.data.experimental.make_saveable_from_iterator
tf.data.experimental.MapVectorizationOptions View source on GitHub Represents options for the MapVectorization optimization. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.MapVectorizationOptions tf.data.experimental.MapVectorizationOptions() Attributes enabled Whether to vectorize map transformations. If None, defaults to False. use_choose_fastest Whether to use ChooseFastestBranchDataset with this transformation. If True, the pipeline picks between the vectorized and original segment at runtime based on their iterations speed. If None, defaults to False. Methods __eq__ View source __eq__( other ) Return self==value. __ne__ View source __ne__( other ) Return self!=value.
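For reference, a minimal sketch of how these attributes are reached through tf.data.Options (mirroring the OptimizationOptions example later in this document):
import tensorflow as tf

options = tf.data.Options()
options.experimental_optimization.map_vectorization.enabled = True
options.experimental_optimization.map_vectorization.use_choose_fastest = True

dataset = tf.data.Dataset.range(8).map(lambda x: x * 2)
dataset = dataset.with_options(options)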
tensorflow.data.experimental.mapvectorizationoptions
tf.data.experimental.map_and_batch View source on GitHub Fused implementation of map and batch. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.map_and_batch tf.data.experimental.map_and_batch( map_func, batch_size, num_parallel_batches=None, drop_remainder=False, num_parallel_calls=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map(map_func, num_parallel_calls) followed by tf.data.Dataset.batch(batch_size, drop_remainder). Static tf.data optimizations will take care of using the fused implementation. Maps map_func across batch_size consecutive elements of this dataset and then combines them into a batch. Functionally, it is equivalent to map followed by batch. This API is temporary and deprecated since input pipeline optimization now fuses consecutive map and batch operations automatically. Args map_func A function mapping a nested structure of tensors to another nested structure of tensors. batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. num_parallel_batches (Optional.) A tf.int64 scalar tf.Tensor, representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process in parallel. If not specified, batch_size * num_parallel_batches elements will be processed in parallel. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. Raises ValueError If both num_parallel_batches and num_parallel_calls are specified.
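A minimal sketch of the deprecated fused form next to the recommended replacement (values chosen for illustration):
import tensorflow as tf

dataset = tf.data.Dataset.range(10)

# Deprecated fused transformation.
fused = dataset.apply(
    tf.data.experimental.map_and_batch(lambda x: x * 2, batch_size=4))

# Recommended equivalent; static tf.data optimizations fuse these automatically.
replacement = dataset.map(lambda x: x * 2).batch(4)

print([b.tolist() for b in fused.as_numpy_iterator()])
# [[0, 2, 4, 6], [8, 10, 12, 14], [16, 18]]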
tensorflow.data.experimental.map_and_batch
tf.data.experimental.OptimizationOptions View source on GitHub Represents options for dataset optimizations. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.OptimizationOptions tf.data.experimental.OptimizationOptions() You can set the optimization options of a dataset through the experimental_optimization property of tf.data.Options; the property is an instance of tf.data.experimental.OptimizationOptions. options = tf.data.Options() options.experimental_optimization.noop_elimination = True options.experimental_optimization.map_vectorization.enabled = True options.experimental_optimization.apply_default_optimizations = False dataset = dataset.with_options(options) Attributes apply_default_optimizations Whether to apply default graph optimizations. If False, only graph optimizations that have been explicitly enabled will be applied. autotune Whether to automatically tune performance knobs. If None, defaults to True. autotune_buffers When autotuning is enabled (through autotune), determines whether to also autotune buffer sizes for datasets with parallelism. If None, defaults to False. autotune_cpu_budget When autotuning is enabled (through autotune), determines the CPU budget to use. Values greater than the number of schedulable CPU cores are allowed but may result in CPU contention. If None, defaults to the number of schedulable CPU cores. autotune_ram_budget When autotuning is enabled (through autotune), determines the RAM budget to use. Values greater than the available RAM in bytes may result in OOM. If None, defaults to half of the available RAM in bytes. filter_fusion Whether to fuse filter transformations. If None, defaults to False. filter_with_random_uniform_fusion Whether to fuse filter dataset that predicts random_uniform < rate into a sampling dataset. If None, defaults to False. hoist_random_uniform Whether to hoist tf.random_uniform() ops out of map transformations. If None, defaults to False. map_and_batch_fusion Whether to fuse map and batch transformations. If None, defaults to True. map_and_filter_fusion Whether to fuse map and filter transformations. If None, defaults to False. map_fusion Whether to fuse map transformations. If None, defaults to False. map_parallelization Whether to parallelize stateless map transformations. If None, defaults to False. map_vectorization The map vectorization options associated with the dataset. See tf.data.experimental.MapVectorizationOptions for more details. noop_elimination Whether to eliminate no-op transformations. If None, defaults to True. parallel_batch Whether to parallelize copying of batch elements. This optimization is highly experimental and can cause performance degradation (e.g. when the parallelization overhead exceeds the benefits of performing the data copies in parallel). You should only enable this optimization if a) your input pipeline is bottlenecked on batching and b) you have validated that this optimization improves performance. If None, defaults to False. reorder_data_discarding_ops Whether to reorder ops that will discard data to the front of unary cardinality preserving transformations, e.g. dataset.map(...).take(3) will be optimized to dataset.take(3).map(...). For now this optimization will move skip, shard and take to the front of map and prefetch. This optimization is only for performance; it will not affect the output of the dataset. If None, defaults to True. shuffle_and_repeat_fusion Whether to fuse shuffle and repeat transformations. 
If None, defaults to True. Methods __eq__ View source __eq__( other ) Return self==value. __ne__ View source __ne__( other ) Return self!=value.
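As a quick check that the settings took effect, the options attached to a dataset can be read back; a minimal sketch using the attributes described above:
import tensorflow as tf

dataset = tf.data.Dataset.range(10).map(lambda x: x + 1).batch(2)
options = tf.data.Options()
options.experimental_optimization.map_and_batch_fusion = True
options.experimental_optimization.apply_default_optimizations = False
dataset = dataset.with_options(options)
# Options merge into the dataset and can be inspected later.
print(dataset.options().experimental_optimization.map_and_batch_fusion)  # True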
tensorflow.data.experimental.optimizationoptions
tf.data.experimental.parallel_interleave View source on GitHub A parallel version of the Dataset.interleave() transformation. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.parallel_interleave tf.data.experimental.parallel_interleave( map_func, cycle_length, block_length=1, sloppy=False, buffer_output_elements=None, prefetch_input_elements=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE) instead. If sloppy execution is desired, use tf.data.Options.experimental_deterministic. parallel_interleave() maps map_func across its input to produce nested datasets, and outputs their elements interleaved. Unlike tf.data.Dataset.interleave, it gets elements from cycle_length nested datasets in parallel, which increases the throughput, especially in the presence of stragglers. Furthermore, the sloppy argument can be used to improve performance, by relaxing the requirement that the outputs are produced in a deterministic order, and allowing the implementation to skip over nested datasets whose elements are not readily available when requested. Example usage: # Preprocess 4 files concurrently. filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords") dataset = filenames.apply( tf.data.experimental.parallel_interleave( lambda filename: tf.data.TFRecordDataset(filename), cycle_length=4)) Warning: If sloppy is True, the order of produced elements is not deterministic. Args map_func A function mapping a nested structure of tensors to a Dataset. cycle_length The number of input Datasets to interleave from in parallel. block_length The number of consecutive elements to pull from an input Dataset before advancing to the next input Dataset. sloppy A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If sloppy is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to enforce a deterministic order. buffer_output_elements The number of elements each iterator being interleaved should buffer (similar to the .prefetch() transformation for each interleaved iterator). prefetch_input_elements The number of input elements to transform to iterators before they are needed for interleaving. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
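For migration, the replacement named in the deprecation notice can be spelled out as a short sketch (the file pattern is illustrative, carried over from the example above):
import tensorflow as tf

filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords")
# Equivalent of the deprecated transformation above, using the core API.
dataset = filenames.interleave(
    lambda filename: tf.data.TFRecordDataset(filename),
    cycle_length=4,
    num_parallel_calls=tf.data.AUTOTUNE)
# To recover the old `sloppy=True` behavior, relax determinism via options.
options = tf.data.Options()
options.experimental_deterministic = False
dataset = dataset.with_options(options)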
tensorflow.data.experimental.parallel_interleave
tf.data.experimental.parse_example_dataset View source on GitHub A transformation that parses Example protos into a dict of tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.parse_example_dataset tf.data.experimental.parse_example_dataset( features, num_parallel_calls=1, deterministic=None ) Parses a number of serialized Example protos given in serialized. We refer to serialized as a batch with batch_size many entries of individual Example protos. This op parses serialized examples into a dictionary mapping keys to Tensor, SparseTensor, and RaggedTensor objects. features is a dict from keys to VarLenFeature, RaggedFeature, SparseFeature, and FixedLenFeature objects. Each VarLenFeature and SparseFeature is mapped to a SparseTensor; each RaggedFeature is mapped to a RaggedTensor; and each FixedLenFeature is mapped to a Tensor. See tf.io.parse_example for more details about feature dictionaries. Args features A dict mapping feature keys to FixedLenFeature, VarLenFeature, RaggedFeature, and SparseFeature values. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of parsing processes to call in parallel. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order if some parsing calls complete faster than others. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns A dataset transformation function, which can be passed to tf.data.Dataset.apply. Raises ValueError if features argument is None.
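A minimal usage sketch, assuming a TFRecord file of serialized Example protos at a hypothetical path and two illustrative feature names:
import tensorflow as tf

features = {
    "label": tf.io.FixedLenFeature([], tf.int64),
    "tokens": tf.io.VarLenFeature(tf.string),
}
# Batch the serialized protos, then parse each batch into a feature dict.
dataset = tf.data.TFRecordDataset("/path/to/examples.tfrecord")  # hypothetical path
dataset = dataset.batch(32)
dataset = dataset.apply(
    tf.data.experimental.parse_example_dataset(features, num_parallel_calls=4))
# Each element is now a dict: {"label": dense Tensor, "tokens": SparseTensor}.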
tensorflow.data.experimental.parse_example_dataset
tf.data.experimental.prefetch_to_device View source on GitHub A transformation that prefetches dataset values to the given device. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.prefetch_to_device tf.data.experimental.prefetch_to_device( device, buffer_size=None ) Note: Although the transformation creates a tf.data.Dataset, the transformation must be the final Dataset in the input pipeline. Args device A string. The name of a device to which elements will be prefetched. buffer_size (Optional.) The number of elements to buffer on device. Defaults to an automatically chosen value. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
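A minimal sketch, assuming a GPU device named "/gpu:0" is available; note that this must remain the last transformation in the pipeline:
import tensorflow as tf

dataset = tf.data.Dataset.range(10).batch(2)
# Keep up to 2 batches staged on the GPU ahead of consumption.
dataset = dataset.apply(
    tf.data.experimental.prefetch_to_device("/gpu:0", buffer_size=2))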
tensorflow.data.experimental.prefetch_to_device
tf.data.experimental.RandomDataset View source on GitHub A Dataset of pseudorandom values. Inherits From: Dataset tf.data.experimental.RandomDataset( seed=None ) Attributes element_spec The type specification of an element of this dataset. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) Methods apply View source apply( transformation_func ) Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset. dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] Args transformation_func A function that takes one Dataset argument and returns a Dataset. Returns Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source as_numpy_iterator() Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] as_numpy_iterator() will preserve the nested structure of dataset elements. dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays. Raises TypeError if an element contains a non-Tensor value. RuntimeError if eager execution is not enabled. batch View source batch( batch_size, drop_remainder=False ) Combines consecutive elements of this dataset into batches. dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. 
cache View source cache( filename='' ) Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed. dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache. Args filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. Returns Dataset A Dataset. cardinality View source cardinality() Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively. concatenate View source concatenate( dataset ) Creates a Dataset by concatenating the given dataset with this dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have the same # nested structures and output types. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> Args dataset Dataset to be concatenated. 
Returns Dataset A Dataset. enumerate View source enumerate( start=0 ) Enumerates the elements of this dataset. It is similar to python's enumerate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) # The nested structure of the input dataset determines the structure of # elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) Args start A tf.int64 scalar tf.Tensor, representing the start value for enumeration. Returns Dataset A Dataset. filter View source filter( predicate ) Filters this dataset according to predicate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] Args predicate A function mapping a dataset element to a boolean. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source flat_map( map_func ) Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1) Args map_func A function mapping a dataset element to a dataset. Returns Dataset A Dataset. from_generator View source @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None ) Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from output_signature argument: def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] There is also a deprecated way to call from_generator, by passing the output_types argument either alone or together with the output_shapes argument.
In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes. Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment. Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator(). Args generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args. output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator. output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator. args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments. output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator. Returns Dataset A Dataset. from_tensor_slices View source @staticmethod from_tensor_slices( tensors ) Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element, with each component having the same size in the first dimension. Returns Dataset A Dataset. from_tensors View source @staticmethod from_tensors( tensors ) Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead. dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element. Returns Dataset A Dataset. interleave View source interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None ) Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently: # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. 
If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example: dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined. Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to a dataset. cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism. block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. list_files View source @staticmethod list_files( file_pattern, shuffle=None, seed=None ) A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems. Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order. 
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the directory, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py Args file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns Dataset A Dataset of strings corresponding to file names. map View source map( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] The input signature of map_func is determined by the structure of each element in this dataset. dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) The value or values returned by map_func determine the structure of each element in the returned dataset. dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) map_func can accept as arguments and return any type of dataset element. 
Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] 3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to another dataset element. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. options View source options() Returns the options for this dataset and its inputs. Returns A tf.data.Options object representing the dataset options. padded_batch View source padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False ) Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank. padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component. drop_remainder (Optional.) 
A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. Raises ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source prefetch( buffer_size ) Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each). dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. Returns Dataset A Dataset. range View source @staticmethod range( *args, **kwargs ) Creates a Dataset of a step-separated range of values. list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] Args *args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. **kwargs output_type: Its expected dtype. (Optional, default: tf.int64). Returns Dataset A RangeDataset. Raises ValueError if len(args) == 0. reduce View source reduce( initial_state, reduce_func ) Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result. tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 Args initial_state An element representing the initial state of the transformation. reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state. The structure of new_state must match the structure of initial_state. Returns A dataset element corresponding to the final state of the transformation. repeat View source repeat( count=None ) Repeats this dataset so each original value is seen count times. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements. Args count (Optional.)
A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely. Returns Dataset A Dataset. shard View source shard( num_shards, index ) Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i. A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Args num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel. index A tf.int64 scalar tf.Tensor, representing the worker index. Returns Dataset A Dataset. Raises InvalidArgumentError if num_shards or index are illegal values. Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.) shuffle View source shuffle( buffer_size, seed=None, reshuffle_each_iteration=None ) Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch.
In TF 1.X, the idiomatic way to create epochs was through the repeat transformation: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 0, 2] In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.) Returns Dataset A Dataset. skip View source skip( count ) Creates a Dataset that skips count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset. Returns Dataset A Dataset. take View source take( count ) Creates a Dataset with at most count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset. Returns Dataset A Dataset. unbatch View source unbatch() Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...]. elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch. Returns A Dataset. window View source window( size, shift=None, stride=1, drop_remainder=False ) Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). 
The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example: dataset = tf.data.Dataset.range(7).window(2) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1] [2, 3] [4, 5] [6] dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [2, 3, 4] [4, 5, 6] dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows. nested = ([1, 2, 3, 4], [5, 6, 7, 8]) dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print(tuple(to_numpy(component) for component in window)) ([1, 2], [5, 6]) ([3, 4], [7, 8]) dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) dataset = dataset.window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print({'a': to_numpy(window['a'])}) {'a': [1, 2]} {'a': [3, 4]} Args size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive. shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive. stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size. Returns Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source with_options( options ) Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values. ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.experimental_deterministic = False ds = ds.with_options(options) Args options A tf.data.Options that identifies the options to use. Returns Dataset A Dataset with the given options. Raises ValueError when an option is set more than once to a non-default value. zip View source @staticmethod zip( datasets ) Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects. # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] Args datasets A nested structure of datasets. Returns Dataset A Dataset. __bool__ View source __bool__() __iter__ View source __iter__() Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. Returns An tf.data.Iterator for the elements of this dataset. Raises RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source __len__() Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead. Returns An integer representing the length of the dataset. Raises RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source __nonzero__()
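To round out the reference above, a minimal usage sketch of the class itself; the stream is infinite and pseudorandom, so take bounds it and the exact values will vary:
import tensorflow as tf

# Three pseudorandom scalar int64 values (seeded for repeatability).
dataset = tf.data.experimental.RandomDataset(seed=42).take(3)
for value in dataset:
    print(value)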
tensorflow.data.experimental.randomdataset
tf.data.experimental.Reducer View source on GitHub A reducer is used for reducing a set of elements. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.Reducer tf.data.experimental.Reducer( init_func, reduce_func, finalize_func ) A reducer is represented as a tuple of the three functions: 1) initialization function: key => initial state 2) reduce function: (old state, input) => new state 3) finalization function: state => result Attributes finalize_func init_func reduce_func
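A Reducer is typically paired with tf.data.experimental.group_by_reducer; a minimal sketch (assuming that transformation) that counts how many even and odd numbers a dataset contains:
import tensorflow as tf

# init: key -> 0; reduce: increment the count; finalize: return the count.
reducer = tf.data.experimental.Reducer(
    init_func=lambda key: tf.constant(0, dtype=tf.int64),
    reduce_func=lambda state, element: state + 1,
    finalize_func=lambda state: state)
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(
    tf.data.experimental.group_by_reducer(
        key_func=lambda x: x % 2, reducer=reducer))
print(list(dataset.as_numpy_iterator()))  # e.g. [5, 5]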
tensorflow.data.experimental.reducer
tf.data.experimental.rejection_resample View source on GitHub A transformation that resamples a dataset to achieve a target distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.rejection_resample tf.data.experimental.rejection_resample( class_func, target_dist, initial_dist=None, seed=None ) Note: Resampling is performed via rejection sampling; some fraction of the input values will be dropped. Args class_func A function mapping an element of the input dataset to a scalar tf.int32 tensor. Values should be in [0, num_classes). target_dist A floating point type tensor, shaped [num_classes]. initial_dist (Optional.) A floating point type tensor, shaped [num_classes]. If not provided, the true class distribution is estimated live in a streaming fashion. seed (Optional.) Python integer seed for the resampler. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
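A minimal sketch of rebalancing a skewed dataset toward a uniform target; the input data is illustrative, and in this version the transformed dataset yields (class, example) pairs, so the example is projected back out afterwards:
import tensorflow as tf

# Skewed input: ~90% zeros, ~10% ones, repeated indefinitely.
dataset = tf.data.Dataset.from_tensor_slices([0] * 90 + [1] * 10).repeat()
dataset = dataset.apply(
    tf.data.experimental.rejection_resample(
        class_func=lambda x: tf.cast(x, tf.int32),
        target_dist=[0.5, 0.5],
        seed=1))
# Elements are (class, example) pairs after resampling; keep the example.
dataset = dataset.map(lambda extra_label, example: example)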
tensorflow.data.experimental.rejection_resample
tf.data.experimental.sample_from_datasets View source on GitHub Samples elements at random from the datasets in datasets. tf.data.experimental.sample_from_datasets( datasets, weights=None, seed=None ) Args datasets A list of tf.data.Dataset objects with compatible structure. weights (Optional.) A list of len(datasets) floating-point values where weights[i] represents the probability with which an element should be sampled from datasets[i], or a tf.data.Dataset object where each element is such a list. Defaults to a uniform distribution across datasets. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns A dataset that interleaves elements from datasets at random, according to weights if provided, otherwise with uniform probability. Raises TypeError If the datasets or weights arguments have the wrong type. ValueError If the weights argument is specified and does not match the length of the datasets element.
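A minimal sketch mixing two datasets, drawing roughly three quarters of the elements from the first:
import tensorflow as tf

zeros = tf.data.Dataset.from_tensor_slices(tf.zeros([100], tf.int64))
ones = tf.data.Dataset.from_tensor_slices(tf.ones([100], tf.int64))
mixed = tf.data.experimental.sample_from_datasets(
    [zeros, ones], weights=[0.75, 0.25], seed=1)
print(list(mixed.take(8).as_numpy_iterator()))  # mostly 0s, a few 1s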
tensorflow.data.experimental.sample_from_datasets
tf.data.experimental.save Saves the content of the given dataset. tf.data.experimental.save( dataset, path, compression=None, shard_func=None ) Example usage: import tempfile path = os.path.join(tempfile.gettempdir(), "saved_data") # Save a dataset dataset = tf.data.Dataset.range(2) tf.data.experimental.save(dataset, path) new_dataset = tf.data.experimental.load(path, tf.TensorSpec(shape=(), dtype=tf.int64)) for elem in new_dataset: print(elem) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) The dataset is saved in multiple file "shards". By default, the dataset output is divided into shards in a round-robin fashion but custom sharding can be specified via the shard_func function. For example, you can save the dataset using a single shard as follows: dataset = make_dataset() def custom_shard_func(element): return 0 dataset = tf.data.experimental.save( path="/path/to/data", ..., shard_func=custom_shard_func) Note: The directory layout and file format used for saving the dataset is considered an implementation detail and may change. For this reason, datasets saved through tf.data.experimental.save should only be consumed through tf.data.experimental.load, which is guaranteed to be backwards compatible. Args dataset The dataset to save. path Required. A directory to use for saving the dataset. compression Optional. The algorithm to use to compress data when writing it. Supported options are GZIP and NONE. Defaults to NONE. shard_func Optional. A function to control the mapping of dataset elements to file shards. The function is expected to map elements of the input dataset to int64 shard IDs. If present, the function will be traced and executed as graph computation.
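To make the single-shard example above concrete, here is a runnable sketch under a temporary directory (the shard routing is the only custom piece); it also shows reading the data back with tf.data.experimental.load:
import os
import tempfile
import tensorflow as tf

path = os.path.join(tempfile.gettempdir(), "single_shard_data")
dataset = tf.data.Dataset.range(10)

def custom_shard_func(element):
    # Route every element to shard 0, producing a single shard.
    return tf.constant(0, dtype=tf.int64)

tf.data.experimental.save(dataset, path, shard_func=custom_shard_func)
restored = tf.data.experimental.load(path, dataset.element_spec)
print(list(restored.as_numpy_iterator()))  # [0, 1, ..., 9]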
tensorflow.data.experimental.save
tf.data.experimental.scan View source on GitHub A transformation that scans a function across an input dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.scan tf.data.experimental.scan( initial_state, scan_func ) This transformation is a stateful relative of tf.data.Dataset.map. In addition to mapping scan_func across the elements of the input dataset, scan() accumulates one or more state tensors, whose initial values are initial_state. Args initial_state A nested structure of tensors, representing the initial state of the accumulator. scan_func A function that maps (old_state, input_element) to (new_state, output_element). It must take two arguments and return a pair of nested structures of tensors. The new_state must match the structure of initial_state. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
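A minimal sketch computing a running sum, where the accumulator state and the emitted element are both the running total:
import tensorflow as tf

dataset = tf.data.Dataset.range(1, 6)  # [1, 2, 3, 4, 5]
dataset = dataset.apply(
    tf.data.experimental.scan(
        initial_state=tf.constant(0, dtype=tf.int64),
        scan_func=lambda state, x: (state + x, state + x)))
print(list(dataset.as_numpy_iterator()))  # [1, 3, 6, 10, 15]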
tensorflow.data.experimental.scan
Module: tf.data.experimental.service API for using the tf.data service. This module contains: tf.data server implementations for running the tf.data service. A distribute dataset transformation that moves a dataset's preprocessing to happen in the tf.data service. The tf.data service offers a way to improve training speed when the host attached to a training device can't keep up with the data consumption of the model. For example, suppose a host can generate 100 examples/second, but the model can process 200 examples/second. Training speed could be doubled by using the tf.data service to generate 200 examples/second. Before using the tf.data service There are a few things to do before using the tf.data service to speed up training. Understand processing_mode The tf.data service uses a cluster of workers to prepare data for training your model. The processing_mode argument to tf.data.experimental.service.distribute describes how to leverage multiple workers to process the input dataset. Currently, there are two processing modes to choose from: "distributed_epoch" and "parallel_epochs". "distributed_epoch" means that the dataset will be split across all tf.data service workers. The dispatcher produces "splits" for the dataset and sends them to workers for further processing. For example, if a dataset begins with a list of filenames, the dispatcher will iterate through the filenames and send the filenames to tf.data workers, which will perform the rest of the dataset transformations on those files. "distributed_epoch" is useful when your model needs to see each element of the dataset exactly once, or if it needs to see the data in a generally-sequential order. "distributed_epoch" only works for datasets with splittable sources, such as Dataset.from_tensor_slices, Dataset.list_files, or Dataset.range. "parallel_epochs" means that the entire input dataset will be processed independently by each of the tf.data service workers. For this reason, it is important to shuffle data (e.g. filenames) non-deterministically, so that each worker will process the elements of the dataset in a different order. "parallel_epochs" can be used to distribute datasets that aren't splittable. Measure potential impact Before using the tf.data service, it is useful to first measure the potential performance improvement. To do this, add dataset = dataset.take(1).cache().repeat() at the end of your dataset, and see how it affects your model's step time. take(1).cache().repeat() will cache the first element of your dataset and produce it repeatedly. This should make the dataset very fast, so that the model becomes the bottleneck and you can identify the ideal model speed. With enough workers, the tf.data service should be able to achieve similar speed. Running the tf.data service tf.data servers should be brought up alongside your training jobs, and brought down when the jobs are finished. The tf.data service uses one DispatchServer and any number of WorkerServers. See https://github.com/tensorflow/ecosystem/tree/master/data_service for an example of using Google Kubernetes Engine (GKE) to manage the tf.data service. The server implementation in tf_std_data_server.py is not GKE-specific, and can be used to run the tf.data service in other contexts. Fault tolerance By default, the tf.data dispatch server stores its state in-memory, making it a single point of failure during training. To avoid this, pass fault_tolerant_mode=True when creating your DispatchServer. 
Dispatcher fault tolerance requires work_dir to be configured and accessible from the dispatcher both before and after restart (e.g. a GCS path). With fault-tolerant mode enabled, the dispatcher will journal its state to the work directory so that no state is lost when the dispatcher is restarted. WorkerServers may be freely restarted, added, or removed during training. At startup, workers will register with the dispatcher and begin processing all outstanding jobs from the beginning. Using the tf.data service from your training job Once you have a tf.data service cluster running, take note of the dispatcher IP address and port. To connect to the service, you will use a string in the format "grpc://<dispatcher_address>:<dispatcher_port>". # Create the dataset however you did before using the tf.data service. dataset = your_dataset_factory() service = "grpc://{}:{}".format(dispatcher_address, dispatcher_port) # This will register the dataset with the tf.data service cluster so that # tf.data workers can run the dataset to produce elements. The dataset returned # from applying `distribute` will fetch elements produced by tf.data workers. dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=service)) Below is a toy example that you can run yourself. dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer( tf.data.experimental.service.WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] See the documentation of tf.data.experimental.service.distribute for more details about using the distribute transformation. Classes class DispatchServer: An in-process tf.data service dispatch server. class DispatcherConfig: Configuration class for tf.data service dispatchers. class WorkerConfig: Configuration class for tf.data service workers. class WorkerServer: An in-process tf.data service worker server. Functions distribute(...): A transformation that moves dataset processing to the tf.data service. from_dataset_id(...): Creates a dataset which reads data from the tf.data service. register_dataset(...): Registers a dataset with the tf.data service.
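The following sketch extends the toy example above; the work_dir path is a placeholder, and the register_dataset/from_dataset_id pairing is only one possible way to consume the service. It shows an in-process dispatcher started with fault tolerance enabled, plus a dataset that is registered once and then read back by its id.

import tensorflow as tf

# A sketch, not a production setup: the work_dir value is a placeholder; in
# practice it should be a durable path (e.g. a GCS bucket) that survives
# dispatcher restarts.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir="/tmp/tf_data_dispatcher_state",  # placeholder path
        fault_tolerant_mode=True))

worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

# Alternative to `distribute`: register the dataset once, then consume it by
# dataset id. This can be useful when several consumers share one dataset.
dataset = tf.data.Dataset.range(5)
dataset_id = tf.data.experimental.service.register_dataset(
    dispatcher.target, dataset)
consumed = tf.data.experimental.service.from_dataset_id(
    processing_mode="parallel_epochs",
    service=dispatcher.target,
    dataset_id=dataset_id,
    element_spec=dataset.element_spec)
print(list(consumed.as_numpy_iterator()))  # e.g. [0, 1, 2, 3, 4]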
tensorflow.data.experimental.service