Devices#

The devices currently supported by Ivy are as follows:

  • cpu

  • gpu:idx

  • tpu:idx

In a similar manner to the ivy.Dtype and ivy.NativeDtype classes (see Data Types), there is both an ivy.Device class and an ivy.NativeDevice class, with ivy.NativeDevice initially set as an empty class. The ivy.Device class derives from str, and has simple logic in the constructor to verify that the string formatting is correct. When a backend is set, the ivy.NativeDevice is replaced with the backend-specific device class.
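To make the string check concrete, here is a minimal sketch of what a str-derived device class with constructor validation might look like. This is an illustrative stand-in, not Ivy's actual implementation:

```python
class Device(str):
    """Illustrative stand-in for a str-derived device class (not Ivy's
    actual implementation). Accepts "cpu", "gpu:idx" or "tpu:idx"."""

    def __new__(cls, dev_str):
        if dev_str != "cpu":
            kind, sep, idx = dev_str.partition(":")
            # require "gpu" or "tpu", a colon, and a numeric index
            if kind not in ("gpu", "tpu") or not sep or not idx.isdigit():
                raise ValueError(f"invalid device string: {dev_str!r}")
        return super().__new__(cls, dev_str)
```

Because the class derives from str, instances still behave like plain strings, so they can be passed anywhere a device string is expected.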

Device Module#

The device.py module provides a variety of functions for working with devices. A few examples include ivy.get_all_ivy_arrays_on_dev() which gets all arrays which are currently alive on the specified device, ivy.dev() which gets the device of the input array, and ivy.num_gpus() which determines the number of available GPUs for use with the backend framework.

Many functions in the device.py module are convenience functions, which means that they do not directly modify arrays, as explained in the Function Types section.

For example, the following are all convenience functions: ivy.total_mem_on_dev, which gets the total amount of memory for a given device, ivy.dev_util, which gets the current utilization (%) for a given device, ivy.num_cpu_cores, which determines the number of cores available in the CPU, and ivy.default_device, which returns the correct device to use.

ivy.default_device is arguably the most important function. Any function in the functional API that receives a device argument will make use of this function, as explained below.

Arguments in other Functions#

Like with dtype, all device arguments are also keyword-only. All creation functions include the device argument, for specifying the device on which to place the created array. Some other functions outside of the creation.py submodule also support the device argument, such as ivy.random_uniform() which is located in random.py, but this is simply because of dual categorization. ivy.random_uniform() is also essentially a creation function, despite not being located in creation.py.

The device argument is generally not included for functions which accept arrays in the input and perform operations on these arrays. In such cases, the device of the output arrays is the same as the device for the input arrays. In cases where the input arrays are located on different devices, an error will generally be thrown, unless the function is specific to distributed training.

The device argument is handled in infer_device for all functions which have the @infer_device decorator, similar to how dtype is handled. This function calls ivy.default_device in order to determine the correct device. As discussed in the Function Wrapping section, this is applied to all applicable functions dynamically during backend setting.

Overall, ivy.default_device infers the device as follows:

  1. if the device argument is provided, use this directly

  2. otherwise, if an array is present in the arguments (very rare when the device argument is unspecified), set arr to this array. This will then be used to infer the device by calling ivy.dev() on the array

  3. otherwise, if no arrays are present in the arguments (by far the most common case when the device argument is unspecified), then use the global default device, which currently can either be cpu, gpu:idx or tpu:idx. The default device is settable via ivy.set_default_device().
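The three steps above can be sketched as follows. This is a simplified stand-in for ivy.default_device with hypothetical parameter names; the real function also handles native-type conversion and a settable device stack:

```python
def default_device(device=None, item=None, global_default="cpu"):
    """Sketch of the inference priority described above; not Ivy's
    actual implementation."""
    # 1. if the device argument is provided, use it directly
    if device is not None:
        return device
    # 2. otherwise, if an array-like item is present, infer its device
    #    (stand-in for calling ivy.dev() on the array)
    if item is not None and hasattr(item, "device"):
        return item.device
    # 3. otherwise, fall back to the global default device
    return global_default
```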

For the majority of functions which defer to infer_device for handling the device, these steps will have been followed and the device argument will be populated with the correct value before the backend-specific implementation is even entered. Therefore, whereas the device argument is listed as optional in the ivy API at ivy/functional/ivy/category_name.py, the argument is listed as required in the backend-specific implementations at ivy/functional/backends/backend_name/category_name.py.

This is exactly the same as with the dtype argument, as explained in the Data Types section.

Let’s take a look at the function ivy.zeros() as an example.

The implementation in ivy/functional/ivy/creation.py has the following signature:

@outputs_to_ivy_arrays
@handle_out_argument
@infer_dtype
@infer_device
def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
    device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None,
) -> ivy.Array:

Whereas the backend-specific implementations in ivy/functional/backends/backend_name/creation.py all list device as required.

Jax:

def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: jnp.dtype,
    device: jaxlib.xla_extension.Device,
) -> JaxArray:

NumPy:

def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: np.dtype,
    device: str,
) -> np.ndarray:

TensorFlow:

def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: tf.DType,
    device: str,
) -> Tensor:

PyTorch:

def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: torch.dtype,
    device: torch.device,
) -> Tensor:

This makes it clear that these backend-specific functions are entered only once the correct device has been determined.

However, the device argument for functions without the @infer_device decorator is not handled by infer_device, and so these defaults must be handled by the backend-specific implementations themselves, by calling ivy.default_device() internally.
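As a rough sketch, a decorator with this behaviour could look as follows. This is illustrative only; Ivy's actual @infer_device also inspects array arguments before falling back to the default device:

```python
import functools

def infer_device(fn):
    """Populate a missing `device` keyword before the wrapped
    (backend-specific) function is entered. Illustrative sketch."""
    @functools.wraps(fn)
    def wrapper(*args, device=None, **kwargs):
        if device is None:
            device = "cpu"  # stand-in for ivy.default_device(...)
        return fn(*args, device=device, **kwargs)
    return wrapper

@infer_device
def zeros(shape, *, device):
    # the backend implementation can assume `device` is always populated
    return {"shape": shape, "device": device}
```

The wrapped function is a toy stand-in for a backend implementation; the point is that `device` is already resolved by the time it runs.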

Device handling#

Different frameworks handle devices differently while performing an operation. For example, torch expects all the tensors involved in an operation to be on the same device, and throws a device exception otherwise. TensorFlow, on the other hand, moves all the tensors to the same device before performing an operation.

Controlling Device Handling Behaviour

In Ivy, users can control the device on which an operation is executed using the ivy.set_soft_device_mode() function. There are two cases: the soft device mode is set to either True or False.

When ivy.set_soft_device_mode(True):

All the input arrays are moved to ivy.default_device() while performing the operation. If an array is already on the default device, no device shifting is done.

In the example below, even though the input arrays x and y are created on different devices (‘cpu’ and ‘gpu:0’), the arrays are moved to ivy.default_device() while performing the ivy.add operation, and the output array will be on this device.

ivy.set_backend("torch")
ivy.set_soft_device_mode(True)
x = ivy.array([1], device="cpu")
y = ivy.array([34], device="gpu:0")
ivy.add(x, y)

The priority of device shifting is the following in this mode:

  1. The device argument.

  2. The device the arrays are on.

  3. The default device.
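This priority can be sketched as a small helper. The function name and parameters here are hypothetical, purely for illustration of the rule; this is not Ivy's internal code:

```python
def resolve_soft_target(device=None, array_devices=(), default="cpu"):
    """Sketch of the shifting priority in soft device mode."""
    # 1. an explicit device argument wins
    if device is not None:
        return device
    # 2. otherwise, if all input arrays agree on a device, use it
    unique = set(array_devices)
    if len(unique) == 1:
        return unique.pop()
    # 3. otherwise, fall back to the default device
    return default
```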

When ivy.set_soft_device_mode(False):

  1. If any of the input arrays are on a different device, a device exception is raised.

In the example below, since the input arrays are on different devices (‘cpu’ and ‘gpu:0’), an IvyBackendException is raised while performing ivy.add.

ivy.set_backend("torch")
ivy.set_soft_device_mode(False)
x = ivy.array([1], device="cpu")
y = ivy.array([34], device="gpu:0")
ivy.add(x, y)

This is the exception you will get while running the code above:

IvyBackendException: torch: add:   File "/content/ivy/ivy/utils/exceptions.py", line 210, in _handle_exceptions
    return fn(*args, **kwargs)
File "/content/ivy/ivy/func_wrapper.py", line 1013, in _handle_nestable
    return fn(*args, **kwargs)
File "/content/ivy/ivy/func_wrapper.py", line 905, in _handle_out_argument
    return fn(*args, out=out, **kwargs)
File "/content/ivy/ivy/func_wrapper.py", line 441, in _inputs_to_native_arrays
    return fn(*new_args, **new_kwargs)
File "/content/ivy/ivy/func_wrapper.py", line 547, in _outputs_to_ivy_arrays
    ret = fn(*args, **kwargs)
File "/content/ivy/ivy/func_wrapper.py", line 358, in _handle_array_function
    return fn(*args, **kwargs)
File "/content/ivy/ivy/func_wrapper.py", line 863, in _handle_device_shifting
    raise ivy.utils.exceptions.IvyException(
During the handling of the above exception, another exception occurred:
Expected all input arrays to be on the same device, but found at least two devices - ('cpu', 'gpu:0'),
set `ivy.set_soft_device_mode(True)` to handle this problem.

  2. If all the input arrays are on the same device, the operation is executed without raising any device exceptions.

The example below runs without issues since both input arrays are on the ‘gpu:0’ device:

ivy.set_backend("torch")
ivy.set_soft_device_mode(False)
x = ivy.array([1], device="gpu:0")
y = ivy.array([34], device="gpu:0")
ivy.add(x, y)
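The check that produces the exception shown earlier can be sketched as follows. This is an illustrative stand-in for the logic inside the device-shifting wrapper, not Ivy's actual code:

```python
def assert_same_device(devices):
    """Raise if the input arrays live on more than one device
    (stand-in for the check performed when soft device mode is off)."""
    unique = set(devices)
    if len(unique) > 1:
        raise ValueError(
            "Expected all input arrays to be on the same device, "
            f"but found at least two devices - {tuple(sorted(unique))}"
        )
```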

The code that handles all these cases lives inside the @handle_device_shifting decorator, which wraps all functions in the ivy.functional.ivy submodule that accept at least one array as input (except mixed and compositional functions). Under the hood, the decorator calls the ivy.handle_soft_device_variable function to handle device shifting for each backend.

The priority of device shifting is the following in this mode:

  1. The device argument.

  2. default_device

Soft Device Handling Function

The ivy.handle_soft_device_variable function plays a crucial role in the handle_device_shifting decorator. Its purpose is to ensure that the function fn passed to it is executed on the device passed in the device_shifting_dev argument. If None is passed, the function is executed on the default device.

Most of the backend implementations are very similar: first they move all the arrays to the desired device using ivy.nested_map, and then they execute the function inside the device handling context manager of that native framework. Executing the function inside the context manager handles functions that do not accept any arrays; in that case, the context manager is the only way to tell the native framework on which device the function should be executed. There are two exceptions to this approach: tensorflow, where the tensors do not need to be moved explicitly because its context manager alone is enough (it moves all the tensors itself internally), and numpy, since it only accepts cpu as a device.
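The pattern described above can be sketched with stand-ins: a hypothetical device_context in place of the native framework's context manager, and plain dicts in place of arrays. This is not Ivy's actual code, just the shape of the approach:

```python
from contextlib import contextmanager

@contextmanager
def device_context(device):
    # stand-in for the native framework's device scoping,
    # e.g. `with tf.device(...)` in tensorflow
    yield device

def handle_soft_device_variable(fn, *arrays, device_shifting_dev=None,
                                default="cpu"):
    """Illustrative sketch: move inputs to the target device, then run
    `fn` inside the device context manager."""
    target = device_shifting_dev or default
    # move all arrays to the target device (stand-in for ivy.nested_map)
    moved = [dict(a, device=target) for a in arrays]
    # run inside the context manager so that even functions taking no
    # arrays are placed on the right device
    with device_context(target):
        return fn(*moved)
```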

Forcing Operations on User Specified Device

The ivy.DefaultDevice context manager can be used to force operations to be performed on a specific device. For example, in the code below, both x and y will be moved from the ‘gpu:0’ device to the ‘cpu’ device, and the ivy.add operation will be performed on the ‘cpu’ device:

x = ivy.array([1], device="gpu:0")
y = ivy.array([34], device="gpu:0")
with ivy.DefaultDevice("cpu"):
    z = ivy.add(x, y)

On entering the ivy.DefaultDevice("cpu") context manager, under the hood, the default device is set to ‘cpu’ and the soft device mode is turned on. All of this happens in the __enter__ method of the context manager, so from then on, all operations will be executed on the ‘cpu’ device.

On exiting the context manager (the __exit__ method), the default device and the soft device mode are reset to their previous state using ivy.unset_default_device() and ivy.unset_soft_device_mode() respectively.
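The enter/exit behaviour can be sketched as follows. This is illustrative only; the class-level stacks stand in for Ivy's global default-device and soft-device-mode state:

```python
class DefaultDevice:
    """Illustrative sketch of the context manager's behaviour, not
    Ivy's implementation."""

    device_stack = ["cpu"]     # stand-in for the default-device stack
    soft_mode_stack = [False]  # stand-in for the soft-device-mode flag

    def __init__(self, device):
        self._device = device

    def __enter__(self):
        # set the default device and turn soft device mode on
        self.device_stack.append(self._device)
        self.soft_mode_stack.append(True)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # restore the previous default device and soft device mode
        self.device_stack.pop()
        self.soft_mode_stack.pop()
        return False
```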

There are some functions (mostly creation functions) which accept a device argument, for specifying the device on which the function is executed and the device of the returned array. handle_device_shifting deals with this argument by first checking if it exists and, if so, setting device_shifting_dev to it, which is then passed to the handle_soft_device_variable function depending on the soft_device mode.

Round Up

This should have hopefully given you a good feel for devices, and how these are handled in Ivy.

If you have any questions, please feel free to reach out on discord in the devices thread!