Device#
Collection of device Ivy functions.
- ivy.as_ivy_dev(device, /)[source]#
Convert device to string representation.
- Parameters:
  device (Union[Device, str]) – The device handle to convert to string.
- Return type:
  Device
- Returns:
  ret – Device string, e.g. 'cuda:0'.
Examples
>>> y = ivy.as_ivy_dev('cpu')
>>> print(y)
cpu
- ivy.as_native_dev(device, /)[source]#
Convert device string representation to native device type.
- Parameters:
  device (Union[Device, NativeDevice]) – The device string to convert to native device handle. A native device handle can be passed in instead – in this case the unmodified parameter is returned.
- Return type:
  NativeDevice
- Returns:
  ret – Native device handle.
Examples
With ivy.Device input:
>>> ivy.set_backend("numpy")
>>> ivy.as_native_dev("cpu")
'cpu'

>>> ivy.set_backend("tensorflow")
>>> ivy.as_native_dev("tpu:3")
'/TPU:3'
With ivy.NativeDevice input:
>>> import torch
>>> device = torch.device("cuda")
>>> device
device(type='cuda')

>>> ivy.as_native_dev(device)
device(type='cuda')
- ivy.clear_cached_mem_on_dev(device, /)[source]#
Clear memory cache on target device.
- Parameters:
  device (Union[Device, NativeDevice]) – The device string or native device handle on which to clear the memory cache.
- Return type:
  None
Examples
>>> import torch
>>> ivy.set_backend("torch")
>>> device = torch.device("cuda")
>>> ivy.clear_cached_mem_on_dev(device)
- ivy.default_device(device=None, /, *, item=None, as_native=None)[source]#
Return the input device or the default device. If the as_native flag is set, the device is converted to a native device. If item is provided, the item's device is returned. If no device is provided, the last default device is returned. If a default device has not been set, the first GPU is returned if available; otherwise, the CPU is returned.
- Parameters:
  device (Optional[Union[Device, NativeDevice]], default: None) – The device to be returned or converted.
  item (Optional[Union[list, tuple, dict, Array, NativeArray]], default: None) – The item to get the device from.
  as_native (Optional[bool], default: None) – Whether to convert the device to a native device.
- Return type:
  Union[Device, NativeDevice]
- Returns:
  ret – Device handle or string.
Examples
>>> ivy.default_device()
device(type='cpu')

>>> ivy.default_device("gpu:0")
'gpu:0'

>>> ivy.default_device(item=[], as_native=False)
'cpu'

>>> ivy.default_device(item=(), as_native=True)
device(type='cpu')

>>> ivy.default_device(item={"a": 1}, as_native=True)
device(type='cpu')

>>> x = ivy.array([1., 2., 3.])
>>> x = ivy.to_device(x, 'gpu:0')
>>> ivy.default_device(item=x, as_native=True)
device(type='gpu', id=0)
- ivy.dev(x, /, *, as_native=False)[source]#
Get the native device handle for input array x.
- Parameters:
  x (Union[Array, NativeArray]) – array for which to get the device handle.
  as_native (bool, default: False) – Whether or not to return the dev in native format. Default is False.
- Return type:
  Union[Device, NativeDevice]
- Returns:
  ret – Device handle for the array.
Examples
With ivy.Array input:
>>> x = ivy.array([3, 1, 4, 5])
>>> y = ivy.dev(x)
>>> print(y)
cpu
With ivy.NativeArray input:
>>> x = ivy.native_array([[2, 5, 4], [3, 1, 5]])
>>> y = ivy.dev(x, as_native=True)
>>> print(y)
cpu
- ivy.dev_util(device, /)[source]#
Get the current utilization (%) for a given device.
- Parameters:
  device (Union[Device, NativeDevice]) – The device string of the device to query utilization for.
- Return type:
  float
- Returns:
  ret – The device utilization (%).
Examples
>>> ivy.dev_util('cpu')
13.4
>>> ivy.dev_util('gpu:0')
7.8
>>> ivy.dev_util('cpu')
93.4
>>> ivy.dev_util('gpu:2')
57.4
>>> ivy.dev_util('cpu')
84.2
- ivy.function_supported_devices(fn, recurse=True)[source]#
Return the supported devices of the current backend’s function. The function returns a dict containing the supported devices for the compositional and primary implementations in case of partial mixed functions.
- Parameters:
  fn (Callable) – The function to check for the supported device attribute.
  recurse (bool, default: True) – Whether to recurse into used ivy functions. Default is True.
- Return type:
  Union[Tuple, dict]
- Returns:
  ret – Tuple or dict containing the supported devices of the function.
Examples
>>> import ivy
>>> ivy.set_backend('numpy')
>>> print(ivy.function_supported_devices(ivy.ones))
('cpu',)

>>> ivy.set_backend('torch')
>>> x = ivy.function_supported_devices(ivy.ones)
>>> print(sorted(x))
['cpu', 'gpu']
- ivy.function_unsupported_devices(fn, recurse=True)[source]#
Return the unsupported devices of the current backend’s function. The function returns a dict containing the unsupported devices for the compositional and primary implementations in case of partial mixed functions.
- Parameters:
  fn (Callable) – The function to check for the unsupported device attribute.
  recurse (bool, default: True) – Whether to recurse into used ivy functions. Default is True.
- Return type:
  Union[Tuple, dict]
- Returns:
  ret – Tuple or dict containing the unsupported devices of the function.
Examples
>>> print(ivy.function_unsupported_devices(ivy.ones))
('tpu',)
- ivy.get_all_ivy_arrays_on_dev(device, /)[source]#
Get all ivy arrays which are currently alive on the specified device.
- Parameters:
  device (Union[Device, NativeDevice]) – The device handle from which to get the arrays.
- Return type:
  Container
- Returns:
  ret – Container with the arrays found for the specified device [identity, array].
Examples
>>> x = ivy.array([1,0,2])
>>> y = ivy.dev(x)
>>> z = ivy.get_all_ivy_arrays_on_dev(y)
>>> print(z)
{139740789224448:ivy.array([1,0,2])}
- ivy.gpu_is_available()[source]#
Determine whether a GPU is available to use with the backend framework.
- Return type:
  bool
- Returns:
  ret – Boolean, as to whether a GPU is available.
Examples
>>> print(ivy.gpu_is_available())
False
- ivy.num_cpu_cores(*, logical=True)[source]#
Determine the number of cores available in the CPU.
- Parameters:
  logical (bool, default: True) – Whether the request is for the number of logical or physical cores available in the CPU.
- Return type:
  int
- Returns:
  ret – Number of cores available in the CPU.
Examples
>>> print(ivy.num_cpu_cores(logical=False))
2
- ivy.num_gpus()[source]#
Determine the number of available GPUs with the backend framework.
- Return type:
  int
- Returns:
  ret – Number of available GPUs.
Examples
>>> print(ivy.num_gpus())
1
- ivy.num_ivy_arrays_on_dev(device, /)[source]#
Return the number of arrays which are currently alive on the specified device.
- Parameters:
  device (Union[Device, NativeDevice]) – The device handle from which to count the arrays.
- Return type:
  int
- Returns:
  ret – Number of arrays on the specified device.
Examples
>>> x1 = ivy.array([-1, 0, 5.2])
>>> x2 = ivy.array([-1, 0, 5.2, 4, 5])
>>> y = ivy.num_ivy_arrays_on_dev(ivy.default_device())
>>> print(y)
2

>>> x1 = ivy.native_array([-1, 0, 5.2])
>>> y = ivy.num_ivy_arrays_on_dev(ivy.default_device())
>>> print(y)
0

>>> x = ivy.Container(x1=ivy.array([-1]),
...                   x2=ivy.native_array([-1]))
>>> y = ivy.num_ivy_arrays_on_dev(ivy.default_device())
>>> print(y)
1
- ivy.percent_used_mem_on_dev(device, /, *, process_specific=False)[source]#
Get the percentage used memory for a given device string. In case of CPU, the used RAM is returned.
- Parameters:
  device (Union[Device, NativeDevice]) – The device string to convert to native device handle.
  process_specific (bool, default: False) – Whether to check the memory used by this Python process alone. Default is False.
- Return type:
  float
- Returns:
  ret – The percentage used memory on the device.
Examples
>>> x = ivy.percent_used_mem_on_dev("cpu", process_specific=False)
>>> print(x)
94.036902561555

>>> x = ivy.percent_used_mem_on_dev("cpu", process_specific=True)
>>> print(x)
0.7024003467681645

>>> x = ivy.as_native_dev("gpu:0")
>>> y = ivy.percent_used_mem_on_dev(x, process_specific=False)
>>> print(y)
0.7095597456708771
- ivy.print_all_ivy_arrays_on_dev(*, device=None, attr_only=True)[source]#
Print the shape and dtype for all ivy arrays which are currently alive on the specified device.
- Parameters:
  device (Optional[Union[Device, NativeDevice]], default: None) – The device on which to print the arrays.
  attr_only (bool, default: True) – Whether or not to only print the shape and dtype attributes of the array.
- Return type:
  None
Examples
>>> x = ivy.array([[1,0,2], [3,2,1]])
>>> y = ivy.dev(x)
>>> ivy.print_all_ivy_arrays_on_dev(y)
((3,), 'int32')
((3,), 'int32')

>>> x = ivy.array([[1,0,2], [3,2,1]])
>>> y = ivy.dev(x)
>>> ivy.print_all_ivy_arrays_on_dev(y, attr_only=False)
[1,0,2]
[3,2,1]
- ivy.set_default_device(device, /)[source]#
Set the default device to the argument provided in the function.
- Parameters:
  device (Union[Device, NativeDevice]) – The device to be set as the default device.
- Return type:
  None
Examples
>>> ivy.default_device()
'cpu'

>>> ivy.set_backend('jax')
>>> ivy.set_default_device('gpu:0')
>>> ivy.default_device()
'gpu:0'

>>> ivy.set_backend('torch')
>>> ivy.set_default_device('gpu:1')
>>> ivy.default_device()
'gpu:1'

>>> ivy.set_backend('tensorflow')
>>> ivy.set_default_device('tpu:0')
>>> ivy.default_device()
'tpu:0'

>>> ivy.set_backend('paddle')
>>> ivy.set_default_device('cpu')
>>> ivy.default_device()
'cpu'

>>> ivy.set_backend('mxnet')
>>> ivy.set_default_device('cpu')
>>> ivy.default_device()
'cpu'
- ivy.set_soft_device_mode(mode)[source]#
Set the mode of whether to move input arrays to ivy.default_device() before performing an operation.
- Parameters:
  mode (bool) – Whether to move input arrays to ivy.default_device() before performing an operation.
- Return type:
  None
Examples
>>> ivy.set_soft_device_mode(False)
>>> ivy.soft_device_mode
False
>>> ivy.set_soft_device_mode(True)
>>> ivy.soft_device_mode
True
- ivy.set_split_factor(factor, /, *, device=None)[source]#
Set the global split factor for a given device, which can be used to scale batch splitting chunk sizes for the device across the codebase.
- Parameters:
  factor (float) – The factor to set the device-specific split factor to.
  device (Optional[Union[Device, NativeDevice]], default: None) – The device to set the split factor for. Uses the default device by default.
- Return type:
  None
Examples
>>> print(ivy.default_device())
cpu
>>> ivy.set_split_factor(0.5)
>>> print(ivy.split_factors)
{'cpu': 0.5}

>>> import torch
>>> ivy.set_backend("torch")
>>> device = torch.device("cuda")
>>> ivy.set_split_factor(0.3, device=device)
>>> print(ivy.split_factors)
{device(type='cuda'): 0.3}

>>> ivy.set_split_factor(0.4, device="tpu")
>>> print(ivy.split_factors)
{'tpu': 0.4}

>>> import torch
>>> ivy.set_backend("torch")
>>> device = torch.device("cuda")
>>> ivy.set_split_factor(0.2)
>>> ivy.set_split_factor(0.3, device='gpu')
>>> print(ivy.split_factors)
{'cpu': 0.2, 'gpu': 0.3}
- ivy.split_factor(device=None, /)[source]#
Get a device’s global split factor, which can be used to scale the device’s batch splitting chunk sizes across the codebase.
- If the global split factor is set for a given device, returns the split factor value for the device from the split factors dictionary.
- If the global split factor for a device is not configured, returns the default value, which is 0.0.
- Parameters:
  device (Optional[Union[Device, NativeDevice]], default: None) – The device to query the split factor for. Uses the default device by default.
- Return type:
  float
- Returns:
  ret – The split factor for the specified device.
Examples
>>> x = ivy.split_factor()
>>> print(x)
0.0

>>> y = ivy.split_factor("gpu:0")
>>> print(y)
0.0
- ivy.split_func_call(func, inputs, mode, /, *, max_chunk_size=None, chunk_size=None, input_axes=0, output_axes=None, stop_gradients=False, device=None)[source]#
Call a function by splitting its inputs along a given axis, and calling the function in chunks, rather than feeding the entire input array at once. This can be useful to reduce memory usage of the device the arrays are on.
- Parameters:
  func (Callable) – The function to be called.
  inputs (Union[Array, NativeArray]) – A list of inputs to pass into the function.
  mode (str) – The mode by which to unify the return values, must be one of [ concat | mean | sum ].
  max_chunk_size (Optional[int], default: None) – The maximum size of each of the chunks to be fed into the function.
  chunk_size (Optional[int], default: None) – The size of each of the chunks to be fed into the function. Specifying this arg overwrites the global split factor. Default is None.
  input_axes (Union[int, Iterable[int]], default: 0) – The axes along which to split each of the inputs, before passing to the function. Default is 0.
  output_axes (Optional[Union[int, Iterable[int]]], default: None) – The axes along which to concat each of the returned outputs. Default is the same as the first input axis.
  stop_gradients (bool, default: False) – Whether to stop the gradients for each computed return. Default is False.
  device (Optional[Union[Device, NativeDevice]], default: None) – The device to set the split factor for. Uses the default device by default.
- Return type:
  Union[Array, NativeArray]
- Returns:
  ret – The return from the function, following input splitting and re-concatenation.
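Examples
A minimal sketch, not from the upstream docs: the call pattern below is an assumption based on the signature and parameter descriptions above, using the documented 'concat' mode to reassemble the chunked outputs.
>>> x = ivy.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
>>> double = lambda t: t * 2  # applied to each chunk independently
>>> y = ivy.split_func_call(double, [x], 'concat', max_chunk_size=2)
>>> # chunks of at most two rows are processed separately and their
>>> # outputs concatenated along axis 0, so y should equal double(x)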
- ivy.to_device(x, device, /, *, stream=None, out=None)[source]#
Move the input array x to the desired device, specified by device string.
- Parameters:
  x (Union[Array, NativeArray]) – input array to be moved to the desired device.
  device (Union[Device, NativeDevice]) – device to move the input array x to.
  stream (Optional[Union[int, Any]], default: None) – stream object to use during copy. In addition to the types supported in array.__dlpack__(), implementations may choose to support any library-specific stream object with the caveat that any code using such an object would not be portable.
  out (Optional[Array], default: None) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
  Array
- Returns:
  ret – input array x placed on the desired device.
Examples
>>> x = ivy.array([1., 2., 3.])
>>> x = ivy.to_device(x, 'cpu')
>>> print(x.device)
cpu
- ivy.total_mem_on_dev(device, /)[source]#
Get the total amount of memory (in GB) for a given device string. In case of CPU, the total RAM is returned.
- Parameters:
  device (Union[Device, NativeDevice]) – The device string to convert to native device handle.
- Return type:
  float
- Returns:
  ret – The total memory on the device in GB.
Examples
>>> x = ivy.total_mem_on_dev("cpu")
>>> print(x)
53.66700032

>>> x = ivy.total_mem_on_dev("gpu:0")
>>> print(x)
8.589934592
- ivy.tpu_is_available()[source]#
Determine whether a TPU is available to use with the backend framework.
- Return type:
  bool
- Returns:
  ret – Boolean, as to whether a TPU is available.
Examples
>>> ivy.set_backend("torch") >>> print(ivy.tpu_is_available()) False
- ivy.unset_default_device()[source]#
Reset the default device to “cpu”.
- Return type:
  None
Examples
>>> ivy.set_default_device("gpu:0") >>> ivy.default_device() "gpu:0" >>> ivy.unset_default_device() >>> ivy.default_device() "cpu"
- ivy.unset_soft_device_mode()[source]#
Reset the mode of moving input arrays to ivy.default_device() before performing an operation.
- Return type:
  None
Examples
>>> ivy.set_soft_device_mode(False)
>>> ivy.soft_device_mode
False
>>> ivy.unset_soft_device_mode()
>>> ivy.soft_device_mode
True
- ivy.used_mem_on_dev(device, /, *, process_specific=False)[source]#
Get the used memory (in GB) for a given device string. In case of CPU, the used RAM is returned.
- Parameters:
  device (Union[Device, NativeDevice]) – The device string to convert to native device handle.
  process_specific (bool, default: False) – Whether to check the memory used by this Python process alone. Default is False.
- Return type:
  float
- Returns:
  ret – The used memory on the device in GB.
Examples
>>> x = ivy.used_mem_on_dev("cpu", process_specific=False)
>>> print(x)
6.219563008

>>> x = ivy.used_mem_on_dev("cpu", process_specific=True)
>>> print(x)
0.902400346

>>> y = ivy.used_mem_on_dev("gpu:0", process_specific=False)
>>> print(y)
0.525205504
- class ivy.Profiler(save_dir)[source]#
The profiler class is used to profile the execution of some code.
- Parameters:
  save_dir (str) – The directory to save the profile data to.
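Examples
A minimal usage sketch, not from the upstream docs: it assumes the profiler is used as a context manager that starts profiling on entry and writes its results to save_dir on exit.
>>> ivy.set_backend("torch")
>>> with ivy.Profiler("./profiler_logs"):
...     x = ivy.array([1., 2., 3.])
...     y = x * 2  # operations here are profiled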
This should hopefully have given you an overview of the device submodule. If you have any questions, please feel free to reach out on our Discord!