How to use decorators#
Learn about the different ways to use tracing and transpilation functions.
⚠️ If you are running this notebook in Colab, you will have to install Ivy
and some dependencies manually. You can do so by running the cell below ⬇️
If you want to run the notebook locally but don’t have Ivy installed just yet, you can check out the Get Started section of the docs.
[ ]:
!pip install ivy
[1]:
import ivy
import numpy as np
import tensorflow as tf
import torch
ivy.set_backend("tensorflow")
[1]:
<module 'ivy.functional.backends.tensorflow' from '/mnt/d/Work/Jobs/Ivy/Repositories/tracer-transpiler/ivy_repo/ivy/functional/backends/tensorflow/__init__.py'>
Trace#
In the example below, the ivy.trace_graph
function is called as a decorator.
[2]:
@ivy.trace_graph
def normalize(x):
    mean = ivy.mean(x)
    std = ivy.std(x, correction=1)
    return ivy.divide(ivy.subtract(x, mean), std)
[3]:
x = tf.random.uniform((10,))
normalize(x) # tracing happens here
[3]:
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-1.273668 , 1.4903533 , -0.17624412, -1.4014657 , 1.2566849 ,
0.55778235, -0.41806665, -0.49729362, 0.84018195, -0.3782647 ],
dtype=float32)>
Likewise, when used as a decorator, the function can be traced either lazily or eagerly. The example above is lazy, whereas the example below is eager:
[4]:
@ivy.trace_graph(args=(x,))
def normalize(x):
    mean = ivy.mean(x)
    std = ivy.std(x, correction=1)
    return ivy.divide(ivy.subtract(x, mean), std)
[5]:
normalize(x) # already traced
[5]:
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-1.273668 , 1.4903533 , -0.17624412, -1.4014657 , 1.2566849 ,
0.55778235, -0.41806665, -0.49729362, 0.84018195, -0.3782647 ],
dtype=float32)>
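To make the lazy vs. eager distinction concrete, here is a minimal pure-Python sketch of the same decorator pattern. This is illustrative only and is not Ivy's actual implementation: `trace_graph`, `is_traced`, and the dummy `trace` step are stand-ins that merely record whether "tracing" has happened.

```python
def trace_graph(fn=None, args=None):
    """Wrap `fn`, 'tracing' it on first call (lazy) or at
    decoration time when example `args` are supplied (eager)."""
    def wrap(f):
        state = {"traced": False}

        def trace(call_args):
            # A real tracer would record the graph of ops here;
            # this sketch just flips a flag.
            state["traced"] = True

        def wrapper(*call_args):
            if not state["traced"]:
                trace(call_args)  # lazy: trace on the first call
            return f(*call_args)

        if args is not None:
            trace(args)  # eager: trace immediately with example args
        wrapper.is_traced = lambda: state["traced"]
        return wrapper

    if fn is not None:
        return wrap(fn)  # used bare: @trace_graph  (lazy)
    return wrap          # used with args: @trace_graph(args=...)  (eager)

@trace_graph
def lazy_fn(x):
    return x * 2

@trace_graph(args=(3,))
def eager_fn(x):
    return x * 2

print(lazy_fn.is_traced())   # False: nothing traced until the first call
lazy_fn(3)
print(lazy_fn.is_traced())   # True: traced during that first call
print(eager_fn.is_traced())  # True: traced at decoration time
```

The bare `@trace_graph` form mirrors the lazy example above, while `@trace_graph(args=...)` mirrors the eager one.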
Transpile 🚧#
In the future, it will be possible to use ivy.transpile
as a decorator, like so:
[ ]:
@ivy.transpile(source="torch", target="tensorflow")
def normalize(x):
    mean = torch.mean(x)
    std = torch.std(x)
    return torch.div(torch.sub(x, mean), std)

# use normalize as a tensorflow function
normalize(tf.random.uniform((10,)))
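The parameterized form `@ivy.transpile(source=..., target=...)` follows the standard Python decorator-factory pattern: calling `transpile(...)` returns the decorator that is then applied to the function. Here is a minimal pure-Python sketch of that pattern (illustrative only, not Ivy's implementation; the `source`/`target` attributes are stand-ins for the real conversion step):

```python
import functools

def transpile(source, target):
    """Return a decorator parameterized by `source` and `target`.
    A real transpiler would rewrite `fn`'s body; this sketch
    only records the metadata and passes calls through."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.source = source
        wrapper.target = target
        return wrapper
    return decorator

@transpile(source="torch", target="tensorflow")
def normalize(x):
    return x

print(normalize.source, normalize.target)  # torch tensorflow
```

The outer call fixes the conversion parameters once, so the same factory can decorate any number of functions with different source/target pairs.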
Round Up#
That’s it, now you are equipped with the basics needed to power up your projects with ivy! Next up, let’s take a look at some practical examples of how to use ivy in your projects.