Lazy vs Eager#

Understand the difference between eager and lazy graph tracing and transpilation.

⚠️ If you are running this notebook in Colab, you will have to install Ivy and some dependencies manually. You can do so by running the cell below ⬇️

If you want to run the notebook locally but don’t have Ivy installed just yet, you can check out the Get Started section of the docs.

[1]:
!pip install ivy

ivy.trace_graph can be performed either eagerly or lazily, since it relies on function tracing, which needs concrete function arguments to trace with. ivy.transpile, on the other hand, doesn’t depend on function tracing, so it is applied eagerly without requiring function arguments. The one exception is when transpiling an entire module such as kornia, in which case ivy.transpile defaults to lazy transpilation: the transpilation is only performed once a function or class from the lazily transpiled kornia module is actually executed.

For ivy.trace_graph, the arguments can be provided in the ivy.trace_graph call itself, in which case tracing is performed eagerly; otherwise it is deferred until the traced function is first called. We show some simple examples of each case below.

Trace#

In the example below, the function is traced lazily, which means the first function call will execute slowly, as this is when the tracing process actually occurs.

[1]:
import ivy

def normalize(x):
    mean = ivy.mean(x)
    std = ivy.std(x)
    return ivy.divide(ivy.subtract(x, mean), std)

And let’s also create a dummy NumPy array to test with, as before:

[2]:
# import NumPy
import numpy as np
np.random.seed(0)

# create random numpy array for testing
x = np.random.uniform(size=10)

Let’s assume that our target framework is TensorFlow:

[3]:
import tensorflow as tf
ivy.set_backend("tensorflow")

x = tf.constant(x)
[4]:
norm_trace = ivy.trace_graph(normalize)  # lazy tracing: no args are provided, so this returns quickly
[5]:
norm_trace(x) # slow: this first call triggers the actual tracing
[5]:
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.36293708,  0.53895184, -0.07048605, -0.38424253, -1.04139636,
        0.16331669, -0.96587166,  1.49617496,  1.88587438, -1.25938418])>
[6]:
norm_trace(x) # fast, traced on previous call
[6]:
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.36293708,  0.53895184, -0.07048605, -0.38424253, -1.04139636,
        0.16331669, -0.96587166,  1.49617496,  1.88587438, -1.25938418])>
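
As a quick sanity check, the traced graph should produce the same values as the original (un-traced) ivy function. A minimal sketch, assuming x and normalize are still defined as above:

import numpy as np

# the traced graph should be numerically identical to the original ivy function
print(np.allclose(np.asarray(norm_trace(x)), np.asarray(normalize(x))))  # expected: True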

However, in the following example the tracing occurs eagerly, and both function calls will be fast:

[7]:
norm_tracing = ivy.trace_graph(normalize, args=(x,))  # eager tracing: args are provided, so tracing happens here and this executes slowly
[8]:
norm_tracing(x) # fast, traced at ivy.trace_graph
[8]:
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.36293708,  0.53895184, -0.07048605, -0.38424253, -1.04139636,
        0.16331669, -0.96587166,  1.49617496,  1.88587438, -1.25938418])>
[9]:
norm_tracing(x) # fast, traced at ivy.trace_graph
[9]:
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.36293708,  0.53895184, -0.07048605, -0.38424253, -1.04139636,
        0.16331669, -0.96587166,  1.49617496,  1.88587438, -1.25938418])>
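
To see the lazy-to-eager transition in terms of runtime, you can time the first and second calls of a lazily traced copy yourself. A minimal sketch (lazy_norm is just a throwaway name here, and the exact timings will vary by machine):

import time

lazy_norm = ivy.trace_graph(normalize)  # lazy: returns immediately, nothing traced yet

t0 = time.perf_counter()
lazy_norm(x)  # first call: tracing happens here, so this is the slow one
first_call = time.perf_counter() - t0

t0 = time.perf_counter()
lazy_norm(x)  # second call: just runs the already-traced graph
second_call = time.perf_counter() - t0

print(f"first call: {first_call:.4f}s, second call: {second_call:.4f}s")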

Transpile#

Let’s redefine the normalize function in PyTorch and transpile it to TensorFlow:

[10]:
import torch


def normalize(x):
    mean = torch.mean(x)
    std = torch.std(x)
    return torch.div(torch.sub(x, mean), std)

In the example below, the function is transpiled eagerly, so this cell executes more slowly, as this is when the transpilation process actually occurs.

[11]:
norm_trans = ivy.transpile(normalize, source="torch", target="tensorflow")
Transpilation of normalize complete.
[12]:
norm_trans(x) # fast, transpiled at ivy.transpile
[12]:
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.34431235,  0.51129461, -0.06686894, -0.36452447, -0.98795534,
        0.15493582, -0.91630631,  1.41939619,  1.78909753, -1.19475674])>
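
It can also be reassuring to check that the transpiled function matches the original PyTorch implementation on the same data. A minimal sketch, assuming x is still the tf.constant created earlier (so we convert it to a torch tensor first):

import numpy as np
import torch

torch_out = normalize(torch.tensor(np.asarray(x)))  # original torch function
tf_out = norm_trans(x)                              # transpiled tensorflow function
print(np.allclose(torch_out.numpy(), np.asarray(tf_out)))  # expected: True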

However, in the following example the transpilation occurs lazily, since we are transpiling an entire module (kornia) to JAX. The first function call will therefore be slower, as that is when the actual transpilation happens, while subsequent calls will be fast:

[2]:
import kornia

jax_kornia = ivy.transpile(kornia, source="torch", target="jax")

Let’s now call a function from our transpiled kornia module, which will trigger the actual transpilation; as a result, this first invocation is slow, but subsequent ones are fast:

[3]:
import jax
import numpy as np

input = jax.numpy.array(np.random.rand(2, 3, 4, 5))
gray = jax_kornia.color.bgr_to_grayscale(input)  # 2x1x4x5

Let’s run the function from jax_kornia again to wrap up this demo; this time the call is fast, since the transpilation has already taken place:

[8]:
gray = jax_kornia.color.bgr_to_grayscale(input)  # 2x1x4x5
print(type(gray))
<class 'jaxlib.xla_extension.ArrayImpl'>
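
If you want to double-check the transpiled module, you can compare its output against the original PyTorch kornia on the same input. A minimal sketch (agreement is expected up to floating-point tolerance):

import numpy as np
import torch

torch_gray = kornia.color.bgr_to_grayscale(torch.tensor(np.asarray(input)))  # original torch kornia
print(np.allclose(np.asarray(gray), torch_gray.numpy()))  # expected: True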

Round Up#

That’s it, you now know the difference between lazy and eager execution for ivy.trace_graph and ivy.transpile! Next, we’ll be exploring how ivy.trace_graph can be called as a function decorator!