Supercharge Your Python Code with Numba — A Programmer’s Guide

Apr 04, 2023

If you’re anything like me, you’ve probably experienced the struggle of writing high-performance Python code. Python’s fantastic readability and ease-of-use sometimes come at the cost of performance. Well, today, I’m going to introduce you to a nifty little library that can help you speed up your Python code: Numba!

Numba is a just-in-time (JIT) compiler for Python that uses the LLVM compiler infrastructure to translate a subset of Python and NumPy code into fast machine code. In this blog post, we’ll explore how to use Numba to accelerate your Python code and get a taste of the performance gains you can achieve.

Setting Up

To get started, you’ll need to install Numba. You can easily do this using pip or conda:

pip install numba

or

conda install numba
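
If you want to confirm the installation worked, a quick version check from a Python shell is enough:

import numba

# Prints the installed Numba version, e.g. "0.57.0"
print(numba.__version__)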

Now that we have Numba installed, let’s dive into the world of Python acceleration!

A Simple Example

Let’s start with a basic example to demonstrate Numba’s capabilities. We’ll create a Python function to calculate the nth Fibonacci number using recursion:

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

While this code is straightforward and easy to read, it can be quite slow for larger values of n. Let’s see how Numba can help us speed things up.

To accelerate our fib function with Numba, all we need to do is add a decorator to the function definition. We’ll give the compiled version its own name, fib_numba, so we can compare it against the original later:

from numba import jit

@jit(nopython=True)
def fib_numba(n):
    if n < 2:
        return n
    return fib_numba(n - 1) + fib_numba(n - 2)

With just an import and a decorator, we’ve enabled Numba’s JIT compilation for our function. The nopython=True argument tells Numba to compile the function in nopython mode, which means it generates native machine code without falling back to the Python interpreter. This mode generally provides the best performance.
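
One detail worth knowing: by default Numba compiles lazily, the first time the function is called with a particular set of argument types. If you’d rather pay that cost up front, you can pass an explicit type signature to the decorator. Here’s a minimal sketch with a made-up helper function (the signature string uses Numba’s standard type names):

from numba import jit

# Eager compilation: supplying a signature makes Numba compile at decoration
# time instead of on the first call. "float64(float64, float64)" means the
# function takes two doubles and returns a double. (hypotenuse is just an
# illustrative name, not part of the example above.)
@jit("float64(float64, float64)", nopython=True)
def hypotenuse(x, y):
    return (x * x + y * y) ** 0.5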

Now let’s test the performance difference:

import time

# Test the original Python function
start_time = time.time()
result = fib(35)
print(f"Original Python: {result}, time: {time.time() - start_time:.2f}s")

# Call the Numba version once first so the one-off compilation isn't timed
fib_numba(35)

# Test the Numba-accelerated function
start_time = time.time()
result = fib_numba(35)
print(f"Numba-accelerated: {result}, time: {time.time() - start_time:.2f}s")

You should see a significant speedup with the Numba-accelerated version of the function. The exact speedup will depend on your machine, but it’s not uncommon to see improvements of 100x or more!
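
One more practical note: the machine code Numba generates lives only as long as the Python process, so a fresh run of your script pays the compilation cost again. If that matters for your workload, Numba can cache the compiled code on disk, roughly like this (fib_cached is just a copy of the function above with caching enabled):

from numba import jit

# cache=True writes the compiled machine code to a file-based cache next to
# the source file, so later runs of the script can skip recompilation.
@jit(nopython=True, cache=True)
def fib_cached(n):
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)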

Using Numba with NumPy

Numba really shines when used with NumPy, a popular library for numerical computing in Python. Let’s take a look at another example that demonstrates Numba’s power when combined with NumPy.

Imagine you have a large 2D array, and you want to multiply each element by the corresponding element of another 2D array of the same size. In NumPy, this operation is straightforward using the * operator:

import numpy as np

a = np.random.random((1000, 1000))
b = np.random.random((1000, 1000))

result = a * b

Now let’s say you want to apply a more complex operation to each element pair. For instance, let’s calculate the sine of the sum of each element pair and then multiply the result by the square root of their product. Here’s how we could do this with explicit Python loops:

def complex_operation(a, b):
    result = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            result[i, j] = np.sin(a[i, j] + b[i, j]) * np.sqrt(a[i, j] * b[i, j])
    return result

This code, while straightforward, will be quite slow due to the nested loops. Now let’s see how Numba can help us speed it up:

from numba import njit

@njit
def complex_operation_numba(a, b):
    result = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            result[i, j] = np.sin(a[i, j] + b[i, j]) * np.sqrt(a[i, j] * b[i, j])
    return result

All we’ve done is add the @njit decorator to our function, which is an alias for @jit(nopython=True). Now let's compare the performance of the original Python function and the Numba-accelerated version:

# Test the original Python function
start_time = time.time()
result = complex_operation(a, b)
print(f"Original Python: time: {time.time() - start_time:.2f}s")

# Call the Numba version once first so compilation time isn't measured
complex_operation_numba(a, b)

# Test the Numba-accelerated function
start_time = time.time()
result = complex_operation_numba(a, b)
print(f"Numba-accelerated: time: {time.time() - start_time:.2f}s")

Once again, you should see a significant speedup with the Numba-accelerated version of the function. The exact improvement will depend on your machine, but it’s not uncommon to see speedups of 10x or more for this type of operation.
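
As an aside, for purely element-wise operations like this one, Numba also offers a @vectorize decorator that turns a scalar function into a NumPy ufunc, so you don’t have to write the loops at all. This isn’t what the loop-based example above does, but it’s often a natural fit; a minimal sketch (complex_operation_vec is an illustrative name):

import numpy as np
from numba import vectorize

# @vectorize compiles a scalar function into a NumPy ufunc that broadcasts
# over whole arrays; the signature string declares the element types.
@vectorize(["float64(float64, float64)"])
def complex_operation_vec(x, y):
    return np.sin(x + y) * np.sqrt(x * y)

result = complex_operation_vec(a, b)  # works directly on the 2D arrays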

Parallelization with Numba

Numba can also help you parallelize your code for even more performance gains. To do this, use the prange function from the numba package in place of range for the loop you want to run in parallel (typically the outermost one), and pass the parallel=True option to the jit or njit decorator.

Here’s our previous example modified for parallel execution:

from numba import njit, prange

@njit(parallel=True)
def complex_operation_parallel(a, b):
    result = np.empty_like(a)
    for i in prange(a.shape[0]):  # the outer loop is split across CPU threads
        for j in range(a.shape[1]):
            result[i, j] = np.sin(a[i, j] + b[i, j]) * np.sqrt(a[i, j] * b[i, j])
    return result

Now let’s compare the performance of the Numba-accelerated function with and without parallelization:

# Test the Numba-accelerated function
start_time = time.time()
result = complex_operation_numba(a, b)
print(f"Numba-accelerated: time: {time.time() - start_time:.2f}s")

# Call the parallel version once first so compilation time isn't measured
complex_operation_parallel(a, b)

# Test the Numba-accelerated parallel function
start_time = time.time()
result = complex_operation_parallel(a, b)
print(f"Numba-accelerated (parallel): time: {time.time() - start_time:.2f}s")

Depending on your machine and the number of available cores, you should see an additional speedup when using the parallelized version of the function. This performance boost comes from Numba’s ability to distribute the workload across multiple CPU cores.
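
If you want to see what Numba actually parallelized, or constrain how many cores it uses, there are a couple of knobs worth knowing about. A small sketch, assuming the parallel function has already been called at least once:

import numba

# Limit the parallel runtime to four threads (the value must not exceed the
# number of threads Numba launched, which defaults to the number of CPU cores).
numba.set_num_threads(4)
print(numba.get_num_threads())

# Print a report describing which loops Numba parallelized and how
complex_operation_parallel.parallel_diagnostics(level=4)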

Numba is a powerful tool for accelerating Python code, especially when working with numerical operations and NumPy arrays. With just a few lines of code, you can achieve significant performance improvements and even parallelize your code for further gains.

Keep in mind that Numba is not always the best solution, and it might not work well with some Python features or libraries. However, when dealing with computationally intensive tasks, it’s definitely worth giving Numba a try.
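
For example, nopython mode can’t type arbitrary Python objects, so a function that takes a plain dict (rather than a numba.typed.Dict) will fail to compile. A hedged sketch of what that typically looks like (lookup is a made-up example):

from numba import njit

@njit
def lookup(d):
    return d["x"]

try:
    # Plain Python dicts aren't supported as nopython-mode arguments, so this
    # raises a typing error when compilation is triggered by the first call.
    lookup({"x": 1.0})
except Exception as exc:
    print(f"Numba couldn't compile this: {type(exc).__name__}")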

So go ahead, supercharge your Python code with Numba, and let me know your thoughts and experiences in the comments below.

Anton Emelianov

CTO (Chief Technology Officer)
