In this article, we’re going to explore why Torch is better than NumPy at doing math (but not really).
First off, let’s cover what these two libraries are all about. NumPy is a Python library that provides support for large multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on them. Torch, on the other hand, is a popular open-source machine learning framework that provides a wide range of tools for building deep neural networks.
Now, let’s roll with why people think Torch is better than NumPy at doing math (but not really). First, Torch supposedly has a more intuitive syntax when it comes to matrix operations. For example, if you want to add two matrices together in NumPy, you would do something like this:
# Import the NumPy library and assign it to the variable "np"
import numpy as np
# Create a NumPy array "a" with values 1, 2, 3, 4 in a 2x2 matrix
a = np.array([[1, 2], [3, 4]])
# Create another NumPy array "b" with values 5, 6, 7, 8 in a 2x2 matrix
b = np.array([[5, 6], [7, 8]])
# Add the two matrices "a" and "b" together and assign it to the variable "c"
c = a + b
# Print the result of the addition
print(c)
# Output:
# [[ 6  8]
#  [10 12]]
The `+` operator performs element-wise addition on the two arrays, so each value in `a` is added to the corresponding value in `b`, producing a new 2x2 matrix with the same dimensions.
But in Torch, you can supposedly do the same thing with less typing and syntax overhead. Here’s how it looks:
# Import the torch library and alias it as "t"
import torch as t
# Create a tensor "a" with values 1, 2, 3, 4 in a 2x2 matrix
a = t.tensor([[1, 2], [3, 4]])
# Create a tensor "b" with values 5, 6, 7, 8 in a 2x2 matrix
b = t.tensor([[5, 6], [7, 8]])
# Add tensors "a" and "b" together and store the result in tensor "c"
c = a + b
# Print the result of tensor "c"
print(c)
This code would output the same result, this time as a Torch tensor:
tensor([[ 6,  8],
        [10, 12]])
Since the input values are Python integers, Torch infers an integer dtype (int64 by default), which is why the printed values have no decimal points.
As you can see, Torch’s syntax is just as concise and easy to read. But that doesn’t mean NumPy isn’t good at doing math; it just means that Torch’s interface for matrix operations is, at best, comparable (but not really better). In fact, under the hood, both libraries rely on similar optimized numerical routines to perform these calculations.
Another reason why people prefer Torch over NumPy is its support for GPU acceleration. If you’re working with large datasets or training deep neural networks, using a GPU can significantly speed up your computations. With Torch, it’s easy to move your data and models onto the GPU with just a few lines of code:
# Import the torch library and alias it as "t"
import torch as t
# Check if a GPU is available and pick the device accordingly
device = 'cuda' if t.cuda.is_available() else 'cpu'
# Create a tensor "a" with values 1, 2, 3, 4 on the chosen device
a = t.tensor([[1, 2], [3, 4]], device=device)
# Create a tensor "b" with values 5, 6, 7, 8 on the chosen device
b = t.tensor([[5, 6], [7, 8]], device=device)
# Add tensors "a" and "b"; the addition runs on the GPU if one was found
c = a + b
# Print the result of "c"
print(c)
This code would output the same result as before, with the addition performed on the GPU when one is available (for a tiny 2x2 matrix you won’t notice any difference; the payoff only shows up with large tensors). However, if you don’t have access to a GPU or prefer not to use one, NumPy is still a great choice for matrix operations; it just doesn’t offer the same built-in support for acceleration.