PyTorch basics: The easiest way to learn the fundamentals of PyTorch


What is PyTorch?

PyTorch is a deep learning library, like Keras and TensorFlow. It is an open-source library based on Torch and was developed by Facebook’s AI team. PyTorch is heavily used for building neural network models. In this blog we shall learn PyTorch basics, which will help us later to build small models using the PyTorch library.


What are the benefits of using PyTorch?

Broadly speaking, there are two main benefits of using PyTorch.

PyTorch can use the GPU. It is very similar to the NumPy library, but NumPy can only run on the CPU. For this reason, large matrix operations are slower in NumPy. PyTorch, on the other hand, can run on the GPU, so parallel matrix multiplications are blazingly fast.

PyTorch can calculate derivatives automatically. PyTorch tensors have a flag called requires_grad; if we enable it, PyTorch’s autograd engine tracks the operations on that tensor and can compute and store its gradient.
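A minimal sketch of what this looks like (the variable names here are just for illustration):

import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2       # y = x^2
y.backward()     # autograd computes dy/dx automatically
print(x.grad)    # tensor(6.), since dy/dx = 2x = 6 at x = 3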

Another important feature of PyTorch is that its coding style is very close to core Python. This means that anyone who is familiar with Python will find it very easy to pick up PyTorch.

What is a Tensor?

A tensor is a container that can store data in N dimensions, along with the linear operations defined on that data.
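For example, a small illustration with arbitrary values:

import torch

scalar = torch.tensor(5)                   # 0-D tensor
vector = torch.tensor([1, 2, 3])           # 1-D tensor
matrix = torch.tensor([[1, 2], [3, 4]])    # 2-D tensor
print(vector.ndim, matrix.shape)           # 1 torch.Size([2, 2])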

Essential code for PyTorch basics:

Enough of the introduction; from now onwards, we shall go through some basic PyTorch code.

How to import PyTorch?

import torch

Creating a tensor directly from numbers:

x = [[1, 2],[3, 4]]
x = torch.tensor(x)

Alternatively, we can declare the shape first, and PyTorch will automatically create a tensor of that shape.

shape = (2,3,)
some_tensor = torch.ones(shape)

Next, let’s see how to create an empty tensor.

x = torch.empty(5,3)

This will create an uninitialized 5x3 tensor; its values are whatever happened to be in memory, not zeros.

Creating tensors with random values.

x = torch.rand(4,3)

Creating a tensor of all ones in a defined shape.

x = torch.ones(3,3)

Specifying the tensor datatype at creation time.

x = torch.ones(4,4,dtype=torch.int16)

In the above line, we have created a 4x4 tensor of 16-bit integers.

Creating tensors of double type.

x = torch.ones(2,2,dtype=torch.double)

There are many other datatypes, like float64, float16, int32, etc.
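We can also cast an existing tensor to another datatype with .to(); a small sketch:

x = torch.ones(2, 2, dtype=torch.float64)
y = x.to(torch.int32)      # cast to 32-bit integers
print(x.dtype, y.dtype)    # torch.float64 torch.int32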

We can also do all the arithmetic operations like addition, subtraction, multiplication and division. There are different ways to execute these operations.

The first way:

mark_1 = [34,23,67,55,70]

mark_2 = [42,50,21,65,27]

mark_1_tensor = torch.tensor(mark_1)

mark_2_tensor = torch.tensor(mark_2)

total = mark_1_tensor + mark_2_tensor

The second way:

total = mark_1_tensor.add(mark_2_tensor)

The third way:

mark_1_tensor.add_(mark_2_tensor)

Note the difference in the third way: methods ending with an underscore ( _ ) work in place, so mark_1_tensor itself is updated with the result. We can think of this as inplace=True. Therefore, the old value of mark_1_tensor is lost. A small contrast is sketched below.
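A small sketch of the contrast (the values are arbitrary):

a = torch.tensor([1, 2, 3])
b = torch.tensor([10, 20, 30])

c = a.add(b)    # out-of-place: a is still tensor([1, 2, 3])
a.add_(b)       # in-place: a is now tensor([11, 22, 33])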

The slicing operations of PyTorch are very similar to NumPy’s.

Let’s say we have a 5x3 tensor.

x = torch.rand(5,3)

Getting the zeroth row (all columns).

x[ 0, : ]

Getting the element at row 1, column 1 (indexing starts at zero, so this is the second row and second column).

x[ 1, 1]

In the above case, x[1, 1] points to a single scalar value. Therefore, we can also use x[1, 1].item(), which returns just the value as a plain Python number. In contrast, we cannot use .item() on a slice with more than one element. For example,

x[0, :].item() will not work, as it accesses more than one value.
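For instance:

x = torch.rand(5, 3)
value = x[1, 1].item()    # works: a single element
# x[0, :].item()          # error: the slice contains 3 elements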

How to reshape a tensor?

The view() method is used to change the shape of a tensor.

x = torch.rand(5,3)

y = x.view(15)

Here, x is a 5x3 tensor with 15 elements in total, so y will be a 1-D tensor of length 15 after flattening.

We can also use x.view(-1, 15). Here, -1 tells PyTorch to infer that dimension automatically: the number of columns must be 15, and the number of rows is worked out from the total number of elements. Similarly, x.view(15, -1) fixes the number of rows at 15 and lets PyTorch infer the number of columns, as sketched below.
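A quick sketch of shape inference with -1:

x = torch.rand(5, 3)           # 15 elements in total
print(x.view(15).shape)        # torch.Size([15])
print(x.view(-1, 15).shape)    # torch.Size([1, 15]); rows inferred
print(x.view(5, -1).shape)     # torch.Size([5, 3]); columns inferred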

Converting a torch tensor to NumPy.

x.numpy()

Converting a NumPy array to a torch tensor.

x = torch.from_numpy(x)
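Note that on the CPU, torch.from_numpy() and .numpy() share the same underlying memory, so changing one changes the other. A small sketch:

import numpy as np
import torch

a = np.ones(3)
t = torch.from_numpy(a)    # t shares memory with a
t.add_(1)                  # in-place change on the tensor
print(a)                   # [2. 2. 2.]; the NumPy array changed too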

Checking the availability of CUDA:

if torch.cuda.is_available():
    print("cuda available")

Getting the current CUDA device index:

torch.cuda.current_device()

Storing a tensor on the GPU at declaration time:

x = torch.ones(5, device='cuda:0')

Transferring a tensor to the GPU after declaration:

y = y.to("cuda:0")

Let’s say a tensor is stored on the GPU and we want to convert it into a NumPy array. Remember, NumPy does not have GPU support. Therefore, we have to first transfer the tensor to the CPU, and then we can convert it into a NumPy array. Below is the code for that.

z = z.to("cpu")

z.numpy()
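A common device-agnostic pattern ties these pieces together (a sketch; the variable names are just for illustration):

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.ones(5, device=device)
x_np = x.cpu().numpy()    # always safe: move to the CPU first, then convert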

During the optimization of deep learning models, we need to calculate the derivatives of variables. To make this happen, PyTorch provides the flag requires_grad=True; by specifying this, we can later calculate the gradient of that variable. With this flag enabled, PyTorch tracks all operations on the variable so that it can compute the gradient.

x = torch.ones(5, requires_grad = True)

PyTorch can also be stopped from tracking the gradient. Following are two ways to do that.

1st method:

with torch.no_grad():
    z = torch.matmul(x, w) + b

2nd method:

z = torch.matmul(x, w)+b

z_det = z.detach()
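Putting the two methods together in a runnable sketch (the shapes of x, w and b here are assumptions for illustration):

x = torch.rand(1, 5)
w = torch.rand(5, 3, requires_grad=True)
b = torch.rand(3, requires_grad=True)

with torch.no_grad():
    z = torch.matmul(x, w) + b
print(z.requires_grad)    # False: nothing was tracked inside the block

z = torch.matmul(x, w) + b
z_det = z.detach()        # same values, detached from the graph
print(z.requires_grad, z_det.requires_grad)    # True False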

Conclusion of PyTorch basics:

In this blog on PyTorch basics, we discussed the elementary operations of PyTorch. After going through it, we hope that you can now write and understand fundamental PyTorch code. As you might have noticed, PyTorch code is very similar to core Python code.

That is all for PyTorch basics. In the upcoming PyTorch blogs, we shall build small-scale models like linear regression, logistic regression and so on.
