
【PyTorch Learning (II)】Converting between Tensors and NumPy Arrays

Converting a Torch Tensor to a NumPy Array

A CPU Tensor and the NumPy array converted from it share the same underlying memory, so changing one changes the other.

First, import the torch and numpy packages:

import torch
import numpy as np

Create a tensor

a = torch.zeros(5)
print(a)

Output result:

tensor([0., 0., 0., 0., 0.])

Convert the tensor to a NumPy array

b = a.numpy()
print(b)

Output result:

[0. 0. 0. 0. 0.]

When the tensor is modified in place, the NumPy array converted from it also changes.

a.add_(1)
print(a)
print(b)

Output result:

tensor([1., 1., 1., 1., 1.])
[1. 1. 1. 1. 1.]
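Note that only in-place operations (the ones ending in an underscore, such as add_) are reflected in the shared array; a plain a = a + 1 creates a new tensor and rebinds the name a, leaving b attached to the old storage. A quick sketch:

```python
import torch

a = torch.zeros(5)
b = a.numpy()

a.add_(1)      # in-place: modifies the storage shared with b
print(b)       # [1. 1. 1. 1. 1.]

a = a + 1      # out-of-place: a now points at a new tensor
print(b)       # still [1. 1. 1. 1. 1.] -- b keeps the old storage
```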


Converting a NumPy Array to a Torch Tensor

Create a numpy array

a = np.zeros(5)
print(a)

Output result:

[0. 0. 0. 0. 0.]

Convert numpy array to tensor

b = torch.from_numpy(a)
print(b)

Output result:

tensor([0., 0., 0., 0., 0.], dtype=torch.float64)

When the NumPy array is modified in place, the tensor converted from it also changes.

np.add(a, 2, out=a)
print(a)
print(b)

Output result:

[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
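If sharing is not desired, torch.tensor(a) copies the data instead of wrapping it, so later changes to the array do not propagate. A minimal sketch:

```python
import numpy as np
import torch

a = np.zeros(5)
b = torch.from_numpy(a)  # shares memory with a
c = torch.tensor(a)      # copies the data

np.add(a, 2, out=a)
print(b)  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
print(c)  # tensor([0., 0., 0., 0., 0.], dtype=torch.float64) -- unaffected
```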

All tensors on the CPU (except CharTensor) support conversion to and from NumPy arrays.
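As a quick check of this, the round trip also preserves the dtype for the common CPU types (a small sketch):

```python
import torch

# the numpy array produced by .numpy() keeps the tensor's dtype
for dtype in (torch.float32, torch.int64, torch.uint8):
    t = torch.zeros(3, dtype=dtype)
    print(dtype, '->', t.numpy().dtype)
```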

Tensors on CUDA

Tensors can be moved to any device using the .to() method.
The examples below move a tensor onto and off the GPU, and require that a CUDA-capable GPU is available.
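Whether a CUDA-capable GPU is actually usable can be checked first; torch.cuda.is_available() returns a boolean:

```python
import torch

# True only when PyTorch was built with CUDA support and a GPU is visible
print(torch.cuda.is_available())
```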
Create a tensor

x = torch.rand(2,3)
y = torch.rand_like(x) 
print(x)
print(y)

Output result:

tensor([[0.7374, 0.2935, 0.4500],
        [0.9148, 0.7752, 0.5846]])
tensor([[0.0828, 0.5807, 0.8807],
        [0.9329, 0.8767, 0.0201]])

Put the tensor into the GPU for acceleration:

x = x.to('cuda')
print(x)

Output result:

tensor([[0.7374, 0.2935, 0.4500],
        [0.9148, 0.7752, 0.5846]], device='cuda:0')

Now x is on the GPU while y is still on the CPU. Performing an operation between x and y at this point raises an error:

z = x + y

Output result:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Therefore, operations can only be performed between tensors on the same device.
Moving x back to the CPU puts x and y on the same device, so they can be added:

x = x.to('cpu')
print(x)
z = x + y
print(z)

Output result:

tensor([[0.7374, 0.2935, 0.4500],
        [0.9148, 0.7752, 0.5846]])
tensor([[0.8202, 0.8742, 1.3308],
        [1.8477, 1.6520, 0.6047]])
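A common way to avoid this class of error altogether is to pick the device once and create every tensor on it, falling back to the CPU when no GPU is available (a sketch of the usual pattern):

```python
import torch

# use the GPU when one is available, otherwise stay on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(2, 3, device=device)
y = torch.rand(2, 3, device=device)
z = x + y          # safe: both operands live on the same device
print(z.device)
```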