
Things on this page are the author's fragmentary and immature notes/thoughts. Please read with your own judgement!

import torch
x = torch.tensor(
    [
        [1, 2, 3, 4, 5],
        [6, 7, 8, 9, 10],
    ]
)
x
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
x.shape
torch.Size([2, 5])

Tensor.resize_

Notice that Tensor.resize_ operates in place; the non-in-place version Tensor.resize is deprecated.

xc = x.clone()
xc
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
xc is x
False
xc.resize_((1, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
xc.shape
torch.Size([1, 2, 5])

You can resize a Tensor to have more elements. The extra elements are appended on the right.

  • The new elements are not initialized: their values are whatever happens to be in the underlying memory, so expect arbitrary garbage for both int and float Tensors (as the examples below show).

Resize an int Tensor.

t = torch.tensor([1, 2, 3])
t
tensor([1, 2, 3])
t.resize_(5)
tensor([ 1, 2, 3, 112, 0])

Resize a float Tensor.

t = torch.tensor([1.0, 2.0, 3.0])
t
tensor([1., 2., 3.])
t.resize_(5)
tensor([1.0000e+00, 2.0000e+00, 3.0000e+00, 4.5862e-41, 8.9683e-44])
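If you want deterministic padding instead of uninitialized memory, torch.nn.functional.pad is a safer alternative. A minimal sketch (the pad widths below are my choice for illustration):

```python
import torch
import torch.nn.functional as F

t = torch.tensor([1, 2, 3])
# Pad with 2 zeros on the right: the pad tuple is (left, right).
padded = F.pad(t, (0, 2), value=0)
print(padded)        # tensor([1, 2, 3, 0, 0])
print(padded.shape)  # torch.Size([5])
```

Unlike resize_, this always fills the new elements with the given value.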

Tensor.squeeze

x.squeeze(0)
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
x.squeeze(0).shape
torch.Size([2, 5])
y = x.unsqueeze(0)
y
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
y.shape
torch.Size([1, 2, 5])
y.squeeze(0)
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
y.squeeze(0).shape
torch.Size([2, 5])

Tensor.unsqueeze

x.unsqueeze(0)
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.unsqueeze(0).shape
torch.Size([1, 2, 5])

Tensor.view

x.view((1, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.view((1, 2, 5)).shape
torch.Size([1, 2, 5])

Tensor.view_as

View this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()).
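A minimal sketch of Tensor.view_as (the template tensor below is just for illustration; only its shape matters, not its dtype or values):

```python
import torch

x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
template = torch.zeros(1, 2, 5)

# view_as reshapes x to template's shape.
y = x.view_as(template)
print(y.shape)  # torch.Size([1, 2, 5])

# Equivalent to calling view with the template's size.
print(torch.equal(y, x.view(template.size())))  # True
```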

Tensor.expand

Returns a new view of the self tensor with singleton dimensions expanded to a larger size.

Tensor.expand is used to replicate data in a tensor without copying it. New dimensions can be prepended on the left, e.g., expanding x to shape (k, *x.shape) for a non-negative integer k. Existing dimensions of size 1 can also be broadcast to a larger size, and passing -1 for a dimension keeps its size unchanged.

x
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
x.expand((3, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]], [[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]], [[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.expand((1, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.expand((1, 2, 5)).shape
torch.Size([1, 2, 5])
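Besides prepending new dimensions, expand can broadcast an existing size-1 dimension, and -1 keeps a dimension as-is. A quick sketch (this example is mine, not from the snippets above):

```python
import torch

c = torch.tensor([[1], [2]])  # shape (2, 1)
# Broadcast the size-1 second dimension to size 3.
print(c.expand(2, 3))
# tensor([[1, 1, 1],
#         [2, 2, 2]])

# -1 keeps the first dimension unchanged; equivalent to expand(2, 3).
print(c.expand(-1, 3).shape)  # torch.Size([2, 3])
```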
Tensor.repeat

x = torch.tensor([1, 2, 3])
x
tensor([1, 2, 3])
x.repeat(2)
tensor([1, 2, 3, 1, 2, 3])
x.repeat(2, 4, 3)
tensor([[[1, 2, 3, 1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3]], [[1, 2, 3, 1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3]]])
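A key difference worth remembering: expand returns a view that shares storage with the original tensor (no data is copied), while repeat allocates new memory. A small sketch:

```python
import torch

x = torch.tensor([1, 2, 3])
e = x.unsqueeze(0).expand(2, 3)  # view: no data copied
r = x.repeat(2, 1)               # copy: new storage

# Both produce the same values...
print(torch.equal(e, r))  # True

# ...but the expanded tensor has stride 0 along the broadcast
# dimension (rows alias the same memory), while repeat owns
# its own contiguous memory.
print(e.stride())  # (0, 1)
print(r.stride())  # (3, 1)
```

Because expanded rows alias the same memory, writing through an expanded view is unsafe; use repeat (or clone the expansion) when you need independent copies.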

Add a Dummy Dimension

If you need to add a dummy dimension at the beginning or the end, the easiest way is the method Tensor.unsqueeze, since you only need to specify the position of the new dummy dimension instead of spelling out the full new shape. Otherwise, I’d suggest using the method Tensor.view, as it can handle both adding and removing a dummy dimension (at the slight inconvenience of specifying the full new shape).

x.expand((1, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.view((1, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.unsqueeze(0)
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
x.clone().resize_((1, 2, 5))
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])

You can also use numpy-style slicing to add a dummy dimension!

x[None, :, :]
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
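None can be placed at any position in the index, and an Ellipsis makes it easy to add a trailing dummy dimension. A quick sketch, assuming x is the 2x5 tensor above:

```python
import torch

x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
print(x[None].shape)        # torch.Size([1, 2, 5])  (same as x[None, :, :])
print(x[:, None, :].shape)  # torch.Size([2, 1, 5])
print(x[..., None].shape)   # torch.Size([2, 5, 1])
```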

Remove a Dummy Dimension

If you need to remove a dummy dimension from the beginning or the end, the easiest way is the method Tensor.squeeze, since you only need to specify the position of the dummy dimension instead of spelling out the full new shape. Otherwise, I’d suggest using the method Tensor.view, as it can handle both adding and removing a dummy dimension (at the slight inconvenience of specifying the full new shape).

y = x.unsqueeze(0)
y
tensor([[[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]])
y.shape
torch.Size([1, 2, 5])
y.view((2, 5))
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
y.squeeze(0)
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
y.clone().resize_((2, 5))
tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]])
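One caveat about view: it requires the tensor's memory layout to be compatible with the requested shape, so it can fail on non-contiguous tensors (e.g., after a transpose). Tensor.reshape falls back to a copy in that case. A hedged sketch:

```python
import torch

x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
t = x.t()  # transposed view: non-contiguous

try:
    t.view(10)  # raises: incompatible size/stride
except RuntimeError as e:
    print("view failed:", e)

# reshape returns a view when possible, otherwise a copy.
print(t.reshape(10))  # tensor([ 1,  6,  2,  7,  3,  8,  4,  9,  5, 10])
```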