Learning PyTorch with Examples


This tutorial introduces the fundamental concepts of PyTorch through self-contained examples.

At its core, PyTorch provides two main features:

- An n-dimensional Tensor, similar to numpy but able to run on GPUs
- Automatic differentiation for building and training neural networks

We will use a fully-connected ReLU network as our running example. The network has a single hidden layer and is trained with gradient descent to fit random data by minimizing the Euclidean distance between the network output and the true output.

Note

You can browse the individual examples at the end of this page.

 

Tensors

 

Warm-up: numpy

Before introducing PyTorch, we first implement the network using numpy, manually coding the forward and backward passes with numpy operations:

# -*- coding: utf-8 -*-
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

 

PyTorch: Tensors

Numpy is a great framework, but it cannot utilize GPUs to accelerate its numerical computations. For modern deep neural networks, GPUs often provide speedups of 50x or greater, so unfortunately numpy alone is not enough for modern deep learning.

Here we introduce the most fundamental PyTorch concept: the Tensor. A PyTorch Tensor is conceptually identical to a numpy array: a Tensor is an n-dimensional array, and PyTorch provides many functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they are also useful as a generic tool for scientific computing.

Also unlike numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric computations. To run a PyTorch Tensor on a GPU, you simply need to place it on the appropriate device.
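As a minimal sketch of what this looks like (not part of the original tutorial code, and assuming a CUDA-capable GPU may or may not be present):

import torch

# Fall back to the CPU when no GPU is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.randn(64, 1000)                   # created on the CPU by default
x = x.to(device)                            # copy it to the chosen device
w = torch.randn(1000, 100, device=device)   # or allocate directly on the device

y = x.mm(w)  # the matrix multiply now runs on the GPU if one is present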

Here we use PyTorch Tensors to fit a two-layer network to random data. Like the numpy example above, we need to manually implement the forward and backward passes through the network:

# -*- coding: utf-8 -*-
import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

 

 

Autograd

 

PyTorch: Tensors and autograd

In the above examples we had to manually implement both the forward and backward passes of our neural network. Manually implementing the backward pass is not a big deal for a small two-layer network, but it quickly gets very hairy for large, complex networks.

Thankfully, we can use automatic differentiation to automate the computation of backward passes in neural networks. The autograd package in PyTorch provides exactly this functionality. When using autograd, the forward pass of your network defines a computational graph: nodes in the graph are Tensors, and edges are functions that produce output Tensors from input Tensors. Backpropagating through this graph then lets you compute gradients easily.

This sounds complicated, but it is quite simple to use in practice. Each Tensor represents a node in a computational graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar value.
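For instance, here is a minimal sketch of that behavior (not from the original tutorial; the variable names are only illustrative):

import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()   # some scalar value computed from x
loss.backward()         # autograd populates x.grad
print(x.grad)           # tensor([4., 6.]), i.e. d(loss)/dx = 2*x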

Here we use PyTorch Tensors and autograd to implement our two-layer network; now we no longer need to manually implement the backward pass through the network:

# -*- coding: utf-8 -*-
import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold input and outputs.
# Setting requires_grad=False indicates that we do not need to compute gradients
# with respect to these Tensors during the backward pass.
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Create random Tensors for weights.
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y using operations on Tensors; these
    # are exactly the same operations we used to compute the forward pass using
    # Tensors, but we do not need to keep references to intermediate values since
    # we are not implementing the backward pass by hand.
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call w1.grad and w2.grad will be Tensors holding the gradient
    # of the loss with respect to w1 and w2 respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    # An alternative way is to operate on weight.data and weight.grad.data.
    # Recall that tensor.data gives a tensor that shares the storage with
    # tensor, but doesn't track history.
    # You can also use torch.optim.SGD to achieve this.
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # Manually zero the gradients after updating weights
        w1.grad.zero_()
        w2.grad.zero_()

 

PyTorch: Defining new autograd functions

Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.

In PyTorch we can easily define our own autograd operator by writing a subclass of torch.autograd.Function that implements the forward and backward functions. We can then use the new autograd operator by constructing an instance and calling it like a function, passing Tensors containing input data.

In this example we define our own custom autograd function for performing the ReLU nonlinearity, and use it to implement our two-layer network:

# -*- coding: utf-8 -*-
import torch


class MyReLU(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input


dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold input and outputs.
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Create random Tensors for weights.
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # To apply our Function, we use Function.apply method. We alias this as 'relu'.
    relu = MyReLU.apply

    # Forward pass: compute predicted y using operations; we compute
    # ReLU using our custom autograd operation.
    y_pred = relu(x.mm(w1)).mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # Manually zero the gradients after updating weights
        w1.grad.zero_()
        w2.grad.zero_()

 

nn module

 

PyTorch: nn

Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however, for large neural networks raw autograd can be a bit too low-level.

When building neural networks we frequently think of arranging the computation into layers, some of which have learnable parameters that are optimized during learning.

In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.

In PyTorch, the nn package serves this same purpose. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but may also hold internal state such as Tensors containing learnable parameters. The nn package also defines a set of useful loss functions that are commonly used when training neural networks.

In this example we use the nn package to implement our two-layer network:

# -*- coding: utf-8 -*-
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

 

 

PyTorch: optim

Up to this point we have updated the weights of our models by manually mutating the Tensors holding learnable parameters (using torch.no_grad() or .data to avoid tracking history in autograd). This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers such as AdaGrad, RMSProp, or Adam.
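As a minimal sketch of what such a manual update looks like (the parameter names here are illustrative, not from the tutorial's model):

import torch

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()   # fills w.grad

learning_rate = 1e-2

# One way: wrap the in-place update in torch.no_grad() so autograd does not track it
with torch.no_grad():
    w -= learning_rate * w.grad

# Equivalent alternative (shown commented out): mutate w.data, which shares
# storage with w but is not tracked by autograd.
# w.data -= learning_rate * w.grad.data

# In either case, zero the gradient before the next backward pass,
# since gradients accumulate by default.
w.grad.zero_()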

The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.

In this example we will use the nn package to define our model as before, but we will optimize the model with the Adam algorithm provided by the optim package:

# -*- coding: utf-8 -*-
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters
    optimizer.step()

 

 

PyTorch: Custom nn Modules

Sometimes you will want to specify models that are more complex than a sequence of existing Modules. In these cases you can define your own Modules by subclassing nn.Module and writing a forward method that receives input Tensors and produces output Tensors using other Modules or other autograd operations on Tensors.

In this example we implement our two-layer network as a custom Module subclass:

# -*- coding: utf-8 -*-
import torch


class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

 

PyTorch: Control Flow + Weight Sharing

As an example of dynamic graphs and weight sharing, we implement a very strange model: a fully-connected ReLU network that on each forward pass chooses a random number between 1 and 4 and uses that many hidden layers, reusing the same weights multiple times to compute the innermost hidden layers.

For this model we can use normal Python flow control to implement the loop, and we can implement weight sharing among the innermost layers by simply reusing the same Module multiple times when defining the forward pass.

We can easily implement this model as a Module subclass:

# -*- coding: utf-8 -*-
import random
import torch


class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we construct three nn.Linear instances that we will use
        in the forward pass.
        """
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
        and reuse the middle_linear Module that many times to compute hidden layer
        representations.

        Since each forward pass builds a dynamic computation graph, we can use normal
        Python control-flow operators like loops or conditional statements when
        defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same Module many
        times when defining a computational graph. This is a big improvement from Lua
        Torch, where each Module could be used only once.
        """
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

 

Finally, a note on renting GPUs for experiments: we rented GPUs from 智星云 and had a good experience. For details, see the 智星云 website: http://www.ai-galaxy.cn/, the Taobao store: https://shop36573300.taobao.com/, or the WeChat official account: 智星AI.


Examples

 

Tensors

https://pytorch.org/tutorials/beginner/examples_tensor/two_layer_net_numpy.html#sphx-glr-beginner-examples-tensor-two-layer-net-numpy-py

https://pytorch.org/tutorials/beginner/examples_tensor/two_layer_net_tensor.html#sphx-glr-beginner-examples-tensor-two-layer-net-tensor-py

 

Autograd

https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_autograd.html#sphx-glr-beginner-examples-autograd-two-layer-net-autograd-py

https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html#sphx-glr-beginner-examples-autograd-two-layer-net-custom-function-py

https://pytorch.org/tutorials/beginner/examples_autograd/tf_two_layer_net.html#sphx-glr-beginner-examples-autograd-tf-two-layer-net-py

 

nn module

https://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_nn.html#sphx-glr-beginner-examples-nn-two-layer-net-nn-py

https://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_optim.html#sphx-glr-beginner-examples-nn-two-layer-net-optim-py

https://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_module.html#sphx-glr-beginner-examples-nn-two-layer-net-module-py

https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html#sphx-glr-beginner-examples-nn-dynamic-net-py
