
How to Set a Random Seed in PyTorch

Published: 2024-11-24 | Author: 千家信息网 editor

This article covers "How to Set a Random Seed in PyTorch". Plenty of people run into exactly these difficulties in real-world cases, so follow along to learn how to handle them. Read carefully and you should come away having learned something!

import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from tools import set_seed
from torch.utils.tensorboard import SummaryWriter

set_seed(1)  # set the random seed for reproducibility

n_hidden = 200
max_iter = 2000
disp_interval = 200
lr_init = 0.01


def gen_data(num_data=10, x_range=(-1, 1)):
    # linear data y = 1.5x plus Gaussian noise
    w = 1.5
    train_x = torch.linspace(*x_range, num_data).unsqueeze_(1)
    train_y = w * train_x + torch.normal(0, 0.5, size=train_x.size())
    test_x = torch.linspace(*x_range, num_data).unsqueeze_(1)
    test_y = w * test_x + torch.normal(0, 0.3, size=test_x.size())
    return train_x, train_y, test_x, test_y


train_x, train_y, test_x, test_y = gen_data(num_data=10, x_range=(-1, 1))


class MLP(nn.Module):
    def __init__(self, neural_num):
        super(MLP, self).__init__()
        self.linears = nn.Sequential(
            nn.Linear(1, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, 1),
        )

    def forward(self, x):
        return self.linears(x)


# two identical networks: one trained plainly, one with weight decay
net_n = MLP(neural_num=n_hidden)
net_weight_decay = MLP(neural_num=n_hidden)

optim_n = torch.optim.SGD(net_n.parameters(), lr=lr_init, momentum=0.9)
optim_wdecay = torch.optim.SGD(net_weight_decay.parameters(), lr=lr_init, momentum=0.9, weight_decay=1e-2)

loss_fun = torch.nn.MSELoss()  # mean squared error loss

writer = SummaryWriter(comment='test', filename_suffix='test')

for epoch in range(max_iter):
    pred_normal, pred_wdecay = net_n(train_x), net_weight_decay(train_x)
    loss_n, loss_wdecay = loss_fun(pred_normal, train_y), loss_fun(pred_wdecay, train_y)

    optim_n.zero_grad()
    optim_wdecay.zero_grad()

    loss_n.backward()
    loss_wdecay.backward()

    optim_n.step()  # update parameters
    optim_wdecay.step()

    if (epoch + 1) % disp_interval == 0:
        # log gradients and weights of both networks to TensorBoard
        for name, layer in net_n.named_parameters():
            writer.add_histogram(name + '_grad_normal', layer.grad, epoch)
            writer.add_histogram(name + '_data_normal', layer, epoch)

        for name, layer in net_weight_decay.named_parameters():
            writer.add_histogram(name + '_grad_weight_decay', layer.grad, epoch)
            writer.add_histogram(name + '_data_weight_decay', layer, epoch)

        test_pred_normal, test_pred_wdecay = net_n(test_x), net_weight_decay(test_x)

        plt.scatter(train_x.data.numpy(), train_y.data.numpy(), c='blue', s=50, alpha=0.3, label='train')
        plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='red', s=50, alpha=0.3, label='test')
        plt.plot(test_x.data.numpy(), test_pred_normal.data.numpy(), 'r-', lw=3, label='no weight decay')
        plt.plot(test_x.data.numpy(), test_pred_wdecay.data.numpy(), 'b--', lw=3, label='weight decay')
        plt.text(-0.25, -1.5, 'no weight decay loss={:.6f}'.format(loss_n.item()),
                 fontdict={'size': 15, 'color': 'red'})
        plt.text(-0.25, -2, 'weight decay loss={:.6f}'.format(loss_wdecay.item()),
                 fontdict={'size': 15, 'color': 'red'})
        plt.ylim(-2.5, 2.5)
        plt.legend()
        plt.title('Epoch: {}'.format(epoch + 1))
        plt.show()
        plt.close()
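Note that set_seed is imported from a local tools module that this article does not show. A minimal sketch of such a helper, assuming the goal is to seed Python, NumPy, and PyTorch (including CUDA) in one call, could look like this:

import random
import numpy as np
import torch

def set_seed(seed=1):
    # seed every RNG the training script may touch
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU RNG
    torch.cuda.manual_seed_all(seed)  # all CUDA devices
    # optional: make cuDNN deterministic, at some cost in speed
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

The histograms written by SummaryWriter land in a ./runs subdirectory by default and can be inspected with tensorboard --logdir=runs.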

Homework

1. Which line of code implements weight decay in PyTorch's SGD, and what is the corresponding mathematical formula?

2. In PyTorch, how does Dropout rescale the values it passes through during training?

1. Weight decay

Weight decay is enabled through the weight_decay argument when the optimizer is constructed, and it takes effect inside optim_wdecay.step():

optim_wdecay = torch.optim.SGD(net_weight_decay.parameters(), lr=lr_init,
                               momentum=0.9, weight_decay=1e-2)
optim_wdecay.step()

Inside PyTorch's SGD implementation (torch/optim/sgd.py), the decay term is folded into the gradient before the update, effectively d_p = d_p.add(p, alpha=weight_decay). With decay coefficient λ, the base update is

    w_{t+1} = w_t − lr · (∇Loss(w_t) + λ · w_t) = (1 − lr·λ) · w_t − lr · ∇Loss(w_t)

which is gradient descent on the L2-regularized objective Loss(w) + (λ/2) · ‖w‖².
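A quick numerical check (hypothetical toy values, momentum left at its default of 0) shows that weight_decay simply adds λ·w to the gradient before the parameter update:

import torch

w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1, weight_decay=0.01)  # momentum defaults to 0

loss = (2.0 * w).sum()  # d(loss)/dw = 2.0
loss.backward()
opt.step()

# expected: 1.0 - 0.1 * (2.0 + 0.01 * 1.0) = 0.7990
print(w.item())  # -> 0.799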

2. Dropout expectation

Dropout randomly deactivates hidden units: each unit is dropped with probability p, and the surviving units are divided by 1 − p (i.e. scaled by 1/(1 − p), so-called inverted dropout). This keeps the expected value of every output unchanged, so the computation of the output units no longer depends on the dropped hidden units and no rescaling is needed at evaluation time.
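A short sketch makes the scaling visible: in training mode the surviving entries of an all-ones input come out as 1/(1 − p), while in eval mode Dropout is the identity:

import torch
import torch.nn as nn

torch.manual_seed(1)
drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()    # training mode: zero with prob p, scale survivors by 1/(1-p)
print(drop(x))  # surviving entries equal 2.0

drop.eval()     # eval mode: identity, expectations already match
print(drop(x))  # all ones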

"PyTorch怎么设置随机种子"的内容就介绍到这里了,感谢大家的阅读。如果想了解更多行业相关的知识可以关注网站,小编将为大家输出更多高质量的实用文章!
