千家信息网

How to Implement a Multilayer Perceptron in PyTorch

Published on 2025-02-03 by the 千家信息网 editorial team

This article explains how to implement a multilayer perceptron (MLP) in PyTorch. The walkthrough is simple and clear; follow along with the code below to study the implementation step by step.

import torch
from torch import nn
from torch.nn import init
import torchvision
from torchvision import transforms

num_inputs = 784
num_outputs = 10
num_hiddens = 256

mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=True,
                                                download=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=False,
                                               download=True, transform=transforms.ToTensor())

batch_size = 256
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False)

def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
        n += y.shape[0]
    return acc_sum / n

def train(net, train_iter, test_iter, loss, num_epochs, batch_size,
          params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                # Clear manually managed gradients when no optimizer is used
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            optimizer.step()  # also used in the "concise implementation of softmax regression" section
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        # Collapse each 1x28x28 image into a 784-dimensional vector
        return x.view(x.shape[0], -1)

net = nn.Sequential(
    Flatten(),
    nn.Linear(num_inputs, num_hiddens),
    nn.ReLU(),
    nn.Linear(num_hiddens, num_outputs)
)

for param in net.parameters():
    init.normal_(param, mean=0, std=0.01)

loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
num_epochs = 5
train(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
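To make the layer dimensions of the network concrete, here is a minimal pure-Python sketch of the same 784 → 256 → 10 forward pass (a hypothetical illustration only; the real model above uses torch tensors and trained weights, while this uses small random weights):

```python
import random

# Same dimensions as the PyTorch model above
NUM_INPUTS, NUM_HIDDENS, NUM_OUTPUTS = 784, 256, 10

def linear(x, w, b):
    # w has shape (out_dim, in_dim); x has length in_dim
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def mlp_forward(x, w1, b1, w2, b2):
    # Hidden layer with ReLU, then the output (logit) layer
    return linear(relu(linear(x, w1, b1)), w2, b2)

random.seed(0)
# Weights drawn from N(0, 0.01), mirroring init.normal_ above
w1 = [[random.gauss(0, 0.01) for _ in range(NUM_INPUTS)] for _ in range(NUM_HIDDENS)]
b1 = [0.0] * NUM_HIDDENS
w2 = [[random.gauss(0, 0.01) for _ in range(NUM_HIDDENS)] for _ in range(NUM_OUTPUTS)]
b2 = [0.0] * NUM_OUTPUTS

x = [0.5] * NUM_INPUTS  # stand-in for one flattened 28x28 image
logits = mlp_forward(x, w1, b1, w2, b2)
print(len(logits))  # 10, one logit per Fashion-MNIST class
```

This is exactly what `Flatten` plus the two `nn.Linear` layers compute; PyTorch simply vectorizes it over a batch and tracks gradients for training.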

Thank you for reading. That concludes "How to Implement a Multilayer Perceptron in PyTorch". After working through this article you should have a deeper understanding of the topic, though the details still need to be verified through your own practice. 千家信息网 will continue to publish more articles on related topics; feel free to follow along!
