This article explains how to implement a multilayer perceptron (MLP) in PyTorch. The walkthrough is straightforward and easy to follow: the full example below builds a single-hidden-layer network (784 inputs, 256 hidden ReLU units, 10 outputs) and trains it on the Fashion-MNIST dataset.
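For reference, the model computes a hidden representation with a ReLU activation followed by a linear output layer. Writing the parameters of the two nn.Linear layers as $W_1, b_1$ and $W_2, b_2$ (notation only; these names do not appear in the code):

$$H = \mathrm{ReLU}(X W_1 + b_1), \qquad O = H W_2 + b_2$$

The softmax over the outputs $O$ is applied implicitly by CrossEntropyLoss during training.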
import torch
from torch import nn
from torch.nn import init
import torchvision
from torchvision import transforms

num_inputs = 784
num_outputs = 10
num_hiddens = 256

mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=True, download=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=False, download=True, transform=transforms.ToTensor())

batch_size = 256
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False)

def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    with torch.no_grad():  # no gradients needed during evaluation
        for X, y in data_iter:
            acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
            n += y.shape[0]
    return acc_sum / n

def train(net, train_iter, test_iter, loss, num_epochs, batch_size, params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()
            # zero out gradients before the backward pass
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            if optimizer is not None:
                # used when an optimizer is supplied, as in the concise
                # softmax-regression implementation
                optimizer.step()
            else:
                # manual mini-batch SGD update when no optimizer is given
                for param in params:
                    param.data -= lr * param.grad / batch_size
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

class Flatten(nn.Module):
    # reshape (batch, 1, 28, 28) images into (batch, 784) vectors
    def forward(self, x):
        return x.view(x.shape[0], -1)

net = nn.Sequential(
    Flatten(),
    nn.Linear(num_inputs, num_hiddens),
    nn.ReLU(),
    nn.Linear(num_hiddens, num_outputs)
)

# initialize all parameters (weights and biases) from N(0, 0.01)
for params in net.parameters():
    init.normal_(params, mean=0, std=0.01)

loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
num_epochs = 5
train(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
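Once training finishes, the net can be used for prediction. Here is a minimal inference sketch; the class-name list is the standard Fashion-MNIST label order and is an assumption added here, not part of the original article:

# minimal inference sketch, assuming the training script above has run;
# the label names are the standard Fashion-MNIST classes (an assumption,
# not taken from the original article)
labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat',
          'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']

X, y = next(iter(test_iter))           # one batch of test images
with torch.no_grad():
    preds = net(X).argmax(dim=1)       # predicted class indices
for i in range(5):                     # inspect the first few samples
    print('true: %-10s pred: %s' % (labels[y[i]], labels[preds[i]]))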
Thanks for reading. That concludes this walkthrough of implementing a multilayer perceptron in PyTorch. Hopefully it has given you a clearer picture of the approach; the best way to consolidate it is to run and experiment with the code yourself.