A linear layer can reduce or increase the dimensionality; under the hood it is just matrix computation. Increasing the dimensionality raises a practical question: how to search for and apply the hyperparameters (the sizes of the hidden layers). The more hidden layers, the stronger the learning capacity, but a capacity that is too strong is not a good thing either: the model will learn the noise in the data as well. (An analogy: learning capacity that is too strong is like memorizing the textbook by rote.)

The dataset can be downloaded here: link: https://pan.baidu.com/s/1Ku5c99yDHNFMt8EJAcF5LA extraction code: n4xh. If the link expires, send me a private message.

On the difference between torch.sigmoid, torch.nn.Sigmoid, and torch.nn.functional.sigmoid:
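A minimal sketch tying the two points above together (the layer sizes and the random input are illustrative only): a `torch.nn.Linear` layer is a matrix computation that changes the feature dimension, and the three sigmoid variants compute the same values and differ only in interface. `torch.sigmoid` is a plain function, `torch.nn.Sigmoid` is a Module you instantiate (handy inside `nn.Sequential` or as a layer attribute), and `torch.nn.functional.sigmoid` is the functional form (the docs point to `torch.sigmoid` instead):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# a linear layer is just a matrix computation y = x @ W.T + b;
# this one maps 8 input features down to 7 (dimensionality reduction)
linear = nn.Linear(8, 7)
x = torch.randn(4, 8)   # a batch of 4 samples with 8 features each
z = linear(x)
print(z.shape)          # torch.Size([4, 7])

a = torch.sigmoid(z)    # plain function applied to a tensor
b = nn.Sigmoid()(z)     # a Module: instantiate first, then call
c = F.sigmoid(z)        # functional form; the docs recommend torch.sigmoid instead

# all three produce identical values
print(torch.allclose(a, b) and torch.allclose(b, c))  # True
```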
```python
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn.functional as F

# delimiter is the column separator; dtype=np.float32 because 32-bit floats
# are the usual choice for training
xy = np.loadtxt('diabetes.csv.gz', delimiter=',', dtype=np.float32)
x_data = torch.from_numpy(xy[:, :-1])
# indexing with [-1] keeps the result a matrix rather than a vector,
# so the shapes stay consistent in the computations
y_data = torch.from_numpy(xy[:, [-1]])
# print(x_data, y_data)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = torch.nn.Linear(8, 7)
        self.linear2 = torch.nn.Linear(7, 6)
        self.linear3 = torch.nn.Linear(6, 5)
        self.linear4 = torch.nn.Linear(5, 4)
        self.linear5 = torch.nn.Linear(4, 3)
        self.linear6 = torch.nn.Linear(3, 2)
        self.linear7 = torch.nn.Linear(2, 1)
        # the Sigmoid module under nn; its output lies in (0, 1).
        # PyTorch has many activation functions worth trying, e.g. ReLU: torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.linear1(x))
        x = self.sigmoid(self.linear2(x))
        x = self.sigmoid(self.linear3(x))
        x = self.sigmoid(self.linear4(x))
        x = self.sigmoid(self.linear5(x))
        x = self.sigmoid(self.linear6(x))
        x = self.sigmoid(self.linear7(x))
        return x

model = Model()

# loss function and optimizer
criterion = torch.nn.BCELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

epoch_l = []
loss_l = []
for epoch in range(800):
    # forward
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    # backward
    optimizer.zero_grad()
    loss.backward()
    # update
    optimizer.step()
    epoch_l.append(epoch)
    loss_l.append(loss.item())

plt.plot(epoch_l, loss_l, c='r')
plt.show()
```

Writing 7 layers is admittedly a bit extreme, haha. From the loss curve you can see that training has converged. As for how to choose the number of training epochs, I still don't know a principled way; if anyone has experience, I'd be glad to exchange notes.
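One common heuristic is early stopping: hold out part of the data as a validation set and stop once the validation loss has stopped improving. Below is a minimal sketch along those lines, reusing `x_data`, `y_data`, `model`, `criterion` and `optimizer` from the code above; the 80/20 split and the patience of 50 epochs are arbitrary illustrative choices, and in practice the data should be shuffled before splitting.

```python
# minimal early-stopping sketch (the split ratio and patience are illustrative)
n_train = int(0.8 * len(x_data))              # 80/20 split; shuffle first in practice
x_train, y_train = x_data[:n_train], y_data[:n_train]
x_val, y_val = x_data[n_train:], y_data[n_train:]

best_val, patience, bad_epochs = float('inf'), 50, 0
for epoch in range(10000):                    # an upper bound, not the real budget
    y_pred = model(x_train)
    loss = criterion(y_pred, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():                     # evaluation only, no gradients needed
        val_loss = criterion(model(x_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0    # improvement: reset the counter
    else:
        bad_epochs += 1
    if bad_epochs >= patience:                # no improvement for `patience` epochs
        print(f'early stop at epoch {epoch}, best val loss {best_val:.4f}')
        break
```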