In PyTorch, ReLU (Rectified Linear Unit) is generally the recommended activation function for the MNIST handwritten-digit classification task. ReLU speeds up training and helps mitigate the vanishing gradient problem.
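To illustrate, ReLU computes max(0, x) elementwise: negative values become 0 and positive values pass through unchanged. A quick standalone sketch:

import torch
x = torch.tensor([-2.0, -0.5, 0.0, 1.0, 3.0])
print(torch.relu(x))  # tensor([0., 0., 0., 1., 3.])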
Below is a complete example showing how to use the ReLU activation function for MNIST classification in PyTorch:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
# Define a simple convolutional neural network (CNN) model
class MNISTClassifier(nn.Module):
    def __init__(self):
        super(MNISTClassifier, self).__init__()
        # 28x28 input; padding=1 preserves spatial size, each pooling halves it
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)  # 28x28 -> 14x14
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)  # 14x14 -> 7x7
        self.fc1 = nn.Linear(64 * 7 * 7, 128)
        self.relu3 = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)  # 10 digit classes

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.relu2(x)
        x = self.maxpool2(x)
        x = x.view(-1, 64 * 7 * 7)  # flatten for the fully connected layers
        x = self.fc1(x)
        x = self.relu3(x)
        x = self.fc2(x)  # raw logits; CrossEntropyLoss applies log-softmax internally
        return x
# Create the model instance
model = MNISTClassifier()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Prepare the MNIST data loaders (0.1307 / 0.3081 are the standard MNIST mean/std)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root="./data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=1000, shuffle=False)

# Train the model
model.train()
for epoch in range(10):
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}")  # loss of the last batch in the epoch
# Evaluate the model
model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for data, target in test_loader:
        output = model(data)
        _, predicted = torch.max(output, 1)  # index of the highest logit per sample
        total += target.size(0)
        correct += (predicted == target).sum().item()
print(f"Accuracy: {100 * correct / total:.2f}%")
In this example, we define a simple CNN and use the ReLU activation function after each convolutional and fully connected layer (except the output layer). Note that ReLU outputs 0 for negative inputs and passes positive inputs through unchanged; because its gradient is exactly 1 for positive inputs, it helps mitigate the vanishing gradient problem. We also use the cross-entropy loss (nn.CrossEntropyLoss()) and the Adam optimizer (optim.Adam()) to train the model.
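Since nn.ReLU has no learnable parameters, the same network is often written more compactly with the functional form torch.nn.functional.relu instead of separate nn.ReLU modules. A minimal sketch of an equivalent model (the class name MNISTClassifierFunctional is just an illustrative choice):

import torch.nn.functional as F

class MNISTClassifierFunctional(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(64 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        # F.relu and F.max_pool2d apply the same operations as the module versions
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

Both styles compute identical outputs; the module form makes the activations visible when you inspect print(model), while the functional form keeps __init__ shorter.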