Using PyTorch for deep learning on a CentOS system typically involves the following steps:
Install Anaconda: Visit the official Anaconda download page, download the Anaconda3 installer for Linux (CentOS), and follow the prompts to complete the installation.
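For example, you can download and run the installer from the shell (the installer file name below is only an example; check the download page for the current release):
# download a recent Anaconda3 installer (example file name; pick the latest from the download page)
wget https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
# run the installer and follow the interactive prompts
bash Anaconda3-2023.09-0-Linux-x86_64.sh
# reload the shell so conda is on the PATH
source ~/.bashrc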
Create and activate a virtual environment:
conda create -n pytorch python=3.8
conda activate pytorch
Install PyTorch: In the activated virtual environment, install PyTorch with conda. For GPU support, make sure compatible versions of CUDA and cuDNN are installed on the system and choose a GPU-enabled build:
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
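If the machine has no NVIDIA GPU, the CPU-only build can be installed instead:
conda install pytorch torchvision torchaudio cpuonly -c pytorch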
Verify the installation: Start an interactive Python session and run the following commands to confirm that PyTorch was installed successfully:
import torch
print(torch.__version__)
print(torch.cuda.is_available())
Alternatively, PyTorch can be installed with pip. Install pip first (if it is not already available):
sudo yum install python3-pip
Install PyTorch: Install PyTorch with pip; pointing pip at the Tsinghua University mirror index can speed up the download:
pip install torch torchvision torchaudio -i https://pypi.tuna.tsinghua.edu.cn/simple
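Note that the Tsinghua mirror serves the standard PyPI wheels. For a pip build targeting a specific CUDA version, PyTorch also publishes wheels on its own index, for example for CUDA 11.8:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118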
Verify the installation:
import torch
print(torch.__version__)
print(torch.cuda.is_available())
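With the installation verified, a common next step in scripts is to select the device once and move data onto it; a minimal sketch:
import torch

# use the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)  # move a tensor onto the chosen device
print(x.device)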
Import the required libraries:
import torch
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu in the model below
import torch.optim as optim
from torchvision import datasets, transforms
Define the model architecture:
class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        # one convolutional layer: 1 input channel (grayscale), 32 output channels, 3x3 kernel
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
        # fully connected layer mapping the flattened feature map to 10 class scores
        self.fc1 = nn.Linear(32 * 28 * 28, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = x.view(-1, 32 * 28 * 28)  # flatten for the linear layer
        x = self.fc1(x)
        return x
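Before wiring up training, it can help to push a dummy batch through the network to confirm the shapes line up (a quick sanity check, not part of the original steps):
model = SimpleCNN()
dummy = torch.randn(1, 1, 28, 28)  # one fake MNIST-sized image: batch, channel, height, width
print(model(dummy).shape)          # expected: torch.Size([1, 10])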
Prepare the data:
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
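The evaluation step at the end reads from a test_loader that the snippet above never defines; a sketch of setting it up the same way as the training loader:
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)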
Initialize the model, loss function, and optimizer:
model = SimpleCNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
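To train on the GPU instead of the CPU, the model and every batch have to live on the same device. A sketch of the extra lines this would need (the training loop below runs on the CPU as written):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # move the model parameters onto the device
# inside the training loop, each batch must be moved as well:
#     images, labels = images.to(device), labels.to(device)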
Train the model:
num_epochs = 10
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        # backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i+1) % 100 == 0:
            print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(train_loader)}], Loss: {loss.item():.4f}')
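After training, the learned weights can be saved for later reuse; a minimal sketch (the file name is arbitrary):
# save only the parameters (state_dict), the usual way to checkpoint a PyTorch model
torch.save(model.state_dict(), 'simple_cnn.pth')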
Evaluate the model:
model.eval()  # switch to evaluation mode
with torch.no_grad():  # no gradient computation during evaluation
    correct = 0
    total = 0
    for data, labels in test_loader:
        outputs = model(data)  # the CNN expects [N, 1, 28, 28] input, so no flattening here
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f'Accuracy of the network on the 10000 test images: {100 * correct / total}%')
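To reuse the saved weights later, rebuild the model and load the state dict before switching to evaluation mode (a sketch, assuming the checkpoint file from the saving step above):
model = SimpleCNN()
model.load_state_dict(torch.load('simple_cnn.pth'))
model.eval()  # always switch to evaluation mode before inference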