Building Your First Neural Network with PyTorch

1. Introduction to PyTorch

PyTorch is an open-source machine learning library developed by Facebook’s AI Research lab (FAIR, now part of Meta). It is widely used for deep learning because its computational graph is built dynamically as your Python code runs (define-by-run), which makes models easy to write and debug.
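
To see what a dynamic computational graph means in practice, here is a minimal sketch: autograd records the graph as ordinary Python code executes, and gradients can be computed immediately afterwards.

import torch

# Define-by-run in action: the graph is recorded as the operations execute.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x      # ordinary Python code; autograd tracks it on the fly
y.backward()            # traverse that graph backwards to get dy/dx
print(x.grad)           # tensor(7.) because dy/dx = 2x + 3 = 7 at x = 2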

2. Setting Up the Environment

Before building a neural network, ensure you have PyTorch installed. You can install it using pip:

pip install torch torchvision
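
To confirm the installation worked (and to check whether a GPU is visible to PyTorch), a quick check like the following should be enough:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU can be used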

3. Importing Libraries

Start by importing the necessary libraries:

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

4. Preparing the Dataset

For this tutorial, we’ll use the MNIST dataset: 70,000 grayscale 28×28 images of handwritten digits (0–9), split into 60,000 training and 10,000 test images.

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])

trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)
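
The Normalize((0.5,), (0.5,)) transform shifts pixel values from [0, 1] to roughly [-1, 1]. If you want to sanity-check what the loader yields, inspecting a single batch might look like this:

images, labels = next(iter(trainloader))
print(images.shape)   # torch.Size([32, 1, 28, 28]): 32 grayscale 28x28 images
print(labels.shape)   # torch.Size([32]): one integer label (0-9) per image
print(images.min().item(), images.max().item())  # roughly -1.0 and 1.0 after normalization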

5. Defining the Neural Network

We’ll define a simple feedforward neural network with one hidden layer.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # 28*28 input nodes, 128 nodes in the hidden layer
        self.fc2 = nn.Linear(128, 10)       # 128 nodes in hidden layer, 10 output nodes for 10 classes

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # Flatten the input tensor
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = Net()
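
As an optional sanity check, you can pass a dummy batch through the untrained network and confirm that the output has one value per class:

dummy = torch.randn(4, 1, 28, 28)  # a fake batch of four MNIST-sized images
out = net(dummy)
print(out.shape)                   # torch.Size([4, 10]): one raw score (logit) per class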

6. Defining the Loss Function and Optimizer

We’ll use Cross-Entropy Loss and the Stochastic Gradient Descent (SGD) optimizer.

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
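
Note that nn.CrossEntropyLoss applies log-softmax internally and then computes the negative log-likelihood, which is why forward() returns raw logits rather than probabilities. A tiny illustration:

logits = torch.tensor([[2.0, 0.5, 0.1]])      # unnormalized scores for a 3-class example
target = torch.tensor([0])                    # the correct class index
print(nn.CrossEntropyLoss()(logits, target))  # softmax + negative log-likelihood in one step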

7. Training the Neural Network

Next, we’ll train the network for five epochs, making one full pass over the training set in each epoch.

for epoch in range(5):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        
        optimizer.zero_grad()
        
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        
        running_loss += loss.item()
        if i % 100 == 99:  # print every 100 mini-batches
            print(f'[Epoch {epoch + 1}, Batch {i + 1}] loss: {running_loss / 100:.3f}')
            running_loss = 0.0

print('Finished Training')

8. Evaluating the Model

Finally, evaluate the model on the test set. Wrapping the loop in torch.no_grad() disables gradient tracking, which saves memory and computation since we only need forward passes here.

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)  # index of the largest logit is the predicted class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct / total:.2f}%')

Conclusion

You’ve successfully built, trained, and evaluated your first neural network using PyTorch. This simple example forms the foundation for more complex models and applications in deep learning.
