The Devices Behind the Internet: Modems, Routers, Switches, and More
What actually sits between your code and the real world
In my third year, our professor asked the class a simple question: "When you deploy your application to a server, how does a user's request actually reach it?"
I said "the internet." He smiled and said, "Yes, but how?"
I had no answer. I'd been writing code for two years, deploying things, using APIs — and I had zero idea what physically happened between someone clicking a button and my server receiving that request. I just assumed the internet was magic.
It's not magic. There's actual hardware involved, specific devices with specific jobs, and understanding them changed how I think about systems entirely. Whether you're a software engineer, a CS student, or someone who just wants to understand what all those blinking boxes in the IT room actually do — this one's for you.
The Big Picture First
Before getting into individual devices, let me give you the 30-second overview.
Think about a factory. Raw materials come in through a gate. They pass through security checks. Workers route materials to the right departments. Supervisors make sure no single department gets overwhelmed. Finished products leave through the same gate.
A network works similarly. Internet data comes in through one device, gets checked, gets directed, gets distributed to the right machines, and responses go back out the same way. Each device in that chain has one primary responsibility. That separation of responsibilities is what makes networks reliable.
Let's meet each device.
The Modem: Your Gateway to the Internet
Modem stands for Modulator-Demodulator. But forget the acronym. Here's what it actually does.
Your internet service provider (Jio, Airtel, ACT — whoever sends you that monthly bill) delivers internet to your building through a physical cable, fiber line, or telephone wire. The signal traveling through that wire isn't something your laptop or router understands directly. It needs translation.
That's the modem's job. It demodulates incoming signals into digital data your local network can use, and modulates your outgoing data back onto the line (hence the name). It's the interpreter at the border.
Think of it like this: your ISP speaks French, your home network speaks English. The modem is the translator sitting at the border checkpoint, converting every message in both directions.
The modem gives you one IP address — the public IP your ISP assigns to you. Everything on your network shares this one public IP when talking to the outside world.
One important distinction: the modem only connects your network to the internet. It doesn't distribute that connection to multiple devices. That's the next device's job.
The Router: The Traffic Director
Here's where most people get confused. Modems and routers look similar, and ISPs often ship them in the same box, but they do completely different things.
The router's job is traffic direction — routing data packets to the right device within your network.
Imagine you live in an apartment building. All packages come to the building's front desk (the modem). But who delivers Package A to Flat 3B and Package B to Flat 7A? That's the router. It's the front desk manager who knows where everything needs to go.
When you have multiple devices — your phone, your laptop, your smart TV — they all connect to the router. The router assigns each device a private IP address via DHCP (like 192.168.1.101, 192.168.1.102) and keeps a table mapping each address to the device behind it.
When a packet arrives from the internet destined for your laptop, the router checks its table: "192.168.1.102 — that's the laptop. Send it there." When your laptop makes a request, the router notes it, sends it out through the modem, and when the response arrives, delivers it back to the right device.
This translation between your private local IPs and the single public IP from your ISP is called NAT — Network Address Translation. Your router does this constantly, for every device, for every request.
This is also why when you're setting up a backend service and need to accept external traffic, you configure port forwarding on the router. You're literally telling it: "When traffic arrives on port 3000, send it to this specific device."
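The NAT table and port forwarding described above can be sketched as a toy router in Python. All IPs and ports here are made up for illustration; a real router does this per-packet in hardware or kernel code.

```python
# Toy NAT: map outbound connections to public ports, and route inbound
# traffic back to the right private device. IPs/ports are illustrative.
class Router:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.nat_table = {}      # public port -> (private ip, private port), dynamic
        self.port_forwards = {}  # public port -> (private ip, private port), static rule
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """A device sends a request out: allocate a public port, remember who asked."""
        public_port = self.next_port
        self.next_port += 1
        self.nat_table[public_port] = (private_ip, private_port)
        return (self.public_ip, public_port)

    def inbound(self, public_port):
        """A packet arrives from the internet: look up which device it belongs to."""
        if public_port in self.port_forwards:      # explicit port-forwarding rule
            return self.port_forwards[public_port]
        return self.nat_table.get(public_port)     # reply to an earlier outbound request

router = Router("203.0.113.7")
router.port_forwards[3000] = ("192.168.1.102", 3000)  # "port 3000 goes to my laptop"

src = router.outbound("192.168.1.101", 51000)  # phone makes a request
print(router.inbound(src[1]))  # reply comes back to ('192.168.1.101', 51000)
print(router.inbound(3000))    # external traffic reaches ('192.168.1.102', 3000)
```

The key point the sketch makes: without the static `port_forwards` entry, unsolicited inbound traffic on port 3000 has no NAT table entry and simply has nowhere to go.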
Switch vs Hub: How Local Networks Actually Work
Once you're inside a network — say, an office with 50 computers — you need a way to connect all those devices. That's where switches and hubs come in. And this distinction actually matters for performance.
The Hub: The Loud Broadcaster
A hub is the old, dumb way to connect devices. When Device A sends data to Device B through a hub, the hub does something baffling: it sends that data to every single device connected to it.
Every device receives the packet, checks if it's addressed to them, and discards it if it isn't. Device B gets the message. Devices C, D, E, F get noisy junk they have to throw away.
This is terrible for performance. As more devices connect, more unnecessary traffic floods the network. It's like a teacher who answers every student's question by announcing the answer to the entire school over the PA system, even when only one student asked.
Hubs are mostly obsolete now, but you'll still see them mentioned in networking courses.
The Switch: The Smart Messenger
A switch does the same job — connecting multiple devices on a local network — but it does it intelligently.
When Device A sends data to Device B, the switch reads the MAC address (a unique hardware identifier for each device's network card) and delivers the data only to Device B. No broadcasting. No noise. Just direct delivery.
The switch learns over time. The first time a device sends something, the switch notes its MAC address and which port it's connected to. After that, it knows exactly where to deliver future packets.
Think of a hub as a postal worker who makes photocopies of every letter and delivers them to every house on the street. A switch is a postal worker who actually reads the address and delivers to the correct house only.
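The hub-versus-switch contrast can be sketched in a few lines. The MAC addresses and port numbers are made up; the point is that the hub always floods, while the switch floods only until it has learned where a destination lives.

```python
# Toy hub vs switch. Frames carry source MAC, destination MAC, and arrival port.
def hub_deliver(ports, frame):
    """A hub repeats every frame to every port except the one it came in on."""
    return [p for p in ports if p != frame["in_port"]]

class Switch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port, learned from observed traffic

    def deliver(self, ports, frame):
        self.mac_table[frame["src"]] = frame["in_port"]  # learn where the sender lives
        dst_port = self.mac_table.get(frame["dst"])
        if dst_port is not None:
            return [dst_port]                               # direct delivery, no noise
        return [p for p in ports if p != frame["in_port"]]  # unknown MAC: flood once

ports = [1, 2, 3, 4]
sw = Switch()
frame_a = {"src": "aa:aa", "dst": "bb:bb", "in_port": 1}
print(hub_deliver(ports, frame_a))  # [2, 3, 4] — a hub floods every time
print(sw.deliver(ports, frame_a))   # [2, 3, 4] — bb:bb not learned yet, flood once
frame_b = {"src": "bb:bb", "dst": "aa:aa", "in_port": 2}
print(sw.deliver(ports, frame_b))   # [1] — the switch remembered aa:aa is on port 1
```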
In modern offices and data centers, everything uses switches. Hubs are genuinely antique at this point.
Quick distinction: a router connects different networks (your home to the internet). A switch connects devices within the same network (all the computers in your office).
The Firewall: The Security Checkpoint
Every network needs a gatekeeper that decides what traffic is allowed in and what gets blocked. That's a firewall.
Think of it as the security guard at the entrance to a building. They check every person (packet) coming in, compare them against a list of rules, and decide: pass or block.
Firewalls work based on rules you define. Common rules:
Block all incoming traffic on port 22 (SSH) except from specific IPs
Allow all outgoing traffic
Block traffic from known malicious IP ranges
Only allow HTTP (port 80) and HTTPS (port 443) from outside
When I first deployed a backend application on a cloud VM, I couldn't access it from my browser even though the server was running. Turns out the cloud provider's firewall was blocking port 3000. I hadn't opened that port in the security group rules.
Security groups in AWS, firewall rules in GCP, network security groups in Azure — these are all software-defined firewalls. Same concept, different interfaces.
Hardware firewalls sit physically between your router and internal network in enterprise setups. Software firewalls run on individual machines (like Windows Defender's firewall or ufw on Ubuntu). Both do the same job: enforce rules about what traffic is permitted.
A firewall doesn't just check source and destination IPs. Sophisticated firewalls do deep packet inspection — actually examining the content of traffic to detect malware, unauthorized data exfiltration, or attack patterns.
For any application you deploy publicly, firewall configuration isn't optional. It's the first line of defence.
The Load Balancer: The Traffic Distributor
Here's where we get into territory that's directly relevant to backend engineering and production systems.
Imagine your API server handles 1000 requests per minute comfortably. But you launch a feature that goes viral and suddenly you're getting 10,000 requests per minute. One server can't handle that. So you spin up five servers.
But how do users know which server to talk to? You can't give them five different IP addresses. You need something that sits in front of all five servers, receives all incoming traffic, and distributes it across them.
That's a load balancer.
It's like a receptionist at a busy company. You call the main number (the load balancer's IP). The receptionist doesn't handle your request — they transfer you to the first available representative (server). You never know which representative you got. From your perspective, you just called the company.
Load balancers use different algorithms to distribute traffic:
Round Robin: Request 1 goes to Server A, Request 2 to Server B, Request 3 to Server C, Request 4 back to Server A. Simple rotation.
Least Connections: Send each new request to whichever server currently has the fewest active connections. Smart when requests have varying processing times.
IP Hash: Same client IP always goes to the same server. Useful when you need session persistence — the user stays connected to the same backend throughout their session.
Load balancers also do health checks. Every few seconds, they ping each server: "Are you alive?" If a server stops responding, the load balancer stops sending traffic to it and routes everything to the healthy servers. No manual intervention needed.
This is why Nginx and HAProxy are so common in production setups. They're software load balancers handling traffic distribution at scale. AWS ALB (Application Load Balancer), GCP's Cloud Load Balancing — same idea, managed by the cloud provider.
How They All Work Together
Let me walk you through what happens when a user opens your web application. Every step involves one of these devices.
User types your-app.com and hits Enter.
Their DNS lookup resolves your domain to a public IP. That IP belongs to your load balancer.
The request travels across the internet and arrives at your modem — the entry point into your infrastructure's network.
The router receives the packet from the modem and directs it internally toward the load balancer based on its routing table.
The firewall inspects the packet. Is it coming from a blocked IP? Is it hitting an allowed port? Is it a valid HTTP request? Rules pass. Traffic gets through.
The load balancer receives the request. Checks which backend server is least busy. Forwards the request to Server 2 of your backend cluster.
The switch in your data center or rack handles the physical delivery of that packet from the load balancer to Server 2 — finding the right machine by MAC address.
Server 2 processes the request, queries the database, builds a response, and sends it back up the same chain.
The response goes back through the switch, through the load balancer, through the firewall (outbound rules check), through the router, through the modem, and out to the user.
All of that happens in milliseconds.
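The whole chain above can be compressed into one toy trace, with each hop reduced to a single function. Every name, IP, and port here is invented; the point is just to make the order of responsibilities visible.

```python
# Toy end-to-end trace of a request: DNS -> firewall -> load balancer -> backend.
def dns_lookup(domain):
    return {"your-app.com": "203.0.113.10"}[domain]  # resolves to the LB's public IP

def firewall(packet):
    if packet["port"] not in (80, 443):              # only web traffic is allowed in
        raise PermissionError("blocked by firewall rules")
    return packet

def load_balancer(packet, servers):
    packet["server"] = min(servers, key=servers.get) # least-connections pick
    return packet

def backend(packet):
    return f'{packet["server"]} handled {packet["path"]}'

def handle_request(domain, port, path):
    ip = dns_lookup(domain)
    packet = firewall({"ip": ip, "port": port, "path": path})
    packet = load_balancer(packet, {"server-1": 3, "server-2": 1})
    return backend(packet)  # the response then retraces the same chain outward

print(handle_request("your-app.com", 443, "/api/users"))
# server-2 handled /api/users
```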
Why Software Engineers Should Care About Hardware
I get it. You write code. You don't configure Cisco switches. But here's the thing — understanding this stack makes you a better engineer, not just someone who can answer networking questions in interviews.
When your API is slow, you'll know whether to look at application code, check if the load balancer is misconfigured, or verify that firewall rules aren't throttling traffic. When a production deployment goes wrong and traffic isn't reaching your new servers, you'll know to check if the load balancer's health checks are passing before you start digging through application logs.
When a security incident happens, you'll understand what the firewall logs are telling you.
These aren't abstract concepts. Every production backend system runs on top of this stack. Cloud providers abstract most of it behind managed services and dashboards, but the same devices, the same responsibilities, the same traffic flow — it's all still there.
Understanding the hardware helps you reason about the system as a whole. And the engineers who can do that — who see beyond just their service to the full stack — are the ones who build systems that actually hold up under real-world conditions.
This is part of my ongoing series on how the internet and backend systems actually work at a foundational level. If you're a developer who wants to go beyond "it just works" to actually understanding the infrastructure underneath your code, this series is for you.