By PhantomCode Team · Published April 22, 2026 · Last reviewed April 29, 2026 · 13 min read
TL;DR

Networking interview questions at infrastructure-heavy companies test the stack end to end: OSI layering, TCP versus UDP tradeoffs, the three-way handshake and teardown, congestion control (Reno, CUBIC, BBR), TLS 1.3, HTTP/1.1 versus HTTP/2 versus HTTP/3 over QUIC, DNS, CDN anycast, BGP, L4/L7 load balancing, NAT and port exhaustion, and raw sockets. Senior answers focus on failure modes (stale connections, dropped packets, long TTLs) rather than memorized layer definitions.

Networking Interview Questions for Software Engineers

Networking is the layer where distributed systems stop being theoretical. You can build a correct program that still fails in production because a middlebox silently drops packets, a DNS TTL is too long, or a load balancer holds a stale connection. This guide covers the networking questions interviewers at infrastructure-heavy companies (cloud providers, CDNs, payment processors, exchanges) actually ask, and the depth you need to answer them.

At phantomcode.co we see candidates lose offers on questions they assumed were easy because their mental model stopped at HTTP. The sections below walk through the stack end to end, with the failure modes that distinguish a senior answer from a junior one.

Table of Contents

  1. OSI and TCP/IP Layering
  2. TCP vs UDP and When to Choose Which
  3. The TCP Three-Way Handshake and Teardown
  4. Congestion Control: Reno, CUBIC, BBR
  5. TLS 1.3 Handshake and PKI
  6. HTTP/1.1 vs HTTP/2 vs HTTP/3 (QUIC)
  7. DNS Resolution and Caching
  8. CDN Design and Anycast
  9. BGP and Internet Routing
  10. Load Balancing: L4 vs L7, Consistent Hashing
  11. NAT, Port Exhaustion, and Connection Reuse
  12. Sockets and the Bytes You Actually Send
  13. Common Mistakes Candidates Make
  14. FAQ
  15. Conclusion

1. OSI and TCP/IP Layering

Sample question: "Map the OSI model to TCP/IP, and tell me where TLS lives."

OSI has seven layers; TCP/IP is a looser four-layer model (Link, Internet, Transport, Application). In practice engineers talk about L2 (Ethernet), L3 (IP), L4 (TCP/UDP), L7 (HTTP, gRPC). L5 and L6 from OSI do not have crisp analogs in TCP/IP.

TLS sits between L4 and L7. It is commonly called "L6" for convenience, but strictly speaking it is a record protocol layered on top of TCP (or UDP for DTLS, or built directly into QUIC for HTTP/3). An L7 load balancer like Envoy terminates TLS and then routes based on HTTP headers. An L4 load balancer like AWS NLB passes the TCP stream through unchanged, which means it cannot route on SNI unless it is configured to peek into the cleartext ClientHello.

Follow-up: "Where does BGP fit?" BGP is an application on top of TCP on port 179. Routers exchange routing information via BGP messages, but the protocol itself runs at L7 of its own stack. That is confusing precisely because BGP determines how L3 packets are delivered.

2. TCP vs UDP and When to Choose Which

Sample question: "Why would a streaming video service pick UDP over TCP?"

TCP gives you in-order, reliable, flow-controlled, congestion-controlled delivery. UDP gives you a best-effort datagram. You pick UDP when you need latency guarantees that TCP's retransmission and in-order delivery cannot meet (real-time media, games), multicast (which TCP cannot do), or message framing that does not benefit from TCP's byte stream.

For live video, a frame that arrives more than ~100 ms late is useless. Retransmitting it would only waste bandwidth and hold up everything queued behind it. Protocols like WebRTC run SRTP over UDP and do forward error correction at the application layer.

For storage replication or database writes, you want TCP. The latency cost of retransmission is fine; the correctness cost of a lost byte is not.

QUIC is a recent entrant that gives you TCP-like reliability on top of UDP, with built-in TLS and multiplexed streams. It is what HTTP/3 runs on.

Follow-up: "Is UDP really cheaper than TCP once you add reliability?" Usually not. If you build reliability at the application layer you end up reimplementing sequence numbers, acks, RTO, and congestion control. Picking UDP only pays off when you control the trade-off: you can lose some data but not tolerate head-of-line blocking.

3. The TCP Three-Way Handshake and Teardown

Sample question: "Walk through opening and closing a TCP connection, including the socket state transitions."

Handshake:

  1. Client sends SYN with an initial sequence number (ISN).
  2. Server responds with SYN+ACK: its own ISN plus ACK of client's ISN+1.
  3. Client sends ACK of server's ISN+1. Connection is ESTABLISHED.

Teardown is four-way, often observed as three-way when an ACK piggybacks:

  1. One side sends FIN. It enters FIN_WAIT_1.
  2. Other side ACKs. It enters CLOSE_WAIT; the first side enters FIN_WAIT_2.
  3. The second side sends FIN. It enters LAST_ACK.
  4. The first side ACKs and enters TIME_WAIT. After 2×MSL it closes.
# Watch connection states in real time.
ss -tan state time-wait   # list TIME_WAIT sockets.
ss -tan state established | wc -l
 
# Tune listen backlog if SYN-cookies kick in under load.
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
sysctl -w net.core.somaxconn=4096

Follow-up: "Why does TIME_WAIT exist, and what problems can it cause?" It ensures that delayed duplicate segments from the old connection are not accepted by a new connection reusing the same 4-tuple. The problem is port exhaustion on busy clients. Mitigations: use SO_REUSEADDR with care, enable tcp_tw_reuse (but not the deprecated tcp_tw_recycle), or use connection pooling.

4. Congestion Control: Reno, CUBIC, BBR

Sample question: "Explain how TCP congestion control works and why BBR matters."

Traditional loss-based algorithms (Reno, NewReno, CUBIC) treat packet loss as the congestion signal. CUBIC, the Linux default, grows the congestion window as a cubic function of time since the last loss. This works well when loss reliably signals congestion and buffers are shallow, but it struggles under deep buffers: the sender fills them before any loss occurs, so throughput stays high while queuing delay spikes.

BBR (Bottleneck Bandwidth and RTT) models the path instead. It estimates the bottleneck bandwidth and the minimum RTT and paces packets at that rate, largely ignoring loss as a signal. On high-bandwidth, long-RTT paths with some random loss it is dramatically better, because stray drops no longer throttle the window. The flip side is that BBRv1 can keep pushing through genuine congestion, overrunning shallow buffers and crowding out loss-based flows that share the link.

# Check and set the congestion control algorithm.
sysctl net.ipv4.tcp_congestion_control
sysctl -w net.ipv4.tcp_congestion_control=bbr
 
# Watch actual cwnd and RTT on a connection.
ss -tin sport = :443
# Output includes cwnd, rtt, rttvar, retrans.

Follow-up: "What is bufferbloat and how does BBR fight it?" Bufferbloat is excessive latency caused by oversized buffers in routers. A loss-based algorithm fills them until a packet drops, at which point RTT has already ballooned. BBR paces below that fill threshold so queues stay short.

5. TLS 1.3 Handshake and PKI

Sample question: "Walk through a TLS 1.3 handshake and explain what is encrypted when."

TLS 1.3 is a one-round-trip handshake (one RTT) for fresh connections, 0-RTT for resumed ones.

  1. Client sends ClientHello with supported versions, key share (an ephemeral ECDHE public key), cipher suites, and SNI.
  2. Server replies with ServerHello (chosen version, key share), then, encrypted under the newly derived handshake secret: Certificate, CertificateVerify (signature over the transcript), and Finished.
  3. Client verifies certificate chain, verifies CertificateVerify, sends its own Finished. Application data flows under the application secret.

Key points: SNI is in the clear in TLS 1.3 unless ECH (Encrypted Client Hello) is in use. Certificates are encrypted, unlike TLS 1.2. The handshake is authenticated end to end via the transcript hash, so a downgrade attempt changes the Finished.

# Inspect a handshake.
openssl s_client -connect example.com:443 -tls1_3 -servername example.com
 
# Enumerate supported suites.
nmap --script ssl-enum-ciphers -p 443 example.com
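
The same check from application code, as a hedged Go sketch: dial with TLS 1.3 as the floor and print what was actually negotiated (example.com is a placeholder).

// Confirm the negotiated version, cipher suite, and whether the session was resumed.
package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{MinVersion: tls.VersionTLS13})
    if err != nil { panic(err) }
    defer conn.Close()

    st := conn.ConnectionState()
    fmt.Println("version:", tls.VersionName(st.Version)) // tls.VersionName needs Go 1.21+
    fmt.Println("cipher: ", tls.CipherSuiteName(st.CipherSuite))
    fmt.Println("resumed:", st.DidResume)                // true only on session resumption
}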

Follow-up: "What is 0-RTT and what is the risk?" 0-RTT lets a resumed client send application data in the first flight using a PSK. The risk is replay: an attacker can replay the ciphertext and the server will accept it. Safe 0-RTT is limited to idempotent requests (e.g., GET) that you can afford to see twice.

6. HTTP/1.1 vs HTTP/2 vs HTTP/3 (QUIC)

Sample question: "What is head-of-line blocking, and how do HTTP/2 and HTTP/3 address it?"

HTTP/1.1 sends one request per connection at a time (pipelining is effectively dead). If a response is slow, everything behind it waits. Browsers work around this by opening six connections per host.

HTTP/2 multiplexes many streams over one TCP connection. Streams are interleaved with prioritization. This solves application-layer head-of-line blocking, but not transport-layer: a lost TCP segment stalls every stream on that connection until it is retransmitted.

HTTP/3 runs on QUIC, which is multiplexed at the transport layer over UDP. Each stream has its own flow control. A lost packet on stream 1 does not stall stream 7. QUIC also bundles TLS 1.3 into the handshake, supports 0-RTT, and survives network path changes via connection IDs (useful on mobile switching Wi-Fi to cellular).

# Check which protocol your browser negotiated.
curl -v --http3 https://cloudflare.com/
 
# Inspect ALPN in use.
openssl s_client -alpn h2,http/1.1 -connect example.com:443
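
From Go, a quick way to see what was negotiated is to check resp.Proto. Note the standard library speaks HTTP/1.1 and HTTP/2 but not HTTP/3, which needs a separate QUIC library. A minimal sketch:

// Go's client upgrades to HTTP/2 automatically over TLS when the server offers h2 via ALPN.
package main

import (
    "fmt"
    "net/http"
)

func main() {
    resp, err := http.Get("https://cloudflare.com/")
    if err != nil { panic(err) }
    defer resp.Body.Close()
    fmt.Println(resp.Proto) // "HTTP/2.0" if h2 was negotiated, otherwise "HTTP/1.1"
}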

Follow-up: "Why does HTTP/2 sometimes perform worse than HTTP/1.1?" Because head-of-line blocking at the TCP layer can be worse than opening six parallel TCP connections. Also, HTTP/2 server push has been widely deprecated because the server cannot know what the browser already cached.

7. DNS Resolution and Caching

Sample question: "Trace a DNS lookup for api.example.com from your laptop."

  1. Your app calls getaddrinfo. The stub resolver in glibc reads /etc/nsswitch.conf and /etc/resolv.conf.
  2. It asks the configured resolver (often systemd-resolved, or your ISP/corporate resolver) via UDP port 53 (or DoT/DoH).
  3. The recursive resolver checks its cache. On miss, it queries a root server, gets a referral to the com TLD server, which refers to example.com's authoritative nameservers.
  4. The authoritative server returns A or AAAA records with a TTL.
  5. The resolver caches and returns to you.
# Trace each step.
dig +trace api.example.com
 
# See authoritative TTL (not cached).
dig @ns1.example.com api.example.com
 
# DoH test.
curl -s 'https://cloudflare-dns.com/dns-query?name=example.com&type=A' \
  -H 'Accept: application/dns-json' | jq
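
The same lookups from Go, as a sketch: the default path goes through the system stub resolver, while a custom net.Resolver can query a specific server directly (1.1.1.1 is just an example; assumes UDP/53 egress is allowed).

// Resolve through the system path, then directly against a chosen resolver.
package main

import (
    "context"
    "fmt"
    "net"
    "time"
)

func main() {
    // System path: stub resolver, /etc/resolv.conf, local caches.
    ips, _ := net.LookupIP("api.example.com")
    fmt.Println("system resolver:", ips)

    // Bypass the stub and ask one resolver directly (pure-Go resolver).
    r := &net.Resolver{
        PreferGo: true,
        Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
            d := net.Dialer{Timeout: 2 * time.Second}
            return d.DialContext(ctx, network, "1.1.1.1:53")
        },
    }
    ips2, _ := r.LookupIP(context.Background(), "ip4", "api.example.com")
    fmt.Println("direct to 1.1.1.1:", ips2)
}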

Follow-up: "Why did your deploy fail for 30 minutes after a DNS change?" Because a client or downstream resolver cached the old record for its TTL, or negatively cached the NXDOMAIN. Best practice: lower TTL before a cutover, not during.

8. CDN Design and Anycast

Sample question: "How does a CDN route a user to the nearest edge?"

Two dominant approaches:

  1. DNS-based GSLB. The authoritative nameserver returns different IPs based on the client's resolver location (via EDNS Client Subnet, if supported). Accuracy depends on the resolver being close to the user.
  2. Anycast. The same IP prefix is advertised via BGP from many locations. The internet's routing picks the shortest AS path. The user's TCP SYN arrives at whichever PoP is topologically closest.

Cloudflare and Google DNS (8.8.8.8) use anycast for latency and DDoS absorption. Akamai historically used DNS GSLB for fine-grained steering.

Caching at the edge follows standard HTTP semantics: Cache-Control, ETag, Vary. Cache keys include URL plus selected headers. Purging is the hard problem: a global purge can take tens of seconds even at a top-tier CDN.
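
What the origin needs to emit for the edge to do its job is ordinary HTTP. A hedged Go sketch (handler name, path, and header values are illustrative):

// Origin response headers that drive edge caching: freshness, validation, and the cache key.
package main

import "net/http"

func assetHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Cache-Control", "public, max-age=86400, stale-while-revalidate=60")
    w.Header().Set("ETag", `"v42-abc123"`)    // lets the edge revalidate with If-None-Match -> 304
    w.Header().Set("Vary", "Accept-Encoding") // the encoding becomes part of the cache key
    w.Write([]byte("static asset bytes"))
}

func main() {
    http.HandleFunc("/assets/app.js", assetHandler)
    http.ListenAndServe(":8080", nil)
}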

Follow-up: "How does an anycast TCP connection stay on one PoP?" BGP convergence and ECMP hashing usually keep packets on the same path for the duration of a connection. If the path changes mid-connection, the SYN ACK could land at a different PoP and the connection breaks. Mitigations include using stateless origin routing or QUIC's connection IDs.

9. BGP and Internet Routing

Sample question: "What is BGP and why do outages happen when it misbehaves?"

BGP (Border Gateway Protocol) is how autonomous systems (ASes) exchange routes. Each AS announces the prefixes it owns and learns about prefixes others own. Path selection prefers (roughly): highest local pref, shortest AS path, lowest MED, eBGP over iBGP, lowest router ID.

Famous outages have been caused by accidental hijacks and route leaks (an AS announces prefixes it does not own or should not propagate, and its upstreams accept them), BGP misconfigurations (e.g., the 2021 Facebook outage, where Facebook's own routers withdrew the routes to its DNS servers), and slow convergence after a fiber cut.

# Public looking glasses and route collectors.
whois -h whois.radb.net 8.8.8.0/24
# https://bgp.tools for a visual view of AS-level routing.

Follow-up: "What is RPKI and does it actually help?" RPKI signs route origination authorizations. Validators can reject announcements that claim to originate a prefix not authorized by its holder. Adoption is meaningful today: most major transit providers drop RPKI-invalid routes. It prevents accidental leaks and some hijacks but not path manipulation attacks.

10. Load Balancing: L4 vs L7, Consistent Hashing

Sample question: "Design a load balancer that handles 10 million concurrent connections."

Start by clarifying whether you need L4 or L7. L4 (TCP/UDP) is cheaper and can be made effectively stateless with techniques like Maglev hashing: hash a packet's 5-tuple to a backend deterministically, so every packet of a flow lands on the same backend. Return traffic either flows back through the LB or uses direct server return (DSR), where the backend replies straight to the client and skips the LB on the return path.

L7 terminates the connection, parses HTTP, and routes based on path, header, or cookie. This is more expensive but enables features like per-path rate limiting, canary routing, and header rewriting.

Consistent hashing is essential for stateful routing (for cache affinity) and for minimizing remapping when you add or remove backends. Maglev's contribution is a lookup table that gives O(1) lookups with bounded remapping even when backends churn.

// Tiny sketch of consistent hashing: a sorted ring of hash points mapped to backends.
// Needs: import ("hash/crc32"; "sort")
type Ring struct { sorted []uint32; owner map[uint32]string } // hash point -> backend name
 
func (r *Ring) Pick(key string) string {
    h := crc32.ChecksumIEEE([]byte(key))
    // The first point clockwise from the key's hash owns the key.
    i := sort.Search(len(r.sorted), func(i int) bool { return r.sorted[i] >= h })
    if i == len(r.sorted) { i = 0 } // wrap around the ring
    return r.owner[r.sorted[i]]
}
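
To populate the ring you typically hash each backend under several virtual-node labels so keys spread evenly. The Add helper below is an assumption for illustration, not part of the sketch above.

// Hypothetical Add: place a backend on the ring at several virtual-node points.
// Needs: import ("fmt"; "hash/crc32"; "sort")
func (r *Ring) Add(backend string, vnodes int) {
    if r.owner == nil { r.owner = map[uint32]string{} }
    for v := 0; v < vnodes; v++ {
        h := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", backend, v)))
        r.owner[h] = backend
        r.sorted = append(r.sorted, h)
    }
    sort.Slice(r.sorted, func(i, j int) bool { return r.sorted[i] < r.sorted[j] })
}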

Follow-up: "Why do least-connections algorithms go wrong under sticky sessions?" Because a backend with many sticky long-lived connections cannot shed load. You end up with hot backends and idle ones. Solutions: weighted least-connections, connection draining, or moving to stateless sessions.

11. NAT, Port Exhaustion, and Connection Reuse

Sample question: "A client behind NAT is hitting connection failures. Walk me through how to debug."

NAT rewrites source IP and port. A busy client makes many outbound connections, each consuming a source port in the NAT's pool. Pools are per-destination tuple or global depending on the NAT type. Exhausting the pool returns errors or stalls SYNs.

Symptoms: intermittent EADDRNOTAVAIL or timeouts after a traffic spike, with recovery once traffic idles. Linux clients also exhaust net.ipv4.ip_local_port_range when making many outbound connections, especially when TIME_WAIT sockets hold ports faster than they can be reused.

Mitigations:

  • Connection pooling at the application layer (HTTP keep-alive, gRPC multiplexing).
  • Increase ip_local_port_range.
  • Use SO_REUSEADDR and SO_REUSEPORT carefully.
  • At scale, deploy egress through multiple NAT gateways or a single big NAT with a large address pool.
# Check port range.
sysctl net.ipv4.ip_local_port_range
# Count TIME_WAIT.
ss -tan state time-wait | wc -l
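
At the application layer, pooling is usually one configuration change. A hedged Go sketch using net/http's transport (the limits and endpoint are illustrative):

// Reuse a small set of TCP connections instead of burning a source port per request.
package main

import (
    "io"
    "net/http"
    "time"
)

var client = &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 100,              // the default of 2 is far too low for a busy client
        IdleConnTimeout:     90 * time.Second, // keep-alives recycle sockets within this window
    },
    Timeout: 10 * time.Second,
}

func main() {
    for i := 0; i < 1000; i++ {
        resp, err := client.Get("https://api.example.com/health") // illustrative endpoint
        if err != nil { continue }
        io.Copy(io.Discard, resp.Body) // drain so the connection can return to the pool
        resp.Body.Close()
    }
}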

Follow-up: "Why is HTTP/2 especially helpful behind NAT?" Because one TCP connection carries many streams. Ten thousand requests do not need ten thousand source ports.

12. Sockets and the Bytes You Actually Send

Sample question: "Write a minimal TCP server that accepts a connection and echoes bytes."

#include <arpa/inet.h>     /* htons, htonl */
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
 
int main(void) {
    int sfd = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    /* Allow rebinding the port while old sockets sit in TIME_WAIT. */
    setsockopt(sfd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(9000),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(sfd, (struct sockaddr*)&addr, sizeof(addr));
    listen(sfd, 128);   /* backlog of completed, not-yet-accepted connections */
 
    for (;;) {
        int cfd = accept(sfd, NULL, NULL);
        char buf[4096];
        ssize_t n;
        while ((n = read(cfd, buf, sizeof(buf))) > 0) {
            /* write() may take fewer bytes than asked; loop until everything is sent. */
            ssize_t w = 0;
            while (w < n) {
                ssize_t k = write(cfd, buf + w, n - w);
                if (k < 0) break;   /* peer closed or errored; stop echoing */
                w += k;
            }
        }
        close(cfd);
    }
}

Follow-ups an interviewer will chain:

  • Why the loop around write? Because write can return fewer bytes than requested.
  • Why SO_REUSEADDR? To allow rebinding while a previous socket is in TIME_WAIT.
  • What if the client writes faster than the server reads? The kernel's receive buffer fills, TCP advertises a smaller window, and the client's send eventually blocks or returns short. This is flow control.
  • What is Nagle's algorithm, and when do you disable it? Nagle coalesces small writes. Latency-sensitive workloads (e.g., interactive protocols) disable it with TCP_NODELAY.
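
In C the toggle is setsockopt with IPPROTO_TCP and TCP_NODELAY; from Go it is one method call, and Go actually disables Nagle by default. A minimal sketch:

// Toggling Nagle on a TCP connection from Go (Go sets TCP_NODELAY, i.e. Nagle off, by default).
package main

import "net"

func main() {
    conn, err := net.Dial("tcp", "example.com:80")
    if err != nil { panic(err) }
    defer conn.Close()

    tcp := conn.(*net.TCPConn)
    tcp.SetNoDelay(false) // re-enable Nagle: coalesce small writes, trading latency for fewer packets
    tcp.SetNoDelay(true)  // TCP_NODELAY: flush small writes immediately (the default in Go)
}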

13. Common Mistakes Candidates Make

  • Confusing TCP with HTTP. They are different layers; a load balancer, a proxy, and a browser all handle them differently.
  • Treating DNS as instantaneous and consistent. TTLs, negative caching, and split-horizon DNS routinely cause outages.
  • Forgetting that a TCP socket has two buffers (send and receive) with independent flow control.
  • Claiming that HTTPS authenticates both sides. By default it authenticates only the server. Mutual TLS (mTLS) adds client auth and must be configured.
  • Believing that UDP is "unreliable but fast." It is unreliable, but it is not automatically fast or lossless on the host either: if the receiver cannot keep up, the kernel drops datagrams once the UDP receive queue overflows.
  • Saying "I'd use HTTP/2 for everything." HTTP/2's single-connection model interacts poorly with some proxies and with TCP head-of-line blocking. HTTP/3 mitigates some of this; HTTP/1.1 is still fine for some workloads.
  • Ignoring MSS and MTU. A VPN or a misconfigured cloud setup can drop large packets silently and cause connections to hang after the handshake.
  • Saying "load balancer" without specifying L4 or L7. They have different failure modes.

14. FAQ

How much BGP do I need to know as an application engineer? Enough to know that the internet is not a flat network and that routing can change under you. You should recognize the terms AS, prefix, and anycast and understand that DNS and CDN decisions live on top of BGP.

What should I practice for sockets questions? Write a toy HTTP server in C or Go without a framework. Add timeouts, handle partial reads, and add TLS. The depth of follow-ups you survive is proportional to how much you have actually shipped at this level.

How often does HTTP/3 come up in interviews? Increasingly, especially at CDNs and mobile platforms. You should be able to contrast it with HTTP/2, explain why QUIC is on UDP, and discuss 0-RTT safety.

Do I need to read the RFCs? For TCP (RFC 9293), TLS 1.3 (RFC 8446), and QUIC (RFC 9000), skimming at least once is worth it. They are more readable than most docs and will calibrate your vocabulary.

What tools actually matter in the interview? tcpdump, wireshark, curl -v, dig +trace, ss, traceroute. If you can reach for them by name and describe what you would look for, you are ahead.

15. Conclusion

Networking interviews reward specificity. The candidate who says "it uses HTTP/2 for multiplexing" loses to the candidate who says "HTTP/2 gives you stream multiplexing but still hits TCP head-of-line blocking; I would evaluate HTTP/3 if our clients are mobile and loss is bursty." The second answer demonstrates a mental model that survives follow-ups.

The fastest path to that depth is instrumenting real traffic. Capture a TLS handshake in Wireshark. Watch your own ss -tin output under load. Read a CDN's engineering blog on anycast. When you can explain every hop from a keystroke to a pixel, you are ready.

For drilling these topics with live interviewer-style follow-ups and realistic system design scenarios, phantomcode.co offers targeted mock interviews that match the depth real infra and platform teams expect.

Frequently Asked Questions

What is the difference between TCP and UDP and when should you choose each?
TCP provides ordered, reliable, connection-oriented byte streams with congestion control, retransmission, and flow control, at the cost of latency from handshakes and head-of-line blocking. UDP is connectionless, unordered, and unreliable, but has minimal overhead. Choose TCP for HTTP, databases, file transfer; choose UDP for DNS, real-time media, gaming, and protocols like QUIC that build their own reliability on top.
How does the TCP three-way handshake work?
The client sends SYN with an initial sequence number; the server replies with SYN-ACK acknowledging the client's sequence and providing its own; the client sends ACK to confirm. After this three-message exchange both sides know the other's starting sequence numbers and the connection is ESTABLISHED. Teardown uses a four-way exchange (FIN, ACK, FIN, ACK) so each side can close its half-duplex independently.
What are the key differences between HTTP/2 and HTTP/3?
HTTP/2 multiplexes streams over a single TCP connection but suffers TCP-level head-of-line blocking when a packet is lost. HTTP/3 runs over QUIC (UDP-based) which provides per-stream reliability, so a lost packet in one stream does not block others. QUIC also integrates TLS 1.3 into the handshake for fewer round trips and supports connection migration across IP changes, helping mobile clients.
What is the difference between an L4 and an L7 load balancer?
An L4 load balancer routes based on TCP/UDP headers (IP, port) without inspecting payload, so it is fast and protocol-agnostic but cannot make request-content decisions. An L7 load balancer terminates the connection and routes based on application data (HTTP path, headers, cookies), enabling features like sticky sessions, request rewriting, and content-based routing at the cost of higher CPU and TLS termination overhead.
How does TLS 1.3 differ from TLS 1.2?
TLS 1.3 reduces the handshake to one round trip (or zero for resumed sessions with 0-RTT), removes legacy cipher suites and key exchange methods (RSA key exchange, CBC modes, SHA-1), mandates forward secrecy via ephemeral Diffie-Hellman, and encrypts more of the handshake metadata. The result is faster connection setup and a smaller attack surface, which is why QUIC (and therefore HTTP/3) mandates TLS 1.3 and modern HTTP/2 deployments increasingly prefer it.
