
TCP vs UDP: Key Differences Every Developer Must Know (2026)

TCP and UDP are the two workhorses of the transport layer. Choosing the wrong one - or not understanding the tradeoffs - is one of the most common sources of latency, dropped connections, and unexpected behavior in networked applications.

Why Transport Protocol Choice Matters

Consider two developers working on the same company's infrastructure. The first is building an internal financial reconciliation system that transfers ledger files between services. The second is building the multiplayer position synchronization for a real time game. Both will use TCP/IP as their network foundation - but the right transport layer protocol is different for each use case, and choosing incorrectly causes measurable problems.

The finance system choosing UDP might silently lose records mid-transfer. The game choosing TCP will stutter and freeze when a single packet is lost and the entire stream stalls waiting for retransmission - a phenomenon called head of line blocking. Understanding TCP vs UDP at a mechanical level is foundational knowledge for any engineer working with networks, distributed systems, or real time applications.

TCP: The Reliable Workhorse

TCP (Transmission Control Protocol) is defined in RFC 793 (1981) and provides a reliable, ordered, byte-stream transport. Every feature TCP has comes with a cost, and that cost is latency and overhead. Here is what TCP guarantees:

Connection-Oriented: The Three-Way Handshake

Before any data flows, TCP establishes a connection with a three-way handshake:

Client                    Server
  |-------- SYN --------->|    (I want to connect, my seq=x)
  |<---- SYN-ACK ---------|    (OK, your seq acknowledged, my seq=y)
  |-------- ACK --------->|    (Your seq acknowledged, connection open)
  |===== DATA FLOWS =======|

This handshake establishes sequence numbers for both sides and confirms the server is reachable and willing to accept connections. The cost: at least one round-trip time (RTT) before the first byte of data can be sent. On a 50ms RTT connection, that is 50ms of latency before any payload transfers. TLS adds another 1–2 RTTs on top for its own handshake (TLS 1.3 reduces this to 1 RTT, and to 0-RTT for resumed sessions).
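Because `connect()` does not return until the three-way handshake finishes (or fails), the handshake cost is directly measurable from application code. A minimal sketch using Python's standard `socket` module; the helper name is ours, not a standard API:

```python
import socket
import time

def tcp_connect_time(host, port, timeout=5.0):
    """Roughly measure the TCP handshake cost: connect() blocks
    until the three-way handshake completes (or fails)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.monotonic() - start
    return elapsed
```

Calling `tcp_connect_time("example.com", 443)` on a ~50ms path should return roughly 0.05 seconds, which is the handshake RTT the paragraph above describes (TLS setup would then add its own round trips on top).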

Reliable Delivery

Every TCP segment is acknowledged by the receiver. If an acknowledgment does not arrive within a timeout period, the sender retransmits the segment. This continues until either the segment is acknowledged or the connection is considered failed. The receiver discards duplicate segments using sequence numbers.

Ordered Delivery

TCP segments include sequence numbers. If segment 3 arrives before segment 2 (possible on any real network since packets can take different routes), the receiver buffers segment 3 and waits for segment 2 before delivering either to the application. This is what applications receive as a byte stream - always in order, always complete.

Flow Control and Congestion Control

TCP includes mechanisms to prevent the sender from overwhelming the receiver (flow control via the receive window in each ACK) and to prevent the sender from overwhelming the network (congestion control via algorithms like CUBIC, BBR, and Reno). These mechanisms are sophisticated and largely transparent to application developers, but they are why a TCP connection that encounters packet loss can slow dramatically - congestion control backs off the send rate when it detects loss.

UDP: The Fast, Minimal Protocol

UDP (User Datagram Protocol) is defined in RFC 768 (1980) and is intentionally minimal. A UDP header is only 8 bytes (vs TCP's 20 bytes minimum). UDP provides:

  • Source port and destination port
  • Length of the datagram
  • A checksum for error detection (optional over IPv4, mandatory over IPv6)

That is it. UDP provides no connection establishment, no acknowledgments, no retransmission, no ordering, no flow control, and no congestion control. It is called a "connectionless" or "fire-and-forget" protocol.

What this means in practice:

  • Datagrams may arrive out of order
  • Datagrams may be lost (silently, with no notification to either side)
  • Datagrams may be duplicated (rare but possible)
  • There is no handshake, so data transmission can begin immediately
  • Each datagram is independent - no head of line blocking
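The "fire-and-forget" model is visible in the API itself: no `listen()`, no `accept()`, no connection object. A minimal sketch over loopback with Python's standard `socket` module (on loopback the datagram reliably arrives; across a real network the same `sendto()` could be silently dropped):

```python
import socket

# A UDP receiver is just a socket bound to a port: no listen(), no
# accept(), no per-connection state. recvfrom() returns one whole
# datagram at a time.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
receiver.settimeout(2.0)
addr = receiver.getsockname()

# The sender performs no handshake: sendto() transmits immediately.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"pos:12.5,40.2", addr)

data, peer = receiver.recvfrom(2048)
print(data)  # b'pos:12.5,40.2' (loopback delivers; a real network might not)
sender.close()
receiver.close()
```

Note that the sender gets no signal either way: if the datagram had been dropped, `sendto()` would still have returned successfully.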

TCP vs UDP: Direct Comparison

Property              TCP                                        UDP
Connection model      Connection-oriented (3-way handshake)      Connectionless
Reliability           Guaranteed delivery (retransmission)       Best-effort, no guarantee
Ordering              In-order delivery guaranteed               No ordering
Error checking        Checksum + retransmit on error             Checksum only (optional on IPv4)
Flow control          Yes (receive window)                       No
Congestion control    Yes (CUBIC, BBR, etc.)                     No
Header size           20–60 bytes                                8 bytes
Latency               Higher (handshake + ACK overhead)          Lower (no setup, no ACKs)
Throughput            Lower for small/bursty messages            Higher for small/bursty messages
Broadcast/multicast   No                                         Yes

When to Use TCP

TCP is the correct choice whenever correctness and completeness matter more than latency. The classic use cases:

HTTP and HTTPS (Web)

Every webpage, API call, and file download uses TCP. A missing packet in an HTTP response would corrupt the page or the JSON payload. Retransmission ensures the complete, correct data arrives. HTTP/1.1 and HTTP/2 both run over TCP. HTTP/3 is the notable exception - it runs over QUIC, which is built on UDP but implements its own reliability.

Database Connections

MySQL, PostgreSQL, MongoDB, Redis - all use TCP. Sending a partial SQL query or receiving a partial result set would be catastrophic. TCP's reliability guarantee means the application never has to handle packet loss.

SSH and Remote Administration

Missing bytes in an SSH session mean corrupted commands. SSH runs on TCP port 22. The ordered, reliable stream maps perfectly to the interactive terminal model.

Email (SMTP, IMAP, POP3)

An email with missing bytes is a corrupted email. All email protocols use TCP.

File Transfer (FTP, SFTP, SCP)

Binary file integrity requires every byte to arrive in order. TCP is mandatory.

When to Use UDP

UDP is the correct choice when latency or throughput matters more than guaranteed delivery, or when the application implements its own error handling.

DNS

DNS queries are tiny (usually under 512 bytes) and fast. If a UDP DNS query is lost, the resolver simply retries. The overhead of a TCP handshake for every DNS lookup would be unacceptably slow. DNS uses UDP port 53 for queries; when a response will not fit in the UDP payload limit (classically 512 bytes, larger with EDNS0), the server sets the truncated flag and the resolver retries over TCP.
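The DNS wire format (RFC 1035) is simple enough to construct by hand, which illustrates how small these datagrams are. This sketch only builds the query bytes; actually transmitting them (`sendto()` to a resolver on UDP port 53) requires network access, and the helper name and query ID are illustrative:

```python
import struct

def build_dns_query(name, query_id=0x1234):
    """Build a minimal DNS A-record query in RFC 1035 wire format."""
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, rest 0.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode("ascii") for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
print(len(packet))  # 29: a 12-byte header plus a 17-byte question
```

An entire DNS question fits in 29 bytes; a TCP handshake just to carry it would roughly double the round trips for every lookup.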

Real-Time Multiplayer Gaming

A player's position 100ms ago is useless - receiving a stale position after retransmission is worse than receiving nothing. Games like first-person shooters send position updates 30–60 times per second over UDP. If a packet is lost, the next update arrives shortly and the client interpolates. TCP's retransmission would cause the game to freeze while waiting for the lost packet, even though that data is already stale.

Video and Audio Streaming (Live)

A missing frame in a live video stream is acceptable - the stream skips ahead. A retransmitted frame arriving 2 seconds late would cause a 2-second freeze. Live video conferencing (WebRTC, Zoom, Teams), VoIP (SIP/RTP), and live sports streaming all use UDP. Recorded video (Netflix, YouTube) uses TCP because the player buffers enough data that the latency of retransmission is tolerable.

IoT Telemetry and Sensor Data

A temperature sensor sending readings every 5 seconds does not need guaranteed delivery. If one reading is lost, the next one arrives in 5 seconds. The overhead of TCP connections for thousands of sensors sending small packets is wasteful. Lightweight UDP-based protocols such as CoAP, or MQTT-SN (the UDP-capable variant of MQTT; standard MQTT runs over TCP), are common in constrained IoT environments.

QUIC and HTTP/3

This is worth understanding because HTTP/3 is now widely deployed. QUIC runs over UDP and reimplements connection management, reliability, and TLS directly at the application layer. It solves TCP's head of line blocking problem by multiplexing streams independently - a lost packet only blocks its stream, not all streams on the connection. Chrome, Firefox, and major CDNs serve HTTP/3 today. QUIC demonstrates that UDP is not "unreliable" in practice - applications built on UDP can implement exactly the reliability semantics they need.

The choice is not always "TCP for reliability, UDP for speed." Modern applications like QUIC show that reliability and low latency are compatible when built correctly on UDP. But for most developers, TCP remains the right default unless you have a specific reason to use UDP.

TCP Head-of-Line Blocking Explained

This is the most important TCP limitation to understand for application design. TCP delivers a byte stream in order. If packet 5 in a sequence is lost, the receiver holds packets 6, 7, 8... in a buffer and waits for packet 5 to be retransmitted. Nothing is delivered to the application until packet 5 arrives.
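The receiver-side mechanics can be sketched in a few lines: a TCP receiver buffers out-of-order segments and only releases a contiguous prefix to the application. This is a simulation of that reorder buffer, not real TCP code:

```python
def deliver_in_order(segments):
    """Simulate a TCP receiver: buffer out-of-order segments and only
    release a contiguous, in-order prefix to the application."""
    buffered = {}
    next_seq = 0
    delivered = []
    for seq, payload in segments:
        buffered[seq] = payload
        # Release everything contiguous from next_seq onward.
        while next_seq in buffered:
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered

# Segment 1 is "lost": segments 2, 3, 4 arrive but sit in the buffer,
# and the application receives nothing past segment 0.
arrived = [(0, "a"), (2, "c"), (3, "d"), (4, "e")]
print(deliver_in_order(arrived))  # ['a'] -- everything after the gap is blocked
```

Once the retransmitted segment 1 finally arrives, the whole buffered run is released at once, which is exactly the "stall, then burst" pattern applications observe on lossy links.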

In HTTP/2, which multiplexes multiple request/response streams over a single TCP connection, a lost packet blocks ALL streams - not just the one whose data was in the lost packet. A slow image load on a webpage can delay the JavaScript and CSS that are on separate HTTP/2 streams but share the same TCP connection.

HTTP/3 / QUIC solves this by multiplexing streams at the QUIC layer over UDP. A lost packet only blocks the QUIC stream it belongs to; other streams continue uninterrupted.

Common Port Numbers

Understanding which protocol a port uses is practical knowledge for firewall rules, security group configuration, and debugging:

  • TCP 22 - SSH
  • TCP 25, 587, 465 - SMTP (email sending)
  • TCP 80, 443 - HTTP, HTTPS
  • TCP 3306 - MySQL
  • TCP 5432 - PostgreSQL
  • TCP 6379 - Redis
  • UDP 53 - DNS
  • UDP 67, 68 - DHCP
  • UDP 123 - NTP (time synchronization)
  • UDP 161, 162 - SNMP (network monitoring)
  • UDP 500 - IKE (IPsec VPN key exchange)
  • UDP 5004, 5005 - RTP (real time media)

Look Up Any Port Instantly

Find the protocol, service name, and common use case for any port number. Essential for firewall rules and security audits.

Open Port Lookup Tool →

Frequently Asked Questions

Is UDP faster than TCP?

In terms of raw latency for a single message, yes - UDP can deliver the first byte in zero RTTs while TCP requires at least one RTT for the handshake. For sustained high-throughput transfers over a reliable network, TCP's throughput can match or exceed naive UDP implementations because TCP's congestion control algorithms (especially BBR) are highly optimized. The real advantage of UDP is the absence of head of line blocking and the ability to implement exactly the reliability semantics you need.

Can UDP be reliable?

Yes. Applications can implement their own acknowledgment, sequencing, and retransmission logic on top of UDP. QUIC does this, as do many game networking libraries (ENet, RakNet, KCP). The advantage is that you can implement exactly the reliability you need - for example, reliable delivery for critical events but unreliable for position updates - rather than TCP's all-or-nothing approach.
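The core idea is small enough to sketch: a stop-and-wait ARQ that tags each message with a sequence number and retransmits until it gets through. This simulation collapses the ACK path into the channel's return value and stands in for a real lossy UDP socket; the class and function names are illustrative:

```python
import random

class LossyChannel:
    """In-memory stand-in for a UDP path that drops packets at random."""
    def __init__(self, loss_rate, seed=42):
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)  # seeded for a repeatable drop pattern

    def send(self, packet):
        # Returns the packet if it "arrives", None if it was dropped.
        return None if self.rng.random() < self.loss_rate else packet

def reliable_send(channel, messages, max_tries=50):
    """Stop-and-wait ARQ sketch: attach a sequence number and
    retransmit each message until the receiver would have it."""
    received = []
    for seq, msg in enumerate(messages):
        for _ in range(max_tries):
            if channel.send((seq, msg)) is not None:
                received.append(msg)  # delivered; an ACK would flow back here
                break
        else:
            raise TimeoutError(f"gave up on seq={seq}")
    return received

chan = LossyChannel(loss_rate=0.3)
print(reliable_send(chan, ["a", "b", "c"]))  # ['a', 'b', 'c'] despite 30% loss
```

Real implementations (QUIC, game networking libraries) add windows, timeouts, and selective acknowledgment on the same foundation, and can apply it per message class: reliable for critical events, unreliable for stale-able data like positions.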

Why does DNS use UDP instead of TCP?

DNS queries are small (typically under 512 bytes for simple records), transactional (one query, one response), and latency-sensitive. UDP allows the resolver to send a query with no handshake and receive a response in a single round trip. If the response is lost, the resolver retries. For large DNS responses (DNSSEC records, zone transfers) DNS falls back to TCP automatically using the TC (Truncated) flag.

What is the TCP four-way termination?

TCP connections are terminated gracefully with a four-way exchange: the side closing first sends FIN, the peer ACKs it (and can still send data), the peer sends its own FIN when done, and the first side ACKs. Each direction closes independently. The TIME_WAIT state after closure (two times the maximum segment lifetime; 60 seconds on Linux) prevents old duplicate packets from being mistaken for part of a new connection. This is why rapidly cycling TCP connections under load can exhaust ephemeral ports.

What is TCP keepalive and when should I use it?

TCP keepalive is a mechanism to detect dead connections. The kernel sends probe packets on idle connections after a configurable idle timeout (default is 2 hours on Linux, which is too long for most applications). Application-level keepalives (e.g., HTTP keep-alive headers, WebSocket ping/pong, database ping queries) are more practical and configurable. Use keepalives on any long-lived TCP connection that could be silently dropped by a NAT, load balancer, or firewall with idle timeout rules.
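Tuning kernel keepalive per socket looks like this in Python. A sketch assuming a Linux host (the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific, hence the guards); the helper name and timing values are illustrative:

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    """Enable TCP keepalive with sane timings: probe after 60s idle,
    then every 10s, declare the peer dead after 5 failed probes.
    (Far shorter than the 2-hour Linux kernel default.)"""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):    # Linux-only options below
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero: enabled
s.close()
```

Even with kernel keepalive tuned, application-level pings remain useful because they also verify that the process on the other end is responsive, not just that the TCP connection exists.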

The Bottom Line

The decision tree is straightforward:

  1. Do you need every byte to arrive, in order, without errors? Use TCP. Web, databases, email, file transfers, SSH.
  2. Is low latency more important than guaranteed delivery, or are you implementing your own reliability? Use UDP. DNS, live video/audio, gaming, IoT telemetry, QUIC.
  3. Are you building something that needs multiplexed streams with low latency and reliability? Use QUIC (UDP-based) via an existing library rather than implementing from scratch.

For most web and API developers, TCP via HTTP is the correct and only choice. Understanding UDP and its tradeoffs becomes important when you move into real time systems, custom protocols, or performance-critical infrastructure.

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.