API Paradigms: REST, GraphQL, gRPC, and the Evolution of HTTP

Introduction: More Than Just Acronyms

In software engineering, we’re surrounded by terms like REST, gRPC, and GraphQL. But truly understanding them isn’t about memorizing definitions. It’s about understanding the specific problems they were designed to solve. At their core, they all answer one fundamental question: How do different pieces of software talk to each other?

To make this journey intuitive, let’s use an analogy. Imagine a restaurant:

  • The Client (your app) is the Customer.
  • The Server (the backend) is the Kitchen.
  • The API is the Menu—it defines what you can order and how.

REST, GraphQL, and gRPC are simply different styles of menus and ordering processes. This article follows a detailed conversation that unravels these concepts, starting with the APIs themselves and then diving deep into the “highways” they run on—the HTTP protocols.


Part 1: The Foundation - RPC vs. REST

The conversation begins with a foundational question that cuts through the jargon.

Q: What’s the real difference between RPC and REST? Aren’t all remote API calls just “remote procedure calls” anyway?

A: That’s a sharp observation. Technically, yes, any time a client triggers code on a server over a network, it’s a “remote procedure call.” The difference isn’t in what they do, but in their philosophy and how they do it.

Let’s imagine two craftsmen in a workshop:

  • The RPC Craftsman (Action-Oriented): You give this craftsman direct, verb-based commands like sandTheWood() or tightenTenCmBolt(). The commands are specific actions.
  • The REST Craftsman (Resource-Oriented): You tell this craftsman which noun (or “resource”) to work on and use a standard set of tools (HTTP methods) to describe the action. For example: “Here is the wood (/wood), GET it for me,” or “Here is bolt #123 (/bolts/123), PUT it in a ‘tightened’ state.”

So, while RPC is like calling a function by its name, REST is an architectural style focused on managing resources with a uniform set of operations (GET, POST, PUT, DELETE). This resource-based approach is what allowed the web to scale.
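To make the contrast concrete, here is a minimal Python sketch. The endpoint paths and helper names are hypothetical, chosen only to mirror the craftsman analogy:

```python
# Hypothetical helpers contrasting how each style shapes a request.

def rpc_request(action, params):
    # RPC: the verb lives in the endpoint name; almost everything is a POST
    return ("POST", f"/api/{action}", params)

def rest_request(method, resource, resource_id=None, body=None):
    # REST: the URL names a noun (resource); the HTTP method carries the verb
    path = f"/{resource}" + (f"/{resource_id}" if resource_id is not None else "")
    return (method, path, body)

# "Tighten bolt #123" in each style:
print(rpc_request("tightenBolt", {"bolt_id": 123}))
# -> ('POST', '/api/tightenBolt', {'bolt_id': 123})
print(rest_request("PUT", "bolts", 123, {"state": "tightened"}))
# -> ('PUT', '/bolts/123', {'state': 'tightened'})
```

Notice that the RPC call invents a new verb per operation, while the REST call reuses the same small set of HTTP methods against different nouns.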


Part 2: Solving REST’s Puzzles with GraphQL

With REST as our baseline, the next question explores its limitations, especially in the age of complex mobile and web applications.

Q: What problems did GraphQL come along to solve?

A: As applications grew, developers using REST often faced two frustrating issues:

  1. Over-fetching: You request a list of users from /users, and the server sends back 10 fields per user (ID, name, email, last login, etc.). But your UI only needs to display their names. The other 9 fields are wasted bandwidth.
  2. Under-fetching: To show a blog post, you first hit /posts/123 to get the article. Then you take the authorId from the response and hit /users/456 to get the author’s name. Then you hit /posts/123/comments to get the comments. That’s three round-trips for one screen.

GraphQL’s solution is elegant. Instead of a fixed menu (REST), it offers a buffet. The client tells the server exactly what data it needs, including nested relationships, and gets it all back in a single request.
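As a sketch of what that single request looks like, here is the blog-post example expressed as one GraphQL query posted as JSON. The schema fields (post, author, comments) are hypothetical:

```python
import json

# The three REST round-trips from the example collapse into one query.
# The schema (post, title, author, comments) is invented for illustration.
query = """
query {
  post(id: "123") {
    title
    author { name }
    comments { text }
  }
}
"""

# GraphQL is usually sent as a JSON body to a single endpoint such as /graphql
payload = json.dumps({"query": query})
print(payload)
```

The client asks only for the fields its UI needs, so the over-fetching and under-fetching problems disappear in the same stroke.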

Q: What are the downsides of giving the client so much power?

A: This flexibility isn’t free. GraphQL shifts complexity from the client to the server. The backend team now has new responsibilities:

  • Security & Performance: A client could send a maliciously complex, deeply nested query that grinds the database to a halt. The server must implement safeguards like query depth limits, complexity analysis, and timeouts.
  • Resolver Complexity: The “N+1 problem” that REST clients faced as extra round-trips can reappear on the server. If a client asks for 100 posts and their authors, a naive implementation makes 1 query (for the posts) + 100 more (one per author). This requires careful server-side logic, typically batching lookups with a tool like DataLoader.
  • Caching: RESTful APIs benefit from standard HTTP caching (a GET /users/123 request can be easily cached). GraphQL typically uses a single /graphql endpoint for all queries (via POST), which bypasses these built-in browser and CDN caching mechanisms.
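A toy sketch makes the N+1 problem and the batching fix concrete. An in-memory dict stands in for the database, and a counter stands in for round-trips; no real DataLoader library is involved:

```python
# In-memory stand-in for a database; each fetch_* call counts as one query.
DB_AUTHORS = {1: "Ada", 2: "Linus"}
query_count = 0

def fetch_author(author_id):
    global query_count
    query_count += 1              # one round-trip per author
    return DB_AUTHORS[author_id]

def fetch_authors_batched(author_ids):
    global query_count
    query_count += 1              # one round-trip for the whole batch
    return {i: DB_AUTHORS[i] for i in set(author_ids)}

posts = [{"id": n, "author_id": (n % 2) + 1} for n in range(100)]

# Naive resolver: one author query per post (the "N" in N+1)
for post in posts:
    fetch_author(post["author_id"])
naive_queries = query_count

# Batched resolver: collect all the ids first, then fetch once
query_count = 0
authors = fetch_authors_batched([p["author_id"] for p in posts])
batched_queries = query_count

print(naive_queries, batched_queries)  # 100 1
```

Real DataLoader implementations add per-request caching and automatic queue flushing, but the core idea is exactly this: replace N single-row lookups with one batched lookup.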

Part 3: The Need for Speed - gRPC

While GraphQL focuses on data-fetching flexibility, gRPC prioritizes something else entirely: raw performance.

Q: What makes gRPC so fast, and where does it shine?

A: gRPC is designed for high-performance, machine-to-machine communication, making it a star in microservice architectures where services talk to each other thousands of times per second. It achieves its speed with two key ingredients:

  1. Data Format - Protocol Buffers (Protobuf): While REST and GraphQL typically use human-readable JSON, gRPC uses Protobuf, a highly efficient binary format. Sending JSON is like mailing a letter; sending a Protobuf message is like sending a compressed microchip with the same information. It’s smaller and much faster for machines to process.
  2. Transport Layer - HTTP/2: Unlike many REST APIs built on the older HTTP/1.1, gRPC is built on the modern and far more efficient HTTP/2 protocol.

Our restaurant analogy needs an upgrade. gRPC is like having a dedicated pneumatic tube system (HTTP/2) between the customer and the kitchen. Orders are sent on machine-readable punch cards (Protobuf). It’s incredibly fast and leaves little room for a misread order, but it’s completely rigid: you can’t order anything that’s not on the card.
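The size difference is easy to feel in code. This is not real Protobuf (which needs a schema compiler and a varint wire format), but a crude fixed-layout binary encoding with Python's struct module illustrates the same principle:

```python
import json
import struct

user = {"id": 123, "active": True, "score": 9.5}

# Text encoding: the field names travel with every single message
json_bytes = json.dumps(user).encode("utf-8")

# Binary encoding: the layout (uint32, bool, float32) is agreed on in
# advance, like a .proto schema, so only the raw values travel
binary = struct.pack("<I?f", user["id"], user["active"], user["score"])

print(len(json_bytes), len(binary))  # the binary form is a fraction of the JSON size
```

Because the schema is shared ahead of time, the receiver can decode the 9-byte payload without parsing any text, which is where much of gRPC's speed comes from.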


Part 4: A Deeper Dive - The Highway (HTTP) Evolution

To truly understand gRPC, we have to look under the hood at the “highway” it runs on. This is where the conversation gets to the core of modern network communication.

Q: What were the big problems with HTTP/1.1 that HTTP/2 solved?

A: HTTP/1.1 was like having a single-lane road where only one car could go at a time: each connection carried just one request and response at a time. To load a webpage with 100 images, requests had to queue up one after another, and browsers worked around it by opening a handful of parallel connections, each with its own setup overhead. HTTP/2 turned that single lane into a multi-lane superhighway built on a single, persistent connection.

  • Multiplexing: Multiple requests and responses can be in flight at the same time over a single connection, eliminating HTTP-level “Head-of-Line Blocking,” where one slow response holds up everything queued behind it.
  • Single, Persistent Connection: The browser establishes one connection to the server and keeps it open. This means the expensive security handshake (TLS) is done only once, making subsequent requests much faster.
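A toy model shows the idea behind multiplexing: chunks of several responses are tagged with a stream id, interleaved over one shared "connection", and regrouped on the other end. Real HTTP/2 frames carry headers, flow-control data, and priorities; this sketch keeps only the tagging-and-interleaving idea:

```python
# Pending response chunks for three in-flight requests (stream ids 1, 3, 5)
streams = {
    1: ["index.html:1", "index.html:2"],
    3: ["app.js:1"],
    5: ["logo.png:1", "logo.png:2"],
}

wire = []  # the single shared connection
# Round-robin frames from every stream: no response blocks another
while any(streams.values()):
    for stream_id, chunks in streams.items():
        if chunks:
            wire.append((stream_id, chunks.pop(0)))

# The receiver regroups frames by stream id
reassembled = {}
for stream_id, chunk in wire:
    reassembled.setdefault(stream_id, []).append(chunk)

print(reassembled)
```

Because every frame carries its stream id, a large, slow response simply shares the lane with the small, fast ones instead of blocking them.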

Q: Can the server just send data whenever it wants over HTTP/2, like with a WebSocket?

A: This is a key question. By default, no. Standard HTTP/2 communication must be initiated by the client. However, gRPC leverages the power of HTTP/2’s persistent connection to enable this “socket-like” behavior through a concept called streaming.

  • Server Streaming (1 -> Many): The client makes one request, and the server responds with a continuous stream of data. Think of clicking “play” on a YouTube video.
  • Client Streaming (Many -> 1): The client sends a continuous stream of data, and the server sends back one final response. Think of uploading a large file in chunks.
  • Bi-directional Streaming (Many -> Many): Both client and server can send messages to each other independently at any time. This is perfect for real-time applications like a chat app, an online game, or a collaborative editor like Google Docs. It provides WebSocket-like functionality without leaving the HTTP/2 protocol.
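The three streaming shapes can be sketched with plain Python generators, loosely mirroring how gRPC's Python bindings model streams as iterators. No grpc library is used here; the function names and payloads are invented for illustration:

```python
def server_streaming(request):
    # 1 request -> many responses (like pressing play on a video)
    for i in range(3):
        yield f"{request}-chunk-{i}"

def client_streaming(requests):
    # many requests -> 1 response (like a chunked file upload)
    return f"stored {sum(len(chunk) for chunk in requests)} bytes"

def bidirectional(requests):
    # many -> many: reply to each incoming message as it arrives
    for message in requests:
        yield message.upper()

print(list(server_streaming("video")))           # ['video-chunk-0', 'video-chunk-1', 'video-chunk-2']
print(client_streaming(["aaaa", "bb"]))          # 'stored 6 bytes'
print(list(bidirectional(iter(["hi", "bye"]))))  # ['HI', 'BYE']
```

In real gRPC the iterators are fed by HTTP/2 frames over the persistent connection, but the programming model the developer sees is essentially this.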

Q: If HTTP/2 is so great, why did we need HTTP/3?

A: Even the HTTP/2 multi-lane superhighway had a hidden bottleneck: all lanes had to pass through the same single-lane TCP tunnel. If a single data packet (one car) got lost or delayed in that tunnel due to a network hiccup, every car behind it had to wait, no matter which lane it belonged to. This is TCP Head-of-Line Blocking.

HTTP/3’s solution is radical: it completely abandons the 40-year-old TCP protocol. Instead, it’s built on a new protocol called QUIC, which runs over UDP.

QUIC is like giving each lane on the highway its own private tunnel. If a car gets stuck in one tunnel, the traffic in all the other tunnels continues to flow freely. It also combines connection and security handshakes into a single step, making initial connection times much faster.
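The difference between the two tunnels can be simulated in a few lines. This is a deliberately crude model (no retransmission timers, just "who must wait"), with made-up stream names:

```python
# Packets for two streams, A and B, in arrival order; packet index 1 is lost.
packets = [("A", 0), ("A", 1), ("B", 0), ("B", 1)]
lost = {1}  # ("A", 1) never arrives

# TCP: one ordered byte stream, so a loss stalls EVERYTHING behind it
tcp_delivered = []
for i, packet in enumerate(packets):
    if i in lost:
        break  # all later packets wait for the retransmission
    tcp_delivered.append(packet)

# QUIC: ordering is per stream, so only the stream that lost data stalls
quic_delivered = []
stalled_streams = set()
for i, (stream, seq) in enumerate(packets):
    if i in lost:
        stalled_streams.add(stream)
        continue
    if stream not in stalled_streams:
        quic_delivered.append((stream, seq))

print(tcp_delivered)   # [('A', 0)]
print(quic_delivered)  # [('A', 0), ('B', 0), ('B', 1)]
```

Under TCP, stream B delivers nothing until A's lost packet is retransmitted; under QUIC, B flows through untouched.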


Conclusion: There Is No Silver Bullet

From a simple REST API to the inner workings of QUIC, this journey reveals a core engineering truth: technology is a constant evolution where new tools arise to solve specific problems better.

An effective engineer doesn’t have a favorite. They have a toolkit. They understand the trade-offs and ask the right questions:

  • Do I need a simple, standard, universally understood interface? (REST)
  • Do I need to give my clients maximum data-fetching flexibility to save bandwidth? (GraphQL)
  • Do I need the absolute highest performance for internal machine-to-machine communication where a strict contract is a benefit? (gRPC)

Understanding not just what these tools are, but why they exist, is the key to building robust, efficient, and scalable systems.