I’ve “critiqued” gRPC many times, and one of the most mysterious aspects of its complex design is its reliance on trailers. There is almost no information on the web about why gRPC uses trailers to pass status codes. I recently read an article by Carl Mastrangelo, “Why Does gRPC Insist on Trailers?”. Carl was a member of the gRPC development team, and in the article he details the original vision behind gRPC’s design and the process by which it spiraled out of control. Today I’d like to discuss gRPC’s design in light of my own understanding.

Many people may not even know what trailers are. When returning data, HTTP usually sends the header section first and then the body. Trailers, however, are a special class of headers sent to the client after the body transmission has completed. Because of this ordering, in HTTP/1.1 trailers can only be used together with the chunked transfer encoding. Chunked encoding is mainly used in scenarios where the length of the data cannot be determined before transmission starts (e.g., on-the-fly compression).

Here is an example:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Trailer: MD5

7\r\n
Mozilla\r\n
9\r\n
Developer\r\n
7\r\n
Network\r\n
0\r\n
MD5: 68b329da9893e34099c7d8ad5cb9c940\r\n
\r\n

The transfer encoding here is specified as chunked, so the body is transferred in segments. Each chunk starts with a hexadecimal number and a CRLF, where the number gives the chunk’s length; then comes the byte stream of that length, followed by another CRLF. After all chunks have been sent, a zero-length chunk is sent to mark the end.
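The chunk framing described above can be sketched in a few lines of Python (the function name is mine, purely illustrative):

```python
def encode_chunk(data: bytes) -> bytes:
    # hex length, CRLF, the data itself, CRLF
    return f"{len(data):x}\r\n".encode() + data + b"\r\n"

# "Mozilla" is 7 bytes, so the chunk is b"7\r\nMozilla\r\n";
# an empty input yields the terminating b"0\r\n\r\n" sequence
# (when trailers are present, they go between the two CRLFs).
```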

The Trailer header announces that an additional header, named MD5, will follow after the end of the data; more than one name can be listed, separated by commas. So after the chunked data, the MD5 value of all the data is sent as a checksum. Since the body is generated dynamically, its MD5 value cannot be known in advance. It can only be computed during transmission and then sent to the client in the trailer once transmission finishes.
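To make the “compute while transmitting” point concrete, here is a sketch (the helper name is mine; `hashlib` is Python’s standard library) that encodes the body chunk by chunk while updating an MD5 digest, then appends the trailer after the zero-length chunk:

```python
import hashlib

def chunked_body_with_md5_trailer(parts):
    """Encode `parts` with chunked transfer coding while updating an
    MD5 digest on the fly; the digest is only known at the end, so it
    is emitted as a trailer after the zero-length chunk."""
    md5 = hashlib.md5()
    body = b""
    for part in parts:
        md5.update(part)
        body += f"{len(part):x}\r\n".encode() + part + b"\r\n"
    body += b"0\r\n"                                # end of chunks
    body += f"MD5: {md5.hexdigest()}\r\n".encode()  # the trailer
    body += b"\r\n"                                 # end of message
    return body
```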

These are the basic concepts of trailers. In the HTTP/2 era, thanks to the concept of frames, headers and body can be interleaved, and the restriction of sending all headers before the body no longer applies. Therefore, in HTTP/2, trailers no longer depend on chunked transfer encoding; any response can carry trailers.

So the question is, why does gRPC rely on trailers? The core reason is to support streaming interfaces. With a streaming interface, the length of the data cannot be determined in advance, so the HTTP Content-Length header cannot be used. The corresponding HTTP exchange looks like this:

GET /data HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Server: example.com

abc123

What, uncertain length? Just use chunked, you might say. But Carl points out that chunked is ambiguous here. He gives the following example:

GET /data HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Server: example.com
Transfer-Encoding: chunked

6\r\n
abc123\r\n
0\r\n

Suppose there is a proxy between the client and the server. The proxy receives the response and starts forwarding it to the client. It first sends the header section, so the client concludes from the 200 status code that the call succeeded. Then the data is forwarded chunk by chunk. If the server crashes after the proxy has forwarded the first chunk, abc123, what signal should the proxy send to the client?

Because the status code has already been sent, there is no way to change the 200 to a 5xx. The proxy also can’t simply send 0\r\n to terminate the chunked transfer, because the client would then believe the response completed normally instead of learning that the server quit abnormally. The only remaining option is to close the underlying connection, but that consumes extra resources, since the client must establish a new connection. So a way was needed to notify the client of the server error while reusing the underlying connection as much as possible. The gRPC team ultimately chose trailers for this.

One might think that closing the connection after an error is no big deal. But Carl notes that back in 2015, Google’s internal Stubby RPC system was already handling on the order of 10^10 requests per second, so the cost of tearing down connections could not be ignored.

HTTP/1.1 itself can carry concurrent requests on one connection through pipelining. However, pipelining is too weak: responses must come back in order, and a single failed request can force the entire connection to be shut down, which also fell short of Google’s internal requirements. The gRPC team ultimately chose HTTP/2 as the underlying transport protocol. A typical gRPC call therefore looks like this:

HEADERS (flags = END_HEADERS)
:method = POST
:scheme = http
:path = /foo.HelloService/Hi
:authority = taoshu.in
content-type = application/grpc+proto

DATA (flags = END_STREAM)
<Length-Prefixed Message>

HEADERS (flags = END_HEADERS)
:status = 200
content-type = application/grpc+proto

DATA
<Length-Prefixed Message>

HEADERS (flags = END_STREAM, END_HEADERS)
grpc-status = 0 # OK

Here HEADERS and DATA denote HTTP/2 frames. gRPC sends the grpc-status trailer after the data transfer has finished; it indicates the final status of the RPC call.
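This is what makes the trailer load-bearing: the client may only treat the call as successful if the stream ends with grpc-status = 0. A simplified, purely illustrative client-side check (the function name and return values are mine) might look like:

```python
def rpc_outcome(trailers):
    """Illustrates gRPC's rule: a call succeeds only when the HTTP/2
    stream ends with a grpc-status trailer equal to "0"."""
    if trailers is None or "grpc-status" not in trailers:
        # The stream ended without trailers: the server (or a proxy)
        # died mid-response, even though :status was already 200.
        return "UNAVAILABLE"
    return "OK" if trailers["grpc-status"] == "0" else "ERROR"
```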

But there is a hidden problem here: no browser support! According to Carl, an important goal of gRPC was to interconnect browsers, phones, servers, and proxies. But in the HTTP/1.1 era, browsers supported trailers poorly. By the time of HTTP/2, browsers happened to be implementing the Fetch API, and the Fetch interface initially supported trailers as well (see here). However, the Chrome team eventually decided not to expose trailers through the Fetch interface, citing security concerns; the details of the debate can be found here. So there is no way to use gRPC directly for browser communication, which led to projects like grpc-web.

This is the story of trailers. In addition, gRPC has another unpleasant design: the Length-Prefixed Message, which was also introduced to support streaming interfaces. All messages, streaming or not, are prefixed with five bytes: the first byte indicates whether the following content is compressed, and the next four bytes hold the length of the message content in big-endian order. This design makes it impossible to debug a gRPC interface directly with curl + JSON; you must use special tools, which is very inconvenient.
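The 5-byte framing (the gRPC spec calls the first byte the Compressed-Flag) can be sketched in Python as follows; the helper names are mine, and real gRPC libraries handle this internally:

```python
import struct

def frame(payload: bytes, compressed: bool = False) -> bytes:
    # 1-byte compressed flag + 4-byte big-endian length, then payload
    return struct.pack(">BI", int(compressed), len(payload)) + payload

def unframe(buf: bytes):
    # parse one length-prefixed message back out of a byte buffer
    flag, length = struct.unpack(">BI", buf[:5])
    return bool(flag), buf[5:5 + length]
```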

Moreover, since every message already carries a prefix, the function of trailers could have been folded into the message prefix. For example, a special prefix value could have been reserved to carry fields like grpc-status. Had that been done, the gRPC interface could have been called directly from the browser. Unfortunately, what’s done is done.
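This is not how gRPC actually works, but a hypothetical in-band status frame along the lines the author suggests might look like this. The flag value and field layout are invented purely for illustration:

```python
import struct

STATUS_FLAG = 0x80  # invented value: pretend it marks a status frame

def frame_status(grpc_status: int) -> bytes:
    # Same 5-byte prefix shape, but the flag byte says "this is
    # status, not message data" -- so the status travels in-band
    # and no HTTP trailers are needed.
    payload = f"grpc-status: {grpc_status}".encode()
    return struct.pack(">BI", STATUS_FLAG, len(payload)) + payload

def is_status_frame(buf: bytes) -> bool:
    return buf[0] == STATUS_FLAG
```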

Although gRPC is more and more widely used these days, I still don’t consider it a good design. The reason is that all of its machinery exists to support streaming requests, while streaming interfaces account for a very small share of real business. I counted the 8542 interfaces in our internal microservices today: only 59 are streaming, less than 0.7%. Most interfaces are plain request-response and do not need a mechanism as complex as gRPC’s. For the few streaming scenarios, we could implement them specially on top of WebSocket, HTTP/2, HTTP/3, or even raw TCP. That’s why I’ve been promoting Twirp.

Finally, I’ll end this article with three lessons that Carl summarized.

  • Organizational problems are much more difficult than technical problems. So solve the organizational problems first.
  • Compared with compatibility, performance and new features are secondary. The best protocol is the one already in use.
  • Don’t work behind closed doors, talk to customers and be empathetic.