gRPC is Google’s open source RPC framework. Coming from a big company, it carries a certain aura, and its adoption keeps growing. So should we choose gRPC when making a technology decision? That requires weighing both sides. If gRPC is good, what exactly is it good at? What does it sacrifice to be good at that? Do the problems we face in our own business play to gRPC’s strengths, and will its weaknesses get in our way? These are questions worth thinking through calmly.
Before deciding to go with gRPC, we first need to understand it.
RPC does two things: one is data encoding, and the other is request mapping.
Data encoding, as the name suggests, is the process of converting a request’s in-memory representation into a byte stream that can be sent to the server, and converting the received byte stream back into an in-memory representation on the other side. There are many encodings; the common ones are XML, JSON, and Protobuf. XML is on its way out, JSON is in its prime, and Protobuf is the rising star. gRPC defaults to Protobuf. In its early days it seemed to support only Protobuf; it now claims to support JSON as well, though I don’t know how many people actually use that.
Protobuf is also a Google product, and I suspect that is one reason gRPC defaults to it. Another is that Protobuf is more efficient than JSON in certain scenarios. But keep in mind that there is no free lunch: every optimization comes at a cost. Whenever we weigh a choice, we must ask what we gain and what we give up.
To understand Protobuf’s optimizations, we first need to look at where JSON falls short. Here is a typical piece of JSON:
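Something like this, with the int field and bool field that the next paragraphs refer to (the str field is my own addition for illustration):

```json
{
  "int": 12345,
  "bool": true,
  "str": "hello"
}
```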
The first disadvantage is the inefficient encoding of non-string values. For example, an int field with the value 12345 needs only two bytes to represent in binary, but five bytes once serialized as the JSON text “12345”. A bool field takes four or five bytes (“true” or “false”).
A further disadvantage is redundancy. For the same interface and the same object, even if only the int field’s value changes, the field name “int” has to be transferred every single time.
Wait, are those really disadvantages? Yes! But why does JSON have these problems? Because, forced to choose between readability and encoding efficiency, JSON chose readability and sacrificed efficiency.
Well, now that efficiency is felt to be the main concern, readability must be sacrificed in turn. To that end, Protobuf encodes numbers as varints, solving the efficiency problem; and it assigns each field an integer number, transmitting only that number on the wire, solving the redundancy problem.
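Go’s standard library happens to use the same base-128 varint scheme as Protobuf’s numeric wire types, so we can see the saving directly (the `uvarint` helper is my own wrapper, for illustration):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// uvarint encodes v with the same base-128 varint scheme that
// Protobuf uses for its numeric wire types: seven payload bits per
// byte, high bit set on every byte except the last.
func uvarint(v uint64) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, v)
	return buf[:n]
}

func main() {
	// 12345 needs only two varint bytes, versus five bytes as JSON text.
	fmt.Printf("% x\n", uvarint(12345)) // b9 60
}
```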
While transmitting only the field number can improve the efficiency of transmission, how does the receiver know which field corresponds to each number? Only by prior agreement. Protobuf uses .proto files as codebooks to record the correspondence between fields and numbers.
Protobuf provides a toolchain that generates code in various languages from the messages described in a .proto file. Transfers become more efficient, but the toolchain becomes more complex. If you have ever captured packets from a gRPC conversation, you will miss JSON.
Well, that’s enough about data encoding, let’s move on to the request mapping issue.
Because .proto is an IDL, Protobuf can do many things that JSON cannot easily do. The most important of these is describing the RPC itself!
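Here is a minimal sketch, using the demo.hello package and the Greeter service discussed below:

```protobuf
syntax = "proto3";

package demo.hello;

// Greeter exposes a single unary RPC.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```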
The .proto file above defines a Greeter service with a SayHello method that accepts a HelloRequest message and returns a HelloReply message. How this Greeter is implemented is language-independent, hence the name IDL (Interface Definition Language). gRPC uses Protobuf’s service construct to describe RPC interfaces.
So the question is, how does gRPC map requests? To answer this question, we must first answer what transport protocol gRPC uses at the bottom. The answer is the HTTP protocol, or to be precise, gRPC uses the HTTP/2 protocol. For the purposes of our discussion, however, we can ignore the HTTP/2 and HTTP/1 distinction for now.
For now you can simply think of a gRPC request as an HTTP request (not strictly accurate, but good enough for the moment). This HTTP request uses the POST method, and the corresponding resource path is determined by the .proto definition. The path for the Greeter service we mentioned earlier is /demo.hello.Greeter/SayHello.
A gRPC path is built from three parts — the package name, the service name, and the method name — joined as /{package}.{Service}/{Method}. For SayHello the package name is demo.hello, the service name is Greeter, and the method name is SayHello, so the corresponding path is /demo.hello.Greeter/SayHello.
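The rule is simple enough to sketch in a few lines of Go (the `grpcPath` helper name is my own, for illustration):

```go
package main

import "fmt"

// grpcPath builds the request path gRPC derives from a .proto
// definition: /{package}.{Service}/{Method}.
func grpcPath(pkg, service, method string) string {
	return fmt.Sprintf("/%s.%s/%s", pkg, service, method)
}

func main() {
	fmt.Println(grpcPath("demo.hello", "Greeter", "SayHello"))
	// /demo.hello.Greeter/SayHello
}
```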
The Content-Type header of a gRPC request is application/grpc+proto by default. If you want JSON encoding, you can set it to application/grpc+json instead, as long as the server supports it.
Finally, the request body. With Protobuf encoding, you would expect the body to be simply the encoded byte stream. Is that what a gRPC HTTP request actually carries?
The answer is no! Simply put, gRPC puts a five-byte prefix in front of the Protobuf byte stream: the first byte indicates whether the payload is compressed, and the next four bytes store the length of the data in big-endian order. This is called a Length-Prefixed Message.
Anyone familiar with HTTP knows that the protocol already has Content-Encoding to indicate the compression algorithm and Content-Length to specify the data length. Why does gRPC redefine its own mechanism?
The answer lies in another feature gRPC supports: streaming RPC! Streaming means messages can be sent and received continuously, which is a significant departure from HTTP’s one-request, one-response model.
gRPC offers three kinds of streaming interfaces, declared by prefixing a parameter with the stream keyword: request streaming, response streaming, and bidirectional streaming.
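In .proto terms the three shapes look like this (the service, method, and message names here are made up for illustration):

```protobuf
service Demo {
  // Request streaming: the client keeps sending messages.
  rpc BatchPush (stream PushRequest) returns (PushReply);
  // Response streaming: the server keeps sending messages.
  rpc Subscribe (SubRequest) returns (stream Notification);
  // Bidirectional streaming: both sides send and receive concurrently.
  rpc Transcribe (stream AudioChunk) returns (stream Caption);
}
```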
The first, request streaming, lets the client keep sending new request messages after the RPC is initiated. The most typical use case for this kind of interface is pushing out batches of notifications or SMS messages.

The second, response streaming, lets the client keep receiving new response messages after the RPC is initiated. The most typical use case for this kind of interface is subscribing to message notifications.

The last is bidirectional streaming: both sides can send and receive messages simultaneously after the RPC is initiated. The most typical use case for this kind of interface is real-time speech-to-subtitle transcription.
It is to support streaming that gRPC introduces the so-called Length-Prefixed Message. All the messages of one gRPC request share a single set of HTTP headers, so each individual message must carry its own five-byte prefix to convey its compression flag and length.
And it is because of these five bytes that, whether you use Protobuf or JSON, gRPC is destined to be a binary protocol, one that the usual UNIX text tools cannot handle very well.
gRPC also defines its own return status and message, carried in the grpc-status and grpc-message headers respectively. So the simplest gRPC communication, a non-streaming (unary) call, looks like this.
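Sketched below in an HTTP/1.1-like plain-text form for readability (real traffic is binary HTTP/2 frames, and the bracketed payloads stand in for raw bytes):

```text
→ request
POST /demo.hello.Greeter/SayHello
content-type: application/grpc+proto

[5-byte prefix][Protobuf-encoded HelloRequest]

← response
:status: 200
content-type: application/grpc+proto

[5-byte prefix][Protobuf-encoded HelloReply]

grpc-status: 0      (trailer)
grpc-message: OK    (trailer)
```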
If you have really absorbed everything said so far, you can now write a non-streaming gRPC client by hand. Our own sniper framework ships with one; the source code is here.
Are we done with gRPC? No~!
If we look only at non-streaming (unary) calls, gRPC is not complicated and not very different from an ordinary HTTP request. We could even carry gRPC traffic over HTTP/1.1. But (there is always a but) gRPC also supports streaming interfaces, and that is where things get tricky.
We know that HTTP/1.1 can also reuse TCP connections. But this reuse has an obvious drawback: all requests must be queued — request, wait, response; request, wait, response — first come, first served. In real business scenarios some requests inevitably take a long time to answer, and the client occupies the TCP connection until the response arrives. During that time, other requests either wait or open new TCP connections. There is clearly room for optimization here; in a nutshell, HTTP/1.1 does not reuse TCP connections adequately.
Then HTTP/2 burst onto the scene! By introducing the concept of streams, it solved the TCP connection reuse problem (note that there are trade-offs here too, which I won’t go into). You can think of an HTTP/2 stream as a logical TCP connection: many streams send and receive HTTP messages in parallel over a single TCP connection, without the waiting that HTTP/1.1 imposes.
So gRPC chose HTTP/2 as its transport in order to implement the streaming feature. The actual communication of the Greeter call from the previous section looks like this.
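Roughly the following frame sequence (frame contents abbreviated; the exact frames on the wire depend on the implementation):

```text
client → server
  HEADERS  (:method=POST, :path=/demo.hello.Greeter/SayHello,
            content-type=application/grpc+proto)
  DATA     (Length-Prefixed HelloRequest, END_STREAM)

server → client
  HEADERS  (:status=200, content-type=application/grpc+proto)
  DATA     (Length-Prefixed HelloReply)
  HEADERS  (grpc-status=0, grpc-message=OK, END_STREAM)   ← trailers
```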
In HTTP/2, headers and data travel in separate frames, and frames can be sent multiple times and interleaved. HTTP/1.1 can essentially only send the headers first and then the data (not entirely accurate — look up HTTP chunked transfer encoding), while HTTP/2 can alternate. For example, in the gRPC response above, a HEADERS frame is sent first to convey the HTTP status, then a DATA frame carries the gRPC message, and finally another HEADERS frame delivers grpc-status, gRPC’s own status code.
Wait a minute! Don’t we usually send all the headers first and then the data? Why does gRPC send the grpc-status header after the data?

Once again, the streaming interface is the cause. Think about it: the server cannot know which grpc-status to return until all the streamed messages have been transferred.
Well, that wraps up the analysis of the request mapping problem. Let’s return to the questions we started with.
If gRPC is good, what exactly is it good at? What does it sacrifice to be good at that? Do the problems we face in our own business play to gRPC’s strengths, and will its weaknesses get in our way?
I think you now have the answers in mind.