A while ago, when I accessed a gRPC streaming interface (converted to HTTP) with cURL, something very strange happened.

• When running cURL directly, the full content is always displayed and the connection stays open. Since it is a streaming interface, this is exactly the expected behavior.
• But with `cURL | ...` or `cURL > file`, the full content never arrives, no matter how long you wait.

Why is this? The problem bothered me for a long time, until yesterday, when it suddenly occurred to me why.

Probably because of cURL’s buffer.

So I read cURL's manual, and there really is a buffer-related option:

```
-N, --no-buffer
       Disables the buffering of the output stream. In normal work situations,
       curl will use a standard buffered output stream that will have the
       effect that it will output the data in chunks, not necessarily exactly
       when the data arrives. Using this option will disable that buffering.

       Note that this is the negated option name documented. You can thus use
       --buffer to enforce the buffering.
```

Then I tried turning it on with `-N`, and the problem was solved.

However, one question remains: why does running cURL directly always show the full content? Shouldn't it buffer too? I found no answer; my blind guess is that it depends on the cURL version. (Another possibility, though I haven't verified it: stdio output streams are typically line-buffered when attached to a terminal but fully buffered when attached to a pipe or file, which would produce exactly this terminal-vs-pipe split.)

I wrote a small piece of code to try to reproduce the problem.

```go
package main

import (
	"net/http"
	"strings"
	"time"
)

// A little more than 16 KB, which is cURL's output buffer size.
var long = []byte(strings.Repeat("0123456789", (16<<10)/10+1))

func stream(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Transfer-Encoding", "identity") // suppress chunked encoding
	w.Write(long)
	time.Sleep(time.Second * 10) // keep the connection open, like a stream
}

func main() {
	http.HandleFunc("/", stream)
	http.ListenAndServe(":5626", nil)
}
```

Note: I set Transfer-Encoding because Go uses chunked transfer encoding by default, and I don't know a simpler way to turn that off.

Since cURL's buffer size is 16 KB, I had the code return slightly more than 16 KB of data.

Now let's request this endpoint.

```
$ curl localhost:5626
0123456789...
...
0123
```

Sure enough, it stops there, and the `456789` that should follow never shows up. As long as the server sleeps long enough, cURL just keeps waiting. The weird part is that I can't fully reproduce the original problem: it happens whether I redirect to a file or pipe into another command (I was too lazy to write a gRPC example).

cURL can also abort a download when the speed stays below a given value for long enough (returning exit code 28, Operation timeout):

```
-Y, --speed-limit
       If a download is slower than this given speed (in bytes per second) for
       speed-time seconds it gets aborted. speed-time is set with -y,
       --speed-time and is 30 if not set.

       If this option is used several times, the last one will be used.

-y, --speed-time
       If a download is slower than speed-limit bytes per second during a
       speed-time period, the download gets aborted. If speed-time is used,
       the default speed-limit will be 1 unless set with -Y, --speed-limit.

       This option controls transfers and thus will not affect slow connects
       etc. If this is a concern for you, try the --connect-timeout option.

       If this option is used several times, the last one will be used.
```

The service I mentioned earlier is a streaming subscription service: it starts with a full snapshot and then delivers (much smaller) incremental updates, so the transfer speed generally drops to zero after the initial subscription. That makes these options a good fit for my case. I ended up with the following command:

```
$ curl -Ny1 ...
$ echo $?
28
```

As for the very first question (output to the terminal is fine, while output to a file or pipe is not), I still don't have a definite answer.