In Go, it is easy to write Benchmark functions to measure the performance of a piece of code. For important functions, we can add the corresponding benchmarks to the CI/CD process so that performance changes are noticed as soon as they occur. The question, then, is: how do you detect a performance change in a function?
Put another way: suppose you write a function, find it slow, and set out to optimize it. After some searching you land on a better implementation, and a Benchmark run confirms it is indeed faster. But you cannot say exactly how much faster. What you want is a before-and-after comparison of the optimization, with high confidence, expressed as a percentage improvement.
For this scenario there is a tool that can help: benchstat.
Let’s review benchmarks first. For ease of understanding, consider the classic example of computing Fibonacci numbers.
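A minimal recursive sketch of such a function might look like this (the name FibSolution matches the function referred to later in the article; the rest is an assumption for illustration):

```go
package main

import "fmt"

// FibSolution returns the nth Fibonacci number using naive recursion.
// Each call spawns two more until n < 2, so the running time grows
// exponentially with n.
func FibSolution(n int) int {
	if n < 2 {
		return n
	}
	return FibSolution(n-1) + FibSolution(n-2)
}

func main() {
	fmt.Println(FibSolution(20)) // prints 6765
}
```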
The above code is a recursive implementation, and it is clear that the function becomes very time-consuming as n grows. For example, for n = 20, the Benchmark function looks like this.
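A sketch of that benchmark, in the usual go-test style (the package name and file layout are assumptions):

```go
package fib

import "testing"

// BenchmarkFib20 measures FibSolution at n = 20. go test discovers it
// by the Benchmark name prefix and chooses b.N automatically.
func BenchmarkFib20(b *testing.B) {
	for i := 0; i < b.N; i++ {
		FibSolution(20)
	}
}
```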
Run go test -bench=BenchmarkFib20 on the command line to get the performance results.
Here the -8 suffix indicates the benchmark ran with GOMAXPROCS=8 (8 CPUs), the function body ran 39452 times, and the average time per iteration was 30229 ns.
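Putting the figures just described into go test’s standard output format, the result line looks roughly like this (exact spacing and header lines such as goos/goarch omitted):

```shell
$ go test -bench=BenchmarkFib20
BenchmarkFib20-8        39452             30229 ns/op
```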
If we want multiple samples, we can pass the -count=N flag to go test. For example, to collect 5 samples, we would run go test -bench=BenchmarkFib20 -count=5.
The iterative implementation to calculate the Fibonacci series values is as follows.
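A sketch of the iterative version, keeping the same FibSolution name so it is a drop-in replacement (details are an assumption for illustration):

```go
package main

import "fmt"

// FibSolution computes the nth Fibonacci number iteratively,
// in O(n) time and O(1) space.
func FibSolution(n int) int {
	if n < 2 {
		return n
	}
	prev, curr := 0, 1
	for i := 2; i <= n; i++ {
		prev, curr = curr, prev+curr
	}
	return curr
}

func main() {
	fmt.Println(FibSolution(20)) // prints 6765
}
```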
The plainest way to compare the performance of these two functions is to benchmark each separately and then analyze the results by hand, but this is tedious and not very intuitive.
benchstat is a command-line tool, officially recommended by the Go team, for computing and comparing benchmark statistics.
We can install it with the following command.
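The standard install command for benchstat is:

```shell
go install golang.org/x/perf/cmd/benchstat@latest
```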
Run it with the -h flag to see a description of its usage.
We would like to compare the benchmark results of FibSolution(n) for n from 15 to 20, for both implementations.
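One way to cover n from 15 to 20 in a single benchmark is a table of sub-benchmarks via b.Run; the benchmark name and package layout below are assumptions for illustration:

```go
package fib

import (
	"fmt"
	"testing"
)

// BenchmarkFibSolution runs one sub-benchmark per input size,
// so benchstat can match the results line by line.
func BenchmarkFibSolution(b *testing.B) {
	for n := 15; n <= 20; n++ {
		b.Run(fmt.Sprintf("n=%d", n), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				FibSolution(n)
			}
		})
	}
}
```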
Note that the two commands are run against the recursive and the iterative implementation of FibSolution, respectively.
At this point, we can compare the performance of these two function implementation logics.
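A typical workflow is to save each run’s output to a file and feed both files to benchstat (the file names old.txt and new.txt are hypothetical):

```shell
# Benchmark the recursive FibSolution, collecting 5 samples:
go test -bench=FibSolution -count=5 | tee old.txt

# Swap in the iterative implementation, then benchmark again:
go test -bench=FibSolution -count=5 | tee new.txt

# Compare the two sets of samples:
benchstat old.txt new.txt
```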
As you can see, the execution time of the recursive implementation grows rapidly as n increases. The iterative implementation reduces the average time by more than 99% compared to the recursive one, a very significant improvement.
In addition, p=0.008 is the p-value of the significance test: the larger the p-value, the less confident we can be that the measured difference is real. By convention, 0.05 is used as the threshold, and results with a p-value above it should not be trusted. n=5+5 indicates the number of valid samples in each group.
benchstat is a benchmark statistics tool that spares us the cost of manually analyzing data when doing optimization work.
If your project runs automated tests in its CI/CD process, consider adding this tool to the pipeline. It may help you spot, ahead of time, changes to functions that introduce performance regressions.