I think writing benchmarks is a good tool to have in every developer's arsenal. To an extent, benchmarks are to performance what unit tests are to functionality. We want to know how our code performs, but we are very bad at predicting it. So what better way to find out than fact- and evidence-based tests, which is exactly what benchmarks are. Benchmarks are great. But there's a dark side to them as well.
I love writing and running benchmarks. They give some sense of affirmation about how your code, or even some library, performs. But the bitter truth, and the dark side of benchmarks or any performance test for that matter, is that they lie. There's no guarantee on the results you get, due to many factors: the environment, optimisations (JVM/compiler), and how CPUs work (i.e. cache misses and memory latencies). Overcoming the challenges posed by these factors is really hard. Sure, you can ignore them, but then everything's nothing but a lie.
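To make that concrete, here's a sketch of the kind of naive, hand-rolled timing harness these factors quietly invalidate. The class and numbers are entirely mine, shown only to illustrate the pitfall:

// A hypothetical hand-rolled "benchmark". Because `sum` is never used,
// the JIT is free to eliminate the whole loop, so the elapsed time may
// end up measuring nothing at all.
public class NaiveBenchmark {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i; // result never consumed: a dead-code candidate
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("took " + elapsed + " ns"); // plausibly misleading
    }
}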
It's not all doom and gloom though. These challenges are the very reason JMH was built, and understanding them will make you appreciate it even more. JMH uses different strategies to overcome these challenges and provides a nice set of annotations to use in our benchmark code.
The best way to get started with JMH is to go through the examples provided on the project home page. They can be found here.
For the sake of this write-up, and to highlight some of the key concepts, the following is a very basic example of adding an item to a CopyOnWriteArrayList.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@State(Scope.Thread)
public class CopyOnArrayListBenchmark {

    private List<String> benchMarkingList;

    // Initialise the list outside the measured code path.
    @Setup
    public void setup() {
        benchMarkingList = new CopyOnWriteArrayList<String>();
    }

    @Benchmark
    public void benchMarkArrayListAddStrings(Blackhole blackhole) {
        // Sink the result so the add() call can't be eliminated as dead code.
        blackhole.consume(benchMarkingList.add("foo"));
    }

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .warmupIterations(5)
                // .measurementIterations(5)
                .mode(Mode.AverageTime)
                .forks(1)
                .threads(5)
                .include(CopyOnArrayListBenchmark.class.getSimpleName())
                .timeUnit(TimeUnit.MILLISECONDS)
                .build();
        new Runner(options).run();
    }
}
The annotations and the main method make this a JMH benchmark. There's a lot more that JMH provides, but I feel this is a very basic benchmark that covers some important points. A quick run-down of the annotations used:
@State(Scope.Thread)
This annotation is used to define state for the benchmark. Generally we want to maintain some sort of state in our benchmarks. The different scopes available are:
- Thread - each thread running the benchmark gets its own instance of the state object and its fields
- Group - all threads in a thread group share a single instance of the state object and its fields
- Benchmark - all threads share a single instance of the state object and its fields for the whole benchmark
The way state is defined in this example is by annotating the benchmark class itself, so its instance fields take on the state characteristics. The project examples show how per-class states can be achieved.
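To illustrate the difference between the scopes, here's a minimal sketch (class and field names are my own, not from the example above) contrasting a shared counter with a per-thread one:

import java.util.concurrent.atomic.AtomicLong;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class StateScopesBenchmark {

    // One instance shared by every thread running the benchmark.
    @State(Scope.Benchmark)
    public static class SharedState {
        AtomicLong counter = new AtomicLong();
    }

    // Each benchmark thread gets its own private instance.
    @State(Scope.Thread)
    public static class PerThreadState {
        long counter;
    }

    @Benchmark
    public long contended(SharedState state) {
        return state.counter.incrementAndGet(); // threads contend on one field
    }

    @Benchmark
    public long uncontended(PerThreadState state) {
        return ++state.counter; // no sharing, no contention
    }
}

Running both with several threads should show the cost of contention on the shared state, which is exactly the kind of effect the scopes let you isolate.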
@Setup
This is an annotation I find very useful. It's like the @Before annotation we'd normally use in JUnit tests. It gives us the opportunity to initialise fields and objects without impacting the actual benchmark. If not for this, we'd resort to various techniques to initialise objects and inadvertently fold the setup cost into the measurement.
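@Setup also accepts a Level controlling how often the fixture runs. A small sketch, with names of my own choosing, using the standard levels:

import java.util.ArrayList;
import java.util.List;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class SetupLevelsBenchmark {

    private List<String> list;

    // Level.Trial (the default): runs once per benchmark run.
    @Setup(Level.Trial)
    public void setupTrial() {
        list = new ArrayList<>();
    }

    // Level.Iteration: runs before each iteration, so the list
    // doesn't grow without bound across iterations.
    @Setup(Level.Iteration)
    public void setupIteration() {
        list.clear();
    }

    @Benchmark
    public boolean add() {
        return list.add("foo"); // setup cost stays outside the measurement
    }
}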
@Benchmark
The method annotated with this is the actual benchmark. The results captured are for the code executed in this method. An important concept used in this method is the Blackhole, and the simple reason for it is to avoid dead code. Dead-code elimination, caused by compiler optimisations, is one of the key challenges in benchmarking, and JMH provides infrastructure to eliminate or at least minimise it. In the example above, I could have simply returned the boolean produced by the List.add(T) method; returning a value from a benchmark tells JMH to limit dead-code elimination. However, when multiple values need to be kept alive, it's possible the compiler will decide that anything not returned is dead code. This is where Blackholes come in handy: by calling Blackhole.consume, we sink values into a place the compiler can't optimise away. The example above doesn't strictly need a Blackhole (it produces only one value, which could simply be returned); it's shown only to highlight this feature.
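For the multi-value case alluded to above, a sketch (method names are mine) of returning one result versus sinking several:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class DeadCodeBenchmark {

    private double x = Math.PI;

    // Returning the value is enough: JMH consumes it for us.
    @Benchmark
    public double returnOne() {
        return Math.log(x);
    }

    // With two results, only a returned one would be kept alive,
    // so we sink both into the Blackhole instead.
    @Benchmark
    public void consumeTwo(Blackhole blackhole) {
        blackhole.consume(Math.log(x));
        blackhole.consume(Math.sqrt(x));
    }
}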
And then we have the main method, which sets up the benchmark to run. The key highlights here are the various options it provides. I've shown a few options I use frequently, but there are more. These options also have annotation equivalents.
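The annotation equivalents of the builder options above look roughly like this (a sketch mirroring those settings, not taken from the original example):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Threads;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)        // .mode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)  // .timeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5)                 // .warmupIterations(5)
@Measurement(iterations = 5)            // .measurementIterations(5)
@Fork(1)                                // .forks(1)
@Threads(5)                             // .threads(5)
public class AnnotatedOptionsBenchmark {

    @Benchmark
    public double benchmark() {
        return Math.log(Math.PI);
    }
}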
JMH provides many benefits, but the following are the ones that stood out for me and got me using it more and more:
- Optimisation proof/aware
- False sharing
- Forking
- Data setup
- Warming up
- Running tests multiple times
- Support for threads
- State per thread/benchmark
- Parameters (see the @Param sketch after this list)
- Artificially consume CPU (via Blackhole.consumeCPU, also sketched below)
- Measurement modes
- Built-in profilers (not your full-blown profiler; use them with care)
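To make the parameters and CPU-consumption points concrete, here's a small sketch (class and field names are mine) combining @Param with Blackhole.consumeCPU:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class ParamsBenchmark {

    // JMH runs the benchmark once per value, so one class covers a
    // matrix of workloads without copy-pasting methods.
    @Param({"10", "100", "1000"})
    public int tokens;

    @Benchmark
    public void burnCpu() {
        // Burns a calibrated amount of CPU per invocation: handy for
        // simulating work of different sizes.
        Blackhole.consumeCPU(tokens);
    }
}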
As stated above, the examples are a great way to get cracking on writing benchmarks with JMH. I also found the resources by Aleksey Shipilëv (the person responsible for JMH) to be very useful; he goes into detail about the challenges and how JMH solves them in this video.
As a closing remark, it's still worth noting that benchmarks should not be taken as absolutes. They are an indication and a close representation of the throughput/latency of the code under test. I find them most useful as a relative measure when I refactor code. So write benchmarks, run them, analyse the results, and do it regularly.