In today's episode we're going to look at an awesome benchmarking tool from Alexei called Benchfella. It makes it trivial to benchmark a couple of pieces of code against one another, so I wanted to show it off. Let's get started.


Since we're going to be benchmarking, I wanted to pick a good comparison to run. I decided to pit Earmark, which is a pure-Elixir Markdown parser, against Discount.ex, which uses a NIF to delegate to a C Markdown parser.

Now you can use Benchfella as an archive package, in which case it's available to all of your applications, but when I did that I couldn't get the graphs to work. The other, hackier way to use Benchfella is to clone its repo and put your benchmarks directly inside it. We'll do that so I can show you the graph it can output, but I'd expect this workaround to be unnecessary in the future.

So we'll start off by cloning Benchfella:

hub clone alco/benchfella
cd benchfella

Now the first step for us is to remove the built-in benchmarks, so:

rm -fr bench/benchfella
rm bench/string_bench.exs

OK, now that that's done let's pull in our dependencies for earmark and discount, and then I'll walk through using it to benchmark:

diff --git a/mix.exs b/mix.exs
index 85102fe..ff8ce3a 100644
--- a/mix.exs
+++ b/mix.exs
@@ -4,6 +4,7 @@ defmodule Benchfella.Mixfile do
   def project do
     [app: :benchfella,
      version: "0.0.2",
+     deps: deps,
      elixir: ">= 0.15.0 and < 2.0.0"]
@@ -13,4 +14,10 @@ defmodule Benchfella.Mixfile do
   # no deps
   # --alco
+  defp deps do
+    [
+      {:discount, "~> 0.5.4"},
+      {:earmark, "~> 0.1.10"}
+    ]
+  end

Fetch the dependencies and compile them:

mix deps.get
mix deps.compile

Alright, now benchmarks just go in the bench directory, and they have to end in _bench.exs. If they satisfy those criteria, benchfella will run them. Let's make bench/markdown_bench.exs:

defmodule MarkdownBench do
  # A benchmark module has to `use Benchfella`
  use Benchfella

  # File names here are placeholders; use whatever you copy into bench/
  @simple File.read!("bench/simple.md")
  @complex File.read!("bench/complex.md")

  # Then you just name each benchmark and put the code to measure in its do block
  bench "[Earmark] simple", do: Earmark.to_html(@simple)
  bench "[Earmark] complex", do: Earmark.to_html(@complex)

  # Discount.to_html/1 is assumed here to mirror Earmark's API
  bench "[Discount] simple", do: Discount.to_html(@simple)
  bench "[Discount] complex", do: Discount.to_html(@complex)
end

So here we're just going to read in a simple and a complex markdown file. We need to populate those, so I've written up a simple one and I'm using Elixir's readme as the complex one, so we'll copy those in:

cp ~/tmp/ ~/tmp/ bench/

At this point, running the benchmarks is not difficult, although I hope it gets easier. We need to specify where our dependencies are and then we just run mix bench:

elixir -pa _build/dev/lib/earmark/ebin/ -pa _build/dev/lib/discount/ebin/ -S mix bench

So that gives us the time breakdown. It's not at all surprising that the NIF outpaces the pure-Elixir parser. We'll run it one more time (( do it ))
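If you're curious what a benchmark runner is doing for us, here's a rough manual equivalent, just a sketch and not Benchfella's actual implementation (Benchfella is smarter about warmup and about choosing the iteration count automatically): call the function many times with :timer.tc and report the average time per call.

```elixir
# A minimal sketch of what a benchmark runner automates: run a function
# many times and compute the average microseconds per call.
defmodule ManualBench do
  def run(fun, iterations \\ 10_000) do
    {total_us, _result} =
      :timer.tc(fn ->
        Enum.each(1..iterations, fn _ -> fun.() end)
      end)

    total_us / iterations
  end
end

avg = ManualBench.run(fn -> String.upcase("hello world") end)
IO.puts("~#{Float.round(avg, 3)} µs/op")
```

Benchfella layers the nice parts on top of this idea: named benchmarks, stored results per run, and the graphing task we're about to use.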

Now we can just run the bench.graph task and we'll get a nice html page with a graph on it to compare the benchmarks:

elixir -pa _build/dev/lib/earmark/ebin/ -pa _build/dev/lib/discount/ebin/ -S mix bench.graph
open bench/graphs/index.html

So here you can see our benchmarks graphed, where each bar in a given graph is a different run of that benchmark. You can use this dropdown up top to choose whether to use a linear or logarithmic scale.


So that's it for showing off Benchfella. It's a great tool and is pretty useful out of the box. See you soon!