Manak 2.0.0
Quick Guide

Introduction

Let us first take a moment to understand the hierarchical structure that Manak uses. When manak.hpp is included in a project it also includes the master benchmarking suite, which registers all the benchmark cases. This master suite is a special case of a Manak suite, which may have more Manak suites or cases as its children. A benchmark case, on the other hand, is a leaf node in this tree (or so we assume until parametrized benchmarks come into the picture). Each benchmark case is characterized by a unique name; this name contains the path to the case from the root. This hierarchical structure helps us run specific benchmarks. Now that we know the hierarchical structure, we can introduce the concept of a library into the picture. A benchmark case can contain multiple library entries which are compared against each other. Benchmark cases in Manak can also perform testing work. Check out Manak Hierarchical Structure for more details on the Manak hierarchy.

Now we can start using Manak. The benchmark cases and suites in Manak can be registered either manually or automatically, but in this quick guide we only consider automatic registration. For more information on manual registration check out Manual Registration.

Before we start registering cases, let us understand the timing features of Manak. Time is measured in microseconds. The function runs for the given number of iterations and the average over all iterations is taken as the final result. When the number of iterations is not given, the default is used (which is 10 if not set otherwise). The default can be changed by setting 'MANAK_DEFAULT_ITERATIONS' to the desired value.
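
For example, a minimal sketch (assuming, like the other configuration macros in this guide, that the value needs to be defined before including manak.hpp) -

//! run every case 50 times instead of the default 10
#define MANAK_DEFAULT_ITERATIONS 50
#include <manak/manak.hpp>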

Manak can generate output in 2 different formats, HTML and TXT. A simple HTML output looks like this - Sample HTML output.

The values in brackets are the values added for comparison. For more information on the comparison framework check Comparison Framework. Manak generates HTML output by default, but the user can force Manak to generate TXT output with the '-f' option (which also stands for format :P). Passing the argument '-f TXT' will force Manak to generate TXT output. Check out All About Outputs for more information on output formats.
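
For example, assuming the compiled benchmark executable is named 'bench' (as in the examples later in this guide) -

bench -f TXT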

The usual one file usage will look like this -

#include <manak/manak.hpp>
//! register cases

Manak can generate the main function for you, which runs all your registered cases with the given command line arguments. To create such a main we need to define 'MANAK_AUTO_MAIN'. Manak also needs to initialize some variables such as the module name, master suite etc. This initialization needs to be done only once; for this we need to define 'MANAK_INIT'. This looks foolish in the case of a one file setup, but it plays a major role in a multi file setup. Check out Using Manak Effectively in Multi File Project for more information.

Next we need to provide a module name to Manak. For example, a simple Manak program -

#include <iostream>
#define MANAK_AUTO_MAIN
#define MANAK_INIT
#define MANAK_SIMPLE_MODULE Bench
#include <manak/manak.hpp>
//! register cases

Here, as you can see, the module name is provided with 'MANAK_SIMPLE_MODULE'. The module name is required only where 'MANAK_INIT' is defined (this is important for a multi file setup).

Manak offers 2 benchmarking modules. Check out Using Manak Effectively in Multi File Project for how they can be effectively used to make multi file projects simpler.

The simple benchmarking module can be used when the library name is constant for all the cases registered in that file. For detailed information see Simple Benchmark Module.

To set the module type of a file to simple module, set 'MANAK_SIMPLE_MODULE' to the name of the module.

#define MANAK_SIMPLE_MODULE name

By default this will add all the cases with the library name set to the module name. This name can be changed by setting 'MANAK_BASE_LIBRARY_NAME' to the desired name.
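
For example, a sketch (assuming 'MANAK_BASE_LIBRARY_NAME' takes the name as a plain token, like 'MANAK_SIMPLE_MODULE' does; the name 'base_impl' is only illustrative) -

#define MANAK_SIMPLE_MODULE Bench
//! cases in this file will be registered under the library name 'base_impl' instead of 'Bench'
#define MANAK_BASE_LIBRARY_NAME base_impl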

So let us register our first benchmark case to the master suite.

#include <iostream>
#define MANAK_SIMPLE_MODULE bench
#define MANAK_AUTO_MAIN
#define MANAK_INIT
#include <manak/manak.hpp>
MANAK_AUTO_BENCHMARK_CASE(ForLoops)
{
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
  TEST
  (
  )
}

That's it! This will automatically register the case in the master suite, and MANAK_AUTO_MAIN will generate the main for you. Manak uses C++11 features, so don't forget to compile the code with C++11 support. By default the output will be saved to the file 'benchmark_stat.html' and will look like - Sample HTML output

The TXT output for the above code, in the file 'benchmark_stat.txt' -

######################################################################
#                   Manak C++ Benchmarking Library                    #
#                            Version 1.1.0                            #
#                    Created at Feb 1 13:56:01 2015                   #
######################################################################
Case Name bench
ForLoops 4.6

As you can see, the 'MEASURE' block can be used to time the code inside it. This block does not change the scope. There can be any number of 'MEASURE' blocks in a case, and the result will be the sum of all of them. The 'TEST' block can be used for testing. A 'TEST' block creates a special environment for testing, which may slow down the code inside it; that is why it is not preferred to have a 'MEASURE' block inside a 'TEST' block. Like 'MEASURE', there can be any number of 'TEST' blocks. Remember, 'MANAK_ASSERT_TRUE' or any other test has to be used inside a 'TEST' block to register it. Check out Test Framework for more information on the testing framework.
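
For example, a sketch of a case with two 'MEASURE' blocks and a separate 'TEST' block (the case name and the assertion are only illustrative) -

MANAK_AUTO_BENCHMARK_CASE(TwoLoops)
{
  //! the recorded time is the sum of both MEASURE blocks
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
  MEASURE
  (
    for(size_t i = 0;i < 2000;i++);
  )
  //! assertions only count when placed inside a TEST block
  TEST
  (
    MANAK_ASSERT_TRUE(1 + 1 == 2);
  )
}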

To change the default output file, set 'MANAK_DEFAULT_OUT_FILENAME' to the desired value. Check out Setting Options For Output File for more options. While running the benchmark cases, all the output to the standard output is redirected by default to the file 'benchmark_log.txt'. To change this default, set 'MANAK_REDIRECTION_FILENAME' to the desired value. For more options about redirection check Setting Redirection Options.
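
For example (the filenames below are only illustrative, and it is assumed that both macros take string literals) -

#define MANAK_DEFAULT_OUT_FILENAME "my_stats"
#define MANAK_REDIRECTION_FILENAME "my_log.txt"
#include <manak/manak.hpp>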

If you want to write the main function yourself, check Manually creating main function.

For more time-related features check out All About Timing.

There are 3 numbers associated with any benchmark case: the number of iterations, the tolerance for time based comparison, and the success percentage. Let us understand each one of them. Each benchmark case is run for a certain number of iterations to take the average performance. If the number of iterations is not mentioned (like in the above example) the default is taken. While comparing the new implementation to the old one, each benchmark case is compared against the old stored values; this comparison is done with the tolerance. And now the success percentage: as any case is run for a certain number of iterations, a test may pass for some iterations and fail for others. The success percentage defines the percentage of iterations for which the test should pass. The default success percentage is 100. The success percentage is helpful while running randomized tests.

The defaults -

  • Iterations: 10, can be set with 'MANAK_DEFAULT_ITERATIONS'
  • Tolerance: 10ms, can be set with 'MANAK_DEFAULT_TOLERANCE'
  • Success percentage: 100, can be set with 'MANAK_DEFAULT_SP'
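
For example, these defaults can be overridden in the same way as 'MANAK_DEFAULT_ITERATIONS' shown earlier (the values below are only illustrative) -

//! check All About Timing for the unit expected by the tolerance
#define MANAK_DEFAULT_TOLERANCE 20
#define MANAK_DEFAULT_SP 90
#include <manak/manak.hpp>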

Check out the macros for providing the above mentioned arguments -

  • Iterations alone: MANAK_AUTO_BENCHMARK_CASE_I
  • Tolerance alone: MANAK_AUTO_BENCHMARK_CASE_T
  • Iterations and tolerance: MANAK_AUTO_BENCHMARK_CASE_TI
  • Iterations and success percentage: MANAK_AUTO_BENCHMARK_CASE_IS
  • All: MANAK_AUTO_BENCHMARK_CASE_TIS
MANAK_AUTO_BENCHMARK_CASE_TIS(...)
{
  Code;
}
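
For instance, a sketch using the iterations-only variant (the argument order, case name first and then the iteration count, is an assumption here; see Automatic Registration for the exact form) -

//! run this case for 50 iterations instead of the default
MANAK_AUTO_BENCHMARK_CASE_I(ForLoops, 50)
{
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
}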

Check out Comparison framework Examples for how to save the results and load them at later times for comparison.

Now we will start with 'MANAK_MODULE'.

#define MANAK_AUTO_MAIN
#define MANAK_INIT
#define MANAK_MODULE Bench
#include <manak/manak.hpp>
//! register case B1 for the first library entry
{
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
}

//! register case B1 for the second library entry
{
  MEASURE
  (
    for(size_t i = 0;i < 10000;i++);
  )
  TEST
  (
  )
}

The output of this code is - HTML output. In this output you can see that the different library entries of case 'B1' are compared horizontally in the main table of 'B1'. Check out Comparison Framework of Normal Module for more information on this comparison framework.

For more information on auto benchmark options check Automatic Registration.

To register a function as a benchmark case -

int fun()
{
  Code;
}

MANAK_ADD_BENCHMARK(MANAK_BENCHMARK_CASE(B1, fun));

Remember, the 'MEASURE' block can also be used inside such manually registered functions.
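
For instance, a sketch (the loop is only illustrative) -

int fun()
{
  //! only the code inside the MEASURE block contributes to the measured time
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
  return 0;
}

MANAK_ADD_BENCHMARK(MANAK_BENCHMARK_CASE(B1, fun));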

Parametrized Benchmark Cases

Functions with parameters can be used as parametrized benchmark cases. For example -

int fun(int a, int b)
{
  Code;
}

MANAK_ADD_CASE(MANAK_CREATE_BENCHMARK_WITH_TEMPLATE(B1, fun)->AddArgs(0,0)->AddArgs(1,0)...);

Sample output can be seen HERE.

Here we can see that the benchmark case contains 2 benchmark sub-cases, one corresponding to each parameter set. 'MANAK_CREATE_BENCHMARK_WITH_TEMPLATE' generates a parametrized benchmark case which can hold any number of parameter sets. The function 'AddArgs' adds a new parameter set. Sometimes the number of parameter sets is too large and 'AddArgs' becomes inconvenient; in such cases 'AddCustomArgs' can be used.

#include <iostream>
#include <tuple>
#include <list>
#define MANAK_SIMPLE_MODULE Bench
#define MANAK_AUTO_MAIN
#define MANAK_INIT
#include <manak/manak.hpp>
using namespace std;
int fun(int a, int b)
{
  Code;
}

list<tuple<int, int>> get_args()
{
  list<tuple<int, int>> out;
  for(size_t i = 0;i < 100;i++)
    out.emplace_back(i, i);
  return out;
}

MANAK_ADD_CASE(MANAK_CREATE_BENCHMARK_WITH_TEMPLATE(B1, fun)->AddCustomArgs(get_args));

The function passed to 'AddCustomArgs' must return a list of tuples. The returned tuples will be unfolded and passed to the parametrized benchmark case one by one. For more advanced parametrized benchmark options check Complete guide on Parametrized Benchmarking.

The auto case registration registers the case to the current suite (which happens to be the master suite in the global space). 'MANAK_AUTO_SUITE' registers a new suite under the current suite and sets the newly created suite as the current suite.
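
For example, a sketch of such a layout (the exact 'MANAK_AUTO_SUITE' invocation is assumed here; see Automatic Registration for the precise form) -

MANAK_AUTO_SUITE(Suite1)

MANAK_AUTO_BENCHMARK_CASE(B1)
{
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
}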

Here B1 will be registered under 'Suite1' and 'Suite1' will be registered under the master suite. 'MANAK_AUTO_SUITE_END()' can be used to set the current suite back to the parent. For example -
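
Again, this is a sketch with the macro syntax assumed as above -

MANAK_AUTO_SUITE(Suite1)

MANAK_AUTO_SUITE(Suite2)

MANAK_AUTO_BENCHMARK_CASE(B1)
{
  MEASURE
  (
    for(size_t i = 0;i < 1000;i++);
  )
}

//! return to the parent suite, Suite1
MANAK_AUTO_SUITE_END()

MANAK_AUTO_BENCHMARK_CASE(B2)
{
  MEASURE
  (
    for(size_t i = 0;i < 2000;i++);
  )
}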

In this example B1 will be registered under Suite2, whereas B2 will be registered under Suite1. In the same way Suite2 will be registered under Suite1, and Suite1 under the master suite. This hierarchical structure helps in running specific benchmarks at a time. For more information on auto registration see Automatic Registration.

Manak Groups

As a part of the quick guide check out Tutorial on Using Groups.

Manak Comparison Framework

As a part of the quick guide check out Comparison Framework.

Manak parametrized Benchmarks

As a part of the quick guide check out Complete guide on Parametrized Benchmarking.

Running Specific Cases

Manak provides support for running specific benchmark cases out of all the registered cases. This can be achieved by passing the argument '-r <regex>' to the executable. All the benchmark cases matching the regex will run.

How to write a regex:

  • To access the child of any suite, the "/" token can be used.
  • Manak uses C++11 regular expressions.
  • Any valid regex is supported.

Example - To run a suite named 'B1' under the master suite -

bench -r /B1/.*

To run a suite named 'B2' under B1

bench -r /B1/B2/.*

To run all children of suite B1 (a child of the master suite) whose names start with "Bench" -

bench -r /B1/Bench.*