Post has shared content
My lightweight, very simple, single-purpose monad has been sent to boost-dev for community feedback

This monad can hold a fixed variant list of empty, a type R, a lightweight error_type or a heavier exception_type at a space cost of max(20, sizeof(R)+4). Features: 

* Very lightweight on build times and run times, down to zero execution cost and just a four byte space overhead in the best case. Requires a minimum of clang 3.2, GCC 4.7 or VS2015. A quick sample of runtime overhead, min to max opcodes generated by GCC 5.1: 

1 opcodes <= Value transport <= 113 opcodes 

8 opcodes <= Error transport <= 119 opcodes 

22 opcodes <= Exception transport <= 214 opcodes 

4 opcodes <= then() <= 154 opcodes 

5 opcodes <= bind() <= 44 opcodes 

* Just enough monad, nothing more, nothing fancy. Replicates the future API with a fair chunk of the Expected<T> API, so if you know how to use a future you already know how to use this. 

* Enables convenient all-noexcept mathematically verifiable close semantic design, so why bother with Rust anymore? :) 

* Can replace most uses of optional<T>. 

* Deep integration with lightweight future-promise (i.e. async monadic programming) also in this library. 

* Comprehensive unit testing and validation suite. 

* Mirrors noexcept of type R. 

* Type R need not have a default constructor, and need not be movable nor copyable. 

* Works inside an STL container, and type R can be an STL container. 

* No comparison operations nor hashing are provided, deliberately, to keep things simple. 

Documentation page: 

Source code: 

Online wandbox compiler:

Any opinions or thoughts gratefully received, particularly on whether I have the exception safety semantics right (i.e. a throw during move or copy leaves the monad empty). Do you also like the polymorphic bind() and map(), which do different things depending on the parameter type your callable takes? Do you like that by simply changing the callable's parameter type to an rvalue reference you can move state, or is this being too clever? 

This monad will become the basis of lightweight future-promise, which is essentially a "split monad" with the setter interface and getter interface potentially in different system threads. The then(), bind() and map() work as here, but are only triggered at the point the value is set. This effectively makes the lightweight future a "lazy continued monad". 

That lightweight future-promise will then enter AFIO to replace its async_io_op type hopefully in time for the peer review this time next month.

#c++ #c++14 #boostcpp   #boostafio  

Post has shared content
As much as I said only earlier today that my world's simplest C++ monad was going to be obsessive about keeping build times low, and would therefore avoid all metaprogramming, there is one niche area where heavy metaprogramming makes a lot of sense: the monadic bind and map operations. Under the "you only pay for what you use" principle, if you use those monadic programming operations you accept substantial build time costs over doing the same operations manually by hand. If you don't use them, there is a slight cost to the parser, but as it's all template code nothing gets instantiated until demanded.

Let's take a quick example: In my monad, monad<int> can have state int, empty, error_code or exception_ptr.

monad<int> foo;

Let's do a bind on that: 

monad<void> ret1(foo.bind([](int){/* only called if foo contains an int, else a monad of this lambda's return type is created with the propagated error/empty state */}));

monad<int> ret2(foo.bind([](int &&v){ return std::move(v); /* same as before, but we move from foo into ret2 */}));

monad<int> ret3(foo.bind([](error_code e){ /* whoah, this is different! Now only called if foo contains an error code! Always returns same monad as originating monad as any other states get propagated */ }));

monad<int> ret4(foo.bind([](){ /* callables taking no parameters only get called if foo is empty */ }));

monad<void> ret5(foo.bind([](auto){ /* C++ 14 generic lambdas are assumed to be value consuming */ }));

monad<void> ret6(foo.bind([](auto &&){ /* and yes, this supplies an rvalue ref to the value same as always */ }));

monad<void> ret7(foo.bind([](monad<int>){ /* it goes without saying this also works */ }));

This inspection of the parameter which the callable takes, in order to determine whether it is an rvalue ref and what type it is, and thereby effectively overload bind() and map() on the call signature of the supplied function, is to my best knowledge unique to my monad design.

I can get away with this because empty, error_code and exception_ptr are always errors, and any other type is always the fixed value type of the monad. Also, for a monad<T> there is always an implicit conversion into monad<T> from T, error_code and exception_ptr (and T may not itself be error_code nor exception_ptr), so most of the time you rarely construct a monad<T> explicitly but instead let the compiler implicitly construct one for you.

In other words, the fixed function nature of this very simple monad means you need to type out far less code than most other monad libraries require. 

Curious how one inspects the first parameter of some arbitrary, possibly templated, callable type in C++ 14 in a way which works on VS2015, GCC and clang? I had to go ask Stack Overflow myself, in fact, as it is not obvious; the link to wandbox below is a live demo of the callable type deduction code.

All credit for that introspection code goes to T.C. at the question; I just tidied it up and made it generic to any callable type.

#c++ #c++14 #boostcpp   #monad  

Post has shared content
As some may know, I've been working on the world's simplest C++ runtime monad these past two weeks, where most of that effort is being spent on getting it to STL quality with a full conformance and validation unit test suite. The hope is that this will become the official Boost monad, and thereafter the monad proposed for standardisation. Unlike any other C++ monad that I am aware of, this design focuses on:

(i) Absolute minimum possible impact on build times and especially runtime overhead. No memory allocation, not ever.
(ii) Deep integration with forthcoming lightweight future-promise which are essentially just async monads and are similarly lightweight to these monads.
(iii) Deliberate design impurity to make it natural and obvious for C++ programmers to use.

The latter feature may worry Haskell purists, so I'll clarify: by impure I mean that the lightweight monad is fixed variant, so from a monad<T> you get either a T, an error_code, an exception_ptr or empty, i.e. it's a four state monad. That is of course not a proper monad, but it does get you an optional<T> replacement for free. I also ended up calling it monad<T>, which no monad would ever be called (maybe, result, etc., anything but monad), as per the C++ tradition of bastardising implementations of pure features from other languages.

It'll probably go to boost-dev for community feedback early next week, but to give you a taster, I can guarantee the following runtime overheads when using it:

* Space cost of max(20, sizeof(T)+4)

* state known/state unknown x64 opcodes generated:

  * clang 3.7
  37 opcodes <= Value transport <= 59 opcodes
  7 opcodes <= Error transport <= 52 opcodes
  38 opcodes <= Exception transport <= 39 opcodes

  * GCC 5.1
  1 opcodes <= Value transport <= 113 opcodes
  8 opcodes <= Error transport <= 119 opcodes
  22 opcodes <= Exception transport <= 214 opcodes

  * VS2015
  4 opcodes <= Value transport <= 1881 opcodes
  6 opcodes <= Error transport <= 164 opcodes
  1936 opcodes <= Exception transport <= 1946 opcodes

And by guarantee, I really mean guarantee. Those guarantees are empirically tested per commit by the CI by compiling a list of use cases and counting the assembler ops generated by the compiler. If the count exceeds a limit, the commit is failed.

If you like Rust's Result<T> and have gotten used to writing code in that style, then should this monad be accepted by Boost you'll have pretty much the same thing, except even easier to program with and even more lightweight than Rust's Result<T>!

Watch this space for when this monad gets sent for review!

#c++ #c++14 #boostcpp   #cpp   #cpp14  

Post has attachment
As part of publicising my C++ Now 2015 talk two weeks ago, here is part 16 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

16. COUPLING: Consider allowing your library users to dependency inject your dependencies on other libraries

As mentioned earlier, the libraries reviewed overwhelmingly chose to use STL11 over any equivalent Boost libraries, so hardcoded std::thread instead of boost::thread, hardcoded std::shared_ptr over boost::shared_ptr and so on. This makes sense right now as STL11 and Boost are still fairly close in functionality; however, in the medium term there will be significant divergence between Boost and the STL as Boost "gets ahead" of the STL in terms of features. Indeed, one may find oneself needing to "swap in" Boost to test one's code against some future STL pattern which is shortly to be standardised.

Let me put this another way: imagine a near future where Boost.Thread has been rewritten atop of the STL11, enhancing the STL11 threading facilities very substantially with lots of cool features which may not enter the standard until the 2020s. If your library is hardcoded to only use the STL, you may lose out on substantial performance or feature improvements. Your users may clamour to be able to use Boost.Thread with your library. You will then have to add an additional code path for Boost.Thread which replicates the STL11 threading path, probably selectable using a macro and the alternative code paths swapped out with #ifdef. But you still may not be done - what if Boost.Chrono also adds significant new features? Or Boost.Regex? Or any of the Boost libraries now standardised into the STL? Before you know it your config.hpp may look like the one from ASIO which has already gone all the way in letting users choose their particular ASIO configuration, and let me quote a mere small section of it to give an idea of what is involved:

...
// Standard library support for chrono. Some standard libraries (such as the
// libstdc++ shipped with gcc 4.6) provide monotonic_clock as per early C++0x
// drafts, rather than the eventually standardised name of steady_clock.
#if !defined(ASIO_HAS_STD_CHRONO)
# if !defined(ASIO_DISABLE_STD_CHRONO)
#  if defined(__clang__)
#   if defined(ASIO_HAS_CLANG_LIBCXX)
#    define ASIO_HAS_STD_CHRONO 1
#   elif (__cplusplus >= 201103)
#    if __has_include(<chrono>)
#     define ASIO_HAS_STD_CHRONO 1
#    endif // __has_include(<chrono>)
#   endif // (__cplusplus >= 201103)
#  endif // defined(__clang__)
#  if defined(__GNUC__)
#   if ((__GNUC__ == 4) && (__GNUC_MINOR__ >= 6)) || (__GNUC__ > 4)
#    if defined(__GXX_EXPERIMENTAL_CXX0X__)
#     define ASIO_HAS_STD_CHRONO 1
#     if ((__GNUC__ == 4) && (__GNUC_MINOR__ == 6))
#      define ASIO_HAS_STD_CHRONO_MONOTONIC_CLOCK 1
#     endif // ((__GNUC__ == 4) && (__GNUC_MINOR__ == 6))
#    endif // defined(__GXX_EXPERIMENTAL_CXX0X__)
#   endif // ((__GNUC__ == 4) && (__GNUC_MINOR__ >= 6)) || (__GNUC__ > 4)
#  endif // defined(__GNUC__)
#  if defined(ASIO_MSVC)
#   if (_MSC_VER >= 1700)
#    define ASIO_HAS_STD_CHRONO 1
#   endif // (_MSC_VER >= 1700)
#  endif // defined(ASIO_MSVC)
# endif // !defined(ASIO_DISABLE_STD_CHRONO)
#endif // !defined(ASIO_HAS_STD_CHRONO)

// Boost support for chrono.
#if !defined(ASIO_HAS_BOOST_CHRONO)
# if !defined(ASIO_DISABLE_BOOST_CHRONO)
#  if (BOOST_VERSION >= 104700)
#   define ASIO_HAS_BOOST_CHRONO 1
#  endif // (BOOST_VERSION >= 104700)
# endif // !defined(ASIO_DISABLE_BOOST_CHRONO)
#endif // !defined(ASIO_HAS_BOOST_CHRONO)
...

ASIO currently has over 1000 lines of macro logic in its config.hpp with at least twelve different possible combinations, so that is 2 ^ 12 = 4096 different configurations of code paths (note some combinations may not be allowed in the source code, I didn't check). Are all of these tested equally? I actually don't know, but it seems a huge task requiring many days of testing if they are. However there is a far worse problem here: what happens if library A configures ASIO one way and library B configures ASIO a different way, and then a user combines both libraries A and B into the same process?

The answer is that such a combination violates ODR, and therefore is undefined behaviour i.e. it will crash. This makes the ability to so finely configure ASIO much less useful than it could be.

Let me therefore propose something better: allow library users to dependency inject from the outside the choice of whether to use a STL11 dependency or its Boost equivalent. If one makes sure to encapsulate the dependency injection into a unique inline namespace, that prevents violation of ODR, and therefore collision of incompatibly configured library dependencies. If the dependent library takes care to coexist with alternative configurations and versions of itself inside the same process, this approach:

* Forces you to formalise your dependencies (this has a major beneficial effect on design, trust me that your code enormously improves when you are forced to think correctly about this).
* Offers maximum convenience and utility to your library's users.
* Lets you better test your code against multiple (future) STL implementations.
* Gives looser coupling.
* Makes upgrades much easier later on (i.e. less maintenance).

What it won't do:
* Prevent API and version fragmentation.
* Deal with balkanisation (i.e. two configurations of your library are islands, and cannot interoperate).

In short, whether the pros outweigh the cons comes down to your library's use cases, you as a maintainer, and so on. Indeed you might make use of this technique internally for your own needs, but not expose the choice to your library users.

So how does one implement STL dependency injection in C++ 11/14? One entirely valid approach is the ASIO one of a large config.hpp file full of macro logic which switches between Boost and the STL11 for the following header files which were added in C++ 11:

[Table: for each header added in C++ 11, the Boost header and Boost namespace (e.g. boost, boost::atomics for the atomics facilities, or "no equivalent" where Boost has none) against the STL11 header and STL11 namespace.]

At the time of writing, a very large proportion of STL11 APIs are perfectly substitutable with Boost i.e. they have identical template arguments, parameters and type signatures, so all you need to do is to alias either namespace std or namespace boost::? into your own library namespace as follows:

// In config.hpp
namespace mylib { inline namespace MACRO_UNIQUE_ABI_ID {
#ifdef MYLIB_USING_BOOST_RATIO // The external library user sets this
namespace ratio = ::boost;
#else
namespace ratio = ::std;
#endif
} }

// To use inside namespace mylib::MACRO_UNIQUE_ABI_ID, do:
ratio::ratio<2, 1> ...

As much as the above looks straightforward, you will find it quickly multiplies into a lot of work just as with ASIO's config.hpp. You will also probably need to do a lot of code refactoring such that every use of ratio is prefixed with a ratio namespace alias, every use of regex is prefixed with a regex namespace alias and so on. So is there an easier way?

Luckily there is, and it is called APIBind. APIBind takes away a lot of the grunt work in the above, specifically:

* APIBind provides bind files for the above C++ 11 header files which let you bind just the relevant part of namespace boost or namespace std into your namespace mylib. In other words, in your namespace mylib you simply go ahead and use ratio<N, D> with no namespace prefix, because ratio<N, D> has been bound directly into your mylib namespace for you. APIBind's bind files essentially work as follows:

// In header <ratio> the API being bound
namespace std { template <intmax_t N, intmax_t D = 1> class ratio; }

// Ask APIBind to bind ratio into namespace mylib
#define BOOST_STL11_RATIO_MAP_NAMESPACE_BEGIN namespace mylib {
#include BOOST_APIBIND_INCLUDE_STL11(bindlib, std, ratio) // If you replace std with boost, you bind boost::ratio<N, D> instead.

// Effect on namespace mylib
namespace mylib { template<intmax_t _0, intmax_t _1 = 1> using ratio = ::std::ratio<_0, _1>; }
// You can now use mylib::ratio<N, D> without prefixing. This is usually a very easy find and replace in files operation.

* APIBind provides generation of inline namespaces with an ABI and version specific mangling to ensure different dependency injection configurations do not collide:

// BOOST_AFIO_V1_STL11_IMPL, BOOST_AFIO_V1_FILESYSTEM_IMPL and BOOST_AFIO_V1_ASIO_IMPL all are set to either boost or std in your config.hpp

// Note the last bracketed item is marked inline. On compilers without inline namespace support this bracketed item is ignored.

// From now on, instead of manually writing namespace boost { namespace afio { and boost::afio, instead do:

struct foo;

// Reference struct foo from the global namespace:
BOOST_AFIO_V1_NAMESPACE::foo
// Alias hard version dependency into mylib
namespace mylib { namespace afio = BOOST_AFIO_V1_NAMESPACE; }

* APIBind also provides boilerplate for allowing inline reconfiguration of a library during the same translation unit such that the following "just works":

// test_all_multiabi.cpp in the AFIO unit tests
// A copy of AFIO + unit tests completely standalone apart from Boost.Filesystem
#include "test_all.cpp"

// A copy of AFIO + unit tests using Boost.Thread, Boost.Filesystem and Boost.ASIO
// ASIO_STANDALONE undefined
#include "test_all.cpp"

In other words, you can reset the configuration macros and reinclude afio.hpp to generate a new configuration of AFIO as many times as you like within the same translation unit. This allows header only library A to require a different configuration of AFIO than header only library B, and it all "just works". As APIBind is currently lacking documentation, I'd suggest you review the C++ Now 2015 slides on the topic until proper documentation turns up. The procedure is not hard, and you can examine for a working example of it in action. Do watch out for the comments marking the stanzas which are automatically generated by scripting tools in APIBind; writing those by hand would be tedious.

Presentation slides:

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14

Post has attachment
As part of publicising my C++ Now 2015 talk last week, here is part 15 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

15. BUILD: Consider defaulting to header only, but actively manage facilities for reducing build times

Making your library header only is incredibly convenient for your users - they simply drop in a copy of your project and get to work, no build system worries. Hence most Boost libraries and many C++ libraries are header only capable, often header only default. A minority are even header only only.

One thing noticed in the library review is just how many of the new C++ 11/14 libraries are header only only, and whilst convenient I think library authors should and moreover can do better. For some statistics to put this in perspective, proposed Boost.AFIO v1.3 provides a range of build configurations for its unit tests:

Header only
Precompiled header only (default)
Precompiled not header only (library implementation put into a shared library)
Precompiled header only with link time optimisation
Build flags                                            | Microsoft Windows 8.1 x64 with Visual Studio 2013 | Ubuntu 14.04 LTS Linux x64 with GCC 4.9 and gold linker | Ubuntu 14.04 LTS Linux x64 with clang 3.4 and gold linker
Debug header only                                      | 7m17s  | 12m0s  | 5m45s
Debug precompiled header only                          | 2m10s  | 10m26s | 5m46s
Debug precompiled not header only                      | 0m55s  | 3m53s  | asio failure
Release precompiled header only                        | 2m58s  | 9m57s  | 8m10s
Release precompiled not header only                    | 1m10s  | 3m22s  | asio failure
Release precompiled header only link time optimisation | 7m30s  | 13m0s  | 8m11s

These are build times on a single core of a 3.9GHz i7-3770K computer. I think the results speak for themselves, and note that AFIO is only 8k lines with not much metaprogramming.

The approaches for improving build times for your library users are generally as follows, and in order of effect:

1. Offer a non-header only build configuration

Non-header build configurations can offer build time improvements of x4 or more, so these are always the best bang for your buck. Here is how many Boost libraries offer both header only and non-header only build configurations by using something like this in their config.hpp:

// If we are compiling not header only
#if (defined(BOOST_AFIO_DYN_LINK) || defined(BOOST_ALL_DYN_LINK)) && !defined(BOOST_AFIO_STATIC_LINK)

# if defined(BOOST_AFIO_SOURCE)                // If we are compiling the library binary
#  define BOOST_AFIO_DECL BOOST_SYMBOL_EXPORT    // Mark public symbols as exported from the library binary
#  define BOOST_AFIO_BUILD_DLL                   // Tell code we are building a DLL or shared object
# else
#  define BOOST_AFIO_DECL BOOST_SYMBOL_IMPORT    // If not compiling the library binary, mark public symbols as imported from the library binary
# endif
#else                                          // If we are compiling header only
# define BOOST_AFIO_DECL                         // Do no markup of public symbols
#endif // building a shared library

// Configure Boost auto link to get the compiler to auto link your library binary
#if !defined(BOOST_AFIO_SOURCE) && !defined(BOOST_ALL_NO_LIB) && \
    !defined(BOOST_AFIO_NO_LIB)

#define BOOST_LIB_NAME boost_afio

// tell the auto-link code to select a dll when required:
#if defined(BOOST_ALL_DYN_LINK) || defined(BOOST_AFIO_DYN_LINK)
#define BOOST_DYN_LINK
#endif

#include <boost/config/auto_link.hpp>

#endif  // auto-linking disabled

#if BOOST_AFIO_HEADERS_ONLY == 1                              // If AFIO is headers only
# define BOOST_AFIO_HEADERS_ONLY_FUNC_SPEC inline               // Mark all functions as inline
# define BOOST_AFIO_HEADERS_ONLY_MEMFUNC_SPEC inline            // Mark all member functions as inline
# define BOOST_AFIO_HEADERS_ONLY_VIRTUAL_SPEC inline virtual    // Mark all virtual member functions as inline virtual
// GCC gets upset if inline virtual functions aren't defined
# ifdef BOOST_GCC
#  define BOOST_AFIO_HEADERS_ONLY_VIRTUAL_UNDEFINED_SPEC { BOOST_AFIO_THROW_FATAL(std::runtime_error("Attempt to call pure virtual member function")); abort(); }
# else
#  define BOOST_AFIO_HEADERS_ONLY_VIRTUAL_UNDEFINED_SPEC =0;
# endif
#else                                                         // If AFIO is not headers only
# define BOOST_AFIO_HEADERS_ONLY_FUNC_SPEC extern BOOST_AFIO_DECL  // Mark all functions as extern dllimport/dllexport
# define BOOST_AFIO_HEADERS_ONLY_MEMFUNC_SPEC                      // Mark all member functions with nothing
# define BOOST_AFIO_HEADERS_ONLY_VIRTUAL_SPEC virtual              // Mark all virtual member functions as virtual (no inline)
# define BOOST_AFIO_HEADERS_ONLY_VIRTUAL_UNDEFINED_SPEC =0;        // Mark all pure virtual member functions as pure virtual as normal
#endif

This looks a bit complicated, but isn't really. Generally you will mark up those classes and structs you implement in a .ipp file (this being the file implementing the APIs declared in the header) with BOOST_AFIO_DECL, functions with BOOST_AFIO_HEADERS_ONLY_FUNC_SPEC, all out-of-class member functions (i.e. those not implemented inside the class or struct declaration) with BOOST_AFIO_HEADERS_ONLY_MEMFUNC_SPEC, all virtual member functions with BOOST_AFIO_HEADERS_ONLY_VIRTUAL_SPEC and append to all unimplemented virtual member functions BOOST_AFIO_HEADERS_ONLY_VIRTUAL_UNDEFINED_SPEC. This inserts the correct markup to generate both optimal header only and optimal non header only outcomes.

2. Precompiled headers

You probably noticed in the table above that precompiled headers gain nothing on clang, +13% on GCC and +70% on MSVC. Those percentages vary according to source code, but I have found them fairly similar across my own projects - on MSVC, precompiled headers are a must have on what is already a much faster compiler than any of the others.

Turning on precompiled headers in Boost.Build is easy:

cpp-pch afio_pch : afio_pch.hpp : <include>. ;
And now simply link your program to afio_pch to enable. If you're on cmake, you definitely should check out

3. extern template your templates with their most common template parameters in the headers, and force instantiate those same common instances into a separate static library

The following demonstrates the technique:

// Header.hpp
template<class T> struct Foo
{
  T v;
  inline Foo(T _v);
};
// Definition must be made outside struct Foo for extern template to have effect
template<class T> inline Foo<T>::Foo(T _v) : v(_v) { }

// Inhibit automatic instantiation of struct Foo for common types
extern template struct Foo<int>;
extern template struct Foo<double>;

// Source.cpp
#include "Header.hpp"
#include <stdio.h>

// Force instantiation of struct Foo with common types. Usually compiled into
// a separate static library as bundling it into the main shared library can
// introduce symbol visibility problems, so it's easier and safer to use a static
// library
template struct Foo<int>;
template struct Foo<double>;

int main(void)
{
  Foo<long> a(5);   // Works!
  Foo<int> b(5);    // Symbol not found if not force instantiated above
  Foo<double> c(5); // Symbol not found if not force instantiated above
  printf("a=%ld, b=%d, c=%lf\n", a.v, b.v, c.v);
  return 0;
}

The idea behind this is to tell the compiler to not instantiate common instantiations of your template types on demand for every single compiland as you will explicitly instantiate them exactly once elsewhere. This can give quite a bump to build times for template heavy libraries.

Presentation slides:

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14

Post has attachment
As part of publicising my C++ Now 2015 talk this week, here is part 14 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

14. DESIGN: Consider making (more) use of ADL C++ namespace composure as a design pattern

Most C++ programmers are aware of C++ template policy based design. This example is taken from

#include <iostream>
#include <string>
template <typename OutputPolicy, typename LanguagePolicy>
class HelloWorld : private OutputPolicy, private LanguagePolicy
{
    using OutputPolicy::print;
    using LanguagePolicy::message;
public:
    // Behaviour method
    void run() const
    {
        // Two policy methods
        print(message());
    }
};
class OutputPolicyWriteToCout
{
protected:
    template<typename MessageType>
    void print(MessageType const &message) const { std::cout << message << std::endl; }
};
class LanguagePolicyEnglish
{
protected:
    std::string message() const { return "Hello, World!"; }
};
class LanguagePolicyGerman
{
protected:
    std::string message() const { return "Hallo Welt!"; }
};
int main()
{
    /* Example 1 */
    typedef HelloWorld<OutputPolicyWriteToCout, LanguagePolicyEnglish> HelloWorldEnglish;
    HelloWorldEnglish hello_world;
    hello_world.run(); // prints "Hello, World!"
    /* Example 2 
     * Does the same, but uses another language policy */
    typedef HelloWorld<OutputPolicyWriteToCout, LanguagePolicyGerman> HelloWorldGerman;
    HelloWorldGerman hello_world2;
    hello_world2.run(); // prints "Hallo Welt!"
}

This works very well when (a) your policy implementations fit nicely into template types and (b) the number of policy taking template types is reasonably low (otherwise you'll be doing a lot of typing, as any change to the policy design requires modifying every single instantiation of the policy taking template types). Another problem with policy based design is that it generates a lot of template instantiations, and that is bad because compilers often cannot do constant time type lookups and instead have linear or worse type lookups, so instantiating tens of millions of types is always going to compile, and sometimes link, a lot slower than millions of types.

Consider instead doing an ADL based namespace composure design pattern which is just a different way of doing policy based design. It can be highly effective in those niches where the traditional policy taking template approach falls down. Here is the same program above written using ADL namespace composure:

#include <iostream>
#include <string>

template<typename MessageType>
void print(MessageType const &message)
{
  std::cout << message << std::endl;
}
namespace HelloWorld
{
  template<class T> void run(T v)
  {
    print(message(v));  // Cannot instantiate message() nor print() until T is known
  }
}
namespace LanguagePolicyEnglish
{
  struct tag {};
  template<class T> std::string message(T) { return "Hello, World!"; }
}
namespace LanguagePolicyGerman
{
  struct tag {};
  template<class T> std::string message(T) { return "Hallo Welt!"; }
}
namespace LanguagePolicyDefault
{
  struct tag {};
  using LanguagePolicyGerman::message;
}
int main()
{
  /* Example 1 */
  {
    using namespace LanguagePolicyEnglish;
    using namespace HelloWorld;
    run(tag()); // prints "Hello, World!"
    // This works because HelloWorld::run()'s message() resolves inside these
    // braces to LanguagePolicyEnglish::message() in the same namespace as
    // struct tag thanks to argument dependent lookup
  }

  /* Example 2
  * Does the same, but uses another language policy */
  {
    using namespace LanguagePolicyGerman;
    using namespace HelloWorld;
    run(tag()); // prints "Hallo Welt!"
    // Whereas HelloWorld::run()'s message() now resolves inside these
    // braces to LanguagePolicyGerman::message()
  }

  /* Example 3 */
  {
    using namespace LanguagePolicyDefault;
    using namespace HelloWorld;
    run(tag()); // prints "Hallo Welt!"
    // Tries to find message() inside namespace LanguagePolicyDefault,
    // which finds message aliased to LanguagePolicyGerman::message()
  }
  return 0;
}

The first example instantiates five types to be thence considered during global type lookup, so let's say it has cost five to future code lookups. The second example instantiates no types at all at global lookup scope, so it has cost zero to future code lookups because all the types are declared inside namespaces normally not considered during global type lookups. The second example may also require less refactoring in the face of changes than the traditional form.

The above pattern is in fact entirely C++ 03 code and uses no C++ 11. However, template aliasing in C++ 11 makes the above pattern much more flexible. Have a look at for examples of this ADL invoked namespace composure design pattern.

Presentation slides:

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14

Post has attachment
As part of publicising my C++ Now 2015 talk this week, here is part 13 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

13. CONVENIENCE: Consider creating a status dashboard for your library with everything you need to know shown in one place

I like all-in-one-place software status dashboards where with a quick glance one can tell if there is a problem or not. I feel it makes it far more likely that I will spot a problem quickly if it is somewhere I regularly visit, and for that reason I like to mount my status dashboard at the front of my library docs and on my project's github Readme:

* Front of my library docs:
* Project's github Readme (bottom of page):

Implementing these is ridiculously easy: it's a table in standard HTML which github markdown conveniently will render as-is for me, and you can see its source markdown/HTML at The structure is very simple, columns for OS, Compiler, STL, CPU, Build status, Test status and with three badges in each status row, one each for header only builds, static library builds, and shared DLL builds.

Keen eyes may note that the latter majority of that HTML looks automatically generated, and you would be right. The python script at has a matrix of test targets configured on my Jenkins CI at and it churns out HTML matching those. An alternative approach is which will parse a Jenkins CI test grid from a Matrix Build configuration into a collapsed space HTML table which fits nicely onto github. If you also want your HTML/markdown dashboard to appear in your BoostBook documentation, the script at with the XSLT at should do a fine job.

All of the above dashboarding is fairly Jenkins centric, so what if you just have Travis + Appveyor? I think Boost.DI has it right by encoding a small but complete status dashboard into its BoostBook docs and github, so examine:

* Front page of library docs (underneath the table of contents):
* Project's github Readme (bottom of page, look for the badges):

As a purely personal thing, I'd prefer the line of status badges to come before the table of contents, so that I am much more likely to see it when jumping in and to notice when something is red that shouldn't be. But each library author will have their own preference.

Finally, I think that displaying status summaries via badges like this is another highly visible universal mark of software quality. It shows that the library author cares enough to publicly show the current state of their library. Future tooling by Boost which dashboards Boost libraries and/or ranks libraries by a quality score will almost certainly find the application specific ids for Travis, Appveyor, Coveralls etc by searching any in the github repo for status badges, so by including status badges in your github you can guarantee that such Boost library ranking scripts will work out of the box with no additional effort by you in the future.

Presentation slides:

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14

Post has attachment
As part of publicising my C++ Now 2015 talk this week, here is part 12 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

12. CONVENIENCE: Consider having Travis send your unit test code coverage results to

There is quite a neat web service called (free for open source projects) which graphically displays unit test line coverage in a pretty colour coded source code browser UI. You also get a badge which shows what percentage of your code is covered. It might sound like a bit of a gimmick, but for quickly visualising what you haven't covered when you thought you had, it's very handy. Also, if you hook coveralls into your github using travis, coveralls will comment on your pull requests and commits whether your test coverage has risen or fallen, and that can be more than useful: when you send in a commit and an unexpected catastrophic fall in coverage occurs, that probably means you just committed buggy code.

Anyway, firstly take a look at these libraries which use and decide if you like what you see:


Assuming you are now convinced, you first need travis working. You can use coveralls without travis, but it's a one click enable with travis and github, so we'll assume you've done that. Your next problem will be getting travis to calculate line coverage for you, and to send the results to coveralls.

There are two approaches to this, and we'll start with the official one. Firstly you'll need a coveralls API key securely encoded into travis; see this page for how. Next have a look at, with the key line being:

  - if [ "${TRAVIS_BRANCH}" == "cpp14" ] && [ "${VARIANT}" == "coverage" ]; then (sudo pip install requests[security] cpp-coveralls && coveralls -r . -b test/ --gcov /usr/bin/${GCOV} --repo-token c3V44Hj0ZTKzz4kaa3gIlVjInFiyNRZ4f); fi

This makes use of the coveralls c++ tool at to do the analysis, and you'll also need to adjust your Jamfile as per with some variant addition like:

extend-feature variant : coverage ;
compose <variant>coverage :
    <cxxflags>"-fprofile-arcs -ftest-coverage" <linkflags>"-fprofile-arcs"
    ;

... which gets the gcov files to be output when the unit tests are executed.

That's the official way, and you should try that first. However, I personally couldn't get the above working, though admittedly when I implemented coveralls support a good two years ago I spent a large chunk of the time fighting the tooling, so I eventually gave up and wrote my own coveralls coverage calculator, partially borrowed from others. You can see mine at, where you will note that I inject the -fprofile-arcs etc. arguments into b2 via its cxxflags from the outside. I then invoke a shell script at

# Adapted from
# which itself was adapted from

if [ 0 -eq $(find -iname "*.gcda" | wc -l) ]; then
  exit 0
fi

gcov-4.8 --source-prefix $1 --preserve-paths --relative-only $(find -iname "*.gcda") 1>/dev/null || exit 0

cat >coverage.json <<EOF
{
  "service_name": "travis-ci",
  "service_job_id": "${TRAVIS_JOB_ID}",
  "run_at": "$(date --iso-8601=s)",
  "source_files": [
EOF

for file in $(find * -iname '*.gcov' -print | egrep '.*' | egrep -v 'valgrind|SpookyV2|bindlib|test'); do
  FILEPATH=$(echo ${file} | sed -re 's%#%\/%g; s%.gcov$%%')
  echo Reporting coverage for $FILEPATH ...
  cat >>coverage.json <<EOF
    {
      "name": "$FILEPATH",
      "source": $(cat $FILEPATH | python test/),
      "coverage": [$(tail -n +3 ${file} | cut -d ':' -f 1 | sed -re 's%^ +%%g; s%-%null%g; s%^[#=]+$%0%;' | tr $'\n' ',' | sed -re 's%,$%%')]
    },
EOF
done

#cat coverage.json
mv coverage.json coverage.json.tmp
cat >coverage.json <(head -n -1 coverage.json.tmp) <(echo -e "    }\n  ]\n}")
rm *.gcov coverage.json.tmp

#head coverage.json
curl -F json_file=@coverage.json

This manually invokes gcov to convert the gcda files into a unified coverage dataset. I then use egrep '.*' to include everything and egrep -v to exclude anything matching the pattern, i.e. all the stuff not in the actual AFIO library. You'll note I build a JSON fragment as I go into the coverage.json temporary file, and the coverage is generated by chopping up the per line information into a very long string matching the coveralls JSON specification as per its API docs. Do note the separate bit of python called to convert the C++ source code into encoded JSON text: I had some problems with UTF-8 in my source code, and forcing it through an ISO-8859 JSON string encode made coveralls happy. I then push the JSON to coveralls using curl. All in all a very blunt instrument, and essentially doing exactly the same thing as the official C++ coveralls tool now does, but you may find the manual method useful if the official tool proves too inflexible for your needs.

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14

Post has attachment
As part of publicising my C++ Now 2015 talk this week, here is part 11 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

11. PORTABILITY: Consider not doing compiler feature detection yourself

Something extremely noticeable about nearly all the reviewed C++ 11/14 libraries is that they manually do compiler feature detection in their config.hpp, usually via old fashioned compiler version checking. This tendency is unsurprising: the number of potential C++ compilers your code usually needs to handle has essentially shrunk to three, unlike the dozen common compilers implementing the 1998 C++ standard, and the chances are very high that three will remain the upper bound long into the future. This makes compiler version checking a lot more tractable than, say, fifteen years ago.

However, C++ 1z is expected to provide a number of feature detection macros via the work of SG-10, and GCC and clang already partially support these, especially in very recent compiler releases. To fill in the gaps in older editions of GCC and clang, and indeed MSVC at all, you might consider making use of the header file at which provides the following SG-10 feature detection macros on all versions of GCC, clang and MSVC:

__cpp_exceptions - Whether C++ exceptions are available
__cpp_rtti - Whether C++ RTTI is available

The advantage of using these SG-10 macros in C++ 11/14 code is threefold:

1. It should be future proof.
2. It's a lot nicer than testing compiler versions.
3. It expands better if a fourth C++ compiler suddenly turns up.

Why use the header file instead of doing it by hand?

1. Complete compiler support for GCC, clang and MSVC all versions.
2. Updates in compiler support will get reflected into cpp_feature.h for you.
3. You benefit from any extra compilers added automatically.
4. If you're using Boost.APIBind you automatically get cpp_feature.h included for you as soon as you include any APIBind header file.

Problems with cpp_feature.h:

1. No support for detecting STL library feature availability. One can do this somewhat with GCC as it always pairs to a libstdc++ version, and of course one can do this for MSVC. However clang pairs to whatever is the latest STL on the system, plus GCC combined with libc++ is becoming increasingly common on Linux. In short you are on your own for STL library feature detection as I am unaware of any easy way to abstract this without the SG-10 library feature detection facilities built into the compiler.

Incidentally Boost.APIBind wraps these SG-10 feature macros into Boost.Config compatible macros in which would be included, as with Boost, using "boost/config.hpp". You can therefore if you really want use the Boost feature detection macros instead, even without Boost being present.

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14

Post has attachment
As part of publicising my C++ Now 2015 talk next week, here is part 10 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

10. DESIGN/QUALITY: Consider breaking up your testing into per-commit CI testing, 24 hour soak testing, and parameter fuzz testing

When a library is small, you can generally get away with running all tests per commit, and as that is easiest, that is usually what one does.

However as a library grows and matures, you should really start thinking about categorising your tests: quick ones suitable for per-commit testing, long ones suitable for 24 hour soak testing, and parameter fuzz testing, whereby a fuzz tool tries executing your functions with input deliberately designed to exercise unusual code path combinations. The order of these categories generally reflects the maturity of a library, so if a library's API is still undergoing heavy refactoring the second and third categories aren't so cost effective. I haven't mentioned the distinction between unit testing, functional testing and integration testing here, as I personally think that distinction not useful for libraries mostly developed in a person's free time. Due to lack of resources, and the fact we all prefer to develop instead of test, one tends to fold unit, functional and integration testing into a single amorphous set of tests which don't strictly delineate as really they should, and instead of proper unit testing one tends to substitute automated parameter fuzz testing, which really isn't the same thing but ends up covering similar enough ground to make do.

There are two main techniques to categorising tests, and each has substantial pros and cons.

The first technique is to tag tests in your test suite with keyword tags such as "ci-quick", "ci-slow", "soak-test" and so on. The unit test framework then lets you select at execution time which set of tags you want. This sounds great, but there are two big drawbacks. The first is that each test framework has its own way of doing tags, and these are invariably not compatible, so if you have a switchable Boost.Test/CATCH/Google Test generic test code setup then you'll have a problem with the tagging. One nasty but portable workaround I use is to include the tag in the test name and then use a regex test selector string on the command line; this is why I have categorised slashes in the test names exampled in the section above, so I can select tests by category via their name. The second drawback is that tests often end up internally calling some generic implementation with different parameters, and you have to spell out many sets of parameters in individual test cases when one's gut feeling is that those parameters really should be fuzz variables directly controlled by the test runner. Most test frameworks support passing variables into tests from the command line, but again this varies strongly across test frameworks in a way that is hard to abstract in generic test code, so you end up hard coding various sets of variables, one per test case.

The second technique is a hack, but a very effective one. One simply parameterises tests with environment variables, and then the code calling the unit test program can configure special behaviour by setting environment variables before each test iteration. This technique is especially valuable for converting per-commit tests into soak tests, because you simply set an environment variable meaning ITERATIONS to something much larger, and now the same per-commit tests are magically transformed into soak tests. Another major use case is to reduce iterations when you are running under valgrind, or even just on a very slow ARM dev board. The big drawback here is self deception: just iterating per-commit tests a lot more does not a proper soak test suite make, and one can fool oneself into believing your code is highly stable and reliable when it is really only highly stable and reliable at running the per-commit tests, which obviously it always will be because you run those exact same patterns per commit, so those are always the use patterns which will behave the best. Boost.AFIO is 24 hour soak tested on its per-commit tests, and yet I have been more than once surprised at segfaults caused by someone simply doing operations in a different order than the tests did them :(

Regarding parameter fuzz testing, there are a number of tools available for C++, some better or more appropriate to your use case than others. The classic is of course, though you'll need their ABI Compliance Checker working properly first, which has become much easier for C++ 11 code since they recently added GCC 4.8 support (note that GCC 4.8 still has incomplete C++ 14 support). You should combine this with an executable built with, as a minimum, the address and undefined behaviour sanitisers. I haven't played with this tool yet with Boost.AFIO, though it is very high on my todo list as I have very little unit testing in AFIO (only functional and integration testing), and fuzz testing of my internal routines would be an excellent way of implementing the comprehensive exception safety testing which I am also missing (and feel highly unmotivated to implement by hand).

#cpp  #cplusplus #cppnow   #cppnow2015   #c++ #boostcpp   #c++11 #c++14