PyDays 2017

PyDays 2017 was Austria's first conference dedicated to the Python programming language. It took place on May 5 and 6 and was graciously hosted by the Linuxwochen Wien at FH Technikum in Vienna. It was great on many levels: more than 20 talks, new people excited about Python and where it's headed, and plenty of interesting hallway conversations.

I helped out with the organization of the conference, mostly taking care of catering. We (the organization team) are very happy with how everything went. The atmosphere was welcoming and open, the quality of the talks and workshops was very good, attendance was great, and people seemed to have a really good time. From an organizational perspective, the talks mostly stayed on schedule, the audio/video hardware worked fine, and the PyDays booth was well-received. Giving out snacks, coffee, soda, and cake during the breaks also worked out nicely. The feedback, both at and after the conference, was very positive.

In addition to co-organizing the conference, I also gave a talk about Dask, a Python library for parallel computing. It provides data structures similar to NumPy arrays or Pandas DataFrames that process operations in parallel and scale to out-of-memory datasets. Dask is exciting because it provides the analytical power and familiar mental model of DataFrames, but handles hundreds of gigabytes of data and allows seamless scaling from a single laptop to a computing cluster. The talk was well-received, and there was a lot of interest at the end and in the hallway afterwards. The slides are online if you're interested.
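
To give a flavor of the DataFrame-style API, here's a minimal sketch (the file pattern and column names are made up):

import dask.dataframe as dd

# Looks like Pandas, but reads a set of CSV files that together may
# exceed RAM; Dask splits the work into partitions.
df = dd.read_csv("measurements-*.csv")

# Operations build a lazy task graph instead of executing immediately.
mean_temp = df.groupby("station").temperature.mean()

# compute() executes the graph, using multiple local cores by default.
print(mean_temp.compute())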

Finally, I'd like to thank my co-organizers, Claus, Kay, Helmut and Sebastian, as well as the people who helped out during the conference, for all their great work to make the PyDays a success. I'd also like to thank our sponsors: the Python Software Foundation, the Python Software Verband e.V., T-Mobile, UBIMET, and Jetbrains.

Overall it was great to see so much community interest in Python. If you're passionate about Python as well, join us on the Python Austria Slack and our meetups!

GCC compatibility: inline namespaces and ABI tags

Keeping libraries binary-compatible with old versions is hard. Recently, GCC was in the unenviable situation of having to switch its std::string implementation.[1] GCC used inline namespaces and ABI tags to minimize the extent of breakage and to ensure that old and new versions could only be combined in a safe way. Here, we'll have a look at those mitigation techniques: what they are and how they work.

First, some background: GCC used to have a copy-on-write implementation for std::string. However, C++11 does not allow this anymore because of new iterator and reference invalidation rules. So, GCC 5.1 introduced a new implementation of std::string. The new version was not binary-compatible with the old one: exchanging strings between old (pre-5.1) and new code would crash. We say the application binary interface (ABI) changed.
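
To see the conflict, consider this sketch (my illustration, not code from GCC):

#include <string>

int main() {
    std::string s = "hello";
    std::string copy = s;      // a COW string shares one buffer here
    const char* p = s.data();  // p points into the shared buffer
    s[0] = 'H';                // non-const operator[]: a COW string must
                               // detach s onto a fresh buffer so that copy
                               // stays "hello", but then p no longer points
                               // into s, which C++11's invalidation rules
                               // for operator[] forbid
    return 0;
}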

To understand the consequences of this, let's look at several scenarios:

  • A program uses only code compiled with GCC < 5.1: only old strings, works
  • A program uses only code compiled with GCC >= 5.1: only new strings, works
  • A program mixes code compiled with different GCC versions: both types of strings exist in the program, and it can crash.

Let's dig into the last bullet point. When would it crash? Whenever "new" code accesses an "old" string or vice versa. Here are some examples where f() is compiled with an older GCC version and called from "new" code:

void f(int i);                 // (a) safe, no std::string involved
void f(const std::string& s);  // (b) will crash when f accesses s
std::string f();               // (c) will crash when the returned string is used

Given how common std::string is, such crashes would happen frequently if "old" and "new" code were combined. Unfortunately, it's surprisingly easy to end up with a program with some pre-5.1 parts and some newer ones. It's sufficient to link a "new" executable to an "old" library. Given the bad consequences, the GCC developers needed to solve this.

The solution is to change the symbol names of the GCC 5.1 std::string and all functions using it. Because the linker uses symbol names to resolve function calls into libraries, this would cause link-time errors for cases (b) and (c) while (a) would still work. Exactly the intended behavior.

How could this be done? The mechanism that converts C++ names into symbol names is called name mangling. The generated symbol names contain information about namespaces, function names, argument types, etc. Putting the new std::string into a separate namespace would change the symbol names of all functions accepting std::string as argument. But that's crazy—std::string needs to be in namespace std, right?

The solution for this is inline namespaces. All classes, functions, and templates declared in an inline namespace are automatically imported into the parent namespace. Their mangled name still references the original location, though.
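
As a small illustration (hypothetical namespace lib, not GCC code):

namespace lib {
    inline namespace v2 {
        void f();  // declared in lib::v2
    }
}

void g() {
    lib::f();  // OK: f is visible as if it were declared directly in lib,
               // but it mangles as _ZN3lib2v21fEv, i.e. as lib::v2::f()
}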

Here's what GCC 5.1 does:

namespace std {
    inline namespace __cxx11 {
        template<typename _CharT, ...> class basic_string;
        typedef basic_string<char>      string;
    }
}

Looking at f(const std::string& s), what would the symbol names be when compiled with older and newer GCC versions?

  • Older GCC: _Z1fRKSs, which decodes[2] to f(std::basic_string<char, ...> const&)
  • GCC 5.1: _Z1fRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE, which decodes to f(std::__cxx11::basic_string<char, ...> const&)

Indeed, the symbol names are different and the linker would give an error if GCC 5.1 code called f(const std::string &s) from a library compiled with an older GCC version. This solves the compatibility problem for all functions taking std::string as argument.

One problem remains: case (c) from above, std::string f(). The return value type of a function is not part of its mangled name.[3] Thus, the symbol name wouldn't change for functions that return std::string, which could still lead to runtime crashes.

GCC 5.1 solves this with the use of ABI tags. From the documentation:

The abi_tag attribute can be applied to a function, variable, or class declaration. It modifies the mangled name of the entity to incorporate the tag name, in order to distinguish the function or class from an earlier version with a different ABI

[...]

When a type involving an ABI tag is used as the type of a variable or return type of a function where that tag is not already present in the signature of the function, the tag is automatically applied to the variable or function.

GCC applies the attribute __abi_tag__("cxx11") to the std::__cxx11 namespace. This affects all classes therein, including the new version of std::string. The ABI tag propagates to all functions that return a string and changes their symbol name.
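
The same attribute is available to user code; here's a sketch (the struct S, the tag "mytag", and f are made up):

// The tag marks S as ABI-incompatible with an older version of S.
struct __attribute__((abi_tag("mytag"))) S { };

S f();  // "mytag" appears nowhere in f's parameters, so the tag propagates
        // to f's mangled name, which demangles to f[abi:mytag]()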

Let's look at the symbol names for std::string f() for different compiler versions:

  • Older GCC: _Z1fv, which decodes[2] to f()
  • GCC 5.1: _Z1fB5cxx11v, which decodes to f[abi:cxx11]()

Again, the symbol name is different, and the "cxx11" ABI tag is applied to std::string f() with GCC 5.1. This completes the second part of GCC's solution for the std::string ABI change.

These two methods to induce symbol name changes for anything using std::string make the migration path to GCC 5.1 much easier. If the program links correctly, it should work and be safe from difficult-to-debug runtime errors. There is more to discuss about the GCC 5.1 changes, such as how libstdc++.so still exports the old std::string implementation for compatibility. But this post has gone on too long already, so let's leave it at that :-)


[1] GCC 5.1 also introduced a new version of std::list. It is handled analogously to std::string.
[2] Decoded using c++filt from the binutils package.
[3] The return value type doesn't need to be part of the symbol name because it doesn't get used for overload resolution.

published October 29, 2016
tags c++

Building External Libraries with Boost Build

I recently ran into the problem of having to use an external library not available for our distribution. Of course, it would be possible to build Debian or RPM packages, deploy them, and then link against those. However, building sanely-behaving library packages is tricky.

The easier way is to include the source distribution in a sub-folder of the project and integrate it into our build system, Boost Build. It seemed prohibitive to create Boost Build rules to build the library from scratch (from *.cpp to *.a/*.so) because the library had fairly complex build requirements. The way to go was to leverage the library's own build system.

Easier said than done. I only arrived at a fully integrated solution after seeking help on the Boost Build Mailing List (thanks Steven, Vladimir). This is the core of it:

path-constant LIBSQUARE_DIR : . ;

make libsquare.a
    : [ glob-tree *.c *.cpp *.h : generated-file.c gen-*.cpp ]
    : @build-libsquare
    ;
actions build-libsquare
{
    ( cd $(LIBSQUARE_DIR) && make )
    cp $(LIBSQUARE_DIR)/libsquare.a $(<)
}

alias libsquare
    : libsquare.a                                        # sources
    :                                                    # requirements
    :                                                    # default-build
    : <include>$(LIBSQUARE_DIR) <dependency>libsquare.a  # usage-requirements
    ;

The first line defines a filesystem constant for the library directory, so the build process becomes independent of the working directory.

The next few lines tell Boost Build about a library called libsquare.a, which is built by calling make. The cp command copies the library to the target location expected by Boost Build ($(<)). glob-tree defines source files that should trigger library rebuilds when changed, taking care to exclude files generated by the build process itself.

The alias command defines a Boost Build target to link against the library. The real magic is the <dependency>libsquare.a directive: it is required because the library build may produce header files used by client code. Thus, build-libsquare must run before compiling any dependent C++ files. Adding libsquare.a to an executable's dependency list won't quite enforce this: it only builds libsquare.a in time for the linking step (*.o → executable), but not for the compilation (*.cpp → *.o). In contrast, the <dependency>libsquare.a directive propagates to all dependent build steps, including the compilation, and induces the required dependencies.
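
For completeness, a target consuming the alias would look something like this (hypothetical main.cpp):

# main.cpp is compiled with the <include> path and only after libsquare.a
# has been built, thanks to the alias's usage-requirements.
exe square-app : main.cpp libsquare ;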

I created an example project on GitHub that demonstrates this. It builds a simple executable relying on a library built via make. The process is fully integrated into the Boost Build framework and triggered by a single call to bjam (or b2).

If you need something along these lines, give this solution a try!

Working with angles (is surprisingly hard)

Due to the periodic nature of angles, especially the discontinuity at 2π / 360°, working with them presents some subtle issues. For example, the angles 5° and 355° are "close" to each other, but are numerically quite different. In a geographic (GIS) context, this often surfaces when working with geometries spanning Earth's date line at -180°/+180° longitude, which frequently requires multiple code paths to obtain the desired result.

I've recently run into the problem of computing averages and differences of angles. The difference should increase linearly; it should be 0 if the angles are equal, negative if the second angle lies to one side of the first, and positive if it lies on the other side (whether that's left or right depends on whether the angles increase in the clockwise or the counter-clockwise direction). Getting that right, and coming up with test cases to prove that it's correct, was quite interesting. As Bishop notes in Pattern Recognition and Machine Learning, it's often simpler to perform operations on angles in a 2D (x, y) space and then transform back to angles using the atan2() function. I've used that for the averaging function; the difference is calculated using modulo arithmetic.

Here's the Python version of the two functions:

import math


def average_angles(angles):
    """Average (mean) of angles

    Return the average of an input sequence of angles. The result is between
    ``0`` and ``2 * math.pi``.
    If the average is not defined (e.g. ``average_angles([0, math.pi])``),
    a ``ValueError`` is raised.
    """

    x = sum(math.cos(a) for a in angles)
    y = sum(math.sin(a) for a in angles)

    if x == 0 and y == 0:
        raise ValueError(
            "The angle average of the inputs is undefined: %r" % angles)

    # To get outputs from -pi to +pi, delete everything but math.atan2() here.
    return math.fmod(math.atan2(y, x) + 2 * math.pi, 2 * math.pi)


def subtract_angles(lhs, rhs):
    """Return the signed difference between angles lhs and rhs

    Return ``lhs - rhs``; the value will be within ``[-math.pi, math.pi)``.
    Both ``lhs`` and ``rhs`` may either be zero-based (within
    ``[0, 2*math.pi]``), or ``-pi``-based (within ``[-math.pi, math.pi]``).
    """

    return math.fmod((lhs - rhs) + math.pi * 3, 2 * math.pi) - math.pi

The code, along with test cases, can also be found in this GitHub Gist. Translating these functions to other languages should be straightforward: sin()/cos()/fmod()/atan2() are pretty ubiquitous.
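
A few sanity checks illustrating the intended behavior (my examples, using math.isclose to allow for floating-point error):

# 0° and 90° average to 45°.
assert math.isclose(average_angles([0.0, math.radians(90)]),
                    math.radians(45))

# 5° lies 10° "ahead of" 355°, despite the numeric wrap-around at 360°.
assert math.isclose(subtract_angles(math.radians(5), math.radians(355)),
                    math.radians(10))
assert math.isclose(subtract_angles(math.radians(355), math.radians(5)),
                    math.radians(-10))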