Posts

Main Branches Renamed

As most git users are aware, the default branch name in git changed from master to main a while ago. Since I maintain projects created both before and after this change, some maintenance tasks have become more difficult to automate as a result (and I'm a sucker for consistency).

I put off dealing with this because I was planning to make master branches disappear at the same time APIs are broken, but that wasn't a good idea for several reasons beyond the scope of this post. So, I've changed all of my personal projects, and all projects maintained under the LV2 umbrella, to use main.

If you're tracking any of those repositories, maintaining packaging infrastructure, or similar, please update your local trees and/or configuration accordingly. It's best to rename the local branch with git, since this preserves your local reflog and avoids accidental use of the old name:

git branch -m master main
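
If your clone tracks a remote (assumed here to be named origin), you'll probably also want to point the upstream and the remote HEAD at the new branch, something like:

git fetch origin
git branch -u origin/main main
git remote set-head origin -a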

Lilv 0.24.24

Lilv 0.24.24 has been released. Lilv is a C library to make the use of LV2 plugins as simple as possible for applications. For more information, see http://drobilla.net/software/lilv.

Changes:

  • Avoid overriding state features passed by the host
  • Fix null dereference when trying to load state from a missing file
  • Fix potential null dereferences and conversion warnings

Lilv 0.24.22

Lilv 0.24.22 has been released. Lilv is a C library to make the use of LV2 plugins as simple as possible for applications. For more information, see http://drobilla.net/software/lilv.

Changes:

  • Allow LILV_API to be defined by the user
  • Clean up code
  • Clean up inconsistent tool command line interfaces
  • Convert man pages to mdoc
  • Fix crash when plugins pass NULL to the LV2_State_Retrieve_Function
  • Fix dependencies in pkg-config file
  • Fix potential crash when writing state files fails
  • Order plugin classes by URI
  • Override pkg-config dependency within meson
  • Remove junk files from documentation install
  • Replace duplicated dox_to_sphinx script with sphinxygen dependency
  • Switch to external zix dependency

Suil 0.10.20

Suil 0.10.20 has been released. Suil is a library for loading and wrapping LV2 plugin UIs. For more information, see http://drobilla.net/software/suil.

Changes:

  • Allow SUIL_API to be defined by the user
  • Fix dependencies in pkg-config file
  • Only check for Gtk Quartz support on MacOS
  • Override pkg-config dependency within meson
  • Remove Gtk in Qt and Qt in Gtk wrappers
  • Remove junk files from documentation install
  • Replace duplicated dox_to_sphinx script with sphinxygen dependency

Sratom 0.6.16

Sratom 0.6.16 has been released. Sratom is a small library for serializing LV2 atoms. Sratom reads/writes atoms from/to RDF, allowing them to be converted between binary and text or stored in a model. For more information, see http://drobilla.net/software/sratom.

Changes:

  • Clean up code
  • Constrain relative URI references to the base URI
  • Fix dependencies in pkg-config file
  • Override pkg-config dependency within meson
  • Remove junk files from documentation install
  • Replace duplicated dox_to_sphinx script with sphinxygen dependency

Zix 0.4.2

Zix 0.4.2 has been released. Zix is a lightweight C library of portability wrappers and data structures.

Changes:

  • Clean up documentation build
  • Fix documentation build in a virtualenv
  • Improve test suite code coverage

Zix 0.4.0

Zix 0.4.0 has been released. Zix is a lightweight C library of portability wrappers and data structures.

In Search of the Ultimate Compile-Time Configuration System

One of the many programming side quests I embark on from time to time is finding the best way to do compile-time configuration in C and C++ code. This is one of those characteristically C things that most projects need to do, but that has no well-established best practice. What you can find is all over the place, and often pretty half-baked just to suit the particularities of the "official" build. Let's try to come up with something better.

Ideal requirements:

  • Ability to enable or disable any features from the command line by defining symbols, including the ability to override or completely disable any automatic checks implemented in the code.

  • Good integration with, but no hard dependency on, any build system.

  • The code should build with reasonable defaults when simply thrown at a compiler "as-is".

  • Mistakes, such as forgetting to include the configuration header or using misspelled symbols, are caught by tooling (preferably compiler warnings).

  • It's never necessary to modify the code to achieve a particular build.

Here's a skeleton of the best I've managed to come up with so far, for a made-up "mylib" project and a few POSIX functions. It has a bit of boilerplate, but there are good reasons for everything, which I'll get to. This configuration header is written manually (not generated) and included (privately) in the source code:

#ifndef MYLIB_CONFIG_H
#define MYLIB_CONFIG_H

#if !defined(MYLIB_NO_DEFAULT_CONFIG)

// Derive default configuration from the build environment

// We need unistd.h to check _POSIX_VERSION
#  ifdef __has_include
#    if __has_include(<unistd.h>)
#      include <unistd.h>
#    endif
#  elif defined(__unix__)
#    include <unistd.h>
#  endif

// Define MYLIB_POSIX_VERSION unconditionally for convenience below
#  if defined(_POSIX_VERSION)
#    define MYLIB_POSIX_VERSION _POSIX_VERSION
#  else
#    define MYLIB_POSIX_VERSION 0
#  endif

// POSIX.1-2001, Windows: fileno()
#  if !defined(HAVE_FILENO)
#    if MYLIB_POSIX_VERSION >= 200112L || defined(_WIN32)
#      define HAVE_FILENO 1
#    endif
#  endif

// POSIX.1-2001: posix_fadvise()
#  if !defined(HAVE_POSIX_FADVISE)
#    if MYLIB_POSIX_VERSION >= 200112L && !defined(__APPLE__)
#      define HAVE_POSIX_FADVISE 1
#    endif
#  endif

#endif // !defined(MYLIB_NO_DEFAULT_CONFIG)

// Define USE variables for use in the code

#if defined(HAVE_FILENO) && HAVE_FILENO
#  define USE_FILENO 1
#else
#  define USE_FILENO 0
#endif

#if defined(HAVE_POSIX_FADVISE) && HAVE_POSIX_FADVISE
#  define USE_POSIX_FADVISE 1
#else
#  define USE_POSIX_FADVISE 0
#endif

User interface:

  • By default, features are enabled if they can be detected or assumed to be available from the build environment, unless MYLIB_NO_DEFAULT_CONFIG is defined, which disables everything by default to allow complete control.

  • If a symbol like HAVE_SOMETHING is defined to non-zero, then the "something" feature is assumed to be available. If it is zero, then the feature is disabled.

Usage in code:

  • To check for a feature, the configuration header must be included, and a symbol like USE_SOMETHING (not HAVE_SOMETHING) used as a boolean in an #if expression, like:

    #include "mylib_config.h"
    
    // [snip]
    
    #if USE_FILENO
        int fd = fileno(file);
    #endif
    
  • None of the other configuration symbols described here may be used directly. In particular, the configuration header should be the only place in the code that touches HAVE symbols.

The main "trick" here which allows for all of the different configuration "modes" is the use of two "kinds" of symbol: HAVE symbols and USE symbols. HAVE symbols are exclusively the interface for the user or build system, and USE symbols are the opposite: exclusively for use in the code and never by the user or build system. This way, use of the configuration header is mandatory for any code that needs configuration.

The USE symbols are defined to 0 or 1 unconditionally, and code must check them with #if, not with #ifdef. This prevents mistakes, since both forgetting to include the configuration header, and misspelling a symbol, will be caught by compiler warnings. Tools like include-what-you-use can also enforce direct inclusion more strictly.
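
For example, with a warning like GCC or Clang's -Wundef enabled, both a misspelled symbol and a missing include of the configuration header show up immediately (the misspelling here is deliberate and hypothetical):

#if USE_FILENNO // warning: "USE_FILENNO" is not defined, evaluates to 0 [-Wundef]
    int fd = fileno(file);
#endif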

From the command line, basic usage is typical: define symbols like HAVE_SOMETHING to enable features. For complete control over the configuration, define MYLIB_NO_DEFAULT_CONFIG, in which case all features must be explicitly enabled. This is mainly useful for build systems, so that all features can be checked for and only those that are found used in the code. It's also useful for avoiding issues with strange compilers or platforms that aren't supported by the checks.
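
For instance, the simple case might just add a definition or two on top of the defaults, while a build system disables the defaults and spells everything out (the compiler invocations here are purely illustrative):

cc -DHAVE_POSIX_FADVISE=1 -c mylib.c

cc -DMYLIB_NO_DEFAULT_CONFIG -DHAVE_FILENO=1 -DHAVE_POSIX_FADVISE=1 -c mylib.c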

I think this design covers all of the above requirements, and while the header itself can get a bit verbose, it's relatively straightforward and, more importantly, usage of it is simple and resilient to mistakes.

There is one thing here that isn't caught by tooling, though: misspelling a HAVE symbol will silently do nothing. This is a concession to the simple case of just defining a few relevant HAVE symbols on the command line, and to keeping command lines from the build system as terse as possible. It is, however, possible to modify this pattern a bit to catch this potential mistake as well: require all known HAVE symbols to be defined to 1 or 0, and check those with #if as well in the configuration header itself. This adds a couple of lines per check to the boilerplate, for example:

// POSIX.1-2001, Windows: fileno()
#  ifndef HAVE_FILENO
#    if defined(_WIN32) || (defined(_POSIX_VERSION) && _POSIX_VERSION >= 200112L)
#      define HAVE_FILENO 1
#    else
#      define HAVE_FILENO 0
#    endif
#  endif

// [snip]

#if HAVE_FILENO
#  define USE_FILENO 1
#else
#  define USE_FILENO 0
#endif

This way, compiler warnings will catch any mistakes in the build system (because, for example, HAVE_FILENO isn't defined), ensuring that everything is explicitly either enabled or disabled. I'm not sure which style to use. Potential silent errors in the build system are pretty bad, but at the same time, I don't want to sacrifice the ability of the code to be easily compiled "manually". It's probably possible to have both, but I'm not sure how painful the boilerplate cost would be. I did have the stricter version for a while, but the extremely verbose compiler command lines were pretty annoying, so I removed it. Now, as I write this, I'm second guessing myself, but so it goes.

Questions for another day, I suppose. One of the things about programming side quests is that they usually never end.

C++ Bindings

For some C libraries, I'd like to include "official" C++ bindings to make life easier for people using them from C++ (which, in the audio world, is most). However, that's not something I know a good pattern for, in terms of project organization, installation, versioning, and so on. Figuring one out is a trickier problem than it may seem at first.

In the - in this case literally - "C and C++" world, there is a notorious lack of consistent conventions and best practices in some areas, and this seems to be one of them. So, I suppose I will have to suss out the "best" (and least weird) way myself. The "best" way should:

  • Provide "official" C++ bindings which are developed, maintained, and shipped with the underlying C library.

  • Avoid having the C++ wrapper be locked to the same version as the C library (which is a strict semver reflection of the ABI).

    Rationale: It must be possible to develop the C++ bindings, including making breaking changes, while the C library version (and therefore the ABI) stays the same. Otherwise, it would be nearly impossible to change them, because doing so would require bumping the version of the underlying C API, which needs to break as infrequently as possible.

  • Isolate the bindings (and "C++ stuff") from the underlying C library as much as possible. Ensure that builds on systems without a C++ toolchain work (this isn't uncommon on the minimal or embedded systems that much of this software is well suited to).

  • Avoid making a completely separate new project (repository, test and releasing infrastructure, and so on) if at all possible. The maintenance burden would be far too high, and the bindings would be prone to rot.

  • Use a simple and predictable naming scheme that works with any "main" project name.

Poking around repositories and tinkering a little bit, the best practice I can come up with (for the sort of libraries I'm thinking about, anyway) is:

  • Develop and release bindings as a sub-project within the "main" project.

    This is only a "project" in the build system sense. The bindings are maintained in the same git repository, and released in the same archive, as the C library.

  • Name the bindings sub-project by appending a cpp suffix, for example, mylib-cpp. This scheme is... well, not uncommon (for example, in the Debian repositories), and can easily be applied to any name, including libraries that already have multi-word names.

    Following meson requirements, this means the sub-project lives at a path like subprojects/mylib-cpp in the repository.

  • Install a separate "package" (for example via pkg-config) for the bindings, which depends on the one for the underlying C library. The major version is appended to both, for example, mylib-cpp-1 might depend on mylib-1.

  • Keep the C++ bindings themselves as light as possible, and header-only. This avoids link-time issues, making C++ API compatibility a compile-time issue only (a rough sketch follows after this list).

  • Give the bindings package a separate version number and let it increase as necessary. This version is not aligned with that of the underlying C library in any meaningful way. Technically, a given version of the bindings depends on some version of the C library, but in practice, this is always simply the version it's shipped with.

    A strange consequence of this scheme is that the version of the C++ bindings can only drift ever further away, so in the future even major versions may not correspond at all. This is a bit weird, but is the only way to make everything work and be properly versioned. Effectively, the version of the bindings is just an implementation detail, something developers deal with in configuration scripts. From the perspective of packagers or users, there is just one version of the library, the version of the underlying C library - the C++ bindings just may break sometimes, even within a major version of the project as a whole.

    I can't think of any concrete reason why this could be a problem: the urge to have shiny "4.0.0" type version bumps across everything at the same time smells like... marketing, frankly, not engineering. It does make parallel installation of different major versions more difficult, though. Packagers can split up the installation and make separate packages if they really want to. "Upstream" (me) officially doesn't care about parallel installation of different major versions of the C++ bindings.

    All that said, ideally they happen to stay relatively aligned anyway.

  • Make sure there is a simple and obvious option to disable C++ entirely, leaving a C library package with the broadest compatibility possible.
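
As a concrete illustration of the "light and header-only" point above, the bindings amount to thin inline wrappers over the C API; a rough sketch, with an entirely hypothetical dostuff C API:

// dostuff.hpp (hypothetical header-only C++ bindings)
#ifndef DOSTUFF_HPP
#define DOSTUFF_HPP

#include <dostuff.h>

namespace dostuff {

// RAII wrapper that owns a DostuffThing allocated by the C library
class Thing {
public:
  Thing() : _thing(dostuff_thing_new()) {}
  ~Thing() { dostuff_thing_free(_thing); }

  Thing(const Thing&) = delete;
  Thing& operator=(const Thing&) = delete;

  // Forward to the underlying C function
  int process(const char* path) { return dostuff_thing_process(_thing, path); }

private:
  DostuffThing* _thing;
};

} // namespace dostuff

#endif // DOSTUFF_HPP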

The short, vibes-based description of all that is something like: there is a stable and strictly versioned C library with every effort put into long-term source and binary compatibility, as always... and then there's a C++ bindings sub-project that tags along with it but is otherwise independent. The bindings are more volatile, but it's C++, so they're going to be volatile no matter what you do anyway. The bindings project is universally named by tacking a -cpp or _cpp on the end as appropriate in every context: include directories, package names, and so on.

So, an installation might look something like this:

include/dostuff-1/dostuff.h
include/dostuff-cpp-4/dostuff.hpp
lib/libdostuff-1.so
lib/libdostuff-1.so.1
lib/libdostuff-1.so.1.2.4
lib/pkgconfig/dostuff-1.pc
lib/pkgconfig/dostuff-cpp-4.pc
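
The pkg-config file for the bindings might then look roughly like this (names, paths, and version numbers are all hypothetical):

# dostuff-cpp-4.pc
prefix=/usr/local
includedir=${prefix}/include

Name: dostuff-cpp
Description: C++ bindings for dostuff
Version: 4.0.2
Requires: dostuff-1
Cflags: -I${includedir}/dostuff-cpp-4

Since the bindings are header-only, there is no Libs line; the dependency on the C library comes in through Requires.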

In the source code, the bindings and any supporting C++ code are entirely contained within the sub-project, except for a minimal skeleton to handle compile-time options and so on. This can be more work than a single heterogeneous project in some ways, less work in others, but overall I think it has more maintenance benefits. Importantly, it keeps any new issues or volatility as far away from the C library as possible, making it easy to see if a change could possibly break the ABI or the C library at all, for example.

This scheme may be extended to other languages if that's appropriate. The naming scheme for Python is like python-dostuff. It probably makes more sense to maintain Cython wrappers as separate projects maintained in the Python way (sigh...), but the whole point of a naming scheme is to have space for things in case you need them. In reality, language bindings are usually done independently by other people in separate projects (Rust folks will use Cargo in a separate repository, and so on).

All of this is, obviously, a massively over-thought bikeshed, but adding multiple programming languages and multiple versioning and compatibility schemes/philosophies to a project is a bit tricky. I can't just copy from an existing best-practice pattern I've been honing for years like I can with straight C libraries. This approach seems like it shouldn't cause too much trouble, though.

That said, I'm just making this up as I go along and have no experience maintaining anything quite like this (only more or less homogeneous C or C++ libraries), so feedback is, as always, welcome. I may revise this post if anything turns out to be a mistake, so it can ultimately serve as a reference for the next person trying to figure out how to do "C family" source code releases right.

Down to the Last Man, Doc!

While working through some remaining things in the build/documentation/packaging/infrastructure cleanup phase I've been going through lately, I noticed that some of the man pages hosted at this domain have become stale (there was a helpful ticket about one of them, but more or less all of them were out of date).

I'm still not entirely sure what to do about the general duplication between README files and the neglected home pages I still (barely) maintain here, but this part struck me as obviously needing to be automated.

Of course, now that my attention was drawn to them, I noticed that many of the man pages themselves were in a terrible state. In recent years I've become more enthusiastic about man pages, particularly after moving to mandoc. As a language, mdoc is dramatically better than the "traditional" man macros (the ones most often used on Linux), and includes meaningful semantic markup for more or less everything you'll need. It's still line noise to write, but you actually get something for it. Since it's semantic markup, it feels like a better investment to me: superficially ugly, perhaps, but with high fidelity as portable source material. Converting from mdoc to just about anything, I imagine, could be done automatically easily enough, if it weren't already supported by the tool [1].

In practice, I find that the language encourages/enables writing more consistent pages with better formatting, and mandoc, unlike groff, emits clean and simple HTML pages that can be easily styled with CSS. It's relatively pleasant to use in general, and emits sensible error messages that point directly to the problem when you screw something up. I've also come to appreciate the BSD style of man page more: the summary is actually a summary, it's less YELLEY, and there is a standard for sections and their order. I may spend most of my time in a GNU/Linux universe, but BSD got man pages right. That's not terribly surprising: BSD culture in general actually likes and cares about man pages. GNU treated them as an unfortunate legacy to prop up a failed successor (info). Of course the BSD tooling around man pages is much better. I tend towards BSD style for command line interfaces anyway, but you can document GNU-style long options in mdoc as well.
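
For a rough idea of what the markup looks like, here's a minimal sketch of a page for a made-up mytool command, including a GNU-style long option (every name here is hypothetical):

.Dd September 1, 2023
.Dt MYTOOL 1
.Os
.Sh NAME
.Nm mytool
.Nd do something to files
.Sh SYNOPSIS
.Nm mytool
.Op Fl h
.Op Fl o Ar output
.Ar input
.Sh DESCRIPTION
.Nm
reads
.Ar input
and does something with it.
The options are as follows:
.Bl -tag -width Ds
.It Fl h , Fl \-help
Print a help message and exit.
.It Fl o Ar output , Fl \-output Ar output
Write the result to
.Ar output
instead of standard output.
.El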

So! If I'm going to be putting energy into man pages, I might as well convert them to mdoc. I think good tooling is very important, and nice command-line tools with clear and articulate man pages are a part of that. Unfortunately, neither the LV2 tools themselves nor their documentation has frankly ever been of very high quality. Addressing this is one of the next themes of upcoming work, but for now, simply converting the existing content (with a few touch-ups along the way) is an easy step forward. It meshes nicely with the "bring all projects in line with a consistent set of 'modern' quality standards and automate everything" arc I've been on lately anyway.

The process of going through all of these has made me realize that I should prioritize the tool work, because the state of some of these tools is frankly embarrassing, but, uh, pay no attention to the man behind the curtain! It's a formats and infrastructure kind of day!

Having manually converted all the pages and checked them in the terminal, I needed a place to have them automatically updated, converted to HTML, and uploaded. The (not-really-existing) lv2kit project is the (not-really) parent of LV2 itself and all the "official" libraries, so I put it in CI there for the time being. I don't know what the best organization is for man pages under the LV2 umbrella in the grand scheme of things, but for now, this gets things automatically updated and online.

It's possible to build the documentation into a broader HTML page to integrate with toolbars and the like, but for now these are just complete pages straight out of mandoc. I did however write a simple stylesheet to adopt the same colours, fonts, spacing, typography-inspired minimalism, and infamous green links that I use everywhere, so things at least look reasonably coherent. I think it looks Good Enough™, anyway, but more relevantly, if you poke around in the document structure you'll see that the output is, for the most part, sensible semantic markup as well: lists are lists, sections are sections, different "sorts" of things are given classes, tables aren't abused, and so on. Everything here, from input to output, is reasonably clear and feels like it has obviously had some care put into it. The output looks like something a reasonable human might write themselves (if you run it through a formatter, anyway). I certainly can't say any of that about the equivalent groff-based monstrosity. In short, mandoc is good and you should use it.
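
For reference, producing each of those pages is a one-liner; something like this (the file and stylesheet names are hypothetical):

mandoc -T html -O style=mandoc.css mytool.1 > mytool.html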

As for these pages, the future of lv2kit as an "actual" released project is still unclear, but it is regularly updated (since the libraries rely on it for CI), so the man pages should never be stale again. I certainly wouldn't call these good yet, but not rotten is a start!


  1. mandoc currently supports emitting ascii, html, man, markdown, pdf, ps, tree, and utf8
