W07: Squashing Bugs

I started this week off with a few small bugfixes that I wanted to shoehorn into the upcoming 0.23.0 release of Documenter.

The first fix made sure that you could not pass the objects you would normally pass via the format keyword as a positional argument to makedocs (#1061, fixing #1046). Basically, the issue was that the standard format objects were declared as subtypes of Plugin, which they should not be, as they are not plugins.
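
The intended usage, for reference, looks something like this (a minimal sketch; the sitename is just a placeholder):

using Documenter

makedocs(
    sitename = "MyPackage.jl",   # placeholder project name
    format = Documenter.HTML(),  # format objects go through the keyword
)

Passing the format object positionally, e.g. makedocs(Documenter.HTML()), used to be silently accepted (as if it were a plugin); after the fix it errors instead.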

The second PR fixed a long-standing issue about parsing errors in doctests (#1062, fixing #487). Essentially, the issue was that you could not doctest code that was supposed to throw a parse error.
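
For example, a doctest along these lines can now have its output checked like any other (a hypothetical snippet; the exact error text depends on the Julia version):

```jldoctest
julia> )
ERROR: syntax: unexpected ")"
```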

It turned out to be a little bit more work than I initially assumed it would be. While working on it, I also had to improve the handling of stacktraces when makedocs or doctest gets called in a non-top-level context (this was necessary to reliably unit test the parse error fix).

Bulma front-end

For the front end, I took a long hard look at the sidebar styling. The aim was to improve its readability by making it clearer what level each item is on.

Another feature I implemented was collapsible submenus. By default, menu items on levels 3 and below are now hidden, and the user has to click on sidebar items to reveal the lower-level menus. As most packages have relatively few pages and are at most two levels deep, always showing level 1 and 2 items seems like a reasonable default.

However, for the Julia manual we probably want to collapse the second-level items too. For that, you can pass the collapselevel keyword argument to HTML to customize the level at which the collapsed menus begin.
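
For example, to collapse everything below the first level (a minimal sketch, again with a placeholder sitename):

using Documenter

makedocs(
    sitename = "MyPackage.jl",  # placeholder
    # collapse menus from level 2 downwards, rather than the default level 3
    format = Documenter.HTML(collapselevel = 1),
)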

A neat property of the implementation is that it is JS-free, as it uses hidden checkboxes, input labels and CSS to toggle whether a submenu is displayed or not.
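
The gist of the technique is the following (a generic sketch, not Documenter's actual markup or class names):

<!-- the hidden checkbox stores the open/closed state -->
<input type="checkbox" id="toggle-1" class="collapse-toggle">
<!-- clicking the label toggles the checkbox -->
<label for="toggle-1">Section title</label>
<ul class="submenu">
  <li>Subsection A</li>
  <li>Subsection B</li>
</ul>

with the accompanying CSS:

.collapse-toggle { display: none; }                     /* hide the checkbox itself */
.collapse-toggle ~ .submenu { display: none; }          /* submenus hidden by default */
.collapse-toggle:checked ~ .submenu { display: block; } /* revealed when checked */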

Here is a demo of the latest version of the front end where I used it to build the Julia manual:

http://mortenpi.eu/gsoc2019-mockups/bulma/v6/

You may also notice that this site is now built with the new front end.

Fonts and Unicode

Finally this week, I was thinking about how to handle fonts in Documenter.

Custom fonts

First, people have asked for the ability to use custom fonts (#973). This is slightly complicated, because it involves two parts: (1) the fonts need to be made available to the user (most likely the user will not have whatever custom font you want to use installed), e.g. by CDNing[CDN] them in from Google Fonts, and (2) the CSS needs to be updated to actually use the new fonts.

In principle, one option for including CDNed font dependencies is the @import statement in CSS files. However, this is not recommended, as @import blocks the parsing of the parent CSS file until the imported file has been downloaded, making CSS loading slow. It can also lead to the whole site not loading if the connection to the CDN times out.[1]

Hence, in Documenter, we would aim to load CDNed font dependencies with <link> tags. However, that means HTMLWriter needs to be aware of the fonts that the theme needs.
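
Concretely, this is the difference between the following two ways of pulling in the same stylesheet (using Google Fonts' Lato as a stand-in example):

/* in a CSS file: discouraged, blocks parsing of the parent file */
@import url('https://fonts.googleapis.com/css?family=Lato');

<!-- in the page's <head>: the approach HTMLWriter would take -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Lato">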

It would definitely be possible to create a system where Documenter would try to figure everything out automatically. However, this would have to involve some non-trivial interaction with the theme building process (which currently is just a matter of compiling Sass files).

So, to work within these constraints while also keeping the implementation reasonably simple, I will likely go with the following system:

  • Fonts are up to the theme. If you want to use custom fonts, you need to deploy a custom theme (which will likely just involve overriding the respective font-family variables).
  • The theme files just declare the CSS font families. It's up to the user to make sure that the site actually loads all the necessary fonts from CDNs etc.
    • For importing fonts with <link> tags, Documenter will provide generalized assets APIs (see the sketch after this list).
    • A special one will probably be provided for Google Fonts.
    • As a side note: we cannot (nor should we) stop user themes from using @import statements to import remote fonts.
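
To make that concrete, such an API could end up looking roughly like the following (purely hypothetical: the asset function, its class keyword, and the exact shape of the call are illustrative, not a committed design):

using Documenter

makedocs(
    sitename = "MyPackage.jl",  # placeholder
    format = Documenter.HTML(
        assets = [
            # hypothetical: declare a remote stylesheet that HTMLWriter
            # should include with a <link> tag in the page's <head>
            asset("https://fonts.googleapis.com/css?family=Lato|Roboto+Mono", class = :css),
        ],
    ),
)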

Unicode monospace

There is a long-standing open issue about Unicode handling in monospace blocks (#618). Essentially, Roboto Mono's Unicode coverage is meagre and the various fallback fonts do not necessarily work nicely together, up to the point where monospace is no longer monospace and things do not align properly.

Ideally, we would be able to CDN in a font with good Unicode coverage that just works, but alas, there is no such font. So we have to make sure that our fallbacks are sane. We can probably go with something like

pre, code {
  font-family: 'Roboto Mono', 'SFMono-Regular', 'Menlo', 'Consolas', 'Liberation Mono', 'DejaVu Sans Mono', monospace;
}

and hope for the best on all platforms. This setup continues to use Roboto Mono as the main font, but tries to fall back to the various OS-specific monospace fonts for symbols it does not have.

An alternative would be to always use the browser fallback monospace font (i.e. font-family: monospace). This is actually what Bulma uses by default.

Unicode in PDFs

Incidentally, as I was already thinking about fonts and looking into Unicode coverage etc., I ended up putting together a PR to fix #803, a PDF rendering bug. In a nutshell, the issue was that we were using Lato and Roboto Mono fonts in the PDF, which are great, but do not have much Unicode coverage. Hence a bunch of important symbols were missing in the manual (e.g. ∀, ∃, ⊻).

LaTeX unfortunately does not have any concept of fallback fonts (i.e. rendering a symbol from a different font if the primary font does not have it). The closest you get is ucharclasses, but it has two issues: (1) it is XeTeX-only and we are using LuaTeX, and (2) it only allows setting different fonts for classes of characters, but does not provide fallbacks for individual characters.

However, browsers usually do not have problems rendering all kinds of weird Unicode symbols. So I checked, and in most cases, for slightly non-standard Unicode symbols, Chrome on my Ubuntu machine falls back to the DejaVu fonts.

So I decided to just switch the PDF manual over to DejaVu Sans for the main body and headings, and DejaVu Sans Mono for the code. The compiled result is not too bad, and having all the symbols present is more important than having the prettiest font.
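
In terms of the preamble, the switch amounts to something like this (a sketch assuming a fontspec-based setup under LuaTeX; the actual preamble contains more than this):

% fontspec handles system fonts under LuaTeX (and XeTeX)
\usepackage{fontspec}
\setmainfont{DejaVu Sans}       % body text and headings
\setmonofont{DejaVu Sans Mono}  % code blocks and inline code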

There was just one last issue to deal with – DejaVu Sans Mono still does not have ⊻. The browser, for that character, actually falls back to the non-monospace DejaVu Sans font. As this character represents a standard operator in Julia, it would be really great to have it render in the manual.

To get around the LaTeX limitation of not supporting fallback fonts, I created a small fallback mechanism just for this character. In all code blocks, Documenter now replaces instances of ⊻ with the custom \unicodeveebar command, which renders the character using DejaVu Sans.[2]
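
Conceptually, the command is just a font switch around the single character, along these lines (a sketch; the actual definition in the preamble may differ):

% declare DejaVu Sans as a named fallback family (fontspec)
\newfontfamily\fallbackfont{DejaVu Sans}
% typeset U+22BB (⊻) from the fallback family
\newcommand{\unicodeveebar}{{\fallbackfont\symbol{"22BB}}}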

For example, the following code block

myxor(x, y) = x ⊻ y

will become the following in the LaTeX source

\begin{lstlisting}[escapeinside=\%\%]
myxor(x, y) = x %\unicodeveebar% y
\end{lstlisting}

As can be seen above, one final minor hassle is that we need to call \unicodeveebar in code blocks, which are lstlisting or minted environments. They normally do not allow any command sequences, but that can be worked around with an escape sequence. For inline code, which is just a \texttt call, using \unicodeveebar{} directly suffices.
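
So inline code like x ⊻ y would come out roughly as:

\texttt{x \unicodeveebar{} y}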

The relevant pull request is #1066, and it will fix JuliaLang/julia#31048 once the Documenter version used there is upgraded.

Next week

Wrapping up a first iteration of the Bulma front end is still a work in progress (#1043), so that will also be the focus next week. The step after that is looking into better management of font and JS dependencies.