I'm writing this article because I'm getting tired of repeating the same concepts every time someone makes misinformed statements about the (lack of) support for mixed-DPI configurations in X11. It is my hope that anybody looking for information on the subject may be directed here, to get the facts about the actual possibilities offered by the protocol, avoiding the biased misinformation available from other sources.

If you only care about “how to do it”, jump straight to The RANDR way, otherwise read along.

So, what are we talking about?

The X Window System

The X Window System (frequently shortened to X11 or even just X), is a system to create and manage graphical user interfaces. It handles both the creation and rendering of graphical elements inside specific subregions of the screen (windows), and the interaction with input devices (such as keyboards and mice).

It's built around a protocol by means of which programs (clients) tell another program (the server, that controls the actual display) what to put on the screen, and conversely by means of which the server can inform the client about all the necessary information concerning both the display and the input devices.

The protocol in question has evolved over time, and reached version 11 in 1987. While the core protocol hasn't introduced any backwards-incompatible changes in the last 30 years (hence the name X11 used to refer to the X Window System), its extensible design has allowed it to keep abreast of technological progress thanks to the introduction and standardization of a number of extensions that have effectively become part of the subsequent revisions of the protocol (the last one being X11R7.7, released in 2012; the next, X11R7.8, following more of a “rolling release” model).

DPI

Bitmapped visual surfaces (monitor displays, printed sheets of paper, images projected on a wall) have a certain resolution density, i.e. a certain number of dots or pixels per unit of length: dots per inch (DPI) or pixels per inch (PPI) is a common way to measure it. The reciprocal of the DPI is usually called “dot pitch”, and refers to the distance between adjacent dots (or pixels). This is usually measured in millimeters, so conversion between DPI and dot pitch is obtained with

DPI   = 25.4/pitch
pitch = 25.4/DPI

(there being 25.4 millimeters to the inch).

When it comes to graphics, knowing the DPI of the output is essential to ensure consistent rendering (for example, a drawing program may have a “100% zoom” option where the user might expect a 10cm line to take 10cm on screen), but when it comes to graphical interface elements (text in messages and labels, sizes of buttons and other widgets) the information itself may not be sufficient: how the surface is used, and in particular from how far away it is viewed, should ideally also be taken into consideration.

To this end, the concept of reference pixel was introduced in CSS, representing the pixel of an “ideal” display with a resolution of exactly 96 DPI (dot pitch of around 0.26mm) viewed from a distance of 28 inches (71cm). The reference pixel thus becomes the umpteenth unit of (typographical) length, with exactly 4 reference pixels every 3 typographical points.

Effectively, this allows the definition of a device pixel ratio, as the ratio of device pixels to reference pixels, taking into account the device resolution (DPI) and its assumed distance from the observer (for example, a typical wall-projected image has a much lower DPI than a typical monitor, but is also viewed from much further away, so that the device pixel ratio can be assumed to be the same).
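To make the relation concrete, here is a small sketch (my own simplification, only valid when the display is viewed from roughly the 28-inch reference distance) computing the device pixel ratio of a monitor as its DPI divided by the 96 DPI of the reference pixel:

    #include <cstdio>

    // Sketch: for a monitor viewed from about the reference distance, the
    // device pixel ratio reduces to the ratio between the monitor's pixel
    // density and the 96 DPI of the reference pixel.
    static double device_pixel_ratio(double dpi)
    {
        return dpi / 96.0;
    }

    int main()
    {
        std::printf("192 DPI panel: ratio %.2f\n", device_pixel_ratio(192.0)); // 2.00
        std::printf(" 96 DPI panel: ratio %.2f\n", device_pixel_ratio(96.0));  // 1.00
    }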

Mixed DPI

A mixed-DPI configuration is a setup where the same display server controls multiple monitors, each with a different DPI.

For example, my current laptop has a built-in 15.6" display (physical dimensions in millimeters: 346×194) with a resolution of 3200×1800 pixels, and a pixel density of about 235 DPI —for all intents and purposes, this is a HiDPI monitor, with slightly higher density than Apple's Retina display brand. I frequently use it together with a 19" external monitor (physical dimensions in millimeters: 408×255) with a resolution of 1440×900 pixels and a pixel density of about 90 DPI —absolutely normal, maybe even somewhat on the lower side.
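Those density figures follow directly from the conversion given earlier; here is the back-of-the-envelope check, using the resolutions and physical sizes listed above:

    #include <cstdio>

    // Sketch: derive the pixel density of a monitor from its resolution in
    // pixels and its physical size in millimeters (25.4 mm to the inch).
    static double dpi(int pixels, double millimeters)
    {
        return pixels / (millimeters / 25.4);
    }

    int main()
    {
        // Built-in 15.6" panel: 3200x1800 pixels on a 346x194 mm surface
        std::printf("built-in: %.0f x %.0f DPI\n", dpi(3200, 346), dpi(1800, 194)); // ~235
        // External 19" monitor: 1440x900 pixels on a 408x255 mm surface
        std::printf("external: %.0f x %.0f DPI\n", dpi(1440, 408), dpi(900, 255));  // ~90
    }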

The massive difference in pixel density between the two monitors can lead to extremely inconsistent appearance of graphical user interfaces that do not take it into consideration: if they render assuming the standard (reference) DPI, elements will appear reasonably sized on the external monitor, but extremely small on the built-in monitor; conversely, if they double the pixel sizing of all interface elements, they will appear properly sized on the built-in monitor, but oversized on the external one.

Proper support for such configurations requires all graphical and textual elements to take up a number of pixels that depends on the monitor they are being drawn on. The question is: is this possible with X11?

And the answer is yes. But let's see how this happens in detail.

A brief history of X11 and its support for multiple monitors

The origins: the X Screen

An interesting aspect of X11 is that it was designed in a period when the quality and characteristics of bitmap displays (monitors) were much less consistent than they are today. The core protocol thus provides a significant amount of information about the monitors it controls: the resolution, the physical size, the allowed color depth(s), the available color palettes, etc.

A single server could make use of multiple monitors (referred to as “X Screen”s), and each of them could have wildly different characteristics (for example: one could be a high-resolution monochrome display, the other could be a lower-resolution color display). Due to the possible inconsistency between monitors, the classical support for multiple monitors in X did not allow windows to be moved from one X Screen to another. (How would the server render a window created to use a certain kind of visual on a different display that didn't support it?)
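To give an idea of what this looks like from the client side, here is a minimal Xlib sketch (linked against libX11) that asks the core protocol for the resolution and physical size of every X Screen and derives each screen's DPI from them:

    #include <cstdio>
    #include <X11/Xlib.h>

    // Sketch: the core protocol exposes, for every X Screen, both the
    // resolution in pixels and the physical size in millimeters, which is
    // enough to compute the pixel density of each screen.
    int main()
    {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy)
            return 1;

        for (int s = 0; s < ScreenCount(dpy); ++s) {
            int px_w = DisplayWidth(dpy, s);
            int px_h = DisplayHeight(dpy, s);
            int mm_w = DisplayWidthMM(dpy, s);
            int mm_h = DisplayHeightMM(dpy, s);
            std::printf("screen %d: %dx%d pixels, %dx%d mm, %.0fx%.0f DPI\n",
                        s, px_w, px_h, mm_w, mm_h,
                        px_w * 25.4 / mm_w, px_h * 25.4 / mm_h);
        }
        XCloseDisplay(dpy);
    }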

It should be noted that while the server itself didn't natively support moving windows across X Screens, clients could be aware of the availability of multiple displays, and they could allow (by their own means) the user to “send” a window to a different display (effectively destroying it, and recreating it with matching content, but taking into account the different characteristics of the other display).

A parenthetical: the client, the server and the toolkit

Multiple X Screen support being dependent on the client, rather than the server, is actually a common leitmotif in X11: due to one of its founding principles (“mechanism, not policy”), a lot of X11 features are limited only by how much the clients are aware of them and can make use of them. So, something may be allowed by the protocol, yet certain sets of applications may simply not make use of the functionality.

This is particularly relevant today, when very few applications actually communicate with the X server directly, preferring to rely on an intermediate toolkit library that handles all the nasty little details of communicating with the display server (and possibly even display servers of different nature, not just X11) according to the higher-level “wishes” of the application (“put a window with this size and this content somewhere on the screen”).

The upside of this is that when the toolkit gains support for a certain feature, all applications using it can rely (sometimes automatically) on it. The downside is that if the toolkit removes support for certain features or configurations, suddenly all applications using it stop supporting them too. We'll see some examples of this, specifically concerning DPI, later in this article.

Towards a more modern multi-monitor support: the Xinerama extension

In 1998, an extension to the core X11 protocol was devised to integrate multiple displays seamlessly, making them appear as a single X Screen, and thus allowing windows to freely move between them.

This extension (Xinerama) had some requirements (most importantly, all displays had to support the same visuals), but for the most part they could be heterogeneous.

An important downside of the Xinerama extension is that while it provides information about the resolution (in pixels) and relative position (in pixels!) of the displays, it doesn't reveal any information about the physical characteristics of the displays.

This is an important difference with respect to the classic “separate X Screens” approach: the classic method allowed clients to compute the monitor DPI (as both the resolution and the physical size were provided), but this is not possible in Xinerama.
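A minimal sketch (linked against libXinerama) makes the gap evident: the per-head structure returned by the extension only carries pixel geometry, so there is nothing from which a physical DPI could be derived:

    #include <cstdio>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xinerama.h>

    // Sketch: enumerate Xinerama heads. XineramaScreenInfo only contains
    // pixel position and size; no physical dimensions are reported.
    int main()
    {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy || !XineramaIsActive(dpy))
            return 1;

        int count = 0;
        XineramaScreenInfo *info = XineramaQueryScreens(dpy, &count);
        for (int i = 0; i < count; ++i)
            std::printf("head %d: %dx%d at +%d+%d\n", i,
                        info[i].width, info[i].height,
                        info[i].x_org, info[i].y_org);
        XFree(info);
        XCloseDisplay(dpy);
    }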

As a consequence, DPI-aware applications were actually irremediably broken on servers that only supported this extension, unless all the outputs had the same (or similar enough) DPI.

Modern multi-monitor in X11: the XRANDR extension

Xinerama had a number of limitations (the lack of physical information about the monitors being just one of many), and it was essentially superseded by the RANDR (Resize and Rotate) extension when the latter reached version 1.2 in 2007.

Of particular interest for our discussion, the RANDR extension took both the resolution and the physical size of the display into consideration even when it was originally proposed in 2001. And even today, now that it has grown in scope and functionality, it provides all the necessary information for each connected, enabled display.
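For comparison with the Xinerama sketch above, here is a minimal program (linked against libXrandr) that walks the connected, enabled RANDR outputs and prints both their pixel resolution and their physical size, which is all a client needs to compute a per-monitor DPI:

    #include <cstdio>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    // Sketch: for each connected output driven by a CRTC, RANDR reports the
    // pixel geometry (via the CRTC) and the physical size in millimeters
    // (via the output), from which the per-monitor DPI follows.
    int main()
    {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy)
            return 1;

        Window root = DefaultRootWindow(dpy);
        XRRScreenResources *res = XRRGetScreenResources(dpy, root);

        for (int i = 0; i < res->noutput; ++i) {
            XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            if (out->connection == RR_Connected && out->crtc && out->mm_width) {
                XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
                std::printf("%s: %ux%u pixels, %lux%lu mm, %.0f DPI\n",
                            out->name, crtc->width, crtc->height,
                            out->mm_width, out->mm_height,
                            crtc->width * 25.4 / out->mm_width);
                XRRFreeCrtcInfo(crtc);
            }
            XRRFreeOutputInfo(out);
        }
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
    }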

The RANDR caveat

One of the main aspects of the RANDR extension is that each display is essentially a “viewport” on a virtual framebuffer. This virtual framebuffer is the one reported as “X Screen” via the core protocol, even though it doesn't necessarily match any physical screen (not even when a single physical screen is available!).

This gives great flexibility on how to combine monitors (including overlaps, cloning, etc.); the hidden cost is that all of the physical information that the core protocol reports about the X Screen (now backed by this virtual framebuffer) becomes essentially meaningless.

For this reason, when the RANDR extension is enabled, the core protocol will synthesize fictitious physical dimensions for its X Screen from the overall framebuffer size, assuming a “reference” pixel density of 96 DPI.

When using a single display covering the whole framebuffer, this leads to a discrepancy between the physical information provided by the core protocol and that reported by the RANDR extension. Luckily, the solution for this is trivial, as the RANDR extension allows changing the fictitious dimensions of the X Screen to any value (for example, by using commands such as xrandr --dpi eDP-1, to tell the X server to match the core protocol DPI information to that of the eDP-1 output).

Mixed DPI in X11

Ultimately, X11, as a display protocol, has almost always had support for mixed DPI configurations. With the possible exception of the short period between the introduction of Xinerama and the maturity of the RANDR extension, the server has always been able to provide its clients with all the necessary information to adapt their rendering, window by window, widget by widget, based on the physical characteristics of the outputs in use.

Whether or not this information is being used correctly by clients, however, is an entirely different matter.

The core way

If you like the old ways, you can manage your mixed DPI setup the classic way, by using separate X Screens for each monitor.

The only thing to be aware of is that if your server is recent enough (and supports the RANDR extension), then by default the core protocol will report a DPI of 96, as discussed above in The RANDR caveat. This can be worked around by calling xrandr as appropriate during the server initialization.

Of course, whether or not applications will use the provided DPI information, X Screen by X Screen, is again entirely up to the application. For applications that do not query the X server about DPI information (e.g. all applications using GTK+3, due to this regression), the Xft.dpi resource can be set appropriately for each X Screen.
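As a sketch of how such a per-screen value can reach applications (this mirrors, at least conceptually, how libXft itself looks the resource up; the per-screen loading via xrdb is assumed to have been done beforehand):

    #include <cstdio>
    #include <X11/Xlib.h>

    // Sketch: read the Xft.dpi resource through the display's resource
    // database. With one X Screen per monitor, a different value can be
    // loaded for each screen, so a client opening ":0.0" or ":0.1" picks
    // up the value meant for its own screen.
    int main()
    {
        Display *dpy = XOpenDisplay(nullptr); // honours $DISPLAY, screen part included
        if (!dpy)
            return 1;

        const char *dpi = XGetDefault(dpy, "Xft", "dpi");
        std::printf("Xft.dpi on %s: %s\n", DisplayString(dpy), dpi ? dpi : "(not set)");

        XCloseDisplay(dpy);
    }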

The RANDR way

On a modern X server with RANDR enabled and monitors with (very) different DPIs merged in a single framebuffer, well-written applications and toolkits can leverage the information provided by the RANDR extension to get the DPI information for each output, and use this to change the font and widget rendering depending on window location.

(This will still result in poor rendering when a window spans multiple monitors, but if you can live with a 2-inch bezel in the middle of your window, you can probably survive misrendering due to poor choice of device pixel ratios.)

The good news is that all applications using the Qt toolkit can do this more or less automatically, provided they use a recent enough version (5.6 at least, 5.9 recommended). Correctly designed applications can request this behavior from the toolkit on their own (QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);), but the interesting thing is that the user can ask for this to be enabled even for legacy applications, by setting the environment variable QT_AUTO_SCREEN_SCALE_FACTOR=1.

(The caveat is that the scaling factor for each monitor is determined from the ratio between the device pixel ratio of the monitor and the device pixel ratio of the primary monitor. So make sure that the DPI reported by the core protocol (which is used as base reference) matches the DPI of your primary monitor —or override the default DPI used by Qt applications by setting the QT_FONT_DPI environment variable appropriately.)
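A minimal sketch of the application-side opt-in mentioned above (Qt 5.6 or later; the widget and its label are just placeholders):

    #include <QApplication>
    #include <QLabel>

    int main(int argc, char *argv[])
    {
        // Must be set before the QApplication object is constructed.
        QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);

        QApplication app(argc, argv);

        // From here on, widget and font sizes are scaled according to the
        // device pixel ratio of the screen their window is placed on.
        QLabel label("Per-monitor scaling enabled");
        label.show();

        return app.exec();
    }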

The downside is that outside of Qt, not many applications and toolkits have this level of DPI-awareness, and the other major toolkit (GTK+) seems to have no intention of acquiring it.

A possible workaround

If you're stuck with poorly written toolkits and applications, RANDR still offers a clumsy workaround: you can level out the heterogeneity in DPI across monitors by pushing your lower-DPI displays to a higher virtual resolution than their native one, and then scaling this down. Combined with an appropriate override of the DPI reported by the core protocol, or of the relevant Screen resources, this may lead to a more consistent experience.

For example, I could set my external 1440×900 monitor to “scale down” from a virtual 2880×1800 resolution (xrandr --output DP-1 --scale-from 2880x1800), which would bring its virtual DPI more on par with that of my HiDPI laptop monitor. The cost is a somewhat poorer image overall, due to the combined up/downscaling, but it's a workable workaround for poorly written applications.
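The arithmetic behind that choice, using the physical width given earlier (again just a back-of-the-envelope sketch):

    #include <cstdio>

    // Sketch: effective DPI of the external monitor before and after the
    // --scale-from trick, i.e. with the native 1440-pixel-wide grid and
    // with the virtual 2880-pixel-wide grid mapped onto the same 408 mm panel.
    int main()
    {
        const double width_mm = 408.0;
        std::printf("native : %.0f DPI\n", 1440 * 25.4 / width_mm); // ~90
        std::printf("virtual: %.0f DPI\n", 2880 * 25.4 / width_mm); // ~179
    }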

(If you think this idea is a bit stupid, shed a tear for the future of the display servers: this same mechanism is essentially how Wayland compositors —Wayland being the purported future replacement for X— cope with mixed-DPI setups.)

Final words

Just remember, if you have a mixed DPI setup and it's not properly supported in X, this is not an X11 limitation: it's the toolkit's (or the application's) fault. Check what the server knows about your setup and ask yourself why your programs don't make use of that information.

If you're a developer, follow Qt's example and patch your toolkit or application of choice to properly support mixed DPI via RANDR. If you're a user, ask for this to be implemented, or consider switching to better applications with proper mixed DPI support.

The capability is there, let's make proper use of it.

A small update

There's a proof of concept patchset that introduces mixed-DPI support for GTK+ under X11. It doesn't implement all of the ideas I mentioned above (in particular, there's no Xft.dpi support to override the DPI reported by core), but it works reasonably well on pure GTK+ applications (more so than in applications that have their own toolkit abstraction layer, such as Firefox, Chromium or LibreOffice).