DRM (GPU access coordination)

DRM is a Linux kernel subsystem that allows multiple programs such as a desktop interface and a game to safely use a graphics card, preventing interference while providing each with direct access to hardware acceleration.

DRM (Direct Rendering Manager) is used in all Linux distributions with a graphical session, whether based on X11 or Wayland, as well as in operating systems built on the Linux kernel for embedded devices such as automotive panels and Android smartphones. It is a mandatory component for 3D acceleration, hardware-decoded video playback, and desktop compositing.

Typical Problems

The main difficulties arise with proprietary graphics drivers, which sometimes do not fully follow the DRM interface and cause failures during kernel mode setting (KMS). Other common problems are memory leaks from improper buffer handling and deadlocks caused by incorrect scheduling of rendering queues among multiple processes.

How It Works

DRM operates in kernel space and acts as an intermediary between user applications and the graphics processing unit. Unlike DRI (Direct Rendering Infrastructure), which describes the overall architecture for direct rendering from userspace, DRM implements the low-level locking and memory management mechanisms such as GEM (Graphics Execution Manager) and TTM (Translation Table Maps). A similar function in other operating systems, for example DXGI in Windows, is integrated directly with the window manager subsystem. DRM strictly separates screen mode management (Kernel Mode Setting) from rendering management: when switching virtual terminals, it preserves the device context. Unlike display servers such as Xorg that run on top of DRM, DRM itself does not perform compositing; it provides a secure channel to the hardware and manages command queues (ring buffers). For synchronization between devices it uses fence objects passed alongside dma-buf file descriptors, allowing video memory to be shared between devices without copying through the central processor.

DRM Functionality

  1. DRM as a kernel subsystem. The Direct Rendering Manager provides a unified interface for user space, managing graphics controllers, GPUs, and display pipelines. It exposes device nodes such as /dev/dri/card*, through which applications issue ioctls.
  2. Atomic Mode Setting. DRM implements an atomic commit mechanism that guarantees a set of display parameters (resolution, refresh rate, color format) is applied as a whole or rejected as a whole. This eliminates visual artifacts during reconfiguration.
  3. GEM Buffer Objects. The Graphics Execution Manager inside DRM manages the allocation and mapping of video memory. It allows creating buffers, importing them via dma-buf, and synchronizing access among the CPU, GPU, and display controller without copying.
  4. KMS Commands (Kernel Mode Setting). The KMS subsystem handles setting monitor operating modes. DRM validates timings, calculates pixel clocks, and configures CRTCs, encoders, and connectors without involving X11 user mode.
  5. CRTC Objects. The CRTC (CRT Controller) in DRM reads pixel data from the framebuffer, applies color correction and gamma, then generates the video stream for scanout. It can be bound to one or more encoders.
  6. Encoders and Connectors. An encoder converts the stream into a physical signal such as LVDS, HDMI, or DisplayPort. A connector tracks whether a display is attached, reporting hotplug events via polling or interrupts. DRM automatically pairs an encoder with a suitable CRTC.
  7. Properties in DRM. Mode objects expose properties as key-value pairs. User code reads them via the DRM_IOCTL_MODE_GETPROPERTY ioctl and changes them via DRM_IOCTL_MODE_SETPROPERTY or an atomic commit, adjusting scaling, backlight, colors, or enabling HDR. In the atomic API, a set of property changes is validated and applied as one unit.
  8. Power Management. DRM integrates with runtime PM, switching the GPU into D3 or D0 states. Controllers disable clocking for unused blocks such as raster operators and texture processors, reducing heat and power consumption.
  9. GPU Command Scheduler. DRM includes a scheduler, command ring buffers, and execution contexts. It prevents GPU hangs by adding timeouts and resetting the context via the GPU reset mechanism without rebooting the system.
  10. Explicit dma-buf Synchronization. DRM uses fence objects for synchronization between different devices. With explicit fencing, fences travel as separate file descriptors (sync_file) alongside the dma-buf buffers they guard, allowing the graphics card, codec, and display to work on a frame sequentially without blind waiting.
  11. Overlay Plane Support. DRM exposes multiple hardware planes (primary, overlay, cursor). Each plane has independent coordinates, scaling, and pixel format such as ARGB or YUV, reducing the compositing load on the GPU.
  12. Rendering ioctl API. Driver-specific ioctls, such as Intel's DRM_IOCTL_I915_GEM_EXECBUFFER2, submit command buffers for execution. The driver parses the commands, validates memory access rights, and triggers DMA transfers.
  13. Hang Handling. DRM detects GPU hangs via a watchdog timer. On timeout, it saves a register log, resets the context, and restarts the scheduler. Applications receive an error code without session failure.
  14. Direct Scan-out Mode. DRM can scan out directly from a buffer created by a render client such as a Wayland compositor. This eliminates intermediate copying, with the IOMMU providing hardware memory protection.
  15. IOMMU Address Isolation. The IOMMU restricts the GPU's DMA to the addresses the kernel has explicitly mapped for it, so a misprogrammed or malicious device cannot read or overwrite unrelated system memory.
  16. Multi-GPU and PRIME. The PRIME mechanism allows exporting a DRM buffer as a dma-buf to another graphics card. Rendering on a discrete GPU and scan-out on an integrated GPU happen without reading back over the PCIe bus.
  17. EDID Mode Management. DRM reads extended EDID data through the connector, parsing blocks with DMT and CEA-861 timings. On request, it applies user-supplied modelines, overriding the preferred mode.
  18. Built-in debugfs Interface. DRM exports debugging information via debugfs: object lists, used buffers, connector states. Files under /sys/kernel/debug/dri/<minor>/, such as state, show CRTC configuration and error counters.
  19. VRR Support. DRM implements adaptive synchronization through the vrr_enabled property. When enabled, the GPU dynamically changes the vertical blanking interval, repeating or skipping frames to eliminate tearing in games.
  20. Virtual DRM Drivers. For emulation and cloud desktops, there are virtual drivers such as vkms and gud. They emulate CRTCs and connectors in memory without requiring a GPU, while using the same ioctl and atomic API.
  21. Export to User Space. libdrm provides C wrappers for the ioctls, though direct ioctl calls are also possible. Wayland compositors and Xorg open /dev/dri/card0, enumerate resources, and manage modes and buffers using the structures defined in drm_mode.h.

Comparison with Similar Functions

  • DRM (Direct Rendering Manager) vs GEM (Graphics Execution Manager). DRM manages access to the GPU at the kernel level, ensuring safe multithreaded context switching, whereas GEM focuses on video memory management within the DRM driver — buffer allocation, shared access, and rendering synchronization. GEM is a subsystem of DRM, not a competitor, complementing it for full GPU control.
  • DRM vs DRI (Direct Rendering Infrastructure). DRI is an architectural layer for direct rendering in X11, including client libraries such as Mesa's libGL and X server modules, while DRM is a low-level kernel module providing primitives for hardware acceleration. DRM is the foundation of DRI, without which direct userspace access to the GPU is impossible.
  • DRM vs KMS (Kernel Mode Setting). KMS is a DRM subsystem responsible for setting video modes such as resolution, refresh rate, and color depth, and managing display controllers and timings. Unlike X11 user mode, KMS is integrated into the kernel, allowing instant virtual terminal switching and eliminating boot-time flicker. DRM cannot work with displays without KMS.
  • DRM vs Mesa (Gallium) in the rendering context. DRM operates in kernel space and directly passes GPU command buffers via the ioctl interface, whereas Mesa is a userspace OpenGL/Vulkan library that constructs those buffers. Mesa requests GPU access through DRM, manages contexts and synchronization, but does not own the device. DRM is the resource controller; Mesa is the executor.
  • DRM vs Wayland in display management. Wayland is a compositor protocol running in userspace that manages windows and input, while DRM provides low-level access to framebuffers and display controllers. A Wayland compositor such as Weston sends completed buffers directly to DRM via KMS and GBM, bypassing the X server. DRM is the hardware interface; Wayland is the composition manager on top of it.

OS and Driver Support

DRM is built directly into the Linux kernel (starting from version 2.6.26) as the drm core subsystem, which exposes a unified API through the /dev/dri/card* device nodes. Device drivers (e.g., i915 for Intel, amdgpu for AMD, nouveau for NVIDIA) are loaded as kernel modules and register their callbacks through the drm_driver structure, after which the core code manages display modes (KMS), GEM/TTM buffers, and DMA-BUF synchronization.

Security

DRM provides isolation between processes. Memory management operations (GEM, the Graphics Execution Manager) and framebuffer access go through the kernel, with privileged ioctls guarded by capability checks such as CAP_SYS_ADMIN and CAP_SYS_RAWIO. User processes never see physical video memory addresses: all memory mappings (mmap) go through drm_gem_mmap, which validates the request and uses the struct vm_operations_struct mechanism to track shared memory regions.

Logging

DRM uses the standard kernel logging mechanism via the DRM_INFO, DRM_ERROR, and DRM_DEBUG macros (built on pr_info and dev_info), which write messages to the kernel ring buffer (dmesg). For debugging, setting drm.debug in the kernel boot parameters (a bitmask of categories: core, KMS, atomic, vblank, and so on) activates detailed logging of video mode switching and buffer allocation; drm.debug=0x1 enables only the core category. Additionally, drivers can add their own trace points, such as trace_drm_vblank_event(), for analysis with perf.

Limitations

DRM's mode setting runs only in a privileged context: becoming DRM master on a /dev/dri/card* node requires root privileges or delegation by a session manager such as logind, although render nodes (/dev/dri/renderD*) allow unprivileged rendering. Moreover, closed-source GPU drivers (e.g., proprietary NVIDIA drivers) do not use the in-kernel DRM code natively but route through the nvidia-drm.ko wrapper; historically this lagged behind in atomic mode support (Atomic KMS) and proper operation with Wayland compositors, though recent driver releases have closed much of the gap. Additionally, on embedded systems without an IOMMU, DRM cannot protect against DMA attacks between the GPU and other devices.

History and Development

DRM originally appeared in 1999 as part of DRI (Direct Rendering Infrastructure) for XFree86, but starting from kernel version 2.6.13 (2005) it became an independent subsystem. A key turning point occurred with the introduction of KMS (Kernel Mode Setting) in 2008, which moved video mode control from the X server into the kernel. Modern development includes support for universal buffers (DMA-BUF), atomic commits (Atomic Mode Setting, added in 2014), integration with Vulkan through external memory extensions, and an actively developing debugging interface via DRM_IOCTL_CRTC_GET_SEQUENCE along with improved synchronization through syncobj.