DomU (an unprivileged domain) is a guest virtual machine running under the Xen hypervisor that does not require manual configuration of network interfaces. It receives a ready-made network channel from the Dom0 management domain and operates like a regular computer on a local network.
DomU is used in cloud platforms and server infrastructures where client system isolation is required. Typical scenarios include hosting virtual private servers (VPS), running test environments, and deploying multi-tenant applications with network security guarantees.
Typical problems
The main difficulty is loss of connectivity after migrating a DomU to another physical host when the network bridge on the target is misconfigured. Traffic leaks are also possible due to incorrect filtering rules in Dom0, as are MAC address conflicts when virtual machines are cloned without regenerating their MAC addresses.
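One practical remedy for the cloning pitfall is to regenerate each VIF's MAC address within the OUI 00:16:3e that Xen reserves for guest interfaces. Below is a minimal sketch of such a generator; the helper name and the use of rand() are illustrative and not part of any Xen tool.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Generate a random MAC in the Xen-reserved OUI 00:16:3e,
 * so that cloned DomUs do not collide on the bridge. */
static void xen_random_mac(char out[18])
{
    unsigned char tail[3];
    for (int i = 0; i < 3; i++)
        tail[i] = (unsigned char)(rand() & 0xff);
    snprintf(out, 18, "00:16:3e:%02x:%02x:%02x", tail[0], tail[1], tail[2]);
}

int main(void)
{
    char mac[18];
    srand((unsigned)time(NULL));
    xen_random_mac(mac);
    /* Paste the result into the vif= line of the DomU configuration. */
    printf("vif = ['mac=%s,bridge=xenbr0']\n", mac);
    return 0;
}
```

The printed vif= line follows the xl domain-config syntax, where mac= fixes the address and bridge= selects the Dom0 bridge.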
How it works
DomU does not have direct access to physical network cards. Instead, the Xen hypervisor creates a virtual network interface (VIF) that connects to a network bridge in Dom0. Dom0, acting as the privileged management domain, receives packet transmission events from the hypervisor. When DomU sends data, the netfront driver places the frame into a shared buffer and notifies the hypervisor via the Xen event mechanism. The hypervisor forwards the notification to the netback driver in Dom0, which injects the frame into the real network stack of the Dom0 operating system; from there, via bridging or routing, the packet reaches the physical interface. On reception, the chain is reversed: the Dom0 network adapter receives the frame, the bridge determines which virtual interface it belongs to, netback places the frame into shared memory, the hypervisor signals DomU, and netfront extracts the data and passes it to the guest operating system. This indirection preserves isolation and allows multiple DomUs to share a single physical port without interfering with one another.
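The transmit half of this path can be modelled in a few lines. The sketch below is a single-process simplification under stated assumptions: the shared ring is an ordinary struct rather than grant-mapped memory, the event channel is a plain flag, and the names (tx_ring, netfront_xmit, netback_poll) are illustrative rather than the real driver API.

```c
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 4
#define FRAME_MAX  64

/* Simplified shared ring: in Xen this memory would be granted
 * by DomU to Dom0 via the grant-table mechanism. */
struct tx_ring {
    char frames[RING_SLOTS][FRAME_MAX];
    unsigned prod;        /* written by netfront (DomU side) */
    unsigned cons;        /* written by netback  (Dom0 side) */
    int event_pending;    /* stands in for an event-channel notification */
};

/* netfront side: DomU queues a frame and "kicks" the event channel. */
static void netfront_xmit(struct tx_ring *r, const char *frame)
{
    strncpy(r->frames[r->prod % RING_SLOTS], frame, FRAME_MAX - 1);
    r->prod++;
    r->event_pending = 1;     /* the hypervisor would deliver this to Dom0 */
}

/* netback side: Dom0 drains the ring and injects frames into its bridge. */
static void netback_poll(struct tx_ring *r)
{
    if (!r->event_pending)
        return;
    r->event_pending = 0;
    while (r->cons != r->prod) {
        printf("netback -> bridge: %s\n", r->frames[r->cons % RING_SLOTS]);
        r->cons++;
    }
}

int main(void)
{
    struct tx_ring ring = {0};
    netfront_xmit(&ring, "ARP who-has 192.0.2.1");
    netfront_xmit(&ring, "ICMP echo request");
    netback_poll(&ring);      /* Dom0 forwards both frames */
    return 0;
}
```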
DomU functionality
DomU memory model. The hypervisor allocates a fixed amount of physical memory to DomU, addressed through pseudo-physical addresses. DomU does not work directly with real machine pages. Address translation goes through an extra stage, implemented with shadow page tables or hardware-assisted paging (EPT/NPT), which ensures isolation.
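The pseudo-physical indirection amounts to one extra table lookup per page. The toy translation below assumes a hypothetical per-domain p2m array with invented frame numbers; real Xen maintains and validates this mapping itself.

```c
#include <stdio.h>
#include <stdint.h>

#define GUEST_PAGES 8
#define PAGE_SHIFT  12

/* Toy physical-to-machine (p2m) table: index = DomU pseudo-physical
 * frame number, value = machine frame number chosen by the hypervisor. */
static const uint64_t p2m[GUEST_PAGES] = {
    0x1a30, 0x1a31, 0x0fe2, 0x0fe3, 0x2c00, 0x2c01, 0x2c02, 0x07ff
};

/* Translate a guest "physical" address to a machine address. */
static uint64_t guest_phys_to_machine(uint64_t gpa)
{
    uint64_t pfn    = gpa >> PAGE_SHIFT;            /* pseudo-physical frame */
    uint64_t offset = gpa & ((1u << PAGE_SHIFT) - 1);

    if (pfn >= GUEST_PAGES)
        return (uint64_t)-1;                        /* outside the allocation */
    return (p2m[pfn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    uint64_t gpa = 0x3123;                          /* page 3, offset 0x123 */
    printf("guest 0x%llx -> machine 0x%llx\n",
           (unsigned long long)gpa,
           (unsigned long long)guest_phys_to_machine(gpa));
    return 0;
}
```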
CPU virtualization. DomU uses paravirtualization (PV) or hardware virtualization (HVM). In PV mode, the DomU kernel is modified to call the hypervisor via hypercalls. In HVM mode, DomU runs unmodified using Intel VT-x or AMD-V, but requires device emulation.
Hypercalls and context switching. DomU interacts with Xen via synchronous hypercalls, special entry points that transfer control to the hypervisor much as system calls transfer control to an OS kernel. Switching between DomU and the hypervisor keeps overhead low: only the necessary registers are saved and the page tables switched, so a hypercall costs far less than a full virtual-machine context switch.
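Conceptually, the hypervisor dispatches hypercalls the way a kernel dispatches system calls: by number, through a table of handlers. The sketch below is a user-space analogy with made-up hypercall numbers and handlers; a real PV guest traps into Xen through the hypercall page rather than calling a local function.

```c
#include <stdio.h>

/* Illustrative hypercall numbers; the real ABI is defined by Xen's headers. */
enum { HC_SET_TIMER = 0, HC_EVENT_SEND = 1, HC_MAX };

static long do_set_timer(long ns)    { printf("timer armed for %ld ns\n", ns); return 0; }
static long do_event_send(long port) { printf("event sent to port %ld\n", port); return 0; }

/* The hypervisor dispatches a synchronous hypercall by number,
 * much like a system-call table inside an OS kernel. */
static long (*hypercall_table[HC_MAX])(long) = {
    [HC_SET_TIMER]  = do_set_timer,
    [HC_EVENT_SEND] = do_event_send,
};

/* Guest-side stub: in real PV code this traps into Xen
 * instead of invoking the handler directly. */
static long hypercall(int nr, long arg)
{
    if (nr < 0 || nr >= HC_MAX)
        return -1;                 /* the hypervisor rejects unknown calls */
    return hypercall_table[nr](arg);
}

int main(void)
{
    hypercall(HC_SET_TIMER, 1000000);
    hypercall(HC_EVENT_SEND, 4);
    return 0;
}
```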
Virtual interrupts (VIRQ). DomU receives interrupts not from real hardware but via the VIRQ (Virtual IRQ) mechanism. The hypervisor generates events for DomU upon completion of I/O operations or changes in the state of other domains. VIRQs are handled like regular interrupts but without involving the PIC/APIC.
Control domain Dom0. Dom0 is a privileged virtual machine that creates, destroys, and manages DomUs. It contains device drivers and the xl/xm toolstack. DomUs cannot communicate with each other without Dom0 mediation, which enhances security.
Drivers and device splitting. DomU has no access to physical drivers. Two types of drivers are used: backend drivers in Dom0 and frontend drivers in DomU. The frontend passes requests via a ring buffer (shared memory ring), and the backend performs the actual hardware operation.
Ring buffers (I/O rings). Ring buffers located in shared memory are used for data transfer between DomU and Dom0. Each buffer contains request and response descriptors. DomU places a request into the ring, sends an event via an event channel, after which Dom0 processes the data.
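The descriptor layout can be illustrated with a toy ring in which requests and responses share the slots and each side advances only its own index. This is a simplified model, not the actual Xen ring macros; field names and sizes are illustrative.

```c
#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 8               /* power of two so index wrap is a mask */

struct request  { uint64_t id; uint64_t sector; uint32_t nr_sectors; };
struct response { uint64_t id; int32_t  status; };

/* Toy I/O ring: request and response descriptors share the slots;
 * exactly one side writes each index (DomU: req_prod, Dom0: rsp_prod). */
struct io_ring {
    union { struct request req; struct response rsp; } slot[RING_SIZE];
    unsigned req_prod, req_cons;  /* requests produced / consumed  */
    unsigned rsp_prod, rsp_cons;  /* responses produced / consumed */
};

/* DomU (frontend) queues a read request. */
static void front_submit(struct io_ring *r, uint64_t id, uint64_t sector)
{
    struct request *q = &r->slot[r->req_prod++ & (RING_SIZE - 1)].req;
    q->id = id; q->sector = sector; q->nr_sectors = 1;
}

/* Dom0 (backend) consumes requests and posts responses in place. */
static void back_service(struct io_ring *r)
{
    while (r->req_cons != r->req_prod) {
        unsigned i = r->req_cons++ & (RING_SIZE - 1);
        uint64_t id = r->slot[i].req.id;        /* "perform" the I/O here */
        r->slot[r->rsp_prod++ & (RING_SIZE - 1)].rsp =
            (struct response){ .id = id, .status = 0 };
    }
}

int main(void)
{
    struct io_ring ring = {0};
    front_submit(&ring, 42, 2048);
    back_service(&ring);
    printf("request %llu completed, status %d\n",
           (unsigned long long)ring.slot[0].rsp.id, ring.slot[0].rsp.status);
    return 0;
}
```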
Event channels. Event channels are a lightweight notification mechanism between domains. Each channel represents a unidirectional signaling path. DomU uses them to notify Dom0 about new network packets or completion of disk operations. Channels are mapped to VIRQs.
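A rough model of event delivery is a pair of bit arrays, pending and mask, scanned by the receiving domain; this mirrors the idea (though not the exact layout) of the per-port bits Xen keeps in shared memory. The function names below are illustrative.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy event-channel state: one pending bit and one mask bit per port. */
static uint32_t evtchn_pending;
static uint32_t evtchn_mask;

/* Sender side (e.g. a Dom0 backend): mark a port pending. */
static void evtchn_notify(int port)
{
    evtchn_pending |= 1u << port;
}

/* Receiver side (DomU): scan for unmasked pending ports and
 * dispatch them like virtual interrupts. */
static void evtchn_dispatch(void (*handler)(int port))
{
    uint32_t ready = evtchn_pending & ~evtchn_mask;
    for (int port = 0; port < 32; port++) {
        if (ready & (1u << port)) {
            evtchn_pending &= ~(1u << port);   /* acknowledge before handling */
            handler(port);
        }
    }
}

static void on_event(int port)
{
    printf("virtual IRQ for port %d\n", port);
}

int main(void)
{
    evtchn_mask = 1u << 5;        /* port 5 is masked, like a disabled IRQ */
    evtchn_notify(3);             /* e.g. disk completion */
    evtchn_notify(5);             /* masked: stays pending */
    evtchn_dispatch(on_event);    /* prints only port 3 */
    return 0;
}
```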
Network stack in DomU. The DomU virtual network interface (VIF) connects to a bridge or Open vSwitch in Dom0. A packet from DomU goes through the netfront frontend driver into the ring buffer; Dom0 receives it via netback and sends it out the physical interface. Overhead and packet loss on this path are small compared with a fully emulated network card.
DomU disk subsystem. DomU disk devices are presented as virtual block devices (VBDs). The blkfront frontend driver passes read/write requests, and Dom0 via blkback accesses a physical block device or a file image. Raw and qcow2 image formats are supported, as are LVM volumes and iSCSI targets as backing storage.
Graphical console. DomU can have a virtual graphics card via QEMU (in HVM) or a PV console (in PV). The xl console or VNC is used to access the console. Keyboard and mouse I/O are emulated, with events passed from Dom0 via event channels.
Live migration. DomU supports migration between physical servers without stopping. The hypervisor copies memory page by page, tracking changes via a dirty page logging mechanism. After synchronizing remaining pages, DomU is paused for milliseconds and resumed on the target node.
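The pre-copy idea can be shown with a toy loop over a dirty bitmap: copy the pages dirtied in the previous round while the guest keeps running, then stop-and-copy once the remaining set is small. Page counts and the convergence model below are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

#define PAGES     16
#define THRESHOLD  2      /* stop-and-copy once this few pages remain dirty */

static char src[PAGES], dst[PAGES];
static int  dirty[PAGES];

/* Stand-in for the guest touching memory while migration runs;
 * the workload "cools down" so the loop converges. */
static void guest_writes(int round)
{
    int n = 4 >> round;
    for (int i = 0; i < n; i++) { src[i] = (char)(round + i); dirty[i] = 1; }
}

int main(void)
{
    /* Round 0: everything is considered dirty. */
    for (int i = 0; i < PAGES; i++) { src[i] = (char)i; dirty[i] = 1; }

    int remaining = PAGES, round = 0;
    while (remaining > THRESHOLD) {
        /* Copy pages dirtied so far while the guest is still running. */
        for (int i = 0; i < PAGES; i++)
            if (dirty[i]) { dst[i] = src[i]; dirty[i] = 0; }

        guest_writes(round++);            /* guest keeps dirtying pages */
        remaining = 0;
        for (int i = 0; i < PAGES; i++) remaining += dirty[i];
        printf("round %d: %d dirty pages left\n", round, remaining);
    }

    /* Final brief pause: copy the last dirty pages, then resume remotely. */
    for (int i = 0; i < PAGES; i++) if (dirty[i]) dst[i] = src[i];
    printf("memory identical: %s\n", memcmp(src, dst, PAGES) ? "no" : "yes");
    return 0;
}
```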
Save and restore. DomU state can be saved to a file using the xl save call. The hypervisor dumps memory contents, CPU registers, virtual device state, and event channels. Restoring (xl restore) loads this image and recreates the domain exactly as it was at the time of saving.
Resource limits (CPU caps). DomU does not receive unrestricted physical CPU time. The cpu_weight parameter determines its share of CPU time under the Credit/Credit2 fair-share scheduler. The cpu_cap parameter hard-limits usage as a percentage of a single core, preventing one guest from monopolizing the host CPU (a denial-of-service risk).
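As a rough model (ignoring multiple physical CPUs and scheduler internals), a domain's CPU share is its weight divided by the sum of all weights, clamped by its cap. The domain names, weights, and caps below are illustrative.

```c
#include <stdio.h>

/* Toy view of credit-scheduler accounting: share = weight / total weight,
 * optionally clamped by a cap expressed as a percentage of one core. */
struct dom { const char *name; int weight; int cap; /* % of one core, 0 = none */ };

int main(void)
{
    struct dom doms[] = {
        { "dom0",  512, 0  },
        { "web01", 256, 0  },
        { "batch", 256, 25 },   /* heavy batch job capped at a quarter core */
    };
    int n = sizeof doms / sizeof doms[0], total = 0;

    for (int i = 0; i < n; i++) total += doms[i].weight;

    for (int i = 0; i < n; i++) {
        double share = 100.0 * doms[i].weight / total;  /* % of one core */
        if (doms[i].cap && share > doms[i].cap)
            share = doms[i].cap;                        /* hard ceiling */
        printf("%-6s weight=%-4d cap=%-3d -> %.1f%% of a core\n",
               doms[i].name, doms[i].weight, doms[i].cap, share);
    }
    return 0;
}
```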
Memory limits and swap. The hypervisor allocates DomU a fixed maximum amount of memory (static max). A dynamic target (memory target) below that maximum can also be set. DomU has no direct swap to disk, but a virtual swap partition can be configured inside DomU on a VBD. Memory reallocation is coordinated with the guest through the balloon driver (XenBalloon).
XenBalloon (balloon driver). The balloon driver in DomU allows dynamically changing the allocated memory. Dom0 can request that DomU return pages or accept additional ones. The balloon inflates when DomU releases pages to the hypervisor and deflates when DomU reclaims them. The operation is performed without rebooting.
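The accounting behind ballooning reduces to tracking how many pages the guest has handed back to the hypervisor relative to a target set by Dom0. The sketch below models only that bookkeeping; the struct and function names are invented, and real ballooning releases and claims actual page frames.

```c
#include <stdio.h>

/* Toy balloon accounting for one DomU. */
struct balloon {
    long total_pages;    /* static maximum allocation        */
    long balloon_pages;  /* pages currently returned to Xen  */
};

static long usable(const struct balloon *b)
{
    return b->total_pages - b->balloon_pages;
}

/* Dom0 sets a new memory target (in pages); the guest inflates or
 * deflates the balloon to match it, never exceeding the static max. */
static void set_target(struct balloon *b, long target_pages)
{
    long diff = usable(b) - target_pages;   /* >0: inflate, <0: deflate */
    b->balloon_pages += diff;
    if (b->balloon_pages < 0)
        b->balloon_pages = 0;               /* cannot grow past the static max */
    printf("target=%ld usable=%ld ballooned=%ld\n",
           target_pages, usable(b), b->balloon_pages);
}

int main(void)
{
    struct balloon b = { .total_pages = 262144, .balloon_pages = 0 }; /* 1 GiB */
    set_target(&b, 131072);   /* shrink DomU to 512 MiB */
    set_target(&b, 196608);   /* grow it back to 768 MiB */
    return 0;
}
```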
Security and isolation of DomU. Each DomU exists in its own address space. Access to other domains' memory is impossible due to hardware page protection. The hypervisor checks all hypercalls for legitimacy. No DomU can directly execute a privileged instruction of the physical CPU.
Multi-core support (vCPUs). DomU can have multiple virtual CPUs (vCPUs). The hypervisor schedules vCPUs onto physical cores, with vCPU affinity supported. Inside DomU, SMP is present, but actual parallelism depends on physical core load and the scheduler algorithm.
Tracing and debugging DomU. For diagnostics, tools such as xentrace, xenmon, and xl debug-keys are used. DomU generates events into the hypervisor’s ring buffer. Hypercalls, interrupts, and context switches can be intercepted. Debugging the DomU kernel is possible via the Xen serial console or kdump with access to the memory image.
Comparisons involving DomU
DomU vs Dom0. DomU is a fully isolated virtual machine running under the hypervisor, while Dom0 is a privileged management domain with direct hardware access. The key difference is access level: Dom0 has exclusive rights to I/O operations and device drivers, servicing DomU network and disk requests via shared ring buffers, creating a strict privilege separation architecture for security.
DomU vs KVM guest. DomU on Xen can use paravirtualization with a modified guest operating system that is aware of the hypervisor, whereas a KVM guest relies on hardware virtualization (VT-x/AMD-V) with an unmodified OS. DomU performance is traditionally higher in network and disk scenarios thanks to the direct netfront/netback driver path, while KVM relies on virtio paravirtual devices for comparable efficiency but wins in the ease of running kernels that lack hypervisor-specific modifications.
DomU vs Docker container. DomU provides a full hardware-like environment with its own OS kernel and dedicated memory, giving strong multi-tenant isolation. A Docker container uses the host's shared kernel via namespaces and cgroups, sharing libraries and binaries. DomU sacrifices density for security: compromising the guest kernel does not affect neighboring VMs or the host, which is critical in public clouds with multi-tenant infrastructure.
DomU vs LXC/LXD. Both solutions provide isolated compute environments, but DomU operates at the hypervisor level with its own process scheduler and independent kernel lifecycle. LXC/LXD system containers run unprivileged user spaces with a single host kernel, delivering near-native performance with lower memory overhead. DomU is preferable when needing to run different kernel versions or operating systems on the same physical server without module conflicts.
DomU vs Firecracker microVM. Classic Xen DomU is designed for full operating systems with a rich driver set and a long lifecycle, whereas a Firecracker microVM boots a minimal kernel in milliseconds with a reduced emulated-device surface. DomU supports migration of entire guest systems and complex storage configurations via tapdisk backends, while Firecracker trades this functionality for startup speed and the security of single-tenant serverless workloads with millisecond cold starts.
Operating system and driver support
In a paravirtualized (PV) DomU, the guest OS uses a specially modified kernel with netfront and blkfront drivers that interact directly with the corresponding backend drivers (netback, blkback) in Dom0 via ring buffers and shared memory, bypassing hardware emulation. Under hardware virtualization (HVM), an unmodified OS is launched; to achieve performance close to PV, paravirtual (PVHVM) drivers are installed inside the guest, switching disk and network I/O from emulated IDE/SATA and e1000 devices to the high-speed paravirtual paths (xen-netfront/xen-blkfront) after boot. This allows operating systems without native Xen support (e.g., Windows) to run efficiently in HVM mode with PV drivers.
Security
DomU isolation is provided by hardware virtualization support (Intel VT-x/AMD-V). The hypervisor runs in the most privileged VMX root mode, while guest kernels execute in unprivileged VMX non-root contexts, so any attempt by DomU to execute a privileged instruction triggers a VM exit that transfers control to the hypervisor for validation. Memory access is strictly controlled via EPT/NPT (Extended/Nested Page Tables), which enforce hardware separation of physical address spaces between guest domains, preventing unauthorized reads of a neighboring VM's memory and limiting DomU to the pages explicitly allocated to it by the hypervisor.
Logging
The DomU logging system is built on two main channels. The first is the guest serial console: all guest kernel output, including boot messages and kernel panics, travels through a Xen ring buffer to the xenconsoled daemon in Dom0, which writes these streams to files such as /var/log/xen/console/guest-<name>.log. The second, used for emergency debugging and for preserving kernel logs across reboots, is the xen-pvpanic mechanism: when the guest OS crashes, the crash reason code is sent via XenStore to Dom0. For detailed tracing of hypercalls and scheduler events, the built-in Xen tool xentrace records timestamps and parameters of all operations performed by DomU in binary format.
Limitations
Each DomU is subject to hard resource limits configured in the credit scheduler (Credit/Credit2), where each virtual CPU is assigned a weight (relative share of physical CPU time) and a cap (absolute ceiling as a percentage of a single core). Disk limits are implemented via the blkback backend driver in Dom0, which applies a token bucket algorithm to limit the IOPS and throughput of each virtual block device (vbd). Network limits are imposed using Traffic Control (tc) directly on the vif virtual interfaces in Dom0's bridge configuration: for each interface, policing classes define a fixed average rate and a peak burst value, preventing the guest from exceeding its allocated bandwidth.
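The token bucket mentioned for disk limits works the same way wherever it is applied: tokens accumulate at the configured rate up to a burst depth, and each admitted request spends one. Below is a generic sketch; rates, names, and the single-token cost per request are illustrative, not a Xen interface.

```c
#include <stdio.h>

/* Toy token bucket of the kind used to cap a vbd's IOPS: tokens refill
 * at a fixed rate, each request spends one, bursts are bounded by depth. */
struct token_bucket {
    double tokens;   /* currently available     */
    double depth;    /* maximum burst size      */
    double rate;     /* tokens added per second */
};

static void refill(struct token_bucket *b, double elapsed_s)
{
    b->tokens += b->rate * elapsed_s;
    if (b->tokens > b->depth)
        b->tokens = b->depth;
}

static int admit(struct token_bucket *b)
{
    if (b->tokens >= 1.0) { b->tokens -= 1.0; return 1; }
    return 0;        /* request is delayed or dropped */
}

int main(void)
{
    struct token_bucket iops = { .tokens = 10, .depth = 10, .rate = 100 };
    int passed = 0;

    for (int i = 0; i < 50; i++)            /* 50 requests arrive at once */
        if (admit(&iops)) passed++;
    printf("burst: %d of 50 requests admitted\n", passed);      /* 10 */

    refill(&iops, 0.1);                     /* 100 IOPS * 0.1 s = 10 tokens */
    printf("after 100 ms: %.0f tokens available again\n", iops.tokens);
    return 0;
}
```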
History and development
The Xen domain architecture originated at the University of Cambridge in the early 2000s (first public release in 2003) with paravirtualization, which required a modified guest kernel aware of the hypervisor (first-generation DomU). When Intel and AMD processors gained hardware virtualization extensions (2005–2006), full hardware virtualization (HVM) appeared, giving rise to fully isolated, unmodified guest systems. A key transitional point was the hybrid PVHVM mode (2008), which added paravirtual drivers to HVM guests and eliminated the main drawback of pure emulation. In modern versions of Xen (starting with 4.8), native PVH mode combines the lightweight entry of PV with HVM hardware memory protection, freeing DomU from the need to emulate legacy hardware and from the QEMU dependency, making guests both performant and maximally isolated.