DRAM (Dynamic Random-Access Memory)

DRAM is a type of memory that stores each bit of data as charge on a separate capacitor inside the chip. Because this charge leaks away naturally, DRAM requires constant refresh; otherwise the information is lost. It is fast but volatile memory.

DRAM is used as the main memory in personal computers, laptops, and servers. It also serves as video memory for graphics adapters and appears in mobile devices (LPDDR) and embedded systems such as routers and printers. Without DRAM, modern processors could not keep large volumes of data available for simultaneous processing.

Typical Issues

Capacitors discharge over time, leading to bit errors, and the frequent refresh this requires adds latency. DRAM is also vulnerable to electromagnetic interference and to the row hammer effect, where frequent access to adjacent rows causes charge loss. A further drawback is volatility: when power is removed, all data disappears instantly.

How DRAM works

Each DRAM cell consists of one capacitor and one transistor. The capacitor stores charge: the presence of charge corresponds to a logical one, its absence to a zero. The transistor acts as a switch: when the wordline applies voltage to its gate, it connects the capacitor to the bitline. For a read, the memory controller activates the desired row, and a sense amplifier detects the small voltage change on the bitline by comparing it to a reference level, then amplifies the signal and restores the original charge. Writing is similar, but instead of sensing, the controller forcibly drives the bitline to zero or one, charging or discharging the capacitor through the open transistor. Because of leakage through the substrate and dielectric, the capacitor loses its charge within milliseconds, so the controller periodically rewrites each row, a process called refresh. Unlike SRAM, which uses a six-transistor flip-flop per bit, DRAM provides high storage density at the cost of constant maintenance.
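
As a rough illustration of this read-sense-restore cycle, here is a toy model in C; the voltages, the 10% charge-sharing swing, and the leakage factor are all illustrative assumptions, not values from any real device.

```c
#include <stdio.h>
#include <stdbool.h>

#define VDD  1.2          /* hypothetical supply voltage, volts */
#define VREF (VDD / 2)    /* bitline reference level */

typedef struct {
    double charge;        /* ~VDD = logical 1, ~0 = logical 0 */
} dram_cell;

/* Read: connect the cell to the bitline, sense against VREF, restore. */
bool dram_read(dram_cell *cell) {
    /* Charge sharing with the long bitline leaves only a small swing. */
    double bitline = VREF + (cell->charge - VREF) * 0.1;
    bool value = bitline > VREF;       /* sense amplifier decision */
    cell->charge = value ? VDD : 0.0;  /* reads are destructive: restore */
    return value;
}

/* Write: drive the bitline to a full rail through the open transistor. */
void dram_write(dram_cell *cell, bool value) {
    cell->charge = value ? VDD : 0.0;
}

int main(void) {
    dram_cell c;
    dram_write(&c, true);
    c.charge *= 0.6;                      /* leakage between refreshes */
    printf("read: %d\n", dram_read(&c));  /* still sensed as 1, restored */
    return 0;
}
```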

DRAM functionality

  1. Fundamental storage structure. A DRAM bit is stored in a basic cell consisting of one transistor and one capacitor. The capacitor holds an electrical charge, where the presence of charge encodes a logical one and the absence a zero. The transistor acts as a switch, controlling access to the capacitor.
  2. Need for refresh. DRAM is called dynamic because of charge leakage through the capacitor substrate. To retain data, periodic refresh is required, during which the controller reads and rewrites the contents of each cell. The typical refresh interval is 64 milliseconds.
  3. Matrix architecture and addressing. Cells are organized into a two-dimensional row-column matrix to reduce the number of address lines. The address is split into two parts: first the row is activated (Activate command), then the column is selected (Read/Write command). This is called address multiplexing; a bit-slicing sketch after this list illustrates it.
  4. Role of sense amplifiers. When a row is activated, all cells in that row are connected to long bitlines. Sense amplifiers located at the ends of the lines detect the microscopic voltage difference between the cell signal and a reference level, amplifying the signal to full logic levels.
  5. Precharge command. Before accessing a different row, the Precharge command must be issued. It disables the current row and sets the bitlines to a reference voltage (typically VDD/2). Without this operation, data from the next row would be corrupted by residual charge.
  6. Access latency (CAS Latency). The CAS Latency (CL) specification defines the delay between issuing a column read command and the data appearing at the output. With CL=9, 9 clock cycles of the memory bus are required. This time is needed for internal amplification and data switching in the multiplexer.
  7. Burst mode. To increase throughput, DRAM uses burst transfers. After the first read with the given CAS Latency, data from adjacent columns in the same row are output sequentially, one word per bus transfer. The burst length is typically 4 or 8 transfers.
  8. Open and closed page. The currently active row is called an open page. Access to another cell on the same page is fast (Page Hit) and does not require a new activation. Changing the page (Page Miss) involves Precharge and a new Activate, adding about 30-40 ns of delay; a timing sketch after this list models the difference.
  9. Banks and inter-bank pipelining. Modern DRAM is divided into independent banks (typically 4-16). Each bank has its own array and sense amplifiers. The controller can perform Precharge, Activate, and Read operations in different banks in parallel, hiding row switching delays.
  10. Data bus and DQS strobe. Data exchange is synchronized by the DQS (Data Strobe) signal. During a write, the memory latches incoming data on DQS edges driven by the controller. During a read, the memory drives DQS along with the data. This allows precise alignment of data windows at high frequencies (above 1600 MHz).
  11. REFRESH and auto-refresh. For refresh, the controller periodically issues an Auto-Refresh command. In response, an internal DRAM counter sequentially activates one row in all banks. A fixed number of refresh commands covers the whole array per retention window (e.g., 8192 commands over 64 ms); the resulting command spacing is worked out in a sketch after this list.
  12. Self-refresh mode. In low-power modes (Power-down or Sleep), DRAM is put into Self-refresh. The memory independently performs internal refresh using a built-in timer. The data bus is disconnected, and power consumption drops to a very low level.
  13. Effect of temperature on leakage. The rate of charge leakage increases exponentially with temperature. At +85°C, the refresh interval is reduced to 32 ms. Industrial DRAM modules with Temperature-Compensated Refresh (TCR) automatically reduce the period when the chip heats up.
  14. Row Hammer effect. Repeated rapid activation of an aggressor row causes parasitic charge leakage from the capacitors of physically adjacent victim rows. This leads to bit flips. Modern controllers implement Target Row Refresh (TRR), additionally refreshing potentially vulnerable rows.
  15. ECC DRAM organization. For error detection and correction, additional ECC (Error-Correcting Code) memory is used. For every 64 bits of data, 8 check bits (a Hamming or Reed-Solomon code) are stored; on a read, the recomputed syndrome locates any flipped bit. The ECC controller corrects single-bit errors on the fly without interrupting access; a scaled-down Hamming sketch follows this list.
  16. Bank groups. In the DDR4 and DDR5 standards, Bank Groups are added. Each group contains independent banks with their own internal datapaths. Interleaving consecutive bursts across different groups avoids the longer same-group turnaround (tCCD_L), sustaining the full interface data rate.
  17. Write recovery and turnaround. After the last write data, a minimum interval called Write Recovery Time (tWR) must elapse before the row can be precharged; it allows data to fully transfer from the internal write circuitry into the array cells. A typical tWR value is about 15 ns for DDR4; a separate parameter (tWTR) governs switching from write to read.
  18. Internal reference voltages. Sense amplifiers do not use an absolute level but rather the potential difference between the bitline and a reference line. The reference is produced by a dedicated reference cell or a circuit-generated VDD/2 level. This differential scheme suppresses common-mode power-supply noise.
  19. On-Die Termination (ODT). To prevent signal reflections at high frequencies, DRAM includes built-in termination resistors that pull the line to VTT. The controller dynamically configures ODT during initialization, matching the bus impedance to 40-60 Ohms.
  20. Precharge Power Down. The controller can put individual banks into Precharge Power Down state if they are not being accessed. In this mode, sense amplifiers are turned off, but row refresh continues. Exiting this mode takes about 2-3 clock cycles, saving up to 40% of dynamic power.
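
To make item 3's address multiplexing concrete, the sketch below slices a flat physical address into column, row, and bank fields. The geometry (1024 columns, 32768 rows, 8 banks) and the bit ordering are hypothetical; real controllers choose mappings to maximize bank parallelism.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical geometry: 10 column bits, 15 row bits, 3 bank bits. */
#define COL_BITS  10
#define ROW_BITS  15
#define BANK_BITS 3

typedef struct { uint32_t bank, row, col; } dram_addr;

/* Split a flat address the way a controller multiplexes it:
   the row is sent with Activate, the column with Read/Write. */
dram_addr decode(uint32_t phys) {
    dram_addr a;
    a.col  =  phys                           & ((1u << COL_BITS)  - 1);
    a.row  = (phys >> COL_BITS)              & ((1u << ROW_BITS)  - 1);
    a.bank = (phys >> (COL_BITS + ROW_BITS)) & ((1u << BANK_BITS) - 1);
    return a;
}

int main(void) {
    dram_addr a = decode(0x01234567u);
    printf("bank=%u row=%u col=%u\n", a.bank, a.row, a.col);
    return 0;
}
```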
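Item 8's open-page behavior can be modeled with a few lines of C: a page hit costs only the column access, while a page miss pays Precharge plus Activate first. The timing values are illustrative, chosen to land near the 30-40 ns penalty mentioned above.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative timings in nanoseconds (not from a specific datasheet). */
#define T_CAS 13.5  /* column access (CL) */
#define T_RP  13.5  /* precharge */
#define T_RCD 13.5  /* activate-to-column delay */

static int open_row[8] = { -1, -1, -1, -1, -1, -1, -1, -1 }; /* per bank */

/* Return the access latency and update the open-page state. */
double access_ns(int bank, int row) {
    if (open_row[bank] == row)           /* page hit */
        return T_CAS;
    bool was_open = open_row[bank] >= 0;
    open_row[bank] = row;
    /* page miss: precharge the old row (if any), activate, then column */
    return (was_open ? T_RP : 0.0) + T_RCD + T_CAS;
}

int main(void) {
    printf("first access: %.1f ns\n", access_ns(0, 42)); /* 27.0 */
    printf("page hit:     %.1f ns\n", access_ns(0, 42)); /* 13.5 */
    printf("page miss:    %.1f ns\n", access_ns(0, 99)); /* 40.5 */
    return 0;
}
```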
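The refresh arithmetic from item 11 falls out directly: 8192 Auto-Refresh commands spread over a 64 ms window give an average command spacing (the tREFI parameter) of about 7.8 µs, halving to about 3.9 µs when the hot-temperature 32 ms window from item 13 applies.

```c
#include <stdio.h>

int main(void) {
    const double window_ms = 64.0;  /* JEDEC retention window */
    const int    commands  = 8192;  /* Auto-Refresh commands per window */
    /* Average spacing between refresh commands (tREFI). */
    printf("tREFI       = %.4f us\n", window_ms * 1000.0 / commands);
    printf("tREFI (hot) = %.4f us\n", window_ms / 2 * 1000.0 / commands);
    return 0;
}
```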
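Finally, the ECC principle from item 15 can be shown at reduced scale: real DIMMs protect 64 data bits with 8 check bits, while the toy Hamming(12,8) code below protects one byte with 4 check bits, enough to see how the recomputed syndrome points at a flipped bit. (Real SECDED codes add one more parity bit to also detect double errors.)

```c
#include <stdint.h>
#include <stdio.h>

/* Toy single-error-correcting Hamming(12,8) code over one byte: the same
   principle ECC DIMMs apply to 64 data + 8 check bits, scaled down. */

/* Data bits go to 1-based positions that are not powers of two
   (3,5,6,7,9,10,11,12); parity bits fill positions 1,2,4,8. */
uint16_t encode(uint8_t d) {
    uint16_t cw = 0;
    int bit = 0;
    for (int pos = 1; pos <= 12; pos++)
        if (pos & (pos - 1))                   /* not a power of two */
            if ((d >> bit++) & 1)
                cw |= 1u << (pos - 1);
    for (int p = 1; p <= 8; p <<= 1) {         /* set parity positions */
        int parity = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & p) && ((cw >> (pos - 1)) & 1))
                parity ^= 1;
        if (parity)
            cw |= 1u << (p - 1);
    }
    return cw;
}

/* Recompute parities; a nonzero syndrome names the flipped position. */
int syndrome(uint16_t cw) {
    int s = 0;
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & p) && ((cw >> (pos - 1)) & 1))
                parity ^= 1;
        if (parity)
            s |= p;
    }
    return s;
}

int main(void) {
    uint16_t cw = encode(0xA5);
    cw ^= 1u << 6;                   /* corrupt position 7 */
    int s = syndrome(cw);
    printf("syndrome = %d\n", s);    /* prints 7: the corrupted position */
    if (s)
        cw ^= 1u << (s - 1);         /* single-bit correction */
    return 0;
}
```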

Comparisons

  • DRAM vs SRAM. DRAM uses one capacitor and one transistor per cell, requiring constant refreshing to retain data, which increases latency and power for maintenance. SRAM employs a flip-flop circuit (six transistors) per bit, offering faster access and no refresh cycles. Consequently, SRAM is used for CPU caches where speed is critical, while DRAM serves as main memory due to higher density and lower cost per bit.
  • DRAM vs Flash (NAND). DRAM is volatile, losing data immediately when power is removed, but supports byte-level read/write with very low latency (<100 ns). Flash memory is non-volatile, retaining data without power, yet requires block-level erase before write, causing high write latency (~ms) and limited endurance (~10⁴ cycles). Thus, DRAM handles active working sets, while Flash serves as persistent storage.
  • DRAM vs EEPROM. DRAM offers unlimited write cycles and fast random access (typical access time ~50 ns), making it ideal for temporary scratchpad data. EEPROM provides non-volatile storage with byte-addressable writes but has slow write speeds (~10 ms) and limited endurance (~1M cycles). Applications differ: firmware storage (EEPROM) versus system memory (DRAM); they are rarely interchangeable.
  • DRAM vs SDRAM (Synchronous DRAM). Traditional DRAM uses asynchronous interfacing, meaning memory operations are not synchronized to the CPU clock, leading to lower bandwidth and more complex control logic. SDRAM introduces a clock-synchronized pipeline, enabling burst transfers and higher throughput (e.g., PC100 vs older FPM DRAM). Modern systems use DDR SDRAM variants, which double data rate, leaving classic DRAM obsolete for performance computing.
  • DRAM vs HBM (High Bandwidth Memory). Conventional DRAM (e.g., DDR4/5) uses a wide, parallel bus but operates at lower per-pin frequencies and higher power per bit transferred. HBM stacks multiple DRAM dies vertically with through-silicon vias (TSVs) and a 1024-bit interface, achieving massive bandwidth (~1 TB/s). However, HBM requires interposers and costs more, making DRAM dominant for general-purpose systems, while HBM suits GPUs and accelerators.

OS and driver support

DRAM requires explicit initialization by the BIOS/UEFI firmware during the POST (Power-On Self-Test) stage, where memory-initialization code (part of the firmware or chipset support) reads each module's SPD (Serial Presence Detect) EEPROM and configures timings, frequencies, and ranks accordingly. After this, the OS receives a linear address space managed through the MMU (Memory Management Unit), with no separate driver needed for the memory devices themselves.
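
As a small illustration of the SPD path, many Linux systems expose the module's SPD EEPROM as a file through the at24/ee1004 drivers. The path below is an assumption that depends on the bus and driver; byte 2 is the JEDEC key byte identifying the device type.

```c
#include <stdio.h>

/* Minimal sketch: read the SPD key byte (offset 2, "DRAM device type")
   from an SPD EEPROM exposed by the kernel. The path is hypothetical;
   on a real system it depends on the bus number and bound driver. */
int main(void) {
    const char *path = "/sys/bus/i2c/devices/0-0050/eeprom"; /* assumption */
    FILE *f = fopen(path, "rb");
    if (!f) { perror("open SPD"); return 1; }

    unsigned char spd[3];
    if (fread(spd, 1, sizeof spd, f) != sizeof spd) { fclose(f); return 1; }
    fclose(f);

    switch (spd[2]) {            /* JEDEC key byte */
    case 0x0B: puts("DDR3 SDRAM"); break;
    case 0x0C: puts("DDR4 SDRAM"); break;
    case 0x12: puts("DDR5 SDRAM"); break;
    default:   printf("device type 0x%02X\n", spd[2]);
    }
    return 0;
}
```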

Security

DRAM’s physical vulnerabilities to cold boot attacks and Rowhammer (repeated row activations that flip bits in adjacent rows) require hardware protections such as TRR (Target Row Refresh) and ECC DRAM (Error-Correcting Code). At the OS level, isolation mechanisms such as an IOMMU against DMA attacks and kernel memory encryption are needed, because DRAM retains residual bits for seconds after power loss.
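
The access pattern at the core of Rowhammer is compact enough to sketch. The C fragment below alternates cache-flushed reads of two addresses; real attacks require addresses that map to neighboring rows of one bank, which depends on the controller's address mapping, so the buffer offsets here are placeholders, and on TRR- or ECC-protected parts the loop should have no effect.

```c
#include <stdint.h>
#include <stdlib.h>
#include <emmintrin.h>   /* _mm_clflush (x86 SSE2) */

/* Classic hammering loop: keep re-reading two addresses while flushing
   them from the cache, so every read reaches DRAM and re-activates a row. */
void hammer(volatile uint8_t *a, volatile uint8_t *b, long n) {
    for (long i = 0; i < n; i++) {
        (void)*a;                       /* read -> row Activate */
        (void)*b;
        _mm_clflush((const void *)a);   /* evict so the next read misses */
        _mm_clflush((const void *)b);
    }
}

int main(void) {
    uint8_t *buf = malloc(1 << 20);
    if (!buf) return 1;
    /* Offsets are placeholders: no guarantee they hit adjacent rows. */
    hammer(buf, buf + (1 << 13), 1000000L);
    free(buf);
    return 0;
}
```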

Logging

The DRAM controller and chipset log events via MCA (Machine Check Architecture) or RAS (Reliability, Availability, Serviceability) mechanisms, such as corrected/uncorrected ECC errors, refresh timeouts, or overheating. These are passed to the OS system log (e.g., dmesg in Linux or Event Viewer in Windows) through ACPI interfaces or PCIe registers, where a memory RAS driver (such as EDAC on Linux) converts them into human-readable records about failing ranks or cells.
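
On Linux, these RAS events also surface as plain counters through the EDAC subsystem. Here is a minimal sketch reading the corrected-error count of the first memory controller; the sysfs path assumes an EDAC driver is loaded for the platform.

```c
#include <stdio.h>

/* Read the corrected-ECC-error counter exposed by Linux EDAC.
   Path assumes an EDAC driver is bound to memory controller 0. */
int main(void) {
    const char *path = "/sys/devices/system/edac/mc/mc0/ce_count";
    FILE *f = fopen(path, "r");
    if (!f) { perror("open EDAC counter"); return 1; }

    unsigned long ce = 0;
    if (fscanf(f, "%lu", &ce) == 1)
        printf("corrected ECC errors: %lu\n", ce);
    fclose(f);
    return 0;
}
```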

Limitations

Physical limitations of DRAM include the need for constant refresh (every 64 ms per JEDEC standards), which adds latency and power consumption, as well as a limit of a few module slots per channel due to electrical loading on the bus. Additionally, processor addressing limits (e.g., 4 GB in a 32-bit OS without PAE) force the OS to rely on page swapping to disk.

History and development

Historically, DRAM evolved from asynchronous FPM (Fast Page Mode) devices in the 1980s to synchronous SDRAM, then to DDR SDRAM (transferring data on both clock edges), with each generation (DDR2-5) reducing supply voltage (e.g., from 2.5 V to 1.1 V) and increasing prefetch (from 2n to 16n bits). Current trends include 3D stacking (HBM) and the integration of compute logic within memory banks for near-memory computing.