SDRAM (Synchronous Dynamic Random-Access Memory) is a type of computer memory that synchronizes its operation with the system clock, allowing data to be read and written in blocks at predictable times.
This type of memory is used as main RAM in personal computers, laptops, and servers. SDRAM also appears in network routers, printers, and specialized computing devices that need a fast buffer for temporary storage of the code and data processed by the central processing unit.
Typical issues
Typical issues with SDRAM include unavoidable performance degradation due to delays when accessing different banks and the need for periodic capacitor charge regeneration. Additionally, the synchronous mode complicates the physical layout of modules at high frequencies, and single-bit errors can occur due to alpha particles or voltage fluctuations.
How SDRAM works
The operating principle is based on an array of capacitors, each storing one bit as an electrical charge. The capacitors are organized into matrices divided into independent banks. Unlike asynchronous memory, SDRAM contains an internal finite-state machine and a command pipeline, and the control signals (CS, RAS, CAS, WE) are sampled synchronously on the clock edge.

During a read, the memory controller first issues a row activation command for a specific bank; after the tRCD delay, the row opens. A read command for a column then transfers data into the output pipeline, where it appears after a fixed number of clock cycles (the CAS latency). Pipelining allows a new command to be issued before the previous data has been retrieved. Writes proceed similarly, except that the data is placed on the bus at the moment of the command.

Refresh is performed automatically: the memory controller periodically sends Refresh commands, during which the sense amplifiers rewrite entire rows, preserving the charge against capacitor self-discharge. Operating speed is determined by the clock frequency and by burst mode, in which only the first column address is supplied externally and subsequent addresses are generated internally without further commands.
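The read sequence above can be sketched as a small timing model. This is an illustrative simulation, not vendor timing: the tRCD and CL values below are typical example numbers, and the event names are informal labels for the commands.

```python
# Sketch of a burst read: ACTIVATE opens the row, READ may follow after
# tRCD cycles, and the first data word appears CL cycles after READ,
# with one word per cycle for the rest of the burst.

def read_timeline(t_rcd: int, cl: int, burst_len: int) -> list:
    """Return (clock_cycle, event) pairs for one burst read to a closed row."""
    events = [(0, "ACTIVATE row")]            # open the row in the target bank
    events.append((t_rcd, "READ column"))     # earliest legal READ after tRCD
    first_data = t_rcd + cl                   # CAS latency counted from READ
    for i in range(burst_len):                # burst: one word per clock
        events.append((first_data + i, f"data word {i} on DQ"))
    return events

for cycle, event in read_timeline(t_rcd=3, cl=2, burst_len=4):
    print(f"cycle {cycle:2d}: {event}")
```

With tRCD = 3 and CL = 2, the first word appears on cycle 5 and the four-word burst completes on cycle 8, matching the pipelined behavior described above.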
SDRAM functionality
- Clock synchronization. SDRAM synchronizes its operations with an external clock signal from the system bus. This allows data exchange strictly on clock edges, eliminating asynchronous delays waiting for module readiness. The chip includes an internal finite-state machine.
- Command pipelining. The architecture implements a pipeline that separates the stages of command reception, addressing, and data output. While one chunk of data is being transferred to the bus, the next one is being addressed. This radically increases throughput compared to asynchronous DRAM.
- Burst mode. Burst transfer is supported, where a sequence of cells is read with a single address request. The burst length (1, 2, 4, or 8 words, or a full page) is set in the mode register. The first word appears after the CAS latency; the rest follow on each subsequent clock cycle.
- Mode register. A programmable 12-bit register configures CAS latency, burst length, and addressing order (sequential/interleaved). It is written with the Load Mode Register command immediately after initialization; changing the mode requires reprogramming the register.
- Bank structure. The chip is divided into 2 to 4 independent logical banks. While one bank performs precharge, another is available for read/write operations, which hides the row-closing time. The Activate command opens a row in a specific bank.
- CAS latency (CL). The delay between issuing a read command and the appearance of the first data at the output, measured in clock cycles: CL2, CL3, or higher. It is the sum of the cell access time and the path through the output buffer, and is set in the mode register.
- tRCD (RAS-to-CAS delay). The minimum interval from the row activation command (RAS) to the column read/write command (CAS), measured in clock cycles. It depends on the array technology: charging the sense amplifiers takes a fixed time that SDRAM cannot reduce.
- tRP (row precharge time). The time required to close the current row and prepare the bank to open another, including resetting the sense amplifiers and equalizing the bit lines. During tRP, the bank processes no commands; modern controllers use bank interleaving to mask tRP.
- Precharge command. Comes in two forms: single-bank (closes the row in a specified bank) and all-banks. After precharge, the bank enters the idle state; without precharge, a new row cannot be opened in the same bank. An explicit precharge is mandatory after a burst operation.
- Auto refresh. Every row must be refreshed within 64 ms; otherwise, the charge stored in the cell capacitors dissipates. It is executed via the Refresh command; during an auto refresh cycle (typically 8–12 clocks), all banks are blocked. The controller must respect the required interval between refreshes.
- Self refresh. Activated when entering a low-power mode. The chip generates its own internal clock and cyclically regenerates data using an internal timer. It requires only a stable VDD supply and draws minimal current; used in laptops and systems with a suspend mode.
- Input and output buffers. All commands are strobed by the clock edge on the CS, RAS, CAS, and WE inputs. Data is transferred through the DQ pins within sub-nanosecond margins relative to CK. The output buffers are controlled by the DQM (data mask) signal, which can place the bus in a high-impedance state.
- Byte masking (DQM). Each DQM signal controls eight DQ lines. In read mode, DQM takes effect with a two-cycle latency before disabling the output; in write mode, it suppresses the current byte immediately. This allows partial writes without altering adjacent bytes in a word.
- Write with auto precharge. An extension of the standard write: after the burst completes, the controller does not issue a separate precharge command; the chip automatically closes the row after an internal delay. This saves bus commands but requires precise timing relative to tWR.
- Write recovery time (tWR). The minimum interval from the last write data clock to the precharge command, required to fully latch the data into the array cells. If tWR is violated, bits may be lost; the controller must wait out this interval before closing the row.
- Initialization. On power-up, a fixed sequence is performed: a 200 µs delay, precharge of all banks, 8 auto refresh cycles, and a mode register load. Chip operation is not guaranteed without it. The controller waits for the clock and the VDD/VDDQ supplies to stabilize before starting initialization.
- Power-down modes. Two variants: active (banks open, internal clocking running) and precharged (all banks closed). In precharged power-down, consumption is minimal, but exiting takes several clocks for reactivation. The mode is entered by deasserting CKE while a NOP or deselect command is applied.
- Programmable output drive. The mode register allows the output driver strength (weak/strong) to be configured, reducing reflections and crosstalk on long lines; this is especially important in multi-module configurations. The value is selected during board-level signal calibration.
- Bank interleaving. A technique for maximizing bus utilization: the controller interleaves commands to different banks, so while one bank performs precharge, another transfers a burst. This achieves nearly 100% DQ line utilization without idle cycles; it requires at least 2 banks, optimally 4.
- Frequency limitations. The maximum operating frequency (typically 100–166 MHz for classic SDRAM) is limited by the column access time and the capacitance of the global lines. Raising the clock shrinks the data-valid window (tOH); beyond roughly 200 MHz, a transition to DDR's double data rate is required.
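As a rough illustration of how CAS latency, burst length, and bank interleaving combine into throughput, the sketch below estimates burst bandwidth for a hypothetical PC133-class module. The 64-bit bus width and timing values are assumptions chosen for the example, not figures from the text.

```python
# For a single burst to an already-open row, the bus is occupied for
# CL + BL cycles but transfers only BL words; ideal bank interleaving
# hides the CL gap, approaching one word per cycle.

def burst_bandwidth_mb_s(clock_mhz: float, cl: int, burst_len: int,
                         bus_width_bits: int = 64,
                         interleaved: bool = False) -> float:
    cycles = burst_len if interleaved else cl + burst_len
    bytes_moved = burst_len * bus_width_bits // 8
    cycle_ns = 1000.0 / clock_mhz
    return bytes_moved / (cycles * cycle_ns) * 1000.0  # MB/s

print(round(burst_bandwidth_mb_s(133.0, cl=3, burst_len=8)))                    # isolated bursts
print(round(burst_bandwidth_mb_s(133.0, cl=3, burst_len=8, interleaved=True)))  # ideal interleaving
```

The interleaved figure equals the bus's theoretical peak (one 64-bit word per cycle); the gap between the two numbers is exactly the idle time that bank interleaving is meant to hide.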
Comparisons with SDRAM
- SDRAM vs DRAM. SDRAM (synchronous DRAM) is synchronized with the system bus clock, allowing commands to be executed on every clock cycle, unlike asynchronous DRAM which operates without clock binding and requires additional wait cycles. This gives SDRAM higher throughput and predictable latency.
- SDRAM vs SRAM. SDRAM uses one capacitor and one transistor per bit, requiring periodic refresh, which increases latency but reduces cost and power consumption. SRAM, based on six-transistor latch cells, needs no refresh and provides nanosecond access times, but is significantly more expensive and is used for cache memory.
- SDRAM vs DDR SDRAM. DDR SDRAM transfers data twice per clock cycle (on both rising and falling edges), while classic SDRAM transfers only once. This doubles bandwidth at the same bus clock frequency. However, the basic cell structure (capacitor + transistor) and need for refresh are shared.
- SDRAM vs RDRAM (Rambus DRAM). RDRAM uses a narrow but high-frequency serial bus with packetized transfer and low-latency protocol, providing high throughput for streaming data. SDRAM uses a wide parallel bus with a simpler protocol, offering lower latency for random access and better price/performance ratio.
- SDRAM vs Flash (NAND). SDRAM is volatile memory with random access and read times in tens of nanoseconds, ideal for CPU working memory. Flash is non-volatile but has block-structured access, high latencies (microseconds), and limited write cycles, making it suitable for long-term storage rather than direct code execution.
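The SDRAM-vs-DDR comparison can be made concrete with peak-rate arithmetic; the 64-bit DIMM bus width is an assumption for the example.

```python
# Peak transfer rate: DDR moves data on both clock edges, so at the
# same bus clock its peak bandwidth is exactly double that of SDR SDRAM.

def peak_mb_s(clock_mhz: float, bus_width_bits: int = 64, ddr: bool = False) -> float:
    transfers_per_cycle = 2 if ddr else 1
    return clock_mhz * transfers_per_cycle * bus_width_bits / 8

print(peak_mb_s(133.0))            # classic PC133 SDRAM
print(peak_mb_s(133.0, ddr=True))  # DDR at the same 133 MHz bus clock
```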
OS and driver support
SDRAM requires no special drivers from the OS, since it is managed entirely by the hardware memory controller in the chipset or CPU; the OS sees only a standard range of physical memory through the paging mechanism (page tables on x86), and drivers reach SDRAM only indirectly, through the CPU caches (L1/L2) and the TLB, without issuing commands to the memory modules directly.
Security
SDRAM has no built-in security features: data is stored in plaintext, making it vulnerable to cold boot attacks (extracting the residual charge that cells retain briefly after power-off) and to DMA attacks if the OS has not configured an IOMMU. Protection comes only from external mechanisms such as address-space isolation via the MMU, memory encryption (e.g., AMD SEV or Intel TME), and zeroing memory before deallocation.
Logging
SDRAM implements no internal logging, but modern memory controllers can maintain ECC error statistics (when ECC SDRAM is used), transaction counters, and refresh-timing data, exposed through model-specific registers (MSRs) and SMBIOS tables; this data is accessible via edac-utils on Linux or WMI on Windows, but only at the level of hardware diagnostics, not as a log of user operations.
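A minimal sketch of reading those ECC counters from Linux's EDAC sysfs interface. The `ce_count`/`ue_count` file names follow the standard EDAC layout, but they exist only on systems with ECC memory and a loaded EDAC driver, so the helper returns None when they are absent.

```python
# Read corrected/uncorrected ECC error totals for one memory controller
# from the EDAC sysfs tree (/sys/devices/system/edac/mc/mcN/).

from pathlib import Path
from typing import Optional

def ecc_counts(mc: str = "mc0",
               root: str = "/sys/devices/system/edac/mc") -> Optional[dict]:
    base = Path(root) / mc
    if not base.is_dir():
        return None  # no EDAC driver loaded, or no ECC memory
    counts = {}
    for name in ("ce_count", "ue_count"):  # corrected / uncorrected errors
        f = base / name
        counts[name] = int(f.read_text()) if f.is_file() else 0
    return counts

print(ecc_counts())  # None on machines without ECC/EDAC support
```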
Limitations
SDRAM has strict physical limitations: mandatory periodic refresh (every 64 ms under JEDEC standards, with the interval halved at high operating temperatures), which adds latency and reduces throughput under heavy access; a maximum capacity per channel (the memory controller is limited by its address width and by rank/bank counts); and the impossibility of mixing different generations (e.g., DDR3 and DDR4) on the same bus because of their differing voltages and signaling interfaces.
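The refresh cost mentioned above is easy to estimate. The row count, cycles per refresh, and clock frequency below are typical classic-SDRAM values chosen for illustration, not figures from any specific datasheet.

```python
# Fraction of bus cycles consumed by mandatory refresh: all rows must
# be refreshed within the retention interval, and each refresh blocks
# the banks for a handful of clocks.

def refresh_overhead(rows: int, interval_ms: float,
                     cycles_per_refresh: int, clock_mhz: float) -> float:
    cycles_in_interval = interval_ms * 1e-3 * clock_mhz * 1e6
    return rows * cycles_per_refresh / cycles_in_interval

pct = refresh_overhead(rows=8192, interval_ms=64, cycles_per_refresh=10, clock_mhz=133)
print(f"{pct:.2%} of cycles spent refreshing")  # under 1% in this configuration
```

Under these assumptions, refresh consumes well under 1% of bus time; the practical penalty comes less from the raw cycle count than from refreshes landing in the middle of dense access sequences.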
History and development
SDRAM (synchronous DRAM) replaced asynchronous FPM and EDO DRAM in the mid-1990s — the first mass standards were PC66/100 at 66–100 MHz; followed by DDR SDRAM (doubling data on rising and falling edges, 2000), DDR2 (higher frequencies and voltage reduction to 1.8 V), DDR3 (1.5 V, command buffering), DDR4 (1.2 V, bank groups), and DDR5 (2 × 32-bit subchannels, integrated PMIC, and on-die ECC), while the basic memory architecture (capacitor matrix, refresh, row/column multiplexing) has remained unchanged.