SRAM stores each bit in a pair of cross-coupled inverters. Unlike DRAM, it needs no periodic refresh to retain data, but it does require a constant power supply.
SRAM is used as processor cache memory (L1, L2, L3), in microcontroller buffers, high-performance network switches, and FPGAs. Due to its low latency, it is also used in register files and embedded RAMs for real-time systems where access speed is critical.
Typical drawbacks of SRAM include low storage density, which makes large arrays expensive to manufacture. A key limitation is volatility: data is lost when power is removed. Sensitivity to alpha particles and cosmic radiation can also cause soft errors. Active-mode power consumption is higher than that of DRAM at the same capacity.
How SRAM works
The operating principle of an SRAM cell is based on a bistable latch. A typical six-transistor (6T) cell contains two cross-coupled inverters forming a positive-feedback loop; each inverter consists of one n-channel and one p-channel field-effect transistor. In a stable state, if the output of the first inverter is a logic one, it drives the input of the second, which outputs a zero; that zero returns to the input of the first, locking in the one. Information is therefore retained as long as power is supplied, with no refresh needed.

Access to the cell is provided by two additional transistors (pass gates) controlled by a word line. When the word line is asserted, these transistors open, connecting the internal storage nodes to a pair of bit lines. During a read, the precharged bit lines develop a small differential voltage, which a sense amplifier resolves. During a write, one bit line is forced low and the other high, switching the state of the inverters if necessary.

More modern SRAMs use 8T and 10T cells to improve stability and to provide independent read and write ports.
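The latch-and-access behavior described above can be sketched as a small behavioral model in C. This is a software illustration only: names like `sram_cell` are invented for the sketch, and analog effects such as the sense amplifier's small-swing detection are reduced to Boolean logic.

```c
#include <stdbool.h>

/* Behavioral sketch of a 6T SRAM cell: the cross-coupled inverters are
 * modeled as a stored bit plus its complement, the word line as a gate
 * on access, and the bit-line pair as a differential signal. */
typedef struct {
    bool q;      /* storage node Q  (output of inverter 1) */
    bool q_bar;  /* storage node /Q (output of inverter 2) */
} sram_cell;

/* Write: with the word line asserted, the bit-line drivers overpower
 * the latch and force the internal nodes to the new complementary state. */
void cell_write(sram_cell *c, bool word_line, bool bit, bool bit_bar) {
    if (word_line && bit != bit_bar) {  /* valid differential drive */
        c->q = bit;
        c->q_bar = bit_bar;
    }
}

/* Read: with the word line asserted, the cell pulls one precharged bit
 * line low; the sense amplifier resolves the differential swing.
 * If the word line is low, the precharged lines are left untouched. */
bool cell_read(const sram_cell *c, bool word_line, bool *bit, bool *bit_bar) {
    if (word_line) {
        *bit = c->q;
        *bit_bar = c->q_bar;
    }
    return *bit && !*bit_bar;  /* sense-amplifier output */
}
```

Note how `cell_write` with the word line deasserted leaves the latch alone: deselected rows are isolated from the bit lines, which is exactly what the pass gates provide in silicon.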
SRAM functionality
- Memory cell structure. A classic SRAM cell contains six transistors forming two cross-coupled inverters. This bistable structure has two stable states corresponding to a logic 0 or 1. Two additional transistors serve as access switches to the bit lines.
- Static storage principle. Information is held by positive feedback in the cross-coupled inverters. As long as the operating voltage is applied, the transistors compensate for leakage currents, preventing the state from changing spontaneously. This eliminates the need for refresh cycles.
- Data read process. During a read, the bit lines are precharged to Vdd/2. Activating the word line opens the access transistors. A differential amplifier compares the potentials on the complementary bit lines, detecting the small difference created by the internal transistors.
- Data write process. To write, opposite levels (Vdd and GND) are forced onto the bit lines. Activating the word line connects the internal inverters to these lines. Because the external drivers are stronger, the latch switches state against its own feedback.
- Access timing characteristics. SRAM features low access times, typically a few nanoseconds. The delay from address application to stable data output is minimal, and the absence of refresh cycles eliminates unpredictable delays.
- Addressing topology. The memory array is divided into rows (word lines) and columns (bit lines). An address decoder selects a specific row. A column multiplexer directs signals from the selected group of cells to input-output buffers, allowing random access to any address in a single cycle.
- Static power consumption. SRAM static power consumption is very low because in a stable state there is no direct current path through the inverters; only small leakage currents flow. Significant energy is consumed only during address or data transitions. This makes SRAM well suited to battery-powered devices with sleep modes.
- Dynamic power consumption. During each read or write cycle, the parasitic capacitances of the bit lines are recharged, which is the main source of active power consumption. The energy per transition scales with the capacitance and the square of the supply voltage, so active power grows with operating frequency. Precharge and low-swing signaling techniques are used to reduce it.
- Speed and address timing. SRAM is traditionally asynchronous: Chip Select and the address inputs directly initiate a cycle. Synchronous versions (SSRAM) are tied to a clock for pipelining. A key parameter is tAA, the address-to-output access time.
- Low-power modes. In standby, with Chip Select deasserted, the peripheral circuits are shut down and only the cell latches remain active; current drops to a few microamps per chip. A deep-sleep mode with reduced cell voltage requires data restoration afterward.
- Read disturb problem. During a read the cell is at its least stable: the voltage divider formed by the access and pull-down transistors can flip a weak latch. Designers size the transistors (the cell, or beta, ratio) to guarantee adequate static noise margin (SNM).
- Noise margins. SNM defines the maximum parasitic voltage level that will not cause the cell to switch spontaneously. It depends on transistor threshold voltages and supply voltage. Typical values are 100-300 mV for processes down to 28 nm, which is critical for low-power applications.
- Technology scalability. SRAM is a leading driver of semiconductor process technology. The 6T cell occupies up to 50% of a modern microprocessor die (cache). At 3 nm nodes, threshold voltage variability causes problems, requiring write-assist techniques.
- SRAM-based register files. Processor logic uses small banks of ultra-fast SRAM called register files. They use full machine-word widths (32-64 bits) and have minimal decode delay. A multiport read-write architecture enables several simultaneous accesses.
- Use in cache memory. High-speed static SRAM forms L1, L2, and L3 caches in processors. L1 cache runs at core frequency with an access time of 1-2 clock cycles. Discrete SRAM for external cache provides transfer speeds up to 500 MHz with a DDR bus.
- Write margin. A write must flip the addressed cell without disturbing its neighbors. The write margin evaluates the minimum bit-line swing that guarantees the cell flips. At low supply voltage the write margin deteriorates, which sets the lower Vmin limit for array operation.
- Soft error effect. Alpha particles or neutrons can flip a bit in an SRAM cell. The critical charge (Qcrit) of the latch is small because its node capacitances are small. Protection methods include ECC (e.g. Hamming single-error correction) or radiation-hardened DICE cells.
- Internal bank architecture. Large arrays are divided into independent banks with their own decoders and sense amplifiers. This shortens the bit and word lines, lowering parasitic capacitance and speeding up access. Interleaving across banks enables pipelining of alternating requests.
- Yield improvement techniques. Redundant rows and columns are used to compensate for process variations. Electrical testing identifies defective cells, and laser fuses or remapping logic replace them with spare elements without losing chip functionality.
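The addressing topology described in the list above can be illustrated with a minimal decoder sketch. The geometry here is hypothetical (a 256-row array behind an 8:1 column multiplexer); real parts use other shapes, but the row/column split of a flat address works the same way.

```c
#include <stdint.h>

/* Sketch of SRAM address decoding for a hypothetical array of
 * 256 rows with an 8:1 column multiplexer, so each row holds
 * 8 addressable words. Real chip geometries differ. */
#define ROW_BITS 8   /* 256 word lines */
#define COL_BITS 3   /* 8:1 column mux */

typedef struct {
    uint32_t row;    /* word line selected by the row decoder */
    uint32_t col;    /* word group routed by the column mux   */
} sram_addr;

sram_addr decode(uint32_t address) {
    sram_addr a;
    a.col = address & ((1u << COL_BITS) - 1u);               /* low bits  */
    a.row = (address >> COL_BITS) & ((1u << ROW_BITS) - 1u); /* next bits */
    return a;
}
```

Putting the column bits in the low positions means consecutive addresses fall on the same word line, which lets a burst read reuse one row activation.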
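As a concrete instance of the ECC protection mentioned for soft errors, a Hamming(7,4) single-error-correcting code can be sketched as follows. Real SRAM macros typically protect wider words (e.g. SECDED over 32 or 64 bits), but the syndrome mechanism is the same.

```c
#include <stdint.h>

/* Hamming(7,4) single-error correction of the kind used to protect
 * SRAM words against soft errors: 4 data bits become 7 code bits,
 * and any single flipped bit is located by the syndrome. */
uint8_t hamming74_encode(uint8_t data) {          /* data: 4 bits */
    uint8_t d1 = data & 1, d2 = (data >> 1) & 1,
            d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;                    /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;                    /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;                    /* covers positions 4,5,6,7 */
    /* codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4 */
    return (uint8_t)(p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
                     (d2 << 4) | (d3 << 5) | (d4 << 6));
}

uint8_t hamming74_decode(uint8_t cw) {            /* cw: 7 bits */
    uint8_t s = 0;
    for (int p = 0; p < 3; p++) {                 /* recompute parities */
        uint8_t parity = 0;
        for (int pos = 1; pos <= 7; pos++)
            if (pos & (1 << p))
                parity ^= (cw >> (pos - 1)) & 1;
        if (parity) s |= (uint8_t)(1 << p);       /* syndrome bit */
    }
    if (s) cw ^= (uint8_t)(1 << (s - 1));         /* syndrome = flipped position */
    return (uint8_t)(((cw >> 2) & 1) | (((cw >> 4) & 1) << 1) |
                     (((cw >> 5) & 1) << 2) | (((cw >> 6) & 1) << 3));
}
```

A zero syndrome means the stored word is intact; otherwise the syndrome value is exactly the position of the flipped bit, so correction is a single XOR.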
Comparisons
- SRAM vs DRAM. SRAM uses six transistors per bit, providing static storage without needing refresh, yielding nanosecond latencies. DRAM requires one transistor and a capacitor but needs periodic data refresh, increasing latency and dynamic power consumption, though it wins in density.
- SRAM vs Flash. SRAM provides random bytewise access with a read cycle time under 10 ns, supporting an unlimited number of write cycles. Flash memory is orders of magnitude slower (tens of microseconds), requires block erasure before writing, has limited endurance, but retains data without power and offers high density.
- SRAM vs Register File. SRAM is organized as a matrix with row and column addressing using a standard 6T cell, providing capacity up to megabytes with moderate power consumption. A register file is built from D-flip-flops or faster 8T cells with separate read-write ports, offering access under 1 ns but dramatically increasing die area per bit.
- SRAM vs Latch Array. SRAM requires bit line precharging and differential sense amplifiers for reading, which adds a small but fixed delay independent of array size. A latch array saves dynamic power for sequential access but suffers from crowbar current and needs clock synchronization, complicating on-chip routing.
- SRAM vs CAM. SRAM performs access by explicit address in a single cycle, ideal for caches and buffers. CAM compares input data against all entries in parallel in one cycle, using 9-16 transistors per cell; this provides instant search for TLBs and routing tables, but at the cost of drastically higher power and area compared to SRAM.
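The SRAM-vs-CAM distinction above can be made concrete with a small software model: an SRAM lookup is a direct index by address, while a CAM is presented with *data* and matches every stored entry at once. The hardware compare is parallel; it is modeled here as a loop, and the names are invented for the sketch.

```c
#include <stdint.h>

#define ENTRIES 8

/* SRAM access: the address directly selects one entry in O(1). */
uint32_t sram_read(const uint32_t mem[ENTRIES], uint32_t addr) {
    return mem[addr];
}

/* CAM access: the key is compared against all entries (in hardware,
 * simultaneously). Returns the matching index, or -1 on a miss, the
 * way a TLB signals a translation miss. */
int cam_match(const uint32_t entries[ENTRIES], uint32_t key) {
    for (int i = 0; i < ENTRIES; i++)
        if (entries[i] == key)
            return i;
    return -1;
}
```

The per-cell price of the parallel compare (9-16 transistors instead of 6, plus match-line power) is why CAMs are kept small and reserved for lookups that genuinely need single-cycle search.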
OS and driver support
As static random-access memory, SRAM does not require driver initialization at the OS level: the memory controller or MCU accesses it directly at a fixed physical address. Drivers are needed only to configure external SRAM (bus timing, chip select) or to emulate a file system on an SRAM disk, where the OS sees the region as a high-speed block device with manual cache management.
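Driverless access to memory-mapped SRAM then reduces to plain volatile loads and stores. The base address is chip-specific (an assumption of this sketch, not a universal value); here a static buffer stands in for the mapped window so the sketch runs on a host.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of driverless, memory-mapped SRAM access. On a real MCU,
 * SRAM_BASE would be a fixed physical address in the external-bus
 * region (chip-specific). A static buffer stands in for the mapped
 * window so this compiles and runs anywhere; volatile mirrors how
 * the real pointer would be declared to forbid access reordering. */
static uint8_t sram_window[1024];               /* stand-in for the mapping */
#define SRAM_BASE ((volatile uint8_t *)sram_window)

void sram_write8(size_t offset, uint8_t value) {
    SRAM_BASE[offset] = value;                  /* plain store, no driver */
}

uint8_t sram_read8(size_t offset) {
    return SRAM_BASE[offset];                   /* plain load */
}
```

On real hardware the only setup is the memory controller's bus-timing and chip-select configuration; once that is done, the region behaves exactly like this array.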
Security
SRAM is vulnerable to data remanence: cell contents can persist briefly after power-off (longer at low temperature), enabling cold boot attacks if memory is not cleared at startup. Protection is implemented via hardware zeroing of SRAM at startup, built-in voltage-drop detectors, and on-the-fly data encryption, with the key held in an SRAM bank isolated from user code.
Logging
For event logging in SRAM, a circular buffer with atomic write operations is typically used, often filled via DMA. The SRAM region is marked as non-cacheable, and a supercapacitor or battery preserves the logs when main power is lost.
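A minimal single-writer version of such a circular log buffer might look like the sketch below. DMA and cache-attribute configuration are omitted, and `log_record` with its fields is invented for the illustration; the key property kept is that the head index is published only after the record is fully written, so a post-mortem reader never sees a half-written entry.

```c
#include <stdint.h>

/* Sketch of a circular event-log buffer of the kind placed in
 * battery-backed, non-cached SRAM. Fixed-size records; the head
 * index advances only after the payload is complete. */
#define LOG_SLOTS 16

typedef struct {
    uint32_t seq;    /* monotonically increasing sequence number */
    uint32_t event;  /* event code */
} log_record;

typedef struct {
    volatile uint32_t head;      /* next slot to write (never wraps back) */
    log_record slot[LOG_SLOTS];
} log_buffer;

void log_push(log_buffer *lb, uint32_t seq, uint32_t event) {
    uint32_t i = lb->head % LOG_SLOTS;  /* oldest slot is overwritten */
    lb->slot[i].seq = seq;              /* write the payload first ...      */
    lb->slot[i].event = event;
    lb->head = lb->head + 1;            /* ... then publish the slot        */
}
```

Keeping `head` as an ever-increasing counter (taking the modulus only at use) also tells a recovery tool how many records were ever written, not just where the ring currently points.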
Limitations
The main limitations of SRAM are low density, high physical bit size, and volatility, making it unsuitable for long-term storage but ideal for stacks, caches, and critical temporary buffers.
History and development
SRAM evolved from bipolar cells (1960s, IBM) through CMOS structures (1980s, 4T-6T) to synchronous SRAM with pipelined access (1990s), and finally to modern low-voltage architectures with full power-down and retention sleep modes, widely used in SoCs for processor caches and high-performance FPGA buffers.