Synchronous DRAM (SDRAM)
Definition - What does Synchronous DRAM (SDRAM) mean?
Synchronous dynamic random access memory (SDRAM) is dynamic random access memory (DRAM) whose interface is synchronized with the system bus carrying data between the CPU and the memory controller hub. SDRAM waits for a clock signal before responding to control inputs, which lets it operate in lockstep with the bus.
SDRAM preceded double data rate (DDR) SDRAM. The newer DDR interface achieves double the data transfer rate by transferring data on both the rising and falling edges of the clock signal, a technique called double pumping (also dual-pumped or double transition). Three significant characteristics differentiate SDRAM from DDR:
- The main difference is the amount of data transmitted with each cycle, not the speed.
- SDRAM transfers data once per clock cycle; DDR transfers data twice per clock cycle. (Both can run at the same clock frequencies.)
- SDRAM uses one edge of the clock. DDR uses both edges of the clock.
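The edge-counting difference in the list above can be sketched with a minimal model (the function name and parameters here are illustrative assumptions, not part of any real memory API):

```python
def transfers(cycles: int, double_pumped: bool) -> int:
    """Count data transfers over a number of clock cycles.

    SDR (plain SDRAM) transfers on the rising edge only;
    DDR ("double pumped") transfers on both rising and falling edges.
    """
    edges_per_cycle = 2 if double_pumped else 1
    return cycles * edges_per_cycle

cycles = 100  # same clock frequency for both interfaces
print(transfers(cycles, double_pumped=False))  # SDR: 100 transfers
print(transfers(cycles, double_pumped=True))   # DDR: 200 transfers
```

At an identical clock, DDR simply completes twice as many transfers, which is why the difference is the data moved per cycle rather than the clock speed.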
SDRAM comes on 64-bit-wide, 168-pin dual inline memory modules (DIMMs), with access times of 6 to 12 nanoseconds (ns). SDRAM replaced conventional (asynchronous) DRAM and EDO RAM. DRAM is a type of random access memory (RAM) that stores each bit of data in its own tiny storage cell within an integrated circuit. Older EDO RAM topped out at bus speeds of about 66 MHz.
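Given the 64-bit module width above, peak bandwidth follows from simple arithmetic: bytes per transfer times transfers per second. A rough sketch, using the common PC133 clock of 133 MHz as an assumed example:

```python
def peak_bandwidth_mb_s(bus_bits: int, clock_mhz: float,
                        transfers_per_cycle: int) -> float:
    """Peak bandwidth in MB/s: (bus width in bytes) x (clock in MHz)
    x (transfers per clock cycle)."""
    return (bus_bits / 8) * clock_mhz * transfers_per_cycle

# 64-bit SDRAM module at 133 MHz, one transfer per cycle
print(peak_bandwidth_mb_s(64, 133, 1))  # 1064.0 MB/s
# DDR at the same clock doubles the transfers per cycle
print(peak_bandwidth_mb_s(64, 133, 2))  # 2128.0 MB/s
```

This is why DDR at the same clock frequency advertises twice the peak bandwidth of plain SDRAM.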
Techopedia explains Synchronous DRAM (SDRAM)
With older clocked electronic circuits, the transfer rate was one transfer per full cycle of the clock signal (one rise and one fall). The clock signal thus changes state twice per transfer, while the data lines change at most once per transfer. At high bandwidths, this restriction can cause signal integrity problems (data corruption and errors during transmission). SDRAM transfers data once per clock cycle; the newer DDR transfers twice per clock cycle.
SDRAM is an improved DRAM with a synchronous interface that waits for a clock pulse before responding to control inputs. SDRAM supports pipelining: it can accept a new memory access before it has finished processing the previous one, hiding part of each access's latency (the delay between requesting data and receiving it).
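The benefit of pipelining can be sketched with a simplified timing model (the fixed per-access latency and the one-result-per-cycle assumption are illustrative, not taken from any specific SDRAM datasheet):

```python
def total_cycles(n_accesses: int, latency: int, pipelined: bool) -> int:
    """Cycles to complete n memory accesses, each taking `latency` cycles.

    Non-pipelined: each access waits out the full latency before the
    next one starts. Pipelined: a new access is issued every cycle, so
    after the first result arrives, one more completes per cycle.
    """
    if pipelined:
        return latency + (n_accesses - 1)
    return n_accesses * latency

print(total_cycles(8, latency=3, pipelined=False))  # 24 cycles
print(total_cycles(8, latency=3, pipelined=True))   # 10 cycles
```

Each individual access still takes the same latency; pipelining improves throughput by overlapping accesses, not by making any single access faster.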
DRAM technology has been in use since the 1970s. Samsung introduced the first commercial SDRAM chip, the KM48SL2000, in 1993, and by 2000 SDRAM had largely replaced asynchronous DRAM. Early SDRAM was actually slower than burst EDO DRAM because of its extra logic, but SDRAM's support for multiple internal memory banks allowed accesses to be interleaved, which increased bandwidth efficiency.
With the introduction of DDR, SDRAM quickly began to fade from use because DDR was more cost-effective. SDRAM used a 168-pin module while DDR used a 184-pin module. SDRAM modules ran at 3.3 V, while DDR ran at 2.6 V and therefore produced less heat.