Mechanisms of Memory
Since the dawn of digital computation, the machine has known only one language: binary. This strange concoction of language and math has taken many physical forms over the years. In its simplest form, binary represents numerical values using only two symbols, 1 and 0. This makes mathematical operations very easy to perform with switches. It also makes it very easy to store information in a very compact manner.
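To see just how little machinery the two-symbol idea needs, here is a minimal sketch using Python's built-in conversions (the value 42 is just an arbitrary example):

```python
# One value, viewed in binary and back again.
n = 42
print(bin(n))             # '0b101010'
print(int("101010", 2))   # converts the binary string back to 42

# Arithmetic in binary is the same carry-based addition we learned in
# grade school, just with two digits instead of ten:
assert 0b101010 + 0b000111 == 0b110001   # 42 + 7 == 49
```

The same two symbols map directly onto switch positions, which is exactly why the representation stuck.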
Early iterations of data storage employed some very creative thinking and some strange properties of materials.
One of the older (and simpler) methods of storing computer information was on punch cards. As the name suggests, punch cards would have sections punched out to indicate different values. Punch cards allowed for the storage of binary as well as decimal and character values. However, punch cards had an extremely low capacity, occupied a lot of space, and were subject to rapid degradation. For these reasons, punch cards were phased out along with black-and-white TV and drive-in movie theaters.
Digital machines had the potential to view and store data using far less intuitive methods. King of digital memory from the mid-1950s until the mid-to-late 70s was magnetic core memory. By far one of the prettiest things ever made for the computer, this form of memory was constructed from a lattice of tiny ferrite beads threaded on a grid of wires. A bead could be magnetized in either of two directions by passing enough current through the wires that crossed it, and it held that magnetization with no power applied. Reading a bead meant trying to flip it: if the bead changed direction, it induced a brief pulse in a nearby sense wire. Pulse detected = 1, no pulse = 0. The catch was that the read destroyed the stored value, so the controller had to write every 1 back immediately afterward.
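The destructive-read-plus-write-back cycle can be sketched in a few lines. This is a toy model of the logic, not of the electronics; the class and its behavior are a hypothetical simplification for illustration:

```python
class Core:
    """Toy model of one ferrite core's destructive read.

    Reading forces the core toward 0 and watches the sense wire:
    a flip from 1 to 0 induces a pulse (read as 1); no flip means
    the core already held 0.
    """
    def __init__(self, bit=0):
        self.bit = bit

    def read(self):
        pulse = (self.bit == 1)  # flipping 1 -> 0 induces a sense current
        self.bit = 0             # the read wiped the core...
        if pulse:
            self.bit = 1         # ...so the controller rewrites the 1
        return 1 if pulse else 0

c = Core(1)
print(c.read())   # 1
print(c.read())   # still 1, thanks to the write-back
```

Without the write-back step, every read of a 1 would turn it into a 0.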
Even more peculiar was the delay-line memory of the 1950s and 60s. Though occasionally implemented on a large scale, delay-line units were primarily used in smaller computers and desktop calculators, as there is no way they were even remotely reliable… Data was stored as a train of pulsing twists traveling through a long coil of wire. This meant that data could be corrupted if one of your fellow computer scientists slammed the door to the laboratory or dropped his pocket protector near the computer or something. It also meant that the data in the coil had to be constantly read and refreshed every time the twists traveled all the way through the coil, which, as anyone who has ever played with a spring knows, does not take long.
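Logically, that refresh loop is a circular buffer: whatever emerges at the far end of the line is immediately re-amplified and fed back into the start. A minimal sketch (the class is illustrative, not a model of any real machine):

```python
from collections import deque

class DelayLine:
    """Toy delay-line memory: bits circulate through a fixed-length line.

    Each tick, the bit emerging at the read head is re-shaped and fed
    back into the start of the line -- the constant refresh described
    above. Storage only persists as long as the loop keeps running.
    """
    def __init__(self, bits):
        self.line = deque(bits)

    def tick(self):
        bit = self.line.popleft()   # bit arrives at the read head
        self.line.append(bit)       # ...and is immediately rewritten
        return bit

dl = DelayLine([1, 0, 1, 1])
# After one full trip through the line, the contents are unchanged:
readout = [dl.tick() for _ in range(4)]
print(readout)   # [1, 0, 1, 1]
```

Stop calling `tick()` and the data is simply gone, which is the essential fragility of the scheme.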
This issue of constant refreshing may seem like an issue of days past, but the DRAM used in modern computers has to do it too: each bit is stored as charge on a tiny capacitor that leaks away, so every cell must be periodically read and rewritten, stealing time from useful work. (The DDR in DDR memory, by the way, stands for double data rate and refers to transferring data on both the rising and falling edge of each clock cycle, not to the refresh.) Furthermore, on ECC memory modules, only 64 bits of the 72-bit DIMM connection are actually used for data; the remaining 8 carry a Hamming-style error-correcting code, because DRAM bits occasionally flip on their own. So a slice of everything DDR memory does goes to refresh and error correction rather than your data, and it is still so slow relative to the processor that most computers now come with three levels of cache memory whose sole purpose is to keep the data the processor is likely to need close at hand, in the hopes of reducing the processor's trips to RAM.
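Real ECC DIMMs use a single-error-correct, double-error-detect code spanning all 64 data bits, but the core idea scales down to the classic Hamming(7,4) code: 3 parity bits protecting 4 data bits, able to locate and fix any single flipped bit. A sketch of that smaller version:

```python
def hamming74_encode(d):
    # d: four data bits. Codeword layout (1-indexed): p1 p2 d1 p3 d2 d3 d4
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # Recompute the parities; the syndrome is the 1-indexed position
    # of the flipped bit (0 means the codeword checks out).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1   # flip the corrupted bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1              # flip one bit "in transit"
assert hamming74_correct(corrupted) == word
```

The 72-bit version buys the same guarantee for a whole 64-bit word at the cost of those 8 extra wires.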
Even SRAM (the faster and more stable kind of memory used in cache) is not perfect, and it is extremely expensive. A megabyte of data on a RAM stick will run you about one cent, while a megabyte of cache can be as costly as $10. What if there were a better way of making memory, one more similar to those ferrite cores I mentioned earlier? What if this new form of memory could also be read and written at speeds orders of magnitude greater than DDR RAM or SRAM cache? What if this new memory also shared characteristics with human memory and neurons?
Enter: Memristors and Resistive Memory
As silicon-based transistor technology looks to be slowing down, there is something new on the horizon: resistive RAM. The idea is simple: there are materials out there whose electrical properties can be changed by having a voltage applied to them. When the voltage is taken away, these materials are changed and that change can be measured. Here’s the important part: when an equal but opposite voltage is applied, the change is reversed and that reversal can also be measured. Sounds like something we learned about earlier…
The change that takes place in these magic materials is in their resistivity. After the voltage is applied, the extent to which these materials resist a current of electricity changes. This change can be measured, and therefore binary data can be stored.
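The write-with-voltage, read-with-resistance cycle can be sketched as a simple state machine. This is an idealized toy (the resistance values and threshold are invented for illustration, not taken from any real device):

```python
class MemristorCell:
    """Toy resistive memory cell.

    A positive write voltage drives the cell into a low-resistance
    state (logical 1); an equal-but-opposite voltage drives it back
    to high resistance (logical 0). Reading just measures resistance,
    so the state persists with no refresh at all.
    """
    R_LOW, R_HIGH = 100.0, 100_000.0   # ohms; illustrative values only

    def __init__(self):
        self.resistance = self.R_HIGH

    def write(self, voltage):
        self.resistance = self.R_LOW if voltage > 0 else self.R_HIGH

    def read(self):
        # Threshold set geometrically between the two states.
        midpoint = (self.R_LOW * self.R_HIGH) ** 0.5
        return 1 if self.resistance < midpoint else 0

cell = MemristorCell()
cell.write(+1.5)
print(cell.read())   # 1
cell.write(-1.5)     # the equal-but-opposite voltage reverses the change
print(cell.read())   # 0
```

Note what is absent: no refresh loop, no write-back after a read. That is the structural advantage over both the delay line and DRAM.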
Also at play in the coming resistive memory revolution is speed. Every transistor ever made is subject to something called propagation delay: the amount of time required for a signal to traverse the transistor. As transistors get smaller, this delay shrinks. However, transistors cannot get much smaller because of quantum tunneling: a switch is no use if the electrons you are trying to switch on and off can simply slip through the barrier. This is exactly the kind of behavior common among very small transistors.
Because the memristor does not use any kind of transistor, we could see near-speed-of-light propagation delays. This means resistive RAM could be faster than DDR RAM, faster than cache, and someday maybe even faster than the registers inside the CPU.
There is one more interesting aspect here. Memristors also have a tendency to “remember” data long after it has been erased and overwritten. Modern memory does this too, but because the resistance of the memristor changes with use, large arrays of memristors could develop sections with lower resistance due to frequent accessing and overwriting. This behavior is very similar to the human brain: memory that’s accessed a lot tends to be easy to… well… remember.
Resistive RAM looks to be, at the very least, a part of the far-reaching future of computing. One day we might have computers which can not only recall information with near-zero latency, but possibly even know the information we’re looking for before we request it.