[News] SSD in your RAM slot?

Dave

Staff member
http://www.sandisk.com.br/enterprise/ulltradimm-ssd/

Looks like SanDisk is starting something excellent that could eventually cut down on the need for RAM, or on storage latency: an SSD that fits in your RAM slots. This has a number of advantages, but since it's new tech right now, the cost is prohibitive and the latency issues are still present.

Still, when this tech gets more mature it's going to be awesome.
 
Interesting. Slower than RAM, but huge amounts of it, and uses the RAM interface.

As fast as SSDs get, though, RAM will generally still be significantly faster, so I don't see a time soon when machines with only flash and no RAM will become common.
 

Dave

Staff member
Interesting. Slower than RAM, but huge amounts of it, and uses the RAM interface.

As fast as SSDs get, though, RAM will generally still be significantly faster, so I don't see a time soon when machines with only flash and no RAM will become common.
A 400 GB SSD allows significantly more capacity than RAM, though. And I still think SSDs will eventually get as fast as RAM, even if that's some ways out.
 
I remember reading about this. The big advantage is in board real estate. Not having to design in another controller or slot means fewer components on the board, which makes it less expensive to manufacture. It's not as much about speed as it is about convenience.

--Patrick
 
I thought it was about having more than 64GB or 128GB of RAM, and about putting the SSD onto a bus that's closer to, and faster than, all the other buses farther from the processor.
 
As a gamer, I'm mostly concerned about the speed of my RAM rather than having ungodly huge amounts of it. But I also recognize that computers are sometimes used for things other than gaming, and I'm sure there would be benefits to some of those.

Filthy casuals.
 
Now when your computer starts to lag when the memory is full, it will do you no good to reboot.
Ha!

In this case that doesn't matter. The SSD uses the memory channel for communication, but a driver is required, and it appears to the computer as a regular hard drive, not as memory.

You still have to have RAM in the other slots, and on boot the BIOS, having no idea what to do with flash in a memory slot, simply doesn't use those slots as memory.
 
Okay, can someone explain the benefits of this - other than large amount of RAM?
It's not for large amounts of RAM. It's for attaching a SSD via your RAM slot. That means you don't have to have a different slot elsewhere on the motherboard to plug in your drive.

--Patrick
 
The RAM bus is connected directly to the processor, and is much, much faster than any other bus connected to the processor.

It's a way to get data into and out of the SSD that's a lot faster than the typical SSD connection.
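To put rough numbers on that, here's a back-of-the-envelope sketch. The figures are ballpark, round numbers (SATA III maxes out around 600 MB/s of usable bandwidth; a single DDR3-1600 channel peaks around 12,800 MB/s), not official specs for this product, and in practice the flash itself, not the bus, would limit sustained throughput — the real win is latency:

```python
# Back-of-the-envelope bus comparison (round, approximate numbers).
SATA3_MB_S = 600        # usable SATA III bandwidth, roughly
DDR3_1600_MB_S = 12_800  # peak bandwidth of one DDR3-1600 channel, roughly

drive_gb = 400  # the 400 GB ULLtraDIMM mentioned above
drive_mb = drive_gb * 1000

sata_seconds = drive_mb / SATA3_MB_S
ram_bus_seconds = drive_mb / DDR3_1600_MB_S

print(f"Full-drive read over SATA III: ~{sata_seconds / 60:.0f} min")
print(f"Full-drive read over one DDR3 channel: ~{ram_bus_seconds:.0f} s")
print(f"Memory channel is ~{DDR3_1600_MB_S / SATA3_MB_S:.0f}x wider")
```

So even ignoring latency, the memory channel has roughly 20x the headroom of a SATA port.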
 
I think if it catches on, it'll be because of its space-saving characteristics rather than because of any speed benefit. Right now the mSATA and M.2 connections are still competitive because they are only just starting to become saturated and are based on already matured technology.

--Patrick
 
You have to think of the PURPOSE of RAM and "Mass Storage" to begin with - the faster something is, the more expensive it is, and the more "volatile" it is as well. So if you want cheap and lots, it's going to be slow. If you want fast, it's going to be expensive and probably not permanent.

If you think of all the "storage" a modern CPU uses, it (usually) goes like this, from fastest and least, to slowest and most:
  1. CPU Registers - There are usually only a few dozen of these, each one word wide (64 bits on most modern processors, 32 bits on many others, fewer on older/smaller parts), and they sit right on the processor, right IN THE MIDDLE of where the "actual processing" happens. Nothing is faster, short of a value being a literal intermediate in a calculation. A typical processor doesn't even have 1kB worth of registers, but they hold some of the most important information, like where in the running program you currently are. They are swapped out constantly for whatever's happening at the moment, and some low-power processor modes preserve their state, but lots (most?) don't.
  2. CPU Cache - L1 and L2 - Yes, there are two of these. They show up in processor marketing these days (well, TECHIE marketing at least), and they are perfectly named: they cache information close to the processor itself, these days ON the processor. The L2 used to live off-processor on the motherboard (remember the days of CPU slots? The L2 cache was part of the package there), and in some very small processors it doesn't exist at all, but basically this stuff is really fast and keeps data physically close to the processor, so anything being actively worked on is "basically there" as far as the program is concerned. L1 is smaller and is literally the first place checked after the registers; L2 is slightly farther away, but also bigger. A "cache miss" - needing to pull something from main RAM instead of from the cache - is a big deal. It's why, when working on data sets, you want them in contiguous memory, so that as much as possible fits in cache at once.
  3. Main system RAM - This is where everything actively running on the system should live. If you exceed it, things get paged out to the hard drive (or SSD, or whatever). This whole thread is about how it may be replaced with an SSD so that it's non-volatile; it's also what "suspend to RAM" sleeps into. It's "fast" in that it's orders of magnitude faster than a hard drive, but from the processor's perspective, it's actually really slow and "far away."
  4. Mass storage - Hard drives and SSDs - This is where things are stored that you expect to still be there after a shutdown or power cycle. Hard drives are orders of magnitude slower than RAM, but offer orders of magnitude more storage, and the data doesn't go away when you shut off your machine. Running CPU programs straight from a hard disk is impractically slow.
To use an analogy: if the CPU thread doing the processing is a carpenter working on something with their hands, the CPU registers are the bin of stuff at the bench that they reach for and grab from. L1 cache is in the same room. L2 cache is in the next room. System RAM is the truck out front with all of your stuff in it, and the hard drive is the hardware store an hour or two away that takes TIME to get to, but has literally everything.
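The cache-miss point in item 2 can be demonstrated with a toy script. This is just a sketch: Python lists of lists don't store their elements contiguously the way a C array does, so the effect is muted compared to what you'd see in C, but the access-order difference is usually still measurable. Sizes and names here are arbitrary:

```python
import time

# Walk a 2-D array two ways: in the order rows are laid out (cache-friendly)
# versus jumping down columns (a new row object on every access).
N = 1500
grid = [[1] * N for _ in range(N)]

def sum_row_major(g):
    # Visits elements in storage order: few cache misses.
    total = 0
    for row in g:
        for value in row:
            total += value
    return total

def sum_col_major(g):
    # Strides across all the rows on every step: far more cache misses.
    total = 0
    for col in range(len(g[0])):
        for row in range(len(g)):
            total += g[row][col]
    return total

for fn in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    result = fn(grid)
    print(f"{fn.__name__}: {result} in {time.perf_counter() - start:.3f}s")
```

Both functions compute the same sum; the column-major one just pays for worse locality (plus extra indexing work in Python's case).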


TL;DR: Memory hierarchy


I hope that makes sense.


My opinion on the new tech - The extremely excellent SSD Endurance Experiment (original, latest) shows you'll essentially NEVER wear out an SSD on a consumer machine, but RAM gets a LOT more stress than mass storage does, so I'm skeptical about the lifespan of flash on the memory channel. I hope that's something that gets taken into account, or that technology solves the burnout problem. It might! We'll have to see.


And I wrote that all before looking at Wikipedia. I kind of should know this, as this is what I do. Also, for this example, "the cloud" may as well be on Mars unless you're on Gigabit. Then maybe it's only in the clouds themselves.
 
There's also frequently an L3 cache, which is often shared between the different processor cores (while L1 and L2 are local to an individual core). This L3 is still not humongous (usually 2-8MB for non-enterprise CPUs) but since it is still closer to the processor, it is still faster than having to send a request out to main memory, especially when the processor has to shuffle data between the L2 caches of different cores (like an intra-office mail cart). The food pyramid diagram you posted is functional, but I think a block diagram broken into regions would probably be more useful at explaining it (and I couldn't find one).

As for "the Cloud" being on Mars, I hear ya. If the rumors are true, and Da Gubbermint is so all-fired interested in snooping through our data, then forget Comcast/Charter/Whoever...the Gvt should be falling all over themselves to get gigabit (or even 10 gigabit!) service to every household in America. With networks speeds that fast, people would just keep all their personal data on cloud services (and ripe for snooping) rather than having their own in-house storage.

--Patrick
 
There's also frequently an L3 cache, which is often shared between the different processor cores (while L1 and L2 are local to an individual core). This L3 is still not humongous (usually 2-8MB for non-enterprise CPUs) but since it is still closer to the processor, it is still faster than having to send a request out to main memory, especially when the processor has to shuffle data between the L2 caches of different cores (like an intra-office mail cart). The food pyramid diagram you posted is functional, but I think a block diagram broken into regions would probably be more useful at explaining it (and I couldn't find one).
You're right, L3 exists. Forgot about that.
As for "the Cloud" being on Mars, I hear ya. If the rumors are true, and Da Gubbermint is so all-fired interested in snooping through our data, then forget Comcast/Charter/Whoever...the Gvt should be falling all over themselves to get gigabit (or even 10 gigabit!) service to every household in America. With networks speeds that fast, people would just keep all their personal data on cloud services (and ripe for snooping) rather than having their own in-house storage.
On the contrary, faster internet speeds can lead to the opposite. I've become more and more likely to store things on the cloud since my internet access went from fibre (apartment in a city) to DSL (very rural): I might not upload some stuff, but I know that if I travel, I can get at things if they're on the "cloud," whereas if I self-hosted, there's no way. But if I had a fast connection, I'd find a way to self-host my stuff with ownCloud or whatever, since then it's "really mine" and not subject to everything you're talking about.

But I may be an outlier.
 
But I may be an outlier.
You might, but probably just because you might be more willing to handle the technical challenges yourself. Mr. & Mrs. America don't know how and don't wanna invest the time to figure out how to do it themselves, so they will probably just cave and go with some provider.

--Patrick
 