
Intel Announces 3D XPoint Memory

#1

Eriol

This... could be interesting:
http://newsroom.intel.com/community...micron-produce-breakthrough-memory-technology

And their video:


For the people not clicking through or watching: they claim 10x the density of DRAM (they didn't compare it to NAND flash, though the video mentions the new tech is better there too), over 1000x the speed of NAND flash, seemingly approaching DRAM, and non-volatility (it doesn't lose data when the power goes out). They also claim 1000x the endurance of NAND flash. Endurance is less of an issue for mass storage, given the final results of the SSD Endurance Experiment, but it could become one if this replaces system memory.
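A back-of-envelope sketch of why that last point matters; every number below is an illustrative assumption, not a figure from the announcement:

```python
# Rough endurance check if this were used as system RAM.
# All inputs are assumptions for illustration only.

nand_pe_cycles = 3_000                    # assumed planar-MLC NAND write cycles
xpoint_cycles = nand_pe_cycles * 1_000    # the claimed 1000x improvement

writes_per_sec = 1_000                    # assumed rewrite rate of one hot memory cell
seconds = xpoint_cycles / writes_per_sec  # 3_000_000 / 1_000 = 3000 s

print(f"{seconds / 60:.0f} minutes")      # ~50 minutes before that one cell wears out
# Mass storage rarely hammers a single spot like this, but RAM does,
# so wear-leveling would still be essential in a DRAM-replacement role.
```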

If it works as promised, it could eliminate the need for dedicated RAM. And if it's as cheap, it could finally put those spinning platters in the grave alongside floppies, too.

But this is an announcement. So they have it working in the lab, presumably in lab-scale yields on real silicon, but time will tell if this will play out the way they want it to. Could be great news for the future though.


#2

PatrThom

Saw that, but didn't think anyone else here would be interested.
I guess I was wrong.
The difference between volatile and non-volatile storage just keeps getting narrower. Hurry up and converge already!

--Patrick


#3

Bubble181

If it's fast, cheap, small, and non-volatile, there'll be another problem with it... And if not, they'll find one. I can't imagine Intel announcing a technology that'd make half their own products obsolete.
Mind you, a computer built with one big pile of memory/HD/graphics memory/etc. all in one would be neat. I imagine signal distances will mean there'll still be separate memory banks for different tasks, unfortunately.
It'll be fun to see.


#4

Here's Johnny

I am intrigued and wish to subscribe to your newsletter.


#5

Eriol

If it's fast, cheap, small, and non-volatile, there'll be another problem with it... And if not, they'll find one. I can't imagine Intel announcing a technology that'd make half their own products obsolete.
Intel has basically been in the business of making their own products obsolete since... forever. They come up with something bigger and better that makes you not want what they used to make. It's why they've stayed almost exclusively dominant in desktop CPUs (well, not the only reason, but that's a LONG discussion about Wintel and such) for so f'n long. The only time they weren't was in the late 90s, when they got complacent and AMD found a way to make money with better tech. And then within about 5-10 years, when Intel re-focused on "performance, not MHz," they came back to dominance.

They've been one-upping themselves for 40 years. Otherwise they'd still be making the 4004.


#6

GasBandit

Ungh, oh baby.


#7

PatrThom

If RAM and storage become homogeneous and retain information when powered off, the computer as we know it will vastly change. Imagine if you could put your computer to "sleep," pull a box out of it the size of an Atari 2600 cartridge, go to another computer (at work, an Internet café, or whatever), plug in that cartridge, "wake" it up, and then continue blissfully along right where you left off. Applications that work with absolutely enormous datasets (weather simulation, ocean currents, 3D rendering) get all the benefits of being able to access up to 16 million terabytes of virtual RAM without the usual slowdown associated with disk thrashing because there would be no practical difference between RAM and storage. Game consoles would be completely redesigned, but what would really benefit would be mobile devices. There would no longer be the need to make sacrifices for RAM because you could just dynamically repurpose storage as needed.
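A minimal sketch of what "storage addressed like RAM" already looks like today with memory-mapped files (the file name below is hypothetical); byte-addressable non-volatile memory would make the explicit flush step disappear:

```python
# Map a file straight into the address space and treat it as memory.
import mmap
import os

SIZE = 1 << 20  # map 1 MiB of "storage" into the address space

fd = os.open("dataset.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as mem:
    mem[0:5] = b"hello"      # write through the mapping like ordinary memory
    mem.flush()              # today: persist dirty pages; NVM would make this implicit
    print(bytes(mem[0:5]))   # read it back: b'hello'

os.close(fd)
```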

--Patrick


#8

GasBandit



#9

Shakey

I feel like I've been hearing that stuff like this is just around the corner for about 10 years. I'll get excited when it's available.


#10

PatrThom



#11

PatrThom

And you could start seeing storage devices* based on this technology as early as next year.

--Patrick
*Doesn't feel right to call them "drives."


#12

Eriol

And you could start seeing storage devices* based on this technology as early as next year.

--Patrick
*Doesn't feel right to call them "drives."
Is there a /drool option for the forum?

Maybe I'll be upgrading my machine next year. Could be a good time... ;)


#13

PatrThom

Could be a good time... ;)
Yeah. You can finally chew through that 500GB dataset you've always wanted.

--Patrick


#14

fade

500 GB :dumb:

In geophysics we call that "a small test sample".


#15

Eriol

Yeah. You can finally chew through that 500GB dataset you've always wanted.

--Patrick
No joke, the job I do would make use of it. It could make what I program for significantly cheaper, since you may not need ridiculous amounts of RAM.


#16

fade

Same here. In house, we have a 512-core cluster with 72 GB per 12-core node, and we eat every byte on data inversion.
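For scale, a quick tally of that cluster's aggregate RAM against the 500 GB dataset mentioned above (assuming whole 12-core nodes):

```python
# Aggregate RAM of the cluster described above.
cores, cores_per_node, gb_per_node = 512, 12, 72
nodes = cores // cores_per_node       # 42 nodes
total_ram_gb = nodes * gb_per_node    # 3024 GB, about 3 TB total
print(nodes, total_ram_gb)            # a 500 GB dataset is already a sixth of it
```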


#17

PatrThom

--Patrick


#18

Bowielee

Get in my computer!

