Rastor
Jun 2, 2001

SpelledBackwards posted:

Edit: well gently caress, dunno how I didn't see this was already posted yesterday in the thread.

What do you guys make of this, and do you think it has the potential to replace both RAM and solid state storage at the same time?

Intel, Micron debut 3D XPoint storage technology that's 1,000 times faster than current SSDs
It isn't fast enough to replace RAM. Especially with HBM arriving on the scene.

It will replace some solid state storage uses (assuming it can be successfully manufactured), but the first place it will do that is in million dollar enterprise setups; it will be a long time before this is something affordable for the home consumer.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
It fits nicely as a sort of warm cache between RAM and SSDs when you're running something stupidly piggy on RAM like SAP HANA. The use cases they're aiming for are primarily business-analytics scenarios that people traditionally load into gobs of RAM at massive cost (4TB+ RAM nodes are not unheard of in these clusters). Cheaper low-latency access to the data is really darn handy when you're trying to fit 50+ petabytes into costly, low-latency memory, desperately forming some kind of L2 cache while running operations that really mess with data locality. The question to me is pricing, though. It's gotta be more expensive than RAM out of the gate, I'd imagine, so who the heck wants to pay more for what is basically slower RAM? Maybe if you're paying a lot more for extra power & cooling instead of more nodes, or have just run out of space on your mainboards?
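Conceptually it's just another tier in the cache hierarchy. Here's a toy Python sketch of the idea - a small fast tier spilling into a bigger, slower one before falling back to flash. The tier sizes and the load_from_ssd callback are hypothetical illustration, not any real HANA or XPoint API.

code:
# Toy "warm tier" between DRAM and SSD -- purely illustrative.
from collections import OrderedDict

class TieredStore:
    def __init__(self, dram_capacity, warm_capacity):
        self.dram = OrderedDict()   # fastest, smallest tier (DRAM)
        self.warm = OrderedDict()   # bigger, slower tier (the XPoint-style layer)
        self.dram_capacity = dram_capacity
        self.warm_capacity = warm_capacity

    def _evict(self, tier, capacity, lower=None):
        # Push least-recently-used items down a tier (or drop them entirely).
        while len(tier) > capacity:
            key, value = tier.popitem(last=False)
            if lower is not None:
                lower[key] = value

    def get(self, key, load_from_ssd):
        # Serve from DRAM, then the warm tier, then fall back to SSD/flash.
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.warm:
            value = self.warm.pop(key)
        else:
            value = load_from_ssd(key)  # slowest path
        self.dram[key] = value
        self._evict(self.dram, self.dram_capacity, lower=self.warm)
        self._evict(self.warm, self.warm_capacity)
        return value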

Being able to use this in place of RAM for converged memory could make sense for lowering manufacturing costs for tablets. Instead of dedicating the PCB to separate DRAM and flash chips, you can stick with a single chip. But by the time it's affordable enough for consumer use it'll probably be like 2020 or something.

EIDE Van Hagar
Dec 8, 2000

Beep Boop
Tablets and phones are the perfect use case if the price is right.

*Reduced BOM cost from single package replacing NAND and DRAM
*Reduced power usage from not refreshing DRAM and the fact that it's a single package
*No need for the super high-end DRAM speeds because it's a tablet
*Super fast suspend and restore for deeper sleep modes because you don't have to worry about preserving RAM contents

Ak Gara
Jul 29, 2005

That's just the way he rolls.

Don Lapre posted:

What happens if you lower the fan speed? Does it overheat?

With my current H100, lowering/increasing the fan speed has no noticeable effect on temperatures. I'm hoping to go with a waterblock like a Heatkiller IV or EK Supremacy + dual EK-CoolStream XE 360's.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Ak Gara posted:

With my current H100, lowering/increasing the fan speed has no noticeable effect on temperatures. I'm hoping to go with a waterblock like a Heatkiller IV or EK Supremacy + dual EK-CoolStream XE 360's.

What's loud then? Is it overheating? If not, just install quieter fans.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

kwinkles posted:

Tablets and phones are the perfect use case if the price is right.

If they can produce it economically, everything has a good use case for this. Why not blow away the entire SSD & HD market and own it all. Intel could effectively become the only game in town for storage, if it's as good as they say and isn't cost prohibitive.

Ak Gara
Jul 29, 2005

That's just the way he rolls.

Don Lapre posted:

What's loud then? Is it overheating? If not, just install quieter fans.

It hits about 90c with the H100 set to Low, Medium, or High. :psyduck:

Durinia
Sep 26, 2014

The Mad Computer Scientist

Skandranon posted:

If they can produce it economically, everything has a good use case for this. Why not blow away the entire SSD & HD market and own it all. Intel could effectively become the only game in town for storage, if it's as good as they say and isn't cost prohibitive.

There's "economically viable" and "economically dominant".

They've stated that it's clearly built to be "viable", which means it probably lands on a $/bit scale somewhere between flash and DRAM. Some applications don't need the performance that XPoint could bring, and they will always choose the sufficient and cheaper (per bit) solution.

There's a reason that tape drives are still being used today. (Yes, really)

EIDE Van Hagar
Dec 8, 2000

Beep Boop

Skandranon posted:

If they can produce it economically, everything has a good use case for this. Why not blow away the entire SSD & HD market and own it all. Intel could effectively become the only game in town for storage, if it's as good as they say and isn't cost prohibitive.

There is some actual good reporting from The Register here:

http://www.theregister.co.uk/2015/07/29/having_a_looks_at_imtfs_crosspoint/

Notably, they have a couple of slides from a Micron presentation from 2011 that might be the same tech, and a hint that there might be a performance version and a cheaper consumer version.

Toast Museum
Dec 3, 2005

30% Iron Chef
Wasn't there a study recently about how at least some big data tasks could be done nearly as quickly and a lot cheaper with a bunch of SSDs in place of RAM? I wonder how this new stuff would do in that sort of application.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Durinia posted:

There's "economically viable" and "economically dominant".

They've stated that it's clearly built to be "viable", which means it probably lands on a $/bit scale somewhere between flash and DRAM. Some applications don't need the performance that XPoint could bring, and they will always choose the sufficient and cheaper (per bit) solution.

There's a reason that tape drives are still being used today. (Yes, really)

If Intel has effectively turned storage into a problem they can solve with their CPU fabs, and has a 1000x performance improvement, with a technology only they will have patents on and only they can manufacture, they could push Samsung and all other SSD makers out of the market by aggressively pushing the cost of this new tech down. None of us have any idea how much this costs, but if I were Intel, I would be looking to own as much of the storage market as possible, from phones & tablets to consumer drives to high end server drives & specialty devices.

Daviclond
May 20, 2006

Bad post sighted! Firing.

Ak Gara posted:

It hits about 90c with the H100 set to Low, Medium, or High. :psyduck:

You shouldn't be getting temperatures that high, something is really wrong. The fact that changing fan speed doesn't alter the temperatures means that all the resistance to heat transfer is somewhere else in the system (and it shouldn't be). Is it possible you completely ballsed up the TIM application to the CPU? More likely, could your pump be broken?

I'm on mobile at the mo but if you go looking for H100 reviews and look at temperature benchmarks you'll see you should be getting much lower temperatures and there should be a clear inverse relation between fan RPM and CPU temperature.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Dunno what load he's testing at, but if he's doing prime95 it's certainly possible he's getting 90c under unrealistic loads

Ak Gara
Jul 29, 2005

That's just the way he rolls.

Daviclond posted:

You shouldn't be getting temperatures that high, something is really wrong. The fact that changing fan speed doesn't alter the temperatures means that all the resistance to heat transfer is somewhere else in the system (and it shouldn't be). Is it possible you completely ballsed up the TIM application to the CPU? More likely, could your pump be broken?

I'm on mobile at the mo but if you go looking for H100 reviews and look at temperature benchmarks you'll see you should be getting much lower temperatures and there should be a clear inverse relation between fan RPM and CPU temperature.

5GHz 2500K @ 1.4V, but mostly used at 4.8 @ 1.35V. Temps were fine for 2.5-3 years but seem to have been slowly rising. Possibly TIM, possibly coolant evaporating, possibly a 4.5-year-old CPU having higher temps for equal overclocks. I've got some Ceramique 2 and I'm planning on reseating the waterblock tomorrow, but I wanted to clear up some questions I had first (my original question about adding a second radiator to a custom loop only reducing temperatures by a few degrees if you're already at the saturation limit of the waterblock).

Don Lapre posted:

Dunno what load he's testing at, but if he's doing prime95 it's certainly possible he's getting 90c under unrealistic loads

Minecraft. :v:

Durinia
Sep 26, 2014

The Mad Computer Scientist

Skandranon posted:

If Intel has effectively turned storage into a problem they can solve with their CPU fabs, and has a 1000x performance improvement, with a technology only they will have patents on and only they can manufacture...


That's quite an "if" you've got there. Especially the part where this (technology and fab) is co-owned by Micron, who will be releasing their own products based on it.

Skandranon posted:

...they could push Samsung and all other SSD makers out of the market by aggressively pushing the cost of this new tech down. None of us have any idea how much this costs, but if I were Intel, I would be looking to own as much of the storage market as possible, from phones & tablets to consumer drives to high end server drives & specialty devices.

...but for the purposes of a good time, let's allow it. If they have a technology that wholly supplants flash in both density AND cost, then absolutely this is what they will do. Set the price just at a point that it precludes flash and then take over the world.

However, there are a TON of people (companies, researchers, etc.) who have been working on emerging non-volatile memories for decades. If any one of them had gotten to an architecture and maturity where the performance was like this and the cost was 3x flash, they'd have brought it to market already - not waited until it hit 1x the cost of flash.

If you have a better-performing thing (especially 1000x better), you bring it to market as soon as it gets to a reasonable premium, not at per-bit price parity.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

kwinkles posted:

*Super fast suspend and restore for deeper sleep modes because you don't have to worry about preserving RAM contents
I like to think that it's super useful for performance in general because you won't need to worry much about bus latency transferring from DRAM to the CPU either. Not sure if I caught the latency but if it's competitive with future L3 cache speeds then we're looking at mobile CPUs having less need to bump up to processors with L3 cache.

Toast Museum posted:

Wasn't there a study recently about how at least some big data tasks could be done nearly as quickly and a lot cheaper with a bunch of SSDs in place of RAM? I wonder how this new stuff would do in that sort of application.
A lot of currently used large-scale processing frameworks are really bad at data locality optimization (the Hadoop ecosystem is pretty bad at this, with most solutions being hardly impressive last I saw), and SSDs would make sense for cost when you have to transfer everything all over the network anyway. Spark is a bit better, being more memory-hungry and actually thinking in terms of a memory hierarchy rather than the straightforward / naive "just make it work and we'll optimize later" approach that made Hadoop so hacky and crufty. Regardless, if your cluster's performance optimizations are so terrible that dropping from RAM down to SSDs doesn't make much of a difference, something like this would land you somewhere between buying more RAM and going even cheaper with SSDs.

Heck, people have written freakin' shell scripts that out-perform most "Big Data" frameworks on the same data sets on common benchmarks. The overhead of these frameworks is nowhere near trivial. It was years before HDFS got the "feature" to not transfer data over the TCP stack and to just grab it from disk when the data is available local to the node.
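To put numbers behind the overhead point, a lot of those jobs are just a streaming aggregation that a single process can do over a local file with no cluster at all. A rough Python sketch of that (the events.tsv file name and its column layout are made up, not from any real benchmark):

code:
# Toy aggregation over a local file -- the kind of job that often gets a whole
# Hadoop cluster thrown at it. File name and column layout are hypothetical.
from collections import Counter

def aggregate(path="events.tsv"):
    counts = Counter()
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 2:
                counts[fields[1]] += 1  # count events per key in the second column
    return counts.most_common(10)

if __name__ == "__main__":
    for key, n in aggregate():
        print(key, n)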

We'll see if even half of these "big data" projects are still alive by the time this technology makes it to the enterprise market. Lots of places are really frustrated at how little they've gotten for the money they've dumped into it (hint: it's not the technology that's the problem these days unless you're a top tech company - it's probably how dysfunctional you or your engineers are).

japtor
Oct 28, 2005

necrobobsledder posted:

I like to think that it's super useful for performance in general because you won't need to worry much about bus latency transferring from DRAM to the CPU either. Not sure if I caught the latency but if it's competitive with future L3 cache speeds then we're looking at mobile CPUs having less need to bump up to processors with L3 cache.
It's slower than DRAM so that'd be slower than L3 caches right? Course I'm still trying to wrap my head around the idea above, I guess it'd effectively be like a RAM disk as your storage (or device storage as RAM?). One concern I have would be GPU performance since they use shared memory setups, or would having crap always loaded help a bunch? I'm just going off the basic idea that GPUs love bandwidth.

Anyway here's a quick mention of HMC on Xeon Phi I posted about earlier (re: HBM), although it sounds like Intel and Micron have their own name (or version) of it:
http://www.anandtech.com/show/9436/quick-note-intel-knights-landing-xeon-phi-omnipath-100-isc-2015

quote:

Furthermore Knights Landing would also include up to 16GB of on-chip Multi-Channel DRAM (MCDRAM), an ultra-wide stacked memory standard based around Hybrid Memory Cube.
And while looking that up, from the sidebar it looks like the Anandtech guys have been tweeting a bunch about 3DXPoint, and one of them had a meeting with engineers about it.

PC LOAD LETTER
May 23, 2005
WTF?!

BobHoward posted:

Nobody said the gains were ginormous.
It's not that they aren't ginormous, it's that they're invisible. Lots of small gains should still add up to a noticeable difference at least, but where is it? Yes, I know LGA is technically better than a pinned socket from an electrical standpoint, that wasn't in question, but if the gains are so small as to be unnoticeable while costs don't change overall and durability goes down, why should that be counted as an advantage?

BobHoward posted:

There's dozens (maybe even hundreds) of minor things like this where, if taken alone, it's not a huge advantage for Intel, but the fact that Intel is able to do them all adds up to a substantial advantage.
Theorycrafting possible advantages for LGA and disadvantages for pinned sockets isn't really that interesting, since we could go back and forth forever with a "maybe this is a big enough problem/advantage to warrant LGA/pinned sockets". It's not like Intel hasn't done things that haven't panned out before despite the hype, so I'm not sure why you want to give them the benefit of the doubt. I'm sure you remember the nonsense about RDRAM, Netburst, and Itanium.


edit: MEK or Xylene next time, just wear the proper gloves and do it outside or open a window and run a fan cuz' they're nasty\/\/\/\/\/\/

PC LOAD LETTER fucked around with this message at 12:43 on Jul 30, 2015

Ak Gara
Jul 29, 2005

That's just the way he rolls.
Phew! Managed to replace the thermal paste with that Ceramique 2.

Using isopropyl alcohol and some special paper cloth thingie, I very gently rubbed... and rubbed... and rubbed... gently caress me, this poo poo is baked on. I ended up having to use a shaving razor blade to very carefully slice the old paste off the CPU and heatsink (while also using a hoover nozzle), then followed up with the isopropyl.

[edit]
Temps?

10c lower at 3.3 ghz
20c lower at 5.0 ghz

Cinebench 11.5
2.01 single 55c
7.88 multi 74c

Ak Gara fucked around with this message at 12:53 on Jul 30, 2015

Durinia
Sep 26, 2014

The Mad Computer Scientist

japtor posted:

It's slower than DRAM so that'd be slower than L3 caches right? Course I'm still trying to wrap my head around the idea above, I guess it'd effectively be like a RAM disk as your storage (or device storage as RAM?). One concern I have would be GPU performance since they use shared memory setups, or would having crap always loaded help a bunch? I'm just going off the basic idea that GPUs love bandwidth.
GPUs need bandwidth that only bandwidth-optimized DRAM can provide at this point, and they're moving towards specialized TSV DRAM like HBM. XPoint is slower than DRAM, so it wouldn't be able to keep up. There might be an interesting case for smaller iGPUs, but those tend to be pretty cost sensitive, and it will likely be more expensive than DRAM.

japtor posted:

Anyway here's a quick mention of HMC on Xeon Phi I posted about earlier (re: HBM), although it sounds like Intel and Micron have their own name (or version) of it:
http://www.anandtech.com/show/9436/quick-note-intel-knights-landing-xeon-phi-omnipath-100-isc-2015

Yeah, they've confirmed that MCDRAM is HMC, but modified. If you look up the HMC spec from Micron's consortium, the signaling is defined for much longer reach (i.e. across a PCB). For an in-package application like Knights Landing, that would be pretty electrically wasteful, so at the very least I assume they changed the signaling - possibly they added features etc., but that's harder to guess.

EIDE Van Hagar
Dec 8, 2000

Beep Boop
CPU L1-L3 cache will usually be on-die SRAM, so it will be much faster than DRAM. Even in a system where 3D XPoint replaced DRAM, you would still see speed improvements from more caching because SRAM is so fast.

GPUs also need very fast access to a framebuffer when rendering; that's why tiled memory was a thing (trying to avoid a DRAM page walk), but mostly now it's just very, very fast DRAM, and you'll still need very fast memory to get good GPU performance. You would see things like loading screens, where a bunch of stuff is pulled from disk into DRAM, go away though.
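The textbook way to see why the on-die SRAM still matters, whatever sits behind it, is the average-memory-access-time formula. The latencies in this sketch are order-of-magnitude placeholders (nobody outside Intel/Micron knows the real XPoint figure yet), not measurements:

code:
# AMAT = hit_time + miss_rate * miss_penalty, with made-up, order-of-magnitude latencies.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

SRAM_HIT_NS = 1.0  # on-die SRAM cache hit (illustrative)
for backing, penalty_ns in (("DRAM", 100.0), ("hypothetical XPoint", 1000.0)):
    for miss_rate in (0.05, 0.20):
        print(f"{backing:>19} @ {miss_rate:.0%} miss rate: "
              f"{amat(SRAM_HIT_NS, miss_rate, penalty_ns):7.1f} ns average")
# The slower the backing store, the more every point of cache hit rate is worth.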

EIDE Van Hagar
Dec 8, 2000

Beep Boop
In fact, here's an article that details some patents for 3D XPoint use cases. The author of the article thinks it will be faster than DDR3 but slower than DDR4, based on patents for replacing DRAM with XPoint in CPU memory systems and supplementing it with DDR4 or DDR5 in GPU systems where the extra speed of DDR4 or DDR5 is needed.

http://www.dailytech.com/Exclusive+If+Intel+and+Microns+Xpoint+is+3D+Phase+Change+Memory+Boy+Did+They+Patent+It/article37451.htm

Grundulum
Feb 28, 2006
Is "upgrade/change the memory" really a patentable development?

Deathreaper
Mar 27, 2010
Have been running a 3930K at 4.4GHz for 3-4 years and I can't find any reason for workstation / gaming / desktop purposes to upgrade. Are there any significant performance improvements coming from Intel within the next 5 years or so, or can we barely expect 5% IPC improvements per generation? Also - any idea when 10Gb Ethernet will be standard on high-end motherboards?

Yaoi Gagarin
Feb 20, 2014

HalloKitty posted:

I didn't really read too much into it, but suddenly I'm whisked back a few years to the time of the announcement that memristors would be replacing our storage and RAM by.. 2013.

...Whatever happened to the memristor, anyway?

filthychimp
Jan 2, 2006
Damned dirty ape
Same thing that happened to NAND flash in the 90's and early 2000's. They promised huge upgrades over existing technology, but existing technology got better.

japtor
Oct 28, 2005

Deathreaper posted:

Have been running a 3930K at 4.4GHz for 3-4 years and I can't find any reason for workstation / gaming / desktop purposes to upgrade. Are there any significant performance improvements coming from Intel within the next 5 years or so, or can we barely expect 5% IPC improvements per generation? Also - any idea when 10Gb Ethernet will be standard on high-end motherboards?
Purley platform (Skylake Xeons, Lewisburg PCH) sounds like it'll be pretty cool, although who knows when that's coming. Earlier roadmaps were pointing to 2017, but there's a recent rumor that Broadwell Xeons were canned and Purley platform would be pushed up.

No clue about consumer parts though :iiam:

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
I just got an email from Corsair asking if I am ready for the 6th Gen Core processors. I guess they are jumping the gun a bit.

EoRaptor
Sep 13, 2003

by Fluffdaddy

VostokProgram posted:

...Whatever happened to the memristor, anyway?

Turns out making them reliable over long periods is hard, especially at the feature sizes modern manufacturing methods use.

Xpoint is a type of memristor, though, so we are finally getting there. Don't hold your breath for logic gates built with them, though.

Durinia
Sep 26, 2014

The Mad Computer Scientist

EoRaptor posted:

Turns out making them reliable over long periods is hard, especially at the feature sizes modern manufacturing methods use.

Xpoint is a type of memristor, though, so we are finally getting there. Don't hold your breath for logic gates built with them, though.

Especially for a company that has no existing fab infrastructure and a long history of research projects that never make it to production.

And yeah - HP announced it as a discovery, but it was a long ways out. Intel/Micron already have fabs capable of producing this stuff in volume.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

mayodreams posted:

I just got an email from Corsair asking if I am ready for the 6th Gen Core processors. I guess they are jumping the gun a bit.

Only by about 5 days, why? All of the peripheral guys are expected to start selling motherboards / DDR4 kits / everything you need to build an i7-6xxx machine next week, and then the i7-6700K and i5-6600K should be available within 30 days from what I've been seeing.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Twerk from Home posted:

Only by about 5 days, why? All of the peripheral guys are expected to start selling motherboards / DDR4 kits / everything you need to build an i7-6xxx machine next week, and then the i7-6700K and i5-6600K should be available within 30 days from what I've been seeing.

I mean it's not like they broke the street date or anything, but there has been no official date for Skylake that I am aware of. I just thought it was interesting.

NihilismNow
Aug 31, 2003

Deathreaper posted:

Also - any idea when 10Gb Ethernet will be standard on high-end motherboards?

Have you looked at 10GbE switchgear? $100+ per port still. Not many consumers are interested in $800 switches.
What would you even use it for in a home environment besides maybe iSCSI for your VM lab, and that is not really a huge market.
In my uninformed opinion it will be quite a few years (5?) before 10Gb Ethernet comes standard on motherboards; for now it is an expensive feature few consumers care about.

doomisland
Oct 5, 2004

It may be that consumers just get 2.5G/5G Ethernet and not 10G. Though those are meant more for wireless access points, maybe the cost point makes sense.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
IDF SF is the likely street date

Hell, if XPoint is as fast as DDR3, holy poo poo that would be great

Storage class memory destroying the memory mountain

Deathreaper
Mar 27, 2010

NihilismNow posted:

Have you looked at 10GbE switchgear? $100+ per port still. Not many consumers are interested in $800 switches.
What would you even use it for in a home environment besides maybe iSCSI for your VM lab, and that is not really a huge market.
In my uninformed opinion it will be quite a few years (5?) before 10Gb Ethernet comes standard on motherboards; for now it is an expensive feature few consumers care about.

Not even that sophisticated... It's just moving large files (1TB+) around the network, which takes two hours or more. Ehhh... maybe in the next 4-5 years.
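For reference, the raw line-rate math behind that "two hours or more" (ignoring protocol overhead and disk speed) works out like this:

code:
# Back-of-the-envelope transfer times at raw line rate (no protocol overhead, no disk limits).
def transfer_hours(size_tb, link_gbps):
    bits = size_tb * 1e12 * 8           # terabytes -> bits
    return bits / (link_gbps * 1e9) / 3600

for gbps in (1, 2.5, 5, 10):
    print(f"{gbps:>4} Gb/s: {transfer_hours(1, gbps):.2f} h per TB")
# 1 Gb/s comes out to ~2.2 hours per TB; 10 Gb/s drops that to roughly 13 minutes.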

JawnV6
Jul 4, 2004

So hot ...
"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." - Wayne Gretzky

japtor
Oct 28, 2005

Deathreaper posted:

Not even that sophisticated... It's just moving large files (1TB+) around the network, which takes two hours or more. Ehhh... maybe in the next 4-5 years.
Thunderbolt networking! Course that'd require having TB ports and computers that are close enough for a TB cable, all of which is expensive right now (really, really expensive for a long optical TB cable). Hopefully TB3 works out.

Otherwise I saw something saying Purley would have native 10Gb support, albeit not necessarily standard, like lower end boards still wouldn't have it. And of course that'd require going with a pricier board and CPU.

Sooo until all that stuff becomes affordable, how easy/hard is it to aggregate links nowadays? Both in whatever OS and network hardware if there's anything there.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

PC LOAD LETTER posted:

It's not that they aren't ginormous, it's that they're invisible. Lots of small gains should still add up to a noticeable difference at least, but where is it? Yes, I know LGA is technically better than a pinned socket from an electrical standpoint, that wasn't in question, but if the gains are so small as to be unnoticeable while costs don't change overall and durability goes down, why should that be counted as an advantage?

Theorycrafting possible advantages for LGA and disadvantages for pinned sockets isn't really that interesting, since we could go back and forth forever with a "maybe this is a big enough problem/advantage to warrant LGA/pinned sockets". It's not like Intel hasn't done things that haven't panned out before despite the hype, so I'm not sure why you want to give them the benefit of the doubt. I'm sure you remember the nonsense about RDRAM, Netburst, and Itanium.

You say there isn't a noticeable gain. How can you tell? Have you stolen some unpackaged die from Intel, sorted them into equal-performance/leakage bins, assembled members of the same bin into LGA and PGA packages, and done a comprehensive comparison? When you have only one of the two options to evaluate it's a really uninteresting and misleading tautology that the difference isn't "noticeable".

Speaking of data you don't have, on what basis do you claim durability has gone down? How do you know costs haven't changed? I don't have any hard data, but there are at least plausible reasons why LGA ought to be cheaper. (CPU package is obviously cheaper to build, and the socket is probably a wash.)

Why do you think I'm merely theorycrafting? I have worked at a fabless semi company, and I've participated in a project where the difference between the low-end and high-end variant of the same chip was literally just a cheaper package that restricted the low-end version's performance by delivering power to the CPU core section of the chip with much worse IR droop. I am giving Intel the "benefit of the doubt" here because frankly it's ludicrous not to, in this case. (LGA, RDRAM, Netburst, Itanium: one of these is not like the others.)

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

kwinkles posted:

In fact, here's an article that details some patents for 3D XPoint use cases. The author of the article thinks it will be faster than DDR3 but slower than DDR4, based on patents for replacing DRAM with XPoint in CPU memory systems and supplementing it with DDR4 or DDR5 in GPU systems where the extra speed of DDR4 or DDR5 is needed.

http://www.dailytech.com/Exclusive+If+Intel+and+Microns+Xpoint+is+3D+Phase+Change+Memory+Boy+Did+They+Patent+It/article37451.htm

I'm afraid that's not a good source; Dailytech is generally clickbait garbage, and this specific author, Jason Mick, is one of its worst lazy idiots. In this case he didn't bother watching the Q&A section of the Intel/Micron press conference. Someone asked them if it was phase change memory (an obvious question, given that Intel and Micron have in fact been researching PCM for years). They said no.

Mick also puts waaaaaay too much weight on the fact that patents have been filed and they say things. The one that struck me here was claiming that a patent's mention of DDR5 absolutely for sure means DDR5 is coming, guys!!! In the real world, the smart money is on DDR4 being the very last DDRn memory interface. DDRn is pretty much clapped out; it can't be pushed much further. We need a new interface technology -- like HBM, or HMC.
