|
SpelledBackwards posted:Edit: well gently caress, dunno how I didn't see this was already posted yesterday in the thread. It will replace some solid state storage uses (assuming it can be successfully manufactured), but the first place it will do that is in million dollar enterprise setups; it will be a long time before this is something affordable for the home consumer.
|
# ? Jul 29, 2015 15:45 |
|
It fits nicely as a sort of warm cache between RAM and SSDs when you're using something stupid piggy on RAM like SAP HANA. The use cases they're aiming for are primarily business-analytics scenarios that people traditionally load into gobs of RAM at massive cost (4TB+ RAM nodes are not unheard of in these clusters). Cheaper low-latency access to the data is really darn handy when you're trying to fit 50+ petabytes into costly low-latency memory, desperately forming some kind of L2 cache while you're doing operations that really mess with data locality. The question for me is pricing, though. It's gotta be more expensive than RAM out of the gate, I'd imagine, so who the heck wants to pay more for what is basically slower RAM? Maybe if you're paying a lot more for extra power & cooling instead of more nodes, or have just run out of space on your mainboards? Being able to use this in place of RAM for converged memory could make sense for lowering manufacturing costs for tablets. Instead of dedicating PCB space to separate DRAM and flash chips, you can stick with a single chip. But by the time it's affordable enough for consumer use it'll probably be like 2020 or something.
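The access pattern I mean is roughly this (toy sketch; the tier names and the whole `TieredStore` API are invented for illustration, nothing like HANA's actual internals):

```python
# Toy sketch of a "warm" tier sitting between RAM and SSD.
# All names here are made up for illustration.
class TieredStore:
    def __init__(self):
        self.ram = {}   # hot: small, fastest
        self.warm = {}  # xpoint-style: bigger, still low latency
        self.ssd = {}   # cold: biggest, slowest

    def get(self, key):
        # Check tiers fastest-first, like any cache hierarchy.
        for tier in (self.ram, self.warm, self.ssd):
            if key in tier:
                return tier[key]
        return None

    def put_cold(self, key, value):
        self.ssd[key] = value

    def promote(self, key):
        """Pull a recently-hit key up into the warm tier."""
        if key in self.ssd:
            self.warm[key] = self.ssd[key]

store = TieredStore()
store.put_cold("row42", "analytics blob")
store.promote("row42")
print(store.get("row42"))  # now served from the warm tier
```

The win is that the warm tier can be huge (cheaper per bit than DRAM) while still being close enough in latency that a warm hit doesn't wreck your query times.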
|
# ? Jul 29, 2015 16:26 |
|
Tablets and phones are the perfect use case if the price is right.
* Reduced BOM cost from a single package replacing NAND and DRAM
* Reduced power usage from not refreshing DRAM and from it being a single package
* No need for super high-end DRAM speeds because it's a tablet
* Super fast suspend and restore for deeper sleep modes because you don't have to worry about preserving RAM contents
|
# ? Jul 29, 2015 16:39 |
|
Don Lapre posted:What happens if you lower the fan speed? Does it overheat? With my current H100, lowering/increasing the fan speed has no noticeable effect on temperatures. I'm hoping to go with a waterblock like a Heatkiller IV or EK Supremacy + dual EK-CoolStream XE 360's.
|
# ? Jul 29, 2015 16:45 |
|
Ak Gara posted:With my current H100, lowering/increasing the fan speed has no noticeable effect on temperatures. I'm hoping to go with a waterblock like a Heatkiller IV or EK Supremacy + dual EK-CoolStream XE 360's. What's loud then? Is it overheating? If not, just install quieter fans.
|
# ? Jul 29, 2015 16:59 |
|
kwinkles posted:Tablets and phones are the perfect use case if the price is right. If they can produce it economically, everything has a good use case for this. Why not blow away the entire SSD & HD market and own it all. Intel could effectively become the only game in town for storage, if it's as good as they say and isn't cost prohibitive.
|
# ? Jul 29, 2015 17:32 |
|
Don Lapre posted:Whats loud then? Is it overheating? If not just install quieter fans. It hits about 90c with the H100 set to Low, Medium, or High.
|
# ? Jul 29, 2015 20:18 |
|
Skandranon posted:If they can produce it economically, everything has a good use case for this. Why not blow away the entire SSD & HD market and own it all. Intel could effectively become the only game in town for storage, if it's as good as they say and isn't cost prohibitive. There's "economically viable" and "economically dominant". They've stated that it's clearly built to be "viable", which means it probably lands on a $/bit scale somewhere between flash and DRAM. Some applications don't need the performance that XPoint could bring, and they will always choose the sufficient and cheaper (per bit) solution. There's a reason that tape drives are still being used today. (Yes, really)
|
# ? Jul 29, 2015 20:54 |
|
Skandranon posted:If they can produce it economically, everything has a good use case for this. Why not blow away the entire SSD & HD market and own it all. Intel could effectively become the only game in town for storage, if it's as good as they say and isn't cost prohibitive. There is some actually good reporting from The Register here: http://www.theregister.co.uk/2015/07/29/having_a_looks_at_imtfs_crosspoint/ Notably, they have a couple of slides from a Micron presentation from 2011 that might be the same tech, and a hint that there might be a performance version and a cheaper consumer version.
|
# ? Jul 29, 2015 20:55 |
|
Wasn't there a study recently about how at least some big data tasks could be done nearly as quickly and a lot cheaper with a bunch of SSDs in place of RAM? I wonder how this new stuff would do in that sort of application.
|
# ? Jul 29, 2015 21:00 |
|
Durinia posted:There's "economically viable" and "economically dominant". If Intel has effectively turned storage into a problem they can solve with their CPU fabs, and have 1000x performance improvement, with a technology only they will have patents on and only they could manufacture, they could push Samsung and all other SSD makers out of the market by aggressively pushing the cost of this new tech down. None of us have any idea how much this costs, but if I were Intel, I would be looking to own as much of the storage market as possible, from phones & tablets to consumer drives to high end server drives & specialty devices.
|
# ? Jul 29, 2015 21:10 |
|
Ak Gara posted:It hits about 90c with the H100 set to Low, Medium, or High. You shouldn't be getting temperatures that high, something is really wrong. The fact that changing fan speed doesn't alter the temperatures means that all the resistance to heat transfer is somewhere else in the system (and it shouldn't be). Is it possible you completely ballsed up the TIM application to the CPU? More likely, could your pump be broken? I'm on mobile at the mo but if you go looking for H100 reviews and look at temperature benchmarks you'll see you should be getting much lower temperatures and there should be a clear inverse relation between fan RPM and CPU temperature.
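Back-of-envelope on why those numbers point away from the fans (every figure below is assumed for illustration, not measured):

```python
def implied_resistance(t_cpu_c, t_ambient_c, power_w):
    """Back out total CPU-to-air thermal resistance (degC per watt)
    from a steady-state temperature reading."""
    return (t_cpu_c - t_ambient_c) / power_w

# ~90c reading, guessing ~150 W package power and ~25c ambient:
r_total = implied_resistance(90, 25, 150)
print(round(r_total, 2))  # ~0.43 degC/W
```

A healthy AIO loop should come in well under that, so a big chunk of fixed resistance (bad TIM contact, or a dead pump not moving heat to the radiator at all) is sitting in the path; since that fixed chunk dominates, the small resistance change you get from spinning the fans faster barely registers, which matches the symptom exactly.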
|
# ? Jul 29, 2015 22:21 |
|
Dunno what load he's testing at; if he's doing Prime95 it's certainly possible he's getting 90c under unrealistic loads
|
# ? Jul 29, 2015 22:34 |
|
Daviclond posted:You shouldn't be getting temperatures that high, something is really wrong. The fact that changing fan speed doesn't alter the temperatures means that all the resistance to heat transfer is somewhere else in the system (and it shouldn't be). Is it possible you completely ballsed up the TIM application to the CPU? More likely, could your pump be broken? 5GHz 2500k @ 1.4v, but mostly used at 4.8 @ 1.35v. Temps were fine for over 2.5-3 years but seem to have been slowly rising. Possibly TIM, possibly coolant evaporating, possibly a 4.5 year old CPU having higher temps for equal overclocks. I've got some Ceramique 2 and I'm planning on reseating the waterblock tomorrow, but wanted to clear up some questions I had first (my original question about adding a second radiator to a custom loop only reducing temperatures by a few degrees if already at the saturation limit of the waterblock). Don Lapre posted:Dunno what load he's testing at; if he's doing Prime95 it's certainly possible he's getting 90c under unrealistic loads Minecraft.
|
# ? Jul 29, 2015 22:52 |
|
Skandranon posted:If Intel has effectively turned storage into a problem they can solve with their CPU fabs, and have 1000x performance improvement, with a technology only they will have patents on and only they could manufacture... That's quite an "if" you've got there. Especially the part where this (technology and fab) is co-owned by Micron, who will be releasing their own products based on it. Skandranon posted:...they could push Samsung and all other SSD makers out of the market by aggressively pushing the cost of this new tech down. None of us have any idea how much this costs, but if I were Intel, I would be looking to own as much of the storage market as possible, from phones & tablets to consumer drives to high end server drives & specialty devices. ...but for the purposes of a good time, let's allow it. If they have a technology that wholly supplants flash in both density AND cost, then absolutely this is what they will do. Set the price just at a point that precludes flash and then take over the world. However, there are a TON of people (companies, researchers, etc.) who have been working on emerging non-volatile memories for decades. If any one of them had gotten to an architecture and maturity where the performance was like this and the cost level was 3x flash, they'd have brought it to market, not waited until they were at 1x of flash. If you have a better performing thing (especially 1000x), you bring it to market when it gets to be a reasonable premium, not at price parity per bit.
|
# ? Jul 30, 2015 03:39 |
|
kwinkles posted:*Super fast suspend and restore for deeper sleep modes because you don't have to worry about preserving RAM contents Toast Museum posted:Wasn't there a study recently about how at least some big data tasks could be done nearly as quickly and a lot cheaper with a bunch of SSDs in place of RAM? I wonder how this new stuff would do in that sort of application. Heck, people have written freakin' shell scripts that out-perform most "Big Data" frameworks on the same data sets on common benchmarks. The overhead of these frameworks is nowhere near trivial. It was years before HDFS got the "feature" of not transferring data over the TCP stack and just grabbing it from disk when the data is available locally on the node. We'll see if even half of these "big data" projects are still alive by the time this technology makes it to the enterprise market. Lots of places are really frustrated at how little they've gotten for the money dumped into it (hint: it's not the technology that's the problem these days unless you're a top tech company - it's probably how dysfunctional you or your engineers are).
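For the curious: what those shell pipelines boil down to is a single streaming pass over the data. A toy Python equivalent of that benchmark shape (purely illustrative, not any specific framework's workload):

```python
from collections import Counter

def word_counts(lines):
    """One streaming pass, no framework in sight: no JVM spin-up,
    no HDFS round-trips, no shuffle phase. For datasets that fit on
    one box, this overhead-free path is hard to beat."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

sample = ["big data big hype", "big overhead"]
print(word_counts(sample)["big"])  # 3
```

Swap the list for a file handle and you have the whole "shell script beats the cluster" trick: the data streams through once at disk speed.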
|
# ? Jul 30, 2015 04:18 |
|
necrobobsledder posted:I like to think that it's super useful for performance in general because you won't need to worry much about bus latency transferring from DRAM to the CPU either. Not sure if I caught the latency but if it's competitive with future L3 cache speeds then we're looking at mobile CPUs having less need to bump up to processors with L3 cache. Anyway here's a quick mention of HMC on Xeon Phi I posted about earlier (re: HBM), although it sounds like Intel and Micron have their own name (or version) of it: http://www.anandtech.com/show/9436/quick-note-intel-knights-landing-xeon-phi-omnipath-100-isc-2015 quote:Furthermore Knights Landing would also include up to 16GB of on-chip Multi-Channel DRAM (MCDRAM), an ultra-wide stacked memory standard based around Hybrid Memory Cube.
|
# ? Jul 30, 2015 05:26 |
|
BobHoward posted:Nobody said the gains were ginormous. BobHoward posted:There's dozens (maybe even hundreds) of minor things like this where, if taken alone, it's not a huge advantage for Intel, but the fact that Intel is able to do them all adds up to a substantial advantage. edit: MEK or Xylene next time, just wear the proper gloves and do it outside or open a window and run a fan cuz' they're nasty\/\/\/\/\/\/ PC LOAD LETTER fucked around with this message at 12:43 on Jul 30, 2015 |
# ? Jul 30, 2015 09:25 |
|
Phew! Managed to replace the thermal paste with that Ceramique 2. Using isopropyl alcohol and some special paper cloth thingie, I very gently rubbed...and rubbed...and rubbed... gently caress me this poo poo is baked on. I ended up having to use a shaving razor blade to very carefully slice the old paste off the CPU and heatsink (while also using a hoover nozzle), then followed up with the isopropyl.
[edit] Temps?
10c lower at 3.3 GHz
20c lower at 5.0 GHz
Cinebench 11.5: 2.01 single @ 55c, 7.88 multi @ 74c
Ak Gara fucked around with this message at 12:53 on Jul 30, 2015 |
# ? Jul 30, 2015 12:01 |
|
japtor posted:It's slower than DRAM so that'd be slower than L3 caches right? Course I'm still trying to wrap my head around the idea above, I guess it'd effectively be like a RAM disk as your storage (or device storage as RAM?). One concern I have would be GPU performance since they use shared memory setups, or would having crap always loaded help a bunch? I'm just going off the basic idea that GPUs love bandwidth. japtor posted:Anyway here's a quick mention of HMC on Xeon Phi I posted about earlier (re: HBM), although it sounds like Intel and Micron have their own name (or version) of it: Yeah, they've confirmed that MCDRAM is HMC but modified. If you look up the HMC spec from Micron's consortium, the signaling is defined as much longer reach (i.e. across a PCB). For an in-package application like Knight's Landing, that would be pretty electrically wasteful, so at the very least I assume they changed the signaling - possibly they added features etc., but that's harder to guess.
|
# ? Jul 30, 2015 18:15 |
|
CPU L1-L3 cache will usually be on-die SRAM, so it will be much faster than DRAM. Even in a system where 3D XPoint replaced DRAM you would still see speed improvements from more caching because SRAM is so fast. GPUs also need very fast access to a framebuffer when rendering; that's why tiled memory was a thing (trying to avoid a DRAM page walk), but mostly now it's just very, very fast DRAM, and you'll still need very fast memory to get good GPU performance. You would see things like loading screens, where a bunch of stuff is pulled from disk into DRAM, go away though.
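The caching point is just the standard average-access-time arithmetic. Sketch below; the hit rates and latencies are rough illustrative guesses, not measurements of any real part:

```python
def amat(levels):
    """Average memory access time for a cache hierarchy.

    levels: (hit_rate, latency_ns) tuples ordered from the fastest
    cache down to the backing store (which has hit_rate 1.0).
    """
    total_ns, miss_prob = 0.0, 1.0
    for hit_rate, latency_ns in levels:
        total_ns += miss_prob * hit_rate * latency_ns
        miss_prob *= 1.0 - hit_rate
    return total_ns

# Same SRAM caches in front of DRAM vs. a slower xpoint-style store:
dram_backed   = [(0.95, 1.0), (0.90, 10.0), (1.0, 80.0)]
xpoint_backed = [(0.95, 1.0), (0.90, 10.0), (1.0, 500.0)]
print(round(amat(dram_backed), 2), round(amat(xpoint_backed), 2))  # 1.8 3.9
```

Even with an assumed ~6x slower backing store, the SRAM levels soak up almost all accesses, so the average barely doubles; that's why caching still pays off no matter what sits at the bottom.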
|
# ? Jul 31, 2015 00:43 |
|
In fact, here's an article that details some patents for 3D XPoint use cases. The author of the article thinks it will be faster than DDR3 but slower than DDR4, based on patents for replacing DRAM with XPoint in CPU memory systems and supplementing it with DDR4 or DDR5 in GPU systems where the extra speed of DDR4 or DDR5 is needed. http://www.dailytech.com/Exclusive+If+Intel+and+Microns+Xpoint+is+3D+Phase+Change+Memory+Boy+Did+They+Patent+It/article37451.htm
|
# ? Jul 31, 2015 01:39 |
|
Is "upgrade/change the memory" really a patentable development?
|
# ? Jul 31, 2015 04:45 |
|
Have been running a 3930K at 4.4GHz for 3-4 years and I can't find any reason, for workstation / gaming / desktop purposes, to upgrade. Are there any significant performance improvements coming from Intel within the next 5 years or so, or can we barely expect 5% IPC improvements per generation? Also - any idea when 10Gb Ethernet will be standard on high end motherboards?
|
# ? Jul 31, 2015 04:59 |
|
HalloKitty posted:I didn't really read too much into it, but suddenly I'm whisked back a few years to the time of the announcement that memristors would be replacing our storage and RAM by.. 2013. ...Whatever happened to the memristor, anyway?
|
# ? Jul 31, 2015 05:08 |
|
Same thing that happened to NAND flash in the 90's and early 2000's. They promised huge upgrades over existing technology, but existing technology got better.
|
# ? Jul 31, 2015 07:07 |
|
Deathreaper posted:Have been running a 3930K at 4.4GHz for 3-4 years and I can't find any reason for workstation / gaming / desktop purposes to upgrade. Are there any significant performance improvements for Intel within the next 5 years or so, or can we barely expect 5% IPC improvements per generation? Also - any idea when 10Gb Ethernet will be standard on high end motherboards? No clue about consumer parts though
|
# ? Jul 31, 2015 10:07 |
|
I just got an email from Corsair asking if I am ready for the 6th Gen Core processors. I guess they are jumping the gun a bit.
|
# ? Jul 31, 2015 14:55 |
|
VostokProgram posted:...Whatever happened to the memristor, anyway? Turns out making them reliable over long periods is hard, especially at the feature sizes modern manufacturing methods use. Xpoint is a type of memristor, though, so we are finally getting there. Don't hold your breath for logic gates built with them, though.
|
# ? Jul 31, 2015 14:59 |
|
EoRaptor posted:Turns out making them reliable over long periods is hard, especially at the feature sizes modern manufacturing methods use. Especially for a company that has no existing fab infrastructure and a long history of research projects that never make it to production. And yeah - HP announced it as a discovery, but it was a long ways out. Intel/Micron already have fabs capable of producing this stuff in volume.
|
# ? Jul 31, 2015 15:04 |
|
mayodreams posted:I just got an email from Corsair asking if I am ready for the 6th Gen Core processors. I guess they are jumping the gun a bit. Only by about 5 days, why? All of the peripheral guys are expected to start selling motherboards / DDR4 kits / everything you need to build a i7-6xxx machine next week, and then i7-6700K and i5-6600K should be available within 30 days from what I've been seeing.
|
# ? Jul 31, 2015 15:33 |
|
Twerk from Home posted:Only by about 5 days, why? All of the peripheral guys are expected to start selling motherboards / DDR4 kits / everything you need to build a i7-6xxx machine next week, and then i7-6700K and i5-6600K should be available within 30 days from what I've been seeing. I mean it's not like they broke the street date or anything, but there has been no official date for Skylake that I am aware of. I just thought it was interesting.
|
# ? Jul 31, 2015 16:17 |
|
Deathreaper posted:Also - any idea when 10Gb Ethernet will be standard on high end motherboards? Have you looked at 10GbE switchgear? $100+ per port still. Not many consumers are interested in $800 switches. What would you even use it for in a home environment, besides maybe iSCSI for your VM lab? And that is not really a huge market. In my uninformed opinion it will be quite a few years (5?) before 10GbE comes standard on motherboards; for now it is an expensive feature few consumers care about.
|
# ? Aug 1, 2015 15:49 |
|
It may be that consumers just get 2.5G/5G Ethernet and not 10G. Though those are meant more for wireless access points, maybe the cost point makes sense.
|
# ? Aug 1, 2015 16:12 |
|
IDF SF is the likely street date. Hell, if XPoint is as fast as DDR3, holy poo poo, that would be great. Storage class memory destroying the memory mountain.
|
# ? Aug 1, 2015 16:51 |
|
NihilismNow posted:Have you looked at 10GbE switchgear? $100+ per port still. Not many consumers interested in $800 switches. Not even that sophisticated... It's just moving large files (1TB+) around the network, which takes two hours or more. Ehhh... maybe in the next 4-5 years.
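That "two hours" matches the raw line-rate arithmetic (the 90% efficiency figure is an assumed fudge factor for protocol overhead, and disk throughput is ignored entirely):

```python
def transfer_hours(size_tb, link_gbps, efficiency=0.9):
    """Wire time for a bulk transfer. efficiency is an assumed
    allowance for protocol overhead; disk speed is ignored."""
    bits = size_tb * 1e12 * 8
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

print(round(transfer_hours(1, 1), 2))   # 1 TB over gigabit: ~2.47 h
print(round(transfer_hours(1, 10), 2))  # same file over 10GbE: ~0.25 h
```

So the jump to 10GbE really would turn a 2.5 hour copy into a coffee break, assuming the disks on both ends can keep up.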
|
# ? Aug 2, 2015 00:41 |
|
"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." - Wayne Gretzky
|
# ? Aug 2, 2015 01:00 |
|
Deathreaper posted:Not even that sophisticated... It's just moving large files (1TB+) around the network, which takes two hours or more. Ehhh... maybe in the next 4-5 years. Otherwise I saw something saying Purley would have native 10Gb support, albeit not necessarily standard, like lower end boards still wouldn't have it. And of course that'd require going with a pricier board and CPU. Sooo until all that stuff becomes affordable, how easy/hard is it to aggregate links nowadays? Both in whatever OS, and in network hardware if there's anything there.
|
# ? Aug 2, 2015 01:04 |
|
PC LOAD LETTER posted:It's not that they aren't ginormous, it's that they're invisible. Lots of small gains should still add up to a noticeable difference at least, but where is it? Yes, I know LGA is technically better than a pinned socket from an electrical standpoint, that wasn't in question, but if the gains are so small as to be unnoticeable while costs don't change overall and durability goes down, why should that be counted as an advantage? You say there isn't a noticeable gain. How can you tell? Have you stolen some unpackaged die from Intel, sorted them into equal-performance/leakage bins, assembled members of the same bin into LGA and PGA packages, and done a comprehensive comparison? When you have only one of the two options to evaluate, it's a really uninteresting and misleading tautology that the difference isn't "noticeable". Speaking of data you don't have, on what basis do you claim durability has gone down? How do you know costs haven't changed? I don't have any hard data, but there are at least plausible reasons why LGA ought to be cheaper. (The CPU package is obviously cheaper to build, and the socket is probably a wash.) Why do you think I'm merely theorycrafting? I have worked at a fabless semi company; I've participated in a project where the difference between the low end and high end variant of the same chip was literally just a cheaper package that restricted the low end version's performance by delivering power to the CPU core section of the chip with much worse IR droop. I am giving Intel the "benefit of the doubt" here because frankly it's ludicrous not to, in this case. (LGA, RDRAM, Netburst, Itanium: one of these is not like the others.)
|
# ? Aug 2, 2015 03:32 |
|
kwinkles posted:In fact, here's an article that details some patents for 3D XPoint use cases. The author of the article thinks it will be faster than DDR3 but slower than DDR4, based on patents for replacing DRAM with XPoint in CPU memory systems and supplementing it with DDR4 or DDR5 in GPU systems where the extra speed of DDR4 or DDR5 is needed. I'm afraid that's not a good source; Dailytech is generally clickbait garbage and this specific author, Jason Mick, is one of its worst lazy idiots. In this case he didn't bother watching the Q&A section of the Intel/Micron press conference. Someone asked them if it was phase change memory (an obvious question given that Intel/Micron have in fact been researching PCM for years). They said no. Mick also puts waaaaaay too much weight on the fact that patents have been filed and that they say things. The one which struck me here was claiming a patent's mention of DDR5 absolutely for sure means DDR5 is coming, guys!!! In the real world, the smart money is on DDR4 being the very last DDRn memory interface. DDRn is pretty much clapped out; it can't be pushed much further. We need a new interface technology -- like HBM, or HMC.
|
# ? Aug 2, 2015 04:34 |