|
Even 2000 MB/s SSDs don't stand up to DRAM. Dual-channel DDR3-1600 is good for 25,600 MB/s peak. Corsair did some synthetic benchmarking of DDR3 vs. DDR4 speeds, and long story short, quad-channel DDR4 systems are playing with around 60,000 MB/s of bandwidth, albeit at higher latencies than dual-channel DDR3 controllers.
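Those peak numbers fall straight out of transfer rate times bus width times channels; a quick sanity check, where the quad-channel DDR4-1866 configuration is an assumption (the exact setup Corsair tested isn't stated here):

```python
# Peak DRAM bandwidth = transfer rate (MT/s) x bus width (bytes) x channels.
def peak_bandwidth_mb_s(mt_per_s, bus_bytes=8, channels=1):
    """Theoretical peak bandwidth in MB/s for a DDR memory config."""
    return mt_per_s * bus_bytes * channels

# Dual-channel DDR3-1600: matches the 25,600 MB/s figure above.
print(peak_bandwidth_mb_s(1600, channels=2))   # 25600

# Quad-channel DDR4-1866 (assumed config for the ~60,000 MB/s number).
print(peak_bandwidth_mb_s(1866, channels=4))   # 59712
```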
|
# ? Feb 5, 2015 16:42 |
|
Well, the original question was whether 400 MB/s is a fair figure for RAM bandwidth, and the point is that it's easily two orders of magnitude off when even an SSD pushes an order of magnitude higher. Even lower-end SSDs do 400 MB/s; we'd be in trouble if our RAM were that slow.
|
# ? Feb 5, 2015 17:00 |
|
A general manager from Intel said that 10nm chips would launch in early 2017. Intel then retracted his statement "for competitive reasons".
|
# ? Feb 5, 2015 17:49 |
|
Lord Windy posted:I can't wait until we have some new storage that is both RAM and Harddisk. Maybe Flash Memory will one day get fast enough. What does 400mb/s translate to in RAM land? Although 160ms latency is essentially forever in computers.

Instant Grat posted:Google "Memristor".

A) 400MB/s? 160ms latency? Google NVMe drives, such as the Samsung XS1715. 3000MB/s read / 1400MB/s write and 0.2ms latency.

B) http://www.hpl.hp.com/research/systems-research/themachine/

Edit: I see I was rather beaten by necrobobsledder:

necrobobsledder posted:The bigger problem I see is that our software written at present is incapable of handling super high speed without rewrites and completely rethinking networking. Here's a good example of what is required to handle the network hardware coming down the pipe at 100 gigabits - it's NOT easy, and ironically enough it's somewhat gated by how fast your CPU can work: https://lwn.net/Articles/629155/

Rastor fucked around with this message at 18:10 on Feb 5, 2015 |
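To put the latency figures quoted here on one scale, here is a rough comparison; the ~100 ns DRAM access time is an assumed ballpark figure, not something from this thread:

```python
NS_PER_MS = 1_000_000  # nanoseconds per millisecond

# Figures quoted in the thread, plus an assumed ~100 ns DRAM access time.
dram_ns = 100                  # assumption: typical DRAM access latency
nvme_ns = 0.2 * NS_PER_MS      # Samsung XS1715: 0.2 ms quoted above
slow_io_ns = 160 * NS_PER_MS   # the 160 ms figure quoted above

print(f"NVMe read is {nvme_ns / dram_ns:,.0f}x DRAM latency")
print(f"160 ms storage is {slow_io_ns / dram_ns:,.0f}x DRAM latency")
```

Even the fast NVMe drive sits three orders of magnitude above DRAM, which is why "essentially forever" is about right for 160 ms.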
# ? Feb 5, 2015 18:07 |
|
Darkpriest667 posted:Right, but Samsung is already having some controller issues (or at least I think it's NAND controller issues) which severely degrades the speed of accessing older memory blocks. They said they fixed it but it's rearing it's ugly head again. Basically what we need is large RAMdisks. We need to eliminate storage and RAM as separate and combine them into one thing. That's what I am saying about how programs are accessing RAM. If they were IN RAM and not loaded into RAM from storage. That's the main slowdown. If DDR wasn't so goddamned expensive now because of a B.S. shortage that wasn't even real I'd have bought more this past year. RAMdisk is really nice for loading stuff, but unless DDR4 comes down quite a ways it's really stupid to upgrade unless you need x99 for video editing and computational stuff. I do both so it's double annoying for me. I would never thought I would AGAIN live to see RAM more expensive than my CPU since the mid 1990s, but somehow it will be!

yeah the only significant speed upgrade is going to be when some sort of competitive NVRAM hits the market. however:

STTMRAM/PCM/FeRAM have density issues
ReRAM (memristors, crossbar) aren't in production so they are theoretical at best
|
# ? Feb 5, 2015 19:19 |
|
Lord Windy posted:I can't wait until we have some new storage that is both RAM and Harddisk. Maybe Flash Memory will one day get fast enough. What does 400mb/s translate to in RAM land? Although 160ms latency is essentially forever in computers. This might be a good place to start doing some reading, maybe?
|
# ? Feb 5, 2015 19:41 |
|
Doesn't the joke go something like "high performance computing is the science of turning a CPU-bound problem into an input-bound problem"?
|
# ? Feb 5, 2015 23:20 |
|
Malcolm XML posted:yeah the only significant speed upgrade is going to be when some sort of competitive NVRAM hits the market. reRAM is theoretical but it really is our only hope to solve the issue. I have no idea why we even went to DDR4 standard considering, except in a very few benchmarks, RAM speed is not very helpful. Latency is much more important and even then it's really so fast at this point that it's absurd.
|
# ? Feb 5, 2015 23:29 |
|
Darkpriest667 posted:reRAM is theoretical but it really is our only hope to solve the issue. I have no idea why we even went to DDR4 standard considering, except in a very few benchmarks, RAM speed is not very helpful. Latency is much more important and even then it's really so fast at this point that it's absurd. DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users.
|
# ? Feb 6, 2015 00:01 |
|
HERAK posted:DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users. it's a .3V difference. It's less than 5 watts for 4 Dimms. That's not enough of an efficiency for the entire consumer market to be switched to a new standard. Hell, I fold 247 and even for me it's not enough of an efficiency to switch standards. For servers it is a big deal because we're talking about datacenters that have 10000 DIMMs in them.
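The scale argument here can be made concrete; a back-of-envelope sketch assuming roughly 1 W saved per DIMM, a made-up round number consistent with the "less than 5 watts for 4 DIMMs" claim above:

```python
# Assumes ~1 W saved per DIMM (consistent with "<5 W for 4 DIMMs" above).
WATTS_SAVED_PER_DIMM = 1.0
HOURS_PER_YEAR = 24 * 365

def annual_kwh_saved(dimm_count, w_per_dimm=WATTS_SAVED_PER_DIMM):
    """Yearly energy saved, in kWh, for a fleet of always-on DIMMs."""
    return dimm_count * w_per_dimm * HOURS_PER_YEAR / 1000

print(annual_kwh_saved(4))        # desktop with 4 DIMMs: ~35 kWh/year
print(annual_kwh_saved(10_000))   # datacenter with 10,000 DIMMs
```

On a single desktop that is pocket change on the power bill; across a datacenter it is tens of thousands of kWh per year before even counting the cooling load it avoids.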
|
# ? Feb 6, 2015 00:11 |
|
Darkpriest667 posted:it's a .3V difference. It's less than 5 watts for 4 Dimms. That's not enough of an efficiency for the entire consumer market to be switched to a new standard. Hell, I fold 247 and even for me it's not enough of an efficiency to switch standards. For servers it is a big deal because we're talking about datacenters that have 10000 DIMMs in them. HERAK posted:DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users. No one cares about your folding, darkprincess.
|
# ? Feb 6, 2015 00:18 |
|
r0ck0 posted:No one cares about your folding, darkprincess. Where is my tiara? the point is efficiency even for power users like people that do compute stuff isn't enough to make a switch from DDR3 to DDR4 if I could get an x99 platform with DDR3 I'd already be in one.
|
# ? Feb 6, 2015 00:22 |
|
Darkpriest667 posted:Where is my tiara? the point is efficiency even for power users like people that do compute stuff isn't enough to make a switch from DDR3 to DDR4 if I could get an x99 platform with DDR3 I'd already be in one. What is your point, would you just state it clearly once and for all?
|
# ? Feb 6, 2015 00:41 |
|
Darkpriest667 posted:it's a .3V difference. It's less than 5 watts for 4 Dimms. That's not enough of an efficiency for the entire consumer market to be switched to a new standard. Hell, I fold 247 and even for me it's not enough of an efficiency to switch standards. For servers it is a big deal because we're talking about datacenters that have 10000 DIMMs in them. From what I can tell, there won't be a DDR5, which means you'll be able to use your DDR4 from Skylake for years after as the industry tries to figure out a successor.
|
# ? Feb 6, 2015 00:42 |
|
Josh Lyman posted:Is it conceivable that we'd see DDR4 primarily in servers and DDR3 for consumers? My guess is the memory manufacturers would prefer to manufacture only one or the other, but that doesn't mean they can't manufacture both.

No, AMD is still on DDR3 and Intel won't move its consumers that aren't in the HEDT segment to DDR4 until Skylake (which is what they've sworn for a while now). There were some rumors of UniDIMM but that has mostly died off. Most of the memory makers are not producing nearly what they were 2 years ago. They are trying to clear inventories. I imagine they are scaling down DDR3 production and ramping up for DDR4 production, but not nearly on the scale that they produced DDR3. Mostly because the majority of consumer computer users are now on iPads or another tablet device. Desktops are mostly for gamers and high end users nowadays. This means we will likely not see the low pricing on parts we saw during the golden age of desktops that was 2006-2012. That day is over. Desktop PCs are now more of a niche than the standard.
|
# ? Feb 6, 2015 01:12 |
|
Darkpriest667 posted:it's a .3V difference. It's less than 5 watts for 4 Dimms. That's not enough of an efficiency for the entire consumer market to be switched to a new standard. Hell, I fold 247 and even for me it's not enough of an efficiency to switch standards. For servers it is a big deal because we're talking about datacenters that have 10000 DIMMs in them. A couple of watts in a very small form factor system where the target TDP* is ~20 watts is a good gain. *for the sake of argument
|
# ? Feb 6, 2015 01:26 |
|
One point of note is that Intel's CEO has said that for every mobile phone sold that there's at least 4(?) server CPUs sold to support the services used by that phone. So there is a good chance of market bifurcation. Both markets do demand lower power consumption and mobile device purchases may slow down and server purchases slow as well from sheer oversupply. Novel server boxes like the Dell VRTX or all those random micro servers like Project Moonshot might help with more turnover in datacenters. I'm so waiting to get some of the VRTX boxes decommissioned for a home datacenter instead of the boxes I have around.

canyoneer posted:Doesn't the joke go something like "high performance computing is the science of turning a CPU-bound problem into an input-bound problem"?
|
# ? Feb 6, 2015 02:37 |
|
HERAK posted:DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users. The DDR4 spec for voltages is pretty dated; remember that the spec was finalized in 2011. We've pushed very hard on performance per watt since then. Better is the DDR4L (LPDDR4) spec that exists for laptops, which pushed voltage down to 1.05V without sacrificing performance. Desktops could probably switch to SODIMM formats and adopt this spec without any end user impact, but I don't know if that is seriously being considered or not. You could make a traditional LPDDR4 DIMM, but I don't think anyone has actually bothered to. Expect DDR4 to stick around for a long time, though. No one has proposed a spec that solves any of DDR4's problems in a way that is affordable for consumers.
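A voltage drop buys more than it might appear, since dynamic CMOS power scales roughly with the square of the supply voltage; a simplified sketch that ignores static leakage and frequency changes:

```python
# Dynamic CMOS power: P ~ C * V^2 * f, so at fixed capacitance and
# frequency, relative power goes as the square of the voltage ratio.
def relative_dynamic_power(v_new, v_old):
    """Fraction of old dynamic power after a supply-voltage change."""
    return (v_new / v_old) ** 2

# DDR3 1.5 V -> DDR4 1.2 V: about a third less dynamic power.
print(f"{(1 - relative_dynamic_power(1.2, 1.5)) * 100:.0f}% less")

# DDR4 1.2 V -> the 1.05 V quoted above: roughly a quarter less again.
print(f"{(1 - relative_dynamic_power(1.05, 1.2)) * 100:.0f}% less")
```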
|
# ? Feb 6, 2015 02:47 |
|
necrobobsledder posted:One point of note is that Intel's CEO has said that for every mobile phone sold that there's at least 4(?) server CPUs sold to support the services used by that phone. If it's the quote I'm thinking of, it was Otellini and it was for every 100 cell phones, 6 server chips were sold. But I can't dig it up because every search I try is talking about what a huge failure he was to not see into the iPhone future from 2005.
|
# ? Feb 6, 2015 03:24 |
|
I was pretty sure I got that quote wrong, hence the question. To be even more brutal to the guy, I was writing mobile apps for Windows CE and StrongARM crap back in school in the 2002-2004 timeframe, and even then everyone was talking about how huge smartphones were going to be. I mean, I knew they had neat capabilities, but everything we were trying to do with geolocation and triangulation was terrible then and the APIs just weren't there to make it easy to see how much of an impact they'd have. I like to think I wrote in class an early version of a Google Maps style predictive map tile loader based upon scroll momentum but all the map software on phones at the time was like MapQuest (incumbent) - click to re-center, repaint, repeat. I just went with an approach that was more intuitive for users. It was really easy on .NET compact (but ugh, I couldn't tell what was actually supported there v. desktop until I compiled the sucker). So yeah, both Microsoft and Intel lost hard when they had the technology about ready to do something meaningful with smartphones. But oh no, everyone was still riding that laptop money gravy train and enterprise software was selling like hotcakes still.
|
# ? Feb 6, 2015 06:06 |
|
Mr Chips posted:A couple of watts in a very small form factor system where the target TDP* is ~20 watts is a good gain. That's absolutely true, however where efficiency matters most is the mobile sector. Intel has been screwing around with efficiency for about 4 generations now and the rest of the industry has basically followed, except AMD who apparently has engineers and marketing sitting around in a room snorting lines of blow and then deciding maybe they should talk smack about Intel and Nvidia and release a product. The reason we're seeing so much focus on efficiency is because mobile is where the growth is and will continue to be for the future. People are spending 300 to 1000 dollars every few years on a new phone and tablet. A good amount of desktops are from the era before Intel's Core and AMD's Phenom processors. It's good for the majority of the market and I guess in a way it's good for us that tinker with poo poo. If a product has the same heat and power threshold but becomes more efficient we can push it harder and farther than we pushed things before. That being said it hasn't panned out in the CPU areas. Haswell and Ivy are both poor overclockers in raw clock speed but have made good ground in IPC. A 4.5Ghz Haswell is equivalent to a 5.0 Ghz Sandy and most Haswells can do 4.3. (not on default voltage of course.)
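The clock-parity claim implies a specific IPC ratio, since delivered performance is roughly clock times IPC; a quick sketch of that arithmetic:

```python
# performance ~ clock x IPC, so equal performance at different clocks
# implies IPC_new / IPC_old = clock_old / clock_new.
def implied_ipc_gain(old_clock_ghz, new_clock_ghz):
    """Fractional IPC advantage implied by performance parity."""
    return old_clock_ghz / new_clock_ghz - 1.0

# The post's claim: 4.5 GHz Haswell == 5.0 GHz Sandy Bridge.
print(f"{implied_ipc_gain(5.0, 4.5) * 100:.1f}% IPC advantage")  # ~11.1%
```

So the trade described above is roughly an 11% IPC gain against a ~10% loss in attainable clocks, which is why the generational uplift felt flat.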
|
# ? Feb 6, 2015 12:05 |
|
Latest rumor, unlocked Skylake in Q3 after all. Hope it's true.
|
# ? Feb 9, 2015 00:59 |
|
is there any hope of broadwell-ep by q3 this year?
|
# ? Feb 9, 2015 02:30 |
|
StabbinHobo posted:is there any hope of broadwell-ep by q3 this year? I don't think an EP has shipped before an E in recent memory.
|
# ? Feb 9, 2015 03:34 |
|
PCjr sidecar posted:I don't think an EP has shipped before an E in recent memory. It would be very odd if one did. They may use the same die, but for the EP, Intel has to validate the multi-socket support and possibly do an extra spin or two to fix bugs.
|
# ? Feb 9, 2015 04:33 |
|
calusari posted:
well we know it's a rumor because there is no way Intel is going to cannibalize its own market of high end gamers by releasing Skylake and Broadwell LGA sockets at the same time.
|
# ? Feb 10, 2015 00:07 |
|
there was never a confirmation that there will be any broadwell desktop parts, so there may not be any cannibalization:

65W unlocked broadwell chip could be for AIO
95W skylake unlocked is the desktop part (devils canyon successor)

obviously this is 100% speculation
|
# ? Feb 10, 2015 01:16 |
|
Malcolm XML posted:there will be no noticeable difference between ddr3 and ddr4 for the next few years In other words, my investment in an i7-5930K (overclocked to 4.3 GHz) with 16 GB DDR4 RAM 2666 and x99 Architecture to be "safe" for 4-5 gaming years (and just planning to upgrade the GPUs every 18-24 months) was... dumb?
|
# ? Feb 10, 2015 01:52 |
|
Mr.PayDay posted:In other words, my invest in an i7-5930K (overclocked to 4,3 GHz) with 16 GB DDR4 RAM 2666 and x99 Architecture to be "safe" for 4-5 gaming years (and just planning to upgrade the GPUs every 18-24 months) was... dumb? No one is safe when it comes to future-proofing shiny computer bits. No one.
|
# ? Feb 10, 2015 03:25 |
|
Does anyone have an article that compares the HD 5500, 6000 and 6100 Iris performance? Lenovo is selling the Broadwell CPUs in their new 450s and 550s and I'd like to know how they perform.
|
# ? Feb 10, 2015 05:04 |
|
Mr.PayDay posted:In other words, my invest in an i7-5930K (overclocked to 4,3 GHz) with 16 GB DDR4 RAM 2666 and x99 Architecture to be "safe" for 4-5 gaming years (and just planning to upgrade the GPUs every 18-24 months) was... dumb? It was dumb because you should never, ever think of a high end gaming system as an "investment" on any level. Instead, you should think of it as lighting money on fire. You can almost always get 90% of the performance for way less than 90% of the money. From the manufacturer's point of view the reason for the premium line is to extract fat profits from people with deep pockets who have to have the fastest thing, not to give you a bargain deal on something that'll last forever. This is especially true in this case: for 99% of gamers, regular Haswell is a much better choice than Haswell-E. You can get a 4.0 GHz Haswell for $350 or less and it will be every bit as good as that 5930K for essentially all games for the foreseeable future. (There aren't many games that need more than four Haswell cores, and there are not likely to be any in the next 5 years.)
|
# ? Feb 10, 2015 06:55 |
|
BobHoward posted:It was dumb because you should never, ever think of a high end gaming system as an "investment" on any level. Instead, you should think of it as lighting money on fire. Thanks for the reply, lesson learned I guess. My local PC dealer even "warned" me, but I think I just wanted to own a native 6 core CPU with nice oc results and be ready for up to a 3- or 4 way SLI system 2016 or 2017, so I won't have to buy anything new. I hope at least World of Warcraft will benefit. I get up to 190 frames on Ultra settings with CMAA, as WoW is still heavily CPU-dependent. Yeah, just trying to find reasons here
|
# ? Feb 10, 2015 09:38 |
|
Mr.PayDay posted:Thanks for the reply, lesson learned I guess. My local PC dealer even "warned" me, but I think I just wanted to own a native 6 core CPU with nice oc results and be ready for up to a 3- or 4 way SLI system 2016 or 2017, so I won't have to buy anything new. A 4790K would be faster at stock, because WoW and almost every other CPU heavy game doesn't scale with many cores. It just wants the fastest ones possible, and seeing as you're on the same architecture, a 4790K out of the box would be faster for that particular scenario - it turbos to 4.4 anyway. On the other hand, now you've overclocked it, you at least have parity. Parity with a much, much cheaper board, CPU and RAM. The extra cores are explicitly only for those people for whom time is money. Rendering, video encoding and the like. You're right that you have a lot of PCI Express lanes, but 3 or 4 way SLI scales very poorly in almost every situation, and thus would never actually be worth it anyway. I guess you're ready for adding a lot of fast PCIe SSDs, though. HalloKitty fucked around with this message at 09:52 on Feb 10, 2015 |
# ? Feb 10, 2015 09:44 |
|
BobHoward posted:You can almost always get 90% of the performance for way less than 90% of the money. From the manufacturer's point of view the reason for the premium line is to extract fat profits from people with deep pockets who have to have the fastest thing, not to give you a bargain deal on something that'll last forever. This is especially true in this case: for 99% of gamers, regular Haswell is a much better choice than Haswell-E. You can get a 4.0 GHz Haswell for $350 or less and it will be every bit as good as that 5930K for essentially all games for the forseeable future.
|
# ? Feb 10, 2015 16:51 |
|
eggyolk posted:No one is safe when it comes to future-proofing shiny computer bits. No one. Unless you bought an i7 920.
|
# ? Feb 10, 2015 17:17 |
|
ElehemEare posted:This being said, the last few generations of uArch changes have mainly given us greater efficiency. I'm running an i5-750 that still mostly chugs along well. Do we have any rational expectation that the Skylake uArch changes will have tangible benefits (for the mainstream gamer) that will outweigh the necessity of UniDIMM DDR3/DDR4 upgrades, in addition to mobo/CPU, for single GPU setups? Keeping in mind that everything about Skylake is basically rumors right now:
There are some other rumored uarch differences, but none that look relevant for gaming as long as you aren't rendering on the CPU.
|
# ? Feb 10, 2015 17:26 |
|
Rime posted:Unless you bought an i7 920. Radeon 7970 was actually an incredible buy too, seeing as it's STILL being sold now, and is totally viable with today's games. Up against the later but relevant rival the 680, it still had a 50% VRAM advantage, and when the fight came again in the form of 770 vs 280X, things still held up. From the CPU end, although Nehalem had a lot of charm (I remember building a friend's machine with a 980X, if that was mine, I'd still rock it today, a 920, not so much), I think it's overshadowed by the Q6600 which had frankly ridiculous overclock potential and staying power. I have a feeling the 2500K will have some outrageous term of relevance now, though. I guess if you want to throw any kind of bone to AMD CPUs, it's that their old K10 architecture is still toe to toe with their newest stuff, keeping them comparably relevant. A Phenom II X6 is not really much less desirable than any Piledriver. HalloKitty fucked around with this message at 18:14 on Feb 10, 2015 |
# ? Feb 10, 2015 17:59 |
|
So based on current rumors, unless I plan on stepping up multithread dependency of things I run on my home desktop (which is a strong maybe with SQL/Hadoop stuff if I decide I want to bring work home with me even more), decide I need multiple GPUs (I don't, I'm running a 1440x900 single monitor off a 970 ), or plan on putting tonnes of drives into a new setup (I don't, my old rig becomes a NAS for that), Skylake doesn't necessarily afford me any huge improvements over Haswell or Broadwell; or necessarily even current Lynnfield chip (aside from efficiency, but I'm not running a data centre out of my apartment)? Seems like I can wait for Skylake and hop on the clearance LGA1150 bandwagon perhaps. Thanks for the input.
ElehemEare fucked around with this message at 20:16 on Feb 10, 2015 |
# ? Feb 10, 2015 18:57 |
|
ElehemEare posted:So based on current rumors, unless I plan on stepping up multithread dependency of things I run on my home desktop (which is a strong maybe with SQL/Hadoop stuff if I decide I want to bring work home with me even more), decide I need multiple GPUs (I don't, I'm running a 1440x900 single monitor off a 970 ), or plan on putting tonnes of drives into a new setup (I don't, my old rig becomes a NAS for that), Skylake doesn't necessarily afford me any huge improvements over Haswell or Broadwell; or necessarily even current Lynnfield chip (aside from efficiency, but I'm not running a data centre out of my apartment)? Seems like I can wait for Skylake and hop on the clearance LGA1150 bandwagon perhaps. Thanks for the input.
|
# ? Feb 10, 2015 21:00 |
|
necrobobsledder posted:I literally work at home running all of the above things and I have no real need to upgrade my E3-1230 (i7-2600k equivalent - somewhat faster actually for work). That E3-1230 line has held to performance numbers roughly all within 20%-ish from Sandy Bridge until now with most of the efforts going towards reducing power consumption. While Skylake is one of the more ambitious architectural changes (moreso than Haswell was to Sandy Bridge) I still don't think another 10% more performance would be that big of a deal either. For me, moving from Nehalem to Skylake isn't even about performance, it's just to get out of my ancient X58 chipset motherboard, and even that isn't because of speed concerns, but because the thing is flat out old. Dead USB ports, it hasn't had a functioning onboard network interface in years, and I really feel like it's the weak point of my machine, currently. Also, I like new stuff.
|
# ? Feb 10, 2015 21:05 |