|
~Coxy posted:Makes more sense than Pentium/i3/i5/i7. I don't think so... i3/i5/i7 maps well to budget, mainstream, performance. It doesn't conflate confusingly common words (Core) with the name. Sure, the model numbers within (4790K wtf) don't help, but the numbers for the Core series didn't help much better (6600 vs 8300).
|
# ? Aug 5, 2015 05:10 |
|
|
|
i3/i5/i7 work ok when you are comparing just the current generation against itself, but as soon as other generations come into the picture who the gently caress knows
|
# ? Aug 5, 2015 05:12 |
|
Xeons are ok I guess? E3/5/7 [max sockets]2/6/8[two random numbers?] v[generation number]
|
# ? Aug 5, 2015 05:20 |
|
computer parts posted:AMD can satisfy your desires there. Ak Gara posted:2500k for life, bro. The Serious Sam engine performance gains are AMAZING though. Assepoester fucked around with this message at 06:46 on Aug 5, 2015 |
# ? Aug 5, 2015 06:42 |
|
Lowen SoDium posted:Oh God... This was stupid when they did it for USB 2.0 and is stupider now. All this does is hurt their branding and confuse consumers. It's nowhere near as stupid and damaging this time, it's only a factor of 2. With USB2 it was a factor of 40 between "high" and "full", and the need for more speed than "full" (which was just 12 Mbps) was far more pressing. At 5Gbps USB3.0 already provides a bigger pipe than is needed for the most common high speed USB devices (HDDs and memory sticks). It does limit SSDs, but not to the point of unusability (I've seen 400 MB/s with a decent USB3-SATA3 bridge chip; I can live with that). Most users won't care this time around.
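For reference, a quick back-of-the-envelope on those naming gaps (nominal signalling rates only, not real-world throughput):

```python
# Rough speed ratios behind the USB renaming complaints
# (nominal signalling rates, not real-world throughput).
full_speed_mbps = 12    # USB 1.1 "Full Speed"
high_speed_mbps = 480   # USB 2.0 "High Speed"
usb3_gen1_gbps = 5      # USB 3.0 / 3.1 Gen 1 "SuperSpeed"
usb3_gen2_gbps = 10     # USB 3.1 Gen 2 "SuperSpeed+"

print(high_speed_mbps / full_speed_mbps)  # 40.0 -- the USB2 rename gap
print(usb3_gen2_gbps / usb3_gen1_gbps)    # 2.0  -- the USB3 rename gap
```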
|
# ? Aug 5, 2015 06:46 |
|
mayodreams posted:Just doing a simple GIS for video card box art delivers. Josh Lyman fucked around with this message at 06:51 on Aug 5, 2015 |
# ? Aug 5, 2015 06:48 |
|
Josh Lyman posted:Oh man, the first video card I ever bought was an ASUS GeForce 256 when I built my first computer in 2000. I think it might have been the V6600 Deluxe because it came with 3D glasses (that I used once).
|
# ? Aug 5, 2015 07:27 |
|
Skandranon posted:I don't think so... i3/i5/i7 maps well to budget, mainstream, performance. It doesn't conflate confusingly common words (Core) with the name. Sure, the model numbers within (4790K wtf) don't help, but the numbers for the Core series didn't help much better (6600 vs 8300). Yes it does. "Core i7 2600K" Also i3/i5/i7 mean different things on different platforms. If it was a hard and fast rule where, like, i2 = 2C2T, i2H = 2C4T, i4 = 4C4T, i4H = 4C8T, it would be a lot better IMO.
|
# ? Aug 5, 2015 07:30 |
|
Cardboard Box A posted:Their cores are bad though That 2500K is stock. I wonder what a 5ghz 2500K vs skylake would be like. [edit] Who am I kidding it probably won't even matter. http://www.techspot.com/review/1006-the-witcher-3-benchmarks/page5.html Ak Gara fucked around with this message at 12:04 on Aug 5, 2015 |
# ? Aug 5, 2015 09:19 |
|
Marinmo posted:3D-glasses: The mistake TV makers can't wait to redo.
|
# ? Aug 5, 2015 13:14 |
|
Why the hell are those reviews using games that are bumping up against GPU limits to test CPU scaling? I thought, like, rule number one of testing one component's performance is to remove all other bottlenecks.
|
# ? Aug 5, 2015 13:17 |
|
Skylake is out in Aus and the CPUs and motherboards are hilariously expensive. $529 for a 6700K, and the motherboards are X99 level prices. I mean, Asus Z170-Deluxe for 529 loving dollars, or the comedy option Gigabyte Z170X Gaming for $800.
|
# ? Aug 5, 2015 13:20 |
Gwaihir posted:Why the hell are those reviews using games that are bumping up against GPU limits to test CPU scaling? That chart comes from an article entitled "The Witcher 3: Wild Hunt Benchmarked: Graphics & CPU Performance", not one about Skylake.
|
|
# ? Aug 5, 2015 13:24 |
|
Embargo is up http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation
|
# ? Aug 5, 2015 14:17 |
|
Ah, that's what I get for phone posting. Reading the Anandtech review now, and uhhh yea there's some weird stuff going on with this chip. Quite good gains in non-gaming applications like encoding, but clock-for-clock regressions in gaming performance.
|
# ? Aug 5, 2015 14:17 |
|
repiv posted:Embargo is up Sandy Bridge/Ivy Bridge boxes still not dying fast enough for Intel to actually sell these. At least they're shunting development on Broadwell Xeons to focus on Skylake Xeons.
|
# ? Aug 5, 2015 14:31 |
|
Good, my 2500k is safe for another cpu generation. Well unless it dies I mean.
|
# ? Aug 5, 2015 14:43 |
|
And if your Sandy Bridge dies, Haswell is like, right there for much cheaper than a Skylake configuration will ever be for the next 1.5 years and you can even reuse your RAM. The desktop single-thread performance field has been a dead end for years. It's a good thing it's worth like nothing now.
|
# ? Aug 5, 2015 14:57 |
|
Gwaihir posted:Ah, that's what I get for phone posting. Reading the Anandtech review now, and uhhh yea theres some weird stuff going on with this chip. Rather quite good gains in non gaming applications like encoding, but clock for clock regressions in gaming performance. Turns out Sandy Bridge really was the high point in terms of value/performance (AT's chart shows the 2600K launching cheaper than any chip after it!), and even though the article likes to suggest the time is up for Sandy, nothing could be further from the truth. Overclocked, in games, with most bottlenecking on the GPU, there is simply nothing to draw the Sandy Bridge user. Even the power consumption is utterly underwhelming. I have no doubt that competition is playing a role here. If AMD was competing at the high end, Intel would be trying to up the clock speeds, or at least add another couple of cores on the mainstream platform. Anime Schoolgirl posted:It's a good thing it's worth like nothing now. I really don't think this is true at all, and I'd hazard a guess that much better single thread performance would be the most desirable improvement. HalloKitty fucked around with this message at 15:11 on Aug 5, 2015 |
# ? Aug 5, 2015 15:03 |
|
They did note that they were quite thermally limited in their overclocking tests on Skylake, but punted the OC stuff to a separate dedicated article that probably won't show up until after IDF. And yea, there's no question that Skylake at the same 4.8ghz OC as SB is much faster, but unless you're video encoding... Who cares. Games still aren't CPU limited at resolutions people want to play at. Desktop chips have been firmly "good enough" for a while. At least DDR4 seems like an upgrade (even if very marginally) speed wise over DDR3, which was not really the case with DDR2/3 when 3 came out back in the day. Hopefully one of the decent motherboard manufacturers puts out a hybrid DDR3L/4 board. In the past those hybrid boards were usually only from the second tier folks and were just sorta meh iirc. What would mainstream users even do with more cores on a desktop chip?
|
# ? Aug 5, 2015 15:11 |
|
Gwaihir posted:What would mainstream users even do with more cores on a desktop chip? 4k video decoding
|
# ? Aug 5, 2015 15:16 |
|
Anime Schoolgirl posted:4k video decoding *Encoding! (I actually made an in retrospect not all that wise choice of upgrading from my i7-920 to an i7-3770 for more encoding performance. I should have just gotten a loving Xeon or something, Ivy Bridge was the real blooper between SB/IVB/Haswell-Devil's Canyon.)
|
# ? Aug 5, 2015 15:27 |
|
Intel CPU and Platform Discussion: I should have just gotten a loving Xeon or something
|
# ? Aug 5, 2015 15:29 |
|
I'm extremely curious to see what Skylake looks like in the 30W and less TDP range. As I understood it, part of Haswell's massive performance increase in low-TDP came from the voltage regulator on-package. With Skylake not giving much IPC advantage over Haswell in general, and the voltage regulator moving back onto the board, I'm skeptical that the theoretical i5-6200u will be any faster or even match a 5200u at all.
|
# ? Aug 5, 2015 15:46 |
|
Confirmed: Skylake does not have 20x PCI-E 3.0 lanes to the CPU as was previously rumored. Disappointing for anyone who was hoping to use CPU lanes for multiple video cards AND NVMe storage.
|
# ? Aug 5, 2015 15:58 |
|
Lowen SoDium posted:Confirmed: Skylake does not have 20x PCI-E 3.0 lanes to the CPU as was previously rumored. You'd have to be able to saturate DMI 3.0 first though, and that gives some more breathing room
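For context on how much breathing room that actually is, a rough sketch of DMI 3.0's ceiling (it's electrically a PCIe 3.0 x4 link; this ignores protocol overhead, so treat it as an upper bound):

```python
# DMI 3.0 is electrically a PCIe 3.0 x4 link: 8 GT/s per lane with
# 128b/130b line coding. Rough usable bandwidth, ignoring packet overhead:
gt_per_s = 8e9          # transfers per second per lane
encoding = 128 / 130    # 128b/130b line-coding efficiency
lanes = 4

bytes_per_s = gt_per_s * encoding * lanes / 8
print(round(bytes_per_s / 1e9, 2))  # ~3.94 GB/s, shared by everything behind the PCH
```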
|
# ? Aug 5, 2015 16:27 |
Tech Report's review of Skylake is up. Some interesting stuff here, but mainly about the i7-5775C.
|
|
# ? Aug 5, 2015 16:42 |
|
Gwaihir posted:*Encoding! There's always Haswell-E
|
# ? Aug 5, 2015 17:03 |
|
There is now, yea. But I've also stopped encoding as many videos since then so
|
# ? Aug 5, 2015 17:08 |
quote:In our discrete gaming benchmarks, at 3GHz Skylake actually performs worse than Haswell at an equivalent clockspeed, giving up an average of 1.3% performance. We don’t have much from Intel as to analyze the architecture to see why this happens, and it is pretty arguable that it is noticeable, but it is there. Hopefully this is just a teething issue with the new platform.
|
|
# ? Aug 5, 2015 17:15 |
|
This isn't that surprising, given that gaming is almost never CPU limited, DDR4 latency is a good bit higher normally, and there are differences in the platform that we don't fully understand right now. When the first i7-920 came out, wasn't it slower than a Q9650 in discrete GPU gaming because of cache changes?
|
# ? Aug 5, 2015 17:23 |
|
http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/22 This page is interesting, 1080p gaming on ultra with a 980/290x. Thanks part picking thread for steering my broke rear end (and many others) towards an i5-4590 earlier this year.
|
# ? Aug 5, 2015 17:23 |
|
Maybe until you OCed it from stock to 3.6ghz
|
# ? Aug 5, 2015 17:23 |
|
Where the hell do the CPUs usually go on sale first? I just keep checking Newegg/Amazon every few hours and that's annoying. Could at least offer me an auto-notify button to throw my money down the toilet
|
# ? Aug 5, 2015 17:30 |
|
Twerk from Home posted:This isn't that surprising, given that gaming is almost never CPU limited, DDR4 latency is a good bit higher normally, and there are differences in the platform that we don't fully understand right now. When the first i7-920 came out, wasn't it slower than a Q9650 in discrete GPU gaming because of cache changes? The latency and speed of the memory can also play a huge part. The CAS/speed ratio determines how long in absolute time the processor has to wait for data, and with DDR4 pushing the CAS to 16/17 up from DDR3's 9-10, you need DDR4 2800 at CAS 15 to really offset the issue. It wouldn't surprise me if you could offset that performance issue with faster RAM. They discovered something similar for Haswell over at AnandTech.
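A rough sketch of that CAS/speed math (first-word latency only; real access time involves more timings than CAS, so these are illustrative numbers):

```python
# First-word latency in nanoseconds: CAS cycles divided by the memory
# clock. The clock in MHz is half the DDR data rate in MT/s, hence
# the factor of 2000.
def latency_ns(data_rate_mts, cas):
    return cas * 2000 / data_rate_mts

print(round(latency_ns(1600, 9), 2))   # DDR3-1600 CL9  -> 11.25 ns
print(round(latency_ns(2133, 15), 2))  # DDR4-2133 CL15 -> 14.06 ns
print(round(latency_ns(2800, 15), 2))  # DDR4-2800 CL15 -> 10.71 ns
```

So launch-era DDR4-2133 CL15 really does wait longer in absolute time than commodity DDR3-1600 CL9, and it takes roughly DDR4-2800 CL15 to close the gap.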
|
# ? Aug 5, 2015 17:35 |
|
That will be interesting to see once someone does some RAM/OCing benches, if they can get up to DDR4-4000 range.
|
# ? Aug 5, 2015 17:38 |
|
Lol'ed at AT's OC testing. 4.4GHz max overclock with a chip that already stock turbos to 4.2GHz. Even in the context of an i5 that's probably worse binned, we're talking maybe only like a +500MHz (+13%) OC off the stock 3.7GHz 4C turbo. I dunno why we are even bothering to overclock, man. Anime Schoolgirl posted:And if your Sandy Bridge dies, Haswell is like, right there for much cheaper than a Skylake configuration will ever be for the next 1.5 years and you can even reuse your RAM. The funny part is that things on the software side like DX12 will make raw CPU performance even more irrelevant than today. Palladium fucked around with this message at 17:46 on Aug 5, 2015 |
# ? Aug 5, 2015 17:41 |
|
Palladium posted:Even in the context of an i5 that are probably worse binned, we are talking maybe only like a +500MHz (+13%) OC off the stock 3.7GHz 4C turbo. I dunno why we are even bothering to overclock, man. Given that the non-OC parts also have a 65W TDP instead of a 95W one, the -K parts could end up being overall inferior this generation.
|
# ? Aug 5, 2015 17:43 |
|
Twerk from Home posted:Given that the non-OC parts also have a 65W TDP instead of a 95W one, the -K parts could end up being overall inferior this generation.
|
# ? Aug 5, 2015 17:49 |
|
|
|
Twerk from Home posted:Given that the non-OC parts also have a 65W TDP instead of a 95W one, the -K parts could end up being overall inferior this generation. Did I misread the charts or is the 6700K consuming a fair bit more power than the 4790K at the same clock speed? 2 years of development and Intel's 14nm is still mediocre
|
# ? Aug 5, 2015 17:56 |