|
redeyes posted:I early adopted an 8700K and it cost $300. Seems like I saved around $50 bux. It’s a nice CPU and I haven’t even really messed around with an overclock other than the ASRock push-button tool.
|
# ? Jan 29, 2019 18:41 |
|
I took a gamble and got a 5820K. It really paid off because apps I use are just starting to use 6 cores and 5960Xs are getting cheap and I'll get one of those once I start using apps that use 8 cores. Broadwells are still expensive but they overclock poorly and end up costing like 200$ more for the same performance. 5960Xs can be had for as little as 300$ now.
|
# ? Jan 29, 2019 18:49 |
|
Cygni posted:Intel finished with a record year across the board, and projects 2019 to be another record. 4th quarter basically matched the record 3rd quarter ($18.7B vs $19.2B). And they'll continue to be profitable if they can continue to provide Apple LTE and 5G modems so Apple can dump Qualcomm.
|
# ? Jan 30, 2019 03:30 |
|
Qualcomm is gonna make all those sweet 5G infrastructure bucks because Huawei
|
# ? Jan 30, 2019 06:43 |
|
craig588 posted:I took a gamble and got a 5820K. It really paid off because apps I use are just starting to use 6 cores and 5960Xs are getting cheap and I'll get one of those once I start using apps that use 8 cores. Broadwells are still expensive but they overclock poorly and end up costing like 200$ more for the same performance. 5960Xs can be had for as little as 300$ now. I'm in the same situation, I keep looking at 6900K's and 5960X's, but does Broadwell-E really OC worse? I did also read that the Haswell-E equivalent Xeons also have unlocked multipliers, but that's just going off some forum post.
|
# ? Jan 30, 2019 09:40 |
|
track day bro! posted:I'm in the same situation, I keep looking at 6900K's and 5960X's, but does Broadwell-E really OC worse? I did also read that the Haswell-E equivalent Xeons also have unlocked multipliers, but that's just going off some forum post. I very much recall people at the time saying Broadwell didn't overclock as well as Haswell.
|
# ? Jan 30, 2019 10:02 |
|
HalloKitty posted:I very much recall people at the time saying Broadwell didn't overclock as well as Haswell. That's a shame, I was hoping I could get a slightly less power-hungry (lol at worrying about power usage running a HEDT chip and a Vega64) CPU with a Broadwell-E one
|
# ? Jan 30, 2019 10:52 |
|
Silicon Lottery had a tray of 5960xs and 6900ks and the 6900ks overclocked much worse https://siliconlottery.com/pages/statistics
|
# ? Jan 30, 2019 12:05 |
|
craig588 posted:Silicon Lottery had a tray of 5960xs and 6900ks and the 6900ks overclocked much worse https://siliconlottery.com/pages/statistics Wow, only 35% are able to do 4.4GHz, which is what I have my 5820K clocked to at around 1.100 vcore
|
# ? Jan 30, 2019 12:45 |
|
Looks like the embargo lifted on the Xeon W-3175X (the unlocked 28-core CPU). Here's Gamers Nexus's review video: https://www.youtube.com/watch?v=N29jTOjBZrw And here's Der8auer delidding his review sample: https://www.youtube.com/watch?v=aD9B-uu8At8
|
# ? Jan 30, 2019 15:35 |
|
The most amusing thing to me is that it seems to be pretty reasonable value if you have a use for all these cores — $3000 for 28 cores and 38.5MB cache. The equivalent Xeon 8180 is >$10000.
|
# ? Jan 30, 2019 16:41 |
|
Mirrors the 32C TR2 pricing, doesn't it? I wholly expect AMD to screw them with the 32C TR3.
|
# ? Jan 30, 2019 16:55 |
|
eames posted:The most amusing thing to me is that it seems to be pretty reasonable value if you have a use for all these cores — $3000 for 28 cores and 38,5MB cache. The equivalent Xeon 8180 is >$10000. It will be interesting to see how AMD responds to this. I bet we will see a 64-core Threadripper SKU, probably for the same price as the Xeon (or maybe slightly higher since AMD can argue the overall system cost will still be less due to socket TR4 motherboards being much cheaper). Since we can probably expect it to hit the same clocks as the Xeon part (at least stock vs stock) and we know Zen 2 has full AVX2 support that would really only leave Intel with an advantage in memory channels for the Xeon.
|
# ? Jan 30, 2019 16:59 |
|
Combat Pretzel posted:Mirrors the 32C TR2 pricing, doesn't it? I wholly expect AMD to screw them with the 32C TR3. 2990WX is ~$1700.
|
# ? Jan 30, 2019 19:52 |
|
Combat Pretzel posted:Mirrors the 32C TR2 pricing, doesn't it? I wholly expect AMD to screw them with the 32C TR3. Steve also mentioned in the Gamers Nexus video that the Asus W-3175X motherboard is estimated to cost around ~$1700, with retail availability currently unknown.
|
# ? Jan 31, 2019 05:30 |
|
GN has gotten off track lately with crap like ^^that, LN cooling and collaborating with Roman. I mean yeah he needs to do a video about every day to put food on the table, but c'mon.
|
# ? Jan 31, 2019 08:29 |
|
sauer kraut posted:GN has gotten off track lately with crap like ^^that, LN cooling and collaborating with Roman. What? Do you just not like extreme OC, or do you think it's poor quality content? Is this partially the weird hateboner against Der8auer this thread has had in the past?
|
# ? Jan 31, 2019 09:58 |
|
GN/Steve's doing what he's doing because like it or not, clickbait pays the bills. That's why they're stoking these bullshit and wholly-over-the-top ~benchmark rivalries~ between the tech streamers all of a sudden. If anything, it leaves the "educating the masses about computer tech" field open for newer blood.
|
# ? Jan 31, 2019 10:03 |
|
What else is he gonna do at the moment, review H310 boards? He’s gotta fill the space between major releases with something.
|
# ? Jan 31, 2019 10:29 |
|
Good thing I know Chinese and can watch better quality reviews from China, even though those reviewers have much smaller budgets than big-time YouTubers
|
# ? Jan 31, 2019 11:15 |
|
Cygni posted:What else is he gonna do at the moment, review H310 boards? He’s gotta fill the space between major releases with something. Stay on top of recent GPU driver releases and their issues, best gaming mouse/keyboard or headsets, sexy cases, in-depth OC/undervolting guides for different mobo vendors. Not putting a 28 core Xeon under a chilled waterblock and calling yourself a gaming channel; the Canadian Clown does that kind of stuff better.
|
# ? Jan 31, 2019 11:36 |
|
If you want some solid content with unbelievable effort put into benchmarking, Hardware Unboxed with Aussie Steve is always good.
|
# ? Jan 31, 2019 12:22 |
|
sauer kraut posted:Stay on top of recent GPU driver releases and their issues, best gaming mouse/keyboard or headsets, sexy cases, in depth OC/undervolting guides for different mobo vendors.
|
# ? Jan 31, 2019 12:29 |
|
HalloKitty posted:If you want some solid content with unbelievable effort put into benchmarking, Hardware Unboxed with Aussie Steve is always good. Hardware Unboxed has a criminally low number of subscribers for its quality, it's crazy. At one point the dude did ~300 benchmarks for a single video edit: 342 benchmark runs for this particular video - he was banned by EA at one point because they detected so many hardware changes Zedsdeadbaby fucked around with this message at 13:18 on Jan 31, 2019 |
# ? Jan 31, 2019 13:15 |
|
Zedsdeadbaby posted:he was banned by EA at one point because they detected so many hardware changes Actually Hardcore Benchmarking
|
# ? Jan 31, 2019 16:32 |
|
sauer kraut posted:Stay on top of recent GPU driver releases and their issues, best gaming mouse/keyboard or headsets, sexy cases, in depth OC/undervolting guides for different mobo vendors. I would much rather watch the Xeon thing than driver errata showcase or BIOS simulators, but to each their own I guess.
|
# ? Jan 31, 2019 17:52 |
|
HalloKitty posted:If you want some solid content with unbelievable effort put into benchmarking, Hardware Unboxed with Aussie Steve is always good. Yeah on the subject of benchmarking effort, this vid they did a few weeks back is very useful. https://www.youtube.com/watch?v=nE_zW5SKPic&t=5s
|
# ? Jan 31, 2019 19:48 |
|
sauer kraut posted:GN has gotten off track lately with crap like ^^that, LN cooling and collaborating with Roman. Hello. Did you know you don't have to watch every video on a channel? It's possible to only click on those ones you're interested in. Well, goodbye.
|
# ? Jan 31, 2019 22:34 |
|
Itanium has officially been dead for a while now, but Intel has released the phase out plan. No further development, and the last parts will ship to HPE no later than summer 2021. https://www.anandtech.com/show/13924/intel-to-discontinue-itanium-9700-kittson-processor-the-last-itaniums It is kinda sad seeing non-embedded VLIW die like this. I remember Itanium being announced with magazine articles talking about how VLIW would avenge the i860 and iAPX 432 and revolutionize computing forever. Not so much.
|
# ? Feb 1, 2019 01:43 |
|
I'm not sad to see it go. That Itanium hype was always based on Intel management and marketing huffing the farts of the Itanium team and refusing to listen to the internal warnings from other departments that Itanium was likely to fall well short of promises. (Pages 85-92 are what you want to read.)
|
# ? Feb 1, 2019 06:29 |
|
quote:Anyway this chip architect
|
# ? Feb 1, 2019 18:53 |
|
It's less about caring for Itanium itself or Intel or its team or whatever, and more about caring for VLIW as a concept, to me. The concept seemed so promising, with EPIC and TeraScale and all that, and now it's all pretty much gone except in embedded coprocs and stuff. Sad.
|
# ? Feb 1, 2019 19:42 |
|
I was so hyped on Itanium back in the day. Morbidly curious what the workloads folks buying the Itanium boxes from HPE are doing.
|
# ? Feb 1, 2019 23:00 |
|
movax posted:
Some big iron financial software system/database thing that's still too expensive to backport to x86 after the millions spent getting it working in the shiny new Itanium environment.
|
# ? Feb 1, 2019 23:57 |
|
So for those that really didn’t ever follow it, what was the big hype from Itanium to begin with and how did it fall flat on its face?
|
# ? Feb 2, 2019 00:43 |
|
It was something like: with a very long instruction word, the compiler could do magical things, and it turned out compilers were pretty dumb. The performance gains were never realized.
|
# ? Feb 2, 2019 00:49 |
|
KKKLIP ART posted:So for those that really didn’t ever follow it, what was the big hype from Itanium to begin with and how did it fall flat on its face? It was supposed to have advanced features to make the pipeline more efficient and cut legacy garbage, but it turned out to actually not be more efficient for realistic workloads so why bother using it outside of special purpose applications.
|
# ? Feb 2, 2019 00:52 |
|
Most CPUs used nowadays extract parallelism from code using hardware at run time. Itanium used what is called EPIC (explicitly parallel instruction computing), where instead of extracting parallelism from serial code, the code was explicitly parallel, in that one instruction word would have multiple instructions encoded in it. This put the onus on the compiler to extract parallelism, and it turns out that's really loving hard when running general purpose code. The concept works better with specialized things like graphics, which are much easier to parallelize.
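To make the "one instruction word, multiple instructions" idea concrete, here's a toy sketch of compile-time bundling. This is a hypothetical model for illustration, not real IA-64 encoding — the instruction format, the greedy scheduler, and the 3-wide bundle are all simplifications (IA-64 bundles did carry three slots, but with template and stop-bit rules this ignores):

```python
BUNDLE_WIDTH = 3  # IA-64 bundles carried three instruction slots

def bundle(instructions):
    """Greedily pack instructions into bundles of independent ops.

    Each instruction is (dest, src1, src2). An op may not share a bundle
    with an earlier op in that bundle that writes one of its sources
    (a read-after-write dependency), so dependent ops force a new bundle.
    """
    bundles = []
    current, written = [], set()
    for dest, *srcs in instructions:
        depends = any(s in written for s in srcs)
        if current and (depends or len(current) == BUNDLE_WIDTH):
            bundles.append(current)
            current, written = [], set()
        current.append((dest, *srcs))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

# Three independent ops pack into a single wide word...
parallel = [("a", "x", "y"), ("b", "z", "w"), ("c", "u", "v")]
# ...but a dependency chain (each op reads the previous result)
# forces one op per bundle, wasting two slots each time.
serial = [("a", "x", "y"), ("b", "a", "z"), ("c", "b", "w")]

print(len(bundle(parallel)))  # 1 bundle
print(len(bundle(serial)))    # 3 bundles
```

The whole bet was that the compiler could usually find enough independent ops to fill those slots; for general purpose code it usually couldn't.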
|
# ? Feb 2, 2019 00:53 |
|
MaxxBot posted:The concept works better with specialized things like graphics which are much easier to parallelize. Hasn't VLIW been abandoned in graphics too though? Nvidia dropped it ages ago, AMD dropped it with GCN, maybe some weird mobile GPUs still use it.
|
# ? Feb 2, 2019 01:50 |
|
MaxxBot posted:This put the onus on the compiler to extract parallelism and it turns out that's really loving hard when running general purpose code. Just to expand on this remarkably succinct explanation (nice job): general purpose code branches a lot. When it's the compiler's job to create big honking mega-instructions that explain how to use the processor's resources to their fullest potential, it needs to know what's being computed to do that. You can't cram more compute in when you don't know what else needs to happen. When you have a branch that can go two different ways, it basically has to throw up its hands and go "idk sry." It can't really predict how the code will run. (And god help you if there's more than two different ways it can go!) The CPU can do that dynamically just fine with speculation, though. (Like, even considering recent security problems.) So modern CPUs just do this scheduling of instructions onto ALUs dynamically while running the code, mostly unimpeded by branching. This whole failure is sometimes blamed on insufficiently smart compilers, and that's sort of true, but the truth is they designed an architecture that's bad at branching, wanted to run branch-heavy code on it, and said "this problem left to the compiler devs lol."
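The static-vs-dynamic scheduling gap can be sketched with a toy model. Assumptions up front: this is not real hardware behavior — "branch as a hard scheduling barrier" for the static case and "perfect branch prediction, free speculation" for the dynamic case are both deliberate exaggerations to show the shape of the problem:

```python
BUNDLE_WIDTH = 3  # ops the machine can issue together

def static_issue(ops):
    """Compile-time bundling: the compiler can't move ops across a
    branch whose direction it doesn't know, so a branch ends the
    bundle. Returns the number of issue slots (bundles) consumed."""
    bundles, cur = [], []
    for op in ops:
        cur.append(op)
        if op == "branch" or len(cur) == BUNDLE_WIDTH:
            bundles.append(cur)
            cur = []
    if cur:
        bundles.append(cur)
    return len(bundles)

def dynamic_issue(ops):
    """Run-time speculation: the CPU predicts each branch and keeps
    issuing past it, so only machine width limits packing (branch
    mispredictions are ignored in this toy model)."""
    useful = [op for op in ops if op != "branch"]
    return -(-len(useful) // BUNDLE_WIDTH)  # ceiling division

branchy = ["add", "branch"] * 6  # a branch after every single op

print(static_issue(branchy))   # 6: one useful op per bundle
print(dynamic_issue(branchy))  # 2: adds packed three-wide
```

Same op count, three times the issue slots for the static scheduler — which is roughly the "bad at branching" complaint above in miniature.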
|
# ? Feb 2, 2019 02:16 |