Dog Toothbrush
Oct 21, 2019

by Reene
Keep in mind that these mitigations are hitting consumer/home users today. I believe Google has disabled HT by default across all HT-enabled Chromebooks, despite the Chrome timing resolution fix described above. That seems like an extremely drastic move and makes me wonder how Apple could avoid doing the same thing in a far less restrictive environment.


BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
Apple has documented a way to turn HT off in macOS:

https://support.apple.com/en-us/HT210108

They think their current mitigations are good enough to cover known exploits, but are acknowledging that there may be 0-day exploits out there and want you to have the option to disable HT if you feel you need protection from them.
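
If you want to sanity-check whether HT actually got switched off after following that article, here's a quick and dirty way to eyeball it (just a sketch, it assumes you have the third-party psutil package installed, and it's not anything from the Apple doc):

code:
# Compare logical vs. physical core counts; if they match, SMT/HT is off (or
# the chip never had it). Requires the third-party psutil package.
import psutil

logical = psutil.cpu_count(logical=True)    # hardware threads the OS sees
physical = psutil.cpu_count(logical=False)  # physical cores

if logical and physical:
    state = "appears enabled" if logical > physical else "appears disabled"
    print(f"{physical} cores, {logical} threads -> SMT/HT {state}")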

It probably is a bit harder for Apple to flip that switch for everyone because so many Mac users run media creation apps on Macs, and those tend to benefit from HT a lot. That's probably not a common use case for a Chromebook.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Atom based Chromebooks getting the last laugh.

EdEddnEddy
Apr 5, 2012



Twerk from Home posted:

Atom based Chromebooks getting the last laugh.

Or Mediatek, or AMD, lol.

Are there any Qualcomm powered Chromebooks?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

EdEddnEddy posted:

Or Mediatek, or AMD, lol.

Are there any Qualcomm powered Chromebooks?

There were one or two, IIRC, and they were...bad.

Google is supposedly working with Qualcomm on some updated chips that are better optimized, but last I'd heard they were slated for 2H2019, and yet here we are without them, so who knows.

Honestly I think nuking HT off Chromebooks is pretty silly, both because of the aforementioned mitigations within Chrome itself, and because the likelihood of a successful attack against a low-powered, inconsistently connected system is pretty low. Spectre variants also impact AMD, so if it's based on that, we'd expect to see some impacts to AMD chips coming soon, too.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Twerk from Home posted:

Atom based Chromebooks getting the last laugh.

A little miffed about my HP Chromebook 14 x360. It uses an i3-8130U. =/

eames
May 9, 2009

BobHoward posted:

They think their current mitigations are good enough to cover known exploits, but are acknowledging that there may be 0-day exploits out there and want you to have the option to disable HT if you feel you need protection from them.

IANAL but I suspect not making this the default is more about avoiding class action lawsuits from the potential performance loss than anything else. They’re stuck between a rock and a hard place here, it’s not like they can keep this quiet and replace the affected devices hoping nobody will notice (see Atom LPC bugs).

hambeet
Sep 13, 2002

I'm only just learning about this. What are the potential attack vectors for the HT exploits in a home use case?

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Spectre, at least (I didn't hunt too much, I just remembered a JS POC for something related), can be exploited with JavaScript. A home user might only need to visit a bad website to have their data stolen.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Most of the Spectre variants that could be applied through JS have been effectively neutered by the timing degradation applied by Chrome and Firefox (and probably Edge? Does anyone even use Edge?).
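
Rough toy sketch of why the timer coarsening works (Python, purely illustrative, nothing to do with real browser internals):

code:
# A cache hit vs. miss differs by maybe 100-300 ns. With a fine-grained clock
# an attacker can tell them apart; quantize the clock (roughly what the
# browsers did to performance.now(), plus added jitter) and the two
# measurements collapse into the same bucket.
import time

def coarse_now(granularity_s=1e-4):
    """Pretend timer rounded to 0.1 ms, standing in for a degraded performance.now()."""
    return round(time.perf_counter() / granularity_s) * granularity_s

def measure(op, clock):
    start = clock()
    op()
    return clock() - start

fast = lambda: sum(range(10))     # stand-in for a "cache hit"-speed operation
slow = lambda: sum(range(1_000))  # stand-in for a "cache miss"-speed operation

print("fine timer:  ", measure(fast, time.perf_counter), measure(slow, time.perf_counter))
print("coarse timer:", measure(fast, coarse_now), measure(slow, coarse_now))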

Meltdown was the Intel-specific one, and that's gotten a few rounds of patches to try to address already, and is what Intel is trying to mitigate with hardware changes going forward (we'll see).

However, there are still other ways to exploit Spectre via malware running locally on a system, and the kick in the balls for that bit is that it's gonna be real hard to write a signature for it, because it's not actively doing anything that's directly, obviously malicious. So it would be conceivable that you could get infected via some fairly pedestrian drive-by download from a hostile site and have it go right past whatever AV solution you've got, because it's "not dangerous" as far as anything can tell.

The double balls kicker for that is that there's mounting evidence that there is simply no software solution to fix that, and it's going to have to be addressed in hardware by both AMD and Intel. We'll see if someone can come up with some clever software/firmware solution, but right now it's not looking great.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

DrDork posted:

Most of the Spectre variants that could be applied through JS have been effectively neutered by the timing degradation applied by Chrome and Firefox (and probably Edge? Does anyone even use Edge?).

Meltdown was the Intel-specific one, and that's gotten a few rounds of patches to try to address already, and is what Intel is trying to mitigate with hardware changes going forward (we'll see).

However, there are still other ways to exploit Spectre via malware running locally on a system, and the kick in the balls for that bit is that it's gonna be real hard to write a signature for it, because it's not actively doing anything that's directly, obviously malicious. So it would be conceivable that you could get infected via some fairly pedestrian drive-by download from a hostile site and have it go right past whatever AV solution you've got, because it's "not dangerous" as far as anything can tell.

The double balls kicker for that is that there's mounting evidence that there is simply no software solution to fix that, and it's going to have to be addressed in hardware by both AMD and Intel. We'll see if someone can come up with some clever software/firmware solution, but right now it's not looking great.

There is a software solution, but it's ugly. You basically have to manage what cores important threads execute on and keep everything else the hell away from them. Important thread executing? Get everything else off that core and force the cache to evacuate once it's done. On bare metal this is probably doable now with a 4+ core system, but it will hurt. Try to do this on a virtual hypervisor with a heavy VM load? lol no. At that point, if you're trying to protect important "enterprise things", you're far better off either dividing your VMs into dedicated clusters based on risk of arbitrary code execution, locking down code execution so nothing untrusted can run (this includes stopping things like browsers/python/java/etc that have some intermediate code interpretation layer), or just saying gently caress it, turning off HT, and taking the performance hit.
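
The core-pinning half of that looks something like this on Linux (minimal sketch, the core numbers are made up, and forcing the cache to actually evacuate isn't something you can do from userspace Python, which is exactly the part that hurts):

code:
# Reserve a couple of cores for sensitive work and pin the process there.
# In practice you'd also have to keep *everything else* off those cores
# (isolcpus/cgroups) and account for the HT sibling, which is where it gets ugly.
import os

SENSITIVE_CORES = {0, 1}  # hypothetical: one physical core plus its HT sibling

def run_sensitive(work):
    previous = os.sched_getaffinity(0)        # remember the current CPU mask
    os.sched_setaffinity(0, SENSITIVE_CORES)  # pin this process to the reserved cores
    try:
        return work()
    finally:
        os.sched_setaffinity(0, previous)     # restore the original mask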

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

DrDork posted:

and probably Edge? Does anyone even use Edge?
Chromium Edge is pretty nice. It also has an actual visual design.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

DrDork posted:

However, there are still other ways to exploit Spectre via malware running locally on a system, and the kick in the balls for that bit is that it's gonna be real hard to write a signature for it, because it's not actively doing anything that's directly, obviously malicious. So it would be concievable that you could get infected via some fairly pedestrian drive-by-download from a hostile site and have it go right past whatever AV solution you've got, because it's "not dangerous" as far as anythig can tell.

So I don’t disagree that you can’t monitor for Spectre attacks that way, but you need to generalize that. If you count on malicious activity detection to work at all, you’re a consumer of security theatre. That type of AV has never been very effective, since it’s essentially impossible to reliably differentiate between normal and harmful software activity.

Worse than that, it’s often actively harmful. For one thing, it tends to be a huge performance drain. More importantly, software running with enough privilege to monitor everything becomes an additional attack surface, and third party AV vendors have repeatedly proven themselves incapable of writing secure code.

Mercrom
Jul 17, 2009
How hot is an i5-8400 supposed to be able to get? I installed an old Hyper 212 Evo on it and it reaches 90°C using Linpack. Did I gently caress up?

eames
May 9, 2009

90°C is high but Linpack is about the most stressful workload you can run on a CPU.

Under normal circumstances you shouldn't get close to those temperatures, even with workflows like video encoding or 3D rendering. If you need to run Linpack on a daily basis then consider investing in a better cooler, though your temperature is technically still within spec.

Mercrom
Jul 17, 2009
I thought the 212 was so oversized for a stock 8400 that it wouldn't reach high temperatures. I'm trying to use Asus Q-fan control in BIOS to make it as silent as possible but I don't know how steep I should make the curve.

CFox
Nov 9, 2005
You're probably more limited by the ability to transfer heat to the cooler than by the cooler's ability to disperse the heat.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Aren't the 212s direct heatpipe contact? I would be a little wary of using a direct-contact, few-heatpipe cooler from the Sandy Bridge era on these denser and smaller parts. (I -think- that was Sandy Bridge, wasn't it? I may have had one of those on my Core 2 Quad toward the end of its life, actually...)

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Paul MaudDib posted:

Does B-die really hold any advantage over that new stuff (Micron E-die I believe)? The new stuff is a lot cheaper and seems to support some fast frequencies.

Okay, so I dropped out of following PC hardware for a while and I've been confused: what exactly is "b-die" and "e-die" and when did this concept emerge (or at least become relevant/important)?

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

Different ram chips/architecture. Gen 1 Ryzen had trouble with ram compatibility / overclocking and had the best results with ram sticks made with b-die chips. Turns out b-die and now e-die overclock better than other types.

pofcorn
May 30, 2011

Agreed posted:

Aren't the 212s direct heatpipe contact? I would be a little wary of using a direct-contact, few-heatpipe cooler from the Sandy Bridge era on these denser and smaller parts. (I -think- that was Sandy Bridge, wasn't it? I may have had one of those on my Core 2 Quad toward the end of its life, actually...)

Interesting, that must be why I've not been impressed with a 212 on my 2600x. So, something like a Scythe Mugen 5 would be better?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

ItBreathes posted:

Different ram chips/architecture. Gen 1 Ryzen had trouble with ram compatibility / overclocking and had the best results with ram sticks made with b-die chips. Turns out b-die and now e-die overclock better than other types.

basically this but Intel still sees big gains from faster RAM in certain game types (open-world FPSs and RPGs particularly) so it's still worth doing 3600-4000 on a 9900K/KS build too.

BONESTORM
Jan 27, 2009

Buy me Bonestorm or go to Hell!

Paul MaudDib posted:

basically this but Intel still sees big gains from faster RAM in certain game types (open-world FPSs and RPGs particularly) so it's still worth doing 3600-4000 on a 9900K/KS build too.

I bought 3733 for my 9900K build last year because I read an article discussing this. Noticeable improvements in minimum frame rates in those sorts of games. I updated my bios shortly after building, forgot to re-enable XMP and noticed right away something was off when attempting to play Far Cry 5. Worth the extra coin if you enjoy games in that niche (ARMA 3 used to be one of my most played games, very noticeable improvements there as well).

Cygni
Nov 12, 2005

raring to post

There's been a bunch more Comet Lake leaks with both the 10/20 and 6/12 parts, and it seems like the launch is getting pretty close. "Q1 2020" is what the leaks all say, so we are probably looking at a Feb or March launch.

eames
May 9, 2009

Meanwhile Computerbase reports that Intel has reversed the G3420 EOL status and is manufacturing 22nm Haswell CPUs again to ease the low end 14nm shortage. I can’t find a direct source but the CPU shows up in Ark as “launched” and CB is usually quite reliable. Yikes.

Also yeah, fast and low latency RAM is underrated.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Sergeant Steiner posted:

I bought 3733 for my 9900K build last year because I read an article discussing this. Noticeable improvements in minimum frame rates in those sorts of games. I updated my bios shortly after building, forgot to re-enable XMP and noticed right away something was off when attempting to play Far Cry 5. Worth the extra coin if you enjoy games in that niche (ARMA 3 used to be one of my most played games, very noticeable improvements there as well).

yup even Intel scales pretty well in open-world games.



[benchmark charts: RAM scaling in open-world games, including Fallout 4 :wtc:]

(anecdotally I've heard FC5 and RDR2 have apparently joined this set as well, which makes sense, although I don't have data)

The price curve gets real steep after 3600 but for anyone doing even a midrange build I totally recommend spending the extra $30 or whatever and getting some decent 3600 at a minimum. I just shopped for a friend on Sunday and I think I settled on 2x8 GB 3600 C17 for $75.

I have a nice 2x8GB 4000 kit (C19 tho), but that was back when RAM was super expensive and I regret that I couldn't go 32GB at that time. I had 32GB on my last rig (5820K, 4x8 3000C15) and it was kinda nice. I don't need it but I've been thinking I may try and track down a 3733 C17 or 4000 C18 2x16GB kit.

Paul MaudDib fucked around with this message at 00:46 on Dec 7, 2019

EdEddnEddy
Apr 5, 2012



This is bandwidth-dependent more than latency/timings, correct?

Hence how my quad-channel DDR3-2133 can keep relative pace with, say, dual-channel DDR4-3200/3400, give or take.
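
The napkin math at least backs that up (nominal peak numbers only, Python just doing the arithmetic):

code:
# Peak theoretical bandwidth: channels x transfer rate (MT/s) x 8 bytes per
# 64-bit transfer. Nominal peaks, not what you'd actually measure.
def peak_gbs(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000  # GB/s

print(peak_gbs(4, 2133))  # quad-channel DDR3-2133 -> ~68.3 GB/s
print(peak_gbs(2, 3200))  # dual-channel DDR4-3200 -> ~51.2 GB/s
print(peak_gbs(2, 3400))  # dual-channel DDR4-3400 -> ~54.4 GB/s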

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

EdEddnEddy posted:

This is bandwidth-dependent more than latency/timings, correct?

Hence how my quad-channel DDR3-2133 can keep relative pace with, say, dual-channel DDR4-3200/3400, give or take.

In the article HUB notes that apparently playing with latency didn't affect much, although it would have been nice to see that verified with some data. Kind of an interesting/counter-intuitive claim there.

But yes. X99 is in a comparatively good spot for a lot of reasons. This primarily comes down to bandwidth. Those open-world games stream a lot of data especially as they page map cells in and out a lot. X99 is good at that because quad-channel. X99 also has a shitload of cache for its time. The way Intel does cache, L3 is shared between all the cores, so if you have 16 MB of cache and you're running a single hot thread then that thread gets all the cache. So under single-threaded circumstances you have 2-3x (depending on SKU) of the cache of a 4790K or a Zen1/Zen+ processor (8MB per CCX). X99 still hung in there pretty loving well, Coffee Lake beats it but it still sits pretty drat high in a lot of gaming benchmarks.

(you have to overclock it of course, the stock clocks were trash, but you can get low 4.X's without any problem at all)

Zen2 does this really well too. Moving to a full 16MB per CCX is fantastic, and there's a reason that architecture dumps on Intel in Source Engine titles now.

But yeah, between the cache, the quad-channel, and the 40 PCIe lanes, X99 still is a cool platform even if the core count isn't there compared to Threadripper or X299. It really is the swansong of the Big Monolithic Ringbus Core, as far as HEDT. And nowadays it's super loving cheap (overclockable 8C Xeons hitting $175, 6C around $100) and the PCIe lanes make it super capable as well. It kind of is in a unique placement for homelab stuff at least until AMD comes to their senses and releases a more reasonable Threadripper 3000 entry-level ecosystem. Which I think will happen eventually.

I think it will be a very popular retro build in 10-20 years or whatever. It's hard to overstate just how many Haswell-E and Broadwell-E server CPUs there are out there. And the Chinese crappo boards are bringing the cost of entry way down and doing some really interesting things.

X99 is actually interesting in that Intel didn't lock off RDIMM support on the chipset. Most companies didn't bother to support it but ASUS did, and the Chinese crappo boards mostly do as well. You can put 768 GB of cheap ECC RDIMMs on a $175 Xeon with competitive performance, on a $150 X99 board.

ASUS even apparently has LRDIMM support according to their QVL. I emailed them to confirm it and yup, put a Xeon on the board and it'll run LRDIMM too. You can cheaply load it up with more RAM than TR will do with 8 of the most expensive UDIMMs. YMMV on the Chinese boards, not sure.

X58, X79, and X99 were all baller platforms (thought of very fondly by their owners) and I think TR 3000 will be too, I just wish it wasn't so drat expensive to get started. X399 socket compatibility should have been a thing and there should have been a 16C for $700 or $800. It can be basically R3-tier silicon after all.

A 16C TR for $800 with $200-300 motherboards running PCIe 3.0 would be the obvious HEDT/homelab choice right now. Props to AMD, the IO die design works loving great, it is actually really impressive how little it apparently hurts performance. Hopefully there is a reasonable Supermicro or Asrock Rack TRX40 board at some point. Ugh, even for 60 lane PCIe 4.0 and IPMI I'd hate to go past $800 just for a motherboard (even with cheap chips)....

Paul MaudDib fucked around with this message at 02:45 on Dec 7, 2019

Stickman
Feb 1, 2004

Bandwidth isn't the whole story. In Techspot's 3rd Gen RAM scaling tests, their manually-timed TForce 3000/CL16 kit (adjusted to CL14) only got a small bandwidth boost, but still beat the pants off kits running at XMP all the way up to 3800/CL16.
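
For reference, the primary CAS latency in wall-clock terms (quick arithmetic only, ignoring everything but the headline CL):

code:
# First-word latency ~= CL cycles x clock period. DDR clocks at half the data
# rate, so period_ns = 2000 / data_rate. Primary CAS only, no subtimings.
def cas_ns(cl, data_rate_mts):
    return cl * 2000 / data_rate_mts

for cl, rate in [(16, 3000), (14, 3000), (16, 3800)]:
    print(f"DDR4-{rate} CL{cl}: {cas_ns(cl, rate):.2f} ns")
# ~10.7, ~9.3, ~8.4 ns -- the 3800/CL16 kit actually has the lowest primary CAS
# in nanoseconds, so the tuned 3000/CL14 kit winning anyway suggests subtimings
# (and other factors) matter more than the headline number.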

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Stickman posted:

Bandwidth isn't the whole story. In Techspot's 3rd Gen RAM scaling tests, their manually-timed TForce 3000/CL16 kit (adjusted to CL14) only got a small bandwidth boost, but still beat the pants off kits running at XMP all the way up to 3800/CL16.

Yeah, that's why I said that it was a counterintuitive finding. Conventional wisdom is that latency matters a lot. That's why I said I'd like to see data on that.

I think tight subtimings also affect things more than most people realize though. Even for Intel.

There hasn't really been a high-quality study on the impact of timings and frequency for a while.

Cygni
Nov 12, 2005

raring to post

I've come to the conclusion that memory poo poo is just fuckin magic that doesn't apply to me for some reason. Some people find massive differences when they crank the frequency or cut the latency, and half the time when I do it all my benchmark results go down. I don't know anymore.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Throwback question: What's the good stuff DDR3 for Z97 with i5-4690K (Devil's Canyon)?

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
You can get 16GB of DDR3 2133MHz for around $40

Stickman
Feb 1, 2004

Paul MaudDib posted:

Yeah, that's why I said that it was a counterintuitive finding. Conventional wisdom is that latency matters a lot. That's why I said I'd like to see data on that.

I think tight subtimings also affect things more than most people realize though. Even for Intel.

There hasn't really been a high-quality study on the impact of timings and frequency for a while.

That was meant more in support/addition to your point - I would love to see some high-quality tests too. It seems like such an obvious project for an up-and-coming hardware nerd or review channel summer intern, and wouldn't even require all that much in the way of hardware. No one seems willing to dedicate the time :/

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

taqueso posted:

Throwback question: What's the good stuff DDR3 for Z97 with i5-4690K (Devil's Canyon)?

ddr3 2133 (or maybe 2400) for sure

eames
May 9, 2009

Intel has a decent command line tool to check memory throughput and latency:

https://software.intel.com/en-us/articles/intelr-memory-latency-checker

The most interesting modes are mlc.exe --idle_latency and just mlc.exe without arguments for the full run. Has to be run as admin to work correctly, check for viruses, use at your own risk, etc.
My CFL system went from ~45ns to 34.5ns (idle) by just playing with sub- and tertiary timings, round trip latencies, etc.
It was a waste of time and I wouldn't do it again (memory wasn't on the QVL and the XMP profile is unstable) but at least it's _perfectly_ stable now. Somehow the CPU does get ~10°C hotter in various stability tests than with auto timings, at the same voltages. I assume it's because faster RAM lifts a bottleneck.
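
If anyone wants a really crude feel for what mlc's idle latency number is measuring, a pointer chase does the same basic thing (rough Python sketch; interpreter overhead dominates, so treat the absolute numbers as junk and use mlc or a proper C benchmark for anything real):

code:
# Build one big random cycle and chase it: every load depends on the previous
# one and the working set is far larger than L3, so each hop is roughly a trip
# out to DRAM. Purely to show the idea, not a real benchmark.
import random
import time

N = 1 << 21                  # ~2M entries, well past any L3 cache
order = list(range(N))
random.shuffle(order)
nxt = [0] * N
for i in range(N):
    nxt[order[i]] = order[(i + 1) % N]   # single cycle visiting every element
del order

idx, hops = 0, 1_000_000
start = time.perf_counter()
for _ in range(hops):
    idx = nxt[idx]           # dependent load: can't be overlapped or prefetched away
elapsed = time.perf_counter() - start
print(f"~{elapsed / hops * 1e9:.0f} ns per dependent access (interpreter overhead included)")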

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:55 on Mar 23, 2021

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

I mean, they're not wrong to figure that a lot of people (and offices) don't need anything faster than that to run Word, so if they can produce them at hilariously cheap prices, there's almost certainly a market to fill.

But, yeah, this is a strange timeline we're in.

BlankSystemDaemon
Mar 13, 2009



It's at times like these I wish we had a shared CPU thread, pun fully intended.


A Success on Arm for HPC: AnandTech Found a Fujitsu A64fx Wafer
and Arm Server CPUs: You Can Now Buy Ampere’s eMAG in a Workstation according to AnandTech.


Vanagoon
Jan 20, 2008


Best Dead Gay Forums
on the whole Internet!
I always wonder about those wafers that end up as showpieces or table ornaments: was the yield really 0% on that particular piece, or were they from a production test run that failed, or what?

Seems like a case of "here's a shitload of money we could have made selling these chips, but none of them work, so have a look at the pretty thing."
