shrike82
Jun 11, 2005

I'm hoping the memory shortages continue. Made a 50 grand bet on Micron at the start of the year and it's doubled since then. Pretty crazy

shrike82
Jun 11, 2005

Linus laying a burn on Intel

https://lkml.org/lkml/2018/1/3/797

quote:

Why is this all done without any configuration options?

A *competent* CPU engineer would fix this by making sure speculation
doesn't happen across protection domains. Maybe even a L1 I$ that is
keyed by CPL.

I think somebody inside of Intel needs to really take a long hard look
at their CPU's, and actually admit that they have issues instead of
writing PR blurbs that say that everything works as designed.

.. and that really means that all these mitigation patches should be
written with "not all CPU's are crap" in mind.

Or is Intel basically saying "we are committed to selling you poo poo
forever and ever, and never fixing anything"?

Because if that's the case, maybe we should start looking towards the
ARM64 people more.

Please talk to management. Because I really see exactly two possibilities:

- Intel never intends to fix anything

OR

- these workarounds should have a way to disable them.

Which of the two is it?

Linus

shrike82
Jun 11, 2005

I still have a lot of trouble with audio on Ubuntu, though these days it's mostly with my Bluetooth headset.

shrike82
Jun 11, 2005

There's an entire market, from prosumers to small tech outfits, that buys parts from places like Amazon and would absolutely see material performance hits on applications ranging from GCC compilation to video encoding to even stuff like 7zip compression.

A benchmark article from last year specifically on Intel SMT
https://www.phoronix.com/scan.php?page=article&item=intel-ht-2018&num=1

quote:

Long story short, Hyper Threading is still very much relevant in 2018 with current-generation Intel CPUs. In the threaded workloads that could scale past a few threads, HT/SMT on this Core i7 8700K processor yielded about a 30% performance improvement in many of these real-world test cases.

shrike82
Jun 11, 2005

It might be the case that the majority of users aren't realistically under threat, but there's still uncertainty over whether Intel or OS providers (Apple, Google, Microsoft) will force you to take a performance hit to cover their bases. ChromeOS disabling HT by default is a minor example, but is Intel going to keep separate branches of its software/firmware updates going forward for people who don't want to take any performance hit? They've already specifically stated that their software mitigation causes up to an 8% performance hit.

shrike82
Jun 11, 2005


lol...but hey, users can disable these mitigations if they want to so no problemo
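
for what it's worth, on Linux the off switch really is one boot parameter. a minimal sketch, assuming a GRUB-based distro and a kernel new enough (5.2+, plus stable backports) to have the umbrella flag:

code:

# /etc/default/grub -- turn off all CPU vulnerability mitigations
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# apply it and reboot, then check what the kernel reports:
#   sudo update-grub && sudo reboot
#   grep . /sys/devices/system/cpu/vulnerabilities/*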

shrike82
Jun 11, 2005

https://twitter.com/atomicthumbs/status/1203771427679166464?s=20

shrike82
Jun 11, 2005

What's up with Intel? Is it a management problem, or did they make a wrong bet on tech some time back and it's taking time to turn the ship around?

shrike82
Jun 11, 2005

Err... Intel has pretty good support for ML inferencing (AVX2 and their OpenVINO platform).
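
A minimal sketch of what CPU inferencing through OpenVINO looks like, assuming the IECore Python API from the ~2020 releases; the model files and input shape here are hypothetical stand-ins:

code:

# OpenVINO CPU inference via the IECore Python API (~2020 releases);
# model.xml/model.bin and the input shape are hypothetical stand-ins
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # NCHW image tensor
result = exec_net.infer(inputs={input_name: dummy})   # dict of output blobs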

shrike82
Jun 11, 2005

RE ML and CPUs, inferencing is moving to INT8 quantized operations.

GPUs will always be better for training, but in production, when you're serving inference queries to end users, people shift to CPUs since they're cheaper and scale better.
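
To make the INT8 point concrete, here's a minimal sketch using PyTorch's post-training dynamic quantization; the toy model is purely illustrative, not anyone's production setup:

code:

# Post-training dynamic quantization in PyTorch: Linear weights are
# stored as INT8 and activations are quantized on the fly at runtime
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    y = quantized(x)  # same interface, cheaper CPU inference
print(y.shape)        # torch.Size([1, 10])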

shrike82
Jun 11, 2005

Yah, obviously it'll vary by use-case. I do NLP stuff - we do training on a mix of local GPUs and cloud TPUs, but we've migrated to CPU-based inferencing, mostly for scale + cost.

shrike82
Jun 11, 2005

I've spent a fair bit of time recently transferring GPU-trained models to either CPUs or embedded platforms with cut-down tensor cores for inferencing. They're tenable for a lot of use-cases. The thrust of research and work in this area is compressing or distilling models efficiently without losing much, if any, performance. Common techniques include reducing precision (FP32->FP16->INT8), pruning (throwing out swathes of weights), or distillation (using the trained model as a teacher to transfer its knowledge to a smaller model with an efficient architecture).

Research has shown that a good way to build ML models is to train large and deep models then compress them down to fit your inferencing platform.
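
For the distillation piece specifically, the core of it is just a loss term pulling the student's softened output distribution towards the teacher's. A minimal sketch - the temperature and mixing weight are illustrative defaults, not tuned values:

code:

# Knowledge distillation loss: KL between temperature-softened teacher
# and student distributions, mixed with the usual hard-label CE loss
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term matches the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard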

shrike82
Jun 11, 2005

Intel's a large, labyrinthine bureaucracy at this point, with 100K employees. It's pretty difficult to transform an organization like that, and mass firings or replacing the top layer with outsiders is unlikely to be the answer. I suspect you could parachute in Jensen Huang and he'd make a mess of things as an outsider without history in the company or a bench of experienced mid- and senior-level staff he could lean on.

shrike82
Jun 11, 2005

The market is dumb. They're pricing AMD and Nvidia at tech bubble levels so let's see where we end up.

shrike82
Jun 11, 2005

it's possible to believe that AMD and Nvidia have good products and also that their stocks are priced at dumb levels. take a look at their p/e ratios.

i guess i'm blessed to be an overpaid computer toucher but the US stock market specifically feels like late stage capitalism given everything else going on in the country.

shrike82
Jun 11, 2005

i'm not going to complain about my stock portfolio but number continually going up isn't going to do much for me if my neighborhood gets burnt down in the eviction riots of 2021.

shrike82
Jun 11, 2005

i went from an MBA to a Razer Blade to a Razer Stealth back to an MBP.
i don't get people who say Windows laptops are better than Macs even at the same price point

shrike82
Jun 11, 2005

I had the pre-2020 Stealth 13.

I was chasing the dream of a mobile gaming laptop for a while, but they just suck due to the physics of thermal dissipation. I don't get how people accept the fan noise a gaming laptop puts out. Even the ROG G14, the current sweetheart, has the same issue.

Went back to MBP + a Switch.

shrike82
Jun 11, 2005

Have there been any good business write-ups about Intel falling apart? Something more focused on the whys rather than the technical details.

shrike82
Jun 11, 2005

People aren’t willing to pay for long-form text tech reviews so...

I’m just happy there is some (any) decent tech coverage on video (DigitalFoundry etc)

shrike82
Jun 11, 2005

Nice meltdown as they say

shrike82
Jun 11, 2005

Murthy’s Law

shrike82
Jun 11, 2005

https://twitter.com/harukaze5719/status/1319643238513270785?s=20
lol, did some bad news just come out for Intel?

shrike82
Jun 11, 2005

apple approaching their PCs from a mobile-first perspective kinda limits where they'll go. they don't seem interested in cloud compute, and i'm skeptical they're interested in making a serious play for the HEDT/local server space. they can make a play for any consumer electronics/computing space they care about, so it's more a matter of management focus and entering a market that is either strategically important or actually pushes their financials.

the m1's strength is its battery life - otoh, it's a pity all that processing power is limited to their software ecosystem. outside of a limited set of creative software, you can't really apply the power to stuff like gaming.

shrike82
Jun 11, 2005

Laslow posted:

Yep. Cloud is a race to the bottom and they don’t play that game.

lol, cloud is a major profit center for big tech companies, extending even to hardware makers that cater to cloud providers

shrike82
Jun 11, 2005

Ampere's a sideshow - Intel/AMD/Nvidia should be more concerned about the major cloud players rolling their own silicon, e.g. Amazon/Graviton, Google/TPUs. Intel and Nvidia making juicy margins on cloud hardware was never a long-term thing. at some point, the cloud service providers are going to pull an Apple.

shrike82
Jun 11, 2005

movax posted:

Google doesn't have any adults in the room to capitalize on their resources / advantages / custom silicon, I feel.

lol, do you have a macro replacing "cloud" with "my butt"?

shrike82
Jun 11, 2005

i was thinking more in terms of AI. they've got a research complex pushing not just papers but hardware/architectural improvements. i get the impression their researchers get ready access to TPU pods (1024 TPUv3s), which has been an advantage in churning out benchmark-beating models.

a hardware improvement i've personally run into is google/TPUs switching to their own half-precision floating-point type (bfloat16). a lot of their research models are trained in it these days, and you often can't run them on nvidia hardware without switching to fp32. nvidia's only introducing it with Ampere cards, and as far as i can tell, the major ML frameworks don't support it on their cards yet.
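
to make that concrete - bfloat16 keeps fp32's 8 exponent bits but cuts the mantissa to 7 bits, so widening to fp32 is always lossless while casting down to fp16 can overflow. a minimal sketch in PyTorch (the tensor is a stand-in for bf16-trained weights):

code:

# bf16 -> fp32 is a lossless widening (same exponent range), which is
# the usual fallback when the hardware has no native bfloat16 support
import torch

w = torch.randn(4, 4).to(torch.bfloat16)  # stand-in for bf16-trained weights
w32 = w.to(torch.float32)                 # safe on any hardware
print(w.dtype, w32.dtype)                 # torch.bfloat16 torch.float32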

shrike82
Jun 11, 2005

Intel's been a backwater for engineering talent for a while now. Even a decade ago when I was in school, I remember the semicon companies pulling in the middling graduates. The comp gap's only ballooned since then.

shrike82
Jun 11, 2005

zero use for AVX-512 in gaming given GPUs, even for inferencing

shrike82
Jun 11, 2005

i'm referring specifically to AVX-512. also, does it still throttle performance on mixed workloads?

shrike82
Jun 11, 2005

Paul MaudDib posted:

not on Ice Lake at least.

Ice Lake loses 100 MHz max boost when running 512-bit operations on a single core; in other circumstances there is no change in boost.

https://travisdowns.github.io/blog/2020/08/19/icl-avx512-freq.html


quote:

Licence-based downclocking is only one source of downclocking. It is also possible to hit power, thermal or current limits. Some configurations may only be able to run wide SIMD instructions on all cores for a short period of time before exceeding running power limits.

:shrug:

shrike82
Jun 11, 2005

lol I didn’t realize Jim Keller rejoined Intel and then left again, purportedly over an internal argument about outsourcing.

shrike82
Jun 11, 2005

FuturePastNow posted:

We don't know why he left or what he was working on. It is a mystery!

https://www.reuters.com/article/us-intel-thirdpoint-exclusive-idUSKBN2931PS

quote:

Exclusive: Hedge fund Third Point urges Intel to explore deal options

In June, it lost one of its veteran chip designers, Jim Keller, over a dispute on whether the company should outsource more of its production, sources said at the time.

who knows but it wouldn't surprise me

shrike82
Jun 11, 2005

it's always fascinating to watch the implosion of a successful organization as the bean counters and office politickers take over.
you have to give microsoft credit for being able to renew itself, especially when you look at the other big tech cos from its era

shrike82
Jun 11, 2005

obligatory

shrike82
Jun 11, 2005

there are 2-slot blower Ampere cards :shrug:
a lot of people use risers w/ cables to mount bigger cards anyway

shrike82
Jun 11, 2005

I have to wonder if there's any market for NUCs. My brother got a current-gen one for casual gaming on the TV but gave up on it after realizing how anaemic its GPU capabilities were. He idly considered hooking up an eGPU, but at that point you're better off getting a normal desktop.

shrike82
Jun 11, 2005

seems bad

quote:

Users looking at our gaming results will undoubtedly be disappointed. The improvements Intel has made to its processor seem to do very little in our gaming tests, and in a lot of cases, we see performance regressions rather than improvements. If Intel is promoting +19% IPC, then why is gaming so adversely affected? The answer from our side of the fence is that Rocket Lake has some regressions in core-to-core performance and its memory latency profile.
...
The danger is that during our testing, the power peaked at an eye-catching 292 W. This was during an all-core AVX-512 workload, automatically set at 4.6 GHz, and the CPU hit 104°C momentarily. There’s no indication that the frequency reduced when hitting this temperature, and our cooler is easily sufficient for the thermal load, which means that on some level we might be hitting the thermal density limits of wide mathematics processing on Intel’s 14nm. In order to keep temperatures down, new cooling methods have to be used.

shrike82
Jun 11, 2005

lol I’m curious what you guys are doing at home that needs that much bandwidth
