BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Hello Spaceman posted:

i wonder if the 16gb ram limitation for the m1 macs is a repeat of the limitation where apple only offered macbooks with up to 16gb because the intel chipsets didn't support low power ddr ram?
in which case, what you are suggesting would fit with the design choice, since apple would be all about low-power components to bolster their battery life and efficiency goals

or it is just an architecture limitation, but hopefully a resident silicon wizard can provide some insight

Hi I am a silicon weenie, idk about wizard

It's very expensive to design, validate, and start manufacturing a new chip design. On the other hand, once all these non-recurring costs are paid, the incremental cost to make each chip is relatively low.

So, to save money on setup, often you want to design a multirole chip. You identify two or more products or markets that are close enough, and make the chip have all the features and performance it needs to serve any of them. You're wasting something by having so-called "dark silicon" in each product (stuff that's needed for other products, but is not active in this one), but if your finance and marketing guys successfully predict sales of all the variants, and you make good decisions about how much space to cover with each design (you don't want to overreach), it's a net win.
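The economics in the paragraph above amount to amortizing non-recurring engineering cost over volume. Here's a toy model with entirely made-up numbers (none of these figures are real Apple costs):

```python
# Toy model of chip cost amortization: one multirole design vs. two
# dedicated designs. All numbers are invented for illustration.

def cost_per_chip(nre, unit_cost, volume):
    """Total cost per chip once non-recurring engineering is amortized."""
    return nre / volume + unit_cost

# Two dedicated chips: each pays its own NRE over its own volume.
dedicated_a = cost_per_chip(nre=500e6, unit_cost=40, volume=10e6)
dedicated_b = cost_per_chip(nre=500e6, unit_cost=40, volume=5e6)

# One multirole chip: slightly higher unit cost (dark silicon means a
# bigger die), but a single NRE spread over the combined volume.
multirole = cost_per_chip(nre=600e6, unit_cost=45, volume=15e6)

print(f"dedicated A: ${dedicated_a:.2f}")  # 90.00
print(f"dedicated B: ${dedicated_b:.2f}")  # 140.00
print(f"multirole:   ${multirole:.2f}")    # 85.00
```

Under these (made-up) numbers the multirole chip wins despite the larger die, which is the whole point of the dark-silicon tradeoff.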

This is an example. By all appearances, the M1 will lead a double life as the A14X in an eventual iPad Pro refresh. In that application, it'll be programmed with lower thermal limits, an iPad probably won't have two USB4/Thunderbolt ports, you probably won't see 16GB RAM in an iPad, yada yada yada. (And on the Mac side, there's unused stuff too - the touchscreen controller, for example.)

That's where the M1's limitations come from. Its specs resulted from Apple asking: "Which Macs can we power with a slightly upfeatured iPad Pro chip?" The answer they came up with was all the low-end Macs: everything which already had a limit of 16GB RAM, or two USB/Thunderbolt ports, or an i3.

Note that they've kept around high end versions of the Intel mini and MBP 13". Only the MacBook Air got fully replaced with M1 models, because no Intel MBA had more than 16GB RAM or a giant SSD or two USB/TB ports.

To make bigger Macs, they're going to need bigger and more capable M series chips. These will not get to share a design with a high volume iOS device. Apple probably delayed them relative to M1 for two basic reasons: getting M1 out the door helps them ship the iPad Pro refresh too, and it's most important to transition the low end Macs first because they're the most popular models. Low end Macs also should get the most uplift from Apple Silicon, in relative terms.


Splinter
Jul 4, 2003
Cowabunga!
Is the reason the M1 Geekbench Multi-Core results aren't nearly as dominant as Single Core (compared to similar core count Intel chips) due to lack of Hyper Threading, or is there some other reason the M1 is much better at single core benches?

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
The M1 only has 4 fast cores; the other 4 are efficiency cores there to save power. In a multicore run the efficiency cores each contribute only a fraction of a fast core, which is why the multithread score is only about 4.4x the single thread score.
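That scaling can be sketched with a toy model. The 0.25 relative throughput for an efficiency core and the 0.9 scaling factor are guesses for illustration, not measured figures:

```python
# Rough model of why 4 fast + 4 efficiency cores land around ~4.4x
# single-core throughput rather than 8x.

fast_cores = 4
efficiency_cores = 4
efficiency_relative_perf = 0.25  # assumed fraction of a fast core
scaling_loss = 0.9               # assumed multi-core scaling efficiency

ideal = fast_cores + efficiency_cores * efficiency_relative_perf  # 5.0
realistic = ideal * scaling_loss                                  # 4.5

print(f"ideal speedup over one fast core: {ideal:.1f}x")
print(f"with scaling losses: {realistic:.1f}x")
```

With those assumptions you land in the same ballpark as the ~4.4x Geekbench ratio.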

BobHoward
Feb 13, 2012


Binary Badger posted:

I wonder if, with the new GPU, Rosetta 2 being supposedly fantabulous, and Apple somehow keeping its moribund OpenGL 4.1 and OpenCL 1.2 support, older games will run like lightning on the new M1 Macs?

Lots of them won't run at all, because 32-bit lol.

quote:

Betchoo Apple kept that old graphics framework around because all the people who do molecular imaging and biotech would scream bloody murder if their software, which also happens to lean heavily on those two older APIs, broke on the new hardware.

As far as I know, Apple's GL and CL were transformed into Metal API wrappers years ago. Keeping them around and porting to ARM is mostly free. There's no low-level driver code to worry about, everything is very stable, all that the GL and CL wrappers need is low-intensity maintenance. Fix bugs, tweak as needed to keep up with Metal API evolution.
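The wrapper idea can be sketched abstractly. This is a generic adapter pattern in Python, not Apple's actual GL-on-Metal layer; every name below is invented:

```python
# Sketch of "legacy API kept alive as a wrapper": old entry points
# translate onto a modern backend, so there's no low-level driver
# code of its own to maintain. Not Apple's real implementation.

class ModernBackend:
    def draw_triangles(self, vertices):
        return f"drew {len(vertices) // 3} triangles"

class LegacyAPI:
    """Old-style immediate-mode surface, forwarding to the new backend."""
    def __init__(self, backend):
        self.backend = backend
        self._pending = []

    def begin(self):
        self._pending = []

    def vertex(self, x, y, z):
        self._pending.append((x, y, z))

    def end(self):
        # The wrapper's only job: repackage legacy calls for the backend.
        return self.backend.draw_triangles(self._pending)

gl = LegacyAPI(ModernBackend())
gl.begin()
for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]:
    gl.vertex(*v)
print(gl.end())  # drew 1 triangles
```

Porting a layer like this to a new CPU architecture is mostly a recompile, which is the sense in which keeping GL/CL around is "mostly free."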

Fame Douglas posted:

Lacking backwards compatibility aside, considering all the games they showed in their keynote didn't look to be running all that well, I don't think they'll "run like lightning". We're still talking about integrated graphics, those are never all that great.

This is not your dad's integrated graphics

But seriously,

M1 GPU: 2.6 teraflops, 82 gigatexel fill rate
AMD Radeon Pro 5600M: 5.3 teraflops, 165 gigatexel fill rate

The 5600M is the best GPU Apple offers in the 16" MBP. Having about half its raw computational throughput in such a low power chip (that Navi part is about a 50W chip all on its own) is huge. And because the TBDR architecture of Apple's GPU is a lot more efficient at using its raw throughput, it might punch a bit above its weight when rendering graphics.

BobHoward
Feb 13, 2012


Fame Douglas posted:

I didn't take that as "they'll run faster under emulation". I took that as a "our new CPUs are so much faster than the old ones even emulated these applications will run faster on the new machines".

I'm not completely sure which claim you're talking about, but if it's the one about games, I think it amounted to games, specifically, running faster on the new machines even under emulation. Which is plausible, because old low end Macs all had such anemic Intel GPUs that most games were completely GPU-limited, not CPU-limited.

Perplx
Jun 26, 2004


It's because games written in Metal run natively as Metal on AS: the game logic has to be emulated, but the graphics primitives are the same and can just be passed straight to the GPU.
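One way to see why that matters: model a frame as limited by the slower of the CPU work and the GPU work, with only the CPU side paying a translation penalty. All the numbers, including the 1.3x penalty, are assumptions for illustration:

```python
# Toy frame-time model: CPU work pays a Rosetta-style translation
# penalty, GPU work (native Metal) does not. A frame takes as long
# as the slower of the two sides.

def frame_ms(cpu_ms, gpu_ms, cpu_penalty=1.3):
    return max(cpu_ms * cpu_penalty, gpu_ms)

# GPU-bound game: the emulation penalty is invisible.
print(frame_ms(cpu_ms=5, gpu_ms=20))   # 20.0, still GPU-bound
# CPU-bound game: the penalty shows up directly in frame time.
print(frame_ms(cpu_ms=18, gpu_ms=10))  # 23.4
```

That's why games on old Macs with anemic Intel GPUs, which were almost always GPU-limited, can plausibly run faster on M1 even under emulation.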

BobHoward
Feb 13, 2012


Combat Pretzel posted:

Yeah, right, 37% higher score over a 10700K at 3.8GHz.

Either the x86 guys hosed up royally in their designs, or this benchmark is wrong. Or the Intel iMac is thermally a clusterfuck.

None of the above. It's really that good.

Some of it is that the 10700K is Yet Another 14nm Skylake Rehash. Here's some Anandtech test data against Intel's latest 10nm Willow Cove core:

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive/4

That's a ~5W iPhone chip hanging right with Intel and AMD's best. Note that the Intel design is a 28W mobile chip and the AMD is a 105W desktop gaming chip.

The M1 has the same CPU core as the A14, with higher thermal limits.
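As a crude perf-per-watt illustration: if the single-thread scores land near each other (as in the Anandtech data linked above), the ratio is driven almost entirely by package power. The score of 1600 is a placeholder, and TDP is only a rough proxy for actual single-core power draw:

```python
# Crude perf-per-watt comparison using the package powers quoted in
# the post. The benchmark score is a hypothetical placeholder chosen
# only to make the ratios visible; TDP != single-core power.

score = 1600  # assume all three land near each other single-thread
chips = [("A14 (~5W phone chip)", 5),
         ("Intel Willow Cove (28W mobile)", 28),
         ("AMD desktop (105W)", 105)]

for name, watts in chips:
    print(f"{name}: {score / watts:.0f} points/W")
```

Even with generous error bars on every number, an order-of-magnitude perf-per-watt gap survives.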

Pulcinella
Feb 15, 2019

BobHoward posted:

Lots of them won't run at all, because 32-bit lol.


As far as I know, Apple's GL and CL were transformed into Metal API wrappers years ago. Keeping them around and porting to ARM is mostly free. There's no low-level driver code to worry about, everything is very stable, all that the GL and CL wrappers need is low-intensity maintenance. Fix bugs, tweak as needed to keep up with Metal API evolution.


This is not your dad's integrated graphics

But seriously,

M1 GPU: 2.6 teraflops, 82 gigatexel fill rate
AMD Radeon Pro 5600M: 5.3 teraflops, 165 gigatexel fill rate

The 5600M is the best GPU Apple offers in the 16" MBP. Having about half its raw computational throughput in such a low power chip (that Navi part is about a 50W chip all on its own) is huge. And because the TBDR architecture of Apple's GPU is a lot more efficient at using its raw throughput, it might punch a bit above its weight when rendering graphics.

Yeah this. In this forum post Octane talks about getting GPU-accelerated rendering set up for Metal (to support AMD and Apple Silicon GPUs). They were able to run their benchmark (OB is OctaneBench) on an A13, where it performs similarly to the integrated Intel GPUs. I would imagine the A14-based M1 (especially with cooling) can do much better.

quote:


I just want to know the OB score of all the AMD/Intel GPUs on 10.15 now

Just to be clear, these are still early numbers and have a +/- variance of 7% or so, but one thing that will likely hold is the relative performance in OB between these GPUs :

Radeon Pro Vega II Duo - 412 OB*
Radeon Pro Vega II - 206 OB
Radeon VII - 200 OB
5700 XT - 170 OB
Vega FE - ~161 OB
Vega 64 -~148 OB
Vega 56 - ~117 OB
Vega 48 - 108 OB
5600M - 108 OB
5500M - 77 OB
Vega 20 - ~50 OB
Intel iris 640 - ~17 OB
Intel iris 630 - ~15 OB
Apple A13 - ~15 OB
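Normalizing the quoted numbers to the A13 baseline makes the gaps easier to read. The scores are the ones quoted above, several of them approximate:

```python
# Relative OctaneBench standings from the quoted forum numbers,
# normalized to the Apple A13 (~15 OB). Several figures are
# approximate per the original post.

scores = {
    "Radeon Pro Vega II Duo": 412,
    "Radeon VII": 200,
    "5700 XT": 170,
    "5600M": 108,
    "5500M": 77,
    "Intel Iris 640": 17,
    "Apple A13": 15,
}

baseline = scores["Apple A13"]
for gpu, ob in scores.items():
    print(f"{gpu}: {ob / baseline:.1f}x A13")
```

So the 5600M sits around 7x an uncooled phone chip, which is the gap an actively cooled M1-class GPU would need to close.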

shrike82
Jun 11, 2005

People don’t (can’t) use Apple laptops for gaming so I dunno how meaningful the M1’s graphics performance is

Even with Apple Arcade, it’s been disappointing how little Apple cares about gaming given they could have a mobile platform that plays better than a Switch

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrike82 posted:

People don’t (can’t) use Apple laptops for gaming so I dunno how meaningful the M1’s graphics performance is

Wait for the Zoom effects marketplace!

a neurotic ai
Mar 22, 2012

BobHoward posted:

None of the above. It's really that good.

Some of it is that the 10700K is Yet Another 14nm Skylake Rehash. Here's some Anandtech test data against Intel's latest 10nm Willow Cove core:

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive/4

That's a ~5W iPhone chip hanging right with Intel and AMD's best. Note that the Intel design is a 28W mobile chip and the AMD is a 105W desktop gaming chip.

The M1 has the same CPU core as the A14, with higher thermal limits.

The thing I’m curious about is how an integrated package can stand up to some of the bigger, badder discrete packages. Unless they make those M1X dies absolutely massive and damage their yields, idk what sorcery they are going to perform to stand toe to toe with a chip that is 251mm^2 and dedicated just to graphics (the 5600M is based on the 5700 XT's Navi chip iirc).

gret
Dec 12, 2005

goggle-eyed freak


Thinking about replacing my 2015 5k iMac with a Mini but can't quite pull the trigger because the only external monitor I have is a 24" 1920x1200 Dell monitor. Otherwise the M1 benchmarks absolutely murder both the CPU and GPU on my 5 year old iMac.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

gret posted:

Thinking about replacing my 2015 5k iMac with a Mini but can't quite pull the trigger because the only external monitor I have is a 24" 1920x1200 Dell monitor. Otherwise the M1 benchmarks absolutely murder both the CPU and GPU on my 5 year old iMac.

get a 43” 4K tv for $250-$450 depending on which model you get/what you prioritize.

or a 4K desktop display I guess, but you won’t spend any less and where’s the fun in that

if you decide to dump it for an AS iMac down the road, the screen is a good size to put in a home office or bedroom, or possibly a kitchen depending on your living situation.

trilobite terror fucked around with this message at 23:16 on Nov 13, 2020

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

shrike82 posted:

People don’t (can’t) use Apple laptops for gaming so I dunno how meaningful the M1’s graphics performance is

Even with Apple Arcade, it’s been disappointing how little Apple cares about gaming given they could have a mobile platform that plays better than a Switch

They get a 30% cut from all the games on the app store, they're doing fine on mobile gaming.

ptier
Jul 2, 2007

Back off man, I'm a scientist.
Pillbug

gret posted:

Thinking about replacing my 2015 5k iMac with a Mini but can't quite pull the trigger because the only external monitor I have is a 24" 1920x1200 Dell monitor. Otherwise the M1 benchmarks absolutely murder both the CPU and GPU on my 5 year old iMac.

If you like your iMac just wait till the iMac gets refreshed next year.

Honj Steak
May 31, 2013

Hi there.
I'm currently playing around with editing some 4K 10-bit files from a GH5 on my iPhone 12 Pro Max in Lumafusion. Playback, scrubbing and export speeds are not much slower than my i9-9900K with Vega 56 graphics. It’s insane.

gret
Dec 12, 2005

goggle-eyed freak


ptier posted:

If you like your iMac just wait till the iMac gets refreshed next year.

Yeah, my iMac develops graphical glitches once in a while and then freezes up, so it's probably on its last legs. Hope new ARM iMacs are announced sooner rather than later.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

gret posted:

Thinking about replacing my 2015 5k iMac with a Mini but can't quite pull the trigger because the only external monitor I have is a 24" 1920x1200 Dell monitor. Otherwise the M1 benchmarks absolutely murder both the CPU and GPU on my 5 year old iMac.

I want to hear how this goes once people do it

Puppy Galaxy
Aug 1, 2004

somewhat related to my earlier post, looks like I may be taking the plunge and getting a new MacBook. Replacing a mid-2012 non retina MPB.

I mostly use my Mac for recording in Logic Pro X. Also do a little lightweight video editing. My current MBP is solid with Logic as long as I don't use too many midi tracks, and iMovie is fine except exporting anything over 720p takes forever. Also would like to be able to zoom or live stream without smoke pouring out of the vents

One thing I'm concerned about is ram in the new M1 Macs. I put 16GB in my machine back in March, am I going to feel the cut to 8GB? Seems like maybe I won't with the geekbench specs posted earlier?

shrike82
Jun 11, 2005

I'd wait for reviews especially for anyone using their laptop for more than web-browsing, light data entry stuff.

Ziploc
Sep 19, 2006
MX-5

Puppy Galaxy posted:

One thing I'm concerned about is ram in the new M1 Macs. I put 16GB in my machine back in March, am I going to feel the cut to 8GB? Seems like maybe I won't with the geekbench specs posted earlier?

I'm not sure how CPU GeekBench scores will tell you how much you'll hate having 8gb of ram.

That said, it is a tough call. But considering how long you've kept your 2012 machine, and that you've upgraded it to keep it relevant long-term, I'd get the 16GB. Even if 16GB doesn't sound like a lot nowadays.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrike82 posted:

I'd wait for reviews especially for anyone using their laptop for more than web-browsing, light data entry stuff.

It seems pretty clear that they’re going to kick rear end at small-to-medium-scale development given how hard they leaned on integer perf and what looks to be high memory bandwidth. I wouldn’t try to build Firefox on the M1BA, probably, but for most mobile and desktop app development I think it would be pretty fabulous. I suspect it sucks more for server development when one of your containers gets relegated to the LITTLE cores, but maybe not.

When pytorch and TF learn to talk to the Neural Cores or whatever we’ll usher in a new wave of “predicts better on my machine”.

shrike82
Jun 11, 2005

No one's going to be running inference on Apple laptops or desktops. If you're interested in custom silicon for ML inferencing, Amazon's shifting away from Nvidia cards to using their own ASICs for Alexa AI queries on the cloud.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrike82 posted:

No one's going to be running inference on Apple laptops or desktops. If you're interested in custom silicon for ML inferencing, Amazon's shifting away from Nvidia cards to using their own ASICs for Alexa AI queries on the cloud.

I don’t know about that. I could see lots of apps doing light inference over a given personal corpus like photos or emails or browser history or hand-written annotations or whatever and using the assist hardware for it. Most will be wasting their time, because they won’t be better than manually engineered state machines, but they’ll try it.

shrike82
Jun 11, 2005

oddly enough the examples you mentioned are being done on the cloud including Apple's implementations

Fame Douglas
Nov 20, 2013

by Fluffdaddy

Subjunctive posted:

It seems pretty clear that they’re going to kick rear end at small-to-medium-scale development given how hard they leaned on integer perf and what looks to be high memory bandwidth. I wouldn’t try to build Firefox on the M1BA, probably, but for most mobile and desktop app development I think it would be pretty fabulous. I suspect it sucks more for server development when one of your containers gets relegated to the LITTLE cores, but maybe not.

What's actually going to suck is that you can't run your x64 Docker containers; ARM machines are unsuited to a whole lot of tasks.

Escape Goat
Jan 30, 2009

shrike82 posted:

oddly enough the examples you mentioned are being done on the cloud including Apple's implementations

I thought Apple was using neural engine for photos?

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



gret posted:

Yeah, my iMac develops graphical glitches once in a while and then freezes up, so it's probably on its last legs. Hope new ARM iMacs are announced sooner rather than later.

The scuttlebutt is Q1 2021, fwiw.

Mu Zeta
Oct 17, 2002

Me crush ass to dust

It's about time we get a redesigned iMac too without the huge chin and height adjustment would be nice

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Also, in terms of the performance of M1/A14 and x86/etc., I thought Anandtech did a decent discussion on it, and in particular, the following segments I found interesting:

Anandtech posted:

What really defines Apple’s Firestorm CPU core from other designs in the industry is just the sheer width of the microarchitecture. Featuring an 8-wide decode block, Apple’s Firestorm is by far the widest commercialized design in the industry. IBM’s upcoming P10 core in the POWER10 is the only other official design expected to come to market with such a wide decoder, following Samsung’s cancellation of their own M6 core, which was also described as having a similarly wide design.

Other contemporary designs, such as AMD’s Zen (1 through 3) and Intel’s µarchs, still only feature 4-wide decoder designs (Intel is 1+4), seemingly limited from going wider at this point by the x86 ISA’s inherent variable instruction length, which makes designing decoders able to deal with this aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions. On the ARM side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores have been steadily going wider with each generation: currently 4-wide in available silicon, and expected to increase to 5-wide in the upcoming Cortex-X1 cores.

Apple’s microarchitecture being 8-wide actually isn’t new to the new A14. I had gone back to the A13 and it seems I had made a mistake in the tests as I had originally deemed it a 7-wide machine. Re-testing it recently, I confirmed that it was in that generation that Apple had upgraded from a 7-wide decode which had been present in the A11 and 12.

One aspect of recent Apple designs which we were never really able to answer concretely is how deep their out-of-order execution capabilities are. The last official resource we had on the matter was a 192 figure for the ROB (Re-order Buffer) inside of the 2013 Cyclone design. Thanks again to Veedrac’s implementation of a test that appears to expose this part of the µarch, we can seemingly confirm that Firestorm’s ROB is in the 630 instruction range deep, which had been an upgrade from last year’s A13 Lightning core which is measured in at 560 instructions. It’s not clear as to whether this is actually a traditional ROB as in other architectures, but the test at least exposes microarchitectural limitations which are tied to the ROB and behaves and exposes correct figures on other designs in the industry. An out-of-order window is the amount of instructions that a core can have “parked”, waiting for execution in, well, out of order sequence, whilst the core is trying to fetch and execute the dependencies of each instruction.

A ~630-deep ROB is an immensely huge out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry. Intel’s Sunny Cove and Willow Cove cores are the second-deepest OOO designs out there with a 352-entry ROB structure, while AMD’s newest Zen3 core makes do with 256 entries, and recent Arm designs such as the Cortex-X1 feature a 224-entry structure.

Exactly how and why Apple is able to achieve such a grossly disproportionate design compared to all other designers in the industry isn’t exactly clear, but it appears to be a key characteristic of Apple’s design philosophy and method to achieve high ILP (Instruction level-parallelism).

Many, Many Execution Units
Having high ILP also means that these instructions need to be executed in parallel by the machine, and here we also see Apple’s back-end execution engines feature extremely wide capabilities. On the Integer side, whose in-flight instructions and renaming physical register file capacity we estimate at around 354 entries, we find at least 7 execution ports for actual arithmetic operations. These include 4 simple ALUs capable of ADD instructions, 2 complex units which feature also MUL (multiply) capabilities, and what appears to be a dedicated integer division unit. The core is able to handle 2 branches per cycle, which I think is enabled by also one or two dedicated branch forwarding ports, but I wasn’t able to 100% confirm the layout of the design here.

The Firestorm core here doesn’t appear to have major changes on the Integer side of the design, as the only noteworthy change was an apparent slight increase (yes) in the integer division latency of that unit.

On the floating point and vector execution side of things, the new Firestorm cores are actually more impressive, as they feature a 33% increase in capabilities, enabled by Apple’s addition of a fourth execution pipeline. The FP rename registers here seem to land at 384 entries, which is again comparatively massive. The four 128-bit NEON pipelines thus on paper match the current throughput capabilities of desktop cores from AMD and Intel, albeit with smaller vectors. Floating-point operation throughput here is 1:1 with the pipeline count, meaning Firestorm can do 4 FADDs and 4 FMULs per cycle with respectively 3 and 4 cycle latency. That’s quadruple the per-cycle throughput of Intel CPUs and previous AMD CPUs, and still double that of the recent Zen3, albeit at a lower frequency. This might be one reason why Apple does so well in browser benchmarks (JavaScript numbers are floating-point doubles).

Vector abilities of the 4 pipelines seem to be identical, with the only instructions seeing lower throughput being FP divisions, reciprocals and square-root operations, which have a throughput of 1 on only one of the four pipes.
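The decode-width point in the quote can be illustrated with a toy example: with fixed-length instructions every boundary is known up front, so slices are independent and a decoder can go arbitrarily wide, while with variable-length encoding each boundary depends on decoding the previous instruction. The encodings below are invented, not real ISA formats:

```python
# Toy illustration of fixed- vs variable-length instruction decode.
# Fixed: boundaries are i*width, so all slices can be cut in parallel.
# Variable: each instruction's start depends on the previous length,
# forcing a sequential walk. Encodings here are made up.

fixed = bytes(range(32))  # 8 "instructions" x 4 bytes each

def decode_fixed(stream, width=4):
    # Every boundary known up front: independent slices, wide decode.
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream):
    # First byte of each instruction encodes its total length.
    out, i = [], 0
    while i < len(stream):
        length = stream[i]
        out.append(stream[i:i + length])
        i += length
    return out

print(len(decode_fixed(fixed)))                         # 8
print(len(decode_variable(bytes([2, 0, 3, 0, 0, 1]))))  # 3
```

Real x86 decoders use length-prediction and pre-decode tricks to mitigate this, but the sequential dependency is why going past 4-wide is hard there and comparatively straightforward on ARM.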

squirrelzipper
Nov 2, 2011

SourKraut posted:

Also, in terms of the performance of M1/A14 and x86/etc., I thought Anandtech did a decent discussion on it, and in particular, the following segments I found interesting:

This is cool poo poo and based on this and the launch stuff I’m super hyped for the late 2021 pro line.

E; question from a non-hardware person tho: everything reads like Apple just lapped Intel/AMD completely in perf per watt, but this is essentially an iPad chipset with more room, right? I'm confused why everyone's only now going holy poo poo?

squirrelzipper fucked around with this message at 06:43 on Nov 14, 2020

shrike82
Jun 11, 2005

what kind of heavy lifting do goons do on their macs?

Baronash
Feb 29, 2012

So what do you want to be called?

shrike82 posted:

what kind of heavy lifting do goons do on their macs?

Hell, after a few years, casual web browsing becomes a heavy lift task when your computer’s thermal solution was designed by folks who apparently live in a world without dust.

Mostly blender rendering on a mid-2012 MBP. Video editing too, but I stick to HD footage so that’s typically not much of a problem.

Baronash fucked around with this message at 06:56 on Nov 14, 2020

Escape Goat
Jan 30, 2009

Mu Zeta posted:

It's about time we get a redesigned iMac too without the huge chin and height adjustment would be nice

They're going to keep the chin and add a notch

Small White Dragon
Nov 23, 2007

No relation.

shrike82 posted:

what kind of heavy lifting do goons do on their macs?

Software Development and Illustration

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

shrike82 posted:

what kind of heavy lifting do goons do on their macs?

Carrying it to the couch to live post The Mandalorian :newlol:

cowofwar
Jul 30, 2002

by Athanatos
Someone should compile a binary for zoom because as far as I can tell it’s written in R or something.

Although speaking of R, the dog poo poo language with like 1,000,000 packages that are not internally consistent and don’t conform to any guidelines: the poo poo relies on some weird Fortran compiler that can’t be ported to Apple silicon or something.

loving dog poo poo language die.

Hey, gotta write this simple thing in R, I know how to do it, just need to write it in five min... and then spend four hours on stackoverflow figuring out why it doesn’t work.

Everyone: multithreading.

R: lol

cowofwar fucked around with this message at 08:49 on Nov 14, 2020

squirrelzipper
Nov 2, 2011

shrike82 posted:

what kind of heavy lifting do goons do on their macs?

4K compositing is prolly the most taxing. Photoshop can get fucky too if it’s large print res files, like billboards or vinyls. The usual.

Mister Facetious posted:

Carrying it to the couch to live post The Mandalorian :newlol:

Fool. That’s what the iPad is for. (Wait they’re how much faster?!?)

squirrelzipper fucked around with this message at 09:08 on Nov 14, 2020

Virtue
Jan 7, 2009

cowofwar posted:

Someone should compile a binary for zoom because as far as I can tell it’s written in R or something.

Although speaking of R, the dog poo poo language that is like 1,000,000 packages that are not internally consistent and don’t conform to any guidelines, the poo poo relies on some weird fortran compiler that can’t be ported to Apple silicon or something.

loving dog poo poo language die.

Hey gotta write this simple thing in R and I know how to do it just need to write it in five min and then spend four hours on stackoverflow figuring out why it doesn’t.

Everyone: multithreading.

R: lol

%>% go brrr


~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD

shrike82 posted:

what kind of heavy lifting do goons do on their macs?

Games.
