The_Franz
Aug 8, 2003

Beve Stuscemi posted:

Carthag Tuek posted:

just breakin ball grid arrays

the xbox 360 is still the reigning king there


Beve Stuscemi
Jun 6, 2001




true. ironically the ps3 gives it a run for its money

Powerful Two-Hander
Mar 10, 2004

Mods please change my name to "Tooter Skeleton" TIA.


Pendragon posted:

along those lines:

Apple’s manufacturing instructions calling for an entire tube of thermal paste on the CPU

elaborate cooling instructions involving shaving the top off your CPU and precisely spreading thermal paste with a razor back in the day when it theoretically might have helped but was more likely to destroy your CPU entirely


vs. now where you just blob the stuff in a pattern, clamp it the hell down and you're done

Progressive JPEG
Feb 19, 2003

hm so it looks like "kegabyte" wasn't actually a thing in the early 90s, i assume i'd extrapolated this from mb being "megabyte" when i was a wee lad

Kitfox88
Aug 21, 2007

Anybody lose their glasses?
kegalbyte?

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Kitfox88 posted:

kegalbyte?

No vagina dentata here tyvm

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

Progressive JPEG posted:

hm so it looks like "kegabyte" wasn't actually a thing in the early 90s, i assume i'd extrapolated this from mb being "megabyte" when i was a wee lad

maybe you just conflated kilobyte and megabyte? most files weren't even megabytes back then, remember a floppy was usually at most 2 MB

i still don't use mebibyte and gibibyte etc. because they sound stupid lol

3D Megadoodoo
Nov 25, 2010

people actually taking a portable computer off the desk, sometimes

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

3D Megadoodoo posted:

people actually taking a portable computer off the desk, sometimes

i do this all the time

3D Megadoodoo
Nov 25, 2010

insane

Elder Postsman
Aug 30, 2000


i used hot bot to search for "teens"

Beeftweeter posted:

i still don't use mebibyte and gibibyte etc. because they sound stupid lol

i never will

a megabyte will always be 1024 kilobytes. fight me, IEC

Zamujasa
Oct 27, 2010



Bread Liar
mebibyte, mebinot

Sweevo
Nov 8, 2007

i sometimes throw cables away

i mean straight into the bin without spending 10+ years in the box of might-come-in-handy-someday first

im a fucking monster

nobody has ever used or will ever use that kibi/mebi/gibibyte poo poo except for wikipedia weirdos

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN
i think one of the standards bodies made a (not terribly big) push for using it around maybe 1999-2000 but i can't remember which one (it is 4/20 after all). but i do clearly remember laughing at it and saying "yeah gently caress that"

e: it might have been slightly later, like maybe 2001-2 at the latest, but it predated wikipedia being popular regardless

Pendragon
Jun 18, 2003

HE'S WATCHING YOU

Powerful Two-Hander posted:

elaborate cooling instructions involving shaving the top off your CPU and precisely spreading thermal paste with a razor back in the day when it theoretically might have helped but was more likely to destroy your CPU entirely


vs. now where you just blob the stuff in a pattern, clamp it the hell down and you're done

oh god I spent far too long lapping my first heatsink when it was smooth to begin with geez I was an idiot

mystes
May 31, 2006

Powerful Two-Hander posted:

elaborate cooling instructions involving shaving the top off your CPU and precisely spreading thermal paste with a razor back in the day when it theoretically might have helped but was more likely to destroy your CPU entirely


vs. now where you just blob the stuff in a pattern, clamp it the hell down and you're done
When I decide I need better cooling for my cpu I will remove the massive wad of dust from the fan

shackleford
Sep 4, 2006

for a laugh look up the TDPs of the old hot and slow netburst pentium 4 chips and compare 'em to the TDPs of modern ryzen and core CPUs

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN
i don't know much about how cpus work in extreme detail tbh, but from my understanding wasn't netburst designed to be, i guess to put it simply, generally better at higher frequencies than lower ones? i know that meant they had crazy high TDPs, but i wonder what would happen if someone made a modern chip on a modern process, based off of the netburst concept (i'd guess just using the same design would probably be bad) and clocked it real high. like, maybe the idea was valid, but we just couldn't manufacture it at the time?

or maybe it just sucked entirely, idk lol. i remember some revisions being pretty good, like northwood

shackleford
Sep 4, 2006

Beeftweeter posted:

i don't know much about how cpus work in extreme detail tbh, but from my understanding wasn't netburst designed to be, i guess to put it simply, generally better at higher frequencies than lower ones?

theoretically for chip architecture reasons a longer pipeline makes it easier to reach higher clock speeds because each stage is doing less work, or something. but netburst had a long rear end pipeline, like prescott had a ridiculous 31 stage pipeline, so you blast away a ton of work in the pipeline on a branch misprediction. so to make up for that design flaw they were like, let's make the branch predictor really good and jack the clock rate really high so it doesn't matter

quote:

i know that meant they had crazy high TDPs, but i wonder what would happen if someone made a modern chip on a modern process, based off of the netburst concept (i'd guess just using the same design would probably be bad) and clocked it real high. like, maybe the idea was valid, but we just couldn't manufacture it at the time?

or maybe it just sucked entirely, idk lol. i remember some revisions being pretty good, like northwood

it just sucked lol there's a reason they threw it in the trash

Jonny 290
May 5, 2005



[ASK] me about OS/2 Warp
for a while i had a dual netburst xeon server with four 15k SAS drives and to this day im surprised i didnt die in that room of heatstroke

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
the big idea of netburst was "moar clocks". like the whole point was to make a cpu that can clock as high as possible - they were hitting 4GHz on a 90nm process, and predicting 10GHz by 2010. that's impressive as hell, but it's not the metric that actually matters for real-world performance - instructions-per-clock is equally important (which netburst sucked at in practice), and in many modern applications instructions-per-watt is the real key.

to reach those clocks you need a long pipeline that only does a tiny amount of work at each step - you need the signal to propagate from the start of each step to the end before the next clock tick - so either you eat poo poo on every single branch or you dedicate immense amounts of your chip area to branch prediction and caching to try and avoid that.

netburst did dedicate immense amounts of chip area to that, and still ate poo poo every time your code had branches in it (even successfully predicted branches!) that went over what it could fit in the branch trace cache (so, literally anything bigger than a benchmark, essentially). meanwhile amd was kicking its rear end with athlon by using that area for moar instruction and data cache.
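the flush-penalty argument above can be sketched as a toy cost model. the 31-stage figure is Prescott's; the branch fraction and misprediction rate below are made-up illustrative numbers, not measured ones:

```python
# Toy cost model: average cycles per instruction (CPI) when some fraction
# of instructions are branches and each misprediction flushes the pipeline.
# Only the 31-stage depth is real (Prescott); other numbers are assumptions.

def cycles_per_instruction(pipeline_stages, branch_fraction, mispredict_rate):
    # Each misprediction wastes roughly one pipeline's worth of cycles.
    flush_penalty = pipeline_stages
    return 1 + branch_fraction * mispredict_rate * flush_penalty

# Prescott-like deep pipeline vs. a shorter contemporary design,
# assuming ~20% of instructions are branches and 5% are mispredicted.
deep = cycles_per_instruction(31, 0.20, 0.05)   # 1.31 CPI
short = cycles_per_instruction(12, 0.20, 0.05)  # 1.12 CPI
print(deep, short)
```

so even with a decent predictor, the deep pipeline needs a substantially higher clock just to break even on real branchy code, which is exactly the trade Jabor describes.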

Zamujasa
Oct 27, 2010



Bread Liar

Beeftweeter posted:

i think one of the standards bodies made a (not terribly big) push for using it around maybe 1999-2000 but i can't remember which one (it is 4/20 after all). but i do clearly remember laughing at it and saying "yeah gently caress that"

e: it might have been slightly later, like maybe 2001-2 at the latest, but it predated wikipedia being popular regardless

the main thing i remember them being pushed was as an explanation for why hard drive sizes were consistently smaller than advertised. "oh no see when we say 100 gigabytes we mean 100,000,000,000 bytes :jerkbag:" poo poo.

disk sizes are a cluster gently caress.
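the "missing" hard drive space is just the two unit systems disagreeing; a quick sketch of the arithmetic:

```python
# Why a "100 GB" drive shows up smaller: vendors count decimal
# gigabytes (10**9 bytes), while most OSes of the era reported
# binary gibibytes (2**30 bytes) but still labeled them "GB".

ADVERTISED_BYTES = 100 * 10**9  # what the box says: 100 GB

as_gibibytes = ADVERTISED_BYTES / 2**30
print(f"{as_gibibytes:.1f} GiB")  # ~93.1 GiB — the "lost" ~7%
```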

shackleford
Sep 4, 2006

1.44 MB floppies are neither 1.44 * 1,000,000 bytes nor 1.44 * 1,048,576 bytes

they're 1.44 * 1024 * 1000 bytes :smuggo:

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

shackleford posted:

1.44 MB floppies are neither 1.44 * 1,000,000 bytes nor 1.44 * 1,048,576 bytes

they're 1.44 * 1024 * 1000 bytes :smuggo:

... But why?

shackleford
Sep 4, 2006

Volmarias posted:

... But why?

i guess because it's twice the size of a 720 KB floppy
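the figure checks out from the disk geometry; the "MB" on the label is the odd 1000×1024 hybrid unit:

```python
# A "1.44 MB" floppy: 2 sides x 80 tracks x 18 sectors x 512 bytes.
capacity = 2 * 80 * 18 * 512
print(capacity)                  # 1474560 bytes

# That's exactly 1.44 in the hybrid unit of 1000 * 1024 bytes...
print(capacity / (1000 * 1024))  # 1.44

# ...but not in either standard unit:
print(capacity / 10**6)          # 1.47456 decimal MB
print(capacity / 2**20)          # 1.40625 MiB
```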

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
the first retail 4ghz intel chip wasnt netburst even though it was trivial to overclock well past that

it was the haswell era 4790k like 5 years later

Pendragon
Jun 18, 2003

HE'S WATCHING YOU
remember how the first pentium 4s used Rambus RAM?

just lol

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

shackleford posted:

theoretically for chip architecture reasons a longer pipeline makes it easier to reach higher clock speeds because each stage is doing less work, or something. but netburst had a long rear end pipeline, like prescott had a ridiculous 31 stage pipeline, so you blast away a ton of work in the pipeline on a branch misprediction. so to make up for that design flaw they were like, let's make the branch predictor really good and jack the clock rate really high so it doesn't matter

it just sucked lol there's a reason they threw it in the trash


Jabor posted:

the big idea of netburst was "moar clocks". like the whole point was to make a cpu that can clock as high as possible - they were hitting 4GHz on a 90nm process, and predicting 10GHz by 2010. that's impressive as hell, but it's not the metric that actually matters for real-world performance - instructions-per-clock is equally important (which netburst sucked at in practice), and in many modern applications instructions-per-watt is the real key.

to reach those clocks you need a long pipeline that only does a tiny amount of work at each step - you need the signal to propagate from the start of each step to the end before the next clock tick - so either you eat poo poo on every single branch or you dedicate immense amounts of your chip area to branch prediction and caching to try and avoid that.

netburst did dedicate immense amounts of chip area to that, and still ate poo poo every time your code had branches in it (even successfully predicted branches!) that went over what it could fit in the branch trace cache (so, literally anything bigger than a benchmark, essentially). meanwhile amd was kicking its rear end with athlon by using that area for moar instruction and data cache.

huh i see. so even if the idea were valid (which it seems it wasn't, other than for squeezing out higher clock speeds on big-for-today processes) it wouldn't be any good at a modern workload anyway

kinda funny then that amd was eating their lunch (and yeah i do remember that, the athlon XPs) with a more traditional cpu design, while almost remaining competitive clock-for-clock lol. i suppose that's what gave rise to merom (and i did have one of the original pentium Ms — clocked at 1.5 ghz it blew my previous, much thicker and hotter 2.4ghz p4 laptop straight outta the water lol)

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

Wild EEPROM posted:

the first retail 4ghz intel chip wasnt netburst even though it was trivial to overclock well past that

it was the haswell era 4790k like 5 years later

i had a 4770k at the time (still do even lol :negative:) and ran it at 4.2 ghz pretty much since i got it. afaik it's still working fine today, although it's been powered down for almost a year now and is in storage

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

Pendragon posted:

remember how the first pentium 4s used Rambus RAM?

just lol

remember those stupid spacer rambus rdimms that you had to use in what would otherwise be empty slots and were almost as expensive as an actual, populated rdimm? lol, lmao

so glad they eventually supported ddr lol

njsykora
Jan 23, 2012

Robots confuse squirrels.


its very funny that intel still takes the "throw more power at the chip until we have the biggest number" approach with their top level stuff in a desperate attempt to stay relevant

anyway, the intel "snake oil" presentation that leaked to the press was hilarious

Sweevo
Nov 8, 2007

i sometimes throw cables away

i mean straight into the bin without spending 10+ years in the box of might-come-in-handy-someday first

im a fucking monster

Intel thought they were going to get a 1000x speed boost from Netburst

10x from clock speed increases
10x from process shrinks
10x from multi-processing

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

njsykora posted:

its very funny that intel still takes the "throw more power at the chip until we have the biggest number" approach with their top level stuff in a desperate attempt to stay relevant

anyway, the intel "snake oil" presentation that leaked to the press was hilarious

lol i forgot about this

tbh they are right, amd's numbering scheme is bullshit

but so is intel's, so they're being extremely hypocritical here of course

mystes
May 31, 2006

I gave up on trying to interpret cpu model numbers a long time ago
You just have to look up the actual specs and probably benchmarks

Powerful Two-Hander
Mar 10, 2004

Mods please change my name to "Tooter Skeleton" TIA.


mystes posted:

When I decide I need better cooling for my cpu I will remove the massive wad of dust from the fan

I opened my case for the first time in probably 3 years recently to change the GPU and was amazed that it was almost completely dust free thanks to the integrated filters and I guess the flow direction.

ah the days of accumulating big balls of dust and having GPUs that just vaguely moved air around and having zero outtake fans

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

mystes posted:

I gave up on trying to interpret cpu model numbers a long time ago
You just have to look up the actual specs and probably benchmarks

I just pick whatever processor someone's build guide suggests now. And the next computer I get is probably going to be prebuilt, I'm done with doing this myself now.

feedmegin
Jul 30, 2008

Volmarias posted:

I just pick whatever processor someone's build guide suggests now. And the next computer I get is probably going to be prebuilt, I'm done with doing this myself now.

That was my plan but turns out the only prebuilts or even barebones on the market were office shitboxes or overpriced 'Pro gamer' stuff so I had to build my own like a caveman. Normal people buy laptops now I guess.

Agile Vector
May 21, 2007

scrum bored



during the start of the parts shortage in april 2020 I bought microcenter's in-house pre-built powerspec and it's been okay. not sure if I'd do it again, but it was the only gpu source available and my old system fried. at least the case is a sensible Lian-li and the cooling system's lights could be turned off

the parts were all good, too, but not the brands I'd pick. good for the moment, though, since individual part stock everywhere was either spotty or price-gouged

Agile Vector fucked around with this message at 23:12 on Apr 21, 2024

mystes
May 31, 2006

Before I bought my current desktop in 2017 I thought I probably wouldn't bother to build a computer again but at that time it ended up working out much better to buy a mobo/cpu bundle from microcenter than buying a prebuilt one

No idea what the situation is like now, although I probably should upgrade again at some point.


defmacro
Sep 27, 2005
cacio e ping pong

Crazy Achmed posted:

yeah, but the dual cd format of c&c let both people play singleplayer as well as multi, which was extremely economical

that is p cool
