That is also one of my very early cases. I gave it to my dad in like 2005 and he used it for long after he should have stopped. Not only with the 80mm fans, but with ten-year-old 80mm fans. He kept insisting it was fine, but I secretly switched his hardware over to a more modern case while I was watching his house during one of his vacations. 120mm fans with less than 50,000 hours on them make a world of difference.
# ? Jun 13, 2024 07:24 |
Cygni posted:zip drive, live drive AND an LCD fan controller?? I tried to fill every goddamn bay. I think the empty 5.25" bay originally held a second CD drive but I'm not sure I remember. But I still couldn't figure out something to put in that third 3.5" bay. I shoulda just put another 3.5" drive in there for 20 bucks. Best part of the fan controller is that the right side is like a 25mm fan behind a foam filter. It's like pissing into a hurricane even if your case only has a single 80mm.
I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea.
MaxxBot posted:I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea. I remember what jet engines the Delta black labels were like. 38mm-thick fuckers instead of 25.
When is the earliest we might get the 8-core Coffee Lake CPUs? I want an 8C/8T "i5-9400" for ~$200 for my next build if my current 2500K rig lasts until then.
BIG HEADLINE posted:I remember what jet engines the Delta black labels were. 38mm thick fuckers instead of 25. My Athlon loved it, on an Alpha 8045.
spasticColon posted:When is the earliest we might get the 8-core Coffee Lake CPUs? I want a 8C/8T "i5-9400" for ~$200 for my next build if my current 2500K rig lasts until then. Figure on a tease of them at CES or Computex and probably on sale in late Q3 or early-to-mid Q4 2018. Q1 2019 if Intel wants to milk the six-core profits for a bit longer.
MaxxBot posted:I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea. I bought one as well and I think it lasted three days before I decided I did not like the sound of a jet taking off inside my PC.
BIG HEADLINE posted:Figure on a tease of them at CES or Computex and probably on sale in late Q3 or early-to-mid Q4 2018. Q1 2019 if Intel wants to milk the six-core profits for a bit longer. Maybe RAM prices will be reasonable by then.
MaxxBot posted:I made the mistake of replacing my CPU fan with this thing at one point in like 2003, terrible loving idea. I bought the 80mm version to see what it was like. I don't think I ever used it for any real duration, since it was similar to a hair dryer in volume and pitch.
Has this been discussed yet? Intel will ship a CPU with AMD graphics. I remember there were some presentations from Intel with a "3rd party graphics core". https://newsroom.intel.com/editorials/new-intel-core-processor-combine-high-performance-cpu-discrete-graphics-sleek-thin-devices/ https://www.pcworld.com/article/3235934/components-processors/intel-and-amd-ship-a-core-chip-with-radeon-graphics.html
limaCAT posted:Has this been discussed yet? GPU thread was talking about it a bit. The general vibe is "what the fuuuuck is happening what a time to be alive"
limaCAT posted:Has this been discussed yet? There's some discussion about it in the GPU thread.
havenwaters posted:GPU thread was talking about it a bit. The general vibe is "what the fuuuuck is happening what a time to be alive" When Intel lost their cross-licensing agreement with Nvidia they were effectively forced out of the graphics business. Intel is much more worried about Nvidia than AMD. This does explain the HBM2 on Vega though.
I posted in the GPU thread, but how much HBM are they fitting? Wonder if it's accessible as CPU cache.
We sure as hell need a merged CPU thread now. Also, I expect common CPU recommendations to be "Buy the Intel & AMD CPU"
Twerk from Home posted:We sure as hell need a merged CPU thread now. Also, I expect common CPU recommendations to be "Buy the Intel & AMD CPU" Agreed, even the GPU thread is up on this, although there's probably a disproportionate number of common posters in CPUs and GPU threads. I am very excited to do science with this. Hopefully it'll be cheap enough to be a real console killing chip with the way dGPU prices are still hinky.
Wonder how much of this was from pressure from Apple for non-lovely graphics
WhyteRyce posted:Wonder how much of this was from pressure from Apple for non-lovely graphics I think Apple is more interested in putting their proprietary ARM cores for machine learning on that interposer. Should be good, can't wait for these to find their way into the MBPs. Is this using their 3D stacked dies yet?
eames posted:I think Apple is more interested in putting their proprietary ARM cores for machine learning on that interposer. Should be good, can't wait for these to find their way into the MBPs. Apple is wealthy enough and mobile SoCs are mm²-constrained enough that integration is easier. Malcolm XML fucked around with this message at 20:14 on Nov 6, 2017
GRINDCORE MEGGIDO posted:I posted in the GPU thread, but how much HBM are they fitting? Wonder if it's accessible as CPU cache. It would be quite useless as a CPU cache - it wouldn't even make very good CPU memory. CPU is all about low latency: it wants to grab very small amounts of data from memory, and it wants them quickly. GPU is totally the other way around: it wants to grab very large amounts of data from memory, and it doesn't particularly care that the fetch will take a long time. Since HBM was designed with GPUs in mind, it does the latter fairly well. Which, unfortunately, makes it completely garbage as far as a CPU is concerned.
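
To put rough numbers behind that tradeoff, here's a toy cost model of a single memory transaction: total time = access latency + payload / bandwidth. Every figure below is a ballpark assumption for illustration, not a datasheet value.

```python
def transfer_time_ns(payload_bytes, latency_ns, bandwidth_gb_s):
    """Toy model: one transaction pays its access latency up front,
    then streams the payload at peak bandwidth (1 GB/s == 1 byte/ns)."""
    return latency_ns + payload_bytes / bandwidth_gb_s

# Assumed round figures, illustration only.
LATENCY_NS = 100   # same order of magnitude for DDR4 and HBM2
DDR4_BW = 25       # GB/s, dual-channel ballpark
HBM2_BW = 250      # GB/s, single-stack ballpark

# CPU-style access: one 64-byte cache line.
line_ddr4 = transfer_time_ns(64, LATENCY_NS, DDR4_BW)
line_hbm2 = transfer_time_ns(64, LATENCY_NS, HBM2_BW)

# GPU-style access: streaming a 1 MB block.
block_ddr4 = transfer_time_ns(1 << 20, LATENCY_NS, DDR4_BW)
block_hbm2 = transfer_time_ns(1 << 20, LATENCY_NS, HBM2_BW)

print(f"64 B line:  DDR4 {line_ddr4:.1f} ns vs HBM2 {line_hbm2:.1f} ns")
print(f"1 MB block: DDR4 {block_ddr4:.0f} ns vs HBM2 {block_hbm2:.0f} ns")
```

With these made-up-but-plausible numbers, a single cache-line fetch costs ~100 ns either way because latency dominates, while the 1 MB streaming read is roughly 10x faster on the wider bus. That second case is exactly the GPU-shaped workload HBM was built for.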
Good point. How does the latency compare to DDR4?
GRINDCORE MEGGIDO posted:Good point. How does the latency compare to DDR4? lDDQD posted:It would be quite useless as a CPU cache - it wouldn't even make very good CPU memory. CPU is all about low latency, low bandwidth; it wants to grab very small amounts of data from memory, and it wants it quickly. GPU is totally the other way around; it wants to grab very large amounts of data from memory, and it doesn't particularly care that it will take a long time for that to be fetched. Since HBM was designed with GPUs in mind, it does the latter fairly well. Which, unfortunately, makes it completely garbage, as far as a CPU is concerned. It would make great CPU memory, so great that it's basically Wide I/O 2 mobile memory. All them caches hide latency real well on CPUs.
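
The caches point can be made concrete with the textbook average-memory-access-time formula. The latency figures here are assumed round numbers, not measurements of any real part:

```python
def amat_ns(hit_rate, hit_ns, miss_ns):
    """Average memory access time: hits are served by the cache,
    misses pay the full trip out to main memory."""
    return hit_rate * hit_ns + (1 - hit_rate) * miss_ns

# Assumed round numbers: ~4 ns for a last-level-cache hit, ~100 ns for DRAM.
for hit_rate in (0.90, 0.99):
    print(f"{hit_rate:.0%} hits -> {amat_ns(hit_rate, 4, 100):.2f} ns average")
```

At a 99% hit rate the average access lands under 5 ns even with 100 ns memory behind it, so extra tens of nanoseconds of DRAM latency get diluted hard. That's the argument for why higher-latency main memory isn't automatically crippling for a CPU.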
GRINDCORE MEGGIDO posted:Good point. How does the latency compare to ddr4? I'd be more curious as to how it compares to eDRAM, since we already have a taste of what that was able to do with Crystalwell.
This is... a thing. quote:An Open Letter to Intel
That reads entirely to me like Tanenbaum stealth gloating about having written the most widely used OS in the world by virtue of Intel embedding it in the ME. He's still not over the Torvalds-Tanenbaum debate. What a loving tool.
Kazinsal posted:That reads entirely to me like Tanenbaum stealth gloating about being writing the most widely used OS in the world by virtue of Intel embedding it in the ME. Some Hacker News commenters are confused: "He helped Intel make changes to the code.. and is upset they didn't tell him what they did.. but he's happy they did it, because it proves he chose the correct license?"
Kazinsal posted:That reads entirely to me like Tanenbaum stealth gloating about being writing the most widely used OS in the world by virtue of Intel embedding it in the ME. Meh, Linus toadies gloat about how Linux won because it's everywhere so I won't fault a man for rubbing some stuff back in their face
WhyteRyce posted:Meh, Linus toadies gloat about how Linux won because it's everywhere so I won't fault a man for rubbing some stuff back in their face While I'm sure that MINIX is indeed extremely widely used now (it's on most Intel motherboards), I kinda doubt that it's the most widely used OS. QNX held that crown for quite a few decades, and while Linux has taken over some of the work old QNX did in certain areas, there are still a shitload of little chips deployed all over the world that control all kinds of machinery. Linux, even if you count Android as Linux (nah), still is nowhere near that kind of usage.
MICROKERNELS
Volguus posted:While I'm sure that now MINIX is indeed extremely used (most intel MBs), I kinda doubt that is the most widely used OS. QNX held that crown for quite a few decades and while Linux has taken some of the work old QNX did in certain areas, there are still a shitload of little chips deployed all over the world that control all kinds of machinery. Linux, even if you count Android as Linux (nah), still is nowhere near that kind of usage. That's why he very carefully said 'computer' OS. Otherwise Android cleans his clock. And QNX's by now because the really little stuff doesn't run that or anything else.
Volguus posted:While I'm sure that now MINIX is indeed extremely used (most intel MBs), I kinda doubt that is the most widely used OS. QNX held that crown for quite a few decades and while Linux has taken some of the work old QNX did in certain areas, there are still a shitload of little chips deployed all over the world that control all kinds of machinery. Linux, even if you count Android as Linux (nah), still is nowhere near that kind of usage. Once upon a time I would have chimed in here pointing out that Cisco's IOS-XR is based on QNX, but now the new and preferred variation ("enhanced" XR) is based on Linux so there goes that bullet point. Not sure what the current install base looks like though, classic XR probably still has the lead there. Eletriarnation fucked around with this message at 22:36 on Nov 7, 2017
feedmegin posted:That's why he very carefully said 'computer' OS. Otherwise Android cleans his clock. And QNX's by now because the really little stuff doesn't run that or anything else. Which is bogus; phones are computers now.
Today Qualcomm announced availability of their Centriq 2400 series ARM processors, which they claim beat Xeons on both performance-per-dollar and performance-per-watt: https://www.theregister.co.uk/2017/11/08/qualcomm_centriq_2400/ Microsoft and Google are both quoted as investigating its use for cloud workloads and HP promises to ship servers with these to early access customers in early 2018.
STH has some non-performance analysis of the Qualcomm CPUs. Their takeaway is basically that ARM servers haven't had great luck getting traction for a variety of reasons (including Broadcom, who may now buy Qualcomm, shutting down their own ARM server business), and with EPYC around, the anybody-but-Intel server crowd may already have their darling. https://www.servethehome.com/analyzing-key-qualcomm-centriq-2400-market-headwinds/
Cygni posted:STH has some non performance analysis of the Qualcomm CPUs. His take away is basically ARM servers havent had great luck getting traction for a variety of reasons (including Broadcom, who may now buy Qualcomm, shutting down their own ARM server business), and with EPYC around, the anybody-but-intel server crowd may already have their darling. Not sure why they claim it's single socket only, Qualcomm definitely announced dual node support. And someone like Google isn't going to give a poo poo about x86 support because they just recompile; they've also been big supporters of POWER architecture development. And for executing cloud lambda functions written in Python or Java or whatever it's a straight non-issue.
Rastor posted:Not sure why they claim it's single socket only, Qualcomm definitely announced dual node support. That's two independent nodes in 1 OCP tray, note the multi-host NIC.
Mellanox has an interesting host platform now too, an ARM-based SoC. I think it can be configured either as one of their high-end network interface controllers or as a host system (Bluefield). Also the POWER9 system has 3rd parties selling EATX form factor motherboards for its launch, pretty different from how it used to be! Should be an interesting few years for CPU watchers.
priznat posted:Mellanox has an interesting host platform now too, ARM based SoC. I think it can be configured either as one of their high end network interface controllers or as a host system. (Bluefield) IBM has been pushing PowerPC chips and AIX hard for the last few years. My last company re-platformed our Oracle DB from x86/Linux to PowerPC/AIX in The Year of Our Lord 2016 because the IBM rep swore it would solve our database's IO being slow as gently caress due to the SAN contention.