|
WhyteRyce posted:Trying to find a new way to make revenue that was previously frozen out is not in itself a dumb thing. And the only people that raised a stink about it were the ones who weren't buying that stuff in the first place On a $100k solution that's 40% hardware and 60% software and support contracts, being able to toss some cash at the problem to unlock cores 5 and 6 on the 8 core part you were sold, when the software uses per-core licensing, is pretty magical. Segmenting features the board should support by default just reeks of 'gently caress you, pay me'.
|
# ¿ Jan 17, 2019 01:39 |
|
movax posted:
Some big iron financial software system/database thing that's still too expensive to backport to x86 after the millions spent getting it working in the shiny new itanium environment.
|
# ¿ Feb 1, 2019 23:57 |
|
priznat posted:The nice thing about nvme over pcie is you can mux out to a lot of drives much like having SAS expanders. And you can mix drives of different widths and speeds as long as the upstream ports don’t bottleneck things. And you can connect host to host in all sorts of funky non transparent bridging implementations for failover or shared drive caching or multi function device sharing. Isn't a lot of that due to the fact that all the older traditional ways we made MORE BUS BIT GO FASTER! kinda stopped working nearly as well when we got to the signal speeds we're looking at for PCIe4? Also needing 10nm or 7nm litho to avoid having a 50w PCIe bridge chip to run it all?
|
# ¿ Apr 28, 2019 03:17 |
|
NewFatMike posted:God bless ASRock. It's beautiful. It's like 40% CPU socket by area, and the case is like 50% CPU cooler by volume. Truly doing god's work.
|
# ¿ May 8, 2019 11:08 |
|
Vanagoon posted:Manufacturing plant guy "I was just doing what I was told" The customer literally asked for that exact thing. The hell do you want me to do? Tell the customer "No" to his face when he asks for something?
|
# ¿ Jun 14, 2019 06:42 |
|
BangersInMyKnickers posted:My processor keeps throwing sugma faults You need to download the latest Ligma binaries, they address the Sugma faulting issue, as well as mitigations for the Boffa Deez family of CPU vulnerabilities.
|
# ¿ Jul 11, 2019 20:47 |
|
VostokProgram posted:Could be that the real mode behavior of the CPU is itself just an emulation of the real thing. Although when you get down to the hardware/microcode level what's the real difference between emulating a thing and actually doing it Under the hood there is very little in common between an old 386-era chip and a modern x86 CPU. All the bits and gubbins are translated from x86 assembly into a shitload of micro-ops which do the correct math, but not in a way that's strictly 1:1 with what a naive programmer would expect given the assembly they fed it. There are a LOT of magic numbers and arcane poo poo on a modern processor that lets us make 'GOTTA GO FAST!' memes while shitposting on a sonic the hedgehog forum. Emulating the entire state machine for an old processor has so little overhead that it's basically a non-issue now, same with emulating a lot of game consoles, old Motorola chips, and ancient Apple hardware.
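The 'emulating the entire state machine' bit is, at its core, just a fetch-decode-execute loop. A minimal sketch with a made-up three-instruction toy ISA (nothing like real x86, purely for illustration):

```python
# Minimal fetch-decode-execute loop, the skeleton of any CPU emulator.
# Toy ISA (hypothetical): LOADI, ADDI, HALT operating on one register.

def run(program):
    """Interpret a list of (opcode, arg) pairs; return final register state."""
    regs = {"a": 0}
    pc = 0
    while pc < len(program):
        op, arg = program[pc]   # fetch
        pc += 1
        if op == "LOADI":       # decode + execute: load immediate
            regs["a"] = arg
        elif op == "ADDI":      # add immediate
            regs["a"] += arg
        elif op == "HALT":
            break
        else:
            raise ValueError(f"illegal opcode {op!r}")
    return regs

state = run([("LOADI", 40), ("ADDI", 2), ("HALT", None)])
print(state["a"])  # 42
```

A real 386 emulator is this same loop with a few hundred opcodes, flags, and memory behind it, which is why it's cheap to run on modern hardware.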
|
# ¿ Jul 23, 2019 11:06 |
|
canyoneer posted:My father in law works for a giant defense contractor, and every few years buys a surplus 3 year old laptop from the company for $50 or something. New battery is like $150, cheap at 5x the price.
|
# ¿ Aug 13, 2019 07:27 |
|
Didn't we do the whole 'merge the threads' thing to the IT bitching threads like 2 years ago, and it was met with a resounding 'ehhhhhhh' and went back to 2 threads fairly quickly?
|
# ¿ Oct 7, 2019 23:25 |
|
Perplx posted:Intel's capacity problems are probably all related to 10nm being behind schedule, 10nm has at least double the density of 14nm, which means double the chips per wafer. I'm assuming wafer throughput is relatively constant regardless of process node. Yep, fabs are set up for so many wafer starts per month; what you put on those wafers is your own business, but smaller chips = more dies per wafer and higher yields. eames posted:What ever happened to the rumors of Samsung fabbing 14nm CPUs for Intel to address the shortages? Guess that never materialized. Even if you bust your rear end setting the masks up, going from "I want the shiny" to "here is the first batch of 50k shinies" takes like 6-12 months, depending on process node, number of layers, and a million other factors. Methylethylaldehyde fucked around with this message at 00:04 on Nov 29, 2019 |
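The density-to-chips-per-wafer claim sanity-checks with the standard die-per-wafer approximation. A sketch with illustrative die sizes (not real Intel numbers); note that halving die area slightly *more* than doubles the die count, because edge losses shrink too:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: gross dies on a circular wafer, with a
    correction term for partial dies wasted around the edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Illustrative only: same design at 150 mm^2, then shrunk to half the area
big = dies_per_wafer(300, 150)
small = dies_per_wafer(300, 75)
print(big, small)  # the shrunk die more than doubles the count
```

And that's before yield: smaller dies also dodge more defects per wafer, compounding the win.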
# ¿ Nov 29, 2019 00:01 |
|
BobHoward posted:Back to the process thing, it's pretty normal in the industry to develop differentiated recipes and cell libraries on the same node. Intel is no exception, e.g. they offer at least two versions of 14nm to foundry customers (14GP and 14LP, general purpose or low power). Even on a given node, you can still do a lot of tweaking to the basic finFET transistors via doping, length and width of the channel, etc. That way when you power and clock gate different areas of the chip, you can have less leaky but slower stuff for things that don't need to run at the full core speed, and save more of your tdp budget for the cores and cache, where a lot of the magic happens.
|
# ¿ Jan 24, 2020 18:12 |
|
BlankSystemDaemon posted:Inter-generational differences would mean a lot more if things like command rates, latency, and features like ECC or things of that nature were the sort of things we'd see improved/added, instead of just the bandwidth of the ICs and how big the ICs can get during intra-generational periods. Most of the latency issues are fundamental to DRAM in general, and how we've placed it on the motherboard. Arbitrarily large power use could lower the latency some, but basically everything BUT speed and size is already as good as it's gonna get.
|
# ¿ Apr 26, 2022 17:16 |
|
BlankSystemDaemon posted:Nah, you can buy low-latency DRAM, if you pay enough. You can buy it, but the difference between DDR4-3600 CAS 16 and CAS 8 is still a factor of 10 slower than SRAM in the best case. The '10ish ns' average latency across 2-3 generations suggests that it's near the fundamental limits of physics under the cost/capacity/speed/distance/power tradeoffs we've settled on for DRAM products. If someone got a large enough hair up their rear end about latency in DRAM, you could trade capacity, power and speed for lower overall latency. But it would need to be a completely new mask set for a really niche product. It would probably be way easier to bring the memory to sub-ambient, crank the voltage to the bleeding edge of what it can handle, and bring the timings down as low as you can.
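The CAS figures above fall out of a one-line formula. A quick sketch (first-word CAS latency only; a full access also pays tRCD/tRP, so real-world numbers are worse):

```python
def cas_ns(transfers_mt_s, cas_cycles):
    """First-word CAS latency in ns. The memory clock is half the
    transfer rate (DDR moves data on both clock edges)."""
    clock_mhz = transfers_mt_s / 2
    return cas_cycles / clock_mhz * 1000  # cycles/MHz = us, so x1000 -> ns

print(round(cas_ns(3600, 16), 2))  # ~8.89 ns for DDR4-3600 CL16
print(round(cas_ns(3600, 8), 2))   # ~4.44 ns even at an exotic CL8
# vs. on-die SRAM (L1 cache) at well under 1 ns
```

Which is the point: even paying through the nose to halve CAS only moves you from roughly 9 ns to roughly 4.5 ns, still an order of magnitude off SRAM.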
|
# ¿ Apr 26, 2022 20:17 |
|
BlankSystemDaemon posted:You could also get a better electricity grid, as the AC-DC conversion efficiency on 230V is much higher than 110V. Nothing is stopping you from wiring up a NEMA 6-15 and being the change you wanna see in the world.
|
# ¿ Oct 13, 2022 18:30 |
|
Cygni posted:I think it makes a ton of sense in Big Iron land. And personally speaking, I think a "pay once to unlock stuff" model for the consumer market could be a positive thing in decreasing ewaste and increasing the longevity for platforms. Of course, it could also be abused and end up worse for consumers. Dell now sells you a PC with X Cores at Y Speed* *3 month introductory promotion, retention of speed and core count requires monthly $19.99 subscription. People would just jailbreak them or run whatever Sr. Juarez authentication server emulator the torrent site they use recommended. In big iron land, it's even more of a nightmare because of how many software systems that could benefit from more cores are already licensed on a per core basis. I'm sure Oracle would be super interested in a list of all the customers who used it, so they could conduct a 'random' licensing audit and stick people with 10 more core licenses at a 5x 'pay or we sue you' premium.
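The per-core licensing trap is just multiplication, but the multipliers are brutal. A back-of-envelope sketch (every number here is made up for illustration, not a real Oracle price):

```python
# Hypothetical audit-exposure math for unlocking cores under
# per-core-licensed software. All figures invented.
cores_unlocked = 10
list_price_per_core = 47_500       # assumed list price per core license
audit_penalty_multiplier = 5       # the "pay or we sue you" markup

exposure = cores_unlocked * list_price_per_core * audit_penalty_multiplier
print(f"${exposure:,}")
```

So a few hundred bucks of "unlock" fees to the hardware vendor can quietly create seven figures of licensing exposure on the software side.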
|
# ¿ Dec 2, 2022 02:34 |
|
carry on then posted:Think Marty Robbins had a song about him. Now I'm picturing some dudebro silicon valley dipshit with socks+sandals, a 10 gallon Stetson, and a 1U pizzabox server poorly belted to his hip, 40mm case fans screaming in protest. Cygni posted:The monthly service thing is my nightmare as a hardware dork, but like you pointed out, we will probably be the ones to crack that. The market rejection of Stadia was a good sign of the general rejection of "hardware as a service", but I'm still worried it's creeping in anyway. Stadia was poo poo for a whole bunch of reasons entirely unrelated to the underlying Hardware as a Service model. Honestly I'd expect the PS5+/XBX-Xtreme to flirt with it before desktop/server hardware tries it. "Unlock Pro mode, 120hz output, and 15 extra Apex Legends FPS, for only $19.95/mo!". If you can monetize the compute efficiency of the mid-cycle refresh, that's tens of millions of dollars in basically free money. And the PS5/XBX is a locked down enough platform that they could probably actually prevent most people from being able to unlock the extra shaders+cache without too much effort. Or at least console banning people for jailbreaking it, forcing offline only mode. Methylethylaldehyde fucked around with this message at 23:46 on Dec 2, 2022 |
# ¿ Dec 2, 2022 23:39 |
|
in a well actually posted:2.5g exists because hyperscalers wanted to do 4 2.5g lanes off a 10g switch port in 2012 I thought it was because the 2.5G SERDES links were bonded to make the 10G link, same as how you take 4 10G links now to make a 40G, or 4 25s to make a 100G?
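The lane-bonding arithmetic in question is just lanes times per-lane rate. A sketch using nominal payload rates and ignoring line-code overhead (8b/10b on the old XAUI lanes, 64b/66b on the newer ones):

```python
# Aggregate link rate = lanes x per-lane payload rate (nominal, no
# encoding overhead). The 2.5G figure is the XAUI lane's payload rate;
# its raw line rate is 3.125G before 8b/10b coding.
def aggregate_gbps(lanes, lane_rate_gbps):
    return lanes * lane_rate_gbps

print(aggregate_gbps(4, 2.5))   # 10G, XAUI-style
print(aggregate_gbps(4, 10))    # 40G
print(aggregate_gbps(4, 25))    # 100G
```
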
|
# ¿ Jan 23, 2023 23:38 |
|
mmkay posted:Feeling really good about being chosen for It's not a layoff, it's a corporate people movement and lateral promotion to customer!
|
# ¿ Feb 2, 2023 02:45 |
|
Twerk from Home posted:I thought that a hot CPU wouldn't lose any performance until you hit the point where thermal throttling happens, right? The real problem is that the hotter a processor is, the leakier it gets, which makes it use even more power at a given clock and get even hotter, so you get pushed towards that throttling temperature. On a processor from 1993 without any kind of P-states or power monitoring, that would be true. It runs at a fixed speed until it catches fire. On a modern processor with per core opportunistic clock boosts, keeping the silicon super cool can allow it to boost higher, with some rapidly diminishing returns as you go from a good air cooler to a 360mm CLC setup. Like an extra 50 MHz tier of diminishing returns. The exact mechanism is the processor going 'I can use up to 1.4v, as long as I don't go over 20w per core AND I don't go over 65C tDie. I can use 1.35 volts between 65 and 72C, 1.3 volts between 72 and 75C and 1.22 volts above that.' The extra voltage lets you bump the clockspeed up higher, but past a certain point, every 25Mhz costs more and more exorbitant amounts of power.
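That made-up voltage/temperature ladder can be written as a simple lookup. A sketch, with every number invented for illustration (real boost algorithms also weigh per-core power, current, and socket limits):

```python
import math

# Invented voltage/temperature ladder: colder silicon unlocks a higher
# voltage cap, which is what lets the boost algorithm clock higher.
VOLTAGE_LADDER = [            # (tDie upper bound in C, max volts)
    (65, 1.40),
    (72, 1.35),
    (75, 1.30),
    (math.inf, 1.22),
]

def max_boost_voltage(tdie_c):
    """Return the voltage cap for the first temperature band we fit in."""
    for bound, volts in VOLTAGE_LADDER:
        if tdie_c <= bound:
            return volts

print(max_boost_voltage(60))  # 1.4
print(max_boost_voltage(80))  # 1.22
```

Which is why a better cooler "buys" clocks: it keeps you in a higher band of the ladder for longer under load.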
|
# ¿ Feb 17, 2023 20:07 |
|
LRADIKAL posted:Actually, I misspoke. These chips are designed to run at max 24/7. You will certainly lose performance, and may decrease the lifespan of the chip. Something on the made up order of 5 years from 10 years. The difference between 80C tDie and 40C tDie is something like a 20x improvement in life for IGBTs used in power switchgear. So it goes from 'will probably never break while still useful' to 'will probably never break in your grandchild's lifetime'. Which is kinda useless?
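The ~20x figure is the shape you get out of an Arrhenius acceleration factor. A sketch assuming a 0.7 eV activation energy, a common illustrative value rather than a measured number for any specific IGBT:

```python
import math

def accel_factor(t_cool_c, t_hot_c, ea_ev=0.7):
    """Arrhenius lifetime acceleration between two junction temperatures.
    ea_ev is the assumed activation energy; 0.7 eV is illustrative."""
    k = 8.617e-5                      # Boltzmann constant, eV/K
    t1 = t_cool_c + 273.15
    t2 = t_hot_c + 273.15
    return math.exp(ea_ev / k * (1 / t1 - 1 / t2))

print(round(accel_factor(40, 80), 1))  # roughly 19x longer expected life
```

The exponential is the whole story: a 40C drop in junction temperature compounds into an order-of-magnitude life extension, which is real but, as the post says, mostly moot if the part already outlives its usefulness.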
|
# ¿ Feb 17, 2023 21:10 |
|
Lockback posted:I thought 85C was a magic number for the 13 series at which it'll really start jumping through hoops to clock itself down, though I may be mistaken. That's why I suggested using that as a "tell the fans go nuts if it means keeping it under this". The 13 series runs hot though, I think it can get above 100C if you let it. I made the entire volts/temp curve up, but the basics of how modern boost algos work is there. More volts == more speed == more heat, so the colder the silicon, the harder it'll push itself.
|
# ¿ Feb 17, 2023 21:24 |
|
WhyteRyce posted:The announcement emails are my favorite form of tea leaf reading / office gossip. Even if you get fired or pushed out you still often get a flowery “we will miss you here is a list of all your accomplishments” email. You have to extremely piss off your bosses to not get poo poo. "Also Dave won't be here, anyways the water cooler is now for revenue generating conversations only" Is a great sendoff after 5+ years and 10+ bil in R&D.
|
# ¿ Mar 22, 2023 06:57 |
|
HalloKitty posted:Yeah, saw it on the Register. I can't wait for custom rootkit style hacks for games to come out that require MSI motherboards to load, because cheating in videogames is worth loading Ring -1 code into your machine from hax4u.ru/fortnite
|
# ¿ May 10, 2023 23:47 |
|
Shipon posted:Prescott P4s were the infamous blast furnace of the time, Intel was suffering brutally with heat back then relatively speaking I do like how Intel's response to 'Processor make PC too hot?' was coming up with an entirely new BTX form factor, so they could kick that thermal can down the road another few years.
|
# ¿ Jul 5, 2023 22:25 |
|
Potato Salad posted:Our first mistake was teaching sand to think The third mistake was to trust them with all of our secrets.
|
# ¿ Aug 9, 2023 22:11 |
|
Beef posted:I'm not anywhere near production, but on a prototype we tried to do extensive full-application testing on various simulators/emulation/FPGA before taping because we knew there was only budget for one stepping. You'd think one of the low hanging fruit tests would be setting every field that has a 'buffer size must be 1 through 65534 inclusive' constraint to 0, 65535, -1, etc. just to see if whatever hardware logic used throws an exception, throws a fit, or throws a segfault at you. How many hours does it take a full-fat processor sim to even get to the point where the PCIe bus fully initializes and you could pass that DMA size=0 instruction to it? Or do you have to use the 'a Turing complete spherical cow in a vacuum' stand-in for 95% of the processor, and only test the one part of the chip responsible for the PCIe endpoint?
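Those boundary cases (0, 65535, -1, and the legal edges) are mechanical to generate. A sketch for a hypothetical 16-bit "must be 1 through 65534" field:

```python
# Classic boundary-value generation for a hypothetical bounded field.
# For a 16-bit field, a caller passing -1 shows up as the all-ones
# pattern, so it collapses into the same illegal value as hi+1 here.
def boundary_values(lo, hi, width_bits=16):
    """Legal edges plus the illegal neighbors and the all-ones pattern."""
    all_ones = (1 << width_bits) - 1
    return sorted({lo - 1, lo, hi, hi + 1, 0, all_ones})

print(boundary_values(1, 65534))  # [0, 1, 65534, 65535]
```

Feed each of those into the DUT and check you get a clean rejection rather than a hang, which is exactly the cheap test the post is wishing for.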
|
# ¿ Aug 16, 2023 00:48 |
|
priznat posted:Special cases with motherboard cutouts! Or just have really high standoffs. But then wouldn’t fit the rear connector panels.. REALLY tall standoffs, back IO is mounted on the bottom of the mobo, can only use half height PCI cards, must be mounted in full height case.
|
# ¿ Aug 30, 2023 20:45 |
|
Kazinsal posted:First generation actor resurrection tech: Holographic Tupac Fourth generation: ChatGPT Gordon Moore
|
# ¿ Oct 4, 2023 22:11 |
|
Klyith posted:Airbnb For Threads. AKA: How to get 27 different lovely electron apps to all play 'nice' with each other.
|
# ¿ Dec 9, 2023 05:11 |
|
Farmer Crack-rear end posted:when you say "home offices" are you talking about gamer dens? because i'm pretty sure the number of WFH positions that can't be serviced by a 15A 120V breaker is absolutely tiny Tell that to half my damned users who smuggle a space heater under their desk.
|
# ¿ Feb 19, 2024 23:03 |
|
Farmer Crack-rear end posted:that doesn't sound like a home office problem You obviously don't have friends with a wife who is perpetually cold. My buddy's wife has four god damned space heaters under various desks and cubbies, specifically so her feet don't get cold when she's sitting there. I kinda want a 20A/240 twistlock outlet for my computery poo poo, not because it needs it, but because twistlock is pretty great.
|
# ¿ Feb 20, 2024 10:12 |
|
Farmer Crack-rear end posted:jesus christ how loving hot does that room get with four loving space heaters blasting They're not all in the same room. She has one under her little sewing desk, one under her working desk, one under the coffee table in her little reading nook, and one in the bedroom. She's also like 5 foot nothing and 120 pounds soaking wet. Every few months she learns which outlets are on the same circuit by blowing the breaker trying to air fry lunch with the reading nook heater going.
|
# ¿ Feb 21, 2024 00:13 |