DrDork posted:Companies want to move to subscription models because it tends to make them more money over the long term--the customer loses out from almost every angle

Subscriptions / recurring revenue is corporate gold these days; it greatly increases the valuation of companies, allegedly because the income is more predictable and you don't have to worry as much about giant customers maybe not renewing giant long-term contracts.

P.S. I was forced to rent a Tesla and it was locked down to Chill mode only somehow, whatever that is
# ? Jun 11, 2024 04:19
tehinternet posted:To your first bits, I addressed that in the bit of my quote you left out.

Why is the bolded part an important distinction? I don't get it. Why is it ok in your mind for software and media to do this, but not computer chips?

You also didn't address my point. You ignored it. I'll repeat it again: computer chips are arguably a lot more like software copies than most physical goods--the marginal production cost to produce a computer chip is pretty small and isn't anywhere close to the $200+ sales price. They aren't like a car or many other physical goods where the cost to the company to produce an additional unit is a huge fraction of the sales price.

If computer chip companies only charged something like marginal production cost +10%, they wouldn't be able to bankroll the continual design and improvement of their products, and posters in this thread would be upset and would be making many more posts about how they think computer chips should go faster than they currently do.

I don't understand the philosophical objection to binning. There is this entitlement that I just totally don't get. Unlike [more important services/products like] medicine and real estate, computer chips have improved dramatically in cost/function over time, so I don't get the customer exploitation angle here either.

silence_kit fucked around with this message at 10:31 on Oct 9, 2021
silence_kit posted:If computer chip companies only charged something like marginal production cost +10%, they wouldn't be able to bankroll the continual design and improvement of their products, and posters in this thread would be upset and would be making many more posts about how they think computer chips should go faster than they currently do.

"but we can't fire the wizards who hold all the secrets of thinking sand because they're the only wizards we have and--" no. gently caress you. suck less. bankrolling more didn't do poo poo
Sidesaddle Cavalry posted:computer chips should go faster than they currently do. intel deserves far more punishment for haswell thru coffee lake than they got away with.

No, you don't get my point. If, in the early days of computer chips, the government were to set a price ceiling on the computer chip companies' products at marginal production cost +10% or whatever, we would not have products anywhere close to what we enjoy today. The R&D to dramatically improve the cost/function would never have happened, or would have happened at a dramatically slower pace. New designs would not be coming out every year; it would be much, much slower than that. We would be using 1980s computer chip technology today.

If they were to instead set the price ceiling today, we would probably stop seeing the release of new computer chip products--it would be wildly uneconomical to release a new product or to improve the technology any more if they could only set a sales price of a couple of bucks for a computer chip.

edit: The type of entitlement displayed in the quoted post I just don't get. Relatively speaking, the consumer electronics industry has been incredibly great for customers. It would make more sense to me for people to get upset at agri-businesses or the medical industry or the real estate industry for not lowering prices year over year.

silence_kit fucked around with this message at 11:16 on Oct 9, 2021
Oops, I meant *Ivy Bridge* through *Rocket Lake*
Sidesaddle Cavalry fucked around with this message at 11:09 on Oct 9, 2021
Since consumers aren't the CPU manufacturers' primary customers, and haven't been for a long time, I think at the heart of it is the issue of whether you think the company is going to try and screw over the consumer or not.
BlankSystemDaemon posted:I think at the heart of it is the issue of whether you think the company is going to try and screw over the consumer or not.

They are. No exceptions!
silence_kit posted:Why is the bolded part an important distinction? I don't get it. Why is it ok in your mind for software and media to do this, but not computer chips?

tehinternet posted:I have a car. The car has eight functional cylinders, but only uses four unless you pay money to get the dealer to send the code to activate the cylinders. This metaphor doesn't really work because of how CPUs are made from larger wafers and binned but yeah.

Quoted the post from earlier that you ignored. My objection isn't some vague sense of "entitlement." It's literally that they're building things--things that are basically the most difficult things to make on the planet--that can run at 100%, and then wasting that capability. It's about as close to the definition of waste as you can get.

I don't know how else I can explain it, since you've been kind of aggressive from the start about "internet nerds" and their "entitlement" while ignoring the point I was making by comparing it to software. Which is not physical. Which I've already said my objection was.
tehinternet posted:My objection isn't some vague sense of "entitlement." It's literally that they're building things --that are basically the most difficult things to make on the planet-- that can run at 100% and then wasting that capability. It's about as close to the definition of waste as you can get.

And I say yet again, focusing on the physical aspect of computer chips isn't really understanding the product. In some ways, computer chips aren't hard at all to produce. In volume, after the R&D and production set-up cost, they are very inexpensive to produce. It doesn't cost computer chip companies $100+ to make one more chip. It is likely O($1). Computer chips are closer to software than other physical goods. In many ways, they are similar to physical media. Similarly, software companies and media companies are 'wasting' all of the effort that they put into designing and making their products when they don't give them away to everybody for free.

tehinternet posted:while ignoring the point I was making by comparing it to software. Which is not physical. Which I've already said my objection was.

I understand that you like to make this distinction between physical goods and software/media, but it seems totally arbitrary IMO. And the distinction you make between binning vs. the company just selling you a naturally worse product and/or putting a markup on the product, again, seems arbitrary.

silence_kit fucked around with this message at 15:33 on Oct 9, 2021
without getting into the slippery-slope argument of "this is going to lead to CPUs-as-a-subscription-service", my apprehension about an "unlocking model" is more towards what happens when parts get shifted around and/or when internet access is no longer a given
gradenko_2000 posted:without getting into the slippery-slope argument of "this is going to lead to CPUs-as-a-subscription-service", my apprehension about an "unlocking model" is more towards what happens when parts get shifted around and/or when internet access is no longer a given

I don't really think you can worry about that without sliding into "Intel subscription-loving consumers!" slope arguments: the only market segments that they're likely to target with this stuff are megacorps where internet access will always be a given, and the only resale will be far enough down the line that they'd only be interesting to the types of people who today are thinking that buying a Dell 710 rack is a good idea.

Even if it did work its way down that far, I'd assume the answer would be something like it has to phone home every 7 or 30 days or whatever to keep the DLC going, so you don't have problems just because your network dropped for 5 minutes or whatever. Or it just reverts to the non-DLC mode and you get to chug along without those extra cores until you get your internet back.

For resale, though, yeah, that'd be an interesting question: does the DLC permanently unlock the features via some writable bit within the CPU? Or are you now trying to sell a CPU + intel.com account to go along with it? Worries for another day, at least.
gradenko_2000 posted:without getting into the slippery-slope argument of "this is going to lead to CPUs-as-a-subscription-service", my apprehension about an "unlocking model" is more towards what happens when parts get shifted around and/or when internet access is no longer a given

"CPUs as a service" is called AWS ec2 and some MBAs at Intel are probably furious they don't get that revenue
I don't think the car analogy even works. I have a Mazda 3 with a 2.0 engine and some 160ish hp, but if I bought a lower trim one I'd get the exact, actually identical engine, except it'd put out 120 hp max.
hobbesmaster posted:"CPUs as a service" is called AWS ec2 and some MBAs at Intel are probably furious they don't get that revenue

TBF, given AWS's profit margins and the fact that they are aggressively working on their own CPUs which (if you can use them) massively undercut Intel on price... yeah. Intel should be furious. That's a ton of money AWS is actively trying to shut Intel out of in the coming years.

But remember that no one in this thread gives a gently caress if Mr. Corporate Guy moves to CPU-as-a-service/-subscription. Businesses can gently caress each other all day, cool and fine. The resistance is to Joe Home User getting roped into CPU-as-a-service/-subscription, on the assumption that Joe ends up losing out vs the status quo.
mmkay posted:I don't think the car analogy even works. I have a Mazda 3 with a 2.0 engine and some 160ish hp, but if I bought a lower trim one I'd get the exact, actually identical engine, except it'd put out 120 hp max.

Honest question: If the power train is identical, what is the mechanism used to reduce hp on the lower end models?
DrDork posted:I don't really think you can worry about that without sliding into "Intel subscription-loving consumers!" slope arguments: the only market segments that they're likely to target with this stuff are megacorps where internet access will always be a given and the only resale will be far enough down the line that they'd only be interesting to the types of people who today are thinking that buying a Dell 710 rack is a good idea.

hobbesmaster posted:"CPUs as a service" is called AWS ec2 and some MBAs at Intel are probably furious they don't get that revenue

you know what I didn't look into the original article all too closely and if this is only targeted at the data center/server segment then heck I ain't too bothered
Freedom Trails posted:Honest question: If the power train is identical, what is the mechanism used to reduce hp on the lower end models?

There are a couple of options: you can put in an ECU which is simply software-set to tell the engine to run at less than its maximum possible output. Consequently, there's a decent market for ECU swap kits to let you "unlock" such hobbled performance. Other options are more physical, like attaching the engine to smaller-diameter intakes or exhaust lines, which will effectively starve the engine of required oxygen and thus force it to scale back. Often fixable by swapping in better parts as well (and a regular contributor to the silly mod community).

gradenko_2000 posted:you know what I didn't look into the original article all too closely and if this is only targeted at the data center/server segment then heck I ain't too bothered

Yeah, no one's really upset about what has been announced so far. It's the worry about what it could mean down the line if they decide to give it a go with consumers, rather than big corporate customers.

DrDork fucked around with this message at 16:01 on Oct 9, 2021
Freedom Trails posted:Honest question: If the power train is identical, what is the mechanism used to reduce hp on the lower end models?

No idea, but I remember seeing a graph where the lower trim would flatline in power at like 3500 rpm, while the faster version would keep increasing.
mmkay posted:No idea, but I remember seeing a graph where the lower trim would flatline in power at like 3500 rpm, while the more faster version would keep increasing.

Is that the skyactiv g vs x? They're different engines despite having the same displacement. https://en.wikipedia.org/wiki/Skyactiv?wprov=sfti1
hobbesmaster posted:Is that the skyactiv g vs x? They're different engines despite having the same displacement.

They were both the G versions:
https://www.automobile-catalog.com/curve/2018/2506745/mazda_3_2_0_skyactiv-g_120.html
https://www.automobile-catalog.com/curve/2018/2506775/mazda_3_2_0_skyactiv-g_165_i-eloop.html
Oh I see they pulled that in Europe. Fuel economy thing?
I have no idea why, but it was probably either that or market segmentation
Are you sure that they aren't including the effective HP and torque from the onboard electric motors in the total power/torque output available?
Freedom Trails posted:Honest question: If the power train is identical, what is the mechanism used to reduce hp on the lower end models?

aside from what was already mentioned:
- valve timing/lift
- compression ratio
- rev limit
hiya, i'm trying to overclock my reliable old 4690k. i want to do a mild overclock to 4.2ghz, nothing huge, mostly just to help with playing kerbal space program as that game is mostly cpu.

trouble is, the overclocking options in my z97 uefi just aren't sticking for some reason. or rather, some are: i upped the voltage to 1.10 as a start and that sticks, but the actual clock speed isn't, it's still at 3.5ghz no matter what i do. i tried using the auto settings built in for different overclocks, that didn't work (tried both my target of 4.2, as well as every other option), i tried manually changing poo poo, that didn't work, i followed a step-by-step video walkthrough, that didn't work.

i haven't tried overclocking a cpu in some 20 years so it's possible (extremely likely) that i'm just a fuckin idiot, but does anyone have any good resources, or any idea why the hell the clock speed won't change but the voltage does no problem?
DEEP STATE PLOT posted:hiya i'm trying to overclock my reliable old 4690k. i want to do a mild overclock to 4.2ghz, nothing huge, mostly just to help with playing kerbal space program as that game is mostly cpu. trouble is, the overclocking options in my z97 uefi just aren't sticking for some reason. or rather, some are, i upped the voltage to 1.10 as a start and that sticks, but the actual clock speed isn't, it's still at 3.5ghz no matter what i do.

How are you concluding that the clock speed isn't changing? If you're booting into Windows and finding your CPU is not clocked as expected, and you aren't running any CPU clock modification software, then you might be getting screwed by Microsoft.

As a mitigation against various CPU exploits, Microsoft put out several microcode patches, delivered as DLLs that load new microcode at boot. With those loaded, the BIOS clock settings just don't get applied. This problem definitely hits Haswell-E and Broadwell-E, as it breaks my overclock, but I wasn't aware it affected Haswell too. To fix this, you have to boot into Safe Mode and rename the System32\mcupdate_GenuineIntel.dll file so it can't be loaded. Future Windows patches will probably cause this to be undone, and you'll have to keep doing this to use an overclock.

Newer systems aren't hit by this, because the CPUs either have mitigations baked into the silicon, or the motherboard vendors include them in the microcode. With these old systems, the motherboard vendor obviously does not go back and make BIOS patches to fix these security bugs, so Microsoft took this step.
Riflen posted:How are you concluding that the clock speed isn't changing?

the about page for my pc displays 4690k 3.5ghz 3.5ghz (it should display 3.5ghz and then whatever the overclock is after that), the clock speed in task manager never rises above 3.49ghz, and cpu-z displays my clock speed as 3.5 ghz

i'll try the rest of what you said and report back. if i have to keep fuckin with things over time to keep the overclock, that's irritating, but whatever i guess.
DEEP STATE PLOT posted:the about for my pc displays 4690k 3.5ghz 3.5ghz (it should displays 3.5ghz and then whatever the overclock is after that), the clock speed on task manager never rises above 3.49ghz, and cpu-z displays my clock speed as 3.5 ghz

Task Manager and CPU-Z are reading fixed info about your CPU specs, I believe. Use something like HWiNFO to read the actual realtime clock under Windows. Doesn't seem like you need to rename DLLs just yet.
Gamers Nexus always says to validate with actual output. If you think your overclock is or isn't taking, do a control run and an experimental run of something like Cinebench to confirm.
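If you don't have Cinebench handy, here's a minimal sketch of the same idea: time an identical fixed CPU-bound workload before and after applying the overclock and compare the elapsed times. (`cpu_benchmark` and its workload are made up for illustration; it's a crude stand-in, not a real benchmark.)

```python
import time

def cpu_benchmark(n=2_000_000):
    """Time a fixed CPU-bound workload; lower elapsed time = faster effective clocks."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += int(i ** 0.5)  # arbitrary integer math; identical work every run
    elapsed = time.perf_counter() - start
    return total, elapsed

# Run once at stock settings and once with the overclock applied, then compare.
checksum, secs = cpu_benchmark()
print(f"checksum={checksum} elapsed={secs:.3f}s")
```

If the overclock is actually taking, the elapsed time should drop roughly in proportion to the clock increase (3.5 → 4.2 GHz would be around 17% faster).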
Riflen posted:Task Manager and CPU-Z are reading fixed info about your CPU specs I believe.

Both of those applications read real-time info.
Although we saw some wacky speed leaks, looks like production specs are pretty muted to start with, which isn't a surprise obvi.
Racing Red.
Cygni posted:Although we saw some wacky speed leaks, looks like production specs are pretty muted to start with, which isn't a surprise obvi.

question is how will amd do with those timings because they seem like big numbers.
How does the switch to DDR5 actually affect performance? Does it do anything to be faster than DDR4 at the same speeds and timings, or are they directly comparable to one another? What I mean is, is DDR5-4800 at those timings as terrible as it looks on paper, or is there a secret sauce that makes it good?
Both my crystal ball and time machines are in the shop, looks like we'll have to wait 😥
We’ll find out in a month! (Or maybe sooner with all the relentless leaking)
Dr. Video Games 0031 posted:How does the switch to DDR5 actually affect performance?

As a general rough rule, one frequency "step" (~266-333MT/s) has been worth about 2 CAS of latency. That is, DDR4 2400@17, 2666@19, and 2933@21 all have pretty much the same latency, and therefore 2933@21 will offer a roughly 20% improvement in sustained speeds over 2400@17 from the higher bandwidth at flat latency.

So if that still holds, 4800@40 should have a clock cycle time of 0.42ns, giving it a latency of 16.66ns, which is pretty bad since commodity DDR4 3800@18 has a latency of ~9.5ns, to the point that it's reasonable to assume that such a DDR5 stick would perform poorly vs DDR4 in some workloads that are latency sensitive, and better in those which are just large blocks of sequential calls where the extra frequency bandwidth would take over. But DDR5 also uses different burst lengths that are supposed to improve efficiency, and will have two 40b channels instead of just one 72b channel, so who knows.

Also note that even on that small spreadsheet, 4800@40 is clearly the poo poo-tier pick. 5600@38 drops latency down to 13.5ns, which is pretty much on par or better than what you'd get from 2400@17, which is more or less where DDR4's 1st gen started, and then you add in the other efficiencies and more than double the bandwidth and it probably performs pretty damned well. It'll also be $$$.
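The arithmetic above is easy to sanity-check yourself: CAS latency in nanoseconds is just CL multiplied by the clock period, and the DRAM clock runs at half the MT/s transfer rate. A quick sketch (the kits are the ones discussed in the thread, not a recommendation):

```python
def cas_latency_ns(mt_per_s: float, cl: int) -> float:
    """First-word CAS latency in ns: CL cycles at the DRAM clock (half the MT/s rate)."""
    clock_mhz = mt_per_s / 2      # DDR transfers twice per clock
    cycle_ns = 1000 / clock_mhz   # clock period in nanoseconds
    return cl * cycle_ns

for kit, (mts, cl) in {
    "DDR4-2400 CL17": (2400, 17),
    "DDR4-3800 CL18": (3800, 18),
    "DDR5-4800 CL40": (4800, 40),
    "DDR5-5600 CL38": (5600, 38),
}.items():
    print(f"{kit}: {cas_latency_ns(mts, cl):.2f} ns")
# → DDR4-2400 CL17: 14.17 ns, DDR4-3800 CL18: 9.47 ns,
#   DDR5-4800 CL40: 16.67 ns, DDR5-5600 CL38: 13.57 ns
```

Which matches the numbers in the post: JEDEC DDR5-4800 CL40 is well behind tuned DDR4 on first-word latency, while 5600@38 claws most of it back.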
Dr. Video Games 0031 posted:How does the switch to DDR5 actually affect performance? Does it do anything to be faster than DDR4 at the same speeds and timings, or are they directly comparable to one another? What I mean is, is DDR5-4800 at those timings as terrible as it looks on paper, or is there a secret sauce that makes it good?

The timings are expressed in clock cycles, so it's not as bad as it looks, since the RAM is clocked higher to compensate. The 4800 CL40 stuff will apparently be the middle of 3 JEDEC specs for that frequency, at ~16.7ns. Worse than the top DDR4 bin of 3200 CL22 (13.75ns), but the 4800 CL34 spec gets close. Who knows what kind of timings will be available with XMP and a voltage increase.

Anandtech put out a decent chart comparing latency for each frequency/timing combination: https://www.anandtech.com/show/16143/insights-into-ddr5-subtimings-and-latencies
Sidesaddle Cavalry posted:computer chips should go faster than they currently do. intel deserves far more punishment for haswell thru coffee lake than they got away with.

Sidesaddle Cavalry posted:Oops, I meant *Ivy Bridge* through *Rocket Lake*

haswell was really good tho? do you people not like AVX2 and significantly improved game frametimes for some reason? And hexacore DDR4 HEDT haswell-E for $320 owned bones.

Riflen posted:How are you concluding that the clock speed isn't changing? If you're booting into Windows and finding your CPU is not clocked as expected and aren't running any CPU clock modification software, then you might be getting screwed by Microsoft. As a mitigation against various CPU exploits, Microsoft put out several microcode patches. These DLLs just don't apply the BIOS clock settings. This problem definitely hits Haswell-E and Broadwell-E as it breaks my overclock, but I wasn't aware it affected Haswell too.

I actually haven't observed this on my X99 FTW K boards; windows reads them as 4.13 GHz, which is my set overclock. I dunno what BIOS exactly I'm running, maybe that does help, but X99 as a whole isn't dead just because of smeltdown.

Paul MaudDib fucked around with this message at 02:04 on Oct 13, 2021
Paul MaudDib posted:haswell was really good tho? do you people not like AVX2 and significantly improved game frametimes for some reason? And hexacore DDR4 HEDT haswell-E for $320 owned bones.

I think the point was that Haswell was the last "Oh wow, this is actually a real good upgrade off the last generation" chip that Intel put out, and everything since then has felt like just coasting.