|
AMD apple machines: dung.
|
# ? Apr 23, 2016 01:08 |
|
|
# ? May 5, 2024 00:20 |
|
Intel: isn't it neat to enjoy lasagne
|
# ? Apr 23, 2016 01:36 |
|
arm performance is like an order of magnitude lower than x86 though. i mean, yeah, they can probably really get the chip guys to make a monster A10X, considering you could make it twice as big and back it with a 38 watt-hour battery
|
# ? Apr 24, 2016 05:21 |
|
BiohazrD posted:arm performance is like an order of magnitude lower than x86 though Wow, look at how ignorant you are. lol
|
# ? Apr 24, 2016 06:24 |
|
BiohazrD posted:arm performance is like an order of magnitude lower than x86 though If that's true then why does candy crush run great on my phone but my piii can't handle crysis on high? Check and mate, idiot.
|
# ? Apr 24, 2016 13:18 |
|
the point is that there is a likely future where intel sells only server and high-end workstation chips and gets continually pushed into a narrower and narrower niche. the last chip architectures that happened to were Alpha/SPARC/etc, when Intel's dominance in the low end allowed them to move up-market and eventually take over the whole thing. of course, part of that was that clock speed and density were still going up absurdly fast then, whereas now clock speed scaling ended a while ago and density scaling is starting to slow down
|
# ? Apr 24, 2016 16:25 |
|
DuckConference posted:of course part of that was that clock speed and density were still going up absurdly fast then, whereas now clock speed scaling has ended a little while ago and density scaling is starting to slow down Yeah, this is pretty important to point out, and because of it, the Alpha/SPARC historical analogies may not apply. We are now living in a world, unlike the 90's, where integrated circuit technology isn't improving very quickly, or maybe much at all, so it is harder and slower for other companies to gain a technical edge and take Intel's business. Edit: On the other hand, I have heard Intel described as a bunch of jocks who have been riding on their superior integrated circuit manufacturing technology for 30 years, and that their designs really would not be that great without their manufacturing advantage. I have no idea if that is true (I'm not a digital chip guy), but if it is, it could be scary for Intel when their manufacturing technology levels out and other integrated circuit foundries achieve the same levels of performance. silence_kit fucked around with this message at 17:44 on Apr 24, 2016 |
# ? Apr 24, 2016 17:24 |
|
the slowdown in massive technology gains means intel's performance lead is rapidly shrinking, and servers are nowhere near as architecture-locked as consumer tech because most companies can easily recompile / port their server code. intel chips burn a lot of complexity (i.e. die space, i.e. power) on supporting terrible legacy features and working around the limitations of the ISA. their large server clients are already using non-commodity chips that remove some of the crap, but they can't simplify encodings or make the memory ordering weaker. windows is never abandoning x86, and i wouldn't discuss apple's plans for the mac here even if i knew them, but it is very easy to imagine intel's dominance in servers disappearing over the next decade
|
# ? Apr 24, 2016 18:21 |
|
rjmccall posted:servers are nowhere near as architecture-locked as consumer tech because most companies can easily recompile / port their server code Oh, I didn't realize that. So if some other company were to come out with a server chip tomorrow with a different ISA which was the same cost, but had ~25% more favorable performance/power capability, would everybody be willing to put in the work to rework their server software to take advantage of that 25%? rjmccall posted:intel chips burn a lot of complexity (i.e. die space, i.e. power) on supporting terrible legacy features and working around the limitations of the ISA. their large server clients are already using non-commodity chips that remove some of the crap, but they can't simplify encodings or make the memory ordering weaker Is this translation really that much of a penalty? I've heard this said a lot among computer hobbyists and I've also heard from the same crowd that it doesn't really matter that much. I think I've read your posts in the C/C++ thread and you seem to know a lot about compilers and maybe this kind of stuff, so maybe you have more insight about this than the guys who read Ars Technica/Tom's Hardware articles. rjmccall posted:windows is never abandoning x86 Is this just because of the amount of labor that it would take to rewrite all of the software to a different chip, or is it because of their business relationship? Why do you say this?
|
# ? Apr 24, 2016 20:26 |
|
silence_kit posted:Yeah, this is pretty important to point out, and because of that, the Alpha/SPARC historical analogies may not apply. We are now living in a world, unlike the 90's, where integrated circuit technology isn't improving very quickly or maybe much at all, and so it is harder and a slower process for other companies to gain a technical edge and come in and take Intel's business. Progress slowing down gives competitors time to catch up. TSMC are talking about bringing out 7nm chips next year when Intel hope to finally get to 10nm.
|
# ? Apr 24, 2016 20:30 |
|
ConanTheLibrarian posted:Progress slowing down gives competitors time to catch up. TSMC are talking about bringing out 7nm chips next year when Intel hope to finally get to 10nm. I've been told that Xnm now is just a marketing number. It doesn't really mean anything. TSMC's 7nm technology may have a worse speed/power/area figure of merit than Intel's 10nm technology.
|
# ? Apr 24, 2016 20:42 |
|
silence_kit posted:Is this translation really that much of a penalty? I've heard this said a lot among computer hobbyists and I've also heard from the same crowd that it doesn't really matter that much. I think I've read your posts in the C/C++ thread and you seem to know a lot about compilers and maybe this kind of stuff, so maybe you have more insight about this than the guys who read Ars Technica/Tom's Hardware articles. x86 is really bad for the kinds of optimization that modern processors do. a modern core wants to reorder and parallelize as many instructions as it can, and speculate ahead before dependencies are resolved. x86 is basically made to thwart this. on the other hand arm64 is an instruction set architecture specifically designed for this kind of optimization
|
# ? Apr 24, 2016 20:46 |
|
silence_kit posted:Is this just because of the amount of labor that it would take to rewrite all of the software to a different chip, or is it because of their business relationship? Why do you say this? Windows supports 20 years or more of legacy software, much of which is still in use. Windows moving to another architecture and abandoning x86 would be catastrophic.
|
# ? Apr 24, 2016 20:48 |
|
The Management posted:x86 is really bad for the kinds of optimization that modern processors do. a modern core wants to reorder and parallelize as many instructions as it can, and speculate ahead before dependencies are resolved. x86 is basically made to thwart this. on the other hand arm64 is an instruction set architecture specifically designed for this kind of optimization Why are x86 chips so much faster then? Is it just that no one has bothered making a 3.5 GHz ARM processor?
|
# ? Apr 24, 2016 21:14 |
|
Citizen Tayne posted:Windows supports 20 years or more of legacy software, much of which is still in use. Windows moving to another architecture and abandoning x86 would be catastrophic. yep, the only real solution is to centralize the content and present the results in a platform-agnostic format. in other words, have your software on an x86 server, and let users link in via a thin client or whatever. that's still bad for intel though, because they're selling fewer processors
|
# ? Apr 24, 2016 21:25 |
|
computer parts posted:in other words, have your software on an x86 server, and let users link in via a thin client or whatever. Ah, the X11 solution, we know how popular that's been over the past thirty years.
|
# ? Apr 24, 2016 21:32 |
|
silence_kit posted:Oh, I didn't realize that. So if some other company were to come out with a server chip tomorrow with a different ISA which was the same cost, but had ~25% more favorable performance/power capability, would everybody be willing to put in the work to rework their server software to take advantage of that 25%? major data center owners, absolutely. porting to a new architecture isn't necessarily a huge amount of work, even for c/c++, especially if you've already ported to an architecture with similar type sizes. porting the entire os or tool chain is a pain in the rear end, but usually the architecture vendor will ensure that there's a compiler for it and that linux will boot. keep in mind that while some data centers are like aws, running massively divergent software on each machine, the ones doing stuff like fielding siri queries and indexing web pages for google are actually running a pretty small amount of custom software. in fact i know that some companies proactively port their software to different architectures just to see how the performance is and maybe get some additional leverage against intel in contract negotiations silence_kit posted:Is this translation really that much of a penalty? I've heard this said a lot among computer hobbyists and I've also heard from the same crowd that it doesn't really matter that much. I think I've read your posts in the C/C++ thread and you seem to know a lot about compilers and maybe this kind of stuff, so maybe you have more insight about this than the guys who read Ars Technica/Tom's Hardware articles. this is what i hear from intel engineers, yes: they are all aware that the ISA costs them a lot, but there isn't much they can do because the marketing advantages of sticking with x86 are way too big. i mean, it's not like it costs them cycles per issue, but that's because they take it as an engineering constraint to avoid that, and meeting that constraint costs them in transistors silence_kit posted:Is this just because of the amount of labor that it would take to rewrite all of the software to a different chip, or is it because of their business relationship? Why do you say this? third-party software that will never get ported
|
# ? Apr 24, 2016 21:38 |
|
Citizen Tayne posted:Ah, the X11 solution, we know how popular that's been over the past thirty years. wyse hdx is kinda rather nice
|
# ? Apr 24, 2016 21:50 |
|
graph posted:wyse hdx is kinda rather nice I didn't even know the Wyse name was still around. wild.
|
# ? Apr 24, 2016 21:58 |
|
silence_kit posted:I've been told that Xnm now is just a marketing number. It doesn't really mean anything. TSMC's 7nm technology may have a worse speed/power/area figure of merit than Intel's 10nm technology. pretty much. intel is claiming that their 14nm is way better than the Samsung/TSMC 14/16nm; expert consensus seems to be that it is indeed somewhat better. it's harder to compare than you'd think, because intel's chips include a lot of tall logic cells, which are basically gates that take up more area in order to switch faster, and also have different ratios of SRAM to logic etc. that said, the foundries' 10nm will definitely be more dense and likely better overall than Intel's 14nm, and will likely beat Intel's 10nm to volume manufacturing. TSMC is planning to bring 7nm up pretty quickly in 2018 as well, but that roadmap includes EUV, so take it with a grain of salt. wonder when intel will finally start making Altera's chips on their 14nm process.
|
# ? Apr 25, 2016 00:19 |
|
The Management posted:x86 is really bad for the kinds of optimization that modern processors do. a modern core wants to reorder and parallelize as many instructions as it can, and speculate ahead before dependencies are resolved. x86 is basically made to thwart this. on the other hand arm64 is an instruction set architecture specifically designed for this kind of optimization yeah, the arm64 instruction set maps nicely to the seq_cst memory model without incurring extra overhead on non-atomic operations. x64 is still way better than the nightmare that is trying to do proper atomic synchronization on the powerpc though.
|
# ? Apr 25, 2016 00:31 |
|
Citizen Tayne posted:I didn't even know the Wyse name was still around. wild. dell bought them two years ago and with the emc merger theyre on the brink of hyperconverged vdi-rollout-in-a-box
|
# ? Apr 25, 2016 02:03 |
|
The_Franz posted:x64 is still way better than the nightmare that is trying to do proper atomic synchronization on the powerpc though. hey now, all you need is lwarx, stwcx and a copy of PowerISA Book II Appendix B. the new version of the isa reference manual has actual atomic memory instructions in it. no idea when hardware is going to be available though. rjmccall posted:in fact i know that some companies proactively port their software to different architectures just to see how the performance is and maybe get some additional leverage against intel in contract negotiations
|
# ? Apr 25, 2016 05:01 |
|
The Management posted:x86 is really bad for the kinds of optimization that modern processors do. a modern core wants to reorder and parallelize as many instructions as it can, and speculate ahead before dependencies are resolved. x86 is basically made to thwart this. on the other hand arm64 is an instruction set architecture specifically designed for this kind of optimization ehhhhhhhh. one reason why x86 beat the poo poo out of powerpc in the 90s/early oughts is that x86 isn't nearly as terrible as people wanted it to be: as of i386 it evolved into a mostly sane isa (if you squint a bit), and x86-64 dealt with the worst of the remaining probs. it's bad, but from a hardware implementation point of view only mildly bad, not so bad it can't be worked with, especially if (like intel has been doing for 20+ years) you compartmentalize all the truly weird mid-80s drug trip features that nobody uses into a "we dont care if this performs like poo poo" box. in practice x86 cores have the most sophisticated and aggressive reordering/parallelizing/speculation of any mass market cpu core, so i'd be real curious to know what there is in x86-64 which "thwarts" this. arm cores could be designed like that; the problem is finding a market lucrative enough to get someone to burn a few truckloads of money developing such cores. apple found it worthwhile to do custom arm cores for the cellphone/tablet market, where they sell 20, 30, 40 million of the things annually, and those are the most sophisticated out-of-order arm cores known, but they're not really in the x86 weight class if you account for target frequency. even with apple economics it's an interesting question whether it's worthwhile to also build chips with true x86-class performance. 
they don't sell enough macs, and they aren't going to want to license their cores to others, so it might actually cost them extra money to use their own arm chips, and there's also the considerable cost of asking users and developers to migrate isa for (probably) little or no perf gain. the carrot which made powerpc -> x86 work was a massive perf gain, especially in their bread-and-butter products (laptops). there are also the guys who are trying to do arm server chips, but those have been abject failures so far. the ones based on off-the-shelf arm holdings cores are doomed to fail; the ones based on custom cores have to deal with the killer combo of intel's built-in economic advantages (such as the desktop market subsidizing super advanced cpu cores that can also own bones in the server world) and the fact that x86 is the incumbent and asking customers to port isn't easy. (despite what's been said itt this is not a trivial thing even in servers) also, linus torvalds has brainwashed me and i agree with him that the x86 memory ordering model is cool and good. this is something a lot of riscs got wrong; it's a case where trying to make the hardware implementation easy ends up being really bad
|
# ? Apr 25, 2016 06:48 |
|
broken clock opsec posted:Nah they'll just repurpose ios apps for desktop use. You can already see them eyeing that direction with ipad pro. microsoft was right????
|
# ? Apr 25, 2016 16:39 |
|
PleasureKevin posted:* AMD is releasing Zen this summer and it will perform as well as Skylake for less
|
# ? Apr 25, 2016 17:05 |
|
BobHoward posted:also linus torvalds has brainwashed me and i agree with him that the x86 memory ordering model is cool and good. this is something a lot of riscs got wrong, it's a case where trying to make the hardware implementation easy ends up being really bad nah
|
# ? Apr 25, 2016 17:17 |
|
My Linux Rig posted:microsoft was right???? They just patented a hybrid tablet device with a 2nd, folding, touchscreen, so I actually do think that's the way they want to go.
|
# ? Apr 25, 2016 17:38 |
|
but how is Apple going to ditch Intel when they hold all the keys on Thunderbolt? The only thing that makes sense about the USB-C Macbook is that it's ~*probably*~ the test model for ditching MagSafe and just charging everything via TB3.
|
# ? Apr 25, 2016 18:06 |
|
Hemick posted:but how is Apple going to ditch Intel when they hold all the keys on Thunderbolt? it's precious that you think this
|
# ? Apr 25, 2016 19:07 |
|
lol yeah all those people tripping over themselves to license thunderbolt lmao
|
# ? Apr 26, 2016 01:34 |
|
The Management posted:it's precious that you think this
|
# ? Apr 26, 2016 02:38 |
|
intel finally took the atom out back and shot it also i saw a prototype of their x86 phone once. it ran android about as well as any other midrange commodity arm smartphone, i.e. it was totally pointless
|
# ? Apr 30, 2016 05:35 |
|
papa_november posted:intel finally took the atom out back and shot it just the atom for phones, the one for lovely laptops is staying around http://www.anandtech.com/show/10288/intel-broxton-sofia-smartphone-socs-cancelled
|
# ? Apr 30, 2016 07:15 |
|
I just jacked off to all the people getting fired.
|
# ? Apr 30, 2016 07:33 |
|
DuckConference posted:just the atom for phones, the one for lovely laptops is staying around Good, lappy atoms are decent, I have an atom w10 tablet and it's nice and quiet cuz of passive cooling
|
# ? Apr 30, 2016 14:23 |
|
papa_november posted:intel finally took the atom out back and shot it it was less than pointless because Intel had to subsidize the price of the chips, design the phone motherboard, port the OS themselves, and host translated binaries of native code apps to get anyone to use them. basically they proved that if you save them enough money, you can get bad Chinese OEMs to go along with anything. when they stopped the subsidies they had zero repeat customers
|
# ? Apr 30, 2016 14:31 |
|
maniacdevnull posted:I have an atom w10 tablet lol
|
# ? Apr 30, 2016 14:32 |
|
so what's the story for things like Intel Edison and Arduino 101, did they make the cut?
|
# ? Apr 30, 2016 23:10 |
|
|
H.P. Hovercraft posted:remember back in the day when you had to buy a new video card every 6 months to play new stuff I'm considering upgrading to a new CPU and motherboard because I've run out of PCI Express lanes.
|
# ? May 1, 2016 00:19 |