|
thebigcow posted:I am way out of date on things but basically Ahh yea I remember that article now. I wonder if win10 + increased number of CPU driven PCIe lanes will help out with that.
|
# ? Jun 3, 2015 22:05 |
|
|
# ? May 18, 2024 16:18 |
|
Intel, over the last twenty years, has murdered POWER, MIPS, SPARC, ALPHA, etc. in the high-margin low-volume server space because they've had a lead in fab process. They can justify the huge expense of new fabs by using that fab for large quantities of commodity silicon in low-margin desktops, laptops, etc. If TSMC/GF/Samsung/whoever can capture a significant fraction of the high-volume market they have, Intel loses that advantage. What happens when ARM manufacturers start using that design expertise they've obtained from shipping billions of units and a competitive process to push into the higher-margin spaces? Also, while there's no question that Intel is dominant now, there are a significant number of people in the hyperscale and HPC spaces who are uncomfortable with a monopoly supplier; see for example Google's work with POWER8. Twerk from Home posted:I should have just used 4ghz and 16ghz, I meant I'd rather have all the cycles on one core rather than across several. Also, all my thoughts have been purely from the software end and ignoring chip design itself, I know absolutely nothing about it. Ignoring the thermal impossibility, there are some real downsides to higher clock rates. Context switches between threads are expensive as hell, especially if the code isn't already in another register bank. You spend a lot more clock cycles waiting for data on/off chip. For higher clock rates you need fewer transistors per stage, which means simpler stages, which means long-rear end pipelines, which means you lose many steps when you have a branch mis-prediction or other mis-optimization. Etc.
|
# ? Jun 3, 2015 22:09 |
|
PCjr sidecar posted:Intel, over the last twenty years, has murdered POWER, MIPS, SPARC, ALPHA, etc. in the high-margin low-volume server space because they've had a lead in fab process. They can justify the huge expense of new fabs by using that fab for large quantities of commodity silicon in low-margin desktops, laptops, etc. If TSMC/GF/Samsung/whoever can capture a significant fraction of the high-volume market they have, Intel loses that advantage. What happens when ARM manufacturers start using that design expertise they've obtained from shipping billions of units and a competitive process to push into the higher-margin spaces? I dunno, what does happen? Is Samsung just that much better at designing CPUs than Intel (somehow I doubt this) / is ARM such an inherently superior architecture that if they owned Intel's 14 and 10nm fabs suddenly we'd have a new multi-generational leap forward in performance? Nothing out there benchmark-wise seems to indicate that, but people repeat stuff like what you've just said all the time. So what's the big upside to either the server market or desktop market for an ARM or POWER design made on something like Intel's 14nm process? I look at a review like this one: http://www.anandtech.com/show/8357/exploring-the-low-end-and-micro-server-platforms/17 and see that for scaleout type workloads it ends up with low power 22nm Haswell Xeons still beating out Intel's own ultra low power Atom scaleout platform in terms of absolute performance and performance per watt (when both chips are on the same process tech, too). The ARM chip tested here was still on a 40nm process so it's no surprise at all that it gets clowned all over in power use terms by the Atom that it's competing against, but on the other hand it did match the Atom's performance almost exactly. So assuming it's built on the same process, power usage should come in line as well.
So does it just boil down to "ARM at process parity will provide actual performance competition and be a better market driver of chip price and performance across the board?" That's certainly been the case for Atom, since Intel let Atom sit on an ancient dogshit performance core and process for something like 6+ years while ARM partner chips filled in all the billions of tablets/phones/chromebooks/whatever content consumption devices that have been sold since 2007. On the other hand, what does an ARM competitor to a real Xeon in the server market even look like? People like that their existing chips are cheap and (when on current processes) low power, but their performance/watt obviously isn't anywhere near current Xeons, and to get there would they end up needing a similar ballpark number of transistors? Obviously a totally different chip, but you can certainly see POWER8 competing in raw performance with the latest Xeon E7, but at the cost of much higher TDP (over 200W for the highest core count/frequency models), and that's on 22nm SOI vs Intel's 22nm FinFET process.
|
# ? Jun 3, 2015 23:09 |
|
PCjr sidecar posted:What happens when ARM manufacturers start using that design expertise they've obtained from shipping billions of units and a competitive process to push into the higher-margin spaces? re: question #1, see ARM comments here http://www.realworldtech.com/compute-efficiency-2012/3/ and here http://www.realworldtech.com/microservers/4/ HPC would certainly welcome a competitor to Intel. The government abhors a monoculture in vendors or architectures. But right now if you want to get to exascale, designs are focused around heterogeneous machines using GPUs or Intel MIC designs, not just adding more/different CPUs (or ARM chips, at all). POWER9 actually did win some DoE contracts: http://www.anandtech.com/show/8727/nvidia-ibm-supercomputers , but if you break out the flops it's mostly due to NVidia. There is going to be a lot of time and money poured into porting some popular DoE/DoD codes to GPUs.
|
# ? Jun 4, 2015 00:24 |
|
Wulfolme posted:Has there been any reason for Thunderbolt to become popular that has legs? All I've seen for it are monitor hookups for big Macs and serious business peripherals that have firewire, usb 3 or straight up pci-e connected models available as well. Gwaihir posted:Ahh yea I remember that article now. I wonder if win10 + increased number of CPU driven PCIe lanes will help out with that. http://www.anandtech.com/show/9331/intel-announces-thunderbolt-3 ...not sure it mentions anything new with regard to Windows support in general, but the part about Windows networking support and general talk about hot plug GPU support makes me hopeful there. Otherwise generally answers a bunch of questions about TB3, particularly as far as pushing for adoption Alpine Ridge will also double as one of the first 10Gbps USB 3.1 controllers available (aka 3.1 gen 2, SuperSpeed+, or "Extreme USB 3.1"? ). And now for a completely unrelated question. I was looking at the Lewisburg PCH and saw the 20 or so PCIe 3.0 lanes on it along with a bunch of other ports, how's that all work with the DMI connection in terms of bandwidth (since that itself appears to be something like an x4 connection)? Is it all just dynamically managed with use and crammed to fit as needed or is there some other magic to make it work?
|
# ? Jun 4, 2015 06:07 |
|
Maybe it's just me but does all consumer UEFI look the same? Like they took the Intel code and slapped their own superficial Winamp skin on it? Does anyone do it more 'elegantly'? Dell enterprise hardware?
|
# ? Jun 4, 2015 07:11 |
|
PC LOAD LETTER posted:Supposedly some stuff can still be processed natively for speed but yea most everything else is 'cracked' into micro-ops over several cycles or more (in some case hundreds or thousands of cycles for 'legacy' instructions that are seldom used) of some sort to run on the 'back end' which actually does all the computational work. This is not really true any more. Sure, there's still a decoder that emits something you can call micro-ops. Yes, there's some instructions which might require actual microcode programs running a loop that takes hundreds of cycles (and they're not even "legacy" btw). However, AFAIK (Jawn, speak up if you know better), for modern Intel cores the normal case is 1 uop per x86 instruction. And sometimes they actually go the other direction by fusing two x86 instructions together into a single uop. The "cracking" that was done on older cores (e.g. the Pentium II era) was mostly about letting the backend handle memory accesses as separate operations rather than linking them to computation. At the time, that was a substantial win, but today, not so much, so there's much less cracking going on. One important theme that's been playing out over the last 10+ years is that high end CPU design has become steadily more dominated by power efficiency. Today, if you want to go fast, you have to design the most power efficient circuit possible, because you're really limited by how much computation you can do inside a fixed thermal budget. That's relevant to the question of 'to crack? Or not to crack?' because when you crack everything in sight, the out-of-order backend has to juggle a lot more uops in flight, and that is a Hard Problem whose solutions tend to involve lots of transistors and power. Thus, cracking is not as much a thing as it used to be.
|
# ? Jun 4, 2015 07:17 |
|
Shaocaholica posted:Maybe it's just me but does all consumer UEFI look the same? Like they took the Intel code and slapped their own superficial Winamp skin on it? Not from my experience (Poweredge 720), just a slow rear end wrapper around a boring but nominally functional BIOS and related crap. Slow boots and occasional whole system refusing to POST if you make too many configuration changes at once.
|
# ? Jun 4, 2015 15:11 |
|
Aquila posted:Not from my experience (Poweredge 720), just a slow rear end wrapper around a boring but nominally functional BIOS and related crap. Slow boots and occasional whole system refusing to POST if you make too many configuration changes at once. IMO, Dell's 15 year old (early 2000s) 'blue' expanding-tree style BIOS UI was more usable than any of the cookie cutter Winamp skin 'gamer' UEFI these days. Look at the non eye-burning color scheme. Look at all the relevant info you get in a concise layout. No drop downs. No superfluous information pollution from related settings. No big rear end spreadsheet presentations. Even supports simple graphics like live bar graphs. Shaocaholica fucked around with this message at 15:19 on Jun 4, 2015 |
# ? Jun 4, 2015 15:17 |
|
I'm a pretty decent fan of the (lovely, limited) BIOS that laptops ship with. It's all very no frills and relevant, even if the actual options to make changes are limited.
|
# ? Jun 4, 2015 15:25 |
|
BobHoward posted:This is not really true any more. Scare quotes were around legacy because yes, you're right, they're technically not really legacy, but some things are so rarely used it's quibbling at this point. BobHoward posted:Thus, cracking is not as much a thing as it used to be.
|
# ? Jun 4, 2015 16:07 |
|
PC LOAD LETTER posted:If I'm reading this right it sure looks like things haven't changed much fundamentally and the 'front' vs 'back end' metaphor still works pretty well even with a very modern x86 chip that can do uop fusion and has a trace cache. Sometimes you can get a 1:1 uop vs x86 instruction ratio but sometimes you still see multiple uops even with new instructions. Seems to be all over the place really. I don't think there is going to be a better solution than the current method of profiling applications, determining what they do most, optimizing that or introducing instructions that optimize certain actions and seeing what sticks. The lag between instruction availability, compiler support, application support, and instruction universality (eg: >80% of currently in use CPU's have it) is so huge that it's always going to be hard to predict what will actually turn out to be useful by the time it's actually generally usable. We typically have cycles where different areas get focus (hardware, compiler, language) for a bit, but even then it's hard to say where we currently are, only to look back and see where we were and try to go on from there. It's fun to watch, because there is real innovation happening all the time by some very smart people (and groups of people). The whole drive toward multithread/multiprocess as we ran into Ghz scaling limits was really interesting, and we are still seeing the results.
|
# ? Jun 4, 2015 19:30 |
|
as much as i'd like arm cores to kill off intel and usher in competition, it's not really happening except in niche areas, and it's not like high performance cores are all that fundamentally different (no one's doing an async design) with cyclone+ looking like big intel cores and atom looking like small arm cores
uarch is pretty irrelevant to whether i can make money with software, so it's basically a price/perf game
parallelizing code that isn't embarrassingly parallelizable is tricky, and in the worst case requires a near total rewrite like servo's engine
|
# ? Jun 4, 2015 19:40 |
|
How much penalty is there on a multi CPU system with PCIe SSDs? The current crop of NVMe SSDs are advertising all the benefits of being connected -directly- to the CPU via PCIe but in a multi socket system all the PCIe devices have to go through another chip right? edit: oh wait. Looks like some of the PCIe slots in a multi CPU system are connected to CPU0 and some are connected to CPU1. http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=c04400043
Shaocaholica fucked around with this message at 01:08 on Jun 5, 2015 |
# ? Jun 5, 2015 00:59 |
|
Shaocaholica posted:Maybe its just me but does all consumer UEFI look the same? Like they took the intel code and slapped their own superficial winamp skin on it? They're all mostly based on reference drops from AMI, Phoenix, InsydeH20, etc. They do all share core black box code from Intel (MRC and other SEC code), and any other blackboxes for iGPU or ME. There is TianoCore and the EDK which provide a core set of EFI functionality that folks can base their code on. Skins are all the vendor's fault though
|
# ? Jun 5, 2015 01:24 |
|
I'm seriously considering picking up a PCIe SSD to put in that almost-impossible-to-use PCIe slot above the main x16 slot, but I'm pretty certain I'd have to drop my 970 into x8 mode since it's a Z68 board. If anything, I'll be able to grandfather it to my next system.
|
# ? Jun 5, 2015 01:52 |
|
No NVMe support on Z68 either
|
# ? Jun 5, 2015 04:01 |
|
Gwaihir posted:No NVMe support on Z68 either Oh, I know it'd just be a glorified storage drive until I got my hands on a new board with a new build. I'd want to use it for MMOs and other stuff that benefits from exceptionally fast seek times and transfer speeds. BIG HEADLINE fucked around with this message at 05:52 on Jun 5, 2015 |
# ? Jun 5, 2015 05:43 |
BIG HEADLINE posted:I'm seriously considering picking up a PCIe SSD to put in that almost-impossible-to-use PCIe slot above the main x16 slot, but I'm pretty certain I'd have to drop my 970 into x8 mode since it's a Z68 board. If anything, I'll be able to grandfather it to my next system.
|
|
# ? Jun 5, 2015 06:59 |
|
https://pcdiy.asus.com/2015/04/asus-nvme-support-poll-voice-your-opinion/ I found a thing! It still doesn't help out Z68 havers, though. http://strawpoll.me/4124030
|
# ? Jun 5, 2015 07:13 |
|
Shaocaholica posted:IMO, Dell's 15 year old (early 2000s) 'blue' expanding tree style BIOS UI was more useable than any of the cookie cutter winamp skin 'gamer' UEFI these days. Yeah, the Dell BIOSes tend to be pretty drat nice. Simple, and each option there is explained.
|
# ? Jun 5, 2015 10:31 |
|
Historical question time! Did the Core '1' Duo share chipset and socket with (early) Core 2 Duo? Would a 32bit Core '1' Duo drop into a Core 2 Duo system and work fine? Of course in 32bit only. Shaocaholica fucked around with this message at 19:41 on Jun 5, 2015 |
# ? Jun 5, 2015 19:37 |
|
Shaocaholica posted:Historical question time!
|
# ? Jun 5, 2015 20:07 |
|
japtor posted:I had a Core 1 Solo (remember those?) Mac mini and dropped in a C2D way back, but no clue about going the other way around. Same difference. Thanks! This helped too: http://en.wikipedia.org/wiki/Socket_M
|
# ? Jun 5, 2015 20:09 |
|
The first-generation Core 2 Duo Meroms shared a socket (Socket M) with the Core 1 Yonah chips, but after the first generation they switched to Socket P - that's why swapping it into an early Intel Mac worked. Also earlier Pentium M CPUs were not compatible with the chipsets used by the Core CPUs even though they shared the same socket. The chipsets used were variants of the 945 and 965 chipsets, but they were largely incompatible with most C2D CPUs since the socket is different for later models. Unless the C2D system specifically supports the original Core series and has Socket M I'm not sure it'll allow a drop-in replacement like that. If you're trying to find a new chip for a Socket P laptop then you should look for something sharing the same FSB as the original part.
|
# ? Jun 5, 2015 20:12 |
|
I was just more curious about why the Dell spec sheet for the Precision M65 laptop only mentions C2D options but on ebay you can find them with C1D procs.
|
# ? Jun 5, 2015 20:21 |
|
Dell will BTO anything for a large enough quantity. I'm 100% sure they're government refurbs as they tend to ask for unique variants.
|
# ? Jun 5, 2015 20:25 |
|
Shaocaholica posted:I was just more curious about why the Dell spec sheet for the Precision M65 laptop only mentions C2D options but on ebay you can find them with C1D procs.
|
# ? Jun 5, 2015 20:47 |
|
Biostar showed their gaming Z170 motherboard. Not that anyone here would probably be using Biostar. https://www.youtube.com/watch?v=SBJnX-d-1T8 2 PCI slots?
|
# ? Jun 8, 2015 15:38 |
|
I'm toying with the idea of doing a "lite" version of this: https://www.pugetsystems.com/labs/articles/Multi-headed-VMWare-Gaming-Setup-564/ At Microcenter I can get a 5820k, ASRock X99 board, and 32GB of DDR4-2400 for $700-ish. I was planning to replace my wife's Phenom II 965 with Skylake later this year, but now I'm considering consolidating both of our machines into a single box, putting both our GPUs into it, and having a single box be our gaming / general use desktops. Is this an insane idea? In my harebrained plan, we get effectively 2 desktops, with the option to shut down one and allocate all resources to the other if wanted.
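For what it's worth, the Puget article does this on VMware, but the KVM/libvirt equivalent boils down to VFIO PCI passthrough: each guest's domain XML pins one physical GPU. A hypothetical fragment (the 01:00.0 address is a placeholder — substitute whatever lspci reports for your card):

```xml
<!-- Hypothetical libvirt domain XML fragment: hand one physical GPU
     (host address 01:00.0, a placeholder) to the guest via VFIO. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```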
|
# ? Jun 9, 2015 17:49 |
|
That is really interesting. I would pass though because it seems there was more than a bit of setup headache involved, and it is locked into AMD video cards. You are also instituting single points of failure for both your 'rigs.' Lose one power supply, both desktops are down. I guess the other side of that coin is less power use. At this point in life I just want my home pc to be the easiest setup possible, but if you like to tinker, good luck!
|
# ? Jun 10, 2015 11:40 |
|
JacksAngryBiome posted:That is really interesting. I would pass though because it seems there was more than a bit of setup headache involved, and it is locked into amd video cards. You are also instituting single points of failure for both your 'rigs.' Lose one power supply, both desktops are down. I guess the other side of that coin is less power use. It uses less power, and space, and is more of a proof of concept for LAN cafes or large families. Or people with multi monitor setups splitting up the monitors for friends and setting up other boxes? Since everything is already virtualized, I guess you can have some sort of NAS to do snapshots for you and reload whenever things go wrong. As for point of failure, yes it's a much bigger risk putting your eggs in one basket but higher end hardware tends to be more reliable
|
# ? Jun 11, 2015 03:15 |
|
Lowen SoDium posted:Biostar showed their gaming Z170 motherboard. Not that anyone here would probably be using Biostar. So I can use an IDE PCI card for my mad clocks
|
# ? Jun 11, 2015 03:46 |
|
I really want that so I can have a rock-solid Linux host for real things and a high-powered gaming Windows client for, uh, gaming.
|
# ? Jun 11, 2015 04:20 |
|
It kind of depends on how far your endpoints are from each other and how easy you can run cable in your house. Network cable is hidden away now but I really don't feel like breaking open the walls and ceilings to run USB + HDMI/DisplayPort to another floor. Though longer might work, your endpoints really need to be within 5 meters of each other for it to work properly. For reliability you should buy server class hardware with redundant power supplies and also buy 2 of everything and configure vMotion (don't forget to buy your vCenter license, oh and a SAN for shared storage, probably new managed switches so you can separate iSCSI traffic into its own VLAN). It will be just like a real VDI project (doesn't save any money, makes things more complex).
|
# ? Jun 11, 2015 18:27 |
|
Some in the thread have asked before about a quick overview of semiconductor manufacturing. Intel's website has a good powerpoint that illustrates a high level overview. http://download.intel.com/newsroom/kits/chipmaking/pdfs/Sand-to-Silicon_22nm-Version.pdf
|
# ? Jun 16, 2015 17:39 |
|
It's amazing to see something like "foils" persist as long as it has, much less actually making it out into something public facing.
|
# ? Jun 16, 2015 18:16 |
|
I cringe every time someone says foils here.
|
# ? Jun 16, 2015 18:59 |
|
Henrik Zetterberg posted:I cringe every time someone says foils here.
|
# ? Jun 16, 2015 19:01 |
|
|
|
Do people still say "iMBO" and rathole?
|
# ? Jun 16, 2015 20:02 |