|
Maybe AMD's new CPU will be the NetBurst of this generation, the one that gets the clock speed wars going again
|
# ? Mar 24, 2016 18:20 |
|
|
How does this analogy work? The dozers were already AMD's NetBurst.
|
# ? Mar 24, 2016 18:24 |
|
mobby_6kl posted:Maybe AMD's new CPU will be the NetBurst of this generation, the one that gets the clock speed wars going again I'll get my 80mm case fans with blue LEDs ready
|
# ? Mar 24, 2016 18:25 |
|
The details that were leaked a ways back said otherwise. The best rumors we have suggest a 3-4GHz clock speed for Zen. Bulldozer was their attempt at a NetBurst-ish uarch and it was a failure. Intel didn't do very well with it either, so it doesn't seem to be a good idea to pursue a speed-demon uarch in x86 land.
|
# ? Mar 24, 2016 18:27 |
|
PC LOAD LETTER posted:Most of those are the types you could say "it's 64-bit so it's powerful" to and they'd buy it though. They have little to no understanding of the underlying tech and don't really know what it is they're buying. Just that it's "better" somehow. I think you underestimate how successful Intel's branding of their Core line has been. People who have no idea how many bits their CPU has or what that even means know i7 > i5. No one is going to bring back the 80886 for nostalgia's sake.
|
# ? Mar 24, 2016 18:30 |
|
feedmegin posted:Sure...but the argument that started this off was something like 'if AMD were stronger Intel would be forced to compete and our desktop CPUs would be shooting up in performance again'. My counter-argument is that this is untrue because of physics, and this remains the case. People at home, even power users, aren't generally running dozens of VMs or high-traffic webservers or whatever, so giving them more CPUs or more cores wouldn't do them any good, and otherwise we are stuck with 5% improvements in the only area where improvements matter for anything you can't do with your GPU. Physics stops you from increasing the clocks by much anymore, but you can always put more transistors on the die for bigger caches and better branch prediction and such. They've really slowed down on doing that ever since Sandy Bridge came out on the desktop side, while they're still trucking along with increased transistor counts on the server side, which is why I was comparing the progress of desktop and server chips. I'm guessing this is due to a combination of diminishing returns, high cost/heat, and the fact that it would basically be a niche product for PC enthusiasts which they have little economic incentive to provide. I still think they would be pushing harder if they were still in close competition with AMD, but yes, there are inherent design challenges to providing more IPC that make it harder than just adding more cores like with GPUs and server CPUs.
|
# ? Mar 24, 2016 18:32 |
|
NihilismNow posted:I think you underestimate how successful Intel's branding of their Core line has been. NihilismNow posted:People who have no idea how many bits their CPU has or what that even means know i7 > i5. NihilismNow posted:No one is going to bring back the 80886 for nostalgia's sake.
|
# ? Mar 24, 2016 19:28 |
|
The Phenom and FX branding are poisoned after being attached to garbage products for the last 5+ years. Maybe AMD could bring the Opteron branding down to their new high performance desktop chips. Athlon for midrange and Duron for low-end seems like a good idea to me.
|
# ? Mar 25, 2016 02:27 |
|
NihilismNow posted:i7 > i5.
|
# ? Mar 25, 2016 06:59 |
|
The dual core with hyperthreading mobile i5 I have in my work laptop is garbage.
|
# ? Mar 25, 2016 08:07 |
|
fart simpson posted:The dual core with hyperthreading mobile i5 I have in my work laptop is garbage. Which one is it? I have the i5-5300U in my work laptop and the i5-5257U in my home laptop and I've been pretty happy with them. MaxxBot fucked around with this message at 21:49 on Mar 25, 2016 |
# ? Mar 25, 2016 21:45 |
|
Got an m3 Compute Stick and it's a 900MHz proc that can't handle 1440p YouTube video as well as my quad-core i5, and this upsets me.
|
# ? Mar 26, 2016 00:34 |
|
A Bad King posted:Got an m3 Compute Stick and it's a 900MHz proc that can't handle 1440p YouTube video as well as my quad-core i5, and this upsets me.
|
# ? Mar 26, 2016 00:57 |
|
A Bad King posted:Got an m3 Compute Stick and it's a 900MHz proc that can't handle 1440p YouTube video as well as my quad-core i5, and this upsets me. Which i5?
|
# ? Mar 26, 2016 10:40 |
|
Introducing the new 'Goes Like Hell' line of processors.
|
# ? Mar 26, 2016 15:18 |
|
MaxxBot posted:Which one is it? I have the i5-5300U in my work laptop and the i5-5257U in my home laptop and I've been pretty happy with them. The new T-series Thinkpads that IT wants me to use instead of my T520 are actually (very mildly) slower than what I have. Of course they're thinner and lighter, but why would I care if it spends 90% of the time in the dock anyway.
|
# ? Mar 26, 2016 21:25 |
|
mobby_6kl posted:The new T-series Thinkpads that IT wants me to use instead of my T520 are actually (very mildly) slower than what I have. Of course they're thinner and lighter, but why would I care if it spends 90% of the time in the dock anyway. Because they also have fewer features and ports (apart from USB 3), of course!
|
# ? Mar 27, 2016 15:18 |
|
mobby_6kl posted:The new T-series Thinkpads that IT wants me to use instead of my T520 are actually (very mildly) slower than what I have. Of course they're thinner and lighter, but why would I care if it spends 90% of the time in the dock anyway. Free upgrade?
|
# ? Mar 27, 2016 17:04 |
|
PBCrunch posted:Phenom PBCrunch posted:garbage products Say what you will about the FX series, but the Phenom II was AMD's last great lineup that could compete with Intel on the cheap. poo poo, I was able to make an 1100T last for 5 years with no real problems in modern games, save for some stuff in PCSX2, which is where most former AMD owners like me start to hit a wall when it comes to SSE4.1/AVX2 being mandatory for pickier games
|
# ? Mar 28, 2016 02:03 |
|
Given how Intel "borrowed" their branding from BMW (3-series, 5-series, 7-series), perhaps AMD should similarly borrow from Mercedes (C-class Zen, E-class Zen, S-class Zen). They can even justify it as: C=consumer, E=enthusiast, S=super.
|
# ? Mar 28, 2016 03:53 |
|
Anime Schoolgirl posted:that's supposed to be hardware-decodable, if you use a firefox variant or such have you set "media.gmp-provider.enabled" to false? Why isn't this enabled by default!? Was using Firefox, googled the issue and how to fix it, problem solved. Thanks! Boiled Water posted:Which i5? 6500, got it while it was on sale at $180.
|
# ? Mar 28, 2016 14:50 |
|
They should have similar video performance; they have almost the same iGPU (Intel HD Graphics 515 for the Core m3, 530 for your i5).
|
# ? Mar 28, 2016 16:03 |
|
Boiled Water posted:They should have similar video performance, they almost have the same iGPU in them (Intel HD Graphics 515 for Core m3, 530 for your i5). One is a compute stick, the other is an HTPC in a 10-liter mITX case. One is a 5-watt 2-core part at 900MHz/2.2GHz, the other is a 65-watt 4-core part at 3.2/3.6GHz paired with a Nano Fury. I'm upset that the compute stick is not comparable. /s
|
# ? Mar 28, 2016 17:50 |
|
And yet the graphics parts of the chips are identical. I'm not sure what you're getting at.
|
# ? Mar 28, 2016 18:30 |
|
If USB-C is capable of feeding full-fat video cards NOW, what possible incentive is there to make new video cards PCI Express exclusives? Available, even primary, sure; PCI Express is on most workstation form factor computers and USB-C is on at most five percent of them. But backward-thinking at a time when onboard's aiming at the gaming market seems like suicide. Will there even be a reason for PCI Express in five years' time? For the workstation form factor at all?
|
# ? Mar 28, 2016 18:37 |
|
Boiled Water posted:And yet the graphics parts of the chips are identical. I'm not sure what you're getting at. Throttling makes a big difference, especially 65W vs. 5W.
|
# ? Mar 28, 2016 18:47 |
|
Eletriarnation posted:Throttling makes a big difference, especially 65W vs. 5W. Surely not for youtube videos? Sir Unimaginative posted:If USB-C is capable of feeding full-fat video cards NOW, what possible incentive is there to make new video cards PCI Express exclusives? Available, even primary, sure; PCI Express is on most workstation form factor computers and USB-C is on at most five percent of them. But backward-thinking at a time when onboard's aiming at the gaming market seems like suicide. What are you asking? I've read and re-read it and can't make sense of your text.
|
# ? Mar 28, 2016 18:49 |
|
Boiled Water posted:Surely not for youtube videos? Why not? Hardware decoding doesn't mean it uses no power, and 5W is a low ceiling. Sir Unimaginative posted:If USB-C is capable of feeding full-fat video cards NOW, what possible incentive is there to make new video cards PCI Express exclusives? Available, even primary, sure; PCI Express is on most workstation form factor computers and USB-C is on at most five percent of them. But backward-thinking at a time when onboard's aiming at the gaming market seems like suicide. I think there's a divide between 'capable' and 'ideal' here. Even with Thunderbolt, USB-C is not going to push as much bandwidth as PCIe 3.0 x16 in the near term unless I missed something.
|
# ? Mar 28, 2016 18:51 |
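The bandwidth gap mentioned above can be put in rough numbers. This is a back-of-envelope sketch: the 8 GT/s lane rate and 128b/130b encoding are from the PCIe 3.0 spec, while treating the full 40 Gb/s Thunderbolt 3 link rate as available to GPU traffic is a generous simplifying assumption (in practice the link is shared with DisplayPort and USB).

```python
# Rough comparison of PCIe 3.0 slot bandwidth vs. a Thunderbolt 3 link.
# TB3 tunnels PCIe at up to x4-equivalent rates; the 40 Gb/s figure is
# the raw link rate, not dedicated PCIe payload.

def pcie3_bandwidth_gbps(lanes):
    """PCIe 3.0 payload rate: 8 GT/s per lane with 128b/130b encoding."""
    return lanes * 8 * (128 / 130)  # gigabits per second

pcie_x16 = pcie3_bandwidth_gbps(16)  # what a desktop slot offers
pcie_x4 = pcie3_bandwidth_gbps(4)    # roughly what TB3 can tunnel
tb3_link = 40                        # Gb/s raw Thunderbolt 3 link rate

print(f"PCIe 3.0 x16: {pcie_x16:.0f} Gb/s")
print(f"PCie 3.0 x4:  {pcie_x4:.1f} Gb/s")
print(f"TB3 raw link: {tb3_link} Gb/s (shared with DP/USB traffic)")
```

So even granting Thunderbolt its full link rate, an external card sees roughly a third of the bandwidth of a plain x16 slot, which is the "capable vs. ideal" divide in a nutshell.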
|
Boiled Water posted:What are you asking? I've read and re-read it and can't make sense of your text. I...I think they're saying that if USB-C can feed even Halo Tier GPUs, and thus work externally and be interchanged live, that in comparison to port size between PCIE and USB-C, PCIE ends up being hilariously inefficient on motherboard space and you'll get way more out of just using USB-C. Like, theoretically you could get away with XCF/SLI using USB-C on a Nano-ITX board that you'd never accomplish with PCIE. Basically, eGPU over USB-C offers potentially huge scalability in a much smaller package? Compared to PCIE anyway.
|
# ? Mar 28, 2016 18:57 |
|
Eletriarnation posted:Why not? Hardware decoding doesn't mean it uses no power, and 5W is a low ceiling. Sure it'll use power but I don't think it'll hit that ceiling decoding 1080p videos. edit: Realized I was on the internet and looked it up: the Compute Stick runs fine with 1080p, described as "struggling" at 1440p. FaustianQ posted:I...I think they're saying that if USB-C can feed even Halo Tier GPUs, and thus work externally and be interchanged live, that in comparison to port size between PCIE and USB-C, PCIE ends up being hilariously inefficient on motherboard space and you'll get way more out of just using USB-C. Like, theoretically you could get away with XCF/SLI using USB-C on a Nano-ITX board that you'd never accomplish with PCIE. Sure that could be done, but when you factor in how huge graphics cards are there's no reason to do it, except of course for the external GPU, but that business case has yet to prove itself. champagne posting fucked around with this message at 19:10 on Mar 28, 2016 |
# ? Mar 28, 2016 19:01 |
|
Sir Unimaginative posted:If USB-C is capable of feeding full-fat video cards NOW, what possible incentive is there to make new video cards PCI Express exclusives? Available, even primary, sure; PCI Express is on most workstation form factor computers and USB-C is on at most five percent of them. But backward-thinking at a time when onboard's aiming at the gaming market seems like suicide. Because most uses do not have need of an easily detachable video card? And putting a usb-c socket inside the case to handle the video card sounds kinda insane? Unless you're also making it so that the video cards themselves can be made much smaller, there's no point in saving a minuscule amount of board space with the port - if you're really cramped for space you can use the smaller pci-express slots and restrict your choice in video cards a little. Plus I'm pretty sure you do need to do a non-trivial amount of extra engineering to ensure that your video card chipset will adequately function on both PCI Express and USB-C connection methods, as well as extra work for the drivers.
|
# ? Mar 28, 2016 19:09 |
|
Boiled Water posted:Sure that could be done, but when you factor in how huge graphics cards are there's no reason to do it except of course for the external GPU but that business case has yet to prove itself. But GPUs really don't have to be gently caress huge with HBM2 and no longer being tied to PCIE. You could theoretically get smaller than MXM for the PCB, although the cooling solution at that point would be interesting.
|
# ? Mar 28, 2016 19:10 |
|
Yes they have to be huge. The large part of a graphics card isn't the card but rather the heat sink attached to it.
|
# ? Mar 28, 2016 19:16 |
|
No, the *card* has to be huge, but the interface doesn't. With most proper video cards getting most of their power straight from the power supply as opposed to from the bus, we are kind of approaching the point where our graphics cards can be plugged in by cables, just like our SATA drives. Like, remove the PCIe fingers off the card, replace them with a hypothetical GPU cable port, a la SATA, and then mount the GPU into cages like you would a hard drive, as opposed to hanging it off the motherboard. I mean, wouldn't that be nice for the mini-ITX crowd, obviating the need for those fuggin' 3M PCIe riser cards? SwissArmyDruid fucked around with this message at 19:31 on Mar 28, 2016 |
# ? Mar 28, 2016 19:26 |
|
Boiled Water posted:Yes they have to be huge. The large part of a graphics card isn't the card but rather the heat sink attached to it. Hmm, yeah, you'd have to come up with some exotic cooling solution for what is supposed to be an idea for a highly scalable, flexible and off-the-shelf solution for getting a ton of GPUs running in parallel. At that point, you're not doing much better than the Razer Core. Otherwise, custom FPGA solution.
|
# ? Mar 28, 2016 19:26 |
|
what if you like made a graphics card that was like a folded up version of current cards and then in the middle you put a heatsink, with a fan on each side graphics cube
|
# ? Mar 28, 2016 19:27 |
|
SwissArmyDruid posted:No, the *card* has to be huge, but the interface doesn't. With most proper video cards getting most of their power straight from the power supply as opposed to from the bus, we are kind of approaching the point where our graphics cards can be plugged in by cables, just like our SATA drives. Like, remove the PCIe fingers off the card, replace them with a hypothetical GPU cable port, a la SATA. At that point you might net something smaller than the DAN A4 with HBM2 cards that are roughly the size of or smaller than the Nano.
|
# ? Mar 28, 2016 19:31 |
|
And (and I made this edit after you quoted it but before you posted) your GPUs could start being mounted in cages or on sleds, mounted elsewhere in the case. Hell, that Thermaltake Level 10 might finally be viable after all, if you could put the one vertical plane in the center, and mount GPUs on the backside of the motherboard tray. Honestly, I think the DAN A4 is probably as small as it gets, unless you start restricting card form factors to like, Fury Nano-only. It's a given that everyone's video cards are going to be smaller (thinner coolers at minimum thanks to the combination of process shrink and full-sized chips not being launched yet, even if the areal shrink caused by HBM isn't to arrive until later) SwissArmyDruid fucked around with this message at 20:37 on Mar 28, 2016 |
# ? Mar 28, 2016 19:44 |
|
Finally, the 5.25" slot makes a comeback!
|
# ? Mar 28, 2016 19:52 |
|
|
SwissArmyDruid posted:And (and I made this edit after you quoted it but before you posted) your GPUs could start being mounted in cages or on sleds, mounted elsewhere in the case. You could probably fit a pair of Fury Nanos in a DAN A4-style case in a mATX footprint. Other than that there's not really a point to the Nano form factor - the DAN A4 fits a full-sized GPU, so why would you need a shortie card?
|
# ? Mar 29, 2016 00:02 |