|
I wish they'd hurry up and announce some goddamn specifics.
|
# ? Feb 2, 2017 01:03 |
|
They've got to do something with the defective 8-cores
|
# ? Feb 2, 2017 02:31 |
|
Sounds like they have a lot of flexibility to make a huge stack that targets from top to bottom, with the ability to also configure L3 separately from the cores. Sounds like they can recover virtually everything except 2C/4T (because the die is too far gone at that point). For a lineup?

- 8C/16T, 4+16MB
- 8C/16T, 4+8MB
- 6C/12T, 3+16MB
- 6C/12T, 3+8MB
- 4C/8T, 2+16MB
- 4C/8T, 2+8MB
- 4C/4T, 2+8MB
- 4C/4T, 2+0MB

16MB versions as the more expensive "Black Editions", and the 4C/4T as the cheapest SKU, coming in at ~$100 (because no L3 means getting thrashed by a G4560, lol)
|
# ? Feb 2, 2017 04:34 |
|
FaustianQ posted:Sounds like they have a lot of flexibility to make a huge stack that targets from top to bottom with the ability to also configure L3 separately from the cores. Sounds like they can recover virtually everything except 2C/4T (because it's way too forgone at that point). I was going to say that maybe laptops could use the 2c chips, but then remembered that Ryzen doesn't have an iGPU. I'm sure they'll find a use for 2 core chips if there's enough of them though, no use wasting silicon
|
# ? Feb 2, 2017 05:02 |
|
VostokProgram posted:I was going to say that maybe laptops could use the 2c chips, but then remembered that Ryzen doesn't have an iGPU. I'm sure they'll find a use for 2 core chips if there's enough of them though, no use wasting silicon IRT the igpu, the plan is to have 4 core APUs later in the year. https://www.overclock3d.net/news/cpu_mainboard/amd_s_upcoming_raven_ridge_apus_will_feature_4_ryzen_cpu_cores/1
|
# ? Feb 2, 2017 05:08 |
|
Cowwan posted:IRT the igpu, the plan is to have 4 core APUs later in the year. Good grief, if that is true, one might find GPUs one step closer to going the way of sound cards: enthusiast only.
|
# ? Feb 2, 2017 14:25 |
|
Eh, we've been at the point where iGPUs have matched the raw performance of low to mid end laptop dGPUs for quite a while but with no reasonably cost-effective way to feed them. We might see APUs on interposers paired with a stack of HBM, but not anytime soon.
|
# ? Feb 2, 2017 14:44 |
|
I know people think HBM2 when thinking APU+HBM, but HBM1 makes way more sense from a cost and performance perspective, if not Samsung's Wide I/O. 1GB of HBM should be more than enough for a 10-16CU Vega iGPU w/HBC if it does what AMD says it does, and 8CU should be easily fed by DDR4-3200 or better.
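For rough numbers on why one HBM1 stack would have no trouble feeding an iGPU: it already has more than double the peak bandwidth of dual-channel DDR4-3200. A quick sketch (the helper below is mine, spec-sheet figures, plain arithmetic, no real-world efficiency):

```python
# Peak theoretical memory bandwidth: bus width (bits) x transfer rate (MT/s).
# Spec-sheet numbers only; sustained real-world bandwidth is lower.

def bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    return bus_width_bits / 8 * transfer_rate_mts / 1000

# HBM1: one stack has a 1024-bit interface at 1 Gbps per pin (1000 MT/s)
hbm1_stack = bandwidth_gbs(1024, 1000)        # 128.0 GB/s

# Dual-channel DDR4-3200: two 64-bit channels at 3200 MT/s
ddr4_dual = bandwidth_gbs(2 * 64, 3200)       # 51.2 GB/s

print(f"HBM1, single stack:     {hbm1_stack:.1f} GB/s")
print(f"DDR4-3200 dual-channel: {ddr4_dual:.1f} GB/s")
```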
|
# ? Feb 2, 2017 15:13 |
|
GPUs have been enthusiast only for at least 5 years. The vast majority of PCs sold do not have a separate GPU.
|
# ? Feb 2, 2017 18:58 |
|
Somebody out there has 8c/16t preview silicon. Prefix is "ZD", as opposed to "ES", and there are AotS benches. http://wccftech.com/amd-ashes-ryzen-4-0-ghz-benchmarks/ Salt now so you're not salty later.
|
# ? Feb 3, 2017 02:14 |
|
IanTheM posted:Can't you get 8 AMD cores for the price of 6 Intel cores? 6 Intel cores currently cost $320 at Microcenter or $390 at Newegg. I doubt AMD will give away their flagship processor for that kind of price. Now to be 100% fair here - for that price you are only getting a 28-lane chip, but also bear in mind that Ryzen will only have 16 lanes (plus both chipsets allow a couple more lanes running through the chipset for stuff like USB 3.0 and NVMe). The high-end Intels are second to none in their ability to hang poo poo off the PCIe bus.

quote:For gaming it's not a huge deal, but for media/video stuff it seems like a big one for sure.

Boiled Water posted:I'm not sure this particular dragon is worth chasing at all. DX12 will come but at this pace we'll probably be at DX13 or 14 before many cores are worth anything. And Vulkan, well, I don't know. I doubt anything will ever come of it.

The extra cores don't really help gaming that much at present, but on the other hand they don't really hurt that much either. People think the single-thread performance sucks because they're a generation behind, but it's not like there were huge improvements between Haswell and Skylake or Kaby Lake. A Haswell running at 4.5 GHz is still a Haswell running at 4.5 GHz, plus you get quad-channel DDR4 out of the deal too, which covers some of the difference in poorly-optimized games like Fallout 4 that are viewed as "sensitive to single-thread performance". They definitely do help with media or video, plus they're also a really nice convenience thing to have. Running stuff in the background is no big deal when you have 2C/4T sitting around that aren't even being utilized; Chrome can be eating 20% of your CPU in the background while you're gaming and you don't even notice. And of course stuff that can be trivially parallelized (like music or video encoding, or image processing) just rockets along.

Basically, if they don't hurt too much, and in some circumstances they can really help, plus there's a chance of long-term growth potential going forward - then why not? That's why I pulled the trigger on a 5820K last summer.
|
# ? Feb 3, 2017 02:15 |
|
FaustianQ posted:What exactly would newer iterations of DirectX even offer over DX12? This is pretty much impossible to say at the moment, we haven't even got past the early days of the DX12 generation yet. There's like a dozen or two games and a whole bunch of them are lovely hackjobs that someone hastily ported from DX11. NVIDIA hasn't even got hardware out that's fully optimized for DX12 yet (Pascal has some hackjob fixes to make it suck less), AMD hardware is optimized around Vulkan but runs hot and thirsty and can barely keep up with their NVIDIA counterparts even with their home-field DX12 advantage. So you're basically asking "what's going to be big in 10 years" here, and that's an eternity in computer hardware. You gotta either render pixels faster or avoid rendering them at all, and the problems with rendering them faster are pretty obvious. Power limits prevent bigger chips, the lack of node improvements prevents more powerful chips in the same envelope. There's probably some more gains to be squeezed out of tile-based processors, plus there's the trend towards temporally-interlaced rendering techniques like checkerboard rendering. Also, I wouldn't be surprised to see GPU offload take off. The APU-style Unified Memory architecture is good for this, consoles have cache-coherent busses in at least one direction, and I can only see that trend accelerating as we move into the HBM era. Imagine things like network update frames and game state updates that get pushed as gather/scatter operations in parallel. Possibly you could even have game servers "pre-digest" data into a format that is good for this, to speed up clients at the cost of servers. I don't know if it's feasible or not. Both of those things are still supported by DX12 though. At least in theory, and in theory you can extend it to do whatever. 
------------- I was thinking about this while writing this, I'm not sure if this is at all viable or not, but there might be some other techniques you could use to reduce the amount of rendering you do. There are some obvious problems with this idea, starting with the fact that we have hardware that is not designed around this idea. But I'm curious what you and SwissArmyDruid (or any other compsci folks) think of this. Basically the general idea is to do a multi-viewport rendering to conserve passes like they do in VR - but instead of the viewports being physically separated, they are temporally separated. Thought experiment, you are riding a horse in Witcher 3, you approach a peasant and pan to watch as you ride by him.
So in other words, to save passes, instead of a raster to produce a 2-dimensional image, we are actually blowing up the dimensionality of the space. So instead of taking a 2D slice through a 3D space, you are actually taking a 3D slice through a 6D hyperspace, and then your third raster dimension becomes time. Or in the more simplistic model you are taking 3D slices through a 4D space. The tradeoff would obviously be that you are consuming a whole bunch more memory, essentially building a temporal cache of some of your intermediate results to avoid re-processing. And obviously this would take a bunch more processing if you wasted it on the areas with high error magnitude, so you would want to foveate those as much as possible. Possibly you could also apply vector fields or some poo poo to optimize the rendering, since essentially the image is mostly similar but there are small deltas to apply between each potential state. Which could allow you to move from a discrete (frame-based) model to a fully temporal model using real-valued dimensions instead of integers. I.e. "show me the 2.5th frame in this predicted sequence". Also, the model obviously goes off the rails as the temporal span increases - models in the real world are obviously not static, so changes in the game state that invoke changes in model animations would be problematic too (this may make high-dimensional projections pointless in real world games) But this is already addressed to some degree in checkerboard rendering, at least on relatively short time spans. Crazy enough to work, or just crazy? Paul MaudDib fucked around with this message at 03:39 on Feb 3, 2017 |
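For what it's worth, the "show me the 2.5th frame" part can be sketched crudely as interpolation between cached states at a real-valued time index. This toy (all names mine, and it ignores everything hard: motion vectors, disocclusion, animation changes) only illustrates what a fractional frame index would mean:

```python
# Toy "real-valued frame index": lerp between two cached frame buffers
# (flat float lists here). A real renderer would reproject with motion
# vectors rather than blend pixels blindly.

def sample_temporal(frames: list[list[float]], t: float) -> list[float]:
    lo = int(t)                            # nearest cached frame at or before t
    hi = min(lo + 1, len(frames) - 1)      # next cached frame (clamped at the end)
    frac = t - lo
    return [a * (1 - frac) + b * frac for a, b in zip(frames[lo], frames[hi])]

frame0 = [0.0, 10.0]   # two "pixels" at t=0
frame1 = [1.0, 20.0]   # the same pixels at t=1
print(sample_temporal([frame0, frame1], 0.5))   # [0.5, 15.0]
```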
# ? Feb 3, 2017 03:16 |
|
Hopefully the total platform cost of Zen will be lower than X99, which is likely given that the massively decreased PCI-E bandwidth available means motherboard makers are capped out on how much expensive poo poo they put on boards. The massive delta between the cheapest Intel consumer 6 core and 8 core leaves a big hole in the market to fill with a Zen 8 core, but I think it will be harder to compete on the 6 core front with the "affordable" Intel.
|
# ? Feb 3, 2017 03:20 |
|
Is it a bit of an oxymoron for AMD to be ramping up the cores whilst using a reduced PCIe width? "This 16-thread CPU is gonna demolish your workload fast, but then bottleneck said workload en route from/to its destination". Personally, if I bought Zen it wouldn't be a problem because I'd only be throwing a couple of tasks at it at any one time. But could it be a problem for extreme workloads?
|
# ? Feb 3, 2017 06:27 |
|
Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being.
|
# ? Feb 3, 2017 06:30 |
|
BurritoJustice posted:Hopefully the total platform cost of Zen will be lower than X99, which is likely given that the massively decreased PCI-E bandwidth available means motherboard makers are capped out on how much expensive poo poo they put on boards. As if that is gonna stop mobo makers from having a $400 AM4 board with over 9000 CPU power phases for "10000 amps of overclocking" and the most "31337 xtreme zOMGFATALITY GaMinG" plastic shroud and color scheme ever that an emo+drugged teenager can never even dream of, all of which probably has a combined BoM of like $10. And if you think Intel was overkill at market segmentation, you may wanna try the mobo makers, where somebody has counted like 87 individual models using the Z170 chipset for Asus alone. There used to be a much simpler time where different models existed because of different chipsets, form factors, or socket/slot types. Palladium fucked around with this message at 06:58 on Feb 3, 2017 |
# ? Feb 3, 2017 06:47 |
|
Combat Pretzel posted:Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being. Again, plus a couple for things like NVMe in M.2 sockets. You're probably SOL on the 10GbE though, that's going to eat some lanes from your GPU. http://techreport.com/news/31228/amd-shows-off-ryzen-ready-chipsets-and-motherboards-at-ces Still though, Tech Report straight up notes that this weakness really puts them more alongside the consumer Intel processors for power users. edit: also, the lack of quad channel memory: while it's not a deal killer, it's certainly not a plus either if AMD wants to price these things close to Intel's prices. Much like the PCIe lanes - it's a degree of future proofing that you expect in a processor that's $700-800, because a high-end PC can easily last 5 years if you don't get antsy to upgrade, and who knows what will be relevant by then? Paul MaudDib fucked around with this message at 07:55 on Feb 3, 2017 |
# ? Feb 3, 2017 07:39 |
|
https://www.techpowerup.com/230291/amd-readies-ryzen-platform-drivers-for-windows-7 Heh, I totally called it somewhere earlier.
|
# ? Feb 3, 2017 07:53 |
|
Combat Pretzel posted:Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being. BurritoJustice posted:Hopefully the total platform cost of Zen will be lower than X99
|
# ? Feb 3, 2017 08:04 |
|
Combat Pretzel posted:Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being. As far as I know, x370 has 16x 3.0 lanes for graphics, 4x 3.0 for NVMe and 4x 3.0 and 8x 2.0 general purpose lanes, so you should be fine unless you also want 6 SATA and 2 SATAe ports for some reason. http://i.imgur.com/Luc2HEW.png apropos man posted:Is it a bit of an oxymoron for AMD to be ramping up the cores, whilst using a reduced pci width? I don't think the server CPUs are going to use consumer chipsets. Paul MaudDib posted:edit: also, the lack of quad channel memory: while it's not a deal killer, it's certainly not a plus either if AMD wants to price these things close to Intel's prices. Much like the PCIe lanes - it's a degree of future proofing that you expect in a processor that's $700-800, because a high-end PC can easily last 5 years if you don't get antsy to upgrade, and who knows what will be relevant by then? If AM4 supports 3000mhz+ DDR4 modules out of the box, that might help somewhat. Arzachel fucked around with this message at 08:19 on Feb 3, 2017 |
# ? Feb 3, 2017 08:05 |
|
Arzachel posted:As far as I know, x370 has 16x 3.0 lanes for graphics, 4x 3.0 for NVMe and 4x 3.0 and 8x 2.0 general purpose lanes, so you should be fine unless you also want 6 SATA and 2 SATAe ports for some reason. On the other hand, cheap 8C/16T. Given the distinction in available PCIe lanes and memory channels, I don't expect Intel to drop the prices on their 8C/16T anytime soon. --edit: Oh the SATA lanes can double as PCIe. Combat Pretzel fucked around with this message at 09:07 on Feb 3, 2017 |
# ? Feb 3, 2017 08:56 |
|
I wonder if reviewers will bother to test the chipset lanes themselves, to see how they compare against lanes wired directly to the SoC. What do we know about what motherboard makers are doing with those dual-purpose/general-purpose lanes? What are the trends so far in what they've wired the lanes to on X370? I'm kind of hopeful that there will be different configurations offered depending on add-in card use cases.
|
# ? Feb 3, 2017 12:17 |
|
I don't need SATA, so a motherboard that could drive a PCIe graphics card, an InfiniBand card or 10GbE card, and an NVMe drive at full speed and support ECC would be awesome. What's the RAM limit on this chip?
|
# ? Feb 3, 2017 16:08 |
|
Signs point to dual-channel memory only; 64GB over 4 DIMMs. I hope X/A300 really takes off not just as a miniITX or embedded platform, but also as a no-frills motherboard with fewer signals to run into crosstalk.
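Rough peak numbers for what dual-channel only gives up against a quad-channel platform like X99 (my arithmetic, spec-sheet rates, no real-world efficiency factored in):

```python
# Peak theoretical DDR bandwidth: channels x 64-bit channel width x transfer rate.

def peak_bw_gbs(channels: int, rate_mts: float) -> float:
    return channels * 64 / 8 * rate_mts / 1000

am4_dual = peak_bw_gbs(2, 3200)   # 51.2 GB/s (assuming AM4 really runs DDR4-3200)
x99_quad = peak_bw_gbs(4, 2400)   # 76.8 GB/s even at a modest DDR4-2400
print(f"AM4 dual-channel DDR4-3200: {am4_dual:.1f} GB/s")
print(f"X99 quad-channel DDR4-2400: {x99_quad:.1f} GB/s")
```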
|
# ? Feb 3, 2017 16:14 |
|
Would the A300 even have a chipset or just rely entirely on the CPU?
|
# ? Feb 3, 2017 16:15 |
|
Arzachel posted:If AM4 supports 3000mhz+ DDR4 modules out of the box, that might help somewhat. Define "out of the box" here, though. X99 is known for being a little temperamental on memory sometimes, but my cheapo X99 board (Gigabyte GA-X99-UD4) supported DDR4-3000 right out of the box, even though I used two separate kits to do it. Just plug them in, turn on XMP or whatever the equivalent DDR4 thing is called, and it booted right up. In theory it supports up to DDR4-3333. So again, not really an advantage for AMD considering X99 can do it too. Still better than being stuck on DDR4-2133 I guess. edit: but really, I think we're all missing one crucial piece of information that may well make or break this purchase: will Ryzen support SATA Express? double edit: yes it will! Paul MaudDib fucked around with this message at 17:09 on Feb 3, 2017 |
# ? Feb 3, 2017 17:02 |
|
FaustianQ posted:Would the A300 even have a chipset or just rely entirely on the CPU? Relies on the CPU. I hope it gets to use those extra lanes though! Sidesaddle Cavalry fucked around with this message at 18:32 on Feb 3, 2017 |
# ? Feb 3, 2017 17:19 |
|
Combat Pretzel posted:Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being. How often do you use those with your graphics card streaming textures at full tilt? Reminder that most graphics loads see very little difference on PCIe speeds.
|
# ? Feb 3, 2017 17:30 |
|
I mean we did just discuss that the SoC itself has more lanes than that, so can we just drop the lane usage shaming? Arzachel posted:If AM4 supports 3000mhz+ DDR4 modules out of the box, that might help somewhat. They have their own version of XMP that supports 3200MHz according to a few pages ago. Sidesaddle Cavalry fucked around with this message at 18:31 on Feb 3, 2017 |
# ? Feb 3, 2017 18:28 |
|
FE: double post
|
# ? Feb 3, 2017 18:30 |
|
Paul MaudDib posted:edit: but really, I think we're all missing one crucial piece of information that may well make or break this purchase: will Ryzen support SATA Express? SATA 4 ended up getting delayed; nobody actually wanted to make a SATA Express connector port despite it being SAS for consumers, and it ended up not providing enough bandwidth (PCIe 2.0), and M.2 is a confusing mess of poo poo with hosed-up trace logic. U.2 did end up working out okay, but no consumer board supported it for a long time. People seem to have settled on PCIe M.2 and U.2 as the next-generation SSD connectors
|
# ? Feb 3, 2017 19:54 |
|
Malcolm XML posted:Reminder that most graphics loads see very little difference on pcie speeds Adding to this, there are two things to consider: The GTX Titan XP is the first GPU to actually need more bandwidth on average than PCIe 2.0 x16 can provide, and you can't hit that while gaming, only doing deep learning or other super-GPU-heavy tasks. The multitude of Thunderbolt 3 GPU docks run cards at performance respectably close to desktop motherboards with only 4 PCIe 3.0 lanes. The fact of the matter is that 8 lanes of PCIe 3.0 is still overkill for gaming. You'd have to specifically create a scenario where it isn't, and at that point you're running a benchmark and not an actual game. Lane sharing is not a big deal, folks.
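To put numbers on "x4 is respectably close": theoretical per-direction PCIe rates after encoding overhead (helper function is mine; real throughput is lower still):

```python
# Per-direction PCIe bandwidth in GB/s: lanes x signaling rate x encoding efficiency.
# PCIe 2.0 uses 8b/10b encoding at 5 GT/s; PCIe 3.0 uses 128b/130b at 8 GT/s.

def pcie_gbs(lanes: int, gts: float, enc_num: int, enc_den: int) -> float:
    return lanes * gts * enc_num / enc_den / 8

gen2_x16 = pcie_gbs(16, 5.0, 8, 10)      # 8.0 GB/s
gen3_x4  = pcie_gbs(4, 8.0, 128, 130)    # ~3.94 GB/s (Thunderbolt 3 dock territory)
gen3_x8  = pcie_gbs(8, 8.0, 128, 130)    # ~7.88 GB/s
print(gen2_x16, round(gen3_x4, 2), round(gen3_x8, 2))
```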
|
# ? Feb 4, 2017 07:43 |
|
RyuHimora posted:Adding to this, there are two things to consider: To be honest - "on average" is a bullshit metric. A lower-midrange card like the 4 GB RX 480 has more than enough bandwidth on average to satisfy the demands of the GPU processor. Nevertheless, increasing the bandwidth by 14% (i.e. the 8 GB card) still increases the actual performance of the card by roughly 7%. It's not a matter of "you have enough on average", more is always better, and sometimes disproportionately so. When you're trying to page something into VRAM you can't render any frames until you have the transfer finished. Averages don't help you there. Honestly at this point it's embarrassing for you to even bring up average frametime as some kind of serious measurement. Everyone knows it's pretty much entirely un-representative of actual performance. Bare minimum you need to combine it with a minimum FPS. Even better, forget about the whole thing and look at FCAT timings (slightly better but tough to digest), or ideally look at a badness metric like time-spent-beyond-X-milliseconds or a graph of frametimes-by-percentile, which are actually informative and can be digested with a glance. This kind of poo poo: here's a 3-generation-old processor that retailed for $50, embarrassing the gently caress out of AMD's high-end FX processors in real-world gaming performance in a highly-multithreaded game. Look at the averages, you think the FX-8350 is fine (look, it's within 5% of the 4790K that costs twice as much!). Look at the actual frametimes, you see how it has roughly 3x the amount of framedrops as an OC'd G3258 and roughly 20x the framedrops as the OC'd 4790K, i.e. it's hot garbage. Literally hot and figuratively garbage. Averages are useless. edit: actually the G3258 is 4 generations old now, I forgot about Broadwell (so did Intel lol) Paul MaudDib fucked around with this message at 11:37 on Feb 4, 2017 |
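The badness metrics named above are trivial to compute from a raw frametime log; this sketch (function names mine) shows how two runs with the same average framerate can look completely different:

```python
# Frametime "badness" metrics: total time spent past a threshold, and a
# nearest-rank percentile, computed from raw per-frame times in milliseconds.

def time_beyond(frametimes_ms: list[float], threshold_ms: float) -> float:
    return sum(t - threshold_ms for t in frametimes_ms if t > threshold_ms)

def percentile(frametimes_ms: list[float], pct: float) -> float:
    ordered = sorted(frametimes_ms)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

# Two runs with an identical 1670 ms total (same average FPS), different feel:
smooth = [16.7] * 100
spiky  = [12.0] * 90 + [59.0] * 10   # high "average FPS", but with ten big hitches

print(time_beyond(smooth, 16.7), round(time_beyond(spiky, 16.7), 1))  # 0 423.0
print(percentile(smooth, 99), percentile(spiky, 99))                  # 16.7 59.0
```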
# ? Feb 4, 2017 11:15 |
|
Paul MaudDib posted:To be honest - "on average" is a bullshit metric. A lower-midrange card like the 4 GB RX 480 has more than enough bandwidth on average to satisfy the demands of the GPU processor. Nevertheless, increasing the bandwidth by 14% (i.e. the 8 GB card) still increases the actual performance of the card by roughly 7%. It's not a matter of "you have enough on average", more is always better, and sometimes disproportionately so. When you're trying to page something into VRAM you can't render any frames until you have the transfer finished. Averages don't help you there. Your example is poor because while there are many metrics that influence CPU performance, either you have enough PCI-E bandwidth to stream textures in time or you don't. There's no way to mask frame drops with high peaks when comparing 4 lanes to 16 as there would be with the FX-8150 compared to the Celeron. There are few articles that look at PCI-E scaling with frame time graphs, much less recent ones, but this one has a GTX 980 running at PCI-E 1.1/2.0/3.0 with full 16 lanes. Even with 1.1, there are no huge spikes that you'd associate with running out of VRAM and having to swap textures from main memory.
|
# ? Feb 4, 2017 12:31 |
|
RyuHimora posted:The GTX Titan XP is the first GPU to actually need more bandwidth on average than PCIe 2.0 x16 can provide, and you can't hit that while gaming, only doing deep learning or other super-GPU-heavy tasks. If you're going to bring up GPU compute tasks (such as machine learning applications), then this is entirely untrue. Similarly, this thirst for better interconnects in HPC and compute situations is one of the major reasons why Intel enabled Knight's Landing to self-host and talk directly to the network (e.g. Omni-Path) rather than requiring it to go through the PCIe complex to reach a host or network port. Likewise, the PCIe bandwidth pressure caused by network accesses is exacerbated by RDMA technologies like Nvidia's GPUDirect and AMD's DirectGMA. Limitations of PCIe have already spawned a number of industry consortia looking to build the next replacement, including CCIX, Gen-Z, and OpenCAPI.
|
# ? Feb 4, 2017 20:03 |
|
Paul MaudDib posted:Honestly at this point it's embarrassing for you to even bring up average frametime When did I mention average frametime? I was specifically talking about actual PCIe bandwidth used. My entire point was that, for a desktop user running a PCIe 3.0 motherboard, you are not going to see performance loss by sharing lanes between a GPU and other PCIe cards. Don't put words in my mouth. Menacer posted:If you're going to bring up GPU compute tasks (such as machine learning applications), then this is entirely untrue. I was not aware of most of this, somehow. Thank you for bringing it to my attention.
|
# ? Feb 4, 2017 21:02 |
|
RyuHimora posted:When did I mention average frametime?. I was specifically talking about actual PCIe bandwidth used. My entire point was that, for a Desktop user running a PCIe 3.0 motherboard, you are not going to see performance loss by sharing lanes between a GPU and other PCIe cards. Don't put words in my mouth. Sorry, I meant average framerate. And my point was that you can't measure performance by average framerate alone. We're a decade past that poo poo.
|
# ? Feb 4, 2017 21:25 |
|
Chipzilla's moving! Praise be to Moore, Chipzilla has been roused from its torpor! https://www.cpchardware.com/intel-prepare-la-riposte-a-ryzen/ The gist: Intel appears to be carving out new bins for higher-clocked versions of existing chips in response to Ryzen: * An overclocked i7-7700K at 4.3GHz and 100W TDP named i7-7740K * An overclocked i5-7600K at 4.0GHz and maybe Hyperthreading named i5-7640K They should be getting their hands on samples by the end of the week. Better translation forthcoming! SwissArmyDruid fucked around with this message at 20:44 on Feb 6, 2017 |
# ? Feb 6, 2017 20:41 |
|
Hahaha, what?! I mean I guess this is the best response they can come up with in such a short time but it's so piss weak. This isn't a response, it's a fart.
|
# ? Feb 6, 2017 22:05 |
|
They scared!
|
# ? Feb 6, 2017 22:10 |