Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Misc posted:

I have seen that Bulldozer aged well, even if it didn't start out good enough to make it worth waiting around. I'm within the ideal use case for Ryzen since I have multithreaded workloads and play games socially, but I'm weighing my options towards building a dedicated, small-form-factor gaming machine instead, since I still carry my stuff around for LAN parties. There are no mITX boards available, and my local Micro Center isn't offering many Ryzens with mATX boards, so I'm going to keep watching where this all goes for a while longer, especially if the Ryzen-based APUs allow for good enough gaming performance in an enclosure small enough to fit in a bag.

Unfortunately all the really tiny cases are mITX, not mATX. I am in the same boat.

SlayVus posted:

This is kind of my problem now: I want to upgrade to an ITX build for portability. However, there are no boards at all and it'll be several months before any come out. A SilverStone Raven RVZ02 with, say, a Titan XP and a Ryzen 1800X would be an amazing VR rig.

That's actually why I built my mITX box - Raven RVZ01 with my old 4690K, a Corsair H75, and my secondhand GPU at the time (780 Ti reference). Pulled it when I flipped those cards but eventually it'll probably get my current 1080. We have cats that get underfoot, and my fiance doesn't keep the living room clean enough for VR anyway, so I'm probably going to get a second pair of lighthouses and do cockpit sims on my desktop in my room instead anyway. Vive is supposed to release a Mark 2 Lighthouse soon that will be cheaper so I'm holding out for a bit.

Highly recommend the Raven boxes though. If you have the money you can even do a custom loop, which is absolutely absurd for a mITX box that size. Apart from the Corsair One I think they're probably the best mITX boxes ever made. Dan SFX-A4 is the only real competitor there.


I have mixed feelings about Ryzen and VR though. I'd actually love if someone dug up some reviews of how it does there. My gut instinct is that the single-threaded punch of the 7700K would win the day but there are really a lot of variables in play here.

First off - the background positioning tasks are something that can be parallelized and run on separate cores. This is IMO the classic example of "stuff is getting more parallel" - we now have some fairly compute-intensive tasks that run in the background.

Second - the better minimum framerates in a lot of games on Ryzen. In theory that's a plus.


The counterarguments though:

Positioning processing has totally diverged in approaches. The Oculus Rift really needs lots of processing power since it's basically handling a pair of USB 3.0 cameras streaming in realtime. In contrast the Vive relies on precision timing - detecting the beam sweeps from the lighthouse (probably with the precision timing happening in the headset). So the Ryzen might well be better for the Oculus Rift with its processing needs, and the 7700K for the Vive with its need for precision timing. Nowadays the fanbase has mostly gravitated towards the Vive though (by about 2-to-1).

Ryzen's generally acknowledged as being modestly worse for gaming. Yeah, the minimum frametimes matter, but a VR build is also the quintessential subject where you can't just handwave and go "but my cinebench performance!". It's an all-out gaming build and historically max single-threaded performance has won that niche.

Furthermore, even though minimum frametimes do matter, the VR community has expended a lot of effort to lessen that impact with stuff like "reprojection"/"asynchronous warp"/etc.


Sorry, that's clear as mud, but I still have a bunch of questions here re: Ryzen. With the boards that are available today, a 7700K is certainly the answer for most people. For anything a 7700K can't handle, you're probably better off with a very high-clocked Haswell-E on the (only) mITX board, but that's an expensive setup and will be difficult to cool (you would certainly want to go liquid cooling there, and bear in mind you will be dissipating 140W at 4.1 GHz and 200W+ at 4.5-4.7 GHz through a 120mm radiator mount).

I feel a lot of the same arguments on the Haswell-E will apply to Ryzen as well though. A Ryzen system, at stock clocks, with no GPU load consumes about 200-230W at full CPU load (measured at the wall). That's a lot of power to dissipate in a mITX case no matter how you slice it. The RVZ02 is certainly inappropriate for this given its even tighter dimensions and lack of 120mm radiator mount.

I don't want to say the Ryzen TDPs are outright lies, but there's absolutely zero chance they cover the same level of boost clocks that the Intel TDPs do. Frankly my 5820K boosts to 4.13 GHz all-core at stock voltages, and still undershoots its 145W rated TDP fairly significantly. In practice Ryzen is perhaps 10% more efficient than Haswell-E at best, and that's being a little gracious. The 1800X is more like a 125W processor even at stock clocks. And like Haswell-E, once you hit the point of diminishing returns the power starts going nuts. The difference with Ryzen is you hit that wall about 10% sooner than Intel - Ryzen hits it at ~3.9 GHz and Haswell-E at ~4.3 GHz, and Ryzen maxes out around 4.1 GHz while Haswell-E maxes out around 4.5 GHz.
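
To put rough numbers on why the power "goes nuts" past the wall: dynamic power scales roughly with frequency times voltage squared, and past the knee every extra 100 MHz wants a big voltage bump. Quick sketch below - the voltage points are made-up illustrative values, not measurements of any particular chip:

code:

# Rough model: dynamic power ~ f * V^2. Past the efficiency knee you have to
# pile on voltage for each extra step of clock, so power climbs way faster
# than frequency. Voltages here are invented purely for illustration.
def relative_power(freq_ghz, volts, base_freq=3.5, base_volts=1.00):
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

for f, v in [(3.5, 1.00), (3.9, 1.10), (4.1, 1.25), (4.3, 1.40)]:
    print(f"{f} GHz @ {v:.2f} V -> ~{relative_power(f, v):.2f}x baseline power")

So a ~20% clock bump can easily end up being 2x+ the heat, which is exactly what you don't want in an ITX box.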

That's normally not a huge deal anyway - I am the first to point out that 10 and 20 watts here and there doesn't matter when you're talking about pushing 200+ watts through your processor - but an ITX case is the exception to that rule because you have very few options to dump that heat.

Paul MaudDib fucked around with this message at 03:14 on Apr 15, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SwissArmyDruid posted:

Passmark says AMD took 2% of market share away from Intel last quarter. :toot:

More significant when you consider that Ryzen was only on sale for the last month of Q1, which was further compounded by motherboard shortages.

https://www.cpubenchmark.net/market_share.html

Perf stepping for Ryzen when, AMD?

No, actually that says that 2% more of the Passmark runs were done on AMD systems. Which isn't quite the same thing, especially when Ryzen has performance problems that need a lot of tweaking to tune up properly. They don't even dedupe down to actual unique users, it's literally "runs of the software". Run it five times, you go in 5 times.
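
Toy example of the difference, with made-up data (not Passmark's actual methodology) - "share of runs" and "share of machines" diverge fast when one group is re-running the benchmark while tuning:

code:

# Made-up data: one Ryzen owner re-runs the benchmark 5 times while tuning,
# three Intel owners each run it once. Counting runs vs. counting unique
# machines gives very different "market share" numbers.
runs = [
    ("machine-A", "AMD"), ("machine-A", "AMD"), ("machine-A", "AMD"),
    ("machine-A", "AMD"), ("machine-A", "AMD"),
    ("machine-B", "Intel"), ("machine-C", "Intel"), ("machine-D", "Intel"),
]

amd_run_share = sum(1 for _, vendor in runs if vendor == "AMD") / len(runs)
unique_machines = dict(runs)  # dedupe down to one entry per machine
amd_machine_share = sum(1 for v in unique_machines.values() if v == "AMD") / len(unique_machines)

print(f"AMD share of runs:     {amd_run_share:.0%}")      # ~62%
print(f"AMD share of machines: {amd_machine_share:.0%}")  # 25%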

Steam results are probably going to be a little more meaningful, or some other source of data on actual systems as opposed to number of benchmark runs.

It's still a pretty good result for AMD though.

Paul MaudDib fucked around with this message at 00:34 on May 2, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Probably, as long as you don't move the block too much and create any air voids or crack the paste or whatever. I would really just repaste it though. It's fifteen minutes of work to clean it up, just suck it up.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

Best chemical I found was Akasa Tim clean. It just wipes the paste right off.

Nah, all you need is 99% isopropyl, or whatever is the highest-test stuff you can get. 70% will do in a pinch, just be careful to let it air out for a while longer because it does leave a little bit of water residue. And don't go nuts spilling 70% everywhere either.

That plus coffee filters or some other lint/fiber free wipes.

Paul MaudDib fucked around with this message at 01:43 on May 2, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

I hope the poster that offered you help isn't put off doing the same again in future. I totally don't get at all why you didn't just say thanks. But w/e.

I'm glad your new system works.

Is that me? I've been called worse during my history posting in the AMD thread :v:

(I really don't hate AMD, I pretty much ran ATI/AMD everything for close to 20 years and actually I'd love for them to take some real performance (not price) wins. I've just become... accustomed to disappointment)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

Hah no! It was EdEddnEddy being snarked at for mentioning a potential fix to that guy's problem.

I like AMD too but haven't used them for a long time. But the 1700 is interesting me.

1700 is awesome, especially for productivity, it's just not a super win over the Intel HEDT line like it was trumpeted as. It was a solid incremental improvement on the 5820K/6800K at more or less the same price (which is amazing given how terrible Bulldozer was, heil Keller), but not the second coming of Jesus or double the performance at the same price or any of the other poo poo that got said.

It's a hammer and now everyone is looking for a nail to apply it to. But 6 Haswell cores at 3.8 GHz max overclock is not really all that fast, I had a faster 5820K a year ago at stock voltage for $330. I had hopes, and Ryzen v1.0 was a big wet fart.

Also, Intel is literally launching 6C-12C Skylake HEDT chips within the next 3 months so... wait for Vega Skylake-X. 6 cores worth of Skylake IPC at 4.6 GHz is gonna be pretty killer for the kind of multi-threaded gaming that's supposed to favor Ryzen's sort of cores, and it'll probably launch at $425 or so. Betting $375 for us Microcenter-havers.

edit:

GRINDCORE MEGGIDO posted:

But I'll wait for a respin and Asus ITX.

At this point I don't trust Ryzen's memory controller far enough to do the kind of server builds I was interested in. Ryzen+ or bust.

Paul MaudDib fucked around with this message at 04:13 on May 4, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

EdEddnEddy posted:

Naw, I'm used to that sort of customer response, both from people that haven't a real clue what they're doing (but hate the "Did you turn it off and on again?" question that usually fixes the issue) and from the ones that know what they're doing. But hey, I can't read minds through a forum or anything.

On the tech side of things though, I have a knack for bringing things back to life that appear dead to all others, so I would have liked a chance with that poor board. But I can also see that, this early in the new tech's lifecycle, something just borked itself between BIOS flashes.

Glad you're up and going again.

I've actually heard from quite a few people who have managed to bork their BIOS beyond repair at this point via various updates and whatnot. If shorting out the CMOS reset pins doesn't work... what are you actually going to do?

In fact at this point you're kinda getting lucky if swapping out the board works, since so much of the UEFI lives on the CPU package now.

edit:

GRINDCORE MEGGIDO posted:

What I'd really like is to get the ultimate 6c+ chip for consistent frametime. I wonder if the 6c skylake chips will change things there.

(I have a 6600k at 4.5 ATM and kind of wish I'd gone 5820k instead now.)

Like I said I paid $330 for my 5820K, $140 for a mobo, and $120 for 32 GB of DDR4-3000 a year ago, and I hit 4.13 GHz all-core at stock voltage so... for me Ryzen was a super wet fart.

Having cores is super nice, and I'm glad the AMD folks have it too, but it wasn't really that much of a price drop. Even at Newegg, the 5820K only ever ran about $30 more than the 6700K or 7700K or other 4C8T i7s. It's not "half the price of a 6900K", it's the same price-to-performance as a 5820K was 2 years ago.

I've actually been preaching the gospel of minimum frame-times since the G3258 days and the r/amd fanbois used to dump on me for that, until AMD kicked the FX to the curb (won in averages but lost in minimum frametimes) and Ryzen started underperforming in averages but winning in minimum frametimes.

Paul MaudDib fucked around with this message at 04:25 on May 4, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SwissArmyDruid posted:

Rhetorical: ECC is the most natural thing in the world: It's the removal of a variable that can't be controlled. In the context of a home server, why *wouldn't* you want ECC?

There's really no technical argument against it, except that it's not currently binned for overclocking so using ECC would be marginally slower than using gaming memory. But usually even dual-channel has excessive amounts of memory throughput and quad-channel is like, gratuitous amounts, even at standard clocks.
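
For a rough sense of scale (peak theoretical numbers, with plain DDR4-2400 assumed as the "standard clocks" case):

code:

# Peak theoretical DRAM bandwidth: transfers/sec x 8 bytes per 64-bit channel
# x channel count. DDR4-2400 chosen as a typical non-overclocked speed.
def peak_bw_gb_s(mt_per_s, channels):
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(f"dual-channel DDR4-2400: {peak_bw_gb_s(2400, 2):.1f} GB/s")  # 38.4 GB/s
print(f"quad-channel DDR4-2400: {peak_bw_gb_s(2400, 4):.1f} GB/s")  # 76.8 GB/s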

It's a useful feature that's used to artificially segment the market. I'm told that ZFS is actually no more susceptible to bit errors than any other filesystem. The argument is actually more like "if you care enough about your data to consider using ZFS then you should be using ECC regardless of what you end up using".

Of course using slow-rear end ECC RAM would trash Ryzen's performance anyway given that inter-CCX communication is tied to the memory clocks. Really I guess that should have ended my interest in Ryzen server builds then and there.

Paul MaudDib fucked around with this message at 18:40 on May 5, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

NihilismNow posted:

It has checks against bad data on disk. The idea is that if you have bad RAM it will evaluate good data as being bad (because it fails a checksum) and will then overwrite it, possibly with wrong data (because wrong data is evaluated as being good because of your dodgy RAM). Not sure how correct it is, but the FreeNAS developers seem pretty adamant about not using non-ECC RAM with data you care about.
ECC is something you have to pay a lot to get on the Intel side, so it's a real chance for AMD to differentiate itself from the "safe" Intel choice that you know is going to work with everything.

ZFS isn't more likely to have bit errors than any other filesystem though. The devs have clarified their stance - they're adamant about using ECC with any data you care about on any filesystem. Which is good advice.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Obsurveyor posted:

ECC required for ZFS smells like bullshit and cargo cult behavior to me.

The key phrase being "any moreso than any other filesystem". If your data is significant and irreplaceable, or business-critical, or legally significant, then you should use ECC on any filesystem. If a bit gets flipped in your animes... so what?

Google's done some large-scale research here and bit errors are actually quite common. They scale up significantly with altitude, they scale up with memory allocation amounts and CPU utilization, and of course higher temperatures are well-known to push up the error rate as well (hairdryers are a common method to induce bit errors). They also tend to increase significantly on older hardware that's been used for more than ~20 months.

About one third of machines in Google's study experienced at least one memory error per year, with some platforms as high as 50% per year. Machines that have errors tend to have lots of them, with the median number of errors per year for machines having at least one error ranging from 25 to 611 (again depending on platform).

So basically the distribution here is kinda bimodal, most machines don't experience any failures but the ones that do really poo poo the bed like crazy, and those are often older machines and ones under high load. Of course, without ECC you don't really know whether your hardware is perfect or the silicon equivalent of Tubgirl.

Paul MaudDib fucked around with this message at 07:04 on May 6, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SwissArmyDruid posted:

My vote's for AMD. They are like a dumb puppy: overeager and occasionally even useful.

Nvidia wants to push SaaS onto you, and SaaS is a legitimate goddamn cancer.

Wouldn't that be "hardware as a service" in NVIDIA's case?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Deuce posted:

Actual power draw will be 200W because hahahah TDP

What even is TDP, maaaan? You can't measure power! :okpos:
- - AMD engineers

PerrineClostermann posted:

160/12 = 13.3w per core
155/16 = 9.68w per core

Pretty decent difference in TDP, anyway.

Except that actual measured power draw under, say, Cinebench R15 multi-threaded has been like a third higher than the official TDP figures. All in all it's pretty much comparable with Broadwell-E - and that's a good result for a first try, yay, AMD numbah 1 Intel numbah 4, hype train choo choo - but it's nowhere near what AMD is advertising it as. AMD is lowballing it, and Intel is leaving some headroom for a moderate amount of AVX. A 6900K/6950X and a 1800X are all ~130W processors at stock clocks.

https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-Review-Now-and-Zen/Power-Consumption-and-Conclusions

https://www.bit-tech.net/hardware/2017/04/06/amd-ryzen-7-1700x-review/6

http://www.tweaktown.com/reviews/8072/amd-ryzen-7-1800x-cpu-review-intel-battle-ready/index12.html

http://hothardware.com/reviews/amd-ryzen-7-1800x-1700x-1700-benchmarks-and-review?page=10
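
Back-of-envelope version of that, using the roughly one-third figure from those reviews (an approximation, not a spec or a measurement of any particular sample):

code:

# 1800X: rated 95W TDP vs. roughly a third more package power observed under
# an all-core Cinebench-style load in the reviews linked above. The 1.33
# multiplier is that rough review figure, not an official number.
rated_tdp_w = 95
measured_w = rated_tdp_w * 1.33
print(f"~{measured_w:.0f} W under all-core load")                        # ~126 W
print(f"~{measured_w / 8:.1f} W/core vs {rated_tdp_w / 8:.1f} W/core on paper")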

Paul MaudDib fucked around with this message at 22:12 on May 15, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Fauxtool posted:

The 1070 wasn't at launch either, considering it was matching the high-tier 980 Ti.

Also, a GTX 1050 isn't really low-end when you consider that it matches a high-tier GTX 580. In fact, even integrated graphics are high-end if we measure against a Voodoo 5. Really makes you think, huh?

quote:

It still isn't really mid-tier if you consider the whole lineup on both sides

The fact that AMD's lineup has been clownshoes does not change the fact that the 1070 is a mid-tier graphics card. There are two cards above it in the lineup that reach performance levels 60% higher than the 1070. Prices are hanging out just above $300 right now. It's exactly in the middle of NVIDIA's lineup. It's a mid-tier card.

Paul MaudDib fucked around with this message at 04:13 on May 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

PerrineClostermann posted:

Maybe in a rich man's world.

Yes, the x70 GPU, traditionally the best-selling price point in the entire lineup, is truly a status symbol for wealthy gamers.

AMD's lack of offerings above the RX 480 is causing some serious sour grapes for you guys.

Paul MaudDib fucked around with this message at 05:24 on May 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Nah, a $350-400 launch price is the norm for x70 cards dating back to the GTX 470. Go look it up if you like.

This poo poo is why Intel doesn't ever lower their prices: if they ever raise them back up then people will throw a temper tantrum. So Intel competes based on features/performance rather than pricing.

Frankly the whole thing is comical to me because NVIDIA has actually been the one driving increased consumer value in the GPU market for at least 3 years now. AMD is still peddling the same 290-performance-for-$200 offering they were 2 years ago, and rebranding has become AMD's norm. At this point they essentially just follow NVIDIA's pricing structure, and the only time they really lower prices is when NVIDIA comes out with a newer/faster/cheaper offering that forces their hand.

In the GPU market: AMD is basically what people imagine Intel to be.

Paul MaudDib fucked around with this message at 05:46 on May 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

The $330 970 was perhaps their best selling card ever though, it's been #1 on the steam hardware survey for ages. What defines mid-tier?

Well, if you base the performance scale on the classic everyman's GPU, the Rage 128 Pro, you can clearly see that the current offerings all fall into the super-ultra-platinum tier, the mega-super-titanium tier, and the hyper-giga-ultra tier.

Serious answer: it's arbitrary to slice them into three performance tiers to begin with, since each model of GPU in the lineup is pretty much exactly 25-30% faster than the last (apart from the 1050 Ti). So really each one is its own "category" if you want to look at it like that; if anything there are really like 5 performance categories (low/low-mid/upper-mid/lower-highend/upper-highend). That puts the 1070 in an "upper-mid" tier, which sounds correct to me given the x70's mainstream popularity.

This question really also involves an implicit judgement on what resolutions/refresh rates are low-range, mid-range, and high-end. After all, a 1060 is a pretty decent GPU for 1080p... not so much for 1440p or even 1080p high-refresh. From a 1080p perspective I can certainly see the 1070 being high-end, but at 1440p it's the entry-level offering for a decent experience, whereas the 1060 would definitely be low-end.

But this is where people dig in their heels and start screaming about how the Steam Hardware Survey says if you own anything higher than 1080p60 then you deserve the guillotine.

(as if the Steam Hardware Survey isn't totally dominated by laptops with integrated graphics and other PCs that are laughably unable to handle modern gaming)

Paul MaudDib fucked around with this message at 06:39 on May 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

EoRaptor posted:

This is a defensive play against AMD, nothing more. They are going to drive 'feature X' as the new open standard, pushing out any competing standard. It's just 'coincidence' that 'feature X' is something they developed internally and have a huge head start on vs their competitor.

It's all about positioning for OEM's.

So basically like FreeSync then? :allears:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

fishmech posted:

Don't they, for the normal high performance computing market?

There's the FirePro S9000 series. Possibly more. As you note, it's fairly common in the HPC market - removing the ports allows you to put more louvers there and cool it better. Another even more niche feature offered is passive cooling (eg NVIDIA K80) - dedicated supercomputers push aircon through the rack pretty quickly and you remove a whole ton of moving parts.

GRINDCORE MEGGIDO posted:

I'm surprised AMD don't sell a lightweight compute card that's just a standard card, cheaper, with no video outputs, and separate the markets.

This is all about economies of scale. What's the savings from removing a few extra discretes and side-chips in mass-production quantities? Not much. How much does it cost to set up the pick-and-place machine for new boards, and track an entire other second lineup? At least some. How many are they going to sell? Well, either very few, or their entire stock - in which case you're now sold out for months until you can ramp production (by which time it may well be over).

I'm sure you can see why they aren't jumping all over that.

Also, miners usually figure on selling used cards to get some of their money back out when they're done with them. Regular users are gonna want a display output.

Whether they are OK or not kinda depends on the card and how the miner has used it. Most miners do undervolt and try to run them cool (lots of fan) which helps electromigration, and they usually don't have many thermal cycles on them (it's one of the primary failure modes). But man it is a lot of power-on hours, and fan bearings wear over time, etc. And open-air-frame cases probably aren't the world's best thing w/r/t electrostatic discharge (which can cause very subtle damage that doesn't necessarily kill the card outright).

Frankly I really like the idea of the XFX Hardswap series. It should be easier for end-users to swap fans, it's a pretty common failure mode and would be appreciated by bitlords and gamers alike.

Paul MaudDib fucked around with this message at 04:52 on May 30, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

Thank you, I'm a big CPU nerd so stuff like this is right up my alley.

Agner Fog is the best. If you haven't yet, also check out another of his works in this series, Microarchitecture.

Scott Wasson (formerly of TechReport) is another amazing guy to read/listen to. Unfortunately he's mostly gone silent since getting a job at AMD doing hardware-engineering-y things. He was way out in front on the whole frametime issue and all-around knows his poo poo. Is his old podcast still worth listening to anymore/any replacements?

David Kanter's great. I wish he did more podcasts or writing or whatever.

SemiEngineering also has some really interesting reads focused on litho tech/semiconductor fab stuff. I read an interesting article recently on some of the challenges they're facing with extreme UV (which is now beginning to push into X-ray territory). I'm pretty sure there was another article recently that went into some of the stuff they're doing, where they're now instantly ionizing a little blob of metal into plasma in order to get a perfectly round blob that behaves as close to ideally as possible, or some poo poo like that.

pyf interesting tech reads?

Do we have a SHSC or Cavern of Cobol book/article/what-i'm-reading thread for this stuff?

Paul MaudDib fucked around with this message at 04:39 on May 30, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Krailor posted:

Yup, AMD is all-in on Infinity Fabric. Their plan is for Navi to do the same thing on the GPU side that they're doing on the CPU side right now. If they can manage to glue together several undervolted Polaris/Vega cores and have them function as a single unified GPU, I think they'll be very successful.

I'm interested to see what the die layout looks like for the 16-core Threadripper part. They showed off the 32-core layout and it's 4 8-core modules arranged in a square. I'm wondering what the 16-core will be: 2 modules on one side of the chip? Diagonal? Centered? Or (worst case) will it be some sort of conglomeration of various combinations of modules with some of their cores disabled?

I really can't imagine them doing that last one; but it's AMD.

There are probably more optimal arrangements, it's kind of an interesting question.

Are there any model systems (toy or real-world) where both the logical design (eg VHDL) and the design constraints on the fab process are known, ideally along with a datasheet on the process with something like "thermal output per mm^2" according to some clock/duty cycle rating?

(I doubt it but someone please surprise me)

Worse comes to worst, you should be able to discern a lot of this experimentally - delid that bitch, toss some thermal-sensitive lacquer on there, and record it as it goes through a test suite designed to gently caress with each particular stage or execution unit. I'm pretty sure one of the reviewers did this for Ryzen?

Paul MaudDib fucked around with this message at 02:07 on May 31, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Delusibeta posted:

The official argument is that they were demonstrating how Threadripper could handle two of the "latest and greatest graphics cards for content creation" (i.e. Vega). In the process, they made said graphics cards look like 1070s.

While that's obviously damage control, I think there may be a nugget of truth there.

Workstation processors are not good at gaming. Even thread-friendly games still have a single primary thread that bottlenecks first, and throwing more cores at it doesn't help much past a certain point. In fact the extra threading overhead can actually hurt (especially if it puts more overhead onto the primary thread). Given how Ryzen is already noted for suffering huge issues while gaming thanks to its funky interconnect, things may be even worse if threads are bouncing all over the place across four CCXs on two separate dies.

Threadripper is not a gaming chip and I think people who just want a high-end gaming chip are going to be disappointed. It's not really a competitor to the Intel HEDT lineup where you have 12 cores on a single die. It's a workstation chip and clocks are going to be lower (and they may have locked multipliers, who knows).

Maybe it's just super CPU bottlenecked on a single thread. Can using a lovely CPU make your 1080 Ti perform like a 1070 would? Yes.

But who knows given that they didn't show any Rivatuner data?

That's about the most positive interpretation I can give there. Maybe someone totally hosed up conceptually and used the wrong hardware to build their demo rig, like they thought they'd throw a bone to power-gamers or whatever. There is no possible interpretation where this wasn't an absolutely insanely idiotic demo to give.

Also, the idea that you somehow need to prove that your CPU can do CrossFire is absolutely ludicrous. You've been able to do Crossfire on consumer CPUs for loving ages now, and running at PCIe 3.0x8 speed has almost no impact on anything except really intensive compute.

Paul MaudDib fucked around with this message at 20:39 on May 31, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

redeyes posted:

Are you slightly crazy? Any of these high end chips are great for gaming. Just because you get 3 or 3.5Ghz per core does not mean games are going to suddenly suck.

3 GHz clock rates will actually have a very detrimental impact on game performance. Take a Ryzen 7 and clock it down by 33%, what do you think happens?

Lockback posted:

I think he means from a value prop. If your primary use is gaming (non streaming) then you will likely get a lot more value for your money elsewhere, right now the vast majority of games do not use 4 cores (and even the ones that do are usually still single-core dominant). That will eventually change, though dunno if that is 2-4 years from now or 5-8.

I mean in an absolute sense too. Gaming is typically bottlenecked on a single thread - even in the world of DX12 there is still a single primary thread that takes a disproportionate chunk of the load. You need to clock high to get that thread running fast enough. Workstation chips are going to have lower turbo clocks, and if you attempt to spread out over all your cores you're going to push the turbo clocks even lower.

Every thread you add increases the synchronization overhead. At first, the gains from moving work off the primary/bottlenecked thread are going to be worth it. At some point the overhead adds up and you're not really getting anything. That point is certainly at less than 32 threads. Like, right now it appears to be 6-8 threads for most games based on experimentation someone did on Ryzen 7 a few months ago.
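
A toy way to see that shape: Amdahl's Law with a small per-thread synchronization cost bolted on. The 70% parallel fraction and 1% per-extra-thread overhead below are invented to illustrate the curve, not measured from any game, but the speedup peaks around 6-8 threads and then slides backwards:

code:

# Amdahl's Law plus a linear sync-overhead term: speedup grows, flattens,
# then actually declines once coordination costs outweigh the extra cores.
# parallel_frac and sync_cost are illustrative assumptions, not measurements.
def speedup(threads, parallel_frac=0.70, sync_cost=0.01):
    serial = 1.0 - parallel_frac
    parallel = parallel_frac / threads
    overhead = sync_cost * (threads - 1)
    return 1.0 / (serial + parallel + overhead)

for n in (1, 2, 4, 6, 8, 12, 16, 32):
    print(f"{n:2d} threads -> {speedup(n):.2f}x")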

HalloKitty posted:

Suddenly many core chips are uncool if they're from AMD, but an object of lust if they're from Intel, and the inter-core latency of AMD's solution is given an enormous amount of weight even though it's found to be a minor issue at worst in a handful of situations, and in others it's simply not a problem.

These new AMD chips are offering ECC RAM, a ton of cores and tons of PCIe lanes, no doubt at a price Intel won't want to compete with. What the hell is there to really complain about?

:lol: gaming doesn't need ECC, why would you even suggest that?

Intel's chips clock a lot higher, and Skylake-X is probably going to have some decent IPC gains. Those are tangible advantages for gaming.

We don't really know how latency is going to work, but we can certainly look at how it already works for existing multi-socket systems. Hint: badly. You see no gain from the second socket, and in many cases performance gains are negative due to threads bouncing around between sockets. Threadripper is basically multi-socket-on-a-chip and it'll probably behave much the same way.

Again: Threadripper is going to be nice as a workstation chip, but it's not a gaming chip. Ryzen 7 is already HEDT class, Threadripper is basically a Xeon. You don't buy Xeons for gaming. You certainly don't build a multi-socket Xeon system for gaming. Somehow it's a hateboner to point that out?

Paul MaudDib fucked around with this message at 16:20 on Jun 1, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

HalloKitty posted:

I'm not talking in relation to gaming at all, I'm just saying it's a nice bonus for other uses

Then I think you didn't read the quote you were responding to. Here's the excerpt that redeyes found fault with:

quote:

Threadripper is not a gaming chip and I think people who just want a high-end gaming chip are going to be disappointed. It's not really a competitor to the Intel HEDT lineup where you have 12 cores on a single die. It's a workstation chip and clocks are going to be lower (and they may have locked multipliers, who knows).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

If those stutters are caused by background processes being scheduled onto the same core as the main game thread, simply throwing more cores at the problem will help by giving the scheduler somewhere else to put those background tasks.

Take an 8-core Ryzen 7, put Windows into high-performance mode to force it to clock up, then look at your CPU utilization percent while you're at your desktop. That's how much load you'll see during a game.

Like, how high do you think it is? 5% constant load? Probably less I'd think. The reality is that Chrome now throttles background tasks, etc. Unless you've got antivirus running or something, the rest of the system just doesn't eat that many cycles.

Having 16 threads available is already plenty to schedule minor background tasks on. The impact from going to 16 to 32 threads available for that is going to be nil, and the drop in clockrates (again, comparing a 4 GHz Ryzen 7 vs a 3 GHz threadripper) is going to negatively affect game performance.

Ryzen 7 is already leaning way far to the "lots of cores" side of things. Going further is not going to help game performance.

Paul MaudDib fucked around with this message at 16:28 on Jun 1, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

FaustianQ posted:

There is no way Threadripper and Epyc don't get picked up for enterprise and workstation use at these prices, it's literally AMD CPU+MB ≤ Intel CPU. Knowing AMD, they'd throw in discounts for buying completely from them, so that means more GPU sales and more sales of their rebranded SSDs and memory. Has AMD made any moves to either source from Qualcomm or acquire their own Ethernet controllers?

The HBCC memory controller AMD is talking about has me thinking, could AMD turn HBM2 into a nonvolatile storage medium, or even a PCIE Ramdisk?

CUDA can support RDMA via peer-to-peer transfers or NVLink, and I would be unsurprised if AMD had or developed equivalent hardware; the problem is that the bandwidth of those channels is still quite low. EDR InfiniBand with a 12x connection (3 ganged cables) is still only 300 Gbit/s, and with a standard QDR x4 you're only at like 32 Gbit/s (1 cable). That's barely DDR4 speed. At PCIe 3.0 x16 you're at 128 Gbit/s; slice that down to x4 and you're naturally at 32 Gbit/s.
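
The lane math behind those numbers, roughly (per-lane rates are approximate effective figures after link encoding, so treat this as order-of-magnitude only):

code:

# Approximate effective per-lane rates: EDR IB ~25 Gbit/s, QDR IB ~8 Gbit/s
# (after 8b/10b), PCIe 3.0 ~7.9 Gbit/s (after 128b/130b). Dual-channel
# DDR4-2400 included for the "barely DDR4 speed" comparison.
links_gbit = {
    "EDR InfiniBand 12x":     25.0 * 12,            # ~300
    "QDR InfiniBand x4":       8.0 * 4,             # ~32
    "PCIe 3.0 x16":            7.9 * 16,            # ~126
    "PCIe 3.0 x4":             7.9 * 4,             # ~32
    "DDR4-2400 dual-channel": 2400 * 64 * 2 / 1000, # ~307
}
for name, gbit in links_gbit.items():
    print(f"{name:24s} ~{gbit:4.0f} Gbit/s")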

Paul MaudDib fucked around with this message at 01:30 on Jun 3, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

edit: I'm gonna take this to the GPU thread

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
That's a pretty good price, especially if they're unlocked for overclocking

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I wonder what motherboards are going to look like to handle that many PCIe lanes out. More focus on EATX and EEB? Just all PCIe x16 at single-slot spacing to give good flexibility?

This would actually be a really interesting design to use as a basis for GPU compute servers (although the last 4 lanes to get an even x16 per card would have been nice). Supermicro has some rack chassis with flexible risers in them I think. All you need then is a mezzanine card implementation of your GPU and you can engineer something that can compete with NVIDIA on rack density (like 1-2U). I think I remember NVIDIA having water-cooled Tesla servers now? That would be really effective for AMD too, you could easily make it a 1U with everything under water.

Is Threadripper capable of multi-socket operation?

Paul MaudDib fucked around with this message at 02:16 on Jun 7, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

wargames posted:

Would zen2 with just a slight bump to volts all around be a lazy fix for such things or is zen 2 going to be on 7nm?

Yeah, but it'd make their power efficiency even worse than it already is. They're already practically tied with Intel's chips rather than being 30-50% ahead like their TDP numbers imply.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I am actually kinda thinking about building an i3-7350K rig for single-threaded games, I can get one for $130 - $30 bundle discount at Microcenter, maybe put a 1060 3 GB in it. Or I could go full nuclear and get a Kaby Lake-X.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

Paul, what's the deal with game frametimes and sync and res and poo poo like that.
As far as I can tell the AMD chips are doing better with frametime consistency, but I'm not sure if gsync or freesync negates that or not.

Also how does frametime consistency change with res? is there a point at which AMD loses out as res goes up? Mostly for 1440p.
I'm not looking at HEDT from either manufacturer and it's just idle chit chat really.

*Sync makes minimum frametimes much less of a big deal, but they can't actually generate frames that aren't there so if you stutter hard enough you'll still notice. If your framerate is like 80fps average then dropping to 60fps momentarily isn't really a huge deal, but if your average framerate is like 40 fps then dropping to 30 fps is going to be a bigger problem.
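
Same drop in percentage terms, very different hitch in frame-time terms (just arithmetic):

code:

# The same relative framerate dip costs twice as many milliseconds at 40 fps
# as it does at 80 fps, which is why it's much more noticeable.
def frametime_ms(fps):
    return 1000.0 / fps

for avg, dip in [(80, 60), (40, 30)]:
    extra = frametime_ms(dip) - frametime_ms(avg)
    print(f"{avg} -> {dip} fps: {frametime_ms(avg):.1f} -> {frametime_ms(dip):.1f} ms "
          f"per frame (+{extra:.1f} ms)")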

AMD chips are indeed doing better with frametime consistency, a lot of which really comes down to more cores being able to handle "difficult" game events (sudden load-in, etc) more easily (and as such, the Intel HEDT chips have similar advantages as well). This does assume a degree of threading being present in the games, it won't help in single-threaded games.

I think Intel could improve things somewhat by bringing back Crystalwell's L4 cache, but there's no denying that modern games are going to be using more and more threads, 4C4T is still decent for the moment but 4C8T is definitely the minimum you'll want for a "long term" rig.

I don't think resolution has a major impact on CPU bottlenecking at all. The CPU doesn't directly handle graphics at all, geometry all happens on the GPU. CPU framerate is largely determined by the number of game objects that need to be checked (and drawcalls made, etc) and that's independent of the resolution.

People say "Ryzen isn't good at 1080p" gaming and that's not quite accurate IMO, it would do 1080p 60 Hz just fine. It's really more correct to say "Ryzen isn't good at high-refresh gaming", whether that's 1440p 144 Hz or 4K 144 Hz or whatever. But that's not as good a spin, given that everyone and their dog are buying high-refresh gaming monitors nowadays and high-refresh monitors are obviously the direction the premium gaming market is headed.

(I don't actually think Ryzen's single-threaded deficit is terrible to the extent they're unplayable, especially with *Sync, but Ryzen's single-thread performance is behind the Intel HEDT chips, which are behind Intel's Small Kaby Lake chips, and it's probably correct that a high-clocking i3 is going to do better on single-threaded games than a super-parallel 8-core machine with slower clocks. Which is why I'm thinking quasi-seriously about doing a cheap 7350K or 7640X build for a second machine.)

Paul MaudDib fucked around with this message at 06:56 on Jun 11, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

If / When Vive2 has higher res panels, I wonder what would be the best chip to drive it.
Perhaps Ryzen+ might be out by then, it'll be interesting to see.

So very much idle curiosity. Thank you.

Well again, my take is that resolution doesn't really drive CPU bottlenecking, framerate drives CPU bottlenecking. Higher-res panels are fine, higher-refresh panels would be problematic.

Fortunately, I think VR game devs are probably very conscious of the need to keep their games running fast.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

GRINDCORE MEGGIDO posted:

Oops. I meant to put refresh there.

So the correct way to do it for vr would be to match a chip to the refresh you want to drive. If they both can put out over the min refresh rate of the display, you'd choose the one that offered the best frame time consistency.

Is that correct?

In general yes, but there's really no globally-applicable rating here that Ryzen is good for 80 fps, 6800K is good for 85 fps, and 7700K is good for 100 fps, it all depends on the game. And if a game threads well, it's possible that Ryzen could come out on top, although fairly unlikely given the way games currently tend to bottleneck on single-thread performance first (DX12 does not fully solve this problem, there is always a "main" thread that takes disproportionate amounts of the load and eventually Amdahl's Law kicks in and that becomes the bottleneck).

And again, *Sync does tend to minimize apparent minimum-framerate issues (as long as they don't turn into multiple dropped frames, which will be perceptible). If you are running at 100 fps normally then dropping to 80fps for an instant isn't really an issue.

VR is a bit trickier because current *Sync implementations aren't really compatible with VR due to latency. Also, with Oculus Rift in particular you actually do need a fairly large amount of cores to handle the positioning (processing a pair of USB 3.0 camera streams in realtime is not an easy task). You can assume that eats at least one core full-time, if not two. Vive is a lot lighter on CPU time because of the inside-out tracking system which can be implemented almost entirely in hardware, you would probably want to go with a 7600K or 7700K for a dedicated Vive PC.

Paul MaudDib fucked around with this message at 07:30 on Jun 11, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

how cleanly does (python) scientific code parallelize? i'm tempted to throw money at a ryzen r7 1700 as an upgrade over my current i5 4590; would I get close to 2x performance moving from an i5 4 core to ryzen 8 core for hobby machine learning workloads (e.g., sklearn, xgboost)?

If you are tying into numpy/scipy (numexpr is another one that came up in a StackOverflow response), i.e. something that hooks native C code and runs operations in parallel on a matrix, then it's pretty good.

Raw Python code, it's garbage. Interpreted Python does not thread due to fundamental architectural flaws that the BDFL will not change. You can have "threads" that take advantage of explicit yields or blocked I/O, but you will never have two threads actually executing at the same time due to the GIL, so the only way to write truly parallel Python code is process-level parallelism. Or you can go to another interpreter like Jython, but then you don't have any guarantee that any given library is written around an assumption of concurrency, so that's likely to crash and burn horribly.

Fixing this would basically involve going back and editing every Python package ever written, and adding thread-safety to it. Now, Python 3 actually requires everyone to go back and manually edit every Python 2 program for compatibility anyway... which has already taken almost a decade and produced a deep fracture in the Python community... but this particular breaking change was just one breaking change too many, according to GVR...
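
For reference, the process-level escape hatch looks like this - a minimal multiprocessing sketch where each worker is its own interpreter with its own GIL (the work function is just a stand-in, not anything from sklearn/xgboost):

code:

# Process-level parallelism: multiprocessing sidesteps the GIL because every
# worker is a separate interpreter, so CPU-bound pure-Python code actually
# runs on multiple cores at once.
from multiprocessing import Pool

def burn_cpu(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    with Pool(processes=8) as pool:            # e.g. one worker per Ryzen 7 core
        results = pool.map(burn_cpu, [2_000_000] * 8)
    print(sum(results))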

Paul MaudDib fucked around with this message at 07:27 on Jun 14, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

yah i'm aware of the GIL - the code and frameworks I use spin up new processes.
my question is whether i'll get a straight 2x performance boost moving from an i5 4 core to ryzen 8 cores or do the architectural differences blunt that substantially.

Matrix math tends to parallelize real well thanks to time-tested primitives like BLAS/LAPACK (which Numpy hooks as native C). I'm guessing if you're getting good scaling (4x speedup) on a 4-core already then you would get 8x out of Ryzen, yes.
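
If you want to sanity-check the scaling on your current box before spending money, something like this will peg every core even though the Python side is single-threaded, since the matmul runs inside the native BLAS library (sizes and timings here are arbitrary):

code:

# A big NumPy matrix multiply is dispatched to the multi-threaded BLAS backend
# (OpenBLAS/MKL), so it uses all cores with zero Python-level threading.
import time
import numpy as np

a = np.random.rand(4000, 4000)
b = np.random.rand(4000, 4000)

start = time.perf_counter()
c = a @ b                      # runs inside the multi-threaded BLAS backend
elapsed = time.perf_counter() - start
print(f"4000x4000 matmul: {elapsed:.2f}s (result checksum {c[0, 0]:.3f})")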

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

awesome thanks. and is there a sweet spot between the 1700, 1700x, and 1800x models?

The higher models have higher factory clocks and are binned better, so they'll OC slightly farther. But the difference is fairly minimal: all current Ryzen processors hit a wall at 3.9 GHz, and even a good chip only makes it to 4.0 or 4.1 GHz with a nearly dangerous amount of voltage.

On the flip side the 1700 uses significantly less power, both stock and apparently also when OC'd for some reason.

The X models have "XFR" which is like an auto-overclock thing (assuming you have sufficient cooling). But it only gives you another 100 MHz tops.

The 1700 is cheaper than the other models and comes with a cooler included, which is possibly the biggest differentiator in the whole line (:lol:). IMO it's still the obvious choice in the Ryzen 7 lineup although less so than at launch, when the 1800X was like $200 more expensive than the 1700. If you want more factory speed then the 1700X or the 1800X are OK but really they don't offer anything you won't be able to live without.

They all have the same L3 cache and stuff too, no difference there.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Wirth1000 posted:

Posting from my new Ryzen 5 1600X + B350 build mothafuckas. This is pretty spiffy.

Gratz, by the way. High-clocked 6C12T is bitchin' for any sort of encoding or productivity work.

How much memory did you manage to get to boot? Are you running 2 sticks per channel or 1?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

In true AMD fashion, they've released the Zen software optimization guide... over 3 months after Zen launched.

http://support.amd.com/TechDocs/55723_SOG_Fam_17h_Processors_3.00.pdf

Better late than never I suppose v:v:v

By the way please consider this prosecution exhibit #50 of a billion that AMD was not ready to launch Zen and will not have any sort of effective silicon stepping revision available anytime this year (eg some people think Threadripper might be on a new stepping, which implies taping out as Ryzen was preparing to launch). If you don't understand what coding practices work well on your processor, how in the world would you actually revise the silicon?

I mean Ryzen+ is going to be baller, as will consumer mobos with real ECC support (if Asrock does a mATX board with ECC and a bunch of SATA ports I'm buying that so fast) but I don't think it can hit before next year even with an ASAP turnaround, and even Q1 might be a rush.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Anime Schoolgirl posted:

it's a hell of a showing for 90% finished architecture, though. i can see why they wanted to release it early, other than that stockholders etc etc

Oh, I agree. AMD needs a comeback story and overall the release has been just that.

But overall, do you think that AMD was in an informed position to have coherent changes to Zen taping out during the pre-launch runup? They clearly did not even understand that memory clocks were an issue. A few gentle words to the reviewers - "we know the problem, it's patchable in software, hold on" - would have been great; they were clearly still trying out beta BIOS builds right up to the day of launch.

Again, I'm not criticizing here, things have turned out well for them, but do I think they knew what was going on, let alone had the necessary hardware fixes approved and taped out? Nah.

Zen obviously has some stupid bottlenecks that nobody foresaw and Zen+ is going to be baller once they figure it out, I bet OC and efficiency goes up quite a bit. Zen+ at 4.5 GHz with only slightly higher TDP would be fantastic.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Anime Schoolgirl posted:

I don't think they had a lot of other options, since their resources and budget are much more limited. Things seem to be going better now though and they're getting server customers and that's the money train they need to catch like last decade.

AMD has been operating on "chicken head cut off" mode since 2009, they're just surgically putting it back on now that Zen is out

I don't disagree with any of this. But do you think they knew what they were changing for the tapeout in January/February so it could go to the fab in March as a new stepping while Ryzen was launching? Without even a basic optimization guide in hand?

Nah.

And that's the only way you get a Q4 2017 Ryzen+. It's really gotta be Q1 2018, possibly Q2 (they'll rush it as much as possible). Which probably means "not threadripper", at least at launch (perhaps an early series upgrade?)

Paul MaudDib fucked around with this message at 06:06 on Jun 15, 2017
