Klyith
Aug 3, 2007

GBS Pledge Week

Palladium posted:

The funny part is nobody really gave a crap about VRMs back in the FSB OCing days, when chips were actually worth OCing with 50+% headroom

CPUs consumed less power & were way less sensitive to fluctuations back then, and the OC limit was set more by the primitive heatsinks of the day and the quality of your FSB clock.

Hell, I worked as a tech back during the big capacitor plague year. You know what the amazing thing was? The computers still mostly worked. Like, they'd crash or even hard restart once an hour or more, but still, you could turn it on and boot to Windows despite the caps barfing their guts all over the inside of the machine! Kinda amazing when I think about it. No way would a modern CPU work under those conditions.


Craptacular! posted:

People watch Buildzoid videos (GamersNexus features him a lot and people love that channel) and see him crap on motherboards for their VRMs and don’t understand that the guy they’re listening to likes to overclock chips well past what most goons would.

I have an MSI mobo so obviously I'm not taking his reviews as blind gospel. The guy asked if there was any reason not to use the Pro4, and that's the one reason I could think of.

As CPUs gain more sophisticated power management features that are effectively self-OCing, I think VRMs are gonna matter more. Obviously not to the extent that Buildzoid does, where he calls anything that can't do a continuous 250W "crap". But transient spikes in that ballpark -- which people have measured the 2700X doing right now -- put more stress on those components. Which for normal people like myself is more about longevity than performance: caps and regulators wear out, running hotter wears them out faster, and an overspecced VRM runs cooler.

If this was Intel and I knew my mobo was 100% disposable I wouldn't care. But with AMD's promise to keep AM4 going until DDR5 or something else forces incompatibility, I'm actually thinking twice about this stuff. I will likely put a Zen 2 CPU in this board, and might easily keep using it for 5 or 6 years -- if it keeps working that long.


Klyith
Aug 3, 2007

GBS Pledge Week
2021 seems awfully long. Holiday 2021 would be an eight-year cycle for the PS4; even with the Pro available, that's gonna feel awfully pokey by then.

A public plan for press & investors might be more of a "last possible date", with their internal plans and decisions still looking at earlier options -- particularly when there's been over-excited reporting going around like "PlayStation 5 is coming next year!"

EoRaptor posted:

Ehhhh... They do need to beat their retail date by about a year with development consoles, so they might not have that option. It does matter less on a dedicated console, because games can be written to be hardware optimized.

Optimization is always a casualty for launch games. And even if Navi or whatever is a big upgrade from GCN, it's not gonna be that different. The switch over to x86 and PC derived parts is gonna make a "generation" switchover way less painful.

Klyith
Aug 3, 2007

GBS Pledge Week

Eletriarnation posted:

Yeah, my thought is that the PS4 Pro is more like a successor that has backward compatibility

The Pro is often called a half-step but it's more like a quarter-step. That's why IMO it won't count for much in 2021 toward staving off unfortunate comparisons. Plus the requirement to spec games to the base PS4 will still hold games back, unless Sony is OK with stuff running at 20 fps on base models.


The 360 & PS3 had a really weak year when they were obviously limping, and there was a big resurgence in PC-only or PC-centric games. IMO that was another contributing reason for the flip in dominance from 360 to PS4, besides all of MS's blunders: petering out like that let people drift away to PC or other platforms, so it wasn't as big a deal to switch loyalties when the new consoles came. Eroding the lock-in right before a new generation was a tactical mistake.


Eletriarnation posted:

As far as I'm aware the console itself doesn't typically have great margins compared to the games anyway.

At this point I think the main sources of profit for platform owners are the Gold/Plus subscriptions & ads.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

The problem with a 32-core Threadripper
is that the only people who need a 32-core processor are the people who AMD would rather sell higher-priced Epyc server chips to.

Klyith
Aug 3, 2007

GBS Pledge Week

Craptacular! posted:

The gaming industry marches to the tune of the blue team

Seems pretty specious when both consoles use AMD cpus. I'm sure Sony and MS are not hobbling performance just to kowtow to intel.

The sad truth is that multithreading is difficult, games are not made with the same priorities as server software, and a lot of the tasks that are easy to split out onto their own threads are not the ones that consume the majority of CPU time. 4-core CPUs have been the general point of diminishing returns for games because once you have the main game loop, GPU API, and OS/misc junk on their own cores, you're left with pulling work out of the main thread.
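
To make that concrete, here's a rough structural sketch (hypothetical names, not from any real engine) of the usual split: render submission and asset streaming get their own threads because they only consume data, while the simulation stays on the main thread because every tick depends on the previous one.

code:

import queue
import threading
import time

draw_queue = queue.Queue()   # main thread -> render thread
load_queue = queue.Queue()   # main thread -> asset streaming thread

def render_thread():
    # easy to split out: it just consumes finished frames
    while (frame := draw_queue.get()) is not None:
        time.sleep(0.001)    # stand-in for submitting draw calls through the GPU API

def loader_thread():
    # also easy: asset streaming has no per-tick dependency on the simulation
    while (req := load_queue.get()) is not None:
        time.sleep(0.002)    # stand-in for disk I/O + decompression

workers = [threading.Thread(target=render_thread),
           threading.Thread(target=loader_thread)]
for t in workers:
    t.start()

world = {"tick": 0}
for _ in range(100):         # the hard part: the main game loop
    # each tick reads the result of the previous one, so this work can't just
    # be handed to more cores without restructuring the simulation itself
    world["tick"] += 1
    draw_queue.put(dict(world))

draw_queue.put(None)
load_queue.put(None)
for t in workers:
    t.join()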

Klyith
Aug 3, 2007

GBS Pledge Week
for $20 it's probably not even a Phenom or whatever other old crap that's been silkscreened to say "ryzen" on the lid


unless someone's been collecting all of the old CPUs that AMD was sending out for free to people who needed to flash their BIOS

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

Last year's Threadripper, at that.

Appropriate!

But they missed an opportunity for a real burn on Intel by promising that the Threadripper chips they send out would be pre-tested for a 300 MHz overclock.

pixaal posted:

Are these pre-releases or something? Maybe AMD just wants to see what Intel is doing and plans to crack them open.

people already have them, they're binned i7-8700K chips with twiddled turbo boost numbers.

every review basically says it's a bad value for anything other than the collector or extreme OCers looking for the best cherry-picked chips. it's like a limited edition car with a trivial horsepower boost and a fancy nameplate.

Klyith
Aug 3, 2007

GBS Pledge Week

Khorne posted:

I didn't realize they aren't even going 5.0GHz on all cores. Sorry about that. Isn't a 4.3 turbo on all cores really lackluster? Intel's cpus have been able to hit 4.7-5.0 all core consistently since 2011.

I really hope AMD is able to approach 5 in 2019/2020 and Intel stops leaving so much on the table.

Yes it's really lackluster, which is why all the reviews have been saying don't buy one.

And Intel already put it on the table. Coffee Lake was what they were keeping in reserve; they pushed it to near its thermal max to keep their edge over Ryzen. That's the reason the 8086K is :mediocre: -- even with binned CPUs there wasn't a lot left. The delays to 10nm hosed them, and until that process is producing chips they're kinda stuck.

People were speculating that the 8086K would be soldered instead of using TIM, but Intel is not gonna change up their production line just for 50k enthusiast CPUs. AMD caters to us because the enthusiast PC market still matters to them; Intel is so big that they're kinda eh about it.

Klyith
Aug 3, 2007

GBS Pledge Week
New AMD promotion: turn in your CEO, we'll exchange it for a free Raja

Klyith
Aug 3, 2007

GBS Pledge Week

ufarn posted:

Any last-minute advice before I attempt to fix a few crooked pins on my Ryzen CPU?

I fixed a bent pin with a mechanical pencil -- remove the lead, and the tip is the perfect tool to slide over an individual pin and bend it back.

This was more than a decade ago, so the pins were bigger and had more space between them. But it might still work if you have a 0.5mm ultra-fine Rotring or something like that.

FaustianQ posted:

I really hope AM5 has a better retention mechanism. Nothing wrong with PGA, just there really shouldn't be a risk of damage when just removing the cooler.

Use cheap / old-school thermal paste and the old-school minimal-quantity application style. The newfangled pastes that 'cure' with heat have a number of good properties and advantages, but easy release is not one of them. If you're changing stuff out often enough that this is a concern, better to just accept the 1 degree of extra heat and make removal easy on yourself.

(Personally I dislike curing pastes just because they're not shelf-stable.)


The common story was that mobo manufacturers like PGA because it's cheaper and produces fewer returns on their side, but couldn't you spec a socket to have pins but also a bracket retainer like the LGA types? Though that's also 25 cents of extra cost just to prevent the occasional user fuckup.

Klyith
Aug 3, 2007

GBS Pledge Week

ufarn posted:

If BIOS knows best, would it be safe to set the memory frequency to AUTO and the AI overclocking profile to DOCP/DEFAULT rather than 3200 and DEFAULT?

I'm less concerned about 3000 vs 3200 than the low frequencies I seem to get with AUTO mode, but maybe it's scaling fine during high loads without me being able to notice?

if AUTO is setting the memory frequency to under 3000, you may also need to enable XMP or something like that to use the technically-out-of-JEDEC-spec higher frequencies.

this is kinda hard to help with because every mobo maker labels their poo poo differently. also I'm not sure whether you're trying to OC things or just run the best defaults?

Klyith
Aug 3, 2007

GBS Pledge Week

eames posted:

Speaking of power efficiency and all that, Buildzoid mentioned that 32C Threadripper will have a stock Vcore of 1.05V.

we don't need to imagine what it would look like running at the normal Ryzen 1.35V, because James Mickens has already imagined it for us:

quote:

John began to attend The Church of the Impending Power Catastrophe. He sat in the pew and he heard the cautionary tales, and he was afraid. John learned about the new hyper-threaded processor from AMD that ran so hot that it burned a hole to the center of the earth, yelled “I’ve come to rejoin my people!”, discovered that magma people are extremely bigoted against processor people, and then created the Processor Liberation Front to wage a decades-long, hilariously futile War to Burn the Intrinsically OK-With-Being-Burnt Magma People. The future was bleak, and John knew that he had to fight it. So, John repented his addiction to scaling, and he rededicated his life to reducing the power consumption of CPUs. It was a hard path, and a lonely path, but John could find no other way. Formerly the life of the party, John now resembled the scraggly, one-eyed wizard in a fantasy novel who constantly warns the protagonist about the variety of things that can lead to monocular bescragglement. At team meetings, whenever someone proposed a new hardware feature, John would yell “THE MAGMA PEOPLE ARE WAITING FOR OUR MISTAKES.”

Klyith
Aug 3, 2007

GBS Pledge Week

GRINDCORE MEGGIDO posted:

Cool I have this to look forward to

FWIW I never had any of these mouse problems with my 7870, which I used for something like 4 years (and always with multi-monitor). The one big issue I saw for a while was graphical corruption in Firefox, but that was eventually bugfixed by Mozilla rather than AMD, so whose fault it was is debatable.

Personally I felt that over the years I had it, the AMD driver team did really well at removing the last vestiges of "lol ATI drivers" and getting to parity with Nvidia. Some areas are actually better, at least in terms of UI and feature availability -- Nvidia's driver panel is ancient and actively lovely at this point. If AMD ever manages to make hardware that's better than Nvidia's again I'd happily buy it, because IMO the drivers are relatively even on the pro/con count.

Klyith
Aug 3, 2007

GBS Pledge Week
There could be something to it. Beyond the petty revenge thing, there also might be a fair bit of marketing benefit to it. Personally 'Epyc' didn't sound all that nutty as a processor brand name to me, and some of that might be lingering memory of Intel's branding -- I'd kinda forgotten about it until now.

It's kinda piggybacking on the long-dead recognition that Intel spent a lot of money on. But EPIC was never an official trademark or anything.

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

AMD must either be

a) very certain that the underlying technology is not trivially reverse-engineerable
2) licensing blackboxes with "power delivery goes here" and "PCIe controller goes here" and "memory controller goes here" stubs, or
iii) none of the above and whistling past a graveyard.

④ Knows from past experience that even with AMD having a good product in Zen, Intel was going to gently caress them over with anti-competitive deals & monopoly power. So this time they're holding a live hand grenade with the pin pulled and yelling "I swear to loving god I'll do it!"

Klyith
Aug 3, 2007

GBS Pledge Week

HalloKitty posted:

AnandTech reviews Jim Keller, thought it might be of interest to some here

they give Jim Keller a recommended buy in the conclusion, but IMHO it's a paper launch. I can't find Jim Keller in stock anywhere!

Klyith
Aug 3, 2007

GBS Pledge Week
they've been fire-selling the 1st gen chips for months; anyone interested in a cheap Ryzen system should have one by now


are they gonna make the integrated GPU ones on 12nm?

Klyith
Aug 3, 2007

GBS Pledge Week

pixaal posted:

Does anyone even look at the package a CPU comes in when deciding on what CPU to get? I feel everything would be a pre-built system or someone building one themselves. If you are building, you are probably buying online, or are buying a specific chip. You don't just go to a store and pick out a CPU.

I could see buying a replacement part from a shelf and not knowing any model at the store, but you are likely locked into a socket, and at that point it doesn't matter what the box looks like.

If I were spending $1000+ on a CPU then yes, I would enjoy a fancy box. I would not display the box on a shelf or anything, but it would make taking it out of the box an event.


Seamonster posted:

gyotdayum that looks absolutely evil. like pyrophoric plutonium.

It won't look like that IRL unless you shine a light through it just so

Klyith
Aug 3, 2007

GBS Pledge Week

ArgaWarga posted:

Micro Center has the 2600 for $160 and the 2600x for $190, is the step up worth the $30 or is this like a candy bar in the checkout line?

The 2600 plus $30 for a CM Hyper 212 Evo is definitely better than the 2600X with the cooler that comes in the box.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

Is there a point to a 16 phase board like the upcoming MSI MEG, if you don't actively overclock? Everyone's saying that the CPU tries to get the maximum out of itself on its own, would anything beyond the usual 8 phases of the other crop of boards make a difference? The thing is going to run a 2950X, and likely a 3950X going forward, if they bump the core count per CCX.

More real phases = better voltage stability. Voltage stability is always nice whether you're overclocking or not; it's good for component lifespan as well as overclocking. Extreme overclockers need it because they're also overvolting the CPU to within an inch of its life, so the voltage needs to stay rock-solid when it's already way out of spec. For just XFR or other basic overclocking that's not a problem: the CPU draws spec volts, the spikes and droops stay in spec, and the capacitors don't get overwhelmed by twice as much power draw as they were designed for.

More VRM parts = still worth something, since the load gets distributed across more parts and each MOSFET operates more efficiently. This has become a bigger issue with the popularity of AIO watercooling, since the VRM heatsinks may not get much airflow.
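
To put rough numbers on that (all made up for illustration, not measurements of any board): conduction loss in each phase scales with the square of the current it carries, so spreading the same total current over more phases cuts the heat dissipated in the MOSFETs and inductors.

code:

# toy arithmetic: conduction loss ~ I^2 * R per phase (ignores switching losses)
cpu_power_w = 180.0   # assumed package power
vcore = 1.2           # assumed core voltage
r_phase = 0.005       # assumed 5 milliohm effective resistance per phase

total_current = cpu_power_w / vcore   # 150 A
for phases in (4, 8, 16):
    i_per_phase = total_current / phases
    total_loss = phases * (i_per_phase ** 2) * r_phase
    print(f"{phases:>2} phases: {i_per_phase:5.1f} A per phase, "
          f"~{total_loss:4.1f} W of VRM conduction loss")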



That MSI MEG is using a controller with 8 phases, and IDK whether the phases are really doubled or just stacked 2 per phase. Regardless, if you're not planning to do extreme overclocking it's way more than sufficient. That thing was designed with the new WX 24- & 32-core parts in mind, which might draw 300W if you have a huge water cooler.

If you're just going to use a 2950X and are positive that you will be sticking with a hypothetical 3950 or whatever other sub-200W part* in the future, you can totally get away with one of the "first wave" x399 boards. The MSI MEG's other gimmick is NVMe slots if you want a ton of those.

*edit: by "official TDP" numbers

Klyith fucked around with this message at 01:03 on Aug 7, 2018

Klyith
Aug 3, 2007

GBS Pledge Week

GRINDCORE MEGGIDO posted:

Is it the paste bedding in, or differences in ambient temps?

Gotta be one of those, or something in the water cooler like particulate being washed off the block microfins. Some environmental factor outside the chip anyways.

XFR2 is smart, but it's not learning-over-time smart.

Klyith
Aug 3, 2007

GBS Pledge Week

B-Mac posted:

No that’s great, thanks a lot for responding. Is there an easy way for a person to look at a motherboard and tell whether something is for example a true 8 phase or in actuality a doubled 4 phase.

It's almost impossible to tell at a glance because the heatsinks cover up all the important parts. Sometimes the doubler chips will be on the backside of the mobo because they're pretty tiny, so that can be a clue -- but only if you have the thing in your hands, since product shots rarely show the back. Really you have to own the board so you can take the heatsinks off and ID the chips.

OTOH true 8 or doubled 4 is so minor a distinction that even buildzoid, the guy with the highest bar for "acceptable VRMs" around, thinks doublers are fine and is ok with that being marketed as 8 phases.


Unless you plan to go into heavy overclocking -- including overvolting -- it's really not worth worrying too much about this poo poo. Every AM4 motherboard will run every Ryzen chip at stock settings. Make common-sense decisions, like not putting a $350 CPU in a bargain-basement motherboard.

Klyith
Aug 3, 2007

GBS Pledge Week

EmpyreanFlux posted:

GlobalFoundries™; A subsidiary of Samsung© 中国科学院

^ seems more likely

Klyith
Aug 3, 2007

GBS Pledge Week
All this talk about interposers seems wildly out of proportion to the number of products AMD is selling that actually have interposers.

They are on TR and Epyc CPUs, which are halo & server products. High margin, low volume. Even if AMD does really well in that market, that's not a high number of units that need interposers. Then there's Vega, which has been completely uncompetitive in everything except crypto since the day it launched. Maybe they sold a lot of those last year but I can't imagine they're gonna sell many going forward. Navi is still a year (?) away, and it seems unlikely that it will be using HBM in the first go-round.

AMD does not conceivably need enough interposers to make a dent in any remaining GloFo agreements, and likely needs so few that it's completely trivial to their overall business where they get made.

I dunno whether GlobalFlounderies cancelling 7nm means AMD gets to tear up their agreements. If they're still on the hook for a bunch of wafers in 2019, they'll probably just pay off the penalty for most of them.

Klyith
Aug 3, 2007

GBS Pledge Week

TheJeffers posted:

Epyc and Threadripper don’t use interposers.

I saw a lot of references to "organic interposer". Guess that's a different thing, but they're still made by chip fabs, right? How else would you make all those tiny wires except with photolithography?


PC LOAD LETTER posted:

its really hard to give any of them serious consideration at this point since there isn't much solid information on any of it right now.

Also worth considering: the rumours for Zen+, versus the reality, where it was a pure process switch and nothing inside the chip changed at all.

Klyith
Aug 3, 2007

GBS Pledge Week

Winks posted:

Other than the Athlon one that is clearly aimed at the Pentium g-series, what's the purpose of this release?

A different SKU that they're gonna sell directly to OEMs, for prices that compete with whatever discount scheme Intel is doing at the moment?

Klyith
Aug 3, 2007

GBS Pledge Week

ufarn posted:

I had to set my minimum processor state to something like 9x% in the power settings, because Ryzen seems to struggle to find a plateau. Having it jump above my fan curve trigger every minute was driving me mad.

That seems awfully wasteful; look into your fan control and set some hysteresis on the CPU fan (and any others that are keyed to CPU temp). Even an air cooler has plenty of thermal mass -- the spike in reported temperature from a short burst of full clockspeed is more about conductivity than the whole cooler heating up. So a half-second delay before the fan reacts to the CPU isn't really hurting anything.


Also, if you're using BIOS fan control, build your fan curve with that +20 offset accounted for -- I was having this problem too until I realized that I needed to start spinning up at 60C, not 40. That's despite the temp display in the BIOS itself using the non-offset temp.
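
Here's a bare-bones sketch of the kind of fan logic I mean (the +20 offset is the one from the post above; the curve, polling rate, and all the numbers are just assumptions for illustration):

code:

# toy fan controller: a curve built against the offset temp, plus a short
# delay before chasing a higher reading so boost spikes don't rev the fan
TCTL_OFFSET = 20.0    # reported temp runs this far above the real temp
DELAY_POLLS = 5       # ~0.5 s at an assumed 100 ms polling interval

def fan_percent(real_temp_c):
    t = real_temp_c + TCTL_OFFSET          # curve is written against the offset temp
    if t < 60:
        return 30
    return min(100, 30 + (t - 60) * 5)     # ramp up from 30% starting at "60"

class FanControl:
    def __init__(self):
        self.speed = 30
        self.pending = 0
    def update(self, target):
        if target <= self.speed:           # slow down immediately
            self.speed, self.pending = target, 0
        else:                              # only speed up if the reading persists
            self.pending += 1
            if self.pending >= DELAY_POLLS:
                self.speed, self.pending = target, 0
        return self.speed

ctrl = FanControl()
for real_temp in [42, 43, 66, 44, 43, 42]:   # one single-poll boost spike at 66 C
    print(f"{real_temp} C -> fan {ctrl.update(fan_percent(real_temp))}%")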

Klyith
Aug 3, 2007

GBS Pledge Week
Remember that the first shipping generation of 7nm from the Not-Intel Alliance isn't as big a change as it sounds. Only passive features are using the new EUV stuff; the transistors are still on current process tech or something very similar. I don't think that putting a new design into 7nm is as risky as it sounds at first.

I think the best clue we have is, Zen+ was not originally a thing on the roadmaps from AMD at launch. What changed that they needed to add it? Was it really just a quick-turnaround response to Coffee Lake? Could Zen+ have been what Zen2 was on the early roadmaps but the 7nm delays made them change names and push their half-step on the process they had?

Klyith
Aug 3, 2007

GBS Pledge Week

Jim Silly-Balls posted:

a raid1 array for increased read performance.

1) will the AMD Raid1 implementation actually increase read speeds? A cursory google seems to say “maybe, maybe not” and it seems to be mostly raid controller manufacturer dependent

Increased read performance of what? If you haven't characterized your workload, that's an impossible question. RAID1 does increase read speed, but not the way a stripe set does. IIRC one of the main performance boosts is random access times, which is much more noticeable on an HD than an SSD.

(If your workload is "a pc running desktop programs and games" then no this is not worth doing for purely performance reasons.)
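
If you want to characterize it yourself, a crude sketch like this (the file path is a placeholder; point it at a big file on the volume you care about) will at least tell you whether your workload leans on random or sequential reads -- random is where a mirror tends to help:

code:

# crude probe: time sequential vs random 4 KiB reads of one large file
# (caveat: the OS page cache will flatter both numbers on repeat runs)
import os
import random
import time

PATH = "testfile.bin"   # placeholder -- use a multi-GB file on the array
BLOCK = 4096
COUNT = 2000

def avg_read_us(offsets):
    with open(PATH, "rb", buffering=0) as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return (time.perf_counter() - start) / len(offsets) * 1e6

size = os.path.getsize(PATH)
sequential = [i * BLOCK for i in range(COUNT)]
scattered = [random.randrange(0, (size - BLOCK) // BLOCK) * BLOCK for _ in range(COUNT)]

print(f"sequential: {avg_read_us(sequential):.1f} us/read")
print(f"random:     {avg_read_us(scattered):.1f} us/read")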



Also I can't even tell whether AMD RAID supports NVMe on non-X399 chipsets, let alone mixed NVMe and SATA RAID.

Klyith
Aug 3, 2007

GBS Pledge Week
I'm not sure what that HP laptop would be like to use; it doesn't seem like a great platform for a Ryzen. Those convertibles are designed much more around lower-TDP Intel processors.

Ryzen is an interesting option for a laptop but it's still a 15-25w CPU.

Klyith
Aug 3, 2007

GBS Pledge Week

pixaal posted:

They can't just move that stuff back to older tech?

For stuff like cell modems they may have made agreements with the companies they're selling them to. AMD isn't the only one that's ever negotiated a fab deal and then had regrets.

pixaal posted:

Did Intel destroy their older fabs or something?

Fab buildings are really expensive. So yes, most old production lines get taken apart and new machines are put in, effectively destroying the old fab.

Also there are aspects of the business that make an old process uneconomical. Silicon wafers are expensive; fewer chips per wafer means your costs are higher than your competition's. For something like a chipset that may not be a problem if you can just pass the cost on to your customers (though that does make the Intel platform as a whole less competitive). But for anything that's more replaceable, like a modem or flash or any standard device, you have to hurt your own bottom line to compete.
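
Rough toy math on the wafer point (every number here is invented purely to show the shape of the problem): per-die cost is basically wafer cost divided by good dies per wafer, so a part stuck on a bigger, older process starts out with a built-in cost handicap.

code:

# toy per-die cost: wafer_cost / (dies_per_wafer * yield) -- all figures invented
wafer_cost = 5000.0     # dollars per wafer, assumed
wafer_area = 70000.0    # mm^2, roughly a 300 mm wafer, ignoring edge loss

for label, die_area_mm2, yield_rate in [("old process", 100.0, 0.95),
                                        ("new process",  55.0, 0.90)]:
    dies = wafer_area / die_area_mm2
    cost_per_good_die = wafer_cost / (dies * yield_rate)
    print(f"{label}: ~{dies:.0f} dies/wafer, ${cost_per_good_die:.2f} per good die")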

Klyith
Aug 3, 2007

GBS Pledge Week

EmpyreanFlux posted:

Power and cost really, and maybe with PCIE4.0 they can further expand the chipsets USB/SATA/PCIE capabilities. No reason not to go to 28nm to save die space, cost and heat at that point?

PCIe, USB, and SATA controller stuff isn't really super heat-producing, even on a bigger process. Mobo makers put shiny heatsinks on them but that's mostly for show -- on laptops they just heatsink into whatever metal panel a thermal pad can glom onto.

The biggest impact is battery life on laptops, where shaving a quarter watt is actually productive, which is why Intel tries to keep theirs on a newish process.



Integrating a decent Ethernet controller onto the chipset would be cool though -- ditch the ubiquitous Realtek poo poo, or the occasional Intel chip on high-end boards.

Klyith
Aug 3, 2007

GBS Pledge Week

ConanTheLibrarian posted:

Roughly how long is it after engineering samples become available before the retail release?

I'm pretty sure there are multiple rounds of engineering samples. Sometimes you get engineering samples that leak to the press or benchmark databases a few weeks or a month before release, but those late ones are more like QC parts that run at full speed and don't crash.

Klyith
Aug 3, 2007

GBS Pledge Week

Desuwa posted:

They only mention Windows in that article so this might just be a lower level bandaid on a poor NUMA balancing implementation in Windows.

It's absolutely a bandaid, but I wouldn't call it a low-level one at all. It's like manually setting CPU affinity in Task Manager, but automated by an app. That works well enough for people using Threadrippers for gaming or video editing or other single-user stuff, but would totally not fly in more complicated environments.
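
For reference, the manual version of that bandaid is about three lines with the third-party psutil package (the process name and core list here are assumptions -- on a real Threadripper you'd pick the cores that sit on one NUMA node):

code:

# pin a running program to one NUMA node's cores, same effect as the
# affinity dialog in Task Manager (requires: pip install psutil)
import psutil

TARGET_EXE = "game.exe"           # placeholder process name
NODE0_CORES = list(range(0, 8))   # assumption: cores 0-7 live on NUMA node 0

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET_EXE:
        print("before:", proc.cpu_affinity())
        proc.cpu_affinity(NODE0_CORES)
        print("after: ", proc.cpu_affinity())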

Klyith
Aug 3, 2007

GBS Pledge Week

ItBreathes posted:

IIRC b-die was only a must-have for first gen Ryzen. My Team dark 3000 runs at 2933 just fine (which is all it ran on Intel too).

Possibly just first-gen BIOSes, even. I have some Hynix-chip G.Skill 3000 that runs at 3000 + CAS 15 + the other settings it claims on the box; when I first put the system together it only did 2933. Results may be different with the super-premium stuff, but most tests I've seen say chasing a higher frequency at the expense of timings doesn't really get you much. As long as you can do 2900-3000 or higher to avoid the Infinity Fabric slowdown, it's pretty much the same.
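
The arithmetic behind that (example kits picked arbitrarily): first-word latency in nanoseconds is CAS cycles divided by the actual clock, and it barely moves as long as the timings scale with the frequency.

code:

# first-word latency in ns = CL / (transfer_rate / 2) * 1000
for mts, cl in [(2933, 16), (3000, 15), (3200, 16), (3200, 14), (3466, 18)]:
    ns = cl / (mts / 2) * 1000
    print(f"DDR4-{mts} CL{cl}: {ns:.2f} ns")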


Whale Cancer posted:

$50 toward getting a 1080 or 1080ti.

If I were you I'd buy the cheap RAM and a 1080, because the current pile-up of the 1070, 1070 Ti, and 1080 all being within $75 of each other is insane. You can play around with speed & timings and see what you can get -- even if it doesn't do 3200-16 it might do 3000-15.

Klyith
Aug 3, 2007

GBS Pledge Week

TorakFade posted:

I really can't tell why AMD "balanced" is basically the same as full performance mode, and AMD doesn't recommend using standard "balanced" mode. I see something about "core parking" but apparently that's already included in the standard Balanced plan with the latest Windows10 updates... I don't want my CPU to run 100% all the time so I'm guessing Balanced would be the right choice, hopefully not leaving performance on the table when it's needed.

Yeah, the AMD Balanced plan was a solution to a problem that MS independently fixed; Windows no longer parks cores on Ryzen desktops on AC power.

Also "problem" is maybe more severe than it deserved: I never noticed any difference back when it was an issue. Evidently the main effect was that parking could cause micro-stuttering in games, which is one of those things that you are either sensitive to or you aren't. I wasn't aware of micro-stuttering with Ryzen and I also didn't notice it with my 7870 gpu during the time when AMD drivers were bad at micro-stutters. I can only see it in demo videos that deliberately exaggerate the effect.

Klyith
Aug 3, 2007

GBS Pledge Week
A cursory glance at the x265 website says two things:
- its threading accounts for NUMA nodes, likely because H.265 is quite memory intensive
- it doesn't multithread as cleanly as older codecs, because macroblocks apparently have dependencies on previous ones within a single frame

Both of those together I could see giving Intel the edge. Possibly x265 doesn't have the NUMA handling for the new big Threadrippers yet, in which case it could improve a fair bit later. But maybe between AVX, Intel's memory controller having a bandwidth advantage over Ryzen, and HEVC not scaling as well to 32 cores, the Threadripper just won't be the best CPU for ripping x265 videos.

Klyith
Aug 3, 2007

GBS Pledge Week
Infinity Fabric is described everywhere in technical articles as "a superset of HyperTransport", so I've never been clear exactly what the relationship is between it and PCIe. I think it's running the IF/HT message protocol over PCIe physical links? In that case, the gen 4 controllers on the exterior of the chip are of no benefit: IF is already running a protocol that's much faster than PCIe gen 4 on the inside of the chip. You can support a far more demanding transaction & data spec when you only have to travel a few tens of mm between chips.

Klyith
Aug 3, 2007

GBS Pledge Week

Cygni posted:

Yeah, they were (and still are) comically overvalued.

I read a few AMD news stories a few months ago on my phone, and ever since then Google has pushed a whole lot of bullish "buy AMD stock" articles at me. I don't look at any other financial / stock market stuff on my phone. I think AMD stock was clearly in some kind of trending loop that kept it hyped up.


(With the power of the cloud we can make stock bubbles faster than ever before!)


Klyith
Aug 3, 2007

GBS Pledge Week

A Bad King posted:

What's the general consensus on those laptop Raven Ridge APUs, and why are there so few (read as "none") high-end SKUs and only one mid-tier?

They're not power-efficient enough for road-warrior battery life, which keeps them out of the high-end ultraportables. And the GPU, while better than Intel's stuff, isn't good enough for high-end gaming laptops. I think they're great as a platform for companion laptops, i.e. for people with a primary desktop. But most people for whom the laptop is a secondary machine are sticking to an under-$1k purchase.


(Also laptops were always the place intel was most pushy about locking out AMD :tinfoil: :tinfoil: :tinfoil:)
