|
Don Lapre posted:If you didn't have a dolby digital speaker system in your bedroom/dungeon and an nforce 2 amd setup then you might as well just quit computers I had a 5.1 setup and an Audigy 2 on my Socket 939 system.
|
# ¿ Apr 17, 2017 21:04 |
|
With Ryzen where in theory ECC is supported by every CPU / MB, why the hell not? ECC is like a $15 premium then.
|
# ¿ May 8, 2017 19:06 |
|
sweart gliwere posted:If it's that small a difference, it's worth it I guess. The premium on ECC memory itself is usually pretty minimal, the problem is that Intel has forbidden i5/i7 CPUs from using ECC, as well as the cheap consumer H and Z motherboards.
|
# ¿ May 8, 2017 19:08 |
|
NewFatMike posted:Think we'd be able to get 1050 performance out of an APU? RX 560 is having a tough time getting there, but if AMD can supplant their lowest level graphics cards with APUs, then the Polaris refresh makes a little more sense. No way. The 1050 / 1050 Ti are really efficient GPUs and still have a 75W TDP. I don't see integrated graphics keeping up with nVidia's current-generation high-efficiency GPUs when they have at most 50W to play with for the GPU part alone, and that's being generous.
|
# ¿ May 23, 2017 00:29 |
|
FaustianQ posted:There is no way Threadripper and Epyc don't get picked up for enterprise and workstation use at these prices, it's literally AMD CPU+MB ≤ Intel CPU. Knowing AMD, they'd throw in discounts for buying completely from them, so that means more GPU sales and more sales of their rebranded SSDs and memory. Has AMD made any moves to either source from Qualcomm or acquire their own ethernet controllers? Couldn't AMD's core counts work against them in an enterprise setting? Given that SQL and OS vendors now charge per-core licensing, you could end up saving tons of money by buying the multi-thousand dollar 8 core frequency-optimized Intel Xeons with fewer, much faster cores instead of a basic 16 or 32 core AMD chip.
|
# ¿ Jun 2, 2017 22:10 |
|
Subjunctive posted:The other thing is that people infer quality from price. If something is 10% cheaper it's a good deal. If it's 30% cheaper it's a lower-tier product. If Intel wants to fight on pricing, they can. Intel's gross profit margins are enormous, and AMD could just be trying to win market share up front and not have to look reactionary by cutting prices when Intel does. That said, I know very little about pricing theory and even less about hardware pricing. My only experience is with software pricing, and that the best prices for software are free or $50k+ per year.
|
# ¿ Jun 7, 2017 14:59 |
|
Obsurveyor posted:I thought Intel's prices were based on what the market would bear rather than actual cost to produce? As they should be. The question isn't if Intel will react, but rather how much.
|
# ¿ Jun 8, 2017 16:49 |
|
PerrineClostermann posted:If Intel really was pricing based on what the market will bear, and not as low as they can, and there's a large difference between those values, wouldn't Intel have been able to simply respond with lower pricing? They wouldn't necessarily have to design new chips to compete. It's that Intel believes (and is likely correct) that people will pay a premium for Intel stuff even when performance is similar. Intel has tons of room to come down on prices if they chose to.
|
# ¿ Jun 8, 2017 17:31 |
|
Combat Pretzel posted:Eh, it talks about the amount of hops depending on horizontal and vertical distance. Doesn't the same apply with the ringbus? Wouldn't the mesh result in shorter paths? Ideally. The big chips with their 2 rings already had some pretty nasty latency going from one ring to the other. I'm curious if OSes are NUMA-aware of where cores are on a given chip.
|
# ¿ Jun 15, 2017 19:36 |
|
underage at the vape shop posted:Are ryzen's reliable? They're fine. If you're looking for a radiation-hardened CPU to send into space I'd probably look at a 486 though.
|
# ¿ Jun 16, 2017 14:58 |
|
PC LOAD LETTER posted:I can't help but wonder if TR's rumored prices are correct if AMD is willing to sell 16C/32T Epycs for only $600-800. Even at just 2.2GHz that is a whole lot of cores for that price. Big Skylake looks like a pretty drat great product and Intel's normal Xeon refreshes always end up giving you a couple more cores for the same dollar, so they're going to have to undercut dramatically in order to compete. Given that 12 core Broadwell (E5-2650 v4) has an MSRP under $1200 and Skylake is likely to deliver at least 2 more cores + faster single threaded speed at that price, you see why AMD needs to price aggressively.
|
# ¿ Jun 16, 2017 17:00 |
|
Does anybody know when the Epyc NDA lifts today?
|
# ¿ Jun 20, 2017 16:28 |
|
Risky Bisquick posted:Dell/HP/SuperMicro confirmed OEMs so far, no cloud customers yet. If they announce even one Azure on stage right now. Any vendor actually taking orders or are these not showing up until December?
|
# ¿ Jun 20, 2017 21:24 |
|
Risky Bisquick posted:Regardless of that, the margins on Epyc are going to be insane for AMD. If they can ramp up some sales volume this is nuts. If they do roughly the same thing to Navi It really looks like AMD's platform is focused heavily on I/O. 128 PCIe lanes on a 1P system are insane. If datacenter NVMe drives actually become a high volume product, AMD has a spot.
|
# ¿ Jun 21, 2017 04:27 |
|
WAR DOGS OF SOCHI posted:ok, got it. pro is a corporate desktop line -- enhanced support, extended warranty etc, but little other difference It seems to me like there's a bit more difference between the vPro-enabled and non-vPro Intel CPUs.
|
# ¿ Jun 29, 2017 18:28 |
|
eames posted:to be honest i think Intel is becoming really worried because they have no response to Zen+ except for their current CPUs +10% clockspeed +25% power consumption Well, or moderate discounting. That would also solve the problem completely.
|
# ¿ Jul 12, 2017 16:41 |
|
We really aren't that far removed from GPUs with a frog wearing medieval armor on the box: https://www.theverge.com/2015/9/9/9274915/graphics-card-boxes-weird-art
|
# ¿ Jul 24, 2017 16:47 |
|
Combat Pretzel posted:Errrrrrr, is it me or is that not a lot of memory bandwidth? I gotta run whatever they're using, but I think the last time I ran AIDA64, I had like 50GB/s on quad channel DDR4-2400? Hope I'm mistaken. It looks like Intel has better memory controllers. This may or may not matter based on your workload. I'm seeing about 40GB/s per socket for Intel quad channel on DDR4-2133.
|
# ¿ Jul 24, 2017 22:11 |
|
Looking at that chart again, total memory bandwidth should be much greater than the individual memory bandwidth available to any given MCM, right? Basically each die has its own dual-channel memory controller, for 8 channels total per socket, but a single thread or core will never get as much total bandwidth as all of them working in parallel?
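The per-die vs. per-socket split above is easy to sanity-check with some back-of-the-envelope math. A minimal sketch, assuming DDR4-2666 (Epyc's officially supported speed; real sustained bandwidth comes in below these theoretical peaks) and the standard 64-bit (8-byte) channel width:

```python
# Back-of-the-envelope DDR4 bandwidth math. Assumes DDR4-2666 and a
# 64-bit (8-byte) bus per channel; actual sustained throughput is lower
# than these theoretical peaks.
def channel_bw_gbs(mt_per_s, bus_bytes=8):
    """Theoretical peak for one DDR channel: transfers/s * bus width."""
    return mt_per_s * 1e6 * bus_bytes / 1e9

per_channel = channel_bw_gbs(2666)  # one DDR4-2666 channel
per_die = 2 * per_channel           # each Zeppelin die: 2 channels
per_socket = 8 * per_channel        # 4 dies x 2 channels per socket

print(f"per channel: {per_channel:.1f} GB/s")  # ~21.3 GB/s
print(f"per die:     {per_die:.1f} GB/s")      # ~42.7 GB/s
print(f"per socket:  {per_socket:.1f} GB/s")   # ~170.6 GB/s
```

So yes: a single thread sitting on one die only ever sees that die's two local channels at full speed, while the ~170 GB/s headline figure requires all four dies pulling in parallel.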
|
# ¿ Jul 24, 2017 22:33 |
|
FaustianQ posted:Intel is definitely putting the 6C/12T CPU in the 299-349$ category. Hyperthreading may actually move to i5s, which would be 4C/8T and the i3s will become 4C/4T. Pentiums and Celerons might just be Skylake chips. From what I've seen, it looks like there's 6C/6T SKUs and 4/8 is reserved for i7s still, you'll just have 6 core i5s instead.
|
# ¿ Jul 27, 2017 15:03 |
|
kirtar posted:The one problem I have with Ryzen 3 in concept is that integrated graphics should probably be present in the entry level price range. That would be tough. The dual-core Intel chips are more than 50% GPU by die area.
|
# ¿ Jul 27, 2017 16:10 |
|
SwissArmyDruid posted:Looking over the R3 reviews... that R3 1300X looks like it is _the_ chip to get in that segment. Street price on the R5 1400 is $135 right now, is hyper-threading worth $5?
|
# ¿ Jul 27, 2017 18:44 |
|
Cygni posted:wait you download every youtube video you watch and store them, thats a level of eHoarding i havent seen before AT&T's peering with youtube sucks in some areas, at peak youtube times it can be impossible to watch things at higher than 480p without tons of buffering on a wired 1000mbit connection.
|
# ¿ Jul 28, 2017 21:43 |
|
Bloody Antlers posted:I bought in near the end of that generation, I upgraded from a Phenom II to an AMD FX 8350 which has 4 x 2 core modules @ 4ghz, presenting 8 cores to the OS, though each module shares an FPU. Honestly, the amount of making GBS threads on this CPU from the community always felt very unwarranted and near twilight zoneish to me. I use very high end Intel workstations at the office (rare company that is upgrade happy), but my personal machine that I come home to has never really felt under powered aside from having much less / slower ram. I'm a professional software developer that multitasks like crazy - VMs, large software builds, Chrome occasionally running like 80 pornhub tabs, you name it. I could easily OC this chip, but I instead undervolt it from stock (1.45v?) to .875v for power savings / quieter fan speed It's not an issue of "underpowered", just that you could buy an i5 for the same price for most of the 8350's life that would outperform it in every way while using less power. The revised Vishera chips were OK, they just weren't competitive with Intel. You'd also have been fine cruising on an overclocked late core 2 quad.
|
# ¿ Aug 2, 2017 15:30 |
|
Mister Fister posted:Hey dumb question, i'm seriously thinking about switching from my dinosaur of an i5-2500k setup and going with a Ryzen chip. However, i also have an Nvidia 1060 card that i'd like to keep in this new setup. I was reading some articles that Nvidia Cards don't perform as well with Ryzen chips as they do w/ Intel Chips, but these articles were from several months back. Is this still true? Your use case is one of very few that probably still wants a highly-clocked Intel CPU. How high is your 2500K clocked? Would you be better off buying a better cooler and clocking it higher for now?
|
# ¿ Aug 3, 2017 21:10 |
|
Mister Fister posted:I have a Cooler Master 212 Evo on it... i overclocked it from 3.3 ghz to 4.2 ghz without messing with voltages. I play a lot of overwatch and one thing i've learned is that my lovely DDR3 memory is limiting the amount of FPS i can squeeze out in that game... applies to other games as well. I got some pretty big gains in CS:GO FPS by going from DDR3-1333 to DDR3-2133. It might be worth a shot at shoving more voltage into the RAM and seeing if you can make it go faster. Tighten up those timings, too. There's no doubt that an R5-1600 or similar is a better all-around CPU than a 2500K, but "300fps at minimum settings" really wants a 7700K or as close as you can get by overclocking older Intel chips.
|
# ¿ Aug 3, 2017 21:31 |
|
ufarn posted:Any good videos or reads on how specifically game devs implement multicore support? Do they put netcode and physics on one core etc sort of like PhysX, or is it more about basic parallellization? If you've got an hour to spare, the current state of the art is work-stealing algorithms and doing everything possible to minimize synchronization. https://www.youtube.com/watch?v=0nTDFLMLX9k
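To make "work stealing" concrete: each worker owns its own queue and only touches other workers' queues when it runs dry, which keeps synchronization rare. A toy single-threaded sketch of the shape (real schedulers like Cilk, TBB, or game engine job systems use lock-free deques and run workers on real threads; the function and task representation here are made up for illustration):

```python
import random
from collections import deque

# Toy work-stealing sketch: each worker owns a deque, pops its own tasks
# from the back (LIFO, cache-friendly), and steals from the FRONT of a
# nonempty victim's deque only when its own queue is dry. Stealing from
# the opposite end is what keeps owner and thief from contending.
def run(workers, tasks):
    queues = [deque() for _ in range(workers)]
    for i, task in enumerate(tasks):       # naive initial distribution
        queues[i % workers].append(task)
    completed = []
    while any(queues):
        for q in queues:
            if q:
                completed.append(q.pop())  # own work: pop from back
            else:
                victims = [v for v in queues if v]
                if victims:                # steal from a random victim
                    completed.append(random.choice(victims).popleft())
    return completed

print(run(4, list(range(12))))  # all 12 tasks complete, order varies
```

Every task runs exactly once no matter how the steals interleave; the part that takes real engineering is making the owner's pop and the thief's steal safe without a lock on every operation.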
|
# ¿ Aug 9, 2017 16:22 |
|
Combat Pretzel posted:I'm at work, what's this UMA NUMA poo poo about? Big chips with multiple memory controllers like Threadripper have non-uniform access to memory. What that means is that cores 1-8 have memory that's "local", and so do cores 9-16. Memory latency can vary dramatically based on whether you are reaching out to access non-local memory. Software will either be aware of this and deal with it, or suffer inconsistent performance.
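The "software deals with it" part usually means pinning threads to the cores whose memory controller is local. A minimal Linux-only sketch, assuming (hypothetically) that cores 0-7 map to NUMA node 0 on a 16-core Threadripper; check `lscpu` or `numactl -H` for the real topology on any given machine:

```python
import os

# Assumed topology for illustration: cores 0-7 live on NUMA node 0.
# On a real system, read the actual mapping from `numactl -H` first.
LOCAL_NODE_CORES = set(range(8))

def pin_to_local_node(pid=0):
    """Restrict pid (0 = calling process) to one node's cores, so the
    kernel's first-touch allocation keeps its memory local."""
    os.sched_setaffinity(pid, LOCAL_NODE_CORES)

if hasattr(os, "sched_setaffinity"):  # Linux only
    pin_to_local_node()
    print(os.sched_getaffinity(0))    # cores the scheduler may now use
```

Once pinned, pages the process touches get allocated from node 0's memory, so its accesses stay on the fast local path instead of bouncing over the die-to-die fabric.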
|
# ¿ Aug 10, 2017 15:32 |
|
Eyochigan posted:I was hoping the TR gaming scores would be better, but the 1800x beats it. Though if you're a streamer, it makes a lot of sense to buy threadripper. This is going to kill the little 2-in-1 cases that let you put a mITX encoding PC in the same case as your ATX gaming PC.
|
# ¿ Aug 10, 2017 15:50 |
|
CommieGIR posted:That's the guy that RAID-5'ed 3 RAID controllers running RAID-5, right? I thought that was Linus, of Linus Tech Tips.
|
# ¿ Aug 10, 2017 17:13 |
|
TR reviews have the least consistent results across different review outlets that I have ever seen in about 15 years of following CPU reviews. Once the smoke clears I'm really curious what's going on.
|
# ¿ Aug 10, 2017 21:16 |
|
Munkeymon posted:Dell has been doing Thunderbolt docks on all of their newer business class laptops for a couple years, as far as I can tell. Heck, I'm using one right now. Granted, they probably aren't about to put AMD in their business line, but, yeah, Thunderbolt is a thing outside of Apple trashcans and donglebooks. The $300 Dell Thunderbolt docks are my company's corporate-endorsed solution for the Macbook Pros having no ports. Because Apple won't build monitors or docks anymore, we've got a bunch of Macs hooked up to Dell everything else.
|
# ¿ Aug 11, 2017 16:36 |
|
Man, if they get anywhere near the Bulldozer -> Piledriver improvements that happened as their last uArch matured, they'll be in amazing shape.
|
# ¿ Sep 5, 2017 12:46 |
|
SlayVus posted:I thought the majority of games running 4k on consoles were all running some kind of rendering tricks like checkerboard. To get the visual quality they do at 1080p in 4k, I doubt any consoles could do legitimate native 4k. The 1070 barely does 30 fps in Ghost Recon Wildlands and less than 30 in Deus Ex, surprisingly gets ~50 fps in BF1. AMD CPU thread might not be 100% the place to talk about this, but the XB1X is a ton more powerful than the PS4P and is somewhere between a 1060 and a 1070 in power. Also textures tend to be lower-res on consoles than the "ultra" texture options on PC, which helps framerates. Consoles are still targeting 30fps in general, and tend to either do dynamic resolution scaling as mentioned, or accept some framerate drops from 30. Native 4K on the XB1X seems feasible with that expectation.
|
# ¿ Oct 31, 2017 16:29 |
|
Theris posted:The "XB1X is like a PC with a 1070" comparison has to be including the console optimization in the comparison. Because an XB1X has 40 CUs @ 1.2GHz, where on the PC to match a 1070 Vega needs 56 CUs at 1.4GHz. (Depending on the game, ofc) I thought that Vega was an older, lower performance-per-shader-per-clock design than the Polaris-based GPUs that are in the consoles? I know that in the real world, Vega is worse per clock than Fiji, which was worse per clock than Polaris.
|
# ¿ Oct 31, 2017 17:25 |
|
shrike82 posted:RE the AC:Origins benchmarks, the 4C/8T 7700K being 50% faster than the 4C/4T 7600K is pretty crazy. Word is that AC: Origins is using multiple types of CPU-heavy DRM simultaneously, so it may not be representative of modern game engines in general: https://torrentfreak.com/assassins-creed-origin-drm-hammers-gamers-cpus-171030/ You need an 8 thread CPU so that 4 threads can run the DRM while the other 4 run the game. I know this is just rumormongering at this point, but performance has improved in other games when Denuvo was removed so it's not that outlandish.
|
# ¿ Nov 1, 2017 15:10 |
|
feedmegin posted:I find that very hard to believe, but if so it's ridiculous. Using half the computer's power just for DRM? The rumor is slowly working its way to slightly more reputable blogs: https://www.techpowerup.com/238377/cpus-bare-brunt-of-ubisoft-deploying-vmprotect-above-denuvo-for-ac-o
|
# ¿ Nov 1, 2017 15:49 |
|
repiv posted:If all else fails you could route them to a shitload of U.2 ports (4 lanes each). This is probably the biggest use case. Light up a storage server with 32 full-speed U.2 ports for fast enterprise SSDs with a single CPU.
|
# ¿ Nov 1, 2017 21:13 |
|
Darth Llama posted:Any thoughts on the 1300x? I have pretty modest needs and plan to pair it with a 1050ti. I've been running a 955+gtx550 for about 7/8 years now and it's been just fine for the games I play at 1680x1050. Would it be worth a lower clockspeed in exchange for hyperthreading? I just built a computer for my mother for a workstation with the 1600 but I don't think I really need that many cores/threads for what I guess is now low resolution gaming more or less.
|
# ¿ Nov 3, 2017 15:57 |
|
NewFatMike posted:I thought overclocking the 1200 was the pro move on that one. Also, you have a lot of upwards mobility since new CPUs will be coming to AM4 for a few more years, where Intel ditches the socket every upgrade. The counterpoint to this is that the rate of CPU change has made it so that you only upgrade every 5 or more years, in which case you're ready for a new motherboard for several reasons by the time you want to upgrade your CPU.
|
# ¿ Nov 3, 2017 18:06 |