|
Combat Pretzel posted:^^^ So premature ejaculation about nothing then? Come to think of it, autoconf does generate those configure.sh scripts, which invoke gcc tons of times compiling and running test files? Kind of makes sense some of that output breaks. Confirmed, I'm running the stress test on my Xeon E5-2650L VPS and conftest is throwing segfaults just like the Ryzen systems in the article. welp i expected better from phoronix repiv fucked around with this message at 19:43 on Aug 5, 2017 |
# ¿ Aug 5, 2017 19:33 |
|
|
Yeah, the Phoronix article was junk and they've retracted it for now. Phoronix posted:As a result of feedback, currently working on some updated results. As some have pointed out, the conftest segmentation faults aren't specific to Ryzen, so updating the tests to avoid confusion. Though one area being explored now as well is the Clang segmentation faults shown in the original article, not originating from conftest as well as Clang being able to yield the system hanging hard where the system is unresponsive and SSH is not working. Plus also incorporating more Ryzen-Kill tests as outlined in the aforelinked article. As many readers have pointed out, BSD developers have also discovered a Ryzen bug. More details soon.
|
# ¿ Aug 6, 2017 00:34 |
|
AMD have managed to reproduce the Ryzen segfault bug, and confirmed that it doesn't affect Threadripper or Epyc. They don't have a fix for Ryzen yet though.
|
# ¿ Aug 7, 2017 19:25 |
|
Maxwell Adams posted:Does anyone know when the Threadripper NDA drops and a few dozen reviews will drop simultaneously? The NDA drops on launch day, the 10th.
|
# ¿ Aug 8, 2017 15:14 |
|
Wirth1000 posted:Who the hell even sells that kind of power supply to handle that? Aren't most manufacturers topping out at 1500W in their lineups? Super Flower has a 2000W power supply
|
# ¿ Aug 10, 2017 19:02 |
|
SwissArmyDruid posted:...I mean, there's nothing stopping OEMs from using a discrete Thunderbolt controller, right? That's how device makers have had to do it all this time, it wasn't until _May_ of _this year_ that Intel declared that they were going to start integrating Thunderbolt directly into their CPUs. https://newsroom.intel.com/editorials/envision-world-thunderbolt-3-everywhere/ AFAIK all of the existing discrete Thunderbolt controllers still need an Intel PCH in order to function. That's why PCI-E TB3 cards require this extra cable that hooks up to a proprietary header on the motherboard. Whether this connection serves an actual purpose or is just there to lock out AMD and older Intel platforms is anyone's guess though. repiv fucked around with this message at 13:15 on Aug 11, 2017 |
# ¿ Aug 11, 2017 12:55 |
|
How to use a torque driver correctly: https://clips.twitch.tv/HyperInventiveTubersSaltBae
|
# ¿ Aug 16, 2017 16:56 |
|
eames posted:Some TR benchmarks for comparison would have been nice, I suspect the engine tops out at 8 threads because that's what consoles have. Looks like you're right, it doesn't use the HT threads on an octo-core Intel chip either. It's just hard-coded to use 8 worker threads because consoles. http://gamegpu.com/action-/-fps-/-tps/destiny-beta-test-gpu-cpu
|
# ¿ Aug 31, 2017 13:36 |
|
Threadripper 1900X is now available https://www.newegg.com/Product/Product.aspx?Item=N82E16819113457
|
# ¿ Aug 31, 2017 14:11 |
|
new leak from wccftech
|
# ¿ Sep 5, 2017 18:16 |
|
Paul MaudDib posted:Hell, they don't even do native AVX2 instructions (it's executed as a pair of AVX1 ops, effectively halving the throughput). Zen doesn't do full-rate AVX1 either; it breaks all 256-bit SIMD ops into a pair of 128-bit ops (effectively decomposing AVX into SSE).
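If it helps, here's a toy Python model of what "decomposing AVX into SSE" means, nothing to do with the actual hardware, just the bookkeeping: treat a 256-bit register as eight 32-bit lanes, and have Zen 1 execute one packed add as two four-lane micro-ops where a full-width unit uses one.

```python
# Toy model: a "256-bit register" is eight 32-bit lanes. Zen 1 splits one
# 256-bit packed add into two 128-bit (four-lane) micro-ops; a full-width
# AVX unit does all eight lanes in one. Same result, half the throughput.
MASK32 = 0xFFFFFFFF

def padd_128(a4, b4):
    """One 128-bit micro-op: four packed 32-bit adds (lane-wise, wrapping)."""
    return [(x + y) & MASK32 for x, y in zip(a4, b4)]

def zen1_padd_256(a8, b8):
    """Zen 1 style: low and high 128-bit halves -> 2 micro-ops."""
    return padd_128(a8[:4], b8[:4]) + padd_128(a8[4:], b8[4:]), 2

def fullwidth_padd_256(a8, b8):
    """Full-width AVX unit: one micro-op for all eight lanes."""
    return [(x + y) & MASK32 for x, y in zip(a8, b8)], 1
```

Both paths produce identical lanes; the difference is purely how many micro-ops the op costs, which is why 256-bit-heavy code runs at roughly half rate on Zen 1.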
|
# ¿ Sep 5, 2017 21:39 |
|
Paul MaudDib posted:Yes, streamers are typically better off using GPU encoding. CPU encoding has better quality but it's difficult to achieve this in real-time, and if you are going to attempt to do so then the gold standard is to do the encoding on a dedicated machine since anything that exceeds the quality of GPU encoding is also going to poo poo up your framerate something fierce, even on an 8-core processor. To be fair to AMD, GPU-encoded Twitch streams looked pretty bad around the time of the Ryzen launch. Unfortunately for them Twitch almost doubled the bitrate limit a month later so it's far more forgiving of encoder quality now.
|
# ¿ Sep 5, 2017 22:30 |
|
Volguus posted:According to https://www.pugetsystems.com/labs/articles/Thermal-Paste-Application-Techniques-170/ the X shape is the best CPU paste spreading method. But always with the CPU in the socket.
|
# ¿ Sep 19, 2017 18:46 |
|
Looks like Intel's promise to open up Thunderbolt has come to fruition, Gigabyte just announced an X399 board with support for TB3 cards: https://www.anandtech.com/show/11847/gigabyte-announces-x399-designare-ex
|
# ¿ Sep 20, 2017 00:14 |
|
Paul MaudDib posted:Well, how many units did Intel ship? At least in the UK, practically none. OCUK said they got 30 OEM units and 0 retail units, and since I ordered from Scan within the first ten minutes and still got my order delayed until November 1st I assume they didn't get any retail stock either.
|
# ¿ Oct 5, 2017 23:38 |
|
https://www.youtube.com/watch?v=UMzXMvOaTZk tldr: AMD's NVMe RAID driver installs outdated, exploitable copies of Apache and PHP just to host the config UI, and configures them to run as SYSTEM (i.e. root) and listen on all external network interfaces
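For anyone unclear on why "listen on all external network interfaces" is the damning part: a service bound to loopback is only reachable from the machine itself, while one bound to 0.0.0.0 is reachable from every network you're attached to. Quick illustrative sketch (nothing here is AMD's actual code):

```python
# Sketch: loopback-only binding vs. all-interfaces binding. A local-only
# config UI should bind 127.0.0.1; binding 0.0.0.0 (as the video describes)
# exposes it to every network the machine is on.
import socket

def bind_addr(host):
    """Bind a TCP socket to `host` on an OS-chosen port; return the bound address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))          # port 0 = let the OS pick a free port
    addr = s.getsockname()[0]  # the interface we actually bound to
    s.close()
    return addr
```

`bind_addr("127.0.0.1")` is the safe shape for a local admin UI; `bind_addr("0.0.0.0")` is the shape the video criticizes, made worse by it running as SYSTEM.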
|
# ¿ Oct 19, 2017 00:22 |
|
AMD has a bunch of on-die USB and SATA controllers so you'd expect more uncore area than Intel, who put all that stuff on the chipset
|
# ¿ Oct 26, 2017 19:30 |
|
FaustianQ posted:This should save a lot in the mobile space, right? Cheaper boards, easier thermal solutions. Yeah I think we'll start to see low-cost designs using the X300 and B300 chipsets. They're supposed to be "null chipsets" that just do the bare minimum needed to bootstrap the CPU, but don't provide any additional USB/SATA/PCIe I/O beyond what the CPU has built in, so their cost and thermals should be negligible.
|
# ¿ Oct 26, 2017 20:17 |
|
VostokProgram posted:I'm gonna need you to source a benchmark on this just because I have a 970 and don't want to believe you The closest thing to the PS4 Pro GPU on PC is the RX470 - they're the same architecture, the RX470 is a little faster but the PS4 Pro has bolted-on 2xFP16 support so they're probably pretty close in practice.
|
# ¿ Oct 29, 2017 21:44 |
|
wargames posted:consoles are using pre-polaris gpu if I remember correctly. Original PS4/XB1 used some early version of GCN but PS4Pro/XB1X are both using customized versions of Polaris. AFAIK the XB1Xes variant is fairly vanilla (other than having more CUs than any desktop Polaris card) but the Pro has 2xFP16 support grafted on, which desktop GCN didn't get until Vega. repiv fucked around with this message at 20:09 on Oct 31, 2017 |
# ¿ Oct 31, 2017 19:43 |
|
shrike82 posted:We should be able to see "clean" benchmarks once someone cracks it. Probably not, these VM-based DRM systems mangle the executable so much that it's practically impossible to reverse them back into the original binary. They usually get cracked by leaving the DRM in place but spoofing the environment around it so the licence checks succeed.
|
# ¿ Nov 1, 2017 16:00 |
|
Kazinsal posted:AC:O's "anti-tamper" solution is to literally run the DRM in a non-hardware assisted, software-based x86 virtualization wrapper. Even if that's using some kind of magical dynamic translation engine that's still going to eat cores like a hot drat. That's how most (somewhat) effective DRM systems work, although the VM is often a custom design rather than based on x86. The trick is to only VM-ify parts of the code that aren't in the hot path so performance doesn't go to poo poo. Doom 2016, MGS5, Mad Max, etc. all used VM protection and ran fine because the developers weren't dumbasses and only wrapped performance-insensitive code.
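To make the VM-protection idea concrete, here's a minimal sketch (the opcodes and names are made up, this is not Denuvo's or any real protector's design): a sensitive check gets compiled to bytecode for a tiny custom stack machine, so an attacker has to reverse the interpreter before they can even read the logic, at the cost of interpretation being far slower than native code.

```python
# Minimal stack-machine VM of the kind DRM protectors embed. Opcodes are
# invented for illustration; real protectors use obfuscated custom ISAs.
def run_vm(bytecode):
    """Interpret tiny stack-machine bytecode; RET returns the top of stack."""
    stack = []
    pc = 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == "PUSH":
            pc += 1
            stack.append(bytecode[pc])   # next slot is an immediate operand
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "EQ":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        elif op == "RET":
            return stack.pop()
        pc += 1
    return None

# A hypothetical licence check `key + 1 == 43`, expressed as VM bytecode
# instead of plain native code.
def licence_ok(key):
    return run_vm(["PUSH", key, "PUSH", 1, "ADD", "PUSH", 43, "EQ", "RET"])
```

Every VM dispatch is dozens of real instructions per emulated one, which is exactly why wrapping hot-path code (like AC:O apparently did) tanks performance while wrapping a once-per-launch check doesn't.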
|
# ¿ Nov 1, 2017 18:16 |
|
Munkeymon posted:According to people on Twitter stepping through the assembly, it goes to the VM every time the character moves, so NBD I'm sure. Ubisoft never fails to disappoint
|
# ¿ Nov 1, 2017 18:55 |
|
If all else fails you could route them to a shitload of U.2 ports (4 lanes each).
|
# ¿ Nov 1, 2017 19:12 |
|
FaustianQ posted:Hmm, a quick check again and it seems Eurocom's post indicates Z390 will support 8C chips but doesn't really say Z370 won't. I'm not sure why Z390 needs to exist except for that reason though? Z390 has some pretty significant I/O upgrades, they're fully integrating USB 3.1 Gen 2, 802.11ac WiFi and SDXC 3.0, upgrading the audio DSP, adding new power states and adding support for the next gen external TB3 controller. It has plenty of reason to exist even if 8C chips aren't exclusive to it. repiv fucked around with this message at 18:42 on Nov 3, 2017 |
# ¿ Nov 3, 2017 18:34 |
|
GRINDCORE MEGGIDO posted:It's nice that AMD spent all that time and effort helping Intel APU's become really powerful graphically. Doesn't "APU" specifically refer to chips with HSA/unified memory? We need a new term for Intel's discrete-graphics-on-package thing
|
# ¿ Nov 10, 2017 19:36 |
|
PerrineClostermann posted:Didn't they show that the segfaults happen to Intel too? Nope, that was Phoronix screwing up their test methodology and confusing people. They found a way to "reproduce" the Ryzen segfault bug that had a 100% false positive rate, including on Intel machines, because they were looking at segfaults intentionally triggered by the compiler's self-test scripts. The actual segfault-during-compilation problem only happened on Ryzen.
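For context on why those segfaults are intentional: autoconf-style configure scripts routinely compile and run tiny probe programs that are *expected* to crash, and branch on the exit status. Here's a sketch of that pattern using a Python child process instead of a compiled conftest binary (POSIX-only, purely illustrative):

```python
# Sketch of the autoconf pattern Phoronix misread: run a probe that
# deliberately segfaults, and treat the crash as a normal test outcome.
import os
import signal
import subprocess
import sys

def run_crash_probe():
    """Run a child that segfaults itself, conftest-style, and report
    whether it died to SIGSEGV as expected."""
    child = subprocess.run(
        [sys.executable, "-c",
         "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"],
    )
    # On POSIX, a negative returncode means the child died to that signal.
    return child.returncode == -signal.SIGSEGV
```

Here the segfault is the *pass* condition. Counting crashes like this during a `./configure` run as "Ryzen bugs" is exactly the false positive Phoronix hit, and it fires on any CPU.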
|
# ¿ Nov 23, 2017 22:31 |
|
i'm the guy still benchmarking portal 1 at 1080p when an rx580 is pushing close to a thousand frames per second
|
# ¿ Nov 25, 2017 00:50 |
|
that "leak" is a month old /r/ayymd shitpost lol https://www.reddit.com/r/AyyMD/comments/79aiwh/ryzen_2
|
# ¿ Dec 10, 2017 19:42 |
|
It sounds like I/O heavy workloads will take the worst of it, since they're transitioning between usermode and kernelmode a lot.
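A rough way to see why syscall-heavy code pays the most: every user-to-kernel transition is what the Meltdown workaround makes more expensive, and chunk size controls how many transitions you make for the same work. Illustrative sketch, not a rigorous benchmark:

```python
# Sketch: reading the same file one byte per os.read() makes tens of
# thousands of kernel transitions; one big read makes a handful. The
# per-transition overhead is what the Meltdown page-table workaround inflates.
import os
import tempfile
import time

def time_reads(path, chunk_size):
    """Time reading the whole file `chunk_size` bytes per syscall."""
    fd = os.open(path, os.O_RDONLY)
    start = time.perf_counter()
    data = bytearray()
    while True:
        chunk = os.read(fd, chunk_size)   # one kernel transition per call
        if not chunk:
            break
        data += chunk
    os.close(fd)
    return time.perf_counter() - start, bytes(data)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64 * 1024))
    path = f.name

slow, d1 = time_reads(path, 1)            # ~65k syscalls
fast, d2 = time_reads(path, 64 * 1024)    # ~2 syscalls
os.unlink(path)
```

Same bytes either way, wildly different syscall counts, which is why databases and small-packet network servers were the workloads people expected to suffer.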
|
# ¿ Jan 3, 2018 04:08 |
|
I don't envy the people who write GPU drivers https://twitter.com/FioraAeterna/status/948464769039765504 https://twitter.com/FioraAeterna/status/948473516029968384
|
# ¿ Jan 3, 2018 15:33 |
|
Combat Pretzel posted:(I guess not often enough with command batching). Yeah, graphics drivers mostly run in usermode these days (at least on Windows) then batch the work into fat command lists before throwing them over the wall into the kernel. There's probably not that many kernel calls per-frame in the grand scheme of things.
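The batching pattern is easy to sketch (class and method names here are invented, this is not any real driver's API): the usermode half queues commands and only crosses into the kernel once per flush.

```python
# Sketch of usermode command batching: N draw calls, far fewer kernel
# transitions. Everything here is illustrative, not a real driver model.
class FakeKernel:
    def __init__(self):
        self.transitions = 0       # user->kernel crossings observed

    def submit(self, command_list):
        self.transitions += 1      # one crossing submits the whole batch
        return len(command_list)

class UserModeDriver:
    def __init__(self, kernel, batch_size=256):
        self.kernel = kernel
        self.batch_size = batch_size
        self.pending = []

    def draw(self, cmd):
        self.pending.append(cmd)   # stays entirely in usermode
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.kernel.submit(self.pending)
            self.pending = []

kernel = FakeKernel()
driver = UserModeDriver(kernel)
for i in range(1000):              # a frame with 1000 draw calls
    driver.draw(("draw", i))
driver.flush()                     # end-of-frame flush
```

A thousand draws end up costing only four kernel submissions here, which is why the Meltdown syscall tax mostly misses well-batched graphics drivers.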
|
# ¿ Jan 3, 2018 17:29 |
|
Combat Pretzel posted:--edit: What about this Denuvo DRM poo poo (or whatever it is called) which has games call god knows how many times a second into their kernel driver? Denuvo doesn't use a driver AFAIK, or even an external process. I'm pretty sure it's self contained in the protected executable. e: It's probably worth mentioning that organizations doing seriously high speed I/O are already bypassing the kernel for performance and shouldn't be affected by this workaround at all. repiv fucked around with this message at 18:11 on Jan 3, 2018 |
# ¿ Jan 3, 2018 17:41 |
|
oh no https://twitter.com/FioraAeterna/status/948473228686647296
|
# ¿ Jan 3, 2018 21:37 |
|
Truga posted:Over 100 million? 1060 or 970? It's probably something mobile, I think Fiora works for one of the SoC manufacturers. No idea which one though.
|
# ¿ Jan 3, 2018 21:46 |
|
https://security.googleblog.com/2018/01/todays-cpu-vulnerability-what-you-need.html Google Security posted:These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running them. AMD? Is there a second disclosure coming that affects them too?
|
# ¿ Jan 3, 2018 23:37 |
|
GRINDCORE MEGGIDO posted:Anyone benched Ryzen yet with KB4056892 ? There's a bunch of reports on Reddit saying Windows' Meltdown mitigation is not enabled on AMD systems, so there should be no difference.
|
# ¿ Jan 4, 2018 18:28 |
|
FaustianQ posted:Wondering if by being linked through Infinity Fabric the dGPU can offload rendering work to the iGPU, like say physics or anti-aliasing? Not a huge potential improvement, maybe 20%? That's already possible with any combination of dGPU and iGPU using DX12 unlinked multi-adapter mode, the limiting factor is that no engine developer gives a poo poo about supporting such an esoteric setup.
|
# ¿ Jan 9, 2018 00:08 |
|
FaustianQ posted:Of course, but what I am talking about is whether that can be done at a hardware level. Like what makes IF specifically better than using a PCIE connection considering it's slower. Maybe an advantage is that the dGPU can "cannibalize" the iGPU for latency insensitive workloads? Maybe the iGPU gets access to the HBM2 as well, in which case huge performance uplift even when running low power? How would the hardware know what constitutes a latency insensitive workload? It doesn't have that kind of high level context. Without engine developers guiding the iGPU offload the only alternative I see is driver fuckery with game specific profiles, which is exactly what AMD has been trying to move away from.
|
# ¿ Jan 9, 2018 01:10 |
|
|
GRINDCORE MEGGIDO posted:Interested to see the silicon lottery overclocking results. How long after launch do they usually release those? SiliconLottery dropped all their binned Ryzen SKUs once it came to light that AMD was skimming all the best dies for Threadripper, so they might not bother testing 2nd gen Ryzen at all
|
# ¿ Jan 18, 2018 21:59 |