|
Probably no better on Threadripper, possibly worse if it tries to span the interconnect between dies/memory controllers and incurs the additional latency and bandwidth penalty. Those tests don't indicate any bottlenecking from the CPU; it's pretty much exclusively GPU-limited. Game engines are going to target realistic hardware, and it must have been more than enough work to make what they have distribute load across 8 cores; I doubt they bothered to do any more if it doesn't provide real gains.
|
# ? Aug 29, 2017 18:29 |
|
|
fishmech posted:Ultimately DEC Alpha was important because DEC was important, and they positioned it as the direct successor to their popular VAX families of processors.

I have a DEC Alpha Multia, which was (at the time) their low-end workstation. I ran Linux and Windows NT on it back in the 90s. The Alpha CPU in it wasn't very fast, but it was about the lowest tier you could get that was still an Alpha; I think mine is 166MHz (haven't turned it on in 15 years or more due to dead Multia syndrome, but I'll repair it some day). I remember spending a lot of money on 32 MB of true parity RAM for it. I got mine in 1996, and it was only two years later that they were bought by Compaq. They produced an Intel Pentium-based Multia as well, I guess because it was an attractive form factor (small pizza-box kind of thing). The cool part about it was that it was decently fast for a lot of stuff and you had a lot of OS flexibility. The downside was that any x86 binaries for NT would run slowly under FX!32 emulation at first; it would "optimize" them for Alpha over time and build a library of Alpha-optimized translations it could refer to. There was even some software to share that library on a LAN in case you had a lot of Alphas running NT, I guess. They never got as fast as something compiled natively for the Alpha, though.
|
# ? Aug 29, 2017 18:48 |
|
Couple of new stories aggregated for your perusal: AMD must shell out $29.5m to settle a class-action lawsuit from shareholders who accused the company of misleading investors before Llano's launch. One wonders if they are only capitulating to the settlement *now* because they actually have the funds to pay it out. PCIe 4.0 to be finalized at the end of this year, 5.0 by 2019. Here's hoping Zen+ and future iterations can keep on top of that, or Ryzen will be short-lived indeed. 1900X? SwissArmyDruid fucked around with this message at 22:16 on Aug 29, 2017 |
# ? Aug 29, 2017 22:13 |
|
Alphas were fast, often about twice as fast as the Pentium available at the time. The problem was that DEC was a dinosaur that didn't believe in things like marketing or personal computers; the Alpha was going to sell itself to mainframe builders everywhere on performance alone. The Alpha team later went on to work for AMD designing the Athlon, and those engineers were brilliant. Dirk Meyer came from DEC to help with the Athlon, they also had silicon valley wizard Jim Keller, and Intel was just screwing off with Netburst, talking about 10GHz Pentium 4s. drat I miss the early 2000s.
|
# ? Aug 29, 2017 23:38 |
|
I remember Intel sending dev support people to my work in the late 90s to tell us to change x<<4 to x*=16 all over our code so it would run faster, because of some architectural change in P4. Ridiculous in several ways, but that's Netburst for you.
|
# ? Aug 29, 2017 23:52 |
|
VostokProgram posted:Was DEC Alpha supposed to be super badass or something back in the day? I find mention of it in a lot of places but no explanation of why it was so interesting

Each generation of Alpha was pretty much faster than anything else in its target market when released, but they didn't have that much software support, and DEC marketing was poo poo. It's really too bad they didn't get to continue with their plans for EV8, Tarantula, and beyond. The designs looked pretty ridiculous on paper, and it would've been cool to see if they might have changed the landscape a bit. Here are a couple of interesting reads: http://www.realworldtech.com/ev8-mckinley/ https://pdfs.semanticscholar.org/024f/3e0ea6a49e536f3d135e73d77323a924498d.pdf
|
# ? Aug 30, 2017 03:11 |
|
Subjunctive posted:I remember Intel sending dev support people to my work in the late 90s to tell us to change x<<4 to x*=16 all over our code so it would run faster, because of some architectural change in P4. Ridiculous in several ways, but that's Netburst for you.

Yeah, Intel apparently decided to drop the barrel shifter (which pretty much every other x86 CPU has for fast shifts and rotations by arbitrary amounts) from Netburst, so it had to use microcode loops if you were shifting by more than a single bit. That made writing constant-time crypto algorithms targeting Netburst a huge PITA.
|
# ? Aug 30, 2017 03:16 |
SwissArmyDruid posted:Couple of new stories aggregated for your perusal:

why the hell is PCIe 4.0 going to be so drat short-lived? 3.0 has been around for what, close to a decade?
|
|
# ? Aug 30, 2017 06:03 |
|
Watermelon Daiquiri posted:why the hell is pcie4.0 going to be so drat short lived? 3 has been around for what, close to a decade?

Two different teams; 4 had issues and 5 didn't. Why did we go from IPv4 to IPv6? IPv5 had issues.
|
# ? Aug 30, 2017 06:11 |
|
PCIe 3.0 was finalized in 2010, and boards started showing up around 2012 with Z77/Ivy Bridge. Their goal for finalizing PCIe 3.0 was 2008 IIRC, so realistically 5.0 won't be final until around 2023, with implementations around 2025. wargames posted:two differant teams, 4 had issues and 5 didn't. Why did we go from ipv4 to ipv6? ipv5 had issues.

Plus this. You can actually read the "IPv5" spec though! It's RFC 1819, Internet Stream Protocol Version 2.
|
# ? Aug 30, 2017 06:12 |
|
The Gen4->Gen5 transition will "only" change the signal rate, whereas there were some other changes going from 3.1 -> 4.0 (10-bit tag size, scaled flow control, power supply spec changes, etc). Of course that's the plan, and there will probably be some kind of feature creep that ends up delaying 5.0 anyway. The biggest reason for the delay in going to 4.0 is probably the number of companies in the PCI-SIG consortium now (the group that creates the spec). A lot of this is due to the explosion of PCIe-based storage, so drive vendors want their say. It's like herding cats. With the functional changes ironed out in 4.0, it should just be a physical-layer change going to 5.0.
|
# ? Aug 30, 2017 07:03 |
|
wargames posted:Why did we go from ipv4 to ipv6? ipv5 had issues. And here I thought it was supposed to make it easy to remember how many numbers you were supposed to input for the IP address.
|
# ? Aug 30, 2017 08:10 |
Honestly, I somehow never connected the 'v' in ipv# to 'version [number]'. I think I just thought the 4/6 meant the number of bytes/halfwords. Watermelon Daiquiri fucked around with this message at 08:38 on Aug 30, 2017 |
|
# ? Aug 30, 2017 08:21 |
|
IPv6 addresses have 16 bytes / 8 half-words (32-bit) though.
|
# ? Aug 30, 2017 11:42 |
|
Combat Pretzel posted:IPv6 addresses have 16 bytes / 8 half-words (32-bit) though. 128 bits.
|
# ? Aug 30, 2017 12:42 |
just goes to show I don't care enough to really count them lol. Also I was conflating link-local addresses, which have fewer groups shown
|
|
# ? Aug 30, 2017 13:27 |
|
Subjunctive posted:128 bits.

That's what I said. The 32-bit was to indicate half of which word width.
|
# ? Aug 30, 2017 15:31 |
|
Combat Pretzel posted:That's what I said. The 32-bit was to indicate half of which word width. Ah, ok. Sorry.
|
# ? Aug 30, 2017 16:00 |
|
TBH kind of annoying that the protocol took that long to gain traction. I have technical literature on IPv6 dated 2001 in some cardboard box.
|
# ? Aug 30, 2017 17:31 |
|
Google's IPv6 usage statistics (gathered on the basis of how many connections they see to all their sites over v6 vs v4) are very interesting for that. For one thing, IPv6 adoption is consistently higher on Saturdays, Sundays and holidays than on normal workdays. For another, Belgium, the US and Greece are the top 3 IPv6 users, in that order. https://www.google.com/intl/en/ipv6/statistics.html
|
# ? Aug 30, 2017 17:52 |
|
The motherboard for my i7 3770 system died and I wanted a new system in a pinch, so I grabbed a Ryzen 3 1200 and an MSI B350M PRO-VDH, along with some 3200MHz RAM or whatever. I managed to easily hit 3.8GHz with the stock cooler, though I only managed 2666MHz from the DDR4. Still, I don't really notice much of a difference from my old system, which is fantastic for the cost. Dolphin even runs considerably better; I used to get framerate drops in some games and don't now. The R3 1200 is an absolute bargain A++++
|
# ? Aug 30, 2017 19:50 |
|
SwissCM posted:The motherboard for my i7 3770 system died and I wanted a new system in a pinch so I grabbed a Ryzen 3 1200 and a MSI B350M PRO-VDH, along with some 3200 mhz RAM or whatever. What are your plans for that 3770?
|
# ? Aug 30, 2017 20:46 |
|
https://www.youtube.com/watch?v=EtB3uirEhbY Destiny 2 CPU benchmarks by Gamers Nexus
|
# ? Aug 30, 2017 20:51 |
|
HamHawkes posted:What are your plans for that 3770? It's probably going to a friend of mine. I live in Australia, so if you're interested in buying it the cost of shipping would probably make it not worth it.
|
# ? Aug 30, 2017 20:52 |
|
Woop. I finally got a retention ring for my Arctic Liquid Freezer 360, the free one they sent got lost in the post, so I ordered a Corsair branded ring which is a perfect fit because Asetek makes all of them. So far the CPU is a whole lot cooler. With the stock cooler at 3.7GHz it was topping 87C running mprime/prime95, now it is reaching only 61C during the same part of the stress test. I'm actually comfortable running it now, and may even try for a higher overclock.
|
# ? Aug 30, 2017 21:00 |
|
eames posted:https://www.youtube.com/watch?v=EtB3uirEhbY So SMT atm is completely not being used by D2, sure hope that changes by the time the full game comes out
|
# ? Aug 30, 2017 22:16 |
|
Some TR benchmarks for comparison would have been nice, I suspect the engine tops out at 8 threads because that's what consoles have.
|
# ? Aug 30, 2017 22:56 |
|
fishmech posted:Google's IPv6 usage statistics (gathered on the basis of how many connections they see to all their sites over v6 vs v4) are very interesting for that. For one thing, IPv6 adoption is consistently higher on Saturdays, Sundays and holidays than on normal workdays. For another, Belgium, the US and Greece are the top 3 IPv6 users, in that order.
|
# ? Aug 30, 2017 23:01 |
|
Scarecow posted:So SMT atm is completely not being used by D2, sure hope that changes by the time the full game comes out There's the typical 10% improvement between the 7600K and 7700K so SMT is being used. Based on the fact that the 1600X and 1700 score the same it probably isn't scaling past 12 threads - which is not unusual for games.
|
# ? Aug 30, 2017 23:07 |
|
Paul MaudDib posted:There's the typical 10% improvement between the 7600K and 7700K so SMT is being used. Based on the fact that the 1600X and 1700 score the same it probably isn't scaling past 12 threads - which is not unusual for games.

No, it's that SMT for AMD is flat out not being used atm, see the article: http://www.gamersnexus.net/game-bench/3038-destiny-2-beta-cpu-benchmarks-testing-research "For one instance, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts like this:"
|
# ? Aug 30, 2017 23:13 |
|
fishmech posted:Google's IPv6 usage statistics (gathered on the basis of how many connections they see to all their sites over v6 vs v4) are very interesting for that. For one thing, IPv6 adoption is consistently higher on Saturdays, Sundays and holidays than on normal workdays. For another, Belgium, the US and Greece are the top 3 IPv6 users, in that order.

Greece probably sold many of their IPv4 blocks off
|
# ? Aug 31, 2017 02:06 |
|
I think the only important thing from that GN article is this, and it shows how Intel is still totally ahead in everything and perfect in every way:
|
# ? Aug 31, 2017 02:20 |
|
Arivia posted:I think the only important thing from that GN article is this, and it shows how Intel is still totally ahead in everything and perfect in every way:

yes, let's just ignore that SMT is not working for AMD CPUs in D2 atm
|
# ? Aug 31, 2017 02:30 |
|
Scarecow posted:yes lets just ignore that SMT is not working for AMD cpus in D2 atm
|
# ? Aug 31, 2017 02:47 |
Scarecow posted:no its that SMT for AMD is flat out not being used atm see in the article But can we really trust this? r/AMD says No!: The Hottest of Takes posted:Gamers Nexus is not reliable is anti AMD bullshit, all manipulated. Everything that favors AMD is not reviewed, benchmarked nor shown by gamers nexus. Conclusions are even worst. Stop using that guys things as a "revealed truth" nor information because it is all OPINIONS of one guy who clearly hates AMD or works for competitors. And it's clear that something about that is really going on with the videos he didn't show, then showed and then explained like the "glue interview" one that "was lost". I saw polaris benchmarks with OLD DRIVERS in that site.
|
|
# ? Aug 31, 2017 02:49 |
|
yeah I missed that one
|
# ? Aug 31, 2017 02:51 |
|
Scarecow posted:yeah I missed that one Yeah, Steve spends the first ten minutes of the video going "there's a lot more to this than just straight benchmarks, please pay attention to all the details" and then the last minute going "so don't take the one graph you like and throw it out there without context as if it's the final word" so I was just doing that for shits and giggles. It's a beta, SMT isn't working, Ryzens are good CPUs.
|
# ? Aug 31, 2017 04:04 |
|
AVeryLargeRadish posted:But can we really trust this? r/AMD says No!: r/AMD is hosed in the head because Steve only ever poo poo on the 1700X and 1800X for being overpriced relative to the 1700, and was iffy on the 1700 from a pure gaming standpoint as the 7700K performs better for a similar enough price. Oh and he rightly pans AMD for Vega. Otherwise I've never seen him explicitly down on AMD and instead has been supportive of basically the 1700 and down, and even Polaris as budget options.
|
# ? Aug 31, 2017 06:35 |
FaustianQ posted:r/AMD is hosed in the head because Steve only ever poo poo on the 1700X and 1800X for being overpriced relative to the 1700, and was iffy on the 1700 from a pure gaming standpoint as the 7700K performs better for a similar enough price. Oh and he rightly pans AMD for Vega. Otherwise I've never seen him explicitly down on AMD and instead has been supportive of basically the 1700 and down, and even Polaris as budget options.
|
|
# ? Aug 31, 2017 08:29 |
|
|
I mean, yea?
|
# ? Aug 31, 2017 08:38 |