mewse
May 2, 2006

Very interesting, thank you

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Otakufag posted:

I'm still trying to decide between 9600k, 9700k and ryzen 3600. Haven't upgraded cpu since 2011 and only game, nothing more. Have gsync 144hz monitor and gtx 1070, please help me decide.

For the money there's nothing that can touch the 3600, although AMD is still in the teething-problems stage with BIOSes. Right now boost isn't hitting advertised clocks, the chips don't idle properly, they're broken in Destiny 2, and they don't boot Linux cleanly. I'm sure they'll sort it out within a few months, but for now expect some tinkering.

The 9600K and 9700K both suffer from excessive segmentation, if they were priced 1:1 with their AMD equivalents then there would maybe be a case for strictly gaming use, but paying a 10% premium for the privilege of losing hyperthreading is not great, and Intel's per-core lead is down to about 10% at this point. So 10% more for 10% more gaming performance and 20-30% less multithreaded performance.

Out of your list the Intel option that makes the most sense is probably the 9700K but I wouldn't pay any more than the 3700X costs for it (~$330). Let me also throw in a shoutout to the 8700K, which is basically the same level of performance as the 9700K but can usually be sourced cheaper (Microcenter notably has them for $300 minus another $30 if you buy a mobo). In the current crop of games, the only thing that tops a delidded, 5.3 GHz 8700K is a delidded, 5.3 GHz 9700K/9900K-with-disabled-HT, and you don't have to deal with delidding a soldered processor or paying SiliconLottery to do it for you.

If you're the type of person who only upgrades once every 8 years then you may also want to consider the 9900K; if you can find one for closer to $400 then that's probably the better long-term option, but again I wouldn't pay much more than 3800X-level pricing for it ($400-430). The 9900K has another caveat: Intel has started binning out the golden chips for the 9900KS, so you're much more likely to end up with a chip that only does 4.9 GHz. There is the 9900KF, which isn't affected by that change, but then you don't get the iGPU, and I still wouldn't pay more than 3800X-level pricing for it.

Paul MaudDib fucked around with this message at 19:50 on Jul 19, 2019

Otakufag
Aug 23, 2004
With the poo poo show the Zen 2 launch has been regarding motherboard BIOSes, I wonder if this was the real reason all along for Intel's love of socket changes.

Shemp the Stooge
Feb 23, 2001

Otakufag posted:

With the poo poo show the Zen 2 launch has been regarding motherboard BIOSes, I wonder if this was the real reason all along for Intel's love of socket changes.

I always suspected it was something they did to keep their board partners happy and Intel-centric.

B-Mac
Apr 21, 2003
I'll never catch "the gay"!

Otakufag posted:

With the poo poo show the Zen 2 launch has been regarding motherboard BIOSes, I wonder if this was the real reason all along for Intel's love of socket changes.

Money is the only answer you need.

Kerbtree
Sep 8, 2008

BAD FALCON!
LAZY!
Question: do modern CPUs still have a full set of all the instructions for 8/16-bit software?

I'm sure I remember a long time ago there being something about only being able to use the next group down, so 64/32, 32/(16+8).

VirtualBox will happily boot an 8-bit DOS 6.22 instance on top of 64-bit Win10.

SamDabbers
May 26, 2003



Kerbtree posted:

Question: do modern CPUs still have a full set of all the instructions for 8/16-bit software?

I'm sure I remember a long time ago there being something about only being able to use the next group down, so 64/32, 32/(16+8).

VirtualBox will happily boot an 8-bit DOS 6.22 instance on top of 64-bit Win10.

Yes, modern x86 CPUs can still run 8- or 16-bit software. The "only one group down" thing is about Windows binary compatibility: 64-bit Windows dropped the 16-bit NTVDM subsystem, so 16-bit apps need a VM or an emulator there, even though the CPU itself still supports the old modes.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I have MSDOS 6.22 running in a Hyper-V VM.

Yaoi Gagarin
Feb 20, 2014

Kerbtree posted:

Question: do modern CPUs still have a full set of all the instructions for 8/16-bit software?

I'm sure I remember a long time ago there being something about only being able to use the next group down, so 64/32, 32/(16+8).

VirtualBox will happily boot an 8-bit DOS 6.22 instance on top of 64-bit Win10.

Every modern x86_64 CPU actually starts out running in real mode on power-up, and then the OS is responsible for telling it to switch to protected mode or long mode

Kazinsal
Dec 13, 2011


No x86 machine supports 8-bit software; the 8086 was a 16-bit machine and MS-DOS was a 16-bit operating system. 8-bit would be 8080/Z80, which is an architecture that's sort of close enough to the 8086 that early versions of PC-DOS came with a source code translator to turn 8080 assembly source into unoptimized but generally workable 8086 assembly source.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

VostokProgram posted:

Every modern x86_64 CPU actually starts out running in real mode on power-up, and then the OS is responsible for telling it to switch to protected mode or long mode
I think on UEFI systems, it's getting switched to protected mode pretty much instantly, and stays in that mode while being handed off to the bootloader.

Yaoi Gagarin
Feb 20, 2014

Combat Pretzel posted:

I think on UEFI systems, it's getting switched to protected mode pretty much instantly, and stays in that mode while being handed off to the bootloader.

I'm sure that's true, but my point is more that some piece of software actually needs to tell it to make that switch, otherwise it'll happily chug along in real mode thinking it's still the 80s

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

VostokProgram posted:

I'm sure that's true, but my point is more that some piece of software actually needs to tell it to make that switch, otherwise it'll happily chug along in real mode thinking it's still the 80s

Just like Ernest Cline wished he could.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

VostokProgram posted:

I'm sure that's true, but my point is more that some piece of software actually needs to tell it to make that switch, otherwise it'll happily chug along in real mode thinking it's still the 80s
Actually, at this point I'm surprised it hasn't gotten ripped out yet, to streamline the CPU internals. Needing to support it probably has some influence on CPU design. Apparently CPUs need a coprocessor to be initialized to begin with anyway, at least on Intel, so the mode it starts in should be irrelevant as long as the UEFI firmware is set up accordingly.

IIRC DOSBox emulates real mode, and apps run just fine and even games aren't really hard on the CPU. So if you need 16-bit, emulation ought to do fine (enough).
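The overhead math backs that up. A hand-wavy sketch (the guest instruction rate and the per-instruction emulation cost below are assumptions, not measurements):

code:

# Hand-wavy estimate of how cheap real-mode emulation is. The guest
# instruction rate and the cost per emulated instruction are assumptions.
guest_ips = 5_000_000        # roughly what a 4.77 MHz 8088 could retire per second
host_cost_per_instr = 100    # assumed host instructions per emulated guest instruction
host_ips = 4_000_000_000     # one modern core, ~4 GHz at ~1 instruction per cycle

host_load = guest_ips * host_cost_per_instr / host_ips
print(f"Host core utilization: {host_load:.1%}")   # ~12.5% of a single core

Even with a generously slow interpreter you're only burning a fraction of one core, which is why DOSBox-era emulation is a non-issue.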

sadus
Apr 5, 2004

Combat Pretzel posted:

I have MSDOS 6.22 running in a Hyper-V VM.

Yes, I have a legit license for some old 16-bit software, but it has to run inside an XP VM. R.I.P. Worldgroup/MajorBBS; it needs XP for its ancient InstallShield version to run.

Yaoi Gagarin
Feb 20, 2014

Combat Pretzel posted:

Actually, at this point I'm surprised it hasn't gotten ripped out yet, to streamline the CPU internals. Needing to support it probably has some influence on CPU design. Apparently CPUs need a coprocessor to be initialized to begin with anyway, at least on Intel, so the mode it starts in should be irrelevant as long as the UEFI firmware is set up accordingly.

IIRC DOSBox emulates real mode, and apps run just fine and even games aren't really hard on the CPU. So if you need 16-bit, emulation ought to do fine (enough).

Could be that the real mode behavior of the CPU is itself just an emulation of the real thing. Although when you get down to the hardware/microcode level what's the real difference between emulating a thing and actually doing it

E: I used real 3 times in this post and I hate myself for it

iospace
Jan 19, 2038


VostokProgram posted:

Could be that the real mode behavior of the CPU is itself just an emulation of the real thing. Although when you get down to the hardware/microcode level what's the real difference between emulating a thing and actually doing it

E: I used real 3 times in this post and I hate myself for it

Real nice work here.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I don't know. Maybe there's additional state that needs to be tracked for real-mode support, which probably means transistors that could otherwise go to other things.

Combat Pretzel fucked around with this message at 03:22 on Jul 23, 2019

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

VostokProgram posted:

Could be that the real mode behavior of the CPU is itself just an emulation of the real thing. Although when you get down to the hardware/microcode level what's the real difference between emulating a thing and actually doing it

E: I used real 3 times in this post and I hate myself for it

Under the hood there is very little in common between an old 386-era chip and a modern x86 CPU. All the bits and gubbins are translated from x86 assembly into a shitload of micro-ops which do the correct math, but not in a way that's strictly 1:1 with what a naive programmer would expect given the assembly they fed it. There are a LOT of magic numbers and arcane poo poo on a modern processor that let us make 'GOTTA GO FAST!' memes while shitposting on a sonic the hedgehog forum.

Emulating the entire state machine for an old processor has so little overhead that it's basically a non-issue now, same with emulating a lot of game consoles, old Motorola chips, and ancient apple hardware.

Cygni
Nov 12, 2005

raring to post

IceLake/Sunny Cove officially launching. Naming is dropping a zero off, and becoming the "10 series". Intel is also going to be adding the graphics config to the end of the name itself for mobile, so parts are things like "Core i7-1068G7" or "Core i3-1005G1"

AT with the big overview article: https://www.anandtech.com/show/14514/examining-intels-ice-lake-microarchitecture-and-sunny-cove

And some actual testing in a development laptop: https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm/

Seems like the IPC is fantastic, the integrated graphics are a huge step forward, and the memory controller is great... but 10nm is hurting the max frequency pretty badly vs Whiskey Lake, so single-threaded performance and performance per watt are only 3.5% or so better than Whiskey Lake.
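You can eyeball how the clock regression eats the IPC gain; the uplift and boost clocks below are illustrative assumptions, not the review's measured figures:

code:

# Rough single-thread estimate: relative performance ~ IPC gain x clock ratio.
# The 18% IPC uplift and both boost clocks are assumptions for illustration.
ipc_gain = 1.18           # assumed Sunny Cove IPC uplift over the Skylake-derived cores
ice_lake_boost = 3.9      # GHz, assumed Ice Lake-U single-core boost
whiskey_lake_boost = 4.6  # GHz, assumed Whiskey Lake-U single-core boost

relative_perf = ipc_gain * (ice_lake_boost / whiskey_lake_boost)
print(f"Estimated single-thread perf vs Whiskey Lake: {relative_perf:.2f}x")
# -> roughly 1.00x: the clock deficit eats nearly the whole IPC gain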

The Illusive Man
Mar 27, 2008

~savior of yoomanity~

Cygni posted:

IceLake/Sunny Cove officially launching. Naming is dropping a zero off, and becoming the "10 series". Intel is also going to be adding the graphics config to the end of the name itself for mobile, so parts are things like "Core i7-1068G7" or "Core i3-1005G1"

AT with the big overview article: https://www.anandtech.com/show/14514/examining-intels-ice-lake-microarchitecture-and-sunny-cove

And some actual testing in a development laptop: https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm/

Seems like the IPC is fantastic, the integrated graphics are a huge step forward, and the memory controller is great... but 10nm is hurting the max frequency pretty badly vs Whiskey Lake, so single-threaded performance and performance per watt are only 3.5% or so better than Whiskey Lake.

Here’s looking forward to this architecture on 7nm, I guess.

rage-saq
Mar 21, 2001

Thats so ninja...
Lighthouse tracking accuracy issue update: Steam's engineers have replicated my issue, identified a problem in the code, and are working towards a resolution.
I made a big long post on Reddit about this with responses from Steam Support included.
Maybe now lighthouse tech will be as accurate as everyone has always claimed it was.
https://reddit.com/r/ValveIndex/comments/ckg5ef/a_frank_discussion_about_lighthouse_tracking/

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Cygni posted:

IceLake/Sunny Cove officially launching. Naming is dropping a zero off, and becoming the "10 series". Intel is also going to be adding the graphics config to the end of the name itself for mobile, so parts are things like "Core i7-1068G7" or "Core i3-1005G1"

AT with the big overview article: https://www.anandtech.com/show/14514/examining-intels-ice-lake-microarchitecture-and-sunny-cove

And some actual testing in a development laptop: https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm/

Seems like the IPC is fantastic, the integrated graphics are a huge step forward, and the memory controller is great... but 10nm is hurting the max frequency pretty badly vs Whiskey Lake, so single-threaded performance and performance per watt are only 3.5% or so better than Whiskey Lake.

Yep, it looks great until you get to the real-world tests, where the higher frequency of the Whiskey Lake part makes up for its IPC deficit. Except for AVX-512, AES, and graphics, where the Ice Lake arch is designed for more perf.



Space Racist posted:

Here’s looking forward to this architecture on 7nm, I guess.

Intel's 7nm is roughly foundry 5nm, so expect even worse clocks since resistance is going up.



Also, the Apple A12 and A13 are just way better than everything else on IPC.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
A12, A13, etc, are also ARM processors, so that's not an entirely fair comparison.

Honestly, laptop CPU performance for 98% of people is already well above "good enough" with even middling i5 CPUs, so I'd be very happy with flat performance, better iGPU, and 20% better battery life.

craig588
Nov 19, 2005

by Nyc_Tattoo
For what I use a laptop for, more battery life is all that matters. What am I going to do on the go with eight 5 GHz cores? I have four 3.5 GHz Kaby Lake cores now and it's more than powerful enough for anything I'd use a laptop for. I had to choose between a 4-core i5 and a 2-core Pentium, and the cost difference was small enough that I went with the i5, but I didn't rule out the 2-core option based on performance alone.

zer0spunk
Nov 6, 2000

devil never even lived
I'm still on an OC'd 3770K from 2012... should I be looking at 2019 parts or wait for 11th gen in 2020? I'm already on the last GPU upgrade I'd want to put in here (a 2080), so the next move is a new build altogether, case on up. I'm using a 1440p/60 Hz 2011 UltraSharp that I don't feel the need to replace anytime soon either, so upgrading has felt like a lateral move at best thus far.

GutBomb
Jun 15, 2005

Dude?

craig588 posted:

For what I use a laptop for more battery life is all that matters. What am I going to do mobily with 8 5 GHz cores? I have 4 3.5 GHz Kaby Lake cores now and it's more than powerful enough for anything I'd use a laptop for. I had to choose between a 4 core i5 and a 2 core Pentium and the cost difference was small enough I went with the i5, but I didn't rule out the 2 core option based on performance alone.

Lots of companies issue laptops as employee machines and developers could definitely use hefty machines with lots of fast cores. Development these days involves a lot of virtual machines and bloated IDEs that really take advantage of lots of fast cores.

PC LOAD LETTER
May 23, 2005
WTF?!

zer0spunk posted:

I'm still on an OC'd 3770K from 2012... should I be looking at 2019 parts or wait for 11th gen in 2020?

IMO if you don't feel a need and aren't running into any performance issues with what you currently do, then it's perfectly fine to wait a while and see what happens in 2020. If nothing else you'll be able to get currently new and expensive parts for less.

Though I think if you're waiting on something special from Intel in that time period, you'll probably be disappointed, since they're going to be stuck on 14nm+++++ *****lake parts until 2021 for desktop performance stuff. 2021-2022 might be a different story, although even that isn't assured if Intel's 7nm turns out to be as ho-hum as rumored.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

GutBomb posted:

Lots of companies issue laptops as employee machines and developers could definitely use hefty machines with lots of fast cores. Development these days involves a lot of virtual machines and bloated IDEs that really take advantage of lots of fast cores.

Yeah, at the last place I worked developers were all given basic ultrabooks; if you were someone doing 3D modeling work then you got a chunky desktop-replacement laptop with a dGPU.

FuturePastNow
May 19, 2014


I'm not impressed by the integrated graphics in that Anandtech test. Faster than the previous gen, wow. They're also testing the top-end model and the iGPU in the average mid-to-low end will be cut down significantly from that.

Cygni
Nov 12, 2005

raring to post

FuturePastNow posted:

I'm not impressed by the integrated graphics in that Anandtech test. Faster than the previous gen, wow. They're also testing the top-end model and the iGPU in the average mid-to-low end will be cut down significantly from that.

Even the most cut-down G1 version has more EUs than the Whiskey Lake flagships. And the big G7 version is on all the i7s and even the top i5s it seems, so I think it's gonna be pretty common in things like good ultrabooks. Gaming laptops will still have dedicated GPUs, I'm sure.

BlankSystemDaemon
Mar 13, 2009



What I'm a little bit excited about is getting a quad-core non-SMT 7W CPU that has SHA-512 among its supported crypto primitives, plus the higher bandwidth on the platform.
That lets me dream about a passively cooled router/file-server/HTPC which has ZFS with sha512t256 checksumming both on-disk and in-memory.

KillHour
Oct 28, 2007


Those IPC improvements are pretty drat nice - better than we've seen for a long time, I think. Too bad Intel still can't figure out the process node enough to make desktop CPUs out of it.

As far as integrated GPUs go, it would be nice if we could take advantage of them being on the same die to have really super-fast CPU-GPU communication for stuff like compute shaders while your dGPU handles rendering duties. Normally, reading back the data over the PCIe bus is the limiting factor in offloading to compute.
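Back-of-the-envelope on why the readback hurts; the buffer size and bandwidth figures below are rough assumptions, not measurements:

code:

# Rough estimate of how long it takes to move a compute result back to the
# CPU over PCIe vs. an assumed on-die path. All numbers are ballpark guesses.
buffer_gb = 0.25            # 256 MB of compute output
pcie3_x16_gbs = 12.0        # usable PCIe 3.0 x16 bandwidth, GB/s
on_die_gbs = 100.0          # assumed CPU<->iGPU shared-cache/on-die bandwidth, GB/s

pcie_ms = buffer_gb / pcie3_x16_gbs * 1000
on_die_ms = buffer_gb / on_die_gbs * 1000
print(f"PCIe readback:   {pcie_ms:.1f} ms")    # ~20.8 ms
print(f"On-die readback: {on_die_ms:.1f} ms")  # ~2.5 ms

At 60 fps a frame is ~16.7 ms, so the PCIe round trip alone blows the budget, which is exactly why keeping compute results on-package would be attractive.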

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

D. Ebdrup posted:

What I'm a little bit excited about is getting a quad-core non-SMT 7W CPU that has SHA-512 among its supported crypto primitives, plus the higher bandwidth on the platform.
That lets me dream about a passively cooled router/file-server/HTPC which has ZFS with sha512t256 checksumming both on-disk and in-memory.

Why the hell are you trying to use sha512 for checksumming??

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BangersInMyKnickers posted:

Why the hell are you trying to use sha512 for checksumming??

Don't want to risk the chance of two hentai classic home movies colliding in MD5 space!

SamDabbers
May 26, 2003



Checksum collisions are super bad if you're doing dedup on ZFS, which is why a SHA algorithm is strongly recommended. In the normal non-dedup case, ZFS is just checking that the record doesn't have bitrot, not that it's unique, so SHA is computationally expensive overkill.

It'd be interesting to see a performance comparison using SHA on ZFS with hardware instructions vs the pure software path vs fletcher4.
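As a rough sanity check on the pure software path (this is just Python's hashlib over a 128 KiB record-sized buffer, not ZFS's actual checksum pipeline, so treat the numbers as illustrative only):

code:

# Hash a 128 KiB buffer (ZFS's default recordsize) repeatedly and report MB/s.
# This is the generic software path in Python's hashlib, not ZFS itself.
import hashlib
import os
import time

BLOCK = os.urandom(128 * 1024)
ITERATIONS = 2000

for algo in ("sha256", "sha512"):
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        hashlib.new(algo, BLOCK).digest()
    elapsed = time.perf_counter() - start
    mb_hashed = ITERATIONS * len(BLOCK) / (1024 * 1024)
    print(f"{algo}: {mb_hashed / elapsed:.0f} MB/s")

Fun wrinkle: on most 64-bit CPUs software SHA-512 is actually faster per byte than SHA-256 (it chews through 128-byte blocks with 64-bit words), though fletcher4 still stomps both without hardware help.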

canyoneer
Sep 13, 2005


I only have canyoneyes for you
anytime I hear someone say "dedup" out loud I think of the Pink Panther theme. dedup dedup

BlankSystemDaemon
Mar 13, 2009



BangersInMyKnickers posted:

Why the hell are you trying to use sha512 for checksumming??
While SamDabbers is absolutely correct that it's good if you're using dedup on ZFS, I'm not doing that (because dedup on ZFS as it's currently implemented is loving nuts to use for anything except a few very specific use-cases).
SHA-512 was always likely to be one of the checksums that would get hardware acceleration, whereas fletcher2 and fletcher4 are unlikely to ever be accelerated, and crc32c isn't supported by ZFS (despite being sped up by SSE4.2). Skein, the only other option, is also unlikely to ever be accelerated in hardware.
SHA-512 is already supported by both QuickAssist and the Chelsio crypto accelerators behind FreeBSD's ccr(4) driver; on top of that, beyond x86 there's ARM, POWER9/10, and RISC-V, which are adding it or planning to.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

SamDabbers posted:

Checksum collisions are super bad if you're doing dedup on ZFS, which is why a SHA algorithm is strongly recommended.

Very true, but for a non-enterprise setup, let alone a run of the mill HTPC / home file server, SHA-512 is taking it a bit far.

Even colliding SHA-1 would take a completely infeasible amount of file storage to even begin to talk about, and I think by default ZFS dedup uses SHA-256. And if you're just doing simple checksumming without dedup, there's even less of a point in using SHA-512, but I guess if it's hardware accelerated then do whatever. On a home system with any sort of modern CPU it is unlikely to matter from a performance stance, anyhow.

DrDork fucked around with this message at 19:02 on Aug 5, 2019

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I mean, yeah, if you've got QuickAssist cards to offload to then by all means go nuts, but you'd need an absurd number of 128kb blocks before a sha256 hash collision becomes even remotely probable compared to some other corruption event.

https://blogs.oracle.com/bonwick/zfs-deduplication-v2

quote:

When using a secure hash like SHA256, the probability of a
hash collision is about 2^-256 = 10^-77 or, in more familiar notation,
0.00000000000000000000000000000000000000000000000000000000000000000000000000001.
For reference, this is 50 orders of magnitude less likely than an undetected,
uncorrected ECC memory error on the most reliable hardware you can buy.
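If you want to put actual block counts on that, the birthday bound works out like below. The pool sizes are made-up examples just to show the scale:

code:

# Birthday-bound estimate of a checksum collision: p ~ n^2 / 2^(b+1)
# for n unique blocks and a b-bit hash. Pool sizes are made-up examples.
import math

def collision_log2(n_blocks: int, hash_bits: int) -> float:
    """Return log2 of the approximate collision probability."""
    return 2 * math.log2(n_blocks) - (hash_bits + 1)

for n, label in [(2**33, "1 PiB of 128 KiB records"),
                 (2**43, "1 EiB of 128 KiB records")]:
    for bits in (256, 512):
        print(f"{label}, {bits}-bit hash: p ~ 2^{collision_log2(n, bits):.0f}")

Even at an exabyte of unique 128 KiB records you're sitting around 2^-171 for a 256-bit hash, dozens of orders of magnitude below the ECC error rate Bonwick mentions, so the hash really isn't the weak link.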
