Anime Schoolgirl
Nov 28, 2002

2133MHz is a really miserly speed to lock most DDR4 platforms to, but then again so was DDR3-1333. DDR4-3000+ kits have advanced to the point where they have appreciably lower latency than 2133 for only about 15% more money.
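For rough context, absolute CAS latency in nanoseconds is the CL cycle count times the clock period; a quick comparison, assuming typical retail timings (CL15 at 2133, CL16 at 3200), looks like this:

    # absolute CAS latency in ns = CL cycles * 2000 / transfer rate in MT/s
    # (the memory clock is half the transfer rate, so one cycle = 2000/MT ns)
    def cas_latency_ns(cl_cycles, mt_per_s):
        return cl_cycles * 2000 / mt_per_s

    print(cas_latency_ns(15, 2133))  # DDR4-2133 CL15 -> ~14.1 ns
    print(cas_latency_ns(16, 3200))  # DDR4-3200 CL16 -> ~10.0 ns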

It's pretty much the reason why if you're going for anything above a budget build people tell you to use a Z170 chipset.


ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

Combat Pretzel posted:

I guess those benchmarks from a while ago were a figment of our imagination.
These ones?

the article posted:

in most home-use cases users won't see one bit of difference
...
games and synthetic gaming benchmarks realize even less performance increase with DDR4 than the applications we tested on the previous page
A handful of exceptions aside, most of the graphs look like this:

Anime Schoolgirl
Nov 28, 2002

in games that stream metric tons of assets like the witcher 3 or fallout 4, DRAM throughput makes a huge difference, especially in NPC-rich areas

most other times, lower latency gives you bigger gains and that's easier to come by than it was a year ago

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

ConanTheLibrarian posted:

These ones?


A handful of exceptions aside, most of the graphs look like this:



Nah, the ones DigitalFoundry did a while back where they saw a 7%-10% increase in frame rates in games like GTAV and TW3 between DDR4-2133 and DDR4-3200.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

ConanTheLibrarian posted:

A handful of exceptions aside, most of the graphs look like this:
Given that there's less than a 2% difference in that benchmark between DDR3-1600 and DDR3-2400, you can safely assume it does jack squat for testing the memory subsystem--that one's pretty much a straight CPU test. If you look through the other tests from that review, you do see DDR4 coming out ahead, with DDR4-3200 generally being around 5-10% faster than DDR3-2400, and some of the memory-heavy tests showing upwards of a 20% improvement.

Considering 16GB of DDR3-2400 is all of about $5 less than 16GB of DDR4-3200, there's also zero reason you should ever be doing an apples-to-apples comparison between same (or even similarly) clocked DDR3 vs DDR4 sticks.

Also, while testing 5+ year old games at tiny resolutions may make sense for removing the GPU from the equation when trying to test CPU performance, it probably isn't a great way to test memory performance considering the massively reduced swap workload.

No, DDR4 isn't going to magically make every single task you do noticeably faster, but then again swapping your GT 640 for a GTX 1080 isn't gonna do much for how fast MS Office opens or how long it takes Handbrake to finish, either. Doesn't mean the 1080 isn't faster than the 640, though.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
In situations where you've got a serious memory bottleneck, isn't quad channel RAM on X99 the way to go? Quad channel DDR4-2400 is going to blow away any dual channel for bandwidth.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

Twerk from Home posted:

In situations where you've got a serious memory bottleneck, isn't quad channel RAM on X99 the way to go? Quad channel DDR4-2400 is going to blow away any dual channel for bandwidth.

For the games tested you stop seeing gains past DDR4-3400 w/ dual channel. Considering the price difference between a Z170 platform and an X99 one it makes more sense to just pay ~$10 more for some DDR4-3000/3200. Now if you are doing really memory intensive stuff, server stuff, database stuff and so on then yeah, quad channel is going to be huge, but for a consumer they either don't need the extra bandwidth or they can get enough bandwidth on a cheaper Z170 platform.
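For a rough sense of the raw numbers behind that, assuming a 64-bit (8-byte) bus per channel:

    # peak theoretical bandwidth = transfer rate (MT/s) * 8 bytes * channels
    def peak_gb_per_s(mt_per_s, channels):
        return mt_per_s * 8 * channels / 1000

    print(peak_gb_per_s(3400, 2))  # dual-channel DDR4-3400 -> ~54.4 GB/s
    print(peak_gb_per_s(2400, 4))  # quad-channel DDR4-2400 -> ~76.8 GB/s

So quad-channel wins easily on paper; the question is whether a consumer workload ever actually fills the dual-channel pipe.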

Regrettable
Jan 5, 2010



ConanTheLibrarian posted:

These ones?


A handful of exceptions aside, most of the graphs look like this:



Yes, the handful of exceptions that show a 10-30% increase in speed. Also, that handful makes up 30% of the tests.

Regrettable fucked around with this message at 07:02 on Aug 30, 2016

feedmegin
Jul 30, 2008

PerrineClostermann posted:

It isn't about the multithreaded capabilities of any single piece of software.

Exactly how many programs do you think your average user is actively using simultaneously?

fishmech posted:

It would sure be stupid of most consumer software to optimize towards massive amounts of cores when massive amounts of cores aren't available to consumers. The argument you're making is like it was the late 80s and someone was saying that since DOS doesn't currently support more than 32 MB in a partition, no hard drive manufacturers should offer larger drives.

Again, cores aren't magic. Look up Amdahl's Law. You can't 'optimize towards massive amounts of cores' if the task you are attempting to complete is fundamentally sequential.
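For reference, Amdahl's Law puts a hard ceiling on this: if only a fraction p of the work can run in parallel, the speedup on N cores is 1 / ((1 - p) + p/N). A quick illustration (the 90%-parallel figure is just an example):

    # Amdahl's Law: speedup on n cores when a fraction p of the work is parallel
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    print(amdahl_speedup(0.9, 4))    # ~3.1x
    print(amdahl_speedup(0.9, 20))   # ~6.9x
    print(amdahl_speedup(0.9, 1e9))  # ~10x, the ceiling no core count can beat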

feedmegin fucked around with this message at 13:36 on Aug 30, 2016

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

feedmegin posted:

Exactly how many programs do you think your average user is actively using simultaneously?


If they're a typical user running Chrome and a word processor, they're actually running easily 10 "programs" at once.

feedmegin posted:

Again, cores aren't magic. Look up Amdahl's Law. You can't 'optimize towards massive amounts of cores' if the task you are attempting to complete is fundamentally sequential.

No one said they're magic, but these days we have plenty of programs that operate best with 2 cores and games that operate best with 4 or 8 threads. And having 8 threads is still pretty massive to the average consumer system, which has 2 or 4 threads.

Setset
Apr 14, 2012
Grimey Drawer

feedmegin posted:

Again, cores aren't magic. Look up Amdahl's Law. You can't 'optimize towards massive amounts of cores' if the task you are attempting to complete is fundamentally sequential.

Isn't Amdahl's Law overshadowed by Gustafson's Law, which basically states that, given a non-fixed size of data to be processed, Amdahl's Law is irrelevant?

Admittedly, I don't know enough about this (because math) beyond what I've read on wikis and some light research, but it felt worth mentioning.

HMS Boromir
Jul 16, 2011

by Lowtax
Hello. Now is the time when it is the Kaby Lake.

Barry
Aug 1, 2003

Hardened Criminal

DrDork posted:

Considering 16GB of DDR3-2400 is all of about $5 less than 16GB of DDR4-3200, there's also zero reason you should ever be doing an apples-to-apples comparison between same (or even similarly) clocked DDR3 vs DDR4 sticks.

Not really. More like ~$20. Which still isn't much, but it's enough to not just immediately spend the money when the performance difference is (typically) very negligible.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007611%20601190328%20600006072%20600561668%20600561672%2050008476

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

HMS Boromir posted:

Hello. Now is the time when it is the Kaby Lake.

Still no direct HDMI 2.0 out, pushing up BOM costs for OEMs. Hardware HEVC / VP9 encode decode and an extra 200MHz for all!

feedmegin
Jul 30, 2008

Ninkobei posted:

Isn't Amdahl's Law overshadowed by Gustafson's Law, which basically states that, given a non-fixed size of data to be processed, Amdahl's Law is irrelevant?

Admittedly, I don't know enough about this (because math) beyond what I've read on wikis and some light research, but it felt worth mentioning.

Not really...basically what that seems to be saying is 'Gustafson's law argues that a fourfold increase in computing power would instead lead to a similar increase in expectations of what the system will be capable of. If the one-minute load time is acceptable to most users, then that is a starting point from which to increase the features and functions of the system. The time taken to boot to the operating system will be the same, i.e. one minute, but the new system would include more graphical or user-friendly features' but I'm not sure that really applies. Like, what 'extra features and functions of the system' are you going to use those 20 extra cores for? What extra bling can you really add to the process of writing a Word document? That page is a bit woolly but it basically seems to be saying 'there are extra cores so programmers will use them to add nonspecific Cool poo poo' and for your average consumer task I don't think that's true.
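For comparison, Gustafson's Law measures the serial fraction s on the parallel machine and assumes the problem grows to fill the extra cores, giving a scaled speedup of s + (1 - s) * N; the two laws only diverge if you actually have a bigger problem to feed those cores (the 10% serial figure below is just an example):

    # Gustafson's scaled speedup vs Amdahl's fixed-size speedup,
    # with serial fraction s and n cores (illustrative numbers only)
    def gustafson(s, n):
        return s + (1.0 - s) * n

    def amdahl(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    print(gustafson(0.1, 20))  # ~18.1x, if the workload scales up with the cores
    print(amdahl(0.1, 20))     # ~6.9x, if the workload stays fixed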

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

feedmegin posted:

Again, cores aren't magic. Look up Amdahl's Law. You can't 'optimize towards massive amounts of cores' if the task you are attempting to complete is fundamentally sequential.

Most computationally-intensive tasks aren't fundamentally sequential though. You can do faster video compression by parallelizing the search for efficient codings on a spatial or a temporal basis. Instead of running a single big game loop you can run different tasks in their own update loops (network/IO/physics/AI/rendering/etc), or you can parallelize some of those tasks on their own (eg process AI for all objects in parallel).
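As a minimal sketch of that last idea, here is the data-parallel shape of per-NPC AI updates in Python (a real engine would use a C++ job system, and the update rule here is made up):

    from concurrent.futures import ProcessPoolExecutor

    def update_ai(npc):
        # hypothetical per-NPC decision logic; each NPC only reads last
        # frame's snapshot of the world, so the updates are independent
        npc["goal"] = "flee" if npc["health"] < 20 else "patrol"
        return npc

    def update_all_ai(npcs):
        # fan the independent per-object updates out across cores
        with ProcessPoolExecutor() as pool:
            return list(pool.map(update_ai, npcs))

    if __name__ == "__main__":
        npcs = [{"id": i, "health": i % 100, "goal": None} for i in range(1000)]
        npcs = update_all_ai(npcs)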

Of course it's not a silver bullet: not everything parallelizes efficiently, and it's much harder to reason about a program with more moving parts, but in a lot of places even the low-hanging fruit remains unpicked. For example, the traditional graphics API is a global state machine that was permanently tied to a single thread. Eventually the ability to call the state machine from multiple threads (but only one at a time) was added, but it's only with DX12/Vulkan that multiple threads can submit multiple command bundles to the API at the same time for parallel execution. And drawcalls make up a majority of the CPU-side runtime, so this is very obviously the limiting task that needed to be parallelized to speed up execution.

Games are expected to run on hardware that exists in the real world, and that has kept development focused on the Single Big Loop that can be executed on crappy 2-core Pentiums and i3s. But that's not proof of the inherent single-threaded nature of the task, and we're already starting to break out of that mold. GTA:V refuses to launch on less than a 4-core processor and this will only accelerate as we move into the DX12 generation.

Paul MaudDib fucked around with this message at 15:42 on Aug 30, 2016

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


Twerk from Home posted:

Still no direct HDMI 2.0 out, pushing up BOM costs for OEMs. Hardware HEVC / VP9 encode decode and an extra 200MHz for all!

Kaby Lake was never intended to be anything more than a minor update, and at the current rate I wouldn't expect a big jump in performance until Ice Lake in 2018. It's somewhat disappointing, but the enhancements are certainly welcome for portable devices.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

feedmegin posted:

Not really...basically what that seems to be saying is 'Gustafson's law argues that a fourfold increase in computing power would instead lead to a similar increase in expectations of what the system will be capable of. If the one-minute load time is acceptable to most users, then that is a starting point from which to increase the features and functions of the system. The time taken to boot to the operating system will be the same, i.e. one minute, but the new system would include more graphical or user-friendly features' but I'm not sure that really applies. Like, what 'extra features and functions of the system' are you going to use those 20 extra cores for? What extra bling can you really add to the process of writing a Word document? That page is a bit woolly but it basically seems to be saying 'there are extra cores so programmers will use them to add nonspecific Cool poo poo' and for your average consumer task I don't think that's true.

If you think word processing is the end-all be-all of computational tasks then you probably shouldn't be commenting on CPU architecture.

It's still pretty funny though, because word processing is the classic example of a task that could be executed perfectly well on an 8080 and yet still nowadays consumes a factor of 10,000x or 100,000x more system resources to accomplish. It's never been a resource-intensive task for the hardware of its time, yet its needs have increased in proportion to the resources available, so it's pretty much a perfect example of Gustafson's Law.

Even still, there's plenty of places where gains could be made. High-quality typesetting ala LaTeX is not a computationally trivial task, nor would be running some neural net that provides higher-quality grammar checking in languages like English with irregular grammar rules.

Paul MaudDib fucked around with this message at 16:12 on Aug 30, 2016

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!
Also plenty of people have preinstalled bloatware like Norton eating up CPU cycles in the background.

feedmegin
Jul 30, 2008

Paul MaudDib posted:

If you think word processing is the end-all be-all of computational tasks then you probably shouldn't be commenting on CPU architecture

Of course not. I said 'Average consumer task'. Not deep learning, not graphics stuff (because that's exactly why GPUs with loads of cores are a thing), not scientific computing, and specifically not high performance gaming, but the sort of thing Joe Sixpack does with his computer every day when he's not playing games on his PS4. You can throw a few more cores at that stuff and it'll help, but throwing 20 at it simply isn't going to have that much effect. Maybe if he's got like 8 tabs open and visible then the Javascript on each of those tabs goes a little bit faster.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

AVeryLargeRadish posted:

Also plenty of people have preinstalled bloatware like Norton eating up CPU cycles in the background.

Also graphics drivers are currently a non-trivial load (particularly for AMD GPUs) and if you are using VR then processing the positioning data takes some decent grunt too (particularly Rift where you are doing image processing on one-or-more cameras streaming data at USB 3.0 speeds).

Like it or not, increased sandboxing is the shape of things to come. I wouldn't say running every application in a full-fat VM, but I wouldn't be surprised to see something based on the concept of BSD jails or containers at some future date. And one of the Microsoft Fellows has been talking about how what we really need is a red-machine/green-machine model, with two completely independent VMs for untrusted and completely-trusted applications.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

feedmegin posted:

Of course not. I said 'Average consumer task'. Not deep learning, not graphics stuff (because that's exactly why GPUs with loads of cores are a thing), not scientific computing, and specifically not high performance gaming, but the sort of thing Joe Sixpack does with his computer every day when he's not playing games on his PS4. You can throw a few more cores at that stuff and it'll help, but throwing 20 at it simply isn't going to have that much effect. Maybe if he's got like 8 tabs open and visible then the Javascript on each of those tabs goes a little bit faster.

Lol, you think game engines are actually running on the graphics card? You really aren't qualified to comment.

The only programs that actually run on the GPU are certain CUDA programs that use Dynamic Parallelism to avoid the overhead of kernel launch/synchronization. Everything else is run on the host CPU.

The GPU accelerates the graphics processing, but it is still at the mercy of the CPU to set everything up, update the state, and make drawcalls via the API. You can have the fastest GPU in the world but if you pair it with a 2-core Pentium it's still going to suck rear end.

The game engines we have today are not The End Of History and certainly do not represent an ideal; they are going to keep getting faster. Since IPC has pretty much hit a brick wall, that improvement is mostly going to come through greater parallelization.

feedmegin
Jul 30, 2008

Paul MaudDib posted:

Lol, you think game engines are actually running on the graphics card? You really aren't qualified to comment.

The only programs that actually run on the GPU are certain CUDA programs that use Dynamic Parallelism to avoid the overhead of kernel launch/synchronization. Everything else is run on the host CPU.

The GPU accelerates the graphics processing, but it is still at the mercy of the CPU to set everything up, update the state, and make drawcalls via the API. You can have the fastest GPU in the world but if you pair it with a 2-core Pentium it's still going to suck rear end.

The game engines we have today are not The End Of History and certainly do not represent an ideal; they are going to keep getting faster. Since IPC has pretty much hit a brick wall, that improvement is mostly going to come through greater parallelization.

I am well aware the engine doesn't run on the graphics card, thank you. I do this poo poo for a living so a little less snark, thanks. My point is that, yes, graphics is a field with more available parallelism in it than most, which is why GPUs became a thing at the vertex/fragment shader level, which obviously is separate from the parallelisation available further up in a graphics engine (hence, Vulkan, to eliminate some of the bottlenecks in OpenGL).

Again, though, I'm not talking about games here, I'm talking about the sort of apps Joe Sixpack is running on his Walmart special with Intel integrated graphics. Most people's gaming consists of browser games in Facebook. High-spec PC games could probably use more cores (though I still think 20 is pushing it), but 90% of PC users are not running high-spec PC games.

Haquer
Nov 15, 2009

That windswept look...

feedmegin posted:

I do this poo poo for a living so a little less snark, thanks.

You're being a dick, too. Who gives a poo poo if you do it for a living.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
Web browsing is much more common for the average user and takes a ton of resources these days :shrug:

Back in the dual vs quad days, we used to argue about how nothing used four threads. But the benefit was that anything running had a "free" core to run on, instead of sharing one with other threads that were, together, taxing it. Games didn't have to compete with the OS and background programs so much, for instance.

Haquer posted:

You're being a dick, too. Who gives a poo poo if you do it for a living.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

feedmegin posted:

I am well aware the engine doesn't run on the graphics card, thank you. I do this poo poo for a living so a little less snark, thanks. My point is that, yes, graphics is a field with more available parallelism in it than most, which is why GPUs became a thing at the vertex/fragment shader level, which obviously is separate from the parallelisation available further up in a graphics engine (hence, Vulkan, to eliminate some of the bottlenecks in OpenGL).

Again, though, I'm not talking about games here, I'm talking about the sort of apps Joe Sixpack is running on his Walmart special with Intel integrated graphics. Most people's gaming consists of browser games in Facebook. High-spec PC games could probably use more cores (though I still think 20 is pushing it), but 90% of PC users are not running high-spec PC games.

You can't really invoke "Joe Sixpack" and his "Walmart special" while ignoring preinstalled bloatware like the aforementioned Norton AV; I've seen that POS randomly spike i5 CPUs to 100% across all four cores and make browsers and Windows Explorer lag up. There are always arguments to be made for more threads and CPU horsepower in general.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

feedmegin posted:

Of course not. I said 'Average consumer task'. Not deep learning, not graphics stuff (because that's exactly why GPUs with loads of cores are a thing), not scientific computing, and specifically not high performance gaming, but the sort of thing Joe Sixpack does with his computer every day when he's not playing games on his PS4. You can throw a few more cores at that stuff and it'll help, but throwing 20 at it simply isn't going to have that much effect. Maybe if he's got like 8 tabs open and visible then the Javascript on each of those tabs goes a little bit faster.

The average consumer task is to have like Excel or Word open and then also a bunch of browser tabs (which in many browsers invokes multiple processes) and maybe the spotify app or something similar running too. That's actually a lot of processing that needs to be done, and it does bog down on their systems. If they had 20 threads on their processor, all that stuff they run at once would run a lot better even if they didn't know why it was running better.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

AVeryLargeRadish posted:

You can't really invoke "Joe Sixpack" and his "Walmart special" while ignoring preinstalled bloatware like the aforementioned Norton AV; I've seen that POS randomly spike i5 CPUs to 100% across all four cores and make browsers and Windows Explorer lag up. There are always arguments to be made for more threads and CPU horsepower in general.

You give Joe Sixpack 8 cores and they'll just make better bloatware to negate it.

E: Unfortunately I don't think there's a good case for regular users needing a ton of cores, otherwise Intel would've been selling an increasing number of smaller cores instead of having two large ones as a baseline. You just don't need 16 cores to run even 50 instances of Word, because they're mostly idle anyway.

mobby_6kl fucked around with this message at 18:07 on Aug 30, 2016

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

mobby_6kl posted:

You give Joe Sixpack 8 cores and they'll just make better bloatware to negate it.

And thereby justify more threads. I never said this was a problem with a permanent solution.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
For some reason corporate PC use never comes up in these types of conversations; I see people in the office with mountains of programs open at once all the time, even people who aren't doing anything technical. Everyone in the office has dual monitors now too, which further encourages having more stuff open.

NihilismNow
Aug 31, 2003

MaxxBot posted:

For some reason corporate PC use never comes up in these types of conversations; I see people in the office with mountains of programs open at once all the time, even people who aren't doing anything technical. Everyone in the office has dual monitors now too, which further encourages having more stuff open.

5 year old i5 2400's can deal with this fine. If anything with the move to more VDI/SBC based platforms i see corporations taking computing resources away from end users. The worker who used to have a quad core desktop with 8GB RAM now gets a VM with 2 vCPU's and maybe 5 GB RAM. Only people who can justify it get to keep their physical machines.
Or maybe i just work for horrible corporations. That's probably it.

Setset
Apr 14, 2012
Grimey Drawer

NihilismNow posted:

5 year old i5 2400's can deal with this fine. If anything with the move to more VDI/SBC based platforms i see corporations taking computing resources away from end users. The worker who used to have a quad core desktop with 8GB RAM now gets a VM with 2 vCPU's and maybe 5 GB RAM. Only people who can justify it get to keep their physical machines.
Or maybe i just work for horrible corporations. That's probably it.

is there a "bring an old quadcore from home" option?

lDDQD
Apr 16, 2006
I suspect it's because it's a lot cheaper to support them this way. Something goes wrong? Just restart the VM with a clean image. So no, they probably wouldn't be super-thrilled if people started bringing in their core 2 quads.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Even with an i7 and 8GB of RAM, my work computer still chugs running IE (we recently upgraded to 11!!!), JMP, Spotfire, Excel, and a couple proprietary database query/visualization tools (usually not all at once, but sometimes). That's after getting an upgrade from an i3/6GB... My computer must not have SMT enabled, because it avoids putting anything on the secondary threads/cores.

SpelledBackwards
Jan 7, 2001

I found this image on the Internet, perhaps you've heard of it? It's been around for a while I hear.

Twerk from Home posted:

Still no direct HDMI 2.0 out, pushing up BOM costs for OEMs. Hardware HEVC / VP9 encode decode and an extra 200MHz for all!

On a whim this morning, I read some PC World article about Kaby Lake linked from my Google News page. I'd never heard of VP9. It was a fortuitous find, because we'd been trying to find a way to satisfy some vocal customers with a video codec that has comparable performance to H/x.264 and 265, but isn't bogged down with a crazy commercial licensing model that we can't map well to our software distribution model.

I was surprised to see that neither of my tech leads were aware of it and its use on YouTube, as well as its essentially royalty-free terms for redistribution. This could be pretty big for those customers and ought to make me look good to my boss too. :hellyeah:

japtor
Oct 28, 2005

SpelledBackwards posted:

On a whim this morning, I read some PC World article about Kaby Lake linked from my Google News page. I'd never heard of VP9. It was a fortuitous find, because we'd been trying to find a way to satisfy some vocal customers with a video codec that has comparable performance to H/x.264 and 265, but isn't bogged down with a crazy commercial licensing model that we can't map well to our software distribution model.

I was surprised to see that neither of my tech leads were aware of it and its use on YouTube, as well as its essentially royalty-free terms for redistribution. This could be pretty big for those customers and ought to make me look good to my boss too. :hellyeah:
Does iOS hardware support matter for your use? Cause H264 is the only option there, barring your own apps with a software decoder or viewing in some other third party viewer.

(Might apply to an extent on Android too but I’m not sure of the state of hardware decoders there or OS level support)

SpelledBackwards
Jan 7, 2001

I found this image on the Internet, perhaps you've heard of it? It's been around for a while I hear.

japtor posted:

Does iOS hardware support matter for your use? Cause H264 is the only option there, barring your own apps with a software decoder or viewing in some other third party viewer.

(Might apply to an extent on Android too but I’m not sure of the state of hardware decoders there or OS level support)

No, just machine vision, mostly for industrial use. It's almost all Intel-based desktop Windows or embedded RTOS stuff for the systems that would even be powerful enough to consider VP9 or similar codecs. Some ARM-based embedded systems we work with might also get support added if we can justify it after benchmarking (I don't know enough about the tech specs of the various platforms to say, but I imagine they'd be a no-go in terms of CPU utilization). We'll see. Thanks for bringing the mobile limitations to my attention, though.

Otakufag
Aug 23, 2004
1- For those of you who have upgraded from a 2500k to a new skylake: can you feel noticeable differences when playing recent games or doing other windows stuff?
2- Should I buy a i5 6600k or a i7 6700? Both end up costing the same as the i5 requires a more expensive mobo+cooling. This is mainly for gaming btw.
3- Maybe I should wait for something better around the corner? Is kaby lake worth waiting for or something else like zen? gently caress

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

Otakufag posted:

1- For those of you who have upgraded from a 2500k to a new skylake: can you feel noticeable differences when playing recent games or doing other windows stuff?
2- Should I buy a i5 6600k or a i7 6700? Both end up costing the same as the i5 requires a more expensive mobo+cooling. This is mainly for gaming btw.
3- Maybe I should wait for something better around the corner? Is kaby lake worth waiting for or something else like zen? gently caress

2) Go with the 6600k, most of the time the extra single threaded performance from an OCed CPU will make much more difference than extra cores, especially for gaming.
3) Ehhh, I expect Kaby to be a little faster but not much, just like it has been for years. In the best case Zen will be even with Intel's CPUs, but I expect it to still be behind in most things, especially single-threaded performance, which is more important for your use case.


Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

SpelledBackwards posted:

On a whim this morning, I read some PC World article about Kaby Lake linked from my Google News page. I'd never heard of VP9. It was a fortuitous find, because we'd been trying to find a way to satisfy some vocal customers with a video codec that has comparable performance to H/x.264 and 265, but isn't bogged down with a crazy commercial licensing model that we can't map well to our software distribution model.

I was surprised to see that neither of my tech leads were aware of it and its use on YouTube, as well as its essentially royalty-free terms for redistribution. This could be pretty big for those customers and ought to make me look good to my boss too. :hellyeah:

VP9 is excellent and definitely outperforms H.264; it's also battle-tested, having delivered billions of hours of YouTube content already. Its weakest link is that Google doesn't give a poo poo about fast encoding, so the encoder is godawful slow and uses multiple threads poorly. You'll have a better time with VP9 if you have a lot of different videos you can encode in parallel (like YouTube does) instead of trying to encode one or a few videos quickly.
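A minimal sketch of that "lots of videos in parallel" approach, assuming an ffmpeg build with libvpx-vp9 on the PATH and made-up filenames:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def encode_vp9(src):
        # one libvpx-vp9 encode per file; the parallelism comes from running
        # several encodes at once, not from making any single encode faster
        out = src.rsplit(".", 1)[0] + ".webm"
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-c:v", "libvpx-vp9",
             "-crf", "31", "-b:v", "0", out],
            check=True,
        )

    sources = ["clip1.mp4", "clip2.mp4", "clip3.mp4", "clip4.mp4"]  # hypothetical
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(encode_vp9, sources))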

That said, VP9 owns and we should really be hoping for more support in more places. Onboard fully hardware VP9 encode/decode in Kaby Lake is amazing. VP9 is also a stopgap, because AV1 is coming very soon and has most of VP10 in it, among other things.

http://www.streamingmedia.com/Articles/Editorial/Featured-Articles/The-State-of-Video-Codecs-2016-110117.aspx

Twerk from Home fucked around with this message at 15:19 on Aug 31, 2016
