|
intel keeps cramming more cores into their xeons and that's all I care about
|
# ¿ Feb 9, 2017 16:55 |
|
Raere posted:
it would be cool to have 8 cores for a home server at reasonable price

get one of the expensive compute sticks, slap a usb ethernet dongle on it, and mount vhds to your nas over smb
|
# ¿ Feb 9, 2017 19:24 |
|
hifi posted:
it runs cooler than the q6600. people seem to think that the tdp is what it runs at all the time

well yeah obviously you set the power profile to MAXXXIMUM PERFORMANCE and disable all the thermal management settings
|
# ¿ Feb 9, 2017 21:29 |
|
it will be hardware encoders on the gpu, same as h.264.
|
# ¿ Feb 11, 2017 22:35 |
|
pagancow posted:
lol its only 8 bits

OH NO NOT YOUR ANIMES
|
# ¿ Feb 13, 2017 16:43 |
|
both frames are now AMD and Intel at the same time, yet neither. Quantum computing.
|
# ¿ Feb 13, 2017 16:45 |
|
jammyozzy posted:
On the flip-side, only a handful of tools in our work CAD software are multithreaded and they're all the esoteric new ones that have been added in the last few years. Your basic everyday tools are still single-thread because they're the oldest and I imagine the codebase behind it all makes Excel look like a paragon of excellence.

Lol you can turn on threading through some obscure option flag but it will gently caress up layer composition. I hate autodesk so much
|
# ¿ Mar 1, 2017 23:25 |
|
Breakfast All Day posted:
why are they overclocking the pcie bus what game has that as a bottleneck?
|
# ¿ Mar 1, 2017 23:26 |
|
Fabricated posted:
we use a lot of autodesk poo poo at the university I work at but i have yet to hear of any workplace that actually uses it

uhh... construction I guess? because that's what we use. everything gets drafted and delivered in cad, though we're now also requiring BIM models in revit (another autodesk shitpile, slightly less bad)
|
# ¿ Mar 8, 2017 00:08 |
|
hifi posted:
bclk * multiplier = cpu frequency

so increase the multiplier like a sane person
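the quoted formula really is just a product; a minimal sketch with illustrative numbers (the 100 MHz bclk and 45x multiplier here are made-up example values, not anyone's actual settings):

```python
# cpu frequency = base clock (bclk) * multiplier
bclk_mhz = 100      # typical stock bclk on modern intel platforms
multiplier = 45     # illustrative multiplier
freq_mhz = bclk_mhz * multiplier
print(freq_mhz)  # 4500 MHz, i.e. 4.5 GHz
```

which is the point of the reply: on a lot of platforms bclk also clocks other buses like pcie/dmi, so raising the multiplier overclocks just the cores while pushing bclk drags everything else along with it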
|
# ¿ Mar 8, 2017 00:08 |
|
Perplx posted:
the core to core latency isn't that great

High end xeons have the same problem, where the 12+ core packages are basically two 6 or 8 core packages glued together with a high speed crossbar. Software needs to be numa aware, and the hardware needs to present each set of cores as its own node, so the OS knows not to jump the crossbar when possible, or to put latency-insensitive things across it. It's probably ok for desktop workloads at the moment, though a 4 and 4 core design is pathetic by today's standards and is going to cause a lot of headaches for programmers to optimize for.

Notorious b.s.d. posted:
as far as i know all intel x86 chips are 1 die in 1 package. even the monster 22 core E5s are on a single gigantic chip.

Nope, they call it cluster-on-die. Intel broke out the glue gun too.
|
# ¿ Mar 16, 2017 14:06 |
|
It's a bit more complicated than that. The caches might be unified, but there are four memory controllers and each core can only directly address two at a time. The crossbar provides the interconnect between the two halves of the processor and the two sets of memory controllers. Hitting the crossbar incurs latency and a potential bandwidth bottleneck, so CoD defines the numa domains so the memory manager can attempt to avoid that when possible, for everything except extremely large VMs or large/parallel workloads. I don't believe it splits L3.
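for a concrete sketch of what "numa aware" looks like from software's side: on linux the kernel exposes each node under /sys/devices/system/node/, and the cpulist files there use a range format like "0-5,12-17". a minimal parser for that format (the function name and the sample string below are illustrative, not from any real tool):

```python
# parse linux's sysfs cpulist format ("0-5,12-17") into a set of cpu ids,
# the first step for numa-aware code that wants to pin work to one node
def parse_cpulist(s):
    cpus = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# e.g. one half of a CoD package might show up as a node owning
# cpus 0-5 plus their hyperthread siblings 12-17
print(sorted(parse_cpulist("0-5,12-17")))
```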
|
# ¿ Mar 19, 2017 20:47 |