Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

~Coxy posted:

16 + 4 lanes in total, right? Main graphics interface + DMI bus.
It would be enough assuming your only intended PCI-E card is a GPU, but I would like enough for a RAID card and a few spare on top as well.

Well, first off, if you really need non-chipset RAID on the desktop you're almost certainly building a workstation, and Intel would very much prefer that you buy one of their fancier chipsets for that. They're not really trying to cater to you, here. 99% of the desktop market will get by just fine with 16 PCIe lanes off of the CPU, and a handful from the southbridge.

Second, there are plenty of P55 chipset boards which support x8/x8 operation. That's enough to run a video card and RAID card at the same time, and you still have the southbridge connections (up to 8 lanes, although of course you're limited by the 10Gbps link between CPU and southbridge) if you want more peripherals. If you really must have more, there are boards available with PCIe switch chips, as well.


Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

~Coxy posted:

What the hell?

$300 CPU + $50 cooler is a waste of money compared to a $600 CPU?

For many users, a $75 CPU at stock speed is more than enough - for most everyday desktop tasks, the bottleneck is the hard drive and network, not the CPU, even when you're talking about a comparatively wimpy processor like an Athlon II X3 or a Core i3. For most of the remainder, a $200-300 CPU at stock speed is plenty; that's more than enough for just about any game on the planet, and "home power user" stuff like home video editing. Many of the very small remainder, for whom a Core i7-860 or Phenom II X6 1090T isn't enough, are generally using their systems in professional environments where even a tiny risk of overclocking-related instability is unacceptable.

If you're just looking at the size of your e-peen, a.k.a. the speed of your processor in gigahertz, yeah, it's a great deal. When you look at overall system performance, though, the downsides (increased noise, heat, power consumption, and cooler cost) often outweigh the "benefit" of a CPU that's just going to be bottlenecking harder rather than running faster.

I will admit, overclocking can make sense in some situations. For instance, right now, I'm running a mildly overclocked Conroe. It's allowed me to put off upgrading for a little while, and I'm pretty happy with that. However, the gradual shift of the market to quad-and-more core support is eventually going to leave me behind, and at that point even a balls-to-the-wall 100% overclock won't catch me up to a dirt cheap Athlon II X4. Overclocking can be useful as a stopgap, but the value proposition of overclocking brand new CPUs in the $200-300 range is not very good.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Alereon posted:

Modern games do scale pretty well with higher CPU clockspeeds though, and if you get a good cooler it can be near-silent at anything from stock to a 1GHz overclock.

Again, only if you're interested in making numbers go up rather than improving your subjective experience.

Game performance is almost always bottlenecked on the GPU. Take that bottleneck out of the picture by dropping to low resolution and visual settings, and the CPU bottleneck usually only shows up above 60fps. That means that, for the vast majority of users, the limiting factor is their video card or monitor. Running at 125fps on a 60Hz monitor isn't really useful, unless you're playing Quake 3.

Jabor posted:

Following this logic to the extreme would suggest that everyone should underclock their processors so as to get reduced noise, heat and power consumption. The downsides of running at stock speeds as opposed to underclocking outweigh the advantages the extra clocks give you :downs:

Or you could just buy the CPU you need for decent performance in whatever it is you do, and let it underclock itself at idle like any modern x86 CPU. Sorry about your [H] cred, but sometimes it's not necessary to "tweak" or "tune" your system.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

rscott posted:

Yeah I was just sperging out because I miss the old days of using graphite pencils to unlock extra multipliers or setting jumpers to 2x to get 6x multipliers on my old super socket 7 boards. :)

That's the "old days"? Bah, you kids these days have no appreciation for the times when we had to desolder the oscillator module on the motherboard and replace it with a higher-frequency one. Of course, there were no provisions for mounting a cooler on the 386's socket, so you had to get creative with thermal adhesives (not easy to find in those days) and heatsinks designed for other devices. But, on the other hand, a 386-33 at 40 MHz could be as fast as a low-end 486!

spanko posted:

This isn't true anymore for a lot of popular games.

Mind providing some examples?

Of course, bottlenecks are going to depend on the exact hardware configuration - but a system built right now with a roughly even balance between video card, CPU, and monitor is almost certain to bottleneck on the video card in any game recent enough that bottlenecks matter. Yeah, if you run with an Athlon II X2 and a pair of GTX480s on a 1280x1024 display it won't be the GPU holding you back, but as long as everything's in the same rough category ("budget," "midrange," "high-end," or "I burn piles of money for laughs") the CPU is rarely the limiting factor.

Admittedly, CPU bottlenecks can be a more serious concern as your system ages: it's generally possible to dodge a video card bottleneck that leads to unacceptable performance just by dropping settings, but a CPU bottleneck is harder to work around. However, that's one place where overclocking is still a viable solution, at least for now. We'll see how it plays out as the market moves away from single-core performance and towards parallelism that can't be made up so easily by just cranking up the clocks.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

PopeOnARope posted:

Which starts to make you question the point of a netbook at that price point. Sure it's small, but it's more expensive than a bottom dollar 15" notebook.

Portability factors - especially battery life - are often bigger concerns than performance.

WhyteRyce posted:

The gaming numbers look impressive (comparatively) but those Anandtech numbers still look like poo poo overall. You can probably play your older games fine, but then I still wonder the point of playing games on a netbook.

The point of putting better graphics in low-end systems isn't so you can play games. People have already touched on video decoding, but an accelerated desktop is likely to play a much more significant role in the near future, as well. IE, Firefox, and Chrome are all moving towards hardware acceleration, and letting the GPU take some of the load off the CPU will allow the system as a whole to get away with a less powerful, less power-hungry CPU.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Combat Pretzel posted:

Anything special in the Sandy Bridge LGA2011 over the LGA1156 one? Apart from more PCIe lanes and quad channel memory?

Multiprocessor support, if you buy Xeons. In the consumer sector, it's likely to be a good replacement for LGA1366: theoretically more capable, but practically very little performance improvement.

Combat Pretzel posted:

Also, with apparent support for things like OpenCL, does that mean that games can actually use the graphics unit for physics?

Sandy Bridge's IGP doesn't support OpenCL or DX11 compute shaders, so no.

If you're talking about OpenCL/DirectCompute-based game physics in general, yes, although I wouldn't expect much to show up for a while. Requiring hardware physics acceleration would lock out the vast majority of the market, which isn't good for sales. We might see some physics-based graphical effects in the near term (similar to what Mirror's Edge and Batman: AA did with Nvidia's Physx acceleration), but major titles that use physics acceleration to actually change gameplay are still a long ways off.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

DoobieKeebler posted:

http://hothardware.com/Reviews/AMD-Zacate-E350-Processor-Performance-Preview/?page=8

Data from a third site comparing power draw from the wall of the e-350 vs atom/dual atom. It looks a lot better than the previous two sites under this context.

"In all cases the display was not factored into the power draw [...] In short, AMD's Brazos platform and their Zacate processor consume significantly less power than a dual core Atom/Ion2 solution at idle and under load. In addition, at idle, Zacate even consumes a lot less power than a standard single core Atom design."

http://www.pcper.com/article.php?aid=1039&type=expert&pid=8
4th site seems to show the same power consumption numbers, including a celeron su2300+ion combo.

Exciting from a cheap, mobility perspective. I'm curious enough to want to visit some stores and test out atoms/ulv's for a performance comparison/expectation experience.

The issue with these numbers is that they're using desktop parts (except for the Aspire 1551, which isn't really a battery life champion). The "standard single core Atom design" is closest to a typical netbook platform - but one from the GMA950 era, when battery life wasn't anything to write home about. Zacate certainly has potential, but I'd be interested to see some apples-to-apples comparisons before we declare it the second coming of mobile Jesus.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

BangersInMyKnickers posted:

Idle quad-core i5 systems draw less power than the older Core2 systems they're replacing in our office. They might draw more when under heavy load, but it is really going to depend on how you are using them and with what software.

That's true, but if you're overclocking, the concern is the power draw at full load. In that sense, an overclocked chip draws significantly more power, even if the draw at idle is lower.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Toast Museum posted:

Why? The impression I got from the SSD thread is that you basically don't have to worry about modern SSDs' lifespans unless you actively try to gently caress them up.

Capacity's a concern, though, and your hibernation file is as big as your RAM. On a fairly ordinary system with 4 gigs of RAM and an 80-90 gig SSD, that's about 5% of the drive's total capacity just for the hibernation file.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Nonpython posted:

But it gets deleted afterward.

What? No, it doesn't, at least not on Windows. As long as hibernation is enabled, hiberfil.sys exists on the root of the boot drive, equal in size to the amount of physical RAM in the system. Even if you could delete it, you'd still have a requirement to keep at least that much free space set aside, which boils down to the same thing: less space available for the rest of your data.
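
If the hibernation file really bothers you and you never use hibernate (or hybrid sleep, which depends on it), it can be turned off entirely from an elevated command prompt, which also deletes hiberfil.sys - a minimal example, nothing exotic:

powercfg /hibernate off

and powercfg /hibernate on brings it back.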

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Nonpython posted:

:doh:

Sorry, I am used to Linux, an OS designed by people who you can't read a newspaper through their ears.

Linux works the same way. A lot of Linux distros keep that space on a swap partition rather than setting it aside as a user-visible file, but it still has to be there. You have to have free disk space somewhere if you're going to suspend to disk.
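
If you want to sanity-check whether a given Linux box even has enough swap to suspend to disk, a quick look (assuming a typical distro with the standard util-linux and procps tools) is something like:

swapon -s
free -m

If the swap total is smaller than what's actually resident in RAM, suspend-to-disk isn't going to work.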

But great job trying, kiddo. Maybe one day, if you save up your pennies, you'll be able to afford a real desktop OS.

madprocess posted:

My Windows 7 laptop has 8 gb of ram but the hibernation file is 5 gb and the pagefile is 3 gb. I dunno what's up with that but it hibernates fine and the pagefile is fine. This is the automatic settings Windows did, not something I did myself.

It's possible to use powercfg to reduce the hibernation file size. This is typically safe, because a lot of data in RAM is easily compressible, and it's possible that your laptop came configured that way (especially if it has an SSD out of the box). I believe Windows can throw out cache data if necessary when hibernating, too. Of course, if you turn the hibernation file size way down and don't have enough space to hold the contents of RAM, it'll bluescreen.
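
The knob, for anyone curious, is just this from an elevated command prompt (the 50 is an arbitrary example, meaning 50% of installed RAM):

powercfg /hibernate /size 50

Pick a number you're confident your in-use memory will compress into, or you end up in the bluescreen scenario above.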

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

angry_keebler posted:

I guess if somebody was really super concerned about the size/seek time of a hibernation file they could always drop 25 bucks on an 80G hard drive to dedicate to that purpose and then go hog wild.

Wouldn't work on Windows. The bootloader only has enough smarts to access the root of the boot drive, so that's where you have to keep the hibernation file. Besides, even if you could, an 80 gig hard drive would be slow as poo poo on the restore; old hard drives have pathetic sustained bandwidth. Putting 4-8 gigs of data into RAM at 75 MB/sec would take long enough that you'd probably be better off just cold-booting from the SSD.
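
To put rough numbers on that: reading an 8 GB hibernation image back at 75 MB/sec is about 8192 MB / 75 MB/sec ≈ 110 seconds of pure sequential reads - call it the better part of two minutes before you count seeks or anything else the loader has to do.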

angry_keebler posted:

Imagine a series of four file system partitions sitting on the edge of a cliff.

requesting name change to "johnny fiveoses," tia

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

movax posted:

Also you can toss Biostar/ECS/PCChips on the lower end as well.

Biostar's decent enough - in my experience, their boards occasionally have issues, but for the most part they're OK if you're on a budget. ECS is just crap. PC Chips, if they're even still around (I thought they were folded completely into ECS after the merger?), are goddamn digital Hitler.

As for Asrock, I'd agree with the "weird" assessment. Their basic boards are OK, but they've made their name with wacky stuff like that double-dual-fuel LGA 775 board that could take either AGP or PCIe, and either DDR or DDR2.

japtor posted:

Mac users trying to run Windows games?

You're better off dual-booting, at least for the moment. VT-d makes it possible for guest OSes to directly access stuff like the GPU, but the GPU and host OS's drivers have to support the functionality as well.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

BLOWTAKKKS posted:

Wow, this seems like a really bad time to build a gaming PC. I was going to get an i7 950, but the i7 2600k is the same price. I really, really wish I could wait for LGA 2011.

Is it even worth getting the i7 2600k over the i7 950? I'll probably get angry when LGA 2011 comes out and upgrade to it anyway.

If you're just gaming, an i5 would be better than both.

Also, you should probably stop getting angry over computer hardware.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

BLOWTAKKKS posted:

Sorry, I just haven't been able to play PC games for about a year now, and when I finally get ready to jump back in, I have to wait almost another year in order to get a powerful computer. But I guess I can just forget about what's coming later for the meantime and get a "good-enough" PC for now.

By an i5 being better, do you mean the price for performance is better? I'm kind of tempted to get the i5 2500k to save money, but the hyperthreading on the 2600k sounds nice. If it's not worth it, I guess I can save 100 bucks.

You're grossly overestimating the requirements of current and near-future games. Most don't tax the processor very hard at all; you can build a good gaming system with a $100 Athlon II X4. Those games that do hit the CPU hard often only load up one thread at a time, and the current focus in the high-end market is on heavily parallel processing. You'd be much better off putting the money towards a faster GPU rather than a faster CPU. Game requirements these days are much more heavily weighted towards the graphics side of things, although it's easy to get way out into diminishing-returns territory there, too. You probably don't need a top-of-the-line video card unless you run a fairly exotic setup (2560x1600, 3x1920x1080, 3D, or something along those lines).

As for hyperthreading, it's very useful in easily parallelizable tasks, like compression or video encoding. However, it's very difficult to write a game engine that spreads its CPU load out evenly over a bunch of threads. Right now, there are a handful of games that will see (largely theoretical) performance gains from a quad-core CPU; it's still possible to get by with a fast dual, but building a gaming system with a quad is a good idea these days. However, nobody's yet moved to take advantage of eight threads, and based on the way the market's been moving they won't for several years. Look at the early quad-core adopters - they paid $800-1000 for their chips, often in gaming systems, and now that quads are finally useful for games you can grab one for $100 (and the early adopters are looking to upgrade, if they haven't already). Hyperthreading will probably go the same way: by the time there's a game which will actually take advantage of eight threads, you'll be buying a new CPU whether you buy the i5 or i7 today.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Goldmund posted:

I'm kind of surprised at how dominant Nvidia is, I had assumed that ATI surpassed them this past couple of generations.

Most people don't run out and replace their video card for every new "generation." If you go to the detailed statistics, you'll see that the ATI 4800 series is the single most popular card, and ATI has three of the top five, but a whole lot of people are hanging on to their 8- and 9-series Nvidia cards. Which isn't surprising, really - they were dynamite at the time, and still hold up fairly well.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

You Am I posted:

But doesn't it only boost one of the cores? I may have misread that on another site.

Turbo Boost gives a larger boost if fewer cores are active. With SB turbo-capable chips (even the locked ones), you can increase performance at all levels. You'll still hit an artificial ceiling, though - the turbo-based overclocking is limited to a boost of 4 bins. Here's an Intel graphic I stole from Anandtech that can explain things better than me:

[Intel slide via Anandtech: Sandy Bridge Turbo Boost frequency bins]

Of course, if you don't have a Turbo-capable CPU, you don't get to overclock at all (besides a near-useless BCLK bump of a few percent).
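
To put numbers on that ceiling: Sandy Bridge bins are 100 MHz, so, as an illustration, a locked chip that normally turbos a single core to 3.7 GHz tops out at 3.7 + (4 × 0.1) = 4.1 GHz with the full 4-bin bump - nice to have, but nowhere near what an unlocked multiplier gets you.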

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Lum posted:

This is the first time I've heard of a mini-ITX gaming PC. Normally they're used for car stereos or silly "case mods" where they crammed the entire system into a football or something stupid like that.

I'm sure if you stuck a mini ITX board on a table and put a modern graphics card in it, the thing would tip over due to the graphics card being heavier and the one single expansion slot being right on the edge.

Sure you're not thinking of Micro ATX?

It's possible to jam quite a lot of power into a mini-ITX shoebox. For instance, take this guy's insane effort - overclocked i5 750, GTX 480 (!!!), and two hard drives in less than 11 liters. For people who have less metalworking skill and more mental stability, it's also possible to buy Lian Li cases designed for mini-ITX gaming systems, although they tend to be fairly large by mini-ITX standards.

e: gently caress, didn't see the new page.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Veinless posted:

"The systems with the affected support chips have only been shipping since January 9th and the company believes that relatively few consumers are impacted by this issue." per http://newsroom.intel.com/community/intel_newsroom/blog/2011/01/31/intel-identifies-chipset-design-error-implementing-solution

According to Anandtech, every 6-series board shipped uses the affected B stepping. With the January 9th statement, Intel's trying to keep people from freaking out about the Dell they bought for Christmas.

Verizian posted:

So given that 6Gbps ports are backwards compatible would it still be worth picking up an SB rig now and how likely is it to find some discount mobos before march?
Assuming a gaming rig with 1TB spinpoint F3, SSD and an optical drive, not some insane home file server packed with 2TB drives.

I doubt you'll be able to find many LGA1155 boards until the revised chipsets are launched. It'll be a whole lot cleaner for motherboard manufacturers to recall everything at once and stick Intel with one big bill now. If they let bad boards float around in retail channels, quite a few will come back through their RMA departments, and it costs money to handle RMAs. They'll want to rip the band-aid off now.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Paino posted:

I don't know if I'm the only one in this silly situation, but my retailer now WON'T SELL ME the P67 that goes with the 2500k that arrived today (it was oos before). I haven't paid it yet, the idea was to pay the whole configuration and bring it home today, but apparently he's telling me he's received instructions NOT TO SELL any of those mobos and that (hush hush) they won't be supported in the future as the new ones to be manufactured in March-April

Of course they WON'T SELL YOU a P67 board. They've been recalled. What did you expect?

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Paino posted:

I suppose, but if it shows this much he's going to lose a few customers.


It's not a bad idea but it would still be a waste of money, and consider that I'm putting that CPU in a pc with a gtx570, 8gb ram, and an SSD Corsair Force. Heh. What I could do is buy the mobo from ebay regardless. I suppose if I bought it and it's flawed I'm entitled to a replacement/refund in April anyway.

Or am I missing something here?

If you don't get what you want and pitch a fit about it, the retailer loses one customer. If Asus/Gigabyte/MSI/whoever doesn't get all the motherboards back, and finds out that the retailer's been selling products after they've been recalled, the retailer doesn't get to sell their boards any more and goes out of business. Guess who the retailer's going to listen to?

Manufacturers often want purchase information to run an RMA, and "I bought it from some guy on eBay" isn't going to cut it. Based on how they're handling things in the US, the recall might go through retailers. And, of course, there's the typical "is someone trying to unload bad hardware?" factor in an eBay purchase, as well. I wouldn't do it.

Given your terrible and completely pitiable situation, I'd recommend that you stop slobbering over specs (oh wow! a GTX 570! you sure are special, heh), quit spewing your tantrum all over the thread, and buy an i5-750 if you just have to have your system right now. Yes, the Sandy Bridge chips are faster, but the difference is marginal and likely will be all but unnoticeable in a gaming system.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

LoKout posted:

Those aren't eSATA ports. eSATA doesn't have the L shaped connector. See http://en.wikipedia.org/wiki/Serial_ATA#eSATA.

Physically, they're standard SATA ports. However, eSATA's more than just the connector: it also uses a higher voltage to drive signals, and can deal with weaker signals, in order to make it more reliable over long distances. An eSATA port like that one should be OK with a passive adapter and a 2m eSATA cable. The same passive adapter and cable, hooked up to an ordinary SATA port inside the system, would probably have trouble dealing with the cable length. Intel's board designs are fairly rare in that they actually handle the spec correctly; many board manufacturers just toss an eSATA bracket in the box and expect you to hook it to a standard SATA controller.

frunksock, you're fine. It's the same signal, it'll just work over a longer cable if it needs to.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

freeforumuser posted:

And I still don't get why Intel is selling i5-760 at the same price as a 2500K. They really believe there will be idiots falling for buying last-gen stuff at current gen prices?

They always do this. There will be people with lower-end LGA1156 systems looking to upgrade for quite a while into the future. Even high-end LGA775 chips still command a pretty high price - a Q9550 is $290. Why let it go for cheap, if people will still buy it at a high price?

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

hobbesmaster posted:

That little SSD is the most useless thing...

It's useful if you want to use it as a cache to back a mechanical drive. But, if you're buying a $450 super deluxe motherboard, you're probably going to have a nice big SSD to boot from, too.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

redeyes posted:

Huh, I somehow missed that phone entirely. Still, yeah it has to run desktop windows 10. :/

The Lumia 950 will give you a real Windows 10 desktop in Continuum.

If by "desktop Windows 10" you want something that can run Win32 apps or whatever, though, that was never going to happen, even on some hypothetical Atom-powered Surface Phone. Microsoft's "Windows everywhere" strategy means that you get a similar kernel and set of base APIs everywhere, not that you'll be able to boot up Word 97 and Quake 3 on your HoloLens and phone. The kernel and CPU architecture are only two small parts of a huge backwards-compatibility stack that they were never going to develop.

See also: all the people who got so mad when Windows on the Raspberry Pi turned out to be exactly the IoT-focused platform MSFT had advertised, rather than a complete desktop system.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

necrobobsledder posted:

SMT has a lot more to it than just instruction scheduling, but the fundamental reason why Hyperthreading (tm) / SMT only gets you two "logical" cores is that SMT is a form of register file and ALU duty cycle utilization similar to how DDR RAM works. That is, in an SMT processor you are able to load registers and process them on both the high and low sides of a clock signal. Because there's a pretty big flurry of bits flipping (causes more noise than necessary in certain circuits) when you do that on top of cache coherency and branch prediction issues this makes sending out instructions correctly and efficiently pretty difficult.

Hyperthreading 1.0 happened well over a decade ago when Java applications were so king (heck, they still are honestly within the Fortune 500) and its best cases were really contrived but ok enough. Only with the second incarnation of Hyperthreading were CPUs better able to understand scheduling of micro-ops and caching enough to make more improvements in scheduling hardware threads better.

POWER8 supports eight SMT threads per core. Both the Xeon Phi products have done four-way SMT on x86. There is no two-logical-core limit.

Hyperthreading is not related to DDR, or triggering anything on both the rising and falling edge of a clock signal. It's just keeping track of multiple states for a given set of execution units, and switching rapidly between them. This is most helpful when you've got workloads that are hitting the slower caches or main memory frequently - there's enough idle time that you want to find some work to fill it, but not enough that it'd be worth it to hit the OS-level scheduler. You can track as much state as you like, but unless you're running embarrassingly parallel work that blocks a lot, diminishing returns kick in quickly. So, most implementations for consumer, workstation, and ordinary server hardware don't bother with more than two threads per core.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Mazz posted:

My biggest confusion is what risk they really pose to an individual user, and nothing really makes that clear. I get the JavaScript thing but doesn’t that require you to be loading sensitive information into the CPU caches with that malicious code actually running? Doesn’t everything else require access to the local machine? I understand the fear/terror for cloud based systems and VMs, but I’m not getting why I should freak out and patch my 3570k when all I do is play video games and read bad forums like this 99% of the time. If I’m doing bank/finance/important poo poo, I generally have nothing open on my machine but that, I’m absolutely not a person with 85 browser tabs open. Am I missing something here?

You've got most of it, but the last little bit could hurt you.

The really big risk is for companies that make money by renting out VMs and running arbitrary code that other people give to them: Amazon, Microsoft, Google, Heroku, Jimbo's Shared Hosting, whatever. This vulnerability, by itself, only happens when you take in random code and run it.

But, you should still patch, even if you're not a cloud provider. Malware typically doesn't spread by directly hitting a "remote code execution with full kernel privileges" vulnerability over the network - those are exceptionally rare these days. Instead, it goes step by step. Bad guy submits malicious code to an ad network, which runs in an unprivileged sandbox when your browser loads it. It uses an exploit against your browser or plugin to break out of that sandbox and runs as a local ordinary user. Then, it escalates from ordinary user to root/admin with an exploit (or, more likely, a series of exploits that each break some kind of mitigation). At that point it's not hard to get full kernel access in a desktop system. Now, that bad ad means you've got a persistent rootkit hanging around.

The good news is that, if you can disrupt that chain, the attack won't succeed. People are going to use these techniques to come up with all kinds of interesting practical exploits. By patching, you can mitigate that risk. You should absolutely patch.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

EdEddnEddy posted:

Yea I can understand using it in a 1+0 form scaled up to have multiple levels of redundancy which makes sense. But nobody would do an actual Raid 0 all by its lonesome in an enterprise environment outside of maybe testing throughput or something right?

It's the opposite. Enterprise is the only place where it makes sense. At that level, you should be able to pick any random system, destroy it completely, and not suffer any permanent setback. At that point, if you get significantly higher throughput almost all of the time at the cost of having to very occasionally re-do a bit of work, it makes perfect sense.

Just for instance, take a cache or even a search index. They exist as specialized copies of other data. If they're gone, it's work to rebuild them, but no actual data is missing. There might be a performance hit if one goes offline, but more capacity while they're online usually more than outweighs the inconvenience of spinning up a new one.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

NewFatMike posted:

Apple also make extremely performant ARM cores as well, and having locked down the hardware side, they have a distinct advantage to move to another CPU architecture.

Also, even if they don't ever make the move, it's a nice bit of leverage they can hold over Intel.

The real blocker would be the higher-end systems, especially the Mac Pro. Apple could drop an ARMbook Air with iPad Pro guts in a laptop shell any time they wanted, and it would probably work just fine. Selling it as the future, though, would be a bit harder if they keep their biggest, baddest systems on the old architecture indefinitely.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Palladium posted:

If we're going in the all +12V direction, why not just have PCI-E 8-pin replace the bulky 24-pin on the mobo side while standardizing all modular cables on the PSU end FFS.

You need a few extra pins if you want "ATX but 12V only" behavior - one for standby power, one for the power-on signal from the motherboard, and one for the "power good" signal from the PSU that tells the host it's safe to start up (or that it needs to reset because of a power fault). It's a lot less than a standard ATX setup, but a bare peripheral power connector won't cut it, because that connector design assumes the motherboard's main connector is already carrying all those control signals.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Cygni posted:

The hardware requirement is the DMA portion, although apparently Intel backtracked and said it was not VT-d REQUIRED... but I'm not entirely sure how else you would do it without it? But I'm a dumbass so!

All it needs is some kind of IOMMU sitting between the Thunderbolt interface and memory that can remap the addresses the hardware sees. That abstraction means the OS can lock out sections of memory where it doesn't want random external hardware poking around, just by not mapping those physical addresses to any device-visible addresses.

VT-d is Intel's brand name for that virtualization layer. AMD has the same basic functionality under the name "AMD-Vi." Apple will come up with their own solution.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Happy_Misanthrope posted:

Did you...actually watch the video?

It's not dunking on Apple at all - the opposite. It's showing that running a single 12-core Xeon is very competitive in similar editing tasks to a 64-core Threadripper PC, largely because Apple's own software is so much more efficient on their platform.

I mean this isn't even a case where it's a bait and switch - the title "I'M SHOCKED!" should give you an indication of where it's going, it would absolutely not be shocking for a 64 core PC to beat a 12 core Mac.

It's tech youtuber clickbait.

It'd say "I'm SHOCKED," have a thumbnail with a guy making a funny face, and stretch 3 minutes' worth of reading into 15 minutes of setup and repeating the same points over and over, even if the conclusion was "this $10k workstation is faster than a 25 cent microcontroller."

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

MeruFM posted:

I could see nvidia trying, but Apple's dominance here is the result of 10 years of spending ungodly amounts of money on design while simultaneously shoveling truckloads of money into TSMC

Could Apple's marketshare double in a few years? Possible, but that's still only like 20%.

They could take 90% of the market for all high end computers over $1k and it would still not be 50% of the market.

Apple's product strategy is larger than just laptops. They're trying to position iPads as their alternative to lower-cost PC laptop hardware. This shows up in their marketing - "your next computer isn't a computer," all the performance comparisons that put iPads up against "the best selling PC laptop," and so forth.

More importantly, it's showing up in their technical decision-making, and not just in putting Apple-designed ARM CPUs into Macbooks. They shipped iOS app compatibility in Mac OS, even though it's janky as hell right now, because they want to push developers towards making single apps that work more or less seamlessly across tablets and laptops.

Apple expects that laptops, as a category, are going to go the way of desktops: they'll stick around as a product category, but they'll transition away from "almost every home has one" to "people don't have one unless they specifically need one." They expect to pick up big chunks of the total "consumer-focused computing hardware" marketshare, without necessarily dominating middle-tier laptops.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

WhyteRyce posted:

Apple is pushing more and more into the services game. They aren't a race-to-the-bottom company, but their services aren't exactly race to the bottom either, last I checked iCloud pricing

Sure, they want to sell services. Like every other "cloud" provider not named Amazon, Microsoft, or Google, they'll keep hosting those services on AWS and Azure, because that's cheaper, easier, and less risky than developing their own server hardware from the ground up and building a worldwide network of datacenters to run it all.

The really interesting thing here is going to be the server-side transition to ARM. Right now, one of the perceived issues in cloud ARM adoption is that developers would like to be able to run the same binaries and containers locally and server-side, whether that's a specific interpreter/runtime or their own compiled binaries, to make deployment and troubleshooting easier. When local dev is x86-64, then there's a barrier in going to ARM hosts. But, local dev on an ARM Macbook flips the script; now, it'll be easier to deploy to AWS Graviton or whatever equivalent Azure comes up with when they finally roll out ARM VMs.
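
As a concrete illustration of that workflow (the post above doesn't mention Docker specifically, and the image tag here is purely a placeholder): buildx can produce a multi-arch image from one Dockerfile, so an ARM laptop and an x86 or Graviton host all pull the right binary from the same tag:

docker buildx build --platform linux/arm64,linux/amd64 -t example/app:latest --push .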

Laslow posted:

And maybe that’s their opening. They can charge people gently caress-awful prices for personal backups/sync because it’s so seamless with the iOS and Mac devices that can shove the service signup page into millions of captive faces.

Then it might make economic sense for them to make a buttload of ARM Xserves(AServes?) for themselves to use.

Apple has hosted iCloud in Azure and AWS for a long time now. Why would they go back to on-prem hosting now? Having a good CPU is just one tiny part of a very complex equation, and they can pay other people to handle all the headaches for them with better economies of scale than they'll ever have.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

redeyes posted:

The apple M1 is fast because it has encoders for video formats. When they use a non-supported codec it falls flat on its rear end.

What are you even talking about? Can you provide some kind of benchmark to back up whatever it is you're saying?

The M1 is a fast processor, period. It's very competitive on pure performance in its market segment (quad core mobile chips) and it absolutely slays on performance per watt. This holds across all kinds of synthetic benchmarks that don't measure video encoding performance, and on various software-only video encoding performance tests.

Intel has a hardware encoder as well. So does AMD (although they tie it to their integrated graphics rather than the CPU itself). All three hardware encoders are very fast and power efficient, but don't offer tuning options, and don't do incredibly well on output quality or compression efficiency. So, serious video editing work might use the hardware encoders for previews, but will render out the final result with a higher-quality, slower software encoder.
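
To make the preview-vs-final split concrete (ffmpeg isn't mentioned above; this is just an illustrative sketch, with hypothetical filenames): the first command uses the Mac's hardware encoder, the second the slower but tunable software encoder:

ffmpeg -i input.mov -c:v h264_videotoolbox -b:v 10M preview.mp4
ffmpeg -i input.mov -c:v libx264 -preset slow -crf 18 final.mp4

The hardware path is fast and easy on the battery; the software path gives you the rate-control and quality knobs for the final render.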

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

It must be the "dark" edition because it has only one garish backlit RGB logo on it

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

WhyteRyce posted:

I think he's highlighting that a lifestyle company is dunking all over them and their core business and phrased it in a way to show how ridiculous it is

I don't think he cares about being cute or trying to get a burn in, it's to highlight the systemic failure of the company that let this happen

The problem is, Apple isn't a "lifestyle company." They're a technology company that's adopted a very successful lifestyle strategy for their product development and marketing. But, even in their lowest, darkest days, they always had very good software and hardware people on board. A lot of their issues in the past specifically came from making deliberately contrarian technology decisions and sticking with them, instead of just taking generic mass-market tech and slapping a pretty coat of industrial design on top.

There are all kinds of caveats here about how this is a semi-public event, so I'm sure part of it is sending a message to employees and investors that Apple is not some unbeatable titan. But, if he's actually drinking the koolaid he's serving, it's not a good look for the future of the company.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Not Wolverine posted:

Seriously, what has Pat done? He is an EE who led the 486 team, but frankly that was a loooong rear end time ago. Has he designed a good chip since then? Maybe my memory is bad, but back in the 486 days AMD was still trying to get by as a clone maker. AMD pushed the limits of the platforms by releasing chips that were slightly faster, like a 586 that worked on a 486 board. AMD released the K6 with lower frequency but better IPC, and it might have been less of a failure if more than 2 people had coded for 3DNow!. AMD broke the GHz barrier with the K7 (still the best CPU of all time, in my opinion). I know Intel has had some good chips too (not you, Pentium 4), but I think Intel was also helped a lot by market share, mind share, anti-competitive practices, and even unfair compiler tricks (at least one time). Maybe Pat is smart, I'm just not seeing a compelling reason to fawn over the guy holding a handful of Itanic CPUs. I used to run a dual Pentium Pro system, it was a turd.

In the late 80s to mid 90s, Intel's main competition wasn't x86 clone makers, it was all the new RISC-ish designs that dominated high-end workstation/server markets and were threatening to move into consumer and small business segments as well. There were a lot of different options for "who's going to be the architecture of the next decade," from the POWER family (including a big market push from one Cupertino "lifestyle company" that has for some reason been heavily involved in technology for a very long time), MIPS, Alpha, and even internal competition from the i860/i960 (fun fact: Windows NT was originally developed for the i860, then shifted to MIPS; it was ported to x86 later). The fact that x86 not only stuck around on the strength of backwards compatibility, but managed to move into high-end workstations, servers, and supercomputers, says a lot about the skill of the people working on x86 designs in that era.

Also, the Pentium Pro might not have been a great consumer CPU at launch, but it was a key part of that evolution. P6 was where Intel started to break down x86 instructions into RISC-like uops, which was a massive change in the design of the processor. If you chart the evolution of current Intel x86 designs, you can more or less draw a straight line from their current parts that ends at the first P6 chips (and completely bypasses netburst, lol).

DrDork posted:

Yeah, that is what I took from it:

"Hey guys, we do (really) one thing, we should probably do it better than some other company who does our "thing" as a second-class-citizen side-line project (and may be doing it in large part because they don't like our products)."

Apple's silicon design teams have clearly been anything but "second-class-citizens" for a long time now. They're one of the most cash-rich companies in the world and they've been willing to throw near-unlimited resources at becoming a serious player in the market.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

spunkshui posted:

Well, I can't vouch for RGB software outside of iCUE because, since I was starting from scratch with no glowing lights, I stuck to one company.

Oh well if it's from reputable vendor Corsair it must not have any security vulnerabilities.

Wait, what's this CVE-2020-8808? Well, they're so reputable, I'm sure that it was some obscure issue where someone could possibly fool the drivers into accessing some tiny chunk of should-be-off-limits memory, but it's probably not anything super serious.

CVE-2020-8808 posted:

The CorsairLLAccess64.sys and CorsairLLAccess32.sys drivers in CORSAIR iCUE before 3.25.60 allow local non-privileged users (including low-integrity level processes) to read and write to arbitrary physical memory locations, and consequently gain NT AUTHORITY\SYSTEM privileges, via a function call such as MmMapIoSpace.

Base Score: 7.8 HIGH

Hmm, it turns out that any process could just ask the iCUE drivers to read or write arbitrary memory and bypass the entire Windows security model. That seems pretty bad, but they did patch it a couple of months after it was reported.

I'm sure that was just a one-time "whoops, we completely forgot to put any security in our software that runs in a highly privileged context" issue, though, surely that wouldn't be something that would be part of a longer running pattern. Oh, wait, both CVE-2018-12441 and CVE-2018-19592 detail other issues with Corsair software that allow any unprivileged user on the system to execute arbitrary commands with system-level permissions.

Eh, gently caress it, who needs security when you have fancy flashing lights.


Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

spunkshui posted:

Because you’re complaining about something that isn’t reality.

RGB can be shut off and even lit up without running software.

Tell me what the actual product is that you want that you can’t shut off the RGB?

Be specific.

I have an Nvidia-branded reference 2070 Super with a light-up logo (which is, amusingly, RGB but software locked out of the box to Nvidia's branded green color, because it's apparently cheaper and easier to spec RGB hardware than... a couple of green LEDs). That light-up logo comes on at boot. It can be dimmed or turned off with third-party software, but it doesn't maintain that state. The LEDs can't be easily removed or unplugged, because they're surface mount.

The light spill out of my case isn't bad, so I live with it, but it is exactly what you claim doesn't exist.

MeruFM posted:

The build your PC community is basically gamers, and not the neckbeards of yore. RGB is a thing for the same reason fancy keyboards are a thing, people like customized stuff. You can either embrace it or just hate the world.

"You can customize your PC to be anything you desire!"
...
"oh, you want it to not light up like a seedy third-rate nightclub, and also not run software that just bypasses all security because it was a few dev days faster to ship? No, you're not allowed to want that! You have to embrace the custom lifestyle!"
