movax
Aug 30, 2008

This is definitely going to kick-rear end for the portable market; the i7 I have in my MBP is already pretty sweet w/ integrated graphics, but now you get a somewhat speedier CPU, a greatly improved GPU, and reduced power consumption! Woo!

Question though, since I've just been sleeping through the Nehalem lifecycle: do the Xeons simply have the GPU portion disabled, or unconnected? I'd think it'd be kind of pointless for a 4-way Xeon machine to have 3 unused GPUs that the end-user still has to pay for.

movax
Aug 30, 2008

Fats posted:

As I understand it, the high end desktop and workstation processors (socket 2011 or whatever the replacement for socket 1366 is) won't have the GPU portion on the die at all.

Ah this makes sense, thanks. I'm pumped for Sandy Bridge; my first machine was a Pentium (family), then my personal boxes have gone P3 1.11GHz -> C2D E6600. I think it's almost upgrade time, even though my E6600 still does everything I need.

That's kind of the thing after the C2D IMHO; if you forget about games for a second, these newer CPUs aren't really opening too many new doors...they just help you get existing stuff done waaay faster. Which is great and all, but it's not like a night and day "holy poo poo I have a C2D, now I can actually finish encoding an H.264 movie in this lifetime!"

movax
Aug 30, 2008

B-Nasty posted:

Got SSD?

Last de-rail: yeah, the storage market is currently where the "holy poo poo, night and day" difference is at, thanks to SSD. Don't have to worry about mechanical failure, you get bitchin' fast speeds, etc.

On-topic: I really appreciate the built-in GPU because 1) battery life on portables should be far better, 2) integrated GFX that aren't awful, 3) when spec'ing out a "low-end" machine for grandma/business, you don't have to compromise and forgo Aero/future HW acceleration due to cost. You get that GPU with the CPU.

Serious gamers can scoff at it, but to me, being able to just drop a CPU in a new build for the parents and know that I'm not closing doors to pretty GUIs is a pretty sweet side benefit (that you can get with either Intel or AMD now I suppose).

The on-board GPU does eat a chunk of your system RAM though, correct? Or can mobo manufacturers slap 128MB/256MB of RAM on-board and wire that straight to the GPU? I think my AMD 780G-based mobo lets you choose between eating system RAM and using some dedicated RAM on the mobo.

movax
Aug 30, 2008

BangersInMyKnickers posted:

Systems shipping with 2gb of ram aren't going to be slowed in any real way by 6% of system ram being allocated to desktop composition. Once you're up to 4gb it doesn't matter in the slightest. You can have all the dedicated GPU ram in the world you want, but this thing isn't going to be powerful enough to drive anything with it so you might as well just use the system memory that is already there and if you want anything more intensive than Aero or Flash HW acceleration then get real dedicated graphics.

I recall that with earlier integrated graphics chipsets (915GL or older), the loss of a chunk of system RAM wasn't the bad part; the real hit was the parasitic loss of memory bandwidth from the GPU sharing that memory.

Or maybe that was just in the silly Sandra synthetic memory benchmarks, which no one really cared about except in overclocking e-peen contests.

movax
Aug 30, 2008

BangersInMyKnickers posted:

Assuming Dell starts offering on-board video with dual-display outputs on the next gen of Optiplexes, this will probably get us to drop add-in video cards for everything except Autodesk product users.

This would be sexiful. I hate having to run two full-size graphics cards for four displays...even having one output available on the mobo would be nice. DisplayPort connectors are tiny...I'm sure the Asus and Gigabyte "premium holy-poo poo" mobos will offer 2x DP (or 1x DP + HDMI) connectivity, in addition to strapping on USB 3.0 hardware.

Now to decide which CPU to get...2600 or 2500. Not terribly interested in OCing now, I just want to strap an awesome CPU to like, 12GB of RAM and a bitchin' SSD.

movax
Aug 30, 2008

Ethereal posted:

Computers are never going to be fast enough or power efficient enough for companies with large data centers. I don't think a slowdown in the consumer space would really affect much.

I'd imagine that they are pretty pumped about any advances that let you pack more cores into a smaller space with lower power consumption. Good for regular-Joe consumers too.

movax
Aug 30, 2008

Alereon posted:

Apple wants Light Peak because it will let them instantly obsolete all of your peripherals, forcing you to buy new (probably Apple-branded) hardware or expensive adapters. Intel is happy to oblige them because they get a sweet royalty check every time someone makes a Light Peak device.

I think even if (maybe when) Light Peak falls flat on its face, the biggest benefit will be the industry experience gained in the R&D and manufacturing of mass-market consumer optical devices (kind of like how Toslink took off super fast and is in drat near every piece of A/V equipment now).

Then when the next-next generation of consumer insanity hits, we shall be ready!

movax
Aug 30, 2008

Lum posted:

At the time the new models were announced but not really available in my country and the ones you could get were overpriced.

Plus I needed my old CPU to upgrade my media centre PC.

Point is this PC is doing just fine, my ATI 4870 is the biggest bottleneck. The i3/5/7 was a completely pointless generation, for me, with two worthless sockets that are obsolete after a year. I'm happy to have sat the whole thing out.

Agreed...I would have been rather annoyed at having my i5/i7 hardware obsoleted in only a year or so, if I had bought 'em. Perfectly happy with my LGA775 C2D + 8GB DDR2. I'm hoping this next socket will have a lifetime comparable to LGA775; anything more than a year or so would be wonderful. I know I would have gone for Asus's high mid-end to top-end offering (so, $250), probably 6GB of RAM at the least, and an i7. The only part I would have been able to carry over is the RAM.

My current bottlenecks are GPU (2x 8800GTS 640...don't really care, because I don't game as much as I used to) and HDD (all mechanical still). Don't really *need* Sandy Bridge, but I'll probably get a new mobo + CPU + RAM as a treat to myself for now having a big-boy job.

I don't know what I'll do with my old hardware though. Too lazy to sell it, and I don't really need another PC for stuff...virtualization ruins the fun of having a farm of PCs doing different random things. But, power savings!

movax
Aug 30, 2008

Spime Wrangler posted:

You'll notice everyone saying "i'm happy with my C2D/C2Q I might sit this one out." Also, those benchmarks that were linked show sandy bridge underperforming the top-end i7 chips.

If only Intel hadn't put out an excellent product in the form of the Core 2 Family! :argh:

The next evolution in CPU technology will be an integrated self-destruct mechanism that goes off 2 years after manufacture.

movax
Aug 30, 2008

4 Day Weekend posted:

I don't think GPU can do video encoding better than CPUs. Faster yes, but the quality is pretty bad in comparison.

You need to define "better". A good deal of the mathematical operations used in H.264 encoding can be performed much faster on massively-parallel hardware like a GPU. However, Intel has also been providing SSEx extensions for years now, and some current encoders are written to use the very wide SIMD/vector instructions, allowing for boosts in CPU performance. We can look at decoders...ffmpeg vs. CoreAVC (with CUDA enabled) vs. Broadcom CrystalHD card vs. DXVA. Some are pure software, some can leverage hardware in the form of a GPU/add-in card, some are hybrids...and they all turn out varying output quality (go to AVSForum to see people spergin' out about decoder vs. decoder).
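
As a toy example of the kind of SIMD work those encoders lean on, here's a sketch of a 16-pixel sum-of-absolute-differences (the inner loop that motion estimation hammers on) using SSE2 intrinsics. Purely a hypothetical illustration, not lifted from any real encoder:

code:

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

/* SAD over one 16-byte row of two blocks; motion estimation runs
   millions of these per frame, so wide SIMD pays off in a big way. */
static unsigned sad16(const uint8_t *a, const uint8_t *b)
{
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i s  = _mm_sad_epu8(va, vb);   /* two partial sums in lanes 0 and 4 */
    return (unsigned)(_mm_extract_epi16(s, 0) + _mm_extract_epi16(s, 4));
}

int main(void)
{
    uint8_t x[16] = {0}, y[16];
    for (int i = 0; i < 16; i++) y[i] = (uint8_t)i;
    printf("SAD = %u\n", sad16(x, y));    /* 0+1+...+15 = 120 */
    return 0;
}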

It all comes down to software. An encoder that runs on fast hardware but sucks at CABAC will always deliver shittier results than an encoder running on slow hardware but with an appropriate CABAC implementation. The true test, which I don't know if anyone has gotten to, is running the same encoder on a CPU and a GPU, optimized for each respectively, but with the same algorithmic decision-making when it comes to quantization/CABAC/etc.

movax fucked around with this message at 15:21 on Sep 21, 2010

movax
Aug 30, 2008

incoherent posted:

I hope this chipset is finally the one to force motherboard makers to UEFI.

I hope so, because I'm tired of dealing with AMI's BIOS development environment and x86 assembly. So looking forward to being lazy as poo poo and being able to use C in conjunction with BIOS development.
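
For the curious, a minimal sketch of what that C looks like with EDK II-style headers: a hypothetical DXE-phase application entry point (the real name/signature comes from whatever your .inf declares), just to show the flavor versus hand-rolled real-mode assembly:

code:

#include <Uefi.h>

/* Hypothetical UEFI application entry point, EDK II style. */
EFI_STATUS
EFIAPI
UefiMain (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  /* Talk to the console through a protocol instead of poking VGA memory. */
  SystemTable->ConOut->OutputString (SystemTable->ConOut, L"Hello from C land\r\n");
  return EFI_SUCCESS;
}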

movax
Aug 30, 2008

Alereon posted:

In theory all of the time between when the first POST image appears on your monitor and the Windows logo can be eliminated, especially if you're using an SSD and don't have to wait for it to spin up. That's 10+ seconds on my system.

I'm currently nearing the end of a week-long EFI training down at AMI, and I'd say even with tons of debug code active, we're seeing a reference board get to OS in ~20 seconds or so. I'm porting their reference UEFI BIOS to our new platform, so I'll answer as much non-NDA'd stuff 'bout EFI as I can.

movax
Aug 30, 2008

~Coxy posted:

I hope this doesn't come across as a loaded question, because I'm actually curious.

BIOS as we all know is kinda slow, but what really kills it is all the "addins" that have their own screens, device enumerations, etc. before you hit the OS.

By way of example:
basic POST -> OS = x seconds
enable AHCI in BIOS = x + y seconds
install a RAID card = x + y + z seconds
configure the JMicron RAID = yet another screen and even more seconds added to boot time.

Will EFI help with this crap?
Will we ever get to a Mac-like grey screen -> OS in stupidly low amount of time?

Well, a lot of what slows down PC boots is all those Option ROMs getting loaded; RAID cards loading and trapping Int 13h, etc. That, and now EFI can be stored on a flash chip on various busses, from LPC to SPI.

EFI does 3 stages: SEC, PEI, DXE. The first two are pretty "fixed", with the third being where all the drivers are loaded (Driver eXecution Environment). The OEM can write their own custom DXE modules to do whatever the gently caress they want. Unfortunately for us PC blokes, I think 99% of OEMs will include a legacy support module (you do want all your hardware to work, don't you?), which means that 16-bit OROMs will continue to be executed.

Also, a good deal of the speed of Mac booting is the lack of enumeration (unless you reset the SMC because of some goofiness, but even then, that's just one slow boot). At least on a Macbook, Apple knows exactly what every single PCI(e) device will be, it will never change, and they can generally skip any boring enumeration tasks.

movax
Aug 30, 2008

Intel reference board with a single SSD, integrated GFX...4.3 seconds from power button to start of Linux boot. Boner.

movax
Aug 30, 2008

Cryolite posted:

Do you think we'll see that kind of performance with the early UEFI boards coming out in Q1 2011?

With FastBoot on, maybe. That was with the legacy support module active as well, so I think it will get faster as more and more legacy stuff goes poof (and Windows 8 supports EFI in all editions).

movax
Aug 30, 2008

Triikan posted:

What kind of stuff is going to be a Legacy device? Let's say somebody builds a gaming rig with Sandy Bridge, with no add on cards besides a new graphics card. Will onboard audio, ethernet, etc, be EFI enabled?

By legacy, I mean more offering up BIOS services to the OS. If you remember DOS and its ilk, they used direct BIOS interrupts for nearly everything. Like EnergizerFellow mentioned, even 32-bit Win7 still needs to use int 19h to actually *boot*.

Device compatibility shouldn't change, but you can do some really cool poo poo in just the EFI shell environment. There's a full TCP/IP stack available, and vendors like AMI have tools like AMIDiag, which is essentially EFI-memtest86 + other tools, with a shiny GUI!

movax
Aug 30, 2008

Siroc posted:

Then I only have to wait until the beginning of Jan for the i5-2500k? That'd be awesome. Will all these sockets be EFI? Its a new tech, so I'm a little concerned about being a 1st gen beta tester for it. Are there any worries in that regard, or has EFI been tested for a while?

There's nothing to worry about; EFI-based firmware has been in Macs for years, and vendors like AMI have very mature EFI implementations. There were full EFI-based BIOSes available for Calpella and its accompanying generation, if any vendors felt so inclined.

On topic, I can't wait, an i7-2500 is in my future. My E6600 is choking miserably on CoD Black Ops, and I'm beginning to think it was my E6600 bottlenecking a ton of games, not an 8800GTS. (Just bought a GTX460, and I think the E6600 is bottlenecking it...)

Downside, new mobo, CPU and RAM, but whatever.

movax
Aug 30, 2008

BangersInMyKnickers posted:

The E6600 has had a good service life though. Mine has been dutifully chugging away for 4 years now, it's really not surprising that my next upgrade is going to involve a total platform overhaul. In hindsight it really amazes me how much hardware requirements for applications stagnated over that period of time. Between purging of old hardware and greater adoption of .NET and the overhead it costs, they seem to be going up in earnest for the first time in years.

Oh god yes. I am really excited that the CPU I bought 4 years ago has just now hit a game it's bottlenecking (and patches for Black Ops are helping out). It's close to the limit, definitely (Mass Effect 1 would lag even with my 8800GTS 640, so it had to be the CPU), but for just generic tasks + development, VMs, etc, it is pretty solid.

I've pushed its overclock up to 3.1GHz in an effort to hold out until I can get my paws on a new Asus mobo + i7-2500.

movax
Aug 30, 2008

Disgustipated posted:

Atoms aren't about price/performance, though. They're performance/watt. If a 1.6 GHz Atom really is as fast as a 2.2 GHz P4 that's pretty fantastic given how little power they use, especially compared to how ridiculous Netburst was. This is really a point in Atom's favor in my opinion. :shobon:

I used to own a Latitude C640 with a P4-M 2.2GHz. Battery life peaked at an hour, I think. Atom performance may be "atrocious", but it is kind to batteries. Too bad the first-gen netbooks paired a retarded power-hungry 9 series chipset with the Atom.

movax
Aug 30, 2008

Even with the integrated Intel MAC, you need a PHY like an 82577. Controllers like the 82574 are standalone PCIe-interfacing chips.

Excited about onboard Intel though. Always been an Asus guy, will continue to be now. Think I'll pass on the flak jacket though.

movax
Aug 30, 2008

El Bandit posted:

It was definitely the 8800GTS. I have an E6750 and it runs everything pretty well with a 4890 (except Blops, but that's because it's a poorly optimised piece of poo poo) - I had a GTS 320MB before and games released two years ago were starting to struggle.

Well, that makes me feel a little better. Little soldier will live on as my main CPU until February or so!

movax
Aug 30, 2008

Good to hear. $387 for an i7 2600K that I won't upgrade for another few years sounds good to me. I think I paid maybe around $300 for my E6600 and that has been working just great for me.

movax
Aug 30, 2008

You Am I posted:

I have found a lot of newer games, like Modern Warfare 2 and Battlefield: Bad Company 2 do run noticeably faster on quad core machines vs. dual core.

Depending on how smart the scheduler is, the boring system services and such likely end up eating other cores whilst the game abuses its chosen cores. On *nix I know you can pin the kernel/system stuff to a specific CPU/core, leaving the others for your delicious user apps.
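
CPU affinity is the knob for that sort of thing; a rough sketch assuming Linux/glibc (pin the current process to core 3 and leave the other cores alone):

code:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);   /* this process may only run on core 3 */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core 3\n");
    return 0;
}

Same idea as taskset(1) on the command line; most games/apps never bother, so it's up to the scheduler to keep the noise off the game's cores.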

movax
Aug 30, 2008

SRQ posted:

How much do you guys think a motherboard with support for this will go? I don't need/want crossfire/SLI.

Somewhere around $200 I would imagine. Asus and Gigabyte for example are pretty predictable about stratifying their product lines. You will have a base model, barebones (I don't remember if the 6 series chipsets have USB 3.0; if they don't, the base model boards may not give you a NEC controller or something) that will lack RAID, maybe cut down on the number of ports. Then you'll have a midrange model, which is usually the sweet spot.

Then there'll be some high-end, fuckoff board with 4 Ethernet jacks, two BIOSes (or two EFI ROMs), Bluetooth, 3 actual x16 slots (48 motherfucking lanes), integrated RAID, FireWire, and tons of poo poo.

Just don't skimp on the mobo, whatever you do. Asus and Gigabyte are safe bets for sure.

movax
Aug 30, 2008

ilkhan posted:

My planned build is a 2500K & GA-P67A-UD4. I'm hoping to get both for <$450. Already have the rest of the build on the way or being recycled.

Should I buy DDR3 now or wait until January?

movax
Aug 30, 2008

Alereon posted:

The DDR3 DRAM market is in a steady price decline that isn't expected to end until the first or even second quarter of 2011. Granted wholesale DRAM price isn't the price you pay at Newegg, but they're related like the price of gas is to the price of oil. In the first half of the year the wholesale price of 2GB of DDR3 was $46.50, by the beginning of November it was down to $25, and the price is expected to fall to $20 by the end of the year.

Awesome. The reason I inquired was that I didn't want to miss the price valley before DDR3 prices started climbing again. Guess I'll order that kit come January along with new mobo and CPU.

movax
Aug 30, 2008

Marinmo posted:

Does anyone know if (U)EFI will be the default on the motherboards coming out with SB (which would be sweet!)? Think I saw a video by some Swedish site previewing UEFI, but I don't know if it was a SB-related thing or not. Anyone?

Most early/first-generation UEFI boards are not going to be light-years ahead of their BIOS counterparts; just an underlying source change and GPT support, I'm thinking. Maybe the second wave will give us those promised GUI-based OCing tools. It's all up to how much effort the manufacturers are willing to put into their BIOS development.

Businesses will really love EFI-based BIOS, so they can fully leverage all of Intel's management technology. It's a really goddamned dumb idea to steal a PC from work when it can be literally bricked (not even a BIOS flash will save it).

movax
Aug 30, 2008

Lum posted:

I haven't even begun overclocking my C2Q. 3GHz seems to be plenty and an SSD was the most effective upgrade I've had.

My main interest in EFI is to improve boot time.

I must be getting old.

I guess when the xBox 720 comes out, we'll all need to upgrade to play the latest games.

I'm upgrading because my E6600 is finally beginning to bottleneck some games (even OC'd at 3.2). Black Ops was really the first one to do that, being a terribly optimized mess. :(

Though at the rate I've been spending money lately, I'll probably be buying Sandy Bridge a month or so after launch methinks. New mobo, CPU, RAM (and whatever adapters I need to make my Ultra 120 mount to said mobo) will be pricey.

movax
Aug 30, 2008

ilkhan posted:

2500K@4.0-5.0Ghz sounds much better to me.

Yeah, I'm excited for another round of overclocking. It's so boring when you've had years to achieve a stable overclock that you can't exceed without going back to water-cooling or something. Hopefully my Ultra 120 is up to the job, I hate heatsink shopping. These are going to run cooler than their Conroe predecessors anyways, correct?

movax
Aug 30, 2008

spasticColon posted:

What will the availability be like at launch? Will there be plenty of them to go around or will it be slim pickings?

It's definitely a hard launch, but rest assured, Newegg and co. will gouge for a few weeks before prices settle. I'd be a little more worried about mobo availability; once the first few AnandTech/similar reviews come out recommending a certain board(s) at a certain price point, they sell out reaaaaly fast.

Question, because I'm ignorant and skipped Core-iX: i5 and i3s use dual-channel DDR3, i7s use triple-channel? So usually 4 or...6 slots? 6x2GB sounds good, but I feel OCing would be easier with 3x4GB (or does that not matter anymore because there is no FSB to overclock anymore, and no memory straps to worry about? I feel so out of date :smith:)

e: Any goons with a Thermalright Ultra-120 Extreme know about mounting adapter availability?

movax
Aug 30, 2008

Gotcha. So don't worry too much about dual or triple, just wait and see what mobo I'll end up getting. Will start out with 8 most likely, unless I find a really nice price on RAM in which case I'll happily go up to 12.

movax
Aug 30, 2008

ilkhan posted:

My P67A-UD4 is already on the way. :) Just waiting on a 2500K.

i7-8xx use dual channel (s1156/s1155)
i7-9xx use triple channel (s1366)
i9-9xx (Whatever they go with) will use quad channel (s2011, H2'2011)

Get a s1156 mounting adapter, it's the same specs as s1155.

Didn't know 6 Series boards were out yet...time to do some shopping/drooling! Will look into a 1156 mounting adapter, thanks. No sense in replacing a perfectly good block of copper.

movax
Aug 30, 2008

spasticColon posted:

when there is no PCI-E 3.0 support.

Can't blame them for this, the spec was just finalized in Nov. 2010. I just started doing PCI-E 3.0 layout/design for my company a few weeks ago. I don't think any graphics cards are ready for 3.0 anyways, are they?

movax
Aug 30, 2008

Zhentar posted:

It sounds like you're under the impression that engineering samples are of higher quality than the retail chips. They aren't. Tweaks to the design to improve yields will have been added, and the manufacturing process will have been improved, so a substantially larger portion of the retail chips will be capable of achieving those speeds than the engineering samples, even at release.

I would expect Intel to favor higher-binning chips for the unlocked models anyway, since it wouldn't cost them anything and better overclocks will convince more people to pay the extra.

Production chipsets and CPUs come with functionality disabled as well. Some of our Ibex Peak docs had pages just filled with BGA balls marked "NC"; the Intel FAE said the functionality that was supposed to be present didn't pass QA, so they wrote off that part of the silicon (and those balls), since it was too late to change the package.

movax
Aug 30, 2008

Props to Anand, we've had SB CPUs here for a while now, and I never realized *Ks didn't have VT-d. I don't virtualize quite enough for that to drive me away, but like someone said, it's kind of a downer that the premium SKU (at least for now) doesn't have all the bells and whistles. My reasoning (guess) is that in QA testing, that functionality went straight to hell above certain clocks, and rather than have users enable/disable it in BIOS, they just killed it off. (Either that, or it can't cohabitate with the HD 3000 graphics, which I think is unlikely).

Also, offering up the 2600K and saying "yeah, you could get above 5GHz with this :smug:", and then basically requiring a chipset (P67) that doesn't implement FDI...why make us pay for that HD3000? Maybe they just expect people to wait for the Z68. Maybe it was a chip packaging decision. I wouldn't mind being able to use the integrated GPU to run my 3rd display, it'd leave me PCIe slots free.

HW transcoding and acceleration is nice on paper, as it always is, but dealing with being on the cutting edge of multimedia tech, I've been taking the approach of just throwing CPU brawn at decoding to avoid headaches. Colorspace conversion errors, incompatibility between output renderers and combination of DShow filters, asinine limitations for HW encoders...good thing there's a CPU underneath all of this that can actually earn its keep.

Looks like a home-run for the mobile market, AMD is continuing to get owned there. Interestingly enough, I guess Intel is (like they have been) content to leave the "value" market to AMD.

e: spasticColon, yes, I hope you like the feeling of Newegg's capitalist phallus in you! I will probably take it happily in return for a 2600K

movax
Aug 30, 2008

Alereon posted:

H67 doesn't support PCI-Express port bifurcation (dividing the x16 slot into two x8 slots) or overclocking. It's probably also less expensive than the P67.

It looks like the spec page says it supports some uber quad-GPU Radeon config; maybe they repurposed the FDI link? Though, IIRC, electrically FDI is very similar to DisplayPort, so maybe it is just the cost issue.

movax
Aug 30, 2008

Also, Asus saw fit to make me dig through their product pages to figure poo poo out (their product comparator was borked when I tried it), and then I saw Legit Reviews put up a nice spec table: http://www.legitreviews.com/article/1500/1/

Happy:
- all have USB 3.0
- all have Firewire
- all have SATA 6Gbps
- P8P67 PRO and above implement a PHY for the integrated MAC in the P67
- P8P67 is only $160.00...so, $200 at the egg?

Sad:
- need to get the PRO or better for SLI
- P8P67 doesn't use the Intel ethernet controller
- P8P67 is starved for lanes :(
- no legacy 775 holes :(

Also, don't buy the LE. Personally I think I will go for the PRO, because I don't SLI, but want the Intel ethernet solution.

Happy to see that pretty EFI implementation. I guess the Asus engineers that were sleeping all the time during training could learn in their sleep.

e: In case anyone was curious about the integrated Intel Ethernet functionality, it's similar to what was in the Q57. The chipset provides an integrated 10/100/1000 MAC. This MAC is useless without an accompanying PHY however, which as its name suggests interfaces with the physical ethernet network. Apparently, it's cheaper for some makers to buy a Realtek controller (MAC+PHY) and use that compared to buying just the Intel PHY. If you don't use the Intel PHY however, I think you can repurpose that PCIe x1 link for something else...like the aforementioned Realtek chip.

e2: And now I know why the PEX8608 is backordered, thanks Asus! :argh:

I think everyone should seriously consider the "splurge" for getting the Intel solution; your torrents will thank you.

movax fucked around with this message at 19:26 on Jan 3, 2011

movax
Aug 30, 2008

Alereon posted:

Even if you do torrent, you're almost certainly not going to care about your NIC chipset. GigE has been common for long enough that even your standard Realtek or Marvell chipset has no problems making it work well.

I know that Broadcom and Atheros aren't up to snuff compared to Intel's offering, and my experiences with Realtek haven't given me the most favorable opinion. Granted, this can be due to the implementation of the controller IC (or drivers!), but Dell had R610s available with Broadcom NICs that would randomly kill off Solaris.

quote:

Yeah right, it's product differentiation.

It's the priciest SKU though? Or maybe HP/Dell buying boatloads of 2600s versus 2600Ks is better for them given manufacturing yields?

movax
Aug 30, 2008

JawnV6 posted:

The Virtual Machine Control Structure (VMCS) is explained in the Intel PRM Vol. 3B, if that's not quite enough to put you to sleep you can read the rest of the PRM. It's a 4kb region of memory containing guest state, host state, control bits, etc. accessed through the vmread/vmwrite instructions. Basically it's an implementation detail of VT-x and unless you're rolling your own hypervisor you shouldn't care about it.

God my PRMs are so old. Ordering some shiny new ones today.
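
For anyone morbidly curious what that looks like from C, a rough (hypothetical) sketch of a vmread wrapper using GCC inline asm; it only does anything useful inside a hypervisor running in VMX root operation with a current VMCS loaded:

code:

#include <stdint.h>

/* Read a VMCS field by its encoding (hypothetical helper, x86-64 only). */
static inline uint64_t vmcs_read(uint64_t field)
{
    uint64_t value;
    __asm__ __volatile__("vmread %1, %0"
                         : "=r"(value)
                         : "r"(field)
                         : "cc");
    return value;
}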

I assume that you could possibly answer this: when will the Intel product pages for their 6 Series Chipsets go up? After/during CES?

movax
Aug 30, 2008

redeyes posted:

According to a lot of reviews this SB cannot decode FILM correctly and outputs 24fps which makes movie watching jittery. That is loving unacceptable.

1. It's a good thing there's a CPU included with SB that can decode video purely in software :smug:
2. It's been nearly half a century and everyone still has to deal with framerate fuckery (and they still get it wrong). Though screaming about 23.976 vs 24.000 is a new one to me.
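
(For the record, the "film" rate is 24 × 1000/1001 ≈ 23.976 fps, a leftover of the NTSC color-subcarrier adjustment, so a decoder/display that insists on a flat 24.000 ends up dropping or repeating a frame roughly every 42 seconds. That's the judder people scream about.)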

I wonder if the DisplayPort jitter issue that was present on Ibex Peak snuck its way into the 6 Series from all the design reuse?
