BobHoward
Feb 13, 2012


Star War Sex Parrot posted:

It's probably the vendors. In my testing at work over the last 2-3 years, I've found that most USB3 bridges are absolute poo poo. Fujitsu's stuff is alright, but most of the market has JMicron bridges, which are really bad. Most host chipsets aren't great either. Intel's are decent compared to the earlier ASMedia and Renesas garbage. I've been in no hurry to upgrade my DAS from FW800 to USB3 after seeing how terrible it is at work. I'll most likely just end up going straight to Thunderbolt.

I know this is from a few days ago, but I was wondering if you have any tales to tell about ASMedia USB3-SATA bridges. (It sounds like you were disgusted specifically with their host controllers?)

At work, I had to test the performance limits of a USB3 host controller when attached via one lane of PCIe 1.x. Therefore, I set out to build a USB3 disk faster than PCIe 1, so that we knew the disk wouldn't be the bottleneck. I googled various 2.5" bus-powered enclosures to figure out which bridge chips they used. ASMedia seemed to be in all of them, so I just grabbed a cheap ASM1051e-based enclosure and put a Samsung 840 Pro in it. Surprisingly, when I tested it on an Ivy Bridge Linux PC to verify that it could beat PCIe 1, it was good for about 320 MB/s read and write.

So it's at least reasonably fast. I wasn't doing any kind of compatibility or conformance testing, though. No idea if ASMedia bridges are crap there.

BobHoward fucked around with this message at 09:15 on Apr 3, 2013


BobHoward
Feb 13, 2012


Bob Morales posted:

USB originally ran at 12Mb/s. And then if you hooked up a USB keyboard or USB modem (remember, back in those days computers might only have 1 or 2 USB ports), you'd bring the whole USB bus down to 'low-speed', which was something like 1Mb/s. Give me a loving break.

USB isn't quite that bad: it doesn't actually force the whole bus down to low speed. It's point-to-point without a hub, and when you put hubs in the picture, they don't broadcast traffic everywhere through the tree. Packets are only sent along the links connecting the computer to the device it's talking to at the moment. That means slow devices never even see the fast traffic, and vice versa. (All hubs between your computer and a device do have to support the speed you'd like the device to run at.)

That said, a "low-speed" (1.5 Mbps, I think) device can consume a disproportionate amount of bus time for very little throughput. Try to fit 0.5 Mbps through a 1.5 Mbps pipe and you're consuming about 1/3 of the available bus time, leaving just 2/3 for the high-speed devices to play with. It's not a problem for mice and keyboards and joysticks, since they just don't have a lot of data to send, but anything trying to move serious amounts of data at the low (or even the 12 Mbps) speed can really hog the bus.
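
If you want to sanity-check that arithmetic, here's the back-of-envelope version (the 0.5 Mbps payload is just my example number, not anything from the spec):

code:
# share of total bus time a low-speed (1.5 Mbps) device occupies
# while moving 0.5 Mbps of actual data
awk 'BEGIN { printf "bus time used: %.0f%%\n", 100 * 0.5 / 1.5 }'
# -> bus time used: 33%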

BobHoward
Feb 13, 2012


Tenterhooks posted:

Anything else I should do or try?

You've probably got a difference in ground voltage between the iMac and the USB devices. Chassis (aka earth) ground is connected through the USB cable's outer shell and shield conductor. If there's a voltage difference between the grounds on each end, you'll get visible arcing when the shell makes or breaks contact. (You don't see it with bus-powered devices because their ground level floats.) Sparks of the size in your YouTube vid are more alarming than actually harmful.

Try plugging everything into a single grounded power strip. The problem should go away.

(And yes, wall wiring is the likely culprit. That, or whatever power strips and extension cords you're using now.)

BobHoward
Feb 13, 2012


kuskus posted:

Disclaimer: I am a bored layman. As one is wont to do, I thought I'd search CPU charts to see what performance we could expect from Haswell (or, for me, what I'm "missing" from not upgrading from an i7-870). The i7-4xxx / E3-12xx Xeon series are Haswell.

Important note: E3-12xx is actually Sandy Bridge. E3-12xx v2 is Ivy Bridge. And E3-12xx v3 will be Haswell. The PassMark benchmark number you posted is actually Ivy Bridge.

quote:

- Holy lord, will the Mac Pro benefit from an upgrade. I don't see a 6-core CPU announced though, so a 12-core config might not be at the ready. All things being equal, a new 8-core (2x E3-1280 @ 9,856 = ~19,712) might score only a hair higher than the current 12-core (2x X5675 @ 9,382 = ~18,764).

You won't ever see a 6-core E3-12xx v3. Since Sandy Bridge, Xeon E3 is the name for rebadged mainstream desktop parts, fitting the same LGA-11xx sockets as the corresponding desktop line. Features are slightly different, the most notable being that the Xeons all have ECC. It's really the exact same silicon with a different feature set enabled. Since those desktop product lines have only had up to 4 cores to date, and that's not changing in Haswell, that's what you get in Xeon E3 v3.

You can't actually put two E3 processors in one machine. Because of their desktop lineage, they're single socket only, no provisions for dual-socket (or higher). For that feature, you have to move up in the world to Xeon E5. So E5 is what you should be looking at when speculating about Mac Pros.

Presently the only E5 processors on the market are Sandy Bridge, 4 to 8 cores per socket. Ivy Bridge E5 ("v2") is due out very soon. Haswell E5 isn't due till next year at the earliest. (It takes a lot of extra time to validate and bugfix dual-socket Xeons, partly because they're doing more complex things, and partly because the market for these CPUs demands a higher standard of bug-freeness than the mainstream desktop market does.) Unless Apple decides to offer a low-cost single-socket-only model, expect the 2013 Mac Pro refresh to use Ivy Bridge E5.

BobHoward
Feb 13, 2012


japtor posted:

The bandwidth with TB2 is kind of confusing...I think it's the same overall actually, but more useful. From what I can tell, TB1 is two bidirectional 10Gbps channels that can carry data or a display signal, but not both simultaneously over the same channel. TB2 is a single bidirectional 20Gbps channel that can carry data and display simultaneously.

TB1 can in principle mix display and PCIe in one bidir channel. It's a packetized bus encapsulating other packetized protocols, so there are no limitations other than bandwidth. It's just that the data rates made it convenient to dedicate one 10g channel to DisplayPort.

4K DisplayPort needs more than 10Gbps so TB2 bonds the two 10g lanes to make one 20g channel. And the chips will now make use of what's already in the protocol to support mixing two or more encapsulated busses in one TB channel. Kinda disappointing that it didn't actually get upgraded to two 20g channels but that's a tall order with copper media (maybe when optical media is a thing?).

BobHoward
Feb 13, 2012


Siguy posted:

This crazy in-depth look at Haswell's best on board GPU option makes it sound like it would be close to current performance but still slower.

Actually, for compute it may not be slower. Page 17 of the review has the OpenCL compute benchmarks. Haswell actually beats the GeForce GT 650M (the current rMBP discrete GPU) in almost all of them, sometimes by large margins.

According to page 2 of the same review, Haswell GT3 has significantly more general purpose compute power than 650M, but less pixel/texel/polygon throughput. Most GPU compute applications ignore the fixed function GPU hardware responsible for pixel/texel fillrate etc., so the raw compute FLOPS have a chance to shine.

carry on then posted:

Maybe they'll use the high end integrated to further distinguish the 13" and keep dedicated for the 15".

Current rMBP 13" uses a 35W TDP CPU, and the current 15" uses a 45W TDP CPU plus a 45W TDP GPU. The new high end integrated is only available at 47W, and really shines in "cTDP up" mode, where under software control TDP is bumped to 55W. I suspect it wouldn't be easy to use that chip in the 13" chassis. It would be real easy in the 15" though.

The more likely scenario would be Apple switching to 28W GT3 Haswell ("Iris 5100" graphics) in the 13" rMBP. It's a big graphics performance upgrade over the current model, and reduces power too, so they could choose between making it smaller/lighter or giving it longer battery life.

(The 15" should be able to either slim down or gain battery life with integrated graphics too, but even more so.)

BobHoward
Feb 13, 2012


tarepanda posted:

Real-world numbers trump testing numbers.

Both are real-world numbers. The problem with Hulu and Netflix is that those sites use Flash and Silverlight, respectively. Those plugins frequently do video decode in software, or even when they do make use of hardware acceleration, still have too much software overhead. That's very bad news for battery life, makes your fans run loud, and so on. Watch web video on a site which serves H.264 to you through HTML5, and CPU load should be minimal.

(This is WHY_APPLE_HATES_FLASH.TXT in a nutshell.)

Your point about it not mattering what the reason is when what you want is 8 hours of Hulu is valid, but the disparity isn't because the testing was unrealistic. Things should hopefully improve over the next few years as more sites switch to HTML5 / H.264. (And a million slashdotters cry out in agony.)

If you're a Safari user, you might also want to try ClickToPlugin. Its plugin-killer feature substitutes an HTML5 player for the Flash players on many video websites, and it also makes annoying battery-sucking Flash ads 100% opt-in.

Another tip: don't install Perian. Not only is it no longer supported, it will cause QuickTime to support WebM, and Safari will dutifully report the new capability to YouTube. Makes no difference when you're using the default Flash player, but if you sign up for YouTube's HTML5 trial or use ClickToPlugin to force the issue, YouTube will always choose WebM if it's supported by your system. (WebM is Google's pet video codec. Google owns YouTube. You do the math.) WebM isn't hardware accelerated on Macs, so this is bad news.

And: Chrome on Mac has built-in WebM even if there's no systemwide WebM codec installed, so you probably don't want to use Chrome for HTML5 YouTube either.

(This is all a bag of hurt. But everyone in the industry seems to want to squabble about how to do video so it's where we're at.)

BobHoward
Feb 13, 2012


japtor posted:

Doing some math, 20Gbps is 2560MB/s, while x4 is 2000MB/s. No clue if that's usable bandwidth or some overhead in the spec though.

Usable (in theory). You've already accounted for PCIe 2.0's 8b10b line coding overhead in that 2000MB/s (16Gbps) figure, and it turns out the Thunderbolt number has its coding overhead baked in too (you just didn't know it). Thunderbolt's line rate is actually 10.3125Gbps, with 64b66b line coding, which works out to exactly 10.0Gbps (or 20.0Gbps when bonding 2 lanes together). Thunderbolt and PCIe also have packet header overhead, so true data rates are less. Header sizes are relatively fixed, but payload size isn't, so the amount of overhead depends on average packet size, which varies.
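
If you want to rerun that math yourself (line rates straight from the specs; packet header overhead ignored, as noted above):

code:
# PCIe 2.0: 5 GT/s per lane with 8b10b coding, 4 lanes
awk 'BEGIN { g = 4 * 5 * 8/10; printf "PCIe 2.0 x4: %g Gbps = %g MB/s\n", g, g * 1000/8 }'
# -> PCIe 2.0 x4: 16 Gbps = 2000 MB/s

# Thunderbolt: 10.3125 Gbps line rate with 64b66b coding, 2 bonded lanes
awk 'BEGIN { printf "Thunderbolt x2: %g Gbps\n", 2 * 10.3125 * 64/66 }'
# -> Thunderbolt x2: 20 Gbps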

The "in theory": Existing Thunderbolt controllers (Intel's "xyz Ridge" chips) dedicate one 10G TB link to x2 PCIe, the other to DisplayPort. They have no provisions for using excess Thunderbolt bandwidth in either link for other things. So in practice one TB port is exactly equivalent to x2 PCIe 2.0 + DisplayPort. (The TB chips which connect to the chipset with x4 PCIe are doing so in order to provide two independent Thunderbolt ports -- each TB port gets x2 PCIe to itself.)

BobHoward
Feb 13, 2012


japtor posted:

Question from another forum for ya: "Which does not explain how you can have two Thunderbolt Displays (>10.6 Gbit/s even without blanking) and other devices like the Pegasus daisy chained on one Thunderbolt port."
I found a test showing two displays can slow down stuff so how's that work exactly, is the controller muxing DP with data at that point or something?

That is an excellent question, and it seems I misspoke! After some searching, it appears that Intel's DSL3510 and CV82524EF/L chips ("Cactus Ridge 4C" and "Light Ridge" respectively) have two DisplayPort inputs (sinks), not one:

http://vr-zone.com/articles/intel-finally-shipping-2nd-gen-thunderbolt-controllers-just-in-time-for-new-macs/15539.html

So when you have two Thunderbolt Displays daisy-chained, it's probably PCIe+DP muxed on one Thunderbolt lane, and DP alone on the other. Makes me wonder if they're also already more flexible about PCIe routing than I'd supposed.

BobHoward
Feb 13, 2012


Bob Morales posted:

It looks like you can put one SSD in each side that has a graphics card, for a total of 2.

If you look carefully, the SSD connector is mounted on the graphics card, and the left-hand card is missing the SSD connector.

This actually makes some sense. The E5 Xeon they're going to use has a total of 40 PCIe 3.0 lanes. They'll need 16 per GPU, leaving 8 for other things. There are three Thunderbolt 2 controllers, and each of those should need 2 PCIe lanes. That leaves two lanes for the SSD, which needs both of them if it's capable of 1.25 GB/s. That's that. There are no more lanes to route to the second graphics slot for a second SSD.

There is a way they could possibly work around this -- the chipset should have some more PCIe lanes available. I don't know if they'll be PCIe 3.0 lanes though.
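
Here's that lane budget as a sketch, using the counts from above (the x2-per-Thunderbolt-controller figure is my assumption, as noted):

code:
# Xeon E5: 40 PCIe 3.0 lanes from the CPU
awk 'BEGIN {
    total = 40
    gpus  = 2 * 16   # x16 per graphics card
    tb    = 3 * 2    # three TB2 controllers at x2 each
    printf "lanes left for the SSD: %d\n", total - gpus - tb
}'
# -> lanes left for the SSD: 2 (one x2 SSD, nothing spare for a second)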

Star War Sex Parrot posted:

(Xeon E7 things)

Xeon E5 v2 should support up to 12 cores in one socket:

http://www.cpu-world.com/news_2013/2013040201_Some_details_of_upcoming_Intel_Xeon_E5_v2_and_E7_v2_CPUs.html

E7 v2 is... unlikely to be appearing in this film. E7 is Intel's "big iron" x86 CPU line, designed to compete with IBM's POWER6/POWER7 server CPUs. It costs a lot more than E5 and is actually a worse workstation CPU. (Unless you need a workstation with huge amounts of memory per CPU socket. And by huge I don't mean a mere 128GB.)

mayodreams posted:

Two sockets and 8 ram slots to one socket and 4 slots. This is significant for compute heavy workloads like animation and heavy effects. There is no substitute for cores and ram, and the new Mac Pro lost 6 cores, 12 threads, and 4 ram slots.

I dunno why you're saying it lost 6 cores when it still supports 12. Times have changed, Intel's putting more cores in one chip. And they're rather obviously shifting emphasis to GPU compute in this new Mac Pro, which is significant for animation and effects and so forth. Despite the reduction in DIMM sockets they doubled the supported RAM from 64GB to 128GB. (If OWC is to be believed, current Mac Pros are compatible with a 128GB config, but an OS X limitation prevents you from using more than 96GB anyways.)

Same kind of thing goes for much of the rest of your list. I have some doubts about this direction for the Mac Pro, but the sky, it is not falling.

BobHoward
Feb 13, 2012


Shaocaholica posted:

When can we expect MBP updates? Or will they most likely be quiet ones?

Speculation: they have to wait for Thunderbolt 2 chips to become available so they can drive 4K Thunderbolt displays.

BobHoward
Feb 13, 2012


Electric Bugaloo posted:

How does the BTO i7 compare to the i5 on the Haswell Airs in terms of performance and battery life? Its clock speed is notably similar to last year's model, compared to the big power cut in the i5. Is it safe to assume that the hyped up "12 hour" battery life will be much less fantabulous in a fully kitted-out Air?

Maybe sort of safe; it's a complicated issue these days. TDP is the same, but the i7 is clocked higher. Under less-than-100% load it should get back to idle states quicker, which saves energy. Either way, you're not seeing 12 hours without a lot of CPU idle time...

BobHoward
Feb 13, 2012

SemiAccurate is an ongoing attempt by Charlie Demerjian to go independent. Who's that, you say? He's some obscure tech journalist who used to write for The Register, until he left acrimoniously over something which I've forgotten. He is bitter about it to this day, but keeps on bringing you tech reporting in The Reg's editorial style, not that he'd ever admit he owed anything to that publication. (That style is trashy Brit tabloid, if you're not familiar.)

The guy somehow has a few (unreliable) industry connections, so there's occasionally a tiny bit of truth buried in the noise, but there's no point looking for it. His writing is dominated by petty one-sided feuds. (Particularly with NVidia, but he hates other companies too, including Apple. Because they make shiny toys for idiots who don't know how to use computers, you see. Did I mention he's one of those guys who really, really wants Linux to rule the world?)

He also likes to write the occasional Apple linkbait article like this one. I wouldn't put much weight on it, whatever it is. A couple years ago he was claiming the next MacBook Air would have an AMD APU instead of Intel's Sandy Bridge. Another time it was ARM which was going to ship in Macs any day now.

He and his site admin have a giant chip on their shoulder about other even more terrible tech news sites plagiarizing their crap, which is why you'll always see giant SemiAccurate logos plastered all over any photo used in an article. Ironically these images are usually someone else's copyrighted material (e.g. leaked Intel or AMD product roadmap slides). They also used to have this JavaScript active on their site which forced your browser to instantly deselect text whenever you tried to select any. I'm pretty sure the intent was to defeat copy and paste. What I'm not sure about is how they ever thought this would be more than a few seconds' annoyance for anyone who actually wanted to plagiarize. (Me, I wasn't trying to c&p. I just have a bad habit of selecting words and paragraphs as I read, and on that site they wouldn't stay selected. Looked at page source etc. and holy poo poo, they really wrote some JS to do that.)

They went to this paywall model recently because they couldn't make enough money off ads to keep the site afloat. They've been claiming it's working out for them. I'm astonished anyone pays those absurd prices to read that drivel, and saddened because the paywall cut off access to so much unintentional comedy.

BobHoward
Feb 13, 2012


flavor posted:

I can only speak for myself of course, but whether the next discrete GPU is going to be AMD or Nvidia has about zero influence on my buying decisions as long as it's decent and fits with the overall package.

For sure. The hilarity is that he was claiming it would have an AMD APU. That's AMDese for a CPU with integrated GPU. Which never happened, and even if he had a semi-credible lead, he should've known Apple was probably just trying to keep Intel honest in negotiations. (Because AMD was and is so far behind in CPU perf that Apple was never going to pick them. Apple values GPU perf, but not so much that it could've made those literal netbook-performance-level CPU cores look acceptable.)

BobHoward
Feb 13, 2012


Mu Zeta posted:

Not even Apple would have the balls to ship a $2,000 laptop with only integrated graphics.

I think they will, for a variety of reasons, and it's not even as gonads-y as you think.

It's not ludicrous any more. Iris Pro has more GPGPU FLOPS than the 650M in the current rMBP, and the L4 eDRAM cache gives it competitive memory performance. It's not as fast as the 650M for games (particularly when you use antialiasing), but it's not hopelessly bad, and more importantly, Apple has never put gaming performance first.

Speaking of which, in AnandTech's Iris Pro preview it actually beat the 650M in nearly all OpenCL compute benchmarks, some by wide margins. Courtesy of the new Mac Pro's design, we know Apple is doubling down on OpenCL acceleration for "pro" application software. I think it's almost a foregone conclusion that Apple's going to sell the Haswell rMBP as a portable 4K video editing workstation, so OpenCL may be the most important GPU performance metric to them.

And it uses a lot less power. 55W in "cTDP up" mode for everything, versus today's 45W CPU + 45W GPU (plus some more for GDDR5 GPU memory). It's power managed in concert with the CPU cores, and should do low-power states a lot better than a discrete GPU can. If Apple wants 12 hour life, a discrete GPU probably won't get them there. With Iris Pro only, they should be able to, and might even be able to cut size and weight at the same time.

Finally, thanks to the 128MB L4 cache, Iris Pro costs a substantial amount more than other Haswells. Adding a discrete GPU on top of that has questionable bang per buck. Look at it from Apple's perspective: how much extra performance can you offer users for this massive leap in cost and power budget? It looked good when the integrated video was HD 4000, but probably not so much this time around.

IMO, it's either Iris Pro by itself, or a regular HD 4600 Haswell plus a discrete GPU. I could be wrong, but there are so many things about going Iris Pro only that are so very ~Apple~.

BobHoward
Feb 13, 2012


Granite Octopus posted:

Is there a quicker way to copy an entire NTFS partition than dd? I could only get 10MB/sec out of it, but normal finder file copies from the same drive but my HFS+ partition were getting up to 38MB/sec. If anything I thought it would be slower since it presumably has more overhead?

When using dd on OS X, you should use the raw device nodes for best performance: /dev/rdiskXsY instead of /dev/diskXsY. Also try increasing the block size dd uses while copying. 128KB is usually a good value; the argument to pass to dd is "bs=128k".
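
For example, assuming the NTFS partition shows up as disk2s1 (the numbers are placeholders -- check yours with "diskutil list" first):

code:
# raw (unbuffered) device node plus a 128KB block size
sudo dd if=/dev/rdisk2s1 of=ntfs-backup.img bs=128k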

BobHoward
Feb 13, 2012


Kenny Logins posted:

As far as RAM/processor goes, when I started playing Binding of Isaac and Dungeons of Dredmor which I thought were fairly non-hardware-intensive games, only to have the fans spin up most of the time, it was then that I thought more RAM might have been wise. I'm not sure if a better processor would have helped as well, in that case.

Probably neither. Lots of simple games are poorly coded and eat 100% CPU even though they don't need to. (And will do so regardless of how fast the CPU is.)

You can use Activity Monitor to figure out what's up. If page outs keep increasing while the game is running, more memory would help.
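
If you'd rather watch from Terminal, vm_stat reports the same thing:

code:
# print VM statistics every 5 seconds while the game runs;
# a steadily climbing pageout count means more RAM would help
vm_stat 5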

BobHoward
Feb 13, 2012


Civil posted:

Not sure what we're being coy about, but I laughed at myself for spending north of $1000 for a 1.3Ghz laptop. This thing is a monster, and has chewed up everything I've thrown at it.

They're being coy about surfing big waves.

I have a 2011 Sandy Bridge Air and it's a monster too. The base clock rates on these Sandy/Ivy/Haswell ultrabook CPUs can be deceptive. They often manage to run at max turbo instead. In your case, that's 2.3 GHz with 2 cores active, or 2.6 GHz with one.

The reason for the low base clocks is competition for that limited 15W/17W power budget from the on-chip graphics. The chip has an intelligent power controller which adjusts the power split between CPU and GPU on the fly, based on demand. The minimum CPU clock rate should only come into play when the split is as far in favor of the GPU as it can go.

(Bigger GPUs are why Haswell ULT base clocks dropped compared to Ivy. The GPU can consume more power than before, so the CPU's minimum frequency had to suffer a bit. You can even see this in Haswell vs. Haswell: the HD 4400 version of your i5 has a base clock of 1.6 GHz and the same 2.3 / 2.6 turbo speeds. HD 5000 graphics are costing you 300 MHz of "base" clock rate.)

BobHoward
Feb 13, 2012


japtor posted:

Well there's this:

Which doesn't necessarily clear things up, since the HD5000 one there (4650U) is listed with the same max clock as the 4258U, while the 4558U adds another 100MHz. Maybe it's another random i5 vs i7 thing.

It's all about TDP in the end. Consider the Air CPUs and the probable 13" Haswell rMBP CPUs:

Air i5 (4250U): 15W TDP / 1.3-2.6GHz CPU / 0.2-1.0GHz HD 5000
Air i7 (4650U): 15W TDP / 1.7-3.3GHz CPU / 0.2-1.1GHz HD 5000

rMBP i5 (4258U): 28W TDP / 2.4-2.9GHz CPU / 0.2-1.1GHz Iris 5100
rMBP i7 (4558U): 28W TDP / 2.8-3.3GHz CPU / 0.2-1.2GHz Iris 5100

Sandy/Ivy/Haswell all have a built-in power manager which shifts power allocation between the CPU cores and GPU on the fly, capping total power at rated TDP. It restricts power use by adjusting clocks and voltages. The 28W chips have almost 2x the power budget, so the power manager never needs to clock the CPU way down, which you can see reflected in the specs. It also means that in real world situations with the GPU under load, the GPU will be able to clock much faster. That's why the 28W chip GPUs get to be called "Iris 5100" even though the specs look nearly identical to HD 5000.

The Haswell MBAs should bench close to the Haswell 13" rMBP under loads where the graphics processor doesn't have much to do, which lets the CPUs stay close to max clock rates. But with the GPU loaded, I expect the rMBP to win by substantial margins.

BobHoward fucked around with this message at 07:12 on Jun 24, 2013

BobHoward
Feb 13, 2012


Bob Morales posted:

As far as all this 28W vs 35W stuff, are there no 35W Haswell chips? Has Apple hinted that they are going to move to the 28W chips?

My thinking is that they wouldn't go with a different wattage chip because they'd have to re-configure the internal power system of the MBP, right? Or would it not be such a big deal since they're going down? Or would they have to re-do everything with a new motherboard design for Haswell anyway?

There are 37W Haswell chips, but they're quadcore with HD 4600 graphics. The 28W parts are dual-core with Iris 5100, and are a more logical path for the 13" Retina MBP since Apple wouldn't want those to have less GPU power than the Airs. The other thing of note is that the 28W chips are "ULT" just like the 15W Air chips, which brings some interesting platform level power management features. (That will only get fully taken advantage of in Mavericks.)

Different wattage wouldn't necessarily mean power supply redesign. Haswell requires complete redesign for other reasons, since its integrated voltage regulators mean there's just two supply voltages, and the main one is significantly higher voltage than before.

BobHoward
Feb 13, 2012


gucci void main posted:

The 11" has a great screen if you don't need IPS. If you need an IPS display that badly you're doing stuff that requires a 15" model.

Because only power users who need desktop replacement level CPU and GPU power could possibly want their display to do better than 6bpc color, not suffer from horrible viewing angle color and contrast shifts, and have "Retina" resolution for ultra crisp text. Got it. Glad you cleared that up for everyone. You are absolutely correct, there is no point to the 13" rMBP whatsoever and Apple's product designers should resign in shame.

BobHoward
Feb 13, 2012


teagone posted:

I've been given a 13" MacBook Air (used, previous generation) from one of my very awesome, generous cousins, but the charger is borked. Would it be a smart idea to get a replacement off ebay? I ran across this one: http://www.ebay.com/itm/Genuine-App...=item2a2ddf6246 and they say it's genuine Apple brand. I'm just worried about not being able to tell if it's like a really good knock-off or something.

Take the borked charger to a fruit stand. They replaced mine when the insulation on its cord frayed even though it was half a year out of warranty at the time. No hassle either. There are no guarantees once you're out of warranty, but they're often pretty cool about taking care of you anyways.

BobHoward
Feb 13, 2012


flavor posted:

https://itunes.apple.com/us/app/battery-health/id490192174?mt=12

I'm not saying that coconut battery is necessarily wrong, but I wouldn't draw any conclusions based on older versions when there's a newer one to consult first.

FYI, despite the similar visual appearance, "Battery Health" isn't a newer version of Coconut Battery; it's a completely different app. And newer isn't better here, since neither app does anything but report numbers gathered from an OS X API. Accuracy is really up to the OS. (Or, more likely, your Mac's SMC firmware -- in this case even the OS is probably just a middleman.)

You can view many of these numbers without installing a 3rd party app. Open Apple's "System Information" app (formerly "System Profiler") and look under the "Power" pane.
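
Terminal version, if you'd rather not click through the GUI:

code:
# dumps the same battery health / cycle count info as the Power pane
system_profiler SPPowerDataType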

To me, the killer Coconut Battery feature is that you can have the app send your current battery capacity to an online database, which gives you a web page graphing your battery's capacity against its age or charge cycle count, alongside the average curve for all Macs matching yours. Try "Open coconutBattery Online" under the coconutBattery menu. It could be done better, e.g. by automatically logging capacity whenever you open the app so that you get more points on the curve, but what the hell, it's free.

Finally, on the third or fourth launch or thereabouts, Battery Health pushed me to buy one of the author's other apps instead of showing battery info, and it uses obviously wrong units on the realtime battery current graph at the bottom. (That might not annoy other people as much as it annoys me; I'm an engineer.) I don't think it's truly a bad app, and it does a couple things a little better than Coconut Battery, but I uninstalled it anyways.

BobHoward
Feb 13, 2012


shodanjr_gr posted:

So I've had my haswell 11" air for a few weeks now and I don't think that I'm observing the expected battery life that I've read about. It's good but it's not 9 hours good. I don't do "heavy usage" either. Web browsing/videos, 1 account synced on Mail (IMAP), skydrive and skype running. Also I'm noticing that it seems to run down the battery a bit fast when in standby (even with power nap off).

Should I take it to the fruit stand?

Battery life is very dependent on software behavior, and more so with Haswell than ever before. Programs which seem like they ought to be "light" can sometimes do dumb things that aren't as light as you'd think, even while nominally idle. Apple's own 9 hour "wireless web" rating is probably based on doing a clean OS install, running Safari, and using some kind of script to drive it through a predetermined set of websites. They're not going to be running Skydrive, Skype, Mail, etc. Mavericks will help keep background app power use more under control, but it's not here yet.

So, open Activity Monitor and take a look at the CPU pane on the bottom. What's the idle percentage hovering at when the system appears to be idle? If it's not in the high 90s, look at the highest CPU use processes to see what might be eating energy.
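
A quick Terminal way to grab that number, if you prefer:

code:
# take two samples; the second "CPU usage" line is the meaningful one
top -l 2 | grep "CPU usage"
# idle should be hovering in the high 90s on a truly idle system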

You mention web browsing / videos. Did you install the Flash plugin? It is still one of the worst enemies of battery life while browsing. If you want high battery life, do whatever you can to control it, such as installing ClickToFlash or ClickToPlugin.

Safari can be a bit badly behaved for some things too -- most notably, when displaying webpages that have lots of animated GIFs. Such as the Something Awful Dot Com forums. :haw: Generally speaking modern websites have a lot of Javascript code, etc., so open tabs/windows can use a surprising amount of CPU, and some websites are definitely worse than others. (The Mavericks version of Safari will help a lot by reducing CPU use of non-visible tabs/windows to zero, but once again, not here yet.)

BobHoward
Feb 13, 2012


Maneki Neko posted:

Are the 15" retina MBPs still kind of a crapshoot with the displays? He's not doing anything super fancy with it, although he does have giant gorilla hands, which may be why he's been leaning towards the 15" models.

For what it's worth, I have giant gorilla hands (point of reference: can press both shift keys on a standard size keyboard with one hand) and the 13" MBA works fine for me.

BobHoward
Feb 13, 2012


Big City Drinkin posted:

This might be a dumb question, but does the "put hard disks to sleep when possible" option in Energy Saver mean anything if you have an SSD? I thought "sleep" in this context meant that the mechanical portions of regular HDDs would be deactivated, but is there some SSD analog?

Sure, they do still use power when idle:

http://www.anandtech.com/bench/SSD/305

Not nearly as much as an HDD, of course, but it's still possible to implement a "sleep" state which shuts things down to save power: processor cores (SSDs usually have an ARM or two), clock generation circuits, DRAM refresh (SSDs usually have a bit of local DRAM to cache data structures), and so on.

BobHoward
Feb 13, 2012


teagone posted:

Also, would a defective charger harm the battery in any way? I ordered a replacement from OWC, and noticed that it stops charging like every 20 seconds or so, and the brick got really hot really fast. The original charger doesn't do that, so I'm assuming the new one I got is defective.

That's defective all right. It's hard to say whether that will harm the battery but get the charger replaced as soon as possible.

BobHoward
Feb 13, 2012


fookolt posted:

Wait, how do you figure that?

Can't speak for tirinal (whoops, efb!), but I think the CPU being a Crystalwell (aka Iris Pro 5200) strongly implies no discrete graphics. It isn't impossible to do dual graphics with Crystalwell, but a regular HD 4600 i7 makes so much more sense in that case. If you have a discrete GPU, you don't need Crystalwell graphics, and the regular i7s are cheaper, clock higher, and have more L3 cache. Speaking of which:

quote:

What's up with that lower L3 cache?

It's not known for certain, but I've seen some speculation about it which I suspect is true.

Crystalwell consists of a GT3 Haswell CPU chip plus a 50 GB/s 128MB L4 eDRAM cache chip, both mounted to a single package. The L4 cache is what gives it a large graphics performance advantage over a regular GT3 Haswell, which has the same amount of GPU compute power. Fast GPUs can be very bandwidth limited. (The L4 helps CPU performance too, by the way.)

All caches consist of two memory arrays: data and tag. The sizes you see quoted refer only to the data array, so a "1MB" cache can hold 1MB of actual data. The tag memory is the filing system, extra memory that tracks what's currently stored in the data array. The data array is divided into "lines" (usually 64 bytes for x86 processors), and every line needs one tag entry. Tag size is therefore proportional to the number of lines in a cache.

A 128MB cache is going to need a lot of tag memory. Though it would be possible to store the tags on the eDRAM L4 cache chip alongside the L4 data array, SRAM tags located in the CPU chip would be a lot faster. That much tag SRAM would take a lot of die area, so Intel may have sacrificed 2MB of L3 cache SRAM to help keep the die size (and therefore cost) down.
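
Very rough numbers, with a made-up tag-entry size (the real bits per tag depend on address width, associativity, and coherence state, none of which Intel publishes):

code:
# tag SRAM estimate for a 128MB cache with 64-byte lines
awk 'BEGIN {
    lines   = 128 * 2^20 / 64   # 2,097,152 cache lines
    tagbits = 30                # assumed bits per tag entry
    printf "tag SRAM: ~%.1f MB\n", lines * tagbits / 8 / 2^20
}'
# -> tag SRAM: ~7.5 MB, a big chunk of die area either way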

BobHoward fucked around with this message at 11:12 on Jul 9, 2013

BobHoward
Feb 13, 2012


Malcolm XML posted:

Has intel confirmed that the eDRAM is used transparently as an L4 cache? Because that makes up for a small loss in L3 (and I know a lot of people wanted crystalwell on desktop if that were the case).

Intel let Anandtech and a few other sites have access to a reference platform. It's been tested with Sandra, and as you can see from the curves, the eDRAM is definitely functioning as an L4 cache.

http://anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/3

Jalumibnkrayal posted:

I'd love for there to be a $1500 13" rMPB with Iris 5200. That's not going to happen, is it?

Technically possible, but no, it's not going to. The current rMBP 13" uses a 35W dual-core CPU. The Iris 5200 CPUs are all quad-core 47W, and need a special "cTDP up" mode (boosts TDP to 55W if the system's cooling can handle it) to get the most out of the 5200. (Anandtech's Iris 5200 preview covers this mode.) Apple's theme for this year is clearly ridiculous battery life, so 28W dual-core Iris 5100 seems like the obvious choice. (And for what they're worth, there are already leaked Geekbench results from a 13" rMBP with 5100.) (e: f,b)

BobHoward
Feb 13, 2012


BigHandsVince posted:

So, I have 1 small issue with my new air. The edge of the recess that the keyboard sits in seems to be scratching the screen :s. I know the tolerance would be pretty tight but I was fairly sure they wouldn't make it possible for them to touch...

I've had the thing in a sleeve the whole time I've had it, and it's only visible when the screen is off, but it's still pretty disappointing.

Any advice?

Are your display housing or bottom case permanently bent such that this happens? If so, Apple owes you repair or replacement. If not, my only advice would be: Don't put heavy things on top of the computer when it's closed, or sandwich it between things that are going to crush it in your bags.

I have a 13" 2011 Air, but the chassis has barely changed in the 2012 and 2013 models. For what it's worth, with the display closed and looking at it edgewise with a lit wall in the background, I can see space between the display and the spacebar key. If you can see the same, you're probably pinching the machine when it's closed.

BobHoward
Feb 13, 2012

Thin and light are, unfortunately, the enemies of stiff and robust. Airs are solid for their size and weight (especially the 11", the 13" is noticeably less rigid), but this is one place you're just going to lose something compared to thicker and heavier machines (including Apple's own MacBook Pros).

BobHoward
Feb 13, 2012

Are the ports on both sides bad? Usually one side's ports are on a small I/O board connected to the motherboard by a small cable, and the other side's are mounted directly on the motherboard. If it's just one side, and it's the I/O board side, it might be a simpler, cheaper fix.

BobHoward
Feb 13, 2012


Bob Morales posted:

I personally read more into the benchmarks for performance and battery life expectations.

Same. I plan on using his reviews to decide between the 13" and 15" Haswell rMBP. I want to know how good the 13's GPU is; if it's fast enough, I'd prefer the more portable chassis.

BobHoward
Feb 13, 2012


rear end Catchcum posted:

Will the battery on my new 2013 Macbook Air 13" be affected negatively if I leave it plugged in/charging + on all the time, except when I unplug it to use it on the couch/out of the house?

Empirical data on my 2011 Air, which was subjected to more or less that usage pattern:

Capacity vs age
Capacity vs loadcycles

The dotted lines are the database average, solid blue points are my computer.

As you can see in the first graph, my Air is doing much better than the average 24 month old 2011 Air, but is quite normal on the load cycle graph (where you'll note it has a very low load cycle count). This implies that battery wear is more strongly a function of charging and discharging than age. And that it's ok for it to sit on the charger most of the time.

It's probably good practice to discharge it all the way once in a while. Sitting at 100% charge state for long periods of time is supposedly not good for some lithium battery chemistries. But "long" probably means "many months", not "ohshit did I charge cycle the macbook this week?".

BobHoward
Feb 13, 2012


Mercurius posted:

I thought that both OS X and iOS let the battery discharge to 95% and then charge back up on their own without any interaction from the user?

Yeah, there's that too.

OMGzKakaniz posted:

The bottom right of the screen keeps making a slight creak / pop noise. It's only really bad when I wake up the machine and I can't really reproduce it. Any issues with this before? Is it just the adhesive or something expanding? Almost sounds like a small electrical noise, something expanding or what not.

If it's extra bad when you wake up the machine, it literally could be thermal expansion/contraction.

It's not totally clear, but is this a one-time noise you hear when waking from sleep? It could be something entirely different. I have a Mac Pro that makes a very loud click when you wake it, thanks to a big honking relay in its PSU.

BobHoward
Feb 13, 2012


SeaborneClink posted:

[coconutBattery graphs: capacity vs. cycle count and capacity vs. age]

I... I should probably get this looked at.. :ohdear:

I'm also not sure how/why this factors in

It seems that it is roughly average when plotted against mAh:cycle count, but vastly sub-par when compared with mAh:age

There's nothing to get looked at, IMO. It's just another data point showing that it's about cycle count, not age or other factors (*). My computer looks great for its age only because the average user in that database is logging a lot more than 52 cycles in 24 months. Yours looks bad only because you've racked up 250+ cycles in 9 months, almost 1 a day. If you need to use the battery that much, it's going to wear out in less calendar time. Not much can be done about it.

For what it's worth, Apple rates their batteries to retain 80% capacity at 1000 cycles. And if batteries work like countless other kinds of wear phenomena, the wearout rate is highest at the beginning and end of life. You're in what should be the long, relatively flat section. (Don't get too freaked by the steep dropoff to the right of that capacity vs cycle count graph for 2012 Airs, by the way. There can't be more than a handful of people who have actually done 400-500 cycles in a year or less. It's probably not a statistically valid sample yet.)


* - Other than heat, that is. Don't store or use a lithium battery in places where it's going to get really hot (Apple cites 35C / 95F as the maximum recommended environmental temperature), and if you do, don't charge it while it's still hot.

BobHoward
Feb 13, 2012


spongeworthy posted:

Has anyone else noticed this issue with their magsafe power cord? I'm not sure if the yellowing is due to a slight bend in the cord that sometimes occurs or if there is an electrical/wiring issue that is going to cause a fire.

I'm pretty sure it's the insulation absorbing skin oil. I had it happen to a magsafe cord in a different spot, thanks to my vile habit of browsing in bed. A length of cable which was frequently draped over ye auld boddy yellowed just like that. I can't tell from your photo if yours is doing it, but mine also puffed up a bit. Eventually the cable management doohickey could not slide past the yellowed section.

There's no safety hazard yet, but it also makes the jacket material softer and weaker. In my case, it eventually stretched, then tore. The good news is that there was no immediate fire hazard -- the two internal wires are individually insulated, so there was no exposed metal, and that material doesn't seem to absorb oil like the outer jacket does. The other good news is that I took it to a Genius Bar and they gave me a new one, even though it was out of warranty.

(If they hadn't done anything I was just going to put some heatshrink tube on it. It would've been perfectly fine then.)

BobHoward
Feb 13, 2012


Cyne posted:

I just got my machine back today and have found that I'm unable to log into OS X with the password I've been using for years. Fortunately I set up a guest account prior to sending it away which I'm using now. I'm guessing this is an NVRAM thing and since I have a new logic board...

What's the best / most hassle free way to proceed here?

There is a password reset feature in the bootable OS X install disc or recovery partition.

http://support.apple.com/kb/ht1274

It's not an NVRAM thing; the password (or, more precisely, a hashed version of it) is stored on disk. The repair tech probably did a password reset in order to be able to log in.

e:fb

BobHoward fucked around with this message at 22:35 on Aug 8, 2013

BobHoward
Feb 13, 2012


tehschulman posted:

So uh, is there a recommended way to re-attach a transistor to a Mac Mini logic board? :)

With a soldering iron. However, I regret to inform you that if you detached it without the use of a soldering iron you have probably damaged the component, the pads and traces it was soldered to, or both.

Can you take a picture with a camera that has a decent macro mode?


BobHoward
Feb 13, 2012


tehschulman posted:

I have soldering experience and some brand-new fine-tip soldering iron tips. Some advice I read mentioned using a heat gun to reattach the component, but I don't have access to one of those.

Good news. I showed the pictures to one of our technicians at work and he thinks the pads are not broken and that you should be able to resolder the component. Even if the pads are broken you should be able to scrape a little of the solder mask off the PCB to either side since the component is connected to planes, not traces.

We think it could be a diode, so keep the orientation the same as the original. Also, no guarantees, but since one side is connected to a screw mount and the other side looks like a ground plane, it is probably part of a chassis ground protection circuit of some sort, and odds are good the computer would function fine without it.

E: I wouldn't use a heat gun. Also, the correct orientation is visible: note the residue on one edge, which corresponds to residue on the PCB. IMO, match that, sit the component down in the grooves it left, hold it down with tweezers, and reflow one side and then the other. You may not even need to add new solder.

BobHoward fucked around with this message at 01:54 on Aug 10, 2013
