|
Yudo posted:In the event no one posted it, the rumored specifications of the soon to be released GTX 760:

Wouldn't the hope be for a 4GB version, to ensure efficient memory allocation with the 256-bit controller setup? I know nVidia claims voodoo magic tech helping them with the 600-generation cards that have 192-bit buses with 2GB of VRAM, but I don't buy that you get to have mismatched memory controller allocation without performance loss somehow.

Factory Factory posted:That suggests there's a 760 Ti still to be had, and the 760 goes head to head with the 660 Ti.

Given the success of their Ti branding, there's gotta be a Ti (or two) this generation as well - but I wonder what it will be. If those leaked specs are accurate for the 760, and we already know the 770 is basically a 680 with a shot in the arm, then where does that leave the 760 Ti to go in terms of performance? Halfway between them seems to be an obvious option, but that would make it a price:performance demolisher. Though maybe nVidia is cool with that.

Agreed fucked around with this message at 00:17 on Jun 18, 2013 |
# ? Jun 18, 2013 00:13 |
|
|
Factory Factory posted:That suggests there's a 760 Ti still to be had, and the 760 goes head to head with the 660 Ti.

There may not be a 760 Ti:

Videocardz.com posted:According to the leaker, NVIDIA did in fact want to use the GTX 760 Ti naming, but they dropped the Ti idea later, so the card ended up with the pure GTX 760 sexiness.

Ignoring the awfulness of the above sentence: despite seriously cutting down the 104 core, the rumored 760 is clocked so high and has so much more bandwidth that it might perform quite close to the current 670. This does not leave much room to position a Ti--also considering that the 760 will be ~$300.

GTX 770: $400
GTX 760: $300

Where would a Ti fit in?

Edit: Agreed posted:Wouldn't the hope be for a 4GB version, to ensure efficient memory allocation with the 256-bit controller setup? I know nVidia claims voodoo magic tech helping them with the 600-generation cards that have 192-bit buses with 2GB of VRAM, but I don't buy that you get to have mismatched memory controller allocation without performance loss somehow.

You're right; I didn't think about that, only about keeping it inexpensive. Fitting 3 GB on there would be more challenging than 4.

Yudo fucked around with this message at 00:21 on Jun 18, 2013 |
# ? Jun 18, 2013 00:18 |
|
Another random query: I do a lot of gaming on PC (I haven't had a console in years), so I'm wondering if it's worth grabbing a 770 with 4GB memory. Is that much RAM a good or bad idea?
|
# ? Jun 18, 2013 00:26 |
|
It depends entirely on things not yet known! If the next console generation makes big use of the large amounts of RAM they have, and/or you run at 2560x1440/1600, it is possible that 2 GB of VRAM could become a bottleneck for a 770 before you've run out of shader/compute/etc. power. But it's not certain this will be the case, and if you're running at 1080p or so, it might not affect you even so.
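To put some very rough numbers on it (my own back-of-envelope sketch, assuming 32-bit color, a same-size depth buffer, 4x MSAA, and a few render targets; real VRAM use is dominated by textures, so treat this as a floor):

```python
# Rough render-target footprint per resolution. This is a sketch, not a
# measurement: texture data, not framebuffers, is what actually fills VRAM.
def framebuffer_mb(width, height, msaa=4, targets=3):
    # 4 bytes of color + 4 bytes of depth/stencil per sample, times the
    # MSAA sample count, times a handful of render targets.
    bytes_per_pixel = (4 + 4) * msaa
    return width * height * bytes_per_pixel * targets / 1024**2

for w, h in [(1920, 1080), (2560, 1440)]:
    print(f"{w}x{h}: ~{framebuffer_mb(w, h):.0f} MB")
```

Even under those generous assumptions, 2560x1440 only costs ~80% more than 1080p in raw buffers; it's next-gen console ports with big texture budgets that would really eat into 2 GB.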
|
# ? Jun 18, 2013 00:37 |
|
Shadowhand00 posted:For comparison's sake, my i7-920 with the 780 (which is a beast) got the following:

Er, so which 780 is that? I have the EVGA 780 SC ACX, factory OC'd, and I'm seeing scores 10% lower even with a 570 as a PhysX card (auto settings). Without the 570 it's slightly worse. Maybe I'll borrow an i7-3770 to replace my i5-2500K and see if that makes a difference. Although I can play everything on ultra at good frames, I still feel like it sometimes bogs down and I don't understand why.
|
# ? Jun 18, 2013 05:30 |
|
I wonder if that benchmark number has to do with PCI-E bandwidth issues using both a 780 and 580 on a 2500k-equipped motherboard. But I'm not sure whether or not it would make a difference just telling the 580 not to do anything. (Versus not having it in there)
|
# ? Jun 18, 2013 05:45 |
|
Cavauro posted:I wonder if that benchmark number has to do with PCI-E bandwidth issues using both a 780 and 580 on a 2500k-equipped motherboard. But I'm not sure whether or not it would make a difference just telling the 580 not to do anything. (Versus not having it in there)
|
# ? Jun 18, 2013 05:52 |
|
Gaming computer 101, MSI: when your CPU and GPU both throttle based on temperature headroom, DON'T HOOK THEM UP IN A HEATPIPE LOOP TO A SINGLE FAN. How did this get through testing? It does fine when the CPU /or/ the GPU is stressed, but when both are, parts that should be much faster perform much worse.
|
# ? Jun 18, 2013 06:19 |
|
randyest posted:It's slightly worse with the 570 out of the box

The more you know! What's your CPU sitting at? His older i7 could be very nicely overclocked; if you're running a stock Sandy Bridge setup, that could explain it in one step - sorry if I missed it. He does have a higher GPU score, though, which is a bit odd in this situation. Now I'm wanting to burn some bandwidth and see how my system does.

Though at least you've dispelled one fear I had, with one day until my own 780 is out for delivery: the 780 is not meaningfully bandwidth limited at PCI-e 2.0 8x, unless you're running a 16x/4x PCH setup.

Edit: 1 gig for the installer? Jesus, it's just a benchmark... Alright, here's $10 to AT&T so I can see what my stuff's up to. I guess it's still about a $15 discount once the 780 gets in and I can register it for free, but god drat.

Agreed fucked around with this message at 06:51 on Jun 18, 2013 |
# ? Jun 18, 2013 06:36 |
|
Agreed posted:What's your CPU sitting at? His older i7 could be very nicely overclocked, if you're running a stock Sandy Bridge setup that could explain it in one step, sorry if I missed it. He does have a higher GPU score, though, which is a bit odd in this situation.

I'd be worried if I thought synthetic benchmarks were worth a drat, and if my Metro: Last Light, Tomb Raider, BioShock Infinite, and Sleeping Dogs in-game benches and performance weren't so good. I'll try an i7-3770 to see if it matters and go from there. Trial and error is so tedious.
|
# ? Jun 18, 2013 08:20 |
|
You're honestly better off just overclocking that 2500K instead of faffing about with a 3770. You could easily get higher performance in games with that 2500K overclocked than with a 3770 at stock. (A 3770 overclocked is a different story, of course.)
|
# ? Jun 18, 2013 09:25 |
|
To quantify that, here are my score totals from the basic benchmark, with a heavily overclocked GTX 680, running the test off of an SSD, 16GB of 1600MHz DDR3, and a 2600K at 4.7GHz. In Fire Strike in particular I got:

Score: 7186
Graphics: 8003
Physics: 12033
Combined: 3033

My graphics score is nearly 2K below yours and 2.5K below his, but my Physics score is double yours and outpaces his by about 4K. Overclocking a lot pays off in some usage scenarios. Get a nice cooler and kick that 2500K in the teeth; Sandy Bridge is born to run and you can probably close the gap nicely.

I do wonder why your graphics score is lower than his; wish I knew each of your clocks. I /definitely/ wish I knew whether you are running in PCI-e 2.0 8x/8x mode or if you're using a motherboard that can do a full PCI-e 2.0 16x for the primary slot and then PCH the four auxiliary lanes into a PCI-e 2.0 4x link for your PhysX card. My motherboard, the Sabertooth P67, only runs 8x/8x. It's spare on certain features and that's one of them.

Thanks to your results, if you can tell me what your PCI-e 2.0 bandwidth is on the card, I can provide us all with some data on whether and to what extent the GTX 780 is bandwidth limited on PCI-e 2.0; there are reasons to speculate "not very much at all" but also reasons it might be "more than one would hope."

Agreed fucked around with this message at 10:04 on Jun 18, 2013 |
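(If you want a feel for why the Physics gap is that big: a crude model where the score just scales with thread count times clock gets you in the right neighborhood. The stock turbo clock and the linear-scaling assumption below are illustrative guesses on my part, not measurements:)

```python
# Toy model: assume the CPU physics score scales with threads * clock.
# The thread counts are real (2600K is 4C/8T, 2500K is 4C/4T); the stock
# clock and the linear-scaling assumption are rough guesses.
def predicted_ratio(threads_a, ghz_a, threads_b, ghz_b):
    return (threads_a * ghz_a) / (threads_b * ghz_b)

r = predicted_ratio(8, 4.7, 4, 3.4)  # my 2600K @ 4.7 vs a stock-ish 2500K
print(f"predicted: {r:.1f}x")  # ~2.8x predicted vs ~2x observed, so
# Hyper-Threading clearly doesn't scale like real cores here.
```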
# ? Jun 18, 2013 09:58 |
|
randyest posted:It's slightly worse with the 570 out of the box

The physics score seems to love clock speed and/or more cores; for example, my lovely 1090T (4GHz), which gets beat in most other things by the 2500K, got nearly 2.5k more than the stock 2500K, but it gets absolutely obliterated in the physics test by the above 2600K at high clocks.
|
# ? Jun 18, 2013 10:24 |
|
What benchmarking software is that?
|
# ? Jun 18, 2013 13:27 |
|
3DMark (the 2013 version), specifically the Fire Strike portion, which runs last unless you buy the Advanced version, and then you can run it alone.
|
# ? Jun 18, 2013 13:37 |
|
If you're wondering how mobile gaming stacks up, here is a 675M with a 3610QM
|
# ? Jun 18, 2013 16:51 |
|
the runs formula posted:What benchmarking software is that?

You also get it for free from EVGA right now (if you buy one of their 7-series cards). For comparison's sake, here is my benchmark:
|
# ? Jun 18, 2013 17:09 |
|
Please stop posting benchmark results unless you have an interesting or unusual configuration, it's directly in response to a question you can't provide a link for, or it otherwise adds value. It's easy for a thread to get bogged down because a bunch of people think posting their benchmarks is a contribution.
|
# ? Jun 18, 2013 20:07 |
|
So EVGA just basically told me to get hosed - are there any GPU manufacturers with customer service as solid as EVGA's used to be? Nvidia or AMD.
|
# ? Jun 18, 2013 21:58 |
|
Yudo posted:In the event no one posted it, the rumored specifications of the soon to be released GTX 760:

As someone who knows very little to nothing about what makes one GPU better than another, could anyone break down the major differences between the 670 and 760, assuming the specs listed on that site are accurate? Specifically, I'm wondering which will serve me better for gaming at 1920x1200. The 760 looks better in some regards, while the 670 does in others.
|
# ? Jun 18, 2013 22:08 |
|
beejay posted:So EVGA just basically told me to get hosed - are there any GPU manufacturers with as solid of customer service as they used to have? Nvidia or AMD.

How'd we get to this post from

beejay posted:That's weird, I just did an RMA with EVGA and it was super fast. I put it in on a Saturday even I think and they answered within minutes. You can't actually do the RMA part until you get a ticket going or they will reject it. I'd try emailing them again.

that one? What happened?
|
# ? Jun 18, 2013 22:09 |
|
Colonel Pancreas posted:As someone who knows very little to nothing about what makes one GPU better than the other, could anyone break down what the major differences between the 670 and 760 would be, assuming the specs listed on that site are accurate? Specifically, I'm wondering which will serve me better for gaming at 1920x1200. The 760 looks better in some regards, while the 670 does in others.

Okay.

pre:
                        670              760
GPU                     GK104            GK104
Shaders                 1344             1152
Shader clock            915 MHz          1072 MHz
Shader throughput       1.229 Giga       1.234 Giga   (in shaderhertz)
Boost clock algorithm   1.0 (TDP only)   2.0 (TDP + temp)
VRAM                    2 GB             2 GB
VRAM clock              6 GHz            7 GHz
VRAM bus                256-bit          256-bit
VRAM bandwidth          192 GB/s         224 GB/s
Other poo poo like ROPs     Whatever         It's the same
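(The bandwidth rows are just arithmetic - bus width divided by eight, times the effective memory data rate - so you can sanity-check leaked specs like these yourself:)

```python
# Memory bandwidth = (bus width in bits / 8) bytes * effective data rate
# in Gbps per pin. Both cards here use a 256-bit bus.
def mem_bandwidth_gbs(bus_bits, effective_gbps):
    return bus_bits / 8 * effective_gbps

print(mem_bandwidth_gbs(256, 6))  # 192.0 -> the 670's 192 GB/s
print(mem_bandwidth_gbs(256, 7))  # 224.0 -> the rumored 760's 224 GB/s
```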
|
# ? Jun 18, 2013 22:19 |
|
beejay posted:So EVGA just basically told me to get hosed - are there any GPU manufacturers with as solid of customer service as they used to have? Nvidia or AMD.

XFX gave me an RMA after a new card I bought was BSODing my computer. No issues, no problems. They even went as far as saying that if the replacement didn't work (same issue), I would have been able to discuss other options of the same value.
|
# ? Jun 18, 2013 22:27 |
|
Agreed posted:How'd we get to this post from It's a long story. I'll PM you in a minute. Basically I have had to RMA multiple times and today I got a call from a "manager" who was very rude and unhelpful and I'll probably be selling a 660ti on SA-mart soon.
|
# ? Jun 18, 2013 22:28 |
|
Anecdotes, I guess, but I had not-bad customer service with MSI. The only problem was that their card design was flawed (Nvidia 570 reference design), so even after three RMAs a card would never last more than a few months without artifacting and degenerating into BSODs. Also, the first time I RMA'd with MSI I got a brand new card in the original box; the second time it came in a generic box, so I don't know if it was refurbed or what. The third RMA wasn't by me (I gave the card away), so I don't know what they got in replacement, but I do know it didn't last either. I haven't had to deal with Asus's support yet because their design actually worked out of the box.
|
# ? Jun 18, 2013 22:43 |
|
Alereon, I totally agree benchmark postfests lead to bad whitenoise posting, but I think right now we're all collaborating pretty well to investigate interesting aspects of CPU / GPU / PhysX tradeoffs. If this is not OK please let me know and I'll edit it out.

Agreed posted:What's your CPU sitting at? His older i7 could be very nicely overclocked, if you're running a stock Sandy Bridge setup that could explain it in one step, sorry if I missed it. He does have a higher GPU score, though, which is a bit odd in this situation.

Maybe I'll try "extreme" OC to see what happens, but this is looking pretty good to me as it is. Thanks for all the input. I think my money would be better spent on a second 780 ACX and better cooling to OC my 2500K instead of a 3770K (which I guess might be harder to OC). Basically, this:

Agreed posted:Get a nice cooler and kick that 2500K in the teeth, Sandy Bridge is born to run and you can probably close the gap nicely.

Agreed posted:I do wonder why your graphics score is lower than his, wish I knew each of your clocks. I /definitely/ wish I knew whether you are running in PCI-e 2.0 8x/8x mode or if you're using a motherboard that can do split PCI-e 2.0 16x for the primary slot then PCH the four auxiliary lanes for a PCI-e 2.0 4x lane for your PhysX card.
|
# ? Jun 18, 2013 22:56 |
|
You find out looking at the specs for the board. The P8Z68 pro has:
The CPU hands out 16 PCIe 2.0 lanes (in your case; IVB/HSW hand out 16 3.0 lanes). On SNB/Z68, these can only be organized as x16 or x8/x8.

The PCH has 8 PCIe lanes to hand out. These get pared down by peripherals - extra SATA controllers, FireWire, non-Intel NICs, etc. The rest are routed to an expansion slot, i.e. to the black PCIe x16 (x4 electrical) slot and the two PCIe x1 slots. The slots may be overloaded, especially on Asus boards, so the x4 slot may only work at x1 electrical if certain peripherals or other PCIe x1 slots are used - check the manual/BIOS.

Remember that footnote marker on the x16 (x4 electrical) slot above? Here's the footnote:

*1: The PCIe x16_3 slot shares bandwidth with PCIe x1_1 slot, PCIe x1_2 slot, USB3_34 and eSATA. The PCIe x16_3 default setting is in x1 mode.

So no PCIe x4 or x1 slot can be a slot hooked up to the CPU on SNB/Z68. Z77 and Z87 can get more complex because they offer x8/x4/x4 splits from the CPU, so you may need to check reviews and/or the manual to see how lanes are allocated when particular slots are filled.

Factory Factory fucked around with this message at 23:10 on Jun 18, 2013 |
# ? Jun 18, 2013 23:08 |
|
According to your motherboard's specs page, it's basically running two 16x slots that split to 8x/8x when running dual cards (for SLI, for PhysX, doesn't matter). http://www.asus.com/Motherboards/P8Z68V_PRO/#specifications

Depending on which slots you've got the cards plugged into, it is possible (all values relative to PCI-e 2.0) that you've got the 780 at 8x and the 570 at 8x, or the 780 at 16x and the 570 at 4x via the PCH. If they're adjacent, it's most likely the 8x/8x split. Could you download GPU-Z and look at this here box to see what it says about your 780? Not what it's capable of, but what it's actually running at.

Edit: Factory Factory beat me to the post, because gently caress PHONE INTERNET, but please do this so we can see if your 780 is running unbridled or what the deal is there.

Also, do not expect a dedicated PhysX card to improve your score. 3DMark uses Bullet physics, which runs on the rendering card only and isn't proprietary like PhysX. Wouldn't be much of a benchmark if only nVidia cards could use it (not that it's much of a benchmark anyway, down to brass tacks - in-game matters all; these numbers are helpful in their limited scope but shouldn't be taken as determinative of much of anything).

Also, you are a certified crazy person for going with two 780s, but I'd probably spend the money if I had it too, so I'm gonna refrain from judgment; just don't let Crackbone get wind of it or you'll give him a seizure, and for good reason
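(For a sense of what's at stake in the 16x-vs-8x question: PCI-e 2.0 runs 5 GT/s per lane, and after 8b/10b encoding that's roughly 500 MB/s of usable one-way bandwidth per lane. Rule-of-thumb arithmetic, not a benchmark:)

```python
# Approximate one-way PCI-e 2.0 throughput: 5 GT/s per lane with 8b/10b
# encoding leaves ~500 MB/s per lane usable.
MB_PER_LANE_PCIE2 = 500

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{lanes * MB_PER_LANE_PCIE2 / 1000:.0f} GB/s")
# x16: ~8 GB/s, x8: ~4 GB/s, x4 (via the PCH): ~2 GB/s
```

Whether a 780 can actually saturate 4 GB/s in games is exactly the open question; textures get uploaded once and stay resident, which is part of why 8x usually costs so little.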
|
# ? Jun 18, 2013 23:20 |
|
randyest posted:Alereon I totally agree benchmark postfests lead to bad whitenoise posting, but I think right now we're all collaborating pretty well to investigate interesting aspects of CPU / GPU / PhysX tradeoffs. If this is not OK please let me know and I'll edit out.
|
# ? Jun 18, 2013 23:28 |
|
Yeah, and you should try manually setting your 2500K instead of letting that weird Asus suite do its weird things. I've been running mine stable at 4.7Ghz since I've had it with a few adjustments (voltage offset capped at 1.36v).
|
# ? Jun 18, 2013 23:29 |
|
beejay posted:It's a long story. I'll PM you in a minute. Basically I have had to RMA multiple times and today I got a call from a "manager" who was very rude and unhelpful and I'll probably be selling a 660ti on SA-mart soon.

Don't abuse the system, man. But seriously, that's odd. I managed to break a capacitor (I think) off the back of my card while putting it back to factory spec, and they still honored my RMA. Well, sort of: there's going to be some cost associated with the repair now, but considering I voided my warranty, it was pretty awesome of them. Granted, this happened after the RMA was approved, and I emailed them about it and took photos to document the damage. It's still really good of them... in theory, at least; I guess when I find out what it will cost me I may sing a different tune. Still, it's probably better than having a $300 paperweight and shelling out for a new card.
|
# ? Jun 19, 2013 01:01 |
|
Agreed posted:According to your motherboard's specs page, it's basically running two 16x that split to 8x/8x when running dual card (for SLI, for PhysX, doesn't matter).

Agreed posted:Also, do not expect a dedicated PhysX card to improve your score. 3Dmark uses Bullet Physics, it runs on the rendering card only, and isn't proprietary like PhysX. Wouldn't be much of a benchmark if only nVidia cards could use it (not like it's much of a benchmark anyway, down to brass tacks - in-game matters all, these numbers are helpful in their limited scope but shouldn't be taken as determinative of much of anything).

Agreed posted:Also, you are a certified crazy person for going with two 780s, but I'd probably spend the money if I had it too, so I'm gonna refrain from judgment; just don't let Crackbone get wind of it or you'll give him a seizure, and for good reason

Seriously, thanks a ton to everyone here. I'm going to rip out the 570, make sure the 780 is running at 16x in that case, and re-bench everything. If you've replied to me in this thread and you want a forum upgrade or cert or just some paypalbux, please PM or email me (username at gmail) and I'll set you up.

TheRationalRedditor posted:Yeah, and you should try manually setting your 2500K instead of letting that weird Asus suite do its weird things. I've been running mine stable at 4.7Ghz since I've had it with a few adjustments (voltage offset capped at 1.36v).
|
# ? Jun 19, 2013 01:39 |
|
Not BIOS modding, BIOS settings. It exposes the same manual settings as AI Suite, except it's a good bit more stable for pushing the envelope, whereas in-Windows tools can freeze for no reason or lock up on settings that would work if entered from the BIOS.
|
# ? Jun 19, 2013 01:45 |
|
Some news on the GPGPU and HPC front: Intel announced its refresh of the Xeon Phi, "Knights Landing." It's still not really a GPU, but it is still a many-core, highly parallel processor: basically a 61-core, 64-bit, P55C-Pentium-derived chip. What's different is that instead of strictly being a coprocessor (a system on a board with network interfaces over PCIe), the Knights Landing Xeon Phi will also come in socketed versions, and it will be able to be the system's primary processor. This is kinda like running Linux natively on a GeForce Titan.

With all the hubbub of AMD's ARM-based microserver CPUs and APUs with integrated Radeon bits and the upcoming Intel Atom refresh, we're about to see a new wave of high-density, many-core computers that sit in between where CPUs and GPUs currently preside. The eventual goal of both AMD's and Intel's heterogeneous systems architecture movements for x86 is to allow seamless single-system switching between all these processors from a shared memory space. GPUs and GPU-like architectures handle graphics and other workloads where you need a lot of calculations but the calculations themselves are independent and simple; CPUs handle tasks with complex single-threaded needs; and many-core architectures handle the tasks in between, like virtualization farms of simple webservers and webapps.

Knights Landing also previews a tech we'll probably start seeing in Nvidia's Maxwell and the next major GCN revision: memory dies integrated into the chip package. There are a number of ways this can be organized (e.g. as a layer of cache above L1/L2 etc., as an independent cache pool, or attached to a memory controller and treated like RAM), but the core idea is that with memory that close to the core that needs it, you can get REALLY high bandwidth out of it. Depending on how the memory is integrated into the package, though, it can cause big problems for cooling the compute-oriented part of the chip.
--

The other big news is that Nvidia's Kepler architecture can now be licensed as IP blocks, and future architectures can be, too. This is kinda like holy poo poo considering that Nvidia already has a strong SoC business combining ARM cores with their graphics hardware, but now they are opening that up for everyone. We're talking anyone who uses IP blocks being able to stick GeForces on their chips. And you know who uses IP blocks? Everyone. Not just Apple and Qualcomm and Samsung. Intel is doing it. AMD is doing it. We're talking the possibility of Intel dropping its GPU development and just sticking Goddamn Keplers on things. We're talking AMD APUs with x86, ARM, and GeForce cores. Cats and dogs living together. Mass hysteria.

Eventually, this may mean semi-custom GPUs not just at the card level, but at the SMX level: Apple offering customized GPUs in its systems with Apple-only tweaks that are more than just a BIOS lock. CUDA could kill OpenCL forever, as long as you're willing to give Nvidia a piece of the pie.

That doesn't mean Nvidia is quitting the chip business, though, oh no. In fact, the next logical step is licensing an LTE modem for their own SoCs and getting into the phone business. But that's a topic for another thread.
|
# ? Jun 19, 2013 03:03 |
|
If it means we get Intel-quality drivers for nVidia graphics, even if it's just for R-class Intel CPUs, I'll probably react how the AIs said humanity reacted to the 'ideal world' version of the Matrix in one of that movie's terrible sequels. But the odds of that happening are a Large Number to one against.
|
# ? Jun 19, 2013 03:14 |
|
Got my dual 770 4GB classifieds from EVGA yesterday, and ran into an unexpected snafu: apparently with some mobo / 700 series combos from various manufacturers (including mine, a Z87 Extreme4), you can't enable SLI and surround screens simultaneously. Activating one toggles the other off. I couldn't find any solutions by googling. If anyone has any ideas please share, but I think I'm boned until a new driver release fixes it.
|
# ? Jun 19, 2013 03:15 |
|
That... hm. That looks like just the state of things, considering no SLI mode surround is offered on the Nvidia 3D Surround webpage. That kinda blows.
|
# ? Jun 19, 2013 03:20 |
|
Factory Factory posted:The other big news is that Nvidia's Kepler architecture can now be licensed as IP blocks, and future architectures can be, too.

What the gently caress. This bit of news has incredibly wide-ranging implications and I can't even begin to think... god drat, they're doing it in a way that leverages practically nothing on their end and positions them to be a major player in virtually anything, all on the basis of exceptional designs. And given their own diverse interests, it's just... it's... hooooly poo poo. nVidia is gonna get so much richer because of this. They're licensing... seriously? It's brilliant and a little bit crazy and I love it, haha, talk about a bold move.

Edit: The funniest scenario involves, as mentioned, Intel and AMD licensing nVidia IP for easy integration into APUs or SoCs. Or AMD licensing CUDA; that might actually happen, who knows. UE4 will power a ton of games in the next generation and it has PhysX support, so being able to compute a light CUDA workload would provide AMD with a competitive advantage, or Intel with another selling point for their IGPs if it does become a thing. This is crazy, but at the same time it also makes total sense. We desktop users are dinosaurs in the market; integration is everything now. This is so weird, though, it totally changes who they were competing with as of a few days ago. I mean, they had Tegra, but now... Wow.

Edit 2: It's license-available as soon as it's on paper! Funny situation number three (this one is not gonna happen, but bear with me): AMD buys a license for nVidia's next-gen chip the very moment it gets taped out, and produces the first nVidia-designed card before nVidia gets it out to partners for production. An actual thing that can happen now. Not going to happen, for a lot of obvious reasons, but it could. Haha, what in the world. Strange days.

Agreed fucked around with this message at 04:04 on Jun 19, 2013 |
# ? Jun 19, 2013 03:42 |
|
Factory Factory posted:That... hm. That looks like just the state of things, considering no SLI mode surround is offered on the Nvidia 3D Surround webpage. That kinda blows.

gently caress me. I sincerely hope this is a temporary issue; otherwise I'll return these and just get a 780.

Edit: How did this review get it to work? http://www.legitreviews.com/article/2210/1/ I've tried a few variations based on the color-coded diagrams they list. I don't have the cables available to try a pure DVI-based setup; maybe that's it? I've got 2 pure DVI cables, 1 HDMI, and 1 DVI-to-HDMI. Ugh.

metasynthetic fucked around with this message at 06:22 on Jun 19, 2013 |
# ? Jun 19, 2013 05:08 |
|
|
Agreed posted:nVidia is gonna get so much richer because of this. They're licensing... seriously? It's brilliant and a little bit crazy and I love it, haha, talk about a bold move.

Apple's an interesting possibility and would net a fair amount of money (it would probably kill Imagination in the process). Apple builds their own CPUs, has no problem building relatively enormous mobile chips, and doesn't care about margins nearly as much as other vendors, so it could take a next-gen Tegra part relatively early and put a low-clocked, enormous die in a tablet. The A5X in the iPad 3 was 2x the size of Tegra 3 because Apple can make the SoC cost back on the tablet (there's no SoC vendor that has to make money and then a final OEM that has to make money). However, this isn't a ton of money, as Imagination's total revenue was ~$200M last year, and that's with owning every iPhone and iPad sold (along with a bunch of other processors).

Samsung is more difficult to understand because it doesn't build its own CPU cores--it takes stock ARM cores. If Samsung continues to do that, why wouldn't they just use Tegra? Licensing a GPU core wouldn't net them anything meaningful there versus buying Tegra, that I can see.
|
# ? Jun 19, 2013 06:08 |