  • Locked thread
Rastor
Jun 2, 2001

blowfish posted:

"up to 8 cores" - if they go in a consumer craptop, will most software be able to make use of that computing power, or will we see processors good for very specific circumstances but bad for general purpose laptops?

A 95W TDP part is for desktops and servers, not laptops.

Zen is still 18 months away assuming it launches on time; we don't know what laptop-wattage variants, if any, will be made available, or how they will perform.


SwissArmyDruid
Feb 14, 2014

by sebmojo

blowfish posted:

"up to 8 cores" - if they go in a consumer craptop, will most software be able to make use of that computing power, or will we see processors good for very specific circumstances but bad for general purpose laptops?

I think that the specs listed are server parts downgraded to desktop, a la Xeon down to i7. It does not make sense for AMD to stop with the APUs for the "craptops". Nor does it make sense to bring hot and power-hungry server parts down to mobile.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

SwissArmyDruid posted:

I think that the specs listed are server parts downgraded to desktop, a la Xeon down to i7. It does not make sense for AMD to stop with the APUs for the "craptops". Nor does it make sense to bring hot and power-hungry server parts down to mobile.

Ah ok. I had confused it with the mobile-only thing that got announced. 8 cores for server CPUs seems reasonable.

Rastor
Jun 2, 2001

Ah yeah, Zen is very different from Carrizo. Then there's also Godavari.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

Ah yeah, Zen is very different from Carrizo. Then there's also Godavari.

Actually, Godavari appears to be just a refresh of Kaveri. The Kaveri successor appears to be something called "Bristol Ridge"? Notably: DDR4 memory controller onboard.

The usual WCCFT caveats apply: http://wccftech.com/amd-bristol-ridge-apu-2016/

Wistful of Dollars
Aug 25, 2009

Factory Factory posted:

We've got our first Zen uarch rumors.

Assuming this is true: 14nm FinFET (that's probably Samsung's process), FM3 socket, DDR4 controller on-chip. PCIe controller at rev 3.0, and moving on-chip, so the northbridge is done on AMD, too. Up to 8 cores in 95W. No word on graphics. Uarch details are light except for a rumor that it's moving from Module-based, Bulldozer-ish clustered multithreading to symmetric multithreading on unitary cores, like Intel does with Hyperthreading.

I need someone to throw me a bone on something I've been wondering; what's the point of these fancy onboard graphics they're putting in desktop processors? I understand why you'd want a combined unit for either mobile or basic office/consumer systems where people don't need much graphics power, but I don't understand the point when 99.9% of said chips will be paired with a separate gpu.

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


Because they can?

More seriously, because at this point it probably isn't worth it to gate it off, since if that much of the die is bad no amount of testing will validate the CPU components to QA's satisfaction.

Also, it can do things other than compose and blast out frame buffers, and it has diagnostic value (you don't need a video card to boot the system).

And a lot of people need that much CPU while just about any video output at all will do. Make no mistake, the personal desktop builder-user doing GPU stuff (not necessarily games, but consider where you are) is an edge case.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

El Scotch posted:

, but I don't understand the point when 99.9% of said chips will be paired with a separate gpu.
...because they won't be?

SYSV Fanfic
Sep 9, 2003

by Pragmatica
Business reasons is why. AMD can charge a little more for an APU than for just a CPU, and they grab a potential sale from Nvidia. The APUs are the only reason to buy AMD over Intel in certain niche segments. In the future, AMD is hoping to leverage HSA to close the performance gap between midrange Intel and AMD APUs. Don't underestimate the marketing potential of "Your spreadsheets will calculate up to 60% faster" to a corporate accountant.

Because AMD is doing it, Intel is doing it too.

Wistful of Dollars
Aug 25, 2009

Happy_Misanthrope posted:

...because they won't be?

...I don't think you actually understood what I wrote.

Your snark-fu needs work.

Leshy
Jun 21, 2004

El Scotch posted:

I understand why you'd want a combined unit for either mobile or basic office/consumer systems where people don't need much graphics power, but I don't understand the point when 99.9% of said chips will be paired with a separate gpu.
Are you suggesting that "mobile or basic office/consumer systems" only make up 0.1% of the market? Because I'd sooner say that in 90%+ of cases (pun not intended), a separate graphics card is no longer a requirement.

Rosoboronexport
Jun 14, 2006

Get in the bath, baby!
Ramrod XTreme

El Scotch posted:

I need someone to throw me a bone on something I've been wondering; what's the point of these fancy onboard graphics they're putting in desktop processors? I understand why you'd want a combined unit for either mobile or basic office/consumer systems where people don't need much graphics power, but I don't understand the point when 99.9% of said chips will be paired with a separate gpu.


This is a five-year-old slide but the point stands: the amount of money made on the enthusiast side is pretty low (and is probably even lower now), and the mainstream and value segments can be served with an integrated GPU. OEMs like 'em because it's one less component to assemble and building costs are lower.

Rastor
Jun 2, 2001

El Scotch posted:

I need someone to throw me a bone on something I've been wondering; what's the point of these fancy onboard graphics they're putting in desktop processors? I understand why you'd want a combined unit for either mobile or basic office/consumer systems where people don't need much graphics power, but I don't understand the point when 99.9% of said chips will be paired with a separate gpu.

A) You are way, way, way overestimating what percentage of computers have a dedicated GPU.

B) AMD is making a long-term bet that HSA will be actually used to make your spreadsheets faster and your games fancier.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

B) AMD is making a long-term bet that HSA will be actually used to make your spreadsheets faster and your games fancier.

I'm still hoping that HBM means that processor caches become huge (relative to their current sizes). Like, "gently caress going to system memory, that poo poo's weak. We're just gonna do most of our work here on-package." It seems like a good way to boost performance. I've now actually got a bet going with a co-worker that this was the entire endgame of HSA all along.

I mean, if they scale putting HBM onto parts all the way down to their sub-10W range, can you imagine what that would do for performance on the tablet/ultrathin end? Slide #12 of this Hynix presentation at Memcon last year ( http://www.memcon.com/pdfs/proceedings2014/NET104.pdf ) says that it should consume half as much power as DDR4, and less than a third the consumption of DDR3. With successive generations of HBM, I don't doubt for a moment that we could see 4 GB as a single stack on the APU package, easy. (Same presentation, Hynix is claiming HBM2 is 1 GB per layer. Not my typo. Gigabyte. In 4- or 8-layer flavors.)

I couldn't even begin to speculate on the energy savings, but they seem like they could be significant: increasing bandwidth and feeding those hungry-rear end-and-heretofore-bandwidth-choked GCN cores while reducing power requirements.

Now (and I realize this is a pipedream yet) the only thing that remains to be done is a processor that can look at a given task and figure out what would be done best with the CPU and what would be done best with the GPU, then assign appropriately without the need for OpenCL.

SwissArmyDruid fucked around with this message at 05:50 on Feb 7, 2015

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SwissArmyDruid posted:

Now (and I realize this is a pipedream yet) the only thing that remains to be done is a processor that can look at a given task and figure out what would be done best with the CPU and what would be done best with the GPU, then assign appropriately without the need for OpenCL.

So, OpenCL is just an API specification for dispatching work. In the sense of the processor automatically identifying data-parallel sections, that already happens in modern processors and compilers; the low-hanging fruit has been more or less picked there. You might be able to get some sorta-OK auto-parallelization using a compiler with additional annotations. Running CPU algorithms directly on a GPU usually isn't an efficient way to make use of data-parallel processors, though; for real gains you are probably going to need a programmer to rewrite whole methods if not whole sections of the program. It's not something that can really be done automagically.

OpenCL actually doesn't even provide a compiler or runtime - that's up to the hardware manufacturer, which is why uptake has been so slow. It doesn't currently have any sort of auto-benchmarking system to determine whether deploying heterogeneous compute resources would be advantageous, even if you have the binary right there. Assuming you have equivalent methods for GPU, you could probably make an auto-tuning suite to decide whether to turn them on. There could potentially be some issues with linking different code segments together, and best case you'd have some seriously bloated binaries, since they'd have code compiled for AMD processors, AMD GPUs, Intel processors, Intel GPUs, NVIDIA GPUs, and so on. I don't even know how you would handle more than one runtime "owning" a process, e.g. what if a Xeon Phi running Intel's OpenCL dispatches to NVIDIA's GPU?

I will say that it is an interesting pipe dream I've thought about too. Getting programmers to write OpenCL seems to be the real bottleneck, but it's a chicken and egg situation since it's hard to deploy and hit more than a handful of special-interest users.

At minimum, to make the auto-tuning idea work, I think you'd need library-style programs that could be linked at install time, plus a common runtime. So you download .obj or .dll files for x64, APU, etc., try to find an optimal mix for a simulated task load, then link it and install.
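Roughly what I have in mind, as a toy sketch (all the names here are made up for illustration, and the "GPU" path is just a stand-in callable, not a real OpenCL dispatch): benchmark the equivalent implementations once against a simulated task load, then keep whichever backend wins for the real runs.

code:

import time

def cpu_saxpy(a, x, y):
    # Plain "CPU" implementation of y = a*x + y.
    return [a * xi + yi for xi, yi in zip(x, y)]

def gpu_saxpy(a, x, y):
    # Stand-in for an accelerator path: same result, different cost profile.
    # Pretend there's a fixed dispatch/copy overhead before the fast part runs.
    time.sleep(0.001)
    return [a * xi + yi for xi, yi in zip(x, y)]

def autotune(candidates, sample_args, repeats=5):
    # Time each candidate on a representative workload and return the fastest.
    best_name, best_fn, best_time = None, None, float("inf")
    for name, fn in candidates.items():
        start = time.perf_counter()
        for _ in range(repeats):
            fn(*sample_args)
        elapsed = (time.perf_counter() - start) / repeats
        if elapsed < best_time:
            best_name, best_fn, best_time = name, fn, elapsed
    return best_name, best_fn

if __name__ == "__main__":
    n = 100000
    sample = (2.0, list(range(n)), list(range(n)))
    name, saxpy = autotune({"cpu": cpu_saxpy, "gpu-stub": gpu_saxpy}, sample)
    print("selected backend:", name)
    print(saxpy(*sample)[:3])  # later calls just use the tuned choice

The real thing would obviously have to do this per machine at install time and cache the result, but that's the general shape of it.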

Paul MaudDib fucked around with this message at 02:08 on Feb 7, 2015

SYSV Fanfic
Sep 9, 2003

by Pragmatica
Talking about huge caches takes me back to the K6-III.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

El Scotch posted:

...I don't think you actually understood what I wrote.

Your snark-fu needs work.

This is what you wrote:

quote:

what's the point of these fancy onboard graphics they're putting in desktop processors? I understand why you'd want a combined unit for either mobile or basic office/consumer systems where people don't need much graphics power, but I don't understand the point when 99.9% of said chips will be paired with a separate gpu.
I still can't fathom that to mean anything other than you feel putting decent GPU's in desktop processors is basically wasting die space because the vast majority will be paired with an external GPU. That's simply not the case at all. It's not 'snark', it's the reality of the market.

The thing is, those 'basic' office & consumer desktop systems are the vast majority of sales, and onboard GPU power is more than sufficient for their tasks, especially with the relative power they offer today. Sure, they still suck in goon terms for modern gaming, but we're a minority - and there's only a small minority of use cases that can even take advantage of more than four cores to begin with. There's more cost involved with a discrete graphics card than just the extra hardware: IT departments and OEMs love having one less component that can fail, and it also allows for smaller, cooler, and more energy-efficient systems. For the things most people use desktop PCs for, decent-performing CPUs with decent built-in GPUs are a better fit than 8+ core CPUs that require a separate GPU. Intel and AMD are just serving the market.

If you meant something else entirely then fine, but I really don't know how else to take your statement.

Edit: Well add this to the "WTF are you talking about?" pile-on I guess.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Hold a tic, what if that's the real advantage of being early to HBM then, not usage on GPUs but on APUs? Intel won't have a processor ready for HBM in 2015-16, maybe not even 2017 (18 is stretching it but possible), but if Zen drops with something like 64MB cache or greater for the CPU and ~128-256MB cache for the APU (maybe I'm not thinking big enough here), that's a massive advantage, correct? Weren't a lot of the shortcomings of the C2D/C2Q series obviated through copious cache? If so, and Intel doesn't have an answer in ~1 year, that's potentially a lot of market capture for AMD.

This still requires Zen to not be a complete flop, or at least competitive with the Core series, enough so that just dropping HBM on a Skylake processor obviates Zen cores performance lead [?]. I'm an idiot, shatter my dreams.

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

Hold a tic, what if that's the real advantage of being early to HBM then, not usage on GPUs but on APUs? Intel won't have a processor ready for HBM in 2015-16, maybe not even 2017 (18 is stretching it but possible), but if Zen drops with something like 64MB cache or greater for the CPU and ~128-256MB cache for the APU (maybe I'm not thinking big enough here), that's a massive advantage, correct? Weren't a lot of the shortcomings of the C2D/C2Q series obviated through copious cache? If so, and Intel doesn't have an answer in ~1 year, that's potentially a lot of market capture for AMD.

This still requires Zen to not be a complete flop, or at least competitive with the Core series, enough so that just dropping HBM on a Skylake processor obviates Zen cores performance lead [?]. I'm an idiot, shatter my dreams.

To address your questions in order:

* As I've been saying, AMD was probably working on this as the solution to those bandwidth-choked GCN cores all along, which makes me think that HSA was targeted for HBM all along. I also suspect that whole virtual core thing from a few months back was also targeted at APUs, but didn't pan out quite as well as expected.
* Intel might be along faster than that. They are already embedding, at great cost, 128 MB of eDRAM onto Haswell processors with Iris Pro graphics. They probably had the same idea as well, because that 128 MB is split between serving as L4 cache and as graphics memory.
* I think you're thinking too small. HBM1 (which was in risk production at the end of last year, and mass production now) comes in 2-gigabit layers. The default shipping configuration seems to be 4 layers high, so 2 Gb x 4 layers = 1 gigabyte of HBM, assuming they don't offer smaller increments in thinner stacks for whatever reason. HBM2, which is slated to go live in mid-2016, ramps that up to 8 Gb per layer, in 2-, 4-, and 8-layer configurations (quick math sketch below). And as I said in some of my previous posts (I am really starting to forget how many times I've typed this), one of the reasons the graphics in current-gen APUs aren't better, allowing Intel to catch up, is that DDR3 is just too freaking slow and doesn't provide enough bandwidth for it to make sense for AMD to put more than eight GCN CUs onto even the top-of-the-line APU. Consider the absolute bare-minimum AMD card for gaming, the R7 260: by comparison, that card has 12 GCN CUs fed off a 128-bit memory bus.
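For the sake of back-of-the-envelope math, here's a quick check of those stack capacities, using the per-layer densities quoted above (these are the figures from the Hynix presentation as I've described them, not official specs):

code:

# Per-layer density in gigabits, per the figures quoted above.
GBIT_PER_LAYER = {"HBM1": 2, "HBM2": 8}

def stack_capacity_gb(generation, layers):
    # One stack's capacity in gigabytes: (Gbit per layer * layers) / 8 bits per byte.
    return GBIT_PER_LAYER[generation] * layers / 8

print("HBM1 x4:", stack_capacity_gb("HBM1", 4), "GB")  # 1.0 GB, the default 4-high stack
for layers in (2, 4, 8):
    print("HBM2 x%d:" % layers, stack_capacity_gb("HBM2", layers), "GB")
# HBM2 x4 works out to 4 GB, which is the "4 GB as a single stack" figure from my earlier post.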

I don't really have much in the way of dream-shattering. I only recently found out about this patent filed by AMD back in 2012: http://www.google.com/patents/US20130346695

Of note is this block diagram:

Look familiar? It should. Remember what I said about a jumbo L3 cache? There it is.

It would be advantageous to AMD if this patent prevents Intel from putting HBM on Broadwell and Skylake parts.

SwissArmyDruid fucked around with this message at 12:59 on Feb 8, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

SwissArmyDruid posted:

/words

It would be advantageous to AMD if this patent prevents Intel from putting HBM on Broadwell and Skylake parts.

How would 4, 8 or even 16GB work on die with the processor/GCN? Wouldn't this take up enormous space anyway? Would this obviate the need for any kind of third party DRAM? Wouldn't this make the processors ridiculously expensive?

So in theory, AMD could obsolete everything below the shallow end of high range cards? Is this cutting into their own profits? Won't such configurations run pretty drat hot? Is this possibly why ARM cores are being looked into?

Bolded, wouldn't this effectively kill Intel? It'd mean Intel would require a whole new architecture to remain competitive, if it's even possible. We'd be comparing processors where one needs a bunch of ancillary components to run, while the other only needs them for extremely intensive tasks.

Also, that patent date is rather suspiciously well timed with the theoretical stage of producing a processor, correct? I think it was Factory who mentioned that processors have something like a 6-year development cycle before market consumption, so if we assume 2011 is where all the theory was done, then: ?

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

How would 4, 8 or even 16GB work on die with the processor/GCN? Wouldn't this take up enormous space anyway? Would this obviate the need for any kind of third party DRAM? Wouldn't this make the processors ridiculously expensive?

So in theory, AMD could obsolete everything below the shallow end of high range cards? Is this cutting into their own profits? Won't such configurations run pretty drat hot? Is this possibly why ARM cores are being looked into?

Bolded, wouldn't this effectively kill Intel? It'd mean Intel would require a whole new architecture to remain competitive, if it's even possible. We'd be comparing processors where one needs a bunch of ancillary components to run, while the other only needs them for extremely intensive tasks.

Also, that patent date is rather suspiciously well timed with the theoretical stage of producing a processor, correct? I think it was Factory who mentioned that processors have something like a 6-year development cycle before market consumption, so if we assume 2011 is where all the theory was done, then: ?

* No. That's 4, 8, or 16 GB, PER DIE. You really should look at the presentation I linked. It shows a size comparison of a single 4-layer HBM1 die that occupies a space smaller than an aspirin pill. As for it making the chip overly large, well, Intel is already using very large packages for Broadwell-U anyways:


Back to the presentation: the configuration that is shown as an example looks an awful lot like an old Thunderbird XP. You know, with the bare CPU die in the center, and the four foam stabilizer pads, one at each corner? Given that AMD is partnering with Hynix on this, this could very well be what new AMD die shots look like. And yes, it could obviate the need for any system memory. How much it adds to the cost of a processor, nobody knows yet. But it can't be that much if, eventually with HBM2, you can just put one additional die onto a package and be done.

* In theory, yes. With improved GPU performance from HBM, and improved CPU performance from whatever architecture of Zen makes its way down into APUs, you could give OEMs a very convincing reason to go all-AMD. And any AMD graphics solution that's sold is one less sale for NVidia.

* No, I do not think it will kill Intel. There is no way in hell AMD will have sufficient production capability from GloFo or their partners to suddenly take over the kind of production that Intel holds. Their marketshare gains will likely be limited by their production. There's a possibility that Samsung could step in, since it *is* their FinFET process, after all. But how much capacity they could contribute without hurting production of their own products remains to be seen.

* It is. I suspect that we are seeing the fingerprints of Dirk Meyer's work from just before he left for the second time. AMD has been playing the long game this entire time, assembling a very good hand of cards. It remains to be seen if they can do anything with them. Execution, I think, will be what makes or breaks AMD in the next few years, not tech.

SwissArmyDruid fucked around with this message at 09:37 on Feb 8, 2015

Wistful of Dollars
Aug 25, 2009

I apologize for being short yesterday.

sauer kraut
Oct 2, 2004
Jesus Christ http://wccftech.com/fake-amd-processors-spotted-making-rounds-a87600-counterfeits/

quote:

the guy requested an A8-7600 APU and instead received an Athlon 64 X2 5200 (Brisbane) processor
Tragically, I had to pause for a second to consider if that guy hadn't received a free upgrade.

chocolateTHUNDER
Jul 19, 2008

GIVE ME ALL YOUR FREE AGENTS

ALL OF THEM

sauer kraut posted:

Jesus Christ http://wccftech.com/fake-amd-processors-spotted-making-rounds-a87600-counterfeits/

Tragically, I had to pause for a second to consider if that guy hadn't received a free upgrade.

I wonder if they bought the processor from Amazon proper, or one of the 3rd party sellers.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
3rd party, but apparently fulfilled by Amazon.

SwissArmyDruid
Feb 14, 2014

by sebmojo
So, they've got the address the bum processors were shipped from, a bank account into which the money was transferred, and a name.

Not really seeing the problem here, this ought to be wrapped up pretty quick.

Setzer Gabbiani
Oct 13, 2004

I'm actually impressed that there are people in the world who can profitably delid AMD CPUs without destroying them; most people usually learn the hard way that where Intel uses a bottle of Elmer's, AMD uses solder

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva
Unless something changed recently, that's never been true. Back in the C2D era, AMD chips were regularly delidded since they used TIM inside, whereas Intel chips were soldered, outside of the Allendale series and earlier P4s.

SwissArmyDruid
Feb 14, 2014

by sebmojo
What's the protocol these days? Is it razor blades and boiling water, or is it the vise-and-wooden-block shearing method?

LethalGeek
Nov 4, 2009

I finally upgraded a barely-good-enough-at-the-time AMD E-350 media PC to an Athlon 5350 and I'm very happy with it. Went with it over the Intel equivalent since 99% of the machine's job is video and maybe games. Nice to finally have 1080p on the TV; the old sucker barely handled 720p

chocolateTHUNDER
Jul 19, 2008

GIVE ME ALL YOUR FREE AGENTS

ALL OF THEM

LethalGeek posted:

I finally upgraded a barely-good-enough-at-the-time AMD E-350 media PC to an Athlon 5350 and I'm very happy with it. Went with it over the Intel equivalent since 99% of the machine's job is video and maybe games. Nice to finally have 1080p on the TV; the old sucker barely handled 720p

My HP DM1Z netbook had an E-350 inside of it. At the time it launched (I think I got it sometime in February 2011) it was adequate enough, and the battery life at the time was almost mind-blowing for something that wasn't a Mac (almost 6 hours of browsing!), but man, did that thing just start chugging when websites started getting "heavier", so to speak. I finally gave up on it last spring. I loved that loving machine. If they could make the same exact one with an i3 inside of it, I would jump on it in an instant.

BOOTY-ADE
Aug 30, 2006

BIG KOOL TELLIN' Y'ALL TO KEEP IT TIGHT

SwissArmyDruid posted:

What's the protocol these days? Is it razor blades and boiling water, or is it the vise-and-wooden-block shearing method?

Both seem to work equally well. I didn't have the vise/block, so I used a razor and was super careful not to cut too far under the IHS to avoid hitting the core. Worked fine for me and took a few minutes (mostly cleaning the old glue off before putting the IHS back on with new compound). People using the block/vise method have had decent success heating up the IHS with a blow dryer to loosen the glue beforehand.

BOOTY-ADE fucked around with this message at 17:29 on Feb 11, 2015

LethalGeek
Nov 4, 2009

chocolateTHUNDER posted:

My HP DM1Z netbook had an E-350 inside of it. At the time it launched (I think I got it sometime in February 2011) it was adequate enough, and the battery life at the time was almost mind-blowing for something that wasn't a Mac (almost 6 hours of browsing!), but man, did that thing just start chugging when websites started getting "heavier", so to speak. I finally gave up on it last spring. I loved that loving machine. If they could make the same exact one with an i3 inside of it, I would jump on it in an instant.

I was really slow to give mine up, but it just couldn't handle playing back Blu-ray discs or HD streaming that wasn't YouTube. A newer TV and the march of time finally made me give that little guy up.

WhyteRyce
Dec 30, 2001

I had an E-350 in my HTPC. I was using WMC, which uses Silverlight, and HD video absolutely murdered it; I had to limit my bitrate to something small just to get poo poo to play. And it's not like Netflix lets you set it per browser/device, so I had to deal with low-res poo poo everywhere after

Killer robot
Sep 6, 2010

I was having the most wonderful dream. I think you were in it!
Pillbug
I had a DM1Z too, and that about mirrors my experience. It was really neat at the start but gradually felt slower and slower. Though I might still use it if it could at least display 1080p.

LRADIKAL
Jun 10, 2001

Fun Shoe

LethalGeek posted:

I finally upgraded a barely was good enough at the time AMD E-350 media PC to a Athlon 5350 and I'm very happy with it. Went with it over the Intel equal since 99% of the machine's job is video and maybe games. Nice to finally have 1080P on the TV, old sucker barely handled 720P

Intel chips are better at both video and games, core for core (and, most of the time, even if the AMD chip has more cores).

What are you using for video? Do you have an old graphics card, or do you use the on-chip solution? If so, how's that with AMD nowadays?

SamDabbers
May 26, 2003



I have a couple of Foxconn nT-A3500s with E-350 CPUs, and they're excellent with OpenELEC/Kodi/XBMC, which takes advantage of the hardware video decoding. No Netflix support, obviously, but they still decode everything I throw at them perfectly, including high-bitrate 1080p H.264 content.

LethalGeek
Nov 4, 2009

Jago posted:

Intel chips are better at both video and games, core for core (and, most of the time, even if the AMD chip has more cores).

What are you using for video? Do you have an old graphics card, or do you use the on-chip solution? If so, how's that with AMD nowadays?

No room in the box for anything else; the one PCIe slot is taken up by the Ceton TV card. Also, nothing but bad times with Intel video anything. Like, my work machine randomly blanks out and it's a known issue Intel can't figure out. Screw that

LethalGeek fucked around with this message at 04:15 on Feb 13, 2015

dissss
Nov 10, 2007

I'm a terrible forums poster with terrible opinions.

Here's a cat fucking a squid.

LethalGeek posted:

Also, nothing but bad times with Intel video anything. Like, my work machine randomly blanks out and it's a known issue Intel can't figure out. Screw that

Ha what?


LethalGeek
Nov 4, 2009

https://communities.intel.com/mobile/mobile-access.jspa#jive-content?content=%2Fapi%2Fcore%2Fv3%2Fcontents%2F157133

It's comical as hell. The video completely disconnects, then the monitor comes back a moment later and gives me the input overlay as if I just connected something. Sometimes several times back to back. Between that and otherwise just never having a good time with their video drivers, nah, I'll pass on their stuff video-wise.
