movax
Aug 30, 2008

BangersInMyKnickers posted:

Those run ~250W PSUs right? 1.5 vs 1.35v on the dimms isn't going to be an appreciable difference on a desktop system, maybe a watt or two. It's really more for battery savings where you're trying to shave every last watt. The GPU and CPU are going to be your big draws, if you're seeing voltage dips with the config you're so far past the capacity of your PSU that ddr3l isn't going to help anything.

I worked on a design once with selectable DDR voltage at runtime via a GPIO pin; it would literally toggle a pin on the DDR voltage regulator to drop to 1.35 V from 1.5 V, defaulting to booting up to 1.5 V each time. It was a deeply embedded system, so all the power savings in the world were of interest. The DDR dies themselves basically get binned if they can run down at 1.35 V as well as 1.5 V.

What was not of interest were the seeming Heisenbugs that would show up due to software bugs, and forgetfulness in setting it to the proper voltage. The pin was driven by FPGA logic as well, so you'd have to boot up at 1.5 V, then drop to 1.35 V once the logic loaded, and then hope that the training parameters from that particular boot worked out OK.

Yaoi Gagarin
Feb 20, 2014

movax posted:

Next Unit of Computing indeed...wonder what (if anything) they will rebrand it to.

The Unit of Computing Formerly Known as Next Unit of Computing

JawnV6
Jul 4, 2004

So hot ...
"huh how could something be more horrific than 'touches MRC at all' i wonder"

movax posted:

The pin was driven by FPGA logic as well
lol, thanks movax

Cygni
Nov 12, 2005

raring to post

foldable screen laptops are dumb and this presentation is boring and i want a beer and a cookie!!!!

Cygni
Nov 12, 2005

Tiger Lake running is a good sign, but Intel showed 10nm Cannonlake running years before we ever saw anything out of it in retail. Disappointed we didn't get the Comet Lake-S launch when the entire board and CPU lineup is already leaked to the public.

Disappointing.

Xaintrailles
Aug 14, 2015

:hellyeah::histdowns:

Cygni posted:

Tiger Lake running is a good sign, but Intel showed 10nm Cannonlake running years before we ever saw anything out of it in retail. Disappointed we didn't get the Comet Lake-S launch when the entire board and CPU lineup is already leaked to the public.

Disappointing.

Yeah. New chips or why are you even here...

BobHoward
Feb 13, 2012

movax posted:

What was not of interest were the seeming Heisenbugs that would show up due to software bugs, and forgetfulness in setting it to the proper voltage. The pin was driven by FPGA logic as well, so you'd have to boot up at 1.5 V, then drop to 1.35 V once the logic loaded, and then hope that the training parameters from that particular boot worked out OK.

I am confuse, why didn’t this system retrain at the final voltage setting, how could anyone think it’s safe to change one of the holy trinity (PVT) without a retrain???

Wait let me guess this fpga was controlling the voltage of dram connected to some other SoC with no easy/documented way of resetting the dram controller, or by then poo poo was booted and you couldn’t take dram away, or some equivalent hilarity

movax
Aug 30, 2008

BobHoward posted:

I am confuse, why didn’t this system retrain at the final voltage setting, how could anyone think it’s safe to change one of the holy trinity (PVT) without a retrain???

Wait let me guess this fpga was controlling the voltage of dram connected to some other SoC with no easy/documented way of resetting the dram controller, or by then poo poo was booted and you couldn’t take dram away, or some equivalent hilarity

It was a Zynq, so processor and FPGA in one package (but technically in different voltage domains). An FPGA pin connected to the PMIC that supplied the DDR voltage rail, and that pin would toggle the PMIC between 1.35 V and 1.5 V. What caused issues was mostly SW or FPGA engineers forgetting to document their interfaces correctly, thinking they had set the memory bit that toggled the GPIO when they hadn't, or loading the wrong bitstream onto the device. The first-stage bootloader also configures DRAM timings / frequencies, and if there was a mismatch there between the desired target voltage and the desired target speed, issues would also occur. A neat idea in concept, poorly executed.
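
The consistency problem described here (training parameters captured at one voltage, rail switched to another) can be sketched in a few lines. This is purely illustrative: the pin number, the VSEL encoding, and every name below are hypothetical, not from the actual design.

```python
# Hypothetical sketch of the runtime DDR voltage select described above.
# Pin number and VSEL polarity are made up for illustration.
DDR_VSEL_GPIO = 54  # imaginary EMIO pin wired to the PMIC's VSEL input

class DdrConfig:
    """DRAM training results are only valid at the voltage they were captured at."""
    def __init__(self, trained_at_mv, target_mv):
        self.trained_at_mv = trained_at_mv  # rail voltage during FSBL training, in mV
        self.target_mv = target_mv          # rail voltage SW wants to run at, in mV

def select_ddr_voltage(cfg, set_gpio):
    """Only switch rails if training matches the target; otherwise demand a retrain."""
    if cfg.trained_at_mv != cfg.target_mv:
        raise RuntimeError(
            f"DDR trained at {cfg.trained_at_mv} mV but target is "
            f"{cfg.target_mv} mV: retrain before switching the rail")
    # Hypothetical encoding: VSEL low = 1.5 V boot default, high = 1.35 V
    set_gpio(DDR_VSEL_GPIO, 1 if cfg.target_mv == 1350 else 0)
```

A check like this would have turned the Heisenbugs into loud, immediate failures instead of flaky memory.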

The MRC Jawn refers to is an unholy mess of tens of thousands of lines of x86 that get dropped by Intel with 0 explanation. Do never touch the Memory Reference Code.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I should not be surprised they haven't done it, but it would be cool if Intel put out a Zynq competitor, one of the smaller Alteras with an Atom smooshed in it or something.

movax
Aug 30, 2008

priznat posted:

I should not be surprised they haven't done it, but it would be cool if Intel put out a Zynq competitor, one of the smaller Alteras with an Atom smooshed in it or something.

There was Stellarton, way back before they bought Altera, but that wasn't a monolithic die, just Atom and Arria II (IIRC) dies squished together in the same package with some PCIe + GPIO routed between them.

Now there is Cyclone V SoC but it got beat to market by Xilinx and I'm not sure who actually uses it / chooses it over Zynq, considering the amount of time and effort Xilinx put into supporting their Linux solution (github.com/xilinx/linux-xlnx).

priznat
Jul 7, 2009

movax posted:

There was Stellarton, way back before they bought Altera, but that wasn't a monolithic die, just Atom and Arria II (IIRC) dies squished together in the same package with some PCIe + GPIO routed between them.

Now there is Cyclone V SoC but it got beat to market by Xilinx and I'm not sure who actually uses it / chooses it over Zynq, considering the amount of time and effort Xilinx put into supporting their Linux solution (github.com/xilinx/linux-xlnx).

The Cyclone V was before the acquisition right? Gotta be, what with an ARM on there.

I've put Zynqs on emulation platforms and functional validation boards before and they are awesome, love net booting them and running full linux off remote mounted file systems! Or SD Card if you are on the go. Wish we had a use case for them on the current platforms :sigh:

movax
Aug 30, 2008

priznat posted:

The Cyclone V was before the acquisition right? Gotta be, what with an ARM on there.

I've put Zynqs on emulation platforms and functional validation boards before and they are awesome, love net booting them and running full linux off remote mounted file systems! Or SD Card if you are on the go. Wish we had a use case for them on the current platforms :sigh:

Yeah, definitely before acquisition. Are you using a SoM for dropping Zynqs around, or do you have a core layout for a 7010 or 7020 w/ DDR and PMICs that you can drop around easily? Net booting w/ U-Boot off a tiny little QSPI and then doing everything over NFS is awesome — I love it.
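
That boot flow usually boils down to a few U-Boot environment variables. A generic sketch, with server IP, image names, and load addresses as placeholders rather than values from any real board:

```
# Hypothetical U-Boot environment for TFTP kernel + NFS rootfs (Zynq-style).
# All addresses, paths, and IPs below are placeholders.
setenv serverip 192.168.1.10
setenv bootargs 'console=ttyPS0,115200 root=/dev/nfs rw nfsroot=${serverip}:/export/rootfs,tcp ip=dhcp'
setenv bootcmd 'tftpboot 0x2000000 uImage; tftpboot 0x2a00000 devicetree.dtb; bootm 0x2000000 - 0x2a00000'
saveenv
```

The QSPI only has to hold U-Boot and this environment; everything else comes over the wire.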

Back on the Intel side of things though... reading a bit of the AMD thread right before this, what's the play for Intel Xe? Most office applications using Intel CPUs w/ integrated graphics don't have a lot of pixels to push, and they already have the fixed-function video and audio hardware acceleration.

priznat
Jul 7, 2009

movax posted:

Yeah, definitely before acquisition. Are you using a SoM for dropping Zynqs around, or do you have a core layout for a 7010 or 7020 w/ DDR and PMICs that you can drop around easily? Net booting w/ U-Boot off a tiny little QSPI and then doing everything over NFS is awesome — I love it.

Back on the Intel side of things though... reading a bit of the AMD thread right before this, what's the play for Intel Xe? Most office applications using Intel CPUs w/ integrated graphics don't have a lot of pixels to push, and they already have the fixed-function video and audio hardware acceleration.

Way back when, it was a design heavily based on the ZedBoard reference design, even down to the display, which was actually pretty useful for putting up things like host name, IP address, load average, firmware version, etc. This was at a previous company and the boards were very low-run, sky's-the-limit cost-wise, so we could go hog wild; it was great. Before that we used Atmel AVR32s, which were decent little devices, but the ARMs on the Zynqs blew them away. Plus being able to directly interface with the logic that talked to the onboard DUT GPIOs was really nice: just configure the GPIOs in Linux and they showed up as regular devices, very slick.
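
The "GPIOs as regular devices" flow is the kernel's GPIO userspace interface; here's a rough sketch using the legacy sysfs API (the pin number is illustrative, and Zynq EMIO pins land at a controller-specific base offset; modern kernels prefer the gpiod character-device API):

```python
# Sketch of the legacy sysfs GPIO flow described above (modern kernels
# prefer the character-device API, e.g. libgpiod). Pin number is hypothetical.
GPIO_PIN = 960

def _write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

def gpio_set(pin, value, base="/sys/class/gpio"):
    _write(f"{base}/export", pin)                 # expose the pin to userspace
    _write(f"{base}/gpio{pin}/direction", "out")  # configure it as an output
    _write(f"{base}/gpio{pin}/value", value)      # drive high (1) or low (0)
```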

Also for the Xe perhaps it is a way to finally kill off Matrox :haw:

Worf
Sep 12, 2017

If only Seth would love me like I love him!

Hi I am excited for what tiger lake will do for Ultrabooks and probably tablets

Where can I go to learn more about whatever info there is on these chips

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
My general impression of Zen2's various boost mechanisms is that they maximize performance enough that there's often very little room for squeezing out more via a manual overclock

Does a similar situation exist with Intel's chips, either on their current generation or what we can expect from Comet Lake?

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

No, and I don't think anybody outside of Intel knows, respectively.

eames
May 9, 2009

gradenko_2000 posted:

My general impression of Zen2's various boost mechanisms is that they maximize performance enough that there's often very little room for squeezing out more via a manual overclock

Does a similar situation exist with Intel's chips, either on their current generation or what we can expect from Comet Lake?

AMD's newer chips were designed from the ground up with many (apparently thousands of) sensors that monitor parameters related to silicon reliability. [1]
That's how Zen can boost so high, close to the stability limits, without failing or becoming unreliable.

Intel's current architectures don't seem to have similar technology (at least not the ones directly based on Skylake, which includes Comet Lake), but even they are binning closer to the limit with something called Thermal Velocity Boost (TVB), which looks at power, time, load and temperature to extend the boost range.
It's not nearly as sophisticated as AMD's solution, so it has to keep safety margins, and those margins are what's left over for overclocking.

Over time overclocking margins are shrinking or going away entirely unless the manufacturers purposefully bin down their CPUs again. The Ryzen 1600AF is one of those cases.

[1] https://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700/11

Worf
Sep 12, 2017

Is that related to the max turbo 3.0 stuff I see on next gen stuff ?

Arzachel
May 12, 2012

Statutory Ape posted:

Hi I am excited for what tiger lake will do for Ultrabooks and probably tablets

Where can I go to learn more about whatever info there is on these chips

Uhhh, Computex probably? Ice Lake only launched a couple of months ago, so what little they teased at CES is all we're getting for a while.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
edit: probably not nevermind

Paul MaudDib fucked around with this message at 20:10 on Jan 10, 2020

Not Wolverine
Jul 1, 2007
Intel is going to totally dominate dedicated GPUs in 2020.

Worf
Sep 12, 2017


Maybe they'll sell it for $29.99

LRADIKAL
Jun 10, 2001

Fun Shoe
Isn't that card basically a stand in for where they want their integrated graphics to be for this generation? It doesn't have any PCI-E power plugs, so it can't be pulling more than 75 watts. This doesn't seem like a complete failure. If they can integrate it with reasonable power draw, it'll be good for the low end gamer.

Worf
Sep 12, 2017

LRADIKAL posted:

Isn't that card basically a stand in for where they want their integrated graphics to be for this generation? It doesn't have any PCI-E power plugs, so it can't be pulling more than 75 watts. This doesn't seem like a complete failure. If they can integrate it with reasonable power draw, it'll be good for the low end gamer.

That is my assumption as well. Doing this as a separate unit makes sense to get telemetry etc probably. Develop drivers and watch how this poo poo interacts w poo poo.

These will be cheap or never seen in that form by consumers

I don't think they're trying to get into the traditional dgpu sphere. Probably just the low end and compute type stuff I would guess, as a rank amateur

LRADIKAL
Jun 10, 2001

This card is also for game engine developers and the like to test and write code against. They have to work OK if and when they start integrating them into laptops or whatever.

KKKLIP ART
Sep 3, 2004

I could also see it filling the same DOTA/LoL card niche, because I feel like there are a ton of lower-end cards that get shipped off overseas to fill exactly that niche.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, the segment filled by the NVidia 1650 and below is surprisingly large, and anything that can get even to 1050 levels would be able to capably play a lot of net cafe and other lower end games that, frankly, a whole fuckton of people worldwide dump a lot of play time into.

SwissArmyDruid
Feb 14, 2014

by sebmojo
I have been saying for years that not having iGPUs from either camp that can routinely do what a 750ti can is a goddamn travesty. Maybe we'll finally see that in the next two or three years.

Worf
Sep 12, 2017

SwissArmyDruid posted:

I have been saying for years that not having iGPUs from either camp that can routinely do what a 750ti can is a goddamn travesty. Maybe we'll finally see that in the next two or three years.

Is the consumer desktop segment the least important segment now? I assume it's the high-end stuff and mobile that are the big things people care about at this point, the stuff that drives innovation, right?

I assume the fact that I use a desktop the same way my dad did when he was 33 puts me squarely in the minority (not in this forum/subforum)

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

SwissArmyDruid posted:

I have been saying for years that not having iGPUs from either camp that can routinely do what a 750ti can is a goddamn travesty. Maybe we'll finally see that in the next two or three years.

750 Ti has its own memory bandwidth and 75W of power and cooling. Maybe when DDR5 exists AMD can chiplet out something. I'd expect any huge iGPU chips to be hot as hell too, 150W+.

GRINDCORE MEGGIDO
Feb 28, 1985


Can't imagine a 750 Ti-class GPU would use anywhere near that on a modern process

Arzachel
May 12, 2012

DrDork posted:

Yeah, the segment filled by the NVidia 1650 and below is surprisingly large, and anything that can get even to 1050 levels would be able to capably play a lot of net cafe and other lower end games that, frankly, a whole fuckton of people worldwide dump a lot of play time into.

Die size on the smaller chips tends to be pad-limited, so a new product to fill the niche between iGPUs and a 1650 just isn't financially viable when discounted older products and the used market exist.

SwissArmyDruid
Feb 14, 2014

It absolutely shouldn't. For comparison's sake, a 750ti was capable of 1.4 TFLOPs single precision.

Right now, at THIS VERY MOMENT, Ice Lake G7 clocks in somewhere between 1.0 and 1.1 TFLOPs, depending on cooling and configuration.

I don't know why you guys think this is some kind of unattainable goal that needs voodoo and HBM and chiplets and new memory. It's there! It's right loving there! It's so close!
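
The comparison above is just FLOPs-per-clock arithmetic. A quick sanity check (unit counts and clocks are approximate public figures, not measurements):

```python
# Theoretical FP32 throughput in TFLOPs = units x FLOPs/unit/clock x clock (GHz).
def tflops(units, flops_per_unit_per_clk, clock_ghz):
    return units * flops_per_unit_per_clk * clock_ghz / 1000.0

# GTX 750 Ti: 640 CUDA cores, 2 FLOPs/core/clk (FMA), ~1.085 GHz boost
print(tflops(640, 2, 1.085))  # ~1.39 TFLOPs

# Ice Lake G7: 64 EUs, 16 FP32 FLOPs/EU/clk (two SIMD4 FMA pipes), ~1.1 GHz
print(tflops(64, 16, 1.1))    # ~1.13 TFLOPs
```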

Arzachel
May 12, 2012

SwissArmyDruid posted:

It absolutely shouldn't. For comparison's sake, a 750ti was capable of 1.4 TFLOPs single precision.

Right now, at THIS VERY MOMENT, Ice Lake G7 clocks in somewhere between 1.0 and 1.1 TFLOPs, depending on cooling and configuration.

I don't know why you guys think this is some kind of unattainable goal that needs voodoo and HBM and chiplets. It's there! It's right loving there! It's so close!

Memory bandwidth.

SwissArmyDruid
Feb 14, 2014

And I quote Paul:

Paul MaudDib posted:

LPDDR4 is a hell of a drug.

If AMD can use it to realize what they're claiming is "59% improved performance" on their new APUs that are still using rewarmed Vega (not RDNA) cores, why in god's name shouldn't Intel get in on that poo poo?

Arzachel
May 12, 2012
Raven Ridge officially supported up to ddr4 2933 which is about 23GB/s, lpddr4x 4266 does about 34GB/s or 50% more, making it real obvious where the 59% performance increase comes from.

A 750ti does 86GB/s.
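
Those figures fall out of transfer rate times bus width. A one-liner to check them (assuming a 64-bit bus per DRAM channel, and the 750 Ti's stock 5.4 GT/s GDDR5 on a 128-bit bus):

```python
# Peak DRAM bandwidth in GB/s = MT/s x bus width in bytes x channels.
def bandwidth_gbps(mt_per_s, bus_bits, channels=1):
    return mt_per_s * (bus_bits // 8) * channels / 1000.0

print(bandwidth_gbps(2933, 64))      # DDR4-2933, one channel: ~23.5 GB/s
print(bandwidth_gbps(4266, 64))      # LPDDR4X-4266, one channel: ~34.1 GB/s
print(bandwidth_gbps(2933, 64, 2))   # DDR4-2933, dual channel: ~46.9 GB/s
print(bandwidth_gbps(5400, 128))     # GTX 750 Ti GDDR5: ~86.4 GB/s
```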

Worf
Sep 12, 2017

Y'all see this thing about the 28w chip

https://www.anandtech.com/show/15302/intel-28-w-ice-lake-core-i71068g7-coming-q1

I feel like a 28w chip with whatever their dgpu evolves into might be a decent type of all in one

Shaocaholica
Oct 29, 2002

Fig. 5E
Is the IHS soldered now?

SwissArmyDruid
Feb 14, 2014

Arzachel posted:

Raven Ridge officially supported up to ddr4 2933 which is about 23GB/s, lpddr4x 4266 does about 34GB/s or 50% more, making it real obvious where the 59% performance increase comes from.

A 750ti does 86GB/s.

And yet one is benched as being capable of about 10% fewer FLOPS than the other. Gee. It's almost like IPC and transistor count between a 28nm process and a 10nm process actually *means* something.

Arzachel
May 12, 2012

SwissArmyDruid posted:

And yet one is benched as being capable of about 10% fewer FLOPS than the other. Gee. It's almost like IPC and transistor count between a 28nm process and a 10nm process actually *means* something.

I messed up and the numbers should be doubled for dual channel, so it's not nearly as grim as I thought and Renoir should probably beat a 750ti as long as you're using real fancy memory :toot:

Realistically, AMD's apu designs will still be heavily memory bandwidth limited until ddr5.
