EoRaptor
Sep 13, 2003

by Fluffdaddy

orcane posted:

Also remember nForce? :haw:

It was pretty good until nvidia abandoned it mid product cycle.


EoRaptor
Sep 13, 2003

by Fluffdaddy

SwissArmyDruid posted:

It is a move that was needed last year, when people were still excited about external GPU over Thunderbolt. Now, who knows when that will be a standard industry thing.

This is a defensive play against AMD, nothing more. They are going to drive 'feature X' as the new open standard, pushing out any competing standard. It's just 'coincidence' that 'feature X' is something they developed internally and have a huge head start on vs their competitor.

It's all about positioning for OEMs.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Mr Shiny Pants posted:

So the next question: Have compilers gotten better or is this something that was never going to happen?

Compilers have gotten better (in fact, a lot better; LLVM was/is a major advance in compiler design). However, it turns out that what Itanium needed from a compiler was perfect knowledge of every operation an application could perform and every CPU state that would result, which even for very simple code appears to be an NP-hard problem. Compilers are also still written by humans, and aren't capable of the level of 'perfection' needed to even approach what Itanium demanded.

In the end, Itanium wasn't even a good CPU design. It didn't scale well in clock speed or performance, and it was a dead end for most CPU applications, which are dominated by end users.
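To make the "perfect knowledge" point concrete, here's a toy sketch (nothing like Itanium's real scheduler; the instruction names and dependency graph are invented) of how an EPIC/VLIW-style compiler must pack instructions into fixed-width bundles entirely at compile time. It only works if the compiler knows, statically, when every producer's result is ready — a load whose latency varies at runtime (say, a cache miss) blows up the whole schedule:

```python
def schedule(instrs, deps, width=3):
    """Greedily pack instructions into bundles of `width` slots,
    honoring dependencies. An instruction may issue only after all
    of its producers issued in an EARLIER bundle -- i.e. the
    compiler needs static knowledge of every result's readiness."""
    done, bundles = set(), []
    remaining = list(instrs)
    while remaining:
        bundle = []
        for i in remaining:
            if len(bundle) == width:
                break
            # issue only if every dependency already completed
            if all(d in done for d in deps.get(i, [])):
                bundle.append(i)
        if not bundle:
            raise RuntimeError("dependency cycle")
        for i in bundle:
            remaining.remove(i)
        done.update(bundle)
        bundles.append(bundle)
    return bundles

# 'c' depends on both 'a' and 'b'; 'd' is independent.
bundles = schedule(["a", "b", "c", "d"], {"c": ["a", "b"]})
# 'c' is forced into a second bundle, leaving two slots empty there.
```

The hard part isn't this packing step — it's that a real compiler would need the full dependency and latency picture across every path a program can take, which is where the NP-hard wall shows up.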

EoRaptor
Sep 13, 2003

by Fluffdaddy

fishmech posted:

NewFatMike posted:

Alternate architecture chat is making me think about x86 emulation running on ARM that got Intel all in a tizzy earlier this year.

Would be cool to have AMD and Qualcomm competing with Intel. But also rip in peace AMD for selling Adreno.

I mean, that's been a thing for a long time. Its problem is that the speed penalty of emulation compounds ARM's existing performance issues.

Transmeta spent a lot of time and money proving x86 emulation isn't worth it, and they weren't stupid or lazy people. Far better to push toward more universal APIs/foundations and work on compiler targeting of each platform/CPU.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Combat Pretzel posted:

So that 12nm spin is going to be OG Zen, not anything architecturally improved, other than erratas?

It depends on whether the gate assembly is different from the current 'node'. It's no longer just about shrinking the transistors, but about changing how they are assembled and connected. For instance, a change from 14nm LPP to 12nm LPU would involve a complete redesign of the chip layout.

It's also a question of if this new node/process is even going to be used by AMD, and which part of AMD (GPU vs CPU) would even choose to adopt it.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Cygni posted:

"12LP" is a rebrand of Samsung's 14LPP with GlobalFoundries-designed 7.5T libraries in places. Pretty much everything else is the same. It's what AMD was calling 14+ on earlier slides.

As that extremetech article put it: "Anyone expecting anything but the most modest of performance or power improvements is probably overselling the boost."

I hope they do a revision to the layout and logic at the same time, because there is some low-hanging fruit that should be easy to fix in Zen. That probably doesn't include the linked memory<->internal bus; that seems like a decision made too early in the design process to change quickly, and it will be the big thing for Zen 2.

The nice thing at least is that it should be easy to identify 14nm vs 12nm chips, as they will get different model numbers.

EoRaptor
Sep 13, 2003

by Fluffdaddy

A SWEATY FATBEARD posted:

Yeah that's what's confusing me as well - well since both the keyboard and the mouse are wireless, I suspect that the USB dongles are picking up neighbor's keyboard signal or somesuch. No harmful interference though. :)

edit: I also have a Bluetooth dongle, is the mainboard BIOS "smart" enough to try to connect to BT mice, even though the mouse might be in a different apartment altogether? :)

You probably have 'Legacy USB emulation' or something similar turned on in the BIOS. This takes any USB keyboards and mice and 'emulates' equivalent PS/2 versions of those devices, in case you are booting into something that doesn't have USB support (DOS?). If you hunt around you can probably turn it off, but I don't think it's actually harmful; it might use a few KB of memory.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Cygni posted:

DDR5 and PCIe 4.0 at the consumer level is expected in 2020. That’s also the timeframe that 7nm EUV should be showing up if there aren’t delays (there will be delays). If I had to guess, that seems the logical time for a whole platform overhaul.

EUV is still having problems; it seems that even when the setup is 'perfect', defects still occur. https://arstechnica.com/gadgets/2018/02/random-errors-throw-up-more-hurdles-in-the-move-to-extreme-uv-chip-fabricating/

EoRaptor
Sep 13, 2003

by Fluffdaddy

PerrineClostermann posted:

...Aren't most instructions broken down into common micro-ops anyway? Would you really save that much silicon by dropping old instruction sets?

Depends on how they map. SSE turned into a huge savings, because you could map FP instructions onto SSE instructions with next to no performance loss*, and SSE used a fraction of the die area a 'true' floating point unit did. I don't think there is much to be gained trying to map SSE onto AVX, as they basically share the same circuit logic anyway.

*Some FP instructions got much slower, but anything current that used them was well into being rebuilt to use SSE-type instructions instead, which had huge performance gains, and anything older was mostly made up for by the pure increase in clock speed.

EoRaptor
Sep 13, 2003

by Fluffdaddy

FaustianQ posted:

Also it gives Sony time to skip GCN. If it was 2019 or early 2020, they'd be stuck with Navi. A 2021 release strongly indicates the PS5 will be using whatever AMD has for post GCN GPUs.

Ehhhh... they do need to beat their retail date by about a year with development consoles, so they might not have that option. It matters less on a dedicated console, because games can be written to be hardware-optimized.

EoRaptor
Sep 13, 2003

by Fluffdaddy

PC LOAD LETTER posted:

I can't remember why exactly, but x86 isn't an ISA that is inherently good at instruction-level parallelism, and so scaling by adding more pipelines gets hit HARD with diminishing returns after 2 pipelines or so. And both AMD and Intel have had more than that for a long time now, so anything from the Sandy Bridge era onwards is probably already about as wide as is practical anyways.

The relatively poor ILP of x86 was part of the reason why Intel decided to try and double down so hard with a new ISA that was supposed to have insane ILP scaling with EPIC (Itanium) back in the late 90's. They failed of course but the goals were admirable even if the implementation turned out to be a bust.

ARM is supposed to scale better by going wide vs x86 (again, I don't know why; I'm not a CPU architect, but I've read comments by some people who seem to know their stuff, and that seems to have been the general opinion for quite a while now), but there are other trade-offs with that ISA (IIRC worse cache usage efficiency vs x86).


Itanium depended on a magical compiler that could find parallelism when compiling an application. The uarch was laid out to take advantage of that in better ways than x86 can, but without that compiler support it wasn't able to gain anything. If you are thinking some sort of magical compiler would also help x86 (or any uarch) with multi-core, then yes, it would. But despite spending a good chunk of change on it, neither Intel nor anybody else was able to produce such a compiler. Nobody was surprised.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Klyith posted:

except for the part where you have to install plumbing if you want to run those things


You could probably put an 8U 'faceplate replacement' radiator + fans + pump and cool that thing pretty effectively. I suspect this product is really aimed at datacenters that already use direct water/liquid cooling and can plug in racks of these things without taking any sort of extra cost hit.

Don't forget that being seen as having innovative 'halo' products that make headlines without being practical is a successful business model for selling products that are otherwise the same as your competitors.

EoRaptor
Sep 13, 2003

by Fluffdaddy
Well now I know how to pronounce Huawei. Learned nothing new about AMD products though.

EoRaptor
Sep 13, 2003

by Fluffdaddy

ratbert90 posted:

I missed it, but I have always pronounced it "wah-way." Is this incorrect?

Nope, that's about right. I was mostly being facetious, I've heard it pronounced incorrectly a bunch of times, and nothing about AMD's future was actually being said.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Lube banjo posted:

$699 what the christ. yuck


That's way too high. Even with AMD's usual street price being lower it's not going to come in cheap enough to make a big difference.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Number19 posted:

Number must go up. Always up. If number not go up, find way to make number go up, even if dumb. Number must go up.

:capitalism:


<impolite reference to one of the products of the company you work for not having numbers that are going up><acknowledgement that you are in no way responsible for that product><also acknowledgement that I still frequently use that product voluntarily>

The target for that CPU is a highly specialized compute market where time = money, and this was never going to be a CPU that anyone buys individually. It will always be part of a larger 'solution' from a service provider that includes the entire product stack of compute, storage, networking, management, and software. What we think of it isn't relevant at all, and even the people ultimately buying it may never know or care what the CPU model is.

EoRaptor
Sep 13, 2003

by Fluffdaddy

K8.0 posted:

My point is that the thing AMD gains here is prestige and possibly more access to other high end (and thus higher margin) products. They aren't hurting Intel so much as they're helping themselves, especially in the long run.

Though the gain is less directly monetary, building an x64 Windows PC platform with Microsoft will get AMD a lot of general Windows performance benefit as Microsoft incorporates better native memory and thread management, better power control, etc. It may not seem like much, but if it makes the experience cheaper, quicker, and more reliable for OEMs and end users of Windows, it's a really big selling point.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Black Griffon posted:

Well that's a load off my back. I did install the Realtek Audio Driver from the DVD before anyone could stop me, but I can pretty much chuck the rest, yeah?


SIV is SpeedFan 4, which is the Gigabyte tool for setting custom fan curves. If you are happy with your fans you don't need it, but if you want to change them, it'll be required. The good thing is you can set a curve and quit the tool, the settings are saved to the BIOS and stick without needing to have SIV running all the time.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Malcolm XML posted:

Ddr is the last parallel single ended interface in wide use, they could switch to serial like IBM does. In that case differential signaling and fec gets you to dozens of gigabits per pair. But that's only over the wire.

RDRAM was serialized, so it's certainly possible. You'd need to solve the latency problem somehow if you wanted to use that method these days. I do wonder if any of the Rambus patents are still active.

EoRaptor
Sep 13, 2003

by Fluffdaddy

OhFunny posted:

Zen2 refresh seems like an odd move.

They were probably doing a mask refresh to improve yields, and squeezed this in because it makes the OEM market happy.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Number19 posted:

I just bought a fleet of AMD workstations so they are definitely starting to make inroads in places other than the consumer space


How many cbills did it cost?

EoRaptor
Sep 13, 2003

by Fluffdaddy

K8.0 posted:

For what? That's not a kit you can run XMP on AMD and get good performance (clock is too high), so you're going to have to hand tune it (drop to probably 3600 and then tighten up timings as much as you can). And if you're doing that, will you be able to get better deals? Probably, but I'm not the expert.

Far more important to check any RAM against the motherboard maker's compatibility list. This is less of a thing now than in the early days of the 5-series AMD chipsets, but I still think it's underrated as a way to head off a lot of difficult-to-troubleshoot issues.

EoRaptor
Sep 13, 2003

by Fluffdaddy
If you want to read basic NTFS data and don't care about permissions, journals, replay logs, disk states, files resident in the MFT, compressed files, encrypted files, last-accessed times, and a bunch of other things, NTFS file system access becomes a lot simpler. Since so many boards have it, it's probably a module from the BIOS maker (Phoenix) that can be licensed.
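As a sketch of just how small the "simple" subset is: locating the MFT only takes a handful of fields from the NTFS boot sector. The offsets below follow the published NTFS boot-sector layout; the sector itself is synthetic test data, not a real volume, and this deliberately ignores everything a real driver would have to handle:

```python
import struct

def parse_ntfs_boot_sector(sector: bytes):
    """Pull the minimum needed to find the MFT from an NTFS boot sector."""
    if sector[3:11] != b"NTFS    ":  # OEM ID at offset 3
        raise ValueError("not an NTFS boot sector")
    bytes_per_sector, = struct.unpack_from("<H", sector, 0x0B)
    sectors_per_cluster = sector[0x0D]
    total_sectors, = struct.unpack_from("<Q", sector, 0x28)
    mft_cluster, = struct.unpack_from("<Q", sector, 0x30)  # MFT start LCN
    cluster_size = bytes_per_sector * sectors_per_cluster
    return {
        "bytes_per_sector": bytes_per_sector,
        "cluster_size": cluster_size,
        "mft_offset": mft_cluster * cluster_size,  # byte offset of the MFT
        "total_sectors": total_sectors,
    }

# Fake 512-byte boot sector: 512 B sectors, 8 sectors/cluster, MFT at cluster 4.
sec = bytearray(512)
sec[3:11] = b"NTFS    "
struct.pack_into("<H", sec, 0x0B, 512)
sec[0x0D] = 8
struct.pack_into("<Q", sec, 0x28, 2048)
struct.pack_into("<Q", sec, 0x30, 4)
info = parse_ntfs_boot_sector(bytes(sec))
```

From there, a read-only tool just walks MFT records for the file it wants — which is plausibly all a licensed BIOS module bothers to do.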

EoRaptor
Sep 13, 2003

by Fluffdaddy

whalestory posted:

I just built my new pc with new mobo, new 7800x3d, new ram, new ssds, my previous GPU, but not getting any monitor signal. It's starting up and everything, lights are going on, fans are on, but just no signal.

Is it possible that this is because the gigabyte b650 needs to update its bios to support the 7800x3d? Is there a way to flash the bios without needing a monitor?

Or maybe it could be some ram stick/slot bullshit...

Pull out your video card and plug a monitor directly into the motherboard. Some BIOSes default to onboard video first and never init the video card, waiting for the (not-yet-present) OS to do it. Another possibility: first-time RAM training can take more than five minutes, so maybe just wait?

EoRaptor
Sep 13, 2003

by Fluffdaddy

whalestory posted:

I tried booting and letting it sit for 10 minutes with displayport plugged into gpu, then did it again plugged into motherboard, then again plugged into motherboard with the gpu taken out, and now I'm trying plugged into motherboard with gpu and one ram stick taken out :twisted:

Oh yeah I also tried the Q flash plus thing but it only stayed on for like 5 minutes before shutting off... what's that about? That doesn't seem like nearly enough time, nor did it seem to do anything.

I think I'm going to keep removing stuff until it starts working, maybe even switch to hdmi, who knows. I'm getting sleepy :(

Another option is to pull everything except the CPU and see if it'll beep about missing RAM. This should include unplugging any drives/keyboards/mice etc.

EoRaptor
Sep 13, 2003

by Fluffdaddy

SwissArmyDruid posted:

lol, I missed that video. Steve is right.

edit: AND ALL THIS loving MOTHERBOARD ARMOR. loving WHY.

At this point, why. I dream of a diag port that I can plug a cable into, maybe on the other end is my phone with a diagnostic app with other useful utilities built into it.

Or fine, make the standardized diag port, and then you have the option of either plugging in a little circuitboard with a few MB of non-volatile memory to store logs on that you can use on any motherboard with a port, with hexcode readout and maybe a few extra buttons for quick CMOS clear. (because that's another loving thing that motherboard manufacturers have been doing, skipping out on including jumpers for CMOS reset, and then placing the jumper for that underneath the GPUs. Thank god I have an altoids tin FULL of the little fuckers, salvaged from dead hard drives and older motherboards so they have the extra little plastic tab to make them easier to yank. I have picked up the trick of wiring the case's reset button into two-pin CMOS clear jumpers to make my life easier while I'm diagnosing a problem, BUT EVEN CASE MANUFACTURERS ARE SKIPPING OUT ON THE RESET BUTTONS THESE DAYS so I have a reset switch on some leads that I salvaged out of a case specifically for this purpose.)

Or, you can plug in a cable that goes to an app on your phone that logs, reads the error codes, a detailed description of what it means, possible remediation steps, and a more advanced "Unbrick My BIOS" functionality. Just being able to evacuate logs onto an internet-connected device so you can send it to someone else that might be able to help you? <chefkiss>

I don't loving know.

All I know is that a boot code on its own is sometimes no more informative than a blink or beep code.

God I loving hate ATX so much.

I was thinking you could embed NFC into the rear I/O somewhere, and then just update it with an error code. Any recent phone could read it and look it up through a website or the manufacturer's app. You'd need the tag itself plus a tiny bit of circuitry to update it, but everything else would be externalized, so there's no need to waste time on code to drive LCD segment displays or beepers, etc. You could also extend it to spit out serial numbers, manufacturing dates, or even current BIOS settings, so OEMs could enable field techs or other big service providers.
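A minimal sketch of what the board-side logic would amount to — the `NfcStatusTag` class, its register layout, and the example error code are all invented for illustration (real tags like NXP's NTAG I2C parts have their own memory maps), but the idea is just "write NUL-terminated ASCII into a tag's EEPROM over a serial bus, let the phone do everything else":

```python
class NfcStatusTag:
    """Stand-in for an I2C-attached NFC tag with a small EEPROM
    (hypothetical layout, not any real part's memory map)."""

    def __init__(self, size=64):
        self.eeprom = bytearray(size)

    def write_status(self, code: int, message: str):
        # POST code in hex + human-readable text, NUL-terminated ASCII.
        payload = f"{code:02X}:{message}".encode("ascii")[: len(self.eeprom) - 1]
        self.eeprom[: len(payload)] = payload
        self.eeprom[len(payload)] = 0

    def read_status(self) -> str:
        # What a phone would see on a passive read: the tag is powered
        # by the reader's field, so the board itself can be dead.
        return self.eeprom.split(b"\x00", 1)[0].decode("ascii")

tag = NfcStatusTag()
tag.write_status(0x55, "DRAM training failed")  # hypothetical POST code
```

The firmware cost really is just the `write_status` half; decoding, lookup, and display all live in the phone app.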

EoRaptor
Sep 13, 2003

by Fluffdaddy

mdxi posted:

What do you propose to run the full Bluetooth stack on while bringing up the CPU?

NFC has nothing to do with Bluetooth.

EoRaptor
Sep 13, 2003

by Fluffdaddy

BlankSystemDaemon posted:

gently caress bluetooth, NFC and every other piece of bullshit that requires hundreds of thousands of lines of code in firmware to properly be able to display an error.

It's possible to encode the entire alphanumeric character set (if you take advantage of both lower case and upper case, like the segment7 font does), and that lets you cover 73 errors with just a single seven-segment display.

NFC would be simpler to drive than a display. It's literally just dumping ASCII into an address on a serial bus that is actually a bit of flash ROM behind the scenes. Everything else is handled by the reader. It doesn't even need board power to be read: as long as there was some power to complete the write, the reader supplies power afterwards, so you could pull up the 'last status' without needing to power the board at all.

EoRaptor
Sep 13, 2003

by Fluffdaddy

mdxi posted:

So, separate protocols entirely, but commonly implemented together in hardware. Cool. Now I know.

For readers, yes; the 'tag' is what would go onto the MB somewhere, with a simple serial interface to get updated. That's the benefit here: all the costs are in a device you already own (smartphone), so all the MB needs to add is a tiny NFC chip that can sit on an already-existing serial bus, has some flash ROM, and a conveniently located loop of wire.


EoRaptor
Sep 13, 2003

by Fluffdaddy
Just an FYI about Gigabyte: they publish beta BIOSes with a letter suffix, e.g. F7b or F7g are considered 'beta'. However, when they are about to roll out a final version, they will pull all the beta versions from the website, and the final usually arrives within 5 to 10 days. That may not fully apply right now, as they may pull BIOSes for safety reasons, but it's been the historical pattern when beta BIOSes suddenly vanish.
