|
Oh, when you said “just more lower powerish cores” I thought you meant with the same ISA
|
# ? May 8, 2024 00:23 |
|
Subjunctive posted:I wonder if the NPUs were originally targeted at upscaling, if the design goes that far back. I guess Apple had been doing the Neural Engine thing for a while by then My understanding is that they were initially for doing computational photography tricks, which is how phones get the “pictures” from the tiny sensor to look good to people. Behind the scenes, it’s stitching together multiple pictures and applying filters to them in real time to make a hybrid monster image that people like. Supposedly it also gets used for FaceID and those Animoji/memoji you haven’t seen in years. Ultimately it has some real uses, and the chip makers have been hoping to get traction on the “AI” buzzword for them for like 8 years… they just finally succeeded in the wake of the successive Crypto/Metaverse/NFT scams.
|
# ? May 8, 2024 03:10 |
|
Cygni posted:My understanding is that they were initially for doing computational photography tricks, which is how phones get the “pictures” from the tiny sensor to look good to people. Behind the scenes, it’s stitching together multiple pictures and applying filters to them in real time to make a hybrid monster image that people like. This sounds surprisingly like TAAU/DLSS. I never knew how phone cameras worked with their tiny sensors, but this makes a lot of sense. Are they taking multiple photos in quick succession while using natural hand shaking as the "jitter" and reconstructing a super high-res image by sampling all of the images?
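For what it's worth, the reconstruction idea in that question can be sketched in a few lines. This is a toy version that assumes the per-frame sub-pixel shifts have already been estimated (real pipelines estimate alignment with optical flow and do much smarter fusion); the function name and scatter-average approach are just illustrative:

```python
import numpy as np

def superres_accumulate(frames, shifts, scale=2):
    """Naive multi-frame super-resolution: scatter each low-res frame
    onto a finer grid using its estimated sub-pixel shift, then average.
    frames: list of (H, W) arrays; shifts: list of (dy, dx) in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its nearest high-res grid cell.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1
    # Average the samples that landed in each cell; unhit cells stay 0.
    return acc / np.maximum(weight, 1)
```

The point is that differently-shifted frames land on different cells of the fine grid, so jitter (from hand shake) actually adds information instead of just noise.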
|
# ? May 8, 2024 03:28 |
|
Dr. Video Games 0031 posted:This sounds surprisingly like TAAU/DLSS. I never knew how phone cameras worked with their tiny sensors, but this makes a lot of sense. Are they taking multiple photos in quick succession while using natural hand shaking as the "jitter" and reconstructing a super high-res image by sampling all of the images? Some stills cameras can also do it by using the sensor stabiliser to provide the jitter.
|
# ? May 8, 2024 03:48 |
|
MKBHD has done a few videos with the question "what is a photo?" discussing the processing pipeline and computations etc which I find pretty interesting, even though I'm not personally a super camera guy: https://www.youtube.com/watch?v=88kd9tVwkH8 the funniest one is Huawei/Samsung getting busted faking the moon in their pictures. Huawei literally superimposing a canned image of the moon into your photos when the software recognized it, and Samsung adding fake details from a "reference image": https://www.youtube.com/watch?v=1afpDuTb-P0&t=78s
|
# ? May 8, 2024 04:41 |
|
Does this mean that technically a phone mounted on a stable tripod will take worse photos than a handheld phone?
|
# ? May 8, 2024 04:49 |
|
Cygni posted:My understanding is that they were initially for doing computational photography tricks Sorry, I meant the AMD NPUs!
|
# ? May 8, 2024 05:06 |
|
Subjunctive posted:Sorry, I meant the AMD NPUs! I actually just asked some AMD guys this earlier today: their origins are all in the “traditional” VLIW DSP “cores” and more or less function like that from a programmer’s perspective.
|
# ? May 8, 2024 05:20 |
|
hobbesmaster posted:I actually just asked some AMD guys this earlier today: their origins are all in the “traditional” VLIW DSP “cores” and more or less function like that from a programmer’s perspective. ah, cool—thanks!
|
# ? May 8, 2024 05:23 |
|
Llamadeus posted:Some stills cameras can also do it by using the sensor stabiliser to provide the jitter. lmao
|
# ? May 8, 2024 06:16 |
|
Cygni posted:My understanding is that they were initially for doing computational photography tricks, which is how phones get the “pictures” from the tiny sensor to look good to people. Behind the scenes, it’s stitching together multiple pictures and applying filters to them in real time to make a hybrid monster image that people like. Nah, computational photog stuff in Apple's SoCs is a dedicated block, the Image Signal Processor (ISP). It's existed in their chips a lot longer than the Apple Neural Engine (ANE). Apple's far from the only company with an ISP, all cellphone SoCs have had one for ages. The ANE and similar "AI" engines are coprocessors heavily specialized to accelerate matrix math, as that's the root of so-called "AI". I don't think I've ever heard much about what's in a typical ISP but my guess would be DSP cores for some programmability and possibly some image filter engines that are slightly less programmable. Apple has recently started borrowing the ANE for some camera functions - an example being that on Apple Silicon MacBooks, the ANE is used to do advanced "AI" image enhancement on the webcam's output. However, as far as I know, the ANE postprocesses the ISP's output rather than taking over the whole pipeline. The first ANE was in 2017's A11 Bionic, used in the iPhone X, which was the first iPhone with FaceID - so yeah, at the time, FaceID was the ANE's headline feature.
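To illustrate why "heavily specialized to accelerate matrix math" covers so much of so-called "AI": a convolution, the workhorse op of image models, can be lowered to a single matrix multiply (the classic im2col trick), which is exactly the shape of work an NPU's MAC arrays are built for. A toy NumPy sketch, not any vendor's actual kernel:

```python
import numpy as np

def conv2d_as_matmul(image, kernels):
    """Lower a 'valid' 2D convolution to one matrix multiply (im2col).
    image: (H, W); kernels: (n, kh, kw). Returns (n, H-kh+1, W-kw+1)."""
    n, kh, kw = kernels.shape
    H, W = image.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Gather every kh x kw patch into a row -> (oh*ow, kh*kw)
    cols = np.stack([image[i:i+kh, j:j+kw].ravel()
                     for i in range(oh) for j in range(ow)])
    # One big matmul applies all n filters at all positions at once.
    out = cols @ kernels.reshape(n, -1).T          # (oh*ow, n)
    return out.T.reshape(n, oh, ow)

# Cross-check against a direct sliding-window convolution.
rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
ks = rng.standard_normal((2, 3, 3))
ref = np.array([[[np.sum(img[i:i+3, j:j+3] * k) for j in range(4)]
                 for i in range(4)] for k in ks])
assert np.allclose(conv2d_as_matmul(img, ks), ref)
```

Once everything is a matmul, a fixed-function grid of multiply-accumulate units can chew through it without needing general-purpose programmability.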
|
# ? May 8, 2024 09:23 |
|
Cygni posted:the funniest one is Huawei/Samsung getting busted faking the moon in their pictures. It's one of those things that, in retrospect, was of course going to happen.
|
# ? May 8, 2024 12:39 |
|
BobHoward posted:Nah, computational photog stuff in Apple's SoCs is a dedicated block, the Image Signal Processor (ISP). It's existed in their chips a lot longer than the Apple Neural Engine (ANE). Apple's far from the only company with an ISP, all cellphone SoCs have had one for ages. My understanding from articles back at the time was that the NPU was an offshoot of the ISPs to allow them to do different computational photography tricks than the ISPs were doing, stuff like subject recognition etc. I might fully be wrong though, not really a camera guy
|
# ? May 8, 2024 15:56 |
|
Cygni posted:My understanding from articles back at the time was that the NPU was an offshoot of the ISPs to allow them to do different computational photography tricks than the ISPs were doing, stuff like subject recognition etc. I might fully be wrong though, not really a camera guy I guess those articles weren't wrong, but also not quite right? The wrong part is that it was a new block designed to accelerate inference, not an offshoot of the ISP. The right part is that one of the things you can do with spicy matrix math is, as you mentioned, the new kinds of image processing tasks made possible by building and training a model. So, sometimes it does have a role to play in work that was formerly ISP-only. But the ISP is still there to this day; it wasn't made redundant by the NPU.
|
# ? May 12, 2024 13:09 |
|
ISPs, I would think, would be 'new' to the desktop space -- on mobiles/cameras they have been around for decades(?) to do HW de-Bayering and interface to the raw sensor. If anything I guess there is some internal alignment that has occurred where ISP/DSP blocks have converged, so the ISP block on a modern SoC is a derivative of whatever generic DSP cores exist. Or not, since validation is $$$ and don't gently caress with what works, I guess. Cygni posted:My understanding from articles back at the time was that the NPU was an offshoot of the ISPs to allow them to do different computational photography tricks than the ISPs were doing, stuff like subject recognition etc. I might fully be wrong though, not really a camera guy This would make more sense; IMO the ISP sits in the pipeline as a block that ingests raw sensor data, processes it (many steps to get from raw sensor data to an x by y pixel frame), and outputs frames upstream to whatever is next, NPU or otherwise.
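A toy sketch of the de-Bayering step an ISP does in hardware, assuming an RGGB mosaic. Real ISPs interpolate up to full resolution with edge-aware filters; this crude half-resolution version just shows what the raw sensor layout looks like:

```python
import numpy as np

def debayer_rggb(raw):
    """Crude demosaic: collapse each 2x2 RGGB cell into one RGB pixel.
    raw: (2H, 2W) single-channel sensor data. Each 2x2 cell is laid out
    [[R, G], [G, B]]; the two greens are averaged."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2, b])
```

Everything downstream of this (white balance, denoise, tone mapping, sharpening) is the rest of the "many steps" an ISP pipelines through before anything reaches an NPU.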
|
# ? May 12, 2024 20:33 |
|
https://twitter.com/VideoCardz/status/1790086418984816810?t=jmjSYjBKaFQxhArYEkVIkA&s=19
|
# ? May 13, 2024 23:59 |
|
Worth stressing that this will have lower cache amounts than the 7000 desktop parts on top of the lower clock speeds. So the 8700F will be considerably slower than the 7700 (X or not). Great naming AMD, very clear.
|
# ? May 14, 2024 00:03 |
|
Dr. Video Games 0031 posted:Great naming AMD, very clear. Xilinx guys: thanks!
|
# ? May 14, 2024 00:55 |
|
the benefit of APUs (or chips made from them) is that they support faster RAM than the chiplet CPUs and have an asynchronous IF clock, but this has little practical use outside of very specific applications
|
# ? May 14, 2024 04:10 |
|
e: now that AMD has posted the product pages, i was wrong! i thought the 8400F was an 8500G die without graphics, but its actually an 8600G without graphics. so: 8700G with no graphics = 8700F 8600G with no graphics = 8400F(??) i assumed the names would mean something, and that was silly of me. i know better. Cygni fucked around with this message at 08:26 on May 14, 2024 |
# ? May 14, 2024 05:56 |
|
stupid question but cross-check me on this: Is there any difference between a 7302 and a 7302P aside from multi-processor support? I thought the P just had that fused off for segmentation reasons, but I don't want to discover that half the PCIe lanes are unusable on a single-chip motherboard because I have the "wrong" one. I've got a week to return it for the "right" processor while waiting on the rest of the parts to arrive.
|
# ? May 14, 2024 06:23 |
|
this article is for epyc 1 but on the non-P cpu and board STH tested, the 64 lanes that would be used for IF on a dual socket board are routed properly to devices on the board. don't think they'd change this behavior going forward, especially as some boards were upgradeable to epyc 2. you could of course change the CPU if you wanted to be sure. https://www.servethehome.com/single-socket-amd-epyc-7000-faq-answers-common-questions/
|
# ? May 14, 2024 06:35 |
|
I’m annoyed you can’t get the pro models in retail boxes. I want one for a nas build! Integrated graphics and ECC? Nice.
|
# ? May 14, 2024 07:00 |
|
Anime Schoolgirl posted:this article is for epyc 1 but on the non-P cpu and board STH tested, the 64 lanes that would be used for IF on a dual socket board are routed properly to devices on the board. don't think they'd change this behavior going forward, especially as some boards were upgradeable to epyc 2. you could of course change the CPU if you wanted to be sure. thanks, that checks with what I was seeing as well. It makes sense since there's not -P variants of most of the lineup, and I don't see them selling their top-of-the-line chip only in pairs.
|
# ? May 15, 2024 04:21 |
|
Is there a thread discussing MS’s new ARM push? (Asking here cause it’s CPU centric) https://www.theverge.com/2024/5/20/24160486/microsoft-copilot-plus-ai-arm-chips-pc-surface-event
|
# ? May 20, 2024 21:55 |
|
So anyhow, here's the dumbest question of the week: If Apple can make such a super efficient CPU, why can't AMD or Intel? Surely can't be just the packaged RAM on Mx or the bigger µOp decoder on x86.
|
# ? May 20, 2024 22:06 |
|
Combat Pretzel posted:So anyhow, here's the dumbest question of the week: If Apple can make such a super efficient CPU, why can't AMD or Intel? Surely can't be just the packaged RAM on Mx or the bigger µOp decoder on x86. helps when you don't need to give remotely a poo poo about backwards compatibility and can make a CPU that does EXACTLY what you want it to do and dictate that all the software match you, instead of being stuck with 40 years of standards
|
# ? May 20, 2024 22:20 |
|
also helps if it doesn't matter how much the chip costs to make because it will only ever be packaged with an entire system with enormous margins
|
# ? May 20, 2024 22:32 |
|
Arivia posted:helps when you don't need to give remotely a poo poo about backwards compatibility and can make a CPU to do EXACTLY what you want it to do and dictate all the software match you, instead of being stuck with 40 years of standards Apple’s ARM stuff has a bunch of things specifically for backward compatibility with the x86 memory model, as it happens. This is one of the reasons that Rosetta 2 works so well.
|
# ? May 20, 2024 22:36 |
|
So that x86S stuff Intel's planning could eventually result in some performance advantages?
|
# ? May 20, 2024 22:37 |
|
You can't overlook economies of scale for Apple, either. The iPhone sells more units annually than the entire global laptop market combined, which helps because so much of the silicon design is shared across all their processors.
|
# ? May 20, 2024 22:42 |
|
Combat Pretzel posted:So that x86S stuff Intel's planning could eventually result in some performance advantages? I doubt it, they’re pretty trivial simplifications. might let them take a few gates off a cut-down chip, but nothing in there is going to make x86 as fast to decode as ARM AFAICT
|
# ? May 20, 2024 22:47 |
|
like not to undersell Apple's silicon engineering, because it's good, but they have something of a head start when part of their strategy is just to give the world's leading foundry a quadrillion dollars to completely buy out their latest, most efficient nodes for their first year
|
# ? May 20, 2024 22:50 |
|
Intel could have its foundry do whatever it wanted…
|
# ? May 20, 2024 22:56 |
|
Subjunctive posted:Intel could have its foundry do whatever it wanted… Except be good at things.
|
# ? May 20, 2024 22:59 |
|
Remember when Intel had a 2 year manufacturing lead on the whole world? And then they decided that was just a cost center.
|
# ? May 20, 2024 23:08 |
|
It's definitely a combination of a bunch of things. Apple hired very good people to design efficient processors for mobile devices on the most advanced nodes possible. They aren't trying to compete with Intel and AMD, they're trying to dominate the mobile device world, and frankly in terms of CPUs they are. AMD and Intel have a different target. Laptops are in it, but so are desktops and servers. Performance, broad spectrums of hardware acceleration, and hardware backcompat matter more to them. Apple isn't competitive with them in most of their applications, and they aren't trying to be. Their entire hardware and software stack is basically bespoke, they can optimize for their goals and say "lol it's not supposed to do that" to anything else.
|
# ? May 20, 2024 23:12 |
|
repiv posted:like not to undersell apples silicon engineering, because it's good, but they have something of a head-start when part of their strategy is just to give the worlds leading foundry a quadrillion dollars to completely buy out their latest most efficient nodes for their first year Pays to be the world’s richest tech company
|
# ? May 20, 2024 23:16 |
|
The M chips can't add an extra .25ghz if you shove 200 extra watts into them, sounds like a CPU for wusses. Over here in x86 land we have Chad CPUs! haha electrons go brrrrrrr
|
# ? May 21, 2024 03:25 |