BobHoward
Feb 13, 2012

TraderStav posted:

Inside each of those is a Google Photos subdirectory, and then directories of albums/etc. The issue is that each contains subfolders with the same names, such as "Photos from 2009", which exist in several of the zips.

Is there a good and reliable way to merge all of these folders and subfolders into one single folder?

1. Are you confident that all the photos are not duplicates, either in name or in contents?
2. Are you comfortable using Terminal?

If the answer to both is "yes" you could try this Terminal one-liner:

find sourcedir -type f -print0 | xargs -0 -J % mv -i % destdir/

This should move all the files (but not their enclosing folders) found in "sourcedir" into "destdir". Because we're giving the 'mv' (move) command the "-i" switch, it asks before overwriting anything, and since there's no terminal input available to answer those prompts inside this pipeline, the net effect is that it won't move a file out of the "sourcedir" hierarchy when a file of the same name already exists in "destdir".
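If you want a dry run first, one cautious option (a sketch, using the same BSD xargs that ships with macOS) is to put 'echo' in front of 'mv', so the command gets printed instead of executed:

find sourcedir -type f -print0 | xargs -0 -J % echo mv -i % destdir/

Once the preview looks right, drop the 'echo' and run it for real. (macOS xargs also has a '-o' switch that reconnects the command to your terminal, in case you'd rather answer the 'mv -i' overwrite prompts by hand.)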

I have a much more complex Perl script for identifying and optionally deleting duplicate files. It looks at file contents rather than names, so it's very good for curating a huge collection of photos where duplicates have accumulated (I needed it because that used to happen to me a lot when exporting from cameras). Let me know if you're interested and I can put it up on pastebin or something.

BobHoward
Feb 13, 2012

TraderStav posted:

I am confident that there are not any meaningful duplicates, at least not structurally. There could be multiples of photos from prior photo-management attempts, but nothing worth taking into consideration. There WILL be duplicates in subdirectory names though, if that's something I need to consider.

I am comfortable with Terminal, just not with wholly originating commands like the above, so I'll give that a go. I am planning to back up all of the individual directories to my homelab prior to running the command, so it's not an all-or-nothing thing.

Appreciate all the input on this.


E: Ooooh, I just realized that I have no assurance that Google did not repeat file names... so perhaps a pastebin of the Perl script would be helpful, to account for that.

https://pastebin.com/c0cV0dkX

The idea behind this script is to first go through all the files, calculating an MD5 hash of the first 32KB of each. That lets it generate a candidate list of possible dupes very quickly; in a second pass, it does full byte-by-byte comparisons of any files whose hashes match.
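For the curious, the core of that idea looks something like this - a minimal sketch of the two-pass approach, not the actual pastebin script (File::Find, Digest::MD5, and File::Compare all ship with Perl):

#!/usr/bin/perl
# Sketch: pass 1 groups files by an MD5 hash of their first 32KB,
# pass 2 does a full byte-for-byte compare of anything that collided.
use strict;
use warnings;
use File::Find;
use Digest::MD5 qw(md5_hex);
use File::Compare qw(compare);

my %groups;    # partial-content hash => list of file paths

find(sub {
    return unless -f $_;
    open(my $fh, '<:raw', $_) or return;
    read($fh, my $head, 32 * 1024);
    close($fh);
    push @{ $groups{ md5_hex($head // '') } }, $File::Find::name;
}, $ARGV[0] // '.');

for my $paths (values %groups) {
    next unless scalar(@$paths) > 1;     # explicit scalar context for the count
    my ($first, @rest) = @$paths;
    for my $other (@rest) {
        # compare() returns 0 when the file contents are identical
        print "dupe: $other == $first\n" if compare($first, $other) == 0;
    }
}

The real script does a lot more (deletion switches, same-name filtering, truncation detection), but that's the skeleton: cheap partial hashes to shrink the candidate list, full comparisons only where they matter.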

It won't delete anything unless you pass it one of the deletion switches on the command line. I recommend starting out by passing it no switches and just pointing it at the top level directory you want to examine. When you do tell it to delete things, it does not ask for confirmation and the files it deletes are sent straight to hell, not the trashcan.

I wasn't the original author; this is a significant rewrite of someone else's script. My goal was to make it faster when processing huge numbers of files - that's why my version does the two-pass thing, since reading just the first 32KB of each file in pass 1 is a lot faster than reading the whole thing. This probably mattered more back when I did it, because SSDs weren't really a thing at the time. Along the way I also added some features for detecting truncated and similar files, which you probably won't care about. (They're also not perfect, because I didn't need them to be.) In case the original is more useful for your needs, here it is:

https://hayne.net/MacDev/Perl/findDupeFiles

BobHoward
Feb 13, 2012

Subjunctive posted:

Anyone have thoughts about this? I used to work on Linux filesystems a long time ago but I don’t know the macOS VFS stuff at all.

Comedy answer: learn another VFS!

https://opensource.apple.com/source/xnu/xnu-7195.81.3/bsd/vfs/

Real answer: I think Apple's exFAT driver is a toy that they don't pay enough attention to. Probably file a bug on it? Not that you'll ever hear a response.

BobHoward
Feb 13, 2012

TraderStav posted:

How does it handle the dupes once it finds them? Does it rename them to something unique? I couldn't care less about the file names themselves.

Having never run a perl script before, is it as simple as just running a command in terminal if it's already set up by default in macOS or are there preceding steps I need to look into?

How to run it: make a plain text file, paste the script into it, then use 'chmod +x' on the script to make it executable. You can now run it just like it's a regular command line program.

It has built-in help; just run it without any arguments and it will print it. The default behavior is to do nothing beyond printing a list of the dupes it found. If you pass the appropriate switch listed in the help, it will delete them too. I didn't write anything for renaming, so that option doesn't exist.
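To make that concrete, the whole workflow is roughly this (assuming you saved it as FindDupes.pl and the pasted script kept its #!/usr/bin/perl first line):

chmod +x FindDupes.pl
./FindDupes.pl                      # no arguments: prints the built-in help
./FindDupes.pl ~/Pictures/Takeout   # default: lists dupes, deletes nothing

The ~/Pictures/Takeout path is just an example - point it at whatever directory you want scanned.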

BobHoward
Feb 13, 2012

Grassy Knowles posted:

NetNewsWire is FOSS and works for me

Seconding this. NNW went through some dark years after Brent Simmons sold it to a company that didn't seem to know what to do with it and seldom updated it, but now that he's got it back and decided to make it a FOSS passion project, it's doing great.

BobHoward
Feb 13, 2012

Happy_Misanthrope posted:

No, as I understand it they present as one. I'm saying that for game workloads it's more difficult for multi-die GPU configurations to get the level of scaling that apps can. There's a benefit, no doubt, but apps can get close to 100% scaling - that's not the case with games.

The M1/M2 Ultra's interposer is very fast, though, so it's not like this is really comparable to SLI/Crossfire either.

I'm pretty sure the M1 Ultra was also presented as one GPU in Metal, and the Ultra's basic GPU architecture hasn't changed significantly with the M2 - it's still 2 dies. The bottleneck is not necessarily the API.

Correct - both M1 and M2 Ultra present as a single GPU to software. There is no way to even try to use them as two independent GPUs. Apple has a 2.5 TB/s interconnect linking the two chips, so the GPU has been architected to behave as a single unit down to the hardware level. Unlike Crossfire, the whole GPU works on one frame at a time. And yes, there are scaling issues, especially on M1 Ultra. But it was the first generation of this, so that's somewhat expected.

The real issue for AAA game ports isn't unique to the Ultra chips. It's that Apple GPUs use tile-based deferred rendering (TBDR) while AMD and Nvidia use immediate-mode rendering (IMR), and a naive port of an IMR-optimized game to a TBDR GPU won't perform as well as it potentially could. Fixing this usually doesn't take massive changes, but it is extra work, and it's not easy for devs who haven't done it before to predict the scope of the project without actually trying it.

Which brings us to the purpose of Apple's Game Porting Toolkit. Despite the name, most of it is not intended (or licensed) to be used directly in finished ports of games. Instead, it's for making low-effort internal-only ports which can be put under the microscope of Apple's Metal performance tools. This is supposed to let devs invest very little time in figuring out where and how their engine's accidentally bottlenecking itself, which in turn allows them to make informed estimates of the scope and cost of doing a real native port. Basically, Apple's hoping that making it cheap to resolve uncertainty about how worthwhile it will be to do a port will result in more ports.

BobHoward
Feb 13, 2012

Binary Badger posted:

Gives me hope for the M3 to inherit the A17's hardware RT. Something tells me there are undocumented RT APIs in Ventura / Sonoma, left over because the M2 was supposed to get the RT hardware but it was pulled at the last minute when the hardware of the day drew too much power.

I'm very confident that hardware RT is coming in the M3. If it's also a TSMC 3nm chip, it's going to be based on A17-generation CPU and GPU cores.

Apple's been doing raytracing in Metal (including API support, recently) out in the open for years. It's just that before A17, Apple's sample code and/or Metal drivers had to use programmable compute shaders to do work better handled by dedicated hardware.

I don't trust those stories about M2 and/or A16 having RT hardware pulled at the last minute. Maybe there's a sliver of truth to it, but if so, the reporting has garbled a lot.

In general, the Mac rumors 'press' loves to believe that big, dramatic, last-minute changes happen all the time in chip design, and that's just... not how things work, really. Chip designs get frozen (meaning: no more changes except bugfixes) a long time before any are sold to the public - probably a year or so for a big, complex chip like the M2. Issues like excessive power should show up in simulations done well before the freeze, so it's hard to imagine a problem like that hitting as a last-minute "OH poo poo EVERYTHING'S ON FIRE NOW" surprise.

BobHoward
Feb 13, 2012

TraderStav posted:

Thanks again for providing the script and the background! I finally got around to experimenting a bit with it and wow, did it provide a lot of dupes (with no deletion instructions).

I originally assumed that the script was just checking filenames, but since it's comparing the hash, is it safe to assume that if the filename AND the hash match, the extras can be safely deleted and one copy preserved?

Yes. The script doesn't care about filenames unless you pass it the "-same" switch, which forces it to only compare files with the same name. Even then it's doing file content comparisons - it doesn't ever declare files as duplicates based on filename alone.

It does two comparison passes. The first pass hashes the first few kilobytes of every file, then compares all the hashes to generate a list of dupe candidates. The second pass rechecks the dupe candidates at full length, and it doesn't use hashing; it's a direct comparison.

If you look at the printed output, the files in the left-hand column are the ones which will get deleted if you pass the "-d" switch. You can influence the script's choice of what to delete by passing it two or more directories to scan. It always checks all the files in every directory you point it at, but when dupes are found, if one copy resides somewhere under the first directory, that's the copy it'll prioritize for deletion.
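As a hypothetical example (made-up paths), if you wanted the copies under a freshly merged Takeout dump deleted in preference to the ones in your curated library, you'd run something like:

./FindDupes.pl ~/TakeoutMerged ~/Pictures/Library -d

and for each dupe pair spanning the two trees, the copy under ~/TakeoutMerged is the one that gets deleted. (Remember: straight to hell, not the trashcan.)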

BobHoward
Feb 13, 2012

TraderStav posted:

L O L

Clearly I did something wrong, added -d at the end and boom, everything but the directories was deleted. Everything

Command was ./FindDupes.pl /Users/username/GoogleTakeout -d

Clearly I misunderstood how this was going to work. I would have thought it was going to flag every duplicate file and delete those, leaving one version of each.

No, you understood the directions correctly; that's the way it's intended to work. I apologize for this result! I wrote the script years ago, and there may be some difference in modern Perl behavior that's responsible. (I also haven't run it in many, many years, which would be why I never saw this bug.)

I'll let you know ITT if I figure out the problem, but in the meantime, don't try using the delete option again; it'll probably just do the same thing.

EDIT: gently caress. Confirmed that it's a Stupid Perl Thing. Fixed version of the script:

https://pastebin.com/GeKv7NG6

hey me from about 10 years ago: There's a reason why you switched to Python for poo poo like this later in life, why the gently caress didn't you switch earlier

BobHoward
Feb 13, 2012

TraderStav posted:

Awesome, thanks for updating the script! I'm in the process of restoring the files and will try this version instead. Had the genius idea (why didn't I think of it earlier) of just running the script right on my UnRaid machine instead of transferring everything back to my MBP, which lacks the space for two sets of the data. Nothing like a 500GB tarball sitting there

No problem on the update; I had to do it for my own peace of mind. It was a one-line bug, too, so not hard to fix.

Perl has this thing where an expression involving an array can mean either its length or its contents, and which one you get is implicit, based on the context in which you're using the array. The rules for this context-dependent behavior apparently changed between when I wrote the script and now.

So, on the line where I intended to check the size of the array holding a file's list of duplicates, it instead started checking for the existence of that array. The array always existed, even at zero length (no duplicates), which meant the "if" always evaluated true, which meant the script unconditionally deleted all files regardless of duplicate count.

The fix was simple: explicitly force scalar context, as I should have done in the first place. (Or, as mentioned, past me should have written the thing in a saner scripting language than Perl.)
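In miniature, the failure mode was something like this (a hypothetical reconstruction of the shape of the bug, not the actual line from the script):

my %dupes = ( 'photo.jpg' => [] );           # file seen, zero duplicates

if ( $dupes{'photo.jpg'} ) {                 # tests the array *reference*,
    print "delete!\n";                       # which is always true -> fires
}

if ( scalar @{ $dupes{'photo.jpg'} } ) {     # forced scalar context counts
    print "delete!\n";                       # elements -> stays quiet here
}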

BobHoward
Feb 13, 2012

Binary Badger posted:

Basically you do a revive if you want to preserve user data; restore is when you wanna go the scorched-earth route of burning everything away, like after a malware incursion, or when you want to give the Mac to someone without handing over your data.

You do have to be able to boot into DFU mode to use it, but it seems foolproof past that.

The utility will download a bootable OS image OTA directly to the Mac being serviced via a USB-C cable; strange that in this instance they specifically disallow using a TB3 cable.

There are actually reasons for this which make sense.

The first part is that DFU (Device Firmware Update) isn't an Apple proprietary thing, it's a very old part of the USB spec. So even if you wanted DFU to run over a TB3 cable, that TB cable would have to be in USB mode.

The second part is that Apple needs their implementation of DFU to be capable of unbricking a Mac (or iOS device) which has had all of its writeable storage (both SSD flash and firmware flash) erased or corrupted.

So, Apple puts their DFU firmware in the only incorruptible storage on the platform: the stage 0 boot ROM. This is a mask ROM baked into the SoC itself; it can't be altered without physically damaging the SoC. Because stage 0 is a key component of Apple's platform security (it's literally the "root of trust"), it must be as close to 100% correct as Apple can possibly make it. There also aren't many bytes to play with; mask ROM on the SoC itself is an extremely expensive storage medium. And since it's truly read-only, Apple can never fix bugs in fielded systems.

For all these reasons, Apple keeps stage 0 brutally simple. It knows how to do two things: chain to a more complex stage 1 bootloader stored in a firmware flash chip on the motherboard (note: not the SSD), or make one specific USB port act as a DFU device and boot a program supplied over USB.

Stage 0 can't deal with initializing things as complex as a Thunderbolt cable with active signal conditioning chips in the cable heads. It needs a dumb USB cable that's just plain wires.

This is also why, when you put a Mac in DFU mode, there's no UI to speak of. Gone are the days of the floating FireWire logo on a Mac in Target Disk Mode. Initializing something as complex as a display just isn't a thing Apple's stage 0 ROM can do.

BobHoward
Feb 13, 2012

Binary Badger posted:

So where is the stage 0 boot ROM on an Intel Mac? Just wondering.

Nominally, Intel x86 CPUs start up just like an 80386 from the 1980s: they assume there's firmware stored in external non-volatile memory, and that at least the last 16 bytes of the 386's 32-bit address space are mapped to it. The CPU fetches its first instruction from physical address 0xFFFFFFF0 (the "reset vector", 16 bytes before the end of that address space), and this instruction will almost always be an unconditional jump to another location inside the firmware.

On a modern Intel Mac (except T2 Macs), the reset vector and the rest of the UEFI firmware should reside in a motherboard flash chip which is directly addressable by the x86 CPU.

Under the hood, there's a bit more going on before an x86 chip actually jumps to its reset vector, because Intel has attempted to retrofit security into this nearly 40-year-old design.

https://mjg59.dreamwidth.org/66109.html

As Apple got more serious about security, they wanted to bring iOS-quality boot security to the Mac, and that's why I said "except T2 Macs". On T2 machines, the UEFI image lives in flash attached to the T2, and the T2 validates it and provides it to the x86 CPU. Apple also did a lot of customization in their UEFI implementation to improve boot security beyond the norm in the Windows PC world.

https://www.youtube.com/watch?v=3byNNUReyvE

BobHoward
Feb 13, 2012

jaegerx posted:

Brendan Gregg has an account here?

IDK who that is

BobHoward
Feb 13, 2012

Spotlight works a hell of a lot better if you turn off some of the search categories in its settings to de-pollute the results. "Siri Suggestions" and "Websites" are particularly bad offenders. (It's not perfect even then, though. You may still be frustrated with it and want to use a different tool.)

BobHoward
Feb 13, 2012

SRQ posted:

Am I missing something, or is HDR supposed to be a useless "make everything look washed out" button on macOS? It doesn't look at all like this on Windows, and I am frankly baffled as to what the point is supposed to be.

A brief Google look seems to lean towards: yes, Apple has been shipping "HDR support" in macOS for years, and it's completely broken and useless?

Apple's HDR support is great on an HDR display, like the built-in displays on the 14" and 16" retina MBPs. If you aren't using an HDR display that Apple's HDR software knows how to utilize, you get some kind of HDR-on-SDR emulation, and that might be what you're seeing.

Otherwise, what Perplx said.

BobHoward
Feb 13, 2012

Splinter posted:

Why is my macintosh operating system telling me I can't downgrade from Ventura to Monterey?

Are you in the default Full Security mode? It has rollback protection, meaning it won't let you downgrade to older versions that are no longer considered trusted by Apple (typically because they have security vulnerabilities addressed by later releases).

You can change to Reduced Security mode (which will let you downgrade to any macOS release that supports your hardware) by following the directions here.

BobHoward
Feb 13, 2012

Data Graham posted:

It feels like the issues with Siri are at the "parsing human language"/"understand intent" level, and I accept that it could be my fault for not talking enough like a computer, but it's a whole nother issue that I feel really dumb giving voice commands and I keep having to reword things and misspeak and try to start over and get all tangled up in knots trying to make it time out so I can try again and not crash at the same time. Star Trek this isn't

None of these systems understand human intent in any way you can relate to as a human being. This is a crude and inexact analogy, but *GPT is a family of word arrangers built on picking the statistically best next word, where the notions of "best" and/or "most likely" are built by training on a huge corpus of source texts. Calling it "AI" is a mistake. They've trained in enough responsiveness that it can chatbot you into believing there's more there, but really there's no guiding intelligence. No mental model of the world. It's super-Eliza, not a nascent artificial intelligence. Sort of like a parrot that's been exposed to billions of bytes of things that it can regurgitate, but I'd actually credit a parrot with more self-awareness than *GPT.

eightysixed posted:

What did that guy just post? :stare:

A lot of really insightful stuff about the current AI hype cycle.

BobHoward
Feb 13, 2012

"com.apple.CrashReporterSupportHelper" is part of Apple's infrastructure to upload crash reports for their software (and third party software) to their servers. If it's dead, you shouldn't lose any sync functionality, because it's not part of sync.

For exactly that reason, I'm a little suspicious of the advice claiming that killing it should help with syncing. Are you sure the success wasn't just down to yanking the cord and plugging it back in?

I mean, I can imagine scenarios where killing the crash reporter helps - something along the lines of "sync program crashes, can't be restarted until the crash reporter exits, but the crash reporter hangs forever". It just seems odd that someone would tell you to kill the crash reporter; usually the fix for a sync problem is to kill the sync daemon itself (which should lead to launchd automatically restarting it).

BobHoward
Feb 13, 2012

No, the 4-second thing is loving terrible, because literally the only Mac application which does that is Chrome, and it comes across as "Google really, really does not want you to ever quit Chrome". Also, web browsers can be set up to minimize the impact of an accidental cmd-Q by re-opening the windows and tabs from the previous session, so there was never any need to introduce that horseshit.

BobHoward
Feb 13, 2012

The Grumbles posted:

In one of my early-career lovely jobs, which involved going from event to event giving presentations, I had a blazing argument with one of the directors because she caught me closing the lid of the work MacBook instead of fully shutting the computer down between uses. I remember explaining that MacBooks are like phones now and we really shouldn't ever need to switch them off, and she told me that everyone knows you need to always shut down your computer, and chewed me out in front of the whole crowd. This would have been like 2014

What a lovely micromanager. Also, Apple had been making notebooks that could sleep instead of shutting down since before Y2K, so LMAO at someone getting worked up about using sleep on a MacBook in 2014.

BobHoward
Feb 13, 2012

Dans Macabre posted:

[*] Can’t disable internal monitor (if you want to JUST use external monitor)

Not sure exactly what your use case is, but you absolutely can - just close the lid. You need an external mouse and keyboard, but that's usually what you want anyway when working entirely on an external monitor.

BobHoward
Feb 13, 2012

Twerk from Home posted:

What's the best virtualization tool for running x86 linux VMs on a modern Apple Silicon host? Ideally using Rosetta so that performance isn't dogshit?

There isn't a way to do precisely what you described. Rosetta doesn't provide the full-machine emulation you'd need to run an entire x86 VM; it's limited to running userspace x86 binaries on an arm64 OS.

What you can do is run an arm64 linux guest, and install Rosetta for Linux in that guest to enable it to run x86 binaries at high speed. As already suggested, UTM is a good place to start.

BobHoward
Feb 13, 2012

The Lord Bude posted:

That isn’t strictly true - Rosetta doesn’t do full machine emulation, but UTM does (and has for at least several years now).

This isn't a good "well, actually", because it doesn't contradict anything I said: OP ideally wanted Rosetta performance while running an x86 VM, and that's not possible.

BobHoward
Feb 13, 2012

ThermoPhysical posted:

Is the M3 Pro/Max 14" MacBook Pro a good desktop replacement machine? My roommate needs something to replace his M1 iMac: he wants portability, but also a desktop. He had the 16" M1 MBP, but it was too heavy for his backpack, so he got the iMac instead.

The M1 iMac has the base M1 chip, not the M1 Pro or M1 Max. If he's happy with the iMac's compute power and just wants it portable, the base-model 14" MBP with the non-Pro M3 chip will be significantly more powerful - just tell him to make sure he gets at least as much RAM as the iMac has.

The 24" M-series iMacs aren't high-end computers; they all have base M-series chips, not Pro/Max chips. Only the Airs are less powerful, and that's only because they lack a cooling fan.

BobHoward
Feb 13, 2012

eightysixed posted:

Please tell that to my 2017 Air, because that thing has a fan that might match a leaf blower.

Branch Nvidian posted:

The Apple Silicon Airs lack a fan. That and the Touch Bar were the only real differentiators between the 2020 M1 MacBook Pro and MacBook Air.

Yep, I meant the Apple Silicon Airs. I think the one and only fanless Intel Mac was the 12" MacBook ultra-ultralight from 2015 (?).

(whose fandom won't stop jonesing for an Apple Silicon version, because Intel processors were not a good match for going fanless)

BobHoward
Feb 13, 2012

frogbs posted:

I tried with the usual keyboard I plug in, which does have custom mappings, and with the keyboard on the MacBook. Same behavior in both cases. Maybe the custom mappings are somehow also in place for my laptop keyboard, even when the other keyboard is plugged in? Guess I'll check that next.

I recommend adding a brand new user account, logging into that, and using it to import the ancient iTunes database. Besides resolving all possible doubt about keyboard remapping (because none should be set up when logged into a newly created account), you won't have to worry about loving over your modern iTunes library while you go to town doing any risky thing you can imagine to force the old one to import.

Oops, I meant "force a copy of the old one to import" - don't do any of this with your only copy of the data

BobHoward
Feb 13, 2012

Warbird posted:

How does the logic of what has "focus" of the media controls work? Is there some way to change what has that focus? Occasionally I'll have a youtube video playing and try to pause via F8 but that input is intercepted by something else that hasn't been messed with in hours.

I can't tell you what makes it decide to sometimes change focus, but when there are multiple media items the system can play, you should get a menu bar widget that looks like a play button with a circle around it. Click it and you'll see a list of all currently playable media, with pause/play/fwd/back buttons. Clicking on things there seems to set focus for the keyboard media controls.
