Klyith
Aug 3, 2007

GBS Pledge Week

v1ld posted:

This is a neat way of maintaining dotfiles in git without any scripts or symlinks and, most importantly, with no .git folder in your home directory: https://news.ycombinator.com/item?id=11071754

Works beautifully. Happy to throw away my old perl script that maintains symlinks from a git-controlled dotfiles directory and allows for os- and host-specific overrides and all that cruft. This is a pretty simple idea, and elegant too.

Is the main idea for using git on your dotfiles to have versioning so you can gently caress around with setups, or just a convenient way to store & pull config from the cloud to a new machine?

I dunno, I have a hard time seeing the superiority of git over basic versioned backup. Are you really making commit comments to updated configs so you know what changes you made? If so, that's an impressive level of dedication!

(Also hopefully nobody ITT is dumb enough to do this, but a story I saw a while ago was about github being a security problem because too many people were just backing up this poo poo from their home folder without considering things like bash history or whatnot.)

Kibner
Oct 21, 2008

Acguy Supremacy
I have some private GitHub repositories for my docker-compose.yaml files, along with some comments, because I know I would forget why I was doing what I was doing if I didn't.

Music Theory
Aug 7, 2013

Avatar by Garden Walker
i use a complicated multi-repository home-manager setup, because i have brainworms

e: and i do actually write comments and real commit messages for it, because i have brainworms

v1ld
Apr 16, 2012

Klyith posted:

Is the main idea for using git on your dotfiles to have versioning so you can gently caress around with setups, or just a convenient way to store & pull config from the cloud to a new machine?

I dunno, I have a hard time seeing the superiority of git over basic versioned backup. Are you really making commit comments to updated configs so you know what changes you made? If so, that's an impressive level of dedication!

(Also hopefully nobody ITT is dumb enough to do this, but a story I saw a while ago was about github being a security problem because too many people were just backing up this poo poo from their home folder without considering things like bash history or whatnot.)

The latter - easy access from the cloud - is the primary motivation, but branching so as to be able to customize quickly for a specific host or OS while still tracking it is another. With git it's a two-way flow too - you can hack away on all of your machines secure in the knowledge that you can safely merge your histories later.

History as history is the least interesting part; it has been useful only a very few times, and it's arguable whether it was needed even then for stuff as light as configs. Though I do share the same brainworms as Music Theory when it comes to commit messages. I'll even do --amends and rebases to clean up history - for an audience of one, me.

I just bootstrapped another Arch install on my laptop and it was trivial to set up the primary dotfiles - that was actually the motivation to look around for the state of the art here.

Back when I was actively using OS X and Windows as desk/laptops and Linux as a server, it was useful to be able to use all the same heavily customized dotfiles on all 3 environments (Cygwin on Windows) with overrides for each environment and in some cases, based on hostname. So I wrote this perl script which was basically a glorified "ln -s .* ~" with some sanity checks and the ability to do OS/host-specific overrides.

But one major reason for doing it by having a separate git-controlled dotfiles repo dir that was then symlinked to is that putting a git repo in your $HOME means you can't have any other git repos under your home dir. I do all my work in ~/work and personal stuff in ~/src so that's a no-no.

With the --separate-git-dir idea, you get the best of both worlds - no symlinks needed, no .git in $HOME, but direct version controlled files in $HOME.
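
For anyone curious, the core of it is tiny. A minimal sketch of the bare-repo variant from that HN link - the repo path and alias name are whatever you pick:

code:
  # keep the repo metadata outside $HOME; use $HOME itself as the work tree
  git init --bare "$HOME/.dotfiles"
  alias dots='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
  # don't list every file in $HOME as untracked
  dots config --local status.showUntrackedFiles no
  # from here it's just git as usual:
  dots add ~/.zshrc
  dots commit -m "zsh: turn on transient prompt"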


It's also true that a lot of the stuff I had written for myself - primarily zsh hacks - is now available in standard packages (like the transient prompt below, which I'd hacked together for myself but which seems to have become standard in various shells in the past couple of years). I'm also not doing active CLI stuff on many machines anymore. So my config files are a lot smaller and less system-specific, and a simple git repo with branches as needed for OS-specific stuff should be just fine.

Long-winded answer, but it's a neat little idea and not having to use anything beyond git and one alias is elegant when you look at how many solutions people have come up with for the problem.

Transient Prompt


E: The gif is from Powerlevel10k's implementation, as the author came up with it independently and everyone else seems to have picked it up from there. I like it so far, using its Lean theme to reduce the bling.
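
(If anyone else wants it on Powerlevel10k, it's a single option in the generated config - values are off, always, or same-dir, per its docs:)

code:
  # in ~/.p10k.zsh
  typeset -g POWERLEVEL9K_TRANSIENT_PROMPT=always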

I started over with a fresh .zshrc and am only bringing in the bare minimum of stuff from my old configs just so I'm forced to look at all the cool new stuff that's out there.

Likewise with the editor. Decided to scrap my Emacs configs that go back to '93(!) and switch to Neovim, which I'm mightily impressed with. Modern-day Emacs is far, far beyond my old configs, and if I'm going to start from scratch I might as well try something new.

The Helix editor seems really interesting as a clean restart on a modal editor using all the knowledge that came before. It uses the core ideas from Kakoune too, which is another really cool look at how you'd do a modal editor today - its ideas on always-on multi-select and object/verb syntax are neat.

v1ld fucked around with this message at 18:52 on Aug 21, 2022

Hexigrammus
May 22, 2006

Cheech Wizard stories are clean, wholesome, reflective truths that go great with the marijuana munchies and a blow job.
I'm thinking about adding a sound card to my gaming/media box to try to fix some sound quality issues with the built-in audio and my sound system. Any issues I need to be aware of with Linux and lower-end cards by Asus, Creative, or Sedna?

Currently running Pop!_OS but will probably have switched to Fedora by the time anything arrives. I'm getting tired of the way Pop!_OS keeps breaking the Nvidia driver installs for me this spring.

Mr. Crow
May 22, 2008

Snap City mayor for life

Klyith posted:

I dunno, I have a hard time seeing the superiority of git over basic versioned backup. Are you really making commit comments to updated configs so you know what changes you made? If so, that's an impressive level of dedication!

Of course?

Computer viking
May 30, 2011
Now with less breakage.

Hexigrammus posted:

I'm thinking about adding a sound card to my gaming/media box to try to fix some sound quality issues with the built-in audio and my sound system. Any issues I need to be aware of with Linux and lower-end cards by Asus, Creative, or Sedna?

Currently running Pop!_OS but will probably have switched to Fedora by the time anything arrives. I'm getting tired of the way Pop!_OS keeps breaking the Nvidia driver installs for me this spring.

If you just want to test, the cheap dongle-style USB soundcards are probably the most likely to Just Work - they tend to be very interchangeable and show up as generic USB audio. The fancier the card is, the more likely it is to require a specific driver - though I bet most of those will also just work on a modern distro.

v1ld
Apr 16, 2012

I did an interview a long time ago where the 2 interviewers were offended that I put a comment in the solution to a problem that you were supposed to code up at home. Bad style, I was told.

So I asked if they'd noticed the object in the problem wasn't a tree, though it superficially looked like one, and had sqrt(n) complexity to walk instead of log(n) as you might naively expect. The 2 interviewers hadn't realized this - and so maybe the comment that pointed this out was actually relevant and they should read it?

Nope, bad style.

It wasn't a tree because some nodes had 2 parents, not one. But it kinda looked like one when you wrote it like this:

code:
         1
        2 3
       4 5 6
      7 8 9 0
So 5 has both 2 & 3 as parents, 9 has both 5 & 6, etc - not a tree, but a general graph with depth O(sqrt(total nodes)): row d holds d nodes, so d rows contain d(d+1)/2 nodes in total, which works out to depth ~sqrt(2n) for n nodes. They didn't like the comment but didn't understand their own problem.

v1ld fucked around with this message at 20:33 on Aug 21, 2022

Yaoi Gagarin
Feb 20, 2014

v1ld posted:

I did an interview a long time ago where the 2 interviewers were offended that I put a comment in the solution to a problem that you were supposed to code up at home. Bad style, I was told.

So I asked if they'd noticed the object in the problem wasn't a tree, though it superficially looked like one, and had sqrt(n) complexity to walk instead of log(n) as you might naively expect. The 2 interviewers hadn't realized this - and so maybe the comment that pointed this out was actually relevant and they should read it?

Nope, bad style.

It wasn't a tree because some nodes had 2 parents, not one. But it kinda looked like one when you wrote it like this:

code:
         1
        2 3
       4 5 6
      7 8 9 0
So 5 has both 2 & 3 as parents, 9 has both 5 & 6, etc - not a tree, but a general graph with depth O(sqrt(total nodes)). They didn't like the comment but didn't understand their own problem.
loving lol

Klyith
Aug 3, 2007

GBS Pledge Week

Mr. Crow posted:

Of course?



Yeah, I definitely get why comments are good, but config files never seemed to need that much history. I may not remember what changes I made or in which order, but that's OK because I'd rarely want to revert anything I'd intentionally set up.

OTOH I was kinda only thinking about user apps, not more complicated stuff like docker or whatnot. For that I can see how actual version control would be really useful.


Computer viking posted:

If you just want to test, the cheap dongle-style USB soundcards are probably the most likely to Just Work - they tend to be very interchangeable and show up as generic USB audio.

Definitely this. In particular, look for a USB audio device that is UAC1/2 and you have something that is a completely standard audio interface with generic drivers and a dead-simple approach (dump PCM audio over the USB to a DAC). Not much to go wrong there. I can give a nod to Schiit USB DACs if you want something with audiophile quality that works with linux.


(If an internal sound card is a must: I was using an Asus Xonar D-something, which is actually just a C-Media chip with Asus branding, before I got the Schiit. Worked fine in linux, I only ditched it to reclaim PCIe lanes.)
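
(Quick way to check what you've got after plugging one in - if the card binds to the generic USB-Audio class driver, you're in Just Works territory. Plain diagnostics, nothing distro-specific assumed:)

code:
  lsusb                   # is the dongle even enumerated?
  cat /proc/asound/cards  # ALSA's card list; class-compliant devices show up as USB-Audio
  aplay -l                # playback devices, for picking an output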

Computer viking
May 30, 2011
Now with less breakage.

I have a Fiio headphone amp in a drawer that also seems to work on absolutely everything and sounds good. Presumably there's some standard USB audio chip in front of a better output stage.

Depending on what the problem is, you may also be able to get an SPDIF to (analog) line out converter; a lot of motherboards seem to include optical out. Won't fix any driver issues, but sometimes just getting a digital stream over a non-conductive cable is all you need to fix your sound quality issues.

Computer viking fucked around with this message at 21:00 on Aug 21, 2022

v1ld
Apr 16, 2012

The Apple USB-C to 3.5 mm Headphone Jack Adapter is just $9 and has a DAC in it. Like Computer viking said, you can't go wrong with these if you're looking for USB -> analog conversion because your sound system doesn't speak USB.

Since you're going into a sound system, maybe you want to stay digital all the way and could use a USB -> optical (TOSLINK) converter like this one instead? https://www.amazon.com/Cubilux-TOSLINK-Thunderbolt-Converter-Compatible/dp/B09QFYNB7Y

A sound card or more expensive external DAC is good if you don't like the one in your current path, but if you have a sound system it probably has a good DAC in it already and staying digital all the way to it is a good idea.

Mr. Crow
May 22, 2008

Snap City mayor for life

Klyith posted:

Yeah, I definitely get why comments are good, but config files never seemed to need that much history. I may not remember what changes I made or in which order, but that's OK because I'd rarely want to revert anything I'd intentionally set up.

OTOH I was kinda only thinking about user apps, not more complicated stuff like docker or whatnot. For that I can see how actual version control would be really useful.

I find it's helpful for taking notes when I change a bunch of config files at once for some reason, usually quirks - it gives me a way to record what I did and why I did it that just looking at a config file in isolation might not tell me. I also keep systemd user units and scripts in my dotfiles though so /shrug.

v1ld
Apr 16, 2012

Mr. Crow posted:

I also keep systemd user units and scripts in my dotfiles though so /shrug.

This is a cool idea.

I'm new to systemd and only discovered its user units yesterday when looking at how best to run kmonad. Structuring some of the login-session-only stuff I want to run as systemd units is better than putting more complex tests in startup files to ensure uniqueness or guaranteed start, for example.
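
Something like this is what I'm trying - treat it as a sketch: the paths are my guesses, and kmonad needs access to /dev/input and /dev/uinput, which a plain user unit won't magically grant:

code:
  # ~/.config/systemd/user/kmonad.service
  [Unit]
  Description=kmonad keyboard remapper

  [Service]
  # %h expands to the user's home; adjust binary and config paths to taste
  ExecStart=%h/.local/bin/kmonad %h/.config/kmonad/laptop.kbd
  Restart=on-failure

  [Install]
  WantedBy=default.target
Then a systemctl --user enable --now kmonad.service ties it to your login session.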

Hexigrammus
May 22, 2006

Cheech Wizard stories are clean, wholesome, reflective truths that go great with the marijuana munchies and a blow job.

Computer viking posted:

Depending on what the problem is, you may also be able to get an SPDIF to (analog) line out converter; a lot of motherboards seem to include optical out. Won't fix any driver issues, but sometimes just getting a digital stream over a non-conductive cable is all you need to fix your sound quality issues.


v1ld posted:

A sound card or more expensive external DAC is good if you don't like the one in your current path, but if you have a sound system it probably has a good DAC in it already and staying digital all the way to it is a good idea.


Thanks, I think this is the approach I'll take. I think part of the problem is a ground loop so optical would be good. I've got one of the inexpensive USB converters on order now so we'll see how that goes.


Klyith posted:

I can give a nod to Schiit USB DACs if you want something with audiophile quality that works with linux.

Oooh... Shiny!

Computer viking
May 30, 2011
Now with less breakage.

I will say that a separately powered optical to line out box did wonders when I was testing a tube headphone amp I made as a toy project. Computers are electrically noisy beasts, and 30cm of optical fibre is a great insulator.

F4rt5
May 20, 2006

Computer viking posted:

I will say that a separately powered optical to line out box did wonders when I was testing a tube headphone amp I made as a toy project. Computers are electrically noisy beasts, and 30cm of optical fibre is a great insulator.

I was so happy to get a pair of decent (to me) monitors only to discover that the lack of balanced outputs from my crap Behringer mixer made the interference from the GPU extremely noticeable. Switched to a Focusrite Scarlett and balanced cables and the noise is gone.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Could someone point me to a good resource that explains what LVM thin provisioning is, and how to use it?

Someone mentioned it as a possible solution to a problem I have, and I can't find a clear explanation of the concept, only a bunch of "type these commands" tutorials.

Tesseraction
Apr 5, 2009

As a concept? It's just allocating more than you actually have, with the idea being that the users won't all use their full allocation at once.

If I have 10 TB of storage I can allocate 1 TB each to 20 people and only need to add more physical storage once they start getting close to the limit. In practice only one or two of them are going to use the full amount and everyone else will use very little.

BlankSystemDaemon
Mar 13, 2009



NihilCredo posted:

Could someone point me to a good resource that explains what LVM thin provisioning is, and how to use it?

Someone mentioned it as a possible solution to a problem I have, and I can't find a clear explanation of the concept, only a bunch of "type these commands" tutorials.
Thin provisioning is pretending that there's more storage than what's available, and hoping that whoever's paying for it doesn't catch on.
It's the exact same idea behind ISP overprovisioning, where they hope that the customer won't use all their available bandwidth.

I'm not convinced that the IT industry as a whole would be profitable if people used all the compute, storage and bandwidth that they pay for - but I suppose that's what comes of the commodification of compute et al.

EDIT: I forgot to press post. orz

BlankSystemDaemon fucked around with this message at 14:35 on Aug 22, 2022

Computer viking
May 30, 2011
Now with less breakage.

It's fundamentally very similar to the virtual memory subsystem, if you're familiar with that?

At one end, you have a certain amount of actual, physical, disk. At the other end, you have partitions that claim to be of a certain size - the partition table says "this is 1 TB long", and when you try to read or write to any random position, it will work.

In the middle, there is a translation layer that converts between physical addresses and logical addresses, probably in fairly large blocks. Whenever you try to read something at the upper layer, LVM looks up that address in the translation layer, and redirects to the right physical block. If the address is not in the table, it has never been written - and it's fine to pretend that it exists and is full of zeroes. When you write, LVM redirects it to a free chunk of physical disk, and adds an entry to the translation table.

The obvious vulnerability here is that you can easily create thin disks that in sum claim to be larger than the physical disks. This works fine until you write too much data - and then all the virtual disks will seem to have free space but you still can't write to them.

This is basically the same approach and the same failure mode as running VMs backed by thin-provisioned files - qcow2 and the like are basically the same kind of translation table plus "physical" storage, just inside a normal file that starts small but can grow. Same vulnerabilities, too - nothing keeps you from creating a "this is 2TB" disk in a file that lives on a 128GB disk, and it will work fine. Until it doesn't.
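
If you want to see that overcommit in the flesh, LVM will happily let you set it up - a sketch with made-up names:

code:
  lvcreate --type thin-pool -L 100G -n pool0 vg0   # 100G of real backing
  lvcreate --thin -V 1T -n thin1 vg0/pool0         # claims a full terabyte
  lvcreate --thin -V 1T -n thin2 vg0/pool0         # ...and so does this one
  lvs vg0                                          # the pool's Data% is the number to watch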

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

It's fundamentally very similar to the virtual memory subsystem, if you're familiar with that?

At one end, you have a certain amount of actual, physical, disk. At the other end, you have partitions that claim to be of a certain size - the partition table says "this is 1 TB long", and when you try to read or write to any random position, it will work.

In the middle, there is a translation layer that converts between physical addresses and logical addresses, probably in fairly large blocks. Whenever you try to read something at the upper layer, LVM looks up that address in the translation layer, and redirects to the right physical block. If the address is not in the table, it has never been written - and it's fine to pretend that it exists and is full of zeroes. When you write, LVM redirects it to a free chunk of physical disk, and adds an entry to the translation table.

The obvious vulnerability here is that you can easily create thin disks that in sum claim to be larger than the physical disks. This works fine until you write too much data - and then all the virtual disks will seem to have free space but you still can't write to them.

This is basically the same approach and the same failure mode as running VMs backed by thin-provisioned files - qcow2 and the like are basically the same kind of translation table plus "physical" storage, just inside a normal file that starts small but can grow. Same vulnerabilities, too - nothing keeps you from creating a "this is 2TB" disk in a file that lives on a 128GB disk, and it will work fine. Until it doesn't.
It's not quite the same, because the entire point of virtual memory is that you're not mapping to physical addresses, so that the various parts of each program's stack can be stored relative to each other. In the same vein, it isn't related to physical vs logical addressing: the first has to do with cylinder/head positioning on very old disks, and the latter is about the number of individual blocks newer disks are divided into.

Virtual memory is also not wildly useful without paging, which is much more complex and which I think is what you're trying to get at? Either way, it's hard to explain either paging or overprovisioning by analogy, and the analogies break down pretty easily.

BlankSystemDaemon fucked around with this message at 14:37 on Aug 22, 2022

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

It's not quite the same, because the entire point of virtual memory is that you're not mapping to physical addresses, so that the various parts of each program's stack can be stored relative to each other. In the same vein, it isn't related to physical vs logical addressing: the first has to do with cylinder/head positioning on very old disks, and the latter is about the number of individual blocks newer disks are divided into.

Virtual memory is also not wildly useful without paging, which is much more complex and which I think is what you're trying to get at? Either way, it's hard to explain either paging or overprovisioning by analogy, and the analogies break down pretty easily.

Imagine a VM system where you want to provide each new userland process with its own memory map, and you're fine with overcommitting, but there is no swap. Sure, it'll fail in exciting ways when a process tries to write to a new page but there are no free pages to back it. Up to that point it should work, though.

Now, imagine a thin provisioned storage system where you want each virtual disk to have its own address space and you're fine with overcomitting. Sure, it'll fail in exciting ways when something tries to write to a virtual block but there are no free physical blocks to back it. Up to that point it should work, though.

As for paging, how many linux systems in the world are running with no swap and vm.overcommit_memory set to allow some/unlimited overcommits? That's conceptually not a mile away from the "a handful of 1TB thin images on a 128GB disk" situation.
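
(For reference, the modes on that knob, straight from the kernel docs - 0 is heuristic overcommit, 1 is always allow, 2 is strict accounting:)

code:
  sysctl vm.overcommit_memory        # read the current mode
  sysctl -w vm.overcommit_memory=1   # always allow; drop it in /etc/sysctl.d/ to persist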

Also, I guess you could extend a thin-provisioning system to demote less-used blocks to slower storage to free up the fast storage - but you can still deliver the data directly from the slow store, so it's not 1:1 with swap. It's not at all a perfect analogy, but it's also not entirely unrelated. :)

Computer viking fucked around with this message at 16:06 on Aug 22, 2022

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Thanks for all the responses.

At what level does this thin provisioning happen? I think it's lower-level than the filesystem, otherwise the responses I got wouldn't really make sense. And I'm guessing (hoping) the performance impact is minimal?

To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I want to assign to redundant data. I'm using Fedora, so BTRFS was my first choice, but it can't (yet) set different redundancy options for different subvolumes.

So I guess the greybeard who said "LVM thin provisioning sounds right for your case" meant that I should create two fake volumes per drive, each equal to the drive's total capacity, then put half of them in a btrfs/zfs redundant pool and half in a non-redundant pool. Since it's for home use I don't have to worry about sudden usage spikes or anything.

RFC2324
Jun 7, 2012

http 418

Someone please stop telling my sales department about overprovisioning. We are getting tired of backups failing because some chucklefuck allocated 120% of the datastore

Mr. Crow
May 22, 2008

Snap City mayor for life

NihilCredo posted:

Thanks for all the responses.

At what level does this thin provisioning happen? I think it's lower-level than the filesystem, otherwise the responses I got wouldn't really make sense. And I'm guessing (hoping) the performance impact is minimal?

To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I want to assign to redundant data. I'm using Fedora, so BTRFS was my first choice, but it can't (yet) set different redundancy options for different subvolumes.

So I guess the greybeard who said "LVM thin provisioning sounds right for your case" meant that I should create two fake volumes per drive, each equal to the drive's total capacity, then put half of them in a btrfs/zfs redundant pool and half in a non-redundant pool. Since it's for home use I don't have to worry about sudden usage spikes or anything.

Since you're (talking about) using LVM, why don't you just not allocate 100% of your volume group space up front, and distribute it as needed down the road? Thin provisioning could work but it's not really any better in this situation and adds complexity, imo.

E.g. assign all your disks to butts-vg, assign 25% to important-butts-lv and 25% to unimportant-butts-lv, then give more to each LV as needed and appropriate.
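
In lvcreate terms, something like this - disk names are placeholders, and remember to grow the filesystem after any lvextend:

code:
  pvcreate /dev/sda /dev/sdb /dev/sdc
  vgcreate butts-vg /dev/sda /dev/sdb /dev/sdc
  lvcreate -l 25%VG -n important-butts-lv butts-vg
  lvcreate -l 25%VG -n unimportant-butts-lv butts-vg
  # later, when one side needs more room:
  lvextend -l +10%VG butts-vg/important-butts-lv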

Klyith
Aug 3, 2007

GBS Pledge Week

NihilCredo posted:

I think it's lower-level than the filesystem, otherwise the responses I got wouldn't really make sense.

Yes, LVM works on the block device layer, which is under the FS. Same as LUKS.


NihilCredo posted:

To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I want to assign to redundant data. I'm using Fedora, so BTRFS was my first choice, but it can't (yet) set different redundancy options for different subvolumes.

So I guess the greybeard who said "LVM thin provisioning sounds right for your case" meant that I should create two fake volumes per drive, each equal to the drive's total capacity, then put half of them in a btrfs/zfs redundant pool and half in a non-redundant pool. Since it's for home use I don't have to worry about sudden usage spikes or anything.

So the immediate problem I see with this is that both FSes will try to spread data across drives evenly, so the smallest drive is gonna run out of space first if it's shared equally between the redundant and non-redundant FS.

So you could use thin provisioning for flexibility, but even with that you're gonna need to put some thought into how data will be allocated. The basic principle: redundant fills every drive equally, while non-redundant tries to spread data but can put it wherever.

Like for example, if you had radically different drive sizes - 4, 8, and 14TB - I would probably put the 4 as redundant-only to avoid that problem.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

NihilCredo posted:

To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I want to assign to redundant data. I'm using Fedora, so BTRFS was my first choice, but it can't (yet) set different redundancy options for different subvolumes.

I would do that using MDADM and LVM, the tools I'm familiar with. Two options. The simple method is to choose two similar-sized disks and make a RAID-1 mirror of them. Leave the remaining disk(s) as JBOD.

Create an LVM volume group and add the RAID-1 mirror to it. Then create an LVM logical volume for the important data; make it slightly larger than what you know is needed, since it's trivial to expand it later.

Now add the JBOD disk to the volume group and create a logical volume for the non-important data. The tricky part here is that you want this LV to live only on the JBOD disk, but also use all of it so you can't expand the "important" LV to that side by accident. I haven't tested it, but it might be possible to do that with the command 'lvcreate -l 100%FREE -n lv_junk myVG /dev/JBODdisk'. The other way is to run 'pvdisplay', check the physical extent count (Total PE) of the JBOD disk, and give that to the -l parameter.
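
Roughly the whole dance, with placeholder device names (and per the caveat above, the 100%FREE trick is untested - fall back to the Total PE count if it grabs too much):

code:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  vgcreate myVG /dev/md0
  lvcreate -L 500G -n lv_important myVG
  # bring in the JBOD disk and pin the junk LV to it:
  vgextend myVG /dev/sdc
  lvcreate -l 100%FREE -n lv_junk myVG /dev/sdc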


The extreme version of this is to divide all the drives into small (~100GB or more), equal-sized partitions and combine them into different kinds of RAID arrays as suits your needs before adding them to different volume groups. This is actually what I have been doing for over 15 years and it has worked quite well. If you have 4 drives you could make a 4-partition RAID-1 for the really important stuff, RAID-5 for the importantish stuff, use single partitions for the non-important stuff, or even RAID-0 for scratch stuff.

This system is also really flexible when swapping, adding or removing drives. I originally started with something like 80 GB drives and it has been growing ever since.

BlankSystemDaemon
Mar 13, 2009



One way to accomplish it would be to use GEOM or MDADM/LVM to create many smaller partitions on each individual disk, which you then stripe together, and create a mirror across the other disks for each set of striped partitions.
It's effectively how Drobo accomplishes their expandable storage solution, but it's a horrible hack at best and a quick way to lose your data at worst.

Also keep in mind the most important thing about RAID: The R doesn't mean redundancy in the sense most people assume, but rather reflects an attempt at data availability.
Irrespective of anything else though, you need backups, and they'll need to be separate, programmatic, and automated (meaning you can easily run them manually too, not just rely on automation) - and in an ideal world they'll be pull- rather than push-based, so that if you get cryptolockered your backups won't be nuked (because cryptolockers and everything else have learned about the most common backup methods).

Methanar
Sep 26, 2013

by the sex ghost

where beastie

BlankSystemDaemon
Mar 13, 2009



Methanar posted:

where beastie
Alpha is better. :colbert:
For now

Klyith
Aug 3, 2007

GBS Pledge Week
So, weird thing I can't figure out with the samba sharing on my linux desktop: other clients now can't see or connect to the top-level \\HOSTNAME list of shared folders.

Dunno when this happened 'cause I haven't done it in a while, but I also haven't made any changes to my smb.conf in even longer. But it definitely worked at some point in the past. It fails on a windows PC, my android phone, and loopback. Different but non-specific error messages on each, natch!


My google-fu is failing me because I don't know what that top-level \\HOST\ folder is called. ("Root" turns up people trying to share / or allow an smb user to be root, for whatever godforsaken reason. And "top level" is too vague.)

edit: sharing still works 100% otherwise, as long as I connect directly to \\HOSTNAME\Music or whatever.

Klyith fucked around with this message at 02:40 on Aug 24, 2022

RFC2324
Jun 7, 2012

http 418

Klyith posted:

So, weird thing I can't figure out with the samba sharing on my linux desktop: other clients now can't see or connect to the top-level \\HOSTNAME list of shared folders.

Dunno when this happened 'cause I haven't done it in a while, but I also haven't made any changes to my smb.conf in even longer. But it definitely worked at some point in the past. It fails on a windows PC, my android phone, and loopback. Different but non-specific error messages on each, natch!


My google-fu is failing me because I don't know what that top-level \\HOST\ folder is called. ("Root" turns up people trying to share / or allow an smb user to be root, for whatever godforsaken reason. And "top level" is too vague.)

edit: sharing still works 100% otherwise, as long as I connect directly to \\HOSTNAME\Music or whatever.

the phrase you are looking for is "browse shares". you are currently unable to browse shares.

https://unix.stackexchange.com/questions/665872/samba-shares-not-visible-in-network-neighborhood-windows-explorer

check that out

Klyith
Aug 3, 2007

GBS Pledge Week

RFC2324 posted:

the phrase you are looking for is "browse shares". you are currently unable to browse shares.

https://unix.stackexchange.com/questions/665872/samba-shares-not-visible-in-network-neighborhood-windows-explorer

check that out

Hmmm that's a much better keyword, thanks. Lots better results to dig through.

But that specific answer isn't what's up -- that's about Windows automatically putting a computer in Network Neighborhood. Couldn't care less about that; my issue is connecting to the computer and getting a list of shared folders. And it fails across other OSes.

CaptainSarcastic
Jul 6, 2013



Klyith posted:

Hmmm that's a much better keyword, thanks. Lots better results to dig through.

But that specific answer isn't what's up -- that's about Windows automatically putting a computer in Network Neighborhood. Couldn't care less about that; my issue is connecting to the computer and getting a list of shared folders. And it fails across other OSes.

I'm not the most technical user, but could your firewall rules have changed and be blocking samba?

Computer viking
May 30, 2011
Now with less breakage.

I had that issue in GNOME Files but not in the terminal yesterday - my theory is that Files had somehow cached the broken state from when the server was down for a bit, but only for the directories I had tried while it was down. Typing in any deeper path worked.

Most likely not your problem, though.

RFC2324
Jun 7, 2012

http 418

Just gonna point out again that the share browser uses a different mechanism to discover that info than the one used once you are in a share, so "I can browse files just fine" is a red herring beyond telling you what does work.

Klyith
Aug 3, 2007

GBS Pledge Week

CaptainSarcastic posted:

I'm not the most technical user, but could your firewall rules have changed and be blocking samba?

Definitely not blocking samba as a whole, since everything is fine when I navigate directly to a subfolder. The reason I don't know when this started is that my main use for samba is getting files onto the PC from my phone, and on the phone I have bookmarks that go directly to \\LINUXPC\Public\ or \\LINUXPC\Music\ in Cx File Explorer.


Also, smb sharing works normally with everything else on the network. The Linux PC can connect to the \\WINDOWSPC\ and \\PI-MUSICBOX\ top-level shares, as can my phone. It's just sharing from the linux pc, and just that top-level \\LINUXPC\ quasi-folder, that's broken. Oh yeah, it also doesn't work using the IP address instead of the hostname.


RFC2324 posted:

Just gonna point out again that the share browser uses a different mechanism to discover that info than the one used once you are in a share, so "I can browse files just fine" is a red herring beyond telling you what does work.

I have never had wsdd installed, and connecting to \\LINUXPC\ used to work just fine from a windows PC. Also it doesn't work from the android phone, or the linux pc trying to connect to itself on smb://127.0.0.1/ (but again is fine with smb://127.0.0.1/Music/), so I think wsdd can be ruled out as the problem.

Whatever mechanism makes the top-level share browser special, it has to be in samba itself rather than an add-on.
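
(Things worth poking at from the linux side - smbclient asks smbd directly for the share list, which takes every client quirk out of the equation. The username is a placeholder, and the journal unit name varies by distro:)

code:
  smbclient -L //127.0.0.1 -U someuser   # ask smbd itself to enumerate the shares
  testparm -s                            # dump the config samba actually loaded
  journalctl -u smb -e                   # recent smbd logs (unit may be smb, smbd, or samba)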

Klyith
Aug 3, 2007

GBS Pledge Week
probably I should just nuke all samba configs and re-create them following only the current instructions and docs on samba.org

samba is apparently changing a bunch of stuff right now, even the arch wiki is out of date (still talks about server min protocol when that got removed a version ago). lots of internet how-tos are apparently wrong.

but the samba.org docs suck for stuff that's not AD or domains, all their well-written basic manuals are marked deprecated

RFC2324
Jun 7, 2012

http 418

Welcome to Linux lol
