|
v1ld posted:This is a neat way of maintaining dotfiles in git without any scripts, symlinks and, most importantly, no .git folder in your home directory: https://news.ycombinator.com/item?id=11071754 Is the main idea for using git on your dotfiles to have versioning so you can gently caress around with setups, or just a convenient way to store & pull config from the cloud to a new machine? I dunno, I have a hard time seeing the superiority of git over basic versioned backup. Are you really making commit comments to updated configs so you know what changes you made? If so, that's an impressive level of dedication! (Also hopefully nobody ITT is dumb enough to do this, but a story I saw a while ago was about github being a security problem because too many people were just backing up this poo poo from their home folder without considering things like bash history or whatnot.)
|
# ? Aug 21, 2022 16:08 |
|
|
I have some private GitHub repositories for my docker-compose.yaml files, along with some comments, because I know I would forget why I was doing what I was doing if I didn't.
|
# ? Aug 21, 2022 16:14 |
|
i use a complicated multi-repository home-manager setup, because i have brainworms e: and i do actually write comments and real commit messages for it, because i have brainworms
|
# ? Aug 21, 2022 16:58 |
|
Klyith posted:Is the main idea for using git on your dotfiles to have versioning so you can gently caress around with setups, or just a convenient way to store & pull config from the cloud to a new machine?

The latter - easy access from the cloud - is the primary motivation, but branching so as to be able to customize quickly for a specific host or OS yet still track it is another. With git it's a two-way flow too - you can hack away on all of your machines secure in the knowledge that you can safely merge your histories later. History as history is the least interesting; it has been useful only a very few times, and it's arguable whether it was needed even then for stuff as light as configs. Though I do share the same brainworms as Music Theory when it comes to commit messages. I'll even do --amends and rebases to clean up history - for an audience of one, me.

I just bootstrapped another Arch install on my laptop and it was trivial to set up the primary dotfiles - that was the motivation to look around for the state of the art here, actually. Back when I was actively using OS X and Windows as desk/laptops and Linux as a server, it was useful to be able to use all the same heavily customized dotfiles on all 3 environments (Cygwin on Windows) with overrides for each environment and, in some cases, based on hostname. So I wrote this perl script which was basically a glorified "ln -s .* ~" with some sanity checks and the ability to do OS/host-specific overrides.

But one major reason for doing it with a separate git-controlled dotfiles repo dir that was then symlinked to is that putting a git repo in your $HOME means you can't have any other git repos under your home dir. I do all my work in ~/work and personal stuff in ~/src so that's a no-no. With the --separate-git-dir idea, you get the best of both worlds - no symlinks needed, no .git in $HOME, but directly version-controlled files in $HOME.

It's also true that a lot of the stuff I had written for myself - primarily zsh hacks - is now available in standard packages (like the transient prompt below, which I'd hacked together for myself but seems to have become standard in various shells in the past couple years). I'm also not doing active cli stuff on many machines anymore. So my config files are a lot smaller and less system-specific, and a simple git repo with branches as needed for OS-specific stuff should be just fine.

Long-winded answer, but it's a neat little idea, and not having to use anything beyond git and one alias is elegant when you look at how many solutions people have come up with for the problem. Transient Prompt E: The gif is from Powerlevel10k's implementation, as the author came up with it independently and everyone else seems to have picked it up from there. I like it so far, using its Lean theme to reduce the bling.

I started over with a fresh .zshrc and am only bringing in the bare minimum of stuff from my old configs, just so I'm forced to look at all the cool new stuff that's out there. Likewise with the editor. Decided to scrap my Emacs configs that go back to '93(!) and switch to Neovim, which I'm mightily impressed with. Modern-day Emacs is far, far beyond my old configs, and if I'm going to start from scratch I might as well try something new. The Helix editor seems really interesting as a clean restart on a modal editor using all the knowledge that came before. It uses the core ideas from Kakoune too, which is another really cool look at how you'd do a modal editor today - its ideas on always-multi-select and obj/verb syntax are neat.

v1ld fucked around with this message at 18:52 on Aug 21, 2022 |
# ? Aug 21, 2022 18:42 |
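For reference, the trick from that HN link boils down to a bare repo plus one alias. A minimal sketch - the `cfg` name and paths are just examples, and the demo uses a temp dir as a stand-in for your real $HOME:

```shell
# Bare-repo dotfiles sketch: the git dir lives outside the work tree,
# so there's no .git directory in $HOME and no symlinks anywhere.
DEMO_HOME="$(mktemp -d)"                 # stand-in for $HOME in this demo
git init --bare "$DEMO_HOME/.cfg"
cfg() { git --git-dir="$DEMO_HOME/.cfg" --work-tree="$DEMO_HOME" "$@"; }
cfg config status.showUntrackedFiles no  # keep 'cfg status' from listing all of $HOME
echo 'export EDITOR=nvim' > "$DEMO_HOME/.zshrc"
cfg add "$DEMO_HOME/.zshrc"
cfg -c user.name=demo -c user.email=demo@example.com commit -m "track .zshrc"
```

In real use you'd replace `$DEMO_HOME` with `$HOME`, put the `cfg` function (or an alias) in your shell rc, and bootstrap a new machine with `git clone --bare <url> $HOME/.cfg` followed by a checkout.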
|
I'm thinking about adding a sound card to my gaming/media box to try to fix some sound quality issues with the built-in audio and my sound system. Any issues I need to be aware of with Linux and lower end boards by Asus, Creative, or Sedna? Currently running Pop!_OS but will probably have switched to Fedora by the time anything arrives. I'm getting tired of the way Pop!_OS keeps breaking the Nvidia driver installs for me this spring.
|
# ? Aug 21, 2022 19:54 |
|
Klyith posted:I dunno, I have a hard time seeing the superiority of git over basic versioned backup. Are you really making commit comments to updated configs so you know what changes you made? If so, that's an impressive level of dedication! Of course?
|
# ? Aug 21, 2022 20:18 |
|
Hexigrammus posted:I'm thinking about adding a sound card to my gaming/media box to try to fix some sound quality issues with the built in board and my sound system. Any issues I need to be aware of with Linux and lower end boards by Asus, Creative, or Sedna? If you just want to test, the cheap dongle style USB soundcards are probably the most likely to Just Work - they tend to be very interchangeable and show up as generic USB audio. The fancier the card is, the more likely it is to require a specific driver - though I bet most of those will also just work on a modern distro.
|
# ? Aug 21, 2022 20:23 |
|
I did an interview a long time ago where the 2 interviewers were offended that I put a comment in the solution to a problem that you were supposed to code up at home. Bad style, I was told. So I asked if they'd noticed the object in the problem wasn't a tree, though it superficially looked like one, and had sqrt(n) complexity to walk instead of log(n) as you might naively expect. The 2 interviewers hadn't realized this - and so maybe the comment that pointed this out was actually relevant and they should read it? Nope, bad style. It wasn't a tree because some nodes had 2 parents, not one. But it kinda looked like one when you wrote it like this: code:
v1ld fucked around with this message at 20:33 on Aug 21, 2022 |
# ? Aug 21, 2022 20:30 |
|
v1ld posted:I did an interview a long time agao where the 2 interviewers were offended that I put in a comment in the solution to a problem that you were supposed to code up at home. Bad style, I was told.
|
# ? Aug 21, 2022 20:36 |
|
Mr. Crow posted:Of course? Yeah I definitely get why comments are good but config files seemed to not need that much of a history. I may not remember what changes I did or in which order, but that's ok because I'd rarely want to revert anything I'd intentionally set up. OTOH I was kinda only thinking about user apps, not much more complicated stuff like docker or whatnot. For that stuff I see how actual version control would be really useful. Computer viking posted:If you just want to test, the cheap dongle style USB soundcards are probably the most likely to Just Work - they tend to be very interchangeable and show up as generic USB audio. Definitely this - in particular look for a USB audio device that is UAC1/2 and you have something that is a completely standard audio interface with generic drivers and a dead-simple approach (dump PCM audio over the USB to a DAC). Not much to go wrong there. I can give a nod to Schiit USB DACs if you want something with audiophile quality that works with linux. (If an internal sound card is a must, I was using an Asus Xonar D-something, which is actually just a C-media chip with asus branding, before I got the Schiit. Worked fine in linux, I only ditched it to reclaim PCIe lanes.)
|
# ? Aug 21, 2022 20:46 |
|
I have a Fiio headphone amp in a drawer that also seems to work on absolutely everything and sounds good. Presumably some standard usb audio chip in front of a better output stage. Depending on what the problem is, you may also be able to get an SPDIF to (analog) line out converter; a lot of motherboards seem to include optical out. Won't fix any driver issues, but sometimes just getting a digital stream over a non-conductive cable is all you need to fix your sound quality issues. Computer viking fucked around with this message at 21:00 on Aug 21, 2022 |
# ? Aug 21, 2022 20:57 |
|
The Apple USB-C to 3.5 mm Headphone Jack Adapter is just $9 and has a DAC in it. Like Computer viking said, you can't go wrong with these if you're looking for USB -> analog conversion because your sound system doesn't speak USB. Since you're going into a sound system maybe you want to stay digital all the way and could use a USB -> PCM converter like this one instead? https://www.amazon.com/Cubilux-TOSLINK-Thunderbolt-Converter-Compatible/dp/B09QFYNB7Y A sound card or more expensive external DAC is good if you don't like the one in your current path, but if you have a sound system it probably has a good DAC in it already and staying digital all the way to it is a good idea.
|
# ? Aug 21, 2022 21:35 |
|
Klyith posted:Yeah I definitely get why comments are good but config files seemed to not need that much of a history. I may not remember what changes I did or in which order, but that's ok because I'd rarely want to revert anything I'd intentionally set up. I find it's helpful for taking notes when i change a bunch of config files at once for some reason, usually quirks - gives me a way to record what i did and why i did it that just looking at a config file in isolation might not tell me. I also keep systemd user units and scripts in my dotfiles though so /shrug.
|
# ? Aug 21, 2022 23:07 |
|
Mr. Crow posted:I also keep systemd user units and scripts in my dotfiles though so /shrug. This is a cool idea. I'm new to systemd and only discovered systemd's user units stuff yesterday when looking at how best to run kmonad. Structuring some of the login-session only stuff I want to run as systemd units is better than putting more complex tests in startup files to ensure uniqueness or guaranteed start for example.
|
# ? Aug 21, 2022 23:14 |
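As an illustration of the user-unit idea, a minimal login-session service looks something like this - the kmonad binary and config paths here are invented for the example; drop a real one in `~/.config/systemd/user/` and `systemctl --user enable --now` it:

```ini
# ~/.config/systemd/user/kmonad.service — hypothetical example
[Unit]
Description=kmonad keyboard remapper (example)

[Service]
# %h expands to the user's home directory in systemd units
ExecStart=%h/.local/bin/kmonad %h/.config/kmonad/config.kbd
Restart=on-failure

[Install]
WantedBy=default.target
</antml_parameter>```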
|
Computer viking posted:Depending on what the problem is, you may also be able to get an SPDIF to (analog) line out converter; a lot of motherboards seem to include optical out. Won't fix any driver issues, but sometimes just getting a digital stream over a non-conductive cable is all you need to fix your sound quality issues. v1ld posted:A sound card or more expensive external DAC is good if you don't like the one in your current path, but if you have a sound system it probably has a good DAC in it already and staying digital all the way to it is a good idea. Thanks, I think this is the approach I'll take. I think part of the problem is a ground loop so optical would be good. I've got one of the inexpensive USB converters on order now so we'll see how that goes. Klyith posted:I can give a nod to Schiit USB DACs if you want something with audiophile quality that works with linux. Oooh... Shiny!
|
# ? Aug 22, 2022 06:15 |
|
I will say that a separately powered optical to line out box did wonders when I was testing a tube headphone amp I made as a toy project. Computers are electrically noisy beasts, and 30cm of optical fibre is a great insulator.
|
# ? Aug 22, 2022 10:55 |
|
Computer viking posted:I will say that a separately powered optical to line out box did wonders when I was testing a tube headphone amp I made as a toy project. Computers are electrically noisy beasts, and 30cm of optical fibre is a great insulator. I was so happy to get a pair of decent (to me) monitors only to discover that the lack of balanced outputs from my crap Behringer mixer made the interference from the GPU extremely noticeable. Switched to a Focusrite Scarlett and balanced cables and the noise is gone.
|
# ? Aug 22, 2022 11:48 |
|
Could someone point me to a good resource that explains what LVM thin provisioning is, and how to use it? Someone mentioned it as a possible solution to a problem I have, and I can't find a clear explanation of the concept, only a bunch of "type these commands" tutorials.
|
# ? Aug 22, 2022 11:59 |
|
As a concept? It's just allocating more than you actually have, with the idea being that the users won't all fill out their allocation at once. If I have 10 TB of storage I can allocate 1 TB each to 20 people and only need to add more physical storage once they start getting close to the limit. In practice only one or two of them are going to use the full amount and everyone else will use very little.
|
# ? Aug 22, 2022 12:25 |
NihilCredo posted:Could someone point me to a good resource that explains what LVM thin provisioning is, and how to use it? It's the exact same idea behind ISP overprovisioning, where they hope that the customer won't use all their available bandwidth. I'm not convinced that the IT industry as a whole would be profitable if people used all the compute, storage and bandwidth that they pay for - but I suppose that's what comes of the commodification of compute et al. EDIT: I forgot to press post. orz BlankSystemDaemon fucked around with this message at 14:35 on Aug 22, 2022 |
|
# ? Aug 22, 2022 13:11 |
|
It's fundamentally very similar to the virtual memory subsystem, if you're familiar with that? At one end, you have a certain amount of actual, physical, disk. At the other end, you have partitions that claim to be of a certain size - the partition table says "this is 1 TB long", and when you try to read or write to any random position, it will work. In the middle, there is a translation layer that converts between physical addresses and logical addresses, probably in fairly large blocks. Whenever you try to read something at the upper layer, LVM looks up that address in the translation layer, and redirects to the right physical block. If the address is not in the table, it has never been written - and it's fine to pretend that it exists and is full of zeroes. When you write, LVM redirects it to a free chunk of physical disk, and adds an entry to the translation table. The obvious vulnerability here is that you can easily create thin disks that in sum claim to be larger than the physical disks. This works fine until you write too much data - and then all the virtual disks will seem to have free space but you still can't write to them. This is basically the same approach and the same failure mode as running VMs backed by thin provisioned files - qcow2 and the like are basically the same kind of translation table plus "physical" storage, just inside a normal file that starts small but can grow. Same vulnerabilities, too - nothing keeps you from creating a "this is 2TB" disk in a file that lives on a 128GB disk, and will work fine. Until it doesn't.
|
# ? Aug 22, 2022 13:30 |
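The thin-file idea is easy to poke at without touching LVM: a sparse file is the same trick in miniature, claiming a size the disk hasn't actually backed yet (this assumes GNU coreutils `truncate`/`stat` on a filesystem with sparse-file support):

```shell
# A sparse file "claims" 1 GiB but allocates almost nothing until written.
demo="$(mktemp)"
truncate -s 1G "$demo"
apparent=$(stat -c %s "$demo")                # size the file claims, in bytes
allocated=$(( $(stat -c %b "$demo") * 512 ))  # 512-byte blocks actually backed on disk
echo "claims ${apparent} bytes, actually uses ${allocated} bytes"
rm "$demo"
```

Writing real data into the file makes the allocated number grow toward the claimed one - exactly the thin-provisioning failure mode in one file: the claim is cheap, the backing is not.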
Computer viking posted:It's fundamentally very similar to the virtual memory subsystem, if you're familiar with that? Virtual memory is also not wildly useful without paging, which is much more complex and which I think is what you're trying to get at? Either way, it's hard to explain either paging or overprovisioning with analogy, and the analogies break down pretty easily. BlankSystemDaemon fucked around with this message at 14:37 on Aug 22, 2022 |
|
# ? Aug 22, 2022 14:33 |
|
BlankSystemDaemon posted:It's not quite the same, because the entire point of virtual memory is that you're not mapping to physical addresses, so that the various parts of each program stack can be stored relative to each other, and in the same vein it isn't related to physical vs logical addressing, as the first has to do with cylinder/head positioning on very old disks and the latter is about the number of individual blocks newer disks are divided into. Imagine a VM system where you want to provide each new userland process with its own memory map, and you're fine with overcommitting, but there is no swap. Sure, it'll fail in exciting ways when a process tries to write to a new page but there are no free pages to back it. Up to that point it should work, though. Now, imagine a thin provisioned storage system where you want each virtual disk to have its own address space and you're fine with overcommitting. Sure, it'll fail in exciting ways when something tries to write to a virtual block but there are no free physical blocks to back it. Up to that point it should work, though. As for paging, how many linux systems in the world are running with no swap and vm.overcommit_memory set to allow some/unlimited overcommits? That's conceptually not a mile away from the "a handful of 1TB thin images on a 128GB disk" situation. Also, I guess you could extend a thin provisioning system to demote less used blocks to slower storage to free up fast storage - but you can still deliver the data directly from the slow store, so it's not 1:1 with swap. It's not at all a perfect analogy, but it's also not entirely unrelated. Computer viking fucked around with this message at 16:06 on Aug 22, 2022 |
# ? Aug 22, 2022 16:02 |
|
Thanks for all the responses. At what level does this thin provisioning happen? I think it's lower-level than the filesystem, otherwise the response I got wouldn't really make sense. And I'm guessing (hoping) the performance impact is minimal? To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I want to assign to redundant data. I'm using Fedora so BTRFS was my first choice, but it can't (yet) set different redundancy options for different subvolumes. So I guess the greybeard who said "LVM thin provisioning sounds right for your case" meant that I should create for each drive two fake volumes, each equal to the drive's total capacity, then put half of them in a btrfs/zfs redundant pool and half in a non-redundant pool. Since it's for home use I don't have to worry about sudden usage spikes or anything.
|
# ? Aug 22, 2022 16:29 |
|
Someone please stop telling my sales department about overprovisioning. We are getting tired of backups failing because some chucklefuck allocated 120% of the datastore
|
# ? Aug 22, 2022 16:58 |
|
NihilCredo posted:Thanks for all the responses. Since you're (talking about) using LVM, why don't you just not allocate 100% of your volume group space up front and distribute it as needed down the road? Thin provisioning could work but it's not really any better in this situation and adds complexity, imo. E.g. assign all your disks to butts-vg, assign 25% to important-butts-lv and 25% to unimportant-butts-lv, then give more to each LV as needed and appropriate.
|
# ? Aug 22, 2022 17:25 |
|
NihilCredo posted:I think it's lower-level than the filesystem, otherwise the response I got didn't really make sense. Yes, LVM works on the block device layer, which is under the FS. Same as LUKS. NihilCredo posted:To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I wanted to assign to redundant data. I'm using Fedora so BTRFS was my first choice but it can't (yet) set different redundancy options to different subvolumes. So the immediate problem I see with this is that both FSes will try to spread data across drives evenly, so the smallest drive is gonna run out of space first if it's shared equally between the redundant and non-redundant FS. So you could use thin provisioning for flexibility, but even with that you're gonna need to put some thought into how data will be allocated. Just keep the basic principle in mind: redundant fills every drive equally; non-redundant tries to spread data but can put it wherever. For example, if you had radically different drive sizes like 4, 8, and 14TB, I would probably make the 4 redundant-only to avoid that problem.
|
# ? Aug 22, 2022 17:27 |
|
NihilCredo posted:To not beat further around the bush, my issue is that I want to use a few differently-sized HDDs to store both redundant (= important) and non-redundant (= not important) data, but I don't know in advance how much storage I wanted to assign to redundant data. I'm using Fedora so BTRFS was my first choice but it can't (yet) set different redundancy options to different subvolumes. I would do that using MDADM and LVM, the tools I'm familiar with. Two options. The simple method is to choose two similar sized disks and make a RAID-1 mirror of them. Leave the remaining disk(s) as JBOD. Create a LVM volume group and add the RAID-1 mirror to it. Then create a LVM logical volume for the important data, make it slightly larger than what you know is needed - it's trivial to expand it later. Now add the JBOD disk to the volume group and create a logical volume for the non-important data. The tricky part here is that you want this LV to only live on the JBOD disk, but also use all of it so you can't expand the "important" LV to that side by accident. I haven't tested it, but it might be possible to do that with the command 'lvcreate -l 100%FREE -n lv_junk myVG /dev/JBODdisk'. The other way is to run 'pvdisplay' and check the physical extent amount (Total PE) of JBODdisk and give that to the -l parameter. The extreme version of this is to divide all the drives into small (~100GB or more), equal sized partitions and combine them into different kinds of RAID arrays as suits your needs before adding them to different volume groups. This is actually what I have been doing for over 15 years and it has worked quite well. If you have 4 drives you could make a 4 partition RAID-1 for the really important stuff, RAID-5 for the importantish stuff, use single partitions for the non-important stuff, or even RAID-0 for scratch stuff. This system is also really flexible when swapping, adding or removing drives. 
I originally started with something like 80 GB drives and it has been growing ever since.
|
# ? Aug 22, 2022 17:33 |
One way to accomplish it would be to use GEOM or MDADM/LVM to create many smaller partitions on each individual disk, which you then stripe together, and create a mirror across the other disks for each set of striped partitions. It's effectively how Drobo accomplishes their expandable storage solution, but it's a horrible hack at best and a quick way to lose your data at worst. Also keep in mind the most important thing about RAID: The R doesn't mean redundancy in the sense most people assume, but rather reflects an attempt at data availability. Irrespective of anything else though, you need backups, and they'll need to be separate, programmatic and automated (meaning you can easily do them manually, not just rely on automation) - and in an ideal world they'll be pull rather than push based so that if you get cryptolockered, your backups won't be nuked (because cryptolockers and everything else has learned about most common backup methods).
|
|
# ? Aug 22, 2022 19:45 |
|
where beastie
|
# ? Aug 22, 2022 20:17 |
Methanar posted:where beastie For now
|
|
# ? Aug 22, 2022 22:18 |
|
So, weird thing I can't figure out: with samba sharing on my linux desktop, other clients now can't see or connect to the top level \\HOSTNAME list of shared folders. Dunno when this happened 'cause I haven't done it in a while, but I also haven't made any changes to my smb.conf in even longer. But it definitely worked at some point in the past. It fails on a windows PC, my android phone, and loopback. Different but non-specific error messages on each, natch! My google-fu is failing me because I don't know what that top-level \\HOST\ folder is called. ("Root" is all people trying to share / or allow a smb user to be root, for whatever godforsaken reason. And "top level" is too vague.) edit: sharing still works 100% otherwise, as long as I connect directly to \\HOSTNAME\Music or whatever. Klyith fucked around with this message at 02:40 on Aug 24, 2022 |
# ? Aug 24, 2022 02:38 |
|
Klyith posted:So, weird thing I can't figure out: the samba sharing on my linux desktop, other clients now can't see or connect to the top level \\HOSTNAME list of shared folders. the phrase you are looking for is "browse shares". you are currently unable to browse shares. https://unix.stackexchange.com/questions/665872/samba-shares-not-visible-in-network-neighborhood-windows-explorer check that out
|
# ? Aug 24, 2022 02:53 |
|
RFC2324 posted:the phrase you are looking for is "browse shares". you are currently unable to browse shares. Hmmm that's a much better keyword, thanks. Lots better results to dig through. But that specific answer isn't what's up -- that's about Windows auto putting a computer in Network Neighborhood. Couldn't care less about that, my issue is connecting to the computer and getting a list of shared folders. And it fails across other OSes.
|
# ? Aug 24, 2022 03:12 |
|
Klyith posted:Hmmm that's a much better keyword, thanks. Lots better results to dig through. I'm not the most technical user, but could your firewall rules have changed and are blocking samba?
|
# ? Aug 24, 2022 04:11 |
|
I had that issue in Gnome files but not on the terminal yesterday - my theory is that Files had somehow cached the broken state when the server was down for a bit, but only for the directories I had tried while it was down. Typing in any deeper path worked. Most likely not your problem, though.
|
# ? Aug 24, 2022 09:55 |
|
Just gonna point out again that the share browser uses a different mechanism to discover that info than the one used once you are in a share, so "i can browse files just fine" is a red herring other than telling you what does work.
|
# ? Aug 24, 2022 15:19 |
|
CaptainSarcastic posted:I'm not the most technical user, but could your firewall rules have changed and are blocking samba? Definitely not blocking samba as a whole, since everything is fine when I navigate directly to a subfolder. The reason I don't know when this started is because my main use for samba is getting files on the PC from my phone, and on the phone I have bookmarks that go directly to \\LINUXPC\Public\ or \\LINUXPC\Music\ in Cx File Explorer. Also smb sharing works normally with everything else on the network. The Linux PC can connect to the \\WINDOWSPC\ and \\PI-MUSICBOX\ top level shares, as can my phone. It's just sharing from the linux pc, and just that top-level \\LINUXPC\ quasi-folder, that's broken. Oh yeah, it still doesn't work using the IP address instead of the hostname. RFC2324 posted:Just gonna point out again that the share browser uses a different mechanism to discover that info than the one used once you are in a share, so "i can browse files just fine" is a red herring other than telling you what does work I have never had wsdd installed, and connecting to \\LINUXPC\ used to work just fine from a windows PC. Also it doesn't work from the android phone, or from the linux pc trying to connect to itself on smb://127.0.0.1/ (but again is fine with smb://127.0.0.1/Music/), so I think wsdd can be ruled out as the problem. Whatever mechanism makes the top-level share browser special, it has to be in samba itself rather than an add-on.
|
# ? Aug 24, 2022 15:35 |
|
probably I should just nuke all samba configs and re-create them following only the current instructions and docs on samba.org samba is apparently changing a bunch of stuff right now, even the arch wiki is out of date (still talks about server min protocol when that got removed a version ago). lots of internet how-tos are apparently wrong. but the samba.org docs suck for stuff that's not AD or domains, all their well-written basic manuals are marked deprecated
|
# ? Aug 24, 2022 16:07 |
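For what it's worth, a minimal standalone smb.conf is pretty short these days - something like the sketch below (the share name and path are invented for the example, and whether a clean config fixes the share-browsing issue is another question):

```ini
# /etc/samba/smb.conf — minimal standalone-server sketch
[global]
   server role = standalone server
   map to guest = Bad User

[Music]
   path = /srv/music          ; example path
   read only = yes
   guest ok = yes
```

Starting from something this small and adding options back one at a time makes it much easier to spot which setting breaks browsing.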
|
|
Welcome to Linux lol
|
# ? Aug 24, 2022 16:16 |