|
so this should be an easy one: I've got a tiny device running a branch of OpenWRT that I'm using as an airplay server. I've got everything working except I'd like it to auto-run the airplay server program on startup. When I SSH in all I need to do is type 'shairport sync' and it starts up, how do I get this to happen without my intervention?
|
# ? Dec 3, 2017 21:09 |
|
I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect: I'm just trying to completely wipe it and make a big FAT32 partition...
|
# ? Dec 3, 2017 21:41 |
|
mobby_6kl posted:I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect: This might be redundant, but if it is a full-size SD card did you check to make sure it doesn't have a physical switch to make it read-only that has been turned on? Aside from that I'd clear the flags on the hidden partition before deleting/formatting.
|
# ? Dec 3, 2017 21:48 |
|
mobby_6kl posted:I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect: Zero the disk out with code:
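(The code block in this post didn't survive the archive; it was presumably the classic single dd pass. A sketch — `/dev/sdX` is a placeholder for the card, and the demo below runs against a scratch image file so it's safe to copy-paste:)

```shell
# Scratch stand-in for the card so nothing real gets wiped; for the actual
# card you'd point of= at the device itself (check the name with lsblk first).
dd if=/dev/urandom of=/tmp/fakecard.img bs=1M count=4             # fake "card" full of junk
dd if=/dev/zero of=/tmp/fakecard.img bs=1M count=4 conv=notrunc   # the wipe itself
```

Against a real `/dev/sdX` you'd drop `count=` and let dd run until it hits the end of the device; `bs=4M status=progress` makes the wait bearable.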
|
# ? Dec 3, 2017 21:51 |
|
You are probably fine if you just zero out the first 512 bytes since that's where the partition data is stored. To be on the safe side, wipe a few kilobytes. Otherwise you're gonna wait forever for dd to zero out a 30GB SD card.
|
# ? Dec 4, 2017 02:05 |
|
mobby_6kl posted:I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect: You could just write a new partition table and then make your FAT32 partition in gparted. Even Windows disk management should be able to do this.
|
# ? Dec 4, 2017 09:27 |
|
Volguus posted:You are probably fine if you just zero out the first 512 bytes since that's where the partition data is stored. To be on the safe side, wipe few kilobytes. Otherwise you're gonna wait forever for dd to zero out a 30GB sdcard. Good point... Add code:
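(Horse Clocks' snippet was stripped too; the "just the first few kilobytes" version presumably looked something like this — again demoed on a scratch file, with `/dev/sdX` as the real target:)

```shell
dd if=/dev/urandom of=/tmp/card2.img bs=1M count=4               # fake card full of junk
dd if=/dev/zero of=/tmp/card2.img bs=512 count=8 conv=notrunc    # zero only the first 4 KB
```

Eight 512-byte sectors covers the MBR plus a comfortable margin, and finishes instantly instead of grinding through the whole card.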
|
# ? Dec 4, 2017 09:43 |
|
Eletriarnation posted:Yeah, I went back and tried both methods listed in the wiki (setting KVM to hidden and adding vendorID to Hyper-V extensions, as well as just disabling Hyper-V extensions entirely) and after each I'm still seeing Code 43 in Device Manager. I also tried some script from GitHub that modifies the drivers downloaded from Nvidia and claims that they won't trigger the issue if installed after modification, but it doesn't help either. just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox. *graphics and all, not connect to it via ssh or cli means
|
# ? Dec 4, 2017 09:52 |
|
mike12345 posted:just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox. In a month or two, yes. It's not ready yet, but some guy is working on a render-to-RAM driver for Windows: the guest dumps frames to RAM and then the host just dumps the contents of that to the host display.
|
# ? Dec 4, 2017 12:28 |
|
mike12345 posted:just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox.

You can set it up with the normal Spice configuration that's attached to a new VM by default in virt-manager and allows you to use the KVM console, and then once you have Windows installed and Remote Desktop enabled you can remove those components and just roll with Remote Desktop for access. I was able to find a script which disconnects me from Remote Desktop and opens a local session, which is required for Steam streaming since Steam can't unlock a locked screen for you and I'm running the box headless.

You also need something that makes your GPU think there's actually a monitor of the appropriate resolution attached, for which there are little HDMI/DP dongles available if you don't want to leave a monitor attached.

I don't think you're actually required to remove the Spice bits at any point as they're just another display controller and monitor logically, but I expect that there's a performance impact to leaving them in.

Eletriarnation fucked around with this message at 15:23 on Dec 4, 2017 |
# ? Dec 4, 2017 15:20 |
|
mike12345 posted:just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox. There is no reason not to just use RDP for this. Eletriarnation posted:Yeah, I checked into that and apparently at this point to get it to work with current drivers you have to lock out some Hyper-V extensions too that commenters say actually affect performance. gently caress that noise, the 1050 is still well within the return period and Amazon has an RX 460 for $85 so I'll switch teams. I can post my config if you want, which works fine with a 970 on Fedora 27. Performance is basically native.
|
# ? Dec 4, 2017 15:51 |
|
If it's no trouble, I'd be interested to see it. Right now I have the 460 installed and am trying to figure out why my VM keeps going into Paused (same as sleep, I assume?) state instead of booting up properly. I'll probably find something once I dig into the logs, but don't think I have the motivation to do it today.
|
# ? Dec 5, 2017 05:11 |
|
Eletriarnation posted:You can set it up with the normal Spice configuration that's attached to a new VM by default in virt-manager and allows you to use the KVM console, and then once you have Windows installed and Remote Desktop enabled you can remove those components and just roll with Remote Desktop for access. I was able to find a script which disconnects me from Remote Desktop and opens a local session, which is required for Steam streaming since Steam can't unlock a locked screen for you and I'm running the box headless. You also need something that makes your GPU think there's actually a monitor of the appropriate resolution attached, for which there are little HDMI/DP dongles available if you don't want to leave a monitor attached. I have one of these for that: https://www.megamac.com/products/newertech-hdmi-headless-video-accelerator-nwtadp4khead Runs 1920 x 1080, which is ideal for Steam Link. The problem I have with passthrough is that my Mac Pro does not like PCIe devices resetting themselves, making KVM poo poo the bed it seems.
|
# ? Dec 5, 2017 07:42 |
|
Horse Clocks posted:Good point... code:
Finally I also tried doing it in Windows with the special SD Card Formatter tool, which failed at 92%. This was in the Windows log: The IO operation at logical block address 0x2493 for Disk 3 (PDO name: \Device\000002cc) was retried. So yeah, I guess the card is just hosed physically. It's just strange that it never had any issues until this exact moment, but I suppose it could be a coincidence.
|
# ? Dec 5, 2017 21:18 |
|
SD cards are notoriously lovely. They can die if you look at them funny.
|
# ? Dec 5, 2017 22:59 |
|
It can also be that the blocks were bad from before, and it only actually poo poo the bed when you created the recovery system, wrote something to the new partition, etc. They are better than floppies, but that isn't saying much.
|
# ? Dec 5, 2017 23:16 |
fletcher posted:I've got two machines running Ubuntu 16.04 and Logwatch 7.4.2, both with identical logwatch.conf files that have MailTo and MailFrom set appropriately. Both also have identical postfix configs to relay mail to Amazon SES. Bumping this one...any ideas?
|
|
# ? Dec 5, 2017 23:17 |
|
fletcher posted:Bumping this one...any ideas? I am not familiar with Logwatch, but surely they can't have the same config files since one works and one doesn't. root@ubuntu is just the default email address of a user in a *NIX system: user@host. So one instance takes the email address of the user it is running under, while the other doesn't. I'd re-check the conf files.
|
# ? Dec 5, 2017 23:20 |
|
Didn't run newaliases on that one?
|
# ? Dec 5, 2017 23:25 |
|
Double Punctuation posted:SD cards are notoriously lovely. They can die if you look at them funny. Always buy SanDisk; everyone else has poo poo reviews.
|
# ? Dec 6, 2017 00:39 |
Volguus posted:I am not familiar with Logwatch, but surely they can't have the same config files since one works and one doesn't. root@ubuntu is just the default email address of a user in a *NIX system: user@host. I was thinking the same but I've triple checked them, and they are identical anthonypants posted:Didn't run newaliases on that one? /etc/aliases is the same for both as well: code:
|
|
# ? Dec 6, 2017 09:03 |
|
What's your /etc/hosts value for your edge-facing IP address + output from "postconf myhostname"? If it isn't spitting out a FQDN, check that mydomain is properly set. Otherwise does sending an email via "mail" work?
|
# ? Dec 6, 2017 16:11 |
|
I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it". The disk is pretty small, so I'd prefer to image it nightly rather than worrying about which folders I need to back up. Is there a simple tool that will take an image of a running system and back it up to a network share? tl;dr: is there a Macrium Reflect or DriveImage XML for Linux?
|
# ? Dec 6, 2017 18:38 |
|
NihilCredo posted:I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it". I think LVM snapshots can do in Linux what VSS does for Windows, but I don't know of any software that wraps this all up neatly. An alternative would be getting some configuration management set up that could restore all the software you use, and then backup the data with more conventional means.
|
# ? Dec 6, 2017 19:06 |
NihilCredo posted:I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it". Gitlab has specific guidance for how to do backups: https://docs.gitlab.com/ee/raketasks/backup_restore.html
|
|
# ? Dec 6, 2017 20:06 |
|
thebigcow posted:I think LVM snapshots can do in Linux what VSS does for Windows, but I don't know of any software that wraps this all up neatly. Worst case scenario you could just shut down the VM and copy the image to backup with a script, if there is no other good (presumably free) option available. Never not roll your own
|
# ? Dec 6, 2017 20:52 |
|
It didn't sound like a VM
|
# ? Dec 6, 2017 21:22 |
|
thebigcow posted:It didn't sound like a VM my bad, you're right. i just assumed it was because putting things like that in a vm seems to be standard even for home labs nowadays
|
# ? Dec 6, 2017 21:32 |
|
NihilCredo posted:I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it". Don't know your exact setup, but for my Gitlab instance I run an LXC on ZFS and just snapshot the pool. Works a treat.
|
# ? Dec 6, 2017 21:53 |
|
Currently I'm running everything on that server as Docker containers. So yes, I could rsync the volume folders and have all my data, but in case of a screwup of some kind I'd still have to restore everything to the right place, re-start up the right images, etc. And if I run something outside of Docker, I'd need to remember to add the right extra stuff to the backup. Since the entire system is using a little over 100GB in disk space, I'd much prefer to image the whole thing and restore it with one click.
|
# ? Dec 7, 2017 00:31 |
|
NihilCredo posted:Currently I'm running everything on that server as Docker containers. So yes, I could rsync the volume folders and have all my data, but in case of a screwup of some kind I'd still have to restore everything to the right place, re-start up the right images, etc. And if I run something outside of Docker, I'd need to remember to add the right extra stuff to the backup. That's why I decided to just snapshot the pool, I have all my VMs on it and some other containers as well. In case of a mess-up I just revert to the latest snapshot and all is well again. Snapshots are made automatically every 15 minutes by some script. You could go a step further and have all the containers in their own ZVOL or Dataset and do per dataset snapshots, this way you can easily revert a single container or VM. As you pointed out you might have dependencies on other containers/VMs so it might be handy to just snapshot the whole lot and treat it as logically being one system.
|
# ? Dec 7, 2017 08:45 |
|
thebigcow posted:I think LVM snapshots can do in Linux what VSS does for Windows, but I don't know of any software that wraps this all up neatly.

No, LVM snapshots alone are useless as backups. However, they can be a useful tool for avoiding downtime when making an actual backup.

LVM snapshots work at the block level, using chunks of 4 KB by default. It's an implementation of copy-on-write at the LVM level: once a snapshot is set up for a particular logical volume, the first time each chunk is written to in the original of the snapshot, the old data is first copied into the disk space allocated for the snapshot, and only then is the write operation allowed to complete. Any subsequent writes into the same chunk will proceed normally. As a result, the snapshot will appear as an alternate "view" into the original LV as it existed at the moment of snapshot creation.

However, an LVM snapshot is just a view, not a true copy: if the original LV is destroyed, the snapshot LV will contain only old versions of those chunks you have modified since the creation of the snapshot; any data that hasn't been changed is only stored on the original LV, and will be lost.

LVM snapshots can be useful when you have a large filesystem or database, and not enough downtime to make a backup of it. You'll just need enough extra disk space to cover the amount of changes expected to happen in the time the backup will take (+ some percentage extra, for safety). You get a short amount of downtime, during which you'll stop applications/put the database in backup mode/do the needful to ensure the filesystem/database is in a good state for backups, then create the snapshot and resume regular service. Now, you can mount the snapshot at some convenient location and let the backup take its sweet time on it, while the actual filesystem/database keeps receiving new data.

Once the backup is complete, you just delete the snapshot (no need to sync anything to the original, so it will be a quick and easy operation). Yes, you can allocate less disk space for the snapshot than its original has if you don't expect the original to receive too many changes during the time you'll need the snapshot. If your guesstimate is wrong and the snapshot space becomes full while the snapshot is still in use, the original LV keeps working just fine, while the snapshot LV disables itself. Then your backup operation will fail and you'll need to do it all over again...
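(A concrete sketch of that snapshot-mount-backup-delete cycle, written out as a script. The VG/LV names vg0/data, the 5G snapshot size, and the /backup target are placeholder assumptions; the block below only writes the script to /tmp for review, since lvcreate/lvremove need root and a real volume group.)

```shell
# Save the sketch to a file instead of executing it -- adjust names to your layout.
cat > /tmp/snap-backup.sh <<'EOF'
#!/bin/bash
set -e
# 1. Brief downtime: stop apps / put the database in backup mode here.
lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data
# 2. Resume service; back up from the frozen view at leisure.
mkdir -p /mnt/snap
mount -o ro /dev/vg0/data-snap /mnt/snap
rsync -a /mnt/snap/ /backup/data/
umount /mnt/snap
# 3. Discard the snapshot -- nothing needs syncing back.
lvremove -f /dev/vg0/data-snap
EOF
bash -n /tmp/snap-backup.sh    # syntax-check the sketch
```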
|
# ? Dec 7, 2017 12:48 |
|
telcoM posted:No, LVM snapshots alone are useless as backups. However, they can be a useful tool for avoiding downtime when making an actual backup. Almost exactly like the Volume Shadow Copy Service and your backup software of choice. But I don't know of any backup software for Linux designed for that.
|
# ? Dec 7, 2017 14:37 |
|
I use rsnapshot for that. It makes an LVM snapshot, copies everything that has changed using rsync (and makes links to anything that hasn't), and rotates the backups so I end up with unlimited weekly, 7 daily, and 24 hourly snapshots.
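(For reference, retention like that is set with retain lines in rsnapshot's config; a rough excerpt — the counts and paths here are illustrative, not the poster's actual config, and rsnapshot needs a finite count, so "unlimited" weeklies just means a big number:)

```
# /etc/rsnapshot.conf excerpt -- fields must be TAB-separated.
snapshot_root	/backup/rsnapshot/
retain	hourly	24
retain	daily	7
retain	weekly	520
backup	/	localhost/
```

If memory serves, rsnapshot also ships linux_lvm_* settings and an lvm://vg/lv/path backup source so it can handle the LVM snapshot step itself.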
|
# ? Dec 7, 2017 16:52 |
|
Mr Shiny Pants posted:That's why I decided to just snapshot the pool, I have all my VMs on it and some other containers as well. In case of a mess-up I just revert to the latest snapshot and all is well again. Snapshots are made automatically every 15 minutes by some script. Where do you copy the snapshots to?
|
# ? Dec 7, 2017 20:58 |
|
Anyone have some advice on how to handle mass converting a bunch of videos in nested folders with HandbrakeCLI? I'm squeezing a bunch onto a memory card for a tablet so I'm running them all through handbrake but am having issues with files that have spaces in them. I almost got it with this command using find: code:
|
# ? Dec 7, 2017 22:12 |
|
Ashex posted:Anyone have some advice on how to handle mass converting a bunch of videos in nested folders with HandbrakeCLI? I'm squeezing a bunch onto a memory card for a tablet so I'm running them all through handbrake but am having issues with files that have spaces in them. Totally unhelpful answer: Set up a kubernetes cluster with custom golang tools and dockerized Handbrake to automatically convert all your files http://carolynvanslyck.com/blog/2017/10/my-little-cluster/
|
# ? Dec 7, 2017 22:17 |
|
Ashex posted:Anyone have some advice on how to handle mass converting a bunch of videos in nested folders with HandbrakeCLI? I'm squeezing a bunch onto a memory card for a tablet so I'm running them all through handbrake but am having issues with files that have spaces in them. this may help https://stackoverflow.com/questions/19562785/handbrakecli-bash-script-convert-all-videos-in-a-folder
|
# ? Dec 7, 2017 22:19 |
|
Ashex posted:But the output filename always comes out as '.mp4'. code:
code:
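(Both code blocks in this exchange were lost in archiving. The classic cause of "output is always .mp4" is splicing {} into a quoted sh -c string, where the shell expands "${f%.*}" once instead of per file; the usual fix, sketched below with echo as a dry run and on a throwaway demo tree, is to pass the filenames as arguments instead:)

```shell
mkdir -p '/tmp/hb demo/nested dir'                  # demo tree with spaces in names
: > '/tmp/hb demo/nested dir/some episode.mkv'
# Pass filenames as positional arguments to sh -c instead of splicing {} into
# the command string, so "${f%.*}" is expanded per file, spaces and all.
find '/tmp/hb demo' -name '*.mkv' -exec sh -c '
    for f; do echo HandBrakeCLI -i "$f" -o "${f%.*}.mp4"; done
' _ {} + > /tmp/hb_fixed.txt
cat /tmp/hb_fixed.txt
```

Dropping the echo runs the real conversions; the HandBrakeCLI flags shown are the common ones, not necessarily the poster's.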
|
# ? Dec 7, 2017 22:55 |
|
|
There's a practical limit to how much you can do with find before you're spending all your time adjusting for quirks with subshells and substitutions. I usually pipe it into a while loop so I have a little more control.
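(The pipe-into-while pattern looks like this in bash — null-delimited, so spaces and even newlines in filenames survive, and you get a normal loop body to work with. Demoed on a throwaway tree with echo as a dry run:)

```shell
mkdir -p '/tmp/while demo/sub dir'
: > '/tmp/while demo/sub dir/clip one.mkv'
# -print0 plus read -d '' (a bash-ism) keeps each filename intact no matter
# what characters it contains.
find '/tmp/while demo' -type f -name '*.mkv' -print0 |
while IFS= read -r -d '' f; do
    echo HandBrakeCLI -i "$f" -o "${f%.*}.mp4"    # echo = dry run
done > /tmp/while_out.txt
cat /tmp/while_out.txt
```

One caveat: the while loop runs in a subshell here, so variables set inside it don't survive the pipe; if that matters, feed the loop with process substitution instead.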
|
# ? Dec 7, 2017 23:01 |