NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I would like some feedback with planning my (almost) fully self-hosted backup strategy.

I currently have 3 devices: a phone running Android (well, LineageOS), a Pi 4B, and a desktop computer (both running Linux).

A) The phone is where most of my personal data gets created nowadays - photos, videos, notes, memes, phone calls (it's legal to record them in my country). I want that data to be replicated ASAP, so I already have NextCloud running and syncing often.

I am not currently backing up apps and settings, but I would like it if it happened automatically when the phone is charging at night and connected to my home wifi; an acceptable alternative would be if it happened automatically when I connect my phone to the desktop computer via USB. There are many Android backup tools around, with very little difference in features, but NeoBackup seems to be the most active FOSS option - is that the case?

B) The Pi is my personal server. It is directly exposed to the internet (static IP), so it runs as little software as possible, just an SSH server and various docker services hosted behind a reverse proxy. One of these services is the NextCloud server to which the phone uploads my personal data. The data is saved on a LUKS-encrypted 4TB USB HDD connected to the Pi. There are other services which save their data on that HDD, notably my password manager and personal blog, so I would like to back it up nightly or so.

C) My desktop is mostly an entertainment device (games and home theater), so it's usually turned on for an hour or two a day. It has a lot more storage than the Pi (13TB in HDDs and 1.5 TB in SSDs), but I rarely create personal data on it, and I have the NextCloud client installed there as well for what little I do create (game saves, the occasional document or Photoshop file), so they get synced to the Pi. The games obviously don't need to be backed up, while backing up music and films can happen once a month or so.

I am currently using rsync to manually back up the Pi's data to a folder on the desktop's HDDs every now and then. The file transfer happens via SFTP over wifi, so it's not very fast, and rsync probably isn't optimized for this scenario (the Nextcloud folder is full of small files), but it's acceptable to leave the desktop running until the backup has finished.
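(The current job is basically just this, run from the desktop - the hostname and paths here are placeholders:)
code:
# pull the Pi's data drive into a plain folder on the desktop
rsync -av -e ssh pi@raspberrypi.local:/mnt/data/ /mnt/hdd/pi-backup/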

To recap, I would like to improve my backup as follows:

1) Automatically run a de-duplicating backup job from the Pi to the desktop every night when I shut down the computer. My understanding is that Borgmatic and Duplicati would be the best tools for this kind of job, but I'm not very clear on the pros and cons.

2) I would like to purchase the cheapest off-site backup possible as a protection against house fires, some 0-day ransomware somehow infecting everything, "oops I didn't mean to delete that" etc. Some cheap encrypted cloud backup like B2 would seem optimal here, but here's my reasoning:

- Paying something like $40/month to back up terabytes of media collection is absolutely not worth it
- However, paying a one-time $250 or so for a big-rear end USB HDD and storing it off-site (e.g. in my desk at work) sounds extremely worth it to me
- I could then also pay a few dollars a month to cloud-backup just my personal data (it's still the better part of a TB due to phone videos and such). However, the hassle of installing a second workflow and a second form of encryption doesn't seem worth it to me just to improve the reliability of my 4th copy of my personal data.

So I'm currently thinking of buying e.g. a regular WD Elements external HDD, bringing it home every couple of weeks, and having Borgmatic/Duplicati automatically sync to it upon connection. If it dies or gets lost or stolen, my media collection will be fine as long as my desktop HDDs don't die at the same time, and my other data will be fine as long as my desktop HDD and Pi HDD don't both die at the same time.
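The "sync upon connection" part could presumably be wired up with a udev rule plus a systemd oneshot - an untested sketch, where the UUID, unit name, and script path are all placeholders:
code:
# /etc/udev/rules.d/99-offsite-backup.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="PUT-DRIVE-UUID-HERE", TAG+="systemd", ENV{SYSTEMD_WANTS}+="offsite-backup.service"

# /etc/systemd/system/offsite-backup.service
[Unit]
Description=Back up to the offsite drive when it is plugged in

[Service]
Type=oneshot
ExecStart=/usr/local/bin/offsite-backup.sh
(where offsite-backup.sh would mount the drive, kick off Borgmatic/Duplicati, and unmount it again)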

An important caveat is that I'm not a de-Googling fanatic and I do occasionally upload the most critical real-life documents (education, tax, health) to Google Drive in an encrypted .zip, for extra safety. But I don't want to rely on Google.

3) Automatically back up my apps and settings from my phone when I connect it to the desktop. Last and least concern, but it would be nice to have.

Thoughts? Any obvious hole in the current or proposed setup?


Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

e.pilot posted:

unraid 6.10 is out and I’m resisting the urge to upgrade while I’m on the road because who knows what it’s going to break

It completely blasted my docker backend, but that's a pretty easy fix. Otherwise I'm running it fine.

Wibla
Feb 16, 2011

New box is up and running (sans drives for now), testing 10gbit performance between two machines using a DAC cable and woah :haw:

code:
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.05  sec  10.9 GBytes  9.35 Gbits/sec                  receiver
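(For reference, a test like that is just an iperf3 pair - something like this, the IP being a placeholder:)
code:
iperf3 -s            # on the receiving box
iperf3 -c 10.0.0.2   # on the sending box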
Got GPU passthrough running as well, confirmed that the P400 works in plex inside a VM.
code:
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1543      C   ...diaserver/Plex Transcoder      552MiB |
+-----------------------------------------------------------------------------+
Idle power draw with no drives is around 84W after I turned off all the dumb auto-overclocking stuff. It'll probably be around 150W after I add drives.

Klyith
Aug 3, 2007

GBS Pledge Week

NihilCredo posted:

Thoughts? Any obvious hole in the current or proposed setup?

My thoughts are

Off-site backup via external HDD is totally fine if:
• you're the type of person that says "I'm gonna do this every 2 weeks" and then actually does it
• you won't care that your backups are 2 weeks out of date if your house burns down


Duplicati: I checked out Duplicati a while back because it's a favorite in the backup thread. IMO it seemed like a great program if you were doing cloud backups, but over-complicated and kinda fragile for traditional local storage. The title of the backup thread is "test the restore procedure" and with backup systems that store everything as chunks of data-stew that becomes an actual job. Not a super hard one, but still some effort compared to a basic backup where you can see & test files directly on the backup drive.

Duplicati is also kinda fragile where a lost or corrupt config db can make your life miserable.

If incremental backup with good deduplication is a major pro for you I might look at borg before duplicati. But also I'd ask how much that even matters: it seems like you are not dealing with a ton of data, so is dedupe even a big concern?
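If you do go the borg route, the whole thing is a handful of commands - rough sketch only, repo location and source path are placeholders, and borg needs to be installed on both ends for an ssh:// repo:
code:
borg init --encryption=repokey ssh://desktop/mnt/hdd/borg-repo               # one-time setup
borg create --stats ssh://desktop/mnt/hdd/borg-repo::pi-{now} /mnt/data      # nightly, run from the Pi
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://desktop/mnt/hdd/borg-repo
borg mount ssh://desktop/mnt/hdd/borg-repo /tmp/restore                      # browse old archives to test restores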


NihilCredo posted:

I am currently using rsync to manually back up the Pi's data to a folder on the desktop's HDDs every now and then. The file transfer happens via SFTP over wifi, so it's not very fast, and rsync probably isn't optimized for this scenario (the Nextcloud folder is full of small files), but it's acceptable to leave the desktop running until the backup has finished.

One way to make this more automatic is to turn on WoL on your desktop, and then add a magic packet ping (plus ~15s delay) to the start of your backup script. Then you can set this to run on a schedule rather than manually, and happen in the middle of the night or whenever you don't care about how long it takes.
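Roughly this at the top of the script (MAC, host and paths are placeholders):
code:
#!/bin/sh
wakeonlan AA:BB:CC:DD:EE:FF     # or etherwake, whichever your distro ships
sleep 15                        # give the desktop time to wake up
rsync -av /mnt/data/ user@desktop:/mnt/hdd/pi-backup/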

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Klyith posted:

My thoughts are

Thanks for the insights! Also I didn't know there was a backup thread - it's not very active, but I'll keep it in mind in the future.

quote:

Duplicati: I checked out Duplicati a while back because it's a favorite in the backup thread. IMO it seemed like a great program if you were doing cloud backups, but over-complicated and kinda fragile for traditional local storage. The title of the backup thread is "test the restore procedure" and with backup systems that store everything as chunks of data-stew that becomes an actual job. Not a super hard one, but still some effort compared to a basic backup where you can see & test files directly on the backup drive.

Duplicati is also kinda fragile where a lost or corrupt config db can make your life miserable.

Ah, so even if a given file never changes, Duplicati won't store it as a regular file in the filesystem? I thought that, like, revision 0 of a backup with one of these tools would be just a regular folder copy, and then when a new backup happens, the 'history' of what changed and when got stored into a specific folder, like .git.

But apparently I was wrong, it's just a binary blob from the start?

quote:

If incremental backup with good deduplication is a major pro for you I might look at borg before duplicati. But also I'd ask how much that even matters: it seems like you are not dealing with a ton of data, so is dedupe even a big concern?

"Deduplication" interests me more as a QoL feature rather than a space saving measure.

For example, right now my photo / video folder is just one big pile of everything, separated by year/month. Suppose that on one dreadful winter night I sit down and reorganize it, or at least extract some of the most interesting pictures into separate folders.

This is what I expect would happen: a dumb rsync would leave my backups an absolute mess, with all the chaos in the main folders as well as the new folders I create - and most likely some mid-reorganization copies as well, if a backup runs while I'm still in the middle of it.

However, a deduplicating backup would recognize that the same files simply moved, and would just store the new folder structure / filenames into a modification layer. And if I needed to recover a given picture from backup, I could choose to browse the data as it was either before or after the reorganization.

Is that correct?

quote:

One way to make this more automatic is to turn on WoL on your desktop, and then add a magic packet ping (plus ~15s delay) to the start of your backup script. Then you can set this to run on a schedule rather than manually, and happen in the middle of the night or whenever you don't care about how long it takes.

Good idea. Although, now that I googled it, there's apparently a Linux utility called rtcwake that lets you schedule a computer to wake up from sleep on its own. That seems simpler than WoL.
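Going by the man page it would be something like this (untested):
code:
# suspend to RAM now, have the RTC wake the machine at 03:00
sudo rtcwake -m mem -t "$(date +%s -d 'tomorrow 03:00')"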

NihilCredo fucked around with this message at 20:00 on May 19, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

NihilCredo posted:

But apparently I was wrong, it's just a binary blob from the start?

Yeah, or rather a series of binary chunks inside larger compressed binary volumes.

The main advantage is that this is super efficient in bandwidth & storage for small changes. Using something like B2, where you pay for both, this is a great way to minimize costs.

quote:

"Deduplication" interests me more as a QoL feature rather than a space saving measure.

For example, right now my photo / video folder is just one big pile of everything, separated by year/month. Suppose that on one dreadful winter night I sit down and reorganize it, or at least extract some of the most interesting pictures into separate folders.

This is what I expect would happen: a dumb rsync would leave my backups an absolute mess, with all the chaos in the main folders as well as the new folders I create - and most likely some mid-reorganization copies as well, if a backup runs while I'm still in the middle of it.

However, a deduplicating backup would recognize that the same files simply moved, and would just store the new folder structure / filenames into a modification layer. And if I needed to recover a given picture from backup, I could choose to browse the data as it was either before or after the reorganization.

Is that correct?

Yeah, the most basic rsync -av would leave a mess. There are smarter options like --link-dest --delete that at least avoid the mess. You'd have 2 copies of all your photos: the existing unorganized ones that are now in a 2022-05-19 folder or whatever*, and the new organized ones. Very inefficient in space and copy time, but also a braindead simple result.

OTOH if you don't care about archiving changes, you can just use --delete to "deduplicate" the dumb way.

I dunno, I think both directions are a valid choice. It just depends what type of effort you find easiest to spend. I decided on the one that needs a bit more manual effort, but makes recovery in the ultimate disaster scenario easiest and has the least potential for unforeseen consequences vis-a-vis complex software I didn't fully understand.


*the robocopy script I posted in the backup thread is basically a crude and lovely method of doing a thing that's a one-line rsync --link-dest --delete in linux
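(In practice that "one line" is more like a couple of lines - dated snapshot dirs where unchanged files are hard links back into the previous snapshot; paths are placeholders:)
code:
today=$(date +%F)
rsync -av --delete --link-dest=/backup/latest /data/ "/backup/$today/"
ln -sfn "$today" /backup/latest     # repoint 'latest' at the newest snapshot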

quote:

Good idea. Although, now that I googled it, there's apparently a Linux utility called rtcwake that lets you schedule a computer to wake up from sleep on its own. That seems simpler than WoL.

Oh yeah that's easier. I had WoL on my mind because I was just looking up how to set that up in linux the other day.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Matt Zerella posted:

It completely blasted my docker backend, but that's a pretty easy fix. Otherwise I'm running it fine.

Upgraded both my servers this morning, only thing that broke was an NFS share, not bad.

CopperHound
Feb 14, 2012

NihilCredo posted:

Backup stuff
Have you looked into using syncthing for backing up your user data and doing imaging less often?

Imo, if my desktop crashes I would want to do a fresh install then copy my user data over instead of restoring a full image.

BlankSystemDaemon
Mar 13, 2009



Generic Monk posted:

yeah i pulled it, it got to 95%ish by which point it was going at about 4MB/s with no estimated completion time. gently caress that

on the plus side WD did email me today saying they would replace it with a CMR drive, so yay! i just hope it doesn’t take actual months like this one did. the support ticket got escalated pretty much immediately which i guess is the result of all the media attention this got in 2020

i only discovered and emailed them about the issue yesterday so can't fault the turnaround in that regard. it was the actual shipping of the drive that took months; i think i shipped the original drive to them in early march? i can only assume that's covid fallout or cost-cutting since it didn't take nearly that long when i did that a few years ago.
Oh, it was the shipping? That makes sense.
When all the container ships are stuck in other parts of the world than where things need to be shipped from, because of the ongoing supply chain issues, that's gonna make a dent in any shipping plans.

IOwnCalculus posted:

I'm starting to lean this way myself, especially if a company is going to gently caress around with what you get on the warranty anyway. I do have the luxury of an assload of SAS drive bays so I'm leaning more towards the constant supply of $100 10TB SAS drives on eBay and just buying N+1 or N+2 with the savings over buying an EasyStore I'm just going to shuck.
It's just good buying practice for data hoarders, and I'm a bit miffed that people aren't catching onto it.
It's not like the above-mentioned supply chain issues are a new thing.

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

It's just good buying practice for data hoarders, and I'm a bit miffed that people aren't catching onto it.

The downside of buying a spare is that it starts your warranty countdown while the drive is sitting on the shelf. If you're gonna do that, put it in the server and use Z2 or whatever extra redundancy. Then when a drive fails you can just keep running single redundant while your 2 month RMA crawls along.

Otherwise, just put the $ aside and use 2-day delivery as your "cold spare".

Only way I see buying a spare as sensible for home NAS is if you're shucking drives. In that case you're buying during some mega-sale, and the 1-year warranty is probably irrelevant anyways.

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

The downside of buying a spare is that it starts your warranty countdown while the drive is sitting on the shelf. If you're gonna do that, put it in the server and use Z2 or whatever extra redundancy. Then when a drive fails you can just keep running single redundant while your 2 month RMA crawls along.

Otherwise, just put the $ aside and use 2-day delivery as your "cold spare".

Only way I see buying a spare as sensible for home NAS is if you're shucking drives. In that case you're buying during some mega-sale, and the 1-year warranty is probably irrelevant anyways.
I don't mean buy a disk and put it on a shelf, I mean buy a shelf and run the usual workloads to provoke an almost-DoA drive into committing ritual suicide.
Since the drive doesn't contain any data, there's less of a worry than is usually the case with cold drives.

I don't know about 2-day shipping, but Denmark and many other countries don't have retailers that promise 2-day delivery of any product you care to mention, and it's not fun to pay through the nose when you need a disk right now and can't find a good deal.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
I set up Sonarr on my DS415+ and let it loose on a new show. It seems like the downloads ramped up and now the NAS is completely unresponsive. I can clearly hear the disks still working away, but I can't connect to the web interface or SSH into it anymore.

Has anyone run into this situation? I thought it'd finish by now but there's no way to check on the progress either. I guess it'll either unfuck itself by the time I finish work or maybe I could disconnect the internet for a bit to get it to stop.

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



mobby_6kl posted:

I set up Sonarr on my DS415+ and let it loose on a new show. It seems like the downloads ramped up and now the NAS is completely unresponsive. I can clearly hear the disks still working away, but I can't connect to the web interface or SSH into it anymore.

Has anyone run into this situation? I thought it'd finish by now but there's no way to check on the progress either. I guess it'll either unfuck itself by the time I finish work or maybe I could disconnect the internet for a bit to get it to stop.
Yeah that happens to my DS112j if I have download station do more than one download at a time. I don't know Sonarr, but if you can reduce the number of things it needs to do simultaneously, that'd be the way to go in the future.

If you give it time, it's going to finish the job, but disconnecting the internet seems like a good idea if you can't wait for that.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Flipperwaldt posted:

Yeah that happens to my DS112j if I have download station do more than one download at a time. I don't know Sonarr, but if you can reduce the number of things it needs to do simultaneously, that'd be the way to go in the future.

If you give it time, it's going to finish the job, but disconnecting the internet seems like a good idea if you can't wait for that.

Thanks, good to know I didn't just gently caress something up. I don't really "know" Sonarr either but there has to be a way to limit downloads either there or in the torrent client, which is Download Station as well. For all I know it tried downloading the whole season at once, but it's still pretty wild that this completely knocked out the NAS.

Wibla
Feb 16, 2011

Torrents are hard on CPU/RAM and disk access. Not really surprising that it struggles.

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Yeah if the actual downloading goes through download station, you can reduce the number of simultaneous downloads in the settings there.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
It's not that it just struggles - after 8 hours it's still completely unreachable. I've now tried disconnecting internet and even unhooking it altogether from the network but it still continues to do something and doesn't respond to anything. Since it can't actually be downloading anything now, it has to be something with allocating space or moving around the data or something.

If I can get back in, I'll definitely try adjusting the number of downloads. Never had a problem with even 2 or 3 going before.


Update: I left it running without any network for an hour or two and still nothing. Force-rebooted it and everything's back to normal. There were 7 torrents 50-70% completed so... wasn't very close to finished lol. It's a bummer cause I had almost a year of uptime on it but at least I can update it now :v:

mobby_6kl fucked around with this message at 21:10 on May 25, 2022

Aware
Nov 18, 2003
That sounds totally nuts to me, is it weaker than a raspberry pi?

Wizard of the Deep
Sep 25, 2005

Another productive workday

Aware posted:

That sounds totally nuts to me, is it weaker than a raspberry pi?

Probably. On Synology devices, the last two numbers are generally the model year. So mobby_6kl is running a device from 2015, or 7 years ago. Synology devices generally use lower-power CPUs to keep the cost down, so a seven-year-old budget CPU running a bunch of torrents simultaneously will definitely use every single hertz of processing power.

So mobby_6kl, if you have an older computer or laptop, or maybe even set up a spare Pi, you could use that as a landing zone before transferring the downloads to the NAS. The CPU is fine for running a simple web host and moderate data management tasks, but that many torrents is going to be a bit much.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
Sonarr is no slouch either. I had some issues with slowdown on a 218+.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Aware posted:

That sounds totally nuts to me, is it weaker than a raspberry pi?
As Wizard says, it's possible. It has an Atom C2538 which is an even older CPU than the NAS itself. Plus I might've had some VMs running at the time (in addition to Docker Sonarr) :v:

It's not going to be an issue though, I just set it to 2 simultaneous downloads which is perfectly fine for me and causes no performance issues.


E: for your amusement, I just ran Geekbench on the thing: https://browser.geekbench.com/v5/cpu/15123612

242 single core and 766 multicore.

It's not much, obviously, a modern Atom scores like 3x more single core for example. But not that bad either. It's way more than budget Arm CPUs like you'd find in Android TV boxes https://gadgetversus.com/processor/rockchip-rk3228a-geekbench-5-android/ and similar to Pi 4, it seems: https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q=raspberry+pi+4

mobby_6kl fucked around with this message at 01:34 on May 26, 2022

Aware
Nov 18, 2003
I feel very spoiled with my converted old desktop pc running unraid with an i7-8700 lol. Just went through the process of adding bigger disks so now I have another 16TB to fill.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Yeah, as a giant nerd with a bunch of old laptops and PCs around, I thought I was going to re-use an older desktop but... the newest one I could use is Sandy Bridge, and it would be noisy, power hungry, and huge, plus I'd have to actually set everything up myself. Then I discovered that this generation of Synology has a known issue that bricks them but can be unbricked by soldering one resistor, so I picked up a faulty one on ebay for peanuts and it solved all my problems. I don't even have to janitor it.

At least this old one won't do hardware transcoding or run a ton of VMs (or 10 torrents at once as I discovered) but none of that is really an issue for me. Still have 2 empty slots too.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
What experiences have you had with SATA disks on SAS expanders? I've had a couple of really bad experiences with SATA disks on SAS expanders both at work and at home over the years. Disks falling out of hardware RAID arrays, SATA disks throwing regular I/O errors at a higher rate when used with software defined storage, the whole array going non-responsive and blocking mysteriously and not doing that after the SATA disks were removed and only SAS disks were in place.

The thing is, I've gotten explicit confirmation from hardware vendors and also software tools that SATA disks on SAS expanders are supported, but I'm twice burned now and basically ready to just always pay the small cost premium for SAS disks forever going forward. Can I get a sanity check?

Thanks Ants
May 21, 2004

#essereFerrari


I'd replace the hardware RAID with software before I started spending on SAS disks for a home server.

IOwnCalculus
Apr 2, 2003





I think it might be dependent on the SATA disk. It's absolutely supposed to work but it is not guaranteed.

I have a NetApp DS4246 with the controller swapped out for a generic Xyratex SAS module so that I could use regular cabling instead of a custom SFP based cable. I have a mix of SATA and SAS drives in it, and I'm not using the SATA/SAS transposers (because they block SMART).

The only SATA drives I've tried so far in this config that do not work are my ancient 3TB WD Reds (WD30EFRX). They connect just fine and will sit there forever. But if the drive is actually loaded at all, it will cause any random drive in the array to go unresponsive and make ZFS fault it. I have shucked 8TB and 10TB drives in there with no problems.

Enos Cabell
Nov 3, 2004


I've been using shucked SATA drives on an SAS expander (HP 468405-002 for the record) for several years now on my Unraid box with no issues.

Enos Cabell fucked around with this message at 21:45 on May 27, 2022

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Thanks Ants posted:

I'd replace the hardware RAID with software before I started spending on SAS disks for a home server.

I've been using all-software RAID for a long time; most recently I've seen SATA disks throwing a ton of errors and falling out of Ceph, while SAS drives next to them were fine.

The cost premium on SAS disks is all of like $20/disk, and from what I've seen you can find used SAS disks cheaper than used SATA - I'm guessing because the SAS controller is more expensive, but at the point where you've got a controller the cost difference isn't big. I just wanted to see if I'm being superstitious and dumb and don't need to be splashing the extra $20/disk.

Twerk from Home fucked around with this message at 21:54 on May 27, 2022

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Just as an FYI, I had to RMA a brand new 16TB WD Red Pro. WD received it on the 11th (make sure you have your tracking number) and told me they had received it and were processing it on the 25th. The UPS tracking puts the replacement arriving to me on June 2nd. So you may want to plan for a month for an RMA through WD and be scared about them losing it in the building for up to two weeks while you wait. Also, just to be unfair, in my current experience the WD Red Pro 16TB have a 50% failure rate (although that's with a sample size of two).

Motronic
Nov 6, 2009

Rexxed posted:

Just as an FYI, I had to RMA a brand new 16TB WD Red Pro. WD received it on the 11th (make sure you have your tracking number) and told me they had received it and were processing it on the 25th. The UPS tracking puts the replacement arriving to me on June 2nd. So you may want to plan for a month for an RMA through WD and be scared about them losing it in the building for up to two weeks while you wait. Also, just to be unfair, in my current experience the WD Red Pro 16TB have a 50% failure rate (although that's with a sample size of two).

My last WD red pro purchase/warranty service went similarly and ended up with near 50% immediate failure rate. Even one of them that I caught early enough to return to the vendor resulted in a replacement that was broken in a different way.

I have no idea what WD is doing, but it's getting harder to write this off as just bad luck on my part.

power crystals
Jun 6, 2007

Who wants a belly rub??

TrueNAS SCALE question I'm not sure where else to ask: I have an app (one of the official "app" things, not a docker image) that I created and assigned the onboard Intel GPU to for video transcoding. However, I now want to reassign that Intel GPU to a different app which only supports VAAPI, and instead use NVENC from an nVidia GPU on the first app. But basically this config just doesn't work. As far as I can tell the nVidia OpenCL stuff is just not even present inside the app (which is asking for an OpenCL identifier), but this is admittedly difficult to debug as I know very little about this part of linux in the first place, let alone figuring it out inside a pod shell that's missing stuff like "lspci". Now, putting it back to the Intel GPU, that doesn't seem to work either. ffmpeg fails with either:

Intel: Failed to initialise VAAPI connection: -1 (unknown libva error).
nVidia: Failed to get number of OpenCL platforms: -1001.

Did I screw this up somehow? All I can easily find via google on this is people happy that it just worked which sure isn't my experience here.

Shumagorath
Jun 6, 2001

Motronic posted:

My last WD red pro purchase/warranty service went similarly and ended up with near 50% immediate failure rate. Even one of them that I caught early enough to return to the vendor resulted in a replacement that was broken in a different way.

I have no idea what WD is doing, but it's getting harder to write this off as just bad luck on my part.
Am I likely to have better experiences with Seagate?

CopperHound
Feb 14, 2012

Shumagorath posted:

Am I likely to have better experiences with Seagate?
I have 1 Seagate Barracuda with 7 WD drives. The Seagate would drop out of my pool every month or two when connected to an HBA card, but has been working fine with my onboard SATA.

Yes, I tried different ports and cables

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Shumagorath posted:

Am I likely to have better experiences with Seagate?

No, every manufacturer has some bad product lines and the failure rates are typically very low. You can't really be too brand loyal when there's only three companies making disks these days. I'll never personally buy seagate again after having some of their disks that straight up lied in the smart data and having had almost no seagates survive over 8 years or so, but they're not really worse than WD as a manufacturer, they both have bad product lines sometimes. The main thing to note from my post is the long RMA times. The 50% failure rate out of the box is more anecdotal because I'm mad about the slow rear end RMA situation which never used to be the case.

Generic Monk
Oct 31, 2011

credit where it's due, they did replace the SMR disk they mistakenly (?) sent out to me pretty quickly, the turnaround time was under a week. sending proof of dispatch and proof of arrival probably helped. that is on top of the standard RMA process taking over a month and initially sending the wrong drive though. i was already thinking the value proposition of the 3 year warranty on reds was kind of dubious, this basically confirms it.

is there any other real difference between reds and say, the white label drives you get from shucking? the only things i'm aware of are the warranty and that the firmware is tweaked to not aggressively spin the drive down when not in use; is that it?

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
whites usually have a smaller cache, while reds can have more or just as small a cache. up in the air.

the big change was the fact that the drives follow a later version of SATA that lets the system send a signal over the drive's 3.3V pin to shut it off



SATA v3.3 posted:

The new Power Disable feature (similar to the SAS Power Disable feature) uses Pin 3 of the SATA power connector. Some legacy power supplies that provide 3.3 V power on Pin 3 would force drives with Power Disable feature to get stuck in a hard reset condition preventing them from spinning up. The problem can usually be eliminated by using a simple “Molex to SATA” power adaptor to supply power to these drives.[43]


these are the stories where people use teflon tape to cover the pin, but a molex to SATA adapter, like the quote above says, is all you need.

v3.3 also deals with SMR implementation, which is why the same SMR whites also have the power disable feature.

but besides the SMR and the power disable, cache is really the only thing that changes, and that story is older than IDE

Motronic
Nov 6, 2009

EVIL Gibson posted:

these are the stories where people use teflon tape to cover the pin, but a molex to SATA adapter, like the quote above says, is all you need.

Stories? Like as in you don't believe this happens? I use kapton tape. I think everyone else who does this successfully probably does the same.

It's non-trivial to do it any other way when your drives are going into a backplane. It's also exceptionally cheap, easy and immediate.

Wibla
Feb 16, 2011

I just snipped the +3.3V wire going to my SATA power connectors :haw:

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Motronic posted:

Stories? Like as in you don't believe this happens? I use kapton tape. I think everyone else who does this successfully probably does the same.

It's non-trivial to do it any other way when your drives are going into a backplane. It's also exceptionally cheap, easy and immediate.

tell me your definition of "story" because your reaction seems to be taking it in a way I did not mean and I apologize.

I like simple solutions over bodges now, but it's harder for me to do bodges with my eyes going.

Hell, in 2003 I was filing down blackberry serial adapters to fit into the first Dakota Digital disposable camera so I could download the files directly and then wipe.



Plus I have a million of those molex to SATA power adapters from being around at the time of SATA's introduction, when PSUs didn't all come with SATA power connections.

EVIL Gibson fucked around with this message at 20:47 on May 28, 2022


mobby_6kl
Aug 9, 2009

by Fluffdaddy
Cool story, bro.



Is there a way to get Synology DSM to run a job when the system wakes up? That doesn't seem to be an option in the "Scheduled Tasks" thingie, and from what I've seen one solution is to write a script in /etc/pm, but there's no /etc/pm on mine and I'm not enough of a Linux nerd to work out how to get around that.

  • Reply