Yaoi Gagarin
Feb 20, 2014

Eletriarnation posted:

Maybe in a mirror, but as far as I understand it the performance characteristics of distributed parity topologies like RAID-5/6/Z* have more similarities to striped arrays. You of course have a lot more CPU overhead, and at any given time some subset of your disks is reading/writing parity blocks that don't contribute to your final application-available bandwidth. Still, modern CPUs are fast so that's not much of a bottleneck to HDDs and you can absolutely get very fast numbers for sustained, sequential transfers.

Ah, neat. In that case I second what Wibla said. Make a raidz1 or raidz2 of your drives and you're good

Wibla
Feb 16, 2011

Jim Silly-Balls posted:

I do have a 1TB SATA SSD cache in my unraid, but again, it's the limitations of a single disk that come into play.

I am looking to store a modest amount of data (currently about 5.5TB) in a redundant way so that I do not have a single point of failure (this is why I don't have it all sitting on a single 10TB disk, despite that being by far the easiest option). I also would like to take advantage of the 10Gbit link between the server and my video editing PC. It would be nice to be able to store everything on the NAS and edit directly from there. I'm mostly dealing with 4K and some 6K footage. Other than that it's simple file storage accessible over gigabit ethernet or wifi, one or two VMs, and a Plex docker container on the box that just occasionally serves my local LAN.

I have access to a bunch of ex-datacenter stuff (which is where I got the 10Gb SFP+ cards for the unraid box and my video editing box). Everything currently runs on a Dell PowerEdge T420 with 8 drive bays filled with spinning disks on a PERC 6Gb/s unit of some sort that has been flashed to HBA mode, 2x E5-2430 v2s and 192GB DDR3.

I also have a stack of 1TB SATA SSDs, which should get me 7TB usable under the current unraid standard, assuming 8 of them in use. Since I already have the SSDs, I was hoping to put them to use as an improvement in speed, heat, and power consumption over the spinning rust.

I also have the option of moving this all over to an HP DL380 G9 with 2x E5-2640 v3s, 128GB DDR4, and a 12Gb/s SAS controller, but I'm guessing the 12Gb/s unit won't gain me anything without using 12Gb/s drives with it.

Nothing that I own as a candidate for a NAS can accept an NVMe drive. It's all a bit too old for that.

hol up!

You have all that poo poo sitting already? Is that an SFF (16 bay) DL380?

I would lab the DL380 with 8x1TB setup in striped ZFS mirrors (one pool with multiple mirrored vdevs) and see how it performed for that use case. That'd get you around 3.something TB formatted capacity for actual high-speed storage, with reasonable redundancy. You could probably run them in RAIDZ2 and still get good enough performance for your needs.
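In zpool terms that's just one pool built from four 2-way mirror vdevs - a rough sketch, assuming the eight SSDs show up as da0 through da7 and you call the pool tank (device and pool names are placeholders):
code:
# four mirrored pairs striped together: ~4TB raw, the "3.something TB" formatted mentioned above
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
zpool status tank   # should list four mirror-N vdevs under the pool
Adding another pair later is just zpool add tank mirror daX daY, which keeps the same striped-mirror layout.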

You can also get cheap ($20) pci-e riser cards that will fit a single nvme drive. I have one in my DL360p Gen8 and it works great. There are versions with multiple NVMe slots, but they require PCI-e bifurcation and your mileage will vary there.

Zorak of Michigan
Jun 10, 2006


VostokProgram posted:

I thought each vdev only gets the bandwidth of its slowest drive?

I was going to recommend ZFS with striped mirrors. You'll only get half the space but if you really want to saturate the network it might be worth it

Each vdev gets the IOPS of its worst-performing drive, but throughput of a multi-disk vdev can be much higher than single-disk throughput.

Beve Stuscemi
Jun 6, 2001




Wibla posted:

hol up!

You have all that poo poo sitting already? Is that an SFF (16 bay) DL380?

I would lab the DL380 with 8x1TB setup in striped ZFS mirrors (one pool with multiple mirrored vdevs) and see how it performed for that use case. That'd get you around 3.something TB formatted capacity for actual high-speed storage, with reasonable redundancy. You could probably run them in RAIDZ2 and still get good enough performance for your needs.

You can also get cheap ($20) pci-e riser cards that will fit a single nvme drive. I have one in my DL360p Gen8 and it works great. There are versions with multiple NVMe slots, but they require PCI-e bifurcation and your mileage will vary there.

Yeah, it's all here just sitting; it's an 8-bay 2.5" drive DL380. I do have a free PCIe slot left after adding the SFP card to it, so NVMe is something to look at then.

Beve Stuscemi fucked around with this message at 22:29 on Mar 31, 2023

BlankSystemDaemon
Mar 13, 2009



At AsiaBSDCon, Alexander Motin (a FreeBSD and ZFS developer) is going to be presenting on ZFS Data Path, Caching and Performance on April 2nd.

He's one of the people (Allan Jude, another FreeBSD and ZFS developer, being the other I know of) working on speeding up ZFS on NVMe. When ZFS was invented in 2001-2003, NVMe wasn't even a gleam in anyone's eye (well, as far as I know, although we all wished for faster disk bandwidth even back then).

Combat Pretzel posted:

L2ARC maps directly to ZFS filesystem blocks. It's 70 bytes of header per block.

512MB of RAM allows you to map either 3.6GB of 512-byte blocks, 29GB of 4KB blocks, 117GB of 16KB blocks, or 937GB of 128KB blocks. The latter is the default record size of ZFS.

My L2ARC stats - it's caching just the metadata of my ZFS filesystems, plus two ZVOLs at 16KB block size in their entirety, hosting a Steam library and MS Flight Simulator respectively:

code:
L2ARC size (adaptive):                                         259.7 GiB
        Compressed:                                    82.1 %  213.1 GiB
        Header size:                                    0.3 %  704.7 MiB
        MFU allocated size:                             9.2 %   19.6 GiB
        MRU allocated size:                            90.4 %  192.6 GiB
        Prefetch allocated size:                        0.4 %  893.9 MiB
        Data (buffer content) allocated size:          99.0 %  210.9 GiB
        Metadata (buffer content) allocated size:       1.0 %    2.2 GiB
I can live with losing 700MB of RAM to keep like 250GB of data warm.

--edit:

Maybe if these dipshits at Samba would actually implement SMB Direct instead of just talking about it for 15 years (or whatever it is), that'd be nice. Needs RDMA-capable cards at both ends, tho.
Right, I'm used to thinking of the LBA mapping taking up ~330 bytes, but that was in very old versions, from before it was called OpenZFS.

Samba is always going to be behind SMB, because SMB is a proprietary protocol.

NFS, on the other hand, is an open protocol - and on top of that, it's also actively used in a ton of high-performance systems, so its implementations tend to be better optimized.

Jim Silly-Balls posted:

I had to check the hardware profile, but it looks like bits
Right, and for 10Gbps the aggregate bandwidth you're looking to hit is around 936MBps once Ethernet, TCP/IP, and 64b/66b overhead are accounted for.

Motronic posted:

I know this is your schtick, but in this case the suggested reason to use ZFS, as you very well know but don't want to say "oh, you were right I missed it" so that you can keep on well acksuallying instead is: because OP needs a pool of drives. Not a single drive. Yes, other files systems can ackshually do that too. But I chose ZFS as the example because of your response.
I absolutely agree that ZFS is the right solution for the situation, but pretty much every time I bring it up I get shouted down by other people who swear that their proprietary solution, which isn't designed to detect silent data corruption, works fine for them because they haven't seen silent data corruption.

Also, I think we need to agree on terminology here.
Any array of drives, irrespective of whatever filesystem goes on top, can achieve 10Gbps.
Pooled storage, which is what ZFS does, allows you to combine arbitrary collections of arrays and stripe the data across each of these arrays (which ZFS calls vdevs).

I never once mentioned using a single drive, but I should've been more explicit about Jim using ZFS, you're right - I just didn't want to have the discussion I alluded to above; I guess I'm doomed if I do, and doomed if I don't.

Wibla posted:

Can we not have this stupid slapfight again?

Also: I routinely see >500MB/s over SMB via 10gbe, it's not as slow as some people claim.
500MBps is entirely respectable for Samba (I assume that's what you're using, not SMB via Microsoft), but NFS can achieve much closer to the 936MBps I mentioned above on the exact same hardware.

500MBps is just about what you can achieve from a single Intel 520 480GB SSD, which is what I have in my workstation (even if the motherboard's chipset is dead, because it's more than a decade old).

Jim Silly-Balls posted:

Yeah, it's all here just sitting; it's an 8-bay 2.5" drive DL380. I do have a free PCIe slot left after adding the SFP card to it, so NVMe is something to look at then.
I need to check since I have a DL380p Gen8 myself - the card you've got connected to the 2.5" bays, is that the HPE branded Microsemi RAID controller? Because outside of some unsupported commands, you can't get that to stop being a RAID controller.
And even if you do use the unsupported commands, I'm not sure it presents the disks as initiator targets - which is what you want for ZFS.

Also, with 128GB of memory, you're definitely fine to use an NVMe SSD for L2ARC so that you can fit an entire video project into the read cache.
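If the NVMe drive does go in, attaching it for L2ARC is a one-liner - a sketch assuming a pool named tank and the drive showing up as nvd0 on FreeBSD or nvme0n1 on Linux (cache devices can also be removed again at any time):
code:
zpool add tank cache nvd0              # FreeBSD device name
# zpool add tank cache /dev/nvme0n1    # Linux equivalent
zpool iostat -v tank                   # the device shows up under its own "cache" section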

EDIT: I finally got around to upgrading my always-online HPE Gen10+ Microserver.
It seems like something's changed between FreeBSD 12.0 and 13.1, because all of a sudden diskinfo -v ada0 shows the physical path, and sesutil map shows:
pre:
ses0:
	Enclosure Name: AHCI SGPIO Enclosure 2.00
	Enclosure ID: 3061686369656d30
	Element 0, Type: Array Device Slot
		Status: Unsupported (0x00 0x00 0x00 0x00)
		Description: Drive Slots
	Element 1, Type: Array Device Slot
		Status: OK (0x01 0x00 0x00 0x00)
		Description: Slot 00
		Device Names: pass0,ada0
	Element 2, Type: Array Device Slot
		Status: OK (0x01 0x00 0x00 0x00)
		Description: Slot 01
		Device Names: pass1,ada1
	Element 3, Type: Array Device Slot
		Status: OK (0x01 0x00 0x00 0x00)
		Description: Slot 02
		Device Names: pass2,ada2
	Element 4, Type: Array Device Slot
		Status: OK (0x01 0x00 0x00 0x00)
		Description: Slot 03
		Device Names: pass3,ada3
	Element 5, Type: Array Device Slot
		Status: Not Installed (0x05 0x00 0x00 0x00)
		Description: Slot 04
	Element 6, Type: Array Device Slot
		Status: Not Installed (0x05 0x00 0x00 0x00)
		Description: Slot 05
Can't wait for vdev properties in 13.2 to land, so that I can get enclosure information printed in zpool status. :toot:

BlankSystemDaemon fucked around with this message at 22:58 on Mar 31, 2023

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

Samba is always going to be behind SMB, because SMB is a proprietary protocol.
Probably. But the kernel module implementation of SMB, called ksmbd, can actually do SMB Direct. So I'm not sure why Samba is dragging their balls across the ground in that regard.

--edit:
Also, regular SMB peaks at 1.8GB/s here, until ZFS decides to throttle incoming writes because it decided the in-memory ZIL write buffer is too full and wants to write to disk.

Combat Pretzel fucked around with this message at 00:19 on Apr 1, 2023

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Probably. But the kernel module implementation of SMB, called ksmbd, can actually do SMB Direct. So I'm not sure why Samba is dragging their balls across the ground in that regard.

--edit:
Also, regular SMB peaks at 1.8GB/s here, until ZFS decides to throttle incoming writes because it decided the in-memory ZIL write buffer is too full and wants to write to disk.
The ksmbd which shipped with an unauthenticated kernel-privileged remote-code execution, landing it a perfect 10.0 CVSS score? Or the 4 other CVEs of a 9.6, 8.5, 6.5, and 5.3 that didn't get widely reported because of the nature of the security theater that is the infosec press?
Besides, the point was that with NFS vs Samba on the same machine, NFS tends to perform better. Not that one machine might get a higher number than another.

It's probably the disk caches absorbing the data at 1.8GBps, then dropping to the actual write speed once they're filled.
The dirty-data buffer writes asynchronous data to disk every 5 seconds or whenever it fills up (defaults to 10% of system memory).
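Both of those knobs are exposed as OpenZFS tunables, if anyone wants to look at (or change) them - parameter names from current OpenZFS, defaults as described above:
code:
# Linux
cat /sys/module/zfs/parameters/zfs_dirty_data_max_percent   # 10 (% of RAM)
cat /sys/module/zfs/parameters/zfs_txg_timeout              # 5 (seconds)
# FreeBSD
sysctl vfs.zfs.dirty_data_max vfs.zfs.txg.timeout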

BlankSystemDaemon fucked around with this message at 00:41 on Apr 1, 2023

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Being a swiss cheese is unrelated to the feasibility of implementing SMB Direct, though.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Being a swiss cheese is unrelated to the feasibility of implementing SMB Direct, though.
Yeah, true.

And now I have a mental image of a piece of swiss cheese with network cables going into the holes. Great.

CopperHound
Feb 14, 2012

Zorak of Michigan posted:

Each vdev gets the IOPS of its worst-performing drive, but throughput of a multi-disk vdev can be much higher than single-disk throughput.
Wasn't there some testing that showed mirror vdevs had slightly better read iops?

BlankSystemDaemon
Mar 13, 2009



CopperHound posted:

Wasn't there some testing that showed mirror vdevs had slightly better read iops?
If you set primarycache=none and secondarycache=none, there shouldn't be any way to achieve better read speeds than the aggregate bandwidth of the striped data.

In reality, the firmware and hardware caching can probably introduce enough variability that it's hard to say for sure.
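For anyone wanting to actually run that comparison, a rough sketch - it assumes a throwaway dataset on a pool called tank, and both properties are per-dataset, so the rest of the pool keeps its normal caching:
code:
zfs create tank/bench
zfs set primarycache=none tank/bench       # bypass ARC for this dataset
zfs set secondarycache=none tank/bench     # bypass L2ARC too
dd if=/dev/urandom of=/tank/bench/big.bin bs=1M count=8192
dd if=/tank/bench/big.bin of=/dev/null bs=1M   # uncached read, straight off the vdevs
zpool iostat -v tank 1                         # run in a second terminal to watch per-disk reads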

Less Fat Luke
May 23, 2003

Exciting Lemon
ZFS *read* performance should definitely be faster on mirrored devices, thanks to both load balancing of reads and queuing reads to the least-busy drives:
https://openzfs.org/wiki/Features#Improve_N-way_mirror_read_performance

ZFS write performance is limited by the slowest device in the vdev as mentioned above though.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Wibla posted:

I would NOT buy a bunch of SATA SSD's in 2023. At least not without a clearly defined goal :haw:

Oh no, I just ordered an Intel 670p 2tb yesterday!

Please forgive me.

Beve Stuscemi
Jun 6, 2001




BlankSystemDaemon posted:

HPE branded Microsemi RAID controller?

It is an HPE branded controller, although I don’t know what kind offhand without opening it up and it’s at my office right now. A cursory google suggested that it has an inbuilt HBA mode that you can switch to

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

CopperHound posted:

Wasn't there some testing that showed mirror vdevs had slightly better read iops?
Mirrors in ZFS are load balanced for reads, so ideally it scales pretty well with the number of disks.

Wibla
Feb 16, 2011

Moey posted:

Oh no, I just ordered an Intel 670p 2tb yesterday!

Please forgive me.

For your sins, you have to move 20 workstations alone, to somewhere with no power and network drops :v:

(jk)

I ordered a 2TB KC3000 yesterday :sun:

BlankSystemDaemon
Mar 13, 2009



Less Fat Luke posted:

ZFS *read* performance should definitely be faster on mirrored devices, thanks to both load balancing of reads and queuing reads to the least-busy drives:
https://openzfs.org/wiki/Features#Improve_N-way_mirror_read_performance

ZFS write performance is limited by the slowest device in the vdev as mentioned above though.
Huh, I'd completely forgotten about both these.

Beve Stuscemi posted:

It is an HPE branded controller, although I don’t know what kind offhand without opening it up and it’s at my office right now. A cursory google suggested that it has an inbuilt HBA mode that you can switch to
Yeah, that's the trouble - I don't know for sure that the HBA mode is the same as an initiator target mode.
A lot of RAID controllers think it's fine to just put each individual disk in a RAID0 of its own, but this doesn't work: it locks you into controllers that support the vendor's RAID implementation, and worse, it usually still means that ZFS has no control over disk and cache flushing (which it can't work properly without).

The only way I know of is to try it, then use hd(1) or similar tools to look at the raw/character device (usually at the beginning, the end, or both), and then try moving the disk to another machine entirely and repeating the check.
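In practice that check looks something like this (FreeBSD-flavoured and the device names are examples; on Linux you'd point smartctl and a hex dumper at /dev/sdX instead):
code:
camcontrol devlist      # disks should show up as plain ada/da targets, not vendor logical volumes
smartctl -i /dev/da0    # identity data should come from the drive itself, not a RAID LUN
hd -n 512 /dev/da0      # peek at the start of the raw device for leftover RAID metadata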

BlankSystemDaemon fucked around with this message at 11:10 on Apr 1, 2023

BlankSystemDaemon
Mar 13, 2009



Double-post, but it's somewhat related to the discussion of ZFS and NFS in that there's a lot better instrumentation for observability.
It's also a good article in case anyone has been wondering about how to go about determining the size of writes, whenever I've brought that up in the past.

Samba, in theory, could add dtrace compatibility via USDT - but they haven't done so yet.
In theory it'd also be possible for FreeBSD to import the Illumos SMB code, because it's a complete reimplementation of the SMB protocol - but in practice, it's a big job, because it involves not just SMB but importing LDAP (instead of it being integrated via PAM), quite a few additions to the VFS, and probably a whole lot more.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Sorry, this probably isn't the best thread for this, but I'm not sure which one would be more appropriate.

I've been using a work (University) supplied Google Drive to share lab notebooks, data, protocols etc. (think lots of Word and Excel docs and some image files) between myself and my grad and undergrad students. Due to some annoying changes on the University end, I'd like to get a non-Google product that does something similar. Ideally I'd like at least 1TB of cloud storage with an app ecosystem that's cross-platform. I don't care if we all share a single account etc. if that is cheaper. The data will also have separate backups to physical and Glacier S3 storage at intervals (if the platform also facilitates this, that would be nice).

So, any recs on a google drive replacement for shared cloud storage between <10 people?

Aware
Nov 18, 2003
We use OneDrive at work and it's fine. The online/browser version of Office is frankly good enough to not open a native app most of the time and you can't really get more integrated than that if those are the apps you use. It doesn't have a native Linux client, though there are third-party tools; I just work via the browser on my Linux laptop. Not sure about backups to S3, but Google suggests a number of ways.

Aware fucked around with this message at 14:36 on Apr 1, 2023

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Aware posted:

We use OneDrive at work and it's fine. The online/browser version of Office is frankly good enough to not open a native app most of the time and you can't really get more integrated than that if those are the apps you use.

Honestly the browser Office apps don't work well for the reference manager and other Excel plugin stuff we use, so that's a no-go for that part.

Otherwise, how problematic is it for OneDrive to run a shared drive between multiple users? My only experience with it has been as a personal sync drive between my own multiple Windows systems, i.e. most of the things on my OneDrive are items that would never be shared with anyone at work.

Aware
Nov 18, 2003

That Works posted:

Honestly the browser Office apps don't work well for the reference manager and other Excel plugin stuff we use, so that's a no-go for that part.

Otherwise, how problematic is it for OneDrive to run a shared drive between multiple users? My only experience with it has been as a personal sync drive between my own multiple Windows systems, i.e. most of the things on my OneDrive are items that would never be shared with anyone at work.

I think SharePoint is actually the preferred solution for real shared folders between users, but we mostly just give a bunch of users access to folders in our own OneDrives.

I don't actually store anything work related locally, it's all in OneDrive.

Can't help on the browser app/plugin side, but basically on Windows it's all going to show up as a folder in Explorer, so you just interact with the files as normal, plus real-time multi-user editing in the native apps.

I'm not an O365 admin though, just a user, so I probably can't add much further other than that it 'just works' for the most part.

Aware
Nov 18, 2003
I guess I should post just to be clear - this is my work's O365 implementation. For my personal account I've had no issues doing shared folders with my fiance and her personal OneDrive account, if that helps and is what you're looking at. I think you can make a Microsoft account with any email for this.

Beve Stuscemi
Jun 6, 2001




BlankSystemDaemon posted:

Huh, I'd completely forgotten about both these.

Yeah, that's the trouble - I don't know for sure that the HBA mode is the same as an initiator target mode.
A lot of RAID controllers think it's fine to just put each individual disk in a RAID0 of its own, but this doesn't work: it locks you into controllers that support the vendor's RAID implementation, and worse, it usually still means that ZFS has no control over disk and cache flushing (which it can't work properly without).

The only way I know of is to try it, then use hd(1) or similar tools to look at the raw/character device (usually at the beginning, the end, or both), and then try moving the disk to another machine entirely and repeating the check.

I’ll do some research on it. I know the PERC controller in my current Dell should be fine, so worst-case scenario I can swap that in, because I believe it’s a normal PCIe card.

IOwnCalculus
Apr 2, 2003





Any of the controller options on a DL380 G9 should work, but IMO they aren't ideal. Even in HBA mode they require funky commands to play nice with smartctl, for example.
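For anyone who hits that, the funky commands are mostly about telling smartctl which pass-through to use - something along these lines, with the drive index as an example:
code:
# HPE Smart Array controllers usually want the cciss pass-through, one index per physical drive
smartctl -a -d cciss,0 /dev/sda
smartctl -a -d cciss,1 /dev/sda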

I ended up swapping mine for another LSI 2308 just to get everything on known-good hardware. I was having issues with an SSD randomly slowing the whole system down, but I suspect that was actually an issue with the drive and not the controller.

Bonus: if your mezzanine card controller is a P840ar, those still go for stupid money on eBay. Mine is/was, and the only reason I haven't sold it yet is that whoever put my server together last stripped almost all of the Torx screws that hold it together.

Likewise, if you want to maximize your PCIe slots, consider finding a FlexLOM 10G NIC to use the dedicated slot instead of one of your regular PCIe slots. That'll leave you more room in the future for other HBAs or NVMe SSDs. I don't have any ability to use 10G where my server is, so I'm considering figuring out how to make a card that adapts the FlexLOM slot to M.2. The slot is just PCIe, except with a slightly different form factor and pinout, because of course it is.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Aware posted:

I guess I should post just to be clear - this is my work's O365 implementation. For my personal account I've had no issues doing shared folders with my fiance and her personal OneDrive account, if that helps and is what you're looking at. I think you can make a Microsoft account with any email for this.

Thanks, that helps

hooah
Feb 6, 2006
WTF?
I have a Synology DiskStation 218+ that I have a Jellyfin Docker container on. I have looked around, but haven't been able to figure this out: what is the best way (if it's possible) to update the container without losing any of the configuration?

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



hooah posted:

I have a Synology DiskStation 218+ that I have a Jellyfin Docker container on. I have looked around, but haven't been able to figure this out: what is the best way (if it's possible) to update the container without losing any of the configuration?

Do you have the config directory mounted as a persistent volume as recommended? If so it will survive a container update and should pull in your existing settings.
https://jellyfin.org/docs/general/installation/container/
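For reference, a minimal docker run sketch - the host paths are just examples for a Synology volume, while /config, /cache, and /media are the paths the official image expects:
code:
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /volume1/docker/jellyfin/config:/config \
  -v /volume1/docker/jellyfin/cache:/cache \
  -v /volume1/video:/media:ro \
  jellyfin/jellyfin
As long as /config lives on the host like that, pulling a newer image and recreating the container keeps your libraries, users, and settings.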

Nitrousoxide fucked around with this message at 18:32 on Apr 1, 2023

hooah
Feb 6, 2006
WTF?

Nitrousoxide posted:

Do you have the config directory mounted as a persistent volume as recommended? If so it will survive a container update and should pull in your existing settings.
https://jellyfin.org/docs/general/installation/container/

Ok, I think I do. I probably did that the last time I updated Jellyfin and lost my whole setup.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



hooah posted:

Ok, I think I do. I probably did that the last time I updated Jellyfin and lost my whole setup.

If you aren't sure, you can copy the /config directory from your container to your host machine before you update the container with docker cp. Here is the documentation:

https://docs.docker.com/engine/reference/commandline/cp/
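Something like this, assuming the container is actually named jellyfin (check with docker ps) and you want the copy dropped in the current directory:
code:
docker cp jellyfin:/config ./jellyfin-config-backup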

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

hooah posted:

Ok, I think I do. I probably did that the last time I updated Jellyfin and lost my whole setup.

'docker ps' and 'docker inspect ID' should be able to show if it's persistent.
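For example, this prints every bind mount and named volume for the container (the name/ID is a placeholder):
code:
docker inspect -f '{{ json .Mounts }}' jellyfin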

Tiny Timbs
Sep 6, 2008

Matt Zerella posted:

Set any VMs or Docker shares to "Prefer" and run them strictly off the SSD. If you set the share to prefer and stop the docker/vm services and then run the mover it will move them off the array and onto the SSD. Also do this for appdata.

Make sure you're backing up your appdata and VMs!

Thanks, yeah, I already had to reconfigure everything once after my original NVMe drive kept overheating (this tiny motherboard put the slot on the back where it gets no airflow). I just found the plugin that lets me back up the appdata folder to the array drive.

Tiny Timbs fucked around with this message at 00:24 on Apr 3, 2023

Vaporware
May 22, 2004

Still not here yet.
I just got another 18TB Elements drive for a deal, but it's a return. I plugged it in, it sounds fine, connects up and identifies, unlike the last one I paid full price for... I'm running badblocks just to see if it has any easily identifiable defects. It's obviously been opened before (the case is on upside down) but the drive seems to be in good shape? Any other tests I should run before declaring it good enough for service?

Edit: the discussion I found the badblocks command in was very interesting. badblocks can't just run on a drive that big; you have to run it in chunks, lol
code:
badblocks: Value too large for defined data type invalid end block (4394573824): must be 32-bit value
sudo badblocks -svw -b 4096 /dev/sda 2197286912 0
sudo badblocks -svw -b 4096 /dev/sda 4394573824 2197286912

https://superuser.com/questions/692912/is-there-a-way-to-restart-badblocks

Vaporware fucked around with this message at 16:11 on Apr 3, 2023

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬



Do any Mac users know of any health monitoring software that works for drives in a DAS connected via USB-C? I’m guessing there’s not really anything particularly useful but just in case. I have everything backed up to backblaze anyway, but any kind of heads up that there’s an issue with a drive rather than waking up to something being offline is useful.

Wee
Dec 16, 2022

by Fluffdaddy
What's the...

best
easiest
free
(easiest and free preferable, but all ideas welcome)

...way to back up a WordPress blog and its database (cPanel and hosted with HostGator, if that helps) to my Synology NAS?

I've googled it, but I would like a more informed opinion.

Wee fucked around with this message at 06:09 on Apr 5, 2023

Thanks Ants
May 21, 2004

#essereFerrari


I guess you want a plugin that runs on a schedule and puts the backup in a zip file somewhere you can get to it, and then you run some sort of scheduled task on your NAS to download that file.
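The NAS side of that can be as small as a scheduled task along these lines - everything here (URL, paths, retention) is a placeholder, since the actual zip location depends on whichever backup plugin you pick:
code:
#!/bin/sh
# nightly pull of the plugin-generated backup onto the Synology
wget -q "https://example-blog.com/wp-backups/latest.zip" \
     -O "/volume1/backups/wordpress/blog-$(date +%F).zip"
# keep roughly a month of copies
find /volume1/backups/wordpress -name 'blog-*.zip' -mtime +31 -delete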

VelociBacon
Dec 8, 2009

Hoping someone can help - I can't remember the name of some software that was recommended. Basically it was like Plex, but better at hosting media that isn't movies or shows (in my case it's motorsport stuff). Anyone remember what this was called, or have another recommendation for this? Ideally I would be able to install an app like Plex on a phone/tablet and stream this media from outside the LAN.

Beve Stuscemi
Jun 6, 2001




VelociBacon posted:

Hoping someone can help - I can't remember the name of some software that was recommended. Basically it was like Plex, but better at hosting media that isn't movies or shows (in my case it's motorsport stuff). Anyone remember what this was called, or have another recommendation for this? Ideally I would be able to install an app like Plex on a phone/tablet and stream this media from outside the LAN.

Jellyfin

VelociBacon
Dec 8, 2009


Yeah that was it thanks!

Xenix
Feb 21, 2003
I think I posted about this here a few years ago, but I have a Synology DiskStation 415+, with the Intel Atom processor that goes bad. Well, that was fixable with a resistor soldered onto the motherboard, and it worked well for at least 2 more years. However, we recently had a bunch of power outages in the area due to storms, and after one of the outages the DiskStation wouldn't turn back on. It shows green lights for the drives, and the power light blinks, just like when the processor goes bad.

I opened it back up, couldn't find anything wrong with the resistor or solder joint, so I put it all back together and tried again. I had the same problem, so I removed the resistor and soldered a new one in its place (note: I am not very experienced at doing these kinds of repairs, so it's totally possible I hosed up somewhere). I'm still having the problem.

1) Is there another known problem with this unit that might cause this? 2) Is pursuing a repair even worth my time, or is an almost 10-year-old unit with known bad hardware just a lost cause at this point?
