|
Eletriarnation posted:Maybe in a mirror, but as far as I understand it the performance characteristics of distributed parity topologies like RAID-5/6/Z* have more similarities to striped arrays. You of course have a lot more CPU overhead, and at any given time some subset of your disks is reading/writing parity blocks that don't contribute to your final application-available bandwidth. Still, modern CPUs are fast so that's not much of a bottleneck with HDDs, and you can absolutely get very fast numbers for sustained, sequential transfers. Ah, neat. In that case I second what Wibla said. Make a raidz1 or raidz2 of your drives and you're good.
|
# ? Mar 31, 2023 21:37 |
|
|
# ? May 30, 2024 10:47 |
|
Jim Silly-Balls posted:I do have a cache 1TB SATA SSD in my unraid, but again, it's the limitations of a single disk that come into play. hol up! You have all that poo poo sitting already? Is that an SFF (16 bay) DL380? I would lab the DL380 with 8x1TB setup in striped ZFS mirrors (one pool with multiple mirrored vdevs) and see how it performed for that use case. That'd get you around 3.something TB formatted capacity for actual high-speed storage, with reasonable redundancy. You could probably run them in RAIDZ2 and still get good enough performance for your needs. You can also get cheap ($20) PCI-e riser cards that will fit a single NVMe drive. I have one in my DL360p Gen8 and it works great. There are versions with multiple NVMe slots, but they require PCI-e bifurcation and your mileage will vary there.
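For reference, the striped-mirrors layout suggested above would look something like this in ZFS terms. This is only a sketch - the pool name and device names are hypothetical, so substitute your own (ideally stable by-id or label paths):

```shell
# One pool, four mirror vdevs; ZFS stripes writes across all four.
# 8x1TB yields ~4TB raw, i.e. the "3.something TB" formatted figure above.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7

# Verify the layout afterwards:
zpool status tank
```

Losing any one disk is survivable; losing both disks of the same mirror is not, which is the trade-off against RAIDZ2.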
|
# ? Mar 31, 2023 21:38 |
|
VostokProgram posted:I thought each vdev only gets the bandwidth of its slowest drive? Each vdev gets the IOPS of its worst-performing drive, but throughput of a multi-disk vdev can be much higher than single-disk throughput.
|
# ? Mar 31, 2023 22:16 |
|
Wibla posted:hol up! Yeah, it's all here just sitting, it's an 8-bay 2.5" drive DL380. I do have a free PCI-E slot after I would add the SFP card to it, so the NVMe is something to look at then. Beve Stuscemi fucked around with this message at 22:29 on Mar 31, 2023 |
# ? Mar 31, 2023 22:26 |
At AsiaBSDCon, Alexander Motin (a FreeBSD and ZFS developer) is going to be presenting on ZFS Data Path, Caching and Performance on April 2nd. He's one of the people (Allan Jude, another FreeBSD and ZFS developer, being the other that I know of) who's working on speeding up ZFS on NVMe, since when ZFS was invented in 2001-2003, NVMe wasn't even a gleam in someone's eye (well, as far as I know, although I know we all wished for faster bandwidth for disks, even back then).

Combat Pretzel posted:L2ARC maps directly to ZFS filesystem blocks. It's 70 bytes of header per block.

Samba is always going to be behind SMB, because SMB is a proprietary protocol. NFS, on the other hand, is an open protocol - and on top of that, it's also actively used in a ton of high-performance systems, so its implementations tend to be better optimized.

Jim Silly-Balls posted:I had to check the hardware profile, but it looks like bits

Motronic posted:I know this is your schtick, but in this case the suggested reason to use ZFS, as you very well know but don't want to say "oh, you were right, I missed it" so that you can keep on well acksuallying instead, is: because OP needs a pool of drives. Not a single drive. Yes, other file systems can ackshually do that too. But I chose ZFS as the example because of your response.

Also, I think we need to agree on terminology here. Any array of drives, irrespective of whatever filesystem goes on top, can achieve 10Gbps. Pooled storage, which is what ZFS does, allows you to combine arbitrary collections of arrays and stripe the data across each of these arrays (which ZFS calls vdevs). I never once mentioned using a single drive, but I should've been more explicit about Jim using ZFS, you're right - I just didn't want to have the discussion I alluded to above; I guess I'm doomed if I do, and doomed if I don't.

Wibla posted:Can we not have this stupid slapfight again?
500MBps is just about what you can achieve from a single Intel 520 480GB SSD, which is what I have in my workstation (even if the motherboard's chipset is dead, because it's more than a decade old).

Jim Silly-Balls posted:Yeah, it's all here just sitting, it's an 8-bay 2.5" drive DL380. I do have a free PCI-E slot after I would add the SFP card to it, so the NVMe is something to look at then.

And even if you do use the unsupported commands, I'm not sure it presents the disks as initiator targets - which is what you want for ZFS. Also, with 128GB of memory, you're definitely fine to use an NVMe SSD for L2ARC so that you can fit an entire video project into a read-cache.

EDIT: I finally got around to upgrading my always-online HPE Gen10+ Microserver. It seems like something's changed between FreeBSD 12.0 and 13.1, because all of a sudden diskinfo -v ada0 shows the physical path, and sesutil map shows:

pre:
ses0:
    Enclosure Name: AHCI SGPIO Enclosure 2.00
    Enclosure ID: 3061686369656d30
    Element 0, Type: Array Device Slot
        Status: Unsupported (0x00 0x00 0x00 0x00)
        Description: Drive Slots
    Element 1, Type: Array Device Slot
        Status: OK (0x01 0x00 0x00 0x00)
        Description: Slot 00
        Device Names: pass0,ada0
    Element 2, Type: Array Device Slot
        Status: OK (0x01 0x00 0x00 0x00)
        Description: Slot 01
        Device Names: pass1,ada1
    Element 3, Type: Array Device Slot
        Status: OK (0x01 0x00 0x00 0x00)
        Description: Slot 02
        Device Names: pass2,ada2
    Element 4, Type: Array Device Slot
        Status: OK (0x01 0x00 0x00 0x00)
        Description: Slot 03
        Device Names: pass3,ada3
    Element 5, Type: Array Device Slot
        Status: Not Installed (0x05 0x00 0x00 0x00)
        Description: Slot 04
    Element 6, Type: Array Device Slot
        Status: Not Installed (0x05 0x00 0x00 0x00)
        Description: Slot 05

BlankSystemDaemon fucked around with this message at 22:58 on Mar 31, 2023 |
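To put the 70-bytes-per-block header figure quoted above into perspective, here's a quick back-of-the-envelope calculation. Sizes are hypothetical, and it assumes the default 128K recordsize:

```shell
# ARC memory consumed just to index an L2ARC device.
l2arc_bytes=$(( 1024 * 1024 * 1024 * 1024 ))  # hypothetical 1 TiB cache SSD
recordsize=$(( 128 * 1024 ))                  # default 128K ZFS records
blocks=$(( l2arc_bytes / recordsize ))        # number of cached blocks
header_bytes=$(( blocks * 70 ))               # ~70 B of ARC header per block
echo "$(( header_bytes / 1024 / 1024 )) MiB of RAM spent indexing the L2ARC"
```

So with 128GB of memory, the headers for even a large L2ARC are a rounding error at 128K records; it's small-recordsize datasets where that overhead balloons.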
|
# ? Mar 31, 2023 22:30 |
|
BlankSystemDaemon posted:Samba is always going to be behind SMB, because SMB is a proprietary protocol. --edit: Also, regular SMB peaks at 1.8GB/s here, until ZFS decides to throttle incoming writes because it decided the Combat Pretzel fucked around with this message at 00:19 on Apr 1, 2023 |
# ? Apr 1, 2023 00:13 |
Combat Pretzel posted:Probably. But the kernel module implementation of SMB, called ksmbd, can actually do SMB Direct. So I'm not sure why Samba is dragging their balls across the ground in that regard. Besides, the point was that with NFS vs Samba on the same machine, NFS tends to perform better. Not that one machine might get a higher number than another. It's probably the disk caches absorbing the data at 1.8GBps, then dropping to the actual write speed once they're filled. The dirty-data buffer writes asynchronous data to disk every 5 seconds or whenever it fills up (it defaults to 10% of system memory). BlankSystemDaemon fucked around with this message at 00:41 on Apr 1, 2023 |
|
# ? Apr 1, 2023 00:34 |
|
Being a swiss cheese is unrelated to the feasibility of implementing SMB Direct, though.
|
# ? Apr 1, 2023 00:40 |
Combat Pretzel posted:Being a swiss cheese is unrelated to the feasibility of implementing SMB Direct, though. And now I have a mental image of a piece of swiss cheese with network cables going into the holes. Great.
|
|
# ? Apr 1, 2023 00:42 |
|
Zorak of Michigan posted:Each vdev gets the IOPS of its worst-performing drive, but throughput of a multi-disk vdev can be much higher than single-disk throughput.
|
# ? Apr 1, 2023 00:55 |
CopperHound posted:Wasn't there some testing that showed mirror vdevs had slightly better read iops? In reality, the firmware and hardware caching can probably introduce enough variability that it's hard to say for sure.
|
|
# ? Apr 1, 2023 01:04 |
|
ZFS *read* performance should definitely be faster on mirrored devices, thanks to both load balancing of reads and queuing reads to least-busy drives: https://openzfs.org/wiki/Features#Improve_N-way_mirror_read_performance ZFS write performance is limited by the slowest device in the vdev, as mentioned above, though.
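If anyone wants to measure it rather than argue about it, something like this fio run against a dataset on the pool would show the mirror read scaling. The directory path and sizes are hypothetical, and it assumes fio is installed:

```shell
# Parallel random reads - a mirror can service these from either side.
fio --name=mirror-randread \
    --directory=/tank/fiotest \
    --rw=randread --bs=128k --size=4g \
    --numjobs=4 --ioengine=posixaio \
    --group_reporting
```

Running the same job with --rw=write should collapse the numbers back toward single-disk speed, per the write limitation above.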
|
# ? Apr 1, 2023 01:14 |
|
Wibla posted:I would NOT buy a bunch of SATA SSD's in 2023. At least not without a clearly defined goal Oh no, I just ordered an Intel 670p 2tb yesterday! Please forgive me.
|
# ? Apr 1, 2023 02:05 |
|
BlankSystemDaemon posted:HPE branded Microsemi RAID controller? It is an HPE branded controller, although I don’t know what kind offhand without opening it up, and it’s at my office right now. A cursory google suggested that it has an inbuilt HBA mode that you can switch to.
|
# ? Apr 1, 2023 02:40 |
|
CopperHound posted:Wasn't there some testing that showed mirror vdevs had slightly better read iops?
|
# ? Apr 1, 2023 08:51 |
|
Moey posted:Oh no, I just ordered an Intel 670p 2tb yesterday! For your sins, you have to move 20 workstations alone, to somewhere with no power and network drops (jk). I ordered a 2TB KC3000 yesterday.
|
# ? Apr 1, 2023 09:10 |
Less Fat Luke posted:ZFS *read* performance should definitely be faster on mirrored devices, thanks to both load balancing of reads and queuing reads to least-busy drives: Beve Stuscemi posted:It is an HPE branded controller, although I don’t know what kind offhand without opening it up and it’s at my office right now. A cursory google suggested that it has an inbuilt HBA mode that you can switch to A lot of RAID controllers think it's fine to just add each individual disk in a raid0 of its own, but this doesn't work: it locks you into using RAID controllers that support the vendor's RAID implementation, and worse yet, it usually still means that ZFS has no control over disk and cache flushing (which it can't work properly without). The only way I know of is to try it, then use hd(1) or similar tools to look at the raw/character device (usually at the beginning, the end, or both), and then try moving the disk to another machine entirely and repeating. BlankSystemDaemon fucked around with this message at 11:10 on Apr 1, 2023 |
|
# ? Apr 1, 2023 11:08 |
Double-post, but it's somewhat related to the discussion of ZFS and NFS in that there's a lot better instrumentation for observability. It's also a good article in case anyone has been wondering about how to go about determining the size of writes, whenever I've brought that up in the past. Samba, in theory, could add dtrace compatibility via USDT - but they haven't done so yet. In theory it'd also be possible for FreeBSD to import the Illumos SMB code, because it's a complete reimplementation of the SMB protocol - but in practice, it's a big job, because it involves not just SMB but importing LDAP (instead of it being integrated via PAM), quite a few additions to the VFS, and probably a whole lot more.
|
|
# ? Apr 1, 2023 13:33 |
Sorry, this probably isn't the best thread for this, but I'm not sure which one would be more appropriate. I've been using a work (University) supplied Google Drive to share lab notebooks, data, protocols etc (think lots of Word and Excel docs and some image files) between myself and my grad and undergrad students. Due to some annoying changes on the University end, I'd like to get a non-Google product that does something similar. Ideally I'd like at least 1TB of cloud storage with an app ecosystem that's cross platform. I don't care if we all share a single account etc if that is cheaper. The data will also have separate backups to physical and S3 Glacier storage at intervals (if the platform also facilitates this, that would be nice). So, any recs on a Google Drive replacement for shared cloud storage between <10 people?
|
|
# ? Apr 1, 2023 14:28 |
|
We use OneDrive at work and it's fine. The online/browser version of Office is frankly good enough to not open a native app most of the time, and you can't really get more integrated than that if those are the apps you use. It doesn't have a native Linux client; there are third-party tools, but I just work via the browser on my Linux laptop. Not sure about backups to S3, but google suggests a number of ways.
Aware fucked around with this message at 14:36 on Apr 1, 2023 |
# ? Apr 1, 2023 14:31 |
Aware posted:We use OneDrive at work and it's fine. The online/browser version of Office is frankly good enough to not open a native app most of the time and you can't really get more integrated than that if those are the apps you use. Honestly the browser office apps don't work well for reference manager and other excel plugin stuff we use so that's a no go for that part. Otherwise, how problematic is it for OneDrive to run a shared drive between multiple users? My only experience with it has been as a personal sync drive between my own multiple windows systems. ie, most of the things on my OneDrive are items that would never be shared to anyone at work etc.
|
|
# ? Apr 1, 2023 14:36 |
|
That Works posted:Honestly the browser office apps don't work well for reference manager and other excel plugin stuff we use so that's a no go for that part. I think SharePoint is actually the preferred solution for real shared folders between users, but we mostly just give access to folders in our own OneDrives to a bunch of users. I don't actually store anything work related locally, it's all in OneDrive. Can't help on the browser app/plugin side, but basically on Windows it's all going to show up as a folder in Explorer, so you just interact with the files as normal, plus realtime multiuser editing in the native apps. I'm not an O365 admin though, just a user, so I probably can't add much further other than it 'just works' for the most part.
|
# ? Apr 1, 2023 14:41 |
|
I guess I should post just to be clear - this is my works O365 implementation. For my personal account I've had no issues doing shared folders with my fiance and her personal onedrive account if that helps and is what you're looking at. I think you can make a Microsoft account with any email for this.
|
# ? Apr 1, 2023 14:52 |
|
BlankSystemDaemon posted:Huh, I'd completely forgotten about both these. I’ll do some research on it. I know the perc controller in my current Dell should be fine, so worst case scenario I can swap that in because I believe it’s normal pci-e
|
# ? Apr 1, 2023 15:35 |
|
Any of the controller options on a DL380 G9 should work, but IMO they aren't ideal. Even in HBA mode they require funky commands to play nice with smartctl, for example. I ended up swapping mine for another LSI 2308 just to get everything on known good hardware. I was having issues with a SSD randomly slowing the whole system down but I suspect that was actually an issue with the drive and not the controller. Bonus, if your mezzanine card controller is a P840ar, those go for stupid money still on ebay. Mine is/was and the only reason I haven't sold it yet is because whoever put my server together last stripped almost all of the torx screws that hold it together. Likewise if you want to maximize your pcie slots, consider finding a FlexLOM 10G NIC to use the dedicated slot instead of one of your regular PCIe slots. That'll leave you more room in the future for other HBAs or NVMe SSDs. I don't have any ability to use 10G where my server is at, so I'm considering figuring out how to make a card that adapts the FlexLOM slot to m.2. The slot is just PCIe, except with a slightly different form factor and pinout because of course it is.
|
# ? Apr 1, 2023 16:11 |
Aware posted:I guess I should post just to be clear - this is my works O365 implementation. For my personal account I've had no issues doing shared folders with my fiance and her personal onedrive account if that helps and is what you're looking at. I think you can make a Microsoft account with any email for this. Thanks, that helps
|
|
# ? Apr 1, 2023 16:30 |
|
I have a Synology DiskStation 218+ that I have a Jellyfin Docker container on. I have looked around, but haven't been able to figure this out: what is the best way (if it's possible) to update the container without losing any of the configuration?
|
# ? Apr 1, 2023 17:44 |
hooah posted:I have a Synology DiskStation 218+ that I have a Jellyfin Docker container on. I have looked around, but haven't been able to figure this out: what is the best way (if it's possible) to update the container without losing any of the configuration? Do you have the config directory mounted as a persistent volume claim as recommended? If so it will survive a container update and should pull in your existing settings. https://jellyfin.org/docs/general/installation/container/ Nitrousoxide fucked around with this message at 18:32 on Apr 1, 2023 |
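For what it's worth, the usual update dance looks like this from the command line. This assumes the official jellyfin/jellyfin image; the container name and host paths are hypothetical, so match whatever your Synology setup actually uses:

```shell
# Grab the new image, then recreate the container with the same mounts.
docker pull jellyfin/jellyfin:latest
docker stop jellyfin
docker rm jellyfin
docker run -d --name jellyfin \
  -v /volume1/docker/jellyfin/config:/config \
  -v /volume1/docker/jellyfin/cache:/cache \
  -v /volume1/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin:latest
```

Because /config lives on the host, removing and recreating the container doesn't touch your settings.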
|
# ? Apr 1, 2023 18:27 |
|
Nitrousoxide posted:Do you have the config directory mounted as a persistent volume claim as recommended? If so it will survive a container update and should pull in your existing settings. Ok, I think I do. I probably did that the last time I updated Jellyfin and lost my whole setup.
|
# ? Apr 1, 2023 18:34 |
hooah posted:Ok, I think I do. I probably did that the last time I updated Jellyfin and lost my whole setup. If you aren't sure you can copy your /config directory in your container to your host machine before you update the container with code:
https://docs.docker.com/engine/reference/commandline/cp/
|
|
# ? Apr 1, 2023 18:38 |
|
hooah posted:Ok, I think I do. I probably did that the last time I updated Jellyfin and lost my whole setup. 'docker ps' and 'docker inspect ID' should be able to show if it's persistent.
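Concretely, something like this shows whether /config is backed by a host path (container name hypothetical):

```shell
# List each mount as host-path -> container-path.
docker inspect -f \
  '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' \
  jellyfin
```

If /config shows up with a host-side source, it survives a container recreate; if not, it's container-local and will be lost.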
|
# ? Apr 1, 2023 20:45 |
|
Matt Zerella posted:Set any VMs or Docker shares to "Prefer" and run them strictly off the SSD. If you set the share to prefer and stop the docker/vm services and then run the mover it will move them off the array and onto the SSD. Also do this for appdata. Thanks, yeah, I already had to reconfigure everything once after my original NVME drive kept overheating (this tiny motherboard put the slot on the back where it gets no airflow). I just found the plugin that lets me back up the appdata folder to the array drive. Tiny Timbs fucked around with this message at 00:24 on Apr 3, 2023 |
# ? Apr 3, 2023 00:21 |
|
I just got another 18tb elements drive for a deal, but it's a return. I plugged it in, it sounds fine, connects up and identifies, unlike the last one I paid full price for... I'm running badblocks just to see if it's got any easily identifiable defects. It's obviously been opened before (the case is on upside down), but the drive seems to be in good shape? Any other tests I should run before declaring it good enough for service? edit: the discussion I found the badblocks command in was very interesting. badblocks can't just run on a drive that big, you have to run it in chunks, lol code:
then sudo badblocks -svw -b 4096 /dev/sda 4394573824 2197286912 https://superuser.com/questions/692912/is-there-a-way-to-restart-badblocks Vaporware fucked around with this message at 16:11 on Apr 3, 2023 |
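For anyone curious where those magic numbers come from: with -b 4096, an 18TB drive has more blocks than badblocks can count in a single run (if I remember right, it chokes somewhere past 2^32 blocks), so the test gets split in half. The arithmetic is just this - a sketch, with the block count taken from the command above:

```shell
# Total 4 KiB blocks on the drive, and the halfway point for the two passes.
total_blocks=4394573824
half=$(( total_blocks / 2 ))
echo "first pass ends at block $(( half - 1 )), second pass starts at $half"
echo "second pass: badblocks -svw -b 4096 /dev/sda $total_blocks $half"
```

badblocks takes last-block then first-block as arguments, which is why the big number comes before the small one in the command quoted above.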
# ? Apr 3, 2023 15:51 |
|
Do any Mac users know of any health monitoring software that works for drives in a DAS connected via USB-C? I’m guessing there’s not really anything particularly useful but just in case. I have everything backed up to backblaze anyway, but any kind of heads up that there’s an issue with a drive rather than waking up to something being offline is useful.
|
# ? Apr 3, 2023 16:43 |
|
What's the... best, easiest, free (easiest and free preferable, but all ideas welcome) ...way to back up a Wordpress blog and its database (cpanel and hosted with Hostgator if that helps) to my Synology NAS? I've googled it, but I would like a more informed opinion. Wee fucked around with this message at 06:09 on Apr 5, 2023 |
# ? Apr 5, 2023 05:33 |
|
I guess you want a plugin that runs on a schedule and puts the backup in a Zip file somewhere that you can get to it, and then you run some sort of scheduled task on your NAS to download that file.
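One minimal sketch of that, assuming the Hostgator account allows SSH (cpanel plans often do) and that the NAS pulls on a schedule - every path, hostname, and credential below is hypothetical:

```shell
# On the web host (cron): dump the DB and archive the site files.
mysqldump -u wp_user -p'hunter2' wp_db | gzip > "$HOME/backups/wp_db.sql.gz"
tar czf "$HOME/backups/wp_files.tar.gz" -C "$HOME/public_html" .

# On the Synology (Task Scheduler): pull the archives over SSH.
rsync -az wp_user@example.hostgator.com:backups/ /volume1/backup/wordpress/
```

If SSH isn't available, a backup plugin (something like UpdraftPlus) that drops a zip somewhere retrievable, plus a scheduled download task on the NAS, gets you the same result.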
|
# ? Apr 5, 2023 16:28 |
|
Hoping someone can help, I can't remember the name of some software that was recommended, basically it was like Plex but better at hosting media that isn't movies or shows (in my case it's motorsport stuff). Anyone remember what this was called or have another recommendation for this? Ideally I would be able to install an app like Plex on phone/tablet and be able to stream this media from outside of the LAN.
|
# ? Apr 5, 2023 19:12 |
|
VelociBacon posted:Hoping someone can help, I can't remember the name of some software that was recommended, basically it was like Plex but better at hosting media that isn't movies or shows (in my case it's motorsport stuff). Anyone remember what this was called or have another recommendation for this? Ideally I would be able to install an app like Plex on phone/tablet and be able to stream this media from outside of the LAN. Jellyfin
|
# ? Apr 5, 2023 19:34 |
|
Beve Stuscemi posted:Jellyfin Yeah that was it thanks!
|
# ? Apr 5, 2023 20:54 |
|
|
|
I think I posted about this here a few years ago, but I have a Synology Diskstation 415+, with the Intel Atom processor that goes bad. Well, that was fixable with a resistor being soldered onto the motherboard, and it worked well for at least 2 more years. However, we recently had a bunch of power outages in the area due to storms, and after one of the outages, the Diskstation wouldn't turn back on. It shows green lights for the drives, and the power light blinks, just like when the processor goes bad. I opened it back up, couldn't find anything wrong with the resistor or solder joint, so I put it all back together and tried again. I had the same problem, so I removed the resistor and soldered a new one in its place (note: I am not very experienced at doing these kinds of repairs, so it's totally possible I hosed up somewhere). I'm still having the problem. 1) Is there another known problem with this unit that might cause this problem? 2) Is pursuing a repair even worth my time, or is an almost 10 year old unit with known bad hardware just a lost cause at this point?
|
# ? Apr 7, 2023 20:05 |