|
BlankSystemDaemon posted:I think it's possible that I'm missing something, but didn't Computer Viking ask about SAS, as in Serial Attached SCSI - rather than Network Attached Storage? Indeed. The reason I ask is that I can stuff eight disks in a tower server and it will keep the disks, SAS controller, and SAS expander/backplane cool enough while making about as much noise as a gaming PC, or I can put 12 disks in a 2U enclosure that sounds like a vacuum cleaner fighting a deep carpet. Logically, it should be possible to build a similar tower without $2k of computing hardware inside, with an SFF-whatever plug at the rear, that could host those same eight disks for a comparable cost and noise level - or less. E: Consider the Synology DX1222. That's a 12 bay SAS expander that I know from experience is reasonably quiet and compact and seems to work fine - but from what I can read it only works with a Synology controller, despite using SFF-8644 cables. Does anyone make a generic version of that? Computer viking fucked around with this message at 22:21 on Mar 9, 2023 |
# ? Mar 9, 2023 22:10 |
|
|
You can, but you're going to have to roll your own. Serve the home has guides that are a bit dated but the general idea is still the same. You need a SAS expander, something to handle the power supply, and the appropriate cables and brackets to connect the downstream ports to the drives and the upstream port to an external SAS port.
|
# ? Mar 9, 2023 22:13 |
|
gen10+ is perfect in every way except:
no nvme slot
only one pcie slot
no quicksync support

a second pcie slot would make it the absolute perfect home NAS
|
# ? Mar 9, 2023 22:13 |
|
I built a 4U with dual X5676 CPUs, a bunch of 4TB drives, SAS HBA + SAS expander + 10gbe and it was about as quiet as my regular PC, but I used Noctua coolers and fans. A used R6 will do fine for 8 drives etc, and it won't make much noise. That 5.25" thing with a bunch of mSATA drives is not my idea of fun, that's going to be both expensive (for what you get) and noisy. A single pcie 4.0 NVMe will outperform a bunch of mSATA drives anyway, and for bulk storage you want spinning rust. E: Here's an old build from 2009 that got some incremental upgrades along the way moving from a 3ware 9500-12 SATA raid controller to two LSI 9211 cards. Note the obligatory fan - and the IDE system drive Wibla fucked around with this message at 22:19 on Mar 9, 2023 |
# ? Mar 9, 2023 22:15 |
|
IOwnCalculus posted:You can, but you're going to have to roll your own. Serve the home has guides that are a bit dated but the general idea is still the same. You need a SAS expander, something to handle the power supply, and the appropriate cables and brackets to connect the downstream ports to the drives and the upstream port to an external SAS port. Right. It just feels like something someone would sell, and for work I'd like less of my own mediocre handiwork in there. Oh well, it sounds perfectly manageable, and I should have most of the year before they start running low on space.
|
# ? Mar 9, 2023 22:28 |
|
e.pilot posted:gen10+ is perfect in every way except: so, like, this should be a dealbreaker for most then I am just sour that there is no great OOTB solution for a home plex server with 100TB+ capacity that is affordable
|
# ? Mar 9, 2023 23:27 |
|
Computer viking posted:Right. It just feels like something someone would sell, and for work I'd like less of my own mediocre handiwork in there. Oh well, it sounds perfectly manageable, and I should have most of the year before they start running low on space. Someone does: you get a fractal define 7 XL, another 3 of the add-on HDD cages & sleds, and you have a full tower case that can fit 18 3.5 drives. You spend $350 for the privilege, but that's a shitload of drives and a low-noise case for you. VVV edit: umm yeah seems I don't math so good. it has 8 drive slots ootb and space for up to 18, so 5 packs of HDD sleds. Though I was also double-counting how many accessories you need to buy -- the extra sleds don't go into cages, so I think you don't need to buy extra cages? So my price tag was actually high. Anyways it's a lot of money, look at the manuals first to figure out what you need. Klyith fucked around with this message at 01:01 on Mar 10, 2023 |
# ? Mar 9, 2023 23:42 |
|
I did this and it's awesome (well, Meshify 2 XL which is the same frame). Also I needed 5 of the 2-pack drive trays to get up to 16 drives so if you go this route check your math
|
# ? Mar 10, 2023 00:26 |
|
Ihmemies posted:It means the Software is absolute pure trash, so I give huge shits. Surely there must be other software too? I've had a love/hate relationship with Veeam for over a decade now. It works great, until it decides to start leaving orphaned snapshots and just murdering VMs. Good times.
|
# ? Mar 10, 2023 01:36 |
I'm not sure I understand how that's different from filling a Lian Li PC343B with those 5x3.5" in 3x5.25" bays (which nets you 30 drives in total). That's a case that dates back to when PATA was still the most common interface in consumer devices - and you'd still be operating it today basically the same way you would have operated the set of PATA disks back then.

A modern SAS setup involves all sorts of fancy things like enclosure identification and fault notification (so that if a drive fails, the LED on the outside of the bay automatically lights up), auto-expansion and auto-replacement (so that when you pull a drive that's failing/dead and insert a bigger drive, it automatically starts the resilver process, and once every drive in that part of the array has been replaced, it automatically expands to fit the available amount of space), and using multiple controllers and multiple data cables to each disk to ensure that there's full datapath redundancy, as well as being able to daisy-chain multiple enclosures.

All of that, mind you, is possible with ZFS (at least on FreeBSD, though I don't see any reason Linux shouldn't be capable of it, as all of the software necessary is either expected of any modern OS or cross-platform enough to run on any modern OS).

BlankSystemDaemon fucked around with this message at 01:39 on Mar 10, 2023
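For what it's worth, the auto-replacement and auto-expansion behaviour described above maps to two pool properties in ZFS. A minimal sketch, assuming a pool named tank (the name is hypothetical):

```shell
# Assumed pool name "tank" -- substitute your own.
# autoreplace: a blank disk inserted into the slot of a failed one
# is brought into the pool and resilvered automatically.
zpool set autoreplace=on tank
# autoexpand: once every disk in a vdev has been swapped for a larger
# one, the pool grows to use the new capacity.
zpool set autoexpand=on tank
# Check both at once:
zpool get autoreplace,autoexpand tank
```

The fault-LED part is handled outside the pool itself, by the platform's ZFS event daemon (zfsd on FreeBSD, zed on Linux) talking to the SES device on the backplane.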
|
# ? Mar 10, 2023 01:36 |
|
My back hurts just thinking of that many drives in a box.
|
# ? Mar 10, 2023 02:49 |
|
Matt Zerella posted:My back hurts just thinking of that many drives in a box. You're not wrong. Step one of deracking either my server or the DS4246 attached to it is "pull all the spindles".
|
# ? Mar 10, 2023 03:08 |
|
IOwnCalculus posted:You're not wrong. Step one of deracking either my server or the DS4246 attached to it is "pull all the spindles". I have 5 drives in my unraid server and almost died trying to move it without a dolly when we got our new place.
|
# ? Mar 10, 2023 04:10 |
|
My Supermicro CSE-836 fully loaded with drives and PSUs is excellent for deadlift training. Just remove some drives to de-load on a tough set
|
# ? Mar 10, 2023 18:23 |
I once watched someone slide out a SAS enclosure on its rails, with some 40 drives loaded, and the rack subsequently tipped forward and landed face-down.
|
|
# ? Mar 10, 2023 18:46 |
|
The real reason to have a UPS in your rack is just to anchor that thing when you're working on a server, lol.
|
# ? Mar 10, 2023 19:18 |
Scruff McGruff posted:The real reason to have a UPS in your rack is just to anchor that thing when you're working on a server, lol. I will say though, I've never seen a PFY move that far in such a short amount of time.
|
|
# ? Mar 10, 2023 19:20 |
|
In good news: I've finally found the space and time to do a full "restore everything" test of the tape backup instead of just picking a few random folders. I'm getting about 133MB/s back from LTO-8 tape, which feels slightly low - but on the other hand it does seem to be working perfectly, so I'll just leave it alone (apart from the tape swap in the middle). As a bonus, I'm doing this to move from an old to a new fileserver, so I actually get to use the mythical "set different properties and restore everything from backups" approach to changing the compression method for files in a dataset. I'm moving from lz4 to zstd, and I'm curious to see what sort of difference it makes. Computer viking fucked around with this message at 13:57 on Mar 13, 2023
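The property flip itself is a one-liner. A sketch, assuming a dataset called tank/data and OpenZFS 2.0+ for zstd support (both assumptions, not from the post):

```shell
# compression only applies to newly written blocks, which is why a full
# restore from backup is the clean way to recompress an existing dataset.
zfs set compression=zstd tank/data
# ...restore everything from tape into the dataset...
# then see how it paid off compared to lz4:
zfs get compressratio tank/data
```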
# ? Mar 13, 2023 13:51 |
|
I'm still poking thru installing ZFS. I finally got everything mounted properly and the kernel headers sorted. I immediately found a conflict between the tutorial I was following and another very similar SBC. The hardware-specific tutorial recommends using /dev/sdx and uses USB disks extensively because of the rockpro64's single PCI slot. https://github.com/nathmo/makeZFS_armbian/tree/c59583bc03b884be15822ec93040b9fec970784b

Another easy-to-read ZFS tutorial I found says to use UUIDs so you can replace a disk later: https://wiki.kobol.io/helios64/software/zfs/install-zfs/

This feels like a bigger design question I was completely unaware of. No big deal, or is the UUID thing something I should look up? I know the solution to this is to RTFM, but I am still struggling with Linux basics: what partition options are correct or useful, how to check what services are running, and how to debug when I get error messages in logs (even finding things like boot logs), as most of my knowledge is either Windows-specific or circa early 2000s. Is there a better document to follow? I tried TLDR man pages and they help because of the included examples. Anything else to help get updated? There seem to be a lot of SEO garbage Linux tutorial sites wasting my time. Things like not partitioning the full SSD because some sort of firmware error-correcting utility eats the free space? That's a thing?

Also, I'm planning on booting off the eMMC for now because it's easy, but I'm also interested in making the install robust enough to survive boot disk failure, and I have no idea what that involves on Linux. Back it up to a USB stick? lol, make an Acronis disk?
|
# ? Mar 13, 2023 14:40 |
|
Vaporware posted:This feels like a bigger design question I was completely unaware of. No big deal or is the UUID thing something I should look up? The FAQ had a good explanation on that. Selecting /dev/ names when creating a pool (Linux)
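The short version of that FAQ entry: /dev/sdX names are assigned in probe order and can shuffle between boots, while /dev/disk/by-id names are stable. A sketch of both creating a pool with stable names and converting an existing one (the disk IDs here are made up):

```shell
# List whole-disk persistent names (by-id entries ending in -partN are partitions):
ls -l /dev/disk/by-id/ | grep -v part

# Hypothetical IDs -- substitute the ones from the listing above:
zpool create tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL1 \
    /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL2

# A pool already created with /dev/sdX names can be switched over:
zpool export tank
zpool import -d /dev/disk/by-id tank
```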
|
# ? Mar 13, 2023 17:23 |
It will never not be funny to me that in current year, Linux still has inconsistent device naming because of its floppy disk support.
|
|
# ? Mar 13, 2023 18:27 |
|
BlankSystemDaemon posted:It will never not be funny to me that in current year, Linux still has inconsistent device naming because of its floppy disk support. I was vividly reminded of that two weeks ago, when we updated a SAP HANA server and during boot-up it was dropped to a recovery console, and the only line shown on the screen was an error message about a PS/2 UART controller. When you're doing diagnostics in a rush, your Google searches may not immediately turn up the fact that it's a standard error message that can be ignored because the server doesn't have PS/2. As a professional Linux server admin, I hope I'm involved enough with it that I'm entitled to hate Linux. To balance things out, my coworker hates Windows more, and I probably less.
|
# ? Mar 13, 2023 18:45 |
Saukkis posted:As a professional Linux server admin I hope I'm enough involved with it, that I'm entitled to hate Linux. To balance things out my coworker hates Windows more and I probably less.
|
|
# ? Mar 13, 2023 18:47 |
|
Saukkis posted:The FAQ had a good explanation on that. Thanks! I will read up
|
# ? Mar 13, 2023 18:48 |
|
I have a problem. I got this for $100: It's a NetApp DE6600
|
# ? Mar 13, 2023 21:15 |
|
Woah. How big are those drives? And I'm sorry for your power bill...
|
# ? Mar 13, 2023 21:23 |
|
That's the sort of thing you run when you have acres of land and a solar farm
|
# ? Mar 13, 2023 21:26 |
|
Wibla posted:Woah. How big are those drives? And I'm sorry for your power bill... 60 x 3TB SAS. 1,100 Watts or so to run fully loaded. Yeah, I run bladeservers.
|
# ? Mar 13, 2023 21:36 |
|
Wibla posted:Woah. How big are those drives? And I'm sorry for your power bill... Maximum Power: 1,222W https://www.ibm.com/docs/en/ess-p8/2.0?topic=enclosures-netapp-de6600
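For a sense of scale, a back-of-the-envelope sketch of what that rating costs to run for a year; the $0.15/kWh rate is an assumption, not anything from the thread:

```python
# Rough annual running cost of a DE6600 at its rated maximum draw.
watts = 1222                     # maximum power from the IBM spec page
hours_per_year = 24 * 365
kwh_per_year = watts * hours_per_year / 1000   # ~10,705 kWh
rate_per_kwh = 0.15              # assumed electricity price in $/kWh
annual_cost = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, about ${annual_cost:.0f}/year")
```

Real-world draw will sit below the rated maximum (the poster reports ~1,100 W fully loaded), so treat this as an upper bound.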
|
# ? Mar 13, 2023 21:36 |
Rap Game Goku posted:Maximum Power: 1,222W There's a reason why hyperscalers use per-rack rectifiers with two bus-bars that blades blind-punch into.
|
|
# ? Mar 13, 2023 22:39 |
|
And here I thought I was being stupid running my SAS expander, NAS, and NAS 3.0 at 220W for a while. I also literally have solar power at home, although I'm not able to get off the grid due to capacity restriction ordinances by the city.
|
# ? Mar 14, 2023 03:31 |
|
necrobobsledder posted:I'm not able to get off the grid due to capacity restriction ordinances by the city. same I hate it
|
# ? Mar 14, 2023 03:41 |
|
e.pilot posted:gen10+ is perfect in every way except: This is exactly what I want... why doesn't this exist? e: if I was to purchase one, I'm guessing I'd want to use the one PCIe slot for a graphics card to help with the lack of QuickSync? Boner Wad fucked around with this message at 04:28 on Mar 14, 2023 |
# ? Mar 14, 2023 04:25 |
|
Boner Wad posted:This is exactly what I want... why doesn't this exist? yeah, I did that with a USB3.whatever external SSD for a while. It at least has 10gbps USB, so it worked fine, just a little ungainly. Now I have plex running in the ML30 I got a bit ago, and have the bizarre QNAP 10gbit and dual NVMe card in the gen10+.
|
# ? Mar 14, 2023 05:17 |
In TrueNAS I've got a user account "fletcher" that was created with the "Microsoft Account" and "Samba Authentication" checkboxes ticked. I have a dataset with this user set as the owner, which I use exclusively as a mapped drive on my Windows machine. I've been using WinSCP and a scheduled job on my Windows machine to sync some files between the NAS and a remote server. I'd like to move this scheduled sync job to a linux machine and started going down the route of using NFS & rsync (as I'm writing this I realize this could probably just be an Rsync Task in TrueNAS itself). However, I'm wondering if mixing linux & windows permissions like this is going to cause any issues? Or if I create the Rsync Task in TrueNAS with my "fletcher" as the user, everything will work just fine?
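Mixing them is generally fine as long as whatever writes into the dataset runs as (or chowns to) the same user Samba expects to own the files; otherwise access from the Windows mapped drive breaks. A sketch of the rsync side, with the host and paths made up for illustration:

```shell
# Pull from the remote server into the dataset, forcing ownership to the
# Samba user so the Windows mapped drive keeps working.
# --chown needs rsync 3.1+ and enough privilege on the receiving side.
rsync -av --delete \
    --chown=fletcher:fletcher \
    remoteuser@remote.example.com:/srv/data/ \
    /mnt/tank/fletcher-share/
```

An Rsync Task created in the TrueNAS UI with fletcher as the run-as user achieves the same result without NFS in the middle, which sidesteps the permission-mixing question entirely.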
|
|
# ? Mar 14, 2023 06:34 |
|
This is intriguing. https://youtu.be/E_an5heI1BU
|
# ? Mar 14, 2023 06:40 |
|
e.pilot posted:This is intriguing. I was for the first 30 seconds of the video then I stopped and decided to look for the price: https://www.newegg.com/thinkstation...&quicklink=true
|
# ? Mar 14, 2023 07:15 |
|
Cantide posted:I was for the first 30 seconds of the video then I stopped and decided to look for the price: Lol, this is the tagline for every ServeTheHome video
|
# ? Mar 14, 2023 12:58 |
|
"Serve the Home! For all your domestic requirements! Here's a Fibre Channel SAN!"
|
# ? Mar 14, 2023 13:10 |
|
|
Thanks Ants posted:"Serve the Home! For all your domestic requirements! Here's a Fibre Channel SAN!" It's the way of the future!
|
# ? Mar 14, 2023 13:49 |