|
redeyes posted:I wish I could find a case big enough for 2 x (3x5.25" iStar hotswap bays) plus a full size ATX mobo with full size RX480 GFX card. G-Prime posted:Finding a case with 6x5.25" externals is hard, honestly, but Newegg shows that they have 4 different ones available. They're all "gamer" cases, but the Rosewill one doesn't have a window and has TONS of cooling capabilities. And if you remove the front fans and replace them with unlighted ones, it's just a chunky, black case. Alternative: Rosewill RSV-L4412 + Tower stand w/casters Similar cost if you include hot swap bays, of which there are 12, no LED bullshit, and currently the case comes with a 500W power supply.
|
# ? Aug 13, 2017 19:44 |
|
|
|
evol262 posted:use ECC if you care about your data Yes. Is this now controversial? I admit I came off too strong at the start by saying anyone who cares is already using ECC, sure, but for the data I care about any source of preventable corruption is equivalent to me. I also think you're overselling how much ZFS protects you from memory errors; data still gets held and handled in RAM by your backup program, network stack, and every other program that wants to read or modify it. ZFS may protect you from some memory errors, but the protections from checksumming and ECC do not overlap. I think this thread overstates the dangers of bad reads from HDDs, or at least seriously understates the dangers of RAM, just because disk errors can be mitigated by software. Both are very rare events and data isn't more or less corrupt from either one. If there were some piece of software we could run that could mitigate memory errors like ZFS mitigates disk errors we'd all be running it. evol262 posted:You're gonna spend 2k on a NAS anyway Yeah, if you're building a high performance Xeon/Threadripper/Ryzen 7 server with tons of storage, which that was in direct response to. You brought up spending hundreds on a "Xeon and Xeon board (or Threadripper, or Ryzen, or whatever)," which makes it more than just a simple NAS to serve files. $2000 for a high end home server with tons of storage isn't unreasonable and at that point ECC ends up a small part of it. You can go with Ryzen and ECC instead of an i5/i7 and probably end up saving a bit of money. If you don't need high end CPU performance (i.e. $60 pentium + mobo combo tier) and just want to saturate one or two gigabit links (or to pair with a 10GbE PCIe NIC) you can get ECC for the same price as the cheapest new consumer level stuff. It's probably cheaper, actually, since old server ECC RAM is substantially cheaper than new DDR4 RAM, plus you'll get IPMI and multiple onboard NICs with teaming out of the deal. 
evol262 posted:My position is that you can build a performant, capable NAS with 24tb usable for $1000, which doesn't include ECC or ECC-capable chipsets. Okay. But you could also do it with ECC for the same price ($65 for a cpu + mobo) and same definition of performant - in the absence of the hard requirement for the matx form factor. Once you need a small form factor you do face a higher price floor for ECC (~200ish for a pentium and ECC capable matx mobo, though that would have IPMI too). But you're also paying $60+ more for a Node 804 or equivalent compared to a bare bones ATX case, which could have been another 3TB drive or could knock the price down to $600. Running a file system with high overhead like ZFS (particularly with AF drives since each tiny piece of metadata eats 4KB instead of 512-1024B) also ends up costing way more than ECC in the form of unusable space, on the order of multiple terabytes with large raidz1-3 pools. All my unimportant data would do just as well on mdadm or LVM, except that administering multiple tiers of storage like that would not be worth the hassle for ~$80 of HDD space and ZFS has other, unrelated, features like CoW snapshots. evol262 posted:"can't build a NAS for less than 2k" What a strange quote. Desuwa fucked around with this message at 20:13 on Aug 13, 2017 |
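The raidz space-overhead claim above can be sanity-checked with a simplified allocation model (this is an approximation of ZFS's raidz sector accounting, not actual ZFS code; the disk count and parity are made-up illustration values):

```python
import math

def raidz_sectors(block_bytes, ndisks, parity, ashift=12):
    """Approximate raw sectors consumed by one block on raidz.

    Each row of (ndisks - parity) data sectors gets `parity` parity
    sectors, and the allocation is padded to a multiple of (parity + 1).
    """
    sector = 1 << ashift  # 4 KiB sectors on AF drives (ashift=12)
    data = math.ceil(block_bytes / sector)
    rows = math.ceil(data / (ndisks - parity))
    total = data + rows * parity
    rem = total % (parity + 1)
    if rem:  # pad so freed allocations stay alignable
        total += (parity + 1) - rem
    return total

# A tiny 4 KiB metadata block on an 8-disk raidz2 with 4K sectors:
# 1 data sector + 2 parity = 3 sectors = 12 KiB of raw space (3x).
print(raidz_sectors(4096, ndisks=8, parity=2))        # 3
# A full 128 KiB record on the same pool lands near the ideal ratio.
print(raidz_sectors(128 * 1024, ndisks=8, parity=2))  # 45
```

On this model a 4 KiB metadata block burns 12 KiB of raw space, while a 128 KiB record pays close to the ideal 8/6 ratio — which is where "multiple terabytes of unusable space on large pools" comes from when you have lots of small blocks.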
# ? Aug 13, 2017 20:02 |
|
I get the impression that Intel and Nvidia both have to deal with a lot of patents when it comes to the hardware-accelerated pieces of their hardware. But like I said before, the industry's given up on hardware acceleration at scale for encode/decode outside of end-user devices, so the few who seem to care about this segment on non-mobile GPUs are basically home users trying to play pirated content, and streamers who will probably wind up with 8-core monstrosities that can easily transcode at least h.264 in realtime anyway.
|
# ? Aug 13, 2017 20:11 |
redeyes posted:I wish I could find a case big enough for 2 x (3x5.25" iStar hotswap bays) plus a full size ATX mobo with full size RX480 GFX card. BlankSystemDaemon fucked around with this message at 21:27 on Aug 13, 2017 |
|
# ? Aug 13, 2017 21:13 |
|
Nullsmack posted:I'm looking to build a new fileserver system. My current one is an older system running an AMD E450 chip. I liked that since it is low power and doesn't require a fan. Any modern equivalents without dropping nearly $1000 on a xeon-d processor and board? I got started on an Intel Bay Trail motherboard and then upgraded to an Intel Braswell board. Both were very low power and feature a CPU soldered onto the motherboard with a large heatsink. Both had no fan. Both made excellent file servers if you just want to run FreeNAS or Ubuntu. I think there's a modern equivalent. It might be Apollo Lake, but I'm not sure since I've moved up another level to a mainstream desktop processor.
|
# ? Aug 13, 2017 21:43 |
|
Desuwa posted:Yes. Is this now controversial? Desuwa posted:I admit I came off too strong at the start by saying anyone who cares is already using ECC, sure, but for the data I care about any source of preventable corruption is equivalent to me. I also think you're overselling how much ZFS protects you from memory errors; data still gets held and handled in RAM by your backup program, network stack, and every other program that wants to read or modify it. ZFS may protect you from some memory errors, but the protections from checksumming and ECC do not overlap. More to the point, though: disks tend to gradually fail until you get a click of death. Cascading memory failures are immediately observable as core dumps, crashes, and other 'WTF is happening' behavior. This is why disk corruption is 'silent', but nobody speaks about memory failure that way. If you've never seen a system repeatedly choke network transfers because TX/RX checksumming fails when a stick of memory is dying, I'm not sure what to say. Failing memory shows up all over the system. Desuwa posted:I think this thread overstates the dangers of bad reads from HDDs, or at least seriously understates the dangers of RAM, just because disk errors can be mitigated by software. Both are very rare events and data isn't more or less corrupt from either one. If there were some piece of software we could run that could mitigate memory errors like ZFS mitigates disk errors we'd all be running it. Desuwa posted:Yeah, if you're building a high performance Xeon/Threadripper/Ryzen 7 server with tons of storage, which that was in direct response to. You brought up spending hundreds on a "Xeon and Xeon board (or Threadripper, or Ryzen, or whatever)," which makes it more than just a simple NAS to serve files. $2000 for a high end home server with tons of storage isn't unreasonable and at that point ECC ends up a small part of it.
You can go with Ryzen and ECC instead of an i5/i7 and probably end up saving a bit of money. I was replying to you saying "you're already gonna spend 2k, so may as well ...". Why spend that much? I get if you're building an 'all in one' virt/storage/whatever machine, but that's an interesting single point of failure for someone who seems very concerned with uptime and integrity. Yeah, if you're spending 2k on a NAS, why not just get ECC? But why are you spending 2k at all? Desuwa posted:If you don't need high end CPU performance (i.e. $60 pentium + mobo combo tier) and just want to saturate one or two gigabit links (or to pair with a 10GbE PCIe NIC) you can get ECC for the same price as the cheapest new consumer level stuff. It's probably cheaper, actually, since old server ECC RAM is substantially cheaper than new DDR4 RAM, plus you'll get IPMI and multiple onboard NICs with teaming out of the deal. Desuwa posted:Okay. But you could also do it with ECC for the same price ($65 for a cpu + mobo) and same definition of performant - in the absence of the hard requirement for the matx form factor. Once you need a small form factor you do face a higher price floor for ECC (~200ish for a pentium and ECC capable matx mobo, though that would have IPMI too). But you're also paying $60+ more for a Node 804 or equivalent compared to a bare bones ATX case, which could have been another 3TB drive or could knock the price down to $600. However, this is the consumer NAS thread, not the home lab thread. While a lot of people probably are using their NAS as an 'all in one' machine, some of us aren't. Desuwa posted:Running a file system with high overhead like ZFS (particularly with AF drives since each tiny piece of metadata eats 4KB instead of 512-1024B) also ends up costing way more than ECC in the form of unusable space, on the order of multiple terabytes with large raidz1-3 pools. 
All my unimportant data would do just as well on mdadm or LVM, except that administering multiple tiers of storage like that would not be worth the hassle for ~$80 of HDD space and ZFS has other, unrelated, features like CoW snapshots. Desuwa posted:What a strange quote. Desuwa posted:If a person needs a high performance NAS with tons of storage they're already dropping 2000+ dollars then they can build a Ryzen system with ECC without shelling out for xeons. Look, let's just stop. We're clearly not getting anywhere. I can post a part list from my NAS if you want, but a $1k system in mATX can easily get to 24TB of storage with raidz2/raid6, quietly, saturate 4x multipathed gige iscsi (and probably a lot more than that -- I haven't actually run a disk benchmark), and have enough juice left over to run a couple of small VMs. It won't do whatever extra stuff your system is doing (SQL VM?), but that's ok -- I want to provide storage, and not much else. This way, I can get a 1U switch, PDU, 4 compute nodes, and a storage server inside a half-depth 9U enclosure that sits next to my desk, and I can't hear it. Those are my needs, since the whole thing performs well, transports easily when I move, and is unobtrusive enough that my wife won't kill me if we're in some 1-bedroom temporary corporate housing for a month. This comprises a lot of my working environment, so it's important to me to have it available (and I can't be bothered with a colo).
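For scale, the 4x multipathed gige target above is modest by spinning-disk standards. A back-of-envelope sketch (the per-disk rate and protocol overhead are assumed typical figures, not benchmarks):

```python
# 4x 1GbE multipath: wire rate minus assumed ~6% TCP/IP + iSCSI framing
links = 4
wire_mbps = links * 1000              # megabits/s on the wire
overhead = 0.06                       # assumed protocol overhead
usable_MBps = wire_mbps / 8 * (1 - overhead)

# Assume ~150 MB/s sequential per modern 7200rpm drive;
# an 8-disk raidz2 streams from 6 data disks.
per_disk_MBps = 150
data_disks = 6
array_MBps = per_disk_MBps * data_disks

print(round(usable_MBps), array_MBps)  # 470 900
```

So the array has roughly 2x the sequential throughput the links can carry, which is why "probably a lot more than that" is a safe bet without running a benchmark.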
|
# ? Aug 14, 2017 13:23 |
|
Frankly the real argument is that you shouldn't build a NAS without IPMI because IPMI is the loving tits and once you get spoiled by it you'll never want to go back. And if you get a board with IPMI, it's very probably gonna also come with ECC support. And at that point you might as well get ECC RAM because RAM prices are all hosed up right now anyhow and Intel chips have all but ceased to meaningfully advance over the last few generations.
|
# ? Aug 14, 2017 13:53 |
|
DrDork posted:Frankly the real argument is that you shouldn't build a NAS without IPMI because IPMI is the loving tits and once you get spoiled by it you'll never want to go back. This, so loving hard.
|
# ? Aug 14, 2017 14:51 |
|
DrDork posted:Frankly the real argument is that you shouldn't build a NAS without IPMI because IPMI is the loving tits and once you get spoiled by it you'll never want to go back. And if you get a board with IPMI, it's very probably gonna also come with ECC support. And at that point you might as well get ECC RAM because RAM prices are all hosed up right now anyhow and Intel chips have all but ceased to meaningfully advance over the last few generations. vPro-capable boards are $100. It's perfectly fine OOB. I'm not gonna argue that Intel is doing a hell of a lot for performance since Haswell, but that's a different discussion.
|
# ? Aug 14, 2017 15:02 |
|
Sure, you're welcome to go with your OOB management platform of choice. Just get something that will let you remotely manage / boot / install an OS. Though Intel AMT has had a whole host of problems of late, so I might look elsewhere unless I had a compelling reason to select them.
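For reference, the remote manage/boot/install loop over standard IPMI boils down to a few ipmitool invocations. A sketch that just assembles the commands (the BMC address and credentials are placeholders, and dry_run keeps it from actually calling out; the subcommands themselves are standard ipmitool):

```python
import subprocess

BMC = dict(host="10.0.0.5", user="admin", password="hunter2")  # placeholders

def ipmi(*args, dry_run=True):
    """Build (and optionally run) an ipmitool command over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"], *args]
    if dry_run:  # print instead of executing, for the sketch
        return " ".join(cmd)
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Check power, force a one-time PXE boot, power cycle, then attach a
# serial-over-LAN console to watch the installer remotely.
print(ipmi("chassis", "power", "status"))
print(ipmi("chassis", "bootdev", "pxe"))
print(ipmi("chassis", "power", "cycle"))
print(ipmi("sol", "activate"))
```

That last step is the part cheap add-in cards can't replicate: a console you can reach even when the OS is gone.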
|
# ? Aug 14, 2017 15:28 |
|
Intel AMT is better than nothing (ignoring the security issues), but sucks compared to Supermicro IPMI.
|
# ? Aug 14, 2017 15:36 |
|
All things being equal you should get IPMI or whatever, but most people not tinkering all the time shouldn't worry about it too much. In fact, if you do a VM-centric server, even most tinkerers don't need it because they'll be installing in VMs, not on the hardware. I would have benefited from having that on my server like twice in 7 years. (don't get me wrong, that poo poo is cool)
|
# ? Aug 14, 2017 16:14 |
|
Is there an add-in card of choice that would give me IPMI/lights-out/whatever you call it? I just bought a GT 710 from EVGA B-stock for $25 for a similar purpose, i.e. something I can plug into headless boxes for those rare occasions I need to tweak BIOS settings. But if there's something that would give me full-on remote management in a similar price bracket (let's say <$50) I'd be open to that, I'm not wedded to a direct video output instead of something I could connect to over the network.
|
# ? Aug 14, 2017 16:31 |
|
Paul MaudDib posted:Is there an add-in card of choice that would give me IPMI/lights-out/whatever you call it? I think there are some cheapish solutions for just the remote restart ability, but if you want the full capabilities of IPMI (remote power management, remote console, remote mounting of .ISO/IMG files, etc), you'd need something like the IP8000 which tends to go for well over $200. You might be able to find a used remote KVM setup for <$100, but that'd be as close as you'd get.
|
# ? Aug 14, 2017 16:54 |
|
evol262 posted:Yes, in this thread, and from ZFS developers. Please stop flogging this horse. They even said you should have ECC. ZFS has no special requirement for ECC, but it's not ZFS that "wants" ECC, it's you or your data that should want ECC. evol262 posted:I was replying to you saying "you're already gonna spend 2k, so may as well ...". Why spend that much? This is just false. Go check the posts, especially the one where I mention 2000 for the first time. I've only built basic NAS boxes to serve files myself, with both new and used hardware, but if someone was doing a high end server with threadripper or a Xeon it's not unreasonable, and ECC ends up costing a relatively small amount. evol262 posted:This is a facile argument, since nobody in this thread is buying bare bones cases (not enough drive mounting capacity). But I don't need a smaller form factor. I want one, like you want a single home server to do it all. We're not gonna come to a consensus here. I have a $50 ATX case that handles 10 drives. Maybe $55 once I factor in a 5.25 -> 3.5 adapter. It's not a great case but for just holding drives it works. evol262 posted:However, this is the consumer NAS thread, not the home lab thread. While a lot of people probably are using their NAS as an 'all in one' machine, some of us aren't. I've not said that. I've said whether you're buying something just to serve files or some monster home server, ECC isn't some unobtainable halo tier product. I think anyone purpose-buying NAS hardware for any data they care about should get ECC. In fact the quote you brought up showing me saying you can't build a NAS for under 2000 doesn't show anything of the sort. You're not building "a high performance NAS with tons of storage" so I'm not sure what you're getting at by constantly bringing it up. evol262 posted:ECC and a checksumming filesystem are not orthogonal. ECC doesn't do a drat thing to protect you from bit rot. 
I'm not sure what you mean by orthogonal because that sounds pretty orthogonal? Two things that act on different, unrelated factors, which was my point? Desuwa fucked around with this message at 17:31 on Aug 14, 2017 |
# ? Aug 14, 2017 17:02 |
|
There's really nothing that's that cheap that will add in the capabilities. The usual box recommended on ServeTheHome that's among the cheapest is the Lantronix Spider and it's $300+ on Amazon. The V portion of IP KVM is the part that gets expensive, and otherwise you're looking at Serial-over-USB type nonsense to get a terminal to the machine. This alone makes IPMI support extremely cost-effective if you want a lower-maintenance home NAS. Otherwise, you're looking at the SOHO NAS options that are a little pricey compared to frankenstein boxing it. I'm considering the possibility of using a chassis like the Phanteks Enthoo Mini XL that can house two motherboards since I'm still going to have a desktop of some sort no matter what.
|
# ? Aug 14, 2017 17:06 |
|
DrDork posted:I think there are some cheapish solutions for just the remote restart ability, but if you want the full capabilities of IPMI (remote power management, remote console, remote mounting of .ISO/IMG files, etc), you'd need something like the IP8000 which tends to go for well over $200. Desuwa posted:They even said you should have ECC. ZFS has no special requirement for ECC, but it's not ZFS that "wants" ECC, it's you or your data that should want ECC. Desuwa posted:This is just false. Go check the posts, especially the one where I mention 2000 for the first time. Desuwa posted:I've only built basic NAS boxes to serve files myself, with both new and used hardware, but if someone was doing a high end server with threadripper or a Xeon it's not unreasonable, and ECC ends up costing a relatively small amount. Desuwa posted:I have a $50 ATX case that handles 10 drives. Maybe $55 once I factor in a 5.25 -> 3.5 adapter. It's not a great case but for just holding drives it works. Desuwa posted:I've not said that. I've said whether you're buying something just to serve files or some monster home server, ECC isn't some unobtainable halo tier product. I think anyone purpose-buying NAS hardware for any data they care about should get ECC. Desuwa posted:In fact the quote you brought up showing me saying you can't build a NAS for under 2000 doesn't show anything of the sort. You're not building "a high performance NAS with tons of storage" so I'm not sure what you're getting at by constantly bringing it up. Could I get more performance with more money? Yes. But in more memory and more flash. Would it improve my workloads? Not noticeably. This is why I made the SLI GTX 99999 analogy. There are rapidly diminishing returns. Desuwa posted:I'm not sure what you mean by orthogonal because that sounds pretty orthogonal? Two things that act on different, unrelated factors, which was my point? When you say "ZFS is more of a hit in space, and ECC..."
Not orthogonal means "it's not an either/or choice". ECC does not protect you from bit rot on whatever you'd run on top of mdadm/lvm (ext4 checksumming or btrfs would, but you wouldn't use mdadm or lvm with btrfs). Checksumming filesystems do not protect you from bit flipping before write. You can have both. Or either. ECC and filesystems are complementary, but one saves data you already have, and one saves data you're acquiring or modifying. Again, though (and again, and again), cosmic rays are amazingly rare, tx/rx checksums will protect from some of it, and bad DIMMs have enough symptoms that you'd notice. Use ECC if you want. This is not a holy war. Just please stop overblowing the importance of it, and going down slippery slopes like "you're already spending X" or "already gonna have a Xeon/Ryzen" which have no merit to a lot of us, and which don't noticeably increase workload performance for a pure fileserver. In general, please just let this drop.
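The "one saves data you already have, one saves data you're acquiring" split can be shown in a few lines. A toy sketch with SHA-256 standing in for the filesystem's checksums:

```python
from hashlib import sha256

def flip_bit(buf: bytes, i: int = 0) -> bytes:
    """Simulate a single-bit corruption in a buffer."""
    b = bytearray(buf)
    b[i] ^= 0x01
    return bytes(b)

data = b"precious family photos"
checksum = sha256(data).digest()  # computed at write time

# Case 1: bit rot on disk *after* the checksum was recorded.
# A checksumming filesystem catches this on read.
on_disk = flip_bit(data)
print(sha256(on_disk).digest() == checksum)  # False -> corruption detected

# Case 2: a bit flips in RAM *before* the checksum is computed.
# The filesystem faithfully checksums the corrupted data; the read
# verifies fine, and only ECC could have caught it.
corrupted_in_ram = flip_bit(data)
checksum2 = sha256(corrupted_in_ram).digest()
print(sha256(corrupted_in_ram).digest() == checksum2)  # True -> bad data verifies
```

Case 1 is ZFS's territory, case 2 is ECC's, and neither mechanism reaches into the other's.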
|
# ? Aug 14, 2017 19:08 |
|
SamDabbers posted:Alternative: I already have 2 iStar 4x hotswap backplanes but Rosewill makes that same rackmount case with no hotswap so that works fine. I was kind of hoping to not have to build an entire computer around the hotswap backplanes... but I have no idea what I would use. eSATA somehow? I can certainly run that case alongside my workstation but I need a fast connection between the 2 boxes (infiniband?) [edit] I didn't realize that Rosewill has a 25" deep model. Perfect! I can move all the stuff I need into one case which is the best solution. redeyes fucked around with this message at 19:31 on Aug 14, 2017 |
# ? Aug 14, 2017 19:15 |
|
E: yeah this post was written with a bit too much anger.
Desuwa fucked around with this message at 22:15 on Aug 14, 2017 |
# ? Aug 14, 2017 19:19 |
|
Pull up thread, pull up!
|
# ? Aug 14, 2017 21:49 |
|
evol262 posted:
I'd like to know more about your system.
|
# ? Aug 15, 2017 00:57 |
Steakandchips posted:Pull up thread, pull up! In storage relevant news, Gigabyte Server has finally made the MS10-ST0 available to interested parties with local representatives (read: cloud customers, we're ready to sell to you - consumers, get hosed a little while longer). Turns out the rumors were true: we're getting a 16 non-SMT-core 2GHz CPU with 1MB L2 cache per core, up to 64GB UDIMM ECC (128GB RDIMM ECC), 32GB eMMC, and 16 drives provided you don't use a daughter board. Makes me wonder if there isn't something to be said for a memory-resident FreeBSD installation which loads itself from the eMMC and stores data on the zpool. BlankSystemDaemon fucked around with this message at 13:49 on Aug 15, 2017 |
|
# ? Aug 15, 2017 13:44 |
|
Nullsmack posted:I'd like to know more about your system. Me too, it sounds like an awesome setup.
|
# ? Aug 15, 2017 15:44 |
|
D. Ebdrup posted:Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot! That looks really nice. I wonder what pricing would be for that?
|
# ? Aug 15, 2017 16:07 |
|
D. Ebdrup posted:Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot! What's the deal with Intel releases now? It's weird to see OEM hardware released that has unannounced Intel CPUs in it, but Google Cloud has been selling Xeon Platinums for 6 months, and they only got announced 3 weeks ago.
|
# ? Aug 15, 2017 16:55 |
|
That looks like a $400+ board given the mini-SAS controller and SFP+ ports. The primary difference between that and the Xeon-D boards from a couple years ago, besides the processor, is the fact it's a mini-ITX board too. There don't appear to be any M.2 slots either, but the 32GB of eMMC onboard will more than make up the price difference.
|
# ? Aug 15, 2017 16:58 |
Twerk from Home posted:What's the deal with Intel releases now? It's weird to see OEM hardware released that has unannounced Intel CPUs in it, but Google Cloud has been selling Xeon Platinums for 6 months, and they only got announced 3 weeks ago. people posted:Pricing on the Apollo Lake/Denverton SoCs EDIT: Supermicro has apparently also released a whole set of boards powered by the Denverton SoC, and some of them feature QuickAssist. BlankSystemDaemon fucked around with this message at 19:31 on Aug 15, 2017 |
|
# ? Aug 15, 2017 18:50 |
|
It seems like most of the posting in this thread is for homebrew/custom-built type stuff so I'm not sure if this is the right place, but I'll post anyway and see what y'all have to say. Essentially what I'm looking for is a super simple, easy to use, low maintenance, plug-and-play type solution for like 20-30TB worth of storage. What I have now is a bunch of random internal/external hard drives ranging in the 3-8TB range and just have all my stuff spread out among them without much rhyme or reason, and most of them are getting pretty full. Don't really have any kind of budget in mind but ideally I'd like to stay around $2k at the higher end and hopefully have some kind of expandability in the future if I need more storage. I've been looking at the Synology DS1517 5-bay unit with 5x 8TB WD Red drives in it. I like the idea that I can easily add the 5-bay expansion units in the future for increasing capacity, and I've heard Synology stuff is very easy to use as well. Feel free to steer me away if there are other brands or solutions that I should be considering.
|
# ? Aug 15, 2017 20:57 |
Courtesy of ServeTheHome, we also have the full SKU list with MSRP and full feature lists including which chips support QuickAssist: EDIT: Going by this, I'm most interested in the C3758 on the Supermicro A2SDi-8C+-HLN4F. BlankSystemDaemon fucked around with this message at 23:31 on Aug 15, 2017 |
|
# ? Aug 15, 2017 21:02 |
|
C3708 looking sweet
|
# ? Aug 15, 2017 21:08 |
|
100% Dundee posted:It seems like most of the posting in this thread if for homebrew/custom built type stuff so I'm not sure if this is the right place but I'll post anyway and see what yall have to say. You'll be perfectly happy with a Synology DS1517. I just bought a Synology DS1517+, actually. I do IT for a living and could have gone the homebrew/custom route, but I've enjoyed my little Synology 2 bay NAS and it meets my needs without requiring babysitting. Just be sure to have backups, as RAID is not backup. Also, 5x 8 TB drives will give you closer to 21 TB usable with SHR2, despite what Synology's RAID calculator states.
|
# ? Aug 15, 2017 23:49 |
|
Internet Explorer posted:Also, 5x 8 TB drives will give you closer to 21 TB usable with SHR2, despite what Synology's RAID calculator states. Yikes, are you serious? 5x 8TB drives should be about ~36TB of storage; you think it'll need 15TB for its RAID/protection magic voodoo? If so you're super right, since the calculator says it should be about 30TB with SHR/RAID 5.
|
# ? Aug 16, 2017 08:19 |
|
What would I need to run Infiniband over 30 meters? My NAS is in the spare bedroom, as is my switch and I run CAT5 to my workstation which is about 30 meters give or take. Let's say I want to run 40Gbit QDR over fibre, what would I need?
|
# ? Aug 16, 2017 09:06 |
|
100% Dundee posted:Yikes, are you serious? 5x 8TB Drives should be about ~36TB of storage, you think it'll need 15TB for it's RAID/protection magic voodoo? If so you're super right since the calculator says it should be about 30TB with SHR/Raid 5. SHR2/RAID-6. I wouldn't recommend running SHR/RAID-5 with large drives. Some might disagree, saying that the issue with RAID-5 is overblown, but I've dealt with enough failed arrays in my life.
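The gap between "30TB on the calculator" and ~21 usable falls out of two things: SHR2 reserves two drives' worth of parity, and DSM reports capacity in binary units while drive makers sell decimal terabytes. A quick sketch of the arithmetic:

```python
drives, size_tb, parity = 5, 8, 2               # 5x 8TB in SHR2 (RAID-6-alike)

raw_bytes = (drives - parity) * size_tb * 10**12  # 3 data drives = 24 TB decimal
as_reported = raw_bytes / 2**40                   # DSM displays TiB
print(round(as_reported, 1))  # 21.8
```

So the "missing" space is two parity drives plus roughly 9% lost to the TB-vs-TiB unit change, before filesystem overhead even enters the picture.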
|
# ? Aug 16, 2017 14:34 |
|
Mr Shiny Pants posted:What would I need to run Infiniband over 30 meters? My NAS is in the spare bedroom, as is my switch and I run CAT5 to my workstation which is about 30 meters give or take.
|
# ? Aug 16, 2017 14:50 |
|
Combat Pretzel posted:Infiniband cards and appropriate transceivers. Most of these that you can get on Ebay use cables with four pairs of fiber (for 40/56GBit) and MPO connectors. There's also transceivers that work over a single pair without these annoying MPO connectors, but they're rare on Ebay and expensive as hell. Would those be the active connectors?
|
# ? Aug 16, 2017 19:45 |
|
I wonder how those new Atom chips will handle transcoding, my AMD 5350 crumbles once one MPEG2 stream starts getting decoded (I have multiple tuners on the network so sometimes people watch using Emby)
|
# ? Aug 16, 2017 20:42 |
|
Mr Shiny Pants posted:Would those be the active connectors?
|
# ? Aug 16, 2017 21:15 |
|
Has anyone here used Open Media Vault? How does it compare in terms of ease of use / feature set to something like Diskstation Manager? I'm trying to decide whether to pick up a preconfigured NAS (Qnap / Synology) for convenience or go for a homebrew, and one factor is whether there's a clear winner in terms of babysitting / administration.
|
# ? Aug 17, 2017 22:24 |
|
|
|
I think you're going to struggle to beat Synology DSM for lack of having to administer anything
|
# ? Aug 17, 2017 22:25 |