|
Grayham posted:Is AFP on FreeNAS any better than SMB? AFP is never better, let that drat protocol die!
|
# ¿ Jun 23, 2008 21:40 |
|
Anybody recommend a 4-5 disk NAS that'll do both WiFi and Ethernet? I want to hook a Popcorn Hour up to something like a DroboFS via cat5, but also have wireless so I can drop files on it without having it wired to the rest of my network. It just feels ridiculous to run cat5 to my TV twice to make this work.
|
# ¿ Sep 15, 2010 22:38 |
|
FISHMANPET posted:I don't think you're doing it right. Hook up your NAS to your network via cable, and then it will be on your network. Your wireless devices will access it through the router just like your wired devices would. I am not going to put my streaming content on wireless, that's exactly what I'm trying to avoid.
|
# ¿ Sep 15, 2010 22:43 |
|
FreakyZoid posted:QNAP's 4 bay models would fit your needs. Just remember that QNAP devices want you to use a "nearline" or enterprise-class drive for storage, i.e. the Constellation line / WD Black, which are more expensive but have lower failure rates. I also second a QNAP TS-419P (look for the newer Atom-based models). Mine does SickBeard/SABnzbd/CouchPotato, gigabit, and streams to my Boxee. It's awesome. edit: not Barracuda -> Constellation ILikeVoltron fucked around with this message at 05:18 on Dec 26, 2011
# ¿ Dec 26, 2011 00:06 |
|
TerryLennox posted:Well I could leave the AC running but that would skyrocket my power bill. It's my room and I'm in the tropics, so ambient temps are in that range. http://tech.blorge.com/Structure:%20/2007/02/20/googles-hard-disk-study-shows-temperature-is-not-as-important-as-once-thought/ This article may make you feel better about drive temps. I'm sure if you search you can find the paper the article is based on and determine exactly how hot they were running their drives.
|
# ¿ Jan 20, 2012 20:28 |
|
Jonny 290 posted:Well, you need to send a specially formatted WOL packet to the MAC address to wake it. It doesn't just wake on "any" traffic. Otherwise it'd never sleep. I personally just have mine shut down and start up at set times of day: it boots at 7-8am, boots again at 3pm, and I shut it down when I go to bed. Works very well for me.
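For reference, actually firing that magic packet from a Linux box is a one-liner; a minimal sketch assuming the wakeonlan or etherwake package is installed, with AA:BB:CC:DD:EE:FF standing in for the NAS's MAC:
code:
# the magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times
wakeonlan AA:BB:CC:DD:EE:FF

# etherwake does the same thing, but needs root and sends on a specific interface
sudo etherwake -i eth0 AA:BB:CC:DD:EE:FF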
|
# ¿ Feb 27, 2012 18:43 |
|
adorai posted:generally the URE rate is 1/10th for enterprise drives. This. One URE per 10^14 bits read for consumer-class drives and per 10^15 for enterprise drives, which works out to around one URE per 12TB read on consumer-class drives and around 120TB iirc for enterprise-class drives.
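Quick sanity check on the arithmetic, since the spec sheets quote the rate in bits:
code:
# 1 URE per 10^14 bits read (consumer) vs 10^15 (enterprise), converted to TB of reads
echo '10^14 / 8 / 10^12' | bc    # ~12 TB per expected URE, consumer class
echo '10^15 / 8 / 10^12' | bc    # ~125 TB per expected URE, enterprise class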
|
# ¿ Apr 23, 2012 15:19 |
|
Longinus00 posted:Not to mention any vibration issues. You can clearly see the rubber feet used to separate the drives in the picture
|
# ¿ Jan 20, 2013 19:19 |
|
Xythar posted:I picked up a QNAP TS412 a few days ago with 4x WD Red 3TBs for storing my media collection and basically anything else I feel like holding onto for the foreseeable future. I initially set it up as a RAID 5 but I've read a bunch of stuff since that says it's basically suicide to run RAID 5 with an array of that size since mathematically, the chance of an unrecoverable read error during resilver is pretty close to 1. Is this needless paranoia where Red drives are concerned or should I bother copying everything off and switching to RAID 10 or something? I don't really need that extra 3TB (yet, anyway) but it would take forever and be a pain, and I haven't really read much beyond the theoretical. http://www.snia.org/sites/default/education/tutorials/2007/fall/storage/WillisWhittington_Deltas_by_Design.pdf http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162 The definitive answer to your concerns lies between these two articles. The skinny is this: a URE rate of 1 per 10^14 bits means that once a rebuild has to read around 12TB, there's a real chance you'll hit an error on disk partway through. Does this guarantee it? No, but it's a risk most of us (me at least) won't take. Reds, being NAS drives, shouldn't be rated at 10^14, but I just checked and they are. Info found here: http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771442.pdf Anyway, RAID 10 is an option, RAID 6 (I don't think that model supports it), a larger RAID 1, etc. Tempting fate with RAID 5, sure: it might make it on the second pass of a rebuild, who knows! It's a risk, that's all I mean to get at.
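To put a rough number on the 4x3TB case (a back-of-envelope sketch that takes the spec-sheet rate at face value, which is pessimistic in practice):
code:
# a degraded 4x3TB RAID 5 has to read the ~9TB left on the three surviving disks to rebuild
awk 'BEGIN {
  bits     = 9.0e12 * 8          # bytes read during the rebuild, expressed in bits
  rate     = 1.0e-14             # consumer-class URE rate per bit read
  expected = bits * rate
  p        = 1 - exp(-expected)  # Poisson-style chance of hitting at least one URE
  printf "expected UREs: %.2f   P(>=1 during rebuild): %.0f%%\n", expected, p * 100
}'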
|
# ¿ Jun 14, 2013 03:04 |
|
thebigcow posted:So instead of an actively cooled case he wants you to buy disks shoved into tiny sealed enclosures? Yeah, and a tiny little fan for each drive (maybe!). I don't know why anybody would think an 80mm fan isn't enough for 4 drives? That's what the QNAPs ship with (or thereabouts).
|
# ¿ Oct 16, 2013 17:22 |
|
Cooter Brown posted:Would running a BTRFS based NAS in a VM environment be controversial as well? I realize BTRFS is less mature than ZFS, but it does have some decent looking NAS tooling. I'm not looking to make the jump right now, just curious about my future options. https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F Heh, honestly I feel like btrfs is the great white hope on linux but it keeps failing to reach maturity.
|
# ¿ Aug 17, 2015 03:50 |
|
mAlfunkti0n posted:Decided against the QNAP and it is going back. The proprietary weird linux build installed can make for some headaches. I have an i5 w/16GB ram laying around that I was going to sell but I've since installed unRAID on it and really like it. Docker containers, KVM for virtualization and the storage system works well. I thought about doing this as well, but I'm willing to put up with its lovely kernel because most of the time it hardly matters. I sort of look at the Docker thing as yet another way to stop caring what kernel, or what host tooling, it's all riding on top of in the first place.
|
# ¿ Aug 30, 2015 05:36 |
|
Laserface posted:I just lost my first drive in 7 years. I'd only be worried if there was a high-pitched twang of the heads crashing. If it's just the normal little noises that drives make when reading/writing data, then no.
|
# ¿ Aug 30, 2015 18:42 |
|
Twerk from Home posted:My thought was that backups are necessary anyway, and with how insanely fast home internet connections are getting restoring from backup wouldn't be as miserable as it used to be. Also, as you said, the checksumming filesystems should be able to recognize the corruption created by the URE and say "OK, that file is hosed, but everything else is fine". A URE typically kicks the disk that generated it out of the RAID, though different RAID card manufacturers handle this in different ways.
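On ZFS the equivalent check is just a scrub; anything it can't repair gets reported as a named file rather than the whole disk being dropped (pool name "tank" is a placeholder):
code:
zpool scrub tank
zpool status -v tank    # -v lists any files with permanent (unrecoverable) errors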
|
# ¿ Feb 23, 2016 18:40 |
|
Shaocaholica posted:Is there a NAS+firmware that gives proper access to smb.conf or something similar so that my Macs don't poo poo up the place with their dot files? Seems like my current WD NAS doesn't have proper support for it (resets on reboot). I do this on my macs: http://hints.macworld.com/article.php?story=2005070300463515 $ defaults write com.apple.desktopservices DSDontWriteNetworkStores true May not be a solution for you if you've got lots of users, but for me this was an easy solution.
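For the NAS-side fix you're actually after, the usual Samba knobs are the veto options; a sketch assuming the firmware lets you edit /etc/samba/smb.conf (or add custom share options) at all:
code:
# in the [global] or per-share section of smb.conf:
# hide and refuse creation of AppleDouble ._* files and .DS_Store,
# and let folders containing only vetoed files still be deleted
veto files = /._*/.DS_Store/
delete veto files = yes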
|
# ¿ Apr 29, 2016 20:13 |
|
Shaocaholica posted:And what works for you? Is it sticky? After OS updates? I just feel like a NAS side solution is much more robust. I ran the command once and no new .DS_Store files have shown up since. That was a few years ago, but I'm not sure it carries across major OS upgrades as I'm still on Yosemite.
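You can always read the preference back after an update rather than waiting for stray files to reappear:
code:
defaults read com.apple.desktopservices DSDontWriteNetworkStores    # prints 1/true while the setting is still in place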
|
# ¿ May 1, 2016 00:14 |
|
DrDork posted:Cat6A/E isn't that bad. I wired up most of a house using it a year or two ago, and it wasn't problematic at all. Heh, I've often thought of going 10gigE because I can get it straight out to the internet at that speed...
|
# ¿ Apr 13, 2017 00:23 |
|
This announcement has totally killed any desire I had to try building something; I'm going back to QNAP for the foreseeable future. gently caress this immense waste of time.
|
# ¿ Apr 14, 2017 00:58 |
|
apropos man posted:Do people use cheap second-hand consumer SSD's as L2ARC, since it's only used for caching and not essential for data integrity? Initial google results suggest you're a horrible person with horrible ideas and deserve to burn in a pit of filth. My gut says it couldn't hurt anything to try playing around with it.
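It genuinely is a low-risk experiment: a cache vdev holds no unique data, so you can bolt a junk SSD on and yank it back out whenever (pool and device names below are placeholders):
code:
zpool add tank cache /dev/disk/by-id/ata-SOME_OLD_SSD    # attach the SSD as L2ARC
zpool iostat -v tank                                      # check whether the cache device is actually getting used
zpool remove tank /dev/disk/by-id/ata-SOME_OLD_SSD       # pull it back out at any time, no data at risk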
|
# ¿ Jul 6, 2017 02:53 |
|
I picked up a Lian-Li PC-D8000 (waiting on shipping) and I've been investigating how to do a backplane in it, because I want to avoid sticking my hand in the case every time I add a drive. I found one of these: http://www.lian-li.com/en/dt_portfolio/bp3sata/ but it seems like lots of this sort of equipment (including the case) is hard to find. There are a bunch of Supermicro backplanes out there but they look impossible to mount on the rear side of a case not designed for it. Anybody have recommendations on how to build this stuff out? I'd likely just pick up another 3 of those Lian-Li backplanes if they were easier to find. I'm also curious if anybody doing a ZFS build has recommendations on doing SAS -> 4x SATA breakouts, and whether flashing a RAID card into IT mode is the right way to go. I'm hesitant about buying a card for this; I'd rather run everything straight off the mainboard so that's the only thing I'm depending on. Could anybody throw out recommendations for cards that support passthrough? That would at least give me a starting point. I found this: https://www.amazon.com/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M but am looking for opinions on this card or others.
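For whatever card ends up in there, the usual first sanity check is just confirming what firmware it's actually running before trusting disks to it; a sketch assuming an LSI SAS2008-family card and the sas2flash utility:
code:
lspci | grep -i lsi        # confirm the HBA enumerates on the bus
sudo sas2flash -listall    # LSI utility: shows the adapter and the firmware version it's running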
|
# ¿ Jul 11, 2017 18:40 |
|
IOwnCalculus posted:Backplanes are either highly customized to the case they're in, or they're designed to fit in 5.25" drive bays like this one. However, at least that specific Supermicro one is deep as hell so just because your case has 3x 5.25" bays doesn't mean it will be a good fit. I guess my concern with onboard SATA ports vs some sort of card is the architecture of the mainboard. It appears that all of the "southbridge" (read: DMI/PCH) interfaces top out at 4GB/s on the LGA 2011 boards, and the one I'm looking at would share that with the M.2 slot, GigE LAN, USB, etc. While I think in theory that would be enough bandwidth, from an architecture side it just seems smarter to use more of the PCIe lanes directly? Anyway, that's what I'm fighting with. I could buy two LSI cards, split my disks across them in mirrored pairs (making the pool fault-tolerant to losing one card), and I wouldn't be sharing any bandwidth with my NIC, M.2 slot, or much of anything else.
|
# ¿ Jul 11, 2017 19:25 |
|
IOwnCalculus posted:Is this box going into a corporate environment supporting a massive workload and multiple dozens of spindles? Nope, home NAS. DrDork posted:This. I mean, sure, as a fun exercise in overkill you could, but remember that even GigE is limited to ~100MB/s throughput, which is 1/10th of a 1x PCIe 3.0 lane. So....yeah. You're never, ever, ever going to get any sort of congestion due to a lack of PCIe lanes if all you're doing with it is file serving type stuff. I'm just looking at the single PCH/DMI link on the mainboard and figuring that if every SATA port (10), plus the GigE, plus (maybe) USB, and for sure an M.2 slot (that eats up 2 lanes on its own) are all hanging off it, there will be contention on that chip. Maybe I'm overthinking it? poo poo, it's a $160ish difference at the end of the day, which is less than the cost of the mainboard and roughly a third the cost of the CPU. I'm not doing this to be cheap. There's another thing that I didn't really get into: if I ever go past 10 drives I'd have to buy a card anyway. So why not plan for that now, spend the $120 on it and be done? Hopefully the only thing I ever need to open this case up for is adding a CPU and some memory. Internet Explorer posted:Also you don't need hotswappable drives on a home NAS. It's a creature comfort for sure, and at the end of the day (I found enough backplanes) it was $110 or so for the 16 ports. *shrug* that's an easily justifiable expense, at least to me. ILikeVoltron fucked around with this message at 17:50 on Jul 12, 2017
# ¿ Jul 12, 2017 17:36 |
|
DrDork posted:Remember that PCIe is packet-based: that it's communicating with X devices isn't a big issue so long as the combined bandwidth is below its limits. Certain devices do get dedicated lanes for various reasons, but HDDs generally are not one of those items, since it would be an enormous waste to sit a 100MB/s drive exclusively on a 500MB/s lane. USB's data needs are so hilariously low as to be nonexistent on a modern platform, unless you're talking about using USB 3.0 for an external HDD or something, and even then they're no more worrisome than another HDD. I guess what I don't understand here is how the I/O controller on the PCH works. The other thing is, just because an interface supports something on paper doesn't mean you see that in the real world, so I'm a bit hesitant about it. I get what you're saying, though: even with plenty of headroom it's not something you'll bottleneck on. My only concern is how the controller itself handles contention and splits up a big block of writes across 10+ disks. As far as the workloads I'll be running: some VMs, some containers, and some NFS storage most of the time. Other times I'll be building 8+ VMs to launch OpenStack tests (between 32 and 64 gigs of memory for this). I'll also be unpacking DVD-sized files, so there will be some IO that isn't coming directly across the wire. I imagine the system will be idle most of the day, but while I'm testing various things it'll be heavily utilized.
|
# ¿ Jul 12, 2017 18:44 |
|
BobHoward posted:Also, the PCH doesn't split writes across disks. It isn't that smart. The OS decides what gets written where and then asks its SATA driver to do writes through an AHCI SATA controller, which in this case happens to be located in the PCH. DrDork posted:To simplify this: The PCH treats one long-rear end write to a single disk pretty much the same as 10 shorter writes to 10 different disks, the only difference being occasionally varying the recipient device header for the data packet (which, as pointed out, isn't even something the PCH does on its own--it just follows what the OS tells it to do). Otherwise the PCH doesn't really give much of a gently caress about where the data is going in that sense, so as long as the total bandwidth you're trying to utilize is less than what the PCH is able to provide, you're fine. Yeah, poor choice of words on my part. I meant to ask how it handles the contention of making 10 writes to 10 different disks (like when flushing a large number of blocks out to disk). It might only be 2-3 chunks written to 4-6 disks, the same data going to each, hence the "how would it split it" question. Again, just poorly worded. DrDork posted:All that said, this is a thread dedicated to excess and "because I can," so you absolutely shouldn't feel bad about deciding to over-think/over-engineer something on the grounds of "gently caress IT I WANT TO." gently caress yea! The weird thing that brought me here (not to the thread, but to wanting to build a NAS) was that there just doesn't seem to be a clean and cheap way to do 10+ disks in a NAS. Either you're spending $2200+ on something from QNAP/etc or you're building it yourself. When I started looking into the cost of expanding my little NAS I figured I wanted it to do VMs and a few other things, and the price kept going up until I was like gently caress this, I'll just build it myself. BobHoward posted:DMI2 and DMI3 are really just 4 lane Gen2 or Gen3 PCIe links. The total raw throughput of these links is therefore 2GB/s or 4GB/s before packetization and other overhead. 75% efficiency is achievable: I have measured 1.5 GB/s read throughput from a RAID0 of 4 SATA SSDs connected to an Intel DMI2 PCH. Without getting bogged down in the math of SATA overhead plus every other device, I just looked at the numbers for an M.2 disk plus 10+ SATA ports, assumed the drives would buffer a little and then be limited by how fast they could flush to the platters, and figured we were getting pretty close to the limits of that interface (DMI/PCH). Am I going to have 10 disks right off the bat? Hell no. Maybe I'm just thinking about how this thing will scale beyond the 10 onboard SATA ports, or maybe I'm just curious how it all works. Either way, thanks for the explanation. It looks like I could get away with the onboard ports for now and then maybe upgrade to some PCIe cards later. I'll stew on it a bit, but I think I might just go with the cards so I don't have to rebuild my case and re-cable everything later on (assuming I grow to 12 disks). Again, thanks for the help.
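The back-of-envelope behind "getting pretty close to the numbers", with all figures approximate:
code:
awk 'BEGIN {
  hdd = 10 * 200    # 10 spinning disks at a generous ~200 MB/s sequential each
  m2  = 3500        # one fast PCH-attached NVMe M.2 running flat out, ~3500 MB/s
  dmi = 3940        # DMI 3.0 is a x4 Gen3 link: ~3940 MB/s raw, ~3000 MB/s usable
  printf "disks: %d MB/s   disks + M.2: %d MB/s   DMI3 raw: %d MB/s\n", hdd, hdd + m2, dmi
}'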
|
# ¿ Jul 13, 2017 00:52 |
|
IOwnCalculus posted:Or skip virtualization, install Ubuntu on an SSD, mount your tank using ZFS on Linux, and use docker for Plex / plexpy / deluge / sonarr / whatever else you want to run. Quoting this, I'm going the full docker route and couldn't be happier with everything. I did a CentOS base with ZFS installed through a goofy (but working) method of nuking the partition the xfs/ext4 install was on and rsync -> rpool -> rebuild kernel -> grub. I found this compose file and got going within a few hours. Currently in the process of syncing all of my media from the old NAS to the new one... days later... ugh... gigabit... why is 10gigE so expensive still...
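If anyone wants a starting point before digging up a full compose file, the single-container version is about this simple (a sketch: the image is the official Plex one, the paths and timezone are placeholders for my ZFS datasets):
code:
docker run -d --name plex \
  --network host \
  -e TZ=America/Denver \
  -v /tank/config/plex:/config \
  -v /tank/media:/data:ro \
  plexinc/pms-docker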
|
# ¿ Sep 8, 2017 04:19 |
|
Furism posted:Yes, I imported by id ("zpool import -d /dev/disk/by-id pool1"). I even re-exported and imported from CentOS but still no luck. Did you run the recommended commands from the wiki? code:
code:
Then rebuild your grub as usual. I've been pretty happy using CentOS for my NAS. I considered going the whole FreeBSD/etc route, but figured I want to do too much with Docker, and maybe even Open vSwitch, for that to work well.
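From memory the relevant ZoL-on-RHEL wiki steps were roughly the following; treat this as a sketch rather than the exact commands:
code:
# typical steps, may not match the wiki verbatim:
# record the pool in the cachefile so it gets imported at boot
zpool set cachefile=/etc/zfs/zpool.cache pool1
systemctl enable zfs-import-cache zfs-mount zfs.target
# then regenerate the initramfs so the zfs dracut module gets pulled in
dracut --force --regenerate-all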
|
# ¿ Sep 17, 2017 02:55 |
|
CommieGIR posted:BHyve with direct GUI access to the VMs, Docker support is much better, GUI is much more intuitive. I went back and forth over this until I ultimately gave up and just did a straight Linux install. 99% of what I do on my NAS is now container-based, and who's going to run containers better than a native Linux kernel? The only other thing I really want from the host is ZFS support, which ZoL does pretty well. I'd still consider FreeNAS if they had kept going with Corral, though.
|
# ¿ Apr 7, 2018 22:00 |
|
Paul MaudDib posted:Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore Man, I've been saying this for years. 10gigE switches are still pretty expensive, but 1gigE switches, even the prosumer stuff, are super cheap. I wanted to run 10gigE through my house but couldn't justify it: the switches still don't seem fairly priced, and nothing else in the house would even have a 10gigE NIC in it.. it blows my mind.
|
# ¿ Aug 9, 2018 03:31 |
|
H110Hawk posted:5% free plus rotational disks is going to generate a ton of thrash as it tries to find available stripes to store your data. You need to resolve this before you start mucking with disks. Basically you wind up converting what your NAS has attempted to keep as sequential reads (HDD's are awesome at this) into purely random (SSD's are awesome at this). Can't you just run a SMART test?
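i.e. something like this, with the device name as a placeholder:
code:
sudo smartctl -t long /dev/sdX    # kick off the long self-test in the background
sudo smartctl -a /dev/sdX         # afterwards: check the self-test log plus reallocated/pending sector counts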
|
# ¿ Jan 27, 2019 21:32 |
|
Gay Retard posted:What kind of case does everybody use? I've been on the hunt for a mid-tower that could comfortably hold 8 3.5" drives, and finally settled on an open box Fractal R5 for $60 on eBay. Eventually I'll replace the 2 x 5.25" bays. https://www.techpowerup.com/reviews/LianLi/PC-D8000/ 20 drive slots with SATA ports up front. it's a huge box though.. think of it like a coffee table.
|
# ¿ Mar 9, 2019 03:46 |
|
I really wanted to try building a Ceph-based NAS, but I don't think it's going to work too well if I'm forced to put all my Docker containers on CephFS instead of a natively supported Docker volume backend. Anybody have any experience with this? I'm currently running a 4-disk ZFS NAS on CentOS, but every single time I've upgraded it something broke, and building and rebuilding the kernel modules has been a huge PITA.
|
# ¿ Jul 11, 2019 00:56 |
|
originalnickname posted:If you're not completely in love with CentOS you could pick something that's got better ZFS support, export the pools and import into ubuntu server or something. I do this for a living, so I don't mind tinkering with it for fun. I've managed several Ceph, OpenStack and OpenShift deployments in the past, so I don't fear running Ceph at all, and even if it fails catastrophically that's OK too. As far as running Ubuntu or something, I figure that's what I'll most likely end up doing. I think the other big reason it's been such a pain is that I'm attempting to do root-on-ZFS, so I might buy an M.2 SSD for my box too.
|
# ¿ Jul 11, 2019 03:18 |
|
Baconroll posted:Hopefully going to get some good deals on external drives for shucking during the Amazon prime sales. Other than the WD 8/10 TB Elements are there any other good options to keep a lookout for? I was just noting that the 8TB Seagate drives are priced pretty low right now; I'm thinking I'll pick up a couple when Prime Day starts.
|
# ¿ Jul 14, 2019 22:14 |
|
Hughlander posted:I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM. What CPU and mainboard are you using that's limited to 32 gigs of RAM? If it's just slot- or density-limited you might be better off replacing the mainboard; I'm having a hard time remembering how far back you'd have to go to find a Xeon that's actually capped at 32 gigs.
|
# ¿ Sep 26, 2019 05:07 |
|
I'm trying to upgrade my mainboard to something that supports M.2 NVMe, and it's been a huge PITA figuring out what Supermicro is talking about in their descriptions. It's down to the two following boards: X10DRD-iNTP vs X10DRD-iNT. The iNTP lists "2 PCI-E 3.0 NVMExpress x4 External Ports" and the iNT lists "2 Internal NVMe ports (PCI-E 3.0 x4)". I assume the iNT's ports are M.2, but that's not really shown in any of the pictures or elsewhere.. and I'm totally confused about what an "external port" even means on a mainboard?
|
# ¿ Oct 25, 2019 00:55 |
|
pzy posted:RAM disk for downloading/repairing/unpacking is ungodly fast. Just need... lots of RAM! Love ZFS for this. I just poked around to verify my ARC is being used properly, and yep: I use a scratch NVMe disk, but the data basically stays in RAM the whole time until I push it over to my multimedia pool.
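"Poked around" being roughly this on ZFS-on-Linux (arc_summary ships with the ZFS userland tools on most distros):
code:
# raw counters exposed by the kernel module
grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats
# or the friendlier report
arc_summary | head -n 40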
|
# ¿ Nov 13, 2019 19:30 |
|
Heners_UK posted:Actually that brings me to a point, one I think I'd better ask seeing as I fell flat on my own face talking about security earlier, what are people's thoughts on using a long passphrase? E.g. "tomatoes yoghurt canopy chainsaw cats phesant" rather than "3wM%64t4&&WQW$Wk*qgx". I'm thinking about the time I might have to log in interactively at the console (i.e. use a mouse and keyboard, cannot get to password manager). So from your example, the raw search space of the first (wordlist) passphrase, counted character by character, is 26 letters plus space over 46 characters: 27^46 = 696198609130885597695136021593547814689632716312296141651066450089, roughly 7x10^65. The second is numbers, upper and lower case letters, and 4 special characters, so 10+26*2+4 = 66 symbols over its 20 characters: 66^20, roughly 2.5x10^36. So yeah, without explicit knowledge of the pattern used, the first has a search space around 10^29 times bigger than the second. The fairer way to count the passphrase is per word rather than per character: six words drawn from a 7776-word diceware list is "only" 7776^6, roughly 2x10^23 guesses, which is still plenty.
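If anyone wants the exact figures, bc does arbitrary-precision integers so you can just print them (7776 is the standard diceware wordlist size):
code:
echo '27^46' | bc      # 46 characters drawn from a 27-symbol alphabet (a-z plus space)
echo '66^20' | bc      # 20 characters drawn from a 66-symbol alphabet
echo '7776^6' | bc     # 6 words drawn from a 7776-word diceware list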
|
# ¿ Dec 11, 2019 17:39 |
|
I'm running 20.04 on a server. I've fooled around with it way more than is worth the effort, though. I'm currently battling grub being an utter garbage fire of failure and misery: for whatever reason, installing zsysd (the hip cool new snapshot/boot manager) caused some loving weird as poo poo change, and now my system wants to look for hwmatch during boot and refuses to not look for loving hwmatch... ugh
|
# ¿ Dec 12, 2019 16:55 |
|
fletcher posted:Debating which OS to use for my new NAS. All it will be doing is hosting a ZFS array over NFS and Samba. I have more familiarity with Debian based distros at home, and RHEL based distros at work. I was thinking FreeBSD originally for the NAS, but with FreeBSD switching to ZoL anyways, it seems like I might as well just use Ubuntu Server for the NAS to make it easy. I've been running the 20.04 Ubuntu "beta" for a bit now. It does an install directly onto ZFS and works out all the boot pool / root pool stuff for you. The latest update to zsysd also does snapshots on apt installs and a few other neat things. Sadly it's a "desktop" OS, but I've got it running headless on my NAS and have no complaints about it.
pre:
filename:       /lib/modules/5.4.0-14-generic/kernel/zfs/zfs.ko
version:        0.8.3-1ubuntu3
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs
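That block is just the top of the module info, something like:
code:
modinfo zfs | head -n 6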
|
# ¿ Mar 11, 2020 22:45 |
|
IOwnCalculus posted:How old of hardware are you running it on? I had to my server (E5 V2) a little while back and the 20.04 USB installer wouldn't even loving boot on it. Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz Supermicro X10DRi/X10DRI-T
|
# ¿ Mar 11, 2020 23:40 |