ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Grayham posted:

Is AFP on FreeNAS any better than SMB?

AFP is never better, let that drat protocol die!

ILikeVoltron
May 17, 2003

I <3 spyderbyte!
Anybody recommend a 4-5 disk NAS that'll do both Wi-Fi and Ethernet? I want to hook a Popcorn Hour up to something like a DroboFS via Cat5, but also have wireless so I can drop files on it without it being wired to the rest of my network. It just seems silly to run Cat5 to my TV twice to make this work.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

FISHMANPET posted:

I don't think you're doing it right. Hook up your NAS to your network via cable, and then it will be on your network. Your wireless devices will access it through the router just like your wired devices would.

Also, wireless is pretty bad at streaming content.

I am not going to put my streaming content on wireless; that's exactly what I'm trying to avoid.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

FreakyZoid posted:

QNAP's 4 bay models would fit your needs.

Just remember that QNAP devices want you to use a "nearline" or enterprise-class drive for storage, i.e. the Constellation line or WD Black, which are more expensive but have lower failure rates.

I also second the QNAP TS-419P (look for the newer Atom-based devices).

Mine does Sickbeard/SABnzbd/CouchPotato, has gigabit, and streams to my Boxee. It's awesome.

edit: not Barracuda -> Constellation

ILikeVoltron fucked around with this message at 05:18 on Dec 26, 2011

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

TerryLennox posted:

Well, I could leave the AC running, but that would skyrocket my power bill. It's my room and I'm in the tropics, so ambient temps are in that range.

Would shutting the NAS down except while I'm using it be a better idea?

http://tech.blorge.com/Structure:%20/2007/02/20/googles-hard-disk-study-shows-temperature-is-not-as-important-as-once-thought/

This article may make you feel better about drive temps. I'm sure if you search you can find the paper the article is based on and determine exactly how hot they were running their drives.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Jonny 290 posted:

Well, you need to send a specially formatted WOL packet to the MAC address to wake it. It doesn't just wake on "any" traffic. Otherwise it'd never sleep.

If you can bookmark your router's WOL page it can be useful to spin up your NAS if you need to access something. Or, there may be desktop programs that do the same thing.

But my preferred method to save power on a NAS is to keep the board itself up, and let it handle drive spindown via hdparm or a similar method. I keep my Synology connected 24/7 to my main server via NFS, and the drives only spin up if I actually copy something to the NFS-mounted partitions, and it's all automatic.

I personally just have mine shut down and start up at set times of day: a 7-8am boot, a 3pm boot, and I shut it down when I go to bed. Works very well for me.
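For anyone who wants to copy either approach, here's a rough sketch of both, assuming a Linux box where the data drives are /dev/sdb and /dev/sdc (placeholders) and your BIOS supports RTC wakeups:

code:
# spin the data drives down after 20 minutes idle (hdparm -S counts in 5-second steps)
hdparm -S 240 /dev/sdb /dev/sdc

# or the scheduled-shutdown route: power off now and have the RTC wake the box at 7am
rtcwake -m off -t "$(date +%s -d 'tomorrow 07:00')"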

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

adorai posted:

generally the URE rate is 1/10th for enterprise drives.

This: 10^14 for consumer-class drives and 10^15 for enterprise drives, which works out to around one URE per 12TB read on consumer-class drives and 120TB, IIRC, on enterprise-class drives.
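For reference, that's just the spec sheet's error rate (one per 10^14 or 10^15 bits read) converted from bits to terabytes:

code:
# one URE per 1e14 bits (consumer) vs 1e15 bits (enterprise), expressed in TB read
awk 'BEGIN { printf "consumer:   ~%.1f TB\nenterprise: ~%.1f TB\n", 1e14/8/1e12, 1e15/8/1e12 }'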

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Longinus00 posted:

Not to mention any vibration issues.

You can clearly see the rubber feet used to separate the drives in the picture :psyduck:

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Xythar posted:

I picked up a QNAP TS412 a few days ago with 4x WD Red 3TBs for storing my media collection and basically anything else I feel like holding onto for the foreseeable future. I initially set it up as a RAID 5 but I've read a bunch of stuff since that says it's basically suicide to run RAID 5 with an array of that size since mathematically, the chance of an unrecoverable read error during resilver is pretty close to 1. Is this needless paranoia where Red drives are concerned or should I bother copying everything off and switching to RAID 10 or something? I don't really need that extra 3TB (yet, anyway) but it would take forever and be a pain, and I haven't really read much beyond the theoretical.

http://www.snia.org/sites/default/education/tutorials/2007/fall/storage/WillisWhittington_Deltas_by_Design.pdf
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

The definitive answer to your concerns lies between these two articles. The skinny is this: a URE rate of 10^14 means that once you've read about 12TB, there's a real chance you'll hit an unrecoverable error on disk during a rebuild. Does this guarantee it? No, but it's a risk most of us (me at least) won't take. Reds, being NAS drives, shouldn't be rated at 10^14, but I just checked and they are. Info found here: http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771442.pdf

Anyway, RAID 10 is an option, as is RAID 6 (though I don't think that model supports it), a larger RAID 1, etc. Running RAID 5 is tempting fate, sure. It might make it on the second pass of a rebuild, who knows! It's a risk, that's all I mean to get at.
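To put a rough number on "tempting fate" (my own back-of-the-envelope, not from either article): assuming independent errors at the quoted 10^14 rate, the odds of at least one URE while reading the ~9TB of surviving data during a 4x3TB RAID 5 rebuild come out to roughly a coin flip.

code:
# P(>=1 URE) when reading N TB at one error per 1e14 bits, assuming independent errors
awk -v tb=9 'BEGIN {
  bits = tb * 8 * 1e12
  p = 1 - exp(bits * log(1 - 1e-14))
  printf "reading %d TB: ~%.0f%% chance of at least one URE\n", tb, p * 100
}'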

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

thebigcow posted:

So instead of an actively cooled case he wants you to buy disks shoved into tiny sealed enclosures?

Yeah, and a tiny little fan for each drive (maybe!). I don't know why anybody would think an 80mm fan isn't enough for 4 drives; that's roughly what the QNAPs ship with.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Cooter Brown posted:

Would running a BTRFS based NAS in a VM environment be controversial as well? I realize BTRFS is less mature than ZFS, but it does have some decent looking NAS tooling. I'm not looking to make the jump right now, just curious about my future options.

https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F

Heh, honestly I feel like btrfs is the great white hope on Linux, but it keeps failing to reach maturity.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

mAlfunkti0n posted:

Decided against the QNAP and it is going back. The proprietary weird linux build installed can make for some headaches. I have an i5 w/16GB ram laying around that I was going to sell but I've since installed unRAID on it and really like it. Docker containers, KVM for virtualization and the storage system works well.

I thought about doing this as well, but I'm willing to put up with its lovely kernel because most of the time it hardly matters. I sort of look at the Docker thing as yet another way to stop caring which kernel it's all riding on top of in the first place, or which tools are there.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Laserface posted:

I just lost my first drive in 7 years.

It was a returned Seagate 1TB that I installed for a client in his computer. It stopped booting, so I bought him a new one and put his drive in my NAS to see what would happen. It worked fine! SMART errors, but worked fine.

For seven years.

Then it failed, and 2 other drives started getting SMART errors. All 3 remaining drives started racking up reallocated sectors like there was no tomorrow.

So I bought a new Netgear RN204 with 2 WD 3TB Reds to get me started. Had to migrate off my 3TB external backup first, then top off the recent data from the old NAS, as it was struggling to get through more than a few hours of uptime before the OS shat the bed. But all my data is back on the new super quiet, super fast NAS.

Should I be concerned if they are making disk noises? I haven't used Reds before, so I don't know if they are noisier than other disks.

I'd only be worried if there was the high-pitched twang of the heads crashing. If it's just the normal little noises that drives make when reading/writing data, then no.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Twerk from Home posted:

My thought was that backups are necessary anyway, and with how insanely fast home internet connections are getting restoring from backup wouldn't be as miserable as it used to be. Also, as you said, the checksumming filesystems should be able to recognize the corruption created by the URE and say "OK, that file is hosed, but everything else is fine".

A URE typically kicks the disk that generated it out of the RAID, though different RAID card manufacturers have different ways of handling this.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Shaocaholica posted:

Is there a NAS+firmware that gives proper access to smb.conf or something similar so that my Macs don't poo poo up the place with their dot files? Seems like my current WD NAS doesn't have proper support for it (resets on reboot).

I do this on my macs:

http://hints.macworld.com/article.php?story=2005070300463515

$ defaults write com.apple.desktopservices DSDontWriteNetworkStores true

May not work for you if you've got lots of users, but for me this was an easy fix.
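If you'd rather have the NAS-side fix you asked about and your box exposes smb.conf (and the firmware doesn't clobber it on reboot), the usual Samba knobs for this are the veto options:

code:
# in the share definition in smb.conf: refuse Mac resource-fork and .DS_Store files
veto files = /._*/.DS_Store/
delete veto files = yes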

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Shaocaholica posted:

And what works for you? Is it sticky? After OS updates? I just feel like a NAS side solution is much more robust.

I ran the command once, and since then no new .DS_Store files have shown up. That was a few years ago, but I'm not sure it survives a major OS upgrade, as I'm still on Yosemite.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

DrDork posted:

Cat6A/E isn't that bad. I wired up most of a house using it a year or two ago, and it wasn't problematic at all.

Somehow I missed these. You are not helping me convince myself that Plex and torrents and whatnot don't really need 10Gb...

Heh, I've often thought of going 10gigE because I can get it straight out to the internet at that speed...

ILikeVoltron
May 17, 2003

I <3 spyderbyte!
This announcement has totally killed any desire I had to try building something; I'm now going back to QNAP for the foreseeable future. gently caress this immense waste of time.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

apropos man posted:

Do people use cheap second-hand consumer SSD's as L2ARC, since it's only used for caching and not essential for data integrity?

Or could something like an old, eBay sourced, Kingston SSDNow cause problems?

Initial google results suggest you're a horrible person with horrible ideas and deserve to burn in a pit of filth.

My gut says it couldn't hurt anything to try playing around with it.
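It's also a cheap experiment to undo, since cache devices can be added and dropped without touching the pool's data. A sketch, assuming a pool named tank and a placeholder by-id path for the old SSD:

code:
# add the old SSD as L2ARC; a dead cache device won't hurt the pool
zpool add tank cache /dev/disk/by-id/ata-KINGSTON_SSDNOW_EXAMPLE
# check whether it's actually getting hits once it has warmed up
zpool iostat -v tank
# and pull it back out if it misbehaves
zpool remove tank /dev/disk/by-id/ata-KINGSTON_SSDNOW_EXAMPLE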

ILikeVoltron
May 17, 2003

I <3 spyderbyte!
I picked up a Lian-Li PC-D8000 (waiting on shipping) and I've been investigating how to do a backplane in it, because I want to avoid sticking my hand in the case every time I add a drive. I found one of these: http://www.lian-li.com/en/dt_portfolio/bp3sata/ but it seems like a lot of this sort of equipment (including the case) is hard to find. There are a bunch of Supermicro backplanes out there, but they look impossible to mount in the rear of a case not designed for them. Anybody have recommendations on how to build this out? I'd likely just pick up another 3 of those Lian-Li backplanes if they were easier to find.

I'm curious if anybody doing a ZFS build has recommendations on doing SAS -> 4x SATA, and whether flashing a RAID card into IT mode is the right way to go. I'm very hesitant about buying a card to do this; I'd rather run it straight from the mainboard so it's the only thing I'm depending on. Could anybody throw out recommendations for cards that support passthrough? I'll at least have a starting point to look into. I found this, https://www.amazon.com/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M but am looking for opinions on this card or others.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

IOwnCalculus posted:

Backplanes are either highly customized to the case they're in, or they're designed to fit in 5.25" drive bays like this one. However, at least that specific Supermicro one is deep as hell so just because your case has 3x 5.25" bays doesn't mean it will be a good fit.

The standard recommendation for cheap reliable SATA ports is the LSI2008 flashed into IT mode. Of course you'd only need one if the motherboard you want to use doesn't have enough SATA ports to connect the drives you want to use. The card you linked looks like pretty much every other LSI2008 card out there, and according to at least one of the reviews it had to be flashed into IT mode.

I guess my concern with onboard SATA ports vs. some sort of card is the architecture of the mainboard. It appears that all of the "southbridge" (read: DMI/PCH) interfaces top out at 4GB/s on the LGA 2011 boards, and the one I'm looking at would share that with the M.2 slot, GigE LAN, USB, etc. While I think that would be enough bandwidth in theory, from an architecture standpoint it just seems smarter to use more of the PCIe lanes directly. Anyway, that's what I'm wrestling with. I could buy two LSI cards, split my disks across them in mirrored pairs so the pool tolerates losing one card, and I wouldn't be sharing any bandwidth with my NIC, M.2 slot, or much of anything else.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

IOwnCalculus posted:

Is this box going into a corporate environment supporting a massive workload and multiple dozens of spindles?

If not, you're way overthinking this. You're not going to run into PCIe bottlenecks in a home environment.

Nope, home NAS.

DrDork posted:

This. I mean, sure, as a fun exercise in overkill you could, but remember that even GigE is limited to ~100MB/s throughput, which is 1/10th of a 1x PCIe 3.0 lane. So....yeah. You're never, ever, ever going to get any sort of congestion due to a lack of PCIe lanes if all you're doing with it is file serving type stuff.

Hell, even with SLI top-end GPUs gobbling up 16 lanes on their own, running into meaningful PCIe lane slowdowns takes effort.

e; also, if all you're using is 4x drives, by all means stick with straight motherboard connections. People start stuffing LSI cards into their builds because they've run out of motherboard ports, not (generally) because there's anything wrong with the motherboard ports available.

I'm just looking at the single PCH/DMI chip on the mainboard and figuring that with every SATA port (10 of them), plus the GigE, plus (maybe) USB, and for sure an M.2 slot (which eats up 2 lanes on its own), there will be contention on that chip. Maybe I'm overthinking it? poo poo, it's about a $160 difference at the end of the day, which is less than the cost of the mainboard and roughly a third the cost of the CPU. I'm not doing this to be cheap.

There's another thing that I didn't really get into, but if I'm doing more than 10 drives at some point I'd have to buy a card anyway. So why not plan for that now, spend the $120 on it, and be done? Hopefully the only thing I ever need to open this case up for is adding a CPU and some memory.

Internet Explorer posted:

Also you don't need hotswappable drives on a home NAS.

It's a creature comfort for sure, and at the end of the day (I found enough backplanes) it was $110 or so for the 16 ports. *shrug* That's an easily justifiable expense, at least to me.

ILikeVoltron fucked around with this message at 17:50 on Jul 12, 2017

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

DrDork posted:

Remember that PCIe is packet-based: that it's communicating with X devices isn't a big issue so long as the combined bandwidth is below its limits. Certain devices do get dedicated lanes for various reasons, but HDDs generally are not one of those items, since it would be an enormous waste to sit a 100MB/s drive exclusively on a 500MB/s lane. USB's data needs are so hilariously low as to be nonexistent on a modern platform, unless you're talking about using USB 3.0 for an external HDD or something, and even then they're no more worrisome than another HDD.

Assuming you're generating the vast, vast majority of your I/O and data requests via the GigE (it's mostly a file server, no?), the built-in limit of ~100MB/s from the network means it will be trivial for the PCH to serve an array of arbitrary size. If you were talking about doing a lot of large file transfers internally (array <-> M.2 for HD video editing, for example) then maybe it would be worth worrying about. Maybe. But still probably not, because even a 10 drive array is gonna be limited to well under 2GB/s write performance, and that's only 2 lanes, figure another 2 for the M.2 you'd be reading from, so you'd have to be actually hitting near those numbers before you'd start maxing out the 8 PCIe 2.0 lanes that the Haswell/Broadwell PCHs have.

Go for it if you want to, but don't fool yourself into thinking you're getting better performance out of it.

I guess what I don't understand here is how the I/O controller on the PCH chip works. Also, just because an interface supports something on paper doesn't mean you see that in the real world, so I'm a bit hesitant. I get what you're saying, though: even with plenty of headroom it's not something you'll bottleneck on. My only concern is how the controller itself handles contention and splitting up a big block of writes across 10+ disks.

As far as the data I'll be working with, it'll be some VMs, some containers, and some NFS storage most of the time. Other times I'll be building 8+ VMs to launch OpenStack tests (between 32 and 64 gigs of memory for this). I'll be unpacking DVD-sized files, so there will be some I/O that's not coming directly across the wire. I imagine the system will largely sit idle most of the day, but while I'm testing various things it'll be heavily utilized.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

BobHoward posted:

Also, the PCH doesn't split writes across disks. It isn't that smart. The OS decides what gets written where and then asks its SATA driver to do writes through an AHCI SATA controller, which in this case happens to be located in the PCH.

DrDork posted:

To simplify this: The PCH treats one long-rear end write to a single disk pretty much the same as 10 shorter writes to 10 different disks, the only difference being occasionally varying the recipient device header for the data packet (which, as pointed out, isn't even something the PCH does on its own--it just follows what the OS tells it to do). Otherwise the PCH doesn't really give much of a gently caress about where the data is going in that sense, so as long as the total bandwidth you're trying to utilize is less than what the PCH is able to provide, you're fine.

Yeah, poor choice of words on my part. I meant to ask how it handles the contention of making 10 writes to 10 different disks (say, when flushing a large number of blocks out to disk). It might only be 2-3 chunks written to 4-6 disks, with the same data written to each, hence the "how would it split" question. Again, just poorly worded.

DrDork posted:

All that said, this is a thread dedicated to excess and "because I can," so you absolutely shouldn't feel bad about deciding to over-think/over-engineer something on the grounds of "gently caress IT I WANT TO."

gently caress yea! The weird thing that brought me here (not to the thread, but to wanting to build a NAS) was that there just doesn't seem to be a clean and cheap way to do 10+ disks in a NAS. Either you're spending $2200+ on something from QNAP et al. or you're building it yourself. When I started looking into the cost of expanding my little NAS, I figured I wanted it to do VMs and a few other things, and the price kept going up until I was like gently caress this, I'll just build it myself.

BobHoward posted:

DMI2 and DMI3 are really just 4 lane Gen2 or Gen3 PCIe links. The total raw throughput of these links is therefore 2GB/s or 4GB/s before packetization and other overhead. 75% efficiency is achievable: I have measured 1.5 GB/s read throughput from a RAID0 of 4 SATA SSDs connected to an Intel DMI2 PCH.

A PCH chip is just a collection of PCIe IO controllers, each equivalent to what you might plug in to a PCIe expansion slot, plus a PCIe packet switching fabric so they can all share the one DMI (PCIe) link to the CPU. The CPU has a "root complex" (another switch fabric) to provide connectivity between DMI/PCIe ports and DRAM.

How PCIe devices and switches handle contention is a major chunk of the specification, but suffice it to say that PCIe has a credit based flow control scheme which does a good job of fairly allocating each link's bandwidth between all the traffic flows passing through it.

Without bogging everything down in the math of SATA overhead plus every other device, I just looked at the numbers for an M.2 disk and 10+ SATA ports, assuming the drives would cache a little and then be rate-limited by how fast they could flush that cache to the platters. I figured we were getting pretty close to the limits of that interface (DMI/PCH). Am I going to have 10 disks right off the bat? Hell no. Maybe I'm just thinking about how this thing will scale beyond the 10 onboard SATA ports, or maybe I'm just curious how it all works. Either way, thanks for the explanation.
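For what it's worth, the tally I was doing looked roughly like this (illustrative peak numbers, not measurements; as pointed out above, real workloads almost never add up this way):

code:
# worst-case demand on the DMI/PCH link, in MB/s
awk 'BEGIN {
  disks = 10 * 200   # ten HDDs at ~200 MB/s sequential each
  nic   = 125        # gigabit ethernet
  m2    = 2000       # M.2 on two PCIe 3.0 lanes, ~1 GB/s per lane
  printf "peak demand: ~%d MB/s vs DMI3 budget: ~3900 MB/s\n", disks + nic + m2
}'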

It looks like I could get away with using it for now, and then maybe upgrade to some PCIe cards, so thanks for explaining these things.

I'll stew a bit on this but I think I might just go with the cards so I don't have to rebuild my case and re-cable everything later on (assuming I'd grow to 12 disks).

Again, thanks for the help

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

IOwnCalculus posted:

Or skip virtualization, install Ubuntu on an SSD, mount your tank using ZFS on Linux, and use docker for Plex / plexpy / deluge / sonarr / whatever else you want to run.

Quoting this: I'm going the full Docker route and couldn't be happier with everything. I did a CentOS base with ZFS, installed through a goofy (but working) method: nuke the partition the xfs/ext4 install was on, rsync to the rpool, rebuild the kernel, then grub.

I found this compose file and got going within a few hours.

Currently in the process of syncing all of my media from the old NAS to the new one... days later... ugh... gigabit... why is 10gigE so expensive still...
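For anyone curious, "the full docker route" boils down to something like this (just a sketch; the linuxserver.io images and /tank paths are examples, not necessarily what you'd end up with):

code:
# Plex and Sonarr as containers, with configs and media living on the ZFS pool
docker run -d --name plex --network host \
  -e PUID=1000 -e PGID=1000 \
  -v /tank/docker/plex:/config -v /tank/media:/data \
  linuxserver/plex

docker run -d --name sonarr -p 8989:8989 \
  -e PUID=1000 -e PGID=1000 \
  -v /tank/docker/sonarr:/config -v /tank/media/tv:/tv \
  linuxserver/sonarr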

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Furism posted:

Yes, I imported by id ("zpool import -d /dev/disk/by-id pool1"). I even re-exported and imported from CentOS but still no luck.

Is there a log I can find somewhere? I couldn't find any.

Did you run the recommended commands from the wiki?

code:
systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target
also if it's a root mount, you might want to set:

code:
GRUB_CMDLINE_LINUX="rhgb quiet zfs_force=1"
to your /etc/default/grub file.

Then rebuild your grub as usual.
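On CentOS, "rebuild your grub" usually means one of these, depending on whether you boot BIOS or UEFI:

code:
# BIOS boot
grub2-mkconfig -o /boot/grub2/grub.cfg
# UEFI boot
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg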

I've been pretty happy using CentOS for my NAS. I considered the whole FreeBSD/etc. route, but I figured I wanted to do too much with Docker, and maybe even Open vSwitch, for that to work well.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

CommieGIR posted:

BHyve with direct GUI access to the VMs, Docker support is much better, GUI is much more intuitive.

Yeah, its got a lot of flaws, but I feel more at home with it.

I went back and forth over this until I ultimately gave up and just did a straight Linux install. 99% of what I do on my NAS is now container-based, and who's going to run containers better than a native Linux kernel? The only other thing I really want from the host is ZFS support, which ZoL does pretty well. I'd still consider FreeNAS if they had kept on with Corral, though.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Paul MaudDib posted:

Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore :smuggo:

It is infinitely annoying to me that we're in a catch-22, faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

In particular, the abysmal IOPS of gigabit pretty much rules out any of the fun applications, like serving steam disks/booting from your NAS/etc, which might be interesting in a power-user space. Instead you either have everything client-side or you virtualize everything, really no in-between.

Man, I've been saying this for years. 10GigE switches are still pretty expensive, but 1GigE switches, even the prosumer stuff, are super cheap. I wanted to run 10GigE through my house, but I couldn't justify it because the switches still don't seem fairly priced and nothing else would even have a 10GigE NIC in it. It blows my mind.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

H110Hawk posted:

5% free plus rotational disks is going to generate a ton of thrash as it tries to find available stripes to store your data. You need to resolve this before you start mucking with disks. Basically you wind up converting what your NAS has attempted to keep as sequential reads (HDD's are awesome at this) into purely random (SSD's are awesome at this).

You may also have bad disks, or your super long io times could be causing phantom issues, even with counters going up.

Can't you just run a SMART test?
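With smartmontools that's just the following (swap /dev/sda for the disk in question):

code:
# kick off a long self-test in the background
smartctl -t long /dev/sda
# then check reallocated/pending sector counts and the self-test log afterwards
smartctl -a /dev/sda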

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Gay Retard posted:

What kind of case does everybody use? I've been on the hunt for a mid-tower that could comfortably hold 8 3.5" drives, and finally settled on an open box Fractal R5 for $60 on eBay. Eventually I'll replace the 2 x 5.25" bays.

https://www.techpowerup.com/reviews/LianLi/PC-D8000/ - 20 drive slots with SATA ports up front. It's a huge box though; think of it like a coffee table.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!
I really wanted to try building a Ceph-based NAS, but I don't think it's going to work too well if I'm forced to put all my Docker containers on CephFS rather than a natively supported Docker volume backend. Anybody have experience with this? I'm currently running a 4-disk ZFS NAS on CentOS, but every single time I upgrade it breaks, and rebuilding the kernel modules has been a huge PITA.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

originalnickname posted:

If you're not completely in love with CentOS you could pick something that's got better ZFS support, export the pools and import into ubuntu server or something.

I get why you'd want to get your feet wet with Ceph (and it's pretty cool), but man, what you just described sounds like pretty much the poster child of complicated for the sake of being complicated.. unless you wanna be a Ceph admin or something.

I do this for a living, so I don't mind tinkering with it for fun. I've managed several Ceph, OpenStack, and OpenShift deployments in the past. I don't fear running Ceph at all, and even if it fails catastrophically, that's OK too.

As far as running Ubuntu or something, I figure that's what I'll most likely end up doing. I think the other big reason it's been such a pain is that I attempted root-on-ZFS, so I might buy an M.2 SSD for my box too.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Baconroll posted:

Hopefully going to get some good deals on external drives for shucking during the Amazon Prime sales. Other than the WD 8/10 TB Elements, are there any other good options to keep a lookout for?

I was just noticing that the 8TB Seagate drives are priced pretty low right now; I'm thinking I'll pick up a couple when Prime Day starts.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Hughlander posted:

I have a 4 core Xeon with a SuperMicro board w/IPMI that's awesome. The biggest problem is that it's limited to 32 gigs of RAM.

What CPU and mainboard are you using that's limited to 32 gigs of RAM? You might be better off replacing the mainboard if it's slot- or density-limited. I'm having a hard time remembering how far back you'd have to go to find a Xeon that's limited to 32 gigs of RAM.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!
I'm trying to upgrade my mainboard to something that supports M.2 NVMe, and I'm having a huge PITA of a time figuring out wtf Supermicro is talking about in their descriptions. It's down to the following two boards:

X10DRD-iNTP vs X10DRD-iNT



The iNTP lists the following: 2 PCI-E 3.0 NVMExpress x4 External Ports

and the iNT lists: 2 Internal NVMe ports (PCI-E 3.0 x4)

I assume the iNT version is M.2, but it's not really shown that well in any of the pictures or elsewhere, and I'm totally confused about what an "external port" even means on a mainboard.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

pzy posted:

RAM disk for downloading/repairing/unpacking is ungodly fast. Just need... lots of RAM!

Love ZFS for this. I just poked around to verify my ARC is being used properly, and yep: I use a scratch NVMe disk, but the data basically stays in RAM the entire time until I push it to my multimedia pool.
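If anyone else wants to check the same thing, the ARC counters are easy to get at (arc_summary ships with ZoL, older versions call it arc_summary.py; the /proc file is the raw version):

code:
# summarized ARC size and hit-rate stats
arc_summary | head -n 40
# or the raw kernel counters
grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats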

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Heners_UK posted:

Actually that brings me to a point, one I think I'd better ask seeing as I fell flat on my own face talking about security earlier, what are people's thoughts on using a long passphrase? E.g. "tomatoes yoghurt canopy chainsaw cats phesant" rather than "3wM%64t4&&WQW$Wk*qgx". I'm thinking about the time I might have to log in interactively at the console (i.e. use a mouse and keyboard, cannot get to password manager).

EDIT: Generated another passphrase example from bitwarden: "Endurable-Moonlit-Marine-Rush-Frisbee-Dreaded4"

So from your example: the wordlist passphrase draws from 26 lowercase letters plus the space, a 27-character alphabet, and at roughly 45 characters that's about 27^45 ≈ 3 x 10^64 possibilities (around 214 bits, if you assume a character-by-character brute force).

vs. numbers, upper and lower case letters, and 4 special characters: 10 + 26*2 + 4 = 66 symbols, and at 20 characters that's about 66^20 ≈ 2 x 10^36 possibilities (around 121 bits).

So yeah, without explicit knowledge of the pattern used or any of that, the first is something like 28 orders of magnitude stronger than the second, not just marginally better.
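If you want to sanity-check the math yourself, here's a quick sketch of the same character-by-character model (a wordlist attacker would work per-word instead, so treat the passphrase figure as an upper bound):

code:
# bits of entropy = length * log2(alphabet size)
passphrase='tomatoes yoghurt canopy chainsaw cats phesant'
password='3wM%64t4&&WQW$Wk*qgx'
awk -v n="${#passphrase}" 'BEGIN { printf "passphrase: ~%.0f bits\n", n * log(27)/log(2) }'
awk -v n="${#password}"   'BEGIN { printf "password:   ~%.0f bits\n", n * log(66)/log(2) }'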

ILikeVoltron
May 17, 2003

I <3 spyderbyte!
I'm running 20.04 on a server, though I've fooled around with it way more than it's worth. I'm currently battling grub being an utter garbage fire of failure and misery because, for whatever reason, installing zsysd (the hip cool new snapshot/boot manager) made some loving weird as poo poo change, and now my system insists on looking for hwmatch during boot and refuses to stop looking for loving hwmatch... ugh

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

fletcher posted:

Debating which OS to use for my new NAS. All it will be doing is hosting a ZFS array over NFS and Samba. I have more familiarity with Debian based distros at home, and RHEL based distros at work. I was thinking FreeBSD originally for the NAS, but with FreeBSD switching to ZoL anyways, it seems like I might as well just use Ubuntu Server for the NAS to make it easy.

I've been running the 20.04 Ubuntu "beta" for a bit now. It does an install directly onto ZFS and works out all the boot pool / root pool stuff for you. The latest update to zsysd also takes snapshots on apt installs and does a few other neat things. Sadly it's a "desktop" OS, but I've got it running headless on my NAS and have no complaints about it.

pre:
filename:       /lib/modules/5.4.0-14-generic/kernel/zfs/zfs.ko
version:        0.8.3-1ubuntu3
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

IOwnCalculus posted:

How old of hardware are you running it on? I had to :pt: my server (E5 V2) a little while back and the 20.04 USB installer wouldn't even loving boot on it.

Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Supermicro X10DRi/X10DRI-T
