|
Cool. I'm gonna upgrade the pool and reboot.
|
# ? Jul 30, 2017 22:22 |
|
|
|
redeyes posted:those are 5400rpm drives right? (or 5900?) code:
|
# ? Jul 30, 2017 23:42 |
|
Well, now that I have all the 8TB drives I need for my next build and still no release date for the new Intel Atom platform, I wasn't able to stop myself from dropping $200 on decommissioned server parts and the required accessories to toss together a system to get my new drives spinning. The CPU is probably only a slight upgrade from my old Atom, and it'll suck a lot more power, but except for 10GbE I don't think I'm missing anything important. I'm not even sure I'll upgrade to the new Atoms, but if I do, this ended up being cheap enough that, if nothing else, it'll make a good backup system, so I'm not panicking to revive an old desktop at 5am when my next server decides to die.
|
# ? Aug 1, 2017 09:12 |
|
quote:but except for 10GbE I don't think I'm missing anything important.
|
# ? Aug 1, 2017 18:39 |
|
redeyes posted:But that is getting important with big arrays. Imagine getting more than 100MB/s performance.. Yeah, I said there's nothing else important that it's missing. It'll probably come down, eventually, to deciding between a PCIe NIC and a new Atom board.
|
# ? Aug 1, 2017 20:23 |
|
code:
|
# ? Aug 2, 2017 19:22 |
Unless you're running ZFS, I can almost guarantee that you've silently lost data. If you're running ZFS, you'll at least know which file is corrupted, assuming it couldn't self-heal it.
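The reason ZFS can at least name the corrupted file is that every block is stored with a checksum that gets verified on read. Here's a toy Python illustration of the idea; this is not ZFS code, just a sketch of why checksummed storage detects silent corruption that a plain filesystem returns as good data:

```python
# Toy illustration of ZFS-style end-to-end checksumming. Not real ZFS code.
import hashlib

def store(block: bytes):
    # Store the data alongside a checksum of its contents.
    return {"data": bytearray(block),
            "checksum": hashlib.sha256(block).digest()}

def read(record):
    # Verify the checksum on every read; a plain filesystem skips this step
    # and hands back whatever the drive returned.
    if hashlib.sha256(bytes(record["data"])).digest() != record["checksum"]:
        raise IOError("checksum mismatch: silent corruption detected")
    return bytes(record["data"])

rec = store(b"important data")
rec["data"][0] ^= 0xFF      # simulate a bit flip on disk
try:
    read(rec)
except IOError as e:
    print(e)                # checksum mismatch: silent corruption detected
```

With redundancy (mirror/RAIDZ), ZFS can then fetch a good copy from another disk and rewrite the bad one, which is the "self-heal" part.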
|
|
# ? Aug 2, 2017 20:44 |
|
eightysixed posted:
Yes. Life-raft your data elsewhere immediately.
|
# ? Aug 2, 2017 21:03 |
|
D. Ebdrup posted:Unless you're running ZFS, I can almost guarantee that you've silently lost data. No ZFS - just a regular hard drive in a regular i7 workstation - I just asked here because it seemed like the most relevant thread. DrDork posted:Yes. Life-raft your data elsewhere immediately. On it.
|
# ? Aug 2, 2017 21:16 |
|
For the "prosumer"/IT guys here - would you recommend building out a dedicated NAS, or a beefy ESXi host running a virtual NAS appliance, in a home lab setting? I was kicking around the idea of building a FreeNAS box, but then I thought: why not just build a monster ESXi host instead, which gives me a lot more flexibility? The ESXi host might cost more upfront, but I think the bells and whistles might make it worthwhile in the long run. Thoughts?
|
# ? Aug 2, 2017 23:02 |
|
Do you have anything else that you'd want to run a "native" VM for? FreeNAS has its own VM hypervisor, so if all you're intending to do is some penny-ante VM stuff (like I run a tiny Linux VM for Crashplan), you may not really get much out of ESXi. Also, if you're not setting up anything particularly complex with FreeNAS in terms of shares and permissions, it's actually pretty quick to tear down and rebuild / reinstall, so it's not terribly painful to decide later on that you want an ESXi box and migrate over.
|
# ? Aug 2, 2017 23:24 |
|
I am a VM admin by trade... so ESXi just feels "right".
|
# ? Aug 2, 2017 23:31 |
|
Fair enough, but if you don't have a real use for it, you're just making life harder on yourself, robbing your NAS of some performance, and inserting extra layers of failure opportunity for no gain. It totally does work, though, as long as you can pass through the SATA controller(s).
|
# ? Aug 3, 2017 01:43 |
|
Since you're going with FreeNAS, if you need the thing to haul rear end, it'll need gobs of RAM (talking 16GB and more) and CPU to spare to run that 10GbE or faster adapter. Might as well go standalone.
|
# ? Aug 3, 2017 02:50 |
|
cr0y posted:For the "prosumer"/IT guys here - Would you recommend building out a dedicated NAS or a beefy ESXi host and then a virtual NAS appliance in a home lab setting? The SAN-plus-application-server combo is a thing because it works real well. You don't have to have everything in one chassis, and you retain more ability to scale in more/different machines down the road if you want. Infiniband is real cheap nowadays; dual-4x QDR adapters, rack switches, and short QSFP cables are not bad at all. I would actually go the other direction: overbuild the NAS and have it host VMs for now if you need. Might be a good idea to serve any databases from this machine too.
|
# ? Aug 3, 2017 03:20 |
|
D. Ebdrup posted:Unless you're running ZFS, I can almost guarantee that you've silently lost data. Eh. If the raw value of the uncorrectable error count (SMART attribute 187 decimal or BB hex) is still zero, barring firmware bugs in the drive he hasn't lost data. Yet. eightysixed posted:
Whatever SMART reporting tool you got this out of, re-run it and see if you can find the uncorrectable error count I mentioned. Do that ASAP, and then once more after you're done copying all data off. If it's zero and stays zero, it's highly probable that you got everything off without loss. (The SMART uncorrectable error count is supposed to bump up by 1 every time the drive was asked to read a sector and couldn't correct the data, meaning it ended up returning garbage.) For future reference, don't worry about Raw_Read_Error_Rate. It's never straightforward to interpret and will frequently show insanely high values on a perfectly healthy drive. Reallocated sector count and current pending sectors are the important ones here.
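If you want to script that check instead of eyeballing it, here's a rough Python sketch that pulls the raw values of the attributes mentioned above (5 Reallocated_Sector_Ct, 187 Reported_Uncorrect, 197 Current_Pending_Sector) out of `smartctl -A` style output. The sample text below is fabricated for illustration, just mirroring smartctl's usual table layout:

```python
# Sketch: extract the critical SMART attributes discussed above from
# smartctl -A output. Sample data is made up, not from a real drive.
CRITICAL = {5: "Reallocated_Sector_Ct",
            187: "Reported_Uncorrect",
            197: "Current_Pending_Sector"}

def critical_raw_values(smartctl_text):
    """Return {attribute_id: raw_value} for the attributes that matter."""
    values = {}
    for line in smartctl_text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():       # attribute rows start with the ID
            attr_id = int(fields[0])
            if attr_id in CRITICAL:
                values[attr_id] = int(fields[-1])  # RAW_VALUE is the last column
    return values

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       123456789
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       121
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""
print(critical_raw_values(sample))  # {5: 121, 187: 0, 197: 8}
```

Note the huge Raw_Read_Error_Rate in the sample gets ignored on purpose, per the advice above; a drive like this one (187 still zero) probably hasn't returned garbage yet, but the pending sectors mean get your data off now.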
|
# ? Aug 3, 2017 04:08 |
|
I do systems engineering for work, and nowadays I'd rather not have to manage yet another set of VMs (although my machines at home are historically better managed than the ones at work, sadly). There are also some performance disadvantages to running everything in ESXi. For example, I had pfSense running virtualized and pinned to certain CPUs to avoid latency spikes, but my aggregate network performance with similar network parameters (MTU and other TCP tuning on the router) was 10% slower in raw throughput than the $50 EdgeRouter X I set up, and its latency was at least 100 microseconds better. Dedicated hardware does matter, and SDN doesn't fix all networking ails. I can even manage my EdgeRouter with Chef or Puppet, which isn't really easy with pfSense either. I put all my eggs into one basket primarily because I wanted to run OS X in a VM (it was a pain in the rear but I got El Capitan working). Unfortunately, the VNIC options on a non-Apple machine mean you can't use 10GbE and vmxnet awesomeness when you virtualize it on a ghetto budget. My current migration path is to run CoreOS or CentOS with ZFS and containers for any service I run, for easier maintenance (might use LXD instead of Docker containers). Also, I'm going to just run iTunes on a more heavyweight HTPC. My AppleTV rocks pretty hard, but wow does the wife just not want to let go of her physical FF/RW buttons (the scrubber control on AppleTV can't be overridden and works poorly with the FF/RW buttons from most remotes that map to such a function). So I've been getting around to a Plex Media Player setup once again. Sometimes I miss the XBMC box I never updated for like 3 years.
|
# ? Aug 3, 2017 04:10 |
|
eightysixed posted:
DrDork posted:Yes. Life-raft your data elsewhere immediately. Halfway through life-rafting the things I want (but don't need) that aren't backed up off-site. Reallocation is already up to 121. Chug, little hard drive, chug. I hope she makes it through the night. Ticking time bomb. At least I can finally find a use for this NIB Evo 850 that I've still never used edit: The only reason I thought to check was because everything was crashing all the drat time and opening super slow, and the HDD was pegged steady at 100%. My ole' i5 2500K with 32GB of RAM is about to get speedy again eightysixed fucked around with this message at 01:30 on Aug 5, 2017 |
# ? Aug 5, 2017 01:17 |
|
So, I got my hands on some RAM and upgraded my home PowerEdge R710 and maxed out the RAM. Then boot2docker went nuts on my FreeNAS/FreeBSD install and messed up the networking; had to fix that.
|
# ? Aug 6, 2017 03:09 |
|
300 GB of RAM... are you running a Hadoop cluster or a home server? The RAM's power draw alone might rival my 8-disk NAS box.
|
# ? Aug 6, 2017 16:10 |
|
That is obscene. I like it.
|
# ? Aug 6, 2017 17:08 |
|
CommieGIR posted:So, I got my hands on some RAM and upgraded my home PowerEdge R710 and maxed out the RAM.
|
# ? Aug 6, 2017 17:21 |
|
I really do hope that FreeNAS 11 ports over the monitoring bits of Corral sooner rather than later. To have a NAS-appliance OS in 2017 that doesn't have any built-in way to monitor SMART status via the GUI is pretty damned dumb.
|
# ? Aug 6, 2017 19:05 |
|
I have 7 hard drives in my main desktop PC, being used via Hyper-V on Windows Server 2016. At this point I don't want the HDs in my main box, for various reasons. Is there a go-to way to mount them in an external enclosure/rack and connect that to Windows 10? I assume I would need something like an HBA and some type of cabling. I'd like high performance if possible: at bare minimum, normal 150-200MB/s single-drive speed.
|
# ? Aug 6, 2017 21:43 |
|
iSCSI and high-speed Ethernet. Some might also suggest Infiniband, but for some reason Ethernet works better here (I have ConnectX-3 VPI cards, which can do both). Of course, depending on how far away you want that SAN box, fiber is required, and that makes things more expensive.
|
# ? Aug 6, 2017 22:06 |
|
These are directly attached, right? You just want to move them out of the chassis? eSATA with a port multiplier is fine.
|
# ? Aug 6, 2017 22:53 |
|
Anyone have experience with the Western Digital My Cloud products? I've found a decent deal on the EX4100 with 8TB and I'm wondering if the thing's any good or not; I'd mainly be using it to store media and automatically download from Usenet. This thing here: https://www.amazon.com/EX4100-Expert-Network-Attached-Storage/dp/B00TB8XN2E
|
# ? Aug 7, 2017 16:19 |
|
Incessant Excess posted:Anyone have experience with the Western Digital My Cloud products? I've found a decent deal on the EX4100 with 8TB and I'm wondering if the things any good or not, would mainly be using it to store media and automatically download from Usenet. Last I checked these weren't very good, but it'll serve files as well as anything.
|
# ? Aug 7, 2017 16:46 |
|
Is there such a thing as sticky pads for slapping onto the inside of a cupboard to create sound-proofing when you've got a PC in there? Thanks for reading the above cumbersome sentence.
|
# ? Aug 7, 2017 19:35 |
|
apropos man posted:Is there such a thing as sticky pads for slapping onto the inside of a cupboard to create sound-proofing when you've got a PC in there? gently caress yeah there are. For the reasonable man: https://www.newegg.com/Product/Product.aspx?Item=N82E16811999222 For the overkill man: https://www.newegg.com/Product/Product.aspx?Item=9SIA8GS37V2935 (not self-stick, but some glue fixes that)
|
# ? Aug 7, 2017 19:44 |
|
Woohoo! More poo poo to waste my money on!
|
# ? Aug 7, 2017 21:38 |
|
Combat Pretzel posted:iSCSI and high-speed Ethernet. Some might also suggest Infiniband, but for some reason Ethernet works better here (I have ConnectX 3 VPI cards, which can do both). Of course, depending on how far away you want that SAN box, fiber is required, and that makes things more expensive. Sorry, I want to put my array in my garage, which isn't where my workstation is. Sounds like iSCSI and high-speed Ethernet is probably the ticket. Distance is around 100 feet. quote:These are directly attached, right? You just want to move them out of the chassis? Yeah, kind of.. but over a 100-foot-ish distance. Is there a go-to iSCSI enclosure, or is it the kind of thing where you're better off building your own box? redeyes fucked around with this message at 22:01 on Aug 7, 2017 |
# ? Aug 7, 2017 21:57 |
|
redeyes posted:Sorry, I want to put my array in my garage which isn't where my workstation is.. sounds like iSCSI and high speed Ethernet is probably the ticket. Distance is around 100 feet. You're not going to pull 150-200MB/s per disk over GigE, period, much less from 7 disks at once. You're looking at 10GbE or Infiniband (or FC) for that, and CPU load is going to matter. How much performance do you actually need?
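The back-of-envelope math behind that, as a quick Python sanity check (line rates only, ignoring protocol overhead, so real-world numbers will be somewhat lower; the 200MB/s per-disk figure is the upper end quoted above):

```python
# Rough line-rate arithmetic for the claim above: GigE can't carry even
# one fast disk at full tilt, let alone seven.
def link_MBps(gigabits_per_sec):
    """Theoretical payload ceiling in MB/s for a given link rate."""
    return gigabits_per_sec * 1000 / 8   # 1 Gb/s = 125 MB/s before overhead

gige = link_MBps(1)          # 125.0 MB/s
ten_gige = link_MBps(10)     # 1250.0 MB/s
seven_disks = 7 * 200        # 1400 MB/s if all seven disks stream at once
print(gige, ten_gige, seven_disks)
```

Note that even 10GbE falls a little short of seven disks streaming at 200MB/s simultaneously, which is why "how much performance do you actually need" is the right question: for a single sequential workload, one or two disks' worth of bandwidth over 10GbE is plenty.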
|
# ? Aug 7, 2017 22:10 |
|
evol262 posted:You're not going to pull 150-200MB/s per-disk over gige, period, much less 7 disks at once. You're looking at 10Ge or infiniband (or FC) for that, and CPU load is going to matter. If it's only one server to one client, Infiniband/FC is gonna end up being much cheaper than 10GbE.
|
# ? Aug 7, 2017 23:43 |
|
necrobobsledder posted:300 GB of RAM... are you running a Hadoop cluster or a home server? The RAM's power draw alone might rival my 8-disk NAS box. It's running a VM cluster and Docker containers for Plex, has a PERC 6/e connected to an MD1000 array, FreeNAS is running on two 100GB SSDs in RAID1, and the VMs are stored on a 500GB RAID6 array of six 146GB SAS disks.
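For anyone checking the numbers on that layout, here's the usable-capacity arithmetic in Python (decimal GB, ignoring filesystem overhead):

```python
# Usable capacity sketches for the arrays described above.
def raid6_usable(n_disks, disk_gb):
    # RAID6 spends two disks' worth of capacity on parity.
    return (n_disks - 2) * disk_gb

def raid1_usable(n_disks, disk_gb):
    # A mirror stores one copy's worth, regardless of disk count.
    return disk_gb

print(raid6_usable(6, 146))  # 584 -- the "500GB" VM array, roughly
print(raid1_usable(2, 100))  # 100 -- the FreeNAS boot mirror
```

So six 146GB disks in RAID6 give about 584GB raw, which lands near the quoted 500GB once formatting overhead is taken out.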
|
# ? Aug 7, 2017 23:55 |
|
CommieGIR posted:It's running a VM cluster and Docker containers for Plex, has a PERC 6/e connected to an MD1000 array, FreeNAS is running on two 100GB SSDs in RAID1, and the VMs are stored on a 500GB RAID6 array of six 146GB SAS disks. ...that I use to store my downloaded movies and porn on.
|
# ? Aug 8, 2017 00:27 |
|
^ if only you could have thread titles that length. Would be perfect for this thread.
|
# ? Aug 8, 2017 06:22 |
|
Can I use regular SATA cables between a SAS device and a SAS controller?
|
# ? Aug 8, 2017 08:08 |
|
Since Google Drive has been giving my wife guff lately, I've been musing about going the home NAS route as a replacement for her. All she uses the cloud storage for is photos and videos she takes while traveling, so it's not exactly taxing. The main issue is making sure files/folders on there can be remotely accessed/shared with family. I have no issues building something from scratch, but frankly that seems like it would probably be overkill. Is it more sensible to grab a store-bought unit and shove a WD Red or three in it? ~tia
|
# ? Aug 8, 2017 08:16 |
|
|
|
CommieGIR posted:It's running a VM cluster and Docker containers for Plex, has a PERC 6/e connected to an MD1000 array, FreeNAS is running on two 100GB SSDs in RAID1, and the VMs are stored on a 500GB RAID6 array of six 146GB SAS disks. Sounds like instead of spending money to move the drives to your garage, you should buy a pair of cheaper 500GB SSDs to replace the RAID6 array.
|
# ? Aug 8, 2017 09:06 |