|
chizad posted:The closest equivalent Storage Spaces has to RAID5/RAIDZ1 really works more like Drobo's BeyondRAID. You can throw a bunch of different-size disks at it, *magic* happens, and you get a storage pool that both offers redundancy and makes the most efficient use of the disks. (It's not really magic, of course, but how it's carving things up behind the scenes is hidden from you. The BeyondRAID section in that wiki article I linked does a good job of explaining how a Drobo might handle different-size disks. I'm assuming Storage Spaces works in a similar way.)

There are a few others:
- FlexRAID (supports pooling, different-sized drives, etc. Works out to $60)
- SnapRAID (snapshot parity, 1 or 2 drive failures, command line, free)
- disParity (snapshot only, data is all kept on its own drive, Windows only, free)

There are also some that support drive pooling and different drive sizes, but "mirroring" only (as in, two copies of everything you have). Windows 8 Storage Spaces has this (as well as keeping three copies of everything), and there is also Drive Bender ($17.50-$29.95), which also maintains a filesystem on each disk (so you can pull out a drive and access it in any computer, although you won't have the file structure).
|
# ¿ Dec 17, 2012 20:10 |
|
I have a bunch of data, about 5TB worth. It's on 3x 2TB Hitachi 7K3000s and 1x 3TB Seagate. I would like some form of checksumming that can verify that files are still what they were when they were written to the drive, and can re-verify them every so often. Does something like this exist for Windows? Basically what I want is ZFS scrub on Windows.
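Worst case, I guess I could script it myself with something like hashdeep. Just a sketch of the idea, assuming hashdeep is installed and run from the root of the drive:

code:
# build a manifest of checksums for everything on the drive
hashdeep -r -l . > manifest.txt

# later: audit the drive against the manifest; any mismatch = bit rot
hashdeep -r -l -a -k manifest.txt .

But that's manual and doesn't self-heal, which is why I'd rather have a real scrub.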
|
# ¿ Jan 1, 2013 09:13 |
|
Good cases:
- Fractal Design R2/R3/R4: 8 hdd bays for $100. + 5-in-3 = 13 drives for $10.75 per drive.
- Fractal Design XL: 10 hdd bays for $140. + 5-in-3 = 15 drives for $12 per drive.
- NZXT Source 210: 8 hdd bays for $40. + 5-in-3 = 13 drives for $6.15 per drive.
- Rosewill RSV-L4500: 15 hdd bays for $130. No expansion.
- Silverstone Kublai KL04: 9 hdd bays for $120. + 5-in-3 = 14 drives for $10.71 per drive.
- Azza Helios 910: 4 hdd bays for $80. + 3x 5-in-3 = 15 drives for $13.33 per drive.
- Xigmatek Elysium: 8 hdd bays for $165. + 4x 5-in-3 = 20 drives for $16.20 per drive.
- Xigmatek Elysium: 8 hdd bays for $165. + 2x 5-in-3 = 18 drives for $13.55 per drive.
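The per-drive figures are just (case price + adapter cost) divided by total drive slots; a quick sketch of the math, assuming a ~$40 5-in-3 adapter (a couple of entries above clearly priced the adapter differently):

code:
# per_drive CASE_PRICE NUM_ADAPTERS BASE_BAYS
per_drive() { echo "scale=2; ($1 + $2 * 40) / ($3 + $2 * 5)" | bc; }
per_drive 40 1 8    # NZXT Source 210 + one 5-in-3 -> 6.15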
|
# ¿ Jan 15, 2013 01:50 |
|
You can't expand hardware RAID 5 either, for the most part. Not that you should use RAID 5: you only have protection from one drive failure, and rebuilds take the better part of a day. You should look into RAID 6 or similar double parity.
|
# ¿ Jan 18, 2013 04:45 |
|
The cheapest way to get the most drives is the NZXT Source 210. 8 3.5" bays and 3 5.25" bays for $40. Drives face backwards though, not to the side. In comparison, the Fractal R4 only offers 8 3.5" + 2 5.25" bays for closer to $100.
|
# ¿ Jan 19, 2013 02:21 |
|
Remember when newegg would just toss your drive into a box with the rest of your order, and then cover it with a few sheets of brown paper?
|
# ¿ May 30, 2013 06:16 |
|
Last time I ordered two drives from them (and a few other things), they just tossed them into the bottom of one of those large flat boxes along with a motherboard box, put two layers of paper on top, and shipped it.
|
# ¿ Dec 3, 2013 01:45 |
|
Just ran CrystalDiskInfo and got this on a 1TB Seagate drive: Time to replace? The reallocated sector count seems high.
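The same attribute is easy to pull with smartmontools too, if you'd rather not squint at CrystalDiskInfo (just a sketch; the device name is an example, swap in your own):

code:
# dump SMART attributes and pick out the worrying ones
smartctl -A /dev/sda | grep -i -e reallocated -e pending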
|
# ¿ Jan 8, 2014 04:01 |
|
The drive is probably around 5 years old; it has spent 4.6 years powered on. Of course way back when I bought it I didn't check what the reallocated sector count was.
|
# ¿ Jan 8, 2014 10:36 |
|
Don Lapre posted:This one has a socket 1150 even

£313 is the only price I can find, which would mean basically $399 with usual pricing.
|
# ¿ Jan 16, 2014 06:36 |
|
Not on this one; it's an Intel C224 for the 4x SATA3 and 2x SATA2 ports, and an LSI 2308 for the 2x mini-SAS 8087. The LSI 2308 is also found in controllers like the SAS 9207-8i, which is 300 bucks on its own.
|
# ¿ Jan 16, 2014 08:52 |
|
If you like unRAID you could also consider SnapRAID. Same sort of idea, with your disks of data and dedicated parity drives, but with (very) active development, block checksums, scrubbing, etc. It's snapshot RAID, so you do your parity calculations when you want to, which some people like (save it until the end of the day and run it at night, for example). It's also free, and you can use as many parity drives as you want. It's command line only (the GUI program for it is old and outdated), but it's very easy. http://snapraid.sourceforge.net/

As for XPEnology: it works best either on its own, or running as a virtual machine guest where you pass the drives through to said guest. Works alright in ESXi, not so much in Hyper-V.
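To give you an idea of how simple it is, a minimal snapraid.conf looks something like this (drive letters here are made up, adjust to taste):

code:
parity E:\snapraid.parity
content C:\snapraid\snapraid.content
content D:\snapraid.content
data d1 D:\
data d2 F:\

Then you run `snapraid sync` whenever you want parity updated, and `snapraid scrub` to verify the checksums against the disks.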
|
# ¿ Jan 17, 2014 08:34 |
|
Minty Swagger posted:Hm, so I could for example boot windows off an SSD and then run snapraid just on top of everything. Do you just set everything up as JBOD and then it layers on top? Any downsides rolling an XPEnology build? They both sound great!

That's one way of doing it. It also works with Linux and FreeBSD and OSX and everything. You don't set it up as a JBOD though; you leave your drives as-is, then use the SnapRAID configuration to mark drives as data or parity. You can keep your existing data on those drives, too.

The only downsides to XPEnology are hardware support, which isn't an issue if you're buying new hardware or using supported hardware, and being limited to whatever packages are available for the OS; for most people (and for a dedicated server) it isn't a problem, since you probably won't need to use it as a DNS server or a domain controller, but it's something to look at. It's also why people run it as a virtual machine guest under ESXi, so they can make more virtual machines for those other tasks.
|
# ¿ Jan 17, 2014 10:11 |
|
Minty Swagger posted:I think my biggest push away from snapraid is that it's still a lot of disks with parity, so you still have to manage your disk space per drive unless you utilize another app like StableBit DrivePool or something. That seems like bad news, layering on so many different types of tech, in my opinion!

Newer versions have a pooling feature built in. Not sure how it works in practice since I haven't used it.
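From my reading of the manual (not experience, so take it with salt), it's symlink-based: you add a pool directive to the config and run the pool command, and it builds a read-only view of all your data drives in one directory. Path below is made up:

code:
# in snapraid.conf:
#   pool C:\pool
snapraid pool    # (re)creates the symlink tree under C:\pool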
|
# ¿ Jan 17, 2014 20:58 |
|
Use their live chat and talk to them until they give you a shipping label.
|
# ¿ Feb 21, 2014 01:27 |
|
There are some 5-in-3 adaptors you can buy on eBay; pair one with that Fractal Arc XL and you'll be at 13.
|
# ¿ Jul 2, 2014 05:22 |
|
Few questions for FreeNAS (9.2.1.5):
- Is there a way to turn on staggered spinup using FreeNAS, or is that a board option?
- I want weekly scrubs on a Sunday. Would the following options be correct:
code:
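In plain cron terms, what I'm after is something like this (assuming the pool is named tank):

code:
# run a scrub at 3am every Sunday
0 3 * * 0  zpool scrub tank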
|
# ¿ Jul 5, 2014 04:02 |
|
D. Ebdrup posted:-The Common Access Method for SCSI/ATA (CAM) can control all sorts of behaviour of disks as described on the man page - you're looking to adjust the CAM_MAX_HIGHPOWER setting.

- Hmm, I see that FreeNAS has an option for advanced power management; are those options worth considering? For example, level 127 being intermediate power usage with standby.
- Thanks.
- Yup, the CPU isn't slowing down lz4 compression. It's a dual core Sandy Bridge Celeron, so it should be ok.

Was wondering: what is the best way to back up all the data on the pool to something like an external hard drive? I see rsync, but I'm not sure if I can use that from the pool to a separate drive, and I also see replication, but can it do that?
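What I had in mind for the rsync option is basically just a local copy (mount points below are made up; --delete makes it an exact mirror, so anything removed from the pool disappears from the backup too):

code:
# mirror the pool's contents onto an external drive mounted at /mnt/external
rsync -av --delete /mnt/tank/ /mnt/external/tank-backup/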
|
# ¿ Jul 6, 2014 01:23 |
|
Having some problems with my NAS.
- Out of nowhere, it stops sharing, and the web GUI denies connections.
- Cannot ssh into the NAS, but can ssh into jails.
- Rebooted to the mountpoint prompt; turns out a drive died (a really ancient 1TB Seagate drive).
- Removed the drive, rebooted, boots up just fine.
- Still cannot access the web GUI. Cannot ssh into the NAS, either.
- Tried a factory reset. Can't access the web GUI or ssh into the NAS.
- Posted on SA

Hardware:
- FreeNAS 9.2.1.5
- Some Celeron dual core Sandy Bridge
- MSI P67A-GD65
- Some lovely video card just for video (like a Radeon 5450)
- Seasonic X760
- 3x Hitachi 2TB
- 1x Toshiba 2TB (raidz2 with the above 3 Hitachis)
- 1x Seagate 3TB
- 1x Seagate 1TB (that died and is no longer in the system)
- TP-Link TG-3468 NIC (uses a Realtek 8111 iirc)

Halp

Wild EEPROM fucked around with this message at 01:10 on Aug 16, 2014 |
# ¿ Aug 16, 2014 00:39 |
|
MrMoo posted:Is the NIC dead? Can you ping somewhere from the text menu in FreeNAS? Can you ssh into itself?

Yes, I can ping somewhere from the shell on FreeNAS. I can't ssh into it though; it denies access. I can ssh into my jails. I also forgot to mention that I had a second NIC hooked up, but I tried using just that one, same thing.
|
# ¿ Aug 16, 2014 01:09 |
|
I managed to fix it: I made a new USB drive with 9.2.1.7 (previously on 9.2.1.5), plugged it into a different USB port, and it booted right up like a fresh install. Then I restored my backup config, and it's running again. A combination of a USB3 drive, a first-gen USB3 chipset, and a shady header was the reason I couldn't boot from that USB drive previously. I'm so glad I backed up my configuration. So so so so so glad. Not sure why it wouldn't work on my 9.2.1.5 USB drive, but it's back up so I'm happy. Also, there was no /var/log directory at all.

Wild EEPROM fucked around with this message at 02:58 on Aug 16, 2014 |
# ¿ Aug 16, 2014 02:53 |
|
I always thought that idea was pretty terrible. 4 drives involves 4 rebuilds, which takes loving forever. Why not just put all your new drives in, build a new array, and then just move everything over?
|
# ¿ Nov 16, 2014 05:31 |
|
I have a few jails set up in FreeNAS. Let's call them jail1 and jail2. I have a bunch of drives set up, too:
- 4x drives in raidz2, let's call it pool1
- 2x drives in mirror, let's call it pool2

I have both jails on pool1, and I would like to move them to pool2. How would I do this in the best or least-bad way possible?
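My first instinct is zfs send/receive of the jail datasets with the jails stopped, then repointing the jail root at pool2 in the GUI. Dataset names below are guesses; I'd check `zfs list` first:

code:
# snapshot the jail dataset and copy it to the other pool
zfs snapshot -r pool1/jails/jail1@move
zfs send -R pool1/jails/jail1@move | zfs receive pool2/jails/jail1

But I don't know if that's the least-bad way.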
|
# ¿ Apr 5, 2015 03:36 |
|
Hooray, a seagate 3tb, my st3000dm001, just died. Thanks seagate Theagate
|
# ¿ Apr 20, 2015 02:46 |
|
Running FreeNAS 9.3 on some older hardware. Got this error:
code:
|
# ¿ Jul 3, 2016 04:23 |
|
Uh oh, checked out my other drives: code:
code:
code:
|
# ¿ Jul 3, 2016 05:18 |
|
with my extremely small sample size of fewer than 20, seagate is at 100% failure.
|
# ¿ Dec 3, 2016 09:32 |
|
Okay, so right now I have:
- Celeron G540
- MSI P67A-GD65
- 4x 4GB of some gaming RAM
- EVGA 750W B3 PSU
- 2x Hitachi 3TB drives
- 4x WD 4TB Red drives
All running on FreeNAS 11 using ZFS.

Ideally I'd like to have more storage. I'm not bottlenecked by the CPU, because all I use this for is storage and 2x jails; not doing any re-encoding or anything. However, my last 2 remaining SATA ports are on a Marvell or ASMedia or some other non-Intel chipset. Also, the RAM is not ECC, which would be nice.

I saw that some of the older Supermicro boards were going for low prices, so I am considering:
- Keeping the G540, since it's not a bottleneck; does it support ECC? I seem to remember it did, but the ARK page does not mention anything about ECC. Otherwise I could pick up an E3 1220 v3 or similar.
- Supermicro X9SCM or X9SCL or similar (~$90 CAD)
- 4x 8GB DDR3 ECC RAM - do I need just plain ECC, ECC unbuffered, or ECC registered?
- What is the current good HBA card to use for ZFS with FreeNAS?
- Keeping the PSU, case, etc.

Would it be a good idea, or should I just abandon that idea and get some things that aren't from 2012? Budget is undecided.
|
# ¿ Nov 25, 2019 04:39 |
|
Ok, so it looks like a suitable build would be:
- E3-1220 v2: ~$50
- Supermicro X9SCL: ~$80
- 4x 8GB DDR3 ECC: ~$160
Total: $290 or so, keeping the existing CPU cooler, PSU, and case, plus something like an LSI 9211 or 9207.

Any better ideas? I live in Canada, so selection on Supermicro or ASRock Rack is extremely limited already.
|
# ¿ Nov 27, 2019 06:49 |
|
Calculate PCIe bandwidth and see if it's worth it for your use case. PCIe 2.0 x8 will do about 4GB/s in each direction; you would be lucky to get more than 100-150MB/s per hard drive, so you should be ok unless you are truly maxing out your storage, using massive numbers of expanders, or running tons of SSDs.

Pretty much all of the half-height LSI cards will have a full-height bracket available, and usually if you are buying on the second-hand market it will include said full-height bracket. That link will also have the manufacturer's version of that specific card, which will save you tons of money vs buying the real-deal LSI card.
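Back-of-envelope version of that math (assumptions: PCIe 2.0 x8, ~500MB/s usable per lane per direction, 150MB/s per disk):

code:
# how many disks before an x8 PCIe 2.0 link becomes the bottleneck
echo $(( 8 * 500 / 150 ))   # -> 26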
|
# ¿ Feb 12, 2020 09:20 |
|
the problem you're all having is that you've named your pools "tank". like, cmon, have some originality
|
# ¿ Mar 12, 2020 09:01 |
|
Idea that's probably incredibly dumb: why not pick up a couple of SAS expanders and go off of that? Obviously you would lose some performance, but better than relying on USB for anything. Or what about getting an HBA with more ports? I assume you're probably using an LSI 8i card; what about a 9206-16e and appropriate cables until you're done?
|
# ¿ Mar 16, 2020 04:43 |
|
Godzilla07 posted:Following my earlier post I started looking at Ivy Bridge Xeon E3s, and then saw that a Haswell system wouldn't be that much more for better idle power consumption. How is this deal for the following CPU/RAM/MB combo for $220 after shipping?

How much pain are you willing to endure? Check out an old Dell workstation. I bought one with an E3-1270 v3 and 8GB ECC RAM for $170 CAD shipped. Threw in a PSU adaptor cable for 20 bucks and off to the races with some old hardware I had laying around.
|
# ¿ Apr 21, 2020 09:06 |
|
DrDork posted:The HP workstation series (Z440, etc) are also decent alternate options. They tend to sell for a bit less than the Dell or Lenovo options, probably due to being less popular on some of the DIY build sites, but they work just fine. Biggest downside to them is they tend not to have IPMI, so you'll need to manually set things up at least once. They also use a custom power cable like some of the Dells, so same deal there ($20-$40 adapter) if you wanted to move the guts into your own case, but I've found mine to be reasonably quiet as-is.

sharkytm posted:They also use really odd mounting screws with built-in shock absorbers, some of which are nearly impossible to find and are often discarded by resellers. You can buy them online or 3D print your own, but it's a word of warning to potential buyers. I've got several HP workstations, and I've had to 3D print spacers for the drive cages to work properly. There are even 2 different ones for 2.5" drives, depending on the model of computer... I'm happy with the computers, but the screws are a pain in the rear end. My SSDs in the EliteDesk 800 G2 SFF are just double-sided taped in place because I couldn't be bothered to dimension the second type of screw spacer.

I've got an EliteDesk 800 G2 tower too, and it takes the same dumb system. I find the key is to find one that's more or less mATX-sized; the lower-end Precisions use straight-up mATX boards with their custom power cable, but they fit in a regular-sized case just fine, use a standard IO plate, etc. At least in this application; if I was going dual CPU (e.g. a Lenovo P500) it would be a different situation... A 5-in-3 would be fine for drives if you can live with 5 drives.
|
# ¿ Apr 22, 2020 00:40 |
|
SCheeseman posted:I threw together a Ryzen 3 1200-based Linux server a few years ago that I've been using for Plex etc as well as a QNAP NAS that I want to retire. The way storage is set up is a bit of a mess at the moment, so the idea is to set up a software RAID5 with 9x8TB hard drives on the server (eventually expanding to 10 or 11 drives). I already have 6x8TB drives that aren't in an array (four in the NAS and 2 in the server) and I'm going to buy another 3x8TB drives, create the initial array then transfer stuff over, adding drives to the array as I empty them. They're SMR drives, so I imagine this will be slow as hell and I understand there will be speed penalty during RAID rebuilds, though the NAS is only really used to store video files for streaming so speed requirements aren't high. Is this a terrible idea? 9 drives and 1 parity? And smr? And that much rebuilding? Terrrrrrrible idea in every single way.
|
# ¿ Apr 29, 2020 10:02 |
|
SCheeseman posted:I threw together a Ryzen 3 1200-based Linux server a few years ago that I've been using for Plex etc as well as a QNAP NAS that I want to retire. The way storage is set up is a bit of a mess at the moment, so the idea is to set up a software RAID5 with 9x8TB hard drives on the server (eventually expanding to 10 or 11 drives). I already have 6x8TB drives that aren't in an array (four in the NAS and 2 in the server) and I'm going to buy another 3x8TB drives, create the initial array then transfer stuff over, adding drives to the array as I empty them. They're SMR drives, so I imagine this will be slow as hell and I understand there will be speed penalty during RAID rebuilds, though the NAS is only really used to store video files for streaming so speed requirements aren't high. Is this a terrible idea?

Okay, instead of just saying it's a bad idea, here's why it's a bad idea:
1) Rebuilding is a very likely point of failure. Rebuilding 6 times is insanity.
2) RAID5 only gives you 1 drive of parity. You can only afford to lose 1 drive before you lose everything. Any failure during that rebuild will be the end.
3) SMR drives in RAID will not rebuild properly or calculate parity properly, and rebuilding to/from an SMR drive is just more data loss.
4) Software RAID is usually not portable. That means if you have to reinstall your OS, your RAID won't go with it. Usually this matters when your hardware dies and you have to buy new parts.
5) If you are buying new drives for a RAID, don't buy SMR drives.
6) Rebuilds take a very long time. Aim to minimize rebuilds.

And what I would suggest, given that you have a spare motherboard/cpu:
1) Buy new CMR server/NAS-specific drives. For example, the WD Easystores or MyBooks (pull them out of the enclosures), or the WD Red 8TB+, Seagate IronWolf, Toshiba something-or-another drives, etc. The 8TB Easystores are usually on sale, but the 10, 12, and 14TB ones go on sale pretty regularly too.
2) Install FreeNAS onto said spare motherboard/cpu; if you have a spare SATA port, I would suggest using an old hard drive or SSD. This is only for the OS, not data. If you don't have a spare SATA port, perhaps a SATA-to-USB enclosure would be the route to take.
3) Make a new pool with your new hard drives. Use as much parity as you think you will need; if you are using 3 drives, a raidz1 should be ok.
4) Make sure to set up regular scrub and snapshot tasks.
5) Copy over the data from a few drives first, then once they are empty, set up a new pool with those drives. Repeat.
6) Make sure to save your FreeNAS configuration on something that isn't on that storage.

What drives do you have now? Are you sure they are SMR drives?

Obviously, just some basic steps with few specifics.
- I have no idea how unRAID works, so someone else will have to tell you about it.
- The best part about using ZFS and FreeNAS is that the storage info is stored on the drives, so even if you need to replace your CPU/motherboard/FreeNAS boot drive and you have no backup of your configuration, you can simply import the disks and your data is still there (see the sketch below).
- The speed penalty during a RAID rebuild is that your drives will basically be unusable during that time. You should strive to avoid rebuilds if you can help it.
- Running VMs is a completely different can of worms.
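That import step really is just this (pool name is whatever you called it when you made it):

code:
zpool import          # list pools found on the attached disks
zpool import tank     # import one by name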
|
# ¿ Apr 30, 2020 01:40 |
|
You can flash the H710 to IT mode
|
# ¿ May 2, 2020 05:49 |
|
I know things in Europe are more expensive, but that looks exceedingly expensive, especially for the Xeon. Considering that for ~350 USD (310 euros or so), Dell will usually sell you a full Precision workstation with a Xeon and some RAM, and it works out of the box.
|
# ¿ Jun 19, 2020 08:25 |
|
Speed of the drive doesn't matter, so just buy a SATA-USB enclosure and go to town.
|
# ¿ Jul 1, 2020 10:16 |
|
Re 2.5” hddchat: Look up the Seagate Rosewood if you wanna see some real fun. It has a load-bearing sticker.
|
# ¿ Dec 10, 2020 01:48 |