|
I'm building my own NAS, looking at Ubuntu + ZFS to run either a 4 x 2TB RAIDZ2 or possibly 6 x 2TB RAIDZ2. The Ubuntu selection instead of ESXi + various VMs is mostly inspired by Lowen SoDium's write-up, and future plans to use Crashplan. Is there a Linux-based utility I can use to check the drives when I first get them? I know the different manufacturers have their own utilities, but they only work in Windows, if I recall correctly. Is it worth setting up an Ubuntu/Windows 7 dual-boot for this? Also, in case anyone else is currently loading up a NAS, Newegg has the 2TB WD Red drives for $100 with coupon code EMCXSTX25 until May 6. http://promotions.newegg.com/NEemai...der-_-ClickHere
|
# ? Apr 30, 2013 21:26 |
|
wanieldong posted:Also, in case anyone else is currently loading up a NAS, Newegg has the 2TB WD Red drives for $100 with coupon code EMCXSTX25 until May 6. FINALLY. I've been waiting for one of these sales for months.
|
# ? Apr 30, 2013 21:32 |
|
wanieldong posted:Also, in case anyone else is currently loading up a NAS, Newegg has the 2TB WD Red drives for $100 with coupon code EMCXSTX25 until May 6. Neeuuggh Newegg, stop it, I just spent way too much money on 3TB Red drives.
|
# ? Apr 30, 2013 21:39 |
|
Also the N54L is a Shell Shocker again, ten bucks less than last time. http://www.newegg.com/Product/Product.aspx?Item=N82E16859107921
|
# ? May 1, 2013 03:23 |
|
Does anyone know how to calculate the offset for the mdadm superblock? I had a 6x2TB raid5 array go nuclear on me and I'm trying to recover it, and it looks like the UUID on one of the drives somehow got switched over to that of my 6x1.5TB raid5 array, so I'd like to update the UUID and hope that it can be added back to the (inactive) array. I found this page with instructions on how to do just that, but the instructions for calculating the offset are confusing me: "offset = partition_size & ~65535 - 65536". I realize the & is a bitwise AND in C, but I can't for the life of me figure out how to get a working number from that (just tried the partition size - 65536 but it didn't work). Also, I'm planning on switching over to zfs, but first I need to upgrade my box (the mobo only supports DDR2 memory, which is insanely expensive, so I might as well upgrade to one that does DDR3). Anyone have any suggestions? Preferably one with onboard graphics (or support for a CPU with integrated graphics) since it's just a server, Intel-based, and with at least one x8 PCIe slot (two would be great: one for my SAS card, and a second would let me add more drives).
|
# ? May 1, 2013 03:30 |
|
wanieldong posted:Is there a Linux-based utility I can use to check the drives when I first get them? I know the different manufacturers have their own utilities, but they only work in Windows, if I recall correctly. Is it worth setting up an Ubuntu/Windows 7 dual-boot for this? "badblocks -wsv /dev/<device>" This destroys any data on the device.
|
# ? May 1, 2013 12:03 |
|
thideras posted:"badblocks -wsv /dev/<device>" This destroys any data on the device. Perfect! Thank you.
|
# ? May 1, 2013 13:38 |
|
Sub Rosa posted:Also the N54L is a Shell Shocker again, ten bucks less than last time. http://www.newegg.com/Product/Product.aspx?Item=N82E16859107921 For those little self-contained workstations, is this the one to get now over the N40L?
|
# ? May 1, 2013 15:24 |
|
Megaman posted:For those little self-contained workstations, is this the one to get now over the N40L? It has a beefier processor, so it can handle the load a little better than the older N40L and N36L.
|
# ? May 1, 2013 15:50 |
|
Moey posted:It has a beefier processor so can handle the load a little better than the older N40L and N36L. I assume it's the same case so if I wanted to put 6 drives instead of the normal 4 I'll have to buy extra parts and tear it apart a little as usual?
|
# ? May 1, 2013 15:59 |
|
Megaman posted:I assume it's the same case so if I wanted to put 6 drives instead of the normal 4 I'll have to buy extra parts and tear it apart a little as usual? Looks pretty much the same to me. For 6 drives people were just shoving 2x3.5" drives into the optical bay.
|
# ? May 1, 2013 16:09 |
|
Moey posted:Looks pretty much the same to me. For 6 drives people were just shoving 2x3.5" drives into the optical bay. Oh, you can fit 2 3.5s in the optical bay? I thought you could only fit one. That's good news.
|
# ? May 1, 2013 16:15 |
|
Megaman posted:Oh, you can fit 2 3.5s in the optical bay? I thought you could only fit one. That's good news. This is what has been advised by this thread in the past to accomplish that. http://www.amazon.com/Noiseblocker-X-Swing-Adapter-Noise-Reducer/dp/B000S8B8J6 People have even gone farther and shoved another drive (or maybe 2x2.5") below the 5.25 bay. Don't forget about picking up an IBM M1015 from ebay if you are going nuts with drives. The box has 4 SATA ports onboard, plus the ODD SATA port that can be used at full speed with a flashed BIOS, plus the eSATA port, which some people routed back into the case for 6 SATA drives without an add-in card.
|
# ? May 1, 2013 16:36 |
|
Moey posted:This is what has been advised by this thread in the past to accomplish that. This is what I did. Works perfectly. Get a short 1-foot eSATA-to-SATA cable and run it out the back to the eSATA port (the little flap that holds down the expansion cards will still close if a cable is routed underneath it).
|
# ? May 1, 2013 19:52 |
|
atomjack posted:Does anyone know how to calculate the offset for the mdadm superblock? I had a 6x2TB raid5 array go nuclear on me and I'm trying to recover it, and it looks like the UUID on one of the drives somehow got switched over to that of my 6x1.5TB raid5 array, so I'd like to update the UUID and hope that it can be added back to the (inactive) array. I found this page with instructions on how to do just that, but the instructions for calculating the offset are confusing me: "offset = partition_size & ~65535 - 65536". I realize the & is a bitwise AND in C, but I can't for the life of me figure out how to get a working number from that (just tried the partition size - 65536 but it didn't work). I just looked into this for you now. Metadata 1.2 is actually stored at 4K from the start of the component device; I've confirmed this on mine as follows:

dd if=/dev/sda1 of=superblock bs=1 skip=4096 count=512
hexdump -C superblock
00000000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
00000010  50 6d 3e 1d b6 c3 3b 44  f7 f1 ff 97 c4 c4 f0 49  |Pm>...;D.......I|

and the UUID for my array is 506d3e1d:b6c33b44:f7f1ff97:c4c4f049. If that doesn't work, check at the exact beginning of the partition for metadata 1.1, or in the last 512K of the partition for metadata 1.0. Metadata 0.9 is as follows quote:The superblock is 4K long and is written into a 64K aligned block that starts at least 64K and less than 128K from the end of the device (i.e. to get the address of the superblock round the size of the device down to a multiple of 64K and then subtract 64K). The available size of each device is the amount of space before the super block, so between 64K and 128K is lost when a device is incorporated into an MD array. Was that device ever a part of the 6x1.5TB array? It might be possible that the device possesses the metadata from both raid arrays, considering different metadata versions put them at different locations on the disk.
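In shell, that 0.9 calculation works out once you make the parentheses explicit (& binds looser than minus in C). A quick sketch, plugging in a hypothetical 2TB partition size — substitute the real figure from blockdev --getsize64 on your partition:

```shell
# Hypothetical partition size in bytes (a common 2TB-drive figure).
part_size=2000398934016

# offset = (partition_size & ~65535) - 65536
#   & ~65535  clears the low 16 bits, i.e. rounds down to a 64K boundary
#   - 65536   then steps back one more 64K block
offset=$(( (part_size & ~65535) - 65536 ))
echo "$offset"   # 2000398843904 for this example size
```

That's where a 0.9 superblock would sit; dd from that offset the same way as above to check for one.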
|
# ? May 1, 2013 22:26 |
|
theperminator posted:I just looked into this for you now Pretty much just gonna have to lose the whole array (and the worst part is it was combined with the 6x1.5TB array in an LVM volume so the data in both arrays are toast) and restart from scratch (with zfs once I upgrade the hardware). Thanks for your help though!
|
# ? May 4, 2013 01:08 |
|
I've been reading through the latest few pages and this question might be a bit too low-level for the kind of stuff I've seen. Right now all my media content is hosted on my computer, and streamed to my jailbroken ATV running XBMC. I'd like to move all that to an external drive hooked directly into the router. I'm looking for something that is 1TB or bigger, can plug straight to ethernet, does SMB and/or FTP. Ideally, it would also be aesthetically pleasing enough to sit next to the TV and not look horribly out of place, super bonus mega points if I can shut off or dim any annoying lights. I don't need it to run SABnzbd or Sickbeard since I'll be doing that from my computer so, honestly, the closer it is to plug-and-play, the better. Is there anything like that worth the price? Should I just go by whatever Newegg reviews recommend?
|
# ? May 5, 2013 21:51 |
|
atomjack posted:Pretty much just gonna have to lose the whole array (and the worst part is it was combined with the 6x1.5TB array in an LVM volume so the data in both arrays are toast) and restart from scratch (with zfs once I upgrade the hardware). Thanks for your help though! Well that sucks. I'm guessing you've got another drive failed as well? Because if it's just this one drive that's giving you grief, you should have at least been able to start the array in a degraded state and copy the data off. One last thing to try is to recreate the array using the same parameters, drive numbers, etc., following this. If you do it wrong though, you'll just end up making things worse as far as I know. theperminator fucked around with this message at 06:58 on May 6, 2013 |
# ? May 6, 2013 06:14 |
|
theperminator posted:Well that sucks, I'm guessing you've got another drive failed as well? because if it's just this one drive that's giving you grief you should have at least been able to start the array in a degraded state and copy the data off. Edit: Well, I went ahead and did it anyway, without specifying a chunk size, but using --assume-clean, and it seemed to recreate the array and it appears clean, although now I've hit a wall with LVM. Originally I had two raid5 arrays (6x1.5TB & 6x2TB) joined together with LVM into a big ~17TB LVM volume. I was able to restore the physical volumes (PVs), but the logical volume only shows a PV size of 9.1TB, which is the size of the 6x2TB raid5 array, and the LV Status is suspended. Is there a way I can restore the LV to use both arrays? atomjack fucked around with this message at 07:23 on May 6, 2013 |
# ? May 6, 2013 07:02 |
|
if you do a pvdisplay does it show both raid sets?
|
# ? May 6, 2013 07:28 |
|
theperminator posted:if you do a pvdisplay does it show both raid sets? Edit: code:
|
# ? May 6, 2013 07:32 |
|
I'm noticing the pvdisplay is showing that only ~644GB of your extents on md2 are assigned. If you do a vgdisplay, does the Total PE equal 2538255? That's what I get by adding the two allocated PE figures you posted. What happens if you try vgchange -an then vgchange -ay on it? theperminator fucked around with this message at 07:44 on May 6, 2013 |
# ? May 6, 2013 07:42 |
|
theperminator posted:I'm noticing the pvdisplay is showing that only ~644GB of your extents on md2 are assigned. I tried vgchange -an and vgchange -ay: code:
code:
atomjack fucked around with this message at 07:46 on May 6, 2013 |
# ? May 6, 2013 07:44 |
|
Odd, does mdadm --detail /dev/md3 show the correct array size?
|
# ? May 6, 2013 07:59 |
|
theperminator posted:Odd, does mdadm --detail /dev/md3 show the correct array size?
|
# ? May 6, 2013 08:01 |
|
Alright, so I've figured out that it's 1.29MB short according to my terrible maths. How that's even possible I don't know, because surely anything incorrect in the re-creation of the raid array would've caused way more than that? Does your bash history go far enough back to when you created the array, by any chance? :edit: apparently it's metadata or something; someone else had the same problem and managed to sort it out here, in the last reply. Basically, he deleted the LV then recreated the LV without formatting it, then ran an fsck, but it sounds to me like it's got a good chance of destroying any data there is. It looks like it's a similar issue to what you had: it thought one of the drives was part of another array and got them mixed up, probably wrote the extra metadata somewhere, and because of that the md device is that much shorter. Did you run mdadm --zero-superblock on all of the devices before recreating? Also, I've found that different versions of mdadm set the data offset differently, so if you've done an update between creation of the array and now it could account for the missing space. theperminator fucked around with this message at 09:34 on May 6, 2013 |
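For what it's worth, the offset each member was created with is recorded in its superblock, so you can compare them with something like this (device names are a guess — adjust to your actual members):

```shell
# A mismatched "Data Offset" here would point at a member
# written by a different mdadm version.
for d in /dev/sd[a-f]1; do
    echo "== $d"
    mdadm --examine "$d" | grep -E 'Data Offset|Super Offset|Array Size'
done
```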
# ? May 6, 2013 08:31 |
|
theperminator posted:Also I've found that different versions of mdadm set the data offset differently, so if you've done an update between creation of the array and now it could account for the missing space Yeah, this burned the gently caress out of me in December, I was porting a md array and didn't realize the version of mdadm used initially was old enough to have the older metadata format. I spent like 20 hours loving around trying to recreate it, but eventually just gave up and started over.
|
# ? May 6, 2013 17:05 |
|
I decided to enable compression on a dataset that has about 500GB on it. It didn't really do anything... Can you do that? Enable compression with data already on the dataset or does it only compress newly written data?
|
# ? May 6, 2013 18:55 |
|
It should only compress data as it writes it - it's not going to go through and rewrite / attempt to compress the data. That said I think even on a N36L compression's overhead is negligible so there's not much reason to disable it, even if it's not helping the data you've written to date.
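For example (pool/dataset names here are made up, and whether lz4 is available depends on your zpool version — older ones only have lzjb/gzip), this is roughly the sequence and how you'd check whether it's doing anything:

```shell
# Turn compression on; only blocks written from now on get compressed.
zfs set compression=lz4 tank/media

# compressratio only reflects data written while compression was enabled,
# which is why a freshly enabled dataset shows ~1.00x.
zfs get compression,compressratio tank/media

# To compress what's already there you have to rewrite it, e.g. copy
# files aside and back, or zfs send/recv into a fresh dataset.
```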
|
# ? May 6, 2013 18:58 |
|
I gave that a try but had no luck. I removed the LV (suitcase), and the Volume Group shows that it has the entire 15.92TiB of space available to it, with 600GB used (by timemachine), but trying to recreate the LV gives the same resume ioctl error as before. I even tried specifying /dev/md3 followed by /dev/md2 (I have a backup of the /etc/lvm/archive/ folder, and the file from when I believe the array was created, July of 2011, lists the order of the LV as starting with the larger array, /dev/md3, then the smaller one, /dev/md2). If you happen to know of another way I could try to create the LV I will give it a try, but otherwise I'll be ready to just start over.
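One more thing that might be worth trying before wiping, since that /etc/lvm/archive backup exists: LVM can restore the VG metadata straight from an archived config with vgcfgrestore (the VG name and archive filename below are made up, and this only rewrites metadata — like everything else here it could also make things worse):

```shell
# See which archived copies LVM knows about for the volume group
vgcfgrestore --list bigvg

# Restore the archive that predates the failure, then reactivate
vgcfgrestore -f /etc/lvm/archive/bigvg_00042-1234567890.vg bigvg
vgchange -ay bigvg
lvs
```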
|
# ? May 6, 2013 19:41 |
|
I backed up my NAS by making a bunch of 7z archives split into 1GB chunks. I've got 1.6TB on a 3TB drive that I need to split onto some 500GB drives so I can stick the 3TB drive into the array. With some of the other data my archives became corrupt and I lost a couple of files (that I was able to recover because I had the source). I put the archives directly on my 500GB disks, then copied the archives to the new array and expanded from there, because I can't figure out how to get 7z to extract from one disk to another without first using the system temp directory. This time I'm doing an extra step from old array to big drive to little drive to new array, so I'm even more worried about losing files. My first idea was to make a par set of the files, but you try and make a 10% par set on a 700GB set of files with an Intel E6600, and tell me that's a good idea. So I need a better way to copy this data around and protect it. At the very least I could split my 700GB 7z set into two PAR sets of 350GB, but even that will take a while.
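A much cheaper middle ground than a full PAR set is a checksum manifest made before each hop and verified after (caveat: unlike par2 this only detects corruption, it can't repair anything — but it tells you which chunk to re-copy while the source still exists). A sketch with a stand-in file name:

```shell
# Before copying: build a manifest covering every archive chunk.
printf 'example archive data' > backup.7z.001   # stand-in for a real 1GB chunk
sha256sum backup.7z.001 > manifest.sha256       # in practice: sha256sum *.7z.* > manifest.sha256

# After each hop: copy the manifest along with the chunks, then verify.
sha256sum -c manifest.sha256    # prints OK/FAILED per file, non-zero exit on any mismatch
```

Hashing 700GB still takes a while on an E6600, but nothing like computing 10% parity.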
|
# ? May 7, 2013 06:16 |
|
I just bought the N54L and it's a great solution, but I want to mount a hard drive in the optical bay. Can someone link me to a Newegg mounting solution for putting a drive into this bay?
|
# ? May 8, 2013 21:45 |
|
Megaman posted:I just bought the N54L and it's a great solution, but I want to mount a hard drive in the optical bay, can someone link me to a newegg mount solution for putting a drive into this bay? If all you're looking for is a way to mount a 3.5" drive in there then here's the cheapest option from Newegg (which is what I'm using): http://www.newegg.com/Product/Product.aspx?Item=N82E16817997040 If you want something fancier that will let you swap it easier, something like this would work: http://www.newegg.com/Product/Product.aspx?Item=9SIA1DS0FR8903 Really you can just search for '5.25 to 3.5 adapter' on Newegg and get a ton of results, just go with whatever strikes your fancy. Edit: Just remember when mounting to use whatever screws come with the adapter to secure the 3.5" drive, but use the 4 silver screws found inside the door of the N54L to secure the enclosure into the 5.25 slot. Krailor fucked around with this message at 22:42 on May 8, 2013 |
# ? May 8, 2013 22:36 |
|
Krailor posted:If all you're looking for is a way to mount a 3.5" drive in there then here's the cheapest option from Newegg (which is what I'm using): http://www.newegg.com/Product/Product.aspx?Item=N82E16817997040 Stupid question, I'm going to have to dig to find the SATA ports on the board, but how many are actually available on the board? Am I going to have to buy a 2x SATA extender?
|
# ? May 9, 2013 02:29 |
|
Megaman posted:I just bought the N54L and it's a great solution, but I want to mount a hard drive in the optical bay, can someone link me to a newegg mount solution for putting a drive into this bay? I got http://www.amazon.com/SilverStone-Aluminum-5-25-Inch-Converter-FP55B/dp/B0040JHMIU/ since I didn't need hot-swapping like http://www.newegg.com/Product/Product.aspx?Item=N82E16817994143&Tpk=MB971SP-B provides. It's also cleaner-looking because I can just leave the existing blanking plate on. Megaman posted:Stupid question, I'm going to have to dig to find the SATA ports on the board, but how many are actually available on the board? Am I going to have to buy a 2x SATA extender? There's one on the mobo for the ODD, and there's also eSATA on the back that you can route back through if you wanted.
|
# ? May 9, 2013 06:25 |
Be aware that the ODD port by default runs at only SATA150, not SATA300; to get full speed you need an unofficial BIOS. Here is a handy little bookmark.
BlankSystemDaemon fucked around with this message at 09:42 on May 9, 2013 |
|
# ? May 9, 2013 09:38 |
|
So I've got my server setup running with ZFS / RaidZ2 and crashplan automating the backups of all my computers to it. Is there anything I need to do for maintenance or tuning with ZFS?
|
# ? May 10, 2013 07:47 |
Weekly short SMART tests and monthly long SMART tests, a zfs scrub every 30 days, and back up the contents of your server to Crashplan. Look into a vKVM solution so you don't need physical access to the machine.
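Wired into cron that could look something like this (a sketch only — the pool name, device list, and times are all placeholders, adjust to your box):

```
# /etc/cron.d/nas-maintenance (hypothetical)
# Weekly short SMART self-test, Sundays at 03:00
0 3 * * 0   root   for d in /dev/sd[a-f]; do smartctl -t short "$d"; done
# Monthly long SMART self-test, the 1st at 04:00
0 4 1 * *   root   for d in /dev/sd[a-f]; do smartctl -t long "$d"; done
# Scrub roughly every 30 days, the 15th at 02:00
0 2 15 * *  root   zpool scrub tank
```

Check the results with smartctl -a and zpool status now and then, or have cron mail them to you.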
|
|
# ? May 10, 2013 10:56 |
|
Hey guys, Is there a good solution that will back my data up, that ISN'T solely over wireless? I ask because my internet is reaeaallyyy terrible and I'd like to keep a backup of all my data in a convenient place (NAS) and am wondering if there's a good solution out there. Also, I'd like it if it was still accessible over the internet as well.
|
# ? May 10, 2013 11:06 |
|
Guni posted:Hey guys, Can you clarify what you mean? I can't think of any backup product that is solely wireless, unless you count those little wifi storage drives for iphones/ipads/ipods.
|
# ? May 10, 2013 11:37 |