movax
Aug 30, 2008

titaniumone posted:

I can get a 4-port Intel PRO/1000PT PCI-E on eBay for about $100, and my switch (Cisco 3560G) supports LACP, so I should be able to set this up easily.

Is that an x4 or an x8 card? I wouldn't mind doing the same, but I have no free slots for it. I guess I could team the two x1 Intel adapters I have for now.


evil_bunnY
Apr 2, 2003

movax posted:

24GB of RAM and 8 threads, solely for file-serving at the moment.
I love how you blurb for 6 lines before mentioning this gem.

IT Guy posted:

True, unless you LAGG your endpoint as well :)
LACP will only do 1 interface's worth of traffic on a single stream.
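
On Linux bonding, for what it's worth, the hash policy is what decides which interface a given flow rides. A minimal sketch of the module options (just the bonding side; switch config and interface enslaving omitted):

code:
# /etc/modprobe.d/bonding.conf
# 802.3ad (LACP) with layer3+4 hashing, so different TCP/UDP flows
# can spread across slaves. Any single flow still uses one interface.
options bonding mode=802.3ad xmit_hash_policy=layer3+4 miimon=100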

evil_bunnY fucked around with this message at 16:39 on Apr 12, 2012

movax
Aug 30, 2008

evil_bunnY posted:

I love how you blurb for 6 lines before mentioning this gem.

It was all so cheap :( G. Skill RAM and used CPU. Buying brand new is for suckers :smug:

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit.

Well, I need more storage, and I value having it all as one big blob instead of separate filesystems.

My current setup involves multiple mdadm RAID5 arrays joined with LVM with an ext4 filesystem.
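
In rough terms, building that stack looks like this (device names are placeholders, not my actual layout):

code:
# two mdadm RAID5 arrays out of the member disks
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[efgh]1
# both arrays become PVs in one volume group, with a single big LV on top
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media storage
mkfs.ext4 /dev/storage/media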

My experience makes me lean towards just keeping my Ubuntu server and continuing with my mdadm RAID5 + LVM set up and migrating to a new filesystem that supports >16TB.

  1. Are there any filesystems which support a lossless in-place upgrade?
  2. Which filesystem is the "best" for a workload that mostly serves large video files to several other computers?
  3. If I can't do an in-place upgrade, do I have any options that get me from where I am currently to a new >16TB filesystem that supports expanding the filesystem when I add new storage, without requiring me to buy 16TB more storage just to copy all my current data to?

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

Thermopyle posted:

I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit.

EXT4 has a file size limit of 16TB, not volume size.

Edit: After Googling it, it appears to be a partition size limit with e2fsprogs.

IT Guy fucked around with this message at 16:53 on Apr 12, 2012

movax
Aug 30, 2008

IT Guy posted:

EXT4 has a file size limit of 16TB, not volume size.

Yeah, this. Ext4 supports something ridiculous volume-wise, like an exabyte. No modern file system in use on NASes has a native limit you'll hit; you might run into artificial ones like Nexenta Free's cap, or maybe path depth/size issues, but not a volume limit.

FAT32 has a 16TB limit though :laugh:

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Thermopyle posted:

I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit.

Well, I need more storage, and I value having it all as one big blob instead of separate filesystems.

My current setup involves multiple mdadm RAID5 arrays joined with LVM with an ext4 filesystem.

My experience makes me lean towards just keeping my Ubuntu server and continuing with my mdadm RAID5 + LVM set up and migrating to a new filesystem that supports >16TB.

  1. Are there any filesystems which support a lossless in-place upgrade?
  2. Which filesystem is the "best" for a workload that mostly serves large video files to several other computers?
  3. If I can't do an in-place upgrade, do I have any options that get me from where I am currently to a new >16TB filesystem that supports expanding the filesystem when I add new storage, without requiring me to buy 16TB more storage just to copy all my current data to?

BTRFS can do an in-place conversion: https://btrfs.wiki.kernel.org/articles/c/o/n/Conversion_from_Ext3_6e03.html
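
If you go that route, the conversion itself is supposedly a one-liner; something like this, per the wiki (device name made up, and obviously back up first):

code:
# unmount, then convert the ext3/ext4 filesystem in place
umount /mnt/storage
btrfs-convert /dev/sdb1
# it keeps an ext2_saved image around, so you can roll back:
btrfs-convert -r /dev/sdb1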

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

IT Guy posted:

EXT4 has a file size limit of 16TB, not volume size.

Edit: After Googling it, it appears to be a partition size limit.

movax posted:

Yeah, this. Ext4 supports something ridiculous volume-wise, like an exabyte. No modern file system in use on NASes has a native limit you'll hit; you might run into artificial ones like Nexenta Free's cap, or maybe path depth/size issues, but not a volume limit.

FAT32 has a 16TB limit though :laugh:



Yes. I talked to Theodore Ts'o about it at the time. He told me that I'm basically SOL if I don't want multiple partitions. Ext4 doesn't support >16TB partitions.

Specifically, the problem is that no released version of the ext4 tools supports it. The format, as designed, does, but nothing out there in use is constructed in such a way to support it.

Ts'o posted:

There isn't a way to get around this issue, I'm afraid. Support for >16TB file systems is something that requires a fundamental change in the file system format, and which is in late beta status.

It requires the pre-release, e2fsprogs 1.42 development branch, and unfortunately, the file system must be reformatted to support larger (greater than 32-bit) block numbers. Certain on-disk data structures (specifically, the block group descriptors) have to grow in size from 32 bytes to 64 bytes, so it's not something where we could upgrade a file system in-place to support >16TB.
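
So with the 1.42 development tools a >16TB ext4 filesystem is possible, but it's a fresh format with the 64-bit feature, not an upgrade. A sketch (device name hypothetical):

code:
# requires e2fsprogs 1.42+; this reformats, it does not convert in place
mkfs.ext4 -O 64bit /dev/storage/media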

Thermopyle fucked around with this message at 16:57 on Apr 12, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Wait, huh? What is going on? What the hell is a partition limit on a file system? That doesn't even make any sense to me.

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

Thermopyle posted:

Yes. I talked to Theodore Ts'o about it at the time. He told me that I'm basically SOL if I don't want multiple partitions. Ext4 doesn't support >16TB partitions.

Does he have plans to fix this in newer versions of his software or is there a physical limit he's hitting as well?

edit:

FISHMANPET posted:

Wait, huh? What is going on? What the hell is a partition limit on a file system? That doesn't even make any sense to me.

http://blog.ronnyegner-consulting.de/2011/08/18/ext4-and-the-16-tb-limit-now-solved/

Longinus00
Dec 29, 2005
Ur-Quan

Thermopyle posted:

I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit.

Well, I need more storage, and I value having it all as one big blob instead of separate filesystems.

My current setup involves multiple mdadm RAID5 arrays joined with LVM with an ext4 filesystem.

My experience makes me lean towards just keeping my Ubuntu server and continuing with my mdadm RAID5 + LVM set up and migrating to a new filesystem that supports >16TB.

  1. Are there any filesystems which support a lossless in-place upgrade?
  2. Which filesystem is the "best" for a workload that mostly serves large video files to several other computers?
  3. If I can't do an in-place upgrade, do I have any options that get me from where I am currently to a new >16TB filesystem that supports expanding the filesystem when I add new storage, without requiring me to buy 16TB more storage just to copy all my current data to?

You want XFS.
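
It's handled volumes way past 16TB for years, and it grows online as you extend the LV underneath it. Roughly (paths made up):

code:
mkfs.xfs /dev/storage/media
mount /dev/storage/media /srv/media
# after adding disks and growing the LV:
lvextend -l +100%FREE /dev/storage/media
xfs_growfs /srv/media

Just remember XFS can grow but not shrink.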

FISHMANPET posted:

Wait, huh? What is going on? What the hell is a partition limit on a file system? That doesn't even make any sense to me.

He means volume size limit.

Longinus00 fucked around with this message at 18:39 on Apr 12, 2012

Sombrero!
Sep 11, 2001

Urgh, after seeing that $199 N40L I feel so dumb for buying 2 of them at $269. Oh well.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Sombrero! posted:

Urgh, after seeing that $199 N40L I feel so dumb for buying 2 of them at $269. Oh well.

Even at $269 it seems like a really great deal compared to what other devices on the market cost. (I bought one at $269.)

sleepy gary
Jan 11, 2006

Sometime today, while I wasn't at work, the N40L I have in place there locked up completely. Nobody could access any shares, etc. So I came in and found it was outputting a full screen of vertical black and white stripes and was completely unresponsive.

My confidence in it is extremely shaken. This thing is supposed to work hassle-free for years once I finish setting it up...

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

DNova posted:

Sometime today, while I wasn't at work, the N40L I have in place there locked up completely. Nobody could access any shares, etc. So I came in and found it was outputting a full screen of vertical black and white stripes and was completely unresponsive.

My confidence in it is extremely shaken. This thing is supposed to work hassle-free for years once I finish setting it up...

Yeah, that sounds like a major hardware or memory problem. I'd run memtest on it, and if that shows clean, time to call HP and demand a new one.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

UndyingShadow posted:

Yeah, that sounds like a major hardware or memory problem. I'd run memtest on it, and if that shows clean, time to call HP and demand a new one.
Also make sure it's properly ventilated. The fans in the N40L aren't amazingly powerful (nor do they need to be normally), so if you stuffed it in a small closet with no airflow or something, it may have overheated itself.

sleepy gary
Jan 11, 2006

It's just sitting on a cart out in the open. It would be nice if it was just the memory... I'll get memtest started on it in a few minutes.

edit: first pass completed without errors... ugh.

sleepy gary fucked around with this message at 08:05 on Apr 13, 2012

mattfl
Aug 27, 2004

Anyone ever use a ReadyNAS X6? I got one for free a while back and I'm just about ready to pull the trigger on 4x 2TB drives for my media storage. Just wondering if it's worth using or should I just pick up an N40L. Its use is strictly for tv/movie storage and that's it.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry
Is there any kind of virtualization software that you can run on top of FreeNAS?

sleepy gary
Jan 11, 2006

Lowen SoDium posted:

Is there any kind of virtualization software that you can run on top of FreeNAS?

No but you can do it the other way around.

devilmouse
Mar 26, 2004

It's just like real life.

Lowen SoDium posted:

Is there any kind of virtualization software that you can run on top of FreeNAS?

VirtualBox mostly runs on FreeBSD, so it should run on FreeNAS as well.

http://wiki.freebsd.org/VirtualBox
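
On stock FreeBSD it's just the port or package; no promises about how cleanly that lands on FreeNAS's stripped-down base, so treat this as a sketch:

code:
# from ports:
cd /usr/ports/emulators/virtualbox-ose && make install clean
# or the prebuilt package:
pkg_add -r virtualbox-ose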

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry
^^^ Thanks! I found this link discussing it more.

DNova posted:

No but you can do it the other way around.

Yeah but then I couldn't store the VM's disk files on the ZFS datastore.

Lowen SoDium fucked around with this message at 14:48 on Apr 13, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
If you use, say, ESXi, you boot it from memory stick 1. Then you boot FreeNAS virtualized from memory stick 2, and pass your hard drives to it raw. Then you tie part of the volume back to ESXi as iSCSI or somesuch, and use it to store the other VMs.
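
The "pass your hard drives to it raw" step is done with physical-mode RDMs; something like this per disk (the vml identifier here is fake):

code:
# create a physical-compatibility RDM pointer file for one local disk
vmkfstools -z /vmfs/devices/disks/vml.0100000000abc123 \
    /vmfs/volumes/datastore1/freenas/disk0-rdm.vmdk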

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Factory Factory posted:

If you use, say, ESXi, you boot it from memory stick 1. Then you boot FreeNAS virtualized from memory stick 2, and pass your hard drives to it raw. Then you tie part of the volume back to ESXi as iSCSI or somesuch, and use it to store the other VMs.

I know for a fact that this would work, but I would rather not do it this way.

Muslim Wookie
Jul 6, 2005

Factory Factory posted:

If you use, say, ESXi, you boot it from memory stick 1. Then you boot FreeNAS virtualized from memory stick 2, and pass your hard drives to it raw. Then you tie part of the volume back to ESXi as iSCSI or somesuch, and use it to store the other VMs.

Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way they all magically never seem to see my follow up post about the poor performance...

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

marketingman posted:

Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way they all magically never seem to see my follow up post about the poor performance...

It must be a REALLY lovely USB card, because aside from swap space and booting, you don't really use the card at all.

evil_bunnY
Apr 2, 2003

marketingman posted:

Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way they all magically never seem to see my follow up post about the poor performance...
Then you break out DTrace. Then you shoot yourself, because you're digging in dtrace at home in your free time.
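
If you insist, something like this counts block I/Os per process, which is about where the fun stops:

code:
dtrace -n 'io:::start { @[execname] = count(); }'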

LmaoTheKid posted:

It must be a REALLY lovely USB card, because aside from swap space and booting, you don't really use the card at all.
I don't think he's talking about root FS IO performance :laugh:

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

evil_bunnY posted:

I don't think he's talking about root FS IO performance :laugh:

Whoops! Didn't even think of that.

Yeah dude, if you were running your VMs off of the stick, you're doing it wrong.

Muslim Wookie
Jul 6, 2005
Mapping local disks as RDMs through to a NAS VM seems to give terrible performance for me, about 1/10th of what it should be. Running the same ZFS pool on the same hardware, booting FreeNAS off a USB drive, say, gives exactly the performance expected.

And basically, yes, I want this all to end why am I doing this at home daslnfskldnfsdfjnslk

Edit: Uhh can you even create a datastore on a USB stick? The VM datastore for the initial NAS VM was a single SATA disk - not many IOPS but as you noted, there's no need for them.

hucknbid
Jul 20, 2007

You sir, are a mouthful.
Anyone done any extensive testing on Windows 8 storage spaces?

I was running WHS with drive extender, but my system drive is having issues, and rather than go through the trouble of trying to reinstall WHS on a new drive and rebuild my drive extender array, I'd rather just move away from the tech, since it's EOL.

Storage spaces sounds like it's exactly what I need, since I have various SATA and USB drives in different sizes, but I ran into some issues when first messing around with it.

I set up a parity storage space with 4 drives: 2TB, 1TB, 750GB, and 60GB. Once the smallest drive in my storage space was full, the whole storage space froze, and the storage spaces management tool became unresponsive, even after I restarted Windows 8. I eventually got it to unfreeze after I just unplugged the 60GB drive.

I was under the impression that a parity storage space could still use different-size drives fully, and wouldn't necessarily be limited by the size of the smallest drive.

I just made a new array with the three bigger drives, but I'm afraid that once the 750GB fills up, I'll experience the same issue. I realize it's just a consumer preview, but I've found a few other people with the same problem, and it's a pretty serious one.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

hucknbid posted:

Anyone done any extensive testing on Windows 8 storage spaces?

I was running WHS with drive extender, but my system drive is having issues, and rather than go through the trouble of trying to reinstall WHS on a new drive and rebuild my drive extender array, I'd rather just move away from the tech, since it's EOL.

Storage spaces sounds like it's exactly what I need, since I have various SATA and USB drives in different sizes, but I ran into some issues when first messing around with it.

I set up a parity storage space with 4 drives: 2TB, 1TB, 750GB, and 60GB. Once the smallest drive in my storage space was full, the whole storage space froze, and the storage spaces management tool became unresponsive, even after I restarted Windows 8. I eventually got it to unfreeze after I just unplugged the 60GB drive.

I was under the impression that a parity storage space could still use different-size drives fully, and wouldn't necessarily be limited by the size of the smallest drive.

I just made a new array with the three bigger drives, but I'm afraid that once the 750GB fills up, I'll experience the same issue. I realize it's just a consumer preview, but I've found a few other people with the same problem, and it's a pretty serious one.

I was gonna say "hey, a friend of mine is having that same problem" and then, welp.

I think if you had two 750GB drives it might work (with the system treating them essentially as one 1.5TB drive for parity); that's the common-sense way to implement it, but who knows what Microsoft actually does.

Echophonic
Sep 16, 2005

ha;lp
Gun Saliva
Is there a major problem with mixing drive speeds in a RAID5? I have a set of WD EARS drives at 5400RPM, and I got a set of 7200RPM Hitachis to replace a dying pair. They have the same cache size, but I don't know if that big a difference is a problem. Should I return these and get a set of 5400RPM drives, or would it be fine to run it like this and just replace the other EARS drives later? Synology DS410j, if that makes a difference.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Echophonic posted:

Is there a major problem with mixing drive speeds in a RAID5? I have a set of WD EARS drives at 5400RPM, and I got a set of 7200RPM Hitachis to replace a dying pair. They have the same cache size, but I don't know if that big a difference is a problem. Should I return these and get a set of 5400RPM drives, or would it be fine to run it like this and just replace the other EARS drives later? Synology DS410j, if that makes a difference.
There's no major problem in the sense that it'll all work. Note, however, that without fancy RAID drives, you'll be limited to the slowest drive's speed, so your 7200RPM Hitachis will be hamstrung by the EARS' lower performance. But yeah, it'll all work.

DEAD MAN'S SHOE
Nov 23, 2003

We will become evil and the stars will come alive
Another N40L question: is the 2TB drive spec a hard limit or are there ways around it?

Mine just arrived today.. really pleased at how small it is compared to my HTPC

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
2TB is not really a hard limit. It's all that's officially supported, but whatever. Many people have run 4x3TB drives in the N40L, and the only limitation seems to be that it isn't happy if you try to boot off such a drive setup; but you should be booting off a separate USB drive or the like anyhow, so that shouldn't be an issue.

Echophonic
Sep 16, 2005

ha;lp
Gun Saliva

DrDork posted:

There's no major problem in the sense that it'll all work. Note, however, that without fancy RAID drives, you'll be limited to the slowest drive's speed, so your 7200RPM Hitachis will be hamstrung by the EARS' lower performance. But yeah, it'll all work.

Alright, that sounds reasonable enough. I figured I'd be limited to the slowest drive, but worst case I can just buy two more Hitachis this summer to replace the last two, get the speed up a little, and find another use for those green drives. Maybe get a RAID1 caddy or something for backups to plug into the back of the NAS.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

DEAD MAN'S SHOE posted:

Another N40L question: is the 2TB drive spec a hard limit or are there ways around it?

Mine just arrived today.. really pleased at how small it is compared to my HTPC

The chipset supports 3.2TB HDDs. So you'll be wasting space and money if you buy 4TB drives, but 3TB drives would be perfect. If you saw 2TB batted around in N40L conversations, it was probably in reference to backup limits supported by WHS2011, where the drive gets partitioned into two.

Star War Sex Parrot
Oct 2, 2003

3.2 TB is a very odd number that I've not encountered in drive capacity limits before. How did they end up there?

NickPancakes
Oct 27, 2004

Damnit, somebody get me a tissue.

Another N40L question here:

I've got mine coming in on Monday, with 4x 2TB drives (Samsung F4s) and a flash drive for the OS. My intent is to run SABnzbd, SickBeard, CouchPotato, and rTorrent on top of your typical NAS fileserving. I'm familiar with unix-based stuff, and the general consensus seems to be that ZFS/RAID-Z is super legit.

So the question: is my best route, from a usability, speed, and safety standpoint (assume I'm also taking snapshots here), to go with a FreeNAS setup and set up RAID-Z1 as one big pool across the 4 drives? Anyone want to throw a better/different option in the hat before I roll this all out on Monday?
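
FreeNAS drives all this through the GUI, but as I understand it, under the hood it boils down to something like (pool and device names are my guess):

code:
# one RAID-Z1 vdev across all four disks; device names will differ
zpool create tank raidz da0 da1 da2 da3
zfs create tank/media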


Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

Star War Sex Parrot posted:

3.2 TB is a very odd number that I've not encountered in drive capacity limits before. How did they end up there?
Some AMD southbridge thing. Dunno. I just ran across it while reading. If it's bullshit, I'd be happy, but I'm not going to be the one to find out.

http://forums.overclockers.com.au/showthread.php?t=958208&page=316

Others speculate that if you just use a more modern SATA PCI-E card, you'll be able to use the entire thing. 2TB drives still seem to be the price/capacity king, so that's what I'm sticking with for now.
