Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I have 32 GB of memory and just witnessed Windows 10's aggressive caching strategy when copying from an SSD to my HDD. It sucked all the excess right up into memory: you could see memory usage ballooning in Task Manager, the transfer "finished" on screen, but the HDD stayed at 100% write for about another 60 seconds, with the write-back visible as collapsing memory usage in Task Manager. That's pretty smart overall, greedy/race-to-yield algorithms to minimize seek losses?


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
So I just learned the joys of SSD over 10GbE iSCSI. Holy crap is this thing fast: sequential write is over 300MB/sec writing out the full-fat VHDX file for use in a new VM. Now I kinda want to get a whole pile of clearance-grade SSDs to fill the 24-bay 2.5" drive expander I have and see if I can't get it to completely saturate the 10GbE on reads as well as writes.

redeyes
Sep 14, 2002

by Fluffdaddy

Methylethylaldehyde posted:

So I just learned the joys of SSD over 10GbE iSCSI. Holy crap is this thing fast: sequential write is over 300MB/sec writing out the full-fat VHDX file for use in a new VM. Now I kinda want to get a whole pile of clearance-grade SSDs to fill the 24-bay 2.5" drive expander I have and see if I can't get it to completely saturate the 10GbE on reads as well as writes.

Get a load of NVMe drives. That 300MB/s can be 2000MB/s.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The iSCSI overhead might just prevent that.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
NVMe over Fabrics is a thing, with both Fibre Channel and Ethernet supported, and it's pretty neat.

The Gunslinger
Jul 24, 2004

Do not forget the face of your father.
Fun Shoe
I need to upgrade my NAS pretty soon; space is getting tight. The hardware on the box itself is fine, it's just the drives that need to be swapped. What are people using to retain/transition all of the data? Uploading 8TB to the cloud on my 10Mbit upstream is going to kind of suck. I really don't want to buy a bunch of externals or something if I can help it.

The only things I can think of are biting the bullet and uploading it somewhere or just building a whole new NAS and copying everything over.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

The Gunslinger posted:

I need to upgrade my NAS pretty soon; space is getting tight. The hardware on the box itself is fine, it's just the drives that need to be swapped. What are people using to retain/transition all of the data? Uploading 8TB to the cloud on my 10Mbit upstream is going to kind of suck. I really don't want to buy a bunch of externals or something if I can help it.

The only things I can think of are biting the bullet and uploading it somewhere or just building a whole new NAS and copying everything over.

Depends on the NAS's filesystem. If it's ZFS, you just replace the drives one at a time and let it resilver in between each drive swap. When you've replaced them all, your total pool space is magically increased.
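
Concretely, the swap cycle looks something like the sketch below; the pool name and device names are made up, and the autoexpand property (which a later post covers) is enabled up front:

code:
zpool set autoexpand=on tank          # older pools may not have this enabled
zpool replace tank ata-OLD_DISK_1 ata-NEW_DISK_1
zpool status tank                     # wait for the resilver to finish...
# ...then repeat for each remaining disk; capacity grows after the last swap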

The Gunslinger
Jul 24, 2004

Do not forget the face of your father.
Fun Shoe
Nah, not ZFS yet; that's what I'm going to transition to soon. This is my old box running FlexRAID, but I'm not confident in its ability any longer since the old version I use isn't maintained. I guess I will bite the bullet and spend 2 months uploading it somewhere.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
You can buy an 8TB drive for not too terribly much, though. It'd probably also be worth it to have as a cold backup in case anything goes wrong. Or you can sell it off once you're done with it. You've got a few options other than maxing out your bandwidth for two months.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

redeyes posted:

Get a load of NVMe drives. That 300MB/s can be 2000MB/s.

I have a 24-bay 2.5" enclosure with SAS2 hardware in it, and I want a whole crapton of $50 SSDs to shove in it. I have an NVMe drive partitioned out for use as cache and intent log devices for my two larger arrays, and it makes them extra speedy.

BlankSystemDaemon
Mar 13, 2009



Thermopyle posted:

Depends on the NAS's filesystem. If it's ZFS, you just replace the drives one at a time and let it resilver in between each drive swap. When you've replaced them all, your total pool space is magically increased.
It's worth pointing out that since ZFS is as long-lived as it is, it's possible to have a pool that was created before autoexpand was added in v28 - so while it doesn't apply here, anyone doing this should make sure to enable autoexpand before replacing drives.

Tangentially related, a friend of mine (who originally helped test ZFS on FreeBSD before it was added in 2007, which was when he created his pool) told me how he just finished moving that pool to a new server. The pool had all but 3 disks replaced already, but since the motherboard DIMM sockets were giving him problems (cross-tested with memtest86 and the whole shebang), he ordered new hardware and had just finished resilvering the last three disks of the pool in sequence. Nothing about the hardware he set the pool up on is like it was; only the data itself was untouched.
I'm not sure that'd be possible with anything but ZFS, because to my knowledge no software RAID has been around that long while ensuring data integrity, and any hardware RAID would have involved controllers whose on-disk schemes might not be supported by modern RAID HBAs.

ddogflex
Sep 19, 2004

blahblahblah
So I updated FreeNAS last night, just whatever the latest updates for 9.1 are. It rebooted and now I can't connect to it. I have this thing sitting headless in my basement. Do I need to plug in a monitor to see wtf is going on, or is there some sort of troubleshooting I can do? I've never had this happen. Is this common with FreeNAS updates?

Greatest Living Man
Jul 22, 2005

ask President Obama

ddogflex posted:

So I updated FreeNAS last night, just whatever the latest updates for 9.1 are. It rebooted and now I can't connect to it. I have this thing sitting headless in my basement. Do I need to plug in a monitor to see wtf is going on, or is there some sort of troubleshooting I can do? I've never had this happen. Is this common with FreeNAS updates?

I've only had this problem when I've hosed something up with permissions, but yes, generally I plug in a monitor and keyboard.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

ddogflex posted:

Or is there some sort of troubleshooting I can do? I've never had this happen. Is this common with FreeNAS updates?

It's uncommon to completely bork an installation via an update, but it can happen on occasion. The easiest method is, indeed, slapping a KB+monitor onto it and seeing what's what. Otherwise you can take a look at your router and see if it's even pulling an IP--more specifically, whether it decided to bind to a different IP and might otherwise be working fine. Another possibility is incorrect boot order. Or, of course, the update failed--in which case you can probably still get to the FreeNAS loader, from which you can simply opt for an older install version and go from there.

It costs a few extra bucks, but I have to say having IPMI is pretty awesome for this sort of thing.

ddogflex
Sep 19, 2004

blahblahblah

DrDork posted:

It costs a few extra bucks, but I have to say having IPMI is pretty awesome for this sort of thing.

My system actually has Intel AMT, but I've read it's a huge security hole, so I have it disabled.

I'll just plug in a monitor and keyboard later tonight and see what's up.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

ddogflex posted:

My system actually has Intel AMT, but I've read it's a huge security hole, so I have it disabled.

I'll just plug in a monitor and keyboard later tonight and see what's up.

What do you use for a client for Intel AMT anyway?

ddogflex
Sep 19, 2004

blahblahblah

Twerk from Home posted:

What do you use for a client for Intel AMT anyway?

I honestly have no idea. When I was going to set it up, I read not to because of the security hole in it...

SamDabbers
May 26, 2003



Twerk from Home posted:

What do you use for a client for Intel AMT anyway?

Open source tools for AMT:
http://www.meshcommander.com/

phosdex
Dec 16, 2005

ddogflex posted:

So I updated FreeNAS last night, just whatever the latest updates for 9.1 are. It rebooted and now I can't connect to it. I have this thing sitting headless in my basement. Do I need to plug in a monitor to see wtf is going on, or is there some sort of troubleshooting I can do? I've never had this happen. Is this common with FreeNAS updates?

When I did the 9.10.2.u4 to u5 upgrade, mine did the same thing. I'm not sure what happened, but I know the upgrade progress popup disappeared much faster than it should have before the system tried to reboot into u5. You're probably going to need to pause at the bootloader and pick a previous version.

For Intel AMT, the paid version of RealVNC can do it; that's what I've used. Going to check out that MeshCommander though.

BlankSystemDaemon
Mar 13, 2009



VNC Viewer Plus will connect to AMT without requiring a license.

ddogflex
Sep 19, 2004

blahblahblah
Is AMT safe to use? If so, that would save me headaches in the future. I guess this thing isn't externally facing, so I'm probably being overly cautious.

Thanks Ants
May 21, 2004

#essereFerrari


If the vendor has released a patched firmware, then it should be.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Do PCIe devices operating in sub-PCIe-3.0 speed modes automatically multiplex onto the higher-throughput host mode?

In other words, let's say I have an InfiniBand adapter that needs 2.0x4 but I am running it in a host system with 3.0. Does the adapter count as taking x4 lanes from the host, because it's 2.0x4, or taking x2 lanes because it's being multiplexed to PCIe 3.0 rates somewhere? (PCH perhaps?)

I would think it counts as x4, just want to be sure.

SamDabbers
May 26, 2003



Paul MaudDib posted:

Do PCIe devices operating in sub-PCIe-3.0 speed modes automatically multiplex onto the higher-throughput host mode?

In other words, let's say I have an InfiniBand adapter that needs 2.0x4 but I am running it in a host system with 3.0. Does the adapter count as taking x4 lanes from the host, because it's 2.0x4, or taking x2 lanes because it's being multiplexed to PCIe 3.0 rates somewhere? (PCH perhaps?)

I would think it counts as x4, just want to be sure.

A lane is a lane, no matter the speed.

redeyes
Sep 14, 2002

by Fluffdaddy

SamDabbers posted:

A lane is a lane, no matter the speed.

Right, but if you have, say, a card that requires 3.0 x4 and you stick it in a 2.0 x4 slot, you get less bandwidth.

code:
Summary of PCI Express Interface Parameters:
Data Transfer Rate:              PCIe 3.0 = 8.0GT/s,  PCIe 2.0 = 5.0GT/s, PCIe 1.1 = 2.5GT/s
Data Rate (per lane, per dir.):  PCIe 3.0 = ~985MB/s, PCIe 2.0 = 500MB/s, PCIe 1.1 = 250MB/s
Total Bandwidth (x16, both dirs): PCIe 3.0 = ~32GB/s, PCIe 2.0 = 16GB/s,  PCIe 1.1 = 8GB/s
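
For reference, here's where the per-lane figures come from, as a quick back-of-the-envelope using each generation's line encoding:

code:
# per-lane data rate = transfer rate x encoding efficiency / 8 bits per byte
# PCIe 1.1/2.0 use 8b/10b encoding (10 bits on the wire per data byte):
#   2.5 GT/s / 10 = 250 MB/s        5.0 GT/s / 10 = 500 MB/s
# PCIe 3.0 uses 128b/130b encoding (~1.5% overhead):
#   8.0 GT/s * (128/130) / 8 = ~985 MB/s (commonly rounded to 1000MB/s)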

SamDabbers
May 26, 2003



redeyes posted:

Right, but if you have, say, a card that requires 3.0 x4 and you stick it in a 2.0 x4 slot, you get less bandwidth.

Yes, PCIe is backward compatible from both the card end and the host end. They will negotiate the highest common speed between them.

What I meant was that a lane is a physical path. You don't get the bandwidth back to use elsewhere if you run a slower card in a faster slot.
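
To see what a card actually negotiated, on Linux something like the following works (the 03:00.0 address is just an example; find your card's address with plain lspci first):

code:
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap = what the device is capable of (e.g. Speed 5GT/s, Width x4)
# LnkSta = the speed and width actually negotiated with the slot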

ddogflex
Sep 19, 2004

blahblahblah

ddogflex posted:

So I updated FreeNAS last night. Just whatever the latest updates for 9.1 are. It rebooted and now I can't connect to it. I have this thing headless sitting in my basement. Do I need to plug in a monitor to see wtf is going on? Or is there some sort of trouble-shooting I can do? I've never had this happen. Is this common with FreeNAS updates?

So uh, just rebooting it fixed it. I went to plug in a monitor and accidentally hit the power button. Turned it back on with the monitor plugged in and it booted up fine. Who knows.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

SamDabbers posted:

Yes, PCIe is backward compatible from both the card end and the host end. They will negotiate the highest common speed between them.

What I meant was that a lane is a physical path. You don't get the bandwidth back to use elsewhere if you run a slower card in a faster slot.

Well, sort of. What's your definition of faster? Because wider is faster, and you can reallocate lanes between different slots based on what card is plugged in. (So long as the root complex providing those lanes is flexible enough, and also contingent on the motherboard having mux silicon to reroute the lanes.)

For example, it's common for socket 11xx motherboards to have a pair of x16 PCIe card-edge connectors which can accommodate any of three combinations:

1 card, up to x16
2 cards, up to x8 each
2 cards, each x16-capable, but with only 8 lanes connected to each


The other way in which things are flexible is due to packet switching. Although it was designed to look just like classic parallel-bus PCI to software, underneath the external appearances PCIe is a packet network in which nodes talk to switches through point-to-point data links.

Any given switch IC has only so many physical lanes, so yeah that's a hard upper limit on the total bandwidth that could possibly pass through that switch. However, when multiple ports on a switch are competing for access to another port, you can get effects where one port slowing down gives something back to another port.

For example, consider a system with 3 nodes, A B and C, attached to a switch through x16 gen3 links. Both A and B are trying as hard as they can to monopolize C's bandwidth. In the absence of active QoS features in the switch, A and B should each get about half of C's link. If you drop A's link down to gen1 speed, however, it can only transmit and receive packets fast enough to use about 25% of C's gen3 link. B will now get at least 75% of C -- maybe more if the switch's fairness algorithms sometimes choose B over A. (The only way A can get all the way to using 25% is if it always wins arbitration.)

(Real-world PCIe link utilization is never going to sum to 100% because of packet overhead etc., but you get the point.)
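
Sanity-checking that 25% figure against the per-lane numbers from the table a few posts up (rough arithmetic, ignoring packet overhead):

code:
# x16 gen3 link: 16 lanes * ~985 MB/s = ~15.75 GB/s per direction
# x16 gen1 link: 16 lanes *  250 MB/s =   4.0 GB/s per direction
# 4.0 / 15.75 = ~25% -- the most A can push into C's gen3 link,
# leaving B at least the remaining ~75%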

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Paul MaudDib posted:

Do PCIe devices operating in sub-PCIe-3.0 speed modes automatically multiplex onto the higher-throughput host mode?

In other words, let's say I have an InfiniBand adapter that needs 2.0x4 but I am running it in a host system with 3.0. Does the adapter count as taking x4 lanes from the host, because it's 2.0x4, or taking x2 lanes because it's being multiplexed to PCIe 3.0 rates somewhere? (PCH perhaps?)

I would think it counts as x4, just want to be sure.

Certain PCIe switches can do that: they take an x4 3.0 host connection and multiplex it out to 16 lanes at 3.0. If you shove a bunch of 2.0 parts behind it, each card connects to the switch at 2.0, but the switch talks to the CPU at 3.0, so you can end up with a gaggle of 2.0-spec parts consuming all of the 3.0 bandwidth. Note that only the fanciest of motherboards have such a thing, because the switch chip itself is like $30 in tray quantities.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Speaking of InfiniBand: if I want to use the RDMA stuff for iSER, there's no getting around using Linux as the target and some third-party initiator on my Windows box, right? Same for SRP?

--edit: Also, if they have QSFP ports, I can use a DA cable?

Combat Pretzel fucked around with this message at 11:17 on Jun 22, 2017

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

Speaking of InfiniBand: if I want to use the RDMA stuff for iSER, there's no getting around using Linux as the target and some third-party initiator on my Windows box, right? Same for SRP?

--edit: Also, if they have QSFP ports, I can use a DA cable?

I never got iSER to work; Windows has no initiator for it, but Linux has both a target and an initiator.

The one thing I did get working consistently was SRP on Windows, using Linux and OpenIndiana as targets. Speeds were really good, 700MB/sec to my zpool.

You can use the OFED drivers with some trickery; they have some code-signing issues on Server 2012.

Mr Shiny Pants fucked around with this message at 14:57 on Jun 22, 2017
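
For reference, a minimal Linux (LIO) iSCSI target built with targetcli looks something like the sketch below. The device path and IQNs are placeholders, and the enable_iser toggle is an assumption that only applies with RDMA-capable NICs and a reasonably recent targetcli:

code:
# expose a block device as an iSCSI LUN (names and paths are made up)
targetcli /backstores/block create name=vmstore dev=/dev/sdb
targetcli /iscsi create iqn.2017-06.local.nas:vmstore
targetcli /iscsi/iqn.2017-06.local.nas:vmstore/tpg1/luns create /backstores/block/vmstore
# an ACL for the initiator's IQN is still needed before it can log in
targetcli /iscsi/iqn.2017-06.local.nas:vmstore/tpg1/acls create iqn.1991-05.com.microsoft:winbox
# with RDMA hardware, iSER can supposedly be flipped on per portal:
targetcli /iscsi/iqn.2017-06.local.nas:vmstore/tpg1/portals/0.0.0.0:3260 enable_iser true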

EssOEss
Oct 23, 2006
128-bit approved
I just plugged in a 6TB WD Red that I got as a replacement for a DOA drive. Much better, this one actually works. First thing I did was plug it in and tell Windows to optimize Storage Spaces to start making use of it.

Then I got two BSODs during the optimization: IRQL_NOT_LESS_OR_EQUAL. In 90% of the cases where I've seen this in the past it has been a driver issue, but hard disks don't have drivers, do they? At least I can't find anything to install on the WD website.

What would you recommend? I bet this would be mighty hard to reproduce, so if I sent the drive back they'd just shrug and say it's fine. Are there other possibilities I might exhaust before I try to somehow reproduce it under controlled testing?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
The PCH has drivers, although you'd think that if there were issues there, other stuff would be breaking too.

Check CrystalDiskInfo for any SMART warnings. Maybe you paged something out and then it came back in corrupt.

Maybe try replacing the SATA cable if you have a spare, and make sure both the data and power cables are firmly plugged in.

Try running MemTest86 overnight. Bad memory can manifest in all kinds of strange ways.

If that doesn't turn anything up, consider resetting your CMOS and reinstalling Windows, although that's probably a hail-mary. You are using an additional SATA port, which is one difference... but I don't see how that in particular should cause issues unless all kinds of other stuff is also causing bluescreens (plugging in/using USB devices, etc.).

Also, if you have another machine, you might want to try the drive there too, just to be sure.

thiazi
Sep 27, 2002
Cross posting from backups thread, as these questions are related to the new NAS in my life and you guys seem to know what's up:

I just got a Synology DS215j with 2x6TB WD Reds for lightweight photo and media storage. My main machine is a new Win10 laptop that has a reasonably small SSD so I can no longer keep all my data locally. I want to mount the NAS shares on it and then back them up to Crashplan cloud, but this apparently isn't supported by Crashplan in Windows. So I set up a Linux VM in VirtualBox on the laptop and got the Crashplan Linux client installed (no easy feat for me, as I've never used a VM before and I don't use Linux much).

There are a few things I'm not clear on with this setup:

1) how much HDD space and RAM do I need in the VM to run Crashplan? The VM host is my Win10 laptop, which has 8 GB RAM. If I allocate 2GB to the VM, for example, does it use all of that and not allow Windows to use it, or does that just set a max threshold that it only uses dynamically as needed? I will literally have nothing but Crashplan running in this VM and I'd like to keep its resource allocation as low as possible.

2) these files have been backed up to Crashplan's cloud before, and I know there is such a thing as 'adopting' a machine but I've restructured some of the files on the NAS - do I still try to 'adopt' or let it push copies of everything again?

3) is there any sense in using the NAS as both my source for serving files and as a local Crashplan backup destination (in addition to the Crashplan cloud)? Storage space isn't an issue as I have a lot of headroom on the drives.

4) this whole VM setup will only work when I'm on my home network, right? So if I'm traveling I guess it will run, but it will only be able to back up local stuff on the laptop since the NAS shares are inaccessible?

Thanks.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

thiazi posted:

1) how much HDD space and RAM do I need in the VM to run Crashplan? The VM host is my Win10 laptop, which has 8 GB RAM. If I allocate 2GB to the VM, for example, does it use all of that and not allow Windows to use it, or does that just set a max threshold that it only uses dynamically as needed? I will literally have nothing but Crashplan running in this VM and I'd like to keep its resource allocation as low as possible.
2GB should be good enough unless you have a poo poo ton of stuff you're trying to back up, in which case Crashplan will simply crash and it'll be obvious that you need to throw more RAM at it. In most VM setups, allocating RAM to a VM does indeed obligate that full amount to it as soon as it spins up, rather than acting like a thin-provisioned HDD where it merely nibbles at it as it needs more, up to some max value.
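
If you need to change the allocation later, VirtualBox lets you do it from the host while the VM is powered off (the VM name here is a placeholder):

code:
# give the hypothetical "crashplan-vm" 2GB of RAM
VBoxManage modifyvm "crashplan-vm" --memory 2048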

thiazi posted:

2) these files have been backed up to Crashplan's cloud before, and I know there is such a thing as 'adopting' a machine but I've restructured some of the files on the NAS - do I still try to 'adopt' or let it push copies of everything again?
Up to you. Adopting would be much faster, assuming you haven't restructured a significant portion of your files.

thiazi posted:

3) is there any sense in using the NAS as both my source for serving files and as a local Crashplan backup destination (in addition to the Crashplan cloud)? Storage space isn't an issue as I have a lot of headroom on the drives.
Sure, if your internet connection isn't super fast and you want the option of pulling data back at the ~100MB/s of gigabit Ethernet versus whatever your internet download speed is. If that's not much of a worry, then you're adding complexity and space use for minimal reason.

thiazi posted:

4) this whole VM setup will only work when I'm on my home network, right? So if I'm traveling I guess it will run, but it will only be able to back up local stuff on the laptop since the NAS shares are inaccessible?
Depends on how you set things up. If you set up file access to the outside world (say, through FTP or SSH or a VPN or whatever), then you could still back up from your mobile laptop (network allowing, of course) to the NAS. You also could just have a local Windows version of Crashplan that only backed up your laptop, and use that for on-the-road protection until you get home and can push things back to the NAS.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I've spent a few hours every night this week chipping away at configuring my new home server. I'm ready to tear it down after work and rebuild over the weekend. The reason is that I'm just not convinced about the way I'm trying to do storage.

I've got a CentOS 7 host, which is staying put and is very minimally configured. It's almost a fresh headless Linux installation with KVM installed. This is on a 250GB SSD.

I then have two WD Reds (only 1TB each, because I'm not that crazy about the amount of movies and TV I have on there).

My storage requirements are:

1. About 3.5GB of 'core' files: photos, wage slips, work documents, ID and other essential digital documents. [ I want about 10GB of space for expansion ]

2. About 500GB of 'media' files. TV downloads, music downloads, movie downloads. All from completely legal sources, obviously. [ +expansion room ]

For my core files I've set up a primary VM with a 50GB file system. I've got a scheduled rsync to Amazon storage. For the media files I set up a VM with a 20GB fs and Plex/Emby on it.

The plan was to just use one of the Reds as a single xfs partition and share it over NFS to both VMs and any other labs I spin up later. I want this server to be functional but also flexible for when I have something I want to try (like Nextcloud), so I can just spin up a 20GB VM when I feel like it and use some shared storage. So far I've used 70GB/250GB on two VMs, which leaves me plenty of SSD space to spin up another 3-5 VMs if I feel like it.

Then I'd have a regular rsync job to mirror the 'active' storage drive (WD Red) onto the other, 'backup' drive. I'm not interested in RAID at this stage.

I started by sharing the 1TB storage over NFS so that both/all VMs would get equal access to it. Ran into some niggles with SELinux and sorted those out (I don't want to take the easy way out and turn off SELinux). Then I ran into a problem re-exporting part of my NFS share over Samba, because I also want my files accessible to mobile devices that come and go over WiFi (my phone, for example). So I deleted the NFS share and went with Samba as the means of sharing storage to the VMs as well. Last night I had a problem getting Plex and Emby to access the Samba share and tried various things until after midnight. Then I gave up.

I'm coming round to the idea of tearing both VMs down and redoing it tonight and over the weekend. I'm considering carving the 1TB storage into LVM volumes and attaching each block of storage as a dedicated device to each VM. For my 'backup' drive, I'm assuming I could create the same LVM scheme on it and just rsync everything across. I'm just looking for confirmation that this is a good idea for my use case and will still allow me a decent chunk of spare storage for lab VMs to be added when I see fit. :shrug:
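
For what it's worth, the mirror job described above usually boils down to a single rsync invocation on a schedule (paths are made up; --delete makes the backup a true mirror of the source, so point it carefully):

code:
# mirror the active drive onto the backup drive, keeping perms/ACLs/xattrs
rsync -aHAX --delete /mnt/active/ /mnt/backup/
# e.g. nightly via cron:
# 0 2 * * * root rsync -aHAX --delete /mnt/active/ /mnt/backup/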

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
Even if you're not going to throw both full disks into a RAID together, I'd still consider ZFS. If I were you I'd be thinking of splitting the WD Reds into two partitions each: a smaller partition mirrored across both drives (a ZFS mirror) for your important stuff, with the other, larger partitions left as separate zpools, which you can then use as you currently do.

But since all you're doing is rsyncing one drive onto the other, I'd seriously consider using a mirrored zpool and setting up automatic snapshots instead of rsync. Unlike rsync, that'll give you versioned copies, more-or-less free compression, and the ability to detect and repair errors. While both disks are healthy it also boosts read speeds. It's probably also much simpler to deal with and involves fewer moving parts.

Unless you're really trying to eke out every last megabyte of your 1TB disks, I wouldn't bother using anything but ZFS, no matter how you decide to slice them.

Desuwa fucked around with this message at 06:56 on Jun 23, 2017
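
The snapshot workflow, sketched with placeholder pool/dataset names (tools like zfs-auto-snapshot automate the rotation):

code:
# take a dated snapshot of the important dataset
zfs snapshot tank/important@2017-06-23
# list snapshots, or roll the dataset back to one
zfs list -t snapshot
zfs rollback tank/important@2017-06-23
# old versions are also browsable read-only under .zfs/snapshot/ in the dataset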

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Hmm. Versioning does seem very appealing. I've got 16GB of ECC RAM to play with, so ZFS seems like a good way to make use of it. I'll have a look at ZFS later. I think I'll do two zpools: a smaller one for the important stuff and a big one for the rest. Or would it be better with one big ZFS pool and two LVM volumes inside it?

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

Hmm. Versioning does seem very appealing. I've got 16GB of ECC RAM to play with, so ZFS seems like a good way to make use of it. I'll have a look at ZFS later. I think I'll do two zpools: a smaller one for the important stuff and a big one for the rest. Or would it be better with one big ZFS pool and two LVM volumes inside it?

If you're just mirroring stuff there's really no advantage to separate pools. You almost certainly won't want to use LVM at all; ZFS takes on the roles of both a traditional volume manager and the file system. Zpools aren't file systems, and you'll probably only want one zpool. You'd only want multiple pools if you're trying to mix different RAID types for different purposes; if you don't have a special requirement to avoid mirroring your media (it doesn't sound like you do), there's no reason to use more than one pool.

Here's the basic terminology, and the FreeBSD Handbook has some more reading. You're running Linux, but OpenZFS is basically* identical across platforms. You create a pool from vdevs (in this case, one mirrored vdev containing both of your disks), and then you create datasets (file systems) on top of the pool. So you'd "zpool create pool", then "zfs create pool/important" and "zfs create pool/media", and then mount those file systems.


*There can be slight differences in command-line arguments, but even though they're different implementations, they follow the same spec and pools are compatible between them.
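
Spelled out as actual commands, assuming the two-disk mirror described above (pool name and device paths are placeholders):

code:
# one pool, one mirrored vdev made of both Reds
zpool create pool mirror /dev/disk/by-id/ata-WD_RED_1 /dev/disk/by-id/ata-WD_RED_2
# datasets live inside the pool and mount automatically
zfs create pool/important
zfs create pool/media
zfs set compression=lz4 pool    # cheap win on most data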


apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Yeah. I think I jumped to a conclusion there before I had a chance to look into it. I'm gonna read up on it after work, and I'll probably go with one big pool like you said, somehow partitioning it off to give a discrete amount of storage to each VM. I'm looking forward to tinkering with ZFS. Cheers!
