
Docjowles
Apr 9, 2009

Wibla posted:

Have you looked at used HP G7 or G8 servers? For home lab use they're just fine, but they are noisier than a NUC. G7 is really cheap now, and G8 is following suit.

bobfather posted:

I picked up a Lenovo S30 with 48GB of ECC recently, and it works a treat without being noisy. Not conventionally rackable though.

It's all about the tradeoffs you want to make. I have nooooo interest in super loud and power hungry rackmount servers inside my home. I'd rather pay more up front to do it with NUCs or a tricked out desktop case or whatever. Or these days, some compute instances on the cloud provider of my choice that I power off when I'm not messing with it.

Wibla
Feb 16, 2011

Docjowles posted:

It's all about the tradeoffs you want to make. I have nooooo interest in super loud and power hungry rackmount servers inside my home. I'd rather pay more up front to do it with NUCs or a tricked out desktop case or whatever. Or these days, some compute instances on the cloud provider of my choice that I power off when I'm not messing with it.

My G7 with 2x X5675, 120GB RAM and 2 SSDs + 6 SAS 10k drives uses around 155-160W on average, and it's not noisy as such - but it varies fan speeds continually due to how the sensors are set up, so it's not a box I'd want in a living space. I get where you're coming from though.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
No one wants a rack mount server at home.

Methanar
Sep 26, 2013

by the sex ghost

Wibla posted:

My G7 with 2x X5675, 120GB RAM and 2 SSDs + 6 SAS 10k drives uses around 155-160W on average, and it's not noisy as such - but it varies fan speeds continually due to how the sensors are set up, so it's not a box I'd want in a living space. I get where you're coming from though.

What could you possibly need that at home for

Wibla
Feb 16, 2011

Methanar posted:

What could you possibly need that at home for

Well, for what I actually use it for, I'd probably be fine with an i5 with 32GB ram, but it was given to me for free, so I might as well use it? :v:

wolrah
May 8, 2006
what?

Internet Explorer posted:

That being said, I don't think leaving it off is the right call. Simply turning NUMA support on, not assigning a guest more vCPUs (or more memory) than a single NUMA node physically has, and understanding that if you live migrate a guest it will run less efficiently until a reboot, seems like a minor inconvenience in most cases.

But I'd be really interested to hear what others think.

My knowledge is fairly limited on this, but from what I do know, I think you nailed it as far as most admins need to care. Accessing memory attached to remote cores is slower than accessing memory attached to local cores. As far as virtualization goes, that basically comes down to not allocating so many cores or so much RAM to a guest that it spills over from one node into another. For the most part, if you're provisioning your guests reasonably this shouldn't be an issue unless you're near capacity on a given host, in which case you may have a guest which could easily fit into any given node on its own but doesn't fit in the space that's actually free. Anything that needs to spread across nodes will need to be aware of the topology for best performance and will suffer compared to an equivalent single-node configuration.

On Intel systems AFAIK it's still pretty much one socket = one node, but AMD's newer stuff is a bit more complicated. A Ryzen desktop chip has a single die with two tightly coupled groups of four cores with their own cache but sharing RAM and PCIe lanes. A Threadripper HEDT/workstation chip is two of those dies in a single package, and an Epyc server chip is four of them. A dual socket Epyc system is thus eight NUMA nodes in two groups of four with varying speeds and latency between them.

Here's the reported topology of a single-socket 32-core Epyc: [lstopo screenshot]

and for comparison, my desktop (Core i7 4790K): [lstopo screenshot]

The Windows version seems to have a minor bug where it reports free RAM instead of total.

If you want to get a similar graphic for your system, the tool is part of Open MPI's hwloc package: https://www.open-mpi.org/projects/hwloc/
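
In case it saves anyone a search: once hwloc is installed, getting that graphic is a one-liner. Package names vary by distro, so take the install line as a rough example rather than gospel:

  sudo apt install hwloc        # Debian/Ubuntu; use your distro's package manager otherwise
  lstopo topology.png           # writes the diagram to a PNG (assuming your hwloc build has graphical output)
  lstopo-no-graphics            # plain-text dump, handy on a headless box

The text output is a quick way to check how many NUMA nodes a host actually exposes before you start sizing guests.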

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
I accomplished something recently that I've really enjoyed: converging a bunch of physical systems in my home into a single, fairly powerful (though not very expensive) virtualization server. This project let me downsize the amount of tech in my home and sell off a bunch of older, but still-capable hardware.

The short of it is, I now have a single (ESXi-based) system that runs FreeNAS, Ubuntu Server, Windows Server 2016 Datacenter, Windows Server 2016 Standard, 2 Windows 10 installs, and a Windows 8.1 install. My total cost on this system is about $600 before hard drives, and the system is plenty efficient - with every VM running but the Win Server 2016 Standard VM, it sits at ~140 watts idle, and it peaks at closer to ~200 watts when Plex needs to transcode something.

In the long of it, I want to talk about my trials and tribulations with getting my virtualization server to serve as my Windows 8.1 Media Center. The reason I'm taking the time to document this is 1) I haven't seen anyone talk about running a virtualization server with one of its primary purposes being to watch TV and media directly from the server, and 2) Windows 8.1 recently lost mainstream support from Microsoft, but there is still not a single better product on the market for watching TV than Windows Media Center (WMC); every single existing alternative suffers from some combination of higher price and inferior functionality compared to WMC.

If you're interested in repeating this endeavor, you need hardware capable of virtualizing Windows 8.1, and that hardware has to support passthrough. You also need a way of getting Windows 8.1 Pro and an MCE key; eBay has cheapish options for both. Be mindful that Windows 8.1 Pro does not come with an MCE key, and you'll need one before Windows 8.1 Pro will enable MCE.

Installing Windows 8.1 is straightforward, but be prepared for an hour or two of update installations even if you start with the most recent build (9600). If you're using ESXi, make sure to install VMware Tools so you can enable the VMXNET3 network driver and switch to the paravirtual (PVSCSI) driver for the SCSI controller. Once VMware Tools is installed in the guest, I have found it best to install a VNC server like TightVNC. You could use VMRC or RDP to administer the guest, but those options don't work as well once you've passed through your video card of choice.
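
(For reference, if you ever end up setting those device types by editing the .vmx by hand instead of through the client, the lines look roughly like the ones below - the ethernet0/scsi0 numbering is just a placeholder for whatever devices your VM actually has:

  ethernet0.virtualDev = "vmxnet3"
  scsi0.virtualDev = "pvscsi"

Doing it through Edit Settings accomplishes the same thing and is harder to screw up.)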

With all updates installed, VMware Tools installed, and TightVNC installed, go ahead and disable the VMware display adapter in Device Manager. I found that having the VMware adapter enabled would cause the Nvidia display driver to crash randomly. Power down your VM and pass through your video card. I also had to pass through a sound card because initial attempts at getting audio over HDMI from the video card resulted in corrupted audio. You would probably be reasonably successful passing through your motherboard's audio interface if you don't have an external sound card.

You'll also need to add the following to your Windows 8.1 VM configuration: hypervisor.cpuid.v0 = "FALSE". This is required if you're using a consumer Nvidia card of any kind, and even some lower-end Quadro cards. I don't think it's needed for AMD cards, but I haven't tested.
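
(If you'd rather not hand-edit the .vmx, the same parameter can be added from the vSphere client under Edit Settings > VM Options > Advanced > Edit Configuration - the exact menu wording shifts a bit between client versions, and the VM needs to be powered off. Either way, what ends up in the .vmx is just the one line:

  hypervisor.cpuid.v0 = "FALSE"
)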

With all your hardware passed through, and with the hypervisor.cpuid.v0 = "FALSE" flag configured, power up your VM and install the newest drivers for your video card and sound card. Now open Display Properties - you may notice that you have 2 displays showing, even though you only have 1 physical display. Select your physical display, make it your primary display, and choose the option to show your desktop only on that display. This will cut down on the remaining display driver crashes. Take this time to install the newest LAV Filters as well, so Media Center can use hardware decoding if your card supports it.

The last step is to install any drivers you might need for your HDHomeRun / capture card. Fire up Windows Media Center. Your first hurdle will be that you cannot download Digital Cable Advisor; it seems Microsoft let the link die recently. No matter, check post 13 of this thread for a link to download Digital Cable Advisor, and post 6 for a way to override it. Install DCA, then run the override, then proceed with normal setup to get everything else running.

By the end of this process, you'll have access to a forgotten relic: Windows 8.1 MCE, which is still better than every competing piece of software which tries to mimic it. I found virtualization to be an incredible solution for this problem, because who in this day and age wants to run Windows 8.1 MCE on old, bare metal?

Though few people may care about replicating my setup, it certainly is a nice alternative to Plex (bad guide data), HDHomeRun DVR (no support for Apple TV, buggy), or Channels.app (no support for anything but Apple devices, high monthly cost to record). Large parts of this writeup are also extremely relevant for someone wanting to run a Windows VM for the purpose of playing games or using hardware transcoding via a video card. Enjoy!

bobfather fucked around with this message at 03:27 on Feb 5, 2018

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bobfather posted:

Don’t be daft. You can get a preowned E5-2650 V2 or 1650 V2 system for $300 or less these days with 8GB of RAM.

Cheapest eBay DDR3 for a Z420 or similar starts at ~$60 per 8GB DIMM, so $420 additional to bring yourself up to 64GB.

Or you could buy an E3-1225 barebones new for $300. Dell sells these regularly. God help you if you get one, because it takes DDR4 and that starts at ~$80 per 8GB DIMM on eBay, so $560 to get to 64GB.

My setup has a 4x4" footprint, is nearly silent, and consumes under 40 watts at full load, so congratulations on living and dying alone I guess

Pile Of Garbage
May 28, 2007



What are my options for a home setup to run 8 VMs (Five 2016 Core, Two 2016 DE and a SUSE Virtual Appliance)? Dual 10Gb is mandatory. Also need ~300GB of DAS to run all the poo poo, preferably RAIDed.

Currently have an IBM x3550 M2 with an Emulex dual-port 10Gb HBA and four ~300GB SAS HDDs. It's loud af and I want the bitch gone!

Pile Of Garbage fucked around with this message at 15:40 on Feb 5, 2018

Wibla
Feb 16, 2011

Vulture Culture posted:

My setup has a 4x4" footprint, is nearly silent, and consumes under 40 watts at full load, so congratulations on living and dying alone I guess

:cawg:

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

cheese-cube posted:

What are my options for a home setup to run 8 VMs (Five 2016 Core, Two 2016 DE and a SUSE Virtual Appliance)? Dual 10Gb is mandatory. Also need ~300GB of DAS to run all the poo poo, preferably RAIDed.

Currently have an IBM x3550 M2 with an Emulex dual-port 10Gb HBA and four ~300GB SAS HDDs. It's loud af and I want the bitch gone!

You didn't mention a budget, so I'm going to put a mid-high end suggestion in here:

SuperMicro Xeon-D.

Has 4 hotswap bays, an 8-core processor with hyperthreading, sips power, takes an M.2 drive on the motherboard, makes almost no sound, and can do 64GB of ECC RAM (128GB with RDIMMs). It has dual 10GbE copper Intel ports integrated, and IPMI management.

Yeah, it's pricey, but at the wall, it also uses 1/4 of the power of my old Dell R710 at idle, and 1/6th as much at full load. And it is small. Tuck it anywhere. Easily my favorite little server, and ESXi 6.5 has no issues with it.

Pile Of Garbage
May 28, 2007



Nice thanks mate! And yeah price isnae a problem.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Yeah, I can't tell you how much I love that little guy. Might eventually move up to a Xeon D-2100, but I rarely have CPU constraint issues on the 1541. A 12,000 PassMark score is pretty good, even for multiple HD Plex transcodes.

SamDabbers
May 26, 2003



I have a Lenovo TS440 with an E3-1225 v3 (Haswell) that I've been using as an all-in-one NAS and hypervisor for a few years, and it's been great. It's very quiet and idles at less than 100W with 8 spinners in it, and it has sufficient horsepower for transcoding or compilation. You can probably pick them up for a decent price on eBay these days, and fill it with cheap(er) DDR3 UDIMMs.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Anyone ever run into issues with VVols after renewing the VASA certificate? I saw the cert was going to expire, refreshed it, and our hosts using those VVols stopped being able to see those datastores. Our VMs stayed online, but once they powered off we couldn't start them back up.

e: I got to talk to a VMware tech, who recreated the SMS certificate in vCenter and restarted the vvold services on one of the hosts. The other host still wouldn't see it, and restarting all the management services on it didn't work, so we ended up migrating what VMs we could off of it (vMotion off of that host worked, vMotion onto that host didn't; it was weird), shutting down the ones we couldn't, and restarting the host.
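
(For anyone who hits something similar and wants to try the gentler options before a full reboot: on 6.x-era hosts the VVol daemon and the management agents can be bounced individually from an SSH session - both briefly disrupt monitoring/management, so do it in a window. Something like:

  /etc/init.d/vvold restart       # restarts just the VVol daemon on the ESXi host
  services.sh restart             # restarts all management agents, the heavier hammer

In our case that still wasn't enough on the second host, hence the reboot.)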

anthonypants fucked around with this message at 21:08 on Feb 5, 2018

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

We don’t have customers using vVols and we generally steer clear of them because of the concerning dependency on the VASA provider and an incomplete or unclear recovery model if something happens to that provider.

Sorry for your lovely situation, but this is just another data point to add to my list of reasons why I’ll advise against vVols. What storage are you using if you don’t mind me asking?

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
The backend is an EMC Unity 300, but this absolutely wasn't the storage appliance's fault -- it presented the correct certificate after I told vCenter to refresh it, but the hosts just didn't care until the SMS certificate in vCenter was blown away, I guess? These two hosts are also on ESXi 6.5, and the one on an older build of ESXi (4564106 vs 5310538) was the one we had to restart, so that might have had something to do with it.

Mr Shiny Pants
Nov 12, 2012

SamDabbers posted:

I have a Lenovo TS440 with an E3-1225 v3 (Haswell) that I've been using as an all-in-one NAS and hypervisor for a few years, and it's been great. It's very quiet and idles at less than 100W with 8 spinners in it, and it has sufficient horsepower for transcoding or compilation. You can probably pick them up for a decent price on eBay these days, and fill it with cheap(er) DDR3 UDIMMs.

I have the 1245 version of this machine and it really is awesome; the only downside is the 32GB RAM limit.

Potato Salad
Oct 23, 2014

nobody cares


Mr Shiny Pants posted:

I have the 1245 version of this machine

:hfive:

evol262
Nov 30, 2010
#!/usr/bin/perl

wolrah posted:

My knowledge is fairly limited on this, but from what I do know, I think you nailed it as far as most admins need to care. Accessing memory attached to remote cores is slower than accessing memory attached to local cores. As far as virtualization goes, that basically comes down to not allocating so many cores or so much RAM to a guest that it spills over from one node into another. For the most part, if you're provisioning your guests reasonably this shouldn't be an issue unless you're near capacity on a given host, in which case you may have a guest which could easily fit into any given node on its own but doesn't fit in the space that's actually free. Anything that needs to spread across nodes will need to be aware of the topology for best performance and will suffer compared to an equivalent single-node configuration.
It's also a serious consideration for guest balancing and migration/HA.

The ideal would be for the hypervisor to automatically select the correct NUMA/SNUMA groups and spread the workload across nodes. If a node is lost, though, it may be impossible for an HA VM to come back up on a system with a coherent NUMA topology.

wolrah posted:

On Intel systems AFAIK it's still pretty much one socket = one node, but AMD's newer stuff is a bit more complicated. A Ryzen desktop chip has a single die with two tightly coupled groups of four cores with their own cache but sharing RAM and PCIe lanes. A Threadripper HEDT/workstation chip is two of those dies in a single package, and an Epyc server chip is four of them. A dual socket Epyc system is thus eight NUMA nodes in two groups of four with varying speeds and latency between them.
Intel has split sockets into multiple NUMA domains since Haswell, give or take. UPI has since replaced QPI (as of Skylake, I think), but you're looking at, essentially, Intel sockets being 2 NUMA groups once there are more than 4 cores. Each group gets access to 3 memory channels and half of the cache banks.

QPI used a ring topology, so hypervisor scheduling being aware of SNUMA groups and coherency actually has a significant impact on performance. No matter how fast QPI/UPI are, reaching across them to a NUMA group on a different physical processor which doesn't share L3 is always gonna be slower than a SNUMA group on the same die.

It's complicated, from a hypervisor perspective. From a user perspective, just enable NUMA-aware scheduling (and SNUMA in the firmware, if you have it), and let KVM/Hyper-V/ESXi do their thing. Sorry for anyone using XenServer.
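
To make that concrete on the KVM side, here's a rough sketch of what a hand-pinned guest looks like in libvirt domain XML - not something you normally need once NUMA-aware scheduling is on, and the cpuset/nodeset values are placeholders that assume cores 0-3 actually live in node 0 on your box (check with lstopo or numactl --hardware):

  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>

That's the "don't let a guest spill across nodes" rule written out explicitly: vCPUs pinned to cores in one node, memory strictly allocated from the same node.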

SamDabbers
May 26, 2003



Mr Shiny Pants posted:

I have the 1245 version of this machine and it really is awesome; the only downside is the 32GB RAM limit.

It depends what you're doing, of course. I've not run into memory pressure with 32GB RAM, but most of the stuff I do runs in bare-metal containers on the host OS and I only have a couple full-fat VMs for e.g. Windows stuff.

The best part is that the case can hold up to an EATX/SSI-EEB board and a regular ATX power supply, so you have an upgrade path that retains the nice hotswap chassis when you need more RAM, CPU, or PCIe lanes.

Internet Explorer
Jun 1, 2005





evol262 posted:

Sorry for anyone using XenServer.

I agree with this in a general sense.

Docjowles
Apr 9, 2009

Internet Explorer posted:

I agree with this in a general sense.

As a current XenServer user: hell, same

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Docjowles posted:

I don’t really know which thread is best for this, but the question is in the context of our XenServer hypervisors so I guess I’ll start here.

Can I get some real talk about NUMA? We are building out a new cluster of hypervisors and I want to make sure we have all our ducks in a row. I mentioned looking at NUMA settings and my coworker responded that it’s only relevant to HPC setups and not something you should be messing with unless you have a good reason. NUMA support is currently off and he says that is best.

Is that accurate? My understanding is that all modern server cpu architectures (these chips are Broadwell-EP) are built for NUMA. And if you aren’t enabling those features in your BIOS and OS, your system is just seeing a faked SMP setup where all memory appears equally accessible and performant. Whereas behind the scenes it could be running tasks on a CPU that’s remote from the memory it’s addressing and performing suboptimally. Enabling NUMA exposes the additional info about which node a job is running on and allows the OS to make smarter scheduling decisions.

Am I talking out of my rear end here? I’ve spent all afternoon reading blog posts about NUMA and I feel like I now know less than when I started.

You're basically right. So long as you aren't provisioning a bunch of VMs with enough vCPU/vRAM to span a NUMA node (and sometimes this is okay too), you will see better performance/lower latency/higher bandwidth for the aggregate workload.

There are two NUMA boundaries you need to be aware of. The obvious one is spanning sockets, which carries a severe performance penalty. But individual CPU sockets can now also present multiple NUMA nodes, with 2 (maybe 4 now?) nodes per socket, each representing a set of cores with their own dedicated memory controllers; keeping a workload within one of those nodes minimizes traffic on the internal crossbar, which also carries a performance penalty, though much less than moving traffic between sockets. There are options to expose this internal partitioning (Dell calls it Cluster on Die) or not, but since VMware is NUMA-aware, you're pretty much always better off exposing all available NUMA layers and letting the hypervisor optimize around it.
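
(If you ever do have a reason to override the scheduler for one specific VM, the knobs are per-VM advanced settings along the lines of the two below - treat the values as placeholders and the whole thing as a last resort, not a tuning starting point:

  numa.nodeAffinity = "0"
  numa.vcpu.maxPerVirtualNode = "8"

The first constrains the VM to node 0, the second caps the size of the vNUMA nodes exposed to the guest. Leaving these unset and letting the NUMA scheduler place things is the right default for almost everyone.)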

wolrah posted:


On Intel systems AFAIK it's still pretty much one socket = one node, but AMD's newer stuff is a bit more complicated. A Ryzen desktop chip has a single die with two tightly coupled groups of four cores with their own cache but sharing RAM and PCIe lanes. A Threadripper HEDT/workstation chip is two of those dies in a single package, and an Epyc server chip is four of them. A dual socket Epyc system is thus eight NUMA nodes in two groups of four with varying speeds and latency between them.


In my experience on Xeons, anything over 12c is going to have an internal crossbar with two nodes per socket. That might be a gen or two ago (haven't had to buy hardware in a while), but anything with a high core density is going to have that partitioning.

BangersInMyKnickers fucked around with this message at 18:43 on Feb 6, 2018

evol262
Nov 30, 2010
#!/usr/bin/perl
That's SNUMA (SNC or COD). And until Skylake, it was a ring. Skylake is a grid, with crossbar options once you're at 4 sockets (IIRC).

wolrah
May 8, 2006
what?

evol262 posted:

<lots of good stuff>
Aha, my lack of experience with larger modern Xeons definitely shows here.

Pile Of Garbage
May 28, 2007



Is anyone here using VDP (vSphere Data Protection) and has found it to be an absolute dumpster fire? We've got it deployed at a couple of sites and it's a god drat nightmare. The appliances keep failing at random requiring reboots and/or lengthy excruciating troubleshooting. Also from poking around under the hood the whole "solution" is just complete trash.

On that subject, what's the current go-to solution for snapshot-based VM backups that is free and isn't garbage? Our customer is demanding VM backups and is refusing to pay anything beyond the cost of implementing the solution. Normally I'd just sever but we're in a weird situation and need to provide something. So if I could replace VDP with something that doesn't randomly catch fire, that would be swell.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

cheese-cube posted:

Is anyone here using VDP (vSphere Data Protection) and has found it to be an absolute dumpster fire? We've got it deployed at a couple of sites and it's a god drat nightmare. The appliances keep failing at random requiring reboots and/or lengthy excruciating troubleshooting. Also from poking around under the hood the whole "solution" is just complete trash.

On that subject, what's the current go-to solution for snapshot-based VM backups that is free and isn't garbage? Our customer is demanding VM backups and is refusing to pay anything beyond the cost of implementing the solution. Normally I'd just sever but we're in a weird situation and need to provide something. So if I could replace VDP with something that doesn't randomly catch fire, that would be swell.
We were, and it is, and we went back to NetBackup.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

cheese-cube posted:

Is anyone here using VDP (vSphere Data Protection) and has found it to be an absolute dumpster fire? We've got it deployed at a couple of sites and it's a god drat nightmare. The appliances keep failing at random requiring reboots and/or lengthy excruciating troubleshooting. Also from poking around under the hood the whole "solution" is just complete trash.

On that subject, what's the current go-to solution for snapshot-based VM backups that is free and isn't garbage? Our customer is demanding VM backups and is refusing to pay anything beyond the cost of implementing the solution. Normally I'd just sever but we're in a weird situation and need to provide something. So if I could replace VDP with something that doesn't randomly catch fire, that would be swell.

Vdp has always been pretty bad. We went to thread favorite Veeam and it’s good, so far.

Potato Salad
Oct 23, 2014

nobody cares


Vdp is loving useless

Buy veeam god drat it, gently caress

Mr Shiny Pants
Nov 12, 2012

SamDabbers posted:

It depends what you're doing, of course. I've not run into memory pressure with 32GB RAM, but most of the stuff I do runs in bare-metal containers on the host OS and I only have a couple full-fat VMs for e.g. Windows stuff.

The best part is that the case can hold up to an EATX/SSI-EEB board and a regular ATX power supply, so you have an upgrade path that retains the nice hotswap chassis when you need more RAM, CPU, or PCIe lanes.

Ooh, this is good to know. It's been humming along nicely for a couple of years now; one of the best computers I've ever bought.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

There are no good, free, snapshot backup tools. Buy VEEAM or Cohesity or Rubrik or accept that your solution will suck.

Internet Explorer
Jun 1, 2005





Veeam is far and away the least awful backup software I've ever used. Just bought a new Nimble SAN, excited to use the integration between the two.

There's a few things in Veeam that frustrate me, but it's nowhere near as bad as anything else I've ever used, and I've used quite a bit.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

cheese-cube posted:

Is anyone here using VDP (vSphere Data Protection) and has found it to be an absolute dumpster fire? We've got it deployed at a couple of sites and it's a god drat nightmare. The appliances keep failing at random requiring reboots and/or lengthy excruciating troubleshooting. Also from poking around under the hood the whole "solution" is just complete trash.

On that subject, what's the current go-to solution for snapshot-based VM backups that is free and isn't garbage? Our customer is demanding VM backups and is refusing to pay anything beyond the cost of implementing the solution. Normally I'd just sever but we're in a weird situation and need to provide something. So if I could replace VDP with something that doesn't randomly catch fire, that would be swell.

There's always Veeam Zip. No incrementals, no scheduling, but it works, and it's free. They can run it manually once or twice a week if they won't pony up for Veeam Essentials (which is really very reasonable).

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
There's also Veeam Endpoint backup. I wouldn't want to run it on dozens of VMs, but if it's just a few that you're trying to keep backed up it's pretty great, and free.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Internet Explorer posted:

Veeam is far and away the least awful backup software I've ever used. Just bought a new Nimble SAN, excited to use the integration between the two.

There's a few things in Veeam that frustrate me, but it's nowhere near as bad as anything else I've ever used, and I've used quite a bit.

I prefer Rubrik or Cohesity because they are extremely low touch and scale better than VEEAM, but yea, it’s pretty drat good. Especially compared to all of the legacy backup platforms that came before it.

Internet Explorer
Jun 1, 2005





I haven't used either, but that's great to hear there are alternatives that work just as well if not better.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I like RapidRecovery over Veeam from a technical standpoint: it gets much better compression/dedupe ratios, but it's IOP-heavy to do it. You'll want 10k SAS arrays or high-density flash for the storage array, but you can get away with a fraction of the raw disk space/spindle count compared to what Veeam needs in SATA/NL-SATA. Support was iffy with the Dell acquisition and spin-off, so no idea how much of a headache it is to run these days. vCenter integration made life easier for most things.


Mr Shiny Pants
Nov 12, 2012
Veeam is wonderful. We had Commvault before and it was just a PITA; sure it's powerful, but drat does it take some setting up.

We had TSM as well, and that was pretty great in practice with its versioning, making backing up and restoring stuff pretty easy and efficient. It was IBM software through and through, though.
