Docjowles
Apr 9, 2009

Zero VGS posted:

I was posting before about wanting to use two identical Proliants as a HA and/or FT VM server setup.

I have to ask... is this super expensive, redundant hardware going to be used to run all of the software you're asking about how to avoid paying for in the Windows thread? I don't mean that in a judgemental way, I'm sure you're doing what you can within the constraints management gives you. But if their end goal is really "make random lovely legacy applications running on Win 7 VM's achieve 100% uptime" then goondolences :smithicide:

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Wicaeed posted:

Anyone played around with Sexilog yet? Seems like a pretty decent free replacement for Log Insight using a lightweight ELK installation

http://www.sexilog.fr/

We are looking for something to replace some legacy systems and this looks great. I'm playing with the appliance today.

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Docjowles posted:

I have to ask... is this super expensive, redundant hardware going to be used to run all of the software you're asking about how to avoid paying for in the Windows thread? I don't mean that in a judgemental way, I'm sure you're doing what you can within the constraints management gives you. But if their end goal is really "make random lovely legacy applications running on Win 7 VM's achieve 100% uptime" then goondolences :smithicide:

So far the servers are only slated to run management software for our call center (no downtime if it goes out but voicemail and call shadowing would go down) and a PoE security camera system I'm rolling on my own. Who knows what other VMs I'm going to need in production for the future though. I'm building for 100% uptime because we may have more mission-critical stuff in the future, and hey if something stupid like voicemail goes down I'm still going to get a call at 6am so why not build it for resilience, right? I'll probably wind up running as much from Linux VMs as I can get away with.

The redundant hardware isn't super expensive so far! The two Proliants would have cost $10k each in 2010 but I slapped them together entirely from individual components off eBay for $1000 each.

Gyshall posted:

Traditionally your single point of a failure is a SAN device with dual controllers/redundant everything.

Thank you, I'm reading up on these now.

1) Since the servers are HP DL180 G6, does anyone have a recommendation for a corresponding HP StorageWorks SAN from that era that would pair up nicely with these? I'd prefer to use normal SATA drives since they're a fraction of the cost of SAS.

2) I see in addition to the SAN I might also need a SAN switch. If I'm never going beyond these two servers, is there any kind of PCI expansion I can get or something to plug the SAN's SAS cables directly into the servers and skip this SAN switch business?

evol262
Nov 30, 2010
#!/usr/bin/perl

Zero VGS posted:

So far the servers are only slated to run management software for our call center (no downtime if it goes out but voicemail and call shadowing would go down) and a PoE security camera system I'm rolling on my own. Who knows what other VMs I'm going to need in production for the future though. I'm building for 100% uptime because we may have more mission-critical stuff in the future, and hey if something stupid like voicemail goes down I'm still going to get a call at 6am so why not build it for resilience, right? I'll probably wind up running as much from Linux VMs as I can get away with.

The redundant hardware isn't super expensive so far! The two Proliants would have cost $10k each in 2010 but I slapped them together entirely from individual components off eBay for $1000 each.
That's kind of normal depreciation. What are they specced at?

Zero VGS posted:

Thank you, I'm reading up on these now.

1) Since the servers are HP DL180 G6, does anyone have a recommendation for a corresponding HP StorageWorks SAN from that era that would pair up nicely with these? I'd prefer to use normal SATA drives since they're a fraction of the cost of SAS.

2) I see in addition to the SAN I might also need a SAN switch. If I'm never going beyond these two servers, is there any kind of PCI expansion I can get or something to plug the SAN's SAS cables directly into the servers and skip this SAN switch business?
Stop there. You don't need a fabric switch. And there's absolutely no point in also getting old, janky storage. You can probably get a MD3200i pretty cheap, but budget...

Get something that does iSCSI and push it all over ethernet.

You want SATA drives? Are you planning on putting your own in, or getting them from a vendor? What's your budget?

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Yeah, important to know if you are doing this for a lab or for production, Zero VGS.

Roargasm
Oct 21, 2010

Hate to sound sleazy
But tease me
I don't want it if it's that easy

Zero VGS posted:

2) I see in addition to the SAN I might also need a SAN switch. If I'm never going beyond these two servers, is there any kind of PCI expansion I can get or something to plug the SAN's SAS cables directly into the servers and skip this SAN switch business?

The SAN switches are just for more redundancy and to keep your rack neat. If you have spare ports on a couple of other managed switches you can use those, but a basic implementation of a dual controller SAN would look like this with mgmt VLAN on port 1, data VLAN on ports 4-5, and trunks on 7-8.



e: Assuming you're getting an iSCSI SAN. You'd have to buy a lot of equipment to implement FC.

Roargasm fucked around with this message at 17:58 on Mar 25, 2015

Docjowles
Apr 9, 2009

Zero VGS posted:

I'm building for 100% uptime

I slapped them together entirely from individual components off eBay

So this system requires 100% uptime but there's no vendor support contracts for it?

Internet Explorer
Jun 1, 2005





This sounds terrible. Get a low-end EqualLogic.

KoeK
May 15, 2003
We dont die we multiply
For the SAN: might be interesting to look into an HP P2000 G3 SAS and a pair of SAS HBAs for each server. You get all the SAN features but also a very easy way to install and configure. The only downside is that you can only connect 4 hosts to it.

Also seconding the concerns about the 100% uptime without the budget. It just doesn't work that way.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Docjowles posted:

So this system requires 100% uptime but there's no vendor support contracts for it?

Oh jesus.

Internet Explorer
Jun 1, 2005





You can also do the same thing with a PowerVault MD3200 and I think IBM sells the same hardware as well. All rebranded NetApp from their LSILogic purchase. But yes, you can do shared storage on SAS if you know you're not going to grow and you want to keep things simple. For small environments it's definitely worth looking into and fits very well with a VMware Essentials Plus license.

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

evol262 posted:

That's kind of normal depreciation. What are they specced at?

Two DL180 G6s, each with dual Xeon X5675s (the strongest CPUs they can accept), dual 750W PSUs, a P410 RAID controller with 1GB FBWC, and 48GB of RAM. Close to $1,000 even for each.

evol262 posted:

Stop there. You don't need a fabric switch. And there's absolutely no point in also getting old, janky storage. You can probably get a MD3200i pretty cheap, but budget...

Get something that does iSCSI and push it all over ethernet.

You want SATA drives? Are you planning on putting your own in, or getting them from a vendor? What's your budget?

I'll check into iSCSI, but at first glance it seems like it might wind up just as janky as the SAS once it's finally implemented on the older Proliants.

No matter what I go with, I'm buying off-the-shelf SATA drives and popping them in myself. I might split the storage into two RAID arrays, one for SSDs and one for HDDs, if that's OK.

Let's say I've spent 2k on the servers and I have 4k left for SAN and all the drives to jam into it.

KoeK posted:

For the SAN: might be interesting to look into an HP P2000 G3 SAS and a pair of SAS HBAs for each server. You get all the SAN features but also a very easy way to install and configure. The only downside is that you can only connect 4 hosts to it.

I was looking at the SAS host bus adapters, that might make the most sense. I highly doubt I'll be going past the two physical machines during my time here, it is still only a call center.

Docjowles posted:

So this system requires 100% uptime but there's no vendor support contracts for it?

I guess you guys take the 100% term more seriously than I do. Nothing huge is at stake, people get mildly annoyed if the voice mail or something goes down, my prime directives are:

1) Spend as little money as possible without my poo poo being so flakey/unauthorized that it costs us more money than I've saved, later down the line. That's the balance I'm trying to strike so that I can serve 300 call center grunts.

2) I don't like off-hours calls. I've only gotten one in six months for a forgotten password, so far so good.

But it's time to scale up to 600 people and I figure I'd better get in on the VM action. My last ten years have been physical everything (Navy, then healthcare). So I'm forcing myself to do this from scratch as a nice crash course, and hopefully I'll wind up with something production-worthy for on-premises stuff.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

mayodreams posted:

We are looking for something to replace some legacy systems and this looks great. I'm playing with the appliance today.

Deployed SexiLog and we are really happy with it. The only setup/config snag was that it was not getting syslogs from the ESXi hosts, because the outbound firewall rule for syslog is turned off by default.
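For anyone wanting to script that fix rather than clicking through each host, here's a rough sketch of the two settings involved (not mayodreams' actual procedure; the appliance hostname is a placeholder):

```python
# Sketch: the two ESXi settings remote syslog needs. The pure function just
# builds the desired values; applying them would go through esxcli/PowerCLI
# as noted in the comments below.

def esxi_syslog_settings(loghost, port=514, proto="udp"):
    """Return the advanced option and firewall ruleset remote syslog needs."""
    return {
        # Where ESXi should ship its logs
        "advanced_option": ("Syslog.global.logHost", f"{proto}://{loghost}:{port}"),
        # The outbound rule that's disabled by default, per the post above
        "firewall_ruleset": ("syslog", True),
    }

settings = esxi_syslog_settings("sexilog.example.com")
print(settings["advanced_option"][1])  # udp://sexilog.example.com:514

# Applying on each host, e.g.:
#   esxcli system syslog config set --loghost=udp://sexilog.example.com:514
#   esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
#   esxcli system syslog reload
```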

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Yikes.

Erwin
Feb 17, 2006

Zero VGS posted:

1) Spend as little money as possible without my poo poo being so flakey/unauthorized that it costs us more money than I've saved, later down the line. That's the balance I'm trying to strike so that I can serve 300 call center grunts.
You're not striking that balance at all with used ebay poo poo and no support contracts. Your chance of downtime might be very small due to the redundancy, but that downtime will last weeks while you scramble to piece together more used hardware to replace whatever died and try to janitor it all back together. If you think you'll still have a job after 3 weeks of no phones because you saved your company a few grand, you're crazy.

I'm not trying to be a dick, but your whole MO has been to cut corners so excessively as to be building a huge technical debt to the point that it is going to come together in a perfect storm of disaster, and the only victim will be you. Stop being so cheap. You should be able to justify paying for some CALs and new hardware to management, and if you can't, then try to find another job.

mayodreams posted:

Deployed SexiLog and we are really happy with it. The only setup/config snag was that it was not getting syslogs from the ESXi hosts, because the outbound firewall rule for syslog is turned off by default.

Testing it today too. It's basically ELK with pre-defined ingest filters and dashboards. Pretty nice to get up and going quickly. What a stupid product name, though.

Wicaeed
Feb 8, 2005

Zero VGS posted:

Two DL180 G6s, each with dual Xeon X5675s (the strongest CPUs they can accept), dual 750W PSUs, a P410 RAID controller with 1GB FBWC, and 48GB of RAM. Close to $1,000 even for each.


I'll check into iSCSI, but at first glance it seems like it might wind up just as janky as the SAS once it's finally implemented on the older Proliants.

No matter what I go with, I'm buying off-the-shelf SATA drives and popping them in myself. I might split the storage into two RAID arrays, one for SSDs and one for HDDs, if that's OK.

Let's say I've spent 2k on the servers and I have 4k left for SAN and all the drives to jam into it.


I was looking at the SAS host bus adapters, that might make the most sense. I highly doubt I'll be going past the two physical machines during my time here, it is still only a call center.


I guess you guys take the 100% term more seriously than I do. Nothing huge is at stake, people get mildly annoyed if the voice mail or something goes down, my prime directives are:

1) Spend as little money as possible without my poo poo being so flakey/unauthorized that it costs us more money than I've saved, later down the line. That's the balance I'm trying to strike so that I can serve 300 call center grunts.

2) I don't like off-hours calls. I've only gotten one in six months for a forgotten password, so far so good.

But it's time to scale up to 600 people and I figure I'd better get in on the VM action. My last ten years have been physical everything (Navy, then healthcare). So I'm forcing myself to do this from scratch as a nice crash course, and hopefully I'll wind up with something production-worthy for on-premises stuff.

Not to sound too harsh, but your plan is to run a virtualization environment for a 600 person call center with a 100% uptime requirement on servers that have no current support contract, using parts you sourced from what I can only assume is NewEgg/Amazon and installed yourself, because you hate on-call & off-hours work.

You also have only two servers with 48GB RAM to host said environment, with a storage system you either put together or have yet to buy.

It hasn't even been mentioned yet, but what licensing level of VMware are you planning on using? The HA/FT features you mentioned require a VMware license tier that sounds like it will cost more than the entire environment you have described already.

evol262
Nov 30, 2010
#!/usr/bin/perl

Zero VGS posted:

I'll check into iSCSI, but at first glance it seems like it might wind up just as janky as the SAS once it's finally implemented on the older Proliants.
You're not relying on hardware HBA support. iSCSI would be fine on an ancient Athlon64 3200+ with GigE. It just needs stable, reasonably fast ethernet and a little MTU tweaking.

But "janky" means "don't buy 5-year-old storage because you think it'll be more compatible". It won't. You're not running 5-year-old VMware, right? The iSCSI support is all in software.
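The "little MTU tweaking" above can be sanity-checked end to end with a single don't-fragment ping; a small sketch (the 9000-byte jumbo MTU and the host address are assumptions, and the flags are the Linux ping ones):

```python
# Sketch: build a don't-fragment ping that exactly fills a jumbo frame, so
# one command verifies every hop on the iSCSI path actually passes the MTU.
# 9000 is the usual jumbo MTU; adjust for your switches.

ICMP_OVERHEAD = 28  # 20-byte IP header + 8-byte ICMP header

def ping_payload_for_mtu(mtu):
    """Largest ICMP payload that still fits in one frame of the given MTU."""
    return mtu - ICMP_OVERHEAD

def df_ping_command(host, mtu=9000):
    """Linux ping invocation that fails if any hop's MTU is below `mtu`."""
    return ["ping", "-M", "do",              # set the don't-fragment bit
            "-s", str(ping_payload_for_mtu(mtu)),
            "-c", "3", host]

print(" ".join(df_ping_command("10.0.0.50")))
# ping -M do -s 8972 -c 3 10.0.0.50
```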

Zero VGS posted:

No matter what I go with, I'm buying off-the-shelf SATA drives and popping them in myself. I might split the storage into two RAID arrays, one for SSDs and one for HDDs, if that's OK.
Unless you're going for vSAN or Inception-level "I built a VM that uses the local drives, exported them over NFS, and used that as a storage pool in vCenter!" stuff, there is almost no reason to have local storage. If you want highly-available VMs, it doesn't get you anywhere.

If you are going to put SSDs in anything, use them as cache drives for your shared storage.

Take the money you'd spend on drives for the servers and add it to your shared storage budget.

Zero VGS posted:

I was looking at the SAS host bus adapters, that might make the most sense. I highly doubt I'll be going past the two physical machines during my time here, it is still only a call center.
Those HPs are quad NIC, aren't they? You already have everything you need for iSCSI other than the storage chassis.

Zero VGS posted:

1) Spend as little money as possible without my poo poo being so flakey/unauthorized that it costs us more money than I've saved, later down the line. That's the balance I'm trying to strike so that I can serve 300 call center grunts.

2) I don't like off-hours calls. I've only gotten one in six months for a forgotten password, so far so good.

But it's time to scale up to 600 people and I figure I'd better get in on the VM action. My last ten years have been physical everything (Navy, then healthcare). So I'm forcing myself to do this from scratch as a nice crash course, and hopefully I'll wind up with something production-worthy for on-premises stuff.

The goal for all of these things should be "make everything highly available", which means reasonably good shared storage.

omeg
Sep 3, 2012

Xen stuff. I'm trying to implement mapping foreign pages on Windows (PV drivers) but I'm struggling with documentation. Is there anything fresher than this 10-year-old document?
http://xenbits.xen.org/docs/4.5-testing/misc/grant-tables.txt
At least headers are mostly well commented.

Wicaeed
Feb 8, 2005

mayodreams posted:

Deployed SexiLog and we are really happy with it. The only setup/config snag was that it was not getting syslogs from the ESXi hosts, because the outbound firewall rule for syslog is turned off by default.

Someone posted a nice PowerCLI script that can automate the firewall setup/redirect of Syslog to the appliance here

Yes it's Reddit.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin

mayodreams posted:

Deployed SexiLog and we are really happy with it. The only setup/config snag was that it was not getting syslogs from the ESXi hosts, because the outbound firewall rule for syslog is turned off by default.

This is something I'd like to do but I'm pretty new at Linux.

Are there any detailed guides on this kind of thing that would help me through the process or am I just going to have to cowboy up and figure it out?

Erwin
Feb 17, 2006

Wicaeed posted:

Someone posted a nice PowerCLI script that can automate the firewall setup/redirect of Syslog to the appliance here

Yes it's Reddit.

There's a VMware official version linked to by the SexiLog ( :rolleyes: ) docs: http://blogs.vmware.com/vsphere/2013/07/log-insight-bulk-esxi-host-configuration-with-powercli.html

Dr. Arbitrary posted:

This is something I'd like to do but I'm pretty new at Linux.

Are there any detailed guides on this kind of thing that would help me through the process or am I just going to have to cowboy up and figure it out?

The steps are literally:
1) Deploy OVA
2) Make a DNS entry on your DNS server
3) Configure hosts in the vSphere client or PowerCLI
4) look at graphs

No linuxing involved, unlike ELK from scratch.

evol262
Nov 30, 2010
#!/usr/bin/perl

omeg posted:

Xen stuff. I'm trying to implement mapping foreign pages on Windows (PV drivers) but I'm struggling with documentation. Is there anything fresher than this 10-year-old document?
http://xenbits.xen.org/docs/4.5-testing/misc/grant-tables.txt
At least headers are mostly well commented.

"Hypercall" is your magic search term. Start here if you don't mind reading C

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin

Erwin posted:

There's a VMware official version linked to by the SexiLog ( :rolleyes: ) docs: http://blogs.vmware.com/vsphere/2013/07/log-insight-bulk-esxi-host-configuration-with-powercli.html


The steps are literally:
1) Deploy OVA
2) Make a DNS entry on your DNS server
3) Configure hosts in the vSphere client or PowerCLI
4) look at graphs

No linuxing involved, unlike ELK from scratch.

Oh cool! I was able to get ELK to analyze some data a few weeks back and it was a big undertaking. I think the target audience for ELK does not include newbies, so following instructions like "clone the GitHub repository" was a heck of a step.

I'll have to see if I can get this working today!

omeg
Sep 3, 2012

evol262 posted:

"Hypercall" is your magic search term. Start here if you don't mind reading C

Yeah I already studied the headers.

code:
 * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access
 * by devices and/or host CPUs. If successful, <handle> is a tracking number
 * that must be presented later to destroy the mapping(s). On error, <handle>
 * is a negative status code.
 * NOTES:
 *  1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address
 *     via which I/O devices may access the granted frame.
 *  2. If GNTMAP_host_map is specified then a mapping will be added at
 *     either a host virtual address in the current address space, or at
 *     a PTE at the specified machine address.  The type of mapping to
 *     perform is selected through the GNTMAP_contains_pte flag, and the
 *     address is specified in <host_addr>.
 *  3. Mappings should only be destroyed via GNTTABOP_unmap_grant_ref. If a
 *     host mapping is destroyed by other means then it is *NOT* guaranteed
 *     to be accounted to the correct grant reference!
I wonder, does this mean you can only map foreign pages into the host address space? Does "host" mean "real underlying physical memory"? That would make this a privileged operation, unless it's fine if you pass a "real" physical address that belongs to your VM.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Wicaeed posted:

Not to sound too harsh, but your plan is to run a virtualization environment for a 600 person call center with a 100% uptime requirement on servers that have no current support contract, using parts you sourced from what I can only assume is NewEgg/Amazon and installed yourself, because you hate on-call & off-hours work.

You also have only two servers with 48GB RAM to host said environment, with a storage system you either put together or have yet to buy.

It hasn't even been mentioned yet, but what licensing level of VMware are you planning on using? The HA/FT features you mentioned require a VMware license tier that sounds like it will cost more than the entire environment you have described already.

We have a call center about half the size and it brought 2 DL380 G6s with dual 6-core E5-2640s and 196GB of RAM to their knees. We had to add two more ESXi hosts and spread the 4 Remote Desktop Session Host VMs across 4 hosts to deal with the CPU load of the RD farm. On 2003, we were RAM constrained, and on 2008 R2, we are CPU constrained due to Firefox and O365 OWA. For storage, we are using 3-path iSCSI to Nexenta storage.

What you are trying to achieve for the money you have is going to be next to impossible.

evol262
Nov 30, 2010
#!/usr/bin/perl

omeg posted:

Yeah I already studied the headers.

code:
 * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access
 * by devices and/or host CPUs. If successful, <handle> is a tracking number
 * that must be presented later to destroy the mapping(s). On error, <handle>
 * is a negative status code.
 * NOTES:
 *  1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address
 *     via which I/O devices may access the granted frame.
 *  2. If GNTMAP_host_map is specified then a mapping will be added at
 *     either a host virtual address in the current address space, or at
 *     a PTE at the specified machine address.  The type of mapping to
 *     perform is selected through the GNTMAP_contains_pte flag, and the
 *     address is specified in <host_addr>.
 *  3. Mappings should only be destroyed via GNTTABOP_unmap_grant_ref. If a
 *     host mapping is destroyed by other means then it is *NOT* guaranteed
 *     to be accounted to the correct grant reference!
I wonder, does this mean you can only map foreign pages into the host address space? Does "host" mean "real underlying physical memory"? That would make this a privileged operation, unless it's fine if you pass a "real" physical address that belongs to your VM.

I'm pretty sure you need to advertise the page mapping, let the hypervisor grab it, then transfer it, but this would be a good question for xen-devel

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

mayodreams posted:

What you are trying to achieve for the money you have is going to be next to impossible.

Tempted to emptyquote this. Also the part about the necessary vCenter licensing costing more than the entire hardware setup, lol.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Has anyone played around with very small images that run vmware-tools? I have a project where I need to test connectivity in my env by programmatically pinging every VLAN on every host, and due to cross-department fuckage and core networks being out of our control, the only reasonable way of doing it is to spin up a bunch of VMs and have them ping each other and migrate around.

Obviously, switching VLANs and migrating hosts is the limiting factor, so I'm looking to have an image as small as possible that I can send ping commands to via vmware-tools. I've googled around but no one seems to offer an image that comes with vmware-tools.

Do any of you guys have a DSL or some other tiny Linux distro with vmware-tools baked in that can run with 64MB or 128MB of RAM?
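Whatever tiny image ends up in the VMs, the matrix logic itself is small; a hedged sketch (the prober is injectable so nothing here touches a real network, and a real run would swap in a ping issued via vmware-tools or subprocess; all addresses are illustrative):

```python
# Sketch of the every-VLAN-to-every-VLAN connectivity test: probe all
# ordered pairs of test-VM addresses and report which paths are broken.
from itertools import permutations

def connectivity_matrix(addrs, probe):
    """probe(src, dst) -> bool; returns {(src, dst): reachable}."""
    return {(src, dst): probe(src, dst) for src, dst in permutations(addrs, 2)}

def failures(matrix):
    """Sorted list of (src, dst) pairs that could not reach each other."""
    return sorted(pair for pair, ok in matrix.items() if not ok)

# Stand-in probe: pretend everything is reachable except the VLAN 30 VM.
vms = ["10.0.10.5", "10.0.20.5", "10.0.30.5"]
matrix = connectivity_matrix(vms, lambda s, d: d != "10.0.30.5")
print(failures(matrix))
# [('10.0.10.5', '10.0.30.5'), ('10.0.20.5', '10.0.30.5')]
```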

omeg
Sep 3, 2012

evol262 posted:

I'm pretty sure you need to advertise the page mapping, let the hypervisor grab it, then transfer it, but this would be a good question for xen-devel

Yeah, I'll take it to the list if my sample code doesn't work. Thanks.

evol262
Nov 30, 2010
#!/usr/bin/perl

Bhodi posted:

Has anyone played around with very small images that run vmware-tools? I have a project where I need to test connectivity in my env by programmatically pinging every VLAN on every host, and due to cross-department fuckage and core networks being out of our control, the only reasonable way of doing it is to spin up a bunch of VMs and have them ping each other and migrate around.

Obviously, switching VLANs and migrating hosts is the limiting factor, so I'm looking to have an image as small as possible that I can send ping commands to via vmware-tools. I've googled around but no one seems to offer an image that comes with vmware-tools.

Do any of you guys have a DSL or some other tiny Linux distro with vmware-tools baked in that can run with 64MB or 128MB of RAM?

open-vm-tools (depending on distro) is basically all the LGPL-ed parts of vmware tools, and large parts of vmxnet and other bits are mainline.

I'd suggest making a template with a minimal install of slack, debian stable, or arch with open-vm-tools installed

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Wicaeed posted:

It hasn't even been mentioned yet, but what licensing level of VMware are you planning on using? The HA/FT features you mentioned require a VMware license tier that sounds like it will cost more than the entire environment you have described already.

I'm highly discouraged from using VMware under any circumstance (my company competes with one of their products, rather antagonistically), so I'm gonna have to figure out how to do it all on Hyper-V.

Erwin posted:

You're not striking that balance at all with used ebay poo poo and no support contracts. Your chance of downtime might be very small due to the redundancy, but that downtime will last weeks while you scramble to piece together more used hardware to replace whatever died and try to janitor it all back together. If you think you'll still have a job after 3 weeks of no phones because you saved your company a few grand, you're crazy.

First, I did buy a couple spares of every component so it won't be a scramble if anything goes.

Second, the phones and service are on a support contract. The phone management/voicemail/UC server that we have is a Dell Celeron with a single HDD, modifying it voids the warranty, and they have no option for anything nicer. They said I could give them a VM to migrate to and keep the application support while giving up the hardware support on what was a time bomb anyway. Considering the circumstances I found that more prudent, since the servers I make can run other stuff too.

I swear I'm not as kamikaze as I sound and I'm pretty drat resourceful in practice. I don't want to rely on these support contracts and SLAs because so far Microsoft, HP, Shoretel, and our ISP have all treated them like toilet paper on multiple occasions.

Still, sorry for the ranting, I expected a heap of criticism and naturally I've got a lot more reading to do before I finalize the design of all this. I appreciate the inputs and I at least have a few months more lead time to play around with all this stuff and get some sanity checks before I flip the switch.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

evol262 posted:

open-vm-tools (depending on distro) is basically all the LGPL-ed parts of vmware tools, and large parts of vmxnet and other bits are mainline.

I'd suggest making a template with a minimal install of slack, debian stable, or arch with open-vm-tools installed
Yeah, I was really hoping to not have to do a bunch of work and to just download someone's ova/ovf/iso. Alternatively, I can strip down our rhel6 as much as possible and hope for the best.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Zero VGS posted:

I'm highly discouraged from using VMware under any circumstance (my company competes with one of their products, rather antagonistically), so I'm gonna have to figure out how to do it all on Hyper-V.


First, I did buy a couple spares of every component so it won't be a scramble if anything goes.

Second, the phones and service are on a support contract. The phone management/voicemail/UC server that we have is a Dell Celeron with a single HDD, modifying it voids the warranty, and they have no option for anything nicer. They said I could give them a VM to migrate to and keep the application support while giving up the hardware support on what was a time bomb anyway. Considering the circumstances I found that more prudent, since the servers I make can run other stuff too.

I swear I'm not as kamikaze as I sound and I'm pretty drat resourceful in practice. I don't want to rely on these support contracts and SLAs because so far Microsoft, HP, Shoretel, and our ISP have all treated them like toilet paper on multiple occasions.

Still, sorry for the ranting, I expected a heap of criticism and naturally I've got a lot more reading to do before I finalize the design of all this. I appreciate the inputs and I at least have a few months more lead time to play around with all this stuff and get some sanity checks before I flip the switch.

Buy a storage array with redundancy and non-disruptive failover if 100% uptime is a requirement. You don't have any other option unless you can write your own high performance shared-nothing clustered filesystem and run it on the underpowered hardware you're purchasing.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I installed sexilog today, it's pretty nice. We already had VMware dumping to our own logstash, but their filters and dashboards are pretty fabulous.

Wicaeed
Feb 8, 2005

adorai posted:

I installed sexilog today, it's pretty nice. We already had VMware dumping to our own logstash, but their filters and dashboards are pretty fabulous.

Yeah it's pretty nice, although having zero experience with ELK, figuring out how to add my own dashboards is going to be an experience.

jre
Sep 2, 2011

To the cloud ?



Zero VGS posted:

First, I did buy a couple spares of every component so it won't be a scramble if anything goes.

I swear I'm not as kamikaze as I sound and I'm pretty drat resourceful in practice. I don't want to rely on these support contracts and SLAs because so far Microsoft, HP, Shoretel, and our ISP have all treated them like toilet paper on multiple occasions.

Every component? Including backplanes, motherboards, weird custom cables? The biggest part of having a support contract on hardware is having fast access to spare parts, shortly followed by having someone with vast experience of the failure modes of the hardware to phone. Running anything production on cobbled-together old hardware is a disaster waiting to happen. Been there :smithicide:


NippleFloss posted:

Buy a storage array with redundancy and non-disruptive failover if 100% uptime is a requirement. You don't have any other option unless you can write your own high performance shared-nothing clustered filesystem and run it on the underpowered hardware you're purchasing.

^^^^^ this x 100

Cidrick
Jun 10, 2001

Praise the siamese

evol262 posted:

open-vm-tools (depending on distro) is basically all the LGPL-ed parts of vmware tools, and large parts of vmxnet and other bits are mainline.

Holy crap. How did I not know this existed until now? Thanks.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I'd be embarrassed to run a product called Sexilog. I'd feel like that uncomfortable guy with pin-up girls and Sports Illustrated calendars all over his cubicle. I certainly wouldn't be able to refer to it out loud in a meeting.

NippleFloss posted:

Buy a storage array with redundancy and non-disruptive failover if 100% uptime is a requirement. You don't have any other option unless you can write your own high performance shared-nothing clustered filesystem and run it on the underpowered hardware you're purchasing.
Why would you have to write your own when Ceph and GlusterFS work fine? Not that I'd recommend them for anyone who doesn't know what they're getting into.

Vulture Culture fucked around with this message at 01:06 on Mar 26, 2015

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

Why would you have to write your own when Ceph and GlusterFS work fine? Not that I'd recommend them for anyone who doesn't know what they're getting into.

Most software-based scale-out architectures are tuned for the sort of access patterns driven by big data, not the low-latency, small-IO random workloads a general-purpose VMware cluster is going to run. You could certainly make them work for that purpose if you were willing to put enough hardware in place, but then why not just buy a small storage array to run two hosts' worth of VMs?

Nobody who is talking about putting together a two-node cluster out of spare eBay parts is in the market for Ceph or Gluster, hence the tongue-in-cheek comment that he'd need to write his own software to do what he wants: give him really cheap, highly available shared storage.


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

I'd be embarrassed to run a product called Sexilog. I'd feel like that uncomfortable guy with pin-up girls and Sports Illustrated calendars all over his cubicle. I certainly wouldn't be able to refer to it out loud in a meeting.
It was comical the first time someone said it out loud today.

I'm pretty sure the product (distro?) will get renamed for the next release.
