|
devmd01 posted:And so is VMware support these days. We're going on a week with a case open because we can't delete any goddamn datastores out of our production cluster. I just had the case escalated because the guy that was assigned to our case isn't worth a drat. I've had a similar problem with VMware. Took 3 weeks to get my ticket to someone who knew even the basics. It ended up being something simple I overlooked (who the gently caress sets memory limits on a guest in an SMB environment?), took the guy maybe 5 minutes to nail down. But VMware as a product is generally way more polished, and if you think VMware support is bad, you clearly haven't dealt with Citrix support. Let alone Citrix's XenServer support.
|
# ? Apr 12, 2017 22:25 |
|
I had a really good call yesterday with a guy who seemed to know what he was doing, but was still mystified as to how we managed to gently caress up as badly as we did. In the end we dumped the database and restored it onto a new vCenter server and everything but Update Manager seemed to be working. Then I got on a two-hour call which just ended, and their suggestions were:
1) Remove the third-party HPE vibsdepot URLs
2) Unregister the vcIntegrity extension
2a) I asked if taking a snapshot would be a good idea and they seemed to agree, but I don't believe they would have done so on their own.
2b) Which is good, because updatemgr-util stopped responding, so we restored from the snapshot
3) updatemgr-util reset-db and register-vc, which finally worked
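For anyone who hits this later: on a 6.5 VCSA, step 3 amounts to a few commands in the appliance shell. Paths assume a default install (check your own box), and take the snapshot first, as in 2a:

```shell
# Stop the embedded Update Manager, wipe its database, and
# re-register it with vCenter, then bring it back up.
service-control --stop vmware-updatemgr
/usr/lib/vmware-updatemgr/bin/updatemgr-util reset-db
/usr/lib/vmware-updatemgr/bin/updatemgr-util register-vc
service-control --start vmware-updatemgr
```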
|
# ? Apr 12, 2017 23:55 |
|
devmd01 posted:And so is VMware support these days. We're going on a week with a case open because we can't delete any goddamn datastores out of our production cluster. I just had the case escalated because the guy that was assigned to our case isn't worth a drat. That's ok. Hyper-V support, RHEV/RHV support, Openstack support (except Mirantis), and all other options are equally bad. You should just roll your own with bhyve. Honestly, though, it's really hard to catch edge cases from customers. "Remove all of your customizations and see if you can still reproduce, because we can't" is dumb from all of the above. Sure, it might be some dumb vib. But customers almost always want add-ons that MS/VMware/Oracle/Red Hat don't care about.
|
# ? Apr 13, 2017 01:34 |
|
anthonypants posted:I had a really good call yesterday with a guy who seemed to know what he was doing, but was still mystified as to how we managed to gently caress up as badly as we did. In the end we dumped the database and restored it onto a new vCenter server and everything but Update Manager seemed to be working. Then I got on a two-hour call which just ended, and their suggestions were: Honestly the easiest way to fix update manager a lot of the time is just to remove the plugin and do a fresh re-install. It's not like there's any data in there you need to preserve.
|
# ? Apr 13, 2017 02:16 |
|
big money big clit posted:Honestly the easiest way to fix update manager a lot of the time is just to remove the plugin and do a fresh re-install. It's not like there's any data in there you need to preserve. 6.5, so it's embedded in the VCSA. Otherwise, yeah, that would've been easy.
|
# ? Apr 13, 2017 02:18 |
|
anthonypants posted:6.5, so it's embedded in the VCSA. Otherwise, yeah, that would've been easy. Ah, yea, that does complicate things.
|
# ? Apr 13, 2017 02:23 |
|
evol262 posted:Openstack support (except Mirantis)
|
# ? Apr 13, 2017 05:03 |
|
You could offer me my current salary to work half the time doing customer-facing support and I'd laugh you out of my office.
|
# ? Apr 13, 2017 18:32 |
|
Half the time, eight times the headache?
|
# ? Apr 13, 2017 18:36 |
|
evil_bunnY posted:You could offer me my current salary to work half the time doing customer-facing support and I'd laugh you out of my office. Doesn't matter. We have a guaranteed turnaround time. MS does also. VMware probably does. Engineering just means that there's a middleman between you and the customer.
This is just as bad. Maybe you can find the unicorn 'I work in engineering at a company which sells public software but there is a 0% chance I'll ever need to deal with a customer ticket', but it's unlikely unless you're working on internal-only stuff that will never see the light of day.
|
# ? Apr 13, 2017 20:12 |
|
evol262 posted:This is just as bad. Maybe you can find the unicorn 'I work in engineering at a company which sells public software but there is a 0% chance I'll ever need to deal with a customer ticket', but it's unlikely unless you're working on internal-only stuff that will never see the light of day. internal infrastructure
|
# ? Apr 13, 2017 20:29 |
|
evil_bunnY posted:internal infrastructure This is also what I do and I love it.
|
# ? Apr 13, 2017 21:40 |
|
Same. New position is data center management, we handle everything up to getting the app teams onto their requested servers and they deal with the rest, it's glorious. We get maaaybe 14 escalated tickets a week, and that's for a team of 3 engineers, 2 dbas, and a phone guy.
|
# ? Apr 13, 2017 21:50 |
|
Moey posted:This is also what I do and I love it.
|
# ? Apr 16, 2017 13:51 |
|
devmd01 posted:And so is VMware support these days. We're going on a week with a case open because we can't delete any goddamn datastores out of our production cluster. I just had the case escalated because the guy that was assigned to our case isn't worth a drat. I've been averaging 3 months to closure for any ticket that didn't have an immediate KB for it. Good luck, it's been a shithole.
|
# ? Apr 17, 2017 00:36 |
|
I've got an app I'm trying to test in a VM that requires OpenGL 3.0+. It complains about the VM I made in VMware Workstation Player only having OpenGL 2.something. Is the OpenGL 3.0+ support VMware claims only available in the paid version of Workstation? If so, is there a legit way to get that for cheaper than $250 or whatever it is now?
|
# ? Apr 18, 2017 20:44 |
|
This may not be the best thread to ask, but I've got an issue that I could use some advice on. I've got around a thousand individual databases that need to have a series of time-intensive scripts run against them. With our current architecture (AWS servers connecting to RDS instances) the full series of scripts can take over a day to run. And very often, further runs of the scripts against a different db fail unless the server is restarted. Running these scripts locally, with a local mysql server, takes a good 2 hours at most. Is there a good way to automate the creation of a single-use vm that can take in a dump of the database, run the poo poo show scripts, shoot another dump out, kill the vm, and start it all over?
|
# ? Apr 19, 2017 06:29 |
|
It probably wouldn't be too bad to engineer something to do what you want. You could write a fairly simple web server that listens for connecting clients and assigns the client a DB from a list when pinged. Make a base worker VMDK that contains an rc.local script to:
1) Reach out to the webserver and get a DB assigned
2) Dump the DB and run the scripts
3) Upload the processed data somewhere
4) Ping the controller again with some kind of exit code
When the controller gets a good exit code it can do something like this to repeat until the controller runs out of DBs to assign: Trigger warning: Hyper V code:
code:
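The controller's bookkeeping is the only stateful part of that loop, and it can be sketched in a few lines of Python (names here are hypothetical, not from any real tool; the VM restart itself would be whatever your hypervisor's CLI does):

```python
import queue


class DBController:
    """Toy model of the 'controller' web service described above:
    hands each fresh worker VM an unprocessed database, records exit
    codes, and re-queues failures for the next worker."""

    def __init__(self, databases):
        self._todo = queue.Queue()
        for name in databases:
            self._todo.put(name)
        self._done = {}

    def assign(self):
        """Called when a worker VM pings in; returns a DB name, or
        None when there's nothing left to hand out."""
        try:
            return self._todo.get_nowait()
        except queue.Empty:
            return None

    def report(self, db_name, exit_code):
        """Worker pings back with its exit code. Success marks the DB
        done; failure puts it back in the queue for a fresh VM."""
        if exit_code == 0:
            self._done[db_name] = "ok"
        else:
            self._todo.put(db_name)

    def finished(self):
        return self._todo.empty()
```

A worker VM's rc.local would hit assign() over HTTP on boot, do the dump/scripts/upload, then hit report(); the hypervisor side just keeps cycling VMs until finished() is true.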
|
# ? Apr 19, 2017 07:53 |
|
Allegedly Allergic posted:Is there a good way to automate the creation of a single use vm, that can take in a dump of the database, run the poo poo show scripts, shoot another dump out, kill the vm and start it all over? Why on earth do you need to restart the server?
|
# ? Apr 19, 2017 16:24 |
|
After spending the day digging through the documentation for various options (fwiw, Docker seemed to be the best), my boss told me they were just going to hire a bunch of contractors to do the poo poo manually. As to why the machines have to be restarted every time, no solid answer, but I suspect the genius in charge of this project managed to introduce a memory leak or leave lingering mysql connections. Either way, it's off my plate.
|
# ? Apr 20, 2017 03:08 |
|
So I remember reading that Docker had a really ludicrous security model (running applications as root by default and/or needing to be run as root a lot of the time, or something like that). Is there a container system with a bit more of a reasonable security model? FreeBSD doesn't seem like they'd do that bullshit - do jails work reasonably well? How about LXC? Obviously a full-on hypervisor is the way to go if you really want to totally sandbox everything, but that's fairly heavyweight.
|
# ? Apr 26, 2017 00:25 |
|
Paul MaudDib posted:So I remember reading that Docker had a really ludicrous security model (running applications runs as root by default and/or needs to be run as root a lot of the time, or something like that). rkt claims to be a 'very secure way to run containers' and can import docker containers for portability. I haven't played with it much and the vagrant VM bombed on me last time I tried, but given another year I think it might overtake docker.
|
# ? Apr 26, 2017 00:36 |
|
I thought it was called Moby now.
|
# ? Apr 26, 2017 01:09 |
|
anthonypants posted:I thought it was called Moby now.
Moby is the umbrella project for all the other crap docker does now (compose, swarm, etc). It includes docker, but "docker" itself is still docker, I think.
Paul MaudDib posted:So I remember reading that Docker had a really ludicrous security model (running applications runs as root by default and/or needs to be run as root a lot of the time, or something like that).
This can be constrained with cgroups, namespaces, and selinux. uid0 is still uid0, and it's the same kernel, so a breakout gets you everything, but it's not as dumb as that.
Paul MaudDib posted:Is there a container system with a bit more of a reasonable security model? FreeBSD doesn't seem like they'd do that bullshit, do jails work reasonably well? How about LXC?
Jails basically have the same problem. I haven't seen a breakout since 2010, but a lot of that may honestly be the fact that freebsd's market share is so small that they're not widely researched. The kernel devs still think it's possible. As for LXC: you're looking for lxd, but it's not necessarily well supported.
Paul MaudDib posted:Obviously a full-on hypervisor is the way to go if you really want to totally sandbox everything, but that's fairly heavyweight.
There are microkernels which boot in less than a second. You're looking for Clear Containers.
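To put some flags on "this can be constrained": here's roughly what locking a container down from the docker CLI looks like ("myimage" is a placeholder; check your Docker version for flag support):

```shell
# Run the payload as a non-root uid, drop every Linux capability,
# forbid setuid-style privilege re-escalation, and keep the root
# filesystem read-only. "myimage" is a placeholder image name.
docker run --rm \
    --user 1000:1000 \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    --read-only \
    myimage
```

None of that helps against a kernel exploit (same kernel, as above), but it shrinks the attack surface a lot compared to a default root container.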
|
# ? Apr 26, 2017 01:46 |
|
Paul MaudDib posted:Is there a container system with a bit more of a reasonable security model? FreeBSD doesn't seem like they'd do that bullshit, do jails work reasonably well? How about LXC? FreeBSD Jails were explicitly designed as a security boundary to confine root. They are mature and work well, and with FreeBSD 11 the VIMAGE network stack virtualization has most of its bugs (memory leaks/instability at teardown) ironed out. Admin tools like iocage and warden (both in Ports) are the way to go for provisioning jails with ZFS. Linux namespaces were not designed as a security boundary, so your namespace-using toolset (Docker, LXC, systemd-nspawn) is responsible for locking down your container with SELinux/AppArmor et al. LXD purports to do this, building on top of LXC, and it includes libvirt and OpenStack bindings for automation. Illumos Zones are another secure container tech to check out in SmartOS. Like Jails, they were designed as a security boundary. SmartOS has first-rate Linux binary emulation, so you can use the familiar userland of your distro of choice with the Illumos kernel and all its services underneath.
|
# ? Apr 26, 2017 03:44 |
|
Can someone sum up why Docker is better than setting up VMs yourself? I don't claim setting up VMs yourself is better. But I would love to know what advantages Docker has.
|
# ? Apr 26, 2017 12:57 |
Containers aren't VMs, they're whatever you need to make an application work and nothing else. The advantage is that it's easier and quicker to deploy and the containers Just Work without any additional fluff. Edit: not saying you thought containers are VMs or anything, they just have different use cases. If you're running an infrastructure that makes heavy use of application servers and databases containers are pretty cool. Reduced attack surface, easy deployment, all that fun stuff. milk milk lemonade fucked around with this message at 14:09 on Apr 26, 2017 |
|
# ? Apr 26, 2017 14:03 |
|
milk milk lemonade posted:Containers aren't VMs, they're whatever you need to make an application work and nothing else. The advantage is that it's easier and quicker to deploy and the containers Just Work without any additional fluff.
|
# ? Apr 26, 2017 14:47 |
|
What would you guys say is the best way to start playing with docker containers?
|
# ? Apr 26, 2017 17:36 |
|
I started playing with docker by installing it on my RasPi and containerizing the things I wanted to use the Pi for. I currently have 4 containers: a ZNC bouncer, a duckdns updater, and an owncloud server with a supporting mysql server. It was a good way to learn the very basics, and it's nice to be able to commit and save the images of the containers, but I didn't learn any deployment stuff. I've been meaning to go get a few Zeros and see if I could make a swarm with the Pi3 as the management node. One of my teachers keeps trying to convince me to play with docker in Azure but I haven't come up with a worthwhile project for it yet. Frankly I think I'd rather learn Kubernetes on GCE.
|
# ? Apr 26, 2017 18:20 |
|
Beefstorm posted:What would you guys say is the best way to start playing with docker containers?
Find an application which is already run in microservices or easily-separated parts (redis+db+web, for example). Containers almost never run init and don't run services (ssh, etc).
Read about dockerfiles. Pick a base image. For db/redis/etc, a well-built container is probably already on dockerhub. apt-get/yum/pip/whatever install your dependencies. Add your web crap on top. Create one dockerfile per component and link them. Expose the port for the webapp. Done.
Trivially, write a small script which is used by your ci/cd system. Put it in a dockerfile with a volume mount. Do a git checkout && ./autogen.sh && make. Copy the artifacts to your volume. Run the container. View your compiled/packaged/whatever crap in the volume on the host.
Later, move it to kubernetes or something to make it resilient (docker swarm is also ok, but not reliable in my experience). Or openshift origin if you want a paas experience. Which is also packaged as a dockerfile if you want. Or use a prebuilt container for sickbeard, IRC bouncer, whatever. The world is your oyster.
Many of the use cases are similar to vagrant, except containers make more sense for some of the workloads. For example, here's a teaching/hacking environment for React on top of Cockpit (the web dashboard for fedora/CentOS). These containers are huge because it was a 20 minute one-off and cockpit is pretty dependent on systemd, which makes it all a mess, but you can edit code on the host and it gets rebuilt on the fly in a container which exposes port 9091 so you don't need to set up an entire cockpit development environment yourself. There are 2 entrypoints -- systemd for cockpit, and a trivial shell script for webpack. "Development environment in a box" doesn't require vagrant anymore. The advantage of this is that you can hand new devs a tiny git repo and they can be up and running in 15 minutes. 
Similarly, for CI/CD, you can autobuild and redeploy your application in no time flat, and your lazy developers can use ubuntu (or shove their crap inside Alpine or another microdistro), with relatively clean separation. Your DB is now an application in a box which can run anywhere in your cluster without building a whole VM and worrying about firewall rules, DMZ, fail2ban, etc. You could do all this stuff before with , and don't let this convince you that you don't need to worry about security anymore (you do, except now twice -- container and host), but the attack surface and runtime footprint are much smaller. The actual use just aligns closer to an ideal model for many applications. The Nards Pan posted:Frankly I think I'd rather learn Kubernetes on GCE. k8s setup can be a bear, but you can do this pretty trivially with rancher, coreos, or atomic now if you want to see what running your own environment is like for curiosity.
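For the "pick a base image, install dependencies, add your crap on top" part, a minimal Dockerfile sketch (image tag, file names, and port are made up for illustration):

```dockerfile
# Small, well-maintained base image from Docker Hub
FROM python:3-alpine

# Install dependencies first so this layer caches between code changes
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Add the application itself
COPY app.py /app/

# Expose the web port and define the one process this container runs
EXPOSE 8080
CMD ["python", "/app/app.py"]
```

From there it's `docker build -t myapp .`, `docker run -p 8080:8080 myapp`, and link in your db container.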
|
# ? Apr 26, 2017 19:21 |
|
So, docker container. It's basically an OS, an application, and only the bits required to make that application purr; all inside a VM. Is that a very broken way to describe it? EDIT: I noticed something. milk milk lemonade posted:Containers aren't VMs, they're whatever you need to make an application work and nothing else. The advantage is that it's easier and quicker to deploy and the containers Just Work without any additional fluff. How is it not a VM? Or is that just one of those things. People don't want you to call it a 'VM' when it is actually a 'Container'? I guess I'm just missing fundamental differences. Beefstorm fucked around with this message at 21:20 on Apr 26, 2017 |
# ? Apr 26, 2017 20:55 |
OS isn't required.
|
|
# ? Apr 26, 2017 21:15 |
|
milk milk lemonade posted:OS isn't required. O rly? Time for me to read! Thank you.
|
# ? Apr 26, 2017 21:20 |
|
Beefstorm posted:So, docker container. It's basically an OS, an application, and only the bits required to make that application purr; all inside a VM.
A virtual machine has its own completely virtualized hardware and BIOS, along with all of the bloat that brings. A container shares the host OS's kernel and all of the resources that come with it - cpu, memory, disk, network, et al. However, it comes with its own binaries and libraries, which are most of what make up a traditional operating system. It's still not quite what I would describe as a "full" operating system, since you don't have to sweat things like configuring interfaces or storage and whatnot. The containerization platform (docker, in this case) abstracts all of that out for you.
The most common metaphor I see to describe the differences is to imagine your host hardware as a high-rise building:
- in a traditional bare metal world, your building looks like a hostel, with everyone sharing the space and resources and falling on top of one another
- in a virtualized world, your building looks like a condo, where every unit is self-sufficient with its own circuit breakers, water heater, furnace, and so on
- in a container world, your building is more like an apartment building, where every unit shares the same basic plumbing, power, trash services, and so on
The folks at Redhat love their coloring books, so here's their take on the metaphor in PDF form along with some basic pros and cons to each of the different approaches.
Edit: Dan Walsh's take likens the virtual machine approach to more of a duplex than a condo building, but you get the idea
Cidrick fucked around with this message at 21:24 on Apr 26, 2017 |
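One quick way to see the shared-kernel point for yourself, assuming you have Docker handy:

```shell
# The container reports the *host's* kernel release - there is no
# guest kernel, unlike a VM. (alpine is just a convenient tiny image.)
uname -r
docker run --rm alpine uname -r
# both commands print the same kernel release
```

Run the same two commands against a VM and you'll get whatever kernel the guest booted, which is the whole difference in one line.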
# ? Apr 26, 2017 21:22 |
|
Good description! Just read through the PDF. Didn't color. Somehow feel unaccomplished. But overall, this gave a great 'for idiots' overview to the differences. Thanks!
|
# ? Apr 26, 2017 22:27 |
|
Cidrick posted:The folks at Redhat love their coloring books, so here's their take on the metaphor in PDF form along with some basic pros and cons to each of the different approaches. The coloring book thing is great.
|
# ? Apr 26, 2017 23:41 |
|
Moey posted:The coloring book thing is great. I have copies of both this and the SELinux coloring book in my desk at work. To date, nary a speck of color on any of the pages, though
|
# ? Apr 27, 2017 00:07 |
|
It has been so long since I've done it that I can't even think of the answer clearly - what functions do I lose if I have ESXi installed with access to only local storage? We're giving up what, vMotion, HA, DRS? (e: FT...)
|
# ? Apr 27, 2017 04:55 |
|
MC Fruit Stripe posted:It has been so long since I've done it that I can't even think of the answer clearly - what functions do I lose if I have ESXi installed with access to only local storage? We're giving up what, vMotion, HA, DRS? (e: FT...) HA and HA-related things. You can do svmotion and regular vmotion without shared storage now, as of 5.5 if I recall.
|
# ? Apr 27, 2017 05:17 |