|
I've had these disks passed through via RDM for over a year now and have had zero issues. It's running on consumer hardware too. Granted, it isn't mission critical, but it has been stable as hell.
|
# ? Jul 15, 2017 04:35 |
|
Company is sending me to VMworld Vegas in August. It'll be my first time at VMworld, though I've been to Interop a couple of times.
|
# ? Jul 18, 2017 19:15 |
|
For those of you deeply into virt, what is the (virtual) real-world difference between a 4-core processor (with no HT) and a 2-core / 4-thread processor with HT? I get that real cores are always better, but are HT cores also pretty ok? Relevance is I'm thinking about decommissioning some old hardware. The new hardware is the 2c/4t hardware.
|
# ? Jul 18, 2017 23:59 |
|
Surely it completely depends on workload? If you want to run a few general-purpose VMs to sit there and be web hosts then it probably doesn't matter, but if the single-thread performance of the dual-core boxes is better and you're doing media transcoding then use those.
|
# ? Jul 19, 2017 00:30 |
|
Is this home stuff or work?
|
# ? Jul 19, 2017 00:46 |
|
Also like, what actual processors are we comparing here? I agree with Thanks Ants: it depends on your workload. I know when I'm troubleshooting performance issues I don't care how many logical cores a host has, I care how many physical cores it has.
|
# ? Jul 19, 2017 00:51 |
|
I'm 99% sure really bad things happen if you try to treat an HT core as if it were a physical core, i.e. don't assign a VM 4 cores on a 2-physical-core host.
|
# ? Jul 19, 2017 01:13 |
|
Hyperthreaded cores still only have one set of caches, so you'll get some performance improvement up to the point that cache is not the bottleneck, and where instructions from both threads fit in the same cycle, but it's not going to be anything like a 2x increase.
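Since the physical-vs-logical distinction keeps coming up: on Linux you can see it in /proc/cpuinfo, where hyperthreading shows up as multiple logical CPUs sharing the same (physical id, core id) pair. Here's a minimal Python sketch of that bookkeeping — it assumes the usual x86 /proc/cpuinfo field names, and the sample text below is a made-up stand-in for a 2c/4t chip:

```python
def count_cores(cpuinfo: str):
    """Count (physical_cores, logical_cpus) from /proc/cpuinfo text.

    HT siblings are distinct "processor" entries that share one
    (physical id, core id) pair, so the set collapses them.
    """
    logical = 0
    phys_core_pairs = set()
    physical_id = None
    for line in cpuinfo.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":          # one entry per logical CPU
            logical += 1
        elif key == "physical id":      # socket number
            physical_id = value
        elif key == "core id":          # core within that socket
            phys_core_pairs.add((physical_id, value))
    return len(phys_core_pairs), logical


# Fabricated cpuinfo for a single-socket 2-core chip with HT:
sample = """\
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 0
processor : 3
physical id : 0
core id : 1
"""

print(count_cores(sample))  # -> (2, 4)
```

On a real box you'd feed it `open("/proc/cpuinfo").read()` instead of the sample string.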
|
# ? Jul 19, 2017 01:26 |
|
Thanks for the feedback. I have a couple systems virtualized - a PBX and a Windows 10 install. The PBX is used daily but is pretty light on resources. The Win 10 install gets used a couple times a month and would benefit from more horsepower. The old hardware is a Xeon X3360. The "new" hardware is 2x i3-4130 systems.

My thought was to move the existing VMs to one of the i3 systems, keep the 3360 as a cold backup in case the i3 system ever fails, and then use the other i3 system for a FreeNAS pool. Then again, I may just do nothing, because the venerable 3360 is doing fine puttering along, and I'm predicting that Xeon E5 v2 systems are going to be hitting the market for cheap in the next 12 months and would be a vast upgrade over either choice. I know the extra cache, physical cores, and extra RAM probably help the case for the 3360, so I think I'll just keep it in play and find some other projects for the i3 systems.
|
# ? Jul 19, 2017 02:27 |
|
bobfather posted:I know the extra cache, physical cores, and extra RAM probably help the case for the 3360, so I think I'll just keep it in play and find some other projects for the i3 systems.
|
# ? Jul 19, 2017 03:09 |
|
I'm an engineer for a major Cloud Company. One common thing we've been having is people getting their passwords cracked by super computers, then the people will take over their cloud environment and spin up a bunch of Bitcoin mining instances.
|
# ? Jul 22, 2017 03:49 |
|
ZHamburglar posted:I'm an engineer for a major Cloud Company. One common thing we've been having is people getting their passwords cracked by super computers, then the people will take over their cloud environment and spin up a bunch of Bitcoin mining instances.
|
# ? Jul 22, 2017 04:18 |
|
Do people not use 2FA?
|
# ? Jul 22, 2017 04:30 |
|
don't work for Aliyun
|
# ? Jul 22, 2017 06:15 |
|
ZHamburglar posted:I'm an engineer for a major Cloud Company. One common thing we've been having is people getting their passwords cracked by super computers, then the people will take over their cloud environment and spin up a bunch of Bitcoin mining instances. Right, "cracked by a super computer," not uploaded to GitHub by accident.
|
# ? Jul 22, 2017 12:18 |
|
ZHamburglar posted:getting their passwords cracked by super computers ahahahaha what Don't believe everything you're told.
|
# ? Jul 23, 2017 02:38 |
|
H2SO4 posted:ahahahaha what I'm guessing he just means a hashed pw list being worked on in an AWS compute instance against rainbow tables. Close enough to a supercomputer.
|
# ? Jul 23, 2017 15:09 |
|
The bitcoin thing is true, though. Cracked accounts are turned into mining hosts until they're detected. It's why the trials have limits, among other things.
|
# ? Jul 23, 2017 15:40 |
|
insularis posted:I'm guessing he just means a hashed pw list being worked on in an AWS compute instance against rainbow tables. Close enough to a supercomputer. I'm guessing they are in a non-technical role and are parroting something they heard. Which "major cloud company" has a password table vulnerable to rainbow tables, and has lost it, to allow this? jre fucked around with this message at 16:16 on Jul 23, 2017 |
# ? Jul 23, 2017 16:01 |
|
If you have enough supercomputer to attack modern password hashes why aren't you running bitcoin on the supercomputer instead of scamming pennies from hijacked rackspace accounts? (I originally said softlayer but what hackers are gonna set up a fax machine to provision new vms lol)
|
# ? Jul 23, 2017 16:33 |
|
jre posted:I'm guessing they are in a non technical role and is parroting something they heard. Which "major cloud company" has a password table vulnerable to rainbow tables and has lost it to allow this ? Ain't defendin', just doing translation.
|
# ? Jul 23, 2017 16:35 |
|
I don't think he was saying the supercomputers were cracking the passwords on the Cloud Company. What's happened a few times recently is a site gets hacked and the database of passwords gets leaked. Hackers brute force the leaked hashed password DB to find the original password, then try that username/password combination on other sites and services because people tend to reuse passwords.
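To make the "brute force the leaked hashes" step concrete, here's a hedged Python sketch of the precomputed-lookup idea behind rainbow tables (minus the actual time/space trade-off tricks). Every password and "leak" below is fabricated for illustration; the point is that fast, unsalted hashes like plain MD5 fall to a simple table lookup, which is exactly what salted, slow hashes (bcrypt, scrypt) are designed to prevent:

```python
import hashlib


def md5_hex(pw: str) -> str:
    """Unsalted MD5 -- the kind of hash that shows up in old breached DBs."""
    return hashlib.md5(pw.encode()).hexdigest()


# Attacker's precomputed table of common passwords (illustrative list).
COMMON_PASSWORDS = ["password", "123456", "letmein", "hunter2"]
LOOKUP = {md5_hex(p): p for p in COMMON_PASSWORDS}


def crack(leaked_hashes):
    """Return {hash: recovered_password} for every hash in the table."""
    return {h: LOOKUP[h] for h in leaked_hashes if h in LOOKUP}


# A fabricated "leaked database" of unsalted hashes:
leaked = [md5_hex("hunter2"), md5_hex("correct-horse-battery")]
print(crack(leaked))  # recovers "hunter2"; the uncommon password survives
```

The recovered credentials are then tried against other services — the credential-stuffing step described above — which is why password reuse, not any supercomputer, is usually the real problem.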
|
# ? Jul 23, 2017 18:52 |
|
THF13 posted:I don't think he was saying the supercomputers were cracking the passwords on the Cloud Company.
|
# ? Jul 23, 2017 19:00 |
|
THF13 posted:I don't think he was saying the supercomputers were cracking the passwords on the Cloud Company. There's still no super computer involved in any of that. A desktop pc with a couple of graphics cards running hashcat isn't a supercomputer
|
# ? Jul 23, 2017 19:07 |
|
jre posted:There's still no super computer involved in any of that. A desktop pc with a couple of graphics cards running hashcat isn't a supercomputer If you don't consider an AWS Compute instance rental a supercomputer, I don't know what to say, except, can I have your decommissioned hardware, please.
|
# ? Jul 23, 2017 19:16 |
|
insularis posted:If you don't consider an AWS Compute instance rental a supercomputer, I don't know what to say, except, can I have your decommissioned hardware, please.
|
# ? Jul 23, 2017 19:26 |
|
insularis posted:If you don't consider an AWS Compute instance rental a supercomputer, I don't know what to say, except, can I have your decommissioned hardware, please. A single p2 would be great for cracking, but it's not even vaguely a super computer. 1000 of them on the other hand
|
# ? Jul 23, 2017 19:27 |
|
Alright, so I realize this is a dumb question along the lines of "what flavor of Linux do companies use," but in a DevOps-style shop that uses its own hardware, are there any Linux-based virtualization platforms that are more popular? I'm already very familiar with ESXi and XenServer (not Xen), but I'm wanting to branch out a bit and start learning some Python, along with things like Puppet / Chef / Ansible / Docker. I know that much of this type of stuff is in (or heading to) AWS right now, but I'm just looking for stuff to play around with in a home lab.

Basically, I've been a Windows admin for a long time and have a pretty good grasp on that side of things, but I'm starting to get a bit bored of my current career path and would like to be able to work at sexy tech shops instead of just normal run-of-the-mill businesses. Thoughts?
|
# ? Jul 24, 2017 00:54 |
|
Have you looked into container orchestration platforms like Kubernetes or Docker Swarm? Those are super interesting and should be demoable on any VPS for cheap. With Jenkins or another build pipeline you can build your code right into Docker images that you can feed into a container orchestrator to be pushed around. It's less than friendly to get started, but the benefits for any sort of micro-service/distributed system are immediately obvious. https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

For CM I've been using Ansible and Chef, and both have their use cases. Ansible is okay for ad hoc stuff like pushing code, checking statuses, and other human-intervention tasks, but I'd never dream of using it exclusively as a CM (protip: figure out dynamic inventory and feed your Chef inventory into Ansible). Chef is great because it fully enables you to drop into arbitrary code at any moment if you need to do something fancy or need special logic tailored to your environment.
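For anyone curious what that dynamic-inventory protip looks like in practice: an Ansible dynamic inventory is just an executable that prints JSON in Ansible's expected shape when called with `--list`. A minimal Python sketch follows — the hostnames and groups are placeholders, and the hardcoded dict stands in for whatever your real source (a Chef server query, a cloud API) would return:

```python
#!/usr/bin/env python
"""Minimal Ansible dynamic inventory sketch.

Ansible invokes this script with --list and expects JSON mapping
group names to hosts, plus a _meta.hostvars block with per-host
variables. Everything below is placeholder data.
"""
import json
import sys


def build_inventory():
    # In a real script: query your Chef server / cloud API here and
    # group the returned nodes by role or environment.
    return {
        "webservers": {"hosts": ["web1.example.com", "web2.example.com"]},
        "dbservers": {"hosts": ["db1.example.com"]},
        "_meta": {
            "hostvars": {
                "web1.example.com": {"ansible_port": 22},
            }
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # Per-host queries are covered by _meta above, so return nothing.
        print(json.dumps({}))
```

You'd point Ansible at it with `ansible -i ./inventory.py all -m ping` (filename assumed); because `_meta` is included, Ansible skips the slower per-host `--host` calls.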
|
# ? Jul 24, 2017 01:24 |
|
Kubernetes isn't great on a single host, and Docker Swarm is debatably not production-ready (plus losing the war to Kubernetes pretty badly). OpenShift on a single node with Jenkins and an S2I pipeline is probably what you want, or bare Kubernetes on 3+ hosts with CoreOS Tectonic.
|
# ? Jul 24, 2017 02:11 |
|
Sounds like I've got some reading to do, thanks guys. If anyone else has advice for someone making the Windows -> Linux / "DevOps-y" transition, I'd love to hear it.
|
# ? Jul 24, 2017 02:20 |
|
I've been tinkering with oVirt in my home lab lately; it's probably not as valuable to learn if you're already very familiar with vSphere stuff, unless you know you want to go down the RHEV/RHEL road. I can also second Docker Swarm not being production-ready. We're having a hell of a time moving to it at work; basic features break, often. I don't know all the gory details since I'm just the intern, but there is talk of scrapping Swarm altogether and starting over with Kubernetes.
|
# ? Jul 24, 2017 03:18 |
|
I've been playing around with Docker to learn about proxy caching but I've hit a bit of a roadblock with Docker's networking. I'm running both Apache and Squid on the same CentOS 7 (Atomic) host, each in their own container, publishing their relevant ports. The Apache server works fine, I can make requests over HTTP and HTTPS, and I can set the Squid server as my proxy and connect to websites fine, but I can't connect to the Apache server through the proxy. I can't even wget/curl the webserver from any other container on the host. I can ping the host from another container on both its IP on my local network and its IP on the docker bridged network, but any connection over port 80 gets "host is unreachable". At first, I thought it might be a firewall issue on the host, I tested this by disabling iptables but no dice. I'm very new to Docker and I don't know where else I can poke at to fix this, surely it must be possible? ArcticZombie fucked around with this message at 21:30 on Jul 24, 2017 |
# ? Jul 24, 2017 19:20 |
|
ArcticZombie posted:I've been playing around with Docker to learn about proxy caching but I've hit a bit of a roadblock with Docker's networking. I'm running both Apache and Squid on the same CentOS 7 (Atomic) host, each in their own container, publishing their relevant ports. The Apache server works fine, I can make requests over HTTP and HTTPS, and I can set the Squid server as my proxy and connect to websites fine, but I can't connect to the Apache server *through* the proxy. I can't even wget/curl the webserver from any other container on the host. I can ping the host from another container on both its IP on my local network and its IP on the docker bridged network, but any connection over port 80 gets "host is unreachable". At first, I thought it might be a firewall issue on the host, I tested this by disabling iptables but no dice.
|
# ? Jul 24, 2017 19:35 |
|
My team at work has a need to VPN into a client's network to access a web application. Our internal security team doesn't want us to install the VPN client and allow us to VPN directly into the client from our own desktops. They say that they wouldn't be able to control traffic coming in through that tunnel. I'm not a networking expert, so I'll trust them on that. My security/network team proposed setting up a site-to-site VPN with the client, but without going into too many details, the client doesn't want to.

The temporary solution we came up with was to set up physical desktops in our DMZ. When someone from my team needs to connect via VPN to the client, they RDP into one of these physical desktops and initiate the VPN from there. It's hardly ideal: the number of desktops is limited, you can't have more than two users on the same host at the same time, and they sometimes need to be physically rebooted because Windows 7.

We use Citrix (I couldn't tell you which product specifically, but we call it the Cloud Desktop and it uses the Citrix Receiver) as the virtualization technology for end users at work, and I wanted to see if that would work. My Citrix guy said it wouldn't work based on his previous experience, but he humored me in our test environment anyway by installing the VPN software onto two user accounts. Unfortunately, as soon as the second user opened the VPN connection, the first got disconnected.

So, all that to ask you virtualization experts: is there a virtualization solution that will allow multiple users to be concurrently logged onto the same host and allow each of them to independently create their own VPN tunnel?
|
# ? Jul 24, 2017 19:42 |
|
anthonypants posted:Are you using iptables or firewalld? Is selinux in permissive or enforcing? I'm using iptables, but that doesn't matter because selinux was the culprit
|
# ? Jul 24, 2017 20:01 |
|
Can you get them to put up an RD Gateway? Pretty much all RDP clients support it now and it's a lot less convoluted than what you are trying to do. Only RDP out, nothing coming back like a VPN. What your security people are saying only applies if they've disabled the software firewall. It's increasing your exposure a bit, but traffic should be hitting the default Windows Public firewall profile, which doesn't let anything through unless massively misconfigured.Secx posted:So, all that to ask you virtualization experts: is there a virtualization solution that will allow multiple users to be concurrently logged onto the same host and allow each of them to independently create their own VPN tunnel? I think you're going to have problems with the routing tables, since most tunnels are going to modify them system-wide and they'll start to conflict. However, if you have a single host and some admin fires up the tunnel and leaves their session up, you might be able to get other user sessions on the same box to push their traffic through it. BangersInMyKnickers fucked around with this message at 20:07 on Jul 24, 2017 |
# ? Jul 24, 2017 20:04 |
|
Client to site VPNs are a scourge.
|
# ? Jul 24, 2017 20:10 |
|
ArcticZombie posted:I'm using iptables, but that doesn't matter because selinux was the culprit Is container-selinux installed? It should be. In the future, though, if you want to do networking between containers, you should either use docker-compose so they come up as a group, or use something completely overblown for your use case (like Calico or Flannel); Kubernetes will manage this for you. Not allowing containers to talk to each other by default is a feature, not a bug. `docker network connect ...` will also work. And don't disable iptables -- Docker manipulates iptables very heavily to allow port mapping to work, since each container gets a namespaced NIC.
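To sketch the docker-compose route for the Apache + Squid setup above (image names and ports here are illustrative assumptions, not taken from the actual setup): services defined in one compose file share a user-defined network where service names resolve via DNS, so Squid can reach Apache directly by name instead of hairpinning through the host IP.

```yaml
# docker-compose.yml -- a sketch, not the poster's actual config.
version: "3"
services:
  apache:
    image: httpd:2.4        # example image choice
    ports:
      - "80:80"
  squid:
    image: ubuntu/squid     # example image choice
    ports:
      - "3128:3128"
```

With both containers up via `docker-compose up`, the Squid container can proxy to `http://apache/` on the shared network, which sidesteps the container-to-host connection that was being blocked.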
|
# ? Jul 24, 2017 21:16 |
|
Secx posted:
The problem with using XenDesktop here is that as soon as you establish the VPN, there's no route back to you as the client to send the Citrix traffic back through. You either need a machine you can access the console of (through VMware or similar), a tool like GoToMyPC that tunnels through the internet, or desktops set up in the other company's network that you can connect to. To be real clear here, your network/security guys are the ones blocking you, so it's really on them to work out a solution that works with the other company. I totally understand not wanting client VPN traffic on their network, but they should have a more workable solution. They're clearly OK with letting you access it, just not from the corporate internal network - why not push back and request being allowed to use guest WiFi or something like that?
|
# ? Jul 25, 2017 00:49 |