Moey
Oct 22, 2010

I LIKE TO MOVE IT
I've had these disks passed through via RDM for over a year now and have had zero issues. It's running on consumer hardware too. Granted, it isn't mission critical, but it has been stable as hell.

Alfajor
Jun 10, 2005

The delicious snack cake.
Company is sending me to VMworld Vegas in August :toot: It'll be my first time at VMworld, though I've been to Interop a couple of times.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
For those of you deeply into virt, what is the (virtual) real-world difference between a 4-core processor (with no HT) and a 2-core / 4-thread processor with HT?

I get that real cores are always better, but are HT cores also pretty ok?

This is relevant because I'm thinking about decommissioning some old hardware; the new hardware is the 2c/4t stuff.

Thanks Ants
May 21, 2004

#essereFerrari


Surely it completely depends on workload? If you want to run a few general-purpose VMs to sit there and be web hosts then it probably doesn't matter, but if the single-thread performance of the dual-core boxes is better and you're doing media transcoding then use those.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Is this home stuff or work?

Internet Explorer
Jun 1, 2005





Also like, what actual processors are we comparing here? I agree with Thanks Ants: it depends on your workload. When I'm troubleshooting performance issues, I don't care how many logical cores a host has; I care how many physical cores it has.

Methanar
Sep 26, 2013

by the sex ghost
I'm 99% sure really bad things happen if you try to treat an HT core as if it were a physical core.

I.e., don't assign a VM 4 vCPUs on a host with 2 physical cores.
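
Quick way to see what a Linux host actually has before sizing VMs (rough sketch; the exact lscpu labels can vary a bit by version):

code:
lscpu | grep -E 'Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)|^CPU\(s\)'
# e.g. "CPU(s): 4" with "Core(s) per socket: 2" and "Thread(s) per core: 2"
# means 2 physical cores showing up as 4 logical CPUs.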

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Hyperthreaded cores still share one set of caches, so you'll get some performance improvement up to the point that cache becomes the bottleneck, and when instructions from both threads fit in the same cycle, but it's not going to be anything like a 2x increase.
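
You can actually see the sharing on Linux; sysfs path from memory:

code:
# Each line lists the logical CPUs that share one physical core (and its
# L1/L2 caches). On a 2c/4t box you typically get pairs like 0,2 and 1,3.
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list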

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
Thanks for the feedback.

I have a couple systems virtualized - a PBX and a Windows 10 install. The PBX is used daily but is pretty light on resources. The Win 10 install gets used a couple times a month and would benefit from more horsepower.

The old hardware is a Xeon X3360. The "new" hardware is a pair of i3-4130 systems. My thought was to move the existing VMs to one of the i3 systems, keep the 3360 as a cold backup in case the i3 system ever fails, and then use the other i3 system as a FreeNAS box.

Then again, I may just do nothing, because the venerable 3360 is doing fine puttering along, and I'm predicting that Xeon E5 v2 systems will hit the market cheap in the next 12 months and would be a vast upgrade over either choice.

I know the extra cache, physical cores, and extra RAM probably help the case for the 3360, so I think I'll just keep it in play and find some other projects for the i3 systems.

evol262
Nov 30, 2010
#!/usr/bin/perl

bobfather posted:

I know the extra cache, physical cores, and extra RAM probably help the case for the 3360, so I think I'll just keep it in play and find some other projects for the i3 systems.
Memory, yes. For light-workload systems that spend a lot of time idle, like yours, I wouldn't worry too much about cache bottlenecks or anything else. HT is fine unless what you're doing is really performance-sensitive, or you're pinning VMs to HT cores for some reason.

ZHamburglar
Aug 24, 2006
I have a penis.
I'm an engineer for a major Cloud Company. One common thing we've been having is people getting their passwords cracked by super computers, then the people will take over their cloud environment and spin up a bunch of Bitcoin mining instances.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

ZHamburglar posted:

I'm an engineer for a major Cloud Company. One common thing we've been having is people getting their passwords cracked by super computers, then the people will take over their cloud environment and spin up a bunch of Bitcoin mining instances.
I don't know which scenario is worse: that a hacker has access to your user database and needs to break an unsalted MD5 hash, or that someone is making massive numbers of logon attempts without getting blocked.
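
To illustrate the unsalted part (untested sketch; "hunter2" is a stand-in password):

code:
# Unsalted MD5 is identical for every user with the same password, so one
# precomputed rainbow table cracks all of them at once:
printf '%s' 'hunter2' | md5sum
# A salted hash (MD5-crypt here just for the demo; you'd really want
# bcrypt/scrypt) differs every run because of the random salt:
openssl passwd -1 'hunter2'
openssl passwd -1 'hunter2'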

Wibla
Feb 16, 2011

Do people not use 2FA?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
don't work for Aliyun

jre
Sep 2, 2011

To the cloud ?



ZHamburglar posted:

I'm an engineer for a major Cloud Company. One common thing we've been having is people getting their passwords cracked by super computers, then the people will take over their cloud environment and spin up a bunch of Bitcoin mining instances.

Right, "cracked by a super computer" not uploaded to github by accident

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord

ZHamburglar posted:

getting their passwords cracked by super computers

ahahahaha what

Don't believe everything you're told.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

H2SO4 posted:

ahahahaha what

Don't believe everything you're told.

I'm guessing he just means a hashed pw list being worked on in an AWS compute instance against rainbow tables. Close enough to a supercomputer.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
The bitcoin thing is true, though.

Cracked accounts are turned into mining hosts until they're detected.

It's why the trials have limits, among other things.

jre
Sep 2, 2011

To the cloud ?



insularis posted:

I'm guessing he just means a hashed pw list being worked on in an AWS compute instance against rainbow tables. Close enough to a supercomputer.

I'm guessing they're in a non-technical role and are parroting something they heard. Which "major cloud company" has a password table vulnerable to rainbow tables and has lost it in a way that allows this?

jre fucked around with this message at 16:16 on Jul 23, 2017

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

If you have enough supercomputer to attack modern password hashes, why aren't you mining bitcoin on the supercomputer instead of scamming pennies from hijacked Rackspace accounts?

(I originally said SoftLayer, but what hackers are gonna set up a fax machine to provision new VMs lol)

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

jre posted:

I'm guessing they're in a non-technical role and are parroting something they heard. Which "major cloud company" has a password table vulnerable to rainbow tables and has lost it in a way that allows this?

Ain't defendin', just doing translation.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
I don't think he was saying the supercomputers were cracking the passwords on the Cloud Company.

What's happened a few times recently is a site gets hacked and the database of passwords gets leaked. Hackers brute force the leaked hashed password DB to find the original password, then try that username/password combination on other sites and services because people tend to reuse passwords.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

THF13 posted:

I don't think he was saying the supercomputers were cracking the passwords on the Cloud Company.

What's happened a few times recently is a site gets hacked and the database of passwords gets leaked. Hackers brute force the leaked hashed password DB to find the original password, then try that username/password combination on other sites and services because people tend to reuse passwords.
If that's the case, their company should consider using a service like haveibeenpwned.com to reset users' passwords once they show up in a leak. In every other scenario they should enforce 2FA through Google Authenticator or something.
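
Checking a password against their Pwned Passwords range API looks roughly like this (untested sketch; "hunter2" is a stand-in, and only the first 5 chars of the SHA-1 ever leave your machine):

code:
hash=$(printf '%s' 'hunter2' | sha1sum | awk '{print toupper($1)}')
# Send the 5-char prefix, grep the response for the remaining 35 chars;
# any hit means the password has shown up in a breach, with a count.
curl -s "https://api.pwnedpasswords.com/range/${hash:0:5}" | grep "${hash:5}"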

jre
Sep 2, 2011

To the cloud ?



THF13 posted:

I don't think he was saying the supercomputers were cracking the passwords on the Cloud Company.

What's happened a few times recently is a site gets hacked and the database of passwords gets leaked. Hackers brute force the leaked hashed password DB to find the original password, then try that username/password combination on other sites and services because people tend to reuse passwords.

There's still no supercomputer involved in any of that. A desktop PC with a couple of graphics cards running hashcat isn't a supercomputer.
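
For scale, the whole "supercomputer" workflow is roughly this (hypothetical file names):

code:
# Straight wordlist attack (-a 0) on a leaked list of raw MD5 hashes (-m 0).
# A single gaming GPU gets through billions of MD5 guesses per second.
hashcat -m 0 -a 0 leaked_hashes.txt rockyou.txt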

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

jre posted:

There's still no supercomputer involved in any of that. A desktop PC with a couple of graphics cards running hashcat isn't a supercomputer.

If you don't consider an AWS Compute instance rental a supercomputer, I don't know what to say, except, can I have your decommissioned hardware, please.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

insularis posted:

If you don't consider an AWS Compute instance rental a supercomputer, I don't know what to say, except, can I have your decommissioned hardware, please.
The most obvious reason it's not is that AWS instances span a huge range of performance; "an AWS compute instance" tells you almost nothing about how much compute you actually have.

jre
Sep 2, 2011

To the cloud ?



insularis posted:

If you don't consider an AWS Compute instance rental a supercomputer, I don't know what to say, except, can I have your decommissioned hardware, please.
A what now.

A single p2 would be great for cracking, but it's not even vaguely a supercomputer.

1,000 of them, on the other hand :getin:

Internet Explorer
Jun 1, 2005





Alright, so I realize this is a dumb question along the lines of "what flavor of Linux do companies use," but in a DevOps-style shop that runs its own hardware, are there any Linux-based virtualization platforms that are more popular than others? I'm already very familiar with ESXi and XenServer (not Xen), but I want to branch out a bit and start learning some Python, along with things like Puppet / Chef / Ansible / Docker. I know that much of this type of stuff is in (or heading to) AWS right now, but I'm just looking for stuff to play around with in a home lab.

Basically, I've been a Windows admin for a long time and have a pretty good grasp on that side of things, but I'm starting to get a bit bored with my current career path and would like to be able to work at sexy tech shops instead of just normal, run-of-the-mill businesses.

Thoughts?

Methanar
Sep 26, 2013

by the sex ghost
Have you looked into container orchestration platforms like Kubernetes or Docker Swarm? They're super interesting and should be demoable on any cheap VPS. With Jenkins or another build pipeline you can build your code right into Docker images that you feed into a container orchestrator to be pushed around. It's less than friendly to get started, but the benefits for any sort of micro-service/distributed system are immediately obvious.

https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

For CM I've been using Ansible and Chef, and both have their use cases. Ansible is okay for ad-hoc stuff like pushing code, checking statuses, and other human-intervention tasks, but I'd never dream of using it exclusively as a CM (protip: figure out dynamic inventory and feed your Chef inventory into Ansible; see the sketch below). Chef is great because it fully enables you to drop into arbitrary code at any moment if you need to do something fancy or need special logic tailored to your environment.
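
A dynamic inventory is just any executable that answers --list with JSON, so the glue can be tiny. Untested sketch with made-up hostnames; a real one would query the Chef server (e.g. via knife search) instead of echoing:

code:
#!/bin/bash
# chef_inventory.sh -- toy Ansible dynamic inventory (chmod +x it).
# Ansible calls it with --list for the whole inventory and --host <name>
# for per-host vars (empty here, since _meta covers them).
if [ "$1" = "--list" ]; then
  cat <<'EOF'
{
  "webservers": { "hosts": ["web01.example.com", "web02.example.com"] },
  "_meta": { "hostvars": {} }
}
EOF
elif [ "$1" = "--host" ]; then
  echo '{}'
fi

Then point Ansible at the script instead of a static hosts file: ansible -i ./chef_inventory.sh webservers -m ping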

evol262
Nov 30, 2010
#!/usr/bin/perl
Kubernetes isn't great on a single host, and Docker Swarm is arguably not production-ready (plus it's losing the war to Kubernetes pretty badly).

OpenShift on a single node with Jenkins and an S2I pipeline is probably what you want, or bare Kubernetes on 3+ hosts with CoreOS Tectonic.
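
(If memory serves, the all-in-one Origin setup for a lab box is about one command, though the flags move around between releases:)

code:
# Single-node OpenShift Origin cluster for playing with S2I + Jenkins:
oc cluster up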

Internet Explorer
Jun 1, 2005





Sounds like I've got some reading to do, thanks guys.

If anyone else has advice for someone making the Windows -> Linux / "DevOps-y" transition, I'd love to hear it.

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
I've been tinkering with oVirt in my home lab lately; probably not as valuable to learn if you're already very familiar with vSphere stuff, unless you know you want to go down the RHEV/RHEL road. I can also second Docker Swarm not being production-ready: we're having a hell of a time moving to it at work, and basic features break, often. I don't know all the gory details since I'm just the intern, but there is talk of scrapping Swarm altogether and starting over with Kubernetes.

ArcticZombie
Sep 15, 2010
I've been playing around with Docker to learn about proxy caching but I've hit a bit of a roadblock with Docker's networking. I'm running both Apache and Squid on the same CentOS 7 (Atomic) host, each in their own container, publishing their relevant ports. The Apache server works fine, I can make requests over HTTP and HTTPS, and I can set the Squid server as my proxy and connect to websites fine, but I can't connect to the Apache server through the proxy. I can't even wget/curl the webserver from any other container on the host. I can ping the host from another container on both its IP on my local network and its IP on the docker bridged network, but any connection over port 80 gets "host is unreachable". At first, I thought it might be a firewall issue on the host; I tested this by disabling iptables, but no dice.

I'm very new to Docker and I don't know where else to poke to fix this; surely it must be possible?

ArcticZombie fucked around with this message at 21:30 on Jul 24, 2017

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

ArcticZombie posted:

I've been playing around with Docker to learn about proxy caching but I've hit a bit of a roadblock with Docker's networking. I'm running both Apache and Squid on the same CentOS 7 (Atomic) host, each in their own container, publishing their relevant ports. The Apache server works fine, I can make requests over HTTP and HTTPS, and I can set the Squid server as my proxy and connect to websites fine, but I can't connect to the Apache server through the proxy. I can't even wget/curl the webserver from any other container on the host. I can ping the host from another container on both its IP on my local network and its IP on the docker bridged network, but any connection over port 80 gets "host is unreachable". At first, I thought it might be a firewall issue on the host; I tested this by disabling iptables, but no dice.

I'm very new to Docker and I don't know where else to poke to fix this; surely it must be possible?
Are you using iptables or firewalld? Is SELinux permissive or enforcing?

Secx
Mar 1, 2003


Hippopotamus retardus
My team at work needs to VPN into a client's network to access a web application. Our internal security team doesn't want to install the VPN client and let us VPN directly into the client from our own desktops; they say they wouldn't be able to control traffic coming in through that tunnel. I'm not a networking expert, so I'll trust them on that. My security/network team proposed setting up a site-to-site VPN with the client, but without going into too many details, the client doesn't want to.

The temporary solution we came up with was to set up physical desktops in our DMZ. When someone from my team needs to connect via VPN to the client, they RDP into one of these physical desktops and initiate the VPN from there. It's hardly ideal: the number of desktops is limited, you can't have more than two users on the same host at the same time, and they sometimes need to be physically rebooted because Windows 7.

We use Citrix (I couldn't tell you which product specifically, but we call it the Cloud Desktop and it uses the Citrix Receiver) as the virtualization technology for end users at work, and I wanted to see if that would work. My Citrix guy said it wouldn't, based on his previous experience, but he humored me in our test environment anyway by installing the VPN software for two user accounts. Unfortunately, as soon as the second user opened the VPN connection, the first got disconnected.

So, all that to ask you virtualization experts: is there a virtualization solution that will allow multiple users to be concurrently logged onto the same host and allow each of them to independently create their own VPN tunnel?

ArcticZombie
Sep 15, 2010

anthonypants posted:

Are you using iptables or firewalld? Is SELinux permissive or enforcing?

I'm using iptables, but that doesn't matter because selinux was the culprit :downs:
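
For anyone who hits the same wall, this is roughly how to confirm it (commands from memory):

code:
sudo ausearch -m avc -ts recent   # list recent SELinux AVC denials
sudo setenforce 0                 # go permissive, for testing only
# If traffic flows now, it was SELinux; setenforce 1 puts it back.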

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Can you get them to put up an RD Gateway? Pretty much all RDP clients support it now, and it's a lot less convoluted than what you're trying to do. Only RDP goes out; nothing comes back in like a VPN. What your security people are saying only applies if they've disabled the software firewall. It's increasing your exposure a bit, but traffic should be hitting the default Windows Public firewall profile, which doesn't let anything through unless it's massively misconfigured.
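
For reference, pointing a client at a gateway is just a few lines in the .rdp file (hostnames made up, setting names from memory):

code:
full address:s:desktop01.corp.example.com
gatewayhostname:s:rdgw.example.com
gatewayusagemethod:i:1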

Secx posted:

So, all that to ask you virtualization experts: is there a virtualization solution that will allow multiple users to be concurrently logged onto the same host and allow each of them to independently create their own VPN tunnel?

I think you're going to have problems with the routing tables, since most tunnels modify them system-wide and they'll start to conflict. However, if you have a single host and some admin fires up the tunnel and leaves their session up, you might be able to get other user sessions on the same box to push their traffic through it.

BangersInMyKnickers fucked around with this message at 20:07 on Jul 24, 2017

Internet Explorer
Jun 1, 2005





Client-to-site VPNs are a scourge.

evol262
Nov 30, 2010
#!/usr/bin/perl

ArcticZombie posted:

I'm using iptables, but that doesn't matter because selinux was the culprit :downs:

Is container-selinux installed? It should be.

In the future, though, if you want to do networking between containers, you should either use docker-compose so they come up as a group, or use something completely overblown for your use case (like Calico or Flannel). Kubernetes will manage this for you.

Not allowing containers to talk to each other by default is a feature, not a bug :eng99: `docker network connect ...` will also work

And don't disable iptables -- docker manipulates iptables very heavily to make port mapping work, since each container gets a namespaced NIC
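
For the Apache/Squid question above, a user-defined network is about three commands. Untested sketch; my-squid-image is a placeholder:

code:
docker network create webproxy
docker run -d --name apache --network webproxy httpd
docker run -d --name squid --network webproxy -p 3128:3128 my-squid-image
# On a user-defined network Docker's embedded DNS resolves container names,
# so squid can reach the webserver at http://apache/ with no published port.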

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord

Secx posted:


We use Citrix (I couldn't tell you which product specifically, but we call it the Cloud Desktop and it uses the Citrix Receiver) as the virtualization technology for end users at work and I wanted to see if that would work. My Citrix guy said it wouldn't work based on his previous experience, but he humored me in our test environment anyway by installing the VPN software onto two user accounts. Unfortunately, as soon as the second user opened the VPN connection, the first got disconnected.

So, all that to ask you virtualization experts: is there a virtualization solution that will allow multiple users to be concurrently logged onto the same host and allow each of them to independently create their own VPN tunnel?

The problem with using XenDesktop here is that as soon as you establish the VPN, there's no route back to you as the client to send the Citrix traffic back through.

You either need a machine you can access the console of (through VMware or similar), a tool like GoToMyPC that tunnels over the internet, or desktops set up in the other company's network that you can connect to.

To be real clear here, your network/security guys are the ones blocking you, so it's really on them to work out a solution that works with the other company. I totally understand not wanting client VPN traffic on their network, but they should have a more workable solution. They're clearly OK with letting you access it, just not from the corporate internal network; why not push back and request being allowed to use guest WiFi or something like that?
