|
If you're not wedging VMs into Kubernetes containers running inside a nested 3-node Kubernetes cluster, you're behind the times. In seriousness, i3 NUCs are a terrible recommendation. Most home users won't experience a storage failure or compute failure, so one big box with a ton of memory and 8+ cores is cheaper, more efficient, and will just let you virtualize clusters.
|
# ? Aug 6, 2017 00:21 |
|
Speaking of that, I just got a used Lenovo E5-2650 v2 box with 48gb of RAM for $380 (like $350 after Ebay bucks) for my work. Will be using it to do a lot of physical to virtual conversions in the coming months. Exciting!
|
# ? Aug 6, 2017 01:24 |
|
I hate P2V conversions. Legacy crap you inherited that you can't migrate?
|
# ? Aug 6, 2017 01:35 |
|
Back to my original question, there are a ton of Xeon boards out there. Anything specific to look for/avoid?
|
# ? Aug 6, 2017 01:44 |
|
evol262 posted:In seriousness, i3 NUCs are a terrible recommendation. Most home users won't experience a storage failure or compute failure, so one big box with a ton of memory and 8+ cores is cheaper, more efficient, and will just let you virtualize clusters. I have a cluster of six N3700 NUCs on my desk that uses half the power of an incandescent light bulb, speaking of "more efficient."
|
# ? Aug 6, 2017 04:39 |
|
cr0y posted:Back to my original question, there is a ton of xeon boards out there. Anything specific to look for/avoid? A supported CPU that will fit your needs and (physical) case.
|
# ? Aug 6, 2017 04:45 |
|
Vulture Culture posted:I have a cluster of six N3700 NUCs on my desk that uses half the power of an incandescent light bulb, speaking of "more efficient." I meant in terms of cash. My lab is large enough that I use a bunch of NUCs (well, Gigabyte Brix) plus a MicroServer for shared storage, but I'm not gonna deny that it cost twice as much as a similar amount of cores/memory would in half as many chassis, even with desktop hardware in microATX cases instead of off-lease server gear.
|
# ? Aug 6, 2017 05:00 |
|
evol262 posted:If you're not wedging VMs into Kubernetes containers running inside a nested 3-node Kubernetes cluster, you're behind the times. Cause I couldn't sleep... If 8 cores includes hyperthreading with something like a regular i7, you can build for $500-600, more likely $800 though. If you're talking legit Xeons, expect to drop $500-$600 for just the mobo and CPU with 6 threads; a build with lots of RAM is gonna run $1200+. You can get that cost down using parts like engineering samples for the CPU, but it's not cheap to build a monster of a PC. NUCs aren't great pricewise either compared to some of the stuff you can get off Alibaba, but I don't have to deal with insane issues and can somewhat trust that the NUC is just gonna work. The base-level Celeron comes out to $260 per node with 4 cores and 8 gigs of RAM. An i3 NUC of the current gen with 16 gigs of RAM is $500 a node with 2 cores and 4 threads; you can also double the RAM to 32 gigs for another $120. Now that Xeon is gonna be a lot faster, but it's gonna cost $1200+ to get 12 threads and enough RAM. And unless you plan on hitting it hard, that cheap Celeron would be more than enough, and you can get to 12 cores for $800.
|
# ? Aug 6, 2017 11:20 |
|
Ryzen CPUs are really great bang for the buck right now.
|
# ? Aug 6, 2017 14:18 |
|
Internet Explorer posted:Ryzen CPUs are really great bang for the buck right now. Pity about RAM prices though. That was part of the reason I went with an older 8 core E5 Xeon.
|
# ? Aug 6, 2017 15:10 |
|
Moey posted:I hate P2V conversions. Nothing major. The hardest part will be our SQL server, which I've wanted to virtualize for a while. Best of all the new hardware will let me decommission a really old X3363-based Dell which has been puttering along for like a decade and can only handle like 3 VMs on its own.
|
# ? Aug 6, 2017 15:20 |
|
bobfather posted:Nothing major. The hardest part will be our SQL server, which I've wanted to virtualize for a while. Best of all the new hardware will let me decommission a really old X3363-based Dell which has been puttering along for like a decade and can only handle like 3 VMs on its own. What OS is it on? Any reason to not build a new SQL VM and just migrate databases?
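If it comes to that, the usual shape of a backup/restore migration can be scripted; a rough sketch that just generates the T-SQL (database names, paths, and logical file names below are made up — check the real logical names with RESTORE FILELISTONLY first, and run the output via sqlcmd or SSMS):

```python
# Sketch: generate BACKUP/RESTORE statements for moving databases from the
# old physical box to a new SQL VM. All names and paths are placeholders.

def backup_stmt(db, backup_dir=r"D:\migration"):
    """T-SQL for a full, copy-only backup (COPY_ONLY avoids disturbing
    the existing backup chain on the production box)."""
    return (f"BACKUP DATABASE [{db}] "
            f"TO DISK = N'{backup_dir}\\{db}.bak' "
            f"WITH COPY_ONLY, COMPRESSION, CHECKSUM;")

def restore_stmt(db, backup_dir=r"D:\migration", data_dir=r"E:\SQLData"):
    """T-SQL to restore that backup on the new VM, relocating the files.
    Assumes logical file names match the database name -- verify with
    RESTORE FILELISTONLY, since older databases often differ."""
    return (f"RESTORE DATABASE [{db}] "
            f"FROM DISK = N'{backup_dir}\\{db}.bak' "
            f"WITH MOVE N'{db}' TO N'{data_dir}\\{db}.mdf', "
            f"MOVE N'{db}_log' TO N'{data_dir}\\{db}_log.ldf', "
            f"RECOVERY;")

for db in ("Payroll", "Inventory"):   # hypothetical database names
    print(backup_stmt(db))
    print(restore_stmt(db))
```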
|
# ? Aug 6, 2017 15:29 |
|
Moey posted:What OS is it on? Any reason to not build a new SQL VM and just migrate databases? Unfortunately, Windows Server 2012 R2. As for tooling around with SQL databases, I'm not knowledgeable enough to know whether we could easily do that. bobfather fucked around with this message at 15:45 on Aug 6, 2017 |
# ? Aug 6, 2017 15:39 |
|
Punkbob posted:Cause I couldn't sleep... There's no good reason to get Xeons for a lab, IMO. i5s were fine. Last time I built compute was Haswell, but 4 cores and 64gb of memory plus a cheap, tiny SSD would have run me about $600/node for everything. Why build a "monster" instead of smaller servers on desktop hardware? Shoving that much memory in a NUC gets very expensive very fast, too. I do hit my lab very hard, and it would be overkill for a lot of people, but a Celeron (or i3) with 16gb or 32gb wouldn't be enough...
|
# ? Aug 6, 2017 15:44 |
|
Moey posted:I hate P2V conversions. What is your least hated P2V method? Is disk2vhd still what's used?
|
# ? Aug 6, 2017 20:30 |
|
underlig posted:What is your least hated p2v method? Is disk2vhd still what's used? I've always worked in VMware shops, so when I have had to do them, I just use the vCenter Converter. And then the old rear end Cold Clone 4.x disk for Windows 2000 stuff.
|
# ? Aug 6, 2017 20:41 |
|
I'm going to throw out there that if you're looking for a 10+ core Xeon, don't make the mistake I did and order it new; that poo poo is hella cheap on eBay if you can deal with shipping from HK or China. Intel Xeon E5-2630 v4 ES QHVK 2.1GHz 25MB 10-core/20-thread 14nm 85W CPU - $189 on eBay.
|
# ? Aug 6, 2017 21:25 |
|
ILikeVoltron posted:I'm going to throw out there that if you're looking for a 10+ core Xeon, don't make the mistake I did and order it new; that poo poo is hella cheap on eBay if you can deal with shipping from HK or China. Usually they are engineering samples if they're dirt-cheap. At least the ones I was looking at from HK and the like. Don't know if it matters, just saying.
|
# ? Aug 6, 2017 23:02 |
|
For home use I wouldn't care about an ES as long as your mobo takes it. Socket 2011 (non-v3) non-ES chips are dirt cheap.
|
# ? Aug 6, 2017 23:25 |
|
I've been happy with my Supermicro SYS-5028D-TN4T with the Xeon D-1541 8-core processor for my home lab, but I valued quiet and low power (and ability to eventually upgrade to 128GB of RAM) over price.
|
# ? Aug 7, 2017 02:55 |
|
At work I've just found some hosts with 6 years of uptime... Not sure if I should be amazed or horrified.
|
# ? Aug 8, 2017 19:59 |
|
Little of column A, little of column B.
|
# ? Aug 8, 2017 20:04 |
|
I'm at a vmug today and Scott Lowe is here, anybody have some questions for him they'd like answered?
|
# ? Aug 10, 2017 15:45 |
|
evol262 posted:There's no good reason to get Xeons for a lab, IMO. Craigslist is your friend. I run an R710 with 4 x 2.5 GHz quad-core Xeons and 288GB DDR3 for my lab; it doubles as my NAS and iSCSI host with an MD1000. Altogether, it really only consumes a little more than a fully built desktop while also letting me spin up Boot2Docker or anything I need without adding more machines to the power bill. But I also use it for client work extensively within jails, so it largely pays for itself.
|
# ? Aug 10, 2017 16:08 |
|
wibble posted:At work I've just found some hosts with 6 years up time... Reboot them and report back with results.
|
# ? Aug 10, 2017 17:21 |
|
CommieGIR posted:Craigslist is your friend. I run an R710 with 4 x 2.5 GHz quad-core Xeons and 288GB DDR3 for my lab. I said there's no good reason for Xeons mostly because of noise, but I also don't think you've compared the power consumption on that to NUCs or something else small. "Fully built desktops" usually have excessively large PSUs and probably GPUs. People don't dump hardware on eBay or Craigslist for next to nothing because it's efficient compared to new builds. Unless you have an attached garage you can shove them in, rackmount servers are great for price and terrible for everything else lab-related (power, heat, noise, form factor). Cost isn't the reason I got rid of all of my rackmount servers.
|
# ? Aug 10, 2017 18:51 |
|
evol262 posted:I said there's no good reason for Xeons mostly because of noise, but I also don't think you've compared the power consumption on that to NUCs or something else small. "Fully built desktops" usually have excessively large PSUs and probably GPUs. Mine is using about 215 watts + ~75 for the MD1000 and makes very little noise, at least compared to my old R905.
|
# ? Aug 10, 2017 19:32 |
|
CommieGIR posted:Mine is using about 215 watts + ~75 for the MD1000 and makes very little noise, at least compared to my old R905. It's all relative, really. And if you like it, that's important. i5 haswell NUCs are about 30w and silent at full load. Different strokes.
|
# ? Aug 10, 2017 20:01 |
|
evol262 posted:It's all relative, really. And if you like it, that's important. i5 haswell NUCs are about 30w and silent at full load. Different strokes. True. I have a thing for full size servers too.
|
# ? Aug 10, 2017 21:07 |
|
fordan posted:I've been happy with my Supermicro SYS-5028D-TN4T with the Xeon D-1541 8-core processor for my home lab, but I valued quiet and low power (and ability to eventually upgrade to 128GB of RAM) over price. My favorite home server. Add an NVMe stick, run ESXi, profit.
|
# ? Aug 10, 2017 23:59 |
|
What are my options for a 1U server focused on storage? I've got a Supermicro 4-node 2U box that's loaded but doesn't have onboard RAID, and I played with vSAN a bit but the queue depth is 64, which makes baby jesus cry. I'd like to throw something together to do iSCSI for the four nodes over a 10Gbit backbone. It's looking like a 10-bay R620 is the leader right now; I can build it out with modest specs and 10gig for around $800 or so. I also thought of trying out Storage Spaces Direct on the nodes themselves, but that requires Datacenter licensing, which I can't get through the Action Pack. With this I might try loading regular Storage Spaces on it and using tiering between the SSDs and spinning platters. If I go the FreeNAS route, I'll have to sacrifice four bays for a mirrored ZIL and L2ARC if I don't want sync writes to be slow as molasses, from what I've read. I would try Nutanix CE again, but they don't have support for some of the virtual appliances I need to run. Maybe it's a better idea to go that route again and just get a couple simple 1U boxes for the stuff that requires ESX?
|
# ? Aug 11, 2017 01:11 |
|
Cisco C220?
|
# ? Aug 11, 2017 01:16 |
|
None of those solutions actually provide full redundancy except storage spaces and vSAN, which you've already said won't work. Does availability actually matter for this data?
|
# ? Aug 11, 2017 01:30 |
|
Availability doesn't matter in the sense that it's not production, but it's a lab environment that I use to test/mock things up so if I suddenly lose a chunk of it while I'm travelling it becomes a pain in the rear end. Storage spaces should work, storage spaces direct is a specific flavor of SS for server 2016 that has some hyperconverged goodies thrown in.
|
# ? Aug 11, 2017 18:56 |
|
Just take good nightly images of your VMs and back them up on a separate drive. That's what I do with my current lab, which has a lot of client-copied images on it for testing/diagnostics; while it is a lab environment, I still need to maintain data integrity, so I know how you feel.
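The nightly-copy part can be as dumb as a small script; a Python sketch (paths are placeholders, not anything battle-tested):

```python
# Copy exported VM images to a dated folder on the backup drive each night
# and prune all but the newest few runs. Source/destination paths are
# placeholders -- point them at your export dir and separate drive.
import shutil
from datetime import date
from pathlib import Path

def backup_images(src: Path, dest_root: Path, keep: int = 7) -> Path:
    """Copy the src tree into dest_root/<YYYY-MM-DD>/ and keep only the
    newest `keep` runs. ISO date-named dirs sort chronologically."""
    target = dest_root / date.today().isoformat()
    shutil.copytree(src, target, dirs_exist_ok=True)  # re-runnable same day
    runs = sorted(p for p in dest_root.iterdir() if p.is_dir())
    for old in runs[:-keep]:  # everything except the newest `keep`
        shutil.rmtree(old)
    return target
```

Schedule it with cron or Task Scheduler and you've at least got yesterday's images when something eats the lab.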
|
# ? Aug 11, 2017 19:02 |
|
Yeah I'll end up shifting to an agent based backup either way. Veeam's NFR is great but only comes with two socket licenses. I should be able to use their free agent and point it to the B&R repository, I just lose the vsphere integration which isn't really a huge issue.
|
# ? Aug 11, 2017 19:17 |
|
Anyone doing any VMware automation not in PowerShell? The VMware SDK documentation is painfully obtuse, and picking any bindings outside of PS is very difficult since there are no books or tutorials that I can find for Python or other languages, except Java, which I don't care to learn.
|
# ? Aug 14, 2017 07:04 |
|
nicky_glasses posted:Anyone doing any VMware automation not in PowerShell? So one of our customers uses vRealize Automation, and they have an in-house developer who writes the code that drives most of their workflows. They have SDK support, which means he has access to the VMware SDK developers to ask questions and get assistance. He asked them whether there was an API call that could pull all VMs owned by a certain business unit and got silence back, so eventually he just wrote his own in Java and sent it over, asking if it looked okay. They said "wow, that's great, we're going to go ahead and use that!" That's my story about VMware SDKs and the people who develop them. I don't actually have a useful answer to your question.
|
# ? Aug 14, 2017 22:38 |
|
nicky_glasses posted:Anyone doing any VMware automation not in PowerShell? We use Python at work to create new virtual machines, but it has mostly been done by my coworkers, so I don't really know it that well.
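If you'd rather avoid the SOAP SDK entirely, vCenter 6.5+ also exposes a REST API you can hit with nothing but the standard library. A rough sketch (hostname and credentials are placeholders, no error handling, and treat the endpoint paths as a starting point rather than a tested client):

```python
# Minimal stdlib-only client for the vSphere Automation REST API
# (vCenter 6.5+). BASE and the credentials below are placeholders.
import base64
import json
import ssl
import urllib.request

BASE = "https://vcenter.example.com"      # placeholder vCenter
CTX = ssl._create_unverified_context()    # lab-only: skip cert checks

def login(user, password):
    """POST /rest/com/vmware/cis/session with basic auth; returns a session token."""
    req = urllib.request.Request(f"{BASE}/rest/com/vmware/cis/session", method="POST")
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    with urllib.request.urlopen(req, context=CTX) as resp:
        return json.load(resp)["value"]

def list_vms(token):
    """GET /rest/vcenter/vm; returns the raw list of VM summary dicts."""
    req = urllib.request.Request(f"{BASE}/rest/vcenter/vm")
    req.add_header("vmware-api-session-id", token)
    with urllib.request.urlopen(req, context=CTX) as resp:
        return json.load(resp)["value"]

def powered_on(vms):
    """Pure helper: sorted names of VMs whose power_state says they're running."""
    return sorted(vm["name"] for vm in vms if vm.get("power_state") == "POWERED_ON")

# Usage against a real vCenter:
#   token = login("administrator@vsphere.local", "hunter2")
#   print(powered_on(list_vms(token)))
```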
|
# ? Aug 15, 2017 21:58 |
|
|
nicky_glasses posted:Anyone doing any VMware automation not in PowerShell? Their APIs and docs are mostly poo poo and remind me of working with Active Directory over COM.
|
# ? Aug 16, 2017 03:58 |