|
Beowulfs_Ghost posted:https://pub400.com/ has free basic OS/400 accounts if you want to scratch that itch. NICE!!
|
# ? Feb 2, 2022 12:34 |
|
|
# ? Jun 10, 2024 06:51 |
|
BlankSystemDaemon posted:It's too bad polarhome.com shut down, because that's exactly what it was for. Beowulfs_Ghost posted:https://pub400.com/ has free basic 0S/400 accounts if you want to scratch that itch. Heck, why did I have to post? Now something else to toy with! (thank you!!) Though I'm still working my way through the IBM Z Xplore z/OS challenge, hoping to qualify for their cheap "education" ZD&T system. Word is right now that it's USA only, but I'm hoping I can maybe fake it out by putting my friend's address or something. IBM just does NOT want to take my money to play with a mainframe on my own terms. I could do it anyway, but I'd rather do it properly, IF ONLY I COULD. There's some kind of stick-in-bike-spoke meme material here, but it's too early for me to put effort into a snarky post.
|
# ? Feb 2, 2022 13:20 |
|
Completely unrelated, I think I'm at the point where I'm ready to free myself of the burden of static IPs in a homelab environment, but that would really require a working DNS server with proper DHCP update support, and I'm not sure what my best strategy here should be. I'm happy to have my UDM-Pro do DHCP relay and set up a dedicated server for this, but I'd have to have it run on some equipment that is pretty much up forever, so I'm thinking maybe my Synology. I'd probably try to avoid hacking it into the UDM-Pro itself, since I'd rather not monkey with the thing responsible for my internet, but at the same time, if I outsource this to the Synology it becomes a second thing that HAS to exist for my network to function. Hmm.
|
# ? Feb 2, 2022 18:14 |
|
I just do DHCP reservations for things that need to stay fairly static, especially anything with local DNS records.
|
# ? Feb 2, 2022 18:19 |
|
Er, yeah, sorry, I should have added that I'm happy to work with static reservations, and honestly letting the UDM or Synology manage that through a friendly UI would be a bonus. I have internal zone resolution on the Synology either way, so that won't change. With my old EdgeRouter it was a bit of a config hack to get it to update DNS on the Synology, but it was doable. Not sure how to do that on the UDM-Pro and make it resilient against config changes, so that's kind of why I'm thinking I might just have the Synology do the whole thing if it integrates well enough. The super obvious solution would be to have a second router or firewall just for the homelab, but honestly I can't be bothered to start throwing hardware at the problem, and if I did it would probably be some kind of DDI appliance, which seems way overkill for a lab.
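For anyone wanting to wire this up by hand instead of through a UI: the "DHCP updates DNS" piece being described is standard RFC 2136 dynamic updates, and (as far as I know) the Synology DNS Server package is BIND underneath, so the usual dhcpd/BIND pairing should apply. A rough sketch, where the zone name, server IP, and key are all placeholders:

```
# dhcpd.conf fragment (the DHCP side)
ddns-update-style standard;
ddns-domainname "home.lab.";

key ddns-key {
    algorithm hmac-sha256;
    secret "REPLACE-WITH-BASE64-SECRET";   # generate with tsig-keygen
}

zone home.lab. {
    primary 192.168.1.10;    # the DNS server holding the zone
    key ddns-key;
}

# named.conf fragment (the BIND side)
zone "home.lab" {
    type master;
    file "home.lab.zone";
    allow-update { key ddns-key; };
};
```

With something like this in place, dhcpd sends a TSIG-signed A/PTR update for each lease it hands out, which is exactly the "reservations show up in DNS automatically" behaviour in question.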
|
# ? Feb 2, 2022 18:26 |
|
Yeah my homelab is sectioned up by VM firewalls that manage each subnet
|
# ? Feb 2, 2022 18:41 |
If you're into networking you might've heard about it, but it seems to me that the solution is PVLAN (the architecture, not the Cisco-specific implementation). I've got mine set up using FreeBSD's private bridge interfaces, but I'm pretty sure it can be done on other OSes too.
|
|
# ? Feb 2, 2022 20:56 |
|
Anyone here run a Kubernetes homelab? Curious what you use for your host OS. I've fallen down the black hole of looking at end-to-end solutions like Sidero and Talos to bring up bare metal clusters from the command line, and I think I might give that a whirl just for fun. It does rely on a little bit of PXE and DHCP monkeying but shouldn't be too complex. I'm trying to be very cognizant that in doing this I'm actively NOT learning about Kubernetes but instead veering into "provisioning bare metal resources" territory, when it's easy enough to just spin up a CentOS or Rocky Linux instance and call it a day. Something about minimal-surface Linux really appeals to me, though other alternatives like CoreOS seem to rely on either ugly hacks or control planes to provision properly. Just shower thoughts. Ultimately I'll spend a day or two on this, and if I'm struggling then I'm kind of making a promise to myself that I'll blow the whole thing away and just re-provision bare metal CentOS on my M93p's and call it a day. It wouldn't be "me" if it wasn't needlessly complicated and delaying something useful in favour of endless frustration.
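Since Talos came up: the "minimal surface" appeal is that the nodes carry no SSH or shell at all, just a declarative machine config you push over the API. A hypothetical fragment of what that config looks like (names, IPs, and disk are placeholders, and the exact schema may differ between Talos versions):

```yaml
# Sketch of a Talos controlplane machine config (not a complete file)
machine:
  type: controlplane
  install:
    disk: /dev/sda          # Talos wipes and installs to this disk
cluster:
  clusterName: homelab
  controlPlane:
    endpoint: https://10.0.0.10:6443   # placeholder control-plane address
```

In practice `talosctl gen config` emits complete files along these lines, and the PXE part is just serving the Talos kernel/initramfs so blank machines boot far enough to accept one.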
|
# ? Feb 8, 2022 12:45 |
|
Martytoof posted:Anyone here run a Kubernetes homelab? Curious what you use for your host OS. I've fallen down the black hole of looking at end to end solutions like sidero and talos to bring up bare metal clusters from the commandline and I think I might give that a whirl just for fun. Does rely on a little bit of PXE and DHCP monkeying but shouldn't be too complex. I have a couple of K8s homelabs running Ubuntu as the host OS.
|
# ? Feb 8, 2022 14:30 |
|
it’s happening, I’m finally getting an A400 or A500 on which to run MPE! hope it’s not horrifyingly loud
|
# ? Feb 9, 2022 10:26 |
Since ZFS raidz expansion is now ready to be tested, some of y'all's extra rack servers can surely be used to test it. eschaton posted:hope it’s not horrifyingly loud
|
|
# ? Feb 9, 2022 10:31 |
|
eschaton posted:hope it’s not horrifyingly loud It depends on your metric for loud. Is it loud compared to a B-1 taking off at full blast? Nah, you won't even notice. Is it loud compared to anything not in a Hilti catalog? Eh, not so much. If your homelab is in your household and it doesn't have any kind of soundproofing, the rest of your family might raise concerns about it.
|
# ? Feb 9, 2022 10:48 |
|
I need to run some cables from one network device to another directly above it without cables flopping loosely all over the place. Two considerations: 1. I didn't plan ahead for a 1U cable management blank plate. 2. It's not just immediate short runs from port to port vertically, so cabling will be going left and right, and since I won't be cutting them to size but buying patches in 6" and 12" lengths, there will have to be some slack to manage. What are my options to keep this clean without trying to move everything around to recoup 1U for a cable mgmt insert? I was kind of hoping there would be some kind of cool super-thin "slide between rack units" insert I could find that would at least let me ziptie cabling to it, but that doesn't seem to exist. I'm articulating it terribly, but think of something like this: only instead of pulling out of that specific rack synth, it would go between rack units, span the whole 19", and basically just be like... a bar that I could use to at least prevent cables from flopping freely around by using zipties. I'm not sure I described that any better now that I think about it. Anyway, other than my weird imaginary thing, do I have any options? At the very least, even if I can't find something, it'll still look better once I'm finished, given I'm replacing random cables I had lying around from the past 10 years of network hoarding with relatively to-size patches. e: Oh huh, just found this while googling to answer my own question -- it would probably do what I'm looking for: https://www.startech.com/en-ca/server-management/cmlb102 No way am I paying 99 canadabucks for a bent bar though; has to be something cheaper. But I guess "lacing bar" is the term I need to focus on. Hope I can find one that isn't laughably expensive for what it is, and that isn't too front-proud to foul the glass door on the cabinet.
e2: OK I think this is way more my speed: https://www.pennelcomonline.com/en/Penn-Elcom-51mm--2-Cable-Support-Tie-Bar-R1311-1A/m-5874.aspx 2" offset and like.. twelve bucks. I can buy enough to manage my whole rack for the price of one of the startech bars. some kinda jackal fucked around with this message at 13:22 on Feb 9, 2022 |
# ? Feb 9, 2022 13:07 |
|
It looks like the StarTech is a 10-pack, but yeah, unless you need ten, the others are definitely a better deal. I'd have resorted to 3D printing something.
|
# ? Feb 9, 2022 13:34 |
|
Oh, I didn't realize it was a 10-pack, which actually makes it a lot more reasonable. Guess that's what I get for googling before coffee. Doubly so since it literally says "10 pack" right there in the title. At any rate, I ordered five of the 2" angled bars -- found them for ten bucks locally -- which is probably more than I'll need, but it will be good to have a spare or two in case I want to lace something to a wall or something. Now to put my money where my mouth is and actually clean up this mess. I'm buying the cables last so I don't have to re-measure and re-order. The goal is to have everything in place and then figure out what lengths I need for each port. I'll probably opt for those ultra-slim Cat6a since I'm not buying a metric ton of them.
|
# ? Feb 9, 2022 13:38 |
|
BlankSystemDaemon posted:About that.. my metric is my rx2620, for which I still need to acquire the latest (noise-dampening) firmware I need to set up an LA34 as OPCOM for it sometime and see which is louder eschaton fucked around with this message at 02:09 on Feb 13, 2022 |
# ? Feb 9, 2022 20:52 |
|
OK, so vPro and AMT are basically iLO/iDRAC for commodity hardware. I threw all my M93p's into my enclosed rack and I've got a badass KVM-over-IP going with VNC. Somehow I went all this time without even knowing this existed. The only downside is that the machines have to be powered on for the AMT engine to respond, but I guess you can't have it all. I would have loved to use this functionality to just spin up little commodity nodes as my lab deploys containers and consumes resources, but I guess I'll have to make do without. But I mean, I was about to start investing in an Avocent KVM with dongles and stuff like that just so I wouldn't need to wander into the basement every time I wanted to monkey with a machine, so this is basically 99% of what I wanted. I still have two Optiplex 9020s to bring online as Proxmox, one of these days, for regular rear end VMs, and I forgot those are vPro too
|
# ? Feb 10, 2022 03:57 |
Martytoof posted:OK so vPro and AMT is basically iLO/iDRAC for commodity hardware Some of it can be probed with the IPMI protocol, just like the other OOB BMCs, but just like them it's not as good as a full IPMI implementation like the one used by Supermicro on the AST2x00 chips they use. The biggest problem with vPro, though, is that it's only available on a very tiny subset of CPUs in each generation, compared to the total number of CPU models per generation - so it's not really available on commodity hardware, as it's usually sold at a bit of a premium.
|
|
# ? Feb 10, 2022 13:08 |
|
Too true, though just a sampling of everything I have at home appears to support it. I'm generally super pleased with this discovery though I suspect this is one of those things everyone knew but me
|
# ? Feb 10, 2022 14:15 |
|
Martytoof posted:Only downside is that the machines have to be powered on for the AMT engine to respond but I guess you can’t have it all. I would have loved to used this functionality to just spin up little commodity nodes as my lab deploys containers and consumes resources but I guess I’ll have to make do without. You can set vPro to be reachable at S0 (ON) or S0/S3/S4-5 (ON + Standby + Hibernation). You can find that in the Intel ACU profile tool or on the web page under "Power Policies".
|
# ? Feb 10, 2022 16:41 |
|
Yeah, that threw me; it was enabled and responding to ICMP but MeshCommander wouldn't connect to any of the nodes. This morning it worked fine though, except for one machine which wasn't responding to ICMP at all until I booted it. So I think it's probably fine, and maybe some weird shenanigans that I'll just chalk up to mystery network stuff or old second-hand hardware. I got it in my head to maybe send a WoL packet too, but honestly I tried so many random things (WoL, plain ICMP, closing and reopening the switchport to see if it would jolt awake, etc.) that I'm not sure which of them managed to work. I'll keep an eye on it and see if it acts up again.
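On the WoL tangent: the magic packet is simple enough to hand-roll if you want to rule out your WoL tool as the variable. A minimal sketch (the MAC address below is a placeholder, not a real machine):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: six 0xFF bytes, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local segment (UDP port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))
```

Usage is just `send_wol("00:11:22:33:44:55")` from any box on the same L2 segment. Worth noting that AMT power policies are separate from the NIC's WoL setting, so one working doesn't imply the other.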
|
# ? Feb 10, 2022 17:58 |
|
vPro seems to react negatively if the PSU is throwing errors; our remedy was to remove power from the problem desktop, wait 30s, reattach power, and go to IPADDRESS:16992 to validate operation of the ME. MeshCommander loves to fail connections too, so we used VNC Plus or Intel Manageability Commander instead.
|
# ? Feb 10, 2022 19:17 |
|
My HP rp2430 (aka HP 3000/9000 A400) has arrived, along with its 650MHz PA-8700 CPU, its DAT drive, and MPE/iX 7.5 media (tape/CD/preinstalled HD). I also have all the necessary firmware patches, and I even found the rx2620 Itanium2 firmware I need in the process. Unfortunately the media is in a box in my mailbox, which I can't get to until Monday. Worse, the system had its GSP removed, so I can't actually connect to it and just netboot Linux or something in the meantime; a replacement GSP should arrive Tuesday. It's way quieter than the rx2620 but not as quiet as the Gen8 DL360p in normal use, closer to the AlphaServer DS10L.
|
# ? Feb 13, 2022 02:15 |
|
Martytoof posted:Heck why did I have to post. Now something else to toy with! Cheer up, IBM is now providing z/OS directly through IBM Cloud: https://newsroom.ibm.com/2022-02-14-IBM-Simplifies-Modernization-of-Mission-Critical-Applications-for-Hybrid-Cloud
|
# ? Feb 15, 2022 08:06 |
|
I now have my rp2430 booting and self-testing OK. Next step is to netboot Linux or BSD and image the hard disk with MPE/iX 7.5.
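The imaging step is conceptually just streaming the raw device across the network into a file. A minimal Python sketch of that idea; it copies ordinary files so it can run anywhere, but on the real box the source path would be the raw disk device (e.g. /dev/sda under a netbooted Linux), and the host/port are placeholders:

```python
import socket

CHUNK = 1 << 20  # read/stream in 1 MiB chunks

def serve_image(src_path: str, host: str, port: int) -> None:
    """Listen once and stream src_path to the first client that connects."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, open(src_path, "rb") as src:
            while chunk := src.read(CHUNK):
                conn.sendall(chunk)

def receive_image(dst_path: str, host: str, port: int) -> int:
    """Connect, write everything received to dst_path, return bytes written."""
    total = 0
    with socket.create_connection((host, port)) as conn, open(dst_path, "wb") as dst:
        while chunk := conn.recv(CHUNK):
            dst.write(chunk)
            total += len(chunk)
    return total
```

Functionally this is the classic `dd if=/dev/sda | nc` on one end and `nc -l > disk.img` on the other; the Python version just makes the chunking explicit.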
|
# ? Feb 15, 2022 10:02 |
|
Speaking of netbooting: I have this deep, deep, deep desire to buy a NeXTstation or Indigo2 or something and netboot it. After tinkering with a PXE self-provisioning Kube cluster I'm huge on the network boot bandwagon, and it's kind of an excuse to play with NeXT or IRIX again. I mean, at this point it would just be an excuse to own the hardware. I think I'll try to budget for a frivolous purchase, and honestly if I have to put my money somewhere I'd probably rather have the NeXTstation just because it feels like the more exotic of the two. Plus I have not-so-fond memories of IRIX. SlowBloke posted:Cheer up, IBM is now providing zOS directly thru IBM Cloud I've done everything humanly possible to distance myself from the IBM cloud, based on having been involved in running our critical application there for the past six or seven years. But best of luck to 'em. some kinda jackal fucked around with this message at 13:33 on Feb 15, 2022 |
# ? Feb 15, 2022 13:30 |
|
Martytoof posted:I've done everything humanly possible to distance myself from the IBM cloud based on having been involved in running our critical application there for the past six or seven years Congratulations, you are the second person I've ever heard mention an actual business critical use of IBM Cloud by someone other than IBM. The other is BleepingComputer lmao.
|
# ? Feb 17, 2022 05:01 |
|
Hmm, so I'm trying to come up with an NFS/iSCSI storage backbone based on a 10GbE Synology possibly, or maybe temporarily 2.5 or 5GbE via USB off my existing 920+ (yes, I know I won't achieve a full 5GbE). This would serve a Proxmox (or just KVM) box with a 10GbE NIC and a bunch of M93ps that right now are honestly fine at 1Gb because they just run local container stuff, but I'd love to eventually bump those to 2.5 if the mood hits me. Anyway, all this to ask -- from my research this isn't a case where one solution fits all. A 10GbE switch won't negotiate 2.5 or 5, and those devices would be stuck at 1. I gather there are some SFP+ modules that will present as 10Gb to the switch but negotiate 2.5 or 5 with the remote partner. Is that basically my only option here without splitting my network into specific 1/2.5/5 switching, plus 10G? In a mixed NBASE-T environment I'm still not sure how you'd uplink efficiently between a 1/10-only switch and a 2.5/5 one if they don't speak a common negotiation other than 1. I'm probably articulating this terribly, so maybe a more concrete question: in something like the cheap 5-port Mikrotik 10Gb switch, can I combine 2.5/5/10 SFP+ (if 2.5/5 even exist)? I know there is discussion on the ubnt forums of SFP+ modules which will present to the switch ASIC as 10Gb and negotiate NBASE-T on the remote end, so maybe that's the way to go for a mixed environment, assuming the switch will support these? Really the only thing I'd IMMEDIATELY want to escape from 1Gb land is the Proxmox server, since it only has a 240GB SSD and runs actual VMs which, even with thin provisioning, is getting a little cramped. I know the switch+SFP+whatever will be more expensive than just buying a larger SSD, so that is the practical solution here, but I'm just thinking out loud at this point.
I guess if I can get faster access to my Synology via USB/whatever, I could just do a direct point-to-point storage network with the Proxmox box by throwing one of those presumed 10G-to-NBASE-T SFP+ modules into its Mellanox card and calling it a day. And honestly treat this as me just playing around, not trying to solve a problem in a pragmatic easy way, because the standalone SSD is really the right answer here for cheap/fast/simple solutioning. some kinda jackal fucked around with this message at 14:18 on Feb 24, 2022 |
# ? Feb 24, 2022 14:15 |
|
Martytoof posted:Anyway all this to ask -- from my research this isn't a case where one solution fits all. a 10gbe switch won't negotiate 2.5 or 5 and those devices would be stuck at 1. Netgear XS508M seems to fit your needs. It’s pricey though. There might be other options, that was just a quick search.
|
# ? Feb 24, 2022 14:24 |
|
I've never used those SFP+s but that seems like a non-standards based solution that I'd avoid. Mgig is specifically copper. To uplink them efficiently you'd just get an mgig switch with 10gig uplinks I guess. Seems like it'd be more efficient power-wise to use one switch though. Just looked and I'm using every flavor of my mgig ports except 100 mbit. code:
KS fucked around with this message at 21:56 on Feb 24, 2022 |
# ? Feb 24, 2022 15:42 |
Martytoof posted:Hmm so I'm trying to come up with an NFS/iSCSI storage backbone based on a 10gbe synology possibly, or maybe temporarily 2.5 or 5gbe via USB off my existing 920+ (yes, I know I won't achieve a full 5gbe). This would serve a proxmox (or just kvm) box with a 10gbe NIC and a bunch of m93ps that right now are honestly fine at 1gb because they just run local container stuff but I'd love to eventually bump those to 2.5 if the mood hits me. Devices may claim they can do 5, 10 or even 20Gbps, but it's almost always the case that the PHY inside the device is only a USB 3.1 Gen1 device, which in reality makes getting above 3Gbps quite unrealistic given the signal overhead. I have one of these running with SwOS, with which I connect my servers and workstation that all have 10Gbps SFP+ - and with jumbo frames enabled, doing 4k native sector size iSCSI traffic works wonders - but that's probably in no small part because SFP+ also has an order of magnitude lower latency compared to RJ45-based Ethernet. That being said, you do need to keep in mind that SFP+ adapter compatibility is a bit of a mess, so the best solution is probably to use fibershop (fs.com nowadays) to ensure that a given module has compatibility with a NIC, and buy OM3 cable from them as well (unless you're doing 300+ meter runs, which means you need OS1 or OS2).
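Back-of-envelope on why "above 3Gbps is unrealistic" for a USB 3.1 Gen1 adapter: Gen1 signals at 5Gbps with 8b/10b line coding, so only 80% of the wire rate is payload before any USB or Ethernet protocol overhead even enters the picture. The ~15% protocol-overhead figure below is my own assumption for illustration, not a measured number:

```python
def usable_gbps(line_rate_gbps: float, coding_efficiency: float, protocol_overhead: float) -> float:
    """Payload rate left after line coding and protocol overhead."""
    return line_rate_gbps * coding_efficiency * (1.0 - protocol_overhead)

# USB 3.1 Gen1: 5 Gbps line rate, 8b/10b coding (80% efficient)
after_coding = usable_gbps(5.0, 0.8, 0.0)    # 4.0 Gbps before protocol overhead
realistic = usable_gbps(5.0, 0.8, 0.15)      # 3.4 Gbps with an assumed ~15% overhead
print(f"{after_coding:.1f} Gbps after coding, ~{realistic:.1f} Gbps realistic")
```

Which lands right around the ~3Gbps ballpark mentioned above for a "5GbE" USB dongle.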
|
|
# ? Feb 24, 2022 16:30 |
|
Oh for sure. I threw out 5 just because it exists, but frankly just getting 2.5x theoretical performance over 1gig is plenty for the machines that can’t take a 10gb card (kube nodes) Great food for thought gang.
|
# ? Feb 24, 2022 16:49 |
|
@martytoof QNAP makes small switches with access ports at 2.5 and uplinks at 10G, in managed (QSW-M2108 series) or unmanaged (QSW-2104 series) variants, if you need a temporary patch with 4-8 NBASE-T ports. Zyxel has a handful of unmanaged and web-smart switches if you prefer (fully managed ones are costly).
|
# ? Feb 24, 2022 18:19 |
|
Was completely unaware qnap made anything but storage. Awesome info, thanks.
|
# ? Feb 24, 2022 18:28 |
|
Martytoof posted:Hmm so I'm trying to come up with an NFS/iSCSI storage backbone based on a 10gbe synology possibly, or maybe temporarily 2.5 or 5gbe via USB off my existing 920+ (yes, I know I won't achieve a full 5gbe). This would serve a proxmox (or just kvm) box with a 10gbe NIC and a bunch of m93ps that right now are honestly fine at 1gb because they just run local container stuff but I'd love to eventually bump those to 2.5 if the mood hits me. I've daisy-chained a 10Gb network with some of the Intel dual 10Gb NICs. I had the NAS connect to the VM node on one port and the other to my desktop. All three also had a separate 1Gb NIC to the local switch and eventually the internet. Also, I just ran Proxmox as my NAS on a whitebox Xeon with ECC memory.
|
# ? Feb 24, 2022 21:33 |
|
I've made a mistake. It needs some more modern blades, but it was practically free.
|
# ? Feb 25, 2022 03:23 |
|
I think buying a used blade chassis is a rite of passage. Well done you!
|
# ? Feb 25, 2022 03:52 |
|
Agrikk posted:I think buying a used blade chassis is a rite of passage. Worth remembering I already have an M1000e and an MX7000 in my current rack. Probably only the C3000 will be added to the rack. It was a bulk deal, so I had to take all of them.
|
# ? Feb 25, 2022 04:19 |
I like the lone Nerf dart on the floor lol
|
|
# ? Feb 25, 2022 05:21 |
|
|
|
CommieGIR posted:Worth remembering I already have an M1000e and an MX7000 in my current rack Okay no. I’d forgotten. At this point you are just nuts.
|
# ? Feb 25, 2022 07:16 |