ChiralCondensate
Nov 13, 2007

what is that man doing to his colour palette?
Grimey Drawer

D. Ebdrup posted:

By the sounds of it, it's as simple as this one-liner.
pre:
zpool create tank raidz3 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4 \
&& camdd -i file=/dev/random,bs=1M,depth=`sysctl -n hw.ncpu` -o file=/tank/random.bin -m 1024G \
&& zpool scrub tank
Granted, Linux might have trouble with it because of its CSPRNG and its lack of camdd, which can operate with multiple queues (i.e. make use of FreeBSD's threaded CSPRNG, which can't be exhausted because it's based on Fortuna and doesn't block).
Then again, that seems like a problem for Linux.

well, I meant without having to use zfs and multiple devices, etc.


BlankSystemDaemon
Mar 13, 2009



H110Hawk posted:

Just use urandom instead. I also think they've improved random materially in the last decade to make exhaustion less likely.
But I'm on FreeBSD, there's no point in using a symlink.
If you're going to write a TB of random data, you're going to need something better than what's in Linux.
Also, so far as I know, Linux doesn't have good concurrent random generation on multiple threads. FreeBSD does.

ChiralCondensate posted:

well, I meant without having to use zfs and multiple devices, etc.
The reason ZFS is so good at triggering it is that it actually knows where on the disk the data is stored, unlike traditional filesystems.
There are probably other ways of doing it, but none that I've seen documented.

H110Hawk
Dec 28, 2006

D. Ebdrup posted:

But I'm on FreeBSD, there's no point in using a symlink.
If you're going to write a TB of random data, you're going to need something better than what's in Linux.
Also, so far as I know, Linux doesn't have good concurrent random generation on multiple threads. FreeBSD does.

I meant in Linux re: urandom.

I didn't realize that Linux has a global lock on (u)random so it can only be read by one process at a time. Reading Wikipedia around it, it looks like it's there to guarantee that no two processes get the same random data, and that applications needing more than a certain amount are generally considered mis-designed. (As our SMR beater script obviously is; it's an abuse script.) How does FreeBSD handle two processes potentially getting the same random data? Or does it assume the chance is too remote to bother with?

Really, /dev/zero should be sufficient for these tests, no? SMR doesn't do anything with the actual data, like compressing it, does it? The randomness of the data may actually be slowing the script down by making the kernel think harder, regardless of which kernel it is.
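If zeroes are good enough, plain dd is about the simplest drive-beater there is. A minimal sketch, assuming GNU dd; the mount point and the 1 TiB size are just placeholders:
pre:
# Write 1 TiB of zeroes with direct I/O so the page cache doesn't mask the writes
dd if=/dev/zero of=/mnt/test/zero.bin bs=1M count=1048576 oflag=direct status=progress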

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

xarph posted:

My zpool replace operation completed successfully sometime overnight. Pool is fine.

The story has hit Ars: https://arstechnica.com/gadgets/2020/04/caveat-emptor-smr-disks-are-being-submarined-into-unexpected-channels/

I saw that article go up. The drives do perform at the specified 180 MB/s write speed.

I have a theory that the drives use the whole disk as CMR cache, which would mean that if you write more than half the drive's free capacity in one go you would find out what the SMR performance hit is. A real issue for the tech review sites to look into (rather than an LTT video).

Yaoi Gagarin
Feb 20, 2014

On Linux, if you really, really want lots of random bytes with no contention with other processes, the way to do it is to use /dev/urandom to generate a seed value and feed that into your own PRNG. For a one-liner, you can use openssl to run AES over /dev/zero with the /dev/urandom bytes as the key.
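Roughly like this - a sketch only, assuming GNU head and a reasonably recent openssl; /tmp/random.bin just stands in for wherever the data should go:
pre:
# Pull a 256-bit seed from /dev/urandom once, then let AES-256-CTR stretch /dev/zero
# into a fast pseudorandom stream and keep the first 1 TiB of it
key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
openssl enc -aes-256-ctr -nosalt -pass pass:"$key" -in /dev/zero 2>/dev/null | head -c 1T > /tmp/random.bin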

BlankSystemDaemon
Mar 13, 2009



H110Hawk posted:

I meant in Linux re: urandom.

I didn't realize that Linux has a global lock on (u)random so it can only be read by one process at a time. Reading Wikipedia around it, it looks like it's there to guarantee that no two processes get the same random data, and that applications needing more than a certain amount are generally considered mis-designed. (As our SMR beater script obviously is; it's an abuse script.) How does FreeBSD handle two processes potentially getting the same random data? Or does it assume the chance is too remote to bother with?

Really, /dev/zero should be sufficient for these tests, no? SMR doesn't do anything with the actual data, like compressing it, does it? The randomness of the data may actually be slowing the script down by making the kernel think harder, regardless of which kernel it is.

The commit cited in the commit I linked posted:

I think the primary difference is that the specific sequence of AES keys will differ if READ_RANDOM_UIO is accessed concurrently (as the 2nd thread to take the mutex will no longer receive a key derived from rekeying the first thread). However, I believe the goals of rekeying AES are maintained: trivially, we continue to rekey every 1MB for the statistical property; and each consumer gets a forward-secret, independent AES key for their PRF.
ZFS typically has transparent lz4 compression enabled on every dataset, so writing zeroes to ZFS is going to write very little actual data to the disk. You'd have to turn that off, which is arguably trivial: it's just an extra 'zfs create -o compress=off tank/tub' and changing /tank/random.bin to /tank/tub/random.bin.
And you still don't get camdd, so there are no queue depths to utilize all the threads your system has, which means you're going to need to make the one-liner more complex with a foreach - except you're not on csh, so that won't work, and neither will the backtick syntax I used for getting the number of CPUs, so you'll have to deal with the undefined-variable behaviour that bash loves so much when invoking environment variables.
It still means you're going to be overloading Linux's CSPRNG while simultaneously offloading the Galois-field arithmetic to the applicable hardware in the CPU, as well as doing hardware offload for Fletcher checksums. What I'm saying is, the CPU is gonna be quite toasty.
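For reference, the ZFS side of that is only one extra step; a sketch with the same caveats as the original one-liner, and tank/tub is just an example dataset name:
pre:
# Hypothetical variant: put the test file on a dataset with compression off,
# so the random fill actually reaches the disk
zfs create -o compress=off tank/tub \
&& camdd -i file=/dev/random,bs=1M,depth=`sysctl -n hw.ncpu` -o file=/tank/tub/random.bin -m 1024G \
&& zpool scrub tank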

Relying on /dev/zero and writing to /tank/tub/null.bin assumes that what I remember about ZFS object ranges doesn't apply to strings of zeroes. Again, ZFS is pretty drat smart, so I believe it might even be capable of specifying a range of zeroes - in effect deduplicating it for free - instead of writing all of them, irrespective of the inline compression used.
My brain is too tired to go source spelunking now, though.

BlankSystemDaemon fucked around with this message at 22:52 on Apr 17, 2020

LordOfThePants
Sep 25, 2002

I’ve got 4, 4TB Reds, they are the EFAX version so likely SMR, sitting in boxes next to my server. I was going to set them up in RaidZ as an upgrade for my home server.

I should just return them to Amazon, right? I’m still within the return window.

H110Hawk
Dec 28, 2006

LordOfThePants posted:

I’ve got 4, 4TB Reds, they are the EFAX version so likely SMR, sitting in boxes next to my server. I was going to set them up in RaidZ as an upgrade for my home server.

I should just return them to Amazon, right? I’m still within the return window.

Yes. Select the not as described option.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

LordOfThePants posted:

I’ve got 4, 4TB Reds, they are the EFAX version so likely SMR, sitting in boxes next to my server. I was going to set them up in RaidZ as an upgrade for my home server.

I should just return them to Amazon, right? I’m still within the return window.

Well, what would you replace them with? Finding a solid 4TB non-SMR option right now seems challenging, unless the 4TB whites in Easystores turn out to be SMR as well (unverified, AFAIK).

Raymond T. Racing
Jun 11, 2019

DrDork posted:

Well, what would you replace them with? Finding a solid 4TB non-SMR option right now seems challenging, unless the 4TB whites in Easystores turn out to be SMR as well (unverified, AFAIK).

IIRC easystores below 8tb are not SATA drives, they're straight USB.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Buff Hardback posted:

IIRC easystores below 8tb are not SATA drives, they're straight USB.

Ah poo poo, I forgot about that. Welp. Sounds like a good impetus to bump up to 8TB drives, especially since the 8TB Easystores are usually cheaper than retail 6TB Reds, and only like $15 more than retail 4TB Reds.

Sucks for corporate users and those poor souls out there who don't have a source of shuckable externals.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
Agree on the 8TBs.

Raymond T. Racing
Jun 11, 2019

Did some double checking, looks like Easystores <=6TB are blues (so probably also SMR).

LordOfThePants
Sep 25, 2002

DrDork posted:

Ah poo poo, I forgot about that. Welp. Sounds like a good impetus to bump up to 8TB drives, especially since the 8TB Easystores are usually cheaper than retail 6TB Reds, and only like $15 more than retail 4TB Reds.

Sucks for corporate users and those poor souls out there who don't have a source of shuckable externals.

I’ll have to get a whole new server since my ThinkServer only supports 6TB drives. Not a huge deal, aside from the cost. It’ll definitely make migrating easier since the ThinkServer only supports 4 drives.

Sure glad I procrastinated on setting up that array.

IOwnCalculus
Apr 2, 2003





Does it really have a limit or is that just because 6TB was the max available when the server came out? The only controller limitation I know of is at 2TB, and that'd be for really, really old first-gen SATA/SAS controllers.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
Seagate Ironwolf would probably be my suggestion for that size of drive, otherwise 8TB+? Shucking WD externals all day long.

LordOfThePants
Sep 25, 2002

IOwnCalculus posted:

Does it really have a limit or is that just because 6TB was the max available when the server came out? The only controller limitation I know of is at 2TB, and that'd be for really, really old first-gen SATA/SAS controllers.

It’s a Lenovo TS140, so it’s quite old. You may be right about that being the maximum available drive at the time of release. I looked earlier this year and there’s just not a ton of people trying that because the hardware is so old at this point.

I’ve got to get 8TB drives anyway so I guess I’ll give it a shot. I like that form factor for the server so it’ll be nice if I can keep using it.

VulgarandStupid
Aug 5, 2003
I AM, AND ALWAYS WILL BE, UNFUCKABLE AND A TOTAL DISAPPOINTMENT TO EVERYONE. DAE WANNA CUM PLAY WITH ME!?




Anyone familiar with FreeNAS here?

I built a 2200G system, currently on a 256GB NVMe, using an Easystore 8TB for storage and running Plex Server on it.

Two questions:

Am I that bad off running the FreeNAS plug-in instead of doing the command-line install for Plex Server?

The next question: I also have a few 2.5" SSDs around, and I think my mom could use my NVMe drive for her laptop. I'm pretty sure Plex is running off the HDD, which means it could be slower. Is there a way I can throw two 2.5" drives in, run FreeNAS off the 40GB one, run Plex off a 256GB SSD, and obviously still use the HDD for storage?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

LordOfThePants posted:

It’s a Lenovo TS140, so it’s quite old. You may be right about that being the maximum available drive at the time of release. I looked earlier this year and there’s just not a ton of people trying that because the hardware is so old at this point.

I’ve got to get 8TB drives anyway so I guess I’ll give it a shot. I like that form factor for the server so it’ll be nice if I can keep using it.

While 6TB drives were the largest Lenovo would ship them with, there's pretty much no reason they wouldn't support arbitrarily large drives--they just weren't reasonable options when Lenovo bothered to validate drive options. There are plenty of people on storage forums discussing their TS140 builds full of 8TB drives, for example.

VulgarandStupid posted:

Anyone familiar with FreeNAS here?

Lot of us use it, yeah. To answer your questions:

(1) The primary downside to the Plex plug-in is that it is consistently behind (sometimes by months) the current release. If this doesn't bother you, then no, the plug-in works fine and makes everything super easy. Frankly Plex hasn't added much that really matters lately, and if you really get antsy you can always drop into the jail and manually update it from there, I believe. I never bothered because it Just Worked well enough that I didn't care to tinker.
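If you do get antsy, the manual update is roughly this - a sketch that assumes a recent FreeNAS with iocage jails and a jail actually named plex (yours may be called something else):
pre:
# Get a shell inside the plugin jail, then update the package from FreeBSD's repo
iocage console plex
# (inside the jail)
pkg update && pkg upgrade -y plexmediaserver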

(2) If you're itching for space you can keep Plex where it is in the jail and then do a symlink for the Metadata folder (that's the one that eats all the space) and have it actually live on the HDD. That said, having Plex run off the HDD probably isn't going to make all that much of a difference performance-wise if you've got 16GB or more RAM for your system and are only using it for one stream at a time. But, yeah, FreeNAS will let you happily install it on one SSD, have jails live on another SSD, and bulk media storage on a HDD. It don't give a poo poo. You'd just have to make mount points in the Plex jail for the HDD media folder, which is pretty trivial and can be done in the FreeNAS GUI with a couple of clicks.
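From the command line, the mount point is roughly this - a sketch where the jail name and the dataset path are made up; the GUI does the same thing:
pre:
# Null-mount the host's media dataset into the plex jail (the /media directory
# must already exist inside the jail)
iocage fstab -a plex /mnt/tank/media /media nullfs ro 0 0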

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
Looks like I got lucky with my 4TB Reds being EFRX

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

LordOfThePants posted:

It’s a Lenovo TS140, so it’s quite old. You may be right about that being the maximum available drive at the time of release. I looked earlier this year and there’s just not a ton of people trying that because the hardware is so old at this point.

I’ve got to get 8TB drives anyway so I guess I’ll give it a shot. I like that form factor for the server so it’ll be nice if I can keep using it.

I've got a TS430, which is the previous generation. It's got 8x 8TB white labels on the SAS card, perfectly happy. Ditto with my TS440 (which is the same gen as your TS140).

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
For the two sources of random: 'random' will lock and 'urandom' will never lock.

random will lock whenever it runs out of 'entropy'


quote:

Without the assistance of external hardware RNG's, Linux uses naturally occurring chaotic events on the local system to generate entropy for the kernel pool. These events, such as disk IO events, network packet arrival times, keyboard presses, and mouse movements, are the primary sources of entropy on many systems.


This is why applications that create certs, or password managers, will sometimes ask you to move your mouse or hit random keys.

urandom will also suck down this entropy, but when it runs out it will switch to pseudo-random, which will be 'enough' in the cases above. It will never fail to return random values, ever.
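On Linux you can watch that entropy estimate drain and refill; the proc node is standard, the number you get obviously varies:
pre:
# Kernel's current entropy estimate for the input pool, in bits
cat /proc/sys/kernel/random/entropy_avail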

I agree that a random disk fill is better for testing gzip compression than all zeroes, but gzip compression only works well with files like ASCII text, pictures, and anything that isn't just pure binary like applications. If you're testing gzip compression on random data... it's never going to do well...

BlankSystemDaemon
Mar 13, 2009



EVIL Gibson posted:

For the two sources of random: 'random' will lock and 'urandom' will never lock.

random will lock whenever it runs out of 'entropy'
/dev/random was never supposed to have a blocking mode for when it has "run out of entropy", because it's up to the sysop to supply the hardware, either as a daughterboard for the individual machine via safe(4), ubsec(4), or hifn(4), or as a network-available device for a fleet of machines in modern datacenters.
All of the BSDs had support for at least hifn(4), which is from 2000, i.e. as soon as the U.S. munitions export laws were relaxed - and before then, blocking wasn't a thing, because 4.2BSD didn't do it and neither does HP-UX.

Modern CSPRNGs (for example Fortuna, from 2003) are designed to never run out of entropy as long as they're well-seeded by at least one source of entropy, be it RDRAND from Intel or its equivalent from AMD, ARM, POWER, or RISC-V CPUs, as well as the hardware devices mentioned above.
That's why FreeBSD lets you supply harvest sources as you see fit at runtime. By default it uses a mix of software interrupts, hardware interrupts, net_ng (netgraph nodes, if used), and net_ether (i.e. all the Ethernet-level noise that's present on any network with more than one device, available thanks to BPF), as well as the small timing differences when you move your mouse or between key-strokes on the keyboard, plus device attach events and the information cached in /entropy, which is the last thing written during shutdown, in addition to the hardware PRNGs in the CPU I mentioned above.
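You can see which sources a given box is actually harvesting from (and prune the list) with sysctl; a minimal example, and the output naturally differs per machine:
pre:
# List the entropy sources currently enabled for harvesting
sysctl kern.random.harvest.mask_symbolic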

The idea that it should block exists because Linux didn't have rngd from rng-tools until 2004, and the mainline kernel didn't support the hardware needed to seed information from hardware devices into /dev/random until 2006, because Linus thought he was god's gift to programming - and he still hasn't fully conceded that he has no loving idea what he's talking about and just needs to trust domain experts.
You might also question why /dev/random needs to be seeded from userspace, but that's a mystery even I can't answer.

BlankSystemDaemon fucked around with this message at 21:29 on Apr 19, 2020

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

D. Ebdrup posted:

Linus thought he was god's gift to programming - and he still hasn't fully conceded that he has no loving idea what he's talking about and just needs to trust domain experts.

Why I never.

Shocked I am. Shocked!

Godzilla07
Oct 4, 2008

I'm interested in building a NAS but I have no idea what hardware to go with. I don't know whether to build a modern AMD system (R3 3200G, Athlon 3000G, R5 1600), a modern Intel system (Pentium G5400) or an older Intel system (Ivy Bridge Xeon E3s on LGA 1155.)

My main uses for this NAS would be Plex, running apps I don't want to keep on my laptop 24/7 (e.g. Sonarr/Radarr), and serving as a network backup hub. My Plex needs seem pretty light for now as I'm not seeing major CPU usage when using Plex internally and I haven't bothered to get my Plex server working for external users. I'd be running unRAID for this system. My number one priority would be noise. For reference I think I'd get a Synology DS418play if I were to go the prebuilt NAS route.

IOwnCalculus
Apr 2, 2003





I'd do the Xeon, but that's as much because if you get a server board you should also have IPMI, and that's a huge quality of life improvement if you can't stash a keyboard / monitor with it for those times poo poo goes sideways.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Godzilla07 posted:

I'm interested in building a NAS but I have no idea what hardware to go with. I don't know whether to build a modern AMD system (R3 3200G, Athlon 3000G, R5 1600), a modern Intel system (Pentium G5400) or an older Intel system (Ivy Bridge Xeon E3s on LGA 1155.)

My main uses for this NAS would be Plex, running apps I don't want to keep on my laptop 24/7 (e.g. Sonarr/Radarr), and serving as a network backup hub. My Plex needs seem pretty light for now as I'm not seeing major CPU usage when using Plex internally and I haven't bothered to get my Plex server working for external users. I'd be running unRAID for this system. My number one priority would be noise. For reference I think I'd get a Synology DS418play if I were to go the prebuilt NAS route.

The ASRock Rack X470D4U (or X570D4i for ITX) is really slick and has IPMI, so you can safely pick a Zen 2 CPU if you go that route. Get a 3600 or one of the new 3300X quad cores - video encoding is one of those tasks where Zen 2 is wildly better than the Zen 1/Zen+ chips you listed, around 60% faster per core.

There are DIY NAS chassis like the U-NAS or DS380, although they definitely tend a bit towards the chintzy. My U-NAS has had the lights burn out on 3 of the 8 drive trays after like two years of operation.

Or you can just get a Fractal Design R6/R7 and stack in a bunch of drives on the tray mounts.

Paul MaudDib fucked around with this message at 09:06 on Apr 20, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

I'd do the Xeon, but that's as much because if you get a server board you should also have IPMI, and that's a huge quality of life improvement if you can't stash a keyboard / monitor with it for those times poo poo goes sideways.

Seconded, but mostly for price. You can get an older Xeon (v3 or v4) for a song these days, and server boards with IPMI are readily available.

That said, if you don't like the idea of going used, the ASRock Rack Paul mentioned is a pretty nifty piece of kit that I'd totally have gone for myself if I hadn't stumbled upon a hilariously good eBay deal.

Unless you're thinking about transcoding multiple 4k streams at once, you probably don't really need to worry about performance much. I don't know how much Plex experience you have, but if you're just playing 4k stuff on a 4k TV, you're probably not even transcoding in the first place.

H110Hawk
Dec 28, 2006

EVIL Gibson posted:

urandom will also suck down this entropy, but when it runs out it will switch to pseudo-random

Both random and urandom are always pseudo-random regardless of platform. This is fundamental to this slap fight between the various Unixes.

Crunchy Black
Oct 24, 2017

by Athanatos
If you're throwing money at the problem, not rolling your own and not having IPMI is very dumb, IMO. ECC support doesn't hurt, either.

H110Hawk
Dec 28, 2006

Crunchy Black posted:

If you're throwing money at the problem, not rolling your own and not having IPMI is very dumb, IMO. ECC support doesn't hurt, either.

IPMI is very much a cost-benefit thing. Knowing what I know about those $100 cards* (not the Dell iDRAC $500 cards), I would choose to hook up a monitor and keyboard the one time every few years I needed to do something the booted OS couldn't do. Even if it's "free" with the chassis, I don't think I would set it up. If you buy a Dell/HP with the $500-level card, go hog wild - those things are tanks.

* Based on my professional sample size (20k+ of those things), those cards barely work from the factory and can't be relied on long term. Especially on the used market they have a near-100% failure rate by 5 years in.

IOwnCalculus
Apr 2, 2003





The Supermicro boards I use have it built in and I've never seen a reliability problem with it.

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

The Supermicro boards I use have it built in and I've never seen a reliability problem with it.

Maybe there is some fundamental difference between one that is fully integrated and a small daughterboard. Asus, Quanta, and Intel BMC cards are pure poo poo.

Also from a security standpoint, which I realize is inside your perimeter, you should treat them like any other Internet Of poo poo device and assume they are riddled with security issues - never expose them to the internet.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

The Supermicro boards I use have it built in and I've never seen a reliability problem with it.

Same. I have zero experience with add-in IPMI cards, but I've never had a single problem with a motherboard that had it built in. Which makes sense, considering what IPMI is designed for: if you build it in, it already has access to everything it needs and can Just Work. If it's an add-in card, it has to figure out wtf a given board might have decided to do when implementing various functions.

Crunchy Black
Oct 24, 2017

by Athanatos
yeah, what in the gently caress lol

What kind of off-the-shelf mobo these days has an optional IPMI that isn't iDRAC or iLO, etc.?

H110Hawk
Dec 28, 2006

DrDork posted:

Same. I have zero experience with add-in IPMI cards, but I've never had a single problem with a motherboard that had it built in. Which makes sense, considering what IPMI is designed for: if you build it in, it already has access to everything it needs and can Just Work. If it's an add-in card, it has to figure out wtf a given board might have decided to do when implementing various functions.

The add-ons are dedicated parts manufactured to spec by the motherboard manufacturer. In theory it's literally the same chip. For example, it can still be a tagged VLAN on the onboard Ethernet ports.

For example: https://www.intel.com/content/www/us/en/server-management/intel-remote-management-module.html / https://ark.intel.com/content/www/us/en/ark/products/91525/remote-management-module-4-lite-2-axxrmm4lite2.html these are plugged into something like this, from the HCL in Ark: https://ark.intel.com/content/www/us/en/ark/products/192606/intel-server-board-s2600bpbr.html

Maybe this is something Supermicro got fundamentally correct in their sea of crap server products. :v:

Crunchy Black posted:

yeah, what in the gently caress lol

What kind of off-the-shelf mobo these days has an optional IPMI that isn't iDRAC or iLO, etc.?

Anything that isn't Dell or HP. Those are also $500 a shot and built to way different standards. Dell also offers a $100 option but I haven't tested it out. (For all I know it's literally the same physical card and a license upgrade.) In the used market those iDRAC cards are great if you want to deal with IPMI; they even support HTML5 video consoles like the Intel cheapy modules do, when they work.

H110Hawk fucked around with this message at 18:40 on Apr 20, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

H110Hawk posted:

Maybe this is something Supermicro got fundamentally correct in their sea of crap server products. :v:

Presumably this. The Supermicro IPMI works well, works every time, and is included on-board for a lot of their servers. I find it kinda wild that Dell sells an iDRAC controller for twice the price of a lot of motherboards. Then again, I've installed my fair share of >$50,000 1U Dell servers wondering how the hell they get away with charging what they do for the hardware involved and questionable software support.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I would love some sort of universal PCIe IPMI card.

IOwnCalculus
Apr 2, 2003





Moey posted:

I would love some sort of universal PCIe IPMI card.

I once sat down and started working out what it would take to use a Raspberry Pi to control some relays to remotely power the box on and off, and to use it as a serial terminal.

I stopped when I realized the BOM was already close to the cost increase of just buying a proper SM board. I think just about all Supermicro boards past the X34xx / X55xx generations have IPMI standard.

I mean, it'd be cool as hell - there's no reason a PCIe card couldn't present a super-low-end GPU over the PCIe bus, a jumper to a USB header to provide keyboard/mouse/storage, and jumpers to the power/reset switches. You'd need to provide it with its own power, separate from the main power supply. But the number of people who would buy that would probably keep the price for such a device well above the cost increase of buying actual server parts.


priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
We use a simple network-capable relay board to hit the power/reset pins on motherboards (you just have to connect them together, i.e. ground the signal input) and it works well for that. Having a relay for VGA and other ports would be useful too sometimes, but mostly power cycling is all we need for power/reset testing.
