H110Hawk
Dec 28, 2006

Harik posted:

Quick followup - in trying to get the X8SI6-F running I hit a snag. The on-board SATA does not work in BIOS, at all. I had to turn on the BIOS option for the SAS and put my boot media on there in order to even boot.

It must have worked at some point, as it came with one of those 16gb high-endurance SATA-socket SSDs installed in it, but that wasn't recognized at boot either.

It's booted, but it takes forever to start running the SAS 2008 ROM, I'd prefer to disable that and let it come up as the server starts.

(Also, I'm out 6 SATA ports if I can't figure this out)

At least updating the IPMI and cross-flashing the SAS to IT mode went smoothly.

Do a physical CMOS reset from the motherboard (make sure to read the whole procedure, as on some boards you keep the battery installed and on some you remove it), remove all devices, and see if it shows up. From there, start adding devices back in, starting with the SATADOM or a regular hard drive from the shelf.

You might also have a bad battery. Those things cause the strangest errors when the voltage is low. It's a dollar for a new one, and you should probably replace it on any used server right off the bat. Hardware stores sell them.

MonkeyFit
May 13, 2009
I just bought 4 of the 8TB easy stores. All white label drives at 5400 rpm but with 256MB cache. Are they worth keeping or should I return them?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I believe the white labels are the same as the WD Reds, literally just with a different label, with the one exception of having a weird 3.3V reset pin issue that takes a piece of tape to fix.

e; or a molex connector, if you want to go that route.

DrDork fucked around with this message at 03:40 on Aug 7, 2018

MonkeyFit
May 13, 2009
That's only on older systems though right? I have a new motherboard and processor showing up on Wednesday.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I'll admit to not having looked into it extensively, since I didn't end up with any white labels, but I believe the issue is that any power detected on a particular 3.3V pin gets treated as a hard reset command, and most normal SATA power cables supply that pin continuously even though it's not usually used for anything. Meanwhile, molex -> SATA adapters apparently don't power that pin, again because it's not typically used for anything, thus avoiding the issue.

I mean, either way, you can just set everything up and if the drives don't seem to want to function, immediately assume that the 3.3V thing is the issue, tape it off, and try again.

MonkeyFit
May 13, 2009
Yep. Well getting this server together took less time than I thought.

Sheep
Jul 24, 2003

MonkeyFit posted:

That's only on older systems though right? I have a new motherboard and processor showing up on Wednesday.

I've got them running in my ProLiant N54L, which is from like 2010 or something, so I think it really is just luck of the draw as to whether you run into issues or not.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

H2SO4 posted:

Have you tried booting with that SATADOM unplugged? I've had misbehaving drives make an HBA unhappy at boot before.

Yes, it's a weirdly shaped little bugger (flat, with a SATA connector on the face), so when it's plugged in it blocks most of the SATA bank. I had to pull it to use the SATA ports, although I could re-install it so it only takes up 2.

The SATA does work after boot; I was able to access it and found it was a pull from an old (retired Jan 2015) Solaris-based ReadyNAS. Reading a random ZFS partition was interesting, but ultimately it only had a single old 1TB disk in it to test that everything still worked. It did get me to try out zfsonlinux, at least.

H110Hawk posted:

Do a physical CMOS reset from the motherboard (make sure to read the whole procedure, as on some boards you keep the battery installed and on some you remove it), remove all devices, and see if it shows up. From there, start adding devices back in, starting with the SATADOM or a regular hard drive from the shelf.

You might also have a bad battery. Those things cause the strangest errors when the voltage is low. It's a dollar for a new one, and you should probably replace it on any used server right off the bat. Hardware stores sell them.

Unlikely it's the battery; for as long as I can remember, CMOS throws a checksum error when the battery lets the data corrupt. It's in service now, but I'll try a full CMOS reset next time I work on it. Can't hang the SSDs off the onboard SATA anyway because those ports are only 3Gb/s, so it'll just be for the 16GB flash it came with as boot/root. Pretty sure I have a spare battery too; I'll throw it in just because it's easier to do that now than after it dies.

Also, I'm 99% sure it's running RAM the CPU is supposed to be physically incapable of using. No idea how that works, but both Intel and Supermicro insist it can't take 4GB rank-1 DIMMs, yet that's what the datasheet says these sticks are.

Am I reading this wrong? Because 4GB/rank would be amazing; I could put a cubic fuckton of RAM in this so cheaply.

E: IPMI is such a godsend; the server is now in its own separate room at the other end of the house. It's in the corner of our oversized walk-in closet, where the network/security drops all terminate. Balancing a monitor/keyboard in there sucked.

Harik fucked around with this message at 16:44 on Aug 7, 2018

H110Hawk
Dec 28, 2006

Harik posted:

Unlikely it's the battery; for as long as I can remember, CMOS throws a checksum error when the battery lets the data corrupt.

:allears:

While that is the failure mode that is written in the manual, and perhaps what you have experienced, I can tell you from way more than anecdotal experience that those batteries cause the weirdest errors. This is across several makes (SuperMicro, ASUS, Quanta, Dell off the top of my head), probably a hundred models, and conservatively 20k servers. I'm glad you have it in service now; if you see strange errors like those again, I would strongly suggest tossing in a new battery the next time you're at the hardware store. It's cheap insurance.

IOwnCalculus
Apr 2, 2003





Harik posted:

E: IPMI is such a godsend; the server is now in its own separate room at the other end of the house. It's in the corner of our oversized walk-in closet, where the network/security drops all terminate. Balancing a monitor/keyboard in there sucked.

I will never again build a home server without it, and I sincerely wish I had a way to securely hook up the IPMI on my server at work (because :lol: at exposing THAT to the internet).

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
What IPMI tools do you folks use? I'm looking at using it more at my workplace. Our servers are all lab-environment machines that get beat on pretty hard, and it isn't a production environment, but it would be useful for console debug, remote rebooting, etc.

IOwnCalculus
Apr 2, 2003





Depending on the box I'm using I'll either use the Supermicro IPMITool local client, or I'll just connect to it using a web browser. I have one server in my work lab that refuses to load the KVM console in the local client but works fine via browser :iiam:

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

I will never again build a home server without it, and I sincerely wish I had a way to securely hook up the IPMI on my server at work (because :lol: at exposing THAT to the internet).

Juniper makes super cheap VPN units aimed at people with a "home office", if you can convince your boss of it or whatever. Other companies make even cheaper ones, but there isn't much room to go down:

https://www.amazon.com/Juniper-Networks-SRX110H-VA-Services-Gateway/dp/B006NHPHPC
https://www.amazon.com/Juniper-Services-Gateway-Ethernet-SRX210HE2-POE/dp/B00FOWKJZU/

Grab something used/grey market.


priznat posted:

What IPMI tools do you folks use? I'm looking at using it more at my workplace. Our servers are all lab-environment machines that get beat on pretty hard, and it isn't a production environment, but it would be useful for console debug, remote rebooting, etc.

Despair, and a VM with the old and busted Java they all require. Otherwise, OpenIPMI tools on Linux for the basics (`ipmitool chassis power on`, etc.).
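If it helps, the non-Java basics over the LAN interface look roughly like this; the BMC address and credentials below are placeholders, and exact feature support (serial-over-LAN especially) varies by board:

```bash
# Power control and status for a remote BMC (IPMI-over-LAN must be enabled on it).
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P ADMIN chassis power status
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P ADMIN chassis power cycle

# Hardware event log and sensor readings (temps, fans, voltages).
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P ADMIN sel list
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P ADMIN sensor list

# Text console without the Java KVM, if the board exposes serial-over-LAN.
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P ADMIN sol activate
```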

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

Depending on the box I'm using I'll either use the Supermicro IPMITool local client, or I'll just connect to it using a web browser. I have one server in my work lab that refuses to load the KVM console in the local client but works fine via browser :iiam:

Same, except all my stuff works more or less fine.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

IOwnCalculus posted:

Depending on the box I'm using I'll either use the Supermicro IPMITool local client, or I'll just connect to it using a web browser. I have one server in my work lab that refuses to load the KVM console in the local client but works fine via browser :iiam:

Try updating the IPMI firmware (if the lab server is something you can risk doing a firmware update on). The KVM console popped up and insta-closed on this board until I did that.

Surprised Java doesn't explode at the self-signed cert the JWS file points at. I had to add security exceptions and got a big popup on the web client whining about it.

H110Hawk posted:

:allears:

While that is the failure mode that is written in the manual, and perhaps what you have experienced, I can tell you from way more than anecdotal experience that those batteries cause the weirdest errors. This is across several makes (SuperMicro, ASUS, Quanta, Dell off the top of my head), probably a hundred models, and conservatively 20k servers.

Interesting. I've never seen it fail in any other way than a checksum error. Either way, I'll have downtime to add more RAM when it arrives, so I'll put a new battery in then.

fatman1683
Jan 8, 2004
.
I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well:

I've been tasked with setting up a sort of searchable document library, mostly for PDFs but possibly some other formats too. I've researched a bunch of document management systems, but I don't need 90% of the functionality they offer, since I'm not doing change tracking or access control or anything like that.

Can anyone recommend a product that will let me set up a catalog of documents with full-text and metadata search, that I can make accessible over a simple web page? Can be on-premise or hosted, free or paid. I really don't want to build something myself at this point.

H110Hawk
Dec 28, 2006

fatman1683 posted:

I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well:

I've been tasked with setting up a sort of searchable document library, mostly for PDFs but possibly some other formats too. I've researched a bunch of document management systems, but I don't need 90% of the functionality they offer, since I'm not doing change tracking or access control or anything like that.

Can anyone recommend a product that will let me set up a catalog of documents with full-text and metadata search, that I can make accessible over a simple web page? Can be on-premise or hosted, free or paid. I really don't want to build something myself at this point.

Google tells me, though I have not tried it, that an ELK stack can index PDF documents. I would spin it up in AWS as a demo.

Comedy option: a Mac with Screen Sharing. Log in, Cmd-Space, search.
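If anyone wants to try the ELK route, here's a rough sketch using Elasticsearch's ingest-attachment plugin (it wraps Apache Tika for PDF/Office text extraction); the index, pipeline, and file names are made up, and this assumes a stock single-node install on localhost:

```bash
# One-time: install the attachment processor, then restart Elasticsearch.
bin/elasticsearch-plugin install ingest-attachment

# Ingest pipeline that extracts text and metadata from a base64-encoded "data" field.
curl -XPUT 'localhost:9200/_ingest/pipeline/docs' -H 'Content-Type: application/json' -d '
{ "processors": [ { "attachment": { "field": "data", "indexed_chars": -1 } } ] }'

# Index a PDF through that pipeline.
curl -XPUT 'localhost:9200/library/_doc/1?pipeline=docs' -H 'Content-Type: application/json' -d "
{ \"filename\": \"manual.pdf\", \"data\": \"$(base64 -w0 manual.pdf)\" }"

# Full-text search over the extracted text.
curl 'localhost:9200/library/_search?q=attachment.content:warranty'
```

A thin web page hitting that _search endpoint would cover the "accessible over a simple web page" part.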

SlowBloke
Aug 14, 2017

fatman1683 posted:

I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well:

I've been tasked with setting up a sort of searchable document library, mostly for PDFs but possibly some other formats too. I've researched a bunch of document management systems, but I don't need 90% of the functionality they offer, since I'm not doing change tracking or access control or anything like that.

Can anyone recommend a product that will let me set up a catalog of documents with full-text and metadata search, that I can make accessible over a simple web page? Can be on-premise or hosted, free or paid. I really don't want to build something myself at this point.

QNAP x86 NASes have an app that does exactly what you're asking for, exposed over a hosted web page. It's called Qsirch; it does require more RAM and cores as the list of indexed items grows.

Sheep
Jul 24, 2003
Best Buy via eBay has WD 8TB EasyStores (Red/White labels inside) for $150 + 15% off with promo code PRONTO15 = $127.50 each. Pretty sure the promo code is today only.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

Yes, the promo code is good today through 10pm PST.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
It was OOS this morning when I tried to get them. Just caved and purchased two more.

redeyes
Sep 14, 2002

by Fluffdaddy
Man, I have a real problem buying 5400RPM drives. I get they are for mass storage but 7200RPM please!

Internet Explorer
Jun 1, 2005





There's pretty much no reason for drives above 5k to exist anymore.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
They're great for consumer NAS purposes, though: any sort of RAID of 5400s is pretty likely to more than saturate a 1GigE link without any real effort, and you're not gonna notice the slight increase in latency, either. So why not enjoy the savings in money, power, and heat?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Internet Explorer posted:

There's pretty much no reason for drives above 5k to exist anymore.

Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore :smuggo:

It is infinitely annoying to me that we're in a catch-22: faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit Ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

In particular, the abysmal IOPS of gigabit pretty much rules out any of the fun applications, like serving Steam disks/booting from your NAS/etc., which might be interesting in a power-user space. Instead you either have everything client-side or you virtualize everything; there's really no in-between.

We've been stuck at gigabit for what... 15 years now? Like, gigabit's fine for the low-budget stuff, but 10 GbE still costs an arm and a leg even for lovely entry-level stuff that will burn out in a year. You're easily looking at $1000 for a pair of 4-port 10GBase-T switches.

Paul MaudDib fucked around with this message at 03:42 on Aug 9, 2018

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Paul MaudDib posted:

Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore :smuggo:

It is infinitely annoying to me that we're in a catch-22: faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit Ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

In particular, the abysmal IOPS of gigabit pretty much rules out any of the fun applications, like serving Steam disks/booting from your NAS/etc., which might be interesting in a power-user space. Instead you either have everything client-side or you virtualize everything; there's really no in-between.

Man, I've been saying this for years. 10GigE switches are still pretty expensive, but 1GigE switches, even the prosumer stuff, are super cheap. I wanted to run 10GigE through my house but couldn't justify it, because the switches still don't seem fairly priced and nothing else would even have a 10GigE NIC in it... it blows my mind.

H110Hawk
Dec 28, 2006

Paul MaudDib posted:

Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore :smuggo:

I think you meant 25GbE :smugdog:

Paul MaudDib posted:

It is infinitely annoying to me that we're in a catch-22: faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit Ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

Because the only use people have at home for anything above 802.11ac kinda speeds is :filez:, so no one cares. Even if all 4 of you are streaming 4K Netflix all at once, that's STILL only around 100Mbps.

https://help.netflix.com/en/node/13444

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Internet Explorer posted:

There's pretty much no reason for drives above 5k to exist anymore.

Just because you personally can't see a need, they shouldn't exist. Got it.
What about large backup appliances that need to ingest data quickly, where SSDs for actual storage would be cost prohibitive?
There are good reasons.

Even if we get cheap QLC and god-knows-what bits-per-cell, reasonably fast HDDs still have a place: long-term data retention without power is likely to be terrible on those many-bits-per-cell devices.

HalloKitty fucked around with this message at 06:47 on Aug 9, 2018

Internet Explorer
Jun 1, 2005





I've been dealing with backups professionally for a long, long time. If you need the IOPS and you can't afford flash caching, you simply add more drives. There's no reason for 7k, 10k, or 15k drives to exist anymore.

Thanks Ants
May 21, 2004

#essereFerrari


10k SAS never made sense to me anyway as it occupied a weird middle ground between SATA and 15k SAS. And then flash came along and you're wasting your money if you buy anything quicker than SATA.

A guy in our office specced a NetApp system with 3 shelves of 10k 2TB SAS disks in 2017 and I just had a :wtc: moment

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

It is infinitely annoying to me that we're in a catch-22: faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit Ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

You could serve it from... lots of things? Even a fairly pedestrian RAID-5 of 4 drives that can do 120MB/s each will collectively push 200+MB/s, or about double what a GigE link can carry. And that's without even talking about SSDs, larger arrays, etc.

10GigE has stagnated in consumer-land because, as H110Hawk notes, the vast majority of people simply don't need it--GigE is plenty fast enough for your Average Joe. Hell, 100Mbps networks are still fast enough for probably 90% of people. The number of people at home who want to virtualize their setup is so niche as to be ignorable...which is why it basically is ignored. I mean, no one is gonna bother buying $1000 worth of networking gear so they can watch Netflix and wirelessly print their lovely cat photos.

Which doesn't make any of your other points invalid or less irritating; I, too, would love to be able to slap all my Steam installs onto a giant shared host instead of having to have them locally on multiple computers.

Thanks Ants
May 21, 2004

#essereFerrari


I'd go as far to say that wired networking has stagnated in the consumer space. The market for people who cable up their houses is tiny, and as long as the Wi-Fi is good enough to not cause buffering then most people wouldn't know if it was limiting them in some way. It's not that uncommon to see the broadband routers bundled with your ISP service only provide a couple of ethernet ports where four used to be standard, and they were still being supplied with 10/100 ports not that long ago.

For people who do want to wire up their house then the SMB HP stuff, Ubiquiti, enterprise gear 'borrowed' from work etc. will serve those needs.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore :smuggo:
Aside from everything everyone else said, 7200RPM disks can't saturate 10GbE either, and bulk storage on a consumer budget puts a hard limit on the number of drives you can obtain, especially if they need to be able to saturate 10GbE.
When 5400RPM drives saturate 1Gbps and consumer networking has stagnated at 1Gbps, we're not as badly off as we could be; it'd be worse if drives couldn't even saturate 100Mbps and had stagnated there.

IOwnCalculus
Apr 2, 2003





Internet Explorer posted:

I've been dealing with backups professionally for a long, long time. If you need the IOPS and you can't afford flash caching, you simply add more drives. There's no reason for 7k, 10k, or 15k drives to exist anymore.

Yup. When hard drives were all we had, the incremental gains with higher RPM drives looked significant.

These days, comparing them against even a cheap SSD makes those differences irrelevant. Professionally, the name of the game is either all-flash for those who can afford it, or hybrid flash with a few SSDs caching for a giant stack of cheap spindles, and some software magic to make it all work. It's still capable of tens to hundreds of thousands of IOPS, which are numbers even most of us in here don't give a gently caress to reach.
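For the homelab version of that, the "software magic" in ZFS terms is roughly just bolting SSDs onto an existing pool of spindles as cache and log devices; a minimal sketch, with the pool name and device names made up:

```bash
# Assumes an existing pool of spinning disks named "tank".
zpool add tank cache nvme0n1                 # SSD as L2ARC read cache
zpool add tank log mirror nvme1n1 nvme2n1    # mirrored SSDs as SLOG for sync writes

zpool status tank                            # cache/log vdevs now listed alongside the spindles
```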

nerox
May 20, 2001

Thanks Ants posted:

I'd go as far to say that wired networking has stagnated in the consumer space. The market for people who cable up their houses is tiny, and as long as the Wi-Fi is good enough to not cause buffering then most people wouldn't know if it was limiting them in some way. It's not that uncommon to see the broadband routers bundled with your ISP service only provide a couple of ethernet ports where four used to be standard, and they were still being supplied with 10/100 ports not that long ago.

For people who do want to wire up their house then the SMB HP stuff, Ubiquiti, enterprise gear 'borrowed' from work etc. will serve those needs.

I have 2 devices out of the ~30 internet-connected items in my house that are hard-wired at this point: my desktop and NAS. The only time 10GbE would even have been useful for me was the initial data dump from my desktop to the NAS.

Wired just isn't needed for most of the population these days. If they have a 100Mb down connection to the internet, they can easily saturate it with wireless. Most people's traffic goes exclusively outside their LAN. I bet I can count on one hand how many houses I've been in over the last year that even have a desktop computer still.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

HalloKitty posted:

Even if we get cheap QLC and god-knows-what bits-per-cell, reasonably fast HDDs still have a place: long-term data retention without power is likely to be terrible on those many-bits-per-cell devices.

QLC is actually back down to HDD speeds. The Intel drive uses its cells in SLC mode at first, which is fast, but as it fills up it switches more cells to QLC mode, which gets about 100 MB/s of throughput, so performance asymptotically degrades to that level as the drive fills.

Paul MaudDib fucked around with this message at 16:43 on Aug 9, 2018

MonkeyFit
May 13, 2009
I built a new server:
  • Ryzen 3 2200G
  • 8GB DDR4 2400
  • 4x 8TB WD 5400rpm 256MB cache
  • Intel 520 series 240GB SSD (OS drive)

Primary use will be as a media/Plex server, along with backups of a couple of PCs. I'm looking for data redundancy, but I don't have any experience with RAID.

I'm trying to decide what OS to put on it. I saw FreeNAS wants a minimum of 8GB of RAM, so I'm not sure if I'll need more RAM or if it will be fine. Or should I choose a Linux distribution like Ubuntu?

IOwnCalculus
Apr 2, 2003





FreeNAS vs Ubuntu won't really change your system requirements. If you're going to use an actual SSD as a boot device, and if you're comfortable using bash to configure things instead of a web interface, I'd go Ubuntu. FreeNAS is better if you want to boot off of a cheap USB stick and perform almost all management via the web interface.
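If the Ubuntu route wins, "using bash to configure things" for the 4x8TB build above looks roughly like the following; a sketch only, with placeholder /dev/disk/by-id names and pool/dataset names, and raidz1 chosen because it trades one drive's capacity for single-drive redundancy:

```bash
sudo apt install zfsutils-linux

# One raidz1 pool across the four 8TB drives (~24TB usable, survives one drive failure).
sudo zpool create -o ashift=12 tank raidz1 \
    /dev/disk/by-id/ata-WDC_DRIVE1 /dev/disk/by-id/ata-WDC_DRIVE2 \
    /dev/disk/by-id/ata-WDC_DRIVE3 /dev/disk/by-id/ata-WDC_DRIVE4

# Separate datasets for media and PC backups; they mount at /tank/media and /tank/backups.
sudo zfs create tank/media
sudo zfs create tank/backups

zpool status tank   # pool health / scrub status
```

Plex then just gets pointed at /tank/media like any local directory.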

derk
Sep 24, 2004

IOwnCalculus posted:

FreeNAS vs Ubuntu won't really change your system requirements. If you're going to use an actual SSD as a boot device, and if you're comfortable using bash to configure things instead of a web interface, I'd go Ubuntu. FreeNAS is better if you want to boot off of a cheap USB stick and perform almost all management via the web interface.

You can boot FreeNAS off an SSD as well, but I am a new FreeNAS-to-Ubuntu-Server convert. It's been a while since I had to type that many commands, but I still had to type quite a bit in FreeNAS anyway; the plugin system was so far behind on updated versions of said plugins that it was just easier to do manual jails and keep the stuff updated that way.

SamDabbers
May 26, 2003



If you are willing to type commands and also want ZFS, then vanilla FreeBSD is actually really solid, and arguably has better ZFS support than Ubuntu. Also better documentation.
