|
Harik posted:Quick followup - in trying to get the X8SI6-F running I hit a snag. The on-board SATA does not work in BIOS, at all. I had to turn on the BIOS option for the SAS and put my boot media on there in order to even boot.

Do a physical CMOS reset (from the motherboard - make sure to read the whole process, as on some boards you keep the battery installed and on some you remove it), remove all devices, and see if it shows up. From there, start adding devices back in, starting with the SATADOM or a regular hard drive from the shelf. You might also have a bad battery. Those things cause the strangest errors when the voltage is low. It's a dollar for a new one and you should probably do it with any used server right off the bat. Hardware stores sell them.
|
# ? Aug 6, 2018 16:49 |
|
|
I just bought 4 of the 8TB easy stores. All white label drives at 5400 rpm but with 256MB cache. Are they worth keeping or should I return them?
|
# ? Aug 7, 2018 03:29 |
|
I believe the white labels are the same as the WD Reds, literally just with a different label, with the one exception of having a weird 3.3V reset pin issue that takes a piece of tape to fix. e; or a molex connector, if you want to go that route. DrDork fucked around with this message at 03:40 on Aug 7, 2018 |
# ? Aug 7, 2018 03:36 |
|
That's only on older systems though right? I have a new motherboard and processor showing up on Wednesday.
|
# ? Aug 7, 2018 04:10 |
|
I'll admit to not having looked into it extensively, since I didn't end up with any white labels, but I believe the issue was that any power detected on a particular 3.3V pin was being treated as a hard reset command, and most normal SATA cables power that pin continuously, despite not usually being used for anything. Meanwhile molex -> SATA adapters apparently don't power that pin, again because it's not typically used for anything, thus avoiding the issue. I mean, either way, you can just set everything up and if the drives don't seem to want to function, immediately assume that the 3.3V thing is the issue, tape it off, and try again.
|
# ? Aug 7, 2018 04:18 |
|
Yep. Well getting this server together took less time than I thought.
|
# ? Aug 7, 2018 04:30 |
|
MonkeyFit posted:That's only on older systems though right? I have a new motherboard and processor showing up on Wednesday.

I've got them running in my ProLiant N54L, which is from like 2010 or something, so I think it really is just luck of the draw as to whether you run into issues or not.
|
# ? Aug 7, 2018 12:13 |
|
H2SO4 posted:Have you tried booting with that SATADOM unplugged? I've had misbehaving drives make an HBA unhappy at boot before.

Yes, it's a weirdly shaped little bugger (flat, with a SATA connector on the face), so when it's plugged in it blocks most of the SATA bank. I had to pull it to use the SATA ports, although I could re-install it so it only takes up 2. The SATA does work after boot; I was able to access it and found it was a pull from an old (retired Jan 2015) Solaris-based ReadyNAS. Reading a random ZFS partition was interesting, but ultimately it only had a single old 1TB disk in it to test that everything still worked. It did get me to try out zfsonlinux at least.

H110Hawk posted:Do a physical cmos reset (from the motherboard. Make sure to read the whole process as some you keep the battery installed and some you remove.) remove all devices and see if it shows up. From there start adding devices back in starting with the satadom or a regular hard drive from the shelf.

Unlikely it's the battery, since CMOS has thrown a checksum error when the battery lets the data corrupt for as long as I can remember. It's in service now, but I'll try a full CMOS reset next time I work on it. Can't hang the SSDs off the SATA anyway because those ports are only 3Gb/s, so it'll be for the 16GB flash it came with as boot/root. Pretty sure I have a spare battery too; I'll throw it in just because it's easier to do that now than after it dies.

Also, I'm 99% sure it's running RAM the CPU is supposed to be physically incapable of using. No idea how that works, but both Intel and Supermicro insist it can't take 4GB rank-1 DIMMs, yet that's what the datasheet says. Am I reading this wrong? Because 4GB/rank would be amazing, I could put a cubic fuckton of RAM in this so cheaply.

E: IPMI is such a godsend; the server is now in its own separate room at the other end of the house. It's in the corner of our oversized walk-in closet where the network/security drops all terminate.
Balancing a monitor/keyboard in there sucked. Harik fucked around with this message at 16:44 on Aug 7, 2018 |
# ? Aug 7, 2018 16:40 |
|
Harik posted:Unlikely it's the battery, since CMOS throws a checksum error when the battery lets the data corrupt for as long as I can remember.

While that is the failure mode that is written in the manual, and perhaps what you have experienced, I can tell you from way more than anecdotal experience that those batteries cause the weirdest errors. This is across several makes (SuperMicro, ASUS, Quanta, Dell off the top of my head), probably a hundred models, and conservatively 20k servers. I'm glad you have it in service now; if you see strange errors like those again, I would strongly suggest tossing a new battery in the next time you're at the hardware store. It's cheap insurance.
|
# ? Aug 7, 2018 17:33 |
|
Harik posted:E: IPMI is such a godsend, the server is now in its own separate room at the other end of the house. It's in the corner of our oversized walk-in closet where the network/security drops all terminate. Balancing a monitor/keyboard in there sucked.

I will never again build a home server without it, and I sincerely wish I had a way to securely hook up the IPMI on my server at work (because lol at exposing THAT to the internet).
|
# ? Aug 7, 2018 17:46 |
|
What IPMI tools do you folks use? I am looking at using it more at my workplace, our servers are all lab environment ones that get beat on pretty hard and it isn’t a production environment but it would be useful for console debug, remote rebooting etc.
|
# ? Aug 7, 2018 17:48 |
|
Depending on the box I'm using I'll either use the Supermicro IPMITool local client, or I'll just connect to it using a web browser. I have one server in my work lab that refuses to load the KVM console in the local client but works fine via browser
|
# ? Aug 7, 2018 17:52 |
|
IOwnCalculus posted:I will never again build a home server without it, and I sincerely wish I had a way to securely hook up the IPMI on my server at work (because at exposing THAT to the internet).

They make super cheap VPN units aimed at people with a "home office" if you can convince your boss of it or whatever. Other companies make even cheaper ones, but there isn't much room to go down: https://www.amazon.com/Juniper-Networks-SRX110H-VA-Services-Gateway/dp/B006NHPHPC https://www.amazon.com/Juniper-Services-Gateway-Ethernet-SRX210HE2-POE/dp/B00FOWKJZU/ Grab something used/grey market.

priznat posted:What IPMI tools do you folks use? I am looking at using it more at my workplace, our servers are all lab environment ones that get beat on pretty hard and it isn't a production environment but it would be useful for console debug, remote rebooting etc.

Despair, and a VM with the old and busted Java they all require. Otherwise, OpenIPMI tools on Linux for the basics (`ipmitool chassis power on` etc.)
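For the OpenIPMI route, the basics are all one tool. Everything below is stock ipmitool over the lanplus interface; the BMC address and credentials are obvious placeholders, not values from the thread:

```shell
# Common ipmitool invocations over the network (lanplus interface).
# BMC address and credentials are placeholders -- substitute your own.
BMC=192.168.1.50 USER=admin PASS=changeme

ipmitool -I lanplus -H "$BMC" -U "$USER" -P "$PASS" chassis power status
ipmitool -I lanplus -H "$BMC" -U "$USER" -P "$PASS" chassis power cycle
ipmitool -I lanplus -H "$BMC" -U "$USER" -P "$PASS" sel list      # hardware event log
ipmitool -I lanplus -H "$BMC" -U "$USER" -P "$PASS" sensor list   # temps, fans, voltages
ipmitool -I lanplus -H "$BMC" -U "$USER" -P "$PASS" sol activate  # serial-over-LAN console
```

`sol activate` in particular gets you console debug with no Java KVM at all, provided the BIOS is set to redirect the serial console to the BMC.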
|
# ? Aug 7, 2018 18:04 |
|
IOwnCalculus posted:Depending on the box I'm using I'll either use the Supermicro IPMITool local client, or I'll just connect to it using a web browser. I have one server in my work lab that refuses to load the KVM console in the local client but works fine via browser

Same, except all my stuff works more or less fine.
|
# ? Aug 7, 2018 19:27 |
|
IOwnCalculus posted:Depending on the box I'm using I'll either use the Supermicro IPMITool local client, or I'll just connect to it using a web browser. I have one server in my work lab that refuses to load the KVM console in the local client but works fine via browser

Try updating the IPMI firmware (if the lab server is something you can risk doing a firmware update on). The KVM console popped up and insta-closed on this board until I did that. Surprised Java doesn't explode at the self-signed cert the JWS file points at; I had to add security exceptions and got a big popup on the web client whining about it.

H110Hawk posted:
Interesting. I've never seen it fail in any other way than a checksum error. Either way, I'll have downtime to add more RAM when it arrives, so I'll be putting a new battery in then.
|
# ? Aug 7, 2018 20:45 |
|
I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well: I've been tasked with setting up a sort of searchable document library, mostly for PDFs but possibly some other formats too. I've researched a bunch of document management systems, but I don't need 90% of the functionality they offer, since I'm not doing change tracking or access control or anything like that. Can anyone recommend a product that will let me set up a catalog of documents with full-text and metadata search, that I can make accessible over a simple web page? Can be on-premise or hosted, free or paid. I really don't want to build something myself at this point.
|
# ? Aug 8, 2018 15:18 |
|
fatman1683 posted:I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well:

Google tells me, but I have not tried, that an ELK stack can index PDF documents. I would spin it up in AWS as a demo. Comedy option: a Mac with screen sharing. Login, cmd-space, search.
|
# ? Aug 8, 2018 15:46 |
|
fatman1683 posted:I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well:

QNAP x86 NAS units have a suite of apps that does exactly what you ask, exposed over a hosted webpage. It's called Qsirch; it does require more RAM and cores as the indexed items list grows.
|
# ? Aug 8, 2018 19:18 |
|
Best Buy via eBay has WD 8TB EasyStores (Red/White labels inside) for $150 + 15% off with promo code PRONTO15 = $127.50 each. Pretty sure the promo code is today only.
|
# ? Aug 8, 2018 22:22 |
|
Sheep posted:Best Buy via eBay has WD 8TB EasyStores (Red/White labels inside) for $150 + 15% off with promo code PRONTO15 = $127.50 each. Pretty sure the promo code is today only.

Yes, the promo code is today through 10pm PST
|
# ? Aug 8, 2018 22:33 |
|
It was OOS this morning when I tried to get them. Just caved and purchased two more.
|
# ? Aug 8, 2018 22:36 |
|
Man, I have a real problem buying 5400RPM drives. I get they are for mass storage but 7200RPM please!
|
# ? Aug 9, 2018 01:57 |
|
There's pretty much no reason for drives above 5k to exist anymore.
|
# ? Aug 9, 2018 02:43 |
|
They're great for consumer NAS purposes, though: any sort of RAIDed 5400's are pretty likely to be able to more than saturate a 1GigE link without any real effort, and you're not gonna notice the slight increase in latency, either. So why not enjoy the savings in money, power, and heat?
|
# ? Aug 9, 2018 02:45 |
|
Internet Explorer posted:There's pretty much no reason for drives above 5k to exist anymore.

Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore.

It is infinitely annoying to me that we're in a catch-22: faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

In particular, the abysmal IOPS over gigabit pretty much rules out any of the fun applications, like serving Steam disks/booting from your NAS/etc, which might be interesting in a power-user space. Instead you either have everything client-side or you virtualize everything, really no in-between.

We've been stuck at gigabit for what... 15 years now? Like, gigabit's fine for the low-budget stuff, but 10GbE still costs an arm and a leg even for lovely entry-level stuff that will burn out in a year. You're easily looking at $1000 for a pair of 4-port 10GBase-T switches. Paul MaudDib fucked around with this message at 03:42 on Aug 9, 2018 |
# ? Aug 9, 2018 03:23 |
|
Paul MaudDib posted:Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore

Man, I've been saying this for years. 10GigE switches are still pretty expensive, but 1GigE switches, even the prosumer stuff, are super cheap. I wanted to run 10GigE through my house but I couldn't justify it, because the switches still don't seem fairly priced and nothing else would even have a 10GigE NIC in it. It blows my mind.
|
# ? Aug 9, 2018 03:31 |
|
Paul MaudDib posted:Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore

I think you meant 25GbE.

Paul MaudDib posted:It is infinitely annoying to me that we're in a catch-22, faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

Because people have basically no use at home for anything above 802.11ac kinda speeds, so no one cares. Even if all 4 of you are streaming 4K Netflix all at once, that's STILL only around 100Mbps. https://help.netflix.com/en/node/13444
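The arithmetic checks out. As a quick sketch (25 Mbps per 4K stream is Netflix's published recommendation; the rest is just multiplication):

```shell
# Back-of-envelope: four simultaneous 4K Netflix streams vs. a gigabit link.
per_stream=25   # Mbps per 4K stream (Netflix's recommended figure)
streams=4
total=$((per_stream * streams))
echo "aggregate: ${total} Mbps"                  # 100 Mbps total
echo "gigabit headroom: $((1000 - total)) Mbps"  # 900 Mbps still free on GigE
```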
|
# ? Aug 9, 2018 04:32 |
|
Internet Explorer posted:There's pretty much no reason for drives above 5k to exist anymore.

Just because you personally can't see a need, they shouldn't exist. Got it. What about large backup appliances that need to ingest data quickly, where SSDs for actual storage would be cost prohibitive? There are good reasons. Even if we get cheap QLC and god-knows-what bits-per-cell, reasonably fast HDDs still have a place - long-term data retention without power is likely to be terrible on those many-bits-per-cell devices. HalloKitty fucked around with this message at 06:47 on Aug 9, 2018 |
# ? Aug 9, 2018 06:44 |
|
I've been dealing with backups professionally for a long, long time. If you need the IOPS and you can't afford flash caching, you simply add more drives. There's no reason for 7k, 10k, or 15k drives to exist anymore.
|
# ? Aug 9, 2018 07:22 |
|
10k SAS never made sense to me anyway as it occupied a weird middle ground between SATA and 15k SAS. And then flash came along and you're wasting your money if you buy anything quicker than SATA. A guy in our office specced a NetApp system with 3 shelves of 10k 2TB SAS disks in 2017 and I just had a moment
|
# ? Aug 9, 2018 11:04 |
|
Paul MaudDib posted:It is infinitely annoying to me that we're in a catch-22, faster networking is moving at a glacial pace in the consumer sphere, because what are you going to serve it from? And drives aren't getting much faster, because you're going to be bottlenecked on gigabit ethernet anyway. And yet we have much faster storage available on the desktop, so there's clearly demand for faster-than-HDD speeds, and we also have fancy tiered-storage systems that let us pretend we have big SSDs, etc.

You could serve it from... lots of things? Even a fairly pedestrian RAID-5 of 4 drives that can do 120MB/s each will collectively push 200+MB/s, or about double what a GigE link can carry. And that's without even talking about SSDs, larger arrays, etc.

10GigE has stagnated in consumer-land because, as H110Hawk notes, the vast majority of people simply don't need it - GigE is plenty fast enough for your Average Joe. Hell, 100Mbps networks are still fast enough for probably 90% of people. The number of people at home who want to virtualize their setup is so niche as to be ignorable... which is why it basically is ignored. I mean, no one is gonna bother buying $1000 worth of networking gear so they can watch Netflix and wirelessly print their lovely cat photos.

Which doesn't make any of your other points invalid or less irritating; I, too, would love to be able to slap all my Steam installs onto a giant shared host instead of having to have them locally on multiple computers.
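Back-of-the-envelope version of that RAID-5 claim; the per-drive figure and the usable-GigE figure are rough assumptions, not benchmarks:

```shell
# Sanity-checking a 4-drive RAID-5 array against a gigabit link.
drives=4
per_drive=120                                # MB/s sequential per drive (assumed)
raid5_read=$(( (drives - 1) * per_drive ))   # discount ~one drive's worth for parity
gige=118                                     # MB/s usable on 1GbE after overhead (approx)
echo "array ~${raid5_read} MB/s vs GigE ~${gige} MB/s"
```

Even with the conservative parity discount, the array is roughly triple what the wire can carry, which is the whole point: the drives aren't the bottleneck.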
|
# ? Aug 9, 2018 11:31 |
|
I'd go as far to say that wired networking has stagnated in the consumer space. The market for people who cable up their houses is tiny, and as long as the Wi-Fi is good enough to not cause buffering then most people wouldn't know if it was limiting them in some way. It's not that uncommon to see the broadband routers bundled with your ISP service only provide a couple of ethernet ports where four used to be standard, and they were still being supplied with 10/100 ports not that long ago. For people who do want to wire up their house then the SMB HP stuff, Ubiquiti, enterprise gear 'borrowed' from work etc. will serve those needs.
|
# ? Aug 9, 2018 11:57 |
Paul MaudDib posted:Really? I'd say there's pretty much no reason for ethernet below 10GbE to exist anymore

When 5400RPM drives saturate 1Gbps and consumer networking has stagnated at 1Gbps, we're not as badly off as we could be - it'd be worse if drives couldn't even saturate 100Mbps and had stagnated.
|
|
# ? Aug 9, 2018 14:07 |
|
Internet Explorer posted:I've been dealing with backups professionally for a long, long time. If you need the IOPs and you can't afford flash caching you simply add more drives. There's no reason for 7k, 10k, or 15k drives to exist anymore.

Yup. When hard drives were all we had, the incremental gains with higher RPM drives looked significant. These days, comparing them against even a cheap SSD makes those differences irrelevant. Professionally, the name of the game is either all-flash for those who can afford it, or hybrid flash with a few SSDs caching for a giant stack of cheap spindles, and some software magic to make it all work. It's still capable of tens to hundreds of thousands of IOPS, which are numbers even most of us in here don't give a gently caress to reach.
|
# ? Aug 9, 2018 14:22 |
|
Thanks Ants posted:I'd go as far to say that wired networking has stagnated in the consumer space. The market for people who cable up their houses is tiny, and as long as the Wi-Fi is good enough to not cause buffering then most people wouldn't know if it was limiting them in some way. It's not that uncommon to see the broadband routers bundled with your ISP service only provide a couple of ethernet ports where four used to be standard, and they were still being supplied with 10/100 ports not that long ago.

I have 2 devices of my ~30 internet-connected items in my house that are hard-wired at this point: my desktop and NAS. The only time that 10GbE would have even been useful for me was the initial data dump from my desktop to NAS. Wired just isn't needed for most of the population these days. If they have a 100Mb down connection to the internet, they can easily saturate it with wireless. Most people's traffic is exclusively to outside their LAN. I bet I can count on one hand how many people's houses I have been into in the last year that even have a desktop computer still.
|
# ? Aug 9, 2018 14:46 |
|
HalloKitty posted:Even if we get cheap qlc and god-knows-what bits-per-cell, reasonably fast hdds still have a place - long term data retention without power is likely to be terrible on those many-bits-per-cell devices.

QLC is actually back to HDD speeds. The Intel drive uses its cells in SLC mode at first, which is fast, but as it fills up it switches more cells to QLC mode, which gets about 100 MB/s of throughput, and performance asymptotically degrades down to that level as you fill it up. Paul MaudDib fucked around with this message at 16:43 on Aug 9, 2018 |
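A toy model of that degradation, with made-up but plausible numbers; only the ~100 MB/s direct-to-QLC figure comes from the post above, everything else is an illustrative assumption:

```shell
# Toy model of fill-dependent QLC write speed: writes land fast in SLC-mode
# cells until that region runs out, then drop to direct-to-QLC speed.
# All figures are illustrative assumptions, not measurements of any drive.
slc_speed=1500    # MB/s into SLC-mode cache (assumed)
qlc_speed=100     # MB/s direct to QLC (from the post above)
cache_gb=20       # SLC-mode region left on a mostly-full drive (assumed)
write_gb=100      # size of one sustained write

slc_time=$(( cache_gb * 1000 / slc_speed ))               # seconds spent in cache
qlc_time=$(( (write_gb - cache_gb) * 1000 / qlc_speed ))  # seconds at QLC speed
avg=$(( write_gb * 1000 / (slc_time + qlc_time) ))
echo "effective ~${avg} MB/s over a ${write_gb} GB write"
```

The cache absorbs the first chunk quickly, but the overall average ends up barely above the raw QLC rate, which is the asymptotic behavior described above.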
# ? Aug 9, 2018 16:40 |
|
I built a new server:
Primary use will be as a media/Plex server, along with backups of a couple PCs. I'm looking for data redundancy, though I don't have experience with RAID at all. I'm trying to decide what OS to put on it. I saw FreeNAS wants a minimum of 8GB of RAM, so I'm not sure if I'll need more RAM or if it will be fine. Or should I choose a Linux distribution like Ubuntu?
|
# ? Aug 9, 2018 20:18 |
|
FreeNAS vs Ubuntu won't really change your system requirements. If you're going to use an actual SSD as a boot device, and if you're comfortable using bash to configure things instead of a web interface, I'd go Ubuntu. FreeNAS is better if you want to boot off of a cheap USB stick and perform almost all management via the web interface.
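If you go the Ubuntu route, the redundancy part is only a few commands with zfsonlinux. The pool/dataset names and the by-id paths below are placeholders, and a two-drive mirror is just one layout you could pick:

```shell
# Minimal ZFS-on-Ubuntu sketch. Pool/dataset names and the /dev/disk/by-id
# paths are placeholders -- substitute your actual drives.
sudo apt install zfsutils-linux

# Two-drive mirror: survives one drive failure. Use stable by-id names,
# never /dev/sdX, so the pool survives drives being re-enumerated.
sudo zpool create tank mirror \
    /dev/disk/by-id/ata-DRIVE_SERIAL_1 \
    /dev/disk/by-id/ata-DRIVE_SERIAL_2

sudo zfs create -o compression=lz4 tank/media   # dataset for the Plex library
sudo zpool status tank                          # both members should show ONLINE
```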
|
# ? Aug 9, 2018 20:53 |
|
IOwnCalculus posted:FreeNAS vs Ubuntu won't really change your system requirements. If you're going to use an actual SSD as a boot device, and if you're comfortable using bash to configure things instead of a web interface, I'd go Ubuntu. FreeNAS is better if you want to boot off of a cheap USB stick and perform almost all management via the web interface.

You can boot off SSD for FreeNAS as well, but I am a recent FreeNAS-to-Ubuntu-server convert. It's been a while since I had to type this many commands, but I still had to type quite a bit in FreeNAS as well, since the plugin system was so behind on updated versions of said plugins that it was just easier to do manual jails and keep the stuff updated that way.
|
# ? Aug 9, 2018 21:24 |
|
|
If you are willing to type commands and also want ZFS, then vanilla FreeBSD is actually really solid, and arguably has better ZFS support than Ubuntu. Also better documentation.
|
# ? Aug 9, 2018 21:37 |