wolrah
May 8, 2006
what?

King Nothing posted:

On that topic, when I was looking at hard drives some were advertised as being good for video feeds because they could write multiple data streams. Is that an actual feature, or some sort of marketing thing? Doesn't any drive with multiple platters have multiple read/write heads, and thus the ability to do that?

It's just a firmware thing. The access patterns of a DVR or similar are different from those of most home users, so the firmware can be designed in a way that's optimized for that use, sacrificing performance in other areas.

In terms of physical hardware, the "DVR Edition" or whatever drives are identical to their standard-use counterparts. They're just firmware-tuned to perform well with at least two "simultaneous" writes and one read going at any given time.

wolrah
May 8, 2006
what?

900ftjesus posted:

They're optimized for writing large chunks of contiguous data. You wouldn't want to use this in your computer:

7200RPM 160GB WD SATA:
# Average Seek Time: 8.7ms

7200RPM 160GB Seagate SATA - optimized to write multiple streams at once:
# Average Seek Time: 17ms

For a same-brand comparison, straight from the Seagate datasheets:

Barracuda 7200.11 (desktop/workstation) 1TB: 4.16ms
Barracuda ES.2 (nearline/NAS/SAN) 1TB: 4.16ms
DB35.3 (professional/security DVR) 1TB: Read <14ms, Write <15ms

They don't have numbers listed for the SV35 (consumer DVR) aside from the vague "up to 10 HDTV streams" and it's also a generation out of date (based on the Barracuda 7200.10), otherwise I'd have included that too. As far as I know the three drives I listed are all physically the same, just with different firmware for their intended application.

wolrah
May 8, 2006
what?

Alystair posted:

I have a 3Ware 9500S 8-port RAID controller WITH battery backup unit that I'm no longer using. The 4-port without the BBU sells for $300 on eBay and I'm willing to match that, so you get 4 extra ports plus the BBU for free. I could easily sell it for more elsewhere, but some of you guys might actually use it. PM me if interested.

Specs can be found here; it's one beefy mofo: http://www.3ware.com/products/serial_ata9000.asp

drat, if that were PCIe I'd be all over it, but unfortunately I don't have a single machine with 64-bit or 66 MHz PCI, much less both (hell, I don't think I've even seen 64-bit PCI in real life). While I think it's backwards compatible, it would be a huge waste to put that beast of a card in a standard PCI slot.

wolrah
May 8, 2006
what?
I really want to use ZFS, but I have a somewhat irrational dislike of Solaris thanks to some old-rear end SPARC boxes I had to use in college. My fileserver currently runs Ubuntu Linux with a mix of LVM+XFS on the internal drives and a few USB drives via NTFS-3G.

Right now it seems like my choices if I go down the ZFS road are as follows:
  • Deal with it and use Solaris, probably with Nexenta to keep things as close as possible to the Debian/Ubuntu environment I prefer
  • FreeBSD, which I've never used outside of pfSense routers (though I understand the userland is similar to Mac OS X which I use as my standard workstation OS)
  • Comedy option: Hackintosh Server. Not really expected to be usable until Snow Leopard anyway, on top of the fun inherent in installing OS X Server on my old AMD file server.
  • Keep Linux and use ZFS on FUSE. Apparently there are performance problems all around, though, since it's not nearly as well developed for the FUSE environment nor as heavily used as NTFS-3G.

Are any of the non-Solaris options really worth considering? FreeBSD would probably be my preference unless ZFS on FUSE has been updated to acceptable performance since I last looked into it.

wolrah
May 8, 2006
what?

Combat Pretzel posted:

Your dislike probably lies more with CDE than Solaris. The latter comes with GNOME enabled by default now.

Actually no, I've never used Solaris in a GUI mode, only command line over SSH.

The_Last_Boyscout posted:

Windows: enabled the LAN connection, assigned it a manual IP address, and left the gateway blank. If you give it a gateway it tries to use this connection to access the internet.

Linux: assigned a manual IP address through the Ubuntu Network Configuration GUI, disabled the wireless card

Technically, with any version of Windows newer than 98 and a modern Linux distro set up for desktop use, you shouldn't even need to set static IPs. Both ends should choose IPs in the link-local range (169.254.0.0/16) without a gateway if they're configured for DHCP and receive no response.

That said, I still tend to set mine statically when linking two computers, if only because it's easier to just think 10.0.0.1 and 10.0.0.2 when I need to do something by IP.
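
On the Linux side that's about one command anyway; something like this is all it takes (eth0 is a placeholder for whatever the interface is actually called):
code:
# temporary static IP for a direct PC-to-PC link
# (eth0 is a placeholder; substitute your actual interface name)
sudo ip addr add 10.0.0.2/24 dev eth0
sudo ip link set eth0 up
ping -c 3 10.0.0.1    # sanity check against the other box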

wolrah
May 8, 2006
what?
I was thinking about my desire for ZFS more and realized I might be looking at this all wrong and ZFS may not actually be what I want.

I'm basically looking to have a box that I can throw disks at whenever I need more space. I'd also like to be able to tolerate at least a single drive failure, whether for the entire system, for one designated "important" volume, or for specific files/folders. I have no preference as to which of the three.

When I run out of room for more disks, I'd like to be able to take advantage of that failure tolerance to remove the smallest drive and replace it with a new larger one. Being able to do this online is preferred, but I don't mind taking it offline for a few minutes, since this is likely to happen only once or twice a year at most.

I don't believe I'll ever need to shrink a volume, but I would see the capability as a plus. Being able to grow is obviously mandatory.

I guess basically what I want is Drobo-like functionality, but in my own homebrew machine. Four drives is not enough, plus I like the other things I can do with a server. Right now I have SABnzbd+, Samba file/print sharing, AFP, DHCP, DNS, rtorrent, uShare UPnP, Zoneminder motion-detecting security running off a webcam, and probably a number of other things I forgot about running on this thing. All of those will run on basically any Unix-like platform and have Windows ports or counterparts.

I know how to do everything except the fault tolerance with LVM. I think WHS can do everything I want, but setting up local servers on it, aside from those built specifically for a WHS environment, was interesting last time I looked into it. I don't think I can get AFP working on it at all, but that's not really important. I used to have a Server 2008 install on this machine that would stop responding about once every three days; it's also had Vista Ultimate, WHS, and a few Linux variants without any trouble whatsoever.

wolrah
May 8, 2006
what?

vanjalolz posted:

It's so easy to get carried away chasing speed and getting cockblocked by the PCI bus when making a NAS. I think everyone should take a deep breath and really consider the chances of breaking 100mb/s throughput in real-world use.

Very true. Unless you have bonded gigabit network links, you will not exceed the PCI bus' top speed with networked disk access alone. Now if your network card is also sharing the PCI bus, then you could have a legitimate problem.

Then again, many of the machines out there that lack PCI Express or decent onboard SATA also lack onboard gigabit, so the PCI bus is actually a concern for those users. In that case, I'd recommend just biting the bullet and upgrading to a low-end AM2 platform.
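
The rough numbers, for anyone who wants to check the math (theoretical peaks; real-world throughput is lower on both):
code:
32-bit / 33 MHz PCI:  33,000,000 transfers/s x 4 bytes  ~= 133 MB/s, shared by every device on the bus
Gigabit Ethernet:     1,000,000,000 bits/s / 8          ~= 125 MB/s before protocol overhead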

wolrah
May 8, 2006
what?

invid posted:

From a small office point of view, I have a NAS system that needs to be hooked up in the DMZ to allow workers to use it.

Barring the use of a VPN, are there any security issues that I need to be aware of?

Depends on what you mean by DMZ. Thanks to a lot of consumer routers, the term is often abused to mean "make this one machine wide open and assume everything not explicitly destined elsewhere goes here," whereas it used to refer to a third network off the router that was neither part of the LAN nor the WAN, was firewalled from both, and was where internet-exposed machines went. That way you can have a simple "no unsolicited inbound traffic" rule from WAN to LAN, forward needed traffic from WAN to DMZ, and then sometimes also forward some traffic from DMZ to LAN, though that last part has obvious security implications if a DMZ machine with LAN access is compromised.
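
In iptables terms the proper three-legged version boils down to something like this; interface names are placeholders and it's just a sketch of the forwarding policy, not a complete firewall (no NAT rules shown):
code:
# wan0 = internet, lan0 = trusted LAN, dmz0 = DMZ (placeholder interface names)
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# LAN can reach anywhere, DMZ hosts can only reach the internet
iptables -A FORWARD -i lan0 -j ACCEPT
iptables -A FORWARD -i dmz0 -o wan0 -j ACCEPT
# the only unsolicited inbound traffic allowed is to the exposed DMZ services
iptables -A FORWARD -i wan0 -o dmz0 -p tcp --dport 443 -j ACCEPT
# nothing from the DMZ gets to start connections into the LAN;
# any exception added here is exactly the risk described above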

Remember that many NAS devices are running embedded Linux and using the same services one would use to build a standard server, so they can also have the same vulnerabilities. On top of that, most consumer/SOHO NAS vendors seem to be terribly slow about releasing updated firmware even when there is a critical security flaw. If your NAS is exposed to the internet and has an exploitable flaw, you could be giving anyone who wants it full control over a box on your network unless you're properly restricting it with a real DMZ.

wolrah
May 8, 2006
what?

angelfoodcakez posted:

Ah, so I couldn't serve from a UNC path like a WHS box?

NFS or iSCSI

wolrah
May 8, 2006
what?

Interlude posted:

Here's what I'm trying to do - set up a server box that's easily accessible via CIFS and AFP (mostly PCs in the house, but my wife's laptop is a Mac and she'll want to access it).

Don't bother with AFP at all. It gains you pretty much nothing, and as NeuralSpark said it's a pain to configure. Mac OS X can access CIFS shares perfectly fine; it even uses Samba to do it, so it'll be 100% compatible with any *nix host you might want to use. I use my MBP as my primary workstation connected to Debian- and Ubuntu-hosted CIFS shares all day at work, then go home to an Ubuntu fileserver. AFP technically helps with Time Machine if the server supports certain commands it uses, but last time I checked netatalk didn't implement those, so it doesn't matter anyway.

wolrah
May 8, 2006
what?

NeuralSpark posted:

I know OS X server uses Samba to do Windows sharing, but I think the client is something of Apple's own design. My only gripe with it is that it can be VERY slow.

I don't think that's right. The connections are handled by smbclient, and a man smbclient on the MBP I'm using right now brings up the Samba smbclient man page.

As for speed, obviously saying "works for me" isn't really useful, but I don't see a bit of difference between any non-compressed and non-encrypted protocols for large files. SCP is obviously slower for those two reasons, and some protocols (FTP in particular) are really bad at large batches of small files, but in my experience SMB is one of the better ones.

wolrah
May 8, 2006
what?
I was about to begin a migration to Windows Home Server, but since Microsoft has now announced changes that make WHS v2 completely pointless, I'm having strong second thoughts about the platform. I already didn't like that Vail's updated Drive Extender dropped some of the useful features of the old one, but now that it's being removed altogether, there will be no upgrade path from WHS v1 of any interest.

I currently have a machine running Ubuntu 10.04 with LVM2 set up as follows:
code:
250GB - ext4 /

500GB \
500GB |
1TB   |- LVM VG - ext4 /volumes/pool0/
1TB   |
1.5TB /

1TB - NTFS /volumes/ntfs1/

1.5TB - Unformatted, awaiting installation.
The NTFS volume is left over from preparing to move to WHS.

Right now I have a single-volume filesystem and can easily add the two other drives to the pool, but LVM provides no protection against disk failure and, to the best of my knowledge, is actually nearly as bad as RAID 0 in terms of "you lose a drive, you just lost everything." That protection is the main thing I'd like to gain from anything I move to.
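
(For reference, adding a drive to the existing pool is trivial; device and volume names below are placeholders for my actual ones:)
code:
# grow the existing pool with a new disk, then grow the ext4 on top of it
# (/dev/sdX, pool0, and data are placeholder names)
sudo pvcreate /dev/sdX
sudo vgextend pool0 /dev/sdX
sudo lvextend -l +100%FREE /dev/pool0/data
sudo resize2fs /dev/pool0/data    # ext4 grows online just fine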

Is there anything other than WHS which works reliably and offers both a pool that arbitrarily sized single drives can be added to and the assurance that a disk failure will only kill the data on the failed disk, not the entire pool?

I've experimented with AUFS, but in my tests it does not handle writes properly at all. I can't get it to write to more than one volume, when the ideal would be for new files to be automatically assigned to the volume with the most free space.
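
For reference, the knob in question is aufs' branch create policy; a mount line like the following, with placeholder paths, is what's supposed to spread new files to whichever branch has the most free space (and as I said, it didn't behave for me):
code:
# aufs union of two data disks; create=mfs is supposed to put new files
# on whichever branch has the most free space (paths are placeholders)
sudo mount -t aufs -o br=/mnt/disk1=rw:/mnt/disk2=rw,create=mfs none /mnt/pool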

wolrah
May 8, 2006
what?

ephori posted:

I'd like to use ESXi as the host at the top-level, since I've got a bunch of work VMs that'd be convenient to play with at home, but a RAID-Z with ZFS is also really appealing to me. Does ZFS-Fuse work with ESXi? If I instead run an OpenIndiana VM under ESXi, can I give it direct disk-access to build a RAID-Z? If so, can I then connect that storage back to the ESXi host using iSCSI to expand the datastore? Is that a terrible idea?

I have heard of this being done by VMware themselves for demos: one machine running ESXi hosts a Solaris variant with ZFS plus two more ESXi guests, letting them demo the nifty stuff like vMotion and high availability with only one physical machine.

Not sure on the performance though, and last time I checked it took some work to get a raw device mapping to a SATA device on ESXi.
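
From what I've read, the "some work" amounts to creating the mapping file by hand from the ESXi shell, since the GUI won't offer local SATA disks as RDM candidates. Roughly like this, with placeholder device and datastore names:
code:
# find the local disk's device identifier
ls /vmfs/devices/disks/
# create a physical-mode RDM pointer file the guest can be given
# (device ID and datastore path are placeholders)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID \
    /vmfs/volumes/datastore1/openindiana/rdm_disk1.vmdk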

wolrah
May 8, 2006
what?

devilmouse posted:

Unraid? http://www.lime-technology.com/

One of my friends runs UnRAID, and as far as I can tell it requires a "parity drive" to have any failure tolerance at all, which of course must be the largest (or tied for the largest) drive in the system. I don't want to waste any drives. I have almost nothing of actual importance on my server, and all of that is backed up elsewhere; I just want to tolerate a failed drive without losing what's on the other drives.

UnRAID will survive a failed drive and let me lose nothing, but that means I give up the capacity of the parity drive right off the bat. It's also a limited OS that basically exists solely for file serving, rather than a full Linux or Windows Server install like the other options. What WHS does is let me lose absolutely zero capacity, and if I lose a 250GB drive I lose at most 250GB of my data rather than all of it. That's what I want, just not with a solution like WHS, which apparently now has no ongoing support. I'm looking into Greyhole since AUFS didn't work out.

wolrah fucked around with this message at 06:55 on Nov 27, 2010

wolrah
May 8, 2006
what?

fadderman posted:

Hey Goons, is a Molex-to-SATA adapter something to recommend if my PSU doesn't have enough SATA connections?
There is absolutely nothing wrong with using them if they're well made. As noted though, many are not well made. I've personally seen three of them short out and melt down.

wolrah
May 8, 2006
what?

priznat posted:

Must have been some crazy amount of splitting to melt down something providing power to a SATA drive; they’re not really power hogs.

Startech stuff is weirdly expensive but it’s usually fine.
Two of them were in the same computer at the same time, and they were single-port adapters powering SSDs, so they were under no meaningful load; they were just poo poo. It was a hard short within the SATA connector itself in both cases: one looked like it had been arcing occasionally and was just a bit burnt, while the other was obviously where the magic smoke was released from.

They were all exactly the type shown in the video someone else linked; I found the same things when looking into it.

wolrah
May 8, 2006
what?

That Works posted:

If that was universal then we should tell people up front not to use UnRaid on a network with windows systems.
I would say we probably should be...

UnRAID seriously doesn't even support SMB2 yet? It's Linux-based, right? Samba has supported SMB2 since 2011 and SMB3 since 2013. What's their excuse? Are they rolling their own SMB server for some idiotic reason? It's not like the fact that SMB1 is a gaping security hole hasn't been well known for years...

If they can't be bothered to update this, what else are they slacking off on?

wolrah
May 8, 2006
what?

Matt Zerella posted:

Missed my post where I mentioned the modern SMB is coming when 6.8 goes final?
As H2SO4 correctly assumed, the point was that this is something they should have done literally years ago.

SMB1 is horrifically insecure for a variety of reasons, and having it enabled at all means that someone able to gain a man-in-the-middle position could downgrade a connection even between two modern systems and then do whatever they wanted with it.

Even Microsoft has been recommending that everyone disable SMB1 since 2016 and has been disabling it by default in Windows since late 2017. It'd be OK if they were just testing a new version that disabled SMB1 where previous versions supported all of them, but if the current stable release really supports nothing but SMB1, that's just plain irresponsible. Anything that has required SMB1 in the last few years should have been treated as outdated junk.

As far as I can tell they use Samba and aren't doing anything special with it; they've just, for whatever reason, configured it to disable the newer protocols. There are instructions on their forums for enabling and enforcing SMB2+ with a few lines of config; why they didn't do the same long ago and make those who need to support ancient trash do the config edits, I have no idea.
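
For anyone stuck fixing it themselves, the Samba side really is about this much (in smb.conf under [global]):
code:
# refuse anything older than SMB2; add to the [global] section of smb.conf
server min protocol = SMB2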

wolrah fucked around with this message at 07:10 on Dec 9, 2019

wolrah
May 8, 2006
what?

Buff Hardback posted:

Having network shares show in the "Network" pane requires SMBv1 to be enabled. Mapping shares as a network drive or navigating directly to \\hostname will work without enabling SMBv1 on Windows.
My local Samba server shows up just fine in my Network pane on Windows 10 with SMB1 disabled entirely at both ends (Samba is actually set to use only the Win7-and-later variant of SMB2, because there will never again be a Vista machine on my LAN), so this is definitely not true. According to the Samba docs, as long as nmbd is set up properly it should browse normally.

quote:

No clue why everyone decided to interpret it as "unraid only uses SMBv1"
I was going off Matt Zerella's post that ended page 552 and the responses from other users like That Works who also had to enable SMB1 on their Windows machines to access their Unraid machines.

HalloKitty posted:

Ok, that's a pretty important distinction
It definitely is. Now I'm almost considering installing UnRAID in a VM myself just to verify one way or another.

If it supports SMB3 but still allows connections from SMB1, that's not the most secure configuration in the world, but it's a reasonable default for a commercial product where compatibility without configuration matters to some users.

If it requires that clients have SMB1 enabled to access the current stable version, something is horribly wrong with their priorities and it'd make me wonder what else they have that badly wrong.

wolrah
May 8, 2006
what?
There are plenty of applications where more memory is more valuable than ultimate memory performance. The moment you have to swap, that difference becomes irrelevant.

wolrah
May 8, 2006
what?

This is a good summary of the topic. Linus' legal concerns with OpenZFS are well founded: without Oracle's explicit approval there's no reasonable way it could end up in the kernel proper, and as a result the technical issues with how the kernel handles internal interfaces are similar to what we've seen for years with binary GPU drivers.

His opinions on ZFS as a filesystem, though, are pretty much entirely nonsensical, and the idea that btrfs is even in the same ballpark is hilarious.

wolrah fucked around with this message at 22:28 on Jan 13, 2020

wolrah
May 8, 2006
what?

taqueso posted:

Are there any especially good deals for bulk storage sized SSDs? I'd like to make a small array for a car computer where I'm scared to use spinning disks.
Microcenter's in-house brand "Inland" has the best $/GB ratio I've seen in SSDs without going to complete no-name hardware. I wouldn't use them for ultimate performance or reliability applications, but for general purpose computers I love 'em. 1TB for $88.

wolrah
May 8, 2006
what?

Moey posted:

4.1 TB free...


This is after temporarily moving over a few large TV series I wasn't actively watching to one of my PCs that happened to have a 5TB drive in it.

I need to pretty much just rebuild from scratch at this point. On the plus side, I could copy literally everything to a single drive for holding.

wolrah
May 8, 2006
what?
Do any of the NAS-focused distros have first-class support for both ZFS and a more flexible drive pooling system that can work with a random collection of disks? It seems like those that support one don't support the other, at least not officially, and if I'm going to have to manage one or the other from the command line anyway, I figure I may as well just run Ubuntu and do it all manually.

Just to ensure I'm not X/Ying myself, here's my situation and logic.

The vast majority of my data, everything before the decimal point in my terabyte count and then some, is downloaded content of some sort: Linux ISOs, lancache, podcasts, etc. Most of it could be trivially re-downloaded with little to no effort on my part as long as I knew what I had lost. The more free-form drive pooling solutions like WHS2011, Greyhole, and maybe SnapRAID (if I'm understanding it correctly) are perfect for this stuff. Losing part of a file is a lot worse than losing the whole thing, so I would like to avoid any kind of striped pool for this one. I lost a single drive in an LVM JBOD once, and sorting out which files had survived and which hadn't from that mess was such a pain in the rear end that I ended up just deleting a large chunk of it and starting over.

That said, I would like to also be able to use this box as the storage host for my VMs so I can play around with failover and such. For that performance is going to be a lot more important than raw capacity, with high availability coming in second.

My thought right now is something along the lines of a ZFS RAID 10 of 1TB SSDs for the high-performance pool and some kind of file-level pooling solution configured for single redundancy for the bulk pool. Does that sound like the right answer for what I want to do, and if so, do any of the "appliance" style distros support doing both of these things without leaving the web interface? If I just go at it on my own again, any thoughts on SnapRAID vs. Greyhole vs. other options for the random disk pooling?

Or should I maybe consider two separate boxes, moving bulk storage up to my HTPC and making the actual server machine high-performance only?
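
Either way, the ZFS half of the plan is at least simple to express; the fast pool would just be a stripe of mirrors, something like this with placeholder device names:
code:
# "RAID 10" in ZFS terms: one pool striped across two mirror vdevs
# (device names are placeholders for the four SSDs)
zpool create fastpool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zfs create fastpool/vmstore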

wolrah
May 8, 2006
what?

DrDork posted:

I'm not really sure your use cases have outlined a good reason to go with ZFS at all, honestly. From the sound of it you have:

-A group of data you don't really care much about
-A VM store that you also don't really care much about, other than wanting it to be fast

ZFS is aimed at high availability and security over speed. If you're just playing around with VMs for learning and such, a solo SSD will probably already provide more than sufficient disk speeds. Doing any sort of SSD RAID only really makes sense if you're plunking them into RAID0 to use as a larger single drive for whatever reason, or if you're using them to run large IOPS intensive databases or similar.
The reason I was looking at ZFS is pretty much the same reason for the VMs in the first place: I like to do the "home lab" thing where I overcomplicate my home setup to get some experience working with different technologies. I want to be able to at least loosely simulate situations I might encounter professionally, play with snapshots, etc.

In that context does it make a bit more sense?

I mean yeah, from a practical sense everything I do with VMs currently runs off of a single SATA SSD in my desktop.

wolrah
May 8, 2006
what?

IOwnCalculus posted:

I once sat down and started working out what it would take to use a Raspberry Pi to control some relays to be able to remotely power the box on and off, and use it as a serial terminal.

I stopped when I realized the BOM was already close to the cost increase of just buying a proper SM board. I think just about all Supermicro boards past the X34xx / X55xx generations have IPMI standard.
At this point the hard part is really the video input. If you have a host motherboard that supports serial in the BIOS, then you really just need a couple bucks' worth of level converters for that, plus relays or transistors for the power/reset signals. A Pi 4 already supports USB OTG peripheral mode and can act as a USB keyboard/mouse, flash drive, etc. while running off of PoE.

I for one would love a general-purpose "IPMI" adapter even if it were just that: video capture, power, and USB. While it's usually easy enough to spec a proper server platform in new builds, it would be really nice to have some kind of remote diagnostic ability I could add to home machines built from spare parts, or to existing servers at customer sites that weren't specced with remote management or aren't really actual server hardware.

Doing a bit of poking around, I've found someone selling a $20 adapter that claims to convert 1080p HDMI to a signal a Pi can accept on its camera input, so I think I might order one of those and see how it goes. It'd obviously be more useful for a lot of server hardware if I could get VGA input on it, but my own home hardware has a desktop GPU installed, so this will work for my needs.

wolrah
May 8, 2006
what?

Rexxed posted:

That seems very cheap, I was looking into the same thing a year or two back and the best I could find (at the time) was https://auvidea.eu/product-category/csi2bridge/hdmi2csi/ which interfaces with the pi camera input and wasn't that cheap.

Well, the source is some random guy on YouTube whose only contact information is a Gmail address, but there are people on a few forums talking about having received parts from him, so I figure I'll give it a shot as long as he takes PayPal or something along those lines. Worst case, I'm out $20. There's also a product on Alibaba for $36; it's the same chip as in the Auvidea design AFAIK.

Apparently a normal Pi only has two lanes of CSI and thus can only capture 1080p at 25-30 FPS, but for the purposes of a DIY external remote management box that's perfectly sufficient. A Compute Module has a full four-lane interface.

wolrah
May 8, 2006
what?

Lowen SoDium posted:

I have actually ordered a couple of those HDMI to CSI2 bridge boards this last week from AliExpress. I am attempting to build a zero-U IP-KVM device using an RPi.
That's pretty much exactly what I'm looking to accomplish too, so it's neat that we're both working on the same-ish project.

IP KVM is basically my minimum goal, remote media should be an easy second milestone if I can get the KVM part to work, and then remote power/reset is icing on the cake.

quote:

I actually didn't know that the Pi4 could present itself as a keyboard and mouse. My plan was to put an Arduino Micro Pro on top to be the HID device for the managed PC and communicate with it from the Pi over serial.
Yeah, all of the Pi SoCs technically support USB gadget mode, but since the "B" models all had a hub on the line it wasn't exposed on those. It was only properly exposed on the Zero, and it can be enabled on the hub-less 1A/3A models using a settings tweak and an out-of-spec A-A USB cable.

The Pi 4 uses a completely different peripheral architecture and connects the gadget-mode interface to the USB-C port, so it's available at all times even with the USB hub and Ethernet port.

There's not much Pi 4-specific documentation at the moment, but for the most part what you can find out there for the Pi Zero as a USB gadget will work the same on the other compatible models.
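
If you want a quick sanity check that gadget mode is working before tackling the HID part, the usual Pi Zero recipe applies to the 4's USB-C port too; something like this, then plug the USB-C port into another machine and it should show up as a USB network adapter:
code:
# /boot/config.txt - put the controller in peripheral-capable mode
dtoverlay=dwc2

# /boot/cmdline.txt - append to the existing single line, after rootwait
modules-load=dwc2,g_ether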

Moey posted:

Keep us posted.
If I get anywhere useful with it I'll definitely be posting about it. I'm currently waiting on a response from the YouTube guy; if that doesn't go anywhere in a reasonable amount of time, I'll probably order one of the AliExpress boards.

wolrah fucked around with this message at 20:21 on Apr 21, 2020

wolrah
May 8, 2006
what?

Wild EEPROM posted:

4) Software RAID is usually not portable. That means if you have to reinstall your OS, your RAID won't go with it. Usually this matters when your hardware dies and you have to buy new parts.
With you on everything but this. What makes you believe software RAID isn't portable? Back when the choice was just between software and hardware RAID, that was one of the main points in favor of software RAID, other than cost.

I have personally moved Windows dynamic disks and Linux md arrays between systems with no problems, and as far as I'm aware OS X's disk sets are equally portable. Windows won't automatically mount the array (it'll flag it as foreign by default), but importing it is a matter of two clicks in Disk Management. Likewise on Linux, you have to do an mdadm scan for it to identify a newly attached array, but we're not talking about rocket science here.
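
On Debian/Ubuntu the whole "scan" amounts to roughly this:
code:
# find and assemble any md arrays present on the newly attached disks
sudo mdadm --assemble --scan
# optionally record the array so it assembles automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u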

Are you maybe thinking about those setups often bundled with "gamer" motherboards that are some proprietary softraid pretending to be hardware RAID? Those are somewhat tied to the hardware of course, but they're easy enough to just not use. I'm pretty sure Linux md is actually able to mount a lot of these as well, as long as the array layout is stored on disk somewhere and not just in an EEPROM on the motherboard.

wolrah
May 8, 2006
what?
I will vouch for that case being great for DIY server builds. It's small enough to fit into a "LackRack" setup but still large enough to support full-size desktop computer components, so the fans are quiet and you can use normal PSUs, expansion cards, etc.

wolrah
May 8, 2006
what?

H110Hawk posted:

It genuinely surprises me that pi implementations r/w to their disk so much. In theory the whole point is to not do that.

Most of the actual appliance distros built for the purpose are pretty good about this and either run entirely r/o or use an r/o boot partition separate from an r/w partition for user data.

Anything built on top of Raspbian on the other hand behaves mostly like a normal Debian system, as you'd reasonably expect.

wolrah
May 8, 2006
what?

H110Hawk posted:

I apparently unreasonably expect something called Pi-hole that's been around for years now to have a very low I/O footprint, for the same reasons listed here. Nothing but basic config documents should be persisted to disk. Databases can be downloaded per boot. Stats can be lost on unclean reboot. If you desperately want to, they could be persisted on clean reboot.
Pi-hole is a terrible hack in a lot of ways, and this is one of them. As far as I can tell they don't even have a "distro" of their own; it's literally just software you install on top of a standard Raspbian install using a "pipe curl to bash" command line, which is a horrible idea in its own way.

I have no idea why it's so popular and do not encourage people to run it. It's just HOSTS-file-based ad blocking on a larger scale.

wolrah
May 8, 2006
what?

D. Ebdrup posted:

Speaking of 10G, an article on the cheapest 10GbE just appeared in my RSS feed.
I looked into this recently and ended up jumping right to 40G because the cards and optics/DACs really weren't that much more expensive on the used market compared to 10G. Basically all 40G cards can take adapters to connect 10G SFP+ modules, and some can even do a breakout to 4x10G, so there didn't really seem to be any significant downside other than requiring an x8 PCIe slot for full performance.

I got two Mellanox ConnectX-3 40/56G InfiniBand cards off eBay and bought a brand new QSFP+ DAC from FiberStore. Less than $100 total with shipping, and I can hit 35 gigabits per second of file transfer between RAM disks. My server's hard drives are the real-world limiting factor at this point, and it's glorious.
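
If anyone wants to run the same sort of test, it's just a tmpfs on each end and a big file pushed over whatever protocol you actually use; sizes and the share path below are placeholders:
code:
# on both machines: an 8GB RAM disk to take the drives out of the equation
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk

# on the sending side: make a test file and time the copy to a share
# backed by the other machine's RAM disk (path is a placeholder)
dd if=/dev/zero of=/mnt/ramdisk/test.bin bs=1M count=4096
time cp /mnt/ramdisk/test.bin /path/to/share/on/other/ramdisk/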

The catch, of course, is that unless you're looking at some EoL gear like old Brocade ICXes, the switch costs get crazy. For now I'm avoiding that problem by having my desktop and server plugged directly together; the rest of the LAN can share the gigabit link.

wolrah
May 8, 2006
what?

Hadlock posted:

Too bad you can't boot from S3 yet. Just store the access keys and S3:url in the BIOS and boom. Might be a little slow to recover from a power outage longer than your UPS can handle, but yeah
You could definitely do this with iPXE. You can flash it directly to a NIC's boot ROM with an HTTPS URL to pull a config file from and go from there.

If you're interested in that sort of thing, you can install it to a USB drive and boot from there as an experiment before committing to warranty voiding.
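
The USB experiment is roughly this involved: build iPXE with a tiny embedded script that chains to a config you host somewhere (the URL is a placeholder, and HTTPS support may need to be switched on in iPXE's config/general.h first):
code:
# build an iPXE USB image with an embedded script that chainloads a hosted config
git clone https://github.com/ipxe/ipxe.git && cd ipxe/src

cat > myboot.ipxe <<'EOF'
#!ipxe
dhcp
chain https://example.com/boot.ipxe
EOF

make bin/ipxe.usb EMBED=myboot.ipxe
sudo dd if=bin/ipxe.usb of=/dev/sdX bs=1M   # /dev/sdX = your USB stick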

wolrah
May 8, 2006
what?

rufius posted:

I really like my little TVS-471. I would definitely buy another QNAP. Had bad exp with Synology previously so I’m wary to try them again.

If i strayed from QNAP, it’d be to a FreeNAS device.
Pretty much exactly my experience too; we deploy TS-453s as backup storage for our customers and have no complaints.

wolrah
May 8, 2006
what?
I bought a few of the cheap HDMI-to-USB capture adapters after seeing that, and they do work. I'm going to set up my Pi 4 with one hooked up to my server as soon as I get my 3D printer back online to print a case. Will definitely post a trip report.

wolrah
May 8, 2006
what?

BabyFur Denny posted:

SMB is just the network protocol for sharing the drive, and if you're the only users on the network it should be fine. Otherwise you might be able to set a higher version of the protocol on the server, maybe after installing the latest update. No need to replace the whole thing.
Nope. The DNS-323's official firmware releases use Samba version 3.0.something, and the latest version of the community-developed "Alt-F" firmware (https://sites.google.com/site/altfirmware/) seems to use version 3.5. SMB2 support was added to Samba in version 3.6 and SMB3 in version 4.0, so it doesn't seem like that particular product can support anything newer.

The Alt-F firmware adds support for a few other protocols, but nothing that would be easily used from a Windows PC as a shared drive.

I'd say it's time to retire it. A Raspberry Pi 4 with some USB hard drive enclosures would probably be the cheapest solution and would almost certainly outperform the D-Link, but if DIY isn't your thing then any modern commercial appliance should also be good.

wolrah
May 8, 2006
what?

Warbird posted:

Coming off of that, I assume that a Pi4 should be fine for NFS/SMB duties since they ungoobered the USB/Ethernet stuff? I resurrected my old desktop setup for server duties and I'm debating just letting the drives live there and network mounting on the more powerful machine.
Correct; it doesn't really take much CPU power to serve file shares. The Pi was previously bottlenecked as a DIY NAS by the fact that all of the relevant I/O shared a single USB 2.0 interface to the host SoC, but the 4 finally fixed this with a SoC that has Ethernet built in plus a PCIe-based USB 3.0 controller.

At this point the bottleneck should be Gigabit Ethernet itself.

quote:

And coming off the coming off, am I losing a noticeable amount of throughput by having the machines not hooked into the same switch? Outside of what I can see via tracert that is.
There should be no meaningful difference unless the links between switches are congested with other traffic. Switches themselves should not impact the usable bandwidth across the link unless they're configured to do so in some way (QoS, port speed limits, etc.)

wolrah
May 8, 2006
what?

KingKapalone posted:

Earlier this year I installed two Noctua NF-R8 PWM fans since the last non-PWM fans started whining. Now the fans consistently spin up and slow down. I looked in the manual and saw that I can control the fans in IPMI https://www.supermicro.com/manuals/motherboard/C222/MNL-1428.pdf
https://forums.servethehome.com/index.php?resources/supermicro-x9-x10-x11-fan-speed-control.20/ seems to have some info on undocumented commands with which you can change the fan speeds manually. If you leave it set to automatic mode it'll eventually start changing things on its own again, but if you set the BMC to full fan speed and then manually turn them down from there it won't mess with things under normal circumstances.
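
Going from that ServeTheHome writeup, the gist is a couple of ipmitool raw commands; they're undocumented, so treat these as examples and verify them against your exact board before relying on them:
code:
# set the BMC fan mode to "Full" so it stops adjusting speeds on its own
ipmitool raw 0x30 0x45 0x01 0x01
# then set the duty cycle yourself: zone 0x00 to 50% (0x32)
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32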

edit:

The Wonder Weapon posted:

Do you guys have any suggestions on all-in-one keyboards for your media PCs?
Use a frontend with a 10-foot UI designed for use on a TV. The standard PC desktop was just not designed for this, and you're never going to have as good an experience. A normal keyboard and mouse should not be required for basic media playback.

Plex is the favorite over in the HTPC thread and Kodi is still very popular with those of us who have been doing this a long time. There's also Emby and Jellyfin.

wolrah fucked around with this message at 17:56 on Sep 8, 2020

wolrah
May 8, 2006
what?

KingKapalone posted:

So is this saying it sets the fans to Full but then redefines Full as say 75% or something? It sounds like a software way of just disabling the PWM functionality since it will be fixed at that speed percent.
It doesn't redefine Full; it controls the actual speed setting. Setting it to Full just prevents the BMC from changing the setting except possibly in alarm conditions. If you leave it in automatic mode, it'll eventually change the setting based on its predefined behavior, which you've already determined isn't desirable for your application.

It wouldn't be too hard to write a script that polled the temperature sensors every X time period and set the fan speed as you'd like. There's a script linked at the bottom of the post that does something close.
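
The bare-bones version of such a script is only a handful of lines; the sensor name, thresholds, and zone here are all placeholders you'd tune to your own board, and it assumes the fan mode has already been set to Full as described above:
code:
#!/bin/bash
# crude fan control sketch: poll the CPU temp, pick a duty cycle, repeat
# (sensor name, thresholds, and zone are placeholders)
while true; do
    temp=$(ipmitool sensor reading "CPU Temp" | awk -F'|' '{print int($2)}')
    if   [ "$temp" -ge 70 ]; then duty=0x64   # 100%
    elif [ "$temp" -ge 55 ]; then duty=0x32   # 50%
    else                          duty=0x1e   # 30%
    fi
    ipmitool raw 0x30 0x70 0x66 0x01 0x00 "$duty"
    sleep 30
done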
