|
King Nothing posted:On that topic, when I was looking at hard drives some were advertised as being good for video feeds because they could write multiple data streams. Is that an actual feature, or some sort of marketing thing? Doesn't any drive with multiple platters have multiple read/write heads, and thus the ability to do that? It's just a firmware thing. The access patterns of a DVR or similar are different from those of most home users, so the firmware can be designed in a way that's optimized for that use, sacrificing performance in other areas. In terms of physical hardware, the "DVR Edition" or whatever drives are identical to their standard use counterparts. They're just software-optimized to perform well with at least two "simultaneous" writes and one read going at any given time.
|
# ¿ Mar 19, 2008 14:26 |
|
900ftjesus posted:They're optimized for writing large chunks of contiguous data. You wouldn't want to use this in your computer: For a same-brand comparison, straight from the Seagate datasheets: Barracuda 7200.11 (desktop/workstation) 1TB: 4.16ms Barracuda ES.2 (nearline/NAS/SAN) 1TB: 4.16ms DB35.3 (professional/security DVR) 1TB: Read <14ms, Write <15ms They don't have numbers listed for the SV35 (consumer DVR) aside from the vague "up to 10 HDTV streams" and it's also a generation out of date (based on the Barracuda 7200.10), otherwise I'd have included that too. As far as I know the three drives I listed are all physically the same, just with different firmware for their intended application.
|
# ¿ Mar 19, 2008 17:06 |
|
Alystair posted:I have a 3Ware 9500S 8 port raid controller WITH battery backup unit that I'm no longer using. The 4-port without the BBU sells for $300 on eBay and I'm willing to match that, so you get 4 extra ports, plus the BBU for free. I could easily sell it for more elsewhere, but some of you guys might actually use it. PM me if interested. drat, if that was PCIe I'd be all over it, but unfortunately I don't have a single machine with 64 bit or 66 MHz PCI, much less both (hell, I don't think I've even seen 64 bit PCI in real life) and while I think it's backwards compatible it would be a huge waste to put that beast of a card in a standard PCI slot.
|
# ¿ Jul 16, 2008 21:45 |
|
I really want to use ZFS, but I have a somewhat irrational dislike of Solaris thanks to some old-rear end SPARC boxes I had to use in college. My fileserver currently runs Ubuntu Linux with a mix of LVM+XFS on the internal drives and a few USB drives via NTFS-3G. Right now it seems like my choices if I go down the ZFS road are as follows:
Are any of the non-Solaris options really worth considering? FreeBSD would probably be my preference unless ZFS on FUSE has been updated to acceptable performance since I last looked into it.
|
# ¿ Dec 30, 2008 17:03 |
|
Combat Pretzel posted:Your dislike lies probably more with CDE than Solaris. Latter comes with Gnome enabled by default now. Actually no, I've never used Solaris in a GUI mode, only command line over SSH. The_Last_Boyscout posted:Windows: enable the LAN connection, assigned it a manual IP address and left the gateway blank. If you give it a gateway it tries to use this connection to access the internet. Technically with any version of Windows newer than 98 and a modern Linux distro set up for use as a desktop you shouldn't even need to set static IPs. Both ends should choose IPs in the link-local range (169.254.0.0/16) without a gateway if they're configured for DHCP and receive no response. That said I still tend to set mine static when I'm trying to link two computers too, if only for it being easier to just think 10.0.0.1 and 10.0.0.2 when I need to do something by IP.
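The link-local fallback described above is easy to check with Python's standard `ipaddress` module; a quick sketch (the addresses are just illustrative):

```python
import ipaddress

# APIPA/zeroconf addresses live in 169.254.0.0/16; both Windows (98 and newer)
# and modern Linux fall back to this range when DHCP gets no response.
def is_apipa(addr: str) -> bool:
    return ipaddress.ip_address(addr) in ipaddress.ip_network("169.254.0.0/16")

print(is_apipa("169.254.10.20"))  # True
print(is_apipa("10.0.0.1"))       # False
```

The same module exposes this as the `is_link_local` property, which covers the IPv6 equivalent (fe80::/10) too.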
|
# ¿ Jan 3, 2009 22:57 |
|
I was thinking about my desire for ZFS more and realized I might be looking at this all wrong; ZFS may not actually be what I want. I'm basically looking to have a box that I can throw disks at whenever I need more space. I'd also like to be able to tolerate at least a single drive failure, whether for the entire system, one designated "important" volume, or specific files/folders; I have no preference as to which of the three. When I run out of space for disks, I'd like to be able to take advantage of that failure tolerance to remove the smallest drive and replace it with a new larger one. Being able to do this online is preferred, but I don't mind taking it offline for a few minutes since this is likely to only happen once or twice a year at peak. I don't believe I'll ever need to shrink a volume, but I would see the capability as a plus. Being able to grow is obviously mandatory.

I guess basically what I want is Drobo-like functionality, but in my own homebrew machine. Four drives is not enough, plus I like the other things I can do with a server. Right now I have SABnzbd+, Samba file/print sharing, AFP, DHCP, DNS, rtorrent, uShare UPnP, Zoneminder motion-detecting security running off a webcam, and probably a number of other things I forgot about running on this thing. All of those will run on basically any Unix-like platform and have Windows ports or counterparts.

I know how to do everything except the fault tolerance with LVM. I think WHS can do everything I want, but setting up local servers on it aside from those built specifically for a WHS environment was interesting last time I looked into it. I don't think I can get AFP working on it at all, but that's not really important. I used to have a Server 2008 install on this machine which would stop responding about once every three days; it's also had Vista Ultimate, WHS, and a few Linux variants without any trouble whatsoever.
|
# ¿ Jan 5, 2009 00:31 |
|
vanjalolz posted:Its so easy to get carried away chasing speed and getting cockblocked by the PCI bus when making a NAS. I think everyone should take a deep breath and really consider the chances of breaking 100mb/s throughput in real world use. Very true. Unless you have bonded gigabit network links, you will not exceed the PCI bus' top speed with networked disk access alone. Now if your network card is also sharing the PCI bus, then you could have a legitimate problem. Then again, many of the machines out there that lack PCI Express or decent onboard SATA also lack onboard gigabit, so the PCI bus is actually a concern for these users. In that case, I'd recommend just biting the bullet and upgrading to a low-end AM2 platform.
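The arithmetic behind that is simple. A back-of-the-envelope sketch (133 MB/s is the theoretical peak of classic 32-bit/33 MHz PCI, shared by every device on the bus):

```python
# Classic shared PCI: 32 bits wide * ~33.33 MHz = ~133 MB/s theoretical peak.
pci_mb_s = 32 / 8 * 33.33            # ~133 MB/s
gige_mb_s = 1_000_000_000 / 8 / 1e6  # 125 MB/s raw gigabit ceiling

# A single gigabit link can't quite saturate the bus on its own...
print(gige_mb_s < pci_mb_s)      # True

# ...but if both the NIC and the disk controller sit on the same PCI bus,
# every byte crosses the bus twice, so even one gigabit stream exceeds it.
print(2 * gige_mb_s > pci_mb_s)  # True
```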
|
# ¿ Feb 4, 2009 04:56 |
|
invid posted:From a small office point of view, I have a NAS system that needs to be hooked up in the DMZ to allow workers to use it. Depends on what you mean by DMZ. Thanks to a lot of consumer routers, the term DMZ is often abused to mean "make this one machine wide open; assume everything not explicitly destined elsewhere goes here," where it used to refer to a third network off the router that was neither part of the LAN nor the WAN and was firewalled from both, but would be where internet-exposed machines went. That way you can have a simple "no unsolicited inbound traffic" rule from WAN to LAN, forward needed traffic from WAN to DMZ, and then sometimes also forward some traffic from DMZ to LAN, though this has obvious security implications if a DMZ machine with LAN access is compromised. Remember that many NAS devices are running embedded Linux and using the same services one would use to build a standard server, so they can also have the same vulnerabilities. On top of that, most consumer/SOHO NAS vendors seem to be terribly slow about releasing updated firmware even if there is a critical security flaw. If your NAS is exposed to the internet and has an exploitable flaw, you could be giving anyone who desires it full control over a box on your network unless you're properly restricting it with a real DMZ.
|
# ¿ Feb 4, 2009 15:10 |
|
angelfoodcakez posted:Ah, so I couldn't serve from a UNC path like a WHS box? NFS or iSCSI
|
# ¿ Mar 3, 2009 16:55 |
|
Interlude posted:Here's what I'm trying to do - set up a server box that's easily accessible via CIFS and AFP (mostly PCs in the house but my wife's laptop is a mac and she'll want to access it). Don't bother with AFP at all. It gains you pretty much nothing and as NeuralSpark said it's a pain to configure. Mac OS X can access CIFS shares perfectly fine; it even uses Samba to do it, so it'll be 100% compatible with any *nix host you might want to use it with. I use my MBP as my primary workstation connected to Debian and Ubuntu-hosted CIFS shares all day at work, then go home to access a Ubuntu fileserver. AFP technically helps with Time Machine if you have certain commands supported that it uses, but last time I checked netatalk did not implement those commands so it doesn't matter anyway.
|
# ¿ Jun 9, 2009 19:06 |
|
NeuralSpark posted:I know OS X server uses Samba to do Windows sharing, but I think the client is something of Apple's own design. My only gripe with it is that it can be VERY slow. I don't think that's right. The connections are handled by smbclient, and a man smbclient on the MBP I'm using right now brings up the Samba smbclient man page. As for speed, obviously saying "works for me" isn't really useful, but I don't see a bit of difference between any non-compressed and non-encrypted protocols for large files. SCP is obviously slower for those two reasons, and some protocols (FTP in particular) are really bad at large batches of small files, but in my experience SMB is one of the better ones.
|
# ¿ Jun 11, 2009 02:42 |
|
I was about to begin a migration to Windows Home Server, but since Microsoft has now announced that WHS v2 will be completely pointless I'm having strong second thoughts about the platform. I already did not like Vail's updated Drive Extender removing some of the useful features of the old one, but now that it's being removed altogether that means there will be no upgrade path from WHSv1 of any interest. I currently have a machine running Ubuntu 10.04 with LVM2 set up as follows: code:
Right now I have the single volume filesystem and can easily add the two other drives to the pool, but LVM provides no protection against disk failure and to the best of my knowledge is actually nearly as bad as RAID 0 in terms of "you lose a drive, you just lost everything". This is the main thing I'd like to gain from anything I move to. Is there anything other than WHS which works reliably and can offer both a pool that single arbitrary size drives can be added to as well as the knowledge that a disk failure will only kill the data on the failed disk, not the entire pool? I've experimented with AUFS, but it does not handle writes properly at all from my tests. I can't get it to write to more than one volume, when the ideal would be for new files to automatically get assigned to the volume with the most free space.
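That "new files go to the volume with the most free space" placement policy is trivial to sketch; the mount points and sizes below are made up for illustration, and in a real tool the free-space numbers would come from `shutil.disk_usage()`:

```python
def pick_volume(free_bytes: dict) -> str:
    """Choose the pool member with the most free space for the next new file."""
    return max(free_bytes, key=free_bytes.get)

# Hypothetical pool members and their free space in bytes:
pool = {"/mnt/disk1": 120e9, "/mnt/disk2": 450e9, "/mnt/disk3": 80e9}
print(pick_volume(pool))  # /mnt/disk2
```

This is essentially what Greyhole and similar file-level pooling tools do under the hood: each file lands whole on exactly one member disk, so a failure only takes that disk's files with it.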
|
# ¿ Nov 23, 2010 20:41 |
|
ephori posted:I'd like to use ESXi as the host at the top-level, since I've got a bunch of work VMs that'd be convenient to play with at home, but a RAID-Z with ZFS is also really appealing to me. Does ZFS-Fuse work with ESXi? If I instead run an OpenIndiana VM under ESXi, can I give it direct disk-access to build a RAID-Z? If so, can I then connect that storage back to the ESXi host using iSCSI to expand the datastore? Is that a terrible idea? I have heard of this being done by VMware themselves for demos. One machine running ESXi hosting a Solaris variant with ZFS plus two more ESXi guests allowing them to demo the nifty stuff like vMotion and high availability with only one physical machine. Not sure on the performance though, and last time I checked it took some work to get a raw device mapping to a SATA device on ESXi.
|
# ¿ Nov 23, 2010 21:59 |
|
devilmouse posted:Unraid? http://www.lime-technology.com/ One of my friends runs UnRAID, and as far as I can tell it requires a "parity drive" to have any failure tolerance at all, and of course that must be the largest drive in the system (or tied for largest). I do not want to waste any drives; I have almost nothing that's actually of any importance on my server, and all of that stuff is backed up elsewhere. I just want to tolerate a failed drive without losing what's on the other drives. UnRAID will fail a drive and let me lose nothing, but that means I lose the capacity of that drive off the bat. It's also a limited OS that basically exists solely for file serving, rather than a full Linux or Windows Server install like the other options. What WHS does is let me lose absolutely zero capacity, and if I lose a 250GB drive I lose at most 250GB of my data rather than all of it. That's what I want, just not with a solution like WHS which apparently now has no ongoing support. I'm looking into Greyhole since AUFS didn't work out. wolrah fucked around with this message at 06:55 on Nov 27, 2010 |
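The capacity trade-off between the two approaches works out like this (drive sizes in GB are a hypothetical mix, not my actual pool):

```python
drives = [250, 500, 1000, 1000]  # hypothetical mixed drive pool, in GB

# UnRAID-style: one parity drive, which must be (one of) the largest.
# The whole pool survives any single drive failure with no data loss.
unraid_usable = sum(drives) - max(drives)

# WHS v1 / Greyhole-style pooling with duplication off: every byte is usable,
# but a failed drive takes its own contents with it (and only its own).
pool_usable = sum(drives)

print(unraid_usable)  # 1750
print(pool_usable)    # 2750
```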
# ¿ Nov 27, 2010 06:26 |
|
fadderman posted:Hey Goons is a molex to sata something to recommend if my psu doesn't have enough sata connections
|
# ¿ Nov 29, 2019 18:27 |
|
priznat posted:Must have been some crazy amount of splitting to melt down something providing power to a sata drive, they’re not really power hogs. They were all exactly the type shown in the video someone else linked; I found the same things when looking into it.
|
# ¿ Dec 1, 2019 18:12 |
|
That Works posted:If that was universal then we should tell people up front not to use UnRaid on a network with windows systems. UnRaid seriously doesn't support even SMB2 yet? It's Linux-based, right? Samba has supported SMB2 since 2011 and SMB3 since 2013. What's their excuse? Are they rolling their own SMB server for some idiotic reason? It's not like the fact that SMB1 is a gaping security hole hasn't been well known for years... If they can't be bothered to update this, what else are they slacking off on?
|
# ¿ Dec 9, 2019 03:13 |
|
Matt Zerella posted:Missed my post where I mentioned the modern SMB is coming when 6.8 goes final? SMB1 is horrifically insecure for a variety of reasons and having it enabled at all means that someone who is able to gain a man-in-the-middle position could downgrade a connection even between two modern systems and then do whatever they wanted with it. Even Microsoft has been recommending that everyone disable SMB1 since 2016 and has been doing so by default in Windows since late 2017. It'd be OK if they were just testing a new version that disabled SMB1 where previous versions supported all of them, but if they really do not support anything but SMB1 on the current stable release that's just plain irresponsible. Anything that requires SMB1 be enabled in the last few years should have been treated as outdated junk. As far as I can tell they use Samba and aren't doing anything special with it, they just for whatever reason have configured it to disable the newer protocols. There are instructions on their forums for enabling and enforcing SMB2+ with a few lines of config, why they didn't do the same long ago and require those who need to support ancient trash to do the config edits I have no idea. wolrah fucked around with this message at 07:10 on Dec 9, 2019 |
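Those "few lines of config" amount to something like this in `smb.conf` (a sketch using current Samba parameter names, not copied from their forum; on very old Samba releases the option was spelled `min protocol`):

```ini
[global]
    # Refuse SMB1/NT1 entirely; require SMB 2.0.2 or newer in both directions.
    server min protocol = SMB2_02
    client min protocol = SMB2_02
```

Newer Samba releases default `server min protocol` to SMB2_02 on their own, which makes shipping an SMB1-only default even harder to excuse.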
# ¿ Dec 9, 2019 07:08 |
|
Buff Hardback posted:Having network shares show in the "Network" pane requires SMBv1 to be enabled. Mapping shares as a network drive or navigating directly to \\hostname will work without enabling SMBv1 on Windows.

quote:No clue why everyone decided to interpret it as "unraid only uses SMBv1"

HalloKitty posted:Ok, that's a pretty important distinction

If it supports SMB3 but still allows connections from SMB1, that's not the most secure configuration in the world, but it's a reasonable default for a commercial product where compatibility without configuration is desirable to some users. If it requires that clients have SMB1 enabled to access the current stable version, something is horribly wrong with their priorities and it'd make me wonder what else they have that badly wrong.
|
# ¿ Dec 10, 2019 22:51 |
|
There are plenty of applications where more memory is more valuable than ultimate memory performance. The moment you have to swap that difference becomes irrelevant.
|
# ¿ Dec 23, 2019 17:07 |
|
This is a good summary of the topic. Linus' legal concerns with OpenZFS are well founded, without Oracle's explicit approval there's no reasonable way it could end up in the kernel proper, and as a result the technical issues with how the kernel handles internal interfaces are similar to what we've seen for years with binary GPU drivers. His opinions on ZFS as a filesystem though are pretty much entirely nonsensical and the idea that btrfs is even in the same ballpark is hilarious. wolrah fucked around with this message at 22:28 on Jan 13, 2020 |
# ¿ Jan 13, 2020 22:26 |
|
taqueso posted:Are there any especially good deals for bulk storage sized SSDs? I'd like to make a small array for a car computer where I'm scared to use spinning disks.
|
# ¿ Feb 20, 2020 23:32 |
|
Henrik Zetterberg posted:6.4 TB free... Moey posted:4.1 TB free... This is after temporarily moving over a few large TV series I wasn't actively watching to one of my PCs that happened to have a 5TB drive in it. I need to pretty much just rebuild from scratch at this point. On the plus side, I could copy literally everything to a single drive for holding.
|
# ¿ Mar 11, 2020 21:46 |
|
Do any of the NAS-focused distros have first class support for both ZFS and a more flexible drive pooling system that can work with a random collection of disks? It seems like those that support one don't support the other, at least not officially, and if I'm going to have to manage one or the other from the command line I figure I may as well just run Ubuntu and do it all manually.

Just to ensure I'm not X/Ying myself, here's my situation and logic. The vast majority of my data, everything before the point in my terabyte count and then some, is downloaded content of some sort. Linux ISOs, lancache, podcasts, etc. Most of that could be trivially re-downloaded with little to no effort on my part as long as I knew what I had lost. The more free-form drive pooling solutions like WHS2011, Greyhole, and maybe SnapRAID if I'm understanding it correctly are perfect for this stuff. Losing part of a file is a lot worse than losing the whole thing, so I would like to avoid any kind of striped pool for this one. I lost a single drive in a LVM JBOD once and sorting out what files survived and what hadn't from that mess was such a pain in the rear end that I ended up just deleting a large chunk of it and starting over.

That said, I would like to also be able to use this box as the storage host for my VMs so I can play around with failover and such. For that, performance is going to be a lot more important than raw capacity, with high availability coming in second. My thought right now is something along the lines of a ZFS RAID 10 of 1TB SSDs for the high performance pool and some kind of file-level pooling solution configured to single redundancy for the bulk pool.

Does that sound like the right answer for what I want to do, and if so do any of the "appliance" style distros support doing both of these things without leaving the web interface? If I just go at it on my own again, any thoughts on SnapRAID vs. Greyhole vs. other for the random disk pooling?
Or should I consider two separate boxes instead, maybe moving bulk storage up to my HTPC and making the actual server machine high-performance only?
|
# ¿ Apr 3, 2020 01:48 |
|
DrDork posted:I'm not really sure your use cases have outlined a good reason to go with ZFS at all, honestly. From the sound of it you have: In that context does it make a bit more sense? I mean yeah, from a practical sense everything I do with VMs currently runs off of a single SATA SSD in my desktop.
|
# ¿ Apr 3, 2020 16:25 |
|
IOwnCalculus posted:I once sat down and started working out what it would take to use a Raspberry pi to control some relays to be able to remotely power on/off the box, and use it as a serial terminal. I for one would love a general-purpose "IPMI" adapter, even if it was just that plus video capture, power, and USB. While it's usually easy enough to spec a proper server platform in new builds, it would be really nice to have some kind of remote diagnostic ability I could add to home machines built from spare parts, or to existing servers at customer sites that weren't specced with remote management or aren't really actual server hardware. Doing a bit of poking around, I've found someone selling an adapter that claims to convert 1080p HDMI to a signal a Pi can accept on its camera input for $20, so I think I might order one of those and see how it goes. It'd obviously be more useful for a lot of server hardware if I could get VGA input on it, but my own home hardware has a desktop GPU installed, so this will work for my needs.
|
# ¿ Apr 21, 2020 16:29 |
|
Rexxed posted:That seems very cheap, I was looking into the same thing a year or two back and the best I could find (at the time) was https://auvidea.eu/product-category/csi2bridge/hdmi2csi/ which interfaces with the pi camera input and wasn't that cheap. Well, the source is some random guy on Youtube whose only contact information is a gmail address, but there are people on a few forums talking about having received parts from him so I figure I'll give it a shot as long as he takes paypal or something along those lines. Worst case I'm out $20. There's also a product on Alibaba for $36. It's the same chip as in the Auvidea design AFAIK. Apparently a normal Pi only has two lanes of CSI and thus can only capture 1080p at 25-30 FPS, but for purposes of a DIY external remote management box that's perfectly sufficient. A Compute Module has a full four lane interface.
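The two-lane limit roughly checks out on paper. A sketch, with two loudly flagged assumptions: the ~1 Gbit/s-per-lane figure for the Pi's CSI-2 receiver is a commonly cited ballpark I haven't verified, and 16 bits/pixel assumes a YUV 4:2:2 capture format:

```python
# Uncompressed video bandwidth at a given resolution, frame rate, and depth.
def video_bps(width: int, height: int, fps: int, bits_per_pixel: int = 16) -> int:
    return width * height * fps * bits_per_pixel

LANE_BPS = 1e9  # assumed ~1 Gbit/s per CSI-2 lane (ballpark, unverified)

print(video_bps(1920, 1080, 30) / 1e9)  # ~0.995 Gbit/s: tight fit on 2 lanes
print(video_bps(1920, 1080, 60) / 1e9)  # ~1.99 Gbit/s: wants all 4 lanes
```

So 1080p30 just squeaks into two lanes' worth of bandwidth, which lines up with the 25-30 FPS cap, while 1080p60 would need the Compute Module's four-lane interface.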
|
# ¿ Apr 21, 2020 18:01 |
|
Lowen SoDium posted:I have actually ordered a couple of those HDMI to CSI2 bridge boards this last week from Aliexpress. I am attempting to build a zero-U IP-KVM device using an RPi.

IP KVM is basically my minimum goal, remote media should be an easy second milestone if I can get the KVM part to work, and then remote power/reset is icing on the cake.

quote:I actually didn't know that the Pi4 could present itself as a keyboard and mouse. My plan was to put an Arduino Micro Pro on top to be the HID device for the managed PC and communicate to it from the Pi using serial.

The Pi 4 uses a completely different peripheral architecture and connects the gadget mode interface to the USB-C port, so it's available at all times even with the USB hub and ethernet port. There's not much Pi 4-specific documentation at the moment, but for the most part what you can find out there for Pi Zero as a USB gadget will work the same on the other compatible models.

Moey posted:Keep us posted. wolrah fucked around with this message at 20:21 on Apr 21, 2020 |
# ¿ Apr 21, 2020 20:17 |
|
Wild EEPROM posted:4) Software raid is usually not portable. That means if you have to reinstall your OS, your raid won't go with it. Usually this is if your hardware dies and you have to buy new parts. I have personally moved Windows dynamic disks and Linux md arrays between systems with no problems, and as far as I'm aware OS X's disk sets are equally portable. Windows won't automatically mount the array, it'll flag it as foreign by default, but that's a matter of two clicks in Disk Management to import it. Likewise on Linux, you have to do a mdadm scan for it to identify a newly attached array but we're not talking about rocket science here. Are you maybe thinking about those setups often bundled with "gamer" motherboards that are some proprietary softraid pretending to be hardware RAID? Those are somewhat tied to the hardware of course, but they're easy enough to just not use. I'm pretty sure Linux md is actually able to mount a lot of these as well, as long as the array layout is stored on disk somewhere and not just in an EEPROM on the motherboard.
|
# ¿ Apr 30, 2020 16:38 |
|
I will vouch for that case being great for DIY server builds. It's small enough to fit in to a "LackRack" setup but still large enough to support full-size desktop computer components so the fans are quiet and you can use normal PSUs, expansion cards, etc.
|
# ¿ Jun 7, 2020 15:18 |
|
H110Hawk posted:It genuinely surprises me that pi implementations r/w to their disk so much. In theory the whole point is to not do that. Most of the actual appliance distros built for the purpose are pretty good about this and either run entirely r/o or use a r/o boot partition separate from a r/w partition for user data. Anything built on top of Raspbian on the other hand behaves mostly like a normal Debian system, as you'd reasonably expect.
|
# ¿ Jun 22, 2020 19:16 |
|
H110Hawk posted:I apparently unreasonably expect something called pihole that's been around for years now to have a very low i/o footprint due to the same reasons that are listed here. Nothing but basic config documents should be persisted to disk. Databases can be downloaded per boot. Stats can be lost on unclean reboot. If you desperately want to it could be persisted on clean reboot. I have no idea why it's so popular and do not encourage people to run it. It's just HOSTS file based ad blocking on a larger scale.
|
# ¿ Jun 23, 2020 22:14 |
|
D. Ebdrup posted:Speaking of 10G, an article on the cheapest 10GbE just appeared in my RSS feed. I got 2x Mellanox ConnectX-3 40/56G InfiniBand cards off eBay and bought a brand new QSFP+ DAC from FiberStore. Less than $100 total with shipping and I can hit 35 gigabits per second of file transfer between RAM disks. My server's hard drives are the real world limiting factor at this point and it's glorious. The catch of course is that unless you're looking at some EoL gear like old Brocade ICXes the switch costs get crazy. For now I'm avoiding that problem by just having my desktop and server plugged directly together, the rest of the LAN can share the gigabit link.
|
# ¿ Jun 30, 2020 00:26 |
|
Hadlock posted:Too bad you can't boot from S3 yet. Just store the access keys and S3:url in the BIOS and boom. Might be a little slow to recover from a power outage longer than your UPS can handle, but yeah If you're interested in that sort of thing, you can install it to a USB drive and boot from there as an experiment before committing to warranty voiding.
|
# ¿ Jul 1, 2020 03:04 |
|
rufius posted:I really like my little TVS-471. I would definitely buy another QNAP. Had bad exp with Synology previously so I’m wary to try them again.
|
# ¿ Jul 2, 2020 15:44 |
|
I bought a few of the cheap HDMI-USB adapters after seeing that and they do work, I'm going to set up my Pi 4 with one hooked up to my server as soon as I get my 3D printer back online to print a case. Will definitely post a trip report.
|
# ¿ Aug 3, 2020 15:06 |
|
BabyFur Denny posted:Smb is just the network protocol for sharing the drive and if you're the only users on the network it should be fine. Otherwise you might be able to set a higher version of the protocol on the server, maybe after installing the latest update. No need to replace the whole thing The Alt-F firmware adds support for a few other protocols, but nothing that would be easily used from a Windows PC as a shared drive. I'd say it's time to retire it. A Raspberry Pi 4 with some USB hard drive enclosures would probably be the cheapest solution and would almost certainly outperform the D-Link, but if DIY isn't your thing then any modern commercial appliance should also be good.
|
# ¿ Sep 6, 2020 19:51 |
|
Warbird posted:Coming off of that, I assume that a Pi4 should be fine for NFS/SMB duties since they ungoobered the USB/Ethernet stuff? I resurrected my old desktop setup for server duties and I'm debating just letting the drives live there and network mounting on the more powerful machine. At this point the bottleneck should be Gigabit Ethernet itself. quote:And coming off the coming off, am I losing a noticeable amount of throughput by having the machines not hooked into the same switch? Outside of what I can see via tracert that is.
|
# ¿ Sep 8, 2020 16:13 |
|
KingKapalone posted:Earlier this year I installed two Noctua NF-R8 PWM fans since the last non-PWM fans started whining. Now the fans consistently spin up and slow down. I looked in the manual and saw that I can control the fans in IPMI https://www.supermicro.com/manuals/motherboard/C222/MNL-1428.pdf

edit: The Wonder Weapon posted:Do you guys have any suggestions on all-in-one keyboards for your media PCs?

Plex is the favorite over in the HTPC thread and Kodi is still very popular with those of us who have been doing this a long time. There's also Emby and Jellyfin. wolrah fucked around with this message at 17:56 on Sep 8, 2020 |
# ¿ Sep 8, 2020 17:46 |
|
KingKapalone posted:So is this saying it sets the fans to Full but then redefines Full as say 75% or something? It sounds like a software way of just disabling the PWM functionality since it will be fixed at that speed percent. It wouldn't be too hard to write a script that polled the temperature sensors every X time period and set the fan speed as you'd like. There's a script linked at the bottom of the post that does something close.
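A minimal sketch of such a polling script. Big caveat: the `ipmitool raw 0x30 0x70 0x66 ...` bytes are the commonly reported Supermicro zone-duty command floating around various forums, not something from the linked manual, and the temperature thresholds are arbitrary; treat all of it as an assumption and verify against your own board before running it:

```python
import subprocess

def duty_for_temp(temp_c: float) -> int:
    """Map a CPU temperature to a fan duty cycle percent. Thresholds are examples."""
    if temp_c < 40:
        return 30
    if temp_c < 60:
        return 50
    return 100

def set_zone_duty(zone: int, duty: int) -> None:
    # Supermicro-specific raw command (assumed; verify for your board).
    # Zone 0 is typically the CPU zone, zone 1 the peripheral zone.
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
                    str(zone), str(duty)], check=True)

# In a loop or systemd timer you'd read a sensor (e.g. parse `ipmitool sdr`)
# every X seconds and call:
#   set_zone_duty(0, duty_for_temp(current_temp))
print(duty_for_temp(45))  # 50
```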
|
# ¿ Sep 9, 2020 15:38 |