|
I used XP Pro x64 Edition for a couple years, and if drivers were available for your hardware it was outright a better OS than XP.
|
# ? Oct 21, 2016 15:14 |
|
|
During my Vista years, my boot volume was a WD Raptor RAID0. That was WAY fast, but only like 72GB, and the whole RAID0 thing. I never lost any data, but I was drat sure to have my actual personal data on a 2nd drive.
|
# ? Oct 21, 2016 15:24 |
|
I didn't run windows update until the late XP SP2 and Windows 7 era. So that was a thing. Also I forgot to put in the motherboard standoffs when I built one of my computers. Took several months to figure out why it would randomly crash.
|
# ? Oct 21, 2016 18:25 |
|
mayodreams posted:During my Vista years, my boot volume was a WD Raptor RAID0. That was WAY fast, but only like 72GB, and the whole RAID0 thing. I never lost any data, but I was drat sure to have my actual personal data on a 2nd drive. I ran two 36GB Raptors in RAID 0 on my Athlon 64 3000+. That thing was a screamer for the time. The sounds the drives made were awesome. Still have them.
|
# ? Oct 21, 2016 19:09 |
|
havenwaters posted:I didn't run windows update until the late XP SP2 and Windows 7 era. So that was a thing. Also I forgot to put in the motherboard standoffs when I built one of my computers. Took several months to figure out why it would randomly crash. Oof! I had two SSDs in RAID 0. I think they were 30GB Vertex 1s. I remember it being fast, but I also spent £200+ on a controller card just so I could have 'hardware RAID'. I'm still not sure if it was genuinely running as hardware RAID. I own two 250GB Samsung 850 Evos at the moment: one each for laptop and desktop. I removed the laptop one a few days ago to run a different drive in it. So I'm sitting there thinking to myself, "You've got two Samsung 850s here... why not have them both in the desktop running RAID 0?" I was quite tempted before realising that I'd only be doing it for the sake of it: even if it was noticeably faster (I have my doubts), I wouldn't actually need the extra speed for anything in particular.
|
# ? Oct 21, 2016 23:00 |
|
NihilCredo posted:We all did stupid nerd poo poo in the aughts. I used Windows Server 2003 as my gaming machine for a few years because somebody, somewhere said that it has less bloat and was therefore faster than XP. apropos man posted:That makes sense. The only time I've noted the number of platters before purchase was on an old Seagate Momentus XT: I bought the single platter 250gb version because I obsessively figured that a single platter would load Windows from the outside edge and be therefore faster as an OS drive. How obsessively embarrassing! There is actually something to managing data on platter drives. By partitioning the drive, you can force the drive to "short-stroke" (lol), because data will be constrained to a certain region on the drive, regardless of fragmentation. Of course, if you're randomly accessing data over all partitions, the advantages fly out of the window, but it'd still work when just booting or whatever. Combat Pretzel fucked around with this message at 23:05 on Oct 21, 2016 |
# ? Oct 21, 2016 23:00 |
|
I think my work PC still has a Raptor in it. I should probably stop using it one of these days; it's been in use for 7+ years. It hasn't been the OS drive for 4 years or so though, ever since I got an SSD. Completely unrelated: if I want to replace a drive in my Synology NAS, can I just dd its data to a new one and somehow expand the partitions in the admin panel? I have a 2-drive box running without any type of RAID configured.
|
# ? Oct 21, 2016 23:07 |
|
Combat Pretzel posted:There is actually something to managing data on platter drives. By partitioning the drive, you can force the drive to "short-stroke" (lol), because data will be constrained to a certain region on the drive, regardless of fragmentation. Of course, if you're randomly accessing data over all partitions, the advantages fly out of the window, but it'd still work when just booting or whatever. I remember the term "short stroke", so that's definitely what I was trying to achieve. I can't remember if I partitioned it to keep Windows constrained to a certain area on the edge of the platter. Probably not.
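For anyone curious how much the "edge of the platter" effect actually buys you, here's a toy model (the density and radius figures are my own illustrative assumptions, not measurements from these drives): at constant RPM and roughly constant linear bit density, sequential throughput scales with track radius, so constraining a partition to the outer tracks also pins Windows to the fastest zone.

```python
# Toy model of why outer tracks on a 3.5" platter are faster.
# Assumptions (illustrative only): constant 7200 RPM, roughly constant
# linear bit density along each track, so throughput ~ track radius.

def sequential_rate(radius_mm, rpm=7200, bits_per_mm=35_000):
    """Sequential transfer rate in MB/s at a given track radius."""
    track_bits = 2 * 3.14159265 * radius_mm * bits_per_mm
    revs_per_sec = rpm / 60
    return track_bits * revs_per_sec / 8 / 1e6  # bits/s -> MB/s

outer = sequential_rate(radius_mm=46)   # outer edge of a 3.5" platter
inner = sequential_rate(radius_mm=20)   # innermost tracks
print(f"outer: {outer:.0f} MB/s, inner: {inner:.0f} MB/s, "
      f"ratio: {outer/inner:.1f}x")
```

The seek-time win from short-stroking (the heads never travel far) comes on top of this and isn't modeled here; this only shows the zoned-recording side of the argument.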
|
# ? Oct 21, 2016 23:24 |
|
Combat Pretzel posted:There is actually something to managing data on platter drives. By partitioning the drive, you can force the drive to "short-stroke" (lol), because data will be constrained to a certain region on the drive, regardless of fragmentation. Of course, if you're randomly accessing data over all partitions, the advantages fly out of the window, but it'd still work when just booting or whatever. This is also why games on optical disks often had padding files. I personally dealt with them when trying to reduce the size of my PSP ISOs after ripping.
|
# ? Oct 21, 2016 23:37 |
|
OpenSolaris actually made a whole effort back then to improve the boot times of their live CDs: they analyzed the whole boot cycle and rearranged the physical data layout on the disc for minimal seeks.
|
# ? Oct 22, 2016 00:02 |
|
Walked posted:Can anyone offer a suggestion to nail down a bottleneck? So I need some help figuring this out. I moved my 10GbE NIC to a 710 with H700 controller, and 2x 850 Evo in RAID 0 to completely eliminate HDD as a possible bottleneck. I also directly connected it to my workstation to remove cabling and the switch from the picture. Still capping at 2Gbit/sec transfer. gently caress. I've verified the source disk (850 Pro 1TB) is capable of much more than 2Gbit/sec. So it seems it has to be something with the PCIe bus or some other weird quirk in Windows 10. PCIe is running in Gen3 mode at 8x, so it shouldn't be a bus limitation. Any other ideas?
|
# ? Oct 22, 2016 14:41 |
|
What are you using for your 10g switch? I would check that your firmware and drivers line up. Are Server 2016 and Windows 10 actually supported for the 10g HBA? From the HDD perspective, 4 disks, even in RAID 10, isn't a lot of spindles.
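As a rough sanity check of that spindle-count point (the per-disk rate below is an assumed ballpark for a 7200 RPM drive, not a measurement):

```python
# Back-of-envelope aggregate throughput of a 4-disk RAID 10 vs a 10GbE link.
# Per-disk sequential rate is an assumed ballpark figure.

PER_DISK_MBPS = 180          # MB/s, assumed per-spindle sequential rate

def raid10_rates(disks, per_disk=PER_DISK_MBPS):
    reads  = disks * per_disk          # reads can be serviced by every spindle
    writes = (disks // 2) * per_disk   # writes land on half (mirror pairs)
    return reads, writes

reads, writes = raid10_rates(4)
print(f"reads:  {reads} MB/s ~= {reads * 8 / 1000:.1f} Gbit/s")
print(f"writes: {writes} MB/s ~= {writes * 8 / 1000:.1f} Gbit/s")
# A 10GbE link tops out around 1250 MB/s, so even best-case sequential
# reads from 4 spindles fall well short of saturating it.
```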
|
# ? Oct 22, 2016 15:19 |
|
mayodreams posted:What are you using for your 10g switch? Like I said: to eliminate HDD as the bottleneck I'm going from SSD (850 Pro 1TB) --> SSD RAID 0 (2x 850 Pro on hardware RAID 0, with battery and 512MB cache, write-back enabled/forced). I should very easily be doing more than 2Gbit/sec; maybe not maxing out 10GbE, but notably better than what I'm seeing. I've eliminated the switch from the equation by directly connecting the hosts. Windows 10 is supported; I'm using mainstream Intel X540-T2 adapters. Edit: on a whim I blew away VMware and installed Server 2016. Getting speeds as expected now. Something is amiss in the ESXi default drivers, it seems. Walked fucked around with this message at 15:42 on Oct 22, 2016 |
# ? Oct 22, 2016 15:24 |
|
Walked posted:Like I said; to eliminate HDD as the bottleneck I'm going from SSD (850 Pro 1TB) --> SSD RAID 0 (2x 850 Pro on hardware RAID 0, with battery and 512MB cache, write-back enabled/forced); I should very easily be doing more than 2Gbit/sec; maybe not maxing out 10GbE, but notably better than what I'm seeing. You didn't mention VMware, so that complicates things a lot. The HBA you are using is supported in ESXi 6.0, but it looks like you need to download the driver from VMware: VMware Download You should also check whether your server is on the compatibility list, and whether it needs additional drivers. The vanilla ESXi image will 'work' until it doesn't.
|
# ? Oct 22, 2016 16:17 |
|
Meant to edit but hit reply. Was your Windows VM using E1000 or VMXNET3?
|
# ? Oct 22, 2016 16:21 |
|
What's the CPU usage during the transfer? You can lose a lot of throughput if you're doing things like TCP checksums on the CPU (yes, I know it should still be faster than 2 Gbps on the CPU). Check your TCP frame/segment sizes as well. If you're doing primarily sequential transfers, check that your MTU is at least 4k. Your OS could also be mucking up buffering, severely limiting bandwidth. On Linux you need to tune settings in /etc/sysctl.conf to increase kernel buffer sizes for TCP sockets, for example, because the defaults aren't great past a gigabit. Windows probably doesn't need this tuning, but it's worth mentioning. Furthermore, what are you using to test bandwidth? You should be using tools that test primarily NIC-to-NIC with as little use of other components (such as your disks) as possible. iperf is fine for this, but you can also just do pings, estimate bandwidth based upon ICMP packet size and latency, and compare the bandwidth-delay product to the buffer sizes you set.
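The bandwidth-delay product comparison suggested here is quick to do by hand. A minimal sketch (the RTT below is an assumed figure for a direct-connected LAN link, not a measurement):

```python
# Bandwidth-delay product: the TCP socket buffer must hold at least one
# BDP of in-flight data or a single stream can never keep the link full.

def bdp_bytes(rate_gbps, rtt_ms):
    """Bytes in flight needed to fill a rate_gbps link at rtt_ms RTT."""
    return int(rate_gbps * 1e9 / 8 * (rtt_ms / 1e3))

link = bdp_bytes(rate_gbps=10, rtt_ms=0.5)   # assumed direct 10GbE RTT
print(f"BDP: {link} bytes ({link / 2**20:.2f} MiB)")
```

If the computed BDP exceeds the kernel's socket buffer ceiling (on Linux, the max values in net.ipv4.tcp_rmem / tcp_wmem), a single TCP stream caps out early no matter how fast the disks on either end are.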
|
# ? Oct 22, 2016 16:30 |
|
So, is this Intel X540 a worthwhile investment for a direct high-speed link to the NAS? I assume T1 and T2 indicate the number of ports on the card, because that's how it looks on eBay?
|
# ? Oct 22, 2016 16:33 |
|
I don't feel 10g is worth the cost yet unless you want a baller home lab with separate storage and hypervisor, and you can also accomplish that with LACP on a managed switch. Unless you have a very high-performing storage system, you are really not going to push past the limits of gigabit in the home. Even at the entry level of the enterprise, 10g isn't that necessary unless you have a lot of load on the storage. A lot of storage fabric is 4G/8G FC connections. Our production ESXi hosts are 2 x 10g twinax for networking and 2 x 8G FC for storage. They host anywhere from 20-50 VMs and I'd have to look, but I doubt they really push the storage that much. The dual connections are really for redundancy rather than aggregate bandwidth. The benefit of having everything using VMXNET3 on a single ESXi host is that everything is 10g internally. Of course, you can't put your storage VM on that storage, but stuff like FreeNAS is supposed to boot from a USB/SD card anyway on bare metal.
|
# ? Oct 22, 2016 17:14 |
|
How much can I switch things around in FreeNAS? I have 5% used of a 4x3TB striped pool and I'd like to switch it to RAID5. What's the sanest way to do this?
|
# ? Oct 22, 2016 19:16 |
|
Greatest Living Man posted:How much can I switch things around in freeNAS? I have 5% used of a 4x3 TB striped pool and I'd like to switch it to raid5. What's the sanest way to do this? Copy it elsewhere and then destroy the pool and remake it as whatever you want. One of the biggest limitations of FreeNAS and everything else built on ZFS is that it really doesn't appreciate reforming storage units on the fly.
|
# ? Oct 23, 2016 04:17 |
|
Assuming you did it as ZFS because FreeNAS really pushes ZFS... You have to blow it away and start over.
|
# ? Oct 23, 2016 04:46 |
|
Especially with 5% usage, burn it down and rebuild.
|
# ? Oct 23, 2016 05:01 |
|
I finally pulled the trigger on the gigantic NAS I've been planning for like 6 or 7 years now. It's running FreeNAS with two 8-drive RAID-Z2 arrays, one with 8TB drives and one with 4TB drives, for a total of ~60TB of usable space. I already owned the 4TB drives, so I built the system with the 8TB drives, migrated the data off the old 4TB drives onto the NAS, then added the 4TB drives to the NAS to expand the primary volume. All data is shared via SMB/CIFS. The system also hosts two Debian bhyve VMs running rtorrent/ruTorrent and Sonarr. I tried qBittorrent but the web UI is awful. I also tried hosting stuff in jails, but it wasn't very stable and FreeNAS is moving away from jails anyway. I'll be setting up a third bhyve VM (probably FreeBSD) to host nginx for my personal website. I have scripts set up to send SMART reports, etc., to my email address on a weekly basis, and scrubs/SMART checks scheduled every 2 weeks. I got a Perl script from the FreeNAS forums that controls my fan speed based on HDD and CPU temps. I have CrashPlan running on my desktop to keep the most important data from the server backed up. I might try to move CrashPlan to the server in the future (there's a FreeNAS plugin, but it's outdated and doesn't work any more), but it seems like it will be a huge pain. I also hate CrashPlan, so I might just drop it and go with ACD + rsync instead. Main volume -- http://i.imgur.com/JzYh5u1.png Pics of the server itself -- http://imgur.com/a/hHsAL Server noise while under moderate load -- https://www.youtube.com/watch?v=n1P188d9zsk Here's the parts list: code:
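A minimal sketch of the kind of weekly SMART-report script described in this post (the sample smartctl output, watched attributes, and the idea of flagging nonzero values are all illustrative; the actual script presumably shells out to smartctl per drive and mails the result):

```python
# Parse `smartctl -A`-style attribute output and flag values worth
# emailing about. SAMPLE is a fabricated excerpt for demonstration.

import re

SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   064   055   000    Old_age   Always       -       36
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
"""

# Attributes where any nonzero raw value is a warning sign (assumed list).
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def parse_attrs(text):
    """Map attribute name -> raw value from smartctl-style table lines."""
    attrs = {}
    for line in text.splitlines():
        m = re.match(r"\s*\d+\s+(\S+)\s+\S+\s+\d+\s+\d+\s+\d+"
                     r"\s+\S+\s+\S+\s+\S+\s+(\d+)", line)
        if m:
            attrs[m.group(1)] = int(m.group(2))
    return attrs

def warnings(attrs):
    """Watched attributes with nonzero raw values."""
    return {k: v for k, v in attrs.items() if k in WATCHED and v > 0}

print(warnings(parse_attrs(SAMPLE)))  # nonzero pending sectors -> email
```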
|
# ? Oct 28, 2016 20:54 |
|
A thought: with the new power delivery standards from USB Type C, should we now be able to have external 3.5" HDDs that don't need a separate power brick?
|
# ? Oct 30, 2016 12:14 |
That is pretty badass Melp. Thanks for the writeup and pics!
|
|
# ? Oct 30, 2016 17:13 |
|
Lack rack strikes again. Surprised that the coffee table can support the weight of all that; I'd expect to see several L-brackets in use. Also, that's pretty pricey for what amounts to 16 extra FreeBSD-supported SATA ports. The M1015 is way overpriced on eBay now because of all the people who were putting them into their random NAS builds, and you can find even M1115 controllers for cheaper than the M1015.
|
# ? Oct 30, 2016 20:10 |
|
necrobobsledder posted:Lack rack strikes again. Surprised that the coffee table can support the weight of all that, I'd expect to see use of several L brackets. I noted the price on the M1015s wrong; they were $75 shipped each for new/open box, so not too bad.
|
# ? Oct 30, 2016 20:51 |
|
NihilCredo posted:A thought: with the new power delivery standards from USB Type C, should we now be able to have external 3.5" HDDs that don't need a separate power brick? Yes, it's possible, although there aren't many right now. http://www.seagate.com/consumer/backup/innov8/
|
# ? Oct 30, 2016 21:06 |
|
I've got one of those early-gen Seagate 8TB Archive drives. I have the luxury of running some lengthy tests on it before I put it into 'production' (home media player). Should I be running Seatools or something else to validate the drive? I see the latest Seatools DOS is v2.23 on seagate.com, which is like 2010/2011 vintage. Should I use that or some 3rd-party tool? edit: Seagate tech support says v2.23 supports my drive even though my drive is SMR and about 5 years newer than the v2.23 build of Seatools. Shaocaholica fucked around with this message at 18:07 on Oct 31, 2016 |
# ? Oct 31, 2016 17:07 |
|
Shaocaholica posted:I've got one of those early gen Seagate 8TB Archive drives. I have the luxury of running some lengthy tests on it before I put it into 'production' (home media player). Should I be running Seatools or something else to validate the drive? I see the latest Seatools DOS is v2.23 on seagate.com which is like 2010/2011 vintage. Should I use that or some 3rd party tool? While there are a bunch of programs that can stress test a drive, most of them also include a bunch of benchmarking tools that aren't really needed just for making sure a drive is good. The only things you really need to do are read and write every sector of the drive; this should catch most drives that are going to fail early, and Seatools works just fine for it. The two things you'll want to run are a Full Erase (writes 0s to every sector) and a Long Generic (reads every sector). If the drive makes it through these two without any errors, I'd call it good to go.
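The Full Erase + Long Generic pass amounts to "write every sector, then read everything back and check". Here's a file-level sketch of the same idea, run against an ordinary scratch file rather than a raw block device (path and sizes are made up and kept tiny so the demo finishes instantly):

```python
# Write-then-verify pass, file-level analogue of Full Erase + Long Generic.
# Writes deterministic pseudorandom blocks, then re-reads and compares.

import hashlib
import os
import tempfile

BLOCK = 1 << 20          # 1 MiB blocks
BLOCKS = 8               # tiny total size for the demo

def write_pass(path):
    """Write deterministic pseudorandom blocks; return per-block digests."""
    digests = []
    with open(path, "wb") as f:
        for i in range(BLOCKS):
            # Expand a 32-byte seed hash to a full block.
            block = hashlib.sha256(i.to_bytes(8, "big")).digest() * (BLOCK // 32)
            f.write(block)
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def verify_pass(path, digests):
    """Re-read every block and compare digests, like the read-back scan."""
    with open(path, "rb") as f:
        for expect in digests:
            if hashlib.sha256(f.read(BLOCK)).hexdigest() != expect:
                return False
    return True

path = os.path.join(tempfile.mkdtemp(), "scratch.bin")
digests = write_pass(path)
print("verify:", verify_pass(path, digests))  # True if every block reads back
```

Against a real drive you'd do the same thing to the raw device with O_DIRECT-style unbuffered I/O so the page cache doesn't hand you back your own writes, which is exactly what the vendor tools handle for you.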
|
# ? Nov 1, 2016 18:19 |
|
Krailor posted:While there are a bunch of programs that can stress test a drive most of them also include a bunch of benchmarking tools that aren't really needed just for making sure a drive is good. Yep, did the full erase pass in Seatools but it crashes around 15-20% in on my 8TB drive. I might just have a bad copy of it. Oh well, I already put it into 'production' after a full surface scan in some GNU app whose name I forget.
|
# ? Nov 2, 2016 22:23 |
|
Doing either a full write or a full erase on a shingled drive should be a very bad idea, no? Wouldn't it instantly put it into its super slow rewrite mode? I didn't run any extensive tests on mine for that reason, but then again I don't have anything that can't really be replaced on it (or on any other drive, for that matter).
|
# ? Nov 2, 2016 22:34 |
|
NihilCredo posted:Doing either a full write or a full erase on a shingled drive should be a very bad idea, no? Wouldn't it instantly put it into its super slow rewrite mode? This shouldn't impact anything as long as you format the drive before (re)using it; the shingled rewrite issue only comes into play if there's actual data to rearrange. It still uses the same GPT as a normal drive and therefore should be aware that it's "empty" and thus can write over everything.
|
# ? Nov 3, 2016 01:06 |
|
NihilCredo posted:Doing either a full write or a full erase on a shingled drive should be a very bad idea, no? Wouldn't it instantly put it into its super slow rewrite mode? Wouldn't this have come up in designing the drive and its FW? Not having the ability to zero the drive and maintain performance would be kind of a show stopper, no? Is it not standard practice to zero drives before putting them into production for enterprises? Catch the bad ones early.
|
# ? Nov 3, 2016 20:24 |
|
I'm helping my friend put together a computer for video editing. He had suggested doing an external Thunderbolt 3 RAID for storage, which I feel would be overpriced for the performance. I figured we could maybe do an internal RAID with three 7200rpm drives, but at the end of the day I feel like a high-capacity Samsung Pro drive might be faster and easier to set up to run the editing off of. I figured a 250-500GB Evo drive for OS/programs, a 1TB Pro drive for working off of, and 1-2 more drives for bulk storage. How much of a difference does 7200 vs 5400 rpm make in transfer speeds for storage? Is doing an internal RAID worth it? I still need to find out the max amount of raw footage he would work with, but I feel like if it's under 1TB at a time then an SSD to work off of would be the best bet, with a few more bulk storage drives.
|
# ? Nov 3, 2016 22:20 |
|
Wouldn't an ssd be better for this sort of thing than raided rust?
|
# ? Nov 4, 2016 03:30 |
|
Having fast external storage is going to be useful if he's going to be moving data between sites/clients. Plus he can run off with it in a hurry if his house burns down or something. I'd only seriously commit to it if he's doing it for money or is really, really serious about it. SSDs inside, of course.
|
# ? Nov 4, 2016 03:37 |
|
Hello experts, I scanned the OP (though unsurprisingly it was last edited in 2012 and most links are dead or outdated) and the last page, and did a bit of my own research, but I'm still uncertain, so I thought I'd just ask if I'm even looking for the right thing. Basically I had an old PC serving files on my home LAN (via HTTP server, network drive share, etc.) from a mixture of existing 1, 2, 3 or 4 TB drives that I'm looking to replace with something more economical that can be set up and administered remotely (the PC, not the drives). Data is mostly noncritical stuff like compressed drive backup images or a history of database backup dumps that are infrequently written and even more rarely read, so transfer speed is not a factor at all. The few non-redundant parts that matter are backed up off-site anyway, so internal redundancy or a lost hard drive barely matters. I was looking at entry-level 4-bay NAS stuff from QNAP and Synology, but those apparently all (re)format any existing drives even for non-RAID setups, and I'd rather avoid the required dump/restore of all existing data for absolutely no benefit. If I just want to hook up a variety of existing, non-uniform disks (more would be better, but 4 is fine) to my network, are dedicated multi-bay enclosures even the right place to look, or is a custom-built fanless PC the only way to go? Is there some obvious ready-made alternative that I just haven't stumbled on yet? RoadCrewWorker fucked around with this message at 14:47 on Nov 6, 2016 |
# ? Nov 6, 2016 14:43 |
|
I got an e-mail ad from NewEgg for the QNAP TS-431+ @ $265, which seemed pretty good, but it's only got 1 review which is 1 egg. Anyone familiar with the product line have any input? http://www.newegg.com/Product/Product.aspx?Item=N82E16822107243&ignorebbr=1
|
# ? Nov 7, 2016 16:45 |
|
|
Has anybody set up Ceph storage in their home lab as a proof of concept? It looks pretty appealing, but the minimum scale where it starts to make sense is far larger than home NAS scale, more like 300TB+ clusters.
|
# ? Nov 7, 2016 16:52 |