|
Just get one of these https://www.amazon.com/Seagate-Arch...rds=8TB+seafate
|
# ? Jun 1, 2016 17:16 |
|
|
|
redeyes posted:Just get one of these https://www.amazon.com/Seagate-Arch...rds=8TB+seafate I snagged one for testing at work. I'm thinking that they might be great for media hoarding. Write once read many.
|
# ? Jun 1, 2016 17:38 |
|
Moey posted:I snagged one for testing at work. My thoughts exactly.
|
# ? Jun 1, 2016 20:25 |
|
What about once you fill the array and start deleting old content / overwriting it?
|
# ? Jun 1, 2016 20:37 |
|
Shaocaholica posted:Can't seem to find any info on what internal drive is inside the 'WD 8TB My Book Desktop External Hard Drive'.
|
# ? Jun 1, 2016 20:41 |
|
IOwnCalculus posted:What about once you fill the array and start deleting old content / overwriting it? Who deletes old content? (I'm curious as well about how well it will handle rewriting data until you start hitting bad sectors)
|
# ? Jun 1, 2016 20:53 |
|
Moey posted:Who deletes old content? 4K > 1080p > 720p > SD. Also, that time period in between "gently caress my array is full, do I *really* need this" and "fine I'll plunk down the cash for more / bigger drives".
|
# ? Jun 1, 2016 21:02 |
|
IOwnCalculus posted:What about once you fill the array and start deleting old content / overwriting it?
|
# ? Jun 1, 2016 21:05 |
|
IOwnCalculus posted:What about once you fill the array and start deleting old content / overwriting it? Your write performance grinds to a screeching halt. SMR is great for increasing storage density, but any re-write operation is enormously painful because the drive has to up and move a crap-ton of other data to get at the appropriate "shingles." It's not like it can't do it, but it's gonna be slow as gently caress. To give you an idea of what "slow as gently caress" means, these guys did a simple RAID-1 rebuild on a pair of SMR drives and watched them crawl along at <10MB/s average speed, while a pair of HGST drives zipped along on the same task at over 150MB/s.
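To make the "shingles" penalty concrete, here's a toy Python model of the read-modify-write cost. The zone size and block size are invented purely for illustration; real drive firmware is far more sophisticated than this:

```python
# Toy model of the SMR rewrite penalty. Assumes one shingled zone where
# every track overlaps the next, so changing a block means re-laying down
# everything "downstream" of it in the zone. Numbers are invented.
ZONE_SIZE_MB = 256   # hypothetical zone size
BLOCK_MB = 1         # size of the block being rewritten

def rewrite_cost_mb(offset_mb, zone_size_mb=ZONE_SIZE_MB, block_mb=BLOCK_MB):
    """MB of data the drive must shuffle to rewrite one block at offset_mb."""
    tail = zone_size_mb - offset_mb   # everything downstream of the edit
    return tail + block_mb            # read-modify-write of the whole tail

# Rewriting early in a zone churns nearly the whole zone; appending at the
# end costs roughly just the block itself.
worst = rewrite_cost_mb(0)                  # ~zone-sized work
best = rewrite_cost_mb(ZONE_SIZE_MB - 1)    # append-like
```

The point of the model: the cost of an in-place rewrite is proportional to how much of the zone sits after the block you're touching, which is why random rewrites (and rebuilds done naively) crawl.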
|
# ? Jun 1, 2016 21:05 |
|
Star War Sex Parrot posted:Performance is really bad, and in RAID environments they can take several times longer to rebuild than PMR drives. Blech. Just gonna stay on the WD Red bandwagon.
|
# ? Jun 1, 2016 21:10 |
|
So spend the extra 20 bux and shuck that WD 8TB. I wonder about a non-RAID environment, though - probably similar speeds when deleting and re-adding files?
|
# ? Jun 1, 2016 21:13 |
|
DrDork posted:Your write performance grinds to a screeching halt. SMR is a great for increasing storage density, but any re-write operation is enormously painful because the drive has to up and move a crap-ton of other data to get at the appropriate "shingles." It's not like it can't do it, but it's gonna be slow as gently caress. To give you an idea of what "slow as gently caress" means, these guys did a simple RAID-1 rebuild on a pair of SMR drives and watched them crawl along at <10MB/s average speed, while a pair of HGST drives zipped along on the same task at over 150MB/s. IIRC there's some work underway on shingle-aware SAS/SATA command set extensions so that the host can query the device about its shingle size and layout, and write whole shingles atomically. This plus support in higher layers would help a great deal with RAID rebuild. There's no reason why that task specifically can't go as fast as a conventional drive, because RAID rebuild is (or can be) very linear. That bad a slowdown has to be the drive being forced into doing many read-modify-writeback passes per shingle, instead of the optimal write once (no read). It could also be interesting if some of the advanced filesystems get tuned to be shingle-aware. With an SSD for a write cache you could even make random writes reasonably fast.
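As a sketch of what those shingle-aware extensions would buy you, here's a toy write-pointer model of a host-managed zone. The class and method names are made up for illustration; they don't mirror the actual SAS/SATA command set:

```python
# Illustrative write-pointer model of a host-managed SMR zone, loosely in
# the spirit of the zoned command set work described above. Names and
# sizes are invented; this is not the real interface.
class Zone:
    def __init__(self, start_lba, length):
        self.start_lba = start_lba
        self.length = length
        self.write_pointer = start_lba   # next LBA the zone will accept

    def write(self, lba, nblocks):
        # A host-managed zone only accepts writes at the write pointer;
        # anything else would force a read-modify-write of the shingles.
        if lba != self.write_pointer:
            raise ValueError("non-sequential write rejected")
        self.write_pointer += nblocks

    def reset(self):
        # Whole-zone rewrite: start over from the beginning.
        self.write_pointer = self.start_lba

# A linear RAID rebuild is a pure sequential stream, so it maps onto
# zone appends without ever hitting the slow path:
zone = Zone(start_lba=0, length=65536)
for chunk in range(0, 65536, 1024):
    zone.write(chunk, 1024)
```

This is why a rebuild "can't not be fast" in principle: the host writes each zone front-to-back exactly once, which is the one workload shingling handles at full speed.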
|
# ? Jun 2, 2016 10:06 |
|
Yeah, no doubt there are ways that the process could be optimized to not give such a hit to SMR drives, but at the moment no one's bothered to do so (and considering how small a market they are, may never get around to it), so they make pretty lovely drives for everything other than what they explicitly are for: literal write-once read-many.
|
# ? Jun 2, 2016 16:01 |
people posted:about SMR drives
|
|
# ? Jun 2, 2016 16:17 |
Ok, I should be getting everything I need to make a file/Plex server this weekend. I know there are NAS OSes like FreeNAS that can run Plex servers as well; however, I still need a way to play Steam games on my big TV. Instead of getting a Steam box (which doesn't work well with DS4 controllers), a cheapo Windows or Linux box, or just running an extra-long HDMI cable from my desktop (while my computer and my TV are literally on opposite sides of the same wall, I rent so I can't drill a hole lol), I was considering using the new server to stream games. Is there any dedicated NAS OS that can run Steam easily, or would it be better to just install a basic Linux distro? Hell, am I crazy for even thinking about doing this?
|
|
# ? Jun 2, 2016 22:25 |
FreeNAS 10 might be able to with bhyve, but for now I think it's better to go with Steam for Linux, which should be available in any well-maintained package repository for your favorite Linux distro. However, judging by sales figures, the future of Steam for Linux isn't exactly looking bright - so while the idea of having one high-powered centralized machine doing everything as a back-end and just using thin clients at the head-end might be a cool one, the practical implementation of it on any kind of budget is still a ways further off than some videos that have recently started surfacing tend to suggest (though in all fairness, the videos can easily be seen more as a technology showcase than an actual practical setup). That being said, it is possible. I have FreeBSD 11-CURRENT running with bhyve and iohyve on top of a zpool, and VT-d graphics card passthrough set up for a Windows 10 VM, and gaming works - but getting it to work has certainly made use of my 10+ years of experience in working with (and troubleshooting/debugging) FreeBSD - so it's not something I'd recommend, as it requires - among other things - an MSI-X capable graphics card and a lot of time to fiddle with a lot of things.
|
|
# ? Jun 3, 2016 11:20 |
Well yeah, while that would be cool, I actually meant streaming to the server from my desktop. Though with something like an -E processor with lots of RAM, someone could just run Windows on it with something like FreeNAS in a VM, or the other way around...
|
|
# ? Jun 3, 2016 14:24 |
|
One Linux server running my zpool and a Windows VM pumping the video to my Steam Link is still a dream setup. Too bad getting it to work proved too much of a hassle. Maybe I'll try again in a couple of months; first I have a Vive, and it needs all the frames my current machine can throw at it.
|
# ? Jun 3, 2016 16:55 |
|
I asked this in the backup thread, but I wanted to try here, too. Does anyone have experience with Windows Storage Server Essentials? I have a WHS 1.0 box, and over time I've added GPT-based machines that aren't supported for backups. I want bare-metal restore, data de-dupe, and something like Storage Spaces to mitigate single-drive loss. The media on the server is currently also backed up to CrashPlan. The old WHS migration path was to buy WSE 2012 R2, but this Storage Spaces thing looks to be more straightforward. Right now I only see a Thecus W2810 Pro, but it isn't out yet and has no reviews.
|
# ? Jun 3, 2016 19:47 |
|
Just a reminder to always back up your data if it's valuable to you! I didn't get any SMART errors on my RDM-mounted drives in my NAS VM and lost two disks in my RAID5 last week. Luckily the CrashPlan backup was pretty current, so that's been really nice. Set up unRAID again, since the app side is drastically better since Docker has come along. Containerized CrashPlan has been amazing vs. some of the community NAS packages, as it rolls Java/noVNC into the container for simple HTML5-based management.
|
# ? Jun 4, 2016 01:07 |
|
BobHoward posted:Plastic shell? I only see the HDD's chassis, which is cast aluminum with a really thick black anodize. For some reason I thought the underside was hard plastic, don't know why. I'm dumb. EDIT: By the way, I've added the two drives through a PCIe SATA adapter, and now they show up under Safely Remove Hardware. Should I ignore it, or is there a way to remove that? Storm- fucked around with this message at 02:35 on Jun 5, 2016 |
# ? Jun 5, 2016 02:23 |
|
I need a PCIe controller card for a JBOD setup that has to have a minimum of 2 eSATA and 2 internal SATA ports. Poking around on Newegg brings up a number of cards by Syba, StarTech, SIIG, Rosewill and Sedna for under $60, and some HighPoint cards at $40-$100 and up. Of the brands listed, I've only ever heard of HighPoint before, in relation to RAID cards. So I guess what I'm asking is: what are the good brands and what is trash? I'm not inclined to go super cheap, but I want to avoid spending more than $115-$125.
|
# ? Jun 5, 2016 07:44 |
|
Does anyone have any experience with the Drobo 5N? I was thinking about getting one for storing and watching videos. How well does the Drobo version of the Transmission BitTorrent client work?
|
# ? Jun 7, 2016 01:32 |
|
The thread does not like Drobo. Overpriced and proprietary.
|
# ? Jun 7, 2016 04:36 |
|
Thermopyle posted:The thread does not like Drobo. Overpriced and proprietary.
|
# ? Jun 7, 2016 04:59 |
|
Thermopyle posted:The thread does not like Drobo. Overpriced and proprietary. The thread has spoken.
|
# ? Jun 7, 2016 05:29 |
|
MREBoy posted:I need a PCIe controller card for a JBOD setup that has to have a minimum of 2 eSATA and 2 internal SATA ports. Poking around on newegg brings up a number of cards by Syba, StarTech, SIIG, Rosewill and Sedna for under $60, and some HighPoint cards $40-$100 and up. Of the brands listed, I've only ever heard of HighPoint before, in relation to RAID cards. So I guess what I'm asking is what are the good brands and what is trash ? I'm not inclined to go super cheap, but I want to avoid spending more than $115-$125. Do you need port multiplier support? If not, an LSI2008 based card like this one might suit you. It's a version of the popular M1015 that has 4 internal SATA3 ports and 4 external SAS2/SATA3 ports via a SFF-8088 connector. Flash it to IT mode, add a SFF-8088 to 4x eSATA forward breakout cable and you're in business.
|
# ? Jun 7, 2016 18:55 |
|
DrDork posted:Your write performance grinds to a screeching halt. SMR is a great for increasing storage density, but any re-write operation is enormously painful because the drive has to up and move a crap-ton of other data to get at the appropriate "shingles." It's not like it can't do it, but it's gonna be slow as gently caress. To give you an idea of what "slow as gently caress" means, these guys did a simple RAID-1 rebuild on a pair of SMR drives and watched them crawl along at <10MB/s average speed, while a pair of HGST drives zipped along on the same task at over 150MB/s. Is there any way to tell a SMR drive to "rearrange" itself ahead of time while it's idle, after you've deleted your old data but before you have new data to write?
|
# ? Jun 7, 2016 19:29 |
|
I think SMR drives have a certain area reserved and use it as a write buffer, to do all the rearranging in the background. As long as you don't run it full, you should get decent performance. Things like a rebuild, or anything else that writes a lot of data in a short time, screw the drive over. Last I heard it was rumored to be 20GB on the Seagate disks.
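A quick back-of-envelope model of that buffer behavior, plugging in the rumored ~20GB cache and the rebuild speeds quoted earlier in the thread. All the numbers are illustrative guesses, not specs:

```python
# Back-of-envelope model of a drive-managed SMR media cache. The cache
# size and speeds below are the rumored/quoted figures from the thread,
# used purely for illustration; real caching behavior is more involved.
CACHE_GB = 20
FAST_MBPS = 150.0   # speed while writes can be staged into the cache
SLOW_MBPS = 10.0    # speed once the cache is full and rewriting dominates

def avg_write_speed_mbps(total_gb):
    """Average MB/s over a sustained write of total_gb."""
    fast_gb = min(total_gb, CACHE_GB)
    slow_gb = total_gb - fast_gb
    seconds = fast_gb * 1024 / FAST_MBPS + slow_gb * 1024 / SLOW_MBPS
    return total_gb * 1024 / seconds

# A burst that fits in the cache looks like a normal drive; a
# rebuild-sized write collapses toward the slow rate.
burst = avg_write_speed_mbps(10)       # well inside the cache
rebuild = avg_write_speed_mbps(2000)   # way past it
```

Under these assumptions a desktop workload never notices, while a multi-TB rebuild averages barely above the slow rate - which matches the RAID-1 rebuild numbers posted above.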
|
# ? Jun 7, 2016 22:11 |
|
Are there specific SCSI commands available for controllers/OSes to know when they are dealing with an SMR disk to work with it better?
|
# ? Jun 7, 2016 22:51 |
|
Thanks Ants posted:Are there specific SCSI commands available for controllers/OSes to know when they are dealing with an SMR disk to work with it better? Combat Pretzel posted:I think SMR drives have a certain area reserved and uses it as a write buffer, to do all rearranging in the background. As long you don't run it full, you should get decent performance.
|
# ? Jun 7, 2016 22:53 |
|
I'm putting 16 drives into a JBOD enclosure that has 4 MiniSAS connectors, 1 lane for every disk. The enclosure ships with a 16-port HighPoint card with 4 external MiniSAS ports. I also have a spare Areca card with a single MiniSAS external port. HighPoint is crap, Areca is pretty good. I could do 4 RAID6 volumes with 4 drives each if I hack the enclosure with a SAS expander module and hook it up to the Areca with a single cable. The HighPoint doesn't do RAID6, but if I use it in JBOD mode I have the option of running either ZFS or StorageSpaces on the host. ZFS would give me more final usable space but is trickier because the host is Windows; with StorageSpaces I'd want to do triple-mirroring, which seems a bit wasteful. How would you use the space?
|
# ? Jun 7, 2016 23:46 |
|
Hi Jinx posted:I'm putting 16 drives into a JBOD enclosure that has 4 MiniSAS connectors. 1 lane for every disk. The enclosure ships with a 16-port HighPoint card with 4 external MiniSas ports. I also have a spare Areca card with a single MiniSAS external port. HighPoint is crap, Areca is pretty good. How about using the HighPoint card in JBOD/non-RAID mode (so you don't need an expander) and running a storage VM under Hyper-V with the disks passed through? Then you aren't limited in your choice of OS or file system for software RAID, and you can export the storage via SMB/NFS/iSCSI/etc to use it outside of the VM. The Pro versions of Windows 8 and 10 come with Hyper-V built-in. Windows Server 2016 Technical Preview supports proper PCI passthrough of the whole controller, but both motherboard and CPU must support VT-d, which isn't likely on desktop-class hardware. I'd do two 8-drive RAIDZ2/6 volumes if I was trying to maximize available storage space without sacrificing too much redundancy. Otherwise, I'd say it depends on requirements of your application.
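Napkin math for the 16-drive layouts being compared, assuming hypothetical 8TB disks (the thread doesn't state the drive size) and ignoring ZFS metadata/slop, so real numbers come in a bit lower:

```python
# Raw parity arithmetic for carving up 16 drives of an assumed 8TB each.
DRIVES, SIZE_TB = 16, 8

def usable_tb(groups, drives_per_group, parity_per_group):
    return groups * (drives_per_group - parity_per_group) * SIZE_TB

two_raidz2 = usable_tb(2, 8, 2)         # 2 x 8-drive RAIDZ2 -> 96TB
four_raid6 = usable_tb(4, 4, 2)         # 4 x 4-drive RAID6  -> 64TB
triple_mirror = DRIVES // 3 * SIZE_TB   # 3-way mirrors, one drive left over
```

That 96TB vs 64TB gap is the whole argument for two wide RAIDZ2 vdevs over four narrow RAID6 groups at the same two-disks-per-group redundancy; triple mirrors trade most of the capacity for the extra safety net.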
|
# ? Jun 8, 2016 01:09 |
|
SamDabbers posted:How about using the HighPoint card in JBOD/non-RAID mode (so you don't need an expander) and running a storage VM under Hyper-V with the disks passed through? Then you aren't limited in your choice of OS or file system for software RAID, and you can export the storage via SMB/NFS/iSCSI/etc to use it outside of the VM. I was actually thinking the same, in case I go with ZFS. SamDabbers posted:Windows Server 2016 Technical Preview supports proper PCI passthrough of the whole controller, but both motherboard and CPU must support VT-d, which isn't likely on desktop-class hardware. It's an old (~6 years) Intel server board with two Xeons and ECC RAM. I would probably be able to run PCI passthrough but the problem with the Technical Preview is that it won't let you upgrade to newer versions - at least it did not let me go from TP4 to TP5. I'd rather not upgrade to it on a server that's in use, and then be stuck on it forever. Attaching the disks to a VM is fine - it probably has a performance hit, but since I'm accessing storage over 1Gb Ethernet it's really not a concern. SamDabbers posted:I'd do two 8-drive RAIDZ2/6 volumes if I was trying to maximize available storage space without sacrificing too much redundancy. Otherwise, I'd say it depends on requirements of your application. It's for backups and general home packrat storage. It may be overly cautious but I wouldn't want to do less than 100% redundancy - this is why I was entertaining the idea of StorageSpaces, it gives you even more of a safety net. For the ZFS VM, what distribution would you recommend? I'm leaning towards FreeNAS. I'm pretty good with Windows but not so much on BSD/Solaris/Linux, so I figure the point-and-click nature of FreeNAS will suit me better. Unless there's a must-have feature or level of control in the others.
|
# ? Jun 8, 2016 01:42 |
|
Thermopyle posted:The thread does not like Drobo. Overpriced and proprietary. What does this thread like, as far as simple, inexpensive, user-friendly NAS boxes go? I just need something that can store and stream video, run BitTorrent, and give me a heads-up when a drive needs to be replaced. No sophisticated user account setups, no fancy IT stuff.
|
# ? Jun 8, 2016 02:32 |
|
Synology seems to be reasonably well liked. More expensive than DIY but reasonably quick.
|
# ? Jun 8, 2016 02:44 |
|
Hi Jinx posted:How would you use the space? Why not use Linux? You have KVM and LXC for your VM needs and can have ZFS natively so you don't need a storage VM. I would not run Storage Spaces from a Beta OS to be honest.
|
# ? Jun 8, 2016 05:41 |
|
Cockmaster posted:What does this thread like, as far as simple, inexpensive, user-friendly NAS boxes go? I just need something that can store and stream video, run BitTorrent, and give me a heads-up when a drive needs to be replaced. No sophisticated user account setups, no fancy IT stuff. Synology or Qnap are pretty good.
|
# ? Jun 8, 2016 05:42 |
|
Cockmaster posted:What does this thread like, as far as simple, inexpensive, user-friendly NAS boxes go? I just need something that can store and stream video, run BitTorrent, and give me a heads-up when a drive needs to be replaced. No sophisticated user account setups, no fancy IT stuff. I'll second Synology. Given that it's a NAS, user account setups are something you'll have to live with however.
|
# ? Jun 8, 2016 10:01 |
|
|
|
Mr Shiny Pants posted:Why not use Linux? You have KVM and LXC for your VM needs and can have ZFS natively so you don't need a storage VM. Storage Spaces is non-beta in Server 2012... and the reason for not running Linux as the host OS is lack of familiarity. I'll get there eventually, but not for this project.
|
# ? Jun 8, 2016 10:08 |