|
Do we have any timeline for Denverton boards actually being purchasable? I don't really need the beef (or price) of a Xeon-D board, except maybe a 2-core, but low TDP and good connectivity is appealing.
|
# ¿ Mar 14, 2017 22:23 |
|
thebigcow posted:People have wanted to run everything under the sun on their NAS and FreeNAS finally integrated docker and bhyve support in the GUI. The new GUI also in theory supports multiple operations from multiple logins and a CLI for their front end. I think it's because, surprise surprise, the greenfield complete rewrite of their UI feels less mature than the solution they've polished over the last decade. Corral will be great in 3 years' time.
|
# ¿ Mar 31, 2017 16:31 |
|
DrDork posted:If only there were cheap RJ45 10GbE options. Fiber cable prices really kill my plans to shove the NAS on the other side of the house. Even those are getting more affordable, if you look at Xeon-D boards with multiple onboard 10GBase-T ports and the Ubiquiti 10GbE switch.
|
# ¿ Apr 12, 2017 21:15 |
|
n.. posted:Fiber cable is cheap, and CAT6A is a huge pain in the rear end. Including 2 transceivers? Those are the expensive parts, right?
|
# ¿ Apr 12, 2017 21:33 |
|
Moey posted:Look on FS.com, they have dirt cheap compatible optics. Other companies have cheap compatible optics as well. Huh, thanks for opening my eyes. 10G home network here I come.
|
# ¿ Apr 12, 2017 21:55 |
|
Methylethylaldehyde posted:Intel, HP and Cisco all have transceiver brand lock-in, you have to use their lovely branded ones or it flat out won't work, which made it loving hellish getting my HP switch to talk to my Intel NIC using a copper cable. I think Cisco still has the IOS command that lets you use the non-branded ones, but Intel and HP's response was more or less 'eat a dick' when I asked them how to disable it. Wait, so what transceivers should I be looking at to work with the onboard 10Gb SFP+ NICs on Xeon-D boards?
|
# ¿ Apr 13, 2017 17:41 |
|
caberham posted:I don't want to buy unraid. what else can I use in the meantime? Ubuntu + ZFS, or your Linux distro of choice + mdadm with LVM, or Windows + Storage Spaces; honestly, anything that's been around for a while will work, and people get all picky over minutiae here. Edit: Don't use btrfs in RAID 5 or 6. Twerk from Home fucked around with this message at 06:25 on Apr 14, 2017 |
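For anyone going the mdadm + LVM route, here's a minimal procedure sketch of what that looks like. The device names (/dev/sdb through /dev/sde), array name, and mount point are all hypothetical; verify with lsblk before running anything like this, because these commands are destructive.

```shell
# Build a 4-disk RAID 5 array from hypothetical devices.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Layer LVM on top so volumes can be resized later.
pvcreate /dev/md0
vgcreate nas_vg /dev/md0
lvcreate -l 100%FREE -n data nas_vg

# Filesystem, mount, and persist the array so it assembles at boot.
mkfs.ext4 /dev/nas_vg/data
mount /dev/nas_vg/data /mnt/data
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Swap `--level=5` for `--level=6` if you want two-disk redundancy on bigger arrays.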
# ¿ Apr 14, 2017 06:09 |
|
caberham posted:Alright, attempt #4 to make things work. While you certainly can run your NAS OS inside a VM, the model I've seen people happiest with is having the NAS OS be the VM host.
|
# ¿ Apr 14, 2017 16:02 |
|
Stutes posted:I was ready to upgrade my DS1512+ on day one if they had launched with the new Atom processors. Why would they launch a model with a 4-year-old known defective processor? These are appliances, not general purpose servers. Intel promises that the defect is fixed on currently produced Avoton Atoms, and the NAS vendor doesn't want you to care about what CPU is in your NAS any more than you care what CPU is in your network switch.
|
# ¿ Apr 16, 2017 04:49 |
|
How strong is the suggestion that you shouldn't use ext4 on volumes larger than 16TB? I can't tell if it's just a minor performance thing, or if I should not even consider ext4 on large volumes and look at xfs instead.
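For what it's worth, the 16TB figure isn't about performance: classic ext4 uses 32-bit block numbers, so with the default 4KiB block size the filesystem simply can't grow past 2^32 blocks. A quick sanity check of that math; the mkfs line in the comment is illustrative and the device name is hypothetical:

```shell
# 2^32 block numbers x 4KiB blocks = the classic ext4 size ceiling.
blocks=4294967296            # 2^32
block_size=4096              # ext4 default block size in bytes
max_bytes=$(( blocks * block_size ))
max_tib=$(( max_bytes / 1099511627776 ))   # bytes per TiB (2^40)
echo "classic ext4 limit: ${max_tib} TiB"  # prints 16

# Newer e2fsprogs enable the 64bit feature by default; it can be
# forced explicitly for a large volume (hypothetical device):
# mkfs.ext4 -O 64bit /dev/md0
```

So for a >16TiB volume you need either the 64bit ext4 feature or XFS; it's not a tuning preference.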
|
# ¿ Apr 24, 2017 15:26 |
|
DrDork posted:Same with the other NAS drives--most of them are 5400/5900 RPM and tuned for low power use. If the drives you already have are 5400's, though, it probably won't be a night and day difference and you should look at trying to improve airflow. Admittedly, there have been a few reports out that suggest that moderately high temperatures like 50C aren't actually all that bad for drives, but YMMV. Toshiba and Seagate didn't get the memo; most of their new lines of NAS drives are 7200 RPM. Worst of all, basically no specs are constant across the Seagate IronWolf line.
|
# ¿ Apr 24, 2017 22:49 |
|
DrDork posted:I haven't looked at the new Toshibas, but the normal Seagate IronWolfs are 5900 up to at least 4TB. The IronWolf Pros are 7200, but I don't think too many consumers are buying those anyhow--same with the Red Pros, which are also all 7200. I was looking specifically at the 6-10TB IronWolf non-Pros, which are all 7200RPM and seem to be where mainstream "storage drives" are right now. I just had a friend buy 5 8TB ones to put in a NAS.
|
# ¿ Apr 24, 2017 23:21 |
|
Paul MaudDib posted:For a basic ZFS NAS build (just NAS, not hosting a bunch of other poo poo): G4400T or G4560? The T-series low-power is kinda compelling but the hyperthreading makes the 4560 quite a bit faster when needed. I'd go with a G4560 for sure.
|
# ¿ Apr 28, 2017 19:40 |
|
Paul MaudDib posted:Tom's Hardware with a hot take: consumer SSDs are getting shittier (particularly write performance and write endurance), used enterprise SSDs are a good deal, and Optane is the way forward. I disagree with this take. Used enterprise SSDs are sure a good deal, but current DDR4 and Flash prices are not permanent and they will continue to fall in the long run once fabs have finished retooling for newer processes and are able to catch up some with pent-up demand. Optane is new, expensive, and not mainstream yet. I'm cautiously optimistic about it.
|
# ¿ Jun 12, 2017 15:14 |
|
ddogflex posted:My system actually has Intel AMT, but I've read it's a huge security hole and have it disabled due to that. What do you use for a client for Intel AMT anyway?
|
# ¿ Jun 21, 2017 19:16 |
|
evol262 posted:I'm gonna assume you mean decoding here. Accel has been there for a desktop, and it's practically required for a number of DEs. GPUs are really good at encoding 1080p streams in real time with very low CPU usage. Sure, I could do better burning 2 whole cores of my quad-core CPU, but I don't have that to spare.
|
# ¿ Jun 28, 2017 02:58 |
|
ddogflex posted:I'm sure this is a stupid question, but is there anywhere to get 8GB of unregistered DDR4 ECC cheaper than crucial.com ($100)? Newegg doesn't even sell any that I can find, and on eBay everything is $130+. Which seems pretty crazy to me. DDR4 prices have been going up dramatically. That pricing doesn't seem that out of line for ECC UDIMMs at current pricing, and big RDIMMs are more like $15+/GB if you can find them in stock.
|
# ¿ Jul 13, 2017 17:54 |
|
D. Ebdrup posted:Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot! What's the deal with Intel releases now? It's weird to see OEM hardware released that has unannounced Intel CPUs in it, but Google Cloud has been selling Xeon Platinums for 6 months, and they only got announced 3 weeks ago.
|
# ¿ Aug 15, 2017 16:55 |
|
Thanks Ants posted:Azure Cool Blob is priced close to Glacier but without the retrieval caveats. I had to look up if that's the actual product name. "Cool Blob" is a lot worse than Glacier or Coldline.
|
# ¿ Aug 23, 2017 17:48 |
|
If there's no need for HA and storage needs are about 100TB, there is absolutely zero reason to look at something like Ceph instead of just a 12+ drive ZFS (or mdadm) box, right? It looks like you have to throw absolutely huge amounts of resources at Ceph before it begins to approach the performance of a pretty simple zfs setup.
Twerk from Home fucked around with this message at 16:32 on Feb 21, 2018 |
# ¿ Feb 21, 2018 16:26 |
|
Zorak of Michigan posted:12 drives would be awfully small for Ceph, but I'd want to hear more about your environment to make a solid recommendation about what you ought to do. I want to try Ceph at home one way or another, and am considering recommending it to a group I work with. An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck and mdadm or zfs might actually be faster, given that it's got a lot of CPU power. They need another 100TB of storage soon-ish, and their IT guy mentioned that he wants to look at other options than just using a RAID card. They don't need HA, and don't need the storage to appear to be in a unified pool. What I'm getting at is that for the same money, Ceph is just going to be way slower than their current solution, right? I (and their IT guy) am also curious whether ZFS on Linux on the new storage server would outperform the 12-drive wide RAID 6 on a MegaRAID card, in which case they could move everything over to the new box and re-do the original box's RAID as software instead of hardware. I also don't know a ton about Ceph; maybe I'm misunderstanding and it can be cost- and performance-competitive with Linux software RAID at the 100TB-200TB scale.
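For the ZFS side of that comparison, a sketch of the equivalent pool along with the raw capacity math. The pool name and device names are made up; a real build should use /dev/disk/by-id paths so the pool survives device renumbering.

```shell
# 12 x 10TB drives with raidz2 (two parity drives, like RAID 6):
# raw usable capacity before filesystem overhead and reservations.
usable_tb=$(( (12 - 2) * 10 ))
echo "raw usable: ${usable_tb} TB"   # prints 100

# Hypothetical pool creation mirroring the 12-wide RAID 6 layout:
# zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm
# zfs set compression=lz4 tank
```

One raidz2 vdev that wide gives you capacity but only one drive's worth of random IOPS, which matters less for this lab's long sequential workload.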
|
# ¿ Feb 21, 2018 22:34 |
|
Zorak of Michigan posted:Which performance metric is it lagging on? Sustained write is poorer than expected. I don't have specifics off of the top of my head, but the workload deals more with long sustained reads / writes than lots of small I/O.
|
# ¿ Feb 22, 2018 04:21 |
|
Careful Drums posted:has anyone here set up a NAS w/ a raspberry pi and some USB hard drives? There's a lot of guides to them but I'm wondering about performance. Could I, say, rip a bunch of blu-rays/dvds onto it and stream them to my TV or would I need something beefier to handle that? It'll work fine as a toy, but terribly as an actual NAS solution. I'd buy a cheap commercial NAS if you don't want to tinker, or look at a more conventional ZFS or mdadm Linux NAS if you want to fiddle and learn. Raspberry Pis have terrible network performance and won't even saturate their 100Mbit Ethernet interface. Loading stuff onto and off of them is a bad time, and if you have things like logs writing to the SD card, eventually you'll wear it out and have to replace it.
|
# ¿ Feb 26, 2018 18:52 |
|
Greatest Living Man posted:Plex doesn't deal with DVD ISO's or VIDEO_TS folders. So you're gonna spend a lot of time transcoding them and dealing with weird conversion issues. I think Kodi can work with VIDEO_TS folders. You can also just rip the discs straight to an MKV with MakeMKV, and then Plex (or any media player at all) can handle it just fine.
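MakeMKV can also do this headlessly with its CLI; a sketch, assuming the first optical drive and a made-up output directory:

```shell
# List the optical drives/discs MakeMKV can see.
makemkvcon -r info disc:9999

# Rip every title on the first disc straight to MKV (a remux,
# no transcode); the output directory is hypothetical.
makemkvcon mkv disc:0 all /mnt/media/rips
```

Since it's a remux rather than a transcode, the rip is limited by drive read speed, not CPU.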
|
# ¿ Feb 28, 2018 22:53 |
|
wargames posted:There are 10g nas boxes out there but the only ones I could find use SFP not cat6a or cat7. How about this one? 10GBase-T, 2 ports: https://www.synology.com/en-us/products/DS1817 https://www.amazon.com/Synology-bay-DiskStation-DS1817-Diskless/dp/B071K9J4MS/
|
# ¿ Mar 6, 2018 03:53 |
|
DrDork posted:They are, indeed, though at $900 their price puts them solidly out of range for most, especially as you can get better performance for much less unless you really need 4x10Gig NICs. What's the "better for less" if you need >64GB of RAM and at least 1 10Gig NIC? Just spitballing here because Intel's product stack has gotten too complex for me to follow.
|
# ¿ Mar 16, 2018 15:41 |
|
Paul MaudDib posted:x264 was patched for AVX-512 about a year ago, and x265 can make very good use of it as well. But I was talking about lower AVX levels - the 7100 only supports up to AVX2 but the G4560 does not support AVX at all. You made me do a search about this, and it looks like Skylake Xeons are fifty percent faster per core than E5-v4 Broadwell Xeons at x265: http://x265.org/x265-receives-significant-boost-intel-xeon-scalable-processor-family/ That's unbelievable. That's a bigger single-generation CPU jump than we've seen since the Pentium II days.
|
# ¿ Mar 16, 2018 15:59 |
|
Paul MaudDib posted:It smacks the poo poo out of high-end off-the-shelf solutions unless they're something absurd like X99/X299 platforms, which you will pay dearly for in an off-the-shelf product. If you are willing to splash out $800 for a mobo+CPU, it's arguably a better buy than anything you can do on the LGA1151 platform, and it's way more power efficient than an X99/X299 platform. I hear this, but I can't help but feel that a $180 6-core i5, a reasonable motherboard, and a simple HBA in IT mode is still far better bang for the buck. Just like the Xeon-D platforms, I think Denverton is really cool for home enthusiast usage, but it's a splurge for someone who just wants an excuse to tinker with less-mainstream products.
|
# ¿ Apr 4, 2018 06:12 |
|
HalloKitty posted:I built two Supermicro Denverton machines in Silverstone DS380s with 8 drives in each How far up the Denverton product stack did you go? Did you have 2 cores, or 16 cores in each?
|
# ¿ May 8, 2018 14:41 |
|
Schadenboner posted:Are WD Reds Pro ok or are they ? It seems like 7200 is a lot of RPMs to keep going at for years at a time? 7200 RPM disks will also use more power and make more heat. If you've got deep pockets, make sure you're getting helium drives (look for lower power usage). It looks like 10TB Red Pros are helium, which makes them use half the wattage at idle (while spinning) vs the 8TB ones.
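Back-of-envelope on what that idle-wattage gap is worth, assuming a 3W spinning-idle saving per drive and $0.12/kWh; both numbers are placeholders for illustration, not from a spec sheet:

```shell
# watts saved * hours per year * $/kWh / 1000 = dollars per drive per year
saving=$(awk 'BEGIN { printf "%.2f", 3 * 8760 * 0.12 / 1000 }')
echo "saved per drive per year: \$${saving}"   # prints $3.15
```

So a few dollars a year per drive; it adds up on an 8-bay box running 24/7, but the bigger win is less heat in the case.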
|
# ¿ May 3, 2019 22:55 |
|
Is anybody using an HP Microserver Gen10? Are they an awful value for money? I've been using an i5-4250u NUC that I got on eBay used for $110, with a 5TB external shingled Seagate hooked up to it, as a VM host and NAS for almost 5 years now, and I finally want redundancy and more storage. I'm willing to have a somewhat weird low-performance setup in exchange for better power efficiency than mainstream parts could get me, but I don't want to be like that dude a few posts up who spent $1400 for 16TB raw storage. My current plan is shucking 4x10TB Easystores into a Microserver Gen10, but I could just as easily put an i3 into a Fractal Design Node 304 or something. I want more than 16GB RAM, low power usage, and more CPU performance than a 15W Haswell. That's about all my requirements. Edit with one more question: I have found near zero info about the Opteron X3421 online. My current understanding is that it's two Excavator modules on 28nm at 35W TDP, right? It looks to perform favorably compared to Atoms, and C3xxx Atoms look like a miserable value for money overall. Twerk from Home fucked around with this message at 14:50 on May 13, 2019 |
# ¿ May 13, 2019 14:39 |
|
Sheep posted:I have a Gen10 and it is awesome. The CPU is not some super stripped down or mobile line, but yeah, information is a little sparse. My other option, which also has ECC and is cheap-ish, is putting an i3 in a mini-ITX Supermicro board. That would have IPMI and way more CPU power, but I'm not sure about overall power consumption or niceness of the case. I'll probably just do the Microserver. Do I need the SSD tray or can I just velcro that sucker in there? I've always just taped SSDs to the sides of cases.
|
# ¿ May 13, 2019 20:11 |
|
Thermopyle posted:You can get the 8TB Easystore on Amazon for $167.78. Are people really returning reassembled drives after shucking for warranty coverage? I assumed that shucking the drive violated the warranty, and we were self-insuring with how much cheaper the drives are.
|
# ¿ May 16, 2019 19:28 |
|
Do you have to put it back together, or at least return all the pieces? Or can you just ship back the drive without the plastic enclosure?
|
# ¿ May 16, 2019 19:37 |
|
I know that shingled drives should never be used in arrays, and are garbage at workloads that have a mix of reads and writes. What about for a single sustained write? What sort of write throughput would I see on a shingled hard drive with a write-once workload?
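If you'd rather measure than guess, fio can show both the good case and the bad one. The target path is hypothetical; on a drive-managed SMR disk you'd expect the sequential run to look like a normal drive, while the random-write rerun craters once the drive's CMR cache region fills.

```shell
# Sustained sequential write against a file on the SMR drive's
# filesystem; --direct=1 bypasses the page cache so the result
# reflects drive throughput, not RAM.
fio --name=seqwrite --filename=/mnt/smr/testfile \
    --rw=write --bs=1M --size=50G --direct=1 --ioengine=libaio

# Rerun with --rw=randwrite to see the shingled worst case.
```

For a pure write-once workload, sequential throughput on SMR is generally close to a conventional drive of the same generation.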
|
# ¿ May 28, 2019 21:27 |
|
Bob Morales posted:Dropbox uses servers full of 100 SMR drives but they basically are doing WORM and by using SSD’s to buffer data and controlling the whole drat stack, they manage to push 40GB/s worth of writes Good to know. They've gotta be running some type of replication-based solution similar to an in-house Ceph, right?
|
# ¿ May 29, 2019 14:40 |
|
Does anybody have case recommendations with at least 4 2.5" bays that can hold 15mm tall drives? Is this something that exists? Almost all of the cases I've found specify 9.5mm high 2.5" bays.
|
# ¿ Jun 19, 2019 16:31 |
|
Have you guys had SSDs die? What's expected lifespan for consumer SSDs? This guy kicked the bucket this week.
|
# ¿ Oct 30, 2019 16:20 |
|
H110Hawk posted:2 petabytes of write against that amount of host writes suggests offset issues. Though I haven't looked deeply into it. Yes I have had thousands die but in datacenter workloads, though most as OS drives not heavy hit ones. (The heavy hitters nuked out way faster.) Yeah, I was curious about that but don't know how to address it in the future. This disk had a couple of Hyper-V VMs, including 1 Win10 and 2 or 3 Ubuntu. It was running a Plex server, a Minecraft server, a UniFi wifi controller, a qBittorrent scratch area, a couple of low-traffic NodeJS and PHP websites, and a TICK stack for monitoring & dashboards. I have no idea which of those was writing enough to hit even 150TBW, because all of them were extremely low utilization. I had configs and database backups and didn't lose anything that I will miss, but it was still annoying. Edit: I didn't need Windows, so I've already got everything set up on Ubuntu 18.04. I'll monitor to see what's writing so much, assuming it wasn't something related to Hyper-V / Windows. DrDork posted:You're clearly using that drive for some business/enterprise workload: shell out a few extra bucks for a larger enterprise-level drive next time and it should last a good bit longer. A 512GB 970 Pro, for example, is warrantied to 600TBW, or probably 6x what that SanDisk was engineered to hit. I replaced it with an 840 Evo out of my junk drawer and I am preparing to make the next drive failure even faster to recover from. Twerk from Home fucked around with this message at 17:21 on Oct 30, 2019 |
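For tracking this on the replacement drive: SMART attribute 241 (Total_LBAs_Written) times the logical sector size gives lifetime host writes. The raw value below is made up purely to illustrate the arithmetic; the live commands are in the comments.

```shell
lbas=292968750000    # hypothetical raw Total_LBAs_Written value
sector=512           # logical sector size in bytes
tb_written=$(( lbas * sector / 1000000000000 ))
echo "lifetime host writes: ${tb_written} TB"   # prints 150

# On a live system (device name hypothetical):
#   smartctl -A /dev/sda | grep -i total_lbas_written
# To catch what's writing right now: iotop -oa, or pidstat -d 5
```

Checking that number every few months makes a fast-wearing workload obvious long before the drive dies.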
# ¿ Oct 30, 2019 16:42 |
|
H110Hawk posted:What caused it is your host OS (Windows?) formatted the disk such that FS block boundaries were offset from the NAND ones. Two examples: Thanks, I appreciate this. Hyper-V was the host, with Windows and Ubuntus installed on it. I didn't think about VM OS disk block sizes at all, and there's a separate long-term storage system that's on hard drives. I just wanted to use the SSD as fast scratch space. This drive should have been doing barely anything, so 150TBW does not add up with its extremely light usage. It's a mystery to me why it wrote so much. I also kept it under 80% full.
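A quick way to check for the misalignment being described: a partition start (counted in 512-byte sectors) divisible by 8 lines up with 4KiB NAND pages, and modern partitioners start at sector 2048 (the 1MiB boundary). The start value below is made up; the sysfs path in the comment is where you'd read the real one.

```shell
start=2048    # e.g. from: cat /sys/block/sda/sda1/start
if [ $(( start % 8 )) -eq 0 ]; then
    echo "aligned to 4KiB"
else
    echo "misaligned"
fi
```

Old tools that started partitions at sector 63 are the classic way to end up misaligned and burn extra NAND writes.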
|
# ¿ Oct 30, 2019 18:10 |