|
I recently got a NetApp DS14 Mk4 disk shelf, and picked up another one from a shop oddly enough, but not the filer head unfortunately. I connected them through an FC card. I wanted to low-level format the drives, but Windows doesn't offer that kind of access as far as I know. Enter CentOS 7: both of my servers were running CentOS 7, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it on two 3 TB drives in an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and gave it a shot. Here's where the solution came from. NetApp formats these drives with 520-byte sectors, which won't work in the FreeNAS environment, since it expects 512-byte sectors, not 520. I almost lost hope until I found an article that pointed me in the right direction: http://www.sysop.ca/archives/208. The guy had the brilliant idea of reformatting with camcontrol, which worked. So if you see some disk shelves going cheap and you're looking for a disk array for home or small-shop use, give this a look. I think I got the NetApps with the drives for around $200 plus shipping. I plan to run OpenStack after I mount the LDAP volume on the CentOS server so that I can do god knows what.
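For anyone who stumbles on this later, here's roughly what the fix looks like. This is a from-memory sketch, not a copy-paste of exactly what I ran: the device names are examples, and the precise camcontrol incantation for FreeNAS is in the linked article. If Linux can actually see the drives (mine wouldn't cooperate), the sg3_utils route is the equivalent. Either way it wipes the disks and takes a long while per drive.
code:
# Find the shelf disks and their sg handles (device names below are examples)
lsscsi -g

# Confirm the drive really is formatted with 520-byte sectors
sg_readcap --long /dev/sg3

# Low-level format back to 512-byte sectors -- destroys everything on the
# disk and can run for hours per drive
sg_format --format --size=512 /dev/sg3
On the FreeNAS side the same thing is done with camcontrol against the daX devices; the article above has the exact command.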
|
# ? Jan 11, 2016 23:03 |
|
Thanks Ants posted:Goddamn I am so out of touch on storage. I think I'll try and push this off to a VAR to solve and see what I can learn from the process. You want a Nimble or a NetApp or something of that nature that does flash caching. The "what you want" part isn't hard at all. The real question is "what are you willing to pay?"
|
# ? Jan 11, 2016 23:47 |
|
Frank Viola posted:I recently got a NetApp DS14 Mk4 disk shelf, and picked up another one from a shop oddly enough, but not the filer head unfortunately. I connected them through an FC card. I wanted to low-level format the drives, but Windows doesn't offer that kind of access as far as I know. Enter CentOS 7: both of my servers were running CentOS 7, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it on two 3 TB drives in an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and gave it a shot. Here's where the solution came from. Hnnnnnnng.
|
# ? Jan 11, 2016 23:49 |
|
Rhymenoserous posted:Hnnnnnnng. That's pretty much the sound I made while trying to solve the problem
|
# ? Jan 11, 2016 23:59 |
|
Has anyone started getting into encrypting all of their data? I am starting to design a data at rest / data in motion type thing. Data at rest is easy: buy self-encrypting drives. In motion is harder: SMB 3.0 supports encryption, but what about iSCSI and NFS traffic? I've seen some inline encryption devices but don't have any experience with them.
|
# ? Jan 12, 2016 00:40 |
|
Frank Viola posted:I recently got a NetApp DS14 Mk4 disk shelf, and picked up another one from a shop oddly enough, but not the filer head unfortunately. I connected them through an FC card. I wanted to low-level format the drives, but Windows doesn't offer that kind of access as far as I know. Enter CentOS 7: both of my servers were running CentOS 7, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it on two 3 TB drives in an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and gave it a shot. Here's where the solution came from. This (used to?) work in reverse as well, and you can use sg3_utils if Linux is willing to see the disks.
|
# ? Jan 12, 2016 00:47 |
|
TeMpLaR posted:Has anyone started getting into encrypting all of their data? I am starting to design a data at rest / data in motion type thing. Data at rest is easy: buy self-encrypting drives. In motion is harder: SMB 3.0 supports encryption, but what about iSCSI and NFS traffic? I've seen some inline encryption devices but don't have any experience with them. IPSec?
|
# ? Jan 12, 2016 03:36 |
|
Storage traffic should be segregated onto a secured, unrouted VLAN, so encryption in motion for it is usually not necessary unless it needs to leave the datacenter. NFS supports encryption natively, and IPSec can be run in software, but you're going to pay enough of a performance penalty that it's usually a bad idea for storage traffic that you want to be low latency. Hardware IPSec encryption end points would be the best option if you have to do it for some reason. Also, make sure you have a handle on key management before you start using SEDs.
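If you do end up needing encrypted NFS, the native option is Kerberos with privacy (sec=krb5p) over NFSv4. A rough sketch, assuming the Kerberos plumbing already exists -- the hostname and paths are placeholders:
code:
# Server side, /etc/exports: require Kerberos auth, integrity, and privacy
/export/vmdata  *.example.com(rw,sec=krb5p)

# Client side mount (filer.example.com and the paths are placeholders)
mount -t nfs -o vers=4.1,sec=krb5p filer.example.com:/export/vmdata /mnt/vmdata
You'll pay a similar software-encryption penalty to IPSec, though, since every RPC payload gets encrypted on the CPU.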
|
# ? Jan 12, 2016 03:41 |
|
Rhymenoserous posted:You want a Nimble or a NetApp or something of that nature that does flash caching. The "what you want" part isn't hard at all. The real question is "what are you willing to pay?" Speaking of Nimble, we just got quoted for some equipment and they are doing some real good deals on hardware right now.
|
# ? Jan 12, 2016 04:00 |
|
https://www.pagerduty.com/blog/security-fault-tolerance/ This is a great article. To be honest, encryption in motion is not the performance penalty it is made out to be on modern CPUs. The initial handshake is the most expensive part, but you only do that once a day at most.
|
# ? Jan 12, 2016 04:37 |
|
Moey posted:Speaking of Nimble, we just got quoted for some equipment and they are doing some real good deals on hardware right now. Their stock dropped 50% in November after bad earnings, and it's even lower now. They're probably desperate for sales, and I hope they don't get acquired.
|
# ? Jan 12, 2016 06:09 |
|
H110Hawk posted:https://www.pagerduty.com/blog/security-fault-tolerance/ How efficiently IPSec works depends on the implementation. But in any case, storage arrays will generally be driving far more throughput to disparate clients than something like a point-to-point tunnel. The overhead of encrypting hundreds of MB per second worth of packets to dozens of distinct clients can drive CPU utilization and latency up. Modern arrays target sub-millisecond latency, so additional latency in the path causes a large proportional increase. And, of course, most arrays don't support IPsec on-box, so you'd be looking at inline hardware encryption devices anyway. YOLOsubmarine fucked around with this message at 23:44 on Jan 12, 2016 |
# ? Jan 12, 2016 22:50 |
|
Erwin posted:Their stock dropped 50% in November after bad earnings, and it's even lower now. They're probably desperate for sales, and I hope they don't get acquired. Yeah I saw that, figured they are trying to get larger deployments out there to save face.
|
# ? Jan 12, 2016 22:55 |
|
Moey posted:Yeah I saw that, figured they are trying to get larger deployments out there to save face. They need revenue growth, and it's not coming from the enterprise, which means driving volume in the mid-market.
|
# ? Jan 12, 2016 23:53 |
|
NippleFloss posted:Storage traffic should be segregated onto a secured, unrouted VLAN, so encryption in motion for it is usually not necessary unless it needs to leave the datacenter. NFS supports encryption natively, and IPSec can be run in software, but you're going to pay enough of a performance penalty that it's usually a bad idea for storage traffic that you want to be low latency. Hardware IPSec encryption end points would be the best option if you have to do it for some reason. Yeah, storage traffic is already on a secured unrouted VLAN (a whole bunch of them depending on what environment it is). I checked out some hardware encryption endpoints but they don't do block, only file. Going with KeySecure for the key management. Glad to hear I didn't really miss too much from what it sounds like. Thanks.
|
# ? Jan 12, 2016 23:58 |
Erwin posted:Their stock dropped 50% in November after bad earnings, and it's even lower now. They're probably desperate for sales, and I hope they don't get acquired. Speaking of Nimble: http://www.businesswire.com/news/home/20160107005043/en/INVESTOR-ALERT-Investigation-Nimble-Storage-Announced-Law I really like their product, but methinks something fishy is going on there.
|
|
# ? Jan 14, 2016 17:37 |
|
Langolas posted:Speaking of Nimble Eh, that happens every time a notable stock drops after earnings. It's the stock market equivalent of ambulance chasers and might as well be an ad saying "have you or a loved one been injured by NMBL?" Just google "investigation on behalf of investors."
|
# ? Jan 14, 2016 17:50 |
|
KennyG posted:7.2.0.4 with 6 X410 nodes with GNA. Would you mind sharing your file pool policies and SmartPools settings?
|
# ? Jan 14, 2016 19:02 |
|
Not at all. FilePoolSettings posted:
Smartpool Settings posted:
|
# ? Jan 14, 2016 19:24 |
|
What's your utilization? Are your clients primarily SMB2/2.1?
|
# ? Jan 14, 2016 19:51 |
|
KennyG posted:Not at all. What is your current SSD utilization?
|
# ? Jan 14, 2016 23:38 |
|
95% SMB 2/2.1 by volume, the rest are vSphere hosts using NFS as tertiary tier storage. SSD is for metadata; how do I check the utilization levels? We are currently going through the SmartFail process for the new firmware FCO issue affecting the SED SSDs.
|
# ? Jan 18, 2016 04:53 |
|
Running InsightIQ?
|
# ? Jan 19, 2016 21:06 |
|
KennyG posted:95% SMB 2/2.1 by volume, the rest are vSphere hosts using NFS as tertiary tier storage. It should show in isi stat -d or in the GUI.
|
# ? Jan 21, 2016 05:18 |
|
Does anyone have Ceph experience? I've been interested in it for a while now, but it's not something anyone at work would get interested in unless there was native VMware or Windows support. I'm pondering assuaging my curiosity and my need for more NAS space by setting up a very small Ceph cluster and a Ceph->NFS gateway in my basement. I know it wouldn't be close to enterprise standards of redundancy, since it would have just a single monitor running in a KVM guest, but is there any reason I couldn't do it?
Zorak of Michigan fucked around with this message at 23:39 on Feb 1, 2016 |
# ? Feb 1, 2016 22:57 |
|
Go for it. A former coworker of mine had a giant boner for Ceph and set up a home server with like 12 cheap consumer disks running it to store all his stuff. He eventually got poached by Time Warner to help build their multi-petabyte Ceph cluster and now makes $alot. Ceph is pretty cool and there are some really huge and interesting deployments of it out there.
|
# ? Feb 1, 2016 23:18 |
|
If you have Ceph experience or are interested in Ceph and have a resume, send me a PM.
|
# ? Feb 1, 2016 23:20 |
|
Zorak of Michigan posted:Does anyone have Ceph experience? I've been interested in it for a while now, but it's not something anyone at work would get interested in unless there was native VMware or Windows support. I'm pondering assuaging my curiosity and my need for more NAS space by setting up a very small Ceph cluster and a Ceph->NFS gateway in my basement. I know it wouldn't be close to enterprise standards of redundancy, since it would have just a single monitor running in a KVM guest, but is there any reason I couldn't do it? Hey, Ceph community monkey here. While it's really easy to set up a tiny Ceph cluster (and the tech is wildly awesome...I'm on board with the kool-aid), there is a fairly big learning curve between "tiny proof-of-concept to play with" and "usable cluster that can grow as you do," so I'd be careful about overcommitting. That said, there are quite a few different ways to play with Ceph, from a (slightly aging) qemu image to running in Docker, as well as pretty much every major deployment and orchestration framework (Chef, Puppet, Ansible, Juju, several Salt options).

WRT VMware -- there have been a couple of people who have home-rolled Ceph-backed VMware infrastructure setups, but the mainline support definitely isn't there. I know Intel is working on a VMware integration, and there are rumblings of other major folks doubling down on that with them. It's just hard to convince community FOSS fanatics to write code for a proprietary solution sometimes. It might be worth keeping on your radar though. Windows support is a bit more developed, with quite a few people having different ways of serving content to Windows machines (NFS/pNFS, FS, object, etc). I'd say the best approach varies wildly depending on what you want to do with it. There still isn't a "run Ceph ON Windows" option though, so in that regard it's pretty much nil.

As far as whether or not to do it: without knowing your skill level, I'd say jump in with both feet on setting up a Ceph cluster and throwing some data at it. However, unless you are prepared to really dig in and do your homework beforehand, I'd suggest caution on how much you rely on it until you are comfortable. There are a HUGE number of ways to tune (read: screw up performance), balance (put your cluster in a damaged state), or use a Ceph cluster. A familiarity with the distributed storage paradigm, and sometimes Ceph in particular, is often required to really get out of the gate without a few false starts.

That said, I'm a huge proponent of Ceph even beyond the whole "cutting me a paycheck" thing. If I stopped working at Red Hat tomorrow, I'd still be proselytizing Ceph use and spouting "The Future of Storage" in my sleep, so definitely check it out. Feel free to hit me up if you have questions about where to start or resources that might be able to help you beyond what I've linked here.
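If it helps to see the shape of it, a toy cluster with ceph-deploy (the tool the quick-start guide walks through) is roughly the handful of commands below. Hostnames and disk names are placeholders, and you'll want the actual quick-start rather than my from-memory version:
code:
# Run from an admin box with passwordless SSH to each node (names are examples)
ceph-deploy new mon1                       # write the initial ceph.conf for the monitor
ceph-deploy install mon1 osd1 osd2         # install the Ceph packages on every node
ceph-deploy mon create-initial             # bring up the monitor and gather keyrings
ceph-deploy osd create osd1:sdb osd2:sdb   # turn one raw disk per node into an OSD
ceph-deploy admin mon1 osd1 osd2           # push the admin keyring to each node
ceph -s                                    # cluster status; should settle at HEALTH_OK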
|
# ? Feb 4, 2016 17:54 |
|
Scuttlemonkey posted:As far as whether or not to do it: without knowing your skill level, I'd say jump in with both feet on setting up a Ceph cluster and throwing some data at it. However, unless you are prepared to really dig in and do your homework beforehand, I'd suggest caution on how much you rely on it until you are comfortable. There are a HUGE number of ways to tune (read: screw up performance), balance (put your cluster in a damaged state), or use a Ceph cluster. A familiarity with the distributed storage paradigm, and sometimes Ceph in particular, is often required to really get out of the gate without a few false starts. Thanks for the feedback! My skill level is weird because I've been a UNIX guy for 20 years now, but my role gives me very limited hands-on experience. I'm effectively a tier 3 guy for weird performance problems, yet I've never actually loaded a Linux system from bare metal. Back in the 1990s I was an AFS admin, but I haven't done distributed storage since then. The good news is that my performance needs are trivial by modern standards (support a max of 3 concurrent HD video streams through the Ceph->NFS gateway box), and I can afford some false starts since I'll keep the first ~5TB of data live on other systems for a while. I'm thinking I'll scale out to two data servers with just 2 data disks each, make sure they're stable, and then begin stacking them up.

Question I'm pondering as I design this scheme: would I be better off using the Ceph file system or a Ceph block device? If I read the docs right, the file system route means I need metadata servers, and I'm not sure if it would be kosher to put them in the same KVM guest as my monitor daemons. On the other hand, with the file system, any data loss should be localized to specific files, whereas data loss in the objects making up a block device could mean the entire block device is trashed.
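For reference, what I'm picturing for the block device route is roughly the sketch below (pool and image names are made up, the size is arbitrary): carve out an RBD image, put a normal filesystem on it on the gateway box, and export that over NFS. The file system route would swap all of that for an MDS plus a CephFS mount.
code:
# Block device route (names and sizes are just examples)
ceph osd pool create nas 128          # small pool, 128 placement groups
rbd create nas/media --size 102400    # size is in MB here, so ~100 GB
rbd map nas/media                     # shows up as /dev/rbd0 on the gateway box
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /export/media         # then export this path via the normal NFS server

# File system route would instead need an MDS running somewhere, then roughly:
# mount -t ceph mon1:6789:/ /export/media -o name=admin,secret=<key>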
|
# ? Feb 4, 2016 19:34 |
|
There's an outside chance I'll need to set up some manner of scalable storage backend for Cloudstack, and Ceph seems to be a popular option for backing VMs. Do you have any recommendations for reference architectures that I can look at to do some light research, from a hardware and network equipment standpoint? One of the concerns I have is having enough of a pipe for all the storage cross-chatter, but most of the network designs I'm familiar with would require going from top-of-rack Nexus 2Ks to middle-of-row 5Ks in order to keep costs down, which would likely get saturated real fast at scale.
|
# ? Feb 4, 2016 19:43 |
|
Cidrick posted:There's an outside chance I'll need to set up some manner of scalable storage backend for Cloudstack, and Ceph seems to be a popular option for backing VMs. Do you have any recommendations for reference architectures that I can look at to do some light research, from a hardware and network equipment standpoint? One of the concerns I have is having enough of a pipe for all the storage cross-chatter, but most of the network designs I'm familiar with would require going from top-of-rack Nexus 2Ks to middle-of-row 5Ks in order to keep costs down, which would likely get saturated real fast at scale. Look at low cost 10G Clos architecture. https://en.wikipedia.org/wiki/Clos_network Remember you don't have to buy brand name optics at 10-100x the price. Call someone like Prolabs. Arista and Juniper have some decent offerings in this line, or if you're feeling really frisky, Cumulus.
|
# ? Feb 4, 2016 20:06 |
|
I've got a project to phase out a Compellent array being used as primary storage for our VMware cluster. I have limited funding available, so I'm looking away from Dell, HP, Lenovo, and IBM and towards use-your-own-disk solutions, so that I don't end up in the situation we're in with the Compellent - spending money to spend money so that we're allowed to spend some money. It makes sense for some organizations - not ours. Anyway, I think we're going with a Synology RS3614xs+ system, with 4TB of SSD storage as cache and 32TB of regular drive space for lower-frequency-of-access data. Does anyone have experience with Synology's SSD caching features and interoperability with VMware? We have used Dell's data "tiering" or whatever and that seemed to work pretty well, but I'm curious if anyone here has used this feature before. We have this specc'd out for around $10k, which seems a lot more reasonable than similar offerings from Dell hitting $17-25k for similar feature sets and size.
|
# ? Feb 10, 2016 20:18 |
|
It's dogshit, there's no support in the event of a problem, any claims of poor performance will be shrugged away with "dunno", and you have to down the box to perform software updates, of which there are loads. How many hosts are you trying to provide storage for? At the barest minimum I'd try and stretch to a SAS SAN like a Dell MD3, HP MSA 2040, or potentially a VNXe1600 if you need to use iSCSI.
|
# ? Feb 10, 2016 21:00 |
|
Spudalicious posted:I've got a project to phase out a Compellent array being used as primary storage for our VMWare cluster. I have limited funding available, so I'm looking away from Dell, HP, Lenovo, IBM and towards use-your-own-disk solutions so that I don't get into the situation we are in with the Compellent - spending money to spend money so that we're allowed to spend some money. It makes sense for some organizations - not ours. We're probably going to need an idea of what you're using this for to make any suggestions. But in general I'd recommend Synology for production if you often think "eh, who really needs this data in a timely manner, or indeed, at all!?"
|
# ? Feb 11, 2016 04:29 |
|
If you have a Compellent array and are thinking of a prosumer NAS to replace it, either you were vastly over sold the first time around or you are vastly underestimating your needs the second time around.
|
# ? Feb 11, 2016 07:00 |
|
I have three quad-hypervisor VRTX boxes, each with a full array of drives. These drives were purchased as part of the boxes without a real plan for how their storage would be served. The intent is for them to store a first tier of backups. The VRTXes were purchased more as blade chassis than as remote office/branch devices. Meh, not the worst fuckup ever. They handle our research load fine. So, I need to serve up three separate VRTX drive pools. With software. Ideally, I'd get some redundancy between them automatically. Tiered storage is a dead technology. Before I start going to vendors, what recommendations, if any, would this thread have? Set up Nutanix on VMs and serve 'em up? VMware v$AN with insane licensing fees? I believe the VRTXes use H710 controllers, since our hardware dude wasn't given any data on how the storage, as opposed to compute, would be used in the future. http://www.dell.com/learn/us/en/04/campaigns/dell-raid-controllers Potato Salad fucked around with this message at 07:23 on Feb 11, 2016 |
# ? Feb 11, 2016 07:20 |
|
Internet Explorer posted:If you have a Compellent array and are thinking of a prosumer NAS to replace it, either you were vastly over sold the first time around or you are vastly underestimating your needs the second time around. This. An old director of mine was hooked on buying some QNAP units and filling them with SSDs. They worked fine until the iSCSI service poo poo the bed (happened too often). I bought some Synology RS2416RP+ units at the end of the year with excess budget money. While they work fine for slowly serving up some data, putting load on the box via iSCSI has some pathetic performance. From doing some research, NFS seems to perform better, so I'll be testing that out soon. I would not use units like this for any sort of real production.
|
# ? Feb 11, 2016 20:07 |
|
Also you can get fairly close to Synology pricing with a NetApp FAS2520 if you pick a bundle and push on the pricing, maybe lining it up with the end of a quarter. Close as in "taking into account the fact it's a far better supported product", not literally the same pricing, that would be insane. Moey posted:I bought some Synology RS2416RP+ units at the end of the year with excess budget money. While they work fine for slowly serving up some data, putting load on the box via iSCSI has some pathetic performance. From doing some research, NFS seems to perform better, so I'll be testing that out soon. I have inherited a client running really lightweight VM workloads using a similar model Synology with NFS datastores, and it's poo poo. It doesn't take a lot to choke the box and latency goes through the roof. Thanks Ants fucked around with this message at 21:22 on Feb 11, 2016 |
# ? Feb 11, 2016 21:20 |
|
Why are our vm's so slow!?!
|
# ? Feb 18, 2016 01:41 |
|
mayodreams posted:Why are our vm's so slow!?!
|
# ? Feb 18, 2016 02:19 |