|
goobernoodles posted:Yeah, just 1gb. I currently have a Netgear GSM7248 and GS724AT switches on the rack that the hosts, iSCSI, and other things plug into. I'm hoping to get a full infrastructure refresh (servers, switches, SAN) on the budget for next year, but I want to go through our existing setup and redo the iSCSI networking as a stopgap until that point. We've had occasional, ongoing issues with this setup and I really want to get the iSCSI traffic segregated. Sounds like your requirements are modest and really any vendor would work. Is there an OS you're more comfortable working in (IOS, JunOS, etc)? If so, go with that one. If not, just go with the cheapest deal you can get from a reputable vendor that meets your needs. Moey's suggestion is good.
|
# ? Nov 14, 2014 23:56 |
|
Get something with decent sized port buffers. Stuff like the Cisco 2960 looks like it will do the job, but it falls over at pretty modest throughput levels due to very small port buffer caches. As a result storage traffic can really crater due to pause frames and retransmits.
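To put rough numbers on why small buffers hurt, here's a quick sketch (Python; the buffer size is an assumption, check your model's datasheet) of how long a shared egress buffer lasts when two hosts burst at line rate toward a single 1 Gb port:

```python
# Incast sketch: two 1 Gb/s senders burst into one 1 Gb/s egress port.
# The switch buffers the 1 Gb/s surplus until the buffer fills, then it
# drops frames or emits pause frames and storage traffic craters.

def time_to_fill_ms(buffer_kb, ingress_gbps, egress_gbps):
    """Milliseconds until a shared egress buffer overflows."""
    surplus_bps = (ingress_gbps - egress_gbps) * 1e9  # excess bits/s queued
    return buffer_kb * 1024 * 8 / surplus_bps * 1000

# ~256 KB of packet buffer (assumed figure) absorbs only about 2 ms of incast:
print(f"{time_to_fill_ms(256, 2, 1):.2f} ms to overflow")
```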
|
# ? Nov 15, 2014 00:32 |
|
goobernoodles posted:Yeah, just 1gb. I currently have a Netgear GSM7248 and GS724AT switches on the rack that the hosts, iSCSI, and other things plug into. I'm hoping to get a full infrastructure refresh (servers, switches, SAN) on the budget for next year, but I want to go through our existing setup and redo the iSCSI networking as a stopgap until that point. We've had occasional, ongoing issues with this setup and I really want to get the iSCSI traffic segregated. Just look for something with VLAN support, given your environment and what you need to do; a Cisco SG will do fine...
|
# ? Nov 15, 2014 02:07 |
|
Kaddish posted:I'm seriously considering consolidating our entire VMware environment (about 45TB) to a Pure FA-420. I can get 60TB usable (assuming 5:1 compression) for about 240k. Anyone have any first hand experience? It seems like a solid product and perfect for VMware. This is some reply necromancy, but we just replaced our FA-320s with FA-420s and added another 12TB shelf to take us to 23TB raw capacity. We do see compression ratios of around 6:1 over the entire array, but I don't recall what it is on the VMFS LUNs specifically.
|
# ? Nov 15, 2014 04:32 |
|
We use Catalyst 3750-X switches in small storage environments with great success. We even break the rules and share duties with L3 routing.
|
# ? Nov 16, 2014 23:32 |
|
I ended up with HP 2910-al switches for iSCSI and they were fine.
|
# ? Nov 16, 2014 23:38 |
|
Got our storage arrays all racked and ready to go! I've asked this before, but has anyone had any experience with the Dell compellent synchronous live volumes? I'd like to hear some experiences with using it in a production environment.
|
# ? Nov 17, 2014 17:36 |
|
Are they finally putting the controller heads into Dell Chassis?
|
# ? Nov 17, 2014 18:16 |
|
FISHMANPET posted:Are they finally putting the controller heads into Dell Chassis? The SC8000 controllers are in Dell chassis and have been out for over two years. This looks like the new-ish 4020 that integrates the controllers and a disk shelf into one 2u unit. bigmandan posted:I've asked this before, but has anyone had any experience with the Dell compellent synchronous live volumes? I'd like to hear some experiences with using it in a production environment. Synchronous replication is a big-boy feature and you need to make sure your network is rock solid. Remember, the remote array has to acknowledge the write before it completes. Any kind of latency and you can kiss performance goodbye. A single storage switch plus a small SAN and talk of sync replication are usually not things that go together well. There are very specific use cases for it, like split metro clusters. Async replication is good enough for DR and backup. What's your use case?
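Back-of-envelope on the latency point (Python; idealized, ignores queueing and assumes the remote commit overlaps the return trip):

```python
# With sync replication every write must be acked by the remote array
# before the host sees it complete, so the inter-site round trip rides
# on top of every single write.

def sync_write_latency_ms(local_write_ms, inter_site_rtt_ms):
    """Effective host-visible write latency under sync replication."""
    return local_write_ms + inter_site_rtt_ms

local = 0.5  # ms for the local array to commit (assumed figure)
for rtt in (0.1, 1.0, 5.0):  # campus / metro / not-so-metro RTTs
    print(f"RTT {rtt:>4} ms -> writes complete in {sync_write_latency_ms(local, rtt):.1f} ms")
```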
|
# ? Nov 17, 2014 19:54 |
|
Our newer Compellent trays are looking to just be 720xd chassis, but our initial kit, from soon after Dell bought Compellent, was still the Super Micro stuff.
|
# ? Nov 17, 2014 20:47 |
|
KS posted:The SC8000 controllers are in Dell chassis and have been out for over two years. This looks like the new-ish 4020 that integrates the controllers and a disk shelf into one 2u unit. The picture I have only shows one cabinet. Duplicate everything there (-1 server) in another cabinet and that's our initial setup (3 hosts, 2 switches, 2 storage arrays). Eventually the second storage array will be offsite (with multiple 10 Gbps links), but we are waiting for the DR site to be built. Once that's done, one of the storage arrays will move over and then we'll add 3 more hosts in a new VM cluster. The idea is that we want to be able to fail over to the DR site if there is ever a communications outage to our main data centre (we are building a redundant ring within the city). Network latency in general should not be an issue as we can easily provision 10 or 40 Gbps links if needed (we prefer 10 because 40 Gbps optics are expensive as gently caress right now). One of the reasons I was asking about synchronous live volumes was: "Since the disk signatures of the datastores remain the same between sites, this means that during a recovery, volumes do not need to be resignatured, nor do virtual machines need to be removed and re-added to inventory." (Dell Compellent Best Practices with VMware vSphere 5.x) I understand we can get by with async replication but the above feature seems pretty enticing as it seems it would reduce administration headaches when dealing with a fail-over. Also I think I need to get out and exercise. Racking the SC220 disk trays gave me quite the workout.
|
# ? Nov 17, 2014 21:32 |
|
For async, a product like SRM breaks the replication relationship and re-signatures the datastores automatically. It also has far more robust DR handling than a stretched cluster. Here's the VMware whitepaper with metro cluster requirements. Check out page 12 for the "When to Use/When not to use" discussion. There is also an entry in the VMware Storage HCL for "iscsi metro cluster storage." It appears the Compellent is not on it.
|
# ? Nov 17, 2014 22:10 |
|
KS posted:For ASync, a product like SRM breaks the replication relationship and re-signatures the datastores automatically. It also has far more robust DR handling than a stretched cluster. Thanks for this link!
|
# ? Nov 17, 2014 22:21 |
|
orange sky posted:Holy poo poo, nice. I wish my company was selling you that Because we didn't get a good deal?
|
# ? Nov 18, 2014 10:14 |
|
Company I interviewed yesterday talked about buying parts for their 5+ year old EMC CX380 on eBay, and the replacement brand-new SAN didn't make the project list for next year's budget. Pass.
|
# ? Nov 18, 2014 12:14 |
|
Jadus posted:Would you mind expanding on this? I've just recently purchased a PS6500ES and am very happy with it, but have no experience beyond that. I've got about 30 various units, so maybe it's just that I've got a much higher chance of failure. I've had 4 controller failures in the past month alone, plus a bunch of firmware-related issues including the "Resets every 248 days" bug that was fixed in the latest firmware. I've lost a lot of sleep in the last year; the v7 firmwares have been horrible, while our few groups that are still running 6.x have been flawless for years. Now we've apparently hit another firmware bug that has resulted in a couple of controller panics, and we're waiting on Engineering to figure out what's causing it. theperminator fucked around with this message at 13:46 on Nov 18, 2014 |
# ? Nov 18, 2014 13:42 |
|
devmd01 posted:Company I interviewed yesterday talked about buying parts for their 5+ year old EMC CX380 on eBay, and the replacement brand-new SAN didn't make the project list for next year's budget. Pass. Sounds like the company equivalent of telling someone that you live with your mother on a first date. Some employers are just looking for rogues.
|
# ? Nov 23, 2014 01:22 |
|
devmd01 posted:Company I interviewed yesterday talked about buying parts for their 5+ year old EMC CX380 on eBay, and the replacement brand-new SAN didn't make the project list for next year's budget. Pass. Who tells you that in an interview? That is usually the poo poo you find out the first week. I could maybe see "we have a 5 year old SAN we are looking to replace; it's not in the budget, so one of your first tasks will be to price one out." They will tell you they don't have the budget after you spec it. Took a year and a half, but I finally got budget to replace my completely full array. Switching vendors too. It's small, but so is our budget; it's over 10% of IT's operating budget for the year.
|
# ? Nov 23, 2014 13:42 |
|
pixaal posted:Who tells you that in an interview? That is usually the poo poo you find out the first week. I could maybe see "we have a 5 year old SAN we are looking to replace; it's not in the budget, so one of your first tasks will be to price one out." They will tell you they don't have the budget after you spec it. They are probably looking for a goony hacker that won't flinch at that poo poo.
|
# ? Nov 23, 2014 14:57 |
|
adorai posted:They are probably looking for a goony hacker that won't flinch at that poo poo. "Oh wow a chance to run OpenFiler in production on some DL380s I got off eBay! Finally I can save someone else's money and it will only cost me some of my worthless time."
|
# ? Nov 23, 2014 15:13 |
|
pixaal posted:Took a year and a half but finally got budget to replace my completely full array. Switching vendors too It's small but so is our budget it's over 10% of the ITs operating budget for the year.
|
# ? Nov 23, 2014 15:38 |
|
Thanks Ants posted:"Oh wow a chance to run OpenFiler in production on some DL380s I got off eBay! Finally I can save someone else's money and it will only cost me some of my worthless time." The problem with this in an employment context is that unless you get some equity, all that blood to get the savings disappears into the CEO's private jet. When it comes time to ask for a raise you hear "we don't have the budget for that, see we spent $50 on your SAN last month what more do you want?" Just move on.
|
# ? Nov 25, 2014 13:54 |
|
I refuse to support any SAN in a production environment without a maintenance contract in place, unless it's something they really, really don't give a poo poo about. Ultimately it's my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage.
|
# ? Nov 25, 2014 14:05 |
|
devmd01 posted:I refuse to support any SAN in a production environment without a maintenance contract in place, unless its something they really, really don't give a poo poo about on there. Ultimately its my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage. This, 1000x this. Unless you have a team with vendor-level knowledge of the product and keep spares, this is gospel.
|
# ? Nov 25, 2014 15:21 |
|
Richard Noggin posted:This, 1000x this. Unless you have a team with vendor-level knowledge of the product and keep spares, this is gospel. This is why I bought a Hitachi SAN. In many ways it's been a nightmare, but it keeps serving my data, keeping the company up, and helps me keep my job.
|
# ? Nov 26, 2014 22:40 |
|
devmd01 posted:I refuse to support any SAN in a production environment without a maintenance contract in place, unless it's something they really, really don't give a poo poo about. Ultimately it's my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage. I'm not sure that jibes with your username.
|
# ? Nov 27, 2014 01:04 |
|
Got a Dell rep on the phone to say "wow, you guys are crazy, you know you all should do X/Y/Z right?" SAN upgrades when you have <8hr notice for 25K customers are fun; glad I sperged myself so much over SAN/IP poo poo. 8 controllers, 125TB of upgrades in 2 hours + EMC battery back-plane failure + hooters; poo poo owns.
|
# ? Nov 27, 2014 02:42 |
|
Since this thread needs some action: how many people are running 16Gb FC? I have seen a lot of marketing chatter in this area, and with AFAs starting to become more than an edge case, they can easily supply the throughput needed. Obviously specs march on, but has anyone seen it in the wild?
|
# ? Dec 7, 2014 18:51 |
|
KennyG posted:Since this thread needs some action: how many people are running 16Gb FC? I'd say it's still an edge case. Two 8Gb links provide 2GB/s of throughput, which is more than enough for most use cases, especially when you're talking single array performance rather than scale out. And SSD arrays haven't really increased throughput substantially over disk arrays anyway. I've seen it for ISLs, but 4 or 8 Gb is still what I see for host connectivity.
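For anyone curious where those figures come from, rough line-rate math (Python; nominal rates, real throughput will be lower): 8GFC signals at 8.5 GBaud with 8b/10b encoding, 16GFC at 14.025 GBaud with 64b/66b, so per-link payload rates are roughly:

```python
# Nominal FC link throughput from signalling rate and encoding overhead.
# Real-world numbers come in lower (framing, protocol overhead).

def fc_usable_MBps(gbaud, encoding_efficiency):
    """Approximate usable payload rate of one FC link in MB/s."""
    return gbaud * 1e9 * encoding_efficiency / 8 / 1e6  # bits -> MB

eight_gfc = fc_usable_MBps(8.5, 8 / 10)        # ~850 MB/s per link
sixteen_gfc = fc_usable_MBps(14.025, 64 / 66)  # ~1700 MB/s per link
print(f"two 8GFC links ~ {2 * eight_gfc / 1000:.1f} GB/s")
```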
|
# ? Dec 7, 2014 19:54 |
|
Netapp FAS 2554 installed and NFS'ing in maybe 2 hours. Compared to my Hitachi this thing is so easy to use. e: I may go to 16Gb FC with my next Hitachi SAN if the stars align. As much for latency as throughput.
|
# ? Dec 11, 2014 03:22 |
|
Aquila posted:Netapp FAS 2554 installed and NFS'ing in maybe 2 hours. Compared to my Hitachi this thing is so easy to use. Clustered ONTAP, or 7 mode? If Clustered ONTAP, are you running 8.3RC1 with disk partitioning to save root aggregates?
|
# ? Dec 14, 2014 01:41 |
|
NippleFloss posted:I'd say it's still an edge case. Two 8Gb links provide 2GB/s of throughput which is more than enough for most use cases, especially when you're talking single array performance rather than scale out. And SSD arrays haven't really increased throughout substantially over disk arrays anyway. I did a 2 x FAS8040 install (dual sites, ROADM connectivity, including a shelf of SSDs).. customer also wanted a quote for 4 x 16Gb FC switches. We got pricing, standard discounts, etc. Their response was "how do the switches cost as much as the filer?"
|
# ? Dec 14, 2014 17:32 |
|
Hey guys, I work for a pretty big movie production studio in Japan. Our department within the studio has requested my help on finding a solution to their problem... Problem: We are working on multiple large, high bitrate 4K, high FPS data files. Most of our content going forward is going to be done in 4K. Obviously as a movie production studio we will be working on CG, but also doing most of our work in After Effects. We are talking about hundreds of gigabytes of data for just seconds of movie footage. Our first project this year showed that we have a major bottleneck in our current set up. We have a large storage server in-house that we use to store backups and all our data. When we record the footage we need to move that footage onto the server. After that we need to work on the footage. Our current setup means that we can't work on the data directly on the server. First we have to copy all the data to our PCs, work on it there and then copy it back, but when the size of the data for just 2 minutes of footage is over 10TB, you can see how the time lost copying data back and forth is pretty inefficient. My boss has asked me to research, build a budget and get the ball moving on a way to make a fast, high capacity storage server that doesn't buckle when multiple people connect. Ideally what we would like is a server that allows normal users to connect to it via ethernet just as you would a normal NAS storage device, like we have now. Then we would also like to have the ability for 5 or 6 PCs that will be dedicated to compositing and working on the data to connect directly to it via fibre channel. The server needs to not buckle when those 5 or 6 people are doing work on it, and it also needs the ability to share the data to normal users via ethernet (with priority given to data transfer via optical cable).
I've done a bit of research and it looks like if we bought something like a PowerVault SAN from Dell, it could connect to our already existing network and be seen by regular users as a NAS, and then also be directly connected to a PC and be seen as a DAS. That is according to this: http://www.smallnetbuilder.com/nas/nas-howto/31485-build-your-own-fibre-channel-san-for-less-than-1000-part-1 The diagram seems to suggest that to be the case. I guess I have a few questions regarding the feasibility or my understanding of this, but: can you connect 5 or 6 PCs directly to a PowerVault SAN via optical like we want, or does it not work like that? Given our requirements, can you recommend something other than a Dell (that might be available in Japan) or suggest a configuration for a Dell PowerVault?
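One way to frame the copy-back-and-forth bottleneck before picking hardware (Python; idealized wire speed, no protocol overhead):

```python
# Hours to copy one project between the server and a workstation at
# various link speeds, assuming the disks on both ends can keep up.

def copy_hours(data_tb, link_gbps):
    """Idealized transfer time for data_tb terabytes over a link_gbps link."""
    return data_tb * 1e12 * 8 / (link_gbps * 1e9) / 3600

for gbps in (1, 10, 40):
    print(f"{gbps:>2} Gb/s link: {copy_hours(10, gbps):5.1f} h per 10 TB project")
```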
|
# ? Dec 15, 2014 08:24 |
|
Rekka posted:words On the train so can't give too detailed a reply but: I used to look after a lot of media accounts in my old role, and what you are talking about is 100% the use case for EMC Isilon. Here's a big old list of users from years ago: http://www.storagenewsletter.com/rubriques/business-others/apple-isilon-itunes/ So it's a node based file architecture. Basically you add 'nodes' and each node adds compute, storage and network, so it gets bigger and faster the larger it gets; it's used by pretty much every media house for exactly what you say. Dead easy to use
|
# ? Dec 15, 2014 09:32 |
|
Vanilla posted:EMC isilon. Heh, my company has 4 from that list as clients, either replacing or installed alongside their Isilons. I would send you a PM Rekka, but you don't have the button.
|
# ? Dec 15, 2014 16:54 |
|
Rekka posted:NAS and "DAS" stuff What you want is functionally impossible. You can have arrays that serve both NAS data and FC data at the same time, but nothing can serve THE SAME data as both NAS and FC. One is block level and one is file level, one is meant for shared access and one isn't, except in a clustered environment. There's nothing special about FC or DAS that makes it faster than NAS, particularly when your workload is based largely on throughput. Buy a good NAS, build a good, high throughput network for your storage traffic, and you will be fine. Isilon is a good recommendation for what you're doing, as mentioned above, though there are certainly other possibilities.
|
# ? Dec 15, 2014 23:22 |
|
NippleFloss posted:There's nothing special about FC or DAS that makes it faster than NAS
|
# ? Dec 16, 2014 00:52 |
|
Ataraxia posted:Heh, my company has 4 from that list as clients, either replacing or installed alongside their Isilons. Thanks Ataraxia, can you send me an email at dominic2401@gmail.com? Also thanks to everyone for their replies about Isilons, I'm taking a look into it!
|
# ? Dec 16, 2014 12:19 |
|
NippleFloss posted:What you want is functionally impossible. You can have arrays that serve both NAS data and FC data at the same time, but nothing can serve THE SAME data as both NAS and FC. One is block level and one is file level, one is meant for shared access and one isn't, except in a clustered environment. Hmmmm, how many people can be connected via FC at one time? Could we get 30 people or so connected via fibre?
|
# ? Dec 16, 2014 12:20 |
|
Think of fibre channel like connecting a hard drive's SATA cable to multiple PCs. Your first question should be "how do I stop people overwriting my stuff?". FC can only have a 1:1 relationship between LUNs and hosts, unless the host is cluster-aware, which it's unlikely a bunch of video editing machines will be. You're looking for shared storage, or NAS. You can connect to this over 10Gbps fibre assuming the arrays can keep up, but that's not FC.
|
# ? Dec 16, 2014 16:06 |