|
bigmandan posted:What's the main drive for upgrading? What that is will dictate which solution will work for you. Do you need more performance, capacity, features, etc.? Would like more of everything but it's mainly "this is 3-4 years old and we should get a new one"
|
# ? Sep 21, 2015 16:14 |
|
Bob Morales posted:Would like more of everything but it's mainly "this is 3-4 years old and we should get a new one" It's been a while since I looked at pricing from Nimble, but I think their entry solutions are near that price point. A quote I have from about a year ago was ~20k CDN for a step above entry. bigmandan fucked around with this message at 16:37 on Sep 21, 2015 |
# ? Sep 21, 2015 16:33 |
|
Bob Morales posted:We have (I think) a Dell MD3200 that only has the external SAS connectors, and only 2 of those. We have 2 servers running VMware (just the lowest paid version). Basic Linux and Windows file servers, no big databases or anything. I think we have like 1.5TB worth of stuff. I really like the SAS DAS models for shared storage deployments that are never going to grow beyond 3 hosts. If you are not going to grow I would think long and hard before moving to iSCSI or FC. Unless you just want to gain experience with a "proper" SAN. The SAS DAS solution is super simple, super cheap, and significantly faster than what you can do at entry level prices.
|
# ? Sep 21, 2015 18:36 |
|
There's basically nothing you can get from an established vendor for less than 15k. NetApp, Nimble, and Tegile are all going to start above 20k for the lowest end models unless you get some outrageously good discounts. You may want to look at VSAN licensing (probably going to be too expensive) or something like a Dell VRTX.
|
# ? Sep 21, 2015 19:14 |
|
Internet Explorer posted:I really like the SAS DAS models for shared storage deployments that are never going to grow beyond 3 hosts. If you are not going to grow I would think long and hard before moving to iSCSI or FC. Unless you just want to gain experience with a "proper" SAN. The SAS DAS solution is super simple, super cheap, and significantly faster than what you can do at entry level prices. That makes sense. I had never used a DAS unit like that before, we had a couple NetApps at my last gig (and we only had 3 hosts, and migrated to 2) so that's what I know.
|
# ? Sep 21, 2015 19:42 |
|
bigmandan posted:It's been awhile since I looked at pricing from Nimble, but I think their entry solutions are near that price point. A quote I have from about a year ago was ~20k CDN for a step above entry. Nimble entry level has about 3-4 times more storage than he needs. At 1.5TB I wouldn't buy anything fancy.
|
# ? Sep 21, 2015 21:24 |
|
Are DAS just traditionally JBOD, or do they ever have more advanced features like dedupe, compression, hybrid flash, etc.?
|
# ? Sep 21, 2015 22:50 |
|
beepsandboops posted:Are DAS just traditionally JBOD, or do they ever have more advanced features like dedupe, compression, hybrid flash, etc.? They're pretty much always just drive enclosures hanging off a typical RAID controller doing RAID 0/1/5/6/10/50/60 etc. Nothing more than that and the cache on the controller itself.
|
# ? Sep 22, 2015 00:06 |
|
beepsandboops posted:Are DAS just traditionally JBOD, or do they ever have more advanced features like dedupe, compression, hybrid flash, etc.? You're going to need whatever is connecting to it to do that magic.
|
# ? Sep 22, 2015 16:41 |
|
I have been looking at building my own directly attached storage device. From my understanding, I can get a separate case and a device like http://www.newegg.com/Product/Product.aspx?Item=9SIA24G28M7361&cm_re=sas_expander-_-16-117-207-_-Product + a tiny motherboard to provide power and then connect it to an existing HBA in my server? I was reading this guide http://www.servethehome.com/sas-expanders-diy-cheap-low-cost-jbod-enclosures-raid/ and it seems like an interesting option. This is not for anything mission critical, just for holding backups and stuff. I have a current setup that uses Windows Storage Spaces with a JBOD array set up in a mirror and it has been great. I have just run out of SATA ports and plan on getting an HBA. However I will also need to get a case to hold all of these drives anyway, which is what led to me looking at various directly attached storage devices. I just want to future-proof so I can expand even further. The cost of a direct-attached storage device is crazy and I would love to build my own, especially if I could buy a $250 expander card + a case with 24 hot-swap bays instead of a Synology box which would cost more than a month's rent.
|
# ? Sep 23, 2015 21:35 |
|
What's the opinion on EMC Isilon? Company I started working for has a 400-something TB setup that we just dropped 80k on for software renewals. Only problem is the guy who set the drat thing up is two years gone, nobody has a clue as to how this thing runs.
|
# ? Sep 25, 2015 10:07 |
|
Wicaeed posted:What's the opinion on EMC Isilon? Company I started working for has a 400-something TB setup that we just dropped 80k on for software renewals. Only problem is the guy who set the drat thing up is two years gone, nobody has a clue as to how this thing runs.
|
# ? Sep 25, 2015 14:35 |
|
Just send someone off to the EMC certification class. If you have an isilon it's worth doing.
|
# ? Sep 25, 2015 14:40 |
|
Hey guys I have a conceptual question as I'm new to all of this fiber what-nots. I do motion graphics and visual effects (and sometimes some editing) for an ad agency and our colorist is right next door to me. We're gonna get this QNAP Thunderbolt2/Fiber NAS/DAS combo: https://www.qnap.com/solution/thunderbolt-nas/en-us/index.php So that we can work on shared storage instead of constantly round-tripping to each other. Obviously we both need Fiber cards in our systems and a sort of fiber switch (a few more devices may connect in the future... also the thunderbolt 2 ports will be taken up by an LTO backup). But I'm not sure where to go besides that. Someone at work recommended this 10gig switch: http://www.amazon.com/NETGEAR-ProSA...=Netgear+XS712T and these cards: http://www.newegg.com/Product/Product.aspx?Item=N82E16833106184 So I'm guessing Fiber goes from the QNAP to the Combo ports on those switches and then the Cat 6A/7 from the switches to the computer cards?
|
# ? Sep 25, 2015 14:48 |
|
BonoMan posted:Hey guys I have a conceptual question as I'm new to all of this fiber what-nots. There's no fiber in that config. The QNAP has 10G-BaseT ports, which are normal RJ-45 copper ports.
|
# ? Sep 25, 2015 15:17 |
|
NippleFloss posted:There's no fiber in that config. The QNAP has 10G-BaseT ports, which are normal RJ-45 copper ports. Uh, what the hell? I don't know what I was looking at. I swear when I initially researched it it said fiber. As well as having cute little diagrams about how to setup the fiber switch. Now that I go back to it I can't find a single mention of anything like that anywhere on QNAPs site. Did I fall into a loving black hole or something?
|
# ? Sep 25, 2015 16:53 |
|
Speed of 10GBase-T = Speed of 10GBase-SR. Doesn't matter. In truth, optical transceivers add more latency to each jump than copper RJ-45 ports, so if you have copper 10GbE available to you and you aren't spanning distances greater than 100m, the fiber doesn't add any value outside, say, future-proofing for end-to-end fiber upgrades if you see that sort of thing happening in this NAS's operational lifespan in your environment. If one or both of you can attach to a 10GBase-T switch, you are golden. Eight drives providing 20Gbit throughput? That's 313MB/s per spindle in non-redundant RAID 0, which sounds like a stretch.
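Spelling out that per-spindle figure (a quick sketch using the numbers from the post; decimal units):

```shell
# 20 Gbit/s of claimed throughput spread across 8 spindles in RAID 0.
link_gbit=20
spindles=8
total_mb=$(( link_gbit * 1000 / 8 ))   # 20 Gbit/s = 2500 MB/s (decimal units)
per_spindle=$(( total_mb / spindles )) # ~312 MB/s per drive
echo "${per_spindle} MB/s per spindle"
```

That's well beyond what a single spinning disk typically sustains sequentially (roughly 150-200 MB/s for drives of that era), which is why the claim sounds like a stretch.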
|
# ? Sep 25, 2015 18:38 |
|
Potato Salad posted:Speed of 10GBase-T = Speed of 10GBase-SR. Doesn't matter. In truth, optical transceivers add more latency to each jump than copper RJ-45 Base-T latency is an order of magnitude higher than SFP+ owing to the encoding scheme required. But for high-throughput operations running on spinning drives that won't matter much. The other reason to use SFP+ is the host-side adapters are cheaper and more plentiful.
|
# ? Sep 25, 2015 18:55 |
|
NippleFloss posted:Base-T latency is an order of magnitude higher than SFP+ owing to the encoding scheme required. My understanding was previously the other way around for top-end copper networking gear. I'm going to go do more reading to make sure. Thanks.
|
# ? Sep 25, 2015 21:43 |
|
I've never understood 10Gbase-T - if it's going a short distance, use SFP+ direct attach because it's cheaper. If it's going further then you're going to have to replace whatever you have with Cat6a anyway so just put fibre in. I guess it could make sense if you're doing shortish drops to video workstations and don't want to constantly break fibre terminations. Anyone else got a use case for it?
|
# ? Sep 25, 2015 21:53 |
|
Thanks Ants posted:Anyone else got a use case for it? Yeah, environments that are irrationally terrified of the danger presented by solid state optics. Mind, this is a research environment and not all the IT techs have the necessary skills to handle OMG LAZURS safely. On paper, environment safety protocol on some projects does not distinguish between the devices on Optics Bench #482 and the SFP+ interface in the back of Joe's desktop. I and a few others who actually do networking poo poo have degrees and/or training with fiber optics, but...yeah. My own sphere of influence is going 10GBase-SR as we speak, so some progress is being made. Potato Salad fucked around with this message at 22:07 on Sep 25, 2015 |
# ? Sep 25, 2015 22:04 |
|
Potato Salad posted:My understanding was previously the other way around for the case of top-end networking copper networking gear. I'm going to go do more reading to make sure. Thanks. Yeah 10GBASE-T is around 1-2 usec slower than DAC or fiber depending on the PHY.
|
# ? Sep 25, 2015 23:13 |
|
Potato Salad posted:My understanding was previously the other way around for the case of top-end networking copper networking gear. I'm going to go do more reading to make sure. Thanks. One example: http://www.cisco.com/c/en/us/products/switches/nexus-3000-series-switches/models-comparison.html Thanks Ants posted:I've never understood 10Gbase-T - if it's going a short distance, use SFP+ direct attach because it's cheaper. If it's going further then you're going to have to replace whatever you have with Cat6a anyway so just put fibre in. The biggest appeal is backwards compatibility and low interconnect cost. If you're not upgrading everything to 10G at the same time the fact that you can plug in plain 1G RJ-45 and it will work is very appealing. Think upgrading your storage to a new 10G compatible array but not having the budget to update all of your ESX hosts. Also, SFP+ optics are expensive and can add up quickly, and DAC cables aren't always an option due to length restrictions or a need to integrate 1g copper devices.
|
# ? Sep 26, 2015 03:05 |
|
Also DAC cable vendor compatibility, which is still a thing last I checked.
|
# ? Sep 26, 2015 18:26 |
|
CrazyLittle posted:Also DAC cable vendor compatibility, which is still a thing last I checked. Huge thing, even with verified twinax pairings. Server/storage vendors and networking vendors will point fingers at each other "it's their fault it's not negotiating 10Gbit" while your direct-attach support tickets rot away unresolved. Edit: Going 850nm over twinax tacked another 10% on to our most recent system update + convergence project, but at every turn we've been proverbially pulled to the side and told "I wouldn't loving trust it." There's a future unrelated use for fiber anyway. Potato Salad fucked around with this message at 02:18 on Sep 27, 2015 |
# ? Sep 27, 2015 01:51 |
|
I'm looking for an SMB/CIFS splitter. Basically I have billions of files at //fs01/share and need to move them to //fs02, but can't take the 30-day outage to copy the 100TB off a 1Gb link. I know this is a thing with some SANs, but I want to set up a third-party proxy to do it: a splitter that would prioritize reads and writes to //fs02 and fall back to //fs01 if the data hasn't yet been written to the new location. I know this is more complicated than it appears, but it is really a dead simple concept on its face. Someone has to do it. Anyone know of a solution?
|
# ? Sep 27, 2015 15:28 |
|
You can't just map the volume to the new server?
|
# ? Sep 27, 2015 19:30 |
|
Or cname it. Or drop a standalone DFS root in front of it. Without knowing more, what you're describing is possibly overly complex.
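The CNAME route could be sketched like this with Windows' dnscmd (dns01, corp.example.com, and the host names are hypothetical placeholders; serving SMB under a CNAME may also require DisableStrictNameChecking on the target server):

```shell
REM After the data is copied and fs01 is retired, point the old name at fs02.
REM dns01 is a placeholder for your DNS server, corp.example.com for your zone.
dnscmd dns01 /RecordDelete corp.example.com fs01 A /f
dnscmd dns01 /RecordAdd corp.example.com fs01 CNAME fs02.corp.example.com
```

Clients keep using //fs01/share and resolve to the new box; no reconfiguration on their end.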
|
# ? Sep 27, 2015 19:41 |
|
KennyG posted:I'm looking for a smb/cifs splitter. Basically I have billions files at //fs01/share and need to move it to //fs02 but can't take the 30 day outage to copy the 100tb off a 1gb link. I know this is a thing with some sans but I want to set up a third party proxy to do it. Why can't you just use a few robocopy threads to do it? Get them close to synced and then schedule a few hours to finalize.
|
# ? Sep 27, 2015 20:23 |
|
adorai posted:why can't you just use a few robocopy threads to do it? Get them close to synced and then schedule a few hours to finalize. This is pretty much what we've done for these large migrations. You probably don't have 100TB of change every day so pre-stage as much as you can then just do a final cutover.
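That pre-stage-then-cutover pattern could look something like this with robocopy (paths, thread count, and log locations are examples, not from the thread):

```shell
REM Pre-stage while fs01 stays live: run this repeatedly; robocopy skips
REM files that are already identical on the destination, so later passes
REM only move the churn. /E copies subdirectories, /MT:32 runs 32 threads,
REM /R:1 /W:1 keeps retries from stalling the run on locked files.
robocopy \\fs01\share \\fs02\share /E /MT:32 /R:1 /W:1 /LOG+:C:\logs\prestage.log

REM Final pass during the short cutover window: /MIR mirrors the tree,
REM which also removes files deleted on fs01 since pre-staging began.
robocopy \\fs01\share \\fs02\share /MIR /MT:32 /R:1 /W:1 /LOG+:C:\logs\cutover.log
```

The outage window then only has to cover the last delta, not the full 100TB.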
|
# ? Sep 27, 2015 20:39 |
|
Minus Pants posted:Yeah 10GBASE-T is around 1-2 usec slower than DAC or fiber depending on the PHY. And DAC cables are cheap as gently caress. Honestly 10g fiber is pretty loving close to commodity pricing these days. I don't even see the point of 10GBASE-T anymore.
|
# ? Sep 28, 2015 17:21 |
|
1000101 posted:This is pretty much what we've done for these large migrations. You probably don't have 100TB of change every day so pre-stage as much as you can then just do a final cutover. https://technet.microsoft.com/en-us/magazine/2009.04.utilityspotlight.aspx Spawns multiple robocopy threads/does everything robocopy does, with a GUI. You can even control how many copy threads it fires up at a time. It's a lifesaver for migrations and it's also free.
|
# ? Sep 28, 2015 17:23 |
|
Wicaeed posted:What's the opinion on EMC Isilon? Company I started working for has a 400-something TB setup that we just dropped 80k on for software renewals. Only problem is the guy who set the drat thing up is two years gone, nobody has a clue as to how this thing runs. Isilon is pretty easy. My team manages ~40PB of isilon in primary and replication clusters. If you have any questions I can help out, just pm me.
|
# ? Oct 1, 2015 16:20 |
|
We're an EMC shop and not happy about this potential EMC/Dell acquisition/merger news. Guess we'll have to go back to NetApp if Dell f's up EMC like I know they will.
|
# ? Oct 9, 2015 18:12 |
|
skipdogg posted:We're an EMC shop and not happy about this potential EMC/Dell acquisition/merger news. Guess we'll have to go back to NetApp if Dell f's up EMC like I know they will. They managed to not really gently caress up Force10 though, so maybe it won't be a colossal wreck, just a regular wreck?
|
# ? Oct 9, 2015 19:29 |
|
I have serious doubts about a Dell/EMC merger. But I'll take Dell over EMC any day of the week. I can't imagine Dell getting VMware in any potential merger.
|
# ? Oct 9, 2015 19:37 |
|
Anyone going to Insight? Any tips on what not to miss?
|
# ? Oct 10, 2015 21:08 |
|
parid posted:Anyone going to Insight? Any tips on what not to miss? I'll be there. The level 3 and 4 breakout sessions are usually pretty good. Also try to do some free cert testing while you're there.
|
# ? Oct 11, 2015 00:33 |
|
Internet Explorer posted:I have serious doubts about a Dell/EMC merger. But I'll take Dell over EMC any day of the week. Well, it just happened. I'd guess Compellent goes away, some EMC product lines get trimmed, and there's a bigger push towards hyperconverged. Also guessing that ScaleIO becomes Dell only.
|
# ? Oct 12, 2015 16:54 |
|
NippleFloss posted:Well, it just happened. I'd guess Compellent goes away, some EMC product lines get trimmed, and there's a bigger push towards hyperconverged. Also guessing that ScaleIO becomes Dell only. Pretty crazy news. Going to be an interesting time once it gets approved.
|
# ? Oct 12, 2015 17:24 |