|
Yeah, NL-SAS is basically SATA: 7200 RPM mechanics, but they use a SAS interface because that makes it easier for SAN manufacturers to make them interchangeable with real SAS drives. I think you'd be okay with NL-SAS 7200 RPM, but if you can go for 10k SAS you give yourself a little room to grow. RAID50 would be the way to go.
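If you want to sanity-check that choice, the capacity/performance trade is easy to rough out. A quick sketch (the per-drive IOPS figures are generic rules of thumb, not vendor specs, and the drive counts are made up for illustration):

```python
# Rough comparison of two RAID50 builds. Per-drive random IOPS numbers
# (~75 for 7.2k NL-SAS, ~140 for 10k SAS) are common rules of thumb.

def raid50_usable_tb(drives, drive_tb, spans):
    """RAID50 stripes across RAID5 spans; each span loses one drive to parity."""
    per_span = drives // spans
    return (per_span - 1) * spans * drive_tb

def raw_read_iops(drives, per_drive_iops):
    return drives * per_drive_iops  # random reads hit every spindle

# 12 x 2TB 7.2k NL-SAS vs 12 x 600GB 10k SAS, two RAID5 spans each:
print(raid50_usable_tb(12, 2.0, 2), raw_read_iops(12, 75))   # 20.0 TB, 900 IOPS
print(raid50_usable_tb(12, 0.6, 2), raw_read_iops(12, 140))  # 6.0 TB, 1680 IOPS
```

The 10k shelf gives up a lot of capacity for nearly double the random read IOPS, which is the "room to grow" in a nutshell.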
|
# ? Feb 24, 2012 16:00 |
|
Ok, so NL-SAS really is essentially SATA but the 10K 600GB drives we're looking at are actually proper SAS drives. That clears up some confusion, thank you.
|
# ? Feb 24, 2012 16:04 |
|
ozmunkeh posted:As far as load, max 80 Exchange 2007 users with a dozen DynamicsGP on SQL2005 and a half dozen on a different app on SQL2008. Aggregate R/W across the whole environment right now is 80/20.
|
# ? Feb 24, 2012 16:44 |
|
Is anyone in here using, or has anyone worked with, the EqualLogic XS/XVS line? Looking at a 6110XS: 7x 400GB SSD and 17x 600GB 10k SAS with 10GigE. For the price it seems like a beast. Planning on using it to back-end a heavy write/read MS SQL cluster, and I want to cover future performance needs by just killing it with more IOPS than I'll theoretically ever need. But a lot of this performance is predicated on the balancing algorithms working well: the first sales guy said they re-balance every 2 weeks, while the tech resource said no, they actually re-balance blocks almost continually. They couldn't give me a definitive answer on where new blocks get written, whether they go to SSD first and get demoted later; it's just a black box. The white papers look good, 170% TPC-C and 360% IOPS over an XV on the older 6010 line. Just curious if anyone has any real-world experience with how well they hybridize SSD, since I pretty much have to trust that the black-box model is going to work well.
|
# ? Feb 24, 2012 17:08 |
|
evil_bunnY posted:I don't know about that DynamicsGP business but 80 exch2k7 users is basically nothing. Yes, most of our storage demand comes from CAD and internal engineering/design apps.
|
# ? Feb 24, 2012 18:00 |
|
Here's a good link on the difference between SATA, SAS, and NL-SAS.
|
# ? Feb 24, 2012 19:27 |
|
Nukelear v.2 posted:Planning on using it to backend a heavy write/read MS SQL cluster and want to try and cover future performance needs by just killing it with more IOPS then I'll theoretically ever need. But a lot of this performance is predicated on the balancing algorithms working well, first sales guy said they re-balance every 2 weeks, the tech resource said that no they actually re-balance blocks almost continually. They couldn't give me a definitive answer on where new blocks get written to, if they go SSD first and then downgrade later, just that's a blackbox. I'm pretty sure you can set the aggressiveness of the AST routines.
|
# ? Feb 24, 2012 20:13 |
|
Hyrax posted:
Honestly it would take less time to pull the team, set up a single iSCSI network on one of the 10G lines and throw the continuing errors in their face than it took for me to write this post (almost). I'd do that during downtime.
|
# ? Feb 24, 2012 20:23 |
|
paperchaseguy posted:Here's a good link on the difference between SATA, SAS, and NL-SAS. This article is pretty good, but on top of those points, SAS drives have 520-byte sectors and SATA ones have 512-byte sectors. The extra 8 bytes are for that SCSI error checking. When you plug a SATA drive into a SAS port, it needs to allocate a few extra sectors every now and then to store that data. This makes the drive consume additional IO, so a nearline SAS drive will have better performance because of that as well.
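For anyone curious how big that emulation overhead actually is, the arithmetic is simple: 8 extra bytes per 512-byte sector means one extra physical sector for every 64 logical ones. A minimal sketch:

```python
# Overhead of emulating 520-byte logical sectors on 512-byte media:
# each logical sector needs 8 extra bytes, so every 64 logical sectors
# (64 * 8 = 512) spill into one additional physical sector.

import math

def physical_sectors_needed(logical_sectors, logical=520, physical=512):
    total_bytes = logical_sectors * logical
    return math.ceil(total_bytes / physical)

n = 1_000_000
extra = physical_sectors_needed(n) - n
print(extra)             # 15625 extra back-end sectors
print(extra / n * 100)   # 1.5625% space/IO overhead
```

So the space cost is tiny; the performance cost comes from the extra read-modify-write traffic, not the raw capacity.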
|
# ? Feb 25, 2012 00:29 |
|
quote:SAS drives have 520 byte sectors and SATA ones have 512. The extra 8 bytes is for that SCSI error checking. Is that right? My understanding is that many SAS drives can be reformatted to 520-byte sectors, but many aren't. A SAS drive in an EMC system will be formatted at 520 bytes; I don't think a SAS drive hooked up to a PERC controller in a Dell server will be 520-byte formatted. Does anyone know what kind of error correction there is in EqualLogic systems?
|
# ? Feb 25, 2012 17:06 |
|
My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (though I was actually spinning up some logged performance stats earlier today), so I'm not sure where to start. I've been pushing to virtualize for a while: our SQL server is a single socket and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front?
|
# ? Feb 28, 2012 21:40 |
|
Has anyone had any trouble setting up active/active storage arrays with CentOS? Any issues or problems anyone has run into? Openfiler only does active/passive, and FreeNAS 8 only does rsync, I believe. Oddhair posted:My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (and was actually spinning up some logged performance stats earlier today) so I'm not sure where to start. I've been pushing to virtualize for a while, since our SQL server is a single socket, and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front? Dell will let you build/customize your own NAS devices and get a quote instantly. The NX300 and NX3100 are good if you work in a medium-size company, and some R210s are cheap, good first-time-virtualizing servers. VVVV - to answer that: if you just need something to go off of, servers plus storage plus an Essentials Plus kit should run you right up near $25-26k, and if you don't already have some, get some gig switches and make a network just for storage. Dilbert As FUCK fucked around with this message at 22:14 on Feb 28, 2012 |
# ? Feb 28, 2012 21:45 |
|
Oddhair posted:My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (and was actually spinning up some logged performance stats earlier today) so I'm not sure where to start. I've been pushing to virtualize for a while, since our SQL server is a single socket, and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front? Well, first things first: How much data? What type of data? How much growth? IOPS requirements? Replication/DR requirements? And today is Tuesday; what is so special about Thursday that means he can risk loving it up by rushing in?
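Once those questions are answered, turning an IOPS requirement into a drive count is back-of-envelope arithmetic. A sketch using the usual RAID write-penalty rules of thumb (the workload numbers here are invented for illustration):

```python
# Front-end IOPS plus read/write mix becomes back-end IOPS once you
# apply the RAID write penalty: RAID10 = 2, RAID5 = 4, RAID6 = 6
# (rules of thumb, not vendor numbers).

import math

def backend_iops(frontend_iops, read_fraction, write_penalty):
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    return reads + writes * write_penalty

def spindles_needed(frontend_iops, read_fraction, write_penalty, per_drive_iops):
    return math.ceil(backend_iops(frontend_iops, read_fraction, write_penalty)
                     / per_drive_iops)

# e.g. 2000 front-end IOPS at an 80/20 read/write mix on RAID5,
# with 10k SAS drives at ~140 IOPS each:
print(backend_iops(2000, 0.80, 4))           # ≈ 3200 back-end IOPS
print(spindles_needed(2000, 0.80, 4, 140))   # 23 drives
```

The same workload on RAID10 needs noticeably fewer spindles, which is why the write percentage matters so much for the quote.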
|
# ? Feb 28, 2012 21:51 |
|
Oddhair posted:My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (and was actually spinning up some logged performance stats earlier today) so I'm not sure where to start. I've been pushing to virtualize for a while, since our SQL server is a single socket, and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front? Tell him you need more time and that rushing it is going to cause way more problems than it solves. Even small companies spend more than 2 days on this type of stuff.
|
# ? Feb 28, 2012 23:11 |
|
Oddhair posted:My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (and was actually spinning up some logged performance stats earlier today) so I'm not sure where to start. I've been pushing to virtualize for a while, since our SQL server is a single socket, and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front? Do you have warm/fuzzy feelings about your LAN? For that amount you have no choice but to run over iSCSI. There's nothing wrong with iSCSI, but sometimes people assume there won't be problems even when their copper network is all saturated, 15-year-old 100Mb switches. Just one more thing to stress over, in case you haven't thought about it yet.
|
# ? Feb 28, 2012 23:39 |
|
Spamtron7000 posted:Do you have warm/fuzzy feelings about your LAN? For that amount you have no choice but to run over iSCSI. Even though there's nothing wrong with iSCSI sometimes people assume there won't be problems even though their copper network is all 100mb saturated 15 year old switches. Just one more thing to stress over in case you haven't thought about that yet. If you don't want to do iSCSI, you can connect four servers via SAS to a Dell MD3200.
|
# ? Feb 29, 2012 03:33 |
|
So from my understanding, Microsoft's DFS can be used to abstract the actual storage from the servers. If I want to replace a storage server, I can throw the new one in, set DFS to replicate to it, and then remove the old server when I want. So first, is that a correct assumption? And second, does anything like that exist for NFS? It looks like NFS v4.1 does this; is that right?
|
# ? Feb 29, 2012 03:58 |
|
FISHMANPET posted:So from my understanding, Microsoft's DFS can be used to abstract the actual storage to the servers. If I want to replace a storage server, I can throw it in and set DFS to replicate to it, and when I want remove the old server. EDIT: Sorry, thought I was in a different thread. Yeah, DFS does abstract the back end of how you have your servers and whatnot laid out from the users who are connecting to your SMB shares. You can add/change servers and do whatever without the clients having any idea what's going on. I'm also curious what's going to come out with SMB 2.2 in Windows 8. For NFS, are you talking pNFS? Maneki Neko fucked around with this message at 05:47 on Feb 29, 2012 |
# ? Feb 29, 2012 05:43 |
|
FISHMANPET posted:So from my understanding, Microsoft's DFS can be used to abstract the actual storage to the servers. If I want to replace a storage server, I can throw it in and set DFS to replicate to it, and when I want remove the old server. pNFS is really the thing that will drive NFS 4.x client adoption up. Nobody has really paid any attention to it for any other reason.
|
# ? Feb 29, 2012 06:01 |
|
Since we're on a push lately for redundant everything, I've been thinking about making everything redundant. For block storage we can do a SAN with dual controllers and maybe dual arrays (depending on budget). For servers we can virtualize and rely on HA and FT. For services we can have redundant servers (AD, DHCP, LDAP, Kerberos). For Windows file sharing I can use DFS. The only ones I'm not sure about are Unix file sharing and mail (and probably others). My goal is to get everything to the point where I can reboot just about anything I want without a service interruption, and more importantly, never have downtime where we copy data to a new location or tell users they have to point their client at a new location. And then adjust those requirements downward for budget.
|
# ? Feb 29, 2012 15:57 |
|
Anyone here have any experience with FCIP (Fibre Channel tunneling)? I'm going to be working on a project soon that involves merging two physically separate FC fabrics via FCIP for the purpose of volume copy/mirroring. We will be using IBM V7000 SANs connected via FC, with IBM SAN06B-R Multi-Protocol Routers to do the FCIP tunneling. Any feedback or anecdotal accounts would be awesome. Alternatively, I have extensive experience working with IBM hardware, including SANs, so I can answer questions anyone may have.
|
# ? Feb 29, 2012 16:29 |
|
cheese-cube posted:IBM V7000 SANs
|
# ? Feb 29, 2012 19:19 |
|
FISHMANPET posted:Since we're on a push lately for redundant everything, I've been thinking about making everything reduntant. .... For Windows file sharing I can use DFS. (and probably others). That's not entirely accurate. While DFS can give you higher availability than just having one file server, the service was primarily designed to solve the problem of low bandwidth between two sites that both need file access. DFS relies heavily on Active Directory replication, which by definition only provides loose convergence. It also has some very big caveats dealing with file sizes, replication times, and a few other things I can't remember right off the top of my head. If you want true high availability from your Windows file servers, the answer is the same as it has been for a decade: an HA cluster.
|
# ? Feb 29, 2012 19:31 |
|
Thanks for the replies guys. He knows hurrying doesn't help, and this was just a thought he had right at the end of the fiscal year. He's the principal/CEO/owner/president, so he's got plans and ideas I'm not always privy to in advance, plus he's the in-house SQL guru, so his time is already stretched thin. I've been pushing for this for a while now, really since Ike took our power out for two weeks and we had the whole org crammed into the server closet, running its little 1/8-ton AC unit and their computers off a 3KW generator. We had to move at least three physical servers to a local hosting facility we use, for their generator. This sparked the conversation about virtualization. I'm running 24-hour runs of perfmon on some servers today to get a feel for how they're taxed. It's currently a mix of 2003 and 2008 servers: a SQL server, a pair of DCs, an Exchange server, a terminal server for all of 3 remote users, and an in-house web server. ~150GB of email (not sure on growth rate), ~100GB of SQL that grows pretty slowly, and users' documents are ~50GB. To further complicate matters, we're an MS VAR and have a one-year demo contract for Office 365 that we're starting to use, so some of the roles might be migrated out to MS' servers. I sent the Dell images linked right on to my boss, and am now trying to get his ear for some discussion of where he wants to go with this.
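Once those 24-hour captures finish, something like this can boil a relogged perfmon CSV down to an average a vendor can size against. A rough sketch; the counter path and file layout here are assumptions, so match them to whatever you actually logged:

```python
# Sketch: average a "Disk Transfers/sec" column out of a perfmon CSV
# export (e.g. produced by `relog perf.blg -f CSV -o perf.csv`).
# The counter path below is hypothetical; use your own server/counter names.

import csv
import io

def average_counter(csv_text, counter_substring):
    reader = csv.DictReader(io.StringIO(csv_text))
    # Find the first column whose header contains the counter name.
    col = next(c for c in reader.fieldnames if counter_substring in c)
    # Perfmon writes blanks/spaces for missing samples; skip those.
    values = [float(row[col]) for row in reader if row[col] not in ("", " ")]
    return sum(values) / len(values)

sample = """\
"(PDH-CSV 4.0)","\\\\SQL01\\PhysicalDisk(_Total)\\Disk Transfers/sec"
"02/29/2012 10:00:00","120.5"
"02/29/2012 10:00:15","140.1"
"02/29/2012 10:00:30","99.4"
"""
print(average_counter(sample, "Disk Transfers/sec"))  # ≈ 120.0
```

Average plus the observed peak for each disk counter is usually enough for a first round of quotes.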
|
# ? Feb 29, 2012 19:31 |
|
evil_bunnY posted:What's the sticker like on those? Couple of $100k?
|
# ? Feb 29, 2012 19:32 |
|
cheese-cube posted:Anyone here have any experience with FCIP (Fibre Channel Tunneling)? I'm going to be working on a project soon that involves merging two, physically seperate FC fabrics via FCIP for the purpose of volume copy/mirroring. We will be using IBM V7000 SANs connected via FC with IBM SAN06B-R Multi-Protocol Routers to do the FCIP tunneling. I have set up FCIP on both Cisco and Brocade/IBM. Make sure your link between sites is solid (I had one that fritzed out every 30 seconds) and that the network group has given you enough bandwidth. There are two ways to configure the fabrics: either have your fabrics stretch across sites, or have separate fabrics and do inter-fabric VSAN-to-VSAN communication (on Cisco this requires the enterprise license). Each option has pluses and minuses, but the former is a bit easier. If you do that, then you can set one switch with a different domain ID and no config; once you have the two sites communicating properly, the fabrics will merge automatically. The Brocade stuff I did was a bit longer ago but is similar in concept.
evil_bunnY posted:What's the sticker like on those (v7000s)? Couple of $100k? The entry-level units list a lot lower than that. Of course it depends on how much storage you want, but they are very competitively priced midrange, especially considering the features. paperchaseguy fucked around with this message at 20:03 on Feb 29, 2012 |
# ? Feb 29, 2012 20:00 |
|
Misogynist posted:IBM pricing is all about your vendor relationship; smart shops don't pay retail with IBM. I'm not able to disclose the pricing that we got on our units, but considering the DS8000/SVC featureset (and SONAS if you go V7000 Unified) that you get in the units, IBM has a very competitive midrange offering on their hands. Seconding this, as I already explained to evil_bunnY in the virtualisation thread. IBM Special Bid prices can be extremely good, especially if they think they can take business away from well-established HP/EMC/Dell customers.
|
# ? Feb 29, 2012 20:02 |
|
Syano posted:Thats not entirely accurate. While DFS can give you higher availability than just having one file server, the service was primarily designed to solve problems with low bandwidth between two sites and needing file access between them. DFS relies heavily on Active Directory replication, which by definition only provides loose convergence. It also has some very big caveats dealing with file sizes, replication times, and a few other things I can't really remember right off the top of my head. If you want true high availability from your windows file servers the answer is the same as it has been for a decade: HA cluster. Thanks, I guess that's another thing to put on my list.
|
# ? Feb 29, 2012 20:18 |
|
FISHMANPET posted:Thanks, I guess that's another thing to put on my list. Here's the good news: if you already have a SAN and are virtualizing, then it's really trivial nowadays to put together a Windows cluster.
|
# ? Feb 29, 2012 21:00 |
|
Syano posted:Heres the good news: If you already have a SAN and are virtualizing then its really trivial nowadays to put together a windows cluster. Excellent. Our existing infrastructure is garbage, but we'll be rebuilding with a proper SAN and VMWare.
|
# ? Feb 29, 2012 21:26 |
|
FISHMANPET posted:VMWare.
|
# ? Mar 1, 2012 00:07 |
|
Syano posted:Heres the good news: If you already have a SAN and are virtualizing then its really trivial nowadays to put together a windows cluster. Or just buy a SAN that can handle both block and file protocols and run your CIFS services directly off of your highly available SAN.
|
# ? Mar 1, 2012 00:59 |
|
FISHMANPET posted:Thanks, I guess that's another thing to put on my list. You can use both; they have different objectives and solve different problems. An HA cluster can solve the need to patch a front-end server with minimal disruption, or provide a backup for that unit in case it dies. It won't protect you from a SAN on fire or a RAID failure. With DFS you can use the ability to abstract shares from their physical host, and DFS Replication to provide shared-nothing availability. DFSR is significantly less lovely than it was pre-03R2, when it had to copy whole files. But you don't even have to use it: just use the share abstraction and ship nightlies to your failover. So you can do something like make a DFS namespace that contains a share pointing to an HA cluster, which replicates to a B site that you can then (semi)transparently fail over to in the event your primary HA cluster is downed. Or you can use HA failover to keep your primary share online while you patch. Edit: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-iii.aspx Nukelear v.2 fucked around with this message at 16:59 on Mar 2, 2012 |
# ? Mar 2, 2012 16:53 |
|
Lately my array has been trying to sing:
|
# ? Mar 2, 2012 16:59 |
|
Wonder what it would sound like? Reminds me of a time in school. I had been working at a store that took trade-ins of old computer equipment, and someone brought us an IBM 3340 or something, a "Hard Disk Drive" about the size of a large dorm fridge. It had several 14" platters and stored some kind of "huge" amount of data (probably under 1GB). I paid them $50 for the nostalgia of the thing, figuring I'd probably be able to get that out of the scrap metal if I really wanted to. I brought it in to show my professor, and he said that he'd once written a program to spin the platters up and down to different speeds, creating music. We messed with the controllers enough to get very basic control, but the semester ended before we got it to play full musical compositions. I did end up scrapping the thing later, after it was in a basement that flooded, but I still have one of the head-arm control bars. It's about 3/4 inch wide, 14-18 inches long, and solid steel. I keep it in the car next to my driver's seat as an emergency self-defense tool.
|
# ? Mar 2, 2012 17:14 |
|
evil_bunnY posted:Lately my array has been trying to sing: You must figure out a way to dump that to a wav file and leave it playing softly in your office.
|
# ? Mar 2, 2012 18:21 |
|
Intraveinous posted:I still have one of the head arm control bars. It's about 3/4 inch wide and a 14-18 inches long and solid steel. I keep it in the car next to my driver's seat as an emergency self defense tool.
|
# ? Mar 2, 2012 19:59 |
|
http://www.youtube.com/watch?v=yHJOz_y9rZE
|
# ? Mar 2, 2012 22:37 |
|
raise ya! e: thanks, phone borked the link. evil_bunnY fucked around with this message at 14:47 on Mar 4, 2012 |
# ? Mar 2, 2012 22:57 |
|
evil_bunnY posted:raise ya! Your link was broken so I fixed it.
|
# ? Mar 4, 2012 00:23 |