adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

EoRaptor posted:

There is always some amount of overhead for them.

If your backup consists solely of snapshots, you are going to run into trouble. They are a great tool, but just one of the many needed to reliably backup and protect data.
This is simply not true. A copy-on-write snapshot, like the kind Data ONTAP (NetApp) and ZFS (Sun/Oracle) use, has zero performance penalty associated with it. Additionally, snapshots coupled with storage-system replication make for a great backup plan. We keep our snapshots around for as long as we kept tapes, and we can restore in minutes.

Now I'll admit, there are some extra pieces we use to put our databases into hot backup mode, and we create a VMware snapshot before we snap the volume that hosts the LUN, but that isn't changing the snapshot itself, just making sure the data is consistent before we snapshot.
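For illustration, here is a minimal Python sketch of that ordering. Every helper in it is a hypothetical stub, not our actual tooling or any real NetApp/VMware API, so swap in your own database, vCenter, and filer calls:

def put_db_in_hot_backup(db):     print(f"begin hot backup on {db}")        # hypothetical stub
def end_hot_backup(db):           print(f"end hot backup on {db}")          # hypothetical stub
def take_vm_snapshot(vm):         print(f"vmware snapshot of {vm}"); return "snap-1"
def delete_vm_snapshot(vm, s):    print(f"delete snapshot {s} on {vm}")
def snap_volume(volume):          print(f"storage snapshot of {volume}")

def consistent_snapshot(db, vm, volume):
    """Quiesce from the inside out, snapshot, then release in reverse order."""
    put_db_in_hot_backup(db)                 # 1. freeze the database
    try:
        snap = take_vm_snapshot(vm)          # 2. crash-consistent guest state
        try:
            snap_volume(volume)              # 3. snap the volume hosting the LUN
        finally:
            delete_vm_snapshot(vm, snap)     # 4. the VM snapshot is no longer needed
    finally:
        end_hot_backup(db)                   # 5. always take the DB out of hot backup mode

consistent_snapshot("proddb", "prod-vm01", "/vol/vm_datastore")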

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

conntrack posted:

If you keep three months of one snap each day, writing one block will result in 90 writes per block? I'm sure that database will be fast for updates.

Edit: I guess that depends on how smart the software is. Classical snaps would turn to poo poo.
It writes one time. The original blocks are locked, and when they are written to, a new location is written to instead. The originals are then only referenced by the oldest snapshot. Subsequent snapshots lock their blocks, and so forth. New data is written only once, and there is no performance penalty.
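If it helps, here is a toy Python model of that behavior (nothing to do with actual ONTAP or ZFS internals): no matter how many snapshots exist, overwriting a block is a single write to a fresh location, and the old block just stays referenced by the snapshots' block maps.

class CowVolume:
    def __init__(self):
        self.blocks = {}        # physical location -> data
        self.active = {}        # logical block number -> physical location
        self.snapshots = []     # each snapshot is a frozen copy of the block map
        self._next_loc = 0

    def snapshot(self):
        self.snapshots.append(dict(self.active))    # metadata only, no data copied

    def write(self, lbn, data):
        loc = self._next_loc                        # always a brand-new location
        self._next_loc += 1
        self.blocks[loc] = data                     # exactly one physical write
        self.active[lbn] = loc                      # old location still held by any snapshot

vol = CowVolume()
vol.write(0, "v1")
for _ in range(90):              # 90 daily snapshots
    vol.snapshot()
vol.write(0, "v2")               # still one physical write, not 90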

edit: I didn't see that we had rolled to a new page.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

shablamoid posted:

Are there any recommended practices for performing a defrag on large-volume systems? I have heard that backing up the data and then restoring it is a method of doing it, but this seems cumbersome.
There is likely zero benefit to defragging a SAN.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

BelDin posted:

We just had our first major SAN failure this weekend, and the IT guys are paying for it. ...
Holy poo poo. Your entire post is a tale of failure. Did your guys get any training on this SAN at all, or did the vendor just drop it off and say "here you go"? For their sake, I hope they are hourly. Personally, I have heard pretty much all bad things about HP SANs. We didn't even look at them when we were purchasing.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

I can't even wait to see the havoc that Exchange 2010 wreaks on the most retarded of administrators.
I hope you aren't implying that lazy admins will use replication as an excuse not to back up their data.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Syano posted:

It made me stop and completely redesign my hosts to have 3 NICs minimum, all connected to different layer 2 devices.
Three is probably not enough, especially if you are using iSCSI. It will be enough 90% of the time, but when DRS kicks off at the same time as some heavy disk I/O while someone is imaging a new box using an ISO from their local disk, you'll have users asking why their email is so slow and then you'll make a sad face. We use 6 NICs per host: 4 for iSCSI and management and 2 for guest networking. The portgroups are set up in such a way that a single switch failure will not cause any problems other than possibly performance.
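To make that layout concrete, here is a rough Python sketch with made-up NIC and switch names (not our exact config), plus a check that no portgroup depends on a single physical switch:

uplinks = {                       # vmnic -> physical switch it is patched into
    "vmnic0": "switchA", "vmnic1": "switchB",    # iSCSI / management
    "vmnic2": "switchA", "vmnic3": "switchB",    # iSCSI / management
    "vmnic4": "switchA", "vmnic5": "switchB",    # guest networking
}

portgroups = {                    # portgroup -> uplinks it is allowed to use
    "iSCSI-1":    ["vmnic0", "vmnic1"],
    "iSCSI-2":    ["vmnic2", "vmnic3"],
    "Management": ["vmnic0", "vmnic3"],
    "VM Network": ["vmnic4", "vmnic5"],
}

for pg, nics in portgroups.items():
    switches = {uplinks[n] for n in nics}
    status = "ok" if len(switches) > 1 else "dies with a single switch"
    print(f"{pg}: uplinks span {sorted(switches)} -> {status}")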

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

Reading all these posts about people aggregating 6+ ethernet ports together, I was curious if anyone had thought about using 10GbE instead.
It's not just about speed; it's also about redundancy and network segregation. I am not going to run iSCSI, management, and guest networking on one link, even if it is 10GbE, because I don't want a vMotion to suck up all of my iSCSI bandwidth.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

Putting aside the concerns about link redundancy, it certainly seems in the virtualization spirit to consolidate links and let the bandwidth be partitioned out as needed, letting software handle the boundaries dynamically, rather than installing dramatically underutilized hardware to enforce resource boundaries.
I certainly agree with the idea, and as an individual system administrator I am all for combining the links, but it's hard to go against such an easy-to-implement best practice. However, I am very glad you posted the links, because we are rebuilding our entire VMware infrastructure over the next few weeks, so we'll certainly be able to consider doing so.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

conntrack posted:

To get rid of all the files and get a clean share:
snapdrive snap restore -snapname start

I started this when i left work and it was still running 14 hours later.

Sounds a tad bit slow?
Is there something preventing the volume from unmounting? SnapDrive will unmount the volume before rolling it back, I believe, and if there is a file open the unmount may fail.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

three posted:

How do you think I should handle this with the consultant?
What was he there to consult on? Just the speed issue? If so, I would handle it by not paying the bill until he did his job. In testing we have been able to saturate a single gigabit link (around 95 MB/s, close enough) with iSCSI, so he's full of poo poo.

Are you able to add a second link, just to see if that helps?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

MrMoo posted:

slapping a single SSD into the SAN
Not sure about your environment, but we don't "slap" anything into our SAN, nor do we have a free drive bay to do so even if we wanted to.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

alanthecat posted:

Is it possible to share the 1.5tb drive (all of it) over iSCSI from the r2 machine?
You could install the Hyper-V role, install a Nexenta VM, and share the disk out via that.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

skipdogg posted:

50K? We spent a quarter of a million dollars on a full NetApp 3020 setup not 2.5 or 3 years ago, and we've now pushed it to capacity. We're looking at a complete 'forklift upgrade' of our SAN and I don't even want to know what it's going to cost. Probably at least 500K if we go NetApp again.
You can get a 3140 HA pair with ~10TB (usable) of FC and ~10TB (usable) of SATA and every licensed feature for well under $200k if you play your cards right. Figure another $30k per 5TB of FC or 10TB of SATA.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

H110Hawk posted:

Sometimes it is far cheaper to buy two of everything and design your application to tolerate a "tripped over the cord" level event.
That's basically the design philosophy of a SAN to begin with: everything is redundant.

But I agree with the sentiment. Buy a 5-year-old SAN, then buy an identical unit for spare parts.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

oblomov posted:

Wow, I would love to see where you are getting that pricing.
They were quoting us around $300k for the initial 3140 HA pair with 10TB FC + 10TB SATA until we told them we were going to go with Sun. Then they price matched. As far as shelves go, the pricing I am quoting is pretty standard.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

skipdogg posted:

What are some other options I should be considering?
Not sure how much you are looking to spend, but wouldn't something like this fit the bill?

http://www.cdw.com/shop/products/default.aspx?EDC=2127179

Otherwise, you can also do an Oracle (Sun) 7110 for under $10k.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Seriously, if you don't need HA there is no reason to look further than Oracle (Sun) storage. You can get a 4.2TB single-head SAN for probably $12k after discounts, or 2TB for well under $10k, or you can go with a StorageTek array, where 10TB raw would be about $12k. For the 7110 SAN devices, the management is stupid simple, and they can replicate to each other very, very easily.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

skipdogg posted:

Any Compellent users? Opinions?
The sales engineer I spoke to about a year ago could not possibly have been more condescending.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I would use guest-side iSCSI and allocate 30% more space than you are currently using. Then I would resize the volume (hopefully your storage supports this) as needed.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Check your alignment

priv set diag; stats show lun; priv set admin

Look for any writes that land on .1 through .7; they should all be on .0 or partial if you are aligned.
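If you want to eyeball that programmatically, here is a rough Python sketch. I am assuming the counters come out looking like "write_align_histo.N" percentages per LUN, so check the actual output of your ONTAP version before trusting it:

import re

sample = """\
/vol/vmware/lun0:write_align_histo.0:97%
/vol/vmware/lun0:write_align_histo.3:2%
/vol/vmware/lun0:write_partial_blocks:1%
"""  # assumed output format; paste the real 'stats show lun' output here

for line in sample.splitlines():
    m = re.match(r"(?P<lun>.+):write_align_histo\.(?P<bucket>\d):(?P<pct>\d+)%", line)
    if m and m.group("bucket") != "0" and int(m.group("pct")) > 0:
        print(f"{m.group('lun')}: {m.group('pct')}% of writes land on offset "
              f".{m.group('bucket')} -- likely misaligned")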

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Crowley posted:

I'm on the verge of signing the order for a HP x9720 in a 164TB configuration with three accelerators for online media archiving and small-time online viewing.

Anyone have an informed opinion on those? I haven't been able to find any independent comments since the system is still pretty new, and I'd like to know if I'm buying a piece of crap.
It would be a cold day in hell before I pulled the trigger on any HP storage.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nomex posted:

As an HP vendor I'd be interested to know your reasoning behind that.
I have witnessed each of these three events: a single failed disk taking down an array, a firmware upgrade failing an array, and a failed head whose partner didn't take over properly.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Jadus posted:

Were these all on the same unit? Not saying that's an excuse; I'd blacklist HP storage altogether if this was just one unit in my environment. More curiosity.
The first two were on one unit; the third was on a separate unit. I was never the storage admin at this place, but I witnessed it all go down.

We also had a terrible performance issue on the second unit, but that was probably more due to an admin who didn't know wtf he was doing.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

da sponge posted:

..and they can only present that storage to the app server over NFS and not iSCSI or FC?
Even if that's the case, tell them to install OpenIndiana in a VM and share the NFS-backed VMDK out via iSCSI.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

bob arctor posted:

Would I be better off using the Windows server's iSCSI initiator to connect directly to the NAS, or should I be looking at having the storage allocated to a VMware datastore and then having VMware present a drive to the Windows Server?
Ultimately it is probably a matter of preference, unless the storage vendor provides tools for snapshot management like NetApp does. One thing to consider is that with the default block size of 1MB on a VMFS volume, the maximum VMDK size is 256GB. This means that if you need more than that, you will need to either use iSCSI LUNs attached directly to your guest, or use multiple VMDKs with Windows combining the disks into a single volume. On our WSUS servers we use iSCSI LUNs connected to the guests.
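For reference, a quick Python sketch of the VMFS-3 limits as I remember them (double-check against the VMware docs for your version); the max file size scales with the block size you picked at format time:

limits_gb = {1: 256, 2: 512, 4: 1024, 8: 2048}    # block size (MB) -> max VMDK (GB), VMFS-3

needed_gb = 400                                    # example guest disk requirement
for block_mb, max_gb in sorted(limits_gb.items()):
    verdict = "fits" if needed_gb <= max_gb else "too big: use in-guest iSCSI or span VMDKs"
    print(f"{block_mb}MB blocks: max VMDK {max_gb}GB -> {verdict}")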

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

How big do WSUS repositories get these days anyway?
Ours is about 45GB for English only and about half of the available products.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

ghostinmyshell posted:

Has anyone used OpenFiler before in an enterprise environment? Our resident Linux zealot thinks we should look at it since it's FREE, which now has the higher-ups asking us to take a look.

I want to smack him.

edit: \/\/\/\/\/ Great thanks!
If you are looking for something that is free, look at Nexenta.

Misogynist posted:

Doesn't work with MSCS = useless
I don't think that is quite a deal killer.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

1000101 posted:

Are you sure this is the problem? It doesn't make a lot of sense in my head when I think about it. inode limitations would be a filesystem issue not a block device issue.
Maybe he has a NetApp with a million LUNs in a single volume.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

Respectfully, I think you're oversimplifying the problem. As a vendor, idolmind's company is going to have a tough time trying to tell his clients to throw out their NFS infrastructure because they don't support it. If his company's competitors do support NFS, it might result in a lot of lost sales. As I've stated in a previous post, it might be worth looking at tuning NFS mount options based upon the performance profile of the application. At the moment I can think of at least 2 large corporations that would make it difficult for idolmind's company to make a sale if someone were to state that the application only supported block storage protocols.
If they will only provision storage as NFS from whatever device they have, the solution is simple: use an OpenSolaris/OpenIndiana/OpenFiler/plain-old-Linux VM stored on NFS that presents that storage as iSCSI. Problem solved, cost: $0, and the storage admin doesn't know any better.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

I can appreciate that someone as astute as yourself can develop a multitude of workarounds to accommodate a requirement for block storage. However, from a manager's point of view, particularly one that is evaluating your solution as part of an RFP, would you consider this an acceptable solution in comparison with other applications that may support NFS? As an account manager working for idolmind's company, would you even entertain it as a proposal?
Have sales sell it as part of the stack.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

H110Hawk posted:

This seems like such a terrible hack. It doesn't cost $0, and when has playing tricks on storage/network/system admins ever wound up being a net benefit when they inevitably find out?
I'm not sure how it is any more of a 'hack' than trying to run a database off of NFS.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

I know of at least two large telecoms/outsourcers that standardize their oracle databases on NFS. One of these telecoms runs the billing system for a national utility company using 10 GbE with NFS.
Oracle specifically supports it. I cannot think of a single other relational database that does, but if you know of one, please enlighten me.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

szlevi posted:

Errr... it's called HA, you might have heard it somewhere... ;)
HA and DR are completely different concepts.

If you want an easy, drop-in, push-one-button DR solution in a VMware environment with all VMDKs, you would probably use SRM. It's like $3k a proc.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

three posted:

It's Per-VM now, I believe.
After I posted that I remembered that fact, but I chose not to edit, because gently caress VMware for that one.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

InferiorWang posted:

Also, we're probably going with the Essentials Plus bundle from VMware, which might not have all the bells and whistles including SRM, but I have to dig into it to be sure.
It definitely doesn't come with SRM; that's an entirely different product. If you are using LUNs connected up to VMs, you are going to need one of the following to achieve your DR plan (a rough sketch of the scripting option follows the list):

A) Scripting
B) Built-in SAN utilities (such as vFilers on NetApp)
C) Manual intervention
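For option A, here is a very rough Python sketch of what such a script might orchestrate. Every function in it is a hypothetical placeholder, not a real NetApp or VMware API call, so treat it as a sketch of the ordering only:

def break_mirror(volume):           print(f"quiesce + break replication for {volume}")      # hypothetical
def map_lun(lun, igroup):           print(f"map {lun} to {igroup}")                          # hypothetical
def rescan_and_power_on(vm):        print(f"rescan datastores, register and power on {vm}")  # hypothetical

def dr_failover(volumes, luns, vms, igroup="dr-esx-hosts"):
    """Make the DR copies writable, expose them, then bring the guests up."""
    for vol in volumes:
        break_mirror(vol)                    # 1. promote the replicated volumes
    for lun in luns:
        map_lun(lun, igroup)                 # 2. present the LUNs at the DR site
    for vm in vms:
        rescan_and_power_on(vm)              # 3. boot guests in dependency order

dr_failover(["/vol/vm_datastore"], ["/vol/vm_datastore/lun0"], ["dc01", "sql01", "exch01"])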

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

szlevi posted:


Did you even read his post? I doubt it - he has no money and he wants to make the DR site live, not recover from it...

...you know, HA vs DR.
He wants to know how to initiate a DR failover, making his DR site live in the event of a disaster.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Mausi posted:

But 4 cores / 32GB is fine for a consolidation footprint; depending on the workloads you're going after, it's usually between 4GB and 16GB per core.
We're at 12GB per physical core (6GB per logical core) currently, and I'll bet we get to double that before we need any more CPU, unless we deploy some kind of virtual desktop infrastructure or get serious about virtualizing Citrix.

As far as iSCSI goes, we are running about 100 VMs, from SQL to web servers to Exchange, for a 500-person company on iSCSI, and we push approximately 2 gigabits per second in total over iSCSI. That includes VMFS and iSCSI connections inside the guests. Any company that spends even $1 more for FC is dumb; spend the money on 10-gig Ethernet and call it a day.
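Back-of-the-envelope, using a hypothetical 2-socket quad-core host with hyperthreading (not our actual hardware) just to show where those per-core and per-VM numbers come from:

physical_cores = 2 * 4                      # hypothetical 2-socket quad-core host
logical_cores = physical_cores * 2          # hyperthreading
ram_gb = 96

print(ram_gb / physical_cores)              # 12.0 GB per physical core
print(ram_gb / logical_cores)               # 6.0 GB per logical core

vms, total_iscsi_gbps = 100, 2.0
print(total_iscsi_gbps * 1000 / vms)        # ~20 Mbit/s of storage traffic per VM, on average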

adorai fucked around with this message at 01:53 on Jan 19, 2011

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I would play a video of a car crash and then play it back in reverse frame by frame

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

conntrack posted:

I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.
Personally I think Sun had it right: throw a shitload of commodity spindles at the problem and put a shitload of cache in front of them. 90+% of your reads come from cache, and 100% of writes are cached and written out sequentially. That saves you the trouble of tiering, period, and you never have to worry about fast disk, just cache. IIRC, an 18GB SSD for ZIL from Sun was ~$5k, and a shelf of 24 1TB SATA drives was around $15k. Too bad Oracle is already killing that poo poo.
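Rough arithmetic behind the "90+% from cache" claim, with made-up but plausible latency numbers (not Sun's figures):

hit_rate = 0.90
cache_latency_ms = 0.1      # DRAM/SSD read cache
sata_latency_ms = 10.0      # 7.2k SATA spindle
fc_latency_ms = 5.0         # 15k FC spindle, for comparison

hybrid = hit_rate * cache_latency_ms + (1 - hit_rate) * sata_latency_ms
print(f"hybrid pool average read: {hybrid:.2f} ms")        # ~1.09 ms
print(f"all 15k FC average read:  {fc_latency_ms:.2f} ms")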

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Mausi posted:

These days I find the CPU needed to process the deduplication is the bottleneck, rather than raw storage throughput.
Our bottleneck is CPU for gzip compression on our replication traffic.
