|
EoRaptor posted:There is always some overhead for them. Now I'll admit, there are some extra pieces we use to put our databases into hot backup mode, and we create a VMware snapshot before we snap the volume that hosts the LUN, but that's not changing the snapshot itself, just making sure the data is consistent before we snapshot.
|
# ¿ Sep 2, 2010 00:04 |
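The ordering EoRaptor describes (quiesce the database, snapshot the VM, then snapshot the volume) can be sketched as a script. This is a minimal illustration only: the database statements are standard Oracle hot-backup syntax, but the VM name, volume name, and snapshot commands are hypothetical placeholders, and the steps are printed rather than executed.

```python
# Sketch of the consistency ordering described above. vm01/vol1 and the
# snapshot wrapper commands are hypothetical; adapt to your environment.
steps = [
    ("quiesce db",     "ALTER DATABASE BEGIN BACKUP"),    # hot backup mode
    ("vm snapshot",    "snapshot.create vm01 pre-snap"),  # VMware-level snapshot
    ("array snapshot", "snap create vol1 nightly.0"),     # volume hosting the LUN
    ("vm cleanup",     "snapshot.remove vm01 pre-snap"),
    ("resume db",      "ALTER DATABASE END BACKUP"),
]
for phase, cmd in steps:
    print(f"{phase}: {cmd}")
```

The point of the ordering is that each layer freezes a consistent image for the layer below it; the array snapshot itself is unchanged, it just captures data that is already quiescent.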
|
conntrack posted:If you keep three months of one snap each day, will writing one block result in 90 writes per block? I'm sure that database will be fast for updates. edit: I didn't see that we had rolled to a new page.
|
# ¿ Sep 2, 2010 22:37 |
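conntrack's worry is that 90 retained snapshots mean 90 copies per overwrite, but in a typical copy-on-write scheme an overwrite preserves the old block once, for the newest snapshot; older snapshots share that preserved copy. A toy model of one block through 90 daily snapshot/overwrite cycles (an illustration of copy-on-write generally, not of any particular array; redirect-on-write designs like WAFL avoid even the copy-out):

```python
# Worst case per overwrite under copy-on-write: 1 copy-out + 1 new write,
# regardless of how many older snapshots still pin earlier versions.
physical_writes = 0
snapshots = []          # each snapshot pins whatever version was live
current = 0             # version number of the live block
for day in range(90):
    snapshots.append(current)   # daily snapshot pins the current version
    physical_writes += 2        # copy old version out once, write new data
    current += 1
print(physical_writes / 90)     # writes per overwrite, not per snapshot
```

So the amplification is roughly 2x per overwrite, not 90x; the cost of deep snapshot chains shows up in space and metadata, not in per-write fan-out.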
|
shablamoid posted:Are there any recommended practices for performing a defrag on large-volume systems? I have heard that backing up the data, then restoring it, is one method of doing it, but this seems cumbersome.
|
# ¿ Sep 3, 2010 20:27 |
|
BelDin posted:We just had our first major SAN failure this weekend, and the IT guys are paying for it. ...
|
# ¿ Sep 20, 2010 13:38 |
|
Misogynist posted:I can't even wait to see the havoc that Exchange 2010 wreaks on the most retarded of administrators.
|
# ¿ Sep 21, 2010 13:21 |
|
Syano posted:It made me stop and completely redesign my hosts to have 3 NICs minimum, all connected to different layer 2 devices.
|
# ¿ Sep 21, 2010 14:15 |
|
Cultural Imperial posted:Reading all these posts about people aggregating 6+ ethernet ports together, I was curious if anyone had thought about using 10GbE instead.
|
# ¿ Sep 22, 2010 22:20 |
|
Misogynist posted:Putting aside the concerns about link redundancy, it certainly seems in the virtualization spirit to consolidate links and let the bandwidth be partitioned out as needed, letting software handle the boundaries dynamically, rather than installing dramatically underutilized hardware to enforce resource boundaries.
|
# ¿ Sep 22, 2010 23:47 |
|
conntrack posted:To get rid of all the files and get a clean share:
|
# ¿ Sep 24, 2010 13:17 |
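conntrack's actual command didn't survive the quote, so here is one generic way to empty a share while keeping the share itself, a sketch of the idea rather than his method (the demo runs against a throwaway temp directory, not a real share):

```python
import os
import shutil
import tempfile

def empty_share(path):
    """Delete everything inside `path` but keep `path` itself (and its ACLs/share config)."""
    for name in os.listdir(path):
        p = os.path.join(path, name)
        shutil.rmtree(p) if os.path.isdir(p) else os.remove(p)

# Demo on a throwaway directory standing in for the share root.
d = tempfile.mkdtemp()
os.makedirs(os.path.join(d, "a", "b"))
open(os.path.join(d, "f.txt"), "w").close()
empty_share(d)
print(os.listdir(d))  # -> []
```

Deleting the contents rather than the share root means share definitions, permissions, and mount references stay intact.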
|
three posted:How do you think I should handle this with the consultant? Are you able to add a second link, just to see if that helps?
|
# ¿ Oct 12, 2010 01:45 |
|
MrMoo posted:slapping a single SSD into the SAN
|
# ¿ Oct 18, 2010 03:28 |
|
alanthecat posted:Is it possible to share the 1.5TB drive (all of it) over iSCSI from the R2 machine?
|
# ¿ Oct 26, 2010 23:05 |
|
skipdogg posted:50K? We spent a quarter of a million dollars on a full NetApp 3020 setup not but 2.5 or 3 years ago, and we've already pushed it to capacity. We're looking at a complete 'forklift upgrade' of our SAN and I don't even want to know what it's going to cost. Probably at least 500K if we go NetApp again.
|
# ¿ Oct 27, 2010 23:21 |
|
H110Hawk posted:Sometimes it is far cheaper to buy two of everything and design your application to tolerate a "tripped over the cord" level event. But I agree with the sentiment: buy a 5-year-old SAN, then buy an identical unit for spare parts.
|
# ¿ Oct 28, 2010 02:19 |
|
oblomov posted:Wow, I would love to see where you are getting that pricing.
|
# ¿ Oct 28, 2010 23:15 |
|
skipdogg posted:What are some other options I should be considering? http://www.cdw.com/shop/products/default.aspx?EDC=2127179 Otherwise, you can also do an Oracle (Sun) 7110 for under $10k.
|
# ¿ Oct 29, 2010 02:41 |
|
Seriously, if you don't need HA there is no reason to look further than Oracle (Sun) storage. You can get a 4.2TB single head SAN for probably 12k after discounts, 2TB for well under 10k, or you can go with a StorageTek array, 10TB raw would be about 12k. For the 7110 SAN devices, the management is stupid simple, and they can replicate to each other very very easily.
|
# ¿ Oct 29, 2010 03:14 |
|
skipdogg posted:Any Compellent users? Opinions?
|
# ¿ Nov 15, 2010 23:50 |
|
I would do guest-side iSCSI and allocate 30% more space than you are currently using. Then I would resize the volume (hopefully your storage supports this) as needed.
|
# ¿ Nov 16, 2010 22:03 |
|
Check your alignment: priv set diag; stats show lun; priv set admin. Look for any writes landing on .1 through .7; they should all be on .0 (or partial) if you are aligned.
|
# ¿ Nov 18, 2010 21:47 |
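The misalignment those LUN counters reveal usually comes from a guest partition that doesn't start on a 4 KiB boundary (the classic culprit being the old DOS default of sector 63). Given a partition's starting sector, the check is simple arithmetic; a small sketch, assuming 512-byte sectors and a 4 KiB filer block size:

```python
SECTOR_BYTES = 512
BLOCK_BYTES = 4096   # 4 KiB filesystem block on the filer

def aligned(start_sector: int) -> bool:
    """True if the partition's byte offset falls on a 4 KiB boundary."""
    return (start_sector * SECTOR_BYTES) % BLOCK_BYTES == 0

print(aligned(63))    # legacy DOS partition default -> False (misaligned)
print(aligned(2048))  # modern 1 MiB offset -> True (aligned)
```

An aligned partition keeps every guest 4 KiB write inside one filer block; a misaligned one straddles two, which is exactly what the .1 through .7 buckets in the stats are counting.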
|
Crowley posted:I'm on the verge of signing the order for a HP x9720 in a 164TB configuration with three accelerators for online media archiving and small-time online viewing.
|
# ¿ Nov 19, 2010 14:20 |
|
Nomex posted:As an HP vendor I'd be interested to know your reasoning behind that.
|
# ¿ Nov 20, 2010 00:25 |
|
Jadus posted:Were these all on the same unit? Not saying that's an excuse; I'd blacklist HP storage altogether if this was just one unit in my environment. More curiosity. We also had a terrible performance issue on the second unit, but that was probably more due to an admin that didn't know wtf.
|
# ¿ Nov 20, 2010 01:05 |
|
da sponge posted:..and they can only present that storage to the app server over NFS and not iSCSI or FC?
|
# ¿ Nov 29, 2010 23:56 |
|
bob arctor posted:Would I be better off using the Windows server's iSCSI initiator to connect directly to the NAS, or should I be looking at having the storage allocated to a VMware datastore and then having VMware present a drive to the Windows server?
|
# ¿ Dec 9, 2010 00:17 |
|
Cultural Imperial posted:How big do WSUS repositories get these days anyway?
|
# ¿ Dec 9, 2010 03:07 |
|
ghostinmyshell posted:Has anyone used OpenFiler before in an enterprise environment? Our resident Linux zealot thinks we should look at it since it's FREE, which now has the higher-ups asking us to take a look. Misogynist posted:Doesn't work with MSCS = useless
|
# ¿ Dec 11, 2010 00:04 |
|
1000101 posted:Are you sure this is the problem? It doesn't make a lot of sense in my head when I think about it. inode limitations would be a filesystem issue not a block device issue.
|
# ¿ Dec 14, 2010 00:17 |
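1000101's point is that inode exhaustion is a property of the filesystem, not of the block device underneath it: a volume can run out of inodes while plenty of blocks remain free, and vice versa. Both counters are visible from userland (the same data `df` vs `df -i` report); a quick sketch using `os.statvfs`:

```python
import os

st = os.statvfs("/")
blocks_used = 100 * (1 - st.f_bfree / st.f_blocks)
# Some filesystems (e.g. btrfs) report f_files == 0, meaning "no fixed inode table".
inodes_used = 100 * (1 - st.f_ffree / st.f_files) if st.f_files else 0.0
print(f"blocks used: {blocks_used:.1f}%  inodes used: {inodes_used:.1f}%")
```

If "inodes used" hits 100% while "blocks used" is low, new file creation fails even though the LUN underneath has free space, which is why the fix lives at the filesystem layer, not the SAN.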
|
Cultural Imperial posted:Respectfully, I think you're oversimplifying the problem. As a vendor, idolmind's company is going to have a tough time trying to tell his clients to throw out their NFS infrastructure because they don't support it. If his company's competitors do support NFS, it might result in a lot of lost sales. As I've stated in a previous post, it might be worth looking at tuning NFS mount options based upon the performance profile of the application. At the moment I can think of at least 2 large corporations that would make it difficult for idolmind's company to make a sale if someone were to state that the application only supported block storage protocols.
|
# ¿ Dec 21, 2010 00:35 |
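The "tuning NFS mount options to the application's performance profile" idea above can be made concrete. The option values below follow commonly published Oracle-over-NFS guidance (hard mounts, large transfer sizes, attribute caching disabled for datafiles), but the filer name, export path, and mountpoint are hypothetical, and any real deployment should be checked against the vendor's support matrix:

```python
# Build the mount invocation from a profile of options; flags with value
# None are bare options. Names filer01/oradata/u02 are placeholders.
options = {
    "vers": "3", "proto": "tcp",
    "hard": None, "nointr": None,        # never silently drop in-flight I/O
    "rsize": "32768", "wsize": "32768",  # large transfers suit DB file I/O
    "timeo": "600",
    "actimeo": "0",                      # no attribute caching on datafiles
}
opt_str = ",".join(k if v is None else f"{k}={v}" for k, v in options.items())
print(f"mount -t nfs -o {opt_str} filer01:/vol/oradata /u02/oradata")
```

A different profile (say, a build farm's scratch space) would flip several of these, which is exactly why a blanket "NFS is slow" claim rarely survives contact with proper mount tuning.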
|
Cultural Imperial posted:I can appreciate that someone as astute as yourself can develop a multitude of workarounds to accommodate a requirement for block storage. However, from a manager's point of view, particularly one evaluating your solution as part of an RFP, would you consider this an acceptable solution in comparison with other applications that may support NFS? As an account manager working for idolmind's company, would you even entertain it as a proposal?
|
# ¿ Dec 21, 2010 01:03 |
|
H110Hawk posted:This seems like such a terrible hack. It doesn't cost $0, and when has playing tricks on storage/network/system admins ever wound up being a net benefit when they inevitably find out?
|
# ¿ Dec 21, 2010 03:02 |
|
Cultural Imperial posted:I know of at least two large telecoms/outsourcers that standardize their oracle databases on NFS. One of these telecoms runs the billing system for a national utility company using 10 GbE with NFS.
|
# ¿ Dec 21, 2010 03:46 |
|
szlevi posted:Errr... it's called HA, you might have heard it somewhere... If you want an easy, drop-in, push one button for DR solution in a VMware environment with all VMDKs you would probably use SRM. It's like $3k a proc.
|
# ¿ Jan 5, 2011 23:41 |
|
three posted:It's Per-VM now, I believe.
|
# ¿ Jan 6, 2011 02:17 |
|
InferiorWang posted:Also, we're probably going with the Essentials Plus bundle from VMware, which might not have all the bells and whistles, including SRM, but I have to dig into it to be sure.
A) Scripting
B) Built-in SAN utilities (such as vFilers on NetApp)
C) Manual intervention
|
# ¿ Jan 6, 2011 04:03 |
|
szlevi posted:
|
# ¿ Jan 7, 2011 18:21 |
|
Mausi posted:But 4 cores / 32 GB is fine for a consolidation footprint; depending on the workloads you're going after, it's usually between 4 GB and 16 GB per core. As far as iSCSI goes, we are running about 100 VMs, from SQL to webservers to Exchange, for a 500-person company on iSCSI, and we push approximately 2 gigabits per second in total over iSCSI. That includes VMFS and iSCSI connections inside the guests. Any company that spends even $1 more for FC is dumb; spend the money on 10 gig ethernet and call it a day. adorai fucked around with this message at 01:53 on Jan 19, 2011 |
# ¿ Jan 19, 2011 01:49 |
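A quick back-of-envelope check of the numbers quoted above: 32 GB over 4 cores is 8 GB per core, inside the stated 4-16 GB/core band, and 2 Gb/s of storage traffic spread over roughly 100 VMs averages out to about 20 Mb/s per VM, well within what a couple of aggregated gigabit links can carry.

```python
# Sanity-check the consolidation rule of thumb and the iSCSI throughput claim.
cores, ram_gb = 4, 32
gb_per_core = ram_gb / cores            # 8.0 GB/core
assert 4 <= gb_per_core <= 16           # inside the quoted band

vms, total_gbit = 100, 2.0
mbit_per_vm = total_gbit * 1000 / vms   # 20.0 Mb/s average per VM
print(gb_per_core, mbit_per_vm)
```

Averages hide bursts, of course, but at these levels the argument that FC bandwidth is wasted on a general-purpose consolidation cluster holds up arithmetically.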
|
I would play a video of a car crash and then play it back in reverse frame by frame
|
# ¿ Feb 16, 2011 21:07 |
|
conntrack posted:I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.
|
# ¿ Mar 4, 2011 02:41 |
|
Mausi posted:These days I find the CPU doing the deduplication to be the bottleneck, rather than raw storage throughput.
|
# ¿ Jun 2, 2011 22:40 |