|
Internet Explorer posted:If I ever meet anyone responsible for the VNX line I am going to punch them in the face. Our two VNX 5300 were the buggiest pieces of poo poo. What did you do to those poor little things? Only time I've seen issues with them is during upgrades to 5.32 flare code.
|
# ? Apr 26, 2013 15:57 |
|
Our VNX5500s haven't been terrible, but our old NetApp units had fewer issues. One of our VNX5500s sent us erroneous emails about a fan being dead or something even though everything was fine; updating the code fixed it. Support has been good so far, but our old NetApp 3020s were basically problem-free.
|
# ? Apr 26, 2013 16:15 |
|
Can anyone point me in the direction of some good resources for NetApp E5400 and HP EVA P6000 devices? I've already got a solid understanding of NAS/SAN tech; however, I've only really worked with IBM NAS/SAN devices before, so it'd be much appreciated if anyone could recommend some reading material that goes into the specific nuances of the aforementioned NetApp and HP devices.
|
# ? Apr 26, 2013 17:50 |
|
Amandyke posted:What did you do to those poor little things? Only time I've seen issues with them is during upgrades to 5.32 flare code. I had an issue where a firmware upgrade bricked the 5300. Seriously, they had to send us two new SPs from the local depot and then do some voodoo to revert the upgrade. The release notes for the next version specifically mentioned fixing that problem. Lately I've taken to just calling up EMC and telling them to send a tech whenever I need something upgraded, because I don't want to catch poo poo again for installing a buggy upgrade. I'll let EMC take the blame if it fucks up.
|
# ? Apr 26, 2013 18:11 |
|
So apparently I jinxed myself. EMC came down today to upgrade our VNX5300 from 5.31 to 5.32 to match our other SAN. I hadn't touched the thing aside from running through VIA, and already one of the control stations was hosed and had to be reinstalled. The upgrade of the file portion went well once that was fixed. Moving on to block: yeah, well, it shat itself with the same problem the other one had, the one that was supposedly fixed by EMC in the release we're upgrading to. Thankfully this VNX is the DR SAN, so I don't have to work all weekend getting it back up.
|
# ? Apr 26, 2013 21:51 |
|
I have a fax server (running Windows) with a 1TB storage volume that's producing 70GB snapshots per day, despite receiving only about 1GB of faxes per day. Any suggestions for OS-level tools to track down the source of the change rate? There are 8M+ files involved here. I can't replicate it for DR until I can find and fix it.
|
# ? May 1, 2013 21:25 |
|
What kind of snapshots?
|
# ? May 1, 2013 23:56 |
|
Array snapshots. So that represents 70GB of block changes across the 1TB volume.
|
# ? May 2, 2013 03:17 |
|
What version of Windows is writing to that volume? If it is 2003 or earlier, consider disabling the last-accessed timestamp on files: set the REG_DWORD value NtfsDisableLastAccessUpdate to 1 under HKLM\System\CurrentControlSet\Control\FileSystem.
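For reference, the same change expressed as a .reg file (a sketch based on the key named above; on Server 2003 the change may not take effect until after a reboot):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=dword:00000001
```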
|
# ? May 2, 2013 03:28 |
|
KS posted:Array snapshots. So that represents 70GB of block changes across the 1TB volume.
|
# ? May 2, 2013 03:48 |
|
Seems like asking about defragging servers would be an excellent sysadmin interview question. If the candidate lets you finish asking without punching you in the face, show them the door.
|
# ? May 2, 2013 04:46 |
|
Internet Explorer posted:What version of Windows is writing to that volume? If it is 2003 or earlier consider disabling the last accessed timestamp on files. NTFSDisableLastAccessUpdate Thanks, going to try this. It is indeed 2003. Thankfully no defrag. We're not that bad -- just bad enough to still be using Server 2003 for a hugely important app. But I'm sure I'm not alone there. KS fucked around with this message at 04:58 on May 2, 2013
# ? May 2, 2013 04:55 |
|
KS posted:Thanks, going to try this. It is indeed 2003. What kind of storage array is this? What is the snapshot block size? What is the underlying block size that Windows is using? It would be really hard to generate 70GB of changes from timestamp changes alone. Timestamps are just a few bytes stored in the file's record, so you'd be changing, at most, a single block on disk per file. To generate that much change data with a fairly standard block size of 4k you would have to have about 17 million files updating their access times, which you probably don't have. If your array snapshots happen at a much larger block size, that could have a compounding effect. If disabling the access timestamps doesn't produce results, I would start by generating a report showing all files with a modify time within the past 24 hours to get an idea of what files are changing, and then try to figure out what might be causing them to change. YOLOsubmarine fucked around with this message at 05:54 on May 2, 2013
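The back-of-the-envelope arithmetic above can be sketched as follows (assuming decimal gigabytes and a 4 KiB NTFS cluster, both assumptions on my part):

```python
# Daily snapshot delta observed on the array.
snapshot_delta = 70 * 10**9      # 70 GB, decimal

# Worst case: each access-time update dirties one distinct NTFS cluster.
ntfs_cluster = 4 * 1024          # typical 4 KiB cluster size

files_per_day = snapshot_delta // ntfs_cluster
print(files_per_day)             # 17089843 -- roughly 17 million files
```

With a much larger array snapshot granularity, each dirtied cluster can drag a whole multi-megabyte block into the snapshot, which is the compounding effect described above.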
# ? May 2, 2013 05:40 |
|
Already looked at changed files, nothing obvious. It's a Compellent array, so 2MB blocks for snapshots. I think the access time thing is a strong possibility given the profile of the server -- we're going to try it on Sunday.
|
# ? May 2, 2013 17:24 |
|
KS posted:Already looked at changed files, nothing obvious. It's a Compellent array, so 2MB blocks for snapshots. I think the access time thing is a strong possibility given the profile of the server -- we're going to try it on Sunday. Yea, a 2MB snapshot block size could certainly do it.
|
# ? May 2, 2013 18:48 |
|
KS posted:Already looked at changed files, nothing obvious. It's a Compellent array, so 2MB blocks for snapshots. So, how many changed files?
|
# ? May 2, 2013 23:03 |
|
evil_bunnY posted:So, how many changed files? 70GB / 2MB blocks = 35,000, give or take.
|
# ? May 2, 2013 23:50 |
|
madsushi posted:70GB / 2MB blocks = 35,000, give or take.
|
# ? May 3, 2013 10:03 |
|
There are <8,000 changed files on a daily basis when searching by date modified, totaling less than 1GB. However, a service constantly scans directories containing 700k+ files. It's read-only -- it looks for faxes containing a barcode and links them to a web app. Access time seems like a really good explanation. I'll update after Sunday.
|
# ? May 3, 2013 16:12 |
|
Here's an update as promised. Sunday and Monday's snapshots were 2 and 9 GB respectively. Thanks to Internet Explorer for the fix. That helps quite a bit.
|
# ? May 7, 2013 18:00 |
|
Glad to hear that fixed it for you. Thanks for the update. Was getting curious and was about to ask how it went.
|
# ? May 7, 2013 22:53 |
|
I hope it's ok if I ask a total noob question here. I'm not a storage guy at all, but I'm working on getting my VCP, which requires a decent baseline knowledge. I'm looking at the paths to a particular disk, and the target's WWNN and WWPN are identical. I thought the whole point of WWNs was that they're universally unique? For example, here's something I found in a VMware KB article showing the same thing (emphasis mine): fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-naa.60060480000290301014533030303130 The datastore works fine so it's obviously allowed, I'm just having trouble grasping why.
|
# ? May 8, 2013 19:39 |
|
stubblyhead posted:I hope it's ok if I ask a total noob question here. I'm not a storage guy at all, but I'm working on getting my VCP, which requires a decent baseline knowledge. WWNN and WWPN aren't universally unique across both namespaces, just within their own. The former uniquely identifies the node (the device behind the physical connection), and the latter uniquely identifies a port (the physical connection itself). I'm not 100% sure, but I think when the two are identical it implies a configuration where there's only one connection path for that HBA. It makes sense if you think about it, since you really need the pair of identifiers to specify both the destination and the path to it; the pairing is definitely always globally unique.
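To make the quoted path concrete, here's a sketch that splits it into its HBA and target halves; the layout `fc.<WWNN>:<WWPN>-fc.<WWNN>:<WWPN>-<device>` is my reading of the vSphere runtime-name format:

```python
# Runtime path name from the VMware KB example quoted above.
path = ("fc.5001438005685fb7:5001438005685fb6"
        "-fc.5006048c536915af:5006048c536915af"
        "-naa.60060480000290301014533030303130")

adapter, target, device = path.split("-")

hba_wwnn, hba_wwpn = adapter[len("fc."):].split(":")
tgt_wwnn, tgt_wwpn = target[len("fc."):].split(":")

print(hba_wwnn == hba_wwpn)   # False: the host HBA's node and port names differ
print(tgt_wwnn == tgt_wwpn)   # True: this target presents identical node and port names
```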
|
# ? May 9, 2013 18:06 |
|
This seems like the best place to ask: has anyone here had any experience with SAS switches? I wasn't even aware that they existed until I stumbled upon this yesterday: http://www.lsi.com/channel/products/storagecomponents/Pages/LSISAS6160Switch.aspx Seems like a cheap way to implement shared storage without involving any expensive FC cards or switches. Two of these paired with a couple of MD1200s seem like a great way to set up some storage for some ESX or Hyper-V hosts, at least in a dev environment.
|
# ? May 13, 2013 14:30 |
|
You'll still need a software layer that can deal with shared storage, and fail gracefully.
|
# ? May 13, 2013 16:08 |
|
evil_bunnY posted:You'll still need a software layer that can deal with shared storage, and fail gracefully. So something like OCFS2, VMFS, GFS2, etc?
|
# ? May 13, 2013 17:26 |
|
Goon Matchmaker posted:So something like OCFS2, VMFS, GFS2, etc? The filesystem lock manager that all of those depend on (lock_dlm, ocfs2_dlm, etc) more so than the actual filesystem. You need a cluster manager to handle that aspect of it (vSphere, RHCS, OCFS2's native DLM/cluster manager, something hacked up with Corosync/Pacemaker, whatever) more than just "mkfs.gfs2", but those filesystems are the general idea, yeah.
|
# ? May 13, 2013 19:20 |
|
Zero VGS posted:Yeah I see what you mean, the thing doesn't even have USB. Back to the drawing board. Sorry for the necromancy, but this provided me so much joy/horror, I couldn't not bring it back. What did you end up getting? I think I'll be decommissioning some older boxen in the near future if you're still needing something.
|
# ? May 13, 2013 22:16 |
|
So I'm currently supporting an environment which has a HP EVA SAN with two NetApp filers. NDMP backups are being done from the NetApp directly to tape. Today it took two hours to restore a single folder containing two PDFs. Kill me.
|
# ? May 14, 2013 16:19 |
|
Intraveinous posted:Sorry for the necromancy, but this provided me so much joy/horror, I couldn't not bring it back. I'm still needing something, those were too old. PM me if you have some newer-ish stuff and I might be able to purchase it off you.
|
# ? May 14, 2013 19:56 |
|
Can anyone recommend a good iSCSI target solution for Linux that supports SCSI-3 Persistent Reservation? I've set up a small lab on my home PC to learn about Microsoft Failover Clustering and need to provision shared storage to the cluster nodes.
|
# ? May 21, 2013 09:58 |
|
cheese-cube posted:Can anyone recommend a good iSCSI target solution for Linux that supports SCSI-3 Persistent Reservation? LIO supports SCSI-3 PGRs, but my honest recommendation is to skip Linux here and go with COMSTAR from an Illumos derivative (OmniOS would probably be my choice). There is absolutely no comparison among other open-source stacks in terms of robustness, performance, and maturity.
|
# ? May 21, 2013 13:20 |
|
cheese-cube posted:So I'm currently supporting an environment which has a HP EVA SAN with two NetApp filers. NDMP backups are being done from the NetApp directly to tape. This is what snapshots are for? I don't understand how something ended up on tape but not in your daily snap(s)?
|
# ? May 21, 2013 14:30 |
|
EoRaptor posted:This is what snapshots are for? I don't understand how something ended up on tape but not in your daily snap(s)? Why would someone have the same retention policy for tape and on-disk snapshots?
|
# ? May 21, 2013 14:50 |
|
Heh tape.
|
# ? May 21, 2013 15:14 |
|
Is there an answer to "I need to keep x TB of stuff for 7 years" that isn't tape? Tapes are comforting, really: they can't get corrupted by bad firmware, hit by a power surge, or EoL'd by a vendor. The drives can, but you can always find drives.
|
# ? May 21, 2013 15:34 |
|
No it's fine, but the less I have to read from them (outside of checks) the happier I am. poo poo's slow, and some robots can be finicky (hello HP).
|
# ? May 21, 2013 16:52 |
|
EoRaptor posted:This is what snapshots are for? I don't understand how something ended up on tape but not in your daily snap(s)? Misogynist posted:Why would someone have the same retention policy for tape and on-disk snapshots? The answer to both of these is: we are in the process of completely overhauling the environment, so hopefully things will get better. For backups they are using CommVault, which I've never really used before, but it seems pretty solid so far (given that I've only had about 4 weeks to work with it).
|
# ? May 21, 2013 16:53 |
|
Misogynist posted:LIO supports SCSI-3 PGRs, but my honest recommendation is to skip Linux here and go with COMSTAR from an Illumos derivative (OmniOS would probably be my choice). There is absolutely no comparison among other open-source stacks in terms of robustness, performance, and maturity. Is what you're suggesting free? I'm really just looking to set up a small home lab environment and want to avoid purchasing anything (hence why I'm going with Hyper-V).
|
# ? May 21, 2013 16:57 |
|
cheese-cube posted:The answer to both of these is I'm not complaining that there are tapes, just that for deleted folders and other simple file recoveries it's usually much, much faster to use a storage device's volume-snapshot abilities to go back in time X hours/days and grab it. Unless you have historical tape that goes back much farther than the snapshots do, and these are very old files that went missing?
|
# ? May 21, 2013 17:00 |