|
Nomex posted:As an HP vendor I'd be interested to know your reasoning behind that.
|
# ? Nov 20, 2010 00:25 |
|
adorai posted:I have witnessed each of these three events: single failed disk taking down an array, firmware upgrades failing an array, and a failed head and the partner didn't take over properly. Were these all on the same unit? Not saying that's an excuse; I'd blacklist HP storage altogether if this was just one unit in my environment. Mere curiosity.
|
# ? Nov 20, 2010 00:40 |
|
Jadus posted:Were these all on the same unit? Not saying that's an excuse; I'd blacklist HP storage altogether if this was just one unit in my environment. Mere curiosity. We also had a terrible performance issue on the second unit, but that was probably more due to an admin who didn't know wtf.
|
# ? Nov 20, 2010 01:05 |
|
ferrit posted:Is there any way to increase the write performance on a NetApp FAS3140 running ONTAP 7.2.6.1? It appears that our options, according to NetApp support, are: 7.3.x has significant performance improvements over 7.2.x. ONTAP 7.3.x was written to take advantage of multi-core filer heads by running more threads to handle background processes. I'm surprised support didn't mention this to you. Is it possible for you to upgrade ONTAP?
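If you do upgrade, one rough way to check whether the extra threads are actually helping is to compare per-CPU utilization before and after. This is a sketch; the exact output format varies by release, so verify against your filer:

```
# Per-CPU utilization, sampled every second. On 7.2.x you'd expect one
# domain doing most of the work; on 7.3.x the load should spread out.
sysstat -m 1
# sysstat -x 1 gives the wider view (CPU, NFS/CIFS ops, disk util)
```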
|
# ? Nov 20, 2010 01:50 |
|
GrandMaster posted:Just heard back from support, they will be replacing the cabling on the SPA side bus 0 as there were some other strange bus errors. It looks like SPA crashed and SPB didn't, so I'm not sure why the LUNs didn't all trespass and stay online. The LCC took out a whole enclosure in my case. Yes, the LUNs should have trespassed unless the whole enclosure faulted. This appears to be a rare but real Achilles' heel of the CLARiiON.
|
# ? Nov 22, 2010 22:22 |
|
Fresh support story? Don't mind if I do. NetApp just sent me a log as proof that one of my raidgroups isn't degraded. The thing is, the log is from the day BEFORE the raid rebuild was started, and the system became unresponsive during said rebuild. The reply to calling the lady on her poo poo was "thank you for the information". I have now downed a stiff drink, and I guess I'll have to down several more and just sleep a few hours until the men come back on shift.
|
# ? Nov 26, 2010 20:41 |
|
conntrack posted:Fresh support story? Don't mind if I do.
|
# ? Nov 26, 2010 22:33 |
|
Crowley posted:I would too. I've been using EVAs for the better part of a decade without any issue at all. Haven't used EVAs, but I've had a terrible time dealing with HP's sales team on desktop/laptop purchases. We're talking a large-scale account with eight-figure sales a year, and we got bad responsiveness, slow ordering, and lags on delivery: just overall a bad experience. On the other hand, EMC, NetApp, and Dell are always prompt and responsive and have provided excellent support for pretty much anything we got from them. Now, with Dell we sometimes escalate through the TAM, but that's how it rolls, and it's still quick. Personally, this soured me enough on HP that I wouldn't look at them as a vendor for anything for a while.
|
# ? Nov 27, 2010 04:43 |
|
complex posted:Anyone have any thoughts on NetApp's new offerings? The FAS6200, but in particular ONTAP 8.0.1. I'm thinking of going to 8 just for the larger aggregates. Data ONTAP 8.0.1 also brings DataMotion, which lets you move volumes between aggregates without downtime. The catch is that you can't move a volume from a 32-bit aggregate to a 64-bit aggregate, or vice versa. Compression might also be nice for shrinking user shares, but I haven't had a chance to see it in action yet, so I don't know how much it actually helps. Finally, the introduction of VAAI in 8.0.1 brings a ton of improvements for VMware over iSCSI on NetApp, notably much faster storage vMotion.
|
# ? Nov 27, 2010 11:11 |
|
Did they get SMB2 back in? When 8 was released there was a lot of grumbling about that.
|
# ? Nov 27, 2010 16:20 |
|
conntrack posted:Did they get SMB2 back in? When 8 was released there was a lot of grumbling about that. SMB2 is in 8.0.1, but not SMB2.1, which I guess Windows 7 is capable of.
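For what it's worth, on 7-mode systems SMB2 support is gated behind an option rather than on by default. A sketch, assuming the option name carried over unchanged into 8.0.1; check the release notes before relying on it:

```
# Enable SMB2 negotiation on the filer (verify the option name
# against your release's documentation)
options cifs.smb2.enable on
```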
|
# ? Nov 27, 2010 19:31 |
|
madsushi posted:SMB2 is in 8.0.1, but not SMB2.1, which I guess Windows 7 is capable of. For everyone that has access to NOW, here's the release notes: http://now.netapp.com/NOW/knowledge/docs/ontap/rel801rc2/html/ontap/rnote/frameset.html
|
# ? Nov 28, 2010 18:58 |
|
Two questions... I am a software developer, and the increasing trend toward storage over NFS is loving us big time. It seems that a lot of companies don't fully grasp the technology or how to configure it correctly. Our product contains a proprietary database, and we run into many issues including, but not limited to: locking and caching issues, known locking and caching bugs in NFS, and stale NFS mounts causing our product to hang until the mount becomes available again.

Anyone else out there seeing situations like this? Any advice? We are scrambling internally to figure out better ways to deal with it, but there are frequent incidents that I'm not sure we can avoid. One example is customers doing maintenance on the filer while our product is running, and our processes hanging until a machine reboot, much like what happens if you bring down an NFS mount's host without unmounting it first. In this case the customer blamed our software for the problem, even though any attempt to run simple UNIX commands in the product install directory (cd, ls, pwd, df, etc.) made the process hang too. Another example was a customer on an NFS version with known caching issues: we wrote one test program that wrote to a file and another that read from it, and the reader would not immediately see the data. Anyway, it has been a nightmare to support, so I figured I'd throw the question out there.

Also, on SunOS, if someone using a Veritas cluster does a mount with a file system type of "vfs", does that mean the Veritas file system or SunOS's virtual file system?
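The reader/writer symptom described above is classic NFS close-to-open consistency plus attribute caching. A minimal sketch of that test in Python (the path and timeouts are made up; run the two functions from two different NFS clients against the same export to reproduce it):

```python
import os
import time

def write_marker(path):
    """Write a marker and fsync so the NFS client must push it to the server."""
    with open(path, "w") as f:
        f.write("hello from writer\n")
        f.flush()
        os.fsync(f.fileno())

def poll_for_marker(path, timeout=10.0, interval=0.5):
    """Poll until the data is visible. On a local filesystem this returns
    immediately; on an NFS mount with default attribute caching, the
    reader can lag by up to the attribute-cache timeout (acregmax)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with open(path) as f:
                data = f.read()
            if data:
                return data
        except FileNotFoundError:
            pass
        time.sleep(interval)
    return None
```

Mount options like noac (or actimeo=0) shrink that window, at the cost of many more GETATTR round trips to the server.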
|
# ? Nov 28, 2010 19:31 |
|
Woohoo! I just made it through the third round of interviews for NetApp support. I asked them for some areas of study to brush up on. The rep suggested looking into LUNs, mounting and configuring Exchange, maybe a little mild SQL, but really stressed the LUN/NAS/SAN side. Any suggestions? I've been reading Wikipedia, some of the IBM Redbooks, and some of NetApp's technical resources (thanks, 1000101!), but I'll be honest, some of them are a bit... chewy. I'm trying to step up into the world of enterprise-level support and want to at least have an idea going in, since I sure as hell won't know everything. The client even stated that I won't know most of the answers for a good 4-6 months after starting. But any foundation I can build on is better than none.
|
# ? Nov 29, 2010 17:48 |
|
idolmind86 posted:Two questions... Just use iSCSI for your database; that's what it's meant for. I don't want to reiterate the title of the thread, but explain that your database requires direct-attached storage or a SAN, and that NFS is not acceptable. This is the case with many databases. It's perfectly reasonable to require that they use a system that grants you block-level access to files. File-level access over a network drive is often just not going to cut it; that's what you're experiencing. Require block-level access and your problems go away.
|
# ? Nov 29, 2010 19:26 |
|
Misogynist posted:Their support might suck but don't be a chauvinist douche Post/username combo, right here. For a FAS2020, is there anything I'm doing wrong that forces me to spend half my time rebooting these damned BMCs? Pretty frequently, when I try to ssh to them I get 'server unexpectedly closed connection', and NetApp's answer was to just reboot the BMC. That works fine, but it's happening often enough that I'm this close to just using the FilerView command line, and ugh.
|
# ? Nov 29, 2010 20:25 |
|
Anyone using Data Domain? We got quoted a price that would buy us a petabyte of raw disk for the same price as a Data Domain box. We could probably buy half a petabyte, compress it with standard gzip, and come out paying less. Going back to tape and tape robots is starting to sound good again...
|
# ? Nov 29, 2010 21:29 |
|
conntrack posted:Anyone using Data Domain? We got quoted a price that would buy us a petabyte of raw disk for the same price as a Data Domain box. We use them. Work as advertised. Not cheap though. code:
|
# ? Nov 29, 2010 21:35 |
|
skipdogg posted:We use them. Work as advertised. Not cheap though. This looks so sweet. I have neck beard envy right now.
|
# ? Nov 29, 2010 21:43 |
|
what is this posted:I don't want to reiterate the title of the thread, but explain your database requires direct attached storage or a SAN, and that NFS is not acceptable. I agree 100%, but we work with some pretty large customers who just don't seem to get it. For instance, the latest headache has been a very large customer who claims they have no physical storage in house, that all storage is on a central Veritas cluster, and that there is absolutely no way to install on a physical disk.
|
# ? Nov 29, 2010 23:04 |
|
idolmind86 posted:I agree 100% but we work with some pretty large customers who just don't seem to get it. For instance the latest headache has been by a very large customer who claims they have no physical storage in house and that all storage is done on a central veritas cluster and that there is absolutely no way to install on a physical disk. ..and they can only present that storage to the app server over NFS and not iSCSI or FC?
|
# ? Nov 29, 2010 23:08 |
|
da sponge posted:..and they can only present that storage to the app server over NFS and not iSCSI or FC?
|
# ? Nov 29, 2010 23:56 |
|
da sponge posted:..and they can only present that storage to the app server over NFS and not iSCSI or FC? I'm not sure; that's where we're getting out of my area of expertise. Actually, any SAN is out of my area of expertise. Our biggest problem is that we constantly get DBAs opening tickets about our software that are really about NFS (or other filer issues). We usually resolve the issue manually, and then it never gets escalated to the UNIX admins at the customer site. This repeats in a vicious cycle until some CTO or other higher-up freaks out over the number of tickets, and then the DBAs point fingers at us, not the file system. Anyway, it looks like it's going to be an epic struggle, so I'm trying to educate myself and hopefully come up with something.
|
# ? Nov 30, 2010 02:46 |
|
idolmind86 posted:I'm not sure. That's when we're getting out of my area of expertise. Actually, any SAN is out of my area of expertise. Our biggest problem is that we constantly get DBAs opening up tickets about our software, related to NFS (or other filer issues). We usually resolve the issue manually and then the issue never gets escalated to the UNIX admins at the customer site. This repeats in a vicious cycle until some CTO or other higher up freaks out over the amount of tickets and then the DBAs point fingers at us, not the file system. Yeah, that's almost always going to be a customer storage issue. NFS/CIFS does some really bizarre poo poo depending on which half-assed implementation you're using. iSCSI is pretty much designed to fix the file-level caching and buffering crap. Sure, it has its own problems, but dealing with that poo poo isn't one of them.
|
# ? Nov 30, 2010 04:12 |
|
idolmind86 posted:Two questions... Sometimes I run into clients that use NFS without any consideration for the type of workload they require: they end up mounting their exports without putting in the right options. The problem is that databases treat the filesystem differently than conventional applications do. Oracle is an example that has intimate knowledge of filesystems, particularly local filesystems. Sometimes this results in some pretty messed-up performance over a NAS protocol like NFS. One way of overcoming this problem is to get rid of NFS and use FC or iSCSI. Another is simply to tune your NFS options to suit your database workload. What follows below is a crib of a NetApp technical report (TR-3322, get it here: http://media.netapp.com/documents/tr-3322.pdf ). I want to try to explain what NetApp considers to be the problems in configuring NFS for a database workload. There are four considerations in using NFS instead of a local filesystem to store your database:

1) Data caching mechanisms
2) Data integrity
3) Asynchronous I/O
4) I/O pattern

With respect to 1): conventional file I/O doesn't give the application a facility to deal with the caching of data; the file system has its own mechanism to cache data to reduce I/O. A database, on the other hand, is likely smart enough to have its own caching mechanism. This presents a problem in that a 'double caching' effect can occur, which is undesirable.

2) File systems will often defer the writing of data to disk until some point in time determined by the operating system. Databases sometimes require that data be written to disk immediately to provide data integrity, and this deferral of writes (known as write-back) can cause unwanted latency for the db.

3) Asynchronous I/O (AIO) is a feature of an OS that enables your application to continue processing while file system I/O requests are being serviced. AIO is relevant to databases because it lets them control their read-ahead and write-back behaviour, which are intimately intertwined with AIO.

4) The I/O patterns of databases, particularly online transaction processing, generate a high volume of small, random, highly parallelized reads and writes. NFS performance improvements (as far as I'm told, anyway) have neglected this sort of workload.

So what do you do about this? Without knowing what your proprietary database does and how it works, I would suggest generating some sort of workload and benchmarking the I/O when it is run against locally attached storage. Then run the same workload and benchmark against iSCSI- and FC-attached storage, and finally against NFS-mounted storage. If you think you're seeing performance problems related to any of the above, the next step is to start playing with your mount options. The catch is that you could be playing around for a long time, and the options you end up using depend on the OS and version. NetApp's NFS options for Oracle might provide a starting point: https://kb.netapp.com/support/index?page=content&id=3010189 . On Solaris it's possible to mount an export with forced direct I/O (forcedirectio) and no attribute caching (noac).

As far as I've seen, lock problems generally arise when something interrupts the network or communication with the storage device. There's no easy way to deal with this, but take a look at the nointr option. As for your other question: the Veritas file system is usually referred to as VxFS.
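To make the Solaris options above concrete, here is a hypothetical mount line for a database volume; the filer name, export path, and transfer sizes are placeholders, so check the TR and your OS man page before copying any of it:

```
mount -F nfs -o forcedirectio,noac,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 \
    filer01:/vol/dbvol /u01/oradata
```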
|
# ? Nov 30, 2010 04:44 |
|
what is this posted:Just use iSCSI for your database, that's what it's meant for. While I think your approach has merit, I've done work with a significant number of clients (national utilities, oil & gas, investment banking) that use NFS with Oracle. Every protocol has its own problems; in the end, the right choice comes down to how much time and energy you're willing to dedicate to solving them.
|
# ? Nov 30, 2010 04:47 |
|
toplitzin posted:Woohoo! I just made it through the third round of interviews for NetApp support. I asked them some areas of study to try and brush up on. The rep suggested looking into LUN's, mounting and configuring exchange, maybe a little mild SQL, but really stressed the LUN-NAS/SAN side. Any suggestions? Congratulations on making it this far. I'd suggest spending some time reviewing networking, particularly anything you think would help in troubleshooting network problems. Every NAS implementation I've come across has been delayed by a misconfigured network: misconfigured VLANs, DNS problems, firewalls tightened up like Fort Knox, routers sending traffic across ISLs. Have you ever used Wireshark? It might be worth thinking through how you would use it to diagnose any of these problems. Are you sure the third interview will be technical?
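On the Wireshark point, a couple of hypothetical starting filters (the interface name is a placeholder, and filter flag syntax is worth double-checking against your Wireshark version):

```
# Capture only NFS traffic and show replies that carry an error status
tshark -i eth0 -f "port 2049" -R "nfs.status != 0"
# Same idea for CIFS: capture "port 445", display filter smb.nt_status != 0
```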
|
# ? Nov 30, 2010 04:53 |
|
idolmind86 posted:I'm not sure. That's when we're getting out of my area of expertise. Actually, any SAN is out of my area of expertise. Our biggest problem is that we constantly get DBAs opening up tickets about our software, related to NFS (or other filer issues). We usually resolve the issue manually and then the issue never gets escalated to the UNIX admins at the customer site. This repeats in a vicious cycle until some CTO or other higher up freaks out over the amount of tickets and then the DBAs point fingers at us, not the file system. I guarantee almost all of your customers can expose storage from their existing SAN/NAS over iSCSI. Simply require this and your problems will go away. They will not have to buy new storage hardware. Their IT department should know how to set up iSCSI if they are not idiots, and if they are idiots they can call their storage vendor, who will explain how it's done. Your problems are 100% down to the fact that your storage is currently file-level rather than block-level. I'm not going to make a long post about the differences between block-level and file-level access, but suffice to say that for your application, block-level storage will look the same as directly attached and mounted storage, and you will have no caching issues, no file locking issues, etc.
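As a sketch of how little is involved on the customer side, discovering and logging into a LUN with Linux open-iscsi looks roughly like this (the portal IP and IQN are placeholders):

```
# Ask the array which targets it offers
iscsiadm -m discovery -t sendtargets -p 192.168.10.50
# Log in to one of the reported targets; the LUN then appears as /dev/sdX
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.12345 -p 192.168.10.50 --login
```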
|
# ? Nov 30, 2010 09:30 |
|
conntrack posted:Anyone using datadomain? We got quoted a price that would buy us a petabyte of raw disk for the same price as a data domain box. Yeah, we had a similar quote... We decided to go with a Sun Thumper instead; ZFS inline dedupe is out in the next release of Solaris.
|
# ? Nov 30, 2010 11:28 |
|
GrandMaster posted:yeah, we had a similar quote.. decided to go with a sun thumper instead, zfs inline dedupe is out in the next release of solaris They just came out with the 9/10 release, and no dedup. The previous release was 11 months ago. Solaris 11 is coming at us at lightning speed. Not sure how long you're going to be waiting for dedup in Solaris 10.
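Whenever dedup does land in a Solaris 10 update, it's expected to work the way it already does in dedup-capable ZFS builds: as a per-dataset property. Treat the commands and pool names below as a sketch until your release actually ships it:

```
zfs set dedup=on tank/backups   # applies to newly written blocks only
zpool list tank                 # the DEDUP column reports the ratio achieved
```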
|
# ? Nov 30, 2010 16:07 |
|
toplitzin posted:Woohoo! I just made it through the third round of interviews for NetApp support. I asked them some areas of study to try and brush up on. The rep suggested looking into LUN's, mounting and configuring exchange, maybe a little mild SQL, but really stressed the LUN-NAS/SAN side. Any suggestions? Learn about the SnapX products, since those will be the hardest to troubleshoot because your hands-on time is limited. The reason they give the 6-8 months bullshit is because you are thrown to the wolves within two weeks and actual training is pretty hard to get into. Your most valuable resource will be your co-workers, so don't piss them off. Also, if you are going night shift, I hope you can understand Indian accents over a grainy connection. Which part of support are you getting into? ghostinmyshell fucked around with this message at 16:49 on Nov 30, 2010 |
# ? Nov 30, 2010 16:35 |
|
FISHMANPET posted:They just came out with the 9/10 release, and no dedup. The previous release was 11 months ago. Solaris 11 is coming at us at lightning speed. Not sure how long you're going to be waiting for dedup in Solaris 10. code:
|
# ? Nov 30, 2010 20:38 |
|
Welp, that sure is "special" on Oracle's part. They're probably going to try to make it a big selling point of Solaris 11, which maybe means they'll come out with an Intel Thumper? And I think Solaris 11 Express is only for evaluation; you can't actually use it in production (but they haven't released Oracle Solaris Studio 12 for it, WTF Oracle?). I'm "evaluating" it at home, which isn't really a lie because I'm learning all sorts of great poo poo that I can use when we go to 11 here at work.
|
# ? Nov 30, 2010 20:57 |
|
Bluecobra posted:Also, Solaris 11 Express is out and is supported by Oracle if you are brave enough to put it into production.
|
# ? Nov 30, 2010 21:30 |
|
FISHMANPET posted:Ans I think the Solaris 11 express is only for evaluation, you can't actually use it in production (but they haven't released Oracle Solaris Studio 12 for it, WTF Oracle?). I'm "evaluating" it at home, which isn't really a lie because I'm learning all sorts of great poo poo that I can use when we go to 11 here at work. http://www.theregister.co.uk/2010/11/29/oracle_sunrise_supercluster/ Who knows if that implies it'll be a supported configuration for end-users. TobyObi posted:Interesting... since I'm still running OpenSolaris in production, as an FC SAN.
|
# ? Nov 30, 2010 21:48 |
|
Sweet, I guess I hope they enjoy violating their own license if they start selling products running Solaris 11 Express. Oracle posted:You may not: There's also a section on making sure you don't accidentally GPL Solaris code or something: Oracle posted:Open Source Software
|
# ? Nov 30, 2010 21:57 |
|
It's really not violating anything if they're not legally bound to agree to it in the first place. They're sort of the copyright holder.
|
# ? Nov 30, 2010 22:00 |
|
Misogynist posted:Haven't upgraded to OpenIndiana yet? Though, to be honest, it may never get upgraded to Solaris 11 either, given that getting downtime for that server now will be pretty difficult.
|
# ? Nov 30, 2010 22:40 |
|
I got a Fusion-io ioDrive to play with, but I'm having some issues. VMware formats the drive with 512-byte sectors. I've made sure the partition starts at sector 128, so it should be write-aligned for 4k blocks in the VM, yet I'm getting absolutely terrible 4k random IO. Does anyone know if there's any way to format the drive with 4k blocks? Or does anyone have any suggestions?
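The arithmetic behind that alignment claim, as a quick sanity check (sector 63, the old DOS partitioning default, is the classic misaligned start; 128 is fine for 4k):

```python
def is_aligned(start_sector, sector_size=512, block_size=4096):
    """True if a partition starting at start_sector falls on a
    block_size-byte boundary."""
    return (start_sector * sector_size) % block_size == 0

print(is_aligned(128))  # 128 * 512 = 65536, a multiple of 4096
print(is_aligned(63))   # 63 * 512 = 32256, not a multiple of 4096
```

So if the guest partition really does start at sector 128, the geometry itself is fine, and the bad 4k numbers are probably coming from somewhere else in the stack (the VMFS layer underneath, or the card's own low-level formatting).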
|
# ? Dec 6, 2010 13:12 |
|
Does anyone have any experience with the HP P4300 G2 SAN starter kit? Thoughts? Has HP screwed up the LeftHand units or are they still a good option for an iSCSI SAN? I'm looking into virtualizing a large chunk of our physical machines. We only have one server running one MSSQL database and no Oracle. It will mostly be for our GroupWise system and network file storage, along with the odds-and-ends boxes that are just wasting electricity. I got pricing and it was more than I was expecting. Then again, I have no real basis for my expectations.
|
# ? Dec 8, 2010 15:48 |