|
lilbean posted:I was just gonna post this. I ordered a 4540 today (more cores, more RAM, etc). Can't loving wait for it to arrive. I've started the process of getting a try-and-buy of a 4540+J4000. I'm waiting to see if they've fixed a minor sata driver problem. http://www.dreamhoststatus.com/index.php?s=file+server+upgrades
|
# ¿ Aug 29, 2008 15:50 |
|
|
lilbean posted:Hm, haven't heard of that one. Do you have a bug ID for that? Is it a performance-related issue? They seem to not be making a big deal of it. The X4500 shipped with a broken sata driver, which they consider low priority, even though the box is 6x8 sata cards with some cpu and memory stuffed in the back. We had to install IDR137601 (or higher) for Solaris 10u4 to fix it. The thumpers all ship with u3, so first you have to do a very tedious upgrade process. Sorry, I don't have a bug ID. OpenSolaris suffers as well, google "solaris sata disconnect bug" or "solaris marvell" and you will find some people who hit it. It's pretty much anyone who puts any kind of load on a thumper. Or in my case, 29 thumpers.
|
# ¿ Aug 29, 2008 19:56 |
|
lilbean posted:Jeeeeesus Christ, 29 of them? Nicely done. Thanks. They're big, loud, HEAVY, NOISY monsters, but if you don't care about power redundancy you can stuff 6 of them in a 120v 60amp rack! Once they're purring along with that IDR they're lots of fun. quote:How do you have your ZPOOLs laid out on them? We're basically going to use ours for a backup disk (with NetBackup 6.5's disk staging setup), so we'll be writing and reading in massive sequential chunks. We plan on benchmarking with different setups like 40 drives in mirrors of 2, raidz and raidz2 vdevs (in different group sizes). Only disks 0 and 1 are bootable (c5t0d0 and c5t4d0), but you are correct, they come in an SVM mirror. It makes upgrades not so scary, since if you totally bone it somewhere, you can revert quickly. The new x4540 seems to be able to boot from flash, which will be quite nice, adding 2 more spindles to the mix. Right now we're only getting 11tb usable out of a machine, with 5-disk raidz2s and a handful of spare disks. Oh, and a stock thumper zpool won't rebuild from a spare, either. It gets to 100% and starts over. Enjoy!
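For anyone wondering how 48 bays shrinks to ~11tb usable, here's a back-of-envelope sketch. The 500GB disk size and single-spare leftover are assumptions for illustration, not our exact layout, and ZFS metadata plus binary-vs-marketing TB eat more on top:

```shell
#!/bin/sh
# Rough usable capacity for a 48-bay thumper carved into 5-disk
# raidz2 vdevs (2 parity disks per vdev). Disk size is an assumed
# 500GB; real usable space is lower after ZFS overhead.
bays=48
boot=2                 # c5t0d0/c5t4d0 SVM boot mirror
disk_gb=500
group=5                # disks per raidz2 vdev

vdevs=$(( (bays - boot) / group ))           # how many full vdevs fit
spares=$(( bays - boot - vdevs * group ))    # leftovers become spares
usable_gb=$(( vdevs * (group - 2) * disk_gb ))
echo "vdevs=$vdevs spares=$spares usable=${usable_gb}GB raw"
```

Swap `group` for 6 or 7 to see why bigger raidz2 groups buy capacity at the cost of rebuild exposure.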
|
# ¿ Sep 1, 2008 20:10 |
|
lilbean posted:Well with only one it shouldn't be too much trouble. As for the weight, well I think I'll make our co-op student rack mount the thing - and take the cost out of his paycheck if he breaks it by dropping it. Hah! I hope your disability insurance is paid up. quote:Yeesh, is that with the unpatched Solaris 10 that comes with it? I'd planned on a fresh install once I get it with the latest ISOs and then patching it. Yup! Thing should ship with a damned working copy of Solaris. Wedge of Lime posted:The 'Marvell bugs' have now been fixed as part of an official patch, the following patches: Do you work for Sun? If so, I would like to speak with your privately about this stuff. I've sunk a lot of man hours into this thing trying to patch something with an IDR for U4 of Solaris based on a plan from our gold support contract. It looks like the marvell patch was just released a month ago. I'll have to ask my sales rep why we weren't notified about it. quote:Also, before doing anything with ZFS please read this: Yes, read this, it is awesome.
|
# ¿ Sep 1, 2008 22:11 |
|
I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work. Our requirements: Hardware raid, dual parity preferred (RAID6), BBU Cheap! Runs or attachable to Debian Etch, 2.6 kernel. Power-dense per gb. Cheap! To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon RSYNC to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O. We are looking for the cheapest/densest cost/gig. If that is buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing. The ability to buy the unit without any disks is a big plus, we are frequently able to purchase disks well below retail. I need about 50-100tb usable in the coming month or two, then it will scale back. I am putting these in 60amp/110v four post 19" racks. Edit: Oh and I will murder anyone who suggests coraid, and not just because it is neither power dense nor hardware raid.
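Since cheapest cost/gig is the whole game here, the comparison arithmetic is worth scripting once. All prices below are made-up placeholders, not quotes from any vendor:

```shell
#!/bin/sh
# $/raw-TB for a disk-less chassis you populate yourself.
# chassis_usd and disk_usd are placeholder numbers for illustration.
chassis_usd=6000       # hypothetical 24-bay JBOD, no disks
disks=24
disk_usd=180           # hypothetical street price for a 1TB sata disk
disk_tb=1

total=$(( chassis_usd + disks * disk_usd ))
raw_tb=$(( disks * disk_tb ))
echo "\$$(( total / raw_tb )) per raw TB"
```

Run it once per candidate chassis and the Dot Hill vs. JBOD vs. more-of-the-same question answers itself, at least on the capex side.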
|
# ¿ Sep 17, 2008 17:23 |
|
complex posted:This is for the 50GB backup offer, I presume? Hrm? rage-saq posted:Get an HP SmartArray P800 card and then attach up to 8 MSA60 12xLFF enclosures. Right now our current theory is a 3ware 9690SA card with these: http://www.siliconmechanics.com/i20206/4u-24drive-jbod-sas_sata.php So your solution is about 2x the cost. It's a supermicro backplane, we're getting a demo unit in about 5-10 days. Any horror stories about the card? Backplane? Is there something cheaper per rack U per gb? (Or moderately close, monthly cost of the rack and all.)
|
# ¿ Sep 17, 2008 22:06 |
|
Misogynist posted:If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an rear end in a top hat. I have 26 thumpers.
|
# ¿ Sep 18, 2008 02:10 |
|
M@ posted:I've got one 6080 Sorry about that 6080. I'll vouch for M@, he's yet to let me down, and I've bought one or two things from him. I'll have a look see at the IX systems stuff, thanks. A 48-in-4u beige box chassis is just the kind of thing I was looking for to knock off this Sun X4500/J4000 stuff. Edit: Hah, I was just looking at the Xyratex stuff and being annoyed at no obvious link to resellers. http://xyratex.com/products/storage-systems/storage-F5404E.aspx I've had good luck with their DS14 shelves, I can't imagine this being much worse, besides the onboard raid. H110Hawk fucked around with this message at 22:15 on Sep 18, 2008 |
# ¿ Sep 18, 2008 22:02 |
|
Catch 22 posted:What? Is that a tool? Google turns up nothing for that. How did you make that chart? If I had to guess it's just snmp data graphed with something like cacti/rrdtool. Edit: Jesus, you would think they could just sell you a SNMP license. H110Hawk fucked around with this message at 07:05 on Sep 21, 2008 |
# ¿ Sep 20, 2008 19:07 |
|
Maneki Neko posted:We once hit a bug in Data Ontap while resizing a LUN that knocked one head offline, then the other head took over and continued the same operation, hit the same bug and died too. It took about an hour for everything to come back up and be happy after replaying the logs, but I would classify that as a failure. Hah! We had this same thing happen with our BlueArc once, only the bug triggering data was committed into the logs. Replaying the logs caused both the clustered heads to crash. Wound up having to wipe the logs (data loss ahoy!) and go from there. Never again with BlueArc. I've never had an error like that on a NetApp.
|
# ¿ Sep 25, 2008 18:38 |
|
Catch 22 posted:You can store data on the OS LUN, but it's best not to. Coming from the world of netapp, this seems ridiculous to me. On a single tray why would I give up 4 disks to OS? What is it even paging, and why doesn't it simply pre-allocate what it needs to manage your system? Doesn't a SAN OS know everything about itself when you initialize it? That stinks of a horrible design flaw. Am I missing something obvious here besides "buy more spindles?"
|
# ¿ Oct 3, 2008 17:58 |
|
Catch 22 posted:Winner Winner Chicken Dinner, I would say, but he can't with the AX150. My main point is the AX150 seems like a pretty poorly designed solution if I have to burn 25% of my available spindles for what should be some paltry OS space consumption. What the heck are they storing on those disks?
|
# ¿ Oct 3, 2008 20:39 |
|
Catch 22 posted:That paltry OS is rather large depending on the SAN series you pick. The CX line referenced in the above post requires 62GB. That seems excessive. To diverge from the thrilling debate about some Dell bottom of the barrel disk enclosure, why am I seeing a shitton of ECC (and similar) errors on my filers in the past week? In the past 7 days I've had transient ECC errors, a watchdog reset, and two filers with NMIs being shot off the PCI bus. We also had our Cisco 6509 register: code:
At this point I'm blaming bogons and the LHC.
|
# ¿ Oct 3, 2008 21:15 |
|
Mierdaan posted:Sorry we don't all have 23 thumpers to brag about. Seriously dude, you're a loving dick sometimes. It was a joke. Catch22 and I seemed to be having quite the lively (read: boring) argument over something I had no idea about, and I had actual content I was curious about related to this thread. Aquila posted:Actually he has 27, but I had lost three and not bothered to rma a fourth when I worked there. Normally when people quit they steal stuff, not alert others of what they find. In that sense, you failed at quitting.
|
# ¿ Oct 4, 2008 04:37 |
|
Saukkis posted:Maybe he did that to distract you from the real loot. When was the last time you took a look at your core router? Are you sure it hasn't been replaced by a WRT54G? It could explain why my site is down. Don't be silly, those three thumpers are worth FAR more than some paltry core router. (And we use an Airport Express. Can't you read my witty avatar?)
|
# ¿ Oct 5, 2008 21:02 |
|
Ray_ posted:I know I can pickup a StorVault for pretty cheap, but as far as I know they are single-controller only. StorVault is indeed single controller only. No clustering options. They are fairly robust in that netapp sort of way, but unless they've upgraded ontap it has some strange bugs that will bite you in the long run. Memory leaks in various supported/unsupported features will eventually cause them to stop working. Failover is kinda hokey last time I tested it, in that you couldn't just move all of your disks to a cold chassis and have it work instantly. License codes are tied to the chassis, etc. Support staff who had trouble comprehending requests were also a minor issue.
|
# ¿ Oct 6, 2008 17:09 |
|
Catch 22 posted:That's not failover you know... Yeah, it is a bad habit of mine. At work we have a system for "failing over" hardware on to new hardware. It is not the same as what everyone else refers to as a high availability clustered failover. Sorry about that misuse of the term. I did mention no clustering/single controller.
|
# ¿ Oct 6, 2008 17:48 |
|
Catch 22 posted:Cool. As long as you know there are better ways. I would hate for you to think that is the only "failover" available out there. Good God I think I would shoot myself if it was. code:
quote:Question: Why do you have it setup for cold replacements like that? Or are you referencing that you are using StorVault meaning there is no other kind of recovery other than cold replacements. Cold replacement (or warm replacements in the case of some of our web servers) is a failsafe problem solver. We just order N+1 of everything and leave one sitting there. It scales out for us since normally N winds up being a very large number. Backplane goes out? Just throw in a new entire system and then work to replace the backplane on your now cold and dead system. In reality N+1 is actually closer to N+N*5-10% depending on hardware class, reliability, ease and speed of repairs, etc. The StorVault wound up being kind of a special case for us, we tried out two of them, own three of them (cold spare!), and they didn't work for our needs. We're migrating users off them, slowly, but the above bugs hold true for anyone who is going to use one. Internally they are pretty much like a FAS250 or whatever, only with truly off the shelf hardware.
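The N+1-really-N+N*5-10% rule above sketches out to a one-liner. The percentages are the ones from the post; the fleet size is a made-up example:

```shell
#!/bin/sh
# Cold spares at 5-10% of fleet size, with a floor of one spare
# (the literal "+1" for small N). fleet is an example number.
fleet=60
pct=10                 # 5-10% depending on hardware class/reliability

spares=$(( fleet * pct / 100 ))
if [ "$spares" -lt 1 ]; then spares=1; fi
echo "fleet=$fleet -> keep $spares cold spares"
```

The floor matters: at fleet=5 the percentage rounds to zero, and zero cold spares is exactly how you end up waiting on an RMA with a dead backplane.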
|
# ¿ Oct 6, 2008 20:13 |
|
optikalus posted:Well, my "SAN" has just turned itself into a pile of crap. What is your "SAN"? What brand+part number hard disks did you have in it? Did you order the exact same model disk, down to the revision, or a "compatible" one? Guaranteed sector count is something you need to pay attention to in the future. It is a good idea to only carve 95% of your disks into your array, this lets you use the high end of them as fudge factor in case you ever have to switch disk manufacturers. How long between disk failures? Were you able to gank the old disk before the second one failed?
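On the 95% carve: the point is leaving headroom so a replacement disk with a slightly smaller guaranteed sector count still fits the array. A sketch with an example sector count (the number below is a typical "1TB" disk, not anyone's actual drive):

```shell
#!/bin/sh
# Carve only 95% of each disk into the array so a replacement from
# a different manufacturer still fits. sectors is an example figure
# for a "1TB" disk with 512-byte sectors.
sectors=1953525168
carve=$(( sectors / 100 * 95 ))
slack=$(( sectors - carve ))
echo "carve $carve sectors, keep $slack as fudge factor"
```

~97 million spare sectors is roughly 46GB of insurance per disk, which is cheap compared to discovering mid-rebuild that the new vendor's "1TB" is smaller than the old vendor's.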
|
# ¿ Oct 11, 2008 04:04 |
|
optikalus posted:I quote "SAN" because it isn't really a dedicated SAN at all, just a RAID box with SAN protocols running on it. Pretty much a vanilla fileserver, then? Those are some real cost savers. We're using them at work now over netapp for a lot of things. Nothing wrong with them, so long as you don't go overboard on the cheap factor. I would suggest picking up some Hitachi or Seagate 5-year warranty disks and rebuilding from scratch. This can save you a lot of money over Netapp/HDS/EMC, and likely give you as much reliability as you need. I don't know about recent years, but around 8-10 years ago my uncle worked for a company that assembled and sold raid systems, I forget the name. He managed the disk burn-in and certification for their units. He told me never ever to buy Fujitsu disks. They apparently hit 50% failure in about half the time as all other major brands. We got a batch of brand new Fujitsu-label 15k disks, and they still seem to fail about twice as often as the Seagate equivalents. I don't have hard numbers, though, so I don't know if it's just the bad taste left in my mouth by my creepy uncle, or if they really do still suck.
|
# ¿ Oct 12, 2008 06:38 |
|
Wicaeed posted:How hopelessly outdated are these things? I believe the HDD magazine has 36GB drives in it, and I know my company has about 9 or 10 older magazines with 16GB drives in them. As long as it has something like ontap 6 on it you should be able to get a feel for it. Have fun. See if they have any hard-copy manuals for it, or ontap installation diskettes.
|
# ¿ Nov 10, 2008 21:55 |
|
Cultural Imperial posted:I'm an employee of NetApp and I can try to put you in touch with someone more responsive if you're still interested. What do you do at NetApp?
|
# ¿ Nov 11, 2008 03:36 |
|
Wicaeed posted:Erg, I guess I had a mistype, it's not a 270, it's a Net Appliance NetApp F720 and it uses Eurologic NetApp XL501R Fibre Channel JBOD FC9 (I know our company has a bunch of FC7's laying around) For learning it is fine. OnTAP 6.5 or something will be the latest OS it runs, since they stopped making Alpha versions sometime around then, maybe 6.4. It is going to be brutal on your electric bill, though.
|
# ¿ Nov 11, 2008 19:24 |
|
Wicaeed posted:Welp, I made the plunge As long as it's the correct architecture type (Alpha vs. i386) you will be fine. I don't believe any of the ontap images are any different from any other. The only thing it does is run/not run processes based on licensing.
|
# ¿ Nov 19, 2008 01:39 |
|
Mierdaan posted:(sales bullshit) Always get them to put it in writing. It helps keep sales people honest. If you're really not sure, call up Netapp and ask them directly!
|
# ¿ Nov 19, 2008 22:27 |
|
Wicaeed posted:Can someone explain to me how Netapp does their licensing? Expensively, and based on raw capacity/filer capacity. quote:I figure it probably wouldn't be worth my time/money to buy a license from Netapp, am I right? You might as well give them a call. The worst thing that happens is they laugh at you. If you can't get a license from them, and are willing to work something out a little hokey just to get the units legally working, shoot me a PM/IM and I can tell you a couple companies that can help.
|
# ¿ Dec 11, 2008 20:18 |
|
Wicaeed posted:Well that's part of the problem, I can't access the CLI. Rather, the OS won't load all the way. Oh, jeez. I misread the filer model you had. F720 is a very old filer. It looks like your PCI cards are out of order. Try moving your NVRAM card to slot 1 and putting the FastEthernet card in Slot 2 (or whatever). Don't bother calling netapp. Shoot me an IM, I tried sending you one and it said "refused by client." Edit: \/ I tried sending you an IM on AIM. H110Hawk fucked around with this message at 22:55 on Dec 11, 2008 |
# ¿ Dec 11, 2008 22:06 |
|
Wicaeed posted:I assume there's going to be a way to reset it to something I want, correct? I know nothing at all about ONTAP5. In 6+ you would hit control-C sometime after starting, before it prints the OS banner, and indicate that you needed to reset the password. In Ontap 5.3.6R1 according to NOW: quote:How to reset the filer password
|
# ¿ Dec 12, 2008 01:10 |
|
Jesus Christ. 14 hours later: code:
|
# ¿ Dec 16, 2008 07:35 |
|
tinabeatr posted:Does anyone have any experience with BlueArc? There is no pole long enough.
|
# ¿ Jan 19, 2009 23:39 |
|
skullone posted:And whatever NAS/SAN you get, it'll suck. There will always be odd performance problems that you'll have to spend hours troubleshooting before your vendor will listen to what you're saying, only to have them say it's a known problem, and a patch will be ready in a few weeks. Seems to be the standard Sun way of doing business. We had similar problems with the X4500 units which shipped with a simply faulty SATA driver, which won't be fixed until Update4... no, Update5. You should also be ready for a real heck of a time if you ever start doing what the ZFS papers say you should be able to do with the filesystem. Things start to get hairy with the management tools around a few thousand nested filesystems. Update6 resolved a lot of those issues, but it's still there. From what I hear, and contrary to what their sales guys insisted, BlueArc appears to be doing demo units now, or is it still their "if we think you like it you have to buy it" try-and-buy program? In theory I will have a couple of Titan 2200? 2400? units for sale w/ NFS, Clustering, and Data Migrator licenses. Anyone interested? Might also sell the disk trays if we don't have another use for them, Engenio (LSI), around 10 trays of FC and 10 trays of sata. Exact numbers available for serious inquiries.
|
# ¿ Jan 28, 2009 19:30 |
|
skullone posted:You guys are scaring me... I already have a drive with predictive failure on my Sun box. Haven't reported it to Sun yet... but now I'm thinking "this RAID-Z set with hot spares isn't looking as good as RAID-Z2 anymore" It's a lot cheaper to keep a sata disk on hand than it is to replace dead data. Just buy a Hitachi/Seagate disk of correct size for your array. It costs, what, $150? Swap it in, deal with Sun, swap their part in.
|
# ¿ Jan 29, 2009 05:13 |
|
InferiorWang posted:I hate sales people. I want to get pricing on some lefthand gear, but I don't want to listen to any of their spiel, or get follow up calls only to have the person get pissy when I remind them we're a public school and everything comes down to dollars, not necessarily doing things the proper way. I don't want to talk to a reseller either. All I want to know is how much it costs. Get a full retail quote from them for "the right way", have them itemize costs as much as they can, claiming government red tape. Then generate a quote for what you actually want, cut the price 50%, and make up a PO. Call them up, send them your proposal, and tell them it's this or EMC. See what happens. You generally won't be able to cut the service contract by nearly 50%, and I assume this is required by your school policy. If they still balk, remind them that you have to keep a service contract on the device for 3? 5? years due to that same government red tape. Don't be afraid to lie outright to them, their sales guys will do the same to you. H110Hawk fucked around with this message at 17:14 on Mar 18, 2009 |
# ¿ Mar 18, 2009 17:12 |
|
Catch 22 posted:You can also save by doing the install yourself (if they are charging for it) and if you get the Manufacturer behind you they can discount the Vendor's quote, passing savings to you. In my case I got a discount and an extra warranty year for nothing. Call Lefthand and talk with them. It can pay off. Self-install of most stuff is a snap, assuming you aren't afraid of lifting disk trays. Just make sure a self-install doesn't conflict with your service contract and warranty. If it does, they should do it for free since you are paying for it with the service contract.
|
# ¿ Mar 18, 2009 19:18 |
|
InferiorWang posted:My problem is I can't even get anyone to listen to me about doing this and getting away from bare metal machines for our critical data without having some semblance of a dollar figure attached to it. I have to approach this rear end backwards from how most normal people would approach it. Remember above where I said lie to them? Do that. I imagine at this point they're calling you and wanting to setup meetings? Tell them to bring their spreadsheets because you have to talk cost as well as performance. Let them go through their whole dog and pony show with the glossy brochures and powerpoint, then ask them what "that" (pointing at the screen) costs. Play hardball right back.
|
# ¿ Mar 19, 2009 00:56 |
|
brent78 posted:I have 4 Coraid SR-1521's with 15 x 500GB drives each (7.5 TB RAW each), willing to unload real cheap to the first people who PM me. I was using them for a media project and now no longer need them. I might take them. Shoot me a PM with your initial cost quote. How used are they?
|
# ¿ Mar 27, 2009 21:41 |
|
Sock on a Fish posted:I came in this morning to find my Solaris box had crashed, and when I brought it back up it threw this at me: The filesystem is now suspect, because you don't know if a block had data changed and then re-checksummed to appear valid. I would check your logs for information about the crash itself, versus just what zpool is telling you about your pool.
|
# ¿ Apr 28, 2009 01:41 |
|
Sock on a Fish posted:Say I took the mirror pool containing c1t0d0s0 and c1t8d0s0, and then one at a time removed each device from the pool and then added back as c1t(0|8)d0, allowing the pool to resilver in between moves. Also, let's say I rebooted the machine only to discover that I'd wiped out my boot sector. Whoops. In theory you can just dd the boot sector from one of your old disks on to a new one. If they're in the mirror the worst you'll do is hose one of them. Remember to do it to the disk device itself (c1t0d0) or the whole disk partition (c1t0d0s2). grub-install or what not *should* work, googling around found this untested bit: http://opensolaris.org/jive/message.jspa?messageID=179529 As for what to dd on and off, the boot sector is a set size, and from there it should be enough to get you reading ZFS. Google dd grub gave this: http://www.sorgonet.com/linux/grubrestore/ You will want to use the stage1 loader first and see what happens. I don't know where they switch from raw disk reading to actually being bootstrapped.
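A sketch of the dd-plus-grub repair described above. The device names are examples, the installgrub invocation is untested here, so the script defaults to printing the commands rather than running them — read them over against your actual devices first:

```shell
#!/bin/sh
# Copy the boot sector from the surviving mirror half onto the wiped
# disk, then reinstall grub. DRY_RUN=1 (the default) only prints the
# commands; set DRY_RUN=0 to actually run them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi }

GOOD=c1t8d0            # mirror half that still boots (example name)
NEW=c1t0d0             # disk whose boot sector got wiped (example name)

# s2 is the Solaris whole-disk slice; 512 bytes covers the boot sector.
run dd if=/dev/rdsk/${GOOD}s2 of=/dev/rdsk/${NEW}s2 bs=512 count=1
run installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${NEW}s2
```

Worst case on a mirror member, as noted, is you hose a disk that ZFS can resilver anyway.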
|
# ¿ Apr 29, 2009 17:26 |
|
Syano posted:What about provisioning? Is it usually worth it to split up the disks into separate raid groups or just build one raid set from all the available disks? Or is this something you need to know more about your IO load to make a decision on? This depends on your workload. If you make one large raid set, then carve up luns, you will have the maximum possible IO throughput, but any one VM can bog down the rest of them with an I/O spike. Consider a logging, email, and sql server running on your array. Each one has its own lun. We all know logging services sometimes go batshit insane and start logging things several times per second until you kill them. Do you want that to be able to bog down your email and sql service until you fix it? There are very real tradeoffs, and my suggestion to you, if you have time, is to test out several setups. You may also want to consider which workloads have the heaviest read load (email, typical sql) and which have the heaviest write load (logging, atypical sql) for combining. Figure out how the various read and write caches interact and you may be able to squeeze out some extra iops, but I will leave that part to the more experienced in this field.
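The shared-vs-split tradeoff above is roughly quantifiable. Per-disk IOPS and the counts below are assumed round numbers, not measurements from any particular array:

```shell
#!/bin/sh
# One big raid set: every LUN can draw on all spindles, but a runaway
# workload can also starve the others. Split sets cap each workload
# at its own spindles. Numbers are illustrative assumptions.
spindles=24
iops_per_disk=120      # ballpark figure for a 10k sas disk
workloads=3            # e.g. logging, email, sql

echo "one set: $(( spindles * iops_per_disk )) IOPS, shared (and stealable)"
echo "split:   $(( spindles / workloads * iops_per_disk )) IOPS per workload, isolated"
```

Which is better depends on whether your peak loads overlap — if the logging box goes insane at 3am while email sleeps, one big set wastes nothing; if they spike together, isolation wins.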
|
# ¿ Jun 14, 2009 01:07 |
|
|
bmoyles posted:What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed. They suck balls, for lack of a more elegant way of phrasing it. Their "device" is just Plan 9 with an AoE stack on it. They're slow, latent, and bug-prone. We bought 400-500TB worth of them a few years back, using 750gb disks, and regretted it every second of the way.
|
# ¿ Jun 16, 2009 21:48 |