|
Anyone have a chance to play with an IBM Flash 900 yet? I'm replacing a DS8100 later this year and have been weighing some options. With pretty much any system these days, reliability is pretty great. I'm not seeing the point of something like a DS8870 considering the cost. Right now I have our main EMR Oracle DB mirrored between a Flash 820 and a DS8100 via SVC.
|
# ? Jun 1, 2015 19:06 |
|
|
I've worked with the Flashsystem 900, some of my customers are getting 900s and v9000s. The 840 is a good bit nicer than the 820 all around, modular, nicer GUI, easier upgrades. The 900 is not drastically different than the 840, primarily it has more capacity.
|
# ? Jun 1, 2015 20:09 |
|
paperchaseguy posted:I've worked with the Flashsystem 900, some of my customers are getting 900s and v9000s. Can you virtualize storage on a v9000 just like an SVC/v7000? It looks like they literally stuffed an SVC pair in there. If yes this may not be a bad option.....
|
# ? Jun 1, 2015 22:17 |
|
Kaddish posted:Can you virtualize storage on a v9000 just like an SVC/v7000? It looks like they literally stuffed an SVC pair in there. If yes this may not be a bad option..... Yes, the v9000 is basically an SVC DH8 node pair connected to a Flashsystem 900. It's slightly more complex than that but that's the idea. You could put a v7000 in front of a 900, or flip it around and put a v9000 in front of a v5000 if you needed more processing power (since the DH8 nodes are more powerful than the v7000 gen2 controllers). http://www.redbooks.ibm.com/technotes/tips1281.pdf
|
# ? Jun 2, 2015 16:18 |
|
So a drive failed but the vendor replaced the wrong one... RAID6 just saved 4PB of data, thanks RAID6!
|
# ? Jun 3, 2015 23:31 |
|
NetApp?
|
# ? Jun 3, 2015 23:35 |
|
the hardware is, but the vendor is everyone's favorite three-letter acronym
|
# ? Jun 3, 2015 23:51 |
|
I feel your pain. Getting them to do something right feels like wrangling cats sometimes.
|
# ? Jun 4, 2015 02:58 |
|
The_Groove posted:the hardware is, but the vendor is everyone's favorite three-letter acronym Hahaha. Not shocking at all. A friend of mine had IBM tell them that they couldn't move their rebadged NetApp SAN from one datacenter to another on their own, otherwise they wouldn't support it. IBM came in and installed the SAN backwards in the rack.
|
# ? Jun 4, 2015 03:27 |
|
IBM regular support sucks but Premium/Account Advocate is great.
|
# ? Jun 4, 2015 15:25 |
|
Kaddish posted:IBM regular support sucks but Premium/Account Advocate is great. Well, the problem is that IBM stopped selling the rebranded NetApp stuff a while ago so they don't really give a poo poo about supporting it beyond token efforts.
|
# ? Jun 4, 2015 17:58 |
|
Last I knew they still had some good NSeries/Data Ontap people in Raleigh, NC but they could be laid off now for all I know.
|
# ? Jun 4, 2015 18:10 |
|
I honestly miss working with IBM gear and their System x support in Australia was amazing. Being able to call a single number and have an actual technician answer instead of a log-and-flog hell-desk grunt was awesome. In my current position I'm only dealing with HP who have horrible support and Cisco which isn't really an issue as we engage them via our VAR. If Lenovo didn't completely absorb IBM's support infrastructure for System x then I imagine that it's become a nightmare. Was any storage-systems stuff included in the sale of System x/BladeCentre to Lenovo? Regardless IBM only seems to care about SVC and SONAS/enterprise storage kit these days.
|
# ? Jun 4, 2015 18:24 |
|
We're storing a few hundred gigs of files on a server at corporate HQ, with about 500 more at a remote office. Before I got this job they'd been sold on moving to cloud based storage, like OneDrive but I'm concerned that even with a good internet connection there's going to be a noticeable delay working with their typically large files. Is there some sort of hybrid where the cloud service can replicate to local file servers to keep the speed up? Ideally HQ and the remote office will have centralized storage instead of their currently split locally stored files.
|
# ? Jun 8, 2015 17:23 |
|
Dick Trauma posted:We're storing a few hundred gigs of files on a server at corporate HQ, with about 500 more at a remote office. Before I got this job they'd been sold on moving to cloud based storage, like OneDrive but I'm concerned that even with a good internet connection there's going to be a noticeable delay working with their typically large files. Is there some sort of hybrid where the cloud service can replicate to local file servers to keep the speed up? Ideally HQ and the remote office will have centralized storage instead of their currently split locally stored files. We use Riverbeds for acceleration for things like this. I believe that BranchCache in Server 2012 will also do this.
|
# ? Jun 8, 2015 17:34 |
|
Dick Trauma posted:We're storing a few hundred gigs of files on a server at corporate HQ, with about 500 more at a remote office. Before I got this job they'd been sold on moving to cloud based storage, like OneDrive but I'm concerned that even with a good internet connection there's going to be a noticeable delay working with their typically large files. Is there some sort of hybrid where the cloud service can replicate to local file servers to keep the speed up? Ideally HQ and the remote office will have centralized storage instead of their currently split locally stored files. There's a number of vendors that have basically on site appliances (either full replicas or just caching layers) with the "master" copy being in the cloud. Does your company have a particular product already selected? If not, probably the beginning of a bigger discussion.
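The "local copy for speed, master copy in the cloud" idea those appliances implement boils down to a cache in front of an object store. A toy sketch only, with a plain dict standing in for the cloud store and every name made up for illustration:

```python
# Toy model of a cloud-caching gateway appliance: reads are served
# from a local cache when possible; writes land locally and are
# written through to the authoritative "cloud" copy.

class CachingGateway:
    def __init__(self, cloud_store):
        self.cloud = cloud_store   # authoritative master copy (slow WAN)
        self.cache = {}            # local replica (fast LAN)

    def read(self, path):
        if path not in self.cache:              # miss: one slow fetch
            self.cache[path] = self.cloud[path]
        return self.cache[path]                 # hit: local speed

    def write(self, path, data):
        self.cache[path] = data                 # fast local write
        self.cloud[path] = data                 # write-through to master


cloud = {"plans.dwg": b"v1"}
gw = CachingGateway(cloud)
gw.write("plans.dwg", b"v2")
print(gw.read("plans.dwg"))   # served from the local cache
print(cloud["plans.dwg"])     # master copy was updated too
```

Real products differ mainly in when the write-through happens (synchronous vs. background sync) and how they evict the local cache when it fills.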
|
# ? Jun 8, 2015 17:51 |
|
Dick Trauma posted:We're storing a few hundred gigs of files on a server at corporate HQ, with about 500 more at a remote office. Before I got this job they'd been sold on moving to cloud based storage, like OneDrive but I'm concerned that even with a good internet connection there's going to be a noticeable delay working with their typically large files. Is there some sort of hybrid where the cloud service can replicate to local file servers to keep the speed up? Ideally HQ and the remote office will have centralized storage instead of their currently split locally stored files. Amazon Storage Gateway might be a good fit for you.
|
# ? Jun 8, 2015 17:57 |
|
The only directive leadership had was to:

1. Centralize storage: The remote office added a local file server because access to HQ's was too slow. Since HQ and remote need to access each other's files, centralizing storage streamlines things.

2. Reduce importance of HQ server room: We're in a disaster-prone location (Los Angeles) and they want to get locally hosted services moved to the cloud, so for us that means AD, file storage and email.

One consultant created a proposal that was MS for everything, with Azure AD, OneDrive and Office 365/Outlook. My concern has been speed of file access as well as integration issues if we part out the systems to different vendors. We haven't committed to anyone yet so our options are still open, but it's time for me to narrow down the field and get costs for a budget, and I think speed is going to be an issue if we don't make that part of the project.
|
# ? Jun 8, 2015 18:27 |
|
Egnyte has quite a few customers but it gets really expensive once you put enough features on to make it workable in an AD environment.
|
# ? Jun 8, 2015 21:23 |
Is this thread a good place to ask questions about OpenFiler and whatever the gently caress I need to do to make it work, or better, replace it with something more intuitive for me, a non-computer-tech-type end user? I have a 10 TB RAID 5 file server that one of our IT fellows set up to back up my lab's data. This fellow was then dismissed in one of those overnight purges, so the management of this arcane system now falls to me. If I am reading the system right, we have 7 TB of backup data on disk. It's all working, so at a minimum I'd just need to learn how to set up individual SFTP accounts for sharing out data, but really, this system should probably be replaced. Unfortunately I am guessing that can't be done without wiping the data? (Before you ask: yes, I already have a call in to central IT and will probably have to migrate these data elsewhere.)
|
|
# ? Jun 11, 2015 18:43 |
|
Bilirubin posted:10 TB RAID 5 file server Run. Run far. That's terrifying unless it's all really tiny disks.
|
# ? Jun 11, 2015 20:10 |
Maybe it's RAID 4; I honestly don't remember. 3 3TB HDs and one 500 gig for the OS is what the original spec sheet he sent me said.
|
|
# ? Jun 11, 2015 21:23 |
|
If it's RAID 4 it would be, like, the only RAID 4 setup to actually exist in the real world. That's a rarely used (or even supported) level. Also, 3 3TB disks in RAID 5 would be 6TB usable, not 10. So they may have added more capacity since that spec sheet was sent. The point is that if you lose 2 disks out of a RAID 5 array, the whole thing is dead. The larger the array, the longer it takes to rebuild when you replace a dead disk. 10TB is pretty large. So there's a great chance a second disk will die during the 3 days it takes to rebuild, and then you are hosed. Rebuilds by definition have to access every single bit of data on every disk, so they are super stressful, and likely to trigger that second failure. Getting "central IT" to help you move this somewhere better ASAP is definitely your best option.
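The arithmetic above can be sketched in a few lines. A rough back-of-the-envelope only: the unrecoverable-read-error (URE) rate of 1e-15 per bit is an illustrative assumption, not a measured figure; check the drive datasheet for the real number.

```python
# Rough RAID 5 math: usable capacity, and the odds of hitting an
# unrecoverable read error (URE) while reading every surviving disk
# end-to-end during a rebuild. The URE rate here is illustrative.

def raid5_usable_tb(disks, size_tb):
    """RAID 5 spends one disk's worth of space on parity."""
    return (disks - 1) * size_tb

def rebuild_ure_probability(disks, size_tb, ure_per_bit=1e-15):
    """P(at least one URE) over a full rebuild read of n-1 disks."""
    bits_read = (disks - 1) * size_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ure_per_bit) ** bits_read

print(raid5_usable_tb(3, 3))                # 3x 3TB in RAID 5 -> 6
print(rebuild_ure_probability(4, 3))        # a few percent, per rebuild
```

The exact probability depends heavily on the assumed URE rate, but the shape of the problem is clear: the bigger the array, the more bits a rebuild must read flawlessly, and the worse the odds get.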
|
# ? Jun 11, 2015 21:30 |
|
Docjowles posted:If it's RAID 4 it would be, like, the only RAID 4 setup to actually exist in the real world. That's a rarely used (or even supported) level. Also, 3 3TB disks in RAID 5 would be 6TB usable, not 10. So they may have added more capacity since that spec sheet was sent. RAID-4 was actually really common on NetApp filers before RAID-DP came into wide use.
|
# ? Jun 12, 2015 03:01 |
|
Vulture Culture posted:RAID-4 was actually really common on NetApp filers before RAID-DP came into wide use. You can even still build raid groups with it, and for certain use cases it still makes sense.
|
# ? Jun 12, 2015 03:46 |
Docjowles posted:If it's RAID 4 it would be, like, the only RAID 4 setup to actually exist in the real world. That's a rarely used (or even supported) level. Also, 3 3TB disks in RAID 5 would be 6TB usable, not 10. So they may have added more capacity since that spec sheet was sent. Thanks. It's been a few years so I have forgotten the details of the build, but what you describe for RAID 5 is what we were going for initially, so we must have upped the size of the drives from the quote I found. The problem with the central data centre is that its cost is nearly the same as what we spent on the server ourselves. Every two months. It's well outside my grant's budget. Meanwhile, after poking around in the OpenFiler admin panel and doing what little reading I could find on the web, I have to wonder wtf any of us were thinking letting this guy set our server up this way.
|
|
# ? Jun 13, 2015 20:49 |
|
Whatever you do, get away from RAID 5, or make sure you have good backups and can be down for as long as it takes to restore.
|
# ? Jun 13, 2015 21:08 |
|
Correct me if I'm wrong, but any filer you buy today is going to be cDOT, right? There's not a 7-mode option anymore?
|
# ? Jun 18, 2015 01:21 |
|
You can still choose 7-mode if you like; just bear in mind there is no 8.3+ 7-mode. 8.2.x is the last 7-mode release, and it'll continue to get bug fixes, etc., for years. You can also change your mind after it's delivered, really; just get with your licensing reps and have the licenses switched. At some point there will probably be a platform that isn't supported for 7-mode, but I'm just guessing. The default is cDOT now though, if that was your question.
|
# ? Jun 18, 2015 01:27 |
|
Thanks Ants posted:Egnyte has quite a few customers but it gets really expensive once you put enough features on to make it workable in an AD environment. ...Anyone using FreeNAS for production file servers?
|
# ? Jun 21, 2015 17:43 |
|
Make sure you clearly (in writing) explain to the Egnyte reps what you want to do before you do it. I work for a company that is partnered with them, and honestly the product is a moving target. The claims made on their website vs. the reality of the situation are worlds apart. I'm not sure how much stuff is covered by NDA, so I'm treading carefully on this one. Edit: What I can say, since it's publicly accessible, is: read these pages: http://egnyte.com/storage-and-sync/other-storage-systems.html and http://egnyte.com/file-access/local-storage-access.html Build a picture in your mind of how you think the system behaves. Now read the actual documentation for the product, in particular the "Restrictions" section here: https://helpdesk.egnyte.com/hc/en-us/articles/201639284-Storage-Sync-for-VMWare-Installation Tell me that is in any way apparent from reading the sales blurb. Thanks Ants fucked around with this message at 18:25 on Jun 21, 2015 |
# ? Jun 21, 2015 18:16 |
|
I'm going to have to read into that one a little more. If files copied to the share via robocopy or something along the lines of rsync won't be synchronized to the ~*~ cloud ~*~ then that definitely won't work for me. On another note... has anyone paired servers with direct-attached storage and created an extremely cheap "SAN" using FreeNAS? To this point I've only used FreeNAS with just the server's built-in capacity. I was thinking of using a PowerEdge 2900 paired with an MD1000 or MD3000 to really give me room to add capacity and redundancy. Anyone know if DAS enclosures only work with their own manufacturer's servers? Could I use an IBM EXP3000 with a Dell server, or an MD1000 with an IBM server? I'm only using the storage provided by these servers for archive file servers that are being backed up as well, and for a test lab environment.
|
# ? Jul 1, 2015 04:02 |
|
An MD1000 will work on a machine that isn't a Dell server. The far more likely problem you will run into is controller compatibility. Dell stuff generally plays nice, though. That reminds me that I have a shitload of IBM DAS boxes to test out on some old R710s too! 144 * 139gig 15k disks isn't exactly the latest and greatest, but it's not like I care about the power or cooling bill. Gwaihir fucked around with this message at 05:28 on Jul 1, 2015 |
# ? Jul 1, 2015 05:26 |
|
goobernoodles posted:Egnyte is one of my long term potential projects/solutions that I haven't gotten to yet. I'm considering buying a cheap NAS or other form of storage to run the proprietary Egnyte file server VM off of, the idea being to replicate from our existing file server to the Egnyte one. Point would be to not have our file server entrenched in a subscription service and to be able to limit licensing costs to only the people who need to access files remotely. Not sure how I'll replicate between the two without problems though. Robocopy? If you are using Server 2012, BranchCache is a possibility: https://technet.microsoft.com/en-us/library/dd425028.aspx
|
# ? Jul 1, 2015 15:16 |
|
Rhymenoserous posted:If you are using Server 2012 Branchcache is a possibility: https://technet.microsoft.com/en-us/library/dd425028.aspx
|
# ? Jul 1, 2015 15:50 |
|
Gwaihir posted:An MD1000 will work on a machine that isn't a Dell server. The far more likely problem you will run into is controller compatibility. Dell stuff generally plays nice, though. Filing this one under "Huh, I didn't really expect that to work at all." These SAS enclosures (IBM 5886es that were providing DAS for our AS/400) actually work just fine hanging off an LSI SAS3008-based controller card with Dell's I/T firmware on it in one of my old R710s. I can't even use the original IBM controllers, since they're all PCI-X instead of PCIe. Which is a shame, since they were quad-port models with 2 gigs of cache and the nice battery setup where you could swap it from the outside without even popping the cover on the machine. Now what the gently caress can I use these things for? Home-grown OpenFiler or FreeNAS install as a disk-based backup pool?
|
# ? Jul 1, 2015 16:04 |
|
Gwaihir posted:Filing this one under "Huh, I didn't really expect that to work at all." The power bill alone would be astronomical I think. Usually it's cheaper to get new drives.
|
# ? Jul 1, 2015 17:14 |
|
This is all at work, don't give a poo poo about the power or cooling costs luckily! That and the space/noise is what keeps me from nabbing these for myself though.
|
# ? Jul 1, 2015 17:25 |
|
Gwaihir posted:That reminds me that I have a shitload of IBM DAS boxes to test out on some old R710s too! 144 * 139gig 15k disks isn't exactly the latest and greatest, but it's not like I care about the power or cooling bill
|
# ? Jul 1, 2015 20:01 |
|
|
goobernoodles posted:I'm going to have to read into that one a little more. If files copied to the share via robocopy or something along the lines of rsync won't be synchronized to the ~*~ cloud ~*~ then that definitely won't work for me. It means that you have to host your file shares from their poorly documented virtual appliance - you can't point it at a DFS share and say "ok these shares need to be synced to ~*the cloud*~" which is a total deal breaker for any organisation that values having file shares running from Windows Server / NetApp shares / whatever.
|
# ? Jul 1, 2015 21:49 |