|
Can you configure it with SSD and NLSAS? The 6 SSDs should give you more than enough IOPS to work with. Unless you really need the extra IOPS from 15k SAS, that seems like a waste to me. Pricing doesn't seem too outrageous to me.
|
# ? Jan 27, 2015 15:54 |
|
The 15k is going to hold the VHD for the file server. I need a fair amount of storage space, but I need more IO than what the host chassis can support with its current HDDs. I could fill it up with NLSAS and get the IO increase I want, but it's worth a couple extra bucks to have some room to grow. I saw Wicaeed paying about the same for a network appliance chock full of drives, and I was afraid I was getting hosed. It could be the SSDs cost even more than I think, or the difference between direct attached and a basic LAN storage appliance isn't as far off as I think. Thanks!
|
# ? Jan 27, 2015 16:22 |
|
SpaceRangerJoe posted:The 15k is going to hold the VHD for the file server. I need a fair amount of storage space, but I need more IO than what the host chassis can support with its current HDDs. I could fill it up with NLSAS and get the IO increase I want, but it's worth a couple extra bucks to have some room to grow. Lately Dell has been pricing EqualLogic units less than their PowerVault line for us. Not quite sure why.
|
# ? Jan 27, 2015 18:01 |
|
I talked about it earlier but basically their PowerVault line is OEMed by NetApp. Dell would rather sell their own stuff.
|
# ? Jan 27, 2015 18:24 |
|
Honestly, I wouldn't mess with the 15k drives. With the SSD cache you aren't going to see a huge speed improvement with 15k over 10k.
|
# ? Jan 28, 2015 00:21 |
|
When I specced up my last lot of Dell kit, the price difference to go to SSD was about $2000 per disk. Throwing more spindles at the problem is cheaper with Dell.
|
# ? Jan 28, 2015 01:52 |
|
Anyone have some recommendations for networked backup storage? Does not have to be really fast, but needs to have 16TB+ and do CIFS, NFS and/or SFTP (NVSD, DDB or RDA are also an option) for use with vRanger. We're entertaining reusing some old servers and adding new controllers and disks, but I'd like to explore new options as well.
|
# ? Jan 28, 2015 17:24 |
|
Thanks for the advice and info. I don't believe the 3420 does SSD caching, but I'm checking the documentation. I think we are in the $1,500-1,800 range for each SSD. Part of the plan for the SSDs is to make some key people who like to complain about performance happy. A tray of 15k drives would easily handle current IO needs and estimated growth for the next year, but we don't need anywhere near that amount of storage space. A couple of SSDs will give us way more throughput, and some extra drive capacity if I need to add more spindles/SSDs later. The 15k drives are so I can migrate one of our highest-IO systems off of some 10k drives. I hadn't worried about the pricing until I saw some stuff here, so I wanted to make sure it was reasonable. I have a feeling we are going to be forklifting a whole new environment in a year or so, and this should meet our short-to-medium-term needs. Jeez, that was a lot of words. Sorry. Edit: This model does do some SSD caching; Dell calls it High Performance Tier. It looks like I'm nowhere near the IO or throughput they recommend for using it though. I'm also below their recommended number of drives. SpaceRangerJoe fucked around with this message at 18:07 on Jan 28, 2015 |
# ? Jan 28, 2015 17:58 |
|
bigmandan posted:Anyone have some recommendations for networked backup storage? Does not have to be really fast, but needs to have 16TB+ and do CIFS, NFS and/or SFTP (NVSD, DDB or RDA are also an option) for use with vRanger. We're entertaining reusing some old servers and adding new controllers and disks, but I'd like to explore new options as well. Can you throw a VM in front of it? I thought vRanger ran off Windows Server? For my on-site backups, I went with some Dell MD3200i filled with 12x4TB disks. After setting up a Dynamic Disk Pool with all of them, I have just over 28TB usable. Others have mentioned getting EqualLogic for cheaper than the PowerVaults lately. Here is a paper about Dynamic Disk Pools. http://www.dell.com/learn/us/en/04/shared-content~data-sheets~en/documents~dynamic_disk_pooling_technical_report.pdf
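For what it's worth, the 28TB figure roughly checks out. The sketch below is back-of-the-envelope math, not Dell's actual sizing formula: it assumes DDP lays data out as RAID-6-style 8-data+2-parity chunks and reserves spare capacity worth about two drives in a 12-drive pool, with the TB-vs-TiB gap covering the rest.

```python
# Rough sanity check of the "just over 28TB usable" figure for a 12x4TB
# Dynamic Disk Pool. Assumptions (mine, not from the post or Dell's docs):
# DDP writes RAID-6-style 8 data + 2 parity chunks, and reserves spare
# "preservation" capacity worth about 2 drives in a 12-drive pool.
# Vendors count 1 TB = 10**12 bytes; the OS reports TiB (2**40 bytes).

DRIVES = 12
DRIVE_TB = 4              # marketing terabytes per drive
SPARE_DRIVES = 2          # assumed preservation capacity
DATA_FRACTION = 8 / 10    # 8 data chunks out of every 10 written

drive_tib = DRIVE_TB * 10**12 / 2**40   # ~3.64 TiB per "4 TB" drive
usable_tib = (DRIVES - SPARE_DRIVES) * drive_tib * DATA_FRACTION
print(f"{usable_tib:.1f} TiB usable")   # ~29 TiB, close to the reported 28TB
```

The exact preservation capacity varies with pool size, so treat the spare-drive count as a guess.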
|
# ? Jan 28, 2015 19:05 |
|
Moey posted:Can you throw a VM in front of it? I thought vRanger ran off Windows Server? Thanks for the link. I have vRanger running in a VM already. Currently its backups are being stored on our storage array (by way of a Linux VM with NFS). Not ideal obviously, but hopefully that'll be fixed soon with whatever we decide on.
|
# ? Jan 28, 2015 20:30 |
|
bigmandan posted:Anyone have some recommendations for networked backup storage? Does not have to be really fast, but needs to have 16TB+ and do CIFS, NFS and/or SFTP (NVSD, DDB or RDA are also an option) for use with vRanger. We're entertaining reusing some old servers and adding new controllers and disks, but I'd like to explore new options as well. Dell DR4100 / something Data Domain? Not sure what your budget is.
|
# ? Jan 28, 2015 20:47 |
|
I found that the purpose built backup appliances all want you to turn off any compression your backup software is using (so they can do their own compression/dedup). Since I had been happy with PHD Virtual/Unitrends, I ended up going with a big dumb array just for block storage, and let my backup software handle the rest.
|
# ? Jan 28, 2015 21:24 |
|
Moey posted:I found that the purpose built backup appliances all want you to turn off any compression your backup software is using (so they can do their own compression/dedup). Since I had been happy with PHD Virtual/Unitrends, I ended up going with a big dumb array just for block storage, and let my backup software handle the rest. I think we're leaning towards dumb arrays. Just need a huge chunk of storage to throw backups on. The backup software handles compression and such, and we're not doing that much data at the moment. As far as a budget, I wish I knew... so far it's always been find several options and choose the best one that fits our needs. Pricing is usually a secondary concern...
|
# ? Jan 28, 2015 22:25 |
|
If you want to roll your own with loads of disk then a Dell R730xd with Windows Storage Server isn't a terrible option.
|
# ? Jan 28, 2015 22:34 |
|
Thanks Ants posted:If you want to roll your own with loads of disk then a Dell R730xd with Windows Storage Server isn't a terrible option. Would probably use Debian or some other distro. We're mostly a *nix shop and the owner wants to keep it that way as much as possible. Hated the fact that we needed Windows servers for vRanger and Dell Enterprise Manager (for our Compellent arrays).
|
# ? Jan 28, 2015 22:40 |
|
bigmandan posted:Would probably use Debian or some other distro. We're mostly a *nix shop and the owner wants to keep it that way as much as possible. Hated the fact that we needed Windows servers for vRanger and Dell Enterprise Manager (for our Compellent arrays). If you are going to roll your own, I'd look into openindiana.
|
# ? Jan 29, 2015 00:58 |
|
Just noticed this: Btrfs is starting to appear in NAS software, http://rockstor.com
|
# ? Jan 29, 2015 02:34 |
|
MrMoo posted:Just noticed this, BTRFS starting to appear in NAS software, http://rockstor.com That looks pretty interesting. I'll have to give it a try on some spare hardware I have lying around.
|
# ? Jan 29, 2015 03:11 |
|
adorai posted:If you are going to roll your own, I'd look into openindiana. Stay the gently caress away from OpenIndiana; it is dead and it sucks. If you want an OpenSolaris derivative, use one of the Illumos distributions. Alternatively, the FreeBSD/ZFS port is pretty mature, and the ZFS on Linux port isn't bad for experimental/development work if you're comfortable with either one of those.
|
# ? Jan 30, 2015 03:51 |
|
PCjr sidecar posted:Stay the gently caress away from OpenIndiana; it is dead and it sucks. OpenIndiana is an Illumos distribution, are you thinking of something else?
|
# ? Jan 30, 2015 04:40 |
|
thebigcow posted:OpenIndiana is an Illumos distribution, are you thinking of something else? When was the last OpenIndiana release? Almost two years ago. Tumbleweeds on the dev-list, etc.
|
# ? Jan 30, 2015 04:50 |
|
PCjr sidecar posted:When was the last OpenIndiana release? Almost two years ago. Tumbleweeds on the dev-list, etc. There is development on the hipster branch. https://github.com/OpenIndiana/oi-userland/graphs/contributors
|
# ? Jan 30, 2015 05:31 |
|
Just found out our only area IBM CE was let go as part of the big "restructuring". IBM is hosed.
|
# ? Jan 30, 2015 19:24 |
|
Speaking of that, anyone know how V7000 support is going to work with the Lenovo takeover of the IBM x86 stuff? All I could find is that the V7000 and other SVC lines are going to be "licensed" by Lenovo since they're so closely tied to the xSeries servers, however is IBM going to be providing backend support or would we get Lenovo support/CEs?
|
# ? Jan 30, 2015 20:57 |
|
IBM is keeping Storwize, everything, as far as I know. Unless you're referring to the x86 parts, since they're basically xSeries, but IBM will still be supporting those.
|
# ? Jan 30, 2015 21:11 |
|
On the other hand, most of the people in the engineered systems division have seen the writing on the wall and peaced the gently caress out, so good luck actually getting support from whoever is left. The group I used to manage has had a Sev1 ticket open for four months with twice-weekly conference calls going all the way to Janis Landry-Lane and basically every call is IBM going "yeah, we still don't really know what's going on."
|
# ? Jan 30, 2015 21:26 |
|
We will begin looking to replace our DS8100 soon-ish. We (probably) won't be going with IBM. What's infuriating is that the worthless Client Executive in this area is still there. These guys make like $120k on average, and in this area that's a lot of loving money. I know for a fact the CE guys get paid dick for the crazy hours they have to pull. The Client Executive provides the customer with no value and is mainly just a thorn in our side. Seriously, this guy's main function, as far as I can see, is sitting in the middle and taking a piece of the customer pie. Kaddish fucked around with this message at 21:32 on Jan 30, 2015 |
# ? Jan 30, 2015 21:27 |
|
adorai posted:There is development on the hipster branch. https://github.com/OpenIndiana/oi-userland/graphs/contributors Unless I'm misreading this, almost all of the updates in the last 6 months are from one sysadmin in Russia.
|
# ? Jan 30, 2015 21:48 |
|
Misogynist posted:On the other hand, most of the people in the engineered systems division have seen the writing on the wall and peaced the gently caress out, so good luck actually getting support from whoever is left. The group I used to manage has had a Sev1 ticket open for four months with twice-weekly conference calls going all the way to Janis Landry-Lane and basically every call is IBM going "yeah, we still don't really know what's going on." IBM has been slowly circling the drain for the last 10 years, I can't for the life of me understand why anyone would buy their loving products anymore.
|
# ? Jan 30, 2015 21:56 |
|
Rhymenoserous posted:IBM has been slowly circling the drain for the last 10 years, I can't for the life of me understand why anyone would buy their loving products anymore.
|
# ? Jan 30, 2015 22:33 |
|
Anyone familiar with emc data domains as backup solutions?
|
# ? Jan 31, 2015 00:30 |
|
Misogynist posted:On the other hand, most of the people in the engineered systems division have seen the writing on the wall and peaced the gently caress out, so good luck actually getting support from whoever is left. The group I used to manage has had a Sev1 ticket open for four months with twice-weekly conference calls going all the way to Janis Landry-Lane and basically every call is IBM going "yeah, we still don't really know what's going on." I had a similar experience back in early 2012 when I lodged a fault for a V7000 cluster which experienced a sudden and total failure (Relevant Post). Took just over a month of back-and-forth and being given the run-around before we got an answer from product engineering. From the sound of it things haven't improved.
|
# ? Jan 31, 2015 04:36 |
|
Rhymenoserous posted:IBM has been slowly circling the drain for the last 10 years, I can't for the life of me understand why anyone would buy their loving products anymore. One funny thing about the Lenovo x86 deal: IBM sells (sold) supercomputers to government organizations, and most of these are not allowed to communicate with Lenovo or Lenovo employees, Lenovo being a Chinese company. They sold a system to NOAA a few years back, and had a contract to update the system this year. Since NOAA can't deal with Lenovo, IBM is using Cray (their largest competitor in the supercomputing market) as a subcontractor for this new NOAA system. For our support, we've been told that if/when IBM says "OK, I need to send this info to a Lenovo employee to figure out what's going on" that we have to say "No, you can't". It's at the point now where we have an IBM employee recite logs and stuff (minus "sensitive" info like, IP addresses???) to a Lenovo employee for support calls. The majority of the former IBM team that knew anything at all about our system were moved to Lenovo and we can no longer directly communicate with them.
|
# ? Feb 2, 2015 19:45 |
|
Captain Foo posted:Anyone familiar with emc data domains as backup solutions? We used them for a while for D2D2T (disk-to-disk-to-tape). No real issues; compression was good. Expensive. We didn't keep using them though. Went back to LTO5 tape for a while, and now we use EVault appliances and cloud backup storage.
|
# ? Feb 2, 2015 20:01 |
|
skipdogg posted:We used them for a while for D2D 2 Tape. No real issues, compression was good. Expensive. We didn't keep using them though. Went back to LTO5 tape for a while, and now we use Evault appliances and cloud backup storage. Thanks for the info
|
# ? Feb 5, 2015 00:18 |
|
I'm upgrading our backup system and I'm debating between LTO6 tapes and RDX hard drives. I have been using LTO tapes for years and it's worked OK, but there is a fair amount of dust in our office and the drives tend to tank after 3-4 years. Does anyone have experience with the RDX system in general? I'm looking at this right now: http://www.tandbergdata.com/us/index.cfm/products/removable-disk/rdx-quikstation/ The features look interesting, the fact that it is forward compatible with future larger drives is a big plus, and I also like that there is no tape mechanism that will get dirty. Is RDX any good? Media is a little expensive, but the enclosure is substantially cheaper than the LTO autoloader I was looking at.
|
# ? Feb 13, 2015 00:09 |
|
I trust tape a lot more in the ability to put it on a shelf and have it work a couple of years later, and also if it's being shipped off-site then I trust tape to travel better.
|
# ? Feb 13, 2015 00:48 |
|
Get really nice cases for your tapes (like camera cases) and you'll be fine.
|
# ? Feb 13, 2015 02:08 |
|
We have a 5-node 72NL (320 TB usable) cluster about to hit EOL this fall, and I am working on options to replace it with a much higher capacity system. We really have enjoyed the simplicity of Isilon and it has worked great, but we are now being tasked with storing bigger datasets, most around 20TB to 50TB; one edge case is 400TB. Unfortunately, it is becoming more and more apparent that I am not going to be able to afford a 1.5 PB+ Isilon cluster, and I was interested in hearing if you all had some suggestions. This data is usually large files, not commonly accessed; we used the near-line archival storage from Isilon and speed was *never* an issue. So no IO requirements, but the data does need to be accessible, so Amazon Glacier really isn't useful here. We use CIFS and NFS on the Isilon. I have a good group of sysadmins, but none is a dedicated storage admin, so I am not interested in rolling my own cloud with OpenStack or Ceph. We have some Dell 3200i with 3 1200i attached for 96 TB RAW; it has worked okay, but I wasn't a huge fan of the LUN management... but the more I read this thread, the more I am thinking this may be one of my best options for capacity + reliability + price. We also just bought vSAN for our VM storage, ... we could scale that out, hadn't really thought of it for this particular use case. Anyways, would love to hear any suggestions if you have them. TLDR: Need easy to manage, reliable PB+ storage; oh yeah, inexpensive would be great.
|
# ? Feb 13, 2015 08:54 |
|
For not-gigantic data, Amazon S3 is actually surprisingly reasonable and is my current choice for TB per day. Glacier turns out not to be cost-effective with such small datasets.
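A back-of-the-envelope sketch of that tradeoff. The per-GB prices below are assumed circa-2015 list prices used purely for illustration, not figures from the post; check current AWS pricing, and note that request and retrieval fees (which made Glacier painful for frequent restores) are deliberately ignored here.

```python
# S3 vs. Glacier storage-only monthly cost, with assumed ~2015 prices.
S3_PER_GB_MONTH = 0.03       # assumed $/GB-month
GLACIER_PER_GB_MONTH = 0.01  # assumed $/GB-month

def monthly_storage_cost(tb, price_per_gb_month):
    """Storage-only monthly cost in dollars for `tb` terabytes."""
    return tb * 1000 * price_per_gb_month

s3_cost = monthly_storage_cost(10, S3_PER_GB_MONTH)            # ~$300/month
glacier_cost = monthly_storage_cost(10, GLACIER_PER_GB_MONTH)  # ~$100/month
print(f"10 TB: S3 ${s3_cost:.0f}/mo vs Glacier ${glacier_cost:.0f}/mo")
```

At small scale the absolute savings are modest, and Glacier's retrieval delays and fees can easily erase them, which matches the point above.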
|
# ? Feb 13, 2015 16:39 |