|
Serfer posted:Oh hi everyone with HP SAN equipment, http://seclists.org/bugtraq/2010/Dec/102 Don't really know what to say...
|
# ? Dec 14, 2010 19:04 |
|
I'm about 90% certain a similar backdoor exists on IBM Midrange Storage series SANs with Telnet enabled. At least the password isn't quite so gutfuckingly stupid, though.
|
# ? Dec 14, 2010 19:39 |
|
Nukelear v.2 posted:Just tried on mine and can confirm this. WTF HP. What version of the mgmt software do you have installed? We have 8.1 and I couldn't get the account to work, or it might be because of older hardware.
|
# ? Dec 14, 2010 19:47 |
|
ghostinmyshell posted:What version of the mgmt software do you have installed? We have 8.1 and I couldn't get the account to work, or it might because of older hardware. Using the G3's web interface with controller bundle TS200R021, which appears to be the most recent. Edit: HP's site is terrible. Edit2: This made the /. front page; hopefully this means HP will bother to fix it. Nukelear v.2 fucked around with this message at 00:13 on Dec 15, 2010 |
# ? Dec 14, 2010 20:03 |
|
For everyone else with a Compellent setup, it appears they are merging with Dell:

To our valued customers,

We are entering some exciting times at Compellent. This morning we issued a joint announcement that Dell and Compellent have signed a definitive agreement for Dell to acquire Compellent. We couldn't have made it to this point without the support of customers like you. You recognized our game-changing technology before anyone and now that foresight has been validated by the industry.

The Details of Today's Announcement
• Dell will pay $27.75 per share in cash, for an aggregate purchase price of approximately $960 million in equity value.
• The transaction is subject to approval by Compellent's shareholders, and is expected to close in Q1 of 2011.
• Dell also signed a reseller agreement with Compellent that extends the storage portfolio it can offer its worldwide customer base, effective immediately.

Why Dell + Compellent Makes Sense
• We both have a strong entrepreneurial spirit, a heritage of innovation and a commitment to our customers.
• Dell has an incredible global distribution network, and their reach will help us scale beyond.
• Dell plans to redefine the data center and Compellent's Fluid Data becomes a strategic asset in delivering on that vision.

What This Means to You
• Experience the same world-class customer support from the same account teams and technical specialists.
• Increased recognition for the Compellent name and validation of your decision from your management team.
• Accelerated integration and innovation to support your business from the data center to the desktop.

You've been here for us, and we will continue to be here to meet and exceed your needs. Thanks to your vision, loyalty and support thus far. We look forward to making the storage technology you've chosen even better.

Phil Soran
President/CEO
Compellent Technologies
|
# ? Dec 14, 2010 21:34 |
|
Dell! Where good technology goes to die! I hope they're forking out good amounts of cash to keep the engineers on for a couple more years. I just can't fathom much good coming out of that merger.
|
# ? Dec 14, 2010 21:38 |
|
Speaking of Dell, anybody actually use the MD3200 product? It *looks* decent, and we need a successor to the Sun 2530 series - does it fit the bill?
|
# ? Dec 14, 2010 22:07 |
|
1000101 posted:We generally hear nothing but positive feedback about 3par in the field as well. In fact the only real negative thing I've heard is that its hard to find people that really understand 3par/know it inside and out for services. 1000101 posted:I hope they're forking out good amounts of cash to keep the engineers on for a couple more years. I just can't fathom much good coming out of that merger. I see a lot of software/patents being migrated to the EqualLogic group and existing Compellent customers moved to the next-gen high-end EqualLogic, which will move to Dell-designed/made NetApp-style hardware. Maybe, who knows. EnergizerFellow fucked around with this message at 22:29 on Dec 14, 2010 |
# ? Dec 14, 2010 22:25 |
|
So goons, I managed to get my hands on a Dell/EMC AX150i pretty cheap with 12x750GB drives. Now, looking at the price of 1TB/2TB drives, can I use larger drives in this SAN? And yes, I have been googling this.
|
# ? Dec 16, 2010 08:38 |
|
lilbean posted:Speaking of Dell, anybody actually use the MD3200 product? It *looks* decent, and we need a successor to the Sun 2530 series - does it fit the bill? I have one in production at the moment. For an entry-level SAN, I'm not sure it can be beat. The one big feature I wish it had was replication, but other than that it's been a great unit.
|
# ? Dec 16, 2010 15:08 |
|
We got the "admin / !admin" account as the initial management account for our G3 box. If this was supposed to be a "secret account" they did a pisspoor job of it. We have used it since we got the G3 array.
|
# ? Dec 16, 2010 16:46 |
|
So I've just inherited a load of SCSI drives pulled from what I believe was a Dell PowerVault 220S. These drives are still in their Dell caddies and have 80-pin SCSI connectors. Here's my conundrum: I can't use these drives in any machines I have. I'd like to sell them, but I can't test them. How can I test these cheaply?
|
# ? Dec 16, 2010 16:51 |
|
Shaocaholica posted:So I've just inherited a load of SCSI drives pulled from what I believe was a Dell Powervault 220S. These drives are still in their Dell Caddies and have 80pin scsi connectors. Do you have a machine with a SCSI controller? You can buy adapters to go from 80-pin SCA to 50- or 68-pin connectors with a power jack. One such adapter.
|
# ? Dec 16, 2010 17:06 |
|
BelDin posted:Do you have a machine with a SCSI controller? You can buy adapters to go from 80 pin SCA to 50 or 68 pin connectors with a power jack. That's what I was thinking, but I don't have any PCs right now I'd like to use for this task. Mac Mini - nope. Shuttle - nope. HTPC - guh, I could, but I'd rather not dig it out from its cave. Various laptops - nope. Are there any old workstations I could buy that come with a hot-swap 80-pin backplane type thing? I've found some old Dell workstations with SCSI onboard for around the same price as actually getting a SCSI card, which is better for me since I don't really have a machine to put the card into.
|
# ? Dec 16, 2010 17:13 |
|
Syano posted:I have one in production at the moment. For an entry level SAN Im not sure it can be beat. The one big feature I wish it had was replication but other than that its been a great unit.
|
# ? Dec 16, 2010 17:18 |
|
Skipdogg, thanks again for the info. I have a macro design question. I'm planning to virtualize a large number of our physical machines with ESXi 4.1, migrating away from NetWare in the process and going totally SUSE Linux. I'll be using our SAN (most likely the HP P4300) as the data store for all of the virtual machines. Currently we're using Novell Cluster Services for file system access and GroupWise; I plan to keep that in place. The cluster nodes will be virtual machines, and I'll carve up part of the SAN into LUNs which I can then present to the cluster nodes as shared storage. I'll leave as much space as possible unallocated in the event that a migration to Exchange or something else pops up down the road. From a high level, does anything sound off base?
|
# ? Dec 16, 2010 17:22 |
|
InferiorWang posted:From a high level, does anything sound off base?
|
# ? Dec 16, 2010 17:35 |
|
Misogynist posted:Your organization's dedication to Novell Ha, I was going to say that. It's not dedication; rather, it's what we have, and until they completely implode there's nothing we'd gain from making a lateral move over to equivalent MS products other than migration invoices.
|
# ? Dec 16, 2010 17:43 |
|
dj_pain posted:So goons, I managed to get my hands on a Dell/EMC AX150i pretty cheap with 12x750gb drives. Now looking at the price of 1tb/2tb can I use larger drives in this SAN ? and yes I have been googling this The highest I can find on a spec sheet is 750GB http://japan.emc.com/collateral/hardware/data-sheet/c1111-clariion-ax150-ax150i.pdf I'll have a look next week though.
|
# ? Dec 16, 2010 22:15 |
|
Syano posted:I have one in production at the moment. For an entry level SAN Im not sure it can be beat. The one big feature I wish it had was replication but other than that its been a great unit.

Syano, can you describe a bit of your environment and how you have your MD3220i carved up with disk groups and virtual disks? I've got two Dell R410s with 6 NICs total, and an MD3220i (with an MD1200 attached). I am using Hyper-V Server 2008 R2 with Failover Clustering. I've got 2 NICs teamed at the host level and set up as an external virtual network in Hyper-V. 3 NICs are dedicated to iSCSI storage (direct attached without a switch, since there are only 2 hosts), with 2 ports to controller 0 on the MD3220i and 1 to controller 1, and the opposite for the other host. The last NIC is a direct link between the two hosts for cluster heartbeat and live migration.

I've got MPIO set up and working properly (I think), and my plan was a single disk group of 16 disks, one quorum LUN, and one big data LUN that is turned into a CSV in the cluster. However, I keep getting warnings from the MD3220i that the preferred path is not being used, and my live migration is failing. It seems like I'm doing something wrong, and reading this only reinforces that: http://www.delltechcenter.com/thread/4305668/MD3xxxi+-+disk+groups,+luns+and+VMware+-+tips+to+separate+out+LUN+path?offset=40&maxResults=20 The general thought there is to spread disk groups across controllers, but that doesn't fit my plan of one big disk group and one LUN as a CSV within the cluster. You mentioned earlier you're using your MD3220i with Hyper-V, so any examples would be great.
|
# ? Dec 16, 2010 22:31 |
|
I've actually got the 3200i, the one with the 3.5" disks. Regardless... This particular array is being used to host Hyper-V VMs, though we do not have any shared volumes on it. I set up the thing as one huge RAID 6 array with 2 hot spares. I have 3 hosts currently connected to it: a PowerEdge R410 and 2 PowerEdge 2950s. I am not using any NIC teaming at all. I am actually using 2 Catalyst gigabit switches for the fabric and connecting through them. I am presenting a single volume of storage to each host at the moment, so that means I have one controller running 1 volume and the other running 2. I wonder if your array is freaking out because you do not have all the controller ports connected, or because there is a mismatch in the number of pathways between the host and each controller.
|
# ? Dec 16, 2010 22:51 |
|
Shaocaholica posted:Thats what I was thinking but I don't have any PCs right now I'd like to use for this task. Honestly, there's no point in using these drives. Those drives will be at best U320. A single SSD can saturate that. You can probably use a whole shelf of those and still not get nearly the performance one 2.5" drive can get you.
|
# ? Dec 17, 2010 01:39 |
|
Syano posted:
I've got it fixed, and it was either what you suggested (since I had 3 NICs per host in use) or just a mistake in the MPIO settings. I went back to 4 NICs per host, two per controller, and now it appears to be working correctly. Time to start doing some performance benchmarking.
|
# ? Dec 17, 2010 05:43 |
|
Nomex posted:Honestly, there's no point in using these drives. Those drives will be at best U320. A single SSD can saturate that. You can probably use a whole shelf of those and still not get nearly the performance one 2.5" drive can get you. Yeah I'm going to just test them and sell them. They seem to be fetching $150 a pop on ebay.
|
# ? Dec 18, 2010 19:01 |
|
Help me resolve an argument: when you have two FC ISLs with the same cost, is the path selection by login session or by frame? (In other words, do frames from the same FC login always take the same path?)
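For what it's worth, the behavior I've seen documented on the big switch vendors is per-flow (or per-exchange), not per-frame: the switch hashes identifiers like S_ID/D_ID (and OX_ID, with exchange-based load balancing enabled) onto the equal-cost links, so every frame of a given flow takes the same ISL and arrives in order. A toy Python sketch of the idea — purely illustrative, not any vendor's actual hash function:

```python
# Illustrative only: per-flow selection across equal-cost ISLs. Real switches
# use their own hash over S_ID/D_ID/OX_ID; this XOR is just a stand-in.
def pick_isl(s_id: int, d_id: int, ox_id: int, n_links: int = 2) -> int:
    """Map one flow (source, destination, exchange) onto one of n equal-cost ISLs."""
    return (s_id ^ d_id ^ ox_id) % n_links

# Every frame of the same exchange hashes to the same link, so ordering holds...
flow = (0x010203, 0x040506, 0x1234)
links_used = {pick_isl(*flow) for _ in range(1000)}
print(len(links_used))  # 1

# ...while different exchanges between the same pair can spread across both ISLs.
spread = {pick_isl(0x010203, 0x040506, ox) for ox in range(16)}
print(sorted(spread))  # [0, 1]
```

The upshot: path selection is sticky for a flow, and whether "flow" means the whole login session or each exchange depends on which load-balancing mode the switch is configured for.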
|
# ? Dec 20, 2010 16:29 |
|
Sorry for causing all this back and forth. I'm not a systems guy, and I think that causes me to be a little loose with my wording when talking about these technologies. I know SANs don't have file systems and NAS devices do; I've read multiple, very boring, several-hundred-page books on the differences between the two (although I still wouldn't consider myself knowledgeable on the subject).

When I say "customer provides a stable file system" I guess I'm being a little broad, but I take it to mean the actual file system, whether it be NFS, VxFS, VFS, ext, etc., plus the actual physical disk, wherever it may be, and, if it is not local disk, the transport method as well. I see that this is an incorrect way to refer to all of the moving parts. Again, not my area of expertise, so I apologize.

This has been difficult for us to troubleshoot as a vendor. We do not technically support our software being installed on these systems, as the proprietary third-party database we use does not either. That doesn't stop customers from installing the software wherever they see fit. We are seeing more and more customers move to SAN and NAS without investigating the impact on the applications they run on these technologies. Some of these customers are small mom-and-pop shops that just don't know any better, and, you'd be surprised, some of them are also large corporations with multiple divisions globally using our software on six-figure annual support contracts.

Regardless, I appreciate the insight from everyone. There are clearly multiple issues with different technologies, none of which will be addressed very easily.
|
# ? Dec 20, 2010 17:35 |
|
idolmind86 posted:Regardless, I appreciate the insight from everyone. There are clearly multiple issues with different technologies, none of which will be addresses very easily.
|
# ? Dec 20, 2010 18:08 |
|
I'm going to quote myself in lieu of reposting. what is this posted:I'm genuinely trying to help you here. You don't even need to hire someone as other people have suggested. Here is the fix for your issues: This will solve every single one of your database file locking, caching, and network issues. SANs appear to be local storage. It will fix your problems.
|
# ? Dec 20, 2010 18:11 |
|
I mean, there is another possibility here, I suppose: your "proprietary database" is SQLite and you're trying to do multi-user by sharing the database file over the network. In which case, have fun.
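To spell out the "have fun": even on a local disk, SQLite only allows one writer at a time, and every other connection gets bounced until the lock is released. A minimal standard-library sketch of what two "users" sharing one database file run into:

```python
import os
import sqlite3
import tempfile

# Two "users" opening the same SQLite database file. timeout=0 makes lock
# contention fail immediately instead of waiting; isolation_level=None gives
# us manual transaction control.
path = os.path.join(tempfile.mkdtemp(), "shared.db")
writer_a = sqlite3.connect(path, timeout=0, isolation_level=None)
writer_b = sqlite3.connect(path, timeout=0, isolation_level=None)

writer_a.execute("CREATE TABLE t (x INTEGER)")
writer_a.execute("BEGIN IMMEDIATE")          # A takes the write lock
writer_a.execute("INSERT INTO t VALUES (1)")

try:
    writer_b.execute("BEGIN IMMEDIATE")      # B is shut out until A commits
    locked_out = False
except sqlite3.OperationalError:             # "database is locked"
    locked_out = True
print(locked_out)  # True

writer_a.execute("COMMIT")                   # lock released; now B can write
writer_b.execute("BEGIN IMMEDIATE")
writer_b.execute("INSERT INTO t VALUES (2)")
writer_b.execute("COMMIT")
```

And that's the well-behaved case where the locking actually works. Over NFS, the file locks SQLite depends on are frequently broken or unimplemented, which is exactly why SQLite's own docs warn against putting a shared database file on a network filesystem.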
|
# ? Dec 20, 2010 18:14 |
|
$10 says it's FileMaker.
|
# ? Dec 20, 2010 18:20 |
|
I assumed that when he said his company's product contains a proprietary database, he meant it was their own proprietary database system; otherwise I'd assume he'd be getting support for this from the proprietary RDBMS vendor. I was also assuming they use an off-the-shelf database system with maybe a customization here and there. Writing an RDBMS for commercial use from scratch is basically idiotic unless you're Google or something. So that usually means PostgreSQL if you're smart. Maybe SQLite if you don't need multiple users, or if you're dumb. You can't make a proprietary version of MySQL due to the licensing. H2 can't really be proprietary either, again due to licensing. I guess the only others would be Apache Derby and HSQLDB. And I already posted documentation about Postgres explaining not to use it on NFS. As for SQLite, quote:Situations Where Another RDBMS May Work Better what is this fucked around with this message at 18:39 on Dec 20, 2010 |
# ? Dec 20, 2010 18:33 |
|
Misogynist posted:Especially as long as you keep referring to NFS and VxFS like they're even remotely similar. Ugh, I give up. I never even knew that I implied these two were similar.
|
# ? Dec 20, 2010 22:36 |
|
You keep ignoring the posts telling you to use a SAN. Why is this? Also, which proprietary database are you using? Or which one have you customized? (to put things simply, NFS is a protocol for file-level access/locking/etc over a network, and VxFS is a file system. You can make VxFS available over NFS. You can't format a drive in NFS, but you can format a drive in VxFS)
|
# ? Dec 20, 2010 22:58 |
|
what is this posted:You keep ignoring the posts telling you to use a SAN. Why is this? Respectfully, I think you're oversimplifying the problem. As a vendor, idolmind's company is going to have a tough time trying to tell his clients to throw out their NFS infrastructure because they don't support it. If his company's competitors do support NFS, it might result in a lot of lost sales. As I've stated in a previous post, it might be worth looking at tuning NFS mount options based upon the performance profile of the application. At the moment I can think of at least 2 large corporations that would make it difficult for idolmind's company to make a sale if someone were to state that the application only supported block storage protocols.
|
# ? Dec 21, 2010 00:31 |
|
Cultural Imperial posted:Respectfully, I think you're oversimplifying the problem. As a vendor, idolmind's company is going to have a tough time trying to tell his clients to throw out their NFS infrastructure because they don't support it. If his company's competitors do support NFS, it might result in a lot of lost sales. As I've stated in a previous post, it might be worth looking at tuning NFS mount options based upon the performance profile of the application. At the moment I can think of at least 2 large corporations that would make it difficult for idolmind's company to make a sale if someone were to state that the application only supported block storage protocols.
|
# ? Dec 21, 2010 00:35 |
|
adorai posted:If they will only provision storage as NFS from whatever device they have, the solution is simple: use an opensolaris/openindiana/openfiler/plain old linux VM stored on NFS that presents that storage as iSCSI. Problem solved, cost: $0 and the storage admin doesn't know any better. I can appreciate that someone as astute as yourself can develop a multitude of workarounds to accommodate a requirement for block storage. However, from a manager's point of view, particularly one that is evaluating your solution as part of an RFP, would you consider this an acceptable solution in comparison with other applications that may support NFS? As an account manager working for idolmind's company, would you even entertain it as a proposal?
|
# ? Dec 21, 2010 00:54 |
|
Cultural Imperial posted:I can appreciate that someone as astute as yourself can develop a multitude of workarounds to accomodate a requirement for block storage. However, from a manager's point of view, particularly one that is evaluating your solution as part of an RFP, would you consider this an acceptable solution in comparison with other application that may support NFS? As an account manager working for idolmind's company, would you even entertain it as a proposal?
|
# ? Dec 21, 2010 01:03 |
|
adorai posted:If they will only provision storage as NFS from whatever device they have, the solution is simple: use an opensolaris/openindiana/openfiler/plain old linux VM stored on NFS that presents that storage as iSCSI. Problem solved, cost: $0 and the storage admin doesn't know any better. This seems like such a terrible hack. It doesn't cost $0, and when has playing tricks on storage/network/system admins ever wound up being a net benefit when they inevitably find out?
|
# ? Dec 21, 2010 02:29 |
|
H110Hawk posted:This seems like such a terrible hack. It doesn't cost $0, and when has playing tricks on storage/network/system admins ever wound up being a net benefit when they inevitably find out?
|
# ? Dec 21, 2010 03:02 |
|
|
Cultural Imperial posted:Respectfully, I think you're oversimplifying the problem. As a vendor, idolmind's company is going to have a tough time trying to tell his clients to throw out their NFS infrastructure because they don't support it. If his company's competitors do support NFS, it might result in a lot of lost sales. As I've stated in a previous post, it might be worth looking at tuning NFS mount options based upon the performance profile of the application. At the moment I can think of at least 2 large corporations that would make it difficult for idolmind's company to make a sale if someone were to state that the application only supported block storage protocols.

He can easily require that his database server needs block-level storage. They can either use a suitable virtualized server with virtualized storage mounted via a block-level access protocol, or a physical server with virtualized storage and block-level access, or a physical server with physical direct-attached storage. Any decent enterprise will either be able to hook a dedicated server directly to a LUN on their SAN, or be willing to buy a physical server with a couple drives in RAID 1.

I agree that if every client required block-level access it would be a tremendous headache to have to set up iSCSI initiators on all kinds of machines. But it doesn't; only the RDBMS server needs block-level access. The clients talk over a web browser or SQL or however it's implemented. There are many, many corporate apps that require the purchase of a server or meeting a set of hardware/virtualization requirements. All it requires is a tiny amount of effort by the vendor, and it's something that I have never seen scuttle a sale.

The only situation where it could even matter would be a case where buying a server somehow costs a lot more than their app plus service contract, and if that's the case they should get out of the business, because enterprise support alone should cost a bunch more than a $6,000 Dell server, and they've got to be losing money if their pricing is so out of whack. Even in that case they could buy a rackmounted consumer-grade NAS that provides iSCSI block-level access to LUNs (e.g. a Synology RS810+), and that would be under $2,000. Oh no, consumer-grade gear is too risky? Well then spend the money on professional gear. Places will comply with this very reasonable request. I'm intimately familiar with the purchasing process of enterprise orgs, and they'll bust your balls on a lot of stuff, but this kind of thing is not going to be a sticking point.

I mean, Jesus, just put out a document like this ("Microsoft does not support network-attached storage") and be done with the whole thing.

what is this fucked around with this message at 03:12 on Dec 21, 2010 |
# ? Dec 21, 2010 03:10 |