|
madsushi posted:So you might be able to wiggle a few more percentage points out of them, but it's very unlikely you take them to $450k without some serious leverage. Very helpful. I guess I'll throw out the ol' "I like you and your product, but your competitor (truthfully) is offering me similar at 80%. What can you do to help me keep Finance from railroading me into taking that solution?" and see what happens. $1.3M for this would be highway robbery. Why do they even do that?! Nobody is going to pay that; at that point why not just say it costs $1b, and then you could say, hey, 99% off!
|
# ? Nov 27, 2013 20:42 |
|
Because somewhere there's a person who genuinely believes that they are getting huge discounts.
|
# ? Nov 27, 2013 20:59 |
|
In my experience the 80% margin number is usually on the software side of things. Hardware is probably around 60%, depending. I personally shoot for at least 50% off MSRP, but it doesn't always happen. Commodity servers I can't seem to get below around 40, maybe 45% off MSRP, even if I spend 6 figures. A note about services: I've seen a huge push the last couple of years to increase 'services' revenue even if it means losing revenue on the hardware side of things. I purchased a quarter million worth of desktops from Dell a couple years ago. They took $20k off the price of the hardware if we also purchased a $10,000 Dell KACE box with 100 licenses. They were hoping I would like it and they would make it up by me buying another 350 licenses for the KACE box. Didn't work though; it's still sealed in the box, and I sometimes use it to hold the server room door open. 53% off list is pretty fair for not knowing exactly what you're buying. You can probably squeeze some more out of them if you offer fast payment, or wait until the quarter is about to end. If they're short on revenue targets they'll give poo poo away at the end of the quarter to make their bonus.
|
# ? Nov 27, 2013 21:38 |
|
skipdogg posted:In my experience the 80% margin number is usually on the software side of things. Hardware is probably around 60% depending. I personally shoot for at least 50% off MSRP, but it doesn't always happen. Commodity servers I can't seem to get below around 40, maybe 45% off MSRP if I spend 6 figures. I will agree that 60% is closer for hardware, but not for enterprise storage. NetApp makes almost 60% average margin on the SALE price, let alone MSRP. They're giving him 60% off MSRP without too much negotiation, so clearly there's got to be some margin left, or else there wouldn't be a sale at all.
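(The margin-vs-discount distinction is worth spelling out, since margin is figured on the sale price, not MSRP - which is why a big list discount can still leave the vendor plenty of room. A rough sketch with made-up numbers; none of these are anyone's actual costs:)

```python
def sale_price(msrp, discount_pct):
    """What the customer actually pays after a discount off MSRP."""
    return msrp * (1 - discount_pct / 100)

def margin_pct(sale, cost):
    """Gross margin as a percentage of the SALE price, not MSRP."""
    return (sale - cost) / sale * 100

# Illustrative only: a $1.3M list-price array with an assumed $260k build cost.
msrp, cost = 1_300_000, 260_000

sale = sale_price(msrp, 60)          # 60% off MSRP -> $520,000
remaining = margin_pct(sale, cost)   # still ~50% margin on the discounted price
```

So "60% off list" and "60% margin on the sale" can both be true at once, which is why the quarter-end squeeze still works.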
|
# ? Nov 27, 2013 21:50 |
|
That's good info to have. I'm not familiar with the big boy toys, to be honest. I know I got a decent deal on my VNXe and we've beat HP up really good on commodity servers. I wasn't involved with the purchase of our VNX 5500s, but I heard it was 50%+ off list.
|
# ? Nov 27, 2013 22:04 |
|
skipdogg posted:That's good info to have. I'm not familiar with the big boy toys to be honest. I know I got a decent deal on my VNXe and we've beat HP up really good on commodity servers. I wasn't involved with the purchase of our VNX 5500's though, but I heard it was 50%+ off list. This is one data point I was looking at (corroborated by other sites too):
|
# ? Nov 27, 2013 22:06 |
|
I haven't signed anything, but I can tell you the company I am talking to is on the left side of that graph and doesn't give away beer mugs as a promotion for their Data ONTAP OE. The two other arrays are designed to scale to 20PB in a single namespace. If it is true that the pricing is rock bottom I appreciate that, but I definitely get the feeling that I can put the screws to them for a bit and save $50-100k.
|
# ? Nov 28, 2013 01:46 |
|
KennyG posted:I haven't signed anything but I can tell you the company I am talking to is on the left side of that graph and doesn't give away beer mugs as a promotion for their Data ONTAP OE.
|
# ? Nov 28, 2013 02:51 |
|
I said they DON'T give away beer mugs, so that leaves.... I did speak with NetApp but the Unified solution wasn't as good for our needs.
|
# ? Nov 28, 2013 17:20 |
|
KennyG posted:Everything is direct attach at this moment.
|
# ? Nov 28, 2013 17:54 |
|
KennyG posted:I said they DON'T give away beer mugs, so that leaves.... Yea, even without that caveat EMC is the only vendor that makes sense. Sounds like they're quoting you a VNX and then a couple of Isilon clusters. What is the use case for the Isilon?
|
# ? Nov 28, 2013 19:00 |
|
Evil_bunny I know! Only been here 4 months. Been trying to change it for 3.9. Document management. Without being too specific, we are a legal services company that does e-discovery and doc review. This means companies ship us documents by the TB for legal issues and we "handle" them. Due to the nature of our business and the architecture of our business, cluster one is going to be largely CIFS and a few other ancillary stores. The larger cluster is for archive of the first two (yes, it's VNX) and a Hadoop archive target. Isilon is desirable for us due to its scale-out nature, as we currently have an IT staff of 3 and are growing at a rate of 1TB+/week.
|
# ? Nov 29, 2013 01:13 |
|
KennyG posted:Can we get dirty and talk about pricing? Kenny, I went around to all the vendors in our price range and got competitive quotes. I reminded all of them that we were shopping around and looking at all their competitors. After I had "final" quotes from everyone, I decided on the vendor I wanted to go with, then used the other vendors' prices to get them to drop another 15% on the hardware and throw in free installation and training. I was also able to get Nimble to drop the price further by telling them I hated to have to replace my FC switch, so they cut another $5k to cover the storage switches. If you're looking at Nimble or Pure, they will both let you return your units within 30 days if you're not happy with them.
|
# ? Dec 3, 2013 18:03 |
|
Edit: Wrong thread.
|
# ? Dec 3, 2013 20:31 |
|
Quick question: for real-time file server share change auditing, which *reasonably* priced 3rd party tool (Windows Server 2012 is still utter junk when it comes to reading logs) should I be looking at...? Just two servers, half a dozen shares on each...
|
# ? Dec 10, 2013 02:05 |
|
skipdogg posted:In my experience the 80% margin number is usually on the software side of things. Hardware is probably around 60% depending. I personally shoot for at least 50% off MSRP, but it doesn't always happen. Commodity servers I can't seem to get below around 40, maybe 45% off MSRP if I spend 6 figures. I have an ongoing horror story about KACE, but I promised to give them one more chance to right all the wrongs before I go nuclear online - which I will do, for sure, so at least others won't buy into their BS anymore; we will see, only a few weeks left... szlevi fucked around with this message at 02:35 on Dec 10, 2013 |
# ? Dec 10, 2013 02:09 |
|
I have one more question: is anyone using ION Data Accelerator from Fusion-IO...? It seems to me like a competitive alternative to Violin's boxes (MUCH cheaper, even if you add the price of 2-3 FIO cards per server) - are those bandwidth numbers for real..? The last time I saw such claims - see http://www.fusionio.com/load/-media-/2griol/docsLibrary/FIO_DS_ION_DataAccelerator.pdf - was when I *almost* got a box from Nimbus Data, only to see the CEO himself (!) injecting some really nasty lending terms into the final doc he sent over for my signature, done in a very low-brow, disgustingly sneaky manner (and then he even had the audacity to accuse me of not having funds ready - while implicitly admitting he didn't even have available test boxes in circulation... clowns in the storage circus.)
|
# ? Dec 10, 2013 02:35 |
|
What kind of workloads are you doing?
|
# ? Dec 11, 2013 06:19 |
|
szlevi posted:I have one more question: is anyone using ION Data Accelerator from Fusion-IO...? It seems to me like a competitive alternative to Violin's boxes (MUCH cheaper, even if you add the price of 2-3 FIO cards per server) - are those bandwidth numbers for real..? Nimbus has like 50 employees. The CEO sometimes handles support calls as well. It's pretty close to a one-man show. From what I've heard from NetApp SEs that see Nimbus in the field, they aren't lying about the performance though, at least for sequential workloads. Not sure about the FIO product, but it's not outside the realm of possibility that it can push some pretty serious throughput.
|
# ? Dec 11, 2013 06:28 |
|
Dilbert As gently caress posted:What kind of workloads are you doing? We're a medical visualization firm; the actual workflow here would be high-end compositing using uncompressed data (we have our own raw file format), sometimes 4K, possibly even higher res in the future - to put it simply, I want to reach at least 10-15 fps, and frame size can go up as high as 128MB (those workstations have dual 10Gb adapters.) Currently all I'm doing is using two FIO cards as transparent cache, fronting two production volumes, but 1. it's not nearly fast enough, 2. it's limited to one card/volume, 3. it's a clunky, manual process when you un-bind the card from one volume and bind it to another one, depending on the location of the next urgent project folder...
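(A quick back-of-the-envelope on those numbers - 128MB frames at the target frame rates - shows why the dual 10Gb adapters and the "not nearly fast enough" complaint both make sense; the link-rate figure below ignores protocol overhead:)

```python
def required_mb_s(fps, frame_mb):
    """Sustained throughput needed to stream uncompressed frames."""
    return fps * frame_mb

TEN_GBE_MB_S = 10_000 / 8  # ~1250 MB/s line rate, before any protocol overhead

low = required_mb_s(10, 128)   # 1280 MB/s - already beyond a single 10GbE link
high = required_mb_s(15, 128)  # 1920 MB/s - needs most of a dual-port NIC
```

Even the low end of the target exceeds one 10GbE link, so anything short of bonded links plus very fast back-end storage can't hit the frame rate.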
|
# ? Dec 11, 2013 20:45 |
|
szlevi posted:We're a medical visualization firm, the actual workflow here would be high-end compositing using uncompressed data (we have our own raw file format), sometimes 4K, possibly even higher res in the future - to put it simply I want to reach at least 10-15 fps and frame size can go up as high as 128MB (those workstations have dual 10Gb adapters.) Currently all I'm doing is using two FIO cards as transparent cache, fronting two production volumes but 1. it's not nearly fast enough 2. it's limited to one card/volume 3. it's a clunky, manual process when you un-bind the card from one volume and bind it to another one, depending on the location of the next urgent project folder... The problem with doing something like FIO cards in the host is that data may or may not actually be written or modified back to your storage processors in time to provide a viable copy of media if a host fails. Could it potentially speed things up? Sure, just wait till a host locks up, freezes, or the FIO card decides to take a dump. Doctors are going to be PISSED. Before I go further, are you using a VDI infrastructure like EPIC to do this on? It's been a bit since I have worked with FIO cards, so apologies if I'm incorrect about their nature. The thing I would look for is some kind of bottleneck. You can have 10Gbps cards on all workstations, but if traffic is being routed through a 1Gb/s switch, what's the point? If 128MB/s is your peak, chances are the network isn't the issue. What video cards do these workstations have? IGP may not cut it, but something like a 50 dollar AMD/Nvidia card in the remote workstation might. Something like an R7 240 with some ample video RAM can drastically change these things. What is the latency of the client talking to the server hosting these images? Is the latency high on the image server to datastore? What datastores are you using? Flash accelerated storage works wonders for things, and flash storage is cheap.
If using VDI, have you thought about GPU acceleration in your servers for VMs? Dilbert As FUCK fucked around with this message at 21:06 on Dec 11, 2013 |
# ? Dec 11, 2013 20:55 |
|
Dilbert As gently caress posted:The problem with doing something like FIO cards in the host is that data may/may not be actually written or modified back to your storage processors in time to provide a viable copy of media if a host fails. Could it potentially speed up things? Sure, just wait till a host locks up, freezes, or the FIO card decides to take a dump. Doctors are going to be PISSED. We have artists and developers; we have no doctors. Our clients are big pharma and media firms, so no end-users here. quote:before I go further are you using a VDI infrastructure like EPIC to do this on? No, it'd make no sense. We need 96-128GB RAM in these machines, they sport 6-core 3.33GHz Xeons etc (they were maxed-out Precision T7500s 3 years ago when I bought them.) quote:it's been a bit since I have worked with FIO cards so apologies if I'm incorrect in their nature. I thought it went without saying that I have the infrastructure in place... why would anyone install dual 10-gig NICs for gigabit switches? We've been running on 10Gb for 3-4 years now, for a few workstations. We can pull over 500MB/s from the FIO but that's not enough for higher-res raw stuff. quote:What video cards do these workstations have? IGP may not cut it, but something like a 50 dollar AMD/Nvidia card in the remote workstation might. Something like an R7 240 with some ample video ram can drastically change these things. We have several plugins/tools we developed in CUDA, so various NV cards: the few high-end ones are Quadros like the K5000, the rest are GTX desktop cards (480, 570, 770) - beyond CUDA compatibility the only thing that matters to us is the amount of memory; to work with larger datasets fast, these tools load them into the video card's memory... I just bought a few GTX 770 4GB cards for dirt cheap, they are great deals. quote:What is the latency of the client talking to the server hosting these images? Is the latency high on the Image server to datastore?
These are simple project folders, on SMB 3.0 file shares (Server 2012), and that's the issue. quote:What Datastores are you using? Flash accelerated storage works wonders for things, and flash storage is cheap. Again, VDI is out of the question - we need very high-end WS performance, so that would make no sense.
|
# ? Dec 11, 2013 21:56 |
|
Sorry, I completely misread your "We're a medical visualization firm" as medical, e.g. healthcare; it's really been an off day for me. I apologize.
|
# ? Dec 11, 2013 22:06 |
|
That sounds fairly similar to a data analysis/visualization environment I used to manage: 12 clients with a ton of memory, 10GbE, and GPUs, so virtualization didn't make sense. We ran Linux though, and ended up running IBM's GPFS filesystem with a 2MB block size on large DDN9550/DDN9900 storage systems (about 1.5PB total) with 8 NSD (I/O) servers in front of it, serving everything out over 10G. A single client could max out its 10G card when doing sequential reads/writes, and the 9900 alone could hit 4-5 GB/sec peaks under a heavy load. Granted, GPFS is not even close to free and probably pretty expensive for a relatively "small" installation like that. It's more geared towards huge clusters and HPC, but drat did it rock for that environment. I'm not saying a different filesystem or anything will solve your issues; I just wanted to give a description of a similar environment where disk I/O was pretty sweet.
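(For anyone sanity-checking those numbers: the front-end and back-end ceilings are easy to eyeball. This assumes one 10G link per NSD server, which isn't stated above:)

```python
def aggregate_gb_s(servers, per_server_gb_s):
    """Ceiling imposed by the I/O-server tier, ignoring the disk back end."""
    return servers * per_server_gb_s

nsd_ceiling = aggregate_gb_s(8, 1.25)  # 8 NSD servers x ~1.25 GB/s each = 10 GB/s
disk_ceiling = 5.0                     # the DDN9900's observed peak above
bottleneck = min(nsd_ceiling, disk_ceiling)  # the disks, not the NSD network tier
```

Which matches the description: a single client can saturate its own 10G link, while the array tops out well before the NSD tier would.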
|
# ? Dec 12, 2013 00:23 |
|
That's what pNFS and Lustre are for, and they're used in many places for exactly this.
|
# ? Dec 12, 2013 02:06 |
|
Seconding that Lustre is probably the sweet spot here. Or AFS if you hate yourself, and Gluster if you run Linux workstations (it'll take quite a few nodes to get performance up to Lustre levels). If you want to max out 10Gb, it's going to be either an enormous SAN or a distributed filesystem, and the latter is easier/better/more scalable.
|
# ? Dec 12, 2013 15:54 |
|
I know that both are relatively new products, but does anyone here have anything to say about either the IBM V5000 or the EMC VNX5200? We're currently looking into replacing our aging DS3300 and these two seem like pretty good candidates. The DS3k is currently providing about 8TB (15k SAS) of iSCSI storage for 12 blades running about 50 VMs which do a mishmash of webhosting (with backend databases), DNS and email. As we'd like to be able to phase out the DS3k without any major downtime (don't we all?), the V5k would seem like the more attractive alternative, as it appears to be able to do non-disruptive online mirroring from volumes on the DS3k, whereas the VNX cannot. I'm also inclined to assume that IBM-to-IBM migration might be a more trouble-free experience. Our hardware supplier is pushing EMC on us really hard, but they probably get better margins from those than from IBM. I'm going to be sending out requests for quotes next week, so I don't yet have prices for comparable configurations. While EMC is probably the more expensive one, I've read that the VNX2 is at least a bit more competitively priced than its predecessors.
|
# ? Dec 12, 2013 16:00 |
|
evol262 posted:Or AFS if you hate yourself I used to run a couple AFS cells and wrote parts of an AFS client (arla). This is absolutely true.
|
# ? Dec 12, 2013 17:14 |
|
evol262 posted:If you want to max 10Gb, it's going to be an enormous SAN This isn't really true. There are some reasonably cheap arrays out there that can easily max out a 10G link with highly sequential workloads. An E5400 can max out dual 10G links, and that's not big iron.
|
# ? Dec 12, 2013 17:59 |
|
NippleFloss posted:This isn't really true. There are some reasonably cheap arrays out there that can easily max out a 10G link with highly sequential workloads. An E5400 can max out dual 10G links and that's not big iron. Granted, in some respects - and medical imaging probably means enormous files. You can find cheap arrays which'll max 10G with nearline SAS, clever caching, or SSDs. Doing it for multiple workstations simultaneously, and with a relatively unknown I/O pattern, is much harder.
|
# ? Dec 12, 2013 18:48 |
|
Dilbert As gently caress posted:Sorry I completely misread your "We're a medical visualization firm" as medical e.g. healthcare; It's been really an off day for me. I apologize. Oh, no, your questions were totally valid, I wasn't clear enough.
|
# ? Dec 12, 2013 19:29 |
|
The_Groove posted:That sounds fairly similar to a data analysis/visualization environment I used to manage, 12 clients with a ton of memory, 10GbE, and GPUs, so virtualization didn't make sense. We ran linux though, and ended up running IBM's GPFS filesystem with a 2MB block size on large DDN9550/DDN9900 storage systems (about 1.5PB total) with 8 NSD (I/O) servers in front of it, serving everything out over 10G. A single client could max out its 10G card when doing sequential reads/writes and the 9900 alone could hit 4-5 GB/sec peaks under a heavy load. Granted GPFS is not even close to free and probably pretty expensive for a relatively "small" installation like that. It's more geared towards huge clusters and HPC, but drat did it rock for that environment. Nice, thanks for the info. Funny you mention DDN - I have a 9550; it used to pump out ~1.5GB/s total, around 400-500MB/s per client, but it's been out of warranty for a while now, and being a single-headed unit I don't dare put it into production anymore.
|
# ? Dec 12, 2013 19:31 |
|
The problem with pNFS, Lustre, etc. is that 1. we are a small shop and it's hard to get proper support for something like that, 2. we are a Windows shop and clients are poorly supported at best (or don't even exist)... We tested StorNext with our DDN back then and it sucked - I got better results with native NTFS, seriously (and that was crap too.)
|
# ? Dec 12, 2013 19:38 |
|
evol262 posted:Granted in some respects, and medical imaging is probably enormous files. You can find cheap arrays which'll max 10g with nearline SAS, clever caching, or SSDs. Doing it for multiple workstations simultaneously and with a relatively unknown I/O pattern is much harder. Yes, and actually FIO just showcased their setup in the summer; they had it running in their booth at SIGGRAPH: http://www.fusionio.com/siggraph-2013-demo/ This seems to be a LOT less complex for my team than a Lustre setup (I'm handy with Linux but pretty much that's it; nobody touches it unless necessary); essentially FIO applies some very clever caching all the way to the artist's WS, using block-level (IB) access... I probably wouldn't get those client cards, but since I already have 2 FIO Duos that could drive the ioTurbine (newer version of my DirectCache) segment, I figured I might take a look at how much those ioControl boxes (ex-NexGen) cost, say, around 40-50TB... adding another pair of FIO cards in the other file server node could possibly bump me up to ~4GB/s, perhaps even further...? Granted, it won't be as fast as the demo was, and I will have to build it step by step, but that's life.
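(The ~4GB/s hope isn't crazy if you assume roughly ioDrive-Duo-class cards and naive linear scaling with a derating factor. The per-card figure here is an assumption from spec-sheet-class numbers, not anything measured above:)

```python
def projected_gb_s(cards, per_card_gb_s, efficiency=0.7):
    """Optimistic linear scaling across cards, derated for real-world overhead."""
    return cards * per_card_gb_s * efficiency

# Assumed ~1.5 GB/s sequential per card; check your own cards' spec sheet.
estimate = projected_gb_s(4, 1.5)  # ~4.2 GB/s across two nodes with 2 cards each
```

Whether the file-serving layer in front of the cards can actually deliver that to clients is a separate question, of course.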
|
# ? Dec 12, 2013 19:48 |
|
MrMoo posted:That's what pnfs and lustre are for and used in many places exactly so. Is anyone using Ceph? It seems attractive because it combines block and object, but I'm not sure how widely used it is.
|
# ? Dec 12, 2013 20:12 |
|
szlevi posted:Yes and actually FIO just showcased their setup in the Summer, they had it running in their booth at SIGGRAPH: http://www.fusionio.com/siggraph-2013-demo/ FusionIO is awesome if you just want to throw money at the problem, and Infiniband lets them RDMA straight to the PCIe bus on the next server in the chain, but I don't know that there's anything particularly clever about it. It works and it's easy, though. szlevi posted:The problem with pnfs, lustre etc that 1. we are a small shop and hard to get proper support for something like that 2. we are a Windows shop and clients are totally not supported at best (or does not even exist)... I guess the question is this: FusionIO works for your needs, but it's a mess to scale out and isn't really standardized. Do the benefits outweigh the costs? If yes, buy FIO. If not, get some hard numbers on the performance you need and an approximate price point and we'll get you recommendations. Serfer posted:It's anyone using ceph? It seems attractive because it combines block and object, but I'm not sure how widely used it is.
|
# ? Dec 12, 2013 20:27 |
|
evol262 posted:Granted in some respects, and medical imaging is probably enormous files. You can find cheap arrays which'll max 10g with nearline SAS, clever caching, or SSDs. Doing it for multiple workstations simultaneously and with a relatively unknown I/O pattern is much harder. Well, sure, if you're talking about random small-block IO, maxing out 10G is a lot harder, but then you're not going to be talking about throughput anyway; you're going to be talking about IOPS and latency. You don't even need clever caching or SSD to do high throughput from a pretty modest SAN. Spinning platters are pretty efficient at sequential IO already, so all you need is a raid/volume management scheme that divides data between spindles pretty effectively, some good readahead algorithms to help manage multiple concurrent streams, and enough CPU and internal bandwidth between the different busses that your controller itself isn't the bottleneck. Most general-purpose arrays are optimized for random IO because it's harder to get right and because most IO that affects the user experience is latency-sensitive small-block IO. These optimizations for random IO tend to make them less effective for sequential IO, but when you get an array built for that purpose you can do a lot with relatively little hardware. Again, to reference the E-series, which is what I'm most familiar with: Lustre running on top of a single E5460 with 30 drives can push about 10Gb/s while handling 50 concurrent IO streams. Add more disk and we can get to 20Gb/s split across even more concurrent streams.
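(The arithmetic backs this up: individual spindles are slow at random IO but fine sequentially, so aggregate bandwidth piles up fast if the stripe keeps every drive busy. The per-drive rate and efficiency factor below are assumptions for illustration, not E-series specs:)

```python
def array_seq_mb_s(drives, per_drive_mb_s, stripe_efficiency=0.8):
    """Aggregate sequential throughput if the layout keeps all spindles streaming."""
    return drives * per_drive_mb_s * stripe_efficiency

# 30 drives at an assumed ~100 MB/s sustained sequential each already
# out-runs a 10Gb link (~1250 MB/s), even after derating for striping overhead:
estimate = array_seq_mb_s(30, 100)
```

That's why a 30-drive shelf can saturate 10Gb/s on sequential work while the same box would struggle to impress on small random IO.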
|
# ? Dec 13, 2013 06:54 |
|
We max out 40Gb Infiniband using just spindle drives (massive quantities of spindle drives, mind you.) This is on Solaris 11/ZFS/dual 6-core Xeons with a minimum of 144 disks.
|
# ? Dec 13, 2013 09:58 |
|
the spyder posted:We max 40GB Infiniband using just spindle drives (massive quantities of spindle drives mind you.) This is on Solaris 11/ZFS/Dual 6-Core xeons with a minimum of 144 disks. How are you hanging that many spindles off a single box? Are you doing awful cheap hacky solutions or enterprise class hardware with service contracts and the whole bit?
|
# ? Dec 15, 2013 11:52 |
|
I'm looking for something for nearline local backups for our systems, mostly DB backups. I'm thinking 3-6U, one or two boxes, 20-40TB usable, bonded GbE or 10GbE connected, with NFS and/or rsync, FTP, etc. transfer. While we have a lot of in-house expertise rolling just this kind of solution ourselves, I'm hoping for something very turnkey and reliable, while not being horrendously expensive; moderately expensive is potentially OK. We already have a Hitachi FC SAN for DBs and VMs, but its file options appear to be so bad we're not even considering them (and they gave us a free file module).
|
# ? Dec 16, 2013 22:46 |