Syano
Jul 13, 2005
Scale Computing does something sort of like what you guys are talking about, I believe. I have no idea how it runs or what technologies it is built on or anything; I just had someone tell me about it once and looked at the web page. Here it is if you want to read more... I think I will when I get a free moment: http://www.scalecomputing.com/products/hc3/features-and-benefits


Pile Of Garbage
May 28, 2007



Misogynist posted:

Hey, did I just come upon the only other SONAS user on the forums? :raise:

Unfortunately no. I wish I got to work with SONAS but the closest I've ever come is the Storwize V7000 (just the normal one, not the Unified). I've read the SONAS Redbook cover-to-cover though! :shobon:

three posted:

What is the benefit of continuing the traditional SAN architecture?

I would rather have a resilient scale-out infrastructure that uses cheaper technology. Scale-out SANs are already very popular (e.g. Equallogic), so let's go a step further and push that into the server, make it resilient and highly available, and ditch the behemoth SAN architecture. Solid-state drives becoming affordable and easily obtainable makes this idea easier, as well.

Push everything into the software layer.

I've worked with SANs/NASs for several years and I like to think I'm somewhat on top of things but what the gently caress defines a "scale-out SAN"? A quick search on Google has simply led me to believe that it's just another lovely buzzword.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

cheese-cube posted:

I've worked with SANs/NASs for several years and I like to think I'm somewhat on top of things but what the gently caress defines a "scale-out SAN"? A quick search on Google has simply led me to believe that it's just another lovely buzzword.

Equallogic calls it "frame-based" versus "frameless".

Pile Of Garbage
May 28, 2007




Oh I see what they're saying, despite the stupid names they've used (although that's probably just my completely irrational Dell hatred talking). I've worked with IBM SVC (SAN Volume Controller) before, which does the same sort of thing (they call it "external storage virtualisation").

edit: vvv ahahahaha love it

Pile Of Garbage fucked around with this message at 06:38 on Nov 9, 2012

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
hey guys i was gonna roll my own SAM-SAMBA-SOC (Scott Allen Miller SAMBA Scale Out Cloud) for my 15000 user Exchange 2005 production environment do you have any hardware to recommend? My budget is $650.

Serfer
Mar 10, 2003

The piss tape is real



Misogynist posted:

A product based on "virtualization bricks" that runs a dead-easy Isilon-like scale-out storage architecture and also hosts VMs would be loving incredible.

Isn't this pretty much what OpenStack is supposed to be? I mean with an OpenStack distribution like Airframe, you just connect the machine and it boots into OpenStack, gets its storage added to the pool, and becomes a VM host.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

The MD12x0 series are DAS units; the MD32x0 line are their entry-level SANs.

three posted:

What is the benefit of continuing the traditional SAN architecture?

What if I only have a limited number of hosts and have exceeded the internal (software shared) storage in them? I would be forced to purchase another host + licensing. With a traditional SAN you would just be adding on a shelf.

Moey fucked around with this message at 03:31 on Nov 9, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Moey posted:

What if I only have a limited number of hosts and have exceeded the internal (software shared) storage in them? I would be forced to purchase another host + licensing. With a traditional SAN you would just be adding on a shelf.

Perhaps we will see the architecture for the server change to accommodate this. No particular reason a server can't have "shelves" added.

Also, Equallogic, for example, can't have shelves added to it. You have to buy a whole new member, which comes with two controllers attached; controllers are, more or less, "compute," so you're paying the same price either way, except in the SAN-less approach you also gain compute capacity for your virtual environment.

zero0ne
Jul 20, 2007
Zero to the O N E
Hardware: HP P2000 MSA G3 SFF - loaded with 24 600GB SAS 10k drives
2 drives set up as hot spares
1Gbps iSCSI (dual controller, so each one has 4 ports) - total of 8Gbps for the SAN


From my initial research, I should be looking at around 1500 - 4000 IOPS depending on usage.

Should I be able to get ~20 (maybe more) VMs from this?
(Hyper V 3)

Rest of infra looks like this:
1x HP DL560 g8 @ 16 cores (2x 8 cores) with around 192 - 256GB RAM
2x HP DL380 g7 @ 8 cores (2x 4 cores) with 128GB RAM
1x HP DL380p g8 @ 8 cores (2x 4 cores) with 128GB RAM (replica server off site)


Looking at running 2 environments here - an old XenApp 4.5 environment as well as a new XenApp 6.5 environment. Primarily to make the transition smoother.

For the number of concurrent users (100+) I am probably still overbuilding this, but I want to be absolutely sure I can scale up and out if needed in the next 3 years. There may be requirements or requests to start segregating the new XenApp environment even more if the company keeps growing at the rate it is.


I think my bottleneck for the above server hardware is still going to be that SAN. Probably don't need all that RAM either, but I was planning on 16GB XenApp servers, and 32GB each for their database servers.


Thoughts? Critiques? Suggestions? Note that we go HP because of big discounts so I don't see something beating that MSA G3 fully loaded @ ~12K

Thanks!
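As a rough sanity check on that 1500-4000 IOPS estimate, here's a minimal Python sketch. The inputs are my own assumptions, not numbers from the post: roughly 140 back-end IOPS per 10k SAS spindle, 22 active drives after the two hot spares, and the textbook RAID write penalties; controller cache and the real read/write mix will shift the results.

```python
# Rough sanity check on the 1500-4000 IOPS estimate. Assumptions that are NOT
# from the post: ~140 back-end IOPS per 10k SAS drive, 22 active spindles
# (24 minus the 2 hot spares), and textbook RAID write penalties. Controller
# cache is ignored entirely.

DRIVES = 22
IOPS_PER_DRIVE = 140                      # typical planning figure for 10k SAS
BACKEND_IOPS = DRIVES * IOPS_PER_DRIVE    # ~3080 back-end IOPS

def frontend_iops(read_fraction: float, write_penalty: int) -> float:
    """Front-end IOPS the array can serve for a given read/write mix."""
    write_fraction = 1.0 - read_fraction
    return BACKEND_IOPS / (read_fraction + write_fraction * write_penalty)

for raid, penalty in (("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)):
    for reads in (0.7, 0.5):
        print(f"{raid}, {reads:.0%} reads: "
              f"~{frontend_iops(reads, penalty):,.0f} front-end IOPS")
```

Depending on RAID level and mix, that works out to somewhere between the high hundreds and low thousands of front-end IOPS, which is roughly consistent with the lower end of the quoted range.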

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

paperchaseguy posted:

hey guys i was gonna roll my own SAM-SAMBA-SOC (Scott Allen Miller SAMBA Scale Out Cloud) for my 15000 user Exchange 2005 production environment do you have any hardware to recommend? My budget is $650.

dude you should totally splurge out on this spanking deal


Green drives will save you power! everyone on the spiceworks forums agreed this was the shizzle.

PS run NAS4Free

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Corvettefisher posted:

dude you should totally splurge out on this spanking deal


Green drives will save you power! everyone on the spiceworks forums agreed this was the shizzle.

PS run NAS4Free

God damnit, Corvettefisher, you're over budget.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

three posted:

Perhaps we will see the architecture for the server change to accommodate this. No particular reason a server can't have "shelves" added.

Also, Equallogic, for example, can't have shelves added to it. You have to buy a whole new member, which comes with two controllers attached; controllers are, more or less, "compute," so you're paying the same price either way, except in the SAN-less approach you also gain compute capacity for your virtual environment.

That's pretty strange that the Equallogic line cannot expand with just shelves. So every time you want to grow your storage, you are pretty much buying an entire new SAN?

Even Dell's lesser MD32x0 line supports adding shelves via the MD12x0 DAS units.

The change you are speaking of would certainly be interesting and offers some large cost savings, but I can't see this expanding to large scale any time soon. It would be popular in the low-cost SMB market (where the VSA is trying to get a foothold).


Please tell me this is what your old company's RAID 0 setup was on.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:




Please tell me this is what your old company's RAID 0 setup was on.

No that was an MD3200

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

three posted:

Perhaps we will see the architecture for the server change to accommodate this. No particular reason a server can't have "shelves" added.

Also, Equallogic, for example, can't have shelves added to it. You have to buy a whole new member, which comes with two controllers attached; controllers are, more or less, "compute," so you're paying the same price either way, except in the SAN-less approach you also gain compute capacity for your virtual environment.

Consolidated SAN/virtualization blocks like Nutanix or VSA aren't going to run traditional storage vendors out of business. They are good for small offices running general workloads, or for specific applications that they are tailored to (like VDI with Nutanix), but they have limitations that will prevent them from becoming the dominant method of providing enterprise storage.



As another poster said, if you have a fixed "block" of compute/storage that you have to add, it means you always end up over-sizing for one or the other. You mention having the ability to add shelves to scale individual nodes up, but that wouldn't work with the way things like Nutanix or VSA handle data protection. They maintain multiple copies of data on multiple nodes, but if each node has a different capacity, how do you guarantee that you have room to maintain multiple copies of that data? For instance, if I have one node with 5T and two nodes with 1T and the two 1T nodes are near full, where are the redundant blocks from the remaining 4T on the 5T node going to live? Certainly you could change the way you handle data protection to address this (NetApp does scale-out and scale-up on clustered ONTAP because they just use normal RAID-DP, and Nimble is planning to or has already done the same), but it's currently a problem.
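To put a number on that mixed-capacity problem, here's a minimal sketch of the placement limit; it's my own illustration, not Nutanix's or anyone else's actual algorithm, and it assumes a simple 2-copy scheme where a block and its replica must land on different nodes.

```python
# My own illustration of the replica-placement limit, not any vendor's actual
# algorithm. With 2 copies of every block on distinct nodes, each node holds at
# most one copy of anything, so the nodes *other* than the largest must be able
# to hold a full copy of the data set, and raw space caps you at half the total.

def protected_capacity_2way(node_tb):
    """Max TB of data storable with 2 copies, each copy on a different node."""
    total = sum(node_tb)
    largest = max(node_tb)
    return min(total / 2.0, total - largest)

print(protected_capacity_2way([5, 1, 1]))  # 2.0 TB -- 3 TB of the big node can't be protected
print(protected_capacity_2way([3, 2, 2]))  # 3.5 TB -- same 7 TB raw, balanced nodes
```

Which is also why bolting a shelf onto a single node in that kind of cluster buys less protected capacity than the raw numbers suggest.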

So maybe you make it so you can both scale out with more nodes and scale up specific nodes with more capacity. Well, those nodes with more capacity are likely going to have higher storage demands, which means there will be less CPU and memory left over to run VMs under ESX, so you've got to keep the VM load on there fairly light. But then you end up in a situation where you have VM compute nodes and storage nodes with the purposes roughly segregated, and you're pretty much back to having a traditional SAN, except without the benefits of supporting external protocols and all the other nice stuff traditional SANs provide.

That's a problem I see generally: it can be tricky enough to guarantee VM performance when VMs move around a cluster, but when you're also providing storage services that use those same memory and CPU resources, and those demands change independently of the VM load, it becomes even harder. I'd also worry about node failure, since you're doubling up on failure. It would be like an ESX node failure causing a node failure on your EQL every time it happened. You've lost compute resources for running VMs AND compute resources for running storage AND IO from the spindles that are no longer available. Failure could get ugly unless you build in a LOT of headroom, which doesn't mesh well with the value proposition.
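To put a rough number on the headroom point: a minimal sketch, assuming a uniform cluster where a failed node's VM load and its share of the storage work both redistribute evenly across the survivors (a simplification, not any vendor's failover model).

```python
# My own sketch of the failure-headroom math, assuming a uniform cluster where
# a failed node's VM load and its storage work both redistribute evenly across
# the surviving nodes. Real failover behaviour will differ.

def max_safe_utilization(nodes: int, failures_tolerated: int = 1) -> float:
    """Fraction of each node you can load and still absorb the failures."""
    survivors = nodes - failures_tolerated
    return survivors / nodes

for n in (3, 4, 8, 16):
    print(f"{n} nodes: keep per-node utilization under "
          f"{max_safe_utilization(n):.0%} to ride out a single node failure")
```

On a three-node converged cluster that means keeping roughly a third of every box in reserve, which is exactly the headroom-versus-value-proposition tension described above.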

And, you know, there are still tons of things that aren't going to get stuffed into VMware. Lots of shops still use Power or SPARC architectures that will continue to use dedicated SANs. Lots of shops want to consolidate their file services onto their unified SAN/NAS devices since they don't have to worry as much about patching OS flaws on file servers and they can use array-level snapshots and replication features. And there are some workloads where scaling up a single large SAN is better than a lot of distributed nodes.

I could be totally wrong about all of this; that's just my perspective from interacting with other people in the industry. I think traditional SANs will still be the storage solution of choice for enterprise customers for a long time, and I also think that most vendors are getting so good at making their storage hardware REALLY easy to use that you'll see more adoption in the SMB space, which is also the target for these unified virtualization/storage blocks.

Syano
Jul 13, 2005

Moey posted:

That's pretty strange that the Equallogic line cannot expand with just shelves. So every time you want to grow your storage, you are pretty much buying an entire new SAN?

Even Dell's lesser MD32x0 line supports adding shelves via the MD12x0 DAS units.


That's actually the advantage of an Equallogic kit though (at least according to them). Each unit you add gives you more of everything: more IOPS, more throughput capacity, more storage and more redundancy. Lefthand kits from HP work this way too, though not nearly as well.

Check out Scale Computing HC3. They are doing exactly what we are talking about; they call it hyper-convergence. I have no clue what their hypervisor is though.

EDIT: I did some reading on the Scale HC3 solution last night. It looks neat in theory. You buy a cluster of nodes that serves both your compute needs and your storage needs. If you need to expand you just buy another node and it adds to everything: more storage, more memory, more compute power, more network capacity. If you need just storage you can buy just storage nodes. Of course they aren't very forthcoming on their site about the technology they use under the hood to make this happen. Still, if it works it would be sort of neat.

Syano fucked around with this message at 14:30 on Nov 9, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

NippleFloss posted:

[...]

I could be totally wrong about all of this; that's just my perspective from interacting with other people in the industry. I think traditional SANs will still be the storage solution of choice for enterprise customers for a long time, and I also think that most vendors are getting so good at making their storage hardware REALLY easy to use that you'll see more adoption in the SMB space, which is also the target for these unified virtualization/storage blocks.

I agree that there are a lot of roadblocks. That's why whoever manages to solve them is going to make lots of money. :)

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Corvettefisher posted:

dude you should totally splurge out on this spanking deal


Green drives will save you power! everyone on the spiceworks forums agreed this was the shizzle.

PS run NAS4Free

what kind of lovely config is this, everybody knows you need helium drives for best performance

eta has anyone rolled their own helium drives? I have a side job as a kid's party clown making balloon animals so I know what I'm doing OK

Picardy Beet
Feb 7, 2006

Singing in the summer.

zero0ne posted:

Hardware: HP P2000 MSA G3 SFF - loaded with 24 600GB SAS 10k drives
2 drives set up as hot spares
1Gbps iSCSI (dual controller, so each one has 4 ports) - total of 8Gbps for the SAN


From my initial research, I should be looking at around 1500 - 4000 IOPS depending on usage.

Should I be able to get ~20 (maybe more) VMs from this?
(Hyper V 3)

Rest of infra looks like this:
1x HP DL560 g8 @ 16 cores (2x 8 cores) with around 192 - 256GB RAM
2x HP DL380 g7 @ 8 cores (2x 4 cores) with 128GB RAM
1x HP DL380p g8 @ 8 cores (2x 4 cores) with 128GB RAM (replica server off site)
...
Thanks!

I'm running something like 33 VMs on 4 DL380 G7s with 96 GB RAM in total.
12 of them are my XenApp 4.5 farm, serving complete desktops to 150 users (most on Wyse thin clients).
So concerning the server part, you're good to go. I'm already really underusing mine, and you'll have way more CPU and RAM.
On the storage side, I use 24 FC 15k 300GB drives in an EVA4400, with a Brocade FC fabric. It's a bit beefier, but I've got an ERP whose principal occupation is hammering its production DB server.
And like I said, that's 33 VMs in total.
My Citrix farm works OK. Concerning SQL latency, I have seen DBAs who have a real problem with it, but not me. If the devs would debug the loving product configurator I'd be the happiest sysadmin in the world.
All in all, as I'm probably your worst-case scenario, you should be good on this side too. 20 VMs seems pretty reasonable, judging by my usage.
In fact you'll probably be able to go to 30 and beyond without breaking a sweat, if you don't neglect one important point, the exact one you didn't write about: the storage network infrastructure. It really has to be done on 2 dedicated switches, and you have to carefully distribute each path across both of them. Don't skimp on them, they are really crucial for performance and resiliency.

Picardy Beet fucked around with this message at 23:05 on Nov 9, 2012

Rhymenoserous
May 23, 2008

paperchaseguy posted:

hey guys i was gonna roll my own SAM-SAMBA-SOC (Scott Allen Miller SAMBA Scale Out Cloud) for my 15000 user Exchange 2005 production environment do you have any hardware to recommend? My budget is $650.

Man we're getting a lot of mileage out of me posting that SAM-SD poo poo here aren't we?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Rhymenoserous posted:

Man we're getting a lot of mileage out of me posting that SAM-SD poo poo here aren't we?
I could have happily lived the rest of my life without knowing about Scott Allen Miller but now I know about him and I can't unknow about him and that makes me angry.

Thanks Rhymenoserous, thanks.

Rhymenoserous
May 23, 2008

NippleFloss posted:

I could have happily lived the rest of my life without knowing about Scott Allen Miller but now I know about him and I can't unknow about him and that makes me angry.

Thanks Rhymenoserous, thanks.

Did you know he writes tech blogs? Man, he has an entire series on how RAID 5 is not a backup. Neat, huh!

Syano
Jul 13, 2005
He's inspired me to build my next SAN entirely from FreeNAS.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Rhymenoserous posted:

Did you know he writes tech blogs? Man, he has an entire series on how RAID 5 is not a backup. Neat, huh!

He's a mod on spiceworks so you know he is legit.

zero0ne
Jul 20, 2007
Zero to the O N E

Picardy Beet posted:

I'm running something like 33 VMs on 4 DL380 G7s with 96 GB RAM in total.
12 of them are my XenApp 4.5 farm, serving complete desktops to 150 users (most on Wyse thin clients).
So concerning the server part, you're good to go. I'm already really underusing mine, and you'll have way more CPU and RAM.
On the storage side, I use 24 FC 15k 300GB drives in an EVA4400, with a Brocade FC fabric. It's a bit beefier, but I've got an ERP whose principal occupation is hammering its production DB server.
And like I said, that's 33 VMs in total.
My Citrix farm works OK. Concerning SQL latency, I have seen DBAs who have a real problem with it, but not me. If the devs would debug the loving product configurator I'd be the happiest sysadmin in the world.
All in all, as I'm probably your worst-case scenario, you should be good on this side too. 20 VMs seems pretty reasonable, judging by my usage.
In fact you'll probably be able to go to 30 and beyond without breaking a sweat, if you don't neglect one important point, the exact one you didn't write about: the storage network infrastructure. It really has to be done on 2 dedicated switches, and you have to carefully distribute each path across both of them. Don't skimp on them, they are really crucial for performance and resiliency.

Thanks Picardy,

Storage network at the time of posting was going to be split across 2x GigE switches, but those would also be used for normal network traffic for the servers, with the iSCSI traffic properly segregated on its own VLAN.

However, I have been contemplating getting 2 switches just for iSCSI traffic, as networking is my weakness.

Should be able to wing it too, as I could just drop down to a 380p G8 with a bit less RAM based on what you are saying (server performance-wise).

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Serfer posted:

Isn't this pretty much what OpenStack is supposed to be? I mean with an OpenStack distribution like Airframe, you just connect the machine and it boots into OpenStack, gets its storage added to the pool, and becomes a VM host.
This is absolutely true, but it stops a little short of what I mean. OpenStack handles replication of its object store, but unless things have changed a lot since I last really looked into OpenStack, all of the data for a single object (VM image, etc.) still resides on a single server in your OpenStack cloud. This is fine if your VMs are consuming storage for stuff like Hadoop and HDFS, because of the way that system works (you move your code to where the data lives, rather than the other way around). It doesn't mean much for traditionally virtualized workloads, especially tricky-to-virtualize things like SQL Server or Exchange. Running a high-throughput transactional workload on a single server's disks under contention from other systems sounds like a recipe for disaster.

One of the really nice benefits of something like Isilon (or even a dumber SAN like the IBM V7000, for that matter) is that as you add more disks to the pool, it becomes very easy to wide-stripe for better performance. This is a big win for manageability, at least until true scale-out filesystems hit price-performance parity with traditional filesystems on wide-striped SAN for generalized high-throughput workloads.

Being able to leverage that sort of storage, but also have insanely low-latency access to the underlying disk, would be really killer for being able to virtualize high-throughput applications in shops that don't have much virtualization or storage experience.

Put into the simplest possible terms: it would be neat to have something nearly identical to an Isilon cluster, but that had gobs of RAM and used its system resources for running virtual machines instead of providing file-level NAS services.

Vulture Culture fucked around with this message at 20:03 on Nov 10, 2012

Amandyke
Nov 27, 2004

A wha?

Misogynist posted:

Put into the simplest possible terms: it would be neat to have something nearly identical to an Isilon cluster, but that had gobs of RAM and used its system resources for running virtual machines instead of providing file-level NAS services.

I believe those are called Blade Enclosures.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

Put into the simplest possible terms: it would be neat to have something nearly identical to an Isilon cluster, but that had gobs of RAM and used its system resources for running virtual machines instead of providing file-level NAS services.

Isilon has traditionally been a poor choice for VMware since their strength is sequential throughput, and most VM workloads, especially things like transactional DBs, are going to be more dependent on low latency for random I/O to perform well. Isilon has problems with latency that have kept it from being very competitive in VMware environments.

Supposedly that is fixed in Mavericks due to more aggressive caching, but I haven't seen any benchmarks yet to prove that. Until Isilon gets its response/latency curve oriented in the right direction and can provide very low latencies for transactional I/O, I wouldn't want to use them for my VMware store.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Amandyke posted:

I believe those are called Blade Enclosures.
This doesn't really do anything for wide-striping or high IO requirements whatsoever.

NippleFloss posted:

Isilon has traditionally been a poor choice for VMware since their strength is sequential throughput, and most VM workloads, especially things like transactional DBs, are going to be more dependent on low latency for random I/O to perform well. Isilon has problems with latency that have kept it from being very competitive in VMware environments.

Supposedly that is fixed in Mavericks due to more aggressive caching, but I haven't seen any benchmarks yet to prove that. Until Isilon gets its response/latency curve oriented in the right direction and can provide very low latencies for transactional I/O, I wouldn't want to use them for my VMware store.
Agreed, I was using Isilon as more of a conceptual example because their interface and storage blocks are the same units and you could easily repurpose the same architecture in different ways.

Nomex
Jul 17, 2002

Flame retarded.
If you have a virtualized workload that requires high IO, you should use a raw device mapping for your storage disk. You decrease your available IO when you slap VMFS on a disk, whereas if you just RDM it all you have to deal with is your underlying storage file system.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Nomex posted:

If you have a virtualized workload that requires high IO, you should use a raw device mapping for your storage disk. You decrease your available IO when you slap VMFS on a disk, whereas if you just RDM it all you have to deal with is your underlying storage file system.

The performance difference between RDM and VMFS is negligible: http://www.vkernel.com/files/docs/white-papers/mythbusting-goes-virtual.pdf

Bitch Stewie
Dec 17, 2011

Misogynist posted:

A product based on "virtualization bricks" that runs a dead-easy Isilon-like scale-out storage architecture and also hosts VMs would be loving incredible.

It's iSCSI, not NFS, but at a basic level a commodity Dell/HP box running ESXi and an HP P4000 VSA doesn't sound a million miles off, though it only scales so far.

Rhymenoserous
May 23, 2008

three posted:

The performance difference between RDM and VMFS is negligible: http://www.vkernel.com/files/docs/white-papers/mythbusting-goes-virtual.pdf

It would have been nice if he had charted the iSCSI results considering he did test them.

the spyder
Feb 18, 2011
I racked one of my new internal 336TB (raw) ZFS SANs this week and realized that my field engineers are not configuring hot spares.

My question is what drive groupings would you use for such a large storage pool?
We currently use RAID-Z2 with 7-disk sets (16 of them). I configured this one with RAID-Z2 in 6-disk sets (18 of them) and 4 hot spares (one per JBOD).
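For comparison, a quick back-of-the-envelope in Python on those two layouts. It assumes 3TB drives, which is what 336TB raw over 112 slots works out to, and it ignores ZFS metadata and TB/TiB slop, so treat the results as relative rather than exact.

```python
# Back-of-the-envelope comparison of the two RAID-Z2 layouts. Assumes 3TB
# drives (336TB raw across 112 slots) and ignores ZFS metadata/overhead, so
# treat the outputs as relative numbers only.

DRIVE_TB = 3.0

def raidz2_layout(vdevs: int, disks_per_vdev: int, spares: int):
    """Return (total drive slots used, usable TB) for a RAID-Z2 pool."""
    data_disks = vdevs * (disks_per_vdev - 2)   # RAID-Z2 burns 2 parity disks per vdev
    return vdevs * disks_per_vdev + spares, data_disks * DRIVE_TB

for name, vdevs, width, spares in (
        ("16 x 7-disk vdevs, no spares", 16, 7, 0),
        ("18 x 6-disk vdevs + 4 spares", 18, 6, 4)):
    slots, usable = raidz2_layout(vdevs, width, spares)
    print(f"{name}: {slots} drives, ~{usable:.0f} TB usable before ZFS overhead")
```

So the 6-disk layout trades roughly 24TB of usable space for the four hot spares and narrower rebuild domains.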

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the spyder posted:

I racked one of my new internal 336TB (raw) ZFS SANs this week and realized that my field engineers are not configuring hot spares.

My question is what drive groupings would you use for such a large storage pool?
We currently use RAID-Z2 with 7-disk sets (16 of them). I configured this one with RAID-Z2 in 6-disk sets (18 of them) and 4 hot spares (one per JBOD).
Are you talking about 7 16-disk sets, or 16 7-disk sets? Single-digit disk sets for RAID-Z2 seem extremely small, even with high-capacity drives, unless you're experiencing phenomenally high failure rates in the field (you did mention the middle of the desert in the other thread).

the spyder
Feb 18, 2011
16 7-disk sets. This was not designed by me. We do see several drive failures a year, but none of these larger systems have been in service more than a few months. They are all Seagate XTs :(.

evil_bunnY
Apr 2, 2003

Max recommended vdev size is 9, isn't it?

jedibeavis
Mar 23, 2004

Bag 'em & tag 'em, Sarah!

Excuse me, Agent Walker.
Last week our primary file server had a bit of a hiccup and two of the drives in the array died. We're operating on our secondary server now, and my boss wants to replace the file server with a NAS appliance. We've been pretty much just doing tape backups with Backup Exec 10d since before I started here 5 years ago. We're looking to do disk-to-disk-to-tape, with the appliance also operating as a file server. Any recommendations for an appliance that would handle that? I believe he said I've got about $5,000 to work with. The file server currently has about 500 GB of storage space. I looked at a Data Domain DD-160, but from what little pricing info I can find, it looks like it would be around $10,000 for one of those.

sanchez
Feb 26, 2003
I cannot see a compelling reason not to buy another server for that amount of data.

Rhymenoserous
May 23, 2008
I'm going to say just buy another server with plenty of storage in it.


jedibeavis
Mar 23, 2004

Bag 'em & tag 'em, Sarah!

Excuse me, Agent Walker.
Yeah, that'll probably be best. Thanks guys!
