|
Cocoa Crispies posted:yeah that's a problem with the application, not my butt

some prole autoscales low hanging fruit poo poo like a php website and suddenly thinks you can do it to everything
|
# ? Aug 19, 2014 01:11 |
|
|
|
just rearchitect your application to use cassandra
|
# ? Aug 19, 2014 01:12 |
|
actually the reason most things cant scale to a large level is because they're poorly designed by people who wanted to do it themselves most of this stuff is a solved problem as long as you dont try to solve it yourself
|
# ? Aug 19, 2014 01:13 |
|
uhh no the reason enterprise applications cant scale like NETFLIX is because they were made years ago and the newest thing they designed for is 'cluster computing' see: oracle fusion
|
# ? Aug 19, 2014 01:16 |
|
you mean the company that uses microsoft silverlight??
|
# ? Aug 19, 2014 01:20 |
|
or really any relational database, scales like poo poo. how do you design your mysql app to span geographic regions and be active/active ?? if your application wasnt designed to be in the cloud from the start its going to be a pos
|
# ? Aug 19, 2014 01:21 |
|
to a first approximation no applications are meant to autoscale horizontally or failover across active active datacenters, because that poo poo is hard, expensive, performance-impacting, and usually totally loving impractical

netflix can be magical autoscaling wonder wizards because it doesn't matter if 10% of their requests fail or the system isn't globally consistent or performance is degraded or whatever. the loving client can just fail over and your movie is delivered 30 seconds late or stutters or you don't have hd that night. big whoop

the rest of us, poo poo has to be up and deliver on certain performance/uptime guarantees during particular hours, and extraordinarily rare events like a datacenter failure are less important to cope with than extraordinarily common events like an amazon datacenter failure

p.s. you still want cfg management and one-button deployments even if you're just deploying to the same dozen FIPS-certified ultraSPARC NETRAs from 2005 though. those things are important man

Notorious b.s.d. fucked around with this message at 02:45 on Aug 19, 2014 |
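the cfg management aside is the actually actionable part. the core contract of any config management tool (ansible, puppet, whatever) is idempotency: running the same deploy twice changes nothing the second time. a minimal sketch of that idea, in plain python rather than any real tool, with a made-up config-file resource:

```python
import os
import tempfile

def ensure_file(path, content):
    """Idempotently ensure `path` contains exactly `content`.

    Returns True if a change was made, False if the file was already
    in the desired state -- the changed/unchanged contract that cfg
    management resources are built around.
    """
    try:
        with open(path) as f:
            if f.read() == content:
                return False  # already converged, touch nothing
    except FileNotFoundError:
        pass
    # write to a temp file and rename, so a crash mid-deploy can't
    # leave a half-written config behind
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(content)
    os.replace(tmp, path)
    return True
```

first run reports a change, every identical run after that is a no-op, which is what makes one-button redeploys safe to mash.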
# ? Aug 19, 2014 02:42 |
|
wrap it up linuxailures

http://arstechnica.com/business/2014/08/linux-on-the-desktop-pioneer-munich-now-considering-a-switch-back-to-windows/
|
# ? Aug 19, 2014 13:39 |
|
Captain Foo posted:wrap it up linuxailures

quote:Agreed. Is there any way you can look at this and not think 'shady' especially with all of MS's previous dirty dealings.

quote:I smell sabotage, likely indirectly. Microsoft has a lot riding on the failure of this project, if more municipalities follow this example it stands to lose millions.

quote:Just remember that this probably all because some poor public servant's Powerpoints get screwed up in LibreOffice, nothing more.

quote:I have no clue why this is controversial. First, the guy in charge is a proclaimed Microsoft fan. Second, Microsoft is no angel and doing dirty deeds was something they were well known for throughout the 80s and 90s. Third, do you know how easy it is for the guy on top to sabotage a project? Any project? Killing an OS migration would be like shooting fish in a barrel, point blank, with a rocket launcher.

jre fucked around with this message at 14:17 on Aug 19, 2014 |
# ? Aug 19, 2014 13:53 |
|
Notorious b.s.d. posted:to a first approximation no applications are meant to autoscale horizontally or failover across active active datacenters, because that poo poo is hard, expensive, performance-impacting, and usually totally loving impractical

Yes and no. I mean it's fairly obvious that cross-DC failover is hard as gently caress -- the link between DCs to even figure out you need to failover is likely less reliable than individual DCs -- but it totally matters if 10% of their requests fail. 10% of video streaming at their scale is a poo poo ton of load and bandwidth to shift around for the sake of it. It's big enough they need to strike deals with ISPs and carriers to keep delivering poo poo, and doing it while going over adversarial guys like Verizon trying their hardest to ruin their QoS.

Sure they don't have a hard need for consistency and clients can retry (which clients can't, by the way?), but there's no question that their problems are hard. It's cool to feel better by telling oneself "well they couldn't scale the systems I work on because they can make compromises", but it's not like their problem space is intrinsically easier to work in than yours or mine, or that their problem space isn't the result of compromises they made already.

IMO the main reasons poo poo can't scale are:
|
# ? Aug 19, 2014 15:22 |
|
Hahaha. They actually believe MS needs to sabotage desktop Linux.
|
# ? Aug 19, 2014 15:31 |
|
Suspicious Dish posted:Hahaha. They actually believe MS needs to sabotage desktop Linux.

The comments section on that article is predictably terrible slashdot lite. Your exchange setup can be replaced by gmail, puppet is a viable replacement for system center, ldap is just as good as active directory, open office isn't a terrible piece of poo poo
|
# ? Aug 19, 2014 15:42 |
system center 2012 owns
|
|
# ? Aug 19, 2014 16:33 |
|
at my last job i blew a bunch of time getting a basic db to the point where it could handle a few billion records/day on a single system* solely because they didnt want deployments to require several machines (and therefore cost more)

was a p fun project and a hell of a line item on my resume but lmao

* with a small disk array providing p good iop capacity tbf
|
# ? Aug 19, 2014 16:46 |
|
Mr Dog posted:A phone that plugs into a docking station to become a desktop is actually pretty ownage

why
|
# ? Aug 19, 2014 16:52 |
|
Notorious b.s.d. posted:to a first approximation no applications are meant to autoscale horizontally or failover across active active datacenters, because that poo poo is hard, expensive, performance-impacting, and usually totally loving impractical

crotchety cj stymie
|
# ? Aug 19, 2014 16:53 |
|
Notorious b.s.d. posted:
thx person who doesnt know how ansible works
|
# ? Aug 20, 2014 01:27 |
|
MononcQc posted:
you can scale anything if you have a blank check. i dont think netflix had it 'easier' but it's certainly 'easier' when your platform architecture is designed that way from the very beginning. instead of shoehorning some crufty piece of poo poo into teh clod
|
# ? Aug 20, 2014 01:34 |
|
new discussion topic: websphere vs weblogic, the epic final showdown
|
# ? Aug 20, 2014 01:40 |
|
pram posted:it's certainly 'easier' when your platform architecture is designed

#wow #wwhoa
|
# ? Aug 20, 2014 02:44 |
|
lurmickes

quote:On Mon, Aug 04, 2014 at 03:38:20PM +1000, Dave Airlie wrote:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=bdc3ae7221213963f438faeaa69c8b4a2195f491

quote:diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c

quote:On Wed, Aug 6, 2014 at 9:59 AM, Andev <debian...@gmail.com> wrote:
|
# ? Aug 20, 2014 21:33 |
|
pram posted:you can scale anything if you have a blank check.

Netflix definitely has it easier. they're like 99.9999% reads and the writes they do aren't latency sensitive
|
# ? Aug 20, 2014 21:36 |
|
A good observation from the shaggmeister
|
# ? Aug 20, 2014 22:34 |
|
raruler posted:lurmickes
|
# ? Aug 21, 2014 01:10 |
|
MononcQc posted:Yes and no. I mean it's fairly obvious that cross-DC failover is hard as gently caress -- the link between DCs to even figure out you need to failover is likely less reliable than individual DCs -- but it totally matters if 10% of their requests fail.

which clients can't retry? which clients can? in my career i don't think i've ever run into a client with robust failover. web browsers and lovely vertical market fat clients are the worst offenders

client robustness is a hard enough problem that netflix has published p. fancy open source libraries specifically for that

MononcQc posted:IMO the main reasons poo poo can't scale are:

yeah idk man i live in a world where the "needs to" is "it would be awful nice if that once-a-week scheduled job could run hourly" or "it would be nice if this once a year job could be done monthly"

it's just not the same order of problem as serving thousands of discrete clients and not giving a gently caress about error rate. it is a different space entirely, and most of these problems don't benefit from distribution
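for the curious: the netflix open source libraries alluded to here are presumably things like Hystrix, whose core trick is a circuit breaker, i.e. fail fast instead of hammering a backend that keeps erroring. a stripped-down sketch of the pattern (class name and thresholds invented for illustration, not Hystrix's actual API):

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors;
    reject calls outright until `reset_after` seconds have passed,
    then let a single probe call through to test recovery."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one probe call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

the point being that a "robust" client is one that stops retrying a dead backend and sheds that load somewhere else, which is exactly the kind of logic browsers and fat clients don't have.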
|
# ? Aug 21, 2014 02:08 |
|
Cocoa Crispies posted:crotchety cj stymie

that would be pram. itt i am catching bullshit from both ends: cjs are mad that i think you should have real working automation, startup manchildren are equal parts astonished and disgusted that anyone thinks about operations in the first place
|
# ? Aug 21, 2014 02:10 |
|
lol no you are catching poo poo because youre dumb
|
# ? Aug 21, 2014 02:35 |
|
dumb
|
# ? Aug 21, 2014 02:35 |
|
accusing another yosposter of being a cj is a serious matter
|
# ? Aug 21, 2014 02:38 |
|
pram posted:you can scale anything if you have a blank check.

pram posted:dumb

its u
|
# ? Aug 21, 2014 03:09 |
|
hack each others systems. first one wins
|
# ? Aug 21, 2014 03:10 |
|
Brain Candy posted:its u

no
|
# ? Aug 21, 2014 03:31 |
|
wtf
|
# ? Aug 21, 2014 03:32 |
|
pram posted:lol no you are catching poo poo because youre dumb
|
# ? Aug 21, 2014 03:33 |
|
pram didnt you have some other avatar for like a hot minute what happened to that
|
# ? Aug 21, 2014 03:33 |
P sure whoever's the one advocating for actually telnetting into a server deployment is the wrong one here jfyi
|
|
# ? Aug 21, 2014 05:40 |
|
no one is advocating telnetting into anything
|
# ? Aug 21, 2014 05:45 |
|
i miss rsh, it was so much faster than ssh
|
# ? Aug 21, 2014 09:14 |
|
Shaggar posted:Netflix definitely has it easier. they're like 99.9999% reads and the writes they do aren't latency sensitive

Shaggar was right
|
# ? Aug 21, 2014 10:11 |
|
|
Notorious b.s.d. posted:which clients can't retry? which clients can? in my career i don't think i've ever run into a client with robust failover. web browsers and lovely vertical market fat clients are the worst offenders

But people in a web browser can and will retry. Enough that measures had to be taken in many places to protect against things like double-posting, re-submitting forms that were too old, etc. Most programmed clients can retry by just calling the method they're in again or whatever.

As far as I can tell, what makes it hard to retry is usually poor communication of the cause of error, or assumptions such that the user or caller doesn't really have a good way to know. Here's a fun one if I turn off my wifi connection and try to pull a repo:

code:
Then again, I'd like to read the netflix stuff and what they found was particularly hard.

Notorious b.s.d. posted:yeah idk man i live in a world where the "needs to" is "it would be awful nice if that once-a-week scheduled job could run hourly" or "it would be nice if this once a year job could be done monthly"

Yeah, that is a different form of scale. And it's one case where you won't really need cross-DC failover, which is what I was really focusing on. You often don't need that kind of scaling, if that makes my statement more accurate.

I just want to say that you never really not give a gently caress about error rates. Higher error rates are a shittier QoS (there's a cost to retrying), angrier customers, and so on. If your job is to serve thousands of discrete clients, error rates are one of the major metrics you have, because they pretty much correlate with how good of a job you're doing.
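that "poor communication of the cause of error" point is most of what makes retries hard: a caller can only retry safely if it can tell transient failures from permanent ones. a rough sketch of what a retry policy looks like once errors are classified (the error taxonomy here is invented for illustration, not from any particular library):

```python
import random
import time

class TransientError(Exception):
    """e.g. timeout, connection reset -- worth retrying."""

class PermanentError(Exception):
    """e.g. auth failure, malformed request -- retrying just burns QoS."""

def call_with_retry(fn, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Retry only transient failures, with exponential backoff plus
    jitter so a thousand clients don't all retry in lockstep."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise  # retry budget spent: surface the real error
            sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
        # PermanentError is deliberately not caught: it propagates
        # immediately, because retrying it can never succeed
```

the git example above is the degenerate case: the client knows exactly why it failed (no network) but just dumps it on the user instead of deciding whether it's retryable.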
|
# ? Aug 21, 2014 13:05 |