|
whenever i read your posts I'm really happy i make lovely on-premises apps for like 1 or 2 concurrent users and don't do anything difficult
|
# ? Jun 18, 2013 14:04 |
|
polpotpi posted:whenever i read your posts I'm really happy i make lovely on premises apps for like 1 or 2 concurrent users and don't do anything difficult same
|
# ? Jun 18, 2013 14:07 |
|
Zombywuf posted:MySQL is never good enough; Postgres damnit. the only thing that limits relational dbs in terms of performance is hardware (assuming you're using a real sql server). thousands of transactions a minute is nothing even on old junk hardware. the drawbacks to relational dbs are purely distribution-related. when you hit the limits of your hardware you have to add more nodes, and then you start running into the consistency vs availability problem.
|
# ? Jun 18, 2013 14:27 |
|
MononcQc posted:Even if it's not really complex, it still shows some properties of distributed systems where you choose to lack consistency in favor of lowered latency, as described in PACELC. The more state needs to be replicated over many nodes, and the more operations run over the data set, the more difficult it gets. Caching a website is almost always a read-heavy load which makes the problems trivial. Pass-through cache on read and you're most of the way there. The usual mechanism is a simple If-Modified-Since check. Shaggar posted:the only thing that limits relational dbs in terms of performance is hardware (assuming you're using a real sql server). thousands of transactions a minute is nothing even on old junk hardware. the drawbacks to relational dbs are purely distribution related. when you hit the limits of your hardware you have to add more nodes and then you start running into the consistency vs availability problem. Yeah, my point is essentially that these limits are massively high for most people. And that the hardest problems are usually in gaming, where you normally have soft real-time constraints and everyone-to-everyone communication (i.e. you can't shard the data).
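The pass-through-cache-with-If-Modified-Since idea can be sketched as a toy in-memory model. Everything here is illustrative (the class, the fake origin callable, the integer mtimes); it is not a real HTTP client, just the shape of the conditional-refresh logic:

```python
# Toy pass-through cache: on every read it revalidates against the origin,
# passing the modification time of its cached copy. The origin answers 304
# (still fresh, serve the cached copy) or 200 (here's a newer body).
class PassThroughCache:
    def __init__(self, origin):
        self.origin = origin   # callable(url, if_modified_since) -> (status, body, mtime)
        self.store = {}        # url -> (body, mtime)

    def get(self, url):
        cached = self.store.get(url)
        ims = cached[1] if cached else None
        status, body, mtime = self.origin(url, ims)
        if status == 304:                  # origin says our copy is still current
            return cached[0]
        self.store[url] = (body, mtime)    # 200: refresh the cached copy
        return body

def make_origin(files):
    """Fake origin over a dict of url -> (body, mtime)."""
    def origin(url, if_modified_since):
        body, mtime = files[url]
        if if_modified_since is not None and mtime <= if_modified_since:
            return 304, None, mtime
        return 200, body, mtime
    return origin
```

The revalidation round-trip happens on every read here; a real cache would also keep its own TTL to avoid hitting the origin each time, which is exactly where the staleness window in the rest of the thread comes from.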
|
# ? Jun 18, 2013 14:34 |
|
Zombywuf posted:Caching a website is almost always a read heavy load which makes the problems trivial. Pass-through cache on read and you're most of the way there. The usual mechanism is a simple If-Modified-Since check. I did note it as trivial, and offered solutions for the problem mentioned. Thanks for missing the point entirely, though. Also note that 'If-Modified-Since' doesn't necessarily work blindly on its own when what you get is a once-in-a-while request from a CDN or a service acting on your behalf that checks whether the resource has been updated and will only ask again once in a while. To reuse the same example from my last post, I have index.html, which loads an external styles.css, which inserts my-background.png. I push the new files to a server over whatever means you want, but for the sake of it, let's say I'm using SFTP or rsync and I overwrite files as they're uploaded. Because I'm uploading to only 2 servers, sequentially, each resource goes through the states (old, old) -> (new, old) -> (new, new) on (serv1, serv2), one file at a time:
If at any point between the first and the last step, a request for any of my resources cached by a CDN hits me on two different servers, there's potential to have bad data on the CDN, which might only refresh after however many minutes you configured it to check for changes. This leads to workarounds where you embed the version names in the URL, as I mentioned in my previous post, because it usually ends up being simpler to do things that way. Hell, it works to bypass lovely proxies in some networks that decide to do that caching for your users without you knowing about it. If-Modified-Since only works in a somewhat foolproof way when you control the entirety of your data and you are ready to deal with opening and closing connections just to check that the resource is up to date. Anyway, the point was that caching is state replication, and even that is subject to CAP shenanigans and distributed system properties, even on smaller systems. It's not a very severe problem, but it's one nonetheless. MononcQc fucked around with this message at 15:00 on Jun 18, 2013 |
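The embed-the-version-in-the-URL workaround is usually done by hashing the file contents into the asset's name, so stale HTML can only ever reference the matching stale asset. A minimal sketch, with a made-up naming scheme (the digest length and placement are arbitrary choices, not any particular tool's convention):

```python
import hashlib

# Content-addressed asset URL: a short digest of the file contents goes
# into the filename, so any change to the file changes the URL and a CDN
# or proxy cache can never serve the old body under the new name.
def versioned_url(path: str, contents: bytes) -> str:
    digest = hashlib.sha256(contents).hexdigest()[:8]
    stem, dot, ext = path.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{path}.{digest}"
```

The HTML then references `styles.<digest>.css`, and a deploy only has to make sure the new HTML goes live after the new assets exist, rather than atomically with them.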
# ? Jun 18, 2013 14:53 |
|
MononcQc posted:Also note that 'If-Modified-Since' doesn't necessarily work blindly on its own when what you get is a once-in-a-while request from a CDN or a service acting on your behalf to know if it's been updated or not and will only ask again once in a while. You are kind of inventing a problem here by having a lovely CDN that works this way. Even then, you're describing a common or garden race condition. Not to say that CDNs can't be lovely, but I'm struggling to see an actual operational scenario like this. Which is kinda my point: there's a huge bundle of academic work on distributed data stores, each little branch having its own jargon and its own model of whichever subset of the real world problems they're trying to solve, and they are mostly irrelevant to the majority of people. To most people a network partition means their datacenter is on fire.
|
# ? Jun 18, 2013 15:20 |
|
Zombywuf posted:You are kind of inventing a problem here by having a lovely CDN that works this way. Even then you're describing a common household or garden race condition. You can have Akamai work this way by using a simple TTL and having it refresh from time to time. Not all TTLs will be in perfect sync, and your state can get weird if that happens with long TTLs with some skew. It's a real-world problem I hit in production a few years ago when working for a dating website that suddenly had its homepage (visually) exploding because of it. It lasted for a few seconds to minutes, but users who happened to have their on-disk cache time out at the same time the CDN was in a borked state would end up complaining about the "broken website" a few hours after the fact. It's not even a question of network partition in this case, it's a question of not having atomic updates across the board. Uploading new files wasn't equivalent to an ACID transaction, and someone made multiple disjoint data reads from it and got an inconsistent view of the system later on. It is a race condition by all means, but it's also a real thing about distributed systems. A netsplit will be a problem when you guarantee consistency. In this case, caching doesn't guarantee consistency by design, and this is one of the real world consequences of relaxing consistency for latency in PACELC (a fancier tool than CAP to discuss design, which differentiates strategies used when there is or isn't a netsplit). Such a cache would be PA-EL (in case of Partition, go for Availability, Else choose Latency). Most SQL databases would be PC-EC (always consistent). You can compare the messed-up state during the deployment to what would happen if the node you deploy from ended up in a netsplit halfway through the deployment. The consequences would be the same, it would just take much longer to resolve the problem. For most practical purposes, a partition with a node is indistinguishable from high latency. 
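The TTL-skew scenario above can be shown with a toy model: two CDN edges cache the same asset with the same TTL but warmed at different moments, so for a window after a deploy one edge serves the new file and the other still serves the old one. All names and timings here are made up for illustration:

```python
# Toy CDN edge: serves from cache while within TTL, refetches from the
# origin once the TTL expires. Two edges with skewed fetch times end up
# disagreeing after a deploy until both TTLs have lapsed.
class Edge:
    def __init__(self, origin, ttl):
        self.origin, self.ttl = origin, ttl
        self.cache = {}                 # url -> (body, fetched_at)

    def get(self, url, now):
        hit = self.cache.get(url)
        if hit and now - hit[1] < self.ttl:
            return hit[0]               # within TTL: serve possibly-stale copy
        body = self.origin[url]         # TTL expired: refetch from origin
        self.cache[url] = (body, now)
        return body

origin = {"/index.html": "v1", "/styles.css": "v1"}
a, b = Edge(origin, ttl=60), Edge(origin, ttl=60)
a.get("/index.html", now=0)             # edge A warms its cache at t=0
b.get("/index.html", now=30)            # edge B warms at t=30: TTLs are skewed
origin["/index.html"] = "v2"            # deploy happens at t=40
# at t=70, A's TTL has lapsed but B's has not: the edges disagree
```

Asking both edges at t=70 gets "v2" from A and "v1" from B, which is exactly the window where a user's browser can end up mixing new HTML with old CSS.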
MononcQc fucked around with this message at 15:53 on Jun 18, 2013 |
# ? Jun 18, 2013 15:51 |
|
y couldn't you treat the uploaded data as a set and require each node to wait to publish it locally until it has a good copy of the entire set?
|
# ? Jun 18, 2013 15:53 |
|
Shaggar posted:y couldn't you treat the uploaded data as a set and require each node to wait to publish it locally until it has a good copy of the entire set? You could. This either means you need some extra software to manage it, or a protocol (or filesystem) that supports such a feature in the first place. It looks a lot like a database, but could be a valid option. Another alternative would be to have nodes standing by where you can just upgrade everything, then swap the live ones with the upgraded ones in one operation (say, reload HAProxy's config). The original question was "at what size does MySQL/oracle stop being good enough, provided nothing stupid is going on?" -- if you start installing these things or planning for them, the answer is likely "as soon as you have more than one node that stores and serves state, and it is important that this state is synchronized."
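The stage-everything-then-swap-in-one-operation idea has a classic single-host analogue: upload the whole release off to the side, then flip a symlink atomically. The directory and link names below are made up; this is a sketch of the technique, not any particular deploy tool:

```python
import os
import tempfile

# Stage the new release, then atomically repoint the "current" symlink.
# os.replace is an atomic rename on POSIX, so a reader following the link
# sees either the old release or the new one, never a mix.
def deploy(release_dir, live_link):
    tmp = live_link + ".tmp"
    os.symlink(release_dir, tmp)    # build the new link off to the side
    os.replace(tmp, live_link)      # atomic swap over the old link

# Tiny demo under a temp directory.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "release-1"))
os.mkdir(os.path.join(root, "release-2"))
live = os.path.join(root, "current")
deploy(os.path.join(root, "release-1"), live)
first = os.readlink(live)
deploy(os.path.join(root, "release-2"), live)
second = os.readlink(live)
```

The multi-node version of the same move is what the HAProxy-reload suggestion does: the "symlink" becomes the load balancer config, and the swap is one reload.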
|
# ? Jun 18, 2013 16:03 |
|
MononcQc posted:You can have Akamai work this way by using a simple TTL and having it refresh from time to time. Not all TTLs will be in perfect sync, and your state can get weird if that happens with long TTLs with some skew. quote:It's not even a question of network partition in this case, it's a question of not having atomic updates across the board. Uploading new files wasn't equivalent to an ACID transaction and someone made multiple disjoint data reads from it and got an inconsistent view of the system later on. There are a number of ways you can make them atomic. quote:Most SQL databases would be PC-EC (always consistent). How do you have a network partition with *an* SQL database? Most have support (or third-party support) for many replication modes, with different capabilities. I've never found using the CAP metaphor useful for anything other than fielding demands from managers that it always do everything perfectly. There are many questions you need to ask about a system with multiple interlocking parts; reducing it to 5 letters (with only 4 degrees of freedom) just seems an oversimplification.
|
# ? Jun 18, 2013 16:26 |
|
Zombywuf posted:How do you have a network partition with *an* SQL database? quote:Well… almost. Even though the Postgres server is always consistent, the distributed system composed of the server and client together may not be consistent. It’s possible for the client and server to disagree about whether or not a transaction took place. http://aphyr.com/posts/282-call-me-maybe-postgres
|
# ? Jun 18, 2013 16:42 |
|
Zombywuf posted:Well don't use Akamai that way *shakes fist*. That's an option. The other one was version counters on CSS poo poo so that we would never see requests for static assets past a few page loads. As far as I remember that's what we went with in the end, and some hook that updated the value once the deployment was done or something. Been years since then though, and they might have changed how it works since then. Zombywuf posted:There are a number of ways you can make them atomic. Yes, but that you need to make them atomic means that you're dealing with out-of-SQL distributed system shenanigans. Zombywuf posted:How do you have a network partition with *an* SQL database? Most have support for (or have third party support) for many replication modes, with different capabilities. Well, I mentioned that latency to a single node and a partition from a single node are often impossible to distinguish. The other part of that is that a failure and latency to a node are also pretty much impossible to distinguish unless there is some kind of explicit and reliable notification for it. The fun thing about that is that you can start treating your master failing the same way as you would treat it going alone behind a partition from all the other nodes. You avoid the split brain problem, and the node that failed can reasonably know it failed, but you can't necessarily assume that the problem is really that it crashed without possibly being wrong from the perspective of any other node in the system. You may also need to see clients as nodes in that system if they end up making decisions based on some of the state, and that may suddenly make things way more complex. Because of this, you usually end up still having to think in CAP terms "just in case", but assuming that the master crashed when you're in a closed network in your control will be a much safer bet than when you do it over AWS in many separate geographical regions. 
The choice of distributed algorithm will often reflect that bet and impact the production experience. E: if you meant a single instance of a SQL DB, then yeah, partitions are very unlikely, although as posted above, clients can be considered part of the system and mess things up. Zombywuf posted:I've never found using the CAP metaphor useful for anything other than fielding demands from managers that it always do everything perfectly. I agree that CAP is pretty much just the assumption that "yeah, you can't have the system do everything" being proved, and so I will also agree with you it won't be useful outside of getting rid of ridiculous demands and some basic theorem to keep in mind. It didn't keep people (myself included) from using it blindly to describe everything distributed. PACELC is an improvement because it does have the idea that partitions (and failures and high latency) aren't there all the time, and it brought the distinction that you could have consistency as much as you want when the cluster is healthy, although it may cost more. It's an oversimplification, but it's as useful a label as any other when it comes to filtering technology. There's a shitload more ways to describe systems and ways they can mess your life up, but I guess we needed to start somewhere. MononcQc fucked around with this message at 17:04 on Jun 18, 2013 |
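The indistinguishability point above (crash vs. slowness vs. partition) can be shown with a toy sketch: from the caller's side, all you ever observe is a timeout, whatever actually happened on the other end. The functions and timings here are entirely made up:

```python
import queue
import threading
import time

# Run a request against a "server" (any callable) in a thread and wait up
# to `timeout` seconds for an answer. A missing answer looks identical
# whether the peer crashed, is merely slow, or is behind a partition.
def call_with_timeout(server, request, timeout):
    box = queue.Queue()
    threading.Thread(target=lambda: box.put(server(request)), daemon=True).start()
    try:
        return box.get(timeout=timeout)
    except queue.Empty:
        return "timeout"   # could be any of the three causes; we can't tell

slow = lambda req: (time.sleep(0.2), "ok")[1]   # answers, but takes 200ms
dead = lambda req: time.sleep(10)               # effectively never answers
```

Whether you retry, fail over, or give up on a timeout is exactly the consistency-vs-availability bet discussed above, because the caller has no extra information to act on.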
# ? Jun 18, 2013 17:00 |
|
pro
|
# ? Jun 19, 2013 19:32 |
|
Paging MononcQc: http://forums.somethingawful.com/showthread.php?threadid=3551821 just c/p your posts over tia
|
# ? Jun 19, 2013 21:50 |
|
looking at this E: it is a drat old thread and I have no idea what to post in there
|
# ? Jun 19, 2013 22:31 |
|
This is your daily warning to move away from MySQL: https://blog.mariadb.org/mysql-man-pages-silently-relicensed-away-from-gpl/
|
# ? Jun 20, 2013 10:37 |
|
Zombywuf posted:This is your daily warning to move away from MySQL: https://blog.mariadb.org/mysql-man-pages-silently-relicensed-away-from-gpl/ this is your daily reminder that oracle owns:

[19 Jun 7:28] Norvald Ryeng
Description: The man pages of Community Server should be GPL, but since 5.5.31, packages have contained man pages with a different license. 5.5.30 man pages are correctly licensed. The bug exists in the latest release of 5.1, 5.5, 5.6 and 5.7. I haven't checked older versions of 5.1, 5.6 or 5.7.
How to repeat: Read the man pages.
Suggested fix: Change back to the correct license.

[19 Jun 7:52] Yngve Svendsen
Thank you for the report. This is indeed a bug, where the build system erroneously and silently started pulling in man pages with the wrong set of copyright headers.

[19 Jun 8:23] Balasubramanian Kandasamy
Working on the fix.

[19 Jun 8:47] Yngve Svendsen
Once the fixes have been made to the build system, we will rebuild the latest 5.1, 5.5, 5.6 releases plus the latest 5.7 milestone and make those available publicly asap.

[19 Jun 18:00] Yngve Svendsen
5.5 rebuilds started. Other versions will follow as soon as these have been confirmed to contain the correctly licensed man files.

[19 Jun 18:28] Balasubramanian Kandasamy
Fix tested on 5.5.32 release branch. GPL packages have got the correct man pages now. Re-build in progress for 5.6.12 and 5.7.1 release branches.
|
# ? Jun 20, 2013 10:53 |
|
seriously, two people just pulled 10-hour workdays to fix four different open-source software branches because they had wrong documentation. which other open-source organization than oracle would do that in response to a bug report?
|
# ? Jun 20, 2013 11:05 |
|
MononcQc posted:looking at this post this https://www.youtube.com/watch?v=yArRnznVJwY
|
# ? Jun 20, 2013 12:37 |
|
Because distributed systems links just keep falling in my hands for no good reason these days, Notes on Distributed Systems for Young Bloods.
|
# ? Jun 20, 2013 13:04 |
|
Max Facetime posted:seriously, two people just pulled 10-hour workdays to fix four different open-source software branches because they had wrong documentation mozilla, red hat... pretty much all of them tbh
|
# ? Jun 20, 2013 13:12 |
|
Suspicious Dish posted:mozilla, red hat... pretty much all of them tbh I want to see one obvious bug affecting multiple branches that isn't a security vulnerability get fixed in 10 hours without asking the reporter for more information, by Mozilla, redhat or comedy option GNU. linus says all kernel bugs are exploits waiting to happen, so Linux kernel bugs don't apply
|
# ? Jun 20, 2013 14:47 |
|
Max Facetime posted:I want to see one obvious bug affecting multiple branches that isn't a security vulnerability get fixed in 10 hours without asking the reporter for more information, by Mozilla, redhat or comedy option GNU yeah, we're talking about updating the license information for the documentation -- not a lot of people are going to treat that as high-priority
|
# ? Jun 20, 2013 14:51 |
|
heres one i found after two seconds of looking http://sourceware.org/bugzilla/show_bug.cgi?id=14273
|
# ? Jun 20, 2013 14:57 |
|
Suspicious Dish posted:heres one i found after two seconds of looking pretty good, but it looks like the second, older branch got its patch accepted only after about 1 week from the bug report
|
# ? Jun 20, 2013 15:21 |
|
backports for glibc are done on a schedule. they cherry-pick all applicable commits at once.
|
# ? Jun 20, 2013 15:28 |
|
but hey if you want to knock them for sound engineering practices be my guest
|
# ? Jun 20, 2013 15:29 |
|
Sound engineers must be awesome for all other engineering disciplines to try to copy them
|
# ? Jun 20, 2013 15:58 |
|
Suspicious Dish posted:but hey if you want to knock them for sound engineering practices be my guest ok their "engineering" "practices" make them inflexible and cause unnecessary delays in getting fixes to customers and i'm done knocking
|
# ? Jun 20, 2013 16:07 |
|
MononcQc posted:Because distributed systems links just keep falling in my hands for no good reason these days, Notes on Distributed Systems for Young Bloods. I like that there's a distributed systems scene in SF and most of the people in it, probably because I only read their tweets and blogs and chats and don't interact with them in SF
|
# ? Jun 20, 2013 16:09 |
|
OK, it's time for me to add some features to my twitter cop logger. I want to try to replace any street address that is clearly a business with that business name. For example, I know 2875 W MLK is the Walmart, I just want that to say "Walmart" instead of having the address. what's my best angle on this - google query for the address, scrape the first result? any ideas
|
# ? Jun 20, 2013 18:49 |
|
code:
|
# ? Jun 20, 2013 18:52 |
|
we were laughing about it in irc last night, basically the two walmarts in town are the most dispatched addresses by a factor of two. the third is the police station itself imma rerun my parsers tonight and see how it looks after 11 months of tracking, last time i ran them was after 5
|
# ? Jun 20, 2013 18:53 |
|
Jonny 290 posted:OK, its time for me to add some features to my twitter cop logger Look through your last few months of logs, you will probably find that addresses follow an exponential distribution or something like it. Scrape google once for the top 100 or whatever and throw em in a lookup table.
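The count-then-lookup suggestion above is a few lines of stdlib. The sample addresses below are made up except for the "2875 W MLK is the Walmart" example from the thread, and the lookup table is the kind you'd fill by hand or with a one-off scrape:

```python
from collections import Counter

# Dispatch addresses are heavily skewed toward a few locations, so count
# them and keep only the top N. Resolving those N to business names once
# (by hand or a one-off scrape) is enough to label most log lines.
def top_addresses(log_addresses, n=100):
    return [addr for addr, _ in Counter(log_addresses).most_common(n)]

# One-time lookup table: address -> business name.
lookup = {"2875 W MLK": "Walmart"}

def label(addr):
    return lookup.get(addr, addr)   # fall back to the raw address
```

Since the distribution barely changes month to month, the table only needs an occasional refresh rather than a live geocoding query per tweet.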
|
# ? Jun 20, 2013 18:56 |
|
yeah that will prob work tbqh im going to start looking at heatmap libraries so i can make some crime maps phase 2 is databasing the intake log records, and those have names and addresses, and that is when this here data mine will start to be really fun
|
# ? Jun 20, 2013 18:59 |
|
Jonny 290 posted:we were laughing about it in irc last night, basically the two walmarts in town are the most dispatched addresses by a factor of two. the third is the police station itself there's a reason they call it fayettenam
|
# ? Jun 20, 2013 19:03 |
|
|
wait, arkansas has a fayettenam too apparently. i thought it was a north carolina exclusive, but i guess it's a chain.
|
# ? Jun 20, 2013 19:03 |