|
enraged_camel posted:Incidentally, this was on Ars Technica today.

The technical issues that still cripple the project are telling. The devices were tedious and annoying to use, and were wrong more than a quarter of the time despite having been primed with hourly emotion data from the previous stage. Who's going to use an anti-suicide machine that will falsely mark a non-suicidal person as suicidal 25% of the time?

RealityApologist posted:Again, this doesn't kill CP; it doesn't even kill it on reddit. But the action has an impact on how the issue is handled by all the interested parties. It might drive CP underground even more, making it that much harder to call attention to the matter. But it also reinforces the norms of the rest of society, making explicit that the general consensus is that such behavior isn't acceptable, and engaging in it will make you the target of the crowd. Some extreme elements of the anti-CP group might overstep that consensus by attempting vigilante justice, hunting down and humiliating the CPers; this might result in an opposite form of extremism that attempts to assert the legitimacy of CP enthusiasts against their attackers. Both sides might see their cause as incredibly important and worth the devotion of the resources at their command, but fortunately they aren't the ones deciding these things. What matters on my system is the overall consensus of how important the issue is; if they think they need more assistance in their cause, they have to make the case for it in public, or among the communities to which they are appealing.

Do you understand that you are literally saying that lynching accused criminals is preferable to our current society, and that it's okay if vigilantes are roaming the streets because you think communities will self-correct by forming anti-lynching mobs to defend suspected pedophiles? How can this be anything but purestrain libertarianism?

It's also telling that your response to questions about the handling of any major social problem in your "attention economy" is "well, I'm sure community or humanitarian groups will form to help the disadvantaged". What if, y'know, that doesn't happen and the boot keeps stamping on a human face forever?

RealityApologist posted:On my system, your information is partially public, but no one is put in a position of authority (like an "employer") where they might use that information against you, or use it to deprive you of your livelihood. There simply are no institutional authorities in my system that can leverage that knowledge against your institutional role.

People don't need to be authorities to use information against you. Have you ever heard the term "blackmail"?
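The false-positive objection above is even stronger than it looks, thanks to the base-rate problem. A back-of-the-envelope Bayes calculation (all numbers here are hypothetical for illustration, except the 25% false-positive rate quoted from the article; the 1% base rate and 75% sensitivity are assumptions):

```python
def positive_predictive_value(base_rate, sensitivity, false_positive_rate):
    """P(actually at risk | device raised a flag), by Bayes' rule."""
    true_flags = sensitivity * base_rate
    false_flags = false_positive_rate * (1.0 - base_rate)
    return true_flags / (true_flags + false_flags)

# Hypothetical numbers: 1% of users at risk, 75% sensitivity,
# plus the 25% false-positive rate from the article.
ppv = positive_predictive_value(0.01, 0.75, 0.25)
print(round(ppv, 3))  # roughly 0.029
```

In other words, even granting the device generous sensitivity, flagging a rare condition with a 25% false-positive rate means roughly 97% of its alarms would be false.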
|
# ? Dec 8, 2013 19:55 |
|
Eripsa, your Attention Economy literally sounds like the setting of a dystopian sci-fi novel.
|
# ? Dec 8, 2013 20:36 |
|
RealityApologist posted:This trade off in privacy should be accompanied with a reduction in the disparity of knowledge/power between parties. Or at least, that would be the case if privacy were traded in exchange for public utility, as I'm arguing. On my system, your information is partially public, but no one is put in a position of authority (like an "employer") where they might use that information against you, or use it to deprive you of your livelihood. There simply are no institutional authorities in my system that can leverage that knowledge against your institutional role. So giving up your information isn't making any private parties more powerful. In the existing system, in contrast, privacy is usually traded to private or closed parties (like Facebook or the NSA) who not only can use that information against you, but can also use it as a competitive advantage over other private parties, with no functional guarantees for the user beyond the convenience of the service itself. So in the existing world, the user has to be careful that the wrong party doesn't acquire their information, whereas that's not a systemic issue with mine.

*makes a claim that in The Attention Economy, there are no advantages to having all the information*

*is a peon, but still holds a position of authority over a bunch of students submitting their work directly to him*

*sees no contradiction in terms as he makes an analogy involving a creepy uncle who obsessively harvests information on his relatives for years*
|
# ? Dec 8, 2013 20:44 |
|
In the Attention Economy, if someone is spouting enough nonsense to enough people that he registers as a blip to the all-volunteer Troll Police, does he get fired? Or is this a thing where the plucky resistance fighters offer you a red pill and a blue pill and if you pick the right one you step outside all the security cameras and can also fly?
|
# ? Dec 8, 2013 20:47 |
|
When I masturbate, I often stare blankly at my dick while I imagine a steamy sex session with Jessica Alba. In the attention economy, would this ensure that more resources are allocated to dicks than Jessica Alba? Because, drat...
|
# ? Dec 8, 2013 20:52 |
|
archangelwar posted:When I masturbate, I often stare blankly at my dick while I imagine a steamy sex session with Jessica Alba. In the attention economy, would this ensure that more resources are allocated to dicks than Jessica Alba? Because, drat... After the masturbation session, you would simply fill out an online form honestly saying what you were masturbating about. Then, the attention economy would ensure more Jessica Albas were produced, leading to a golden age of prosperity and great asses for all.
|
# ? Dec 8, 2013 20:56 |
|
archangelwar posted:When I masturbate, I often stare blankly at my dick while I imagine a steamy sex session with Jessica Alba. In the attention economy, would this ensure that more resources are allocated to dicks than Jessica Alba? Because, drat...

For the past five years, the creepy uncle has been taking multi-camera recordings of your wanking, which has allowed him to make obsessively detailed records of every porn video you had open each year.
|
# ? Dec 8, 2013 21:01 |
|
Obdicut posted:After the masturbation session, you would simply fill out an online form honestly saying what you were masturbating about. Then, the attention economy would ensure more Jessica Albas were produced, leading to a golden age of prosperity and great asses for all.

You're assuming here that it can produce additional Jessica Albas. If that's not possible, it seems like archangelwar spending enough time on his Jessica Alba masturbation sessions might push the priority of him having sex with her to the highest level of the attention economy, which it will as efficiently as possible convert to Jessica Alba showing up at archangelwar's doorstep.
|
# ? Dec 8, 2013 21:16 |
|
A Buttery Pastry posted:You're assuming here that it can produce additional Jessica Albas. If that's not possible, it seems like archangelwar spending enough time on his Jessica Alba masturbation sessions might push the priority of him having sex with her to the highest level of the attention economy, which it will as efficiently as possible convert to Jessica Alba showing up at archangelwar's doorstep. Jessica Alba is going to have to spend all day long paying attention to a chastity belt or a taser or the concept of solitude or something. Won't someone think of Jessica Alba?
|
# ? Dec 8, 2013 21:19 |
|
A Buttery Pastry posted:You're assuming here that it can produce additional Jessica Albas. If that's not possible, it seems like archangelwar spending enough time on his Jessica Alba masturbation sessions might push the priority of him having sex with her to the highest level of the attention economy, which it will as efficiently as possible convert to Jessica Alba showing up at archangelwar's doorstep. Eripsa, you are on to something here!
|
# ? Dec 8, 2013 21:32 |
|
Applied Lessons in Attentionology 318: Final Exam

1. You are a TA in Applied Attentionology at a highly regarded educational institution. For most of the year, your students have paid no attention to you whatsoever and your RateMyProfessor Minute-By-Minute graphs are through the floor; if nothing changes, you have no chance of passing your PhD boards and may have to get a job exporting Rick Astley videos to the Third World. The night before the final exam, an attractive undergrad walks into your office and begs for help. Staring at you the whole time, she offers to make it worth your while to pass her. Do you:

1) Activate your Google Glass, record a porn video and upload it to Youtube, hoping her attractiveness overcomes your pastiness and flab and gets you a better job offer in the reality sex industry;

2) See through her ruse and do nothing - she's obviously already recording this conversation as part of her final project, and going along with a fourth-rate student who won't even be able to edit your flab out of her video will get you nowhere;

3) She's already staring at you. As long as you keep her attention long enough, both of your metrics are bound to skyrocket. "Have you ever seen A Clockwork Orange?"
|
# ? Dec 8, 2013 21:39 |
|
archangelwar posted:
|
# ? Dec 8, 2013 21:40 |
|
It's snowing outside my room now. If I stare at it because I think snow is pretty looking, especially when it's covering the trees, will the attention economy produce more snow via nuclear winter? The same question also applies to pugs; furthermore, will the attention economy produce pugs that look even cuter but suffer from worse and worse health problems, or healthier ones that aren't as cute and thus aren't as interesting? Also, say someone goes to an abortion clinic and there aren't enough doctors or nurses there to serve the clients. Do all the women who want abortions just pay attention to the lack of a doctor until one appears, or pay attention to the clinic as a whole and hope a clinic bomber doesn't notice the increased attention and firebomb the clinic?
|
# ? Dec 8, 2013 21:43 |
|
Eripsa's slow descent into angrily blaming others for being unable to comprehend his incomprehensible gibberish arguments is my favorite part of this thread.

Seriously dude, you post like you've spent years and years generating bullshit markov verbiage to fill out thirty-page papers and now you're unable to stop. What makes this funny is that you somehow think this won't be noticed on a forum predominantly frequented by other academic knuckledraggers who have also drunkenly cranked out last-minute deconstructive essays about nothing.

boner confessor fucked around with this message at 22:30 on Dec 8, 2013 |
# ? Dec 8, 2013 22:27 |
|
Main Paineframe posted:The technical issues that still cripple the project are telling. The devices were tedious and annoying to use, and were wrong more than a quarter of the time despite having been primed with hourly emotion data from the previous stage. Who's going to use an anti-suicide machine that will falsely mark a non-suicidal person as suicidal 25% of the time? They demonstrated it as a proof of concept. You know what that is, right?
|
# ? Dec 9, 2013 00:41 |
|
enraged_camel posted:They demonstrated it as a proof of concept. You know what that is, right? Yeah, it's an exercise that you do in order to identify critical flaws in your idea and test it for feasibility. Given that, how do you think the test went as a proof of concept? What would have to change in order to improve the results?
|
# ? Dec 9, 2013 00:43 |
|
Popular Thug Drink posted:Seriously dude you post like you've spent years and years generating bullshit markov verbage to fill out thirty page papers and now you're unable to stop. What makes this funny is that you somehow think that this won't be noticed on a forum predominantly frequented by other academic knuckledraggers who have also drunkenly cranked out last minute deconstructive essays about nothing. But how can my knowledge be pretend? The debt I incurred to acquire it is all too real.
|
# ? Dec 9, 2013 02:15 |
|
RealityApologist posted:You are correct that everyone has individual preferences. The point isn't to predict them from scratch, but to anticipate them from models generated by their past behaviour. If I have the history of the meals you've eaten over the last year, there will be patterns that emerge that will allow us to predict what you will likely eat next year. This is why I've emphasized the importance of human computation in this thread, because these problems can be (and are routinely) solved by brains, and that reduces the computational load that is carried elsewhere.

Ok, so how is this model of computation and self-organisation any different, say, from Amazon's product suggestions? Viewing and purchasing habits are used to predict what products you might like to buy. Or perhaps Target predicting that you are pregnant before your parents found out (http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/). After each criticism, you move the goalposts wider and wider, to the point that it appears you are arguing for little more than a product suggestion algorithm. Sure, you are able to correlate customer-supplied details/demographics to clusters of purchasing habits, but that is far less ambitious and controversial than what you were originally proposing. Again, these algorithms only work in situations where the personal data is explicitly supplied and for highly categorised and monitored behaviours, such as commercial products and behaviours like completed purchases and webpage views. More importantly, these data sets all share the same well-defined assumptions (mandatory customer details) and a relatively small (each customer has a handful of transactions and personal details), consistent (each purchase has a quantity, date, cost), sorted and categorised set of data (product catalogues are better categorised than book libraries).

When you apply this more broadly to an individual's behaviour, all of the technical requirements that are needed for an effective suggestion algorithm cease to exist. You then have the problem of a computer needing to determine the intention of each behaviour in order to categorise it and deduce accurate predictions, which is far, far harder than you give credit for.

RealityApologist posted:The attention economy (aka, the turkey singularity)

Oh great, we already do that now. Governments and businesses plan to accommodate future needs from historical, census and consultation data; see any European-style socialist government. It might not make the best decisions for everyone, but it does try to make sure no one is starving or dying from a treatable illness. Your examples have a very American view of how government and societies operate. If the 'central planning authority' had access to the attention economy's computing resources, it would give the planners a way to best distribute their food. They could keep all of the historical food needs on file, as well as recipes that are varied and healthy. It would also have population health data on allergy rates, so when someone requests a modification, there are enough safe foods already baked into the master plan. Then when the year's harvest comes in, they can stick it in the computer and crunch out meals that would best serve its citizens. This is essentially how socialist governments plan things today.

From my foreigner's perspective, it seems that a large part of America's problems stem from the self-organising, independent governorship that you are advocating for. Each state has a different school, health and infrastructure system, which causes a whole heap of problems whenever one state's or county's system interacts with another's. Universal/single-payer healthcare is a great example of something centrally planned that benefits everyone, but would be next to impossible under a self-organising government.
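The gap Tokamak describes is visible even in a toy version of the kind of product-suggestion algorithm being discussed: it only works because purchases are discrete, labelled, explicitly supplied events. A minimal co-purchase recommender (all data hypothetical) might look like this:

```python
from collections import Counter
from itertools import permutations

def build_copurchase_counts(baskets):
    """Count how often each pair of products appears in the same basket."""
    counts = {}
    for basket in baskets:
        for a, b in permutations(set(basket), 2):
            counts.setdefault(a, Counter())[b] += 1
    return counts

def recommend(counts, product):
    """Suggest the item most often bought alongside `product`."""
    if product not in counts:
        return None  # no purchase history: the algorithm has nothing to say
    return counts[product].most_common(1)[0][0]

# Hypothetical purchase histories: tidy, categorised, explicitly supplied.
baskets = [
    ["diapers", "wipes"],
    ["diapers", "wipes", "formula"],
    ["wipes", "formula"],
]
counts = build_copurchase_counts(baskets)
print(recommend(counts, "diapers"))  # wipes (co-purchased twice)
```

Everything that makes this tractable — a fixed catalogue, discrete items, explicit baskets — is exactly what disappears once the input is "everything a person does all day", which is the point being made above.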
|
# ? Dec 9, 2013 03:25 |
|
Tokamak posted:Ok, so how is this model of computation and self-organisation any different, say, from Amazon's product suggestions?

Again, I'm suggesting that we give these kinds of algorithms actual political authority over public decision making, instead of merely using them as a tool to assist private interests. Although the central planning authority could appeal to the same data, they have only an indirect incentive to use it, and a mandate that doesn't depend on that data. I'm suggesting building a system with these features built into its basic mechanics. It's not radically different from the technologies you are drawing analogies to, but it's certainly different from the system on the table.

Again, I'm not suggesting it fixes all problems, and I'm not assuming some radical technological leap. There are real issues that need addressing with such a system, and I'm happy to talk about its advantages and disadvantages relative to the existing system. Somehow this thread adopted a rhythm where people demand answers of me and then laugh and mock me when they're not satisfied. That's unfortunate, because I obviously don't have all the answers, and the basic premise of the idea seems simple enough to me that others should be able to carry on the discussion of both criticizing and defending such a system, without it all turning on what I say.

quote:You then have the problem of a computer needing to determine the intention of each behaviour in order to categorise it and deduce accurate predictions, which is far, far harder than you give credit for.

Again, I don't think you need to determine the intention of the behavior; I think you just need to observe the actions being taken. A lot of actions can be taken; it's not a trivial task. But it's not as intractable as you suggest. Here's a link to my thoughts after the keynote from HCOMP13, which involved a discussion about predicting (and motivating) certain actions from users. It gives some indications of where we are today, but these models have obvious and straightforward extensions into the kinds of systems we are talking about. It also adds some content to this thread beyond the bullying. http://digitalinterface.blogspot.com/2013/11/steering-crowd.html

quote:I have been completely enamored with Jon Kleinberg's keynote address from HCOMP2013. It is the first model of human computation in field-theoretic terms I've encountered, and it is absolutely brilliant. Kleinberg is concerned with badges, like those used on Foursquare, Coursera, StackOverflow and the like. The badges provide some incentive to complete tasks that the system wants users to make; it gamifies the computational goals so people are motivated to complete the task. Kleinberg's paper provides a model for understanding how these incentives influence behavior.
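For what it's worth, the badge mechanism described in that quote can be sketched in a few lines. This is a toy reconstruction, not Kleinberg's actual model: an agent chooses how many actions of type 'a' to perform out of a fixed budget, and a badge awarded at a threshold can pull the optimum away from the agent's intrinsic preference.

```python
def best_plan(steps, pref_a, badge_threshold=None, badge_value=0.0):
    """Return the utility-maximising number of 'a' actions out of `steps`.

    Each 'a' action is worth `pref_a`, each 'b' action `1 - pref_a`;
    reaching `badge_threshold` 'a' actions adds a one-off `badge_value`.
    """
    def utility(k):
        u = k * pref_a + (steps - k) * (1.0 - pref_a)
        if badge_threshold is not None and k >= badge_threshold:
            u += badge_value
        return u
    return max(range(steps + 1), key=utility)

# An agent who mildly prefers 'b' (pref_a = 0.4) does no 'a' actions...
print(best_plan(10, 0.4))                                      # 0
# ...until a badge for five 'a' actions makes exactly five worth doing.
print(best_plan(10, 0.4, badge_threshold=5, badge_value=2.0))  # 5
```

The design point the keynote is getting at: the badge bends behavior toward the system's goal, but only near the threshold, and only while the bonus outweighs the agent's own preferences — which is a long way from self-organising an economy.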
|
# ? Dec 9, 2013 03:47 |
|
enraged_camel posted:Academia has criticism, sure, but it doesn't come in the form of publicly insulting the person.

Man, you're in for a shock... I've sat there and watched people come to blows (granted, at a university staff tavern after a lot of booze) over Derrida before. Literally punching each other.
|
# ? Dec 9, 2013 04:07 |
|
^^^ Hey Duck, you're just in time. It seems we are not the only people confused by Eripsa's incoherent ramblings and dismissal of criticism and dissenting views...

Student Feedback:

quote:He makes his opinions known. Because this is a class dealing a lot with religion, people had many different opinions. He many times called people out on their beliefs, rolled his eyes at comments or questions, or was just rude. We have a month left of the semester and have not received one grade back yet. Says um and paces, overall not impressed!

Summarises Eripsa's academic rigour in 3 sentences! Very impressive.

quote:First half of class was very easy and interesting. I was very nervous to take it because i HATE math. But it's not the kind of math you typically think of. Second half of class got pretty hard but he helps you through it and grades the tests pretty leaniently. You definitely have to go to class..trust me, you'll regret it if you don't. Funny guy

Not as mathematically analytic as my first impressions seemed to indicate. Needs lots of help getting through the material, but at the end of the course just gives everyone passing marks. Don't worry though, he will make you laugh (and tear your eyeballs out)!

quote:Class points are mostly posting on blogs. Dan will open your mind to some brilliant ideas and also keep you up to date on current political events. HIGHLY recommend this class to any IS or CS majors becuase he incorporates cool concepts relating to IT. Overall great class, but make sure you put lots of effort into your midterm and final papers.

Stoking the poor, naive CS libertarian mind with bunk philosophy. Just hang out and post on the class blog to get through the course. Brilliant guy, as long as you come from the same sheltered background as Eripsa.

quote:somewhat challenging, yet interesting class. Not the easiest grader, made it clear you had to earn your A's. But when points were deducted, his reasons made sense. Entire class's work is written. blogs, twitter, written take home essay exams. Thought provoking discussions. offered a lot of EC tho, especially at the end.

When you realise what the gently caress he is saying... it just makes sense.

quote:Horrible teacher. He says umm and uhh every other word. Very distracting. He also doesn't care about the opinions of students and will rip you a new one if you disagree. Attendance is not mandatory, but he does give pop quizzes on the readings so important to go. no tests all grades are from blogs, papers, and participation.

Goes off on anyone with a divergent view, and doesn't care what you think. Says um and ah constantly to fill the space while he is thinking up poo poo to say on the fly. Doesn't bother to formally assess the class and gives grades based on self-organisation. That's our Eripsa.

Mods: I've tried to keep this post free of private, personal details; it is sourced from the poster's public academic record. It is meant to illustrate why we are having so much trouble communicating and figuring out what the gently caress they are talking about.

His arxiv paper co-author's (RealityApologist, the account he is posting under) faculty photo has him wearing welding goggles and a fedora. I can't make this poo poo up. Also let your burnout philosophy buddy know about abductive reasoning, because it seems he is doing a dissertation on the scientific validity of climate change. We already had philosophically justified, categorically related sciences like evolution and astronomy in the early 1900s.

(USER WAS PUT ON PROBATION FOR THIS POST)
|
# ? Dec 9, 2013 04:11 |
|
RealityApologist posted:Again, I'm suggesting that we give these kinds of algorithms actual political authority over public decision making, instead of merely using them as a tool to assist private interests. Although the central planning authority could appeal to the same data, they have only an indirect incentive to use it, and a mandate that doesn't depend on that data. I'm suggesting building a system with these features built into its basic mechanics. It's not radically different from the technologies you are drawing analogies to, but it's certainly different from the system on the table.

Wait... planned European socialism doesn't have a mandate to best serve its citizens? Do you think commies and socialists are doing all this planning and government spending because it is ideologically dictated to plan and spend? Of course the incentive for computer assistance is indirect. They will use computational models if they are cheaper and more effective than more traditional techniques. We would be in dire straits if we relied on computers any less than we do now.

So problem solved, you guys! Make customer analytics the core of your political philosophy and let the great machine figure out what you really want. Eventually it will get so good from all the data it is mysteriously collecting, it will materialise your desires in the matter compiler before you even think about them. Don't worry, poor browns, Google has got your back with Google Loon; floating the singularity to a remote settlement near you.

Unfortunately for you, it IS a technological leap. And it is, for all intents and purposes, intractable. Our brains evolved to filter out all of the potential choices and hazards that would choke a computer. If we can't simulate even the simplest brain, how do you propose we simulate a brain-like decision algorithm that takes the sum of everyone's monitored actions as input?

Hang on... now I'm listening. You haven't even suggested how this level of computational complexity is remotely possible; you just assume it is and let your theory run wild. Badges => Self-Organisation.

I should really reconsider doing postgraduate Philosophy of Science if this is the calibre of minds the American college system is churning out. I guess when they are making a profit off your tuition fees, they couldn't care less whether you are actually learning anything.

rudatron posted:Crazy uncle has this one weird trick to solve politics forever! Economists HATE him!

Pretty much...

Tokamak fucked around with this message at 04:55 on Dec 9, 2013 |
# ? Dec 9, 2013 04:46 |
|
Do I have to watch Zeitgeist to understand this guff? Because if it doesn't involve some sort of crazed floating future city based on barter and supercomputers and some sort of illuminati-repulsion field, I'm sure gonna be disappointed.
|
# ? Dec 9, 2013 05:51 |
|
RealityApologist posted:Again, I'm suggesting that we give these kinds of algorithms actual political authority over public decision making, instead of merely using them as a tool to assist private interests. Although the central planning authority could appeal to the same data, they have only an indirect incentive to use it, and a mandate that doesn't depend on that data. I'm suggesting building a system with these features built into its basic mechanics. It's not radically different from the technologies you are drawing analogies to, but it's certainly different from the system on the table.

Algorithms, by definition, cannot have political authority. They're tools that can be used by the entity that does have political authority, but they cannot themselves have political authority. Something needs to be put into place to enforce and administer and execute the algorithm's instructions. And then, since we're dealing with important real-life issues rather than a hypothetical wonderland, there also needs to be a central authority that can identify mistakes or problems in the algorithm or its results, and correct both those issues and the algorithms themselves.

RealityApologist posted:Again, I'm not suggesting it fixes all problems, and I'm not assuming some radical technological leap. There are real issues that need addressing with such a system, and I'm happy to talk about its advantages and disadvantages relative to the existing system. Somehow this thread adopted a rhythm where people demand answers of me and then laugh and mock me when they're not satisfied. That's unfortunate because I obviously don't have all the answers, and the basic premise of the idea seems simple enough to me that others should be able to carry on the discussion of both criticizing and defending such a system, without it all turning on what I say.

I'm not sure you've demonstrated that it fixes any problems, nor have you really pointed out the "real issues" that exist in our current system and would be fixed in this hypothetical system of yours. Instead of using marbles and Thanksgiving dinner, why not suggest some real-world improvements you think it would make, as well as how it would solve the social issues you're so concerned about now? And yes, we demand answers because you're not giving us any reason why your system would be practical, nor are you even backing up your assertion that it would be preferable to the current system. The intention of the behavior is actually really loving important! You're proposing massive social changes and a fundamental reorganization of how society controls and responds to the needs of humans, when you don't even think it's important to know why people do the things they do?

Tokamak posted:Wait... planned European socialism doesn't have a mandate to best serve its citizens? Do you think commies and socialists are doing all this planning and government spending because it is ideologically dictated to plan and spend? Of course the incentive for computer assistance is indirect. They will use computational models if they are cheaper and more effective than more traditional techniques. We would be in dire straits if we relied on computers any less than we do now.

The key word there is "direct" - go look back at his "laws vs code" rant a few pages back to see what his conception of "direct" is - basically, if the consequences for bad actions are not immediate and automatic, with punishments being decided and enforced immediately and automatically at the exact moment the crime is committed, then he thinks it's not a real disincentive and people will happily break the hell out of those laws because they're "inconsistent".

I suspect RealityApologist would say that since European leaders are physically capable of doing something that isn't in the best interest of their citizens without immediately and automatically being removed from power by the very mechanism that allows them to rule, then they're not "directly" being punished for their actions, and thus there's no reason for them to NOT stomp all over the interests of their own citizens.
|
# ? Dec 9, 2013 05:53 |
|
Obdicut posted:Yeah, it's an exercise that you do in order to identify critical flaws in your idea and test it for feasibility. Increased battery life for the sensors as well as improved accuracy in detecting emotions.
|
# ? Dec 9, 2013 06:26 |
|
Main Paineframe posted:Instead of using marbles and Thanksgiving dinner, why not suggest some real-world improvements you think it would make as well as how it would solve the social issues you're so concerned about now?

If he couldn't come up with any novel approaches to solving a scheduling problem like a Thanksgiving dinner, what hope does he have of solving problems of greater complexity? Baby steps first.

Yet it seems these governments are doing a really terrible job at screwing over citizens. Even the old anarcho-capitalist Wild West did a better job of letting people self-organise to screw each other over. I wonder if the Wild West with a Robo-Cop sheriff is close to the utopia that Eripsa is dreaming of.
|
# ? Dec 9, 2013 06:58 |
|
MeramJert posted:Eripsa, your Attention Economy literally sounds like the setting of a dystopian sci-fi novel. This Perfect Day by Ira Levin sounds like a perfect fit.
|
# ? Dec 9, 2013 07:33 |
|
RealityApologist posted:I think it's pretty clear from the thread that I'm denying the unanimity constraint on Arrow's theorem.

Not...no, not at all. So if everyone's attention is focused on A to the commensurate exclusion of B, Attentopia can still rank B above A? Where does the imposition come from? Skynet?
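For reference, the unanimity (weak Pareto) condition of Arrow's theorem being denied here says that if every individual strictly prefers A to B, the social ranking must also place A above B:

```latex
% Weak Pareto (unanimity): for alternatives A, B and individual
% preference relations \succ_1, \ldots, \succ_n with social ranking \succ,
\bigl(\forall i \in \{1,\dots,n\} : A \succ_i B\bigr) \;\Longrightarrow\; A \succ B
```

Denying it means the system is permitted to rank B above A even when literally everyone attends to A over B, which is exactly the question being asked above.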
|
# ? Dec 9, 2013 10:48 |
|
Main Paineframe posted:I suspect RealityApologist would say that since European leaders are physically capable of doing something that isn't in the best interest of their citizens without immediately and automatically being removed from power by the very mechanism that allows them to rule, then they're not "directly" being punished for their actions, and thus there's no reason for them to NOT stomp all over the interests of their own citizens.

When in fact it's because there's no mechanism that punishes them at all that they're stomping all over the interests of their own citizens.
|
# ? Dec 9, 2013 11:28 |
|
Obdicut posted:Jessica Alba is going to have to spend all day long paying attention to a chastity belt or a taser or the concept of solitude or something. No, you fool, don't you get it? Won't someone not think of Jessica Alba?? I'll be honest, I just contributed to the problem, because I had to Google who that is. Eripsa, you're crazy, but at least you're in good company. There are plenty of technofetishists rolling around the Bay that will hire you to do...well, something.
|
# ? Dec 9, 2013 11:42 |
|
RealityApologist posted:Here's a link to my thoughts after the keynote from HCOMP13, which involved a discussion about predicting (and motivating) certain actions from users. It gives some indications of where we are today, but these models have obvious and straightforward extensions into the kinds of systems we are talking about. It also adds some content to this thread beyond the bullying.

Wait, so your system's method of motivating people to do ugly, dirty, or boring but necessary work is... badges?
|
# ? Dec 9, 2013 12:13 |
|
enraged_camel posted:Increased battery life for the sensors as well as improved accuracy in detecting emotions. That's a portion of what would have to change. Did you not notice they were analyzing logs, not real-time data? The difference between the two is large, especially when there are supposed to be consequent actions. Second, the 'intervention' chosen had only a 37.5% success rate. There may be ways to improve that, but that's an enormous challenge, too. Third, this required users to log their emotional states, which is obviously a very poor form of capture. The correlation revealed in this study isn't between physiological state and actual emotional state, but reported emotional state. The problem of measurement is large here: how can you be sure you're capturing emotions, beyond the bare-bones 'arousal' type, based on self-reported data? Finally, this was a tripartite study: all three elements were tested separately, not in conjunction. The study itself is cool because it's a real-world study producing actual results. They sell it beyond its conclusions, which is unfortunate but expected, but they're still sober about what needs to happen next. Furthermore, this is kind of the opposite of the 'attention economy', since this is all recording things that people don't pay attention to, or requiring them to log things they don't pay attention to. It is interfering with or ignoring the actual user attention.
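The false-positive complaint from earlier in the thread is worth making concrete with a quick Bayes'-rule sketch. The base rate and true-positive rate below are hypothetical numbers chosen purely for illustration, not figures from the study:

```python
# Back-of-envelope positive predictive value for a detector with a 25%
# false-positive rate. The 1% base rate and 75% true-positive rate are
# assumed values, used only to illustrate the base-rate problem.

def positive_predictive_value(base_rate, tpr, fpr):
    """Fraction of flagged people who are true positives (Bayes' rule)."""
    true_pos = tpr * base_rate
    false_pos = fpr * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(base_rate=0.01, tpr=0.75, fpr=0.25)
print(f"{ppv:.1%}")  # prints "2.9%"
```

Under those assumptions, only about 3% of the people the machine flags would actually be at risk; the 25% false-positive rate swamps the signal whenever the condition being detected is rare.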
|
# ? Dec 9, 2013 14:05 |
|
A Buttery Pastry posted:When in fact it's because there's no mechanism that punishes them at all that they're stomping all over the interests of their own citizens. Pretty sure they're in fact meeting the interests of some of their citizens, at the cost of stomping all over the interests of the rest. Which is another big problem with the "attention economy" - nation-sized populations don't always have similar or even consistent interests, and any aggregate approach would inevitably gently caress the minority for the sake of the majority.
|
# ? Dec 9, 2013 19:25 |
|
In the world of actual academics writing about similar topics, James Grimmelmann (a law professor who works regularly in the law of technology space) uploaded a new paper on SSRN that touches on some of the same issues identified in this thread, with what looks to be some actual thought behind it (and text you can actually read). The abstract: "Social software has a power problem. Actually, it has two. The first is technical. Unlike the rule of law, the rule of software is simple and brutal: whoever controls the software makes the rules. And if power corrupts, then automatic power corrupts automatically. Facebook can drop you down the memory hole; Paypal can garnish your pay. These sovereigns of software have absolute and dictatorial control over their domains. Is it possible to create online spaces without technical power? It is not, because of social software's second power problem. Behind technical power there is also social power. Whenever people come together through software, they must agree on which software they will use. That agreement vests technical power in whoever controls the software. Social software cannot be completely free of coercion - not without ceasing to be social, or ceasing to be software. Rule-of-law values are worth defending in the age of software empires, but they cannot be fully embedded in the software itself. Any technical design can always be changed through an exercise of social power. Software can help by making this coercion more obvious, or by requiring more people to join together in it, but it alone cannot fully protect users. Whatever limits make social software humane, fair, and free will have to come from somewhere else - they will have to come from We the Users."
|
# ? Dec 9, 2013 21:04 |
|
The difference between that paper and Eripsa's proposal is the author's willingness to immediately address actual historical cases of technology regressing rights and power relations - you know, problems that would be amplified if these systems became the framework of society - while the latter has a blind, wanton urge to jump right into a techno-utopia where the influence of people, power, and abuse is irrelevant because of perfect implementation. That would be interesting if it were mere musings, but that dismissive attitude precludes having a basic Socratic dialogue to elucidate how our real society would be affected. This is why I brought up the whole CP thing: it pulls together your thesis of self-organizing interest groups being the basis of all relations, and touches on concerns any revolutionary movement must address: conflicting and minority interests, ethics and rights, and the balance of power and resources. Basic stuff. I'm just disappointed that you refuse to scratch beyond the surface of these questions, or handwave away problems that might arise by saying those concerns would be addressed by the magic of impartial algorithms and interest groups.
|
# ? Dec 10, 2013 18:22 |
|
Ratoslov posted:Wait, so your system's method of motivating people to do ugly, dirty, or boring but necessary work is To be honest it sounds a lot like the gamified dystopia in Fifteen Million Merits. Only that possible scenario for the future actually took into account the existence and dominance of corporate power.
|
# ? Dec 20, 2013 07:35 |
|
So, here's my argument: 1. Twitter bots EXIST 2. Twitter bots will soon be sentient 3. Twitter is Good and Right 4. Algorithms 5. The world will inevitably be run by benevolent software that lets us produce enough resources for everyone, without requiring anyone to work, and ensuring optimal distribution. Proper application of quantum computing & The Eternal Love of Jesus makes the travelling salesman problem pretty NP-easy and laid back and we solve that poo poo in like 20 minutes. Also google glass.
|
# ? Jan 8, 2014 19:21 |
|
Morozov just had a piece in the New Yorker where he mentions the attention economy. Not the OP's private definition, of course, but the actual term. http://www.newyorker.com/arts/critics/atlarge/2014/01/13/140113crat_atlarge_morozov?currentPage=1 quote:Hatch and Anderson alike invoke Marx and argue that the success of the maker movement shows that the means of production can be made affordable to workers even under capitalism. Now that money can be raised on sites such as Kickstarter, even large-scale investors have become unnecessary. But both overlook one key development: in a world where everyone is an entrepreneur, it’s hard work getting others excited about funding your project. Money goes to those who know how to attract attention. Also, does this sound familiar? quote:Ivan Illich’s “Tools for Conviviality,” ... called for devices and machines that would be easy to understand, learn, and repair, thus making experts and institutions unnecessary. “Convivial tools rule out certain levels of power, compulsion, and programming, which are precisely those features that now tend to make all governments look more or less alike,” Illich wrote. He had little faith in traditional politics. Whereas Stewart Brand wanted citizens to replace politics with savvy shopping, Illich wanted to “retool” society so that traditional politics, with its penchant for endless talk, becomes unnecessary.
|
# ? Jan 8, 2014 20:01 |
|
Slanderer posted:So, here's my argument: My "FEMACAMP/ACORNBOT" and "WHITEHOUSE SECURITY INVESTIGATION" bots, which I wrote to troll twitter's #tcot (Top Conservatives On Twitter) hashtags, would probably be the waterboard-wielding KGB bastards of such a scheme. FEMACAMP monitored #tcot, and any mention of Acorn would start producing messages signaling that a candidate for internment at a fema camp was being processed. Whitehouse security investigation would watch for mentions of Obama and various hot topics in conservative land, and then when triggered inform the user that they had been reported to investigations at @whitehouse.org for review by homeland security. Both bots long since banned. For a while there acornbot was generating a huge amount of tears, although my goal was to get Glenn Beck to cite it as evidence of a huge conspiracy; alas, it was strangled before I had tuned it well enough to upset enough people. I wonder if I still have the source code. duck monster fucked around with this message at 04:51 on Jan 9, 2014 |
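A trigger bot like the one described boils down to a keyword filter plus a canned reply. Here's a minimal sketch of that core logic - the actual Twitter API calls are stubbed out, and the trigger word and message template are invented stand-ins, not the original bot's text:

```python
# Minimal sketch of a keyword-trigger reply bot in the spirit of the one
# described above. The Twitter client is stubbed out as a plain callable;
# the trigger words and reply template are hypothetical.

TRIGGERS = {
    "acorn": "Internment candidate flagged for processing: @{user}",
}

def check_tweet(user, text):
    """Return a reply string if the tweet matches a trigger word, else None."""
    lowered = text.lower()
    for keyword, template in TRIGGERS.items():
        if keyword in lowered:
            return template.format(user=user)
    return None

def run_once(tweets, send):
    """Scan a batch of (user, text) tweets and send any triggered replies."""
    for user, text in tweets:
        reply = check_tweet(user, text)
        if reply is not None:
            send(reply)  # a real bot would post via the Twitter API here

# Example with a stubbed sender:
sent = []
run_once([("patriot99", "ACORN is behind it all"), ("bob", "nice weather")],
         sent.append)
```

A real deployment would wrap `run_once` in a polling or streaming loop against the hashtag feed, plus rate limiting - which is exactly the part that gets a bot banned.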
# ? Jan 9, 2014 04:47 |
|
Best send-up of TED to appear in a TED talk so far? http://gawker.com/tedx-speaker-talks-about-how-ted-talks-are-bullshit-1496985980 This isn't that lovely comedian who did twenty minutes of valley gibberish, but couldn't keep a straight face and gave it away. He sucked. agarjogger fucked around with this message at 09:43 on Jan 9, 2014 |
|
# ? Jan 9, 2014 09:33 |