Main Paineframe
Oct 27, 2010

enraged_camel posted:

Incidentally, this was on Ars Technica today.


This is interesting because it is along the lines of what Eripsa has been saying with regards to the attention economy.

The technical issues that still cripple the project are telling. The devices were tedious and annoying to use, and were wrong more than a quarter of the time despite having been primed with hourly emotion data from the previous stage. Who's going to use an anti-suicide machine that will falsely mark a non-suicidal person as suicidal 25% of the time?
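To put a number on why that error rate is disqualifying, here's a back-of-the-envelope calculation. The 25% false-positive rate is from the article; the base rate and catch rate below are illustrative assumptions, not figures from the study.

code:

# Back-of-the-envelope positive predictive value for a detector with a 25%
# false-positive rate. The base rate and true-positive rate are illustrative
# assumptions, not numbers reported by the study.

base_rate = 0.01            # assumed fraction of users actually at risk
true_positive_rate = 0.75   # assume the detector catches 75% of real cases
false_positive_rate = 0.25  # the error rate quoted above

flagged_true = base_rate * true_positive_rate
flagged_false = (1 - base_rate) * false_positive_rate

ppv = flagged_true / (flagged_true + flagged_false)
print(f"Fraction of flagged users who are actually at risk: {ppv:.1%}")
# With these numbers, roughly 3% of alarms are real; the rest are noise.

With any low base rate like the one assumed here, nearly every alarm the thing raises is a false one.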

RealityApologist posted:

Again, this doesn't kill CP; it doesn't even kill it on reddit. But the action has an impact on how the issue is handled by all the interested parties. It might drive CP underground even more, making it that much harder to call attention to the matter. But it also reinforces the norms of the rest of society, making explicit that the general consensus is that such behavior isn't acceptable, and engaging in it will make you the target of the crowd. Some extreme elements of the anti-CP group might overstep that consensus by attempting vigilante justice, hunting down and humiliating the CPers; this might result in an opposite form of extremism that attempts to assert the legitimacy of CP enthusiasts against their attackers. Both sides might see their cause as incredibly important and worth the devotion of the resources at their command, but fortunately they aren't the ones deciding these things. What matters on my system is the overall consensus of how important the issue is; if they think they need more assistance in their cause, they have to make the case for it in public, or among the communities to which they are appealing.

Do you understand that you are literally saying that lynching accused criminals is preferable to our current society, and that it's okay if vigilantes are roaming the streets because you think communities will self-correct by forming anti-lynching mobs to defend suspected pedophiles? How can this be anything but purestrain libertarianism? It's also telling that your response to questions about how any major social problem would be handled in your "attention economy" is "well, I'm sure community or humanitarian groups will form to help the disadvantaged". What if, y'know, that doesn't happen and the boot keeps stamping on a human face forever?

RealityApologist posted:

On my system, your information is partially public, but no one is put in a position of authority (like an "employer") where they might use that information against you, or use it to deprive you of your livelihood. There simply are no institutional authorities in my system that can leverage that knowledge against your institutional role.

People don't need to be authorities to use information against you. Have you ever heard the term "blackmail"?

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

Eripsa, your Attention Economy literally sounds like the setting of a dystopian sci-fi novel.

Adar
Jul 27, 2001

RealityApologist posted:

This trade-off in privacy should be accompanied by a reduction in the disparity of knowledge/power between parties. Or at least, that would be the case if privacy were traded in exchange for public utility, as I'm arguing. On my system, your information is partially public, but no one is put in a position of authority (like an "employer") where they might use that information against you, or use it to deprive you of your livelihood. There simply are no institutional authorities in my system that can leverage that knowledge against your institutional role. So giving up your information isn't making any private parties more powerful. In the existing system, in contrast, privacy is usually traded to private or closed parties (like Facebook or the NSA) who not only can use that information against you, but can also use it as a competitive advantage over other private parties, with no functional guarantees for the user beyond the convenience of the service itself. So in the existing world, the user has to be careful that the wrong party doesn't acquire their information, whereas that's not a systemic issue with mine.

*makes a claim that in The Attention Economy, there are no advantages to having all the information*

*is a peon, but still holds a position of authority over a bunch of students submitting their work directly to him*

*sees no contradiction in terms as he makes an analogy involving a creepy uncle who obsessively harvests information on his relatives for years*

Adar
Jul 27, 2001
In the Attention Economy, if someone is spouting enough nonsense to enough people that he registers as a blip to the all-volunteer Troll Police, does he get fired? Or is this a thing where the plucky resistance fighters offer you a red pill and a blue pill and if you pick the right one you step outside all the security cameras and can also fly?

archangelwar
Oct 28, 2004

Teaching Moments
When I masturbate, I often stare blankly at my dick while I imagine a steamy sex session with Jessica Alba. In the attention economy, would this ensure that more resources are allocated to dicks than Jessica Alba? Because, drat...

Obdicut
May 15, 2012

"What election?"

archangelwar posted:

When I masturbate, I often stare blankly at my dick while I imagine a steamy sex session with Jessica Alba. In the attention economy, would this ensure that more resources are allocated to dicks than Jessica Alba? Because, drat...

After the masturbation session, you would simply fill out an online form honestly saying what you were masturbating about. Then, the attention economy would ensure more Jessica Albas were produced, leading to a golden age of prosperity and great asses for all.

Adar
Jul 27, 2001

archangelwar posted:

When I masturbate, I often stare blankly at my dick while I imagine a steamy sex session with Jessica Alba. In the attention economy, would this ensure that more resources are allocated to dicks than Jessica Alba? Because, drat...

For the past five years, the creepy uncle has been taking multi-camera recordings of your wanking, which has allowed him to keep obsessively detailed records of every porn video you had open each year

A Buttery Pastry
Sep 4, 2011

Delicious and Informative!
:3:

Obdicut posted:

After the masturbation session, you would simply fill out an online form honestly saying what you were masturbating about. Then, the attention economy would ensure more Jessica Albas were produced, leading to a golden age of prosperity and great asses for all.
You're assuming here that it can produce additional Jessica Albas. If that's not possible, it seems like archangelwar spending enough time on his Jessica Alba masturbation sessions might push the priority of him having sex with her to the highest level of the attention economy, which it will as efficiently as possible convert to Jessica Alba showing up at archangelwar's doorstep.

Obdicut
May 15, 2012

"What election?"

A Buttery Pastry posted:

You're assuming here that it can produce additional Jessica Albas. If that's not possible, it seems like archangelwar spending enough time on his Jessica Alba masturbation sessions might push the priority of him having sex with her to the highest level of the attention economy, which it will as efficiently as possible convert to Jessica Alba showing up at archangelwar's doorstep.

Jessica Alba is going to have to spend all day long paying attention to a chastity belt or a taser or the concept of solitude or something.

Won't someone think of Jessica Alba?

archangelwar
Oct 28, 2004

Teaching Moments

A Buttery Pastry posted:

You're assuming here that it can produce additional Jessica Albas. If that's not possible, it seems like archangelwar spending enough time on his Jessica Alba masturbation sessions might push the priority of him having sex with her to the highest level of the attention economy, which it will as efficiently as possible convert to Jessica Alba showing up at archangelwar's doorstep.

:aaaaa::aaaaa::aaaaa::aaaaa::aaaaa::aaaaa::aaaaa::aaaaa:

Eripsa, you are on to something here!

Adar
Jul 27, 2001
Applied Lessons in Attentionology 318: Final Exam

1. You are a TA in Applied Attentionology at a highly regarded educational institution. For most of the year, your students have paid no attention to you whatsoever and your RateMyProfessor Minute-By-Minute graphs are through the floor; if nothing changes, you have no chance at passing your PhD boards and may have to get a job exporting Rick Astley videos to the Third World. The night before the final exam, an attractive undergrad walks into your office and begs for help. Staring at you the whole time, she offers to make it worth your while to pass her.

Do you:

1) Activate your Google Glass, record a porn video and upload it to YouTube, hoping her attractiveness overcomes your pastiness and flab and gets you a better job offer in the reality sex industry;
2) See through her ruse and do nothing - she's obviously already recording this conversation as part of her final project, and going along with a fourth-rate student who won't even be able to edit your flab out of her video will get you nowhere;
3) She's already staring at you. As long as you keep her attention long enough, both of your metrics are bound to skyrocket. "Have you ever seen A Clockwork Orange?"

A Buttery Pastry
Sep 4, 2011

Delicious and Informative!
:3:

archangelwar posted:

:aaaaa::aaaaa::aaaaa::aaaaa::aaaaa::aaaaa::aaaaa::aaaaa:

Eripsa, you are on to something here!
See Eripsa, when you engage with people and their perspective on the world, it's much easier to get them to support your ideas! Unfortunately for archangelwar, he has forgotten what happens next. The rest of the Jessica Alba Sex Fantasy Group will become aware of his imminent intimate encounter, which will boost the signal, alerting the Anti-Female Celebrity Sex Fantasy Group, who will pay so much loving attention that a taxi will show up and drive Jessica to safety.

burnishedfume
Mar 8, 2011

You really are a louse...
It's snowing outside my room now. If I stare at it because I think snow is pretty, especially when it's covering the trees, will the attention economy produce more snow via nuclear winter? The same question also applies to pugs; furthermore, will the attention economy produce pugs that look even cuter but suffer from worse and worse health problems, or healthier ones that aren't as cute and thus aren't as interesting?

Also, say someone goes to an abortion clinic and there aren't enough doctors or nurses there to service their clients, do all the women who want abortions just pay attention to the lack of a doctor until one appears, or pay attention to the clinic as a whole and hope a clinic bomber doesn't notice the increased attention and firebomb the clinic?

boner confessor
Apr 25, 2013

by R. Guyovich
Eripsa's slow descent into angrily blaming others for being unable to comprehend his incomprehensible gibberish arguments is my favorite part of this thread.

Seriously dude, you post like you've spent years and years generating bullshit Markov verbiage to fill out thirty-page papers and now you're unable to stop. What makes this funny is that you somehow think that this won't be noticed on a forum predominantly frequented by other academic knuckledraggers who have also drunkenly cranked out last-minute deconstructive essays about nothing.

boner confessor fucked around with this message at 22:30 on Dec 8, 2013

Slow News Day
Jul 4, 2007

Main Paineframe posted:

The technical issues that still cripple the project are telling. The devices were tedious and annoying to use, and were wrong more than a quarter of the time despite having been primed with hourly emotion data from the previous stage. Who's going to use an anti-suicide machine that will falsely mark a non-suicidal person as suicidal 25% of the time?

They demonstrated it as a proof of concept. You know what that is, right?

Obdicut
May 15, 2012

"What election?"

enraged_camel posted:

They demonstrated it as a proof of concept. You know what that is, right?

Yeah, it's an exercise that you do in order to identify critical flaws in your idea and test it for feasibility.

Given that, how do you think the test went as a proof of concept? What would have to change in order to improve the results?

woke wedding drone
Jun 1, 2003

by exmarx
Fun Shoe

Popular Thug Drink posted:

Seriously dude, you post like you've spent years and years generating bullshit Markov verbiage to fill out thirty-page papers and now you're unable to stop. What makes this funny is that you somehow think that this won't be noticed on a forum predominantly frequented by other academic knuckledraggers who have also drunkenly cranked out last-minute deconstructive essays about nothing.

But how can my knowledge be pretend? The debt I incurred to acquire it is all too real.

Tokamak
Dec 22, 2004

RealityApologist posted:

You are correct that everyone has individual preferences. The point isn't to predict them from scratch, but to anticipate them from models generated by their past behaviour. If I have the history of the meals you've eaten over the last year, there will be patterns that emerge that will allow us to predict what you will likely eat next year. This is why I've emphasized the importance of human computation in this thread, because these problems can be (and are routinely) solved by brains, and that reduces the computational load that is carried elsewhere.

Ok, so how is this model of computation and self-organisation any different, say, from Amazon's Product Suggestions? Viewing and purchasing habits are used to predict what products you might like to buy. Or perhaps Target predicting that you are pregnant before your parents find out (http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/).

After each criticism, you move the goalposts wider and wider, to the point that it appears you are arguing for little more than a product suggestion algorithm. Sure, you are able to correlate customer-supplied details/demographics with clusters of purchasing habits, but that is far less ambitious and controversial than what you were originally proposing. Again, these algorithms only work in situations where the personal data is explicitly supplied and for highly categorised and monitored behaviours, such as commercial products and behaviours like completed purchases and webpage views.

More importantly, these data sets all share the same well-defined assumptions (mandatory customer details) and are relatively small (each customer has a handful of transactions and personal details), consistent (each purchase has a quantity, date, and cost), sorted, and categorised (product catalogues are better categorised than book libraries).

When you apply this more broadly to an individual's behaviour, all of the technical requirements that are needed for an effective suggestion algorithm cease to exist. You then have the problem of a computer needing to determine the intention of each behaviour in order to categorise and deduce accurate predictions, which is far, far harder than you give it credit for.
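For concreteness, the kind of suggestion logic being compared here can be sketched in a few lines. This is a hypothetical toy (nothing like Amazon's production system), and it only works because the data is exactly the kind just described: small, consistent, explicitly supplied, and categorised.

code:

# Toy item-to-item suggestion by co-purchase counting. A hypothetical sketch,
# not Amazon's actual algorithm; customers and products are made up.

from collections import Counter, defaultdict
from itertools import combinations

# Explicitly recorded purchase histories: customer -> set of product IDs.
purchases = {
    "alice": {"toaster", "bread", "jam"},
    "bob":   {"toaster", "bread", "kettle"},
    "carol": {"bread", "jam"},
}

# Count how often each pair of products is bought by the same customer.
co_counts = defaultdict(Counter)
for basket in purchases.values():
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def suggest(customer, top_n=3):
    """Rank products frequently co-purchased with this customer's items."""
    owned = purchases[customer]
    scores = Counter()
    for item in owned:
        for other, count in co_counts[item].items():
            if other not in owned:
                scores[other] += count
    return [item for item, _ in scores.most_common(top_n)]

print(suggest("carol"))  # -> ['toaster', 'kettle']

None of this machinery has anything to say about behaviour that isn't already logged in a tidy transaction table, which is the point.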

RealityApologist posted:

The attention economy (aka, the turkey singularity)

Advantages: an exact solution to the problem without computing all the factors that inform the decisions of each individual. Everyone has opportunities to provide feedback in the calibration process, and can be sure that their individual concerns are taken into consideration in the planning process. This system also can't be cheated over the long run: we'll know if any of the food you took was wasted, and it doesn't particularly matter if you didn't contribute anything as long as all the work was assigned.

Disadvantages: While it's not a purely technological solution, it does require some serious technological infrastructure that has to be maintained (presumably in the same open, collaborative way in which the menu is produced). In order to perform well, it also has to assume that people will actually take advantage of the system by providing feedback to the model as required during the planning stages. So beyond the time that goes into the meal preparation itself, this solution also requires some work from everyone to maintain the organizing infrastructure that supports the model. It involves more work from some people than they might have to do in other models, but less than in some others.

Oh great, we already do that now. Governments and businesses plan to accommodate future needs from historical, census, and consultation data; see any European-style socialist government. It might not make the best decisions for everyone, but it does try to make sure no one is starving or dying from a treatable illness. Your examples have a very American view of how governments and societies operate.

If the 'central planning authority' had access to the attention economy's computer resources, it would provide the planners a way to best distribute their food. They could keep all of the historical food needs on file, as well as recipes that are varied and healthy. It would also have population health data on allergy rates, so when someone requests a modification, there are enough safe foods already baked into the master plan. Then when the year's harvest comes in, they can stick it in the computer and crunch out meals that would best serve its citizens. This is essentially how socialist governments plan things today.

From my foreigner's perspective, it seems that a large part of America's problems stem from the self-organising, independent governorship that you are advocating for. Each state has a different school, health, and infrastructure system, which causes a whole heap of problems whenever one state's or county's system interacts with another's. Universal/single-payer healthcare is a great example of something centrally planned that benefits everyone, but it would be next to impossible under a self-organising government.

RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:

Tokamak posted:

Ok, so how is this model of computation and self-organisation any different, say, from Amazon's Product Suggestions?

Again, I'm suggesting that we give these kinds of algorithms actual political authority over public decision making, instead of merely using them as a tool to assist private interests. Although the central planning authority could appeal to the same data, they have only indirect incentive to use it, and a mandate that doesn't depend on that data. I'm suggesting building a system with these features built into its basic mechanics. It's not radically different than the technologies you are drawing analogies to, but it's certainly different than the system on the table.

Again, I'm not suggesting it fixes all problems, and I'm not assuming some radical technological leap. There are real issues that need addressing with such a system, and I'm happy to talk about its advantages and disadvantages relative to the existing system. Somehow this thread adopted a rhythm where people demand answers of me and then laugh and mock me when they're not satisfied. That's unfortunate because I obviously don't have all the answers, and the basic premise of the idea seems simple enough to me that others should be able to carry on the discussion of both criticizing and defending such a system, without it all turning on what I say.

quote:

You then have the problem of a computer needing to determine the intention of each behaviour in order to categorise and deduce accurate predictions, which is far, far harder than you give it credit for.

Again, I don't think you need to determine the intention of the behavior; I think you just need to observe the actions being taken. A lot of actions can be taken; it's not a trivial task. But it's not as intractable as you suggest.

Here's a link to my thoughts after the keynote from HCOMP13, which involved a discussion about predicting (and motivating) certain actions from users. It gives some indications of where we are today, but these models have obvious and straightforward extensions into the kinds of systems we are talking about. It also adds some content to this thread beyond the bullying.

http://digitalinterface.blogspot.com/2013/11/steering-crowd.html

quote:

I have been completely enamored with Jon Kleinberg's keynote address from HCOMP2013. It is the first model of human computation in field-theoretic terms I've encountered, and it is absolutely brilliant. Kleinberg is concerned with badges, like those used on Foursquare, Coursera, StackOverflow, and the like. The badges provide some incentive to complete tasks that the system wants users to complete; it gamifies the computational goals so people are motivated to complete the task. Kleinberg's paper provides a model for understanding how these incentives influence behavior.



In this model, agents can act in any number of ways. If we consider StackOverflow, users might ask a question, answer a question, vote on questions and answers, and so on. They can also do something else entirely, like wash their cars. Each of these actions is represented as a vector in high-dimensional space: one dimension for each action they might perform. In Figure 2, they consider a two-dimensional sample of that action space, with distinct actions on the x and y axes. The dashed lines represent badge thresholds; completing 15 actions of type A1 earns you a badge, as does 10 actions of type A2. On this graph, Kleinberg draws arrows whose length and orientation represent the optimal decision policies for users as they move through this action space.

Users begin with some preferences for taking some actions over others, and the model assumes that the badges have some value for the users. The goal of the model is to show how the badges augment user action preferences as they approach the badge. Figure 2 shows that a user near the origin has no strong incentives towards actions of either type. But as one starts accumulating actions and nearing a badge, the optimal policy changes. When I have 12 actions of type A1, I have a stronger incentive for doing that action again than I did when I only had 5.



In this way the badge thresholds work like attractors for user behaviors; Kleinberg discussed how to decide where to place badges in order to motivate desirable behavior. It's also interesting to see what happens to user behavior after they cross the threshold. In Figure 2, you see that once you've received a badge in a dimension, you lose all incentive to continue performing actions in that dimension. And indeed that's exactly what you find in the data taken from StackOverflow. Figure 3 models the activity of users in the days before and after earning a badge. In the run-up to a badge, user activity in that dimension of action sees a sharp spike. After the badge is received, there's a precipitous fall as that activity returns to the level it has when it is not motivated by a badge.

I've been calling fields like the one represented in Figure 2 "goal fields", because they represent orientation towards certain goals. The goal field describes the natural "flow" or trajectory of users as they move through this action space. In his talk, Kleinberg compared the model to electrical fields, with workers orienting along the field like iron filings. He's interested in making the metaphor accurate enough to work for describing the goal orientation of agents in real activity fields like StackOverflow.

Interestingly, his model suggests bounds on where badges might be placed, and limitations on what behavior might be extracted from users in this way. Figure 9 from the paper describes the beautiful and confusing landscape carved out by all possible badge placements for two dimensions of action. Notice that some of the action space is inaccessible for any badge placement policy; for instance, no badge thresholds will yield an action policy of (10%, 60%).



I've been fascinated by this paper and its implications for the last few days, and I'm flooded with ideas for extending and refining the model. I'll try to list a few thoughts I had:


I'd be interested to see if the model can be extended to accommodate action sets that aren't entirely orthogonal. I wonder if actions might be categorized into certain types or clusters, with badges aimed at augmenting behavior for all actions within that category. For instance, "not using StackOverflow" isn't just another dimension of action, it's an entirely different class of actions. It would then be interesting to see how badges aimed at one action class impact incentives for different classes.
I'm also interested in the possibility of evolving badges that represent moving targets instead of static thresholds, as a way of avoiding the precipitous cliff past a threshold. If users were able to maintain a constant (and short) distance from a potential reward, they would always act like highly motivated workers on the verge of an incentive. Call this the "carrot and stick" badge, in contrast to the threshold badge. If users never get a reward they'll lose incentive; but if reaching a threshold always unlocks new, close-by badges, then I'm always in a motivated position to keep acting. Instead of a magnetic force field, I'm thinking of something like an event horizon: I always keep getting close without ever falling in.
One way to engineer a carrot-stick badge is to have its placement emerge as a function of the average activity of users. For instance, if instead of giving me a badge for answering 600 questions, I had a badge for being in the top 1% of question-answerers, this becomes a moving target. If I ever stop answering questions, I risk losing the badge to workers staying busier than me.
Badges are often a way of identifying experts, and you might think that carrot-stick badges are especially suited to identifying experts in real-world situations. In science, experts aren't merely the authorities who have the "right answers" for deciding answers to hard questions. Being an expert in physics 30 years ago would not suffice for being an expert today, and carrot-stick badges might model this situation better. Experts also deform the problem space itself, and can be responsible for changing the standards for future experts and the space of what questions can be answered. In other words, expertise is a higher-order type of work: you have your plumbers and scientists, and then you have experts of those types. Experts from one domain might not have any relation to experts of another domain; what they share in common are their relations to the average performance within their respective domains. These relations might not be obvious or translatable across domains. For instance, lots of people can drive cars, so the actions of an expert driver (a stunt driver, or NASCAR) might be very close to the actions of a normal, competent driver. In contrast, not many people are good at physics, so the difference between the average person's skills at physics and the expert's skills might be huge. In the driving case, the broad competence in the community means that finding experts requires very sensitive methods for distinguishing the two; in the physics case, the differences are so large that the criteria for expertise will be much looser.
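A rough toy simulation of the threshold effect described above, for anyone who wants to poke at it. This is a hypothetical sketch, not Kleinberg's actual model: the badge's value is discounted by the number of actions still needed, so the incentive to act ramps up near the threshold and collapses once the badge is earned, which is the spike-and-cliff pattern reported in the StackOverflow data.

code:

# Toy badge-incentive simulation; a hypothetical sketch of the threshold
# idea described above, not Kleinberg's model. All constants are made up.

import random

BADGE_THRESHOLD = 15    # actions of type A1 needed to earn the badge
BADGE_VALUE = 10.0      # assumed value of the badge to the agent
BASE_PREFERENCE = 1.0   # intrinsic appeal of A1 (e.g. answering questions)
OTHER_PREFERENCE = 1.5  # intrinsic appeal of doing something else entirely
DISCOUNT = 0.8          # per-step discount on the future badge reward

def incentive_for_a1(a1_count):
    """Current utility of choosing A1, given how many A1 actions are done."""
    if a1_count >= BADGE_THRESHOLD:
        return BASE_PREFERENCE  # badge earned: only intrinsic value remains
    steps_remaining = BADGE_THRESHOLD - a1_count
    return BASE_PREFERENCE + BADGE_VALUE * (DISCOUNT ** steps_remaining)

def simulate(steps=60, seed=0):
    random.seed(seed)
    a1_count = 0
    for t in range(steps):
        u_a1 = incentive_for_a1(a1_count)
        # The more attractive A1 currently is, the likelier it gets chosen.
        p_a1 = u_a1 / (u_a1 + OTHER_PREFERENCE)
        if random.random() < p_a1:
            a1_count += 1
        print(f"step {t:2d}  A1 done: {a1_count:2d}  "
              f"incentive: {u_a1:5.2f}  P(A1): {p_a1:.2f}")

simulate()

Running it prints the probability of choosing A1 climbing as the count nears 15 and dropping back to baseline immediately afterwards.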

duck monster
Dec 15, 2004

enraged_camel posted:

Academia has criticism, sure, but it doesn't come in the form of publicly insulting the person.

Just sayin'.

Man, you're in for a shock...

I've sat there and watched people come to blows (granted, at a university staff tavern after a lot of booze) over Derrida before. Literally punching each other.

Tokamak
Dec 22, 2004

^^^
Hey Duck, you're just in time.

It seems like we are not the only people confused by Eripsa's incoherent ramblings and dismissal of criticism and dissenting views...
Student Feedback:

quote:

He makes his opinions known. Because this is a class dealing a lot with religion, people had many different opinions. He many times called people out on their beliefs, rolled his eyes at comments or questions, or was just rude. We have a month left of the semester and have not received one grade back yet. Says um and paces, overall not impressed!

Summarises Eripsa's academic rigour in three sentences! Very impressive.

quote:

First half of class was very easy and interesting. I was very nervous to take it because i HATE math. But it's not the kind of math you typically think of. Second half of class got pretty hard but he helps you through it and grades the tests pretty leaniently. You definitely have to go to class..trust me, you'll regret it if you don't. Funny guy

Not as mathematically analytic as my first impressions seem to indicate. Needs lots of help getting through the material, but at the end of the course just gives everyone passing marks. Don't worry though, he will make you laugh (and tear your eyeballs out)!

quote:

Class points are mostly posting on blogs. Dan will open your mind to some brilliant ideas and also keep you up to date on current political events. HIGHLY recommend this class to any IS or CS majors becuase he incorporates cool concepts relating to IT. Overall great class, but make sure you put lots of effort into your midterm and final papers.

Stoking the poor, naive CS libertarian mind with bunk philosophy. Just hang out and post on the class blog to get through the course. Brilliant guy, as long as you come from the same sheltered background as Eripsa.

quote:

somewhat challenging, yet interesting class. Not the easiest grader, made it clear you had to earn your A's. But when points were deducted, his reasons made sense. Entire class's work is written. blogs, twitter, written take home essay exams. Thought provoking discussions. offered a lot of EC tho, especially at the end.

When you realise what the gently caress he is saying... it just makes sense :okpos:

quote:

Horrible teacher. He says umm and uhh every other word. Very distracting. He also doesn't care about the opinions of students and will rip you a new one if you disagree. Attendance is not mandatory, but he does give pop quizzes on the readings so important to go. no tests all grades are from blogs, papers, and participation.

Goes off on anyone with a divergent view, and doesn't care what you think. Says um and ah constantly to fill the space while he is thinking up poo poo to say on the fly. Doesn't bother to formally assess the class and gives grades based on self-organisation. That's our Eripsa :allears:


Mods: I've tried to keep the post free of private, personal details, and it is sourced from the poster's public academic record. It is meant to illustrate why we are having so much trouble communicating and figuring out what the gently caress they are talking about.

His arXiv paper co-author's (RealityApologist, the account he is posting under :psyduck:) faculty photo has him wearing welding goggles and a fedora. I can't make this poo poo up. Also, let your burnout philosophy buddy know about abductive reasoning, because it seems he is doing a dissertation on the scientific validity of climate change. We already philosophically justified categorically related sciences like evolution and astronomy in the early 1900s.

(USER WAS PUT ON PROBATION FOR THIS POST)

Tokamak
Dec 22, 2004

RealityApologist posted:

Again, I'm suggesting that we give these kinds of algorithms actual political authority over public decision making, instead of merely using them as a tool to assist private interests. Although the central planning authority could appeal to the same data, they have only indirect incentive to use it, and a mandate that doesn't depend on that data. I'm suggesting building a system with these features built into its basic mechanics. It's not radically different than the technologies you are drawing analogies to, but it's certainly different than the system on the table.

Again, I'm not suggesting it fixes all problems, and I'm not assuming some radical technological leap. There are real issues that need addressing with such a system, and I'm happy to talk about its advantages and disadvantages relative to the existing system. Somehow this thread adopted a rhythm where people demand answers of me and then laugh and mock me when they're not satisfied. That's unfortunate because I obviously don't have all the answers, and the basic premise of the idea seems simple enough to me that others should be able to carry on the discussion of both criticizing and defending such a system, without it all turning on what I say.


Again, I don't think you need to determine the intention of the behavior; I think you just need to observe the actions being taken. A lot of actions can be taken; it's not a trivial task. But it's not as intractable as you suggest.

Wait... planned European socialism doesn't have a mandate to best serve its citizens? Do you think commies and socialists are doing all this planning and government spending because it is ideologically dictated to plan and spend? Of course the incentive for computer assistance is indirect. They will use computational models if they are cheaper and more effective than more traditional techniques. We would be in dire straits if we relied on computers any less than we do now.

So problem solved you guys! Make customer analytics the core of your political philosophy and let the great machine figure out what you really want. Eventually it will get so good from all the data it is mysteriously collecting, it will materialise your desires in the matter compiler before you even think about them. Don't worry poor browns, Google has got your back with Google Loon; floating the singularity to a remote settlement near you.

Unfortunately for you, it IS a technological leap. And it certainly is, for all intents and purposes, intractable. Our brains evolved to filter out all of the potential choices and hazards that would choke a computer. If we can't simulate even the simplest brain, how do you propose we simulate a brain-like decision algorithm that takes the sum of everyone's monitored actions as input? Hang on :350:... Now I'm listening. You haven't even suggested how this level of computational complexity is remotely possible; you just assume it is and let your theory run wild.

Badges => Self-Organisation :psyduck:

I should really reconsider doing postgraduate Philosophy of Science if this is the calibre of minds the American college system is churning out. I guess when they are making a profit off your tuition fees, they couldn't care less if you are actually learning anything.

rudatron posted:

Crazy uncle has this one weird trick to solve politics forever! Economists HATE him!

Pretty much...

Tokamak fucked around with this message at 04:55 on Dec 9, 2013

duck monster
Dec 15, 2004

Do I have to watch Zeitgeist to understand this guff?

Because if it doesn't involve some sort of crazed floating future city based on barter and supercomputers and some sort of illuminati-repulsion field, I'm sure gonna be disappointed.

Main Paineframe
Oct 27, 2010

RealityApologist posted:

Again, I'm suggesting that we give these kinds of algorithms actual political authority over public decision making, instead of merely using them as a tool to assist private interests. Although the central planning authority could appeal to the same data, they have only indirect incentive to use it, and a mandate that doesn't depend on that data. I'm suggesting building a system with these features built into its basic mechanics. It's not radically different than the technologies you are drawing analogies to, but it's certainly different than the system on the table.

Algorithms, by definition, cannot have political authority. They're tools that can be used by the entity that does have political authority, but they cannot themselves have political authority. Something needs to be put into place to enforce and administer and execute the algorithm's instructions. And then, since we're dealing with important real-life issues rather than a hypothetical wonderland, there also needs to be a central authority that can identify mistakes or problems in the algorithm or the results of the algorithm, and correct both those issues and the algorithms themselves.

RealityApologist posted:

Again, I'm not suggesting it fixes all problems, and I'm not assuming some radical technological leap. There are real issues that need addressing with such a system, and I'm happy to talk about its advantages and disadvantages relative to the existing system. Somehow this thread adopted a rhythm where people demand answers of me and then laugh and mock me when they're not satisfied. That's unfortunate because I obviously don't have all the answers, and the basic premise of the idea seems simple enough to me that others should be able to carry on the discussion of both criticizing and defending such a system, without it all turning on what I say.


Again, I don't think you need to determine the intention of the behavior; I think you just need to observe the actions being taken. A lot of actions can be taken; it's not a trivial task. But it's not as intractable as you suggest.

I'm not sure you've demonstrated that it fixes any problems, nor have you really pointed out the "real issues" that exist in our current system and would be fixed in this hypothetical system of yours. Instead of using marbles and Thanksgiving dinner, why not suggest some real-world improvements you think it would make as well as how it would solve the social issues you're so concerned about now? And yes, we demand answers because you're not giving us any reason why your system would be practical, nor are you even backing up your assertion that it would be preferable to the current system.

The intention of the behavior is actually really loving important! You're proposing massive social changes and a fundamental reorganization in how society controls and responds to the needs of humans, when you don't even think it's important to know why people do the things they do?


Tokamak posted:

Wait... planned European socialism doesn't have a mandate to best serve its citizens? Do you think commies and socialists are doing all this planning and government spending because it is ideologically dictated to plan and spend? Of course the incentive for computer assistance is indirect. They will use computational models if they are cheaper and more effective than more traditional techniques. We would be in dire straits if we relied on computers any less than we do now.

The key word there is "direct" - go look back at his "laws vs code" rail a few pages back to see what his conception of "direct" is - basically, if the consequences for bad actions are not immediate and automatic, with punishments being decided and enforced immediately and automatically at the exact moment that the crime is committed, then he thinks it's not a real disincentive and people will happily break the hell out of those laws because they're "inconsistent". I suspect RealityApologist would say that since European leaders are physically capable of doing something that isn't in the best interest of their citizens without immediately and automatically being removed from power by the very mechanism that allows them to rule, then they're not "directly" being punished for their actions, and thus there's no reason for them to NOT stomp all over the interests of their own citizens.

Slow News Day
Jul 4, 2007

Obdicut posted:

Yeah, it's an exercise that you do in order to identify critical flaws in your idea and test it for feasibility.

Given that, how do you think the test went as a proof of concept? What would have to change in order to improve the results?

Increased battery life for the sensors as well as improved accuracy in detecting emotions.

Tokamak
Dec 22, 2004

Main Paineframe posted:

Instead of using marbles and Thanksgiving dinner, why not suggest some real-world improvements you think it would make as well as how it would solve the social issues you're so concerned about now?

I suspect RealityApologist would say that since European leaders are physically capable of doing something that isn't in the best interest of their citizens without immediately and automatically being removed from power by the very mechanism that allows them to rule, then they're not "directly" being punished for their actions, and thus there's no reason for them to NOT stomp all over the interests of their own citizens.

If he couldn't come up with any novel approaches to solving a scheduling problem like a Thanksgiving dinner, what hope does he have of solving problems of greater complexity? Baby steps first.

Yet it seems these governments are doing a really terrible job at screwing over citizens. Even the old anarcho-capitalist Wild West did a better job of letting people self-organise to screw each other over. I wonder if the Wild West with a RoboCop sheriff is close to the utopia that Eripsa is dreaming of.

kdjohnson
Jan 18, 2003
slug

MeramJert posted:

Eripsa, your Attention Economy literally sounds like the setting of a dystopian sci-fi novel.

This Perfect Day by Ira Levin sounds like a perfect fit.

captainbananas
Sep 11, 2002

Ahoy, Captain!

RealityApologist posted:

I think it's pretty clear from the thread that I'm denying the unanimity constraint on Arrow's theorem.

Not...no, not at all. So if everyone's attention is focused on A to the commensurate exclusion of B, Attentopia can still rank B above A? Where does the imposition come from? Skynet?

A Buttery Pastry
Sep 4, 2011

Delicious and Informative!
:3:

Main Paineframe posted:

I suspect RealityApologist would say that since European leaders are physically capable of doing something that isn't in the best interest of their citizens without immediately and automatically being removed from power by the very mechanism that allows them to rule, then they're not "directly" being punished for their actions, and thus there's no reason for them to NOT stomp all over the interests of their own citizens.
When in fact it's because there's no mechanism that punishes them at all that they're stomping all over the interests of their own citizens.

Bleu
Jul 19, 2006

Obdicut posted:

Jessica Alba is going to have to spend all day long paying attention to a chastity belt or a taser or the concept of solitude or something.

Won't someone think of Jessica Alba?

No, you fool, don't you get it? Won't someone not think of Jessica Alba??

I'll be honest, I just contributed to the problem, because I had to Google who that is.

Eripsa, you're crazy, but at least you're in good company. There are plenty of technofetishists rolling around the Bay that will hire you to do...well, something.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

RealityApologist posted:

Here's a link to my thoughts after the keynote from HCOMP13, which involved a discussion about predicting (and motivating) certain actions from users. It gives some indications of where we are today, but these models have obvious and straightforward extensions into the kinds of systems we are talking about. It also adds some content to this thread beyond the bullying.

Wait, so your system's method of motivating people to do ugly, dirty, or boring but necessary work is

Obdicut
May 15, 2012

"What election?"

enraged_camel posted:

Increased battery life for the sensors as well as improved accuracy in detecting emotions.

That's a portion of what would have to change. Did you not notice they were analyzing logs, not real-time data? The difference between the two is large, especially when there are supposed to be consequent actions.

Second, the 'intervention' chosen had only a 37.5% success rate. There may be ways to improve that, but that's an enormous challenge, too.

Third, this required users to log their emotional states, which is obviously a very poor form of capture. The correlation revealed in this study isn't between physiological state and actual emotional state, but reported emotional state. The problem of measurement is large here: how can you be sure you're capturing emotions, beyond the bare-bones 'arousal' type, based on self-reported data?

Finally, this was a tripartite study: all three elements were tested separately, not in conjunction.

The study itself is cool because it's a real-world study producing actual results. They sell it beyond its conclusions, which is unfortunate but expected, but they're still sober about what needs to happen next. Furthermore, this is kind of the opposite of the 'attention economy', since this is all about recording things that people don't pay attention to, or requiring them to log things they don't pay attention to. It is interfering with or ignoring the actual user attention.

Main Paineframe
Oct 27, 2010

A Buttery Pastry posted:

When in fact it's because there's no mechanism that punishes them at all that they're stomping all over the interests of their own citizens.

Pretty sure they're in fact meeting the interests of some of their citizens, at the cost of stomping all over the interests of others. Which is another big problem with the "attention economy" - nation-sized populations don't always have similar or even consistent interests, and any aggregate approach would inevitably gently caress the minority for the sake of the majority.

Kalman
Jan 17, 2010

In the world of actual academics writing about similar topics, James Grimmelmann (a law professor who regularly works in the law-of-technology space) uploaded a new paper on SSRN that touches on some of the same issues identified in this thread, with what looks to be some actual thought behind it (and text you can actually read).

The abstract:

"Social software has a power problem. Actually, it has two. The first is technical. Unlike the rule of law, the rule of software is simple and brutal: whoever controls the software makes the rules. And if power corrupts, then automatic power corrupts automatically. Facebook can drop you down the memory hole; Paypal can garnish your pay. These sovereigns of software have absolute and dictatorial control over their domains.

Is it possible to create online spaces without technical power? It is not, because of social software's second power problem. Behind technical power there is also social power. Whenever people come together through software, they must agree on which software they will use. That agreement vests technical power in whoever controls the software. Social software cannot be completely free of coercion - not without ceasing to be social, or ceasing to be software.

Rule-of-law values are worth defending in the age of software empires, but they cannot be fully embedded in the software itself. Any technical design can always be changed through an exercise of social power. Software can help by making this coercion more obvious, or by requiring more people to join together in it, but it alone cannot fully protect users. Whatever limits make social software humane, fair, and free will have to come from somewhere else - they will have to come from We the Users."

salisbury shake
Dec 27, 2011
The difference between that paper and Eripsa's proposal is the author's willingness to immediately address actual historical cases of technology regressing rights and power relations (you know, problems that would be amplified if these systems became the framework of society), while the latter has a blind, wanton urge to jump right into a techno-utopia where the influence of people, power, and abuse is irrelevant because of perfect implementation. That would be interesting if it were mere musings, but that dismissive attitude precludes having a basic Socratic dialogue to elucidate how our real society would be affected.

This is why I brought up the whole CP thing: it pulls together your thesis of self-organizing interest groups being the basis of all relations and touches on concerns any revolutionary movement must address: conflicting and minority interests, ethics and rights, and the balance of power and resources. Basic stuff.

I'm just disappointed that you refuse to scratch beyond the surface of these questions, or handwave away problems that might arise by saying those concerns would be addressed by the magic of impartial algorithms and interest groups.

Republican Vampire
Jun 2, 2007

Ratoslov posted:

Wait, so your system's method of motivating people to do ugly, dirty, or boring but necessary work is


To be honest it sounds a lot like the gamified dystopia in Fifteen Million Merits. Only that possible scenario for the future actually took into account the existence and dominance of corporate power.

Slanderer
May 6, 2007
So, here's my argument:

1. Twitter bots EXIST

2. Twitter bots will soon be sentient

3. Twitter is Good and Right

4. Algorithms

5. The world will inevitably be run by benevolent software that lets us produce enough resources for everyone, without requiring anyone to work, and ensuring optimal distribution. Proper application of quantum computing & The Eternal Love of Jesus makes the travelling salesman problem pretty NP-easy and laid back and we solve that poo poo in like 20 minutes. Also google glass.

OXBALLS DOT COM
Sep 11, 2005

by FactsAreUseless
Young Orc
Morozov just had a piece in the New Yorker where he mentions the attention economy. Not the OP's private definition, of course, but the actual term.

http://www.newyorker.com/arts/critics/atlarge/2014/01/13/140113crat_atlarge_morozov?currentPage=1

quote:

Hatch and Anderson alike invoke Marx and argue that the success of the maker movement shows that the means of production can be made affordable to workers even under capitalism. Now that money can be raised on sites such as Kickstarter, even large-scale investors have become unnecessary. But both overlook one key development: in a world where everyone is an entrepreneur, it’s hard work getting others excited about funding your project. Money goes to those who know how to attract attention.

Simply put, if you need to raise money on Kickstarter, it helps to have fifty thousand Twitter followers, not fifty. It helps enormously if Google puts your product on the first page of search results, and making sure it stays there might require an investment in search-engine optimization. Some would view this new kind of immaterial labor as “virtual craftsmanship”; others as vulgar hustling. The good news is that now you don’t have to worry about getting fired; the bad news is that you have to worry about getting downgraded by Google.

Hatch assumes that online platforms are ruled by equality of opportunity. But they aren’t. Inequality here is not just a matter of who owns and runs the means of physical production but also of who owns and runs the means of intellectual production—the so-called “attention economy” (or what the German writer Hans Magnus Enzensberger, in the early sixties, called the “consciousness industry”). All of this suggests that there’s more politicking—and politics—to be done here than enthusiasts like Anderson or Hatch are willing to acknowledge.

Also, does this sound familiar?

quote:

Ivan Illich’s “Tools for Conviviality,” ... called for devices and machines that would be easy to understand, learn, and repair, thus making experts and institutions unnecessary. “Convivial tools rule out certain levels of power, compulsion, and programming, which are precisely those features that now tend to make all governments look more or less alike,” Illich wrote. He had little faith in traditional politics. Whereas Stewart Brand wanted citizens to replace politics with savvy shopping, Illich wanted to “retool” society so that traditional politics, with its penchant for endless talk, becomes unnecessary.
...
[B]ut the naïveté of Illich and his followers shouldn't be underestimated. Seeking salvation through tools alone is no more viable as a political strategy than addressing the ills of capitalism by cultivating a public appreciation of arts and crafts. Society is always in flux, and the designer can’t predict how various political, social, and economic systems will come to blunt, augment, or redirect the power of the tool that is being designed. Instead of deinstitutionalizing society, the radicals would have done better to advocate reinstitutionalizing it: pushing for political and legal reforms to secure the transparency and decentralization of power they associated with their favorite technology.
...
A reluctance to talk about institutions and political change doomed the Arts and Crafts movement, channelling the spirit of labor reform into consumerism and D.I.Y. tinkering. The same thing is happening to the movement’s successors.

duck monster
Dec 15, 2004

Slanderer posted:

So, here's my argument:

1. Twitter bots EXIST

2. Twitter bots will soon be sentient

3. Twitter is Good and Right

4. Algorithms

5. The world will inevitably be run by benevolent software that lets us produce enough resources for everyone, without requiring anyone to work, and ensuring optimal distribution. Proper application of quantum computing & The Eternal Love of Jesus makes the travelling salesman problem pretty NP-easy and laid back and we solve that poo poo in like 20 minutes. Also google glass.

My "FEMACAMP/ACORNBOT" and "WHITEHOUSE SECURITY INVESTIGATION" bots I wrote to troll twitter's #tcot (Top conservatives on twitter) hashtags would probably by the waterboard weilding KGB bastards of such a scheme.FEMACAMP monitored #tcot, and any mention of Acorn would start producing messages signaling that a candidate for internment at a fema camp was being processed. Whitehouse security investigation would watch for mentions of Obama and various hot-topics in conservative land, and then when triggered inform the user that they had been reported to investigations @whitehouse.org review by homeland security. Both bots long since banned. For a while there acornbot was generating a huge amount of tears, although my goal was to get glen beck to cite it as evidence of a huge conspiracy, which alas it was strangled before I had tuned it well enough to upset enough people.

I wonder if I still have the sourcecode.
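The original source is gone, but the trigger logic as described reconstructs to something like the following hypothetical sketch. The actual Twitter API calls the real bots would have made (auth, search, reply) are omitted, and the usernames and sample tweets are made up.

code:

# Hypothetical reconstruction of the keyword-trigger logic described above;
# the original bot source is not available. No real Twitter API calls here.

import re

# Keyword triggers and canned replies, modeled on the post above.
TRIGGERS = {
    re.compile(r"\bacorn\b", re.IGNORECASE):
        "FEMACAMP PROCESSING: candidate for internment has been logged.",
    re.compile(r"\bobama\b", re.IGNORECASE):
        "WHITEHOUSE SECURITY INVESTIGATION: this post has been flagged for review.",
}

def replies_for(tweets):
    """Return (username, reply_text) pairs for tweets that match a trigger."""
    out = []
    for tweet in tweets:
        text = tweet.get("text", "")
        if "#tcot" not in text.lower():
            continue  # only watch the #tcot hashtag, as described above
        for pattern, canned_reply in TRIGGERS.items():
            if pattern.search(text):
                out.append((tweet["user"], f"@{tweet['user']} {canned_reply}"))
                break
    return out

# Made-up sample input standing in for a live hashtag search.
sample = [
    {"user": "patriot1776", "text": "ACORN is stealing the election! #tcot"},
    {"user": "weathernerd", "text": "nice weather today"},
]
for user, reply in replies_for(sample):
    print(reply)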

duck monster fucked around with this message at 04:51 on Jan 9, 2014

agarjogger
May 16, 2011
Best send-up of TED to appear in a TED talk so far?
http://gawker.com/tedx-speaker-talks-about-how-ted-talks-are-bullshit-1496985980

This isn't that lovely comedian who did twenty minutes of valley gibberish, but couldn't keep a straight face and gave it away. He sucked.

agarjogger fucked around with this message at 09:43 on Jan 9, 2014
