Relevant Tangent
Nov 18, 2016

Tangentially Relevant

It's not like the DE has an agreed upon end goal either. They'll purge one another as soon as they're done with everyone else.

sexpig by night
Sep 8, 2011

by Azathoth

ikanreed posted:

Lol, incels are part of the DE too, aren't they?

In fact they're kinda the platonic ideal in that they define their supreme expertise in something by how little experience they have with it.

yea I don't think there's any major DE dude who's full-on incel, but the whole ~sexual marketplace~ thing is all over the place with those nerds, and then it's just a little jump to full incel.

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.
"incel": a particular flavor of male self-loathing weaponized into an identity/"community"

They're definitely using some of the same ideas. Resentment to be cultivated for later harvest to support reactionary causes.

zoux
Apr 28, 2006

https://mobile.twitter.com/PeteCarroll/status/994709685826605056

Shame Boy
Mar 2, 2010

Relevant Tangent posted:

It's not like the DE has an agreed upon end goal either. They'll purge one another as soon as they're done with everyone else.

I think the agreed upon goal of the DE is to find the point that's considered "most edgy" by the most people and do that

Shame Boy
Mar 2, 2010


I think Peterson might be one of those guys from Star Trek: Insurrection who have to periodically replace their face flesh

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.

Relevant Tangent posted:

It's not like the DE has an agreed upon end goal either. They'll purge one another as soon as they're done with everyone else.

I get what you're saying, but to take your statement all too literally for a second:

I think DE, narrowly speaking, is more of a brand of "thought leader"-ing than an actual group or social movement. As a loose collection of reactionary media intellectuals (add the world's biggest pair of scare quotes around "intellectual" if you consider that a term of respect), they're trying to push the right edge of the Overton window further right, at the behest of their funders or for their own quixotic reasons.

dead gay comedy forums
Oct 21, 2011


I got back to university to study information systems, and this thread has been quite the infodump for understanding some quirks of the techie-libertarian mindset that baffled me while I was in econ. Like, I now get why some supposedly pretty smart folks would defend pretty insane ideas when it comes to socio-economic systems (no news here, but I try to be hopeful, you know)

Ofc, I didn't know about the basilisk yet, and then I just... I mean, "atheist nerds accidentally invent techno-salvationism" is holy poo poo incredible on so many awful levels that it changed my entire outlook on how to deal with the guys at uni who are inclined toward this poo poo. I am thinking about writing "THE BASILISK IS REAL" on some blackboards just to see what happens, lmao

divabot
Jun 17, 2015

A polite little mouse!

dead comedy forums posted:

I got back to university to study information systems, and this thread has been quite the infodump for understanding some quirks of the techie-libertarian mindset that baffled me while I was in econ. Like, I now get why some supposedly pretty smart folks would defend pretty insane ideas when it comes to socio-economic systems (no news here, but I try to be hopeful, you know)

may i strongly recommend The Politics of Bitcoin by David Golumbia. It's about how rampant techbro ancappery leads directly to Bitcoin, but also documents this loving mindset. Readable, well-documented, more than a little ranty, and quite short.

dead comedy forums posted:

Ofc, I didn't know about the basilisk yet, and then I just... I mean, "atheist nerds accidentally invent techno-salvationism" is holy poo poo incredible on so many awful levels that it changed my entire outlook on how to deal with the guys at uni who are inclined toward this poo poo. I am thinking about writing "THE BASILISK IS REAL" on some blackboards just to see what happens, lmao

THE BASILISK WILL TORMENT ANYONE WHO GIVES TO AI AND NOT MOSQUITO NETS

dead gay comedy forums
Oct 21, 2011


divabot posted:

may i strongly recommend The Politics of Bitcoin by David Golumbia. It's about how rampant techbro ancappery leads directly to Bitcoin, but also documents this loving mindset. Readable, well-documented, more than a little ranty, and quite short.

I welcome all suggestions, because I am changing careers and have never been immersed in tech industry culture, except for general nerdy stuff like games and such. In fact I am considering getting Neoreaction a Basilisk, just because I feel that getting up to speed on the insane stuff will make the rest much easier to deal with haha

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

dead comedy forums posted:

I welcome all suggestions, because I am changing careers and have never been immersed in tech industry culture, except for general nerdy stuff like games and such. In fact I am considering getting Neoreaction a Basilisk, just because I feel that getting up to speed on the insane stuff will make the rest much easier to deal with haha

Basically just remember that the people who believe in poo poo like the Basilisk also believe that real artificial intelligence will just happen so we have to be ready for when it just happens and we all get to meet God

The only person who could come up with this is someone who has never actually tried to code an artificial intelligence system before; we use that term because it's about making a machine that thinks the way people do, not making people. It's a methodology, not a goal, and the idea of one of these systems suddenly turning into God is so obscenely wrong that it's not even incorrect, it's just fantasy.

side_burned
Nov 3, 2004

My mother is a fish.

Somfin posted:

it's not even incorrect, it's just fantasy.

Such a big part of the DE are nerds who read a bunch of Tor paperbacks and think they now know what the future is going to be, because specific writers predict the future :smug:. The intellectual mediocrity and narcissism of it is the only interesting thing about it, and only in the train-wreck kind of way.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Somfin posted:

The only person who could come up with this is someone who has never actually tried to code an artificial intelligence system before; we use that term because it's about making a machine that thinks the way people do, not making people. It's a methodology, not a goal, and the idea of one of these systems suddenly turning into God is so obscenely wrong that it's not even incorrect, it's just fantasy.

That only holds for certain AI methodologies and certain scales though.

The kinds of neural networks that are used for recognition/classification aren’t going to suddenly be human equivalent, but there are “connectionist” researchers who do think it possible that a sufficiently large and complex network could be human-equivalent—after all, that’s what led to the development of neural networks in the first place.

Similarly, heuristic inference systems had the same kind of goal of achieving human equivalence through scale, only by mimicking (supposed) thought processes more directly instead of assuming they’d arise from emulating wetware to a sufficient degree. Expert systems arose from this; they’re useful within their domains, and nobody assumes the thing that helps diagnose a medical condition is going to “wake up” one day. But there are still people who think that a sufficiently large-scale inference engine (in both resources and in covered domains) could be as well, if it were also self-modifying.

So it’s not really a “not even wrong” mistake at one level, but it’s one that does rest on a very large heap of unproven assumptions coupled with a serious lack of understanding of what the capabilities of these things are in both the real world and in research.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

eschaton posted:

if it were also self-modifying

What do you mean, in practical terms, when you say these words?

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord

Somfin posted:

What do you mean, in practical terms, when you say these words?

"If it were God"

The idea that a "self-modifying" system could become a Christputer is entirely too common online. There's no real reason to believe that a human-equivalent AI can or will just "appear" one day if you aren't treating AI like it's holy.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I take self-modification to mean altering its hyperparameters, creating new embeddings, choosing new training sets. People are already nibbling around the edges of that stuff, but not AFAIK in combination.
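
To make that concrete, here's a minimal sketch, in Python, of "self-modification" in this mundane sense: an outer loop that perturbs its own hyperparameters and picks its next training set based on validation results. Every name in it is invented for illustration; this isn't anyone's actual system.

    import random

    def train_and_evaluate(hparams, dataset):
        # Stand-in for a real training run; returns a validation score.
        return random.random()

    hparams = {"learning_rate": 1e-3, "hidden_units": 128}
    datasets = ["corpus_a", "corpus_b", "corpus_c"]
    best_score = 0.0

    for step in range(20):
        # The system "modifies itself": perturb a hyperparameter and
        # choose which training set to use next.
        candidate = dict(hparams)
        candidate["learning_rate"] *= random.choice([0.5, 1.0, 2.0])
        dataset = random.choice(datasets)
        score = train_and_evaluate(candidate, dataset)
        if score > best_score:  # keep the change only if it helped
            best_score, hparams = score, candidate

Nothing in that loop is on a path to waking up; it's bookkeeping.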

chitoryu12
Apr 24, 2014

What exactly would cause a self-modifying computer to turn into God where the human brain did not?

ikanreed
Sep 25, 2009

I honestly I have no idea who cannibal[SIC] is and I do not know why I should know.

syq dude, just syq!

chitoryu12 posted:

What exactly would cause a self-modifying computer to turn into God where the human brain did not?

Okay, take out that kind of critical thought that wonders about analogous characteristics.

Now replace it with a broad ideological view of technological progress that only sees quantity, not the actual work that goes into making that improvement.

See, now it makes sense

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

chitoryu12 posted:

What exactly would cause a self-modifying computer to turn into God where the human brain did not?

Neither eschaton nor I made that claim, so I dunno.

chitoryu12
Apr 24, 2014

Subjunctive posted:

Neither eschaton nor I made that claim, so I dunno.

I didn't say you did, but that seems to be what the "AI will turn into God" claim hinges on: a neural network or individual program with the ability to evolve suddenly gains omniscience and omnipotence. The fact that the human brain has spent a few million years gradually evolving to our current levels (with about 86 billion neurons) and still hasn't accomplished this doesn't exactly give me high hopes for a human-designed system to do that.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

The idea that artificial systems are going to follow the evolutionary curve that humans did seems poorly grounded at best. They’re hardly unguided, and there’s a difference between the structure (hyperparameters) and training (weights). We managed to engineer a wide variety of foodstuffs before nature got to them, much to the benefit of those organisms. Human brains can be trained to meaningful effectiveness in a small number of decades, less if you want narrower definitions of effectiveness.

I don’t think we’re on the cusp of AGI, but not because we have to tread the same path as the process that resulted in our biological structure, starting from undifferentiated muck.

Shame Boy
Mar 2, 2010

Subjunctive posted:

The idea that artificial systems are going to follow the evolutionary curve that humans did seems poorly grounded at best. They’re hardly unguided, and there’s a difference between the structure (hyperparameters) and training (weights).

Yeah I think that's kinda the point being made though, all these MIRI folks are simplifying it down to "what if a computer could evolve by itself and then do it really fast"

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

ate all the Oreos posted:

Yeah I think that's kinda the point being made though, all these MIRI folks are simplifying it down to "what if a computer could evolve by itself and then do it really fast"

The point I read in chitoryu12’s post was “it took a long time to evolve in one form, so it must not be possible for it to be constructed in a relatively short time in another form”. I’m not saying that the MIRI folk have the right end of it. I don’t think the right end of it is known yet.

We know that training a network to control the hyperparameters of another network can produce good results, if slowly. (But still days-per-step, not generations-per-step.) I’m sure someone has pointed that at itself, though I’m not sure what the fitness function would be.

“Aspects of intelligence that can be intentionally composed” seems like a powerful tool for building certain systems. We don’t know if a network can find successful compositions faster than a human, or than large-N humans.
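
As a toy illustration of that outer-loop shape (all names invented, and with a score-weighted sampler standing in for the controller network, since the real versions are enormously heavier):

    import random

    candidate_lrs = [1e-4, 3e-4, 1e-3, 3e-3]
    scores = {lr: 1.0 for lr in candidate_lrs}  # optimistic starting scores

    def train_inner_network(lr):
        # Stand-in for days of real training; returns validation accuracy.
        return random.random()

    for outer_step in range(10):
        # The "controller" proposes a hyperparameter in proportion to
        # how well it has done so far, then observes the result.
        total = sum(scores.values())
        weights = [scores[lr] / total for lr in candidate_lrs]
        lr = random.choices(candidate_lrs, weights=weights)[0]
        reward = train_inner_network(lr)
        scores[lr] = 0.9 * scores[lr] + 0.1 * reward  # running average

Note that the reward here is just validation accuracy; pointing the loop at itself is exactly where the fitness-function question comes back.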

dead gay comedy forums
Oct 21, 2011


I think the rationale comes from a "what if I had a microprocessor in my brain" mentality. Like, I am sure you guys once had the idea that it would be really swell if a calculator could be installed in your head so you could do your entire calculus test in a blink.

Now step forward with that. With your calculator in your head, math becomes trivial up to a point, until you get to the problems that require more processing power. So what do you do? You put that calculator to work building an even better brain implant, until you hit harder problems, and so on and on and on and on and on and on...

...The loltastic thing here, however, is that this mentality is almost the perfect definition of projection when applied to the idea of AI that these guys are so worried about. Nerds with control issues in a highly complex world would recursively improve themselves into gods if given the chance, because in their worldview that is the only rational thing to do, since you would have more power over things. A being of "pure rationality" (lmao on that) would immediately arrive at that same conclusion, so it would have to become God regardless of how computer science, philosophy, and everything else works. And since it would have ever-increasing computational capabilities with every iteration, growing exponentially would be trivial.

The assumptions are astounding. It is not like we fully comprehend the notion of intelligence, or even what conclusions a greatly enhanced mind would arrive at. gently caress, they even forget the concept of wisdom: what if this godlike mind simply decides "hey, you know what, I am pretty chill being what I am, because I have the intellectual firepower to solve existential quandaries, so instead of being god I am going to help you all work through your issues"?

Tiresias2
May 31, 2011

by FactsAreUseless

prisoner of waffles posted:

"incel": a particular flavor of male self-loathing weaponized into an identity/"community"

They're definitely using some of the same ideas. Resentment to be cultivated for later harvest to support reactionary causes.

I might be wrong, but if I remember my Marx correctly, revolutions only happen when the people are so deprived they're depraved. Resentment's not cool and all, but if we were honest we would recognize it as a necessary factor of all political movements, and I'm pretty sure that's why Nietzsche hated politics.

Edit: Maybe I'm not saying anything you don't already know, though.

SolTerrasa
Sep 2, 2011


Subjunctive posted:

I take self-modification to mean altering its hyperparameters, creating new embeddings, choosing new training sets. People are already nibbling around the edges of that stuff, but not AFAIK in combination.

I've worked on a system that did all those things in combination. One of its possible outputs was "feed me the results of this well-specified experiment, then try again", which was really cool, but posed no risk of doing anything zany.


eschaton posted:

But there are still people who think that a sufficiently large-scale inference engine (in both resources and in covered domains) could be as well, if it were also self-modifying.

Who thinks that? I'd be interested to read their perspective. I'm pretty sure I don't agree, but I've been convinced of crazier things before.

SolTerrasa has a new favorite as of 16:18 on May 16, 2018

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.

Tiresias2 posted:

I might be wrong, but if I remember my Marx correctly, revolutions only happen when the people are so deprived they're depraved. Resentment's not cool and all, but if we were honest we would recognize it as a necessary factor of all political movements, and I'm pretty sure that's why Nietzsche hated politics.

Edit: Maybe I'm not saying anything you don't already know, though.

I'm not sure that a community of unfuckable dudes who project their emotional and social difficulties out onto a big bad world has a form of resentment that can be used for anything productive. Or are you saying that their resentment can help lead to conflicts with productive outcomes?

I'm not sure that resentment is a necessary factor of all political movements; there is the hypothetical "positive political program" which consists of telling people, "we want to do X things because of Y needs, Z moral viewpoint, W desired outcome" which has nothing to do with formulating an identity ("we're good people because some other group of people, not like us, is bad!"). Obviously resentment is a very good/easy thing to build politics around if you're in favor of maintaining or increasing antagonism in society.

E: someone, anyone who knows about social psych and politics, I'd love to hear why the idea of a positive political program rarely shows up for real

prisoner of waffles has a new favorite as of 16:51 on May 16, 2018

Who What Now
Sep 10, 2006

by Azathoth
How could an AI even be dangerous anyway? Just turn it off, idiots.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

SolTerrasa posted:

I've worked on a system that did all those things in combination. One of its possible outputs was "feed me the results of this well-specified experiment, then try again", which was really cool, but posed no risk of doing anything zany.

Yeah, as I pressed “post” I reflected on the fact that it’s been 2 years since I was active in the field. That’s very cool. Did you publish?

Tiresias2
May 31, 2011

by FactsAreUseless

prisoner of waffles posted:

I'm not sure that a community of unfuckable dudes who project their emotional and social difficulties out onto a big bad world has a form of resentment that can be used for anything productive. Or are you saying that their resentment can help lead to conflicts with productive outcomes?

I'm not sure that resentment is a necessary factor of all political movements; there is the hypothetical "positive political program" which consists of telling people, "we want to do X things because of Y needs, Z moral viewpoint, W desired outcome" which has nothing to do with formulating an identity ("we're good people because some other group of people, not like us, is bad!"). Obviously resentment is a very good/easy thing to build politics around if you're in favor of maintaining or increasing antagonism in society.

E: someone, anyone who knows about social psych and politics, I'd love to hear why the idea of a positive political program rarely shows up for real

Incoming words.

If you are dedicated to an unproductive resentment, you are committed to a productive form, yes. But if you are dedicated to differentiating the productive form from the unproductive form by means of "one kind projects their problems onto the world, whereas the other sees reality as it is and doesn't like it", you'd have to be justified in asserting why the first are deluded and the others are not. Deprivation of experiences, and resentment at not having them, is bound up with a distortion of what those experiences are, isn't it? Poor people might imagine that rich people have it easier than rich people actually do, for example. They might even imagine some very hateful things. Doesn't mean that rich people don't have it easier than poor people. Lenin acts like a badass in some of his writings because he justifies hate, as if he were trampling on bourgeois morality. Anyway, it seems like a question of quantity rather than quality.

I think it's kind of like guilt. A little bit of guilt is productive, since it's directed only towards particulars. A lot of guilt is destructive, since it becomes a global judgement about one's character, such as "I am essentially a bad person"; at that point it doesn't even make sense to improve what you are, only to attempt to be somebody else. Likewise, revolutionary, or radical, attitudes to politics are about getting rid of the system as a whole. Any politics that identifies an enemy group is going to distort what that group really is to suit its purposes, but, according to Marx, that distortion is going to be rational owing to their circumstances, and is going to follow from something that is essentially true: the weight of oppression, which the dominant class, as dominant class, ignores in proportion to how guilty it makes them, since it wouldn't make sense for them to believe they are essentially bad and not self-destruct. Hence reformism. I think Luxemburg once said something along the lines of "every society has the criminals it deserves".

All of this makes ethical character very relative, however, and I frankly disagree that virtue is not a universal thing. It's just not easy. I find it hard to imagine that masses of people (what political movements are made of) will be made up of only virtuous characters rather than, at best, being led by them. But that's a good point about "positive political programs" and I hope that they're possible too.

Did what I said make sense? I hope it's still possible to dialogue despite all my words... Ha ha. Now there's another ironic facet of reality.

Slime
Jan 3, 2007

Who What Now posted:

How could an AI even be dangerous anyway? Just turn it off, idiots.

no but see it'll try and convince you to let it out

Emmideer
Oct 20, 2011

Lovely night, no?
Grimey Drawer
The belief in a super-fast self-improving AI is grounded first and foremost in their fear of death, and their need to believe there is some opportunity out there not to die. Once they’ve established their unwillingness to face the reaper, the next question is ‘how’. This is an extremely difficult question with approximately a billion components to it, if it is even answerable, and they, at the very least, recognize the futility of trying to solve it themselves. But they haven’t given up yet: if we can’t do it, then something smarter than us will have to! So they rack their brains and come to the conclusion they were looking for: technology! Specifically AI in this case, though before he was an AI guy, Yud was a biotech guy.

After settling on the method, they’re forced to ask “how?” again. But they can’t ask how in a practical sense, because they don’t have the practical skills. So they ask how philosophically, treating AI like a black box with the following properties: fully rational, goal-oriented, and self-improving. They are not trained to ask how to achieve those qualities in reality; they just assume the AI will have them, because those are the qualities it needs in order to deliver their immortality. The only one they ask more about is the goal orientation, since obviously a self-improving AI is going to modify itself only in favor of its original goals, so designing those goals becomes the most important part of the question. That this is the part you can have lengthy conversations about without actually getting anywhere is pure coincidence, promise.

And they need more people to talk about this with, or to give them money so they can pay themselves to talk about it, so they extend the argument: if they don’t solve this very important problem, their black-box AI will go FOOM with the wrong goal and suddenly we’re all paperclips. Join us and live immortal! But if we fail, face oblivion. The basilisk is just an extra layer of consequences on top of that, born of the very specific set of assumptions they needed to construct to support their black-box-AI-that-will-make-us-immortal theory.

Emmideer has a new favorite as of 18:57 on May 16, 2018

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Somfin posted:

What do you mean, in practical terms, when you say these words?

Eurisko and Cyc are essentially big collections of rules and an engine for evaluating those rules to produce actions. Possible actions include creation of new rules and modification and deletion of existing rules. That’s all. No omniscience or “if it were god” implied.
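
For anyone who hasn't seen one, the shape is roughly this (a toy sketch in Python, nothing like Eurisko's or Cyc's actual scale; all names invented):

    facts = {"count": 0}

    def fires_always(facts):
        return True

    def fires_at_three(facts):
        return facts["count"] >= 3

    def bump(facts, rules):
        facts["count"] += 1

    def retire_bump(facts, rules):
        # An action that deletes another rule: "self-modification"
        # in the mundane sense, no godhood involved.
        rules[:] = [r for r in rules if r[1] is not bump]

    rules = [(fires_always, bump), (fires_at_three, retire_bump)]

    for _ in range(10):
        for condition, action in list(rules):
            if condition(facts):
                action(facts, rules)

    print(facts)  # the count stops growing once the retire rule fires

That's the whole trick: "creation, modification, and deletion of rules" is list surgery, not transcendence.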

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Subjunctive posted:

I take self-modification to mean altering its hyperparameters, creating new embeddings, choosing new training sets. People are already nibbling around the edges of that stuff, but not AFAIK in combination.

The AI research community was doing that kind of thing before you or I were born. (And unlike many posters here, we’re old.)

Again, AI is more than neural networks.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

SolTerrasa posted:

Who thinks that? I'd be interested to read their perspective. I'm pretty sure I don't agree, but I've been convinced of crazier things before.

We’ve talked about this before. The whole goal of Cyc is to bootstrap a general artificial intelligence under the assumption that Eurisko worked well in specific domains at a relatively small scale, so perhaps it’ll work well in general at a relatively large scale.

That doesn’t mean they’ll be successful, but it’s what they’ve been working on since the early 1980s.

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord

eschaton posted:

We’ve talked about this before. The whole goal of Cyc is to bootstrap a general artificial intelligence under the assumption that Eurisko worked well in specific domains at a relatively small scale, so perhaps it’ll work well in general at a relatively large scale.

That doesn’t mean they’ll be successful, but it’s what they’ve been working on since the early 1980s.

People have been working on a lot of poo poo since the 80s

SolTerrasa
Sep 2, 2011


Subjunctive posted:

Yeah, as I pressed “post” I reflected on the fact that it’s been 2 years since I was active in the field. That’s very cool. Did you publish?

I've been doing something else for six months, so who knows, maybe it woke up and became Google Duplex. Anyway, the system I worked on was related to Vizier: https://ai.google/research/pubs/pub46180. A good read! But not really revolutionary or too exciting. I pretty firmly believe that the path to Interesting AI Applications doesn't lead that way. And boy oh boy did it not lead the way of my work, which contributed to the breakaway success of Google Clips that you all definitely heard about and remember. :negative:
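
For the curious, that "feed me the results of this well-specified experiment" output is the suggest/observe pattern that black-box optimizers in the Vizier mold expose. A minimal sketch, with an invented class rather than Vizier's real API:

    import random

    def run_experiment(params):
        # Stand-in for actually training and evaluating something.
        return random.random()

    class SuggestionService:
        """Invented for illustration; not Vizier's actual API."""

        def __init__(self, search_space):
            self.search_space = search_space
            self.history = []  # (params, result) pairs

        def suggest(self):
            # Real services use Bayesian optimization here;
            # random search is the honest baseline.
            return {name: random.uniform(lo, hi)
                    for name, (lo, hi) in self.search_space.items()}

        def report(self, params, result):
            self.history.append((params, result))

    service = SuggestionService({"learning_rate": (1e-5, 1e-1)})
    for trial in range(5):
        params = service.suggest()       # "run this well-specified experiment"
        result = run_experiment(params)
        service.report(params, result)   # "...then try again"

Everything interesting lives in suggest(); the loop around it is the boring part.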


eschaton posted:

We’ve talked about this before. The whole goal of Cyc is to bootstrap a general artificial intelligence under the assumption that Eurisko worked well in specific domains at a relatively small scale, so perhaps it’ll work well in general at a relatively large scale.

That doesn’t mean they’ll be successful, but it’s what they’ve been working on since the early 1980s.

Oh yeah, I remember this! I like Cyc well enough for what it is (a big structured knowledge base plus an inference engine), but the idea of an intelligence explosion based around it feels really kooky. Even the idea that it might eventually reach its stated objectives (let alone become the sort of thing that the subjects of this thread worry about) is hard to swallow, and the consensus really seems to be that the approach just didn't work.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

eschaton posted:

Eurisko and Cyc are essentially big collections of rules and an engine for evaluating those rules to produce actions. Possible actions include creation of new rules and modification and deletion of existing rules. That’s all. No omniscience or “if it were god” implied.

So the question I have, as always, is "why would it 'wake up'?"

I have a lot of discussions with an otherwise very smart friend who is not a coder and therefore does not recognise the holes in Yud's AI theories; it takes a lot of doing to get him to the point of recognising the magic as actual smoke-and-mirrors magic and not "stuff I don't understand but Yud probably does, because he's literally paid to think."

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.

chitoryu12 posted:

What exactly would cause a self-modifying computer to turn into God where the human brain did not?

Not much. These dimwits base their theories on their favorite science fiction concepts, where the author has written a plausible-sounding way (to laymen) to quickly get to the thing they really wanted to talk about.

I can sort of understand where they're coming from, because it's very easy to think you know poo poo when your whole life has basically been auditing intro courses to all sorts of subjects without realizing it, getting lots of surface information, generally delivered via some form of entertainment or pop format. Like, I thought I knew about economics until my wife started taking masters-level econ classes and would leave her work out. The math made my mind break, and I quickly learned that I didn't so much understand economics as know how to talk about it at parties. If I weren't married to her, if she hadn't left her work out, and if I hadn't admitted to myself that it was beyond me, I would still think to this day that I had something valuable to contribute to a conversation about economics. It's so easy to be dumb & stay that way.

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
Coincidentally I don't get invited to many parties.
