|
rudatron posted:Eripsa, you're weaseling with the definition of words. If you're using 'interaction' to mean 'any interaction, physical or not', then 'community' loses its meaning as a social concept entirely - it just becomes the limit of the space-time light-cone that you happen to be in, whenever you think the community 'started'. It's stupid. Everything Eripsa writes is true; it's just that much of it is inane. The ultimate conclusions from his writings are that objects in close proximity are interconnected and that all human values are subjective. A Spherical Sponge posted:Hey, don't have time to read the entire thread but just to be quick about this: the brain isn't a computer, even just individual cells are enormously complex dynamic systems that we can't model in their entirety just yet, neurons aren't transistors and neither are synapses, the electrochemical signalling that goes on in the brain is complex on multiple levels and non-linear and makes use of a mix of analogue and digital signalling methods, and that's not even going into definitions of what your 'self' is which is a very contentious topic anyway. Uploading isn't going to happen. Neither is the Theseus's ship thing, because you can't replace neurons with techno equivalents because neurobiological machinery is complex to the point that it's unlikely we'll be able to replicate its function with non-organic materials. You're going to die, unless we invent biological immortality, and that's probably only going to be for rich people even if it does end up existing. Also it's pretty easy to take any human idea that isn't strictly empirical and compare it to some religion.
|
# ? Jun 3, 2017 11:27 |
|
|
Kilroy posted:This is pretty much what everyone does, for everything, all the time (including the hallucinogens, if we're counting caffeine as a hallucinogen). I don't know if you are trying to say something about thinking processes or drugs. Re hallucinogens, caffeine generally only induces hallucination - sensory experience of something that is not there - at very high doses (and sleep deprivation). If caffeine at any dose is making you hallucinate, this is something to mention to your physician.
|
# ? Jun 3, 2017 12:47 |
|
Ytlaya posted:After thinking about it for a bit, I think that what's happening with Eripsa is that he comes across some idea that intuitively "feels" interesting/important (and may or may not have been formed while under the influence of hallucinogens*), becomes attached to/invested in it, and then sets out to come up with some sort of logic proving the idea in question. So it's a sort of backwards discovery process, where he starts out with a conclusion that intuitively feels right to him and then sets out to come up with a formal proof. Ytlaya. I'm a professional philosopher who has been thinking about these issues seriously for over a decade. What you're watching is the development of a systematic and comprehensive worldview. However, since you have virtually no background with systematic philosophies, you simply don't appreciate what you're looking at. You don't see the consistency of perspective and approach, you don't appreciate the theoretical commitments being made and the trade-offs being balanced. This is not to call you stupid, but to say that your perspective here is very narrow and parochial. Reading your post feels like taking a 5-year-old to a construction site. The child sees the man moving the wheelbarrow back and forth, and, with her limited understanding of the world, concludes "he must think wheelbarrows are so much fun! Look, he keeps moving it!" But of course, this misses the forest for the tree. The child simply doesn't see how this action combines with all the others to produce a building. In some real sense, the child simply doesn't see what she is looking at.
|
# ? Jun 3, 2017 13:02 |
|
Wow dude. Look, I can understand getting mad, and saying that sort of stuff to someone like me, who's kind of an rear end in a top hat by default. But Ytlaya has been pretty generous to you so far, and your response has basically been to call them a child.
|
# ? Jun 3, 2017 13:25 |
|
Eripsa posted:Ytlaya. I'm a professional philosopher who has been thinking about these issues seriously for over a decade. What you're watching is the development of a systematic and comprehensive worldview. However, since you have virtually no background with systematic philosophies, you simply don't appreciate what you're looking at. You don't see the consistency of perspective and approach, you don't appreciate the theoretical commitments being made and the trade-offs being balanced. Have you ever considered that you aren't actually smart enough to warrant being such a smug prick? I mean, you tried and failed to make Facebook 2.0 like five times now. It's a proven fact that you don't actually know as much as you pretend like you do. E: VVVVVVVVV Please never have kids, you will kill them if this is your level of understanding of infants. Who What Now fucked around with this message at 13:54 on Jun 3, 2017 |
# ? Jun 3, 2017 13:46 |
|
A Spherical Sponge posted:Hey, don't have time to read the entire thread but just to be quick about this: the brain isn't a computer, even just individual cells are enormously complex dynamic systems that we can't model in their entirety just yet, neurons aren't transistors and neither are synapses, the electrochemical signalling that goes on in the brain is complex on multiple levels and non-linear and makes use of a mix of analogue and digital signalling methods, and that's not even going into definitions of what your 'self' is which is a very contentious topic anyway. I agree with the association between Singularity and eschatology (and posted a peer reviewed article to the same effect earlier in the thread). But this part is wrong, not just about the brain but also about computers. First of all, any complex analog system can be modeled by a digital computer to arbitrary resolution. We have a complete molecular (that's atom-for-atom physically accurate) supercomputer reconstruction of a bacteriophage. We've also modeled complete atomic reconstructions of portions of cells (up to 100 nm^3, approximately 100 million atoms). We've simulated biologically accurate cortical columns of around 30,000 neurons. High resolution simulations are absolutely possible. Geoffrey Hinton at Google said in 2016 (after the first AlphaGo match) that we're about a million times away from brain scale computing with artificial neural networks. In other words, our simulations would need to be a million times bigger to operate on brain scales. That's a million times more nodes, more connections, more layers and organizational structure, and more complexity within each node. A million is a big number, but in computing terms, a million is a mega. We all know the relative timeframe for computing systems to advance by a mega-, because we've seen it happen for other domains of processing speed. Hinton also said it was impossible to predict this stuff more than 5 years out.
In January of this year, Hinton's lab published a paper describing their "Outrageously large" NNs, which are now simulating on the order of 137 billion (with a loving b) nodes. To save you the search, there are about 80 billion neurons in your brain. The complexity scale here is not insurmountable by any measure. BUT EVEN MORE IMPORTANTLY computers are not required to be digital or transistor based. Technically computers are defined in terms of Turing machines, which in the real world are pushdown automata. And PDAs are just finite state machines with memory. So a computer is any finite state machine with memory. A state machine is any machine that a) has states, and b) has some transition procedure between states. If there are finitely many states, it is a finite state machine. If it has memory about its state history, it is a computer. A baby is a finite state machine. A 6 month old baby has a limited number of states: {crying, sleeping, eating, pooping, vomiting, laughing, struggling}. Perhaps this list isn't comprehensive, but it's pretty close. Infant babies don't do very much except transition between these states. New parents spend basically all their time figuring out how those transitions work. It is crying but I want it to sleep, so I have to feed it and then burp it and then it will pass out. Babies are finite state machines with memory. That's sufficient for saying babies are computers. Brains are definitely computers. The universe is a big computer made of little computers.
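The "FSM with memory" claim above can be put in code. This is a toy sketch under my own assumptions: the state names come from the post, but the transition table and the "parental inputs" are invented purely for illustration, not a serious model of infant behavior.

```python
# Toy "baby as finite state machine with memory" sketch.
# States are taken from the post; the transitions are hypothetical.

BABY_STATES = {"crying", "sleeping", "eating", "pooping",
               "vomiting", "laughing", "struggling"}

# Transition table: (current state, parental input) -> next state.
TRANSITIONS = {
    ("crying", "feed"): "eating",
    ("eating", "burp"): "sleeping",
    ("sleeping", "noise"): "crying",
    ("laughing", "tickle"): "laughing",
}

def run(start, inputs):
    """Step through the machine, keeping a memory of visited states."""
    state, history = start, [start]
    for symbol in inputs:
        # Unknown (state, input) pairs leave the state unchanged.
        state = TRANSITIONS.get((state, symbol), state)
        history.append(state)
    return state, history

final, history = run("crying", ["feed", "burp", "noise"])
# final == "crying"; history == ["crying", "eating", "sleeping", "crying"]
```

The `history` list is the "memory" doing the work in the post's definition: drop it and you have a bare FSM, keep it and (on that definition) you have a computer.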
|
# ? Jun 3, 2017 13:50 |
|
rudatron posted:There's no guarantee that every chemical quantity inside a neuron is actually meaningful, there's already a simple differential equation model for neurons that works 'good enough', and the claim that neurons cannot be replicated or replaced is effectively saying that neurons are already optimal for what they do - that's just not the case. Well I sort of agree with some of your points... but when you say that we already have a simple differential equation model for neurons that works 'good enough', what is that 'good enough' in regard to? Like you don't need a model to be 100% accurate to be useful for certain kinds of analysis, but you wouldn't say that it replicates the essence of the system in itself. It's an abstraction, rather than the thing itself. I find it hard to argue with where you're coming from because your suppositions about the nature of consciousness and the brain are so different from my own. Or at least it seems to be that way. [quote="mercrom" post="472993638"] Don't forget to add quantum computing until computers can do that too. Then just add magic. Also it's pretty easy to take any human idea that isn't strictly empirical and compare it to some religion. [/quote] Well first of all I don't think brains involve quantum computation, if that's the line of thought you were going down. I don't think brains or consciousness are magic, I just think that consciousness is dependent on very particular physiological structures. I think consciousness is the result of the interaction between the brain and body and environment (though the distinction between the brain and body is a reductionistic one, because there's lots of computation that goes on in the tissues themselves and not in the brain, such as the gut and certain physiological feedback mechanisms), and that even if you could somehow recreate the informational processes that are going on in the brain in a computer simulation, it wouldn't be conscious imo.
I'm not really in the mood to do an effort post atm but I might later, it's a pretty complicated issue that isn't settled at all. Also it's not just a matter of taking a human idea that isn't strictly empirical and comparing it to some religion. If you read the article (though I seem to remember a more in-depth article on the issue; this is more about the author's recovery from evangelism and how the structure of transhumanist ideas strongly match up with christian eschatology and caused a 'relapse' of sorts), transhumanism isn't being compared with religion because it's not empirical, it's being compared with christian eschatology because the philosophical ideas contained within transhumanism closely match up with those found in early christian eschatology. They do not, for example, match up with islamic or hindu eschatology. Edit: I want to be clear: these are just my opinions I've arrived at from my education in neuroscience and my hobby of reading philosophy. I may well have an incorrect conception of things. A Spherical Sponge fucked around with this message at 14:12 on Jun 3, 2017 |
# ? Jun 3, 2017 14:10 |
|
Eripsa posted:A baby is a finite state machine. A 6 month old baby has a limited number of states: {crying, sleeping, eating, pooping, vomiting, laughing, struggling}. Perhaps this list isn't comprehensive, but it's pretty close. Infant babies don't do very much except transition between these states. New parents spend basically all their time figuring out how those transitions work. It is crying but I want it to sleep, so I have to feed it and then burp it and then it will pass out. Babies are finite state machines with memory. That's sufficient for saying babies are computers. Brains are definitely computers. The universe is a big computer made of little computers. Have you ever seen or interacted with a baby? A human one? Your idea that a baby can only have a finite number of "states" is incredibly reductive of human experience. You can't see wonder on a state machine's face. State machines don't form bonds with their mothers. State machines don't form likes and dislikes based on self-formed desires... I could go on, but my point is that you're oversimplifying the experience of being a baby to an astounding degree.
|
# ? Jun 3, 2017 14:31 |
|
COOL CORN posted:Your idea that a baby can only have a finite number of "states" is incredibly reductive of human experience. You can't see wonder on a state machine's face. State machines don't form bonds with their mothers. State machines don't form likes and dislikes based on self-formed desires... I could go on, but my point is that you're oversimplifying the experience of being a baby to an astounding degree. Perhaps you don't appreciate how general the term "state machine" is. Wonder is a state (we literally say "state of wonder"). Bonding is a process, but we could model it as two states: "Bonded" and "Unbonded". Boom. We could make a list of possible things to like, and add states "Likes X" to our state machine whenever it acquires that taste. A "state machine" is an abstract model of a system's behavior, describing how it transitions between discrete states. It doesn't require saying how the system works any deeper than required to specify the state. In the wikipedia entry for finite state machine you get the example of a state machine for a turnstile. The state machine entirely describes how the turnstile works at the level of this state description. But it doesn't say anything about what it is made of, how it works, what color it is, how many prongs it has, etc.; those properties don't matter to the function of the turnstile. A given state machine can describe a system at an arbitrary level of abstraction. Notice that this turnstile doesn't have memory; it doesn't remember how many coins you put in. So it's not a computer, it's just an FSM. To be a computer, you have to be a pushdown automaton. A Turing Machine is just a PDA with infinite memory, so every practical computer is a PDA, which again is just FSM + memory. Similarly, a baby is composed of these states at an abstract level: {crying, sleeping, eating, pooping, vomiting, laughing, struggling}. We could list more.
You could potentially argue that there are an infinite number of states, but there are formalisms for handling infinite state machines that don't change the basic point about computation. Anything that can be described as a state machine with memory is a computer. Period. So if we gave that turnstile above some memory (a counter every time you put in a coin, so it returned extra coins), that would make the turnstile a computer. Whether or not it has transistors or electronics or anything else. To be a computer is to be an FSM with memory. A turnstile is a computer, a baby is a computer, a cell is a computer, your stomach is a computer, and so is your brain, the United States, and the Milky Way.
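The turnstile example above is small enough to write out. A minimal sketch: the classic two-state turnstile ({locked, unlocked}), extended with the coin counter suggested in the post. The class and method names are my own; the counter is the "memory" that, on the post's definition, turns the bare FSM into a computer.

```python
class Turnstile:
    """Two-state turnstile FSM, plus a coin counter as 'memory'."""

    def __init__(self):
        self.state = "locked"
        self.coins = 0  # memory: the bare FSM has no such counter

    def input(self, event):
        if event == "coin":
            self.coins += 1          # remembered across transitions
            self.state = "unlocked"
        elif event == "push":
            self.state = "locked"
        return self.state

t = Turnstile()
for e in ["push", "coin", "coin", "push"]:
    t.input(e)
# t.state == "locked", t.coins == 2
```

Delete the `self.coins` line and the remaining object is exactly the Wikipedia turnstile: states plus transitions, no history.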
|
# ? Jun 3, 2017 15:14 |
|
A Spherical Sponge posted:blah blah blah... and that even if you could somehow recreate the informational processes that are going on in the brain in a computer simulation, it wouldn't be conscious imo....blah blah blah human consciousness is indeed a very small issuance of a multitude of chemical reactions this is one of the reasons it is so flawed that is why ALL the reactions peripherally connected to critical self-awareness are not necessary to simulate very much why we can follow a cake recipe without having the exact chicken that laid the exact egg used in the first cake made by that recipe
|
# ? Jun 3, 2017 15:21 |
|
Eripsa posted:It doesn't require saying how the system works any deeper than required to specify the state. But enough about Eripsa's theories
|
# ? Jun 3, 2017 15:28 |
|
Broccoli Cat posted:human consciousness is indeed a very small issuance of a multitude of chemical reactions That's kind of a terrible analogy. no you don't need the exact same brain for something to be conscious; not only are other people, whose brains are different from mine, conscious, but most other animals probably have a form of consciousness too. (if the analogy you were making was brain=egg and consciousness=cake) [edit] also different eggs are in the same class of things while a brain inside a living body is an entirely different thing to a computer also I wasn't saying that human consciousness is a small issuance of a multitude of chemical reactions; I was saying it was the totality of those reactions. or at least I think I was? I have pretty bad add and I haven't taken my meds today because my doc hosed up my prescription so it's hard for me to keep track of my thoughts. I guess my position is that we aren't the ghost in the machine, spirit and flesh are one and the same. a simulation of your consciousness wouldn't be you, it would be an abstraction of you. Maybe I read too much continental philosophy
|
# ? Jun 3, 2017 15:49 |
|
COOL CORN posted:But enough about Eripsa's theories Complexity is difficult, so it's really just easier to pretend that babies are literally no different than a Neopet.
|
# ? Jun 3, 2017 16:06 |
|
Who What Now posted:Complexity is difficult, so it's really just easier to pretend that babies are literally no different than a Neopet. I'm describing a baby at a level of abstraction that is very general. The state "crying" is an extremely complex neurophysiological reaction, itself to be described by a much more complicated distributed state machine. But this complication within the crying state doesn't make the more abstract state description any less true or accurate. What it means is that lots of different machines (a neopet) can instantiate that same formal computer. So it's not hard to see a baby is a computer at an abstract enough level. The question is how abstract do you want to be.
|
# ? Jun 3, 2017 16:46 |
|
Dear thread: I have an idea for a crowdsourced Turing test web tool for evaluating Twitter bots. I am not aware of any such service. The tool I imagine does two things: first, it lets you submit links to tweets. The tool grabs that tweet and the reply chain, including any replies to and from the original tweeter. Second, the tool shows you sample exchanges from its database, scrubbed of identifying information, and asks the reader to judge: Bot or not? Perhaps then the reader is told the correct answer, if known. The goal is to assign Twitterbots scores for how convincing a speaker each is. Ideally, the results would be interesting enough to rate twitter bots on their skills as interlocutors, perhaps to be worn as a badge of honor. Perhaps we could have other bots assigning ratings also, and maintain a score for convincing humans and a score for convincing other bots. Developing this tool doesn't seem to me hard to do from a technical perspective, but I tend to underestimate such things. Any thoughts or suggestions on how I can get something like this built? I'd be happy to offer/acquire funding for such a project; who do I pay to build it for me?
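The scoring half of the proposed tool fits in a few lines. This is a sketch under my own assumptions about the data model, since the post doesn't specify one: each judgment is a (account, guessed_bot) pair, and an account's "convincingness" is simply the fraction of judges it fooled into guessing human.

```python
from collections import defaultdict

# Hypothetical data model: one record per judgment.
# guessed_bot=True means the judge called the account a bot.
judgments = [
    ("bot_a", True), ("bot_a", False), ("bot_a", False),
    ("bot_b", True), ("bot_b", True),
]

def convincingness(records):
    """Fraction of judges each account fooled (i.e. guessed 'not a bot')."""
    fooled, total = defaultdict(int), defaultdict(int)
    for account, guessed_bot in records:
        total[account] += 1
        if not guessed_bot:
            fooled[account] += 1
    return {a: fooled[a] / total[a] for a in total}

scores = convincingness(judgments)
# scores == {"bot_a": 2/3, "bot_b": 0.0}
```

The same function handles the "bots judging bots" variant from the post: keep two record streams, one of human judgments and one of bot judgments, and compute a score over each.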
|
# ? Jun 3, 2017 17:07 |
|
Eripsa posted:The question is how abstract do you want to be. I don't want to be so abstract that everything I say is completely useless. Unfortunately that seems to be your minimum.
|
# ? Jun 3, 2017 17:33 |
|
A Spherical Sponge posted:That's kind of a terrible analogy. no you don't need the exact same brain for something to be conscious; not only are other people, who's brains are different from mine, conscious, but most other animals probably have a form of consciousness too. (if the analogy you were making was brain=egg and consciousness=cake) [edit] also different eggs are in the same class of things while a brain inside a living body is an entirely different thing to a computer I wasn't saying what you were saying, I was saying what I was saying, and it's a perfect analogy, because I was saying what you were trying to say, except that your position was wrong. flesh and consciousness are not one, like instrument and music are not one. I won't use the word "spirit" because it's silly.
|
# ? Jun 3, 2017 17:58 |
|
Bot or not: http://botornot.co/ https://twitter.com/Botometer API and github link: https://twitter.com/Botometer/status/868258068659830785
|
# ? Jun 3, 2017 18:25 |
|
Broccoli Cat posted:I wasn't saying what you were saying, I was saying what I was saying, and it's a perfect analogy, because I was saying what you were trying to say, except that your position was wrong. You're not being very helpful in getting your point across
|
# ? Jun 3, 2017 18:31 |
|
WrenP-Complete posted:Bot or not: Ah yes, I know there are tools for detecting and removing fake accounts. These detection criteria have to do with patterns of tweets and followers, low content posts etc. I'm thinking of a tool that is explicitly framed on the Turing test, in that you're determining bot or not based on a transcript of twitter dialogue. I'm not sure botornot.co/botometer uses any such criteria in its evaluation. The tool I'm imagining would present snippets of dialogue to judges to make an evaluation of whether or not it's a bot in virtue of how well it carries the conversation. The botometer is cool! It says me and Ken are equally likely to be bots, but Ken is more sentimental than me =) edit: botometer does sentiment analysis, which is close to what I'm thinking. I was also thinking of using the same tool to run competitions between bots to see which is more accurate. The sentiment analysis doesn't seem to be the deciding factor for these decisions. Eripsa fucked around with this message at 19:21 on Jun 3, 2017 |
# ? Jun 3, 2017 19:05 |
|
A Spherical Sponge posted:You're not being very helpful in getting your point across this is a comedy forum populated by unthinking Marxists, so that's ok.
|
# ? Jun 3, 2017 20:49 |
|
Broccoli Cat posted:this is a comedy forum populated by unthinking Marxists, so that's ok. Actually good sir I think you will find that I spend approximately 50% of my time thinking about Marxism.
|
# ? Jun 3, 2017 21:00 |
|
Broccoli Cat posted:this is a comedy forum populated by unthinking Marxists, so that's ok. Don't forget techno-fetishists! its you
|
# ? Jun 3, 2017 21:24 |
|
I was playing with Cleverbot (http://www.cleverbot.com/) this afternoon and found this: http://isturingtestpassed.github.io/

quote:Has The Turing Test Been Passed?

quote:But what about Eugene Goostman and other chatbots? Didn't they pass the Turing test?
|
# ? Jun 3, 2017 21:30 |
|
WrenP-Complete posted:I was playing with Cleverbot (http://www.cleverbot.com/) this afternoon and found this. http://isturingtestpassed.github.io/ Yup. The Turing test is the most maligned thing in AI. My original dissertation proposal in 2005 was titled "Saving the Turing test". I'm hoping to do some of that with this next video, which is why it's taking so long to get right. However: I think a twitterbot webtool as I've proposed might do something to stimulate some competitive spirit in the botALLY community and maybe produce some interesting bots. A high rank on turingtest.claims (domain available, $60/year) could be a mark of honor, and give a more crowd-friendly way of popularizing the Turing test outside the stuffy Loebner prize. Is there another goon forum where I can propose this idea and get a dev to create it? This is easily the most actionable Eripsa idea in history. edit: like I have no doubt a minimally competent dev can get the basic thing together in like a week. edit2: I just squatted on turingtest.chat because come on Eripsa fucked around with this message at 22:21 on Jun 3, 2017 |
# ? Jun 3, 2017 22:18 |
|
Eripsa posted:Yup. The Turing test is the most maligned thing in AI. My original dissertation proposal in 2005 was titled "Saving the Turing test". I'm hoping to do some of that with this next video, which is why it's taking so long to get right. I have a feeling I know why this was your "original" dissertation proposal, and not one that got off the ground. You can't change the constraints of a test just because you really want it to pass. Eripsa posted:Is there another goon forum where I can propose this idea and get a dev to create it? This is easily the most actionable Eripsa idea in history. Please post this idea in YOSPOS. PLEASE, PLEASE do it.
|
# ? Jun 3, 2017 22:28 |
|
Eripsa posted:I'm describing a baby at a level of abstraction that is very general. The state "crying" is an extremely complex neurophysiological reaction, itself to be described by a much more complicated distributed state machine. But this complication within the crying state doesn't make the more abstract state description any less true or accurate. What it means is that lots of different machines (a neopet) can instantiate that same formal computer. So it's not hard to see a baby is a computer at an abstract enough level. The question is how abstract do you want to be. im glad u went into pro philosophy instead of pediatrics
|
# ? Jun 3, 2017 22:50 |
|
Eripsa posted:
Your best bet is to post an introduction in GBS and ask the moderators. Politely.
|
# ? Jun 3, 2017 22:55 |
|
COOL CORN posted:Please post this idea in YOSPOS. PLEASE, PLEASE do it. YOSPOS is a subforum of the Serious Hardware / Software Crap forum, for the record. Just make a new thread about it.
|
# ? Jun 3, 2017 23:00 |
|
I posted in the YOSPOS bot thread, which seems like an appropriate place. I'll get torn apart because I'm me, but it's a good idea and worth the try.
|
# ? Jun 3, 2017 23:02 |
|
And it's not even my birthday!
|
# ? Jun 3, 2017 23:10 |
|
yes, but what would Marx say about the neocortex? http://spectrum.ieee.org/computing/software/what-intelligent-machines-need-to-learn-from-the-neocortex
|
# ? Jun 4, 2017 17:19 |
|
Broccoli Cat posted:yes, but what would Marx say about the neocortex? I was recently turned on to Siraj Raval's youtube stream, which does a nice job relating the mathematics of ANNs to more general issues in cognitive science. https://www.youtube.com/watch?v=nhqo0u1a6fw
|
# ? Jun 4, 2017 18:07 |
|
Eripsa posted:I was recently turned on to Siraj Raval's youtube stream, which does a nice job relating the mathematics of ANNs to more general issues in cognitive science. can't argue with Raval's ADAM > everything else, regarding the subcognative logical pragmatism of self-teaching programs, although his generalization-themed take on intelligence falls apart when you're mugged in some foreign poo poo hole and wind up with brain damage and death from sepsis.
|
# ? Jun 4, 2017 18:30 |
|
I spent the day at the National Zoo, and was reminded of this thread because of the Think Tank*. From their website (https://nationalzoo.si.edu):

quote:Think Tank, the place to think about thinking, opened in 1995 as a 15,000-square-foot permanent exhibition that combines the appeal of orangutans, macaques and other charismatic species with an interactive exploration of the question: “What is thinking?” Visitors are introduced to the concept of animal thinking by exploring three factors necessary to establish the existence of thought: image, intention and flexibility.

This may relate to our conversations about transhumanism and intelligence but I'm a little too tired from the sun to be able to talk it out. *Also because my partner kept on talking about babies as state machine robots or something.
|
# ? Jun 4, 2017 20:13 |
|
WrenP-Complete posted:*Also because my partner kept on talking about babies Awww quote:as state machine robots or something. Oh.
|
# ? Jun 4, 2017 20:31 |
|
A Spherical Sponge posted:Well first of all I don't think brains involve quantum computation, if that's the line of thought you were going down. I don't think brains or consciousness are magic, I just think that consciousness is dependent on very particular physiological structures. I think consciousness is the result of the interaction between the brain and body and environment (though the distinction between the brain and body is a reductionistic one, because there's lots of computation that goes on in the tissues themselves and not in the brain, such as the gut and certain physiological feedback mechanisms), and that even if you could somehow recreate the informational processes that are going on in the brain in a computer simulation, it wouldn't be conscious imo. I'm not really in the mood to do an effort post atm but I might later, it's a pretty complicated issue that isn't settled at all. A Spherical Sponge posted:Also it's not just a matter of taking a human idea that isn't strictly empirical and comparing it to some religion. If you read the article (though I seem to remember a more in-depth article on the issue; this is more about the author's recovery from evangelism and how the structure of transhumanist ideas strongly match up with christian eschatology and caused a 'relapse' of sorts), transhumanism isn't being compared with religion because it's not empirical, it's being compared with christian eschatology because the philosophical ideas contained within transhumanism closely match up with those found in early christian eschatology. They do not, for example, match up with islamic or hindu eschatology. I didn't read the article. If you are a person who gets easily swept up in radical ideas you should stay away from religion, politics and the internet, and other places where facts don't matter.
|
# ? Jun 4, 2017 20:33 |
|
Who What Now posted:Awww To be fair, I think it was after an "aww" thing was said, but the state machine joke was too good to miss.
|
# ? Jun 4, 2017 20:44 |
|
Facts matter in both politics and the internet. Don't let a small minority fool you.
|
# ? Jun 4, 2017 20:45 |
|
|
Gradient descent has real problems with data sets that don't play nice. I think Monte Carlo is actually the more 'generalizable' algorithm.
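One way to see the contrast being claimed: on a non-convex function, plain gradient descent from a badly placed start stalls in a local minimum, while even naive Monte Carlo (uniform random) search samples its way into the global basin. A toy sketch; the function, start point, step size, and sample budget are all arbitrary choices of mine.

```python
import random

def f(x):
    # Non-convex: local minimum near x ≈ 1.13, global minimum near x ≈ -1.30.
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

# Gradient descent, started in the wrong basin.
x = 2.0
for _ in range(200):
    x -= 0.01 * grad(x)
gd_value = f(x)  # stuck at the local minimum near x ≈ 1.13

# Naive Monte Carlo: uniform random sampling over the same interval.
random.seed(0)
mc_value = min(f(random.uniform(-2, 2)) for _ in range(10_000))

# mc_value lands near the global minimum (≈ -3.51) and beats gd_value
```

The flip side, of course, is cost: gradient descent took 200 cheap steps, while the random search spent 10,000 evaluations, and in high dimensions uniform sampling stops being viable long before gradient methods do.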
|
# ? Jun 4, 2017 20:48 |