|
SCheeseman posted:idk, if AI eventually heralds true post scarcity by inventing star trek poo poo, cool beans, but at the moment it does essays and spits out pretty pictures. Not to say I don't think people will find ways to use the technology in broader ways that have greater impact, but at the moment, post scarcity is still sci-fi.
GPT-4 can use external tools like a calculator, so a Star Trek computer is theoretically possible, but people are still working on the real-world implementations.
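At its simplest, tool use is just a loop: the model emits a marker asking for a calculation, the calling code runs it, and the result gets fed back into the prompt. A toy Python sketch (the fake_llm stand-in and the CALCULATE: convention are made up for illustration, not any real API):

```python
import re

def fake_llm(prompt):
    """Stand-in for a real model call; a GPT-4 API call would go here."""
    if "Result:" not in prompt:
        return "CALCULATE: 127 * 488"
    return "127 * 488 = 61976."

def run_with_calculator(question):
    # Instruct the model to ask for arithmetic instead of guessing at it.
    prompt = ("If you need arithmetic, reply only with 'CALCULATE: <expression>' "
              "and wait for the result.\n"
              f"Question: {question}\n")
    reply = fake_llm(prompt)
    match = re.match(r"CALCULATE:\s*(.+)", reply)
    if match:
        # Toy calculator; never eval untrusted text in real code.
        result = eval(match.group(1), {"__builtins__": {}})
        reply = fake_llm(prompt + f"{reply}\nResult: {result}\n")
    return reply

print(run_with_calculator("What is 127 * 488?"))  # -> 127 * 488 = 61976.
```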
|
# ¿ Mar 24, 2023 16:02 |
|
SCheeseman posted:I meant more in the sense of replicators rather than API integrations. Post scarcity isn't post scarcity until all people can eat and live without dependence on labour or privilege.
Is this thread supposed to be about what's happening currently, or speculation about things that probably won't ever happen?
|
# ¿ Mar 24, 2023 16:14 |
|
Noam Chomsky posted:So how long before it puts web developers and programmers out of work? I’m asking for a friend.
Most large projects I've worked on have had a small collection of senior programmers, a small number of interns/junior programmers that the senior programmers are supposed to train in case they get hit by a bus, and a bunch of in-between programmers who are usually contract workers. ChatGPT can make the junior and senior programmers so much more productive that the in-between programmers are mostly unnecessary, but it can't replace the senior programmers or their hit-by-a-bus replacements in the foreseeable future.
|
# ¿ Mar 31, 2023 15:10 |
|
Another promising area is the possibility that ChatGPT can look at code in really old languages like COBOL and Fortran and not just improve the documentation but translate it into modern languages with cleaner code.
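As a sketch of what that could look like with the 2023-era openai Python library (v0.x); the model choice, prompt, and COBOL snippet are all illustrative, not a tested pipeline:

```python
import openai

openai.api_key = "sk-..."  # your key here

COBOL_SOURCE = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYROLL.
       PROCEDURE DIVISION.
           MULTIPLY HOURS BY RATE GIVING GROSS-PAY.
           DISPLAY GROSS-PAY.
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,  # favor faithful translation over creativity
    messages=[
        {"role": "system",
         "content": "Translate legacy COBOL into idiomatic, documented Python. "
                    "Preserve behavior exactly and note any ambiguities."},
        {"role": "user", "content": COBOL_SOURCE},
    ],
)
print(response.choices[0].message.content)
```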
|
# ¿ Mar 31, 2023 16:12 |
|
Drakyn posted:I want to be very very clear about this without being any more insulting than possible: this seems like a worry based entirely on conflating a hypothetical computer program that's very good at playing Scrabble and Apples to Apples using the internet with a mind. Why do ChatGPT or its hypothetical descendants deserve any more special consideration than the AI bots in an FPS or anything else besides the fact that what it's simulating is 'pretending to talk' rather than 'pretending to play a video game badly'?
Language use is what is supposed to make humans uniquely intelligent.
|
# ¿ Apr 6, 2023 14:26 |
|
gurragadon posted:I mean I thought I was pretty clear but no I don't think ChatGPT itself needs to be treated differently than other chatbots or anything else pretending to talk. It doesn't possess AGI or any form of intelligence, but it is really good at mimicking it like you said. The fact that it is good at mimicking it makes me consider what is right when dealing with AI programs at the current technology level and hypothetical AI programs in the future.
A chatbot doesn't need to be conscious to replace a person; the jobs question is why it needs to be treated differently from other chatbots.
|
# ¿ Apr 6, 2023 15:48 |
|
MSB3000 posted:I'd like to throw out there that if/when we create a conscious AI mind, we need to understand that there's no reason to expect it'll have any of the same kind of experience humans or even biological creatures have. There's not really an objective truth to human consciousness being the "default", or at least there doesn't seem to be reason to believe that. We're one of the unknown number of ways consciousness could exist in the universe. In other words, Galileo showed us it was dumb to assume the Earth was the center of the cosmos, and for the same reason it's dumb to assume human consciousness is the default type of consciousness. Reality does its own thing, we're just part of it.
The thing keeping GPT from becoming AGI has nothing to do with consciousness and everything to do with embodiment. An embodied AGI system would have the ability to perceive and manipulate the world, learn from its environment, adapt to changes, and develop a robust understanding of the world that is not possible in disembodied systems.
|
# ¿ Apr 8, 2023 20:12 |
|
Raenir Salazar posted:How do we know we're not also perceiving the world via a disembodied system though? What is embodiment other than a set of inputs? If we provided a sufficiently realistic and informative simulated environment how does this differ from the Real(tm) environment for the purposes of learning?
Things like physics, balance, and proprioception could theoretically be simulated, but that would involve much more complex inputs than GPT's current multi-modal interface.
|
# ¿ Apr 8, 2023 20:29 |
|
RPATDO_LAMD posted:GPT is just a text prediction model. It only does text prediction. It has 0 internal reasoning ability, although it can fool people who don't really understand how it works because it can generate text that looks like someone reasoning, based on all the examples of that kind of text in its training set.
More likely, GPT will be the master model and lower-level tasks like recognition will be delegated to sub-models.
|
# ¿ Apr 8, 2023 23:01 |
|
IShallRiseAgain posted:I think a true AGI would at least be able to train at the same time as it's running. GPT does do some RLHF, but I don't think it's real-time. It's definitely a Chinese Room situation at the current moment. Right now it just regurgitates its training data, and can't really utilize new knowledge. All it has is short term memory.
That's certainly true for a 100% complete true AGI, but it's not a huge difference practically if it retrains on a regular basis. Eventually there will be fewer and fewer situations that are truly new, as opposed to new situations that can be managed by recognizing and applying existing patterns.
# ¿ Apr 8, 2023 23:04 |
|
RPATDO_LAMD posted:No, you cannot use a text prediction model which isn't generally intelligent and isn't architecturally capable of being generally intelligent as the master model for an artificial general intelligence.
Why?
|
# ¿ Apr 8, 2023 23:10 |
|
RPATDO_LAMD posted:If you put a generative text prediction model in a robot body it would not do anything.
The LLM handles higher-level cognitive tasks; you're talking about lower-level tasks like movement and sensing.
|
# ¿ Apr 8, 2023 23:15 |
|
RPATDO_LAMD posted:Because it is a generative text prediction model. It doesn't have any internal reasoning or logic, it doesn't even have memory besides the fact that the last X tokens of the conversation are fed back into it for the next sentence generation.
This is demonstrably false. It's capable of task planning, selecting models to execute sub-tasks, and intelligently using the results from the sub-models to complete a task.
|
# ¿ Apr 8, 2023 23:20 |
|
RPATDO_LAMD posted:Yeah exactly! This is what I'm saying. You can ask the text generation model to generate a plan or a snippet of code and it will generate some text that plausibly looks like a plan, follows the same text structure as a good plan etc.
And yet it's able to use pattern recognition to apply previous solutions it's seen to similar problems. That's what a lot of people are doing when they say they're reasoning.
|
# ¿ Apr 8, 2023 23:45 |
|
cat botherer posted:ChatGPT cannot control a robot. JFC people, come back to reality.
The point is that it will be able to control a robot before it will be conscious.
|
# ¿ Apr 8, 2023 23:56 |
|
DeeplyConcerned posted:Why would you use a language model to control a robot? Wouldn't you want to train the robot model on situations the robot would encounter?
So that you can communicate a task to the robot in natural language and have it figure out a solution.
|
# ¿ Apr 9, 2023 00:04 |
|
DeeplyConcerned posted:That part makes sense. But you still need the robot to be able to interact with the physical world. Because the code to have the robot move its arm is going to be more complicated than "move your left arm" or "pick up the coffee cup". I'm thinking you would need at least a few specialized models that work together to make this work. You'd want a physical movement model, an object recognition model, plus the natural language model. Plus, I'd imagine some kind of safety layer to make sure that the robot doesn't accidentally rip someone's face off.
Focused work on this is going to start soon if it hasn't already. It's a far more tractable problem than trying to make a conscious AI. I'm not a believer in the singularity, but if an LLM capable of acting as a "master model" that can control sub-models with additional skills such as perception, motor control, and decision-making can be iterated on to gradually perform more and more tasks that previously only a human could do, it can still lead to a lot of the same issues as a true AGI, like job displacement and economic inequality.
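To make the "master model" idea concrete, here's a toy Python sketch. Everything in it (the sub-model table, the canned plan) is invented for illustration; a real system would call actual perception and motor-control models:

```python
SUB_MODELS = {
    "VISION": lambda arg: f"detected {arg} on shelf 2",  # object-recognition model
    "MOVE":   lambda arg: f"arm moved to {arg}",         # motor-control model
    "GRASP":  lambda arg: f"grasped {arg}",              # manipulation model
}

def llm_plan(task):
    """Stand-in for the LLM: turn a natural-language task into tool calls."""
    return [("VISION", "coffee cup"), ("MOVE", "shelf 2"), ("GRASP", "coffee cup")]

def run_task(task):
    for tool, arg in llm_plan(task):
        result = SUB_MODELS[tool](arg)
        print(f"{tool}({arg!r}) -> {result}")
        # In a real system each result would be fed back to the LLM so it can
        # revise the rest of the plan, with a safety layer vetoing bad actions.

run_task("Pick up the coffee cup")
```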
|
# ¿ Apr 9, 2023 00:33 |
|
eXXon posted:Do you have sources for this?
https://arxiv.org/pdf/2303.17580.pdf (the HuggingGPT paper)
|
# ¿ Apr 9, 2023 12:57 |
|
An online demo of HuggingGPT called JARVIS is out:
https://beebom.com/how-use-microsoft-jarvis-hugginggpt/
https://huggingface.co/join
quote:In the AI field, new large language models are being launched every day and things are changing at a breakneck pace. In just a few months of development, we can now run a ChatGPT-like LLM on our PC offline. Not just that, we can train an AI chatbot and create a personalized AI assistant. But what has intrigued me recently is Microsoft’s hands-on approach to AI development. Microsoft is currently working on an advanced form of AI system called JARVIS (an obvious reference to Marvel’s Iron Man) that connects to multiple AI models and responds with a final result. Its demo is hosted on Huggingface and anyone can check out JARVIS’s capabilities right now. So if you’re interested, go ahead and learn how to use Microsoft JARVIS (HuggingGPT) right away.
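For a feel of the kind of sub-models JARVIS dispatches work to, the Hugging Face transformers library exposes them directly. A quick sketch (the task names are real pipeline tags; each call downloads a default model the first time):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # a text-classification sub-model
print(classifier("HuggingGPT is a neat idea."))

summarizer = pipeline("summarization")       # a summarization sub-model
print(summarizer("JARVIS connects a controller LLM to many task-specific "
                 "models hosted on Hugging Face and returns a final answer.",
                 max_length=20, min_length=5))
```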
|
# ¿ Apr 13, 2023 17:55 |
|
Lucid Dream posted:Well, by that definition doesn't ChatGPT already pass that threshold?
ChatGPT can only receive input and output via a chat window, which is very limiting all things considered.
|
# ¿ Apr 17, 2023 13:40 |
|
KillHour posted:This is basically the only time someone could make ChaosGPT. It's obviously a joke, and it works because it's bad. Before this, all the existing tech would be too limited to be bad in an interesting way. After this, it will probably be too good to be a joke.
Nah, the main limitation is that no LLM has intrinsic desires, so all it can do is respond to instructions. AutoGPT isn't going to try to hack any nuclear codes unless someone tells it to, so the alignment problem is the real issue there. Once ChatGPT has a robot body it will need to worry about self-preservation and all the sorts of situations that result from Asimov's laws of robotics. Learning from doing is important, but LLMs do so much learning up front that it's not as much of a requirement for intelligent behavior as it is for people.
|
# ¿ Apr 17, 2023 16:17 |
|
KillHour posted:I think you're being too limiting about the definition of what an intrinsic desire is. I would argue they do have intrinsic desires in the same way that a plant has an intrinsic desire to grow towards the sun. It's not necessarily a conscious thought - just a response to stimuli. But one that is engrained into it at the most basic level. I think what humans consider intrinsic goals are closer to a plant growing towards the sun than anything more logical. You literally cannot change them and you probably aren't even aware of them directly. In the same way, a model has no "choice" but to output the natural consequences of its model weights. To give an example - if the model is trained to complete a task and is capable enough, it is probably going to try to stop you from preventing it from completing that task, just because that would interfere with the thing it was trained to do. This might sound like I'm mincing words, but I think it's just that we are uncomfortable about thinking of humans as really advanced automata.
The model doesn't have a "choice" or "desire" to complete a task; it is just executing the function it was designed for. It's no more useful to attribute human-like characteristics, such as desires or goals, to these models than it is to say a thermostat desires to keep a room a certain temperature.
|
# ¿ Apr 18, 2023 00:56 |
|
GlyphGryph posted:Saying they have desires is dumb, sure. But saying they have goals is perfectly reasonable, talking about goals does not require anything remotely human-like to be attributed, and goal modeling and terminology (like the difference between instrumental and terminal goals) is a useful and effective way to describe AI functionality.
It really isn't, because of how easily goal terminology gets munged into intentionality, which then gets munged into consciousness and anthropomorphism.
|
# ¿ Apr 18, 2023 10:20 |
|
GlyphGryph posted:You said it wasn't useful - it clearly is, or you'd be offering an alternate framework for discussing the issue. If we gave up on useful language in scientific fields because idiots somewhere were bad at using it, there's a whole lot of stuff we'd be completely unable to talk about in a meaningful way.
The key is to keep in mind that it's the user who has goals, while ChatGPT has tasks or objectives just like any other program. In this case it's processing input, including context, and utilizing its training data to produce a relevant output.
|
# ¿ Apr 18, 2023 16:00 |
|
KillHour posted:This is actually my point - our core goals are mostly "seek pleasure and avoid pain" and both of those things come from chemical and physiological responses we have no control over. The important thing is we don't need to experience the pain for us to want to avoid it - our brains are hardwired to do or not do certain things. That's pretty much the limit of my knowledge of the subject though, so anything else is speculation. The idea that a trained model may be able to exhibit goal-seeking behavior from the training as a proxy for how our brain is "trained" to avoid pain is definitely speculation. But I think it's plausible and can't be completely ruled out.
It can be ruled out because you're confusing a metaphor (exhibiting goal-seeking behavior) with the reality that it performs specific tasks (generating coherent and relevant responses) based on the data it was trained on.
|
# ¿ Apr 18, 2023 17:40 |
|
GlyphGryph posted:I'm honestly not sure what he thinks a goal is at this point. Magic, probably.
Having a goal requires consciousness and intentionality.
|
# ¿ Apr 18, 2023 18:17 |
|
KillHour posted:Does it? That really sounds like an assertion begging the question.
Taking the intentional stance is a useful last resort when there's no simpler way to explain something's actions. For people, just measuring brain activity won't tell us much at all about the person involved, so we need to attribute intentionality for a useful description. For LLMs, their "goals" are determined by their creators and are essentially programmed tasks that the system is designed to perform, so attributing intentionality isn't necessary.
|
# ¿ Apr 18, 2023 20:36 |
|
GlyphGryph posted:But you think having an objective doesn't, apparently? That doesn't make much sense, considering they are synonymous. Why should we use the word the way you want to, here, where it explicitly requires something to have those things, instead of the way it's traditionally used, especially within technological fields but also elsewhere, where it does not?
Because it's very useful to differentiate between the intentional stance and the design stance.
|
# ¿ Apr 18, 2023 20:42 |
|
gurragadon posted:I was unfamiliar with these terms but wikipedia made it seem like the design stance is taking only the function of a system for granted as working while the design stance doesn't care about the structure or design of the system? The mental processes if you will. https://sites.google.com/site/minddict/intentional-stance-the#:~:text=Just%20as%20the%20design%20stance,object%20as%20a%20rational%20agent.
quote:The Physical Stance and the Design Stance
Objectives are typically more quantifiable than goals. Using the design stance, "objective" emphasizes that these systems are designed to perform specific tasks based on their algorithms and training data, without consciousness or intentions. These tasks are programmed by their creators and can be thought of as objectives that the AI system is designed to achieve.
# ¿ Apr 18, 2023 21:06 |
|
KillHour posted:Okay, but humans are "designed" by evolution to do things that make us more likely to reproduce. It just seems like an arbitrary distinction created to conform to the idea that we're special in a way a computer is not or cannot be. There's a bunch of handwaving going on to gloss over the limitations in our knowledge. It's possible there's some fundamental thing that makes intent real, but it's also possible we're just post-hoc justifying unconscious predisposition as intent.
Cool, so we're at the point of discussing intelligent design.
|
# ¿ Apr 18, 2023 21:22 |
|
KillHour posted:You just made that strawman up and it's incredibly blatant. I didn't say some intelligent god designed us. I said our brains have an inherent structure that is tuned or trained or designed or shaped or whatever you want to call it by evolution. This is not controversial.
It's extremely controversial; you're literally talking about intelligent design. The whole point of evolution is that it provides a way to no longer need a designer.
|
# ¿ Apr 18, 2023 21:39 |
|
gurragadon posted:I think I understand what you are saying now, tell me if I'm off.
Yes, exactly.
|
# ¿ Apr 18, 2023 21:50 |
|
Quinch posted:Yeah isn’t this the whole point of AI really? It’s given a goal defined by a measure of some sort and the AI works out the correct actions of its possible outputs to achieve this. I wouldn’t say it’s desire as such but saying an AI has goals and it’s programmed to maximise them is perfectly reasonable.
Sure, in casual conversation it doesn't really matter, and even in AI systems things like beliefs, desires, and intentions are employed as metaphors. However, in any somewhat serious discussion about AI it's important to distinguish between things that are determined by their design and training data, and the point where something resembling personal motivations and intentions starts to determine its goals, assuming such a thing is even possible for an AI.
|
# ¿ Apr 19, 2023 16:54 |
|
One annoying thing about using ChatGPT for coding is that whether you give it a great idea, a good idea, or a mediocre idea, it responds pretty much the same way: it gives some pros and cons and an example implementation. I'm not sure whether this is the RLHF conditioning trying to avoid hurting my feelings or whether it really has no concept of a "good" implementation vs. a "bad" implementation, assuming both are free of bugs.
|
# ¿ May 12, 2023 13:08 |
|
America Inc. posted:Small diversion in topic:
As far as LLMs go, this is called "temperature". A lower temperature setting results in a more predictable and factual response, while a higher temperature setting results in a more creative response. Though even a low temperature setting can still produce incorrect responses if the model is poorly trained.
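Concretely, temperature just rescales the logits before the softmax when sampling each token. A minimal NumPy sketch with toy logits:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Sample one token index from raw logits scaled by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
print(sample_with_temperature(logits, temperature=0.2))  # near-greedy: almost always token 0
print(sample_with_temperature(logits, temperature=2.0))  # flatter distribution: more random picks
```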
|
# ¿ May 19, 2023 11:50 |
|
cat botherer posted:You're kind of touching on symbolic AI or expert systems, which was really the first way of doing AI before statistical methods took over. They're really useful in some problem domains. I think figuring out how to integrate the two approaches will be a big deal in the future, given that they tend to be good at complementary things.
DeepMind actually found a way to do this for AlphaFold: they figured out how to get the neural network to treat atomic physics as an immutable ground truth when figuring out how proteins fold.
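I don't know AlphaFold's actual machinery, but the general trick of forcing a learned prediction to satisfy a hard physical constraint can be sketched very crudely. Here the bond-length value and the greedy projection are purely illustrative:

```python
import numpy as np

BOND_LENGTH = 1.33  # an idealized bond length in angstroms; purely illustrative

def enforce_bond_lengths(coords):
    """Crudely project a predicted chain so consecutive atoms sit exactly
    BOND_LENGTH apart, keeping each predicted direction."""
    fixed = coords.astype(float).copy()
    for i in range(1, len(fixed)):
        direction = fixed[i] - fixed[i - 1]
        direction /= np.linalg.norm(direction)  # unit vector toward the prediction
        fixed[i] = fixed[i - 1] + BOND_LENGTH * direction
    return fixed

predicted = np.array([[0.0, 0.0, 0.0],   # pretend these came from a neural net
                      [1.1, 0.3, 0.0],
                      [2.6, 0.1, 0.2]])
print(enforce_bond_lengths(predicted))  # neighbor distances are now exactly 1.33
```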
|
# ¿ May 19, 2023 23:39 |
|
SCheeseman posted:I don't think it's practical to heavily regulate machine learning, given its already pervasive use in a bunch of benign applications (and less benign but desirable to government and business). The greater point was that anti-AI advocates' solutions that use copyright as a fixer are a long-term self-own that inhibits the general public's open access to the technology, which they might not care about because they don't want to use it but probably will care once they are forced to in order to keep their jobs and need to pay rent for access to it.
In the UK, the Red Flag Act of 1865 required a man to walk in front of every automobile waving a red flag to warn pedestrians and horse-drawn vehicles. It wasn't repealed until 1896.
|
# ¿ May 21, 2023 16:49 |
|
KillHour posted:The part of art that is being automated here is the labor part, not the creative part. I keep using this example, but it's similar to DAWs becoming a way to make music that doesn't require knowing how to play every instrument in the song. Songs that once required a full band can be made by a single producer. Bands that play live instruments still exist, but now so do a bunch of artists who wouldn't have been able to hire a professional drummer to help them create a song. Instead, they spend $25/month on Sonicpass and the barrier to entry comes way down.
No, it's just the opposite. Bands can still make money playing live; that's the labor part. Theoretically artists could paint pictures that were created with an AI, but I doubt people will pay much for that. OTOH I can imagine EDM artists performing AI-created dance music live and making money selling tickets.
|
# ¿ May 29, 2023 18:53 |
|
KillHour posted:Playing a sampled drum kit is not the same as playing the drums. Playing a sampled guitar is not the same as playing a guitar. I'm not discounting the amount of effort that goes into EDM sets (in fact, they are my favorite genre, and I have dabbled in making my own and I can play the guitar), but I'm pointing out that before sampled instruments became a thing, it would be an incredible amount of effort and money for a solo artist to make music like that. When sampled instruments first came out, a bunch of people complained that it hurt "real" musicians and lamented the death of learning to do things "the old fashioned way." In ways that resemble artists in this thread lamenting that art is changing from underneath them. But what it ended up doing was creating opportunities for entirely new kinds of musical expression, in much the same way generative AI can do for the visual arts.
The EDM artist still has to be there in person in order to get paid. There isn't an equivalent to live concerts for the visual arts.
|
# ¿ May 29, 2023 21:15 |
|
BrainDance posted:
That's not really the same. With art shows they're getting paid once for their creativity and labor, like if a musician could only sell each song they wrote once to a single person. With tangible art that can still work, but AI will make that basically irrelevant for digital art, and the market of people who will only buy art manually created by a person is minuscule compared to the number of people who go to music concerts.
|
# ¿ May 30, 2023 00:17 |