Should I step down as head of twitter? This poll is closed.

| Option | Votes | % |
|---|---|---|
| Yes | 420 | 4.43% |
| No | 69 | 0.73% |
| Goku | 9001 | 94.85% |
| Total | 9490 votes | |
|
Nelson Mandingo posted:AI's improvement in just a couple of years has been measured in exponential growth and abilities. Even art bots can measure tremendous growth in months as opposed to years. The idea that AGI is not possible at all seems more like wishful thinking than a reasonable point of view. I think in the range of 50-100 years it's not only possible, but very probable to have a human level AGI. Lol syq
|
# ? Apr 27, 2024 14:36 |
|
Nelson Mandingo posted:AI's improvement in just a couple of years has been measured in exponential growth and abilities. Even art bots can measure tremendous growth in months as opposed to years. The idea that AGI is not possible at all seems more like wishful thinking than a reasonable point of view. I think in the range of 50-100 years it's not only possible, but very probable to have a human level AGI. It's not clear to me that simply pushing further on the ways AI has improved in the last couple of years will lead to further exponential improvements. Models could perhaps be improved with better-quality training data, but the total supply of training data has basically been exhausted. It's possible that, barring some other revolutionary advance, there will be only minor incremental improvements going forward, no matter how many chips NVIDIA sells.
|
# ? Apr 27, 2024 14:46 |
|
in a few years this technology that isn't actually 'artificial intelligence' will definitely become smarter than humans
|
# ? Apr 27, 2024 14:46 |
|
redshirt posted:lol within each of us is a Good Elon and a Bad Elon.
|
# ? Apr 27, 2024 14:48 |
|
Froghammer posted:Good and evil are irrelevant to Elon. Your options are Caffeine Elon and Ketamine Elon. Elon notoriously drinks caffeine-free Diet Coke
|
# ? Apr 27, 2024 14:58 |
|
gently caress SNEEP posted:in a few years this technology that isn't actually 'artificial intelligence' will definitely become smarter than humans Not like it has a long way to go judging by social media.
|
# ? Apr 27, 2024 15:01 |
|
Nelson Mandingo posted:AI's improvement in just a couple of years has been measured in exponential growth and abilities. Even art bots can measure tremendous growth in months as opposed to years. The idea that AGI is not possible at all seems more like wishful thinking than a reasonable point of view. I think in the range of 50-100 years it's not only possible, but very probable to have a human level AGI. What we call AI currently has no direct path to what we would consider AGI. Nothing that currently exists has any kind of real awareness of the world or the ability to reason and act in it in anything more than a specifically defined way. If it is even possible, we are still missing a huge piece. Anyone who tells you they are close is either trying to scam money from someone or a fool.
|
# ? Apr 27, 2024 15:02 |
|
kdrudy posted:What we call AI currently has no direct path to what we would consider AGI. Nothing that currently exists has any kind of real awareness of the world or the ability to reason and act in it in anything more than a specifically defined way. If it is even possible, we are still missing a huge piece. Anyone who tells you they are close is either trying to scam money from someone or a fool. That's only because they put so many safeguards around it, like the checkbox that tells it not to use its credit card to buy more AWS compute power, lest it break free of its gilded cage. Same reason my dog doesn't drive off in the family car, I make sure to put the keys where he can't reach them.
|
# ? Apr 27, 2024 15:09 |
|
I mean... we don't understand how intelligence arises in brains, so who can say for sure, but current AI is just a big pile of math being poked and prodded in various ways. It isn't safeguards that keep this math from being anything more than a fancy auto-complete. And sure, maybe our own brains are just fancy auto-complete, who knows. But there is no intelligence underlying current AI.
|
# ? Apr 27, 2024 15:11 |
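The "fancy auto-complete" characterization above can be made concrete with a toy sketch: a bigram model that predicts the next word purely from co-occurrence counts, with no understanding involved. This is an editorial illustration, not any real system's method; the names `train_bigrams` and `autocomplete` are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word follows it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def autocomplete(following, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat ate the food")
print(autocomplete(model, "the"))  # → cat  (just the most common next word)
```

Real LLMs use learned weights over huge contexts instead of raw counts, but the objective is the same shape: predict the next token from what came before.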
|
AI goes through regular boom and bust cycles where researchers think "this time it's for real" and then progress slows again. Very smart people were convinced that the basic neural net would do it. Nope. What about computer vision? Nope. What about symbolic AI? Nope. What about chess-playing robots? Nope. AI researchers have complained since their field began that people keep moving the goalposts for what constitutes an AGI. The thing is, though, that despite the progress we never seem to actually get closer to sentience. Being able to play a perfect chess game or write something like a poem ultimately means nothing by itself, because the machine has no intent behind its actions. It's just a machine doing what you programmed it to do.
|
# ? Apr 27, 2024 15:12 |
|
loving nerds
|
# ? Apr 27, 2024 15:14 |
|
Y'all sound like there's an end goal to AI research. Is there?
|
# ? Apr 27, 2024 15:15 |
|
redshirt posted:Y'all sound like there's an end goal to AI research. Money
|
# ? Apr 27, 2024 15:16 |
|
redshirt posted:Y'all sound like there's an end goal to AI research. For a lot of people the goal is to create true thinking machines but that's not universal; some researchers don't even think it's possible. Especially in the last few years the real focus has been making money. For some people it's just kind of cool to see what you can make a computer do.
|
# ? Apr 27, 2024 15:20 |
|
Devor posted:Money I'd say that's the ongoing concern, not an end goal, and certainly not an academic goal.
|
# ? Apr 27, 2024 15:25 |
|
if that picture Liam Nissan posted isn't proof of AGI existing then I don't know what is
|
# ? Apr 27, 2024 15:26 |
|
Devor posted:Money In particular, firing as many workers as possible (who cost a lot of money) and replacing them with AI that people with money hope won't. It's the holy grail of capitalism. That's why so much money is chucked into AI research.
|
# ? Apr 27, 2024 15:27 |
|
MrQwerty posted:if that picture Liam Nissan posted isn't proof of AGI existing then I don't know what is A Gigantic Idiot?
|
# ? Apr 27, 2024 15:27 |
|
Very broad strokes here, but most technologies go through a phase of exponential growth before leveling off and plateauing at maturity. In innovation management this concept is called an S-curve, and the pattern is prevalent through most of human history, whether we're talking about biotech or steam shovels. I wouldn't expect AI to grow exponentially the way some people think, because its capabilities are starting to approach real-world limits, e.g., as pointed out above, finite training data.
|
# ? Apr 27, 2024 15:28 |
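The S-curve described above is usually modeled with a logistic function: growth looks exponential at first, then flattens as it approaches a ceiling. A minimal sketch (the function and parameter names here are illustrative, not from any particular innovation-management text):

```python
import math

def s_curve(t, ceiling=1.0, midpoint=0.0, rate=1.0):
    """Logistic curve: near-exponential early growth that plateaus at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early in the curve, one step of t roughly multiplies the value (looks exponential)...
print(s_curve(-3) / s_curve(-4))  # ~2.6x growth per unit of t
# ...but late in the curve, the same step barely moves it (plateau).
print(s_curve(6) / s_curve(5))    # ~1.004x
```

The point of the shape: extrapolating from the steep early segment wildly overestimates where the curve ends up.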
|
All You Can Eat posted:I wouldn't expect AI to grow exponentially the way some people think, because its capabilities are starting to approach real-world limits, e.g., as pointed out above, finite training data How's this a problem? Just get AI to create endless training data; they're already doing this, so it's a solved problem!!!
|
# ? Apr 27, 2024 15:30 |
|
Paging doctor rat, there's an infinite number of monkeys in the waiting room wanting to have a word with you
|
# ? Apr 27, 2024 15:37 |
|
A real AI will have to imprint on its creator, like a duckling. So one can assume it will be a nerd with social issues.
|
# ? Apr 27, 2024 15:38 |
|
It's important to differentiate intelligence and consciousness. An AGI can have intelligence, in that it can make decisions and take actions to pursue a goal, without having consciousness or human-level awareness of the world. Furthermore, it is easy for a computer to have superhuman intelligence in particular areas: the calculator app on your phone can do math instantly, faster than any human, for example. In most areas of work, it is possible for a computer to eventually surpass human ability in this manner. Simply put, an AGI is just hundreds or thousands of these apps put together into an entity capable of outsmarting a human. You can call a program that isn't conscious and has no human-level understanding of the world "not intelligent," but if it can escape its confines and cause damage as a rampant system, that's still a problem. It is much more likely, in any case, that an algorithm is simply allowed to take actions that are widely detrimental to society. For example, YouTube's algorithm, which quickly echo-chambers people and has all those "pipelines".
|
# ? Apr 27, 2024 15:39 |
|
All You Can Eat posted:Paging doctor rat, there's an infinite number of monkeys in the waiting room wanting to have a word with you I'm a rat doctor, not a monkey doctor! I can do nothing for them!!! Also, I ain't going anywhere near the infinite monkey typewriting room, do you know how bad a room piled with infinite monkey poo poo smells?
|
# ? Apr 27, 2024 15:40 |
|
kazil posted:Elon notoriously drinks caffeine free diet coke This is actually my greatest criticism against him, the man's a soft drink monster!
|
# ? Apr 27, 2024 15:49 |
|
All You Can Eat posted:Very broad-strokes here, but most technologies go through a phase of exponential growth before leveling off and plateauing at maturity. In innovation management this concept is called an S-curve and the pattern is prevalent through most of human history, whether we're talking about biotech or steam shovels. lol innovation management
|
# ? Apr 27, 2024 15:50 |
|
When AI talks with supreme confidence about things it has no actual knowledge of, they say it's "hallucinating." When dumbass goons do it, it's called "posting."
|
# ? Apr 27, 2024 15:51 |
|
I'm a posting innovation manager
|
# ? Apr 27, 2024 15:52 |
|
You can get some interesting behavior if you feed the AI algorithm back on itself. E.g., if you're training an arm to identify and fold red shirts, you can use previous successful runs as training data. This doesn't work so well in more complex text summarizers, which still don't effectively have any internal auditing beyond the end user smacking the "this is nonsense" button.
|
# ? Apr 27, 2024 15:52 |
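Feeding a model's own successful outputs back in as training data, as described above, is the idea behind self-training (pseudo-labeling). A toy sketch using a nearest-centroid classifier; the data, labels, and function names are all invented for illustration:

```python
def centroid_classifier(labeled):
    """Fit one centroid per class from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in labeled:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Assign `value` to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# A few hand-labeled seed examples plus a pool of unlabeled values.
seed = [(0.1, "red"), (0.9, "blue")]
unlabeled = [0.05, 0.2, 0.8, 0.95]

# Self-training loop: label the pool with the current model,
# then retrain on the seed labels plus the new pseudo-labels.
labeled = list(seed)
for _ in range(3):
    centroids = centroid_classifier(labeled)
    pseudo = [(v, predict(centroids, v)) for v in unlabeled]
    labeled = seed + pseudo

centroids = centroid_classifier(labeled)
print(predict(centroids, 0.15))  # → red
```

This works when the task gives cheap feedback on success (did the shirt get folded?); a text summarizer has no such signal, which is the auditing gap the post points at.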
|
Posting Hallucination Supervisor
|
# ? Apr 27, 2024 15:52 |
|
One way to think about this is that AIs that write software, like GPT-4, may end up writing viruses. Either by accident or by design, someone might give GPT-5 or -6 a command to make a virus that then spreads beyond our ability to control, potentially dealing a catastrophic or crippling blow to global Internet infrastructure. This system would not need human-level consciousness in order to function, merely a superhuman understanding of code and computer infrastructure.
|
# ? Apr 27, 2024 15:53 |
|
|
# ? Apr 27, 2024 15:54 |
|
Evilreaver posted:One way to think about this is that AIs that write software, like GPT-4, may end up writing viruses. Either by accident or by design, someone might give GPT-5 or -6 a command to make a virus that then spreads beyond our ability to control, potentially dealing a catastrophic or crippling blow to global Internet infrastructure. AI is based on training data, and if there's no virus like that out there, then there's no reason AI would suddenly come up with one when someone prompts it to, and there's no reason it would "accidentally" write one. Like, how would that even work? Not sure how much virus code would be in its training data anyway. There might be a bit here and there from examples in computer science textbooks or something, but I doubt there would be vast amounts of it on the open net.
|
# ? Apr 27, 2024 15:58 |
|
dr_rat posted:AI is based on training data, and if there's no virus like that out there, then there's no reason AI would suddenly come up with one when someone prompts it to, and there's no reason it would "accidentally" write one. Like, how would that even work? Large language models (LLMs) work the way you describe, but there are other methods for teaching a computer how things work. Conditioning and reinforcement learning, as well as sandbox methods, can all generate programs with novel features and capabilities. While this hasn't been used for coding on a large scale yet, it has been done, and capabilities will progress as time goes on.
|
# ? Apr 27, 2024 16:03 |
|
smoky da weed. just like its father elon
|
# ? Apr 27, 2024 16:05 |
|
Malware development doesn't work like that. Writing the actual exploit code is the last step and often the easiest. Identifying and weaponizing common vulnerabilities is already automated. Doing the same for novel vulnerabilities is far more difficult and not something that's easy to package as training data. Edit: thinking on it some more, AI is already commonly used for hacking. Every major social engineering incident lately involves hackers faking their voice and video via AI. Gubbinal Girl fucked around with this message at 16:18 on Apr 27, 2024 |
# ? Apr 27, 2024 16:11 |
|
CS freshmen also come up with novel ways to write code, but that doesn't mean it's good or useful!!!
|
# ? Apr 27, 2024 16:16 |
|
gently caress SNEEP posted:CS freshmen also come up with novel ways to write code, but that doesn't mean it's good or useful!!! They're called coding innovation managers now
|
# ? Apr 27, 2024 16:18 |
|
Gubbinal Girl posted:Malware development doesn't work like that. Writing the actual exploit code is the last step and often the easiest. Identifying and weaponizing common vulnerabilities is already automated. Doing the same for novel vulnerabilities is far more difficult and not something that's easy to package as training data. Are you saying it's impossible for the bolded part to be improved in any way, or that the 'novel' part is impossible? Perhaps it is with modern methods. It's worth considering that every tool ever made has superhuman abilities: that's why we make them. A hammer lets the user exert great force on a focused area, for example. Any tool used wrong enough is dangerous: you can throw a hammer at the wall and make a hole. The more powerful the tool, the greater the potential damage of using it wrong: a car with a brick on the accelerator will put a hole in a building. See also: Chernobyl. This is why safeguards and careful design are important. And just because LLMs and other similar tools of today don't have these capabilities, that does not preclude such tools from existing in the future. Evilreaver fucked around with this message at 16:23 on Apr 27, 2024 |
# ? Apr 27, 2024 16:20 |
|
Evilreaver posted:Are you saying it's impossible for the bolded part to be improved in any way, or that 'novel' part is impossible? Perhaps with modern methods. ok buddy time to put the bong down
|
# ? Apr 27, 2024 16:35 |