Should I step down as head of twitter
This poll is closed.
Yes 420 4.43%
No 69 0.73%
Goku 9001 94.85%
Total: 9490 votes

 
euphronius
Feb 18, 2009

Nelson Mandingo posted:

AI's improvement in just a couple of years has been measured in exponential growth and abilities. Even art bots can measure tremendous growth in months as opposed to years. The idea that AGI is not possible at all seems more like wishful thinking than a reasonable point of view. I think in the range of 50-100 years it's not only possible, but very probable to have a human level AGI.

Lol syq


Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost

Nelson Mandingo posted:

AI's improvement in just a couple of years has been measured in exponential growth and abilities. Even art bots can measure tremendous growth in months as opposed to years. The idea that AGI is not possible at all seems more like wishful thinking than a reasonable point of view. I think in the range of 50-100 years it's not only possible, but very probable to have a human level AGI.

It's not clear to me that simply pushing further on the ways AI has improved in the last couple of years will lead to further exponential improvements. Models could perhaps be improved with better-quality training data, but in terms of sheer quantity, the available training data has basically been exhausted. Barring some other revolutionary advance, it's possible there will be only minor incremental improvements going forward, no matter how many chips NVIDIA sells.

FUCK SNEEP
Apr 21, 2007




in a few years this technology that isn't actually 'artificial intelligence' will definitely become smarter than humans

Froghammer
Sep 8, 2012

Khajit has wares
if you have coin

redshirt posted:

lol within each of us is a Good Elon and a Bad Elon.

What Elon do you choose today?
Good and evil are irrelevant to Elon. Your options are Caffeine Elon and Ketamine Elon.

kazil
Jul 24, 2005

Derpmph trial star reporter!

Froghammer posted:

Good and evil are irrelevant to Elon. Your options are Caffeine Elon and Ketamine Elon.

Elon notoriously drinks caffeine free diet coke

Runcible Cat
May 28, 2007

Ignoring this post

FUCK SNEEP posted:

in a few years this technology that isn't actually 'artificial intelligence' will definitely become smarter than humans

Not like it has a long way to go judging by social media.

kdrudy
Sep 19, 2009

Nelson Mandingo posted:

AI's improvement in just a couple of years has been measured in exponential growth and abilities. Even art bots can measure tremendous growth in months as opposed to years. The idea that AGI is not possible at all seems more like wishful thinking than a reasonable point of view. I think in the range of 50-100 years it's not only possible, but very probable to have a human level AGI.

What we call AI currently has no direct path to what we would consider AGI. Nothing that currently exists has any real awareness of the world or the ability to reason and act in it in anything more than a specifically defined way. If AGI is even possible, we are still missing a huge piece. Anyone who tells you they are close is either trying to scam money from someone or is a fool.

Devor
Nov 30, 2004
Lurking more.

kdrudy posted:

What we call AI currently has no direct path to what we would consider AGI. Nothing that currently exists has any real awareness of the world or the ability to reason and act in it in anything more than a specifically defined way. If AGI is even possible, we are still missing a huge piece. Anyone who tells you they are close is either trying to scam money from someone or is a fool.

That's only because they put so many safeguards around it, like the checkbox that tells it not to use its credit card to buy more AWS compute power, lest it break free of its gilded cage.

Same reason my dog doesn't drive off in the family car, I make sure to put the keys where he can't reach them.

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost
I mean... we don't understand how intelligence arises in brains, so who can say for sure, but current AI is just a big pile of math that is being poked and prodded in various ways. Safeguards aren't what's stopping this math from being anything more than a fancy auto-complete.

And sure maybe our own brains are just fancy auto-complete, who knows. But there is no intelligence underlying current AI.
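For what it's worth, the "fancy auto-complete" idea is easy to sketch. The toy below is just bigram counting, nowhere near a real LLM, but the job description is the same: predict the next token from statistics over past text.

```python
from collections import Counter, defaultdict

# Toy "fancy auto-complete": count which word follows which in a tiny
# corpus, then predict the most frequent successor. Real LLMs learn far
# richer statistics, but the task (predict the next token) is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Most common word observed after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often above
```

There's no understanding anywhere in there, just counts; scaling the counting up does not obviously change that.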

Gubbinal Girl
Apr 11, 2022


AI goes through regular boom and bust cycles where researchers think "this time it's for real" and then progress slows again. Very smart people were convinced that the basic neural net would do it. Nope. What about computer vision? Nope. What about symbolic AI? Nope. What about chess playing robots? Nope.

AI researchers have complained since their field began that people keep moving the goalposts for what constitutes an AGI. The thing is, though, that despite the progress we never seem to actually get closer to sentience. Being able to play a perfect chess game or write something like a poem ultimately means nothing by itself, because the machine has no intent behind its actions. It's just a machine doing what you programmed it to do.

kazil
Jul 24, 2005

Derpmph trial star reporter!

loving nerds

redshirt
Aug 11, 2007

Y'all sound like there's an end goal to AI research.

Is there?

Devor
Nov 30, 2004
Lurking more.

redshirt posted:

Y'all sound like there's an end goal to AI research.

Is there?

Money

Gubbinal Girl
Apr 11, 2022


redshirt posted:

Y'all sound like there's an end goal to AI research.

Is there?

For a lot of people the goal is to create true thinking machines but that's not universal; some researchers don't even think it's possible. Especially in the last few years the real focus has been making money. For some people it's just kind of cool to see what you can make a computer do.

redshirt
Aug 11, 2007


I'd say that's the ongoing concern, not an end goal, and certainly not an academic goal.

MrQwerty
Apr 15, 2003

LOVE IS BEAUTIFUL
(づ ̄ ³ ̄)づ♥(‘∀’●)

if that picture Liam Nissan posted isn't proof of AGI existing then I don't know what is

dr_rat
Jun 4, 2001

In particular, firing as many workers as possible, who cost a lot of money, and replacing them with AI that the people with money hope won't.

It's like the holy grail of capitalism. That's why so much money is chucked into AI research.

kazil
Jul 24, 2005

Derpmph trial star reporter!

MrQwerty posted:

if that picture Liam Nissan posted isn't proof of AGI existing then I don't know what is

A Gigantic Idiot?

All You Can Eat
Aug 27, 2004

Abundance is the dullest desire.
Very broad-strokes here, but most technologies go through a phase of exponential growth before leveling off and plateauing at maturity. In innovation management this concept is called an S-curve and the pattern is prevalent through most of human history, whether we're talking about biotech or steam shovels.

I wouldn't expect AI to grow exponentially as some people think, because its capabilities are starting to approach real-world limits, e.g., as pointed out, finite training data.
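The S-curve point is just the logistic function from any textbook; a quick sketch (the numbers are made up, it's only the shape that matters):

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    # Classic S-curve: roughly exponential while far below the ceiling,
    # then flattening out as it approaches the real-world limit.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

early_gain = logistic(6) - logistic(5)    # on the steep part of the curve
late_gain = logistic(16) - logistic(15)   # on the plateau
print(early_gain > late_gain)  # True: same step size, much smaller gain
```

If you only ever sample the steep part, the curve looks exponential forever; that's the trap.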

dr_rat
Jun 4, 2001

All You Can Eat posted:

I wouldn't expect AI to grow exponentially as some people think, because its capabilities are starting to approach real-world limits, e.g., as pointed out, finite training data.

How's this a problem? Just get AI to create endless training data, as they're already doing this it's a solved problem!!!

All You Can Eat
Aug 27, 2004

Abundance is the dullest desire.
Paging doctor rat, there's an infinite number of monkeys in the waiting room wanting to have a word with you

redshirt
Aug 11, 2007

A real AI will have to imprint on its creator, like a duckling.

So one can assume it will be a nerd with social issues.

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum
It's important to differentiate intelligence and consciousness. AGI can have intelligence, in that it can make decisions and take actions to pursue a goal, while not having consciousness or human-level awareness of the world.

Furthermore it is easy for a computer to have superhuman intelligence in particular areas: the calculator app on your phone is able to do math instantly, faster than any human, for example. In most areas of work, it is possible for a computer to eventually surpass human ability in this manner. Simply put, AGI is just hundreds or thousands of these apps put together to become an entity capable of outsmarting a human.

You can call a program that is neither conscious nor has a human-level understanding of the world "not intelligent", but if it can escape its confines and cause damage as a rampant system, that's still a problem.



It is much more likely, in any case, that an algorithm is allowed to take actions that are widely detrimental to society. For example, YouTube's recommendation algorithm, which quickly echo-chambers people and has all those "pipelines".

dr_rat
Jun 4, 2001

All You Can Eat posted:

Paging doctor rat, there's an infinite number of monkeys in the waiting room wanting to have a word with you

I'm a rat doctor, not a monkey doctor! I can do nothing for them!!!

Also, I ain't going anywhere near the infinite monkey typewriting room, do you know how bad a room piled with infinite monkey poo poo smells?

Captain Hygiene
Sep 17, 2007

You mess with the crabbo...



kazil posted:

Elon notoriously drinks caffeine free diet coke

This is actually my greatest criticism against him, the man's a soft drink monster!

sugar free jazz
Mar 5, 2008

All You Can Eat posted:

Very broad-strokes here, but most technologies go through a phase of exponential growth before leveling off and plateauing at maturity. In innovation management this concept is called an S-curve and the pattern is prevalent through most of human history, whether we're talking about biotech or steam shovels.

I wouldn't expect AI to grow exponentially as some people think, because its capabilities are starting to approach real-world limits, e.g., as pointed out, finite training data.

lol innovation management

marshalljim
Mar 6, 2013

yospos
When AI talks with supreme confidence about things it has no actual knowledge of, they say it's "hallucinating." When dumbass goons do it, it's called "posting."

MrQwerty
Apr 15, 2003

LOVE IS BEAUTIFUL
(づ ̄ ³ ̄)づ♥(‘∀’●)

Im a posting innovation manager

coconono
Aug 11, 2004

KISS ME KRIS

You can get some interesting behavior if you feed the AI algorithm back on itself, i.e., if you're training an arm to identify and fold red shirts, you can use previous successful runs as training data.

This doesn't work so well in more complex text summarizers, which still don't effectively have any internal auditing beyond the end user smacking the "this is nonsense" button.
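The failure mode of the feedback loop is easy to sketch with a toy statistical version (nothing like a real text model, just the simplest case of a model trained on its own output):

```python
import random

# Toy version of "train the model on its own output": fit a Gaussian to
# the data, sample a fresh dataset from the fit, refit, repeat. Each
# generation only ever sees what the previous model produced.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]

def fit(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, var ** 0.5

spreads = []
for generation in range(20):
    mean, std = fit(data)
    spreads.append(std)
    # Next generation's "training data" is the current model's output.
    data = [random.gauss(mean, std) for _ in range(500)]

# The spread tends to drift and narrow over generations rather than
# staying faithful to the original distribution.
print(round(spreads[0], 3), round(spreads[-1], 3))
```

Without some outside signal (like the "successful run" label, or the nonsense button), the loop has nothing anchoring it to reality.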

redshirt
Aug 11, 2007

Posting Hallucination Supervisor

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum
One way to think about this is that AIs that write software, like GPT-4, may end up writing viruses. Either by accident or by design, someone might give GPT-5 or 6 a command to make a virus that then spreads beyond our ability to control, potentially dealing a catastrophic or crippling blow to global Internet infrastructure.

This system would not need to have human level consciousness in order to function, merely a superhuman understanding of code and computer infrastructure.

kazil
Jul 24, 2005

Derpmph trial star reporter!

dr_rat
Jun 4, 2001

Evilreaver posted:

One way to think about this is that AIs that write software, like GPT-4, may end up writing viruses. Either by accident or by design, someone might give GPT-5 or 6 a command to make a virus that then spreads beyond our ability to control, potentially dealing a catastrophic or crippling blow to global Internet infrastructure.

This system would not need to have human level consciousness in order to function, merely a superhuman understanding of code and computer infrastructure.

AI is based on training data, and if there's no virus like that out there, then there's no reason AI would suddenly come up with one when someone prompts it to, and there's no reason why it would "accidentally" write one. Like, how would that even work?

Not sure how much virus code would be in its training data anyway. There might be a bit here and there from examples in computer science textbooks or something, but I doubt there would be vast amounts of it on the open net.

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum

dr_rat posted:

AI is based on training data, and if there's no virus like that out there, then there's no reason AI would suddenly come up with one when someone prompts it to, and there's no reason why it would "accidentally" write one. Like, how would that even work?

Not sure how much virus code would be in its training data anyway. There might be a bit here and there from examples in computer science textbooks or something, but I doubt there would be vast amounts of it on the open net.

Large language models, LLMs, work the way you describe, but there are other methods for teaching a computer how things work. Conditioning and reinforcement learning, as well as sandbox methods, can all generate programs with novel features and capabilities. While this hasn't been used for coding on a large scale yet, it has been done, and capabilities will progress as time goes on.
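To be concrete about what "generate novel features without training data" can mean in the simplest possible case: the sketch below is plain hill-climbing over strings, not real reinforcement learning, but it shows that mutation plus a score function reaches an output nobody handed it as an example.

```python
import random

# Toy search loop: no training data at all, just random mutation and a
# score. The "artifact" here is a string; the point is that search can
# reach outputs it was never shown.
random.seed(1)
TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate):
    # Number of positions that match the target (higher is better).
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(best) < len(TARGET):
    i = random.randrange(len(TARGET))
    mutant = best[:i] + random.choice(ALPHABET) + best[i + 1:]
    if score(mutant) >= score(best):
        best = mutant

print(best)  # "hello world": the loop only exits once every position matches
```

The hard part in practice is the score function, which for malware would amount to the vulnerability-finding problem discussed below, but the search mechanism itself doesn't need examples.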

temple
Jul 29, 2006

I have actual skeletons in my closet

smoky da weed. just like its father elon

Gubbinal Girl
Apr 11, 2022


Malware development doesn't work like that. Writing the actual exploit code is the last step and often the easiest. Identifying and weaponizing common vulnerabilities is already automated. Doing the same for novel vulnerabilities is far more difficult and not something that's easy to package as training data.

Edit: thinking on it some more, AI already is commonly used for hacking. Every major social engineering incident lately involves hackers faking their voice and video via AI.

Gubbinal Girl fucked around with this message at 16:18 on Apr 27, 2024

FUCK SNEEP
Apr 21, 2007




CS freshmen also come up with novel ways to write code, but it doesn't mean it's good or useful!!!

MrQwerty
Apr 15, 2003

LOVE IS BEAUTIFUL
(づ ̄ ³ ̄)づ♥(‘∀’●)

FUCK SNEEP posted:

CS freshmen also come up with novel ways to write code, but it doesn't mean it's good or useful!!!

They're called coding innovation managers now

Evilreaver
Feb 26, 2007

GEORGE IS GETTIN' AUGMENTED!
Dinosaur Gum

Gubbinal Girl posted:

Malware development doesn't work like that. Writing the actual exploit code is the last step and often the easiest. Identifying and weaponizing common vulnerabilities is already automated. Doing the same for novel vulnerabilities is far more difficult and not something that's easy to package as training data.

Are you saying it's impossible for the bolded part to be improved in any way, or that the 'novel' part is impossible? Perhaps with modern methods.



It's worth considering that every tool ever made has superhuman abilities: that's why we make them. The hammer allows the user to exert great force on a focused area, for example.
Any tool used wrong enough is dangerous: you can throw a hammer at the wall and make a hole.
The more powerful the tool, the greater the potential damage of using it wrong: a car with a brick on the accelerator will put a hole in a building. See also: Chernobyl. This is why safeguards and careful design are important.

And just because LLMs and other similar tools of today don't have these capabilities, that does not preclude those tools from existing in the future.

Evilreaver fucked around with this message at 16:23 on Apr 27, 2024


Dewgy
Nov 10, 2005

~🚚special delivery~📦

Evilreaver posted:

Are you saying it's impossible for the bolded part to be improved in any way, or that the 'novel' part is impossible? Perhaps with modern methods.



It's worth considering that every tool ever made has superhuman abilities: that's why we make them. The hammer allows the user to exert great force on a focused area, for example.
Any tool used wrong enough is dangerous: you can throw a hammer at the wall and make a hole.
The more powerful the tool, the greater the potential damage of using it wrong: a car with a brick on the accelerator will put a hole in a building. See also: Chernobyl. This is why safeguards and careful design are important.

And just because LLMs and other similar tools of today don't have these capabilities, that does not preclude those tools from existing in the future.

ok buddy time to put the bong down
