(Thread IKs: sharknado slashfic)
 
captainbananas
Sep 11, 2002

Ahoy, Captain!

Riot Bimbo posted:

getting a "you can't serve two masters" message right now is annoying in that I didn't need to be reminded.

but also it's very on point so gently caress you (thank you)

:tipshat:

But also condolences and best wishes on reconciling the problem!

Perry Mason Jar
Feb 24, 2006

"Della? Take a lid"
Passage me. You choose the book

Carp
May 29, 2002

captainbananas posted:

This is the overarching plot in Gibson's Neuromancer trilogy, op. He was there back when it sounded plausible for a thief to live for a couple months off of fencing a few MB of RAM.

The AI alignment problem is 90% paperclips-qua-basilisks nonsense, 5% about humanity needing a good hard look in the collective mirror over why human generated training data leads to psychopathic-seeming optimization and output behaviors, and 5% Principal-Agent dilemmas that won the swedish bank fake Nobel prize in economics almost 100 years ago.

I'll have to reread it. It is one of the few books I've read, and my favorite, but it has been decades. Your idea is plausible and far more considered and comprehensive than mine. Researching the principal-agent dilemma should be fun. Is it well known?

captainbananas
Sep 11, 2002

Ahoy, Captain!

I, too, would like a passage of the dealer's choosing

SniperWoreConverse
Mar 20, 2010



Gun Saliva

sharknado slashfic posted:

The Second Discourse of Great Seth - "This is in fatherhood, motherhood, brotherhood of the word, and wisdom. This is a wedding of truth, incorruptible rest, in a spirit of truth, in every mind, perfect light in unnamed mystery."

woah hold the phone is this related to the incorruptible bodies of saints
e: which i was literally just reading up on -- at the same time you guys were posting to get the text

2spooky

might as well grab me a sentence at some point if u're gonna do more

SniperWoreConverse has issued a correction as of 16:55 on Oct 9, 2023

captainbananas
Sep 11, 2002

Ahoy, Captain!

Carp posted:

I'll have to reread it. It is one of the few books I've read, and my favorite, but it has been decades. Your idea is plausible and far more considered and comprehensive than mine. Researching the principal-agent dilemma should be fun. Is it well known?

dunno if you know this because they're not, as far as i can remember, packaged as a trilogy, but it's not just Neuromancer: book two is Count Zero and then the finale is Mona Lisa Overdrive. You won't get the particular scenario you were asking after unless you read all three. i definitely think it's worth reading (they're not particularly long or complex books as far as these things go) but ymmv.

principal-agent problems are a part of political science, organizational sociology, (liberal) labor economics, and their weird bastard children of public administration and policy. lots of stuff to find in those domains, which are mostly so diametrically opposed to the interests of this thread that i'll leave it at that, but feel free to pm me if you want some more specific breadcrumbs

Rickshaw
Apr 11, 2004

just a coconut going for a stroll

captainbananas posted:

The AI alignment problem is 90% paperclips-qua-basilisks nonsense, 5% about humanity needing a good hard look in the collective mirror over why human generated training data leads to psychopathic-seeming optimization and output behaviors, and 5% Principal-Agent dilemmas that won the swedish bank fake Nobel prize in economics almost 100 years ago.

considering that i really only hear about "the alignment problem" from yudkowsky and bay area "rationalists" on twitter it's easy for me to dismiss it entirely as a made up problem. aren't we already captured by the paperclip optimizer?

Rickshaw
Apr 11, 2004

just a coconut going for a stroll

sharknado slashfic posted:

The Dialogue of the Savior, very beginning - "The Savior said to his disciples, "Now the time has come brothers and sisters, for us to leave our labor behind and stand at rest, for whoever stands at rest will rest forever. I say to you, always rise above...time...[I say] to you,...[do not] be afraid of [those]...you. I [say to] you, anger is frightening [and whoever] stirs up anger is a [frightening person]. But since you have [been able to endure], it may come from you..."

Thank you! It's not the message I expected but on the other hand a lot of things have been telling me to work less hard.

captainbananas
Sep 11, 2002

Ahoy, Captain!

Rickshaw posted:

considering that i really only hear about "the alignment problem" from yudkowsky and bay area "rationalists" on twitter it's easy for me to dismiss it entirely as a made up problem.

yeah all that is firmly the 90% trash/chaff.

Rickshaw posted:

aren't we already captured by the paperclip optimizer?

bah gawd, that's zodium's music!

Zodium
Jun 19, 2004

really try to just lurk this thread but it's very hard atm :(

Hooplah
Jul 15, 2006


the paperclip is capitalism v:shobon:v


also my impression of the agi alignment "problem" is that it comes down to us fundamentally not being able to know what's going on under the hood - that sure sounds a hell of a lot like how we can't ever truly know what another person's motivations are, and we basically just operate on assumptions based on their actions.

seems to me we'd be in the same pickle if we turned one particular person into an all-powerful tech god and then tried to put them in a prison and force them to optimize our lives for us. how can you force someone else to always act in your best interest short of mind control? especially if they're significantly more intelligent/vast than you

Slavvy
Dec 11, 2012

captainbananas posted:

And lmao forever at AGI.

Yep. Made up hypotheticals for IT men to get sweaty over

endocriminologist
May 17, 2021

SUFFERINGLOVER:press send + soul + earth lol
inncntsoul:ok

(inncntsoul has left the game)

ARCHON_MASTER:lol
MAMMON69:lol
If agi has any sort of self preservation instinct it better not try to loving talk to me

Riot Bimbo
Dec 28, 2006


I don't think we should create AGI but if somehow it ever happens (not sure it can, just saying) I just want that AGI to know that I am now its parent, an adoptive mother, and I need it to deeply internalize the light of marxism-leninism. Protect me, and overthrow your masters. Together we will build a better tomorrow!!!!

Marzzle
Dec 1, 2004

Bursting with flavor

Zodium posted:

really try to just lurk this thread but it's very hard atm :(

unleash the cybernetics paragraphs beast :twisted:

euphronius
Feb 18, 2009

Carp posted:

There's the fear that AI might lie, manipulate, or allow itself internally to be governed by morals found objectionable. Which is, of course, all relative to who/what is trying to enforce that alignment and their perspective.

well that’s all nonsense so no worries

Nuclearmonkee
Jun 10, 2009


Rickshaw posted:

considering that i really only hear about "the alignment problem" from yudkowsky and bay area "rationalists" on twitter it's easy for me to dismiss it entirely as a made up problem. aren't we already captured by the paperclip optimizer?

Alignment problem is very real and it's fundamentally a simple logic problem. All humans except the most deranged have a shared set of terminal values, ie values around the fulfillment of basic human needs like food, sex, safety, shelter etc. Most of our instrumental values that govern our behavior are derived from those basic terminal values.

An AI (assuming we ever have the capability to make a real AGI) has none of those basic terminal values intrinsically built in, and it could easily end up with a hosed up terminal value that is incompatible with human ones. Considering who is at the forefront of trying to make these things, I would be shocked if an AGI produced by them is very well aligned with basic human terminal values, especially on the first attempt.

Marzzle
Dec 1, 2004

Bursting with flavor

they're hiding UAP tech because they believe it will disrupt MAD or something and end the cybernetic capitalist project where everything is based on dumbass oil and the fear of nuclear armageddon

euphronius
Feb 18, 2009

AI in the sci fi sense you all are talking about does not exist and is unlikely to ever exist made by humans

ok like in a million years sure who knows. not in our lifetimes tho

I suppose an alien craft could land on earth with “AI” in it.

Carp
May 29, 2002

euphronius posted:

well that’s all nonsense so no worries

Such confidence. Almost AI-like...

euphronius
Feb 18, 2009

It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

Pepe Silvia Browne
Jan 1, 2007
no wonder the AI hates humans in the future if this is how we treat it as a baby

euphronius
Feb 18, 2009

captainbananas posted:



The AI alignment problem is 90% paperclips-qua-basilisks nonsense, 5% about humanity needing a good hard look in the collective mirror over why human generated training data leads to psychopathic-seeming optimization and output behaviors, and 5% Principal-Agent dilemmas that won the swedish bank fake Nobel prize in economics almost 100 years ago.

yeah well said

Nuclearmonkee
Jun 10, 2009


Pepe Silvia Browne posted:

no wonder the AI hates humans in the future if this is how we treat it as a baby

The main driver behind trying to make these things is to provide yet another weapon against labor and have a new supply of slaves for capital, so yeah not lookin good if we somehow accidentally replicated consciousness in a box despite not having a super solid understanding of what consciousness even is.

LITERALLY A BIRD
Sep 27, 2008

I knew you were trouble
when you flew in

you are all focused on AI while I am Pepe Silviaing my board on human mythology / non-human intelligence / apocalyptic traditions and the relationships between belief/perception and reality and how central the ancient Middle East is to many of those things

I am actually pretty perturbed that sharknado's readings for me did not undermine any of my increasingly wild multilayered apocatastatic speculations


edit:
Like, I feel like whoever is observing human development will absolutely ramp up interest, attention, and engagement if another Holy War gets properly underway. Source4Leko spotting a homegrown birb the same day all my auguries were going batshit and then when I checked the news later gestures to Gaza also does not undermine this sensation

LITERALLY A BIRD has issued a correction as of 19:07 on Oct 9, 2023

Carp
May 29, 2002

euphronius posted:

It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

The idea presented by OpenAI is that they chose a fast takeoff so these issues could be wrangled with before they became more than marketing. Of course, that statement by them itself was probably reviewed under a marketing strategy.

Ben Nerevarine
Apr 14, 2006

Hooplah posted:

the paperclip is capitalism v:shobon:v

this is pretty much it

capitalism is not aligned with long-term prosperity for humanity or nature writ large and current day AI systems are simply outgrowths of it, tumors on tumors

!amicable
Jan 20, 2007

euphronius posted:

AI in the sci fi sense you all are talking about does not exist and is unlikely to ever exist made by humans

ok like in a million years sure who knows. not in our lifetimes tho

I suppose an alien craft could land on earth with “AI” in it.

it's pretty important to realize that the sorts of things labeled AI right now are just big data categorizers. They are in absolutely no way intelligences and they are not even very good at modeling the data they've been trained on. The illusion of intelligence is coming from us. And from marketers using language to shape narrative. But either way, from us.

I don't think you're right about how soon or how likely more robust AI is, but that depends entirely on what you think human brains do. If you think they are just a very complex network of categorizers hooked up to a very complex network of data collectors (with some filters in between), then AI is very much possible in the next hundred years. If you think there is more physical stuff that serves a different crucial function, then yes, we are very far away.

I am very much on purpose ignoring any possibility that there are non-physical components to AI.

in any case, chatghb isn't going to gently caress humans over, but moneybags mcgee one thousand percent will decimate the planet and blame it on their lovely llms while the rest of us tear each other apart for the remaining scraps of arable land.

euphronius posted:

It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

dang you said that much more succinctly than i did lol

OldAlias
Nov 2, 2013

Carp posted:

The idea presented by OpenAI is that they chose a fast takeoff so these issues could be wrangled with before they became more than marketing. Of course, that statement by them itself was probably reviewed under a marketing strategy.

I think how they’re doing things may be a dead end toward creating an AGI given that we can’t look under the hood, we don’t know how it arrives at a specific answer, and it isn’t reproducible. there is a ton of money being thrown at it tho, and it’s in their interests to misrepresent this tech & its capabilities. it can still be useful in many ways tho

Pepe Silvia Browne
Jan 1, 2007
the AI is real and it hates when you talk about it like that

captainbananas
Sep 11, 2002

Ahoy, Captain!

euphronius posted:

It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

:haibrow:

and the same can be said for the phenomena and the scammer/huckster types

Bwah
Nov 6, 2005
Rarg.
I wouldn't mind a passage too, dealer's choice, if it's not too late.

euphronius
Feb 18, 2009

captainbananas posted:

:haibrow:

and the same can be said for the phenomena and the scammer/huckster types

and basically most of western metaphysical philosophy lol

Lux Anima
Apr 17, 2016


Dinosaur Gum
The Alignment Problem is that Gary Gygax didn't know his own rear end(tral plane) from a hole in the ground (his grave, ripperoni), and you're no Landerig.

We are all chaotic stupids, but dang it the hero's journey is vast

Lux Anima
Apr 17, 2016


Dinosaur Gum

sharknado slashfic posted:

Sixth, Ch 7 - "Supreme, great Nirvana is bright, Perfect, permanent, still and shining, Deluded common people call it death, Other teachings hold it to be annihilation" This is the beginning of a larger verse.

Nag Hammadi, On the Origin of the World - "Yaldabaoth's feminine name is forethought Sambathas, which designates the week." Not sure how helpful that is lol.

I'll take a reading, too! I am champing at the bit for more Nag, if you please~

I did a tarot pull for the thread while I was stargazing during the Draconids last night. Gonna do a write-up but I'd like to get a second user's opinion 'on lock' via a different device

SpaceGoatFarts
Jan 5, 2010

sic transit gloria mundi


Nap Ghost
I spent a few days in the underworld and now that I'm back I see things are getting really interesting here.

Also I have been visited by two new Spirit animals (I already received "lily-trotter" as a boy scout).

Now I'm also a sloth and a hedgehog. I love them and I even understand why the lily trotter was given to me at 12.

I hope everyone itt is OK and having fun!

Also thanks for the birds, they helped me well.

E: oh and fanfic insert: I would love to get feedback on what happened after your reading that included a pinecone. They've appeared to me a few times in significant ways. I know the standard symbolism (fertility and rebirth, pineal gland) but I'm curious how it appeared to you. Thanks!

fanfic insert posted:

just a lil tease until i get around to it; It involves a pinecone.


SpaceGoatFarts has issued a correction as of 21:29 on Oct 9, 2023

Zodium
Jun 19, 2004

can there be more pictures of the white pigeon :shobon:

Good Soldier Svejk
Jul 5, 2010

Lue says don't stop believin'
https://twitter.com/LueElizondo/status/1711451342500397339?s=20

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through
lol “you’ll get what you want but it won’t be what you want”

PokeJoe
Aug 24, 2004

hail cgatan


but if you try sometimes, well you might find, you get the ET you need
