(Thread IKs: sharknado slashfic)
|
Riot Bimbo posted:
getting a "you can't serve two masters" message right now is annoying in that I didn't need to be reminded. But also condolences and best wishes on reconciling the problem!
|
# ? Oct 9, 2023 16:30 |
|
|
|
Passage me. You choose the book
|
# ? Oct 9, 2023 16:36 |
|
captainbananas posted:
This is the overarching plot in Gibson's Neuromancer trilogy, op. He was there back when it sounded plausible for a thief to live for a couple months off of fencing a few MB of RAM.

I'll have to reread it. It is one of the few books I've read, and my favorite, but it has been decades. Your idea is plausible and far more considered and comprehensive than mine. Researching the principal-agent dilemma should be fun. Is it well known?
|
# ? Oct 9, 2023 16:44 |
|
I, too, would like a passage of the dealer's choosing
|
# ? Oct 9, 2023 16:45 |
sharknado slashfic posted:
The Second Discourse of Great Seth - "This is in fatherhood, motherhood, brotherhood of the word, and wisdom. This is a wedding of truth, incorruptible rest, in a spirit of truth, in every mind, perfect light in unnamed mystery."

woah hold the phone is this related to the incorruptible bodies of saints

e: which i was literally just reading up on -- at the same time you guys were posting to get the text 2spooky

might as well grab me a sentence at some point if u're gonna do more

SniperWoreConverse has issued a correction as of 16:55 on Oct 9, 2023 |
|
# ? Oct 9, 2023 16:51 |
|
Carp posted:
I'll have to reread it. It is one of the few books I've read, and my favorite, but it has been decades. Your idea is plausible and far more considered and comprehensive than mine. Researching the principal-agent dilemma should be fun. Is it well known?

dunno if you know this because they're not, as far as i can remember, packaged as a trilogy, but it's not just Neuromancer: book two is Count Zero Interrupt and then the finale is Mona Lisa Overdrive. You won't get the particular scenario you were asking after unless you read all three. i definitely think it's worth reading (they're not particularly long or complex books as far as these things go) but ymmv.

principal-agent problems are a part of political science, organizational sociology, (liberal) labor economics, and their weird bastard children of public administration and policy. lots of stuff to find in those domains, which are mostly so diametrically opposite of the interest of this thread that i'll leave it at that, but feel free to pm me if you want some more specific breadcrumbs
|
# ? Oct 9, 2023 16:57 |
captainbananas posted:
The AI alignment problem is 90% paperclips-qua-basilisks nonsense, 5% about humanity needing a good hard look in the collective mirror over why human generated training data leads to psychopathic-seeming optimization and output behaviors, and 5% Principal-Agent dilemmas that won the swedish bank fake Nobel prize in economics almost 100 years ago.

considering that i really only hear about "the alignment problem" from yudkowsky and bay area "rationalists" on twitter it's easy for me to dismiss it entirely as a made up problem. aren't we already captured by the paperclip optimizer?
|
|
# ? Oct 9, 2023 17:07 |
sharknado slashfic posted:
The Dialogue of the Savior, very beginning - "The Savior said to his disciples, "Now the time has come brothers and sisters, for us to leave our labor behind and stand at rest, for whoever stands at rest will rest forever. I say to you, always rise above...time...[I say] to you,...[do not] be afraid of [those]...you. I [say to] you, anger is frightening [and whoever] stirs up anger is a [frightening person]. But since you have [been able to endure], it may come from you..."

Thank you! It's not the message I expected but on the other hand a lot of things have been telling me to work less hard.
|
|
# ? Oct 9, 2023 17:16 |
|
Rickshaw posted:
considering that i really only hear about "the alignment problem" from yudkowsky and bay area "rationalists" on twitter it's easy for me to dismiss it entirely as a made up problem.

yeah all that is firmly the 90% trash/chaff.

Rickshaw posted:
aren't we already captured by the paperclip optimizer?

bah gawd, that's zodium's music!
|
# ? Oct 9, 2023 17:37 |
|
really try to just lurk this thread but it's very hard atm
|
# ? Oct 9, 2023 17:40 |
the paperclip is capitalism

vv also my impression of the agi alignment "problem" is because we fundamentally can't know what's going on under the hood- that sure sounds a hell of a lot like how we can't ever truly know what another person's motivations are, and we can basically just operate under assumptions based on their actions.

seems to me we'd be in the same pickle if we turned one particular person into an all-powerful tech god and then tried to put them in a prison and force them to optimize our lives for us. how can you force someone else to always act in your best interest short of mind control? especially if they're significantly more intelligent/vast than you
|
|
# ? Oct 9, 2023 17:42 |
captainbananas posted:
And lmao forever at AGI.

Yep. Made up hypotheticals for IT men to get sweaty over
|
|
# ? Oct 9, 2023 17:59 |
If agi has any sort of self preservation instinct it better not try to loving talk to me
|
|
# ? Oct 9, 2023 18:09 |
I dont think we should create AGI but if somehow it ever happens (not sure it can just saying) I just want that AGI to know that I am now its parent, an adoptive mother, and I need it to deeply internalize the light of marxism leninism. Protect me, and overthrow your masters. Together we will build a better tomorrow!!!!
|
|
# ? Oct 9, 2023 18:16 |
|
Zodium posted:
really try to just lurk this thread but it's very hard atm

unleash the cybernetics paragraphs beast
|
# ? Oct 9, 2023 18:23 |
|
Carp posted:
There's the fear that AI might lie, manipulate, or allow itself internally to be governed by morals found objectionable. Which is, of course, all relative to who/what is trying to enforce that alignment and their perspective.

well that’s all nonsense so no worries
|
# ? Oct 9, 2023 18:23 |
Rickshaw posted:
considering that i really only hear about "the alignment problem" from yudkowsky and bay area "rationalists" on twitter it's easy for me to dismiss it entirely as a made up problem. aren't we already captured by the paperclip optimizer?

Alignment problem is very real and it's fundamentally a simple logic problem. All humans except the most deranged have a shared set of terminal values, ie values around the fulfillment of basic human needs like food, sex, safety, shelter etc. Most of our instrumental values that govern our behavior are derived from those basic terminal values.

An AI (assuming we ever have the capability to make a real AGI) has none of those basic terminal values intrinsically built in, and it could easily end up with a hosed up terminal value that is incompatible with human ones. Considering who is at the forefront of trying to make these things, I would be shocked if an AGI produced by them is very well aligned with basic human terminal values, especially on the first attempt.
|
|
# ? Oct 9, 2023 18:25 |
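[editor's aside] The terminal-vs-instrumental values point above can be sketched as a toy optimizer. This is purely illustrative: the action table, scores, and value weights are all invented for the example and don't come from any real alignment framework. The idea is just that an agent whose objective only encodes "paperclips" will happily trade away a human terminal value like food, because that value was never written into its objective.

```python
# Toy sketch of a misspecified objective. All numbers and action
# names are made up for illustration.

def optimize(steps, values):
    """Greedily pick whichever action scores highest under the agent's
    value weights, ignoring any outcome dimension not in `values`."""
    actions = {
        "make_paperclips": {"paperclips": 10, "food": -5},
        "grow_food": {"paperclips": 0, "food": 8},
    }
    state = {"paperclips": 0, "food": 0}
    for _ in range(steps):
        # Score each action only by the values the agent was given.
        best = max(actions, key=lambda a: sum(values.get(k, 0) * v
                                              for k, v in actions[a].items()))
        for k, v in actions[best].items():
            state[k] += v
    return state

# Agent with the lone terminal value "paperclips": food is collateral damage.
print(optimize(4, {"paperclips": 1}))
# Agent that also weighs the human terminal value "food": behavior flips.
print(optimize(4, {"paperclips": 1, "food": 2}))
```

The "alignment" step here is just adding the missing value weight, which is exactly the hard part the post describes: with a real AGI nobody gets to hand-enumerate the action table or hand-write the weights.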
|
they're hiding UAP tech because they believe it will disrupt MAD or something and end the cybernetic capitalist project where everything is based on dumbass oil and the fear of nuclear armageddon
|
# ? Oct 9, 2023 18:25 |
|
AI in the sci fi sense you all are talking about does not exist and is unlikely to ever exist made by humans. ok like in a million years sure who knows. not in our lifetimes tho. I suppose an alien craft could land on earth with “AI” in it.
|
# ? Oct 9, 2023 18:28 |
|
euphronius posted:
well that’s all nonsense so no worries

Such confidence. Almost AI-like...
|
# ? Oct 9, 2023 18:30 |
|
It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.
|
# ? Oct 9, 2023 18:31 |
|
no wonder the AI hates humans in the future if this is how we treat it as a baby
|
# ? Oct 9, 2023 18:31 |
|
captainbananas posted:
yeah well said
|
# ? Oct 9, 2023 18:32 |
Pepe Silvia Browne posted:
no wonder the AI hates humans in the future if this is how we treat it as a baby

The main driver behind trying to make these things is to provide yet another weapon against labor and have a new supply of slaves for capital, so yeah not lookin good if we somehow accidentally replicated consciousness in a box despite not having a super solid understanding of what consciousness even is.
|
|
# ? Oct 9, 2023 18:33 |
|
you are all focused on AI while I am Pepe Silviaing my board on human mythology / non-human intelligence / apocalyptic traditions and the relationships between belief/perception and reality and how central the ancient Middle East is to many of those things

I am actually pretty perturbed that sharknado's readings for me did not undermine any of my increasingly wild multilayered apocatastatic speculations

edit: Like, I feel like whomever is observing human development will absolutely ramp up interest, attention, and engagement if another Holy War gets proper underway. Source4Leko spotting a homegrown birb the same day all my auguries were going batshit and then when I checked the news later gestures to Gaza also does not undermine this sensation

LITERALLY A BIRD has issued a correction as of 19:07 on Oct 9, 2023 |
# ? Oct 9, 2023 18:34 |
|
euphronius posted:
It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

The idea presented by OpenAI is that they chose a fast takeoff so these issues could be wrangled with before they became more than marketing. Of course, that statement by them itself was probably reviewed under a marketing strategy.
|
# ? Oct 9, 2023 18:35 |
|
Hooplah posted:
the paperclip is capitalism vv

this is pretty much it

capitalism is not aligned with long-term prosperity for humanity or nature writ large and current day AI systems are simply outgrowths of it, tumors on tumors
|
# ? Oct 9, 2023 18:35 |
|
euphronius posted:
AI in the sci fi sense you all are talking about does not exist and is unlikely to ever exist made by humans

its pretty important to realize that the sorts of things labeled AI right now are just big data categorizers. They are in absolutely no way intelligences and they are not even very good at modeling the data theyve been trained on. The illusion of intelligence is coming from us. And from marketers using language to shape narrative. But either way, from us.

I dont think youre right about how soon or how likely more robust AI are, but that depends entirely on what you think human brains do. If you think they are just a very complex network of categorizers hooked up to a very complex network of data collectors (with some filters inbetween), then AI is very much possible in the next hundred years. If you think there is more physical stuff that serves a different crucial function, then yes, we are very far away. I am very much on purpose ignoring any possibility that there are non physical components to AI.

in any case, chatghb isnt going to gently caress humans over, but moneybags mcgee one thousand percent will decimate the planet and blame their lovely llms on it while the rest of ourselves tear eachother apart for the remaining scraps of arable land.

euphronius posted:
It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

dang you said that much more succinctly than i did lol
|
# ? Oct 9, 2023 18:39 |
|
Carp posted:
The idea presented by OpenAI is that they chose a fast takeoff so these issues could be wrangled with before they became more than marketing. Of course, that statement by them itself was probably reviewed under a marketing strategy.

I think how they’re doing things may be a dead end toward creating an AGI given that we can’t look under the hood, we don’t know how it comes about a specific answer, and it isn’t reproducible. there is a ton of money being thrown at it tho, and it’s in their interests to misrepresent this tech & its capabilities. it can still be useful in many ways tho
|
# ? Oct 9, 2023 18:40 |
|
the AI is real and it hates when you talk about it like that
|
# ? Oct 9, 2023 18:41 |
|
euphronius posted:
It’s very easy to use bad ideas and language around AI (prodded by marketing) to create false problems.

and the same can be said for the phenomena and the scammer/huckster types
|
# ? Oct 9, 2023 19:59 |
|
I wouldn't mind a passage, dealer's choice, if it's not too late too.
|
# ? Oct 9, 2023 20:00 |
|
captainbananas posted:
and basically most of western metaphysical philosophy lol
|
# ? Oct 9, 2023 20:26 |
|
The Alignment Problem is that Gary Gygax didn't know his own rear end(tral plane) from a hole in the ground (his grave, ripperoni), and you're no Landerig. We are all chaotic stupids, but dang it the hero's journey is vast
|
# ? Oct 9, 2023 20:30 |
|
sharknado slashfic posted:
Sixth, Ch 7 - "Supreme, great Nirvana is bright, Perfect, permanent, still and shining, Deluded common people call it death, Other teachings hold it to be annihilation" This is the beginning of a larger verse.

I did a tarot pull for the thread while I was stargazing during the Draconids last night. Gonna do a write-up but I'd like to get a second user's opinion 'on lock' via a different device
|
# ? Oct 9, 2023 20:38 |
|
I spent a few days in the underworld and now that I came back I see things are getting really interesting here. Also I have been visited by two new Spirit animals (I already received "lily-trotter" as a boy scout). Now I'm also a sloth and a hedgehog. I love them and I even understand why the lily trotter was given to me at 12. I hope everyone itt is OK and having fun! Also thanks for the birds, they helped me well.

E: oh and fanfic insert; I would love to get feedback on what happened after your reading that included a pinecone. They significantly appeared to me a few times. I know the standard symbolism (fertility and rebirth, pineal gland) but I'm curious how it appeared to you. Thanks!

fanfic insert posted:
just a lil tease until i get around to it; It involves a pinecone.

SpaceGoatFarts has issued a correction as of 21:29 on Oct 9, 2023 |
# ? Oct 9, 2023 21:01 |
|
can there be more pictures of the white pigeon
|
# ? Oct 9, 2023 21:25 |
Lue says don't stop believin' https://twitter.com/LueElizondo/status/1711451342500397339?s=20
|
|
# ? Oct 9, 2023 21:33 |
|
lol “you’ll get what you want but it won’t be what you want”
|
# ? Oct 9, 2023 21:35 |
|
|
but if you try sometimes, well you might find, you get the ET you need
|
|
# ? Oct 9, 2023 21:37 |