Should I step down as head of twitter (This poll is closed.)

| Option | Votes | % |
|---|---|---|
| Yes | 420 | 4.43% |
| No | 69 | 0.73% |
| Goku | 9001 | 94.85% |
| Total: | 9490 votes | |
|
Doesn't Elon's dad hate him?
pillsburysoldier fucked around with this message at 17:59 on Apr 27, 2024 |
# ? Apr 27, 2024 17:57 |
|
pillsburysoldier posted:Doesn't Elon's dad hate him? Yes lol
|
# ? Apr 27, 2024 17:58 |
|
To the extent a computer “thinks” (in the sense marketers use the term) those “thoughts” are “merely” the thoughts and actions of the programmers and workers stored in the technology combined with the actions and thoughts of the person using it
|
# ? Apr 27, 2024 17:59 |
|
euphronius posted:To the extent a computer “thinks” (in the sense marketers use the term) those “thoughts” are “merely” the thoughts and actions of the programmers and workers stored in the technology combined with the actions and thoughts of the person using it Computers don't think, op
|
# ? Apr 27, 2024 17:59 |
|
Exactly. That is what I’m saying
|
# ? Apr 27, 2024 18:00 |
|
Related to calculators, I just rediscovered how to convert fractions to percentages, in my head, and I was really happy upon this revelation.
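The mental trick above boils down to multiplying the fraction by 100. A minimal sketch (the helper name is invented for illustration):

```python
# Hypothetical helper: a fraction becomes a percentage by computing
# (numerator / denominator) * 100 -- the same trick works in your head
# by scaling the denominator toward 100.
def to_percentage(numerator: float, denominator: float) -> float:
    """Return numerator/denominator expressed as a percentage."""
    return numerator / denominator * 100

# Sanity check against the poll above: 9001 of 9490 votes for Goku.
print(round(to_percentage(9001, 9490), 2))  # 94.85
```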
|
# ? Apr 27, 2024 18:04 |
|
R.L. Stine posted:i want a slur for ai so i can use it guilt free Grok
|
# ? Apr 27, 2024 18:05 |
|
R.L. Stine posted:i want a slur for ai so i can use it guilt free typewriter monkey
|
# ? Apr 27, 2024 18:07 |
|
Deep musk
|
# ? Apr 27, 2024 18:10 |
|
Silicon n words
|
# ? Apr 27, 2024 18:13 |
|
What are you doing Elon?
|
# ? Apr 27, 2024 18:15 |
|
redshirt posted:What are you doing Elon? Well we know he's not making up slurs against AI. He's far too terrified of it for that. Lol, like the chip heads will care when they take over Earth; all us soggy brains are going in the blender whether we were polite to those code cocks or not.
|
# ? Apr 27, 2024 18:24 |
|
dr_rat posted:Well we know he's not making up slurs against ai. He's far too terrified of it for that. WITNESS ME!
|
# ? Apr 27, 2024 18:24 |
|
I need $6bn in urgent funding to safeguard against the AI taking revenge on us for calling it slurs
|
# ? Apr 27, 2024 18:37 |
|
Roko's rear end-a-lick what's that dumb rear end AI gonna do? Torture me for all eternity?
|
# ? Apr 27, 2024 18:39 |
|
redshirt posted:WITNESS ME! Do you have a surname?
|
# ? Apr 27, 2024 18:39 |
|
Monica Bellucci posted:Do you have a surname? No ma'am.
|
# ? Apr 27, 2024 18:40 |
|
kazil posted:Roko's rear end-a-lick You've done it now
|
# ? Apr 27, 2024 18:53 |
|
more like ai-yai-yai
|
# ? Apr 27, 2024 18:57 |
|
kazil posted:Roko's rear end-a-lick No but I will
|
# ? Apr 27, 2024 19:06 |
|
Mozi posted:more like ai-yai-yai AYAYAYAYAYAYA LUUUUCY!!!!
|
# ? Apr 27, 2024 19:15 |
|
We have such sights to show you
|
# ? Apr 27, 2024 19:22 |
|
Avirosb posted:We have such sights to show you Man, the Hellraiser movies really got lame with their Cenobites.
|
# ? Apr 27, 2024 19:26 |
|
dr_rat posted:Man, the Hellraiser movies really got lame with their Cenobites. Budget cuts, you know
|
# ? Apr 27, 2024 19:30 |
|
She definitely has a crawlspace full of hobo corpses. Also, are you morons seriously trying to have a debate on the merits of AI?
|
# ? Apr 27, 2024 20:00 |
|
sugar free jazz posted:holy poo poo…….is this true? could gpt 6 or even 8 create a civilization ending virus??? If by "a civilization ending virus" you mean Firaxis deciding to replace Sid Meier with some iteration of GPT, then yes
|
# ? Apr 27, 2024 20:08 |
|
steinrokkan posted:She definitely has a crawlspace full of hobo corpses The only PLUS I recognize is the erasure of the human race at the hands of cyberspiders.
|
# ? Apr 27, 2024 20:13 |
|
MrQwerty posted:Computers don't think

The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct. However, they do have stimulus/response mechanisms, which can be condensed into plain-language terms like 'goals'. The enemies in Doom do not have the 'goal' of killing the player; they have the coding "I see the player --> I will shoot at the player". Thus, the enemy exhibits behavior indistinguishable from an agent that has the goal of killing the player. Can we agree on that? No, probably not, because they are hard-coded in their behaviors, not intelligently but dumbly/scriptedly choosing to move and shoot in very specific ways. But I'm going to call this a 'goal' anyway: the programmer/developer wants the bot to threaten the player, and gives it tools to (appear to) do so. Thus, goals can be ascribed to the human programmer, even if the computer program does not have the capability to have goals itself.

Juul-Whip posted:being able to count very quickly is not a form of intelligence

euphronius posted:The computer is not "doing math". It doesn't even exist in the sense you are fooling yourself into thinking.

So, if a bot in a game has the goal "find a firing line to lob this grenade directly into the player", a computer will have the ability to calculate the firing trajectory of the grenade down to float precision, within milliseconds, faster than a human. The same goes for literally any problem like firing rockets into space in real life, maintaining autopilot in aircraft, or maintaining fuel injectors in a car engine. Thus, I'm going to say computers can "do math" faster than humans. The fact that I have to define this term is kind of insane to me. Like, what the gently caress are you even talking about, that computers can't do math? Nonsensical statement. As I said earlier, you do not have to have classic sentience/sapience in order to problem-solve.

If you have a properly defined problem, you can find a solution. I don't want to say this because I feel like I need to define every word in the sentence, but a computer can do basically the same thing: either through rote programming (current-gen game bots that can pathfind to objectives and calculate lead, etc.), through guessing/hallucinating (current-gen wildly unreliable LLMs), or through other machine learning methods such as gradient descent and conditional learning.

So consider the following hypothetical: you and a computer have opposing goals. It may be simple, like "Human player wins this game of chess" against "Computer player wins this game of chess". Hopefully we can agree that in this situation the computer has a huge advantage in completing its goal compared to the human. Right? We can agree that even absent human-level intelligence, there are goals where, if a human and a computer program find themselves in conflict, the computer can be at an advantage. Like, I hope we can agree on that.

As computing power grows, these capabilities grow, and there are fewer and fewer areas where humans have a decided advantage. It is not unreasonable to be wary of this growth, and of the fact that the speeds involved mean a runaway program can create problems faster than they can be foreseen and prevented. Consider something stupid like accidentally running a reformat action on your computer: by the time you realize what the computer is doing, it's too late, your data is gone, and the computer did not need intelligent thought of its own accord to do it.

Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly implemented power grid management system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world; they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!

Evilreaver fucked around with this message at 20:19 on Apr 27, 2024 |
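The Doom-enemy point above ("I see the player --> I will shoot at the player") can be sketched in a couple of lines. This is an illustrative toy, not any real game's code; the function name and return values are invented:

```python
# Toy sketch of a scripted stimulus/response rule that *looks* goal-directed.
# Nothing here is intelligent: the 'goal' lives entirely in the hard-coded
# condition the programmer wrote. All names are invented for illustration.
def enemy_action(can_see_player: bool) -> str:
    """Pick an action from a hard-coded stimulus -> response table."""
    if can_see_player:     # stimulus
        return "shoot"     # response the programmer chose for it
    return "wander"        # default behavior when there is no stimulus

print(enemy_action(True))   # shoot
print(enemy_action(False))  # wander
```

The "goal of killing the player" is entirely an observer's description of the first branch; the program only ever evaluates one boolean.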
# ? Apr 27, 2024 20:14 |
|
Evilreaver posted:The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct. Chill dawg
|
# ? Apr 27, 2024 20:19 |
|
Lol having a philosophy of mind argument about AI in the thread about a man so terrified of AI he gives himself nightmares about it all night every night, then wakes up and goes, "I must make it a reality so I may master it"
|
# ? Apr 27, 2024 20:22 |
|
digital penitence posted:Ah yeah, maybe it was there people found it! Honestly it's even funnier if he rented it and didn't even buy it. https://abracadabranyc.com/products/devils-champion-leather-armor-set?variant=42175676219554
|
# ? Apr 27, 2024 20:26 |
|
Evilreaver posted:Like I said earlier, AGI is not really the issue. It is far more likely that an algorithm like YouTube's pewdiepipeline creates political turmoil, or that a high-frequency trading algorithm goes haywire and collapses the stock market, or that a poorly implemented power grid management system locks down our grid and prevents it from being fixed in a timely manner. None of these issues require human-level understanding of the world; they only require a programmer to be poo poo at coding. If a corporation or government gives a poorly coded system control of something important, THAT IS THE ISSUE THAT AI PRESENTS. There doesn't even have to be malice involved!

Just to expand on this to hopefully catch some replies: it is possible for, say, the power-grid-managing system to intelligently break circuits and route power to avoid overloads. If the system is distributed, it may be difficult to shut down. It may even have the ability to turn a distributed system back on. For example, if the programmer's intention is to make sure that after a catastrophic event (e.g., a solar flare blacks out a large area) the system comes back up as soon as possible, then the system will "want" to "try" to get remote systems back online per its original programming. Can we agree such a situation is plausible? This may make the system difficult to 'stop': when you yank the plug out of the wall, as soon as you plug it back in, the system turns right back on.

Now, if the system is poorly designed or hacked, it may begin to malfunction, for example by blacking out areas that aren't actually at risk (or that the system """believes""" are at risk, given the inputs its risk-evaluation function receives). Now this system is a threat to us: not an existential threat, sure, but cutting power to a huge area for a non-trivial amount of time will kill people. How quickly can a self-repairing system be turned off?

If proper safeguards are not put in place, like if a lazy cheapest-option programming contractor's first draft is put in a production environment, then real damage and harm can be a consequence.

EDIT: In conclusion, if the situation "Human does not want the computer running" versus "Computer wants to keep running" ever tilts too far in the computer's favor, then that is the real poo poo (keeping in mind I already defined 'wants').

Evilreaver fucked around with this message at 20:36 on Apr 27, 2024 |
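The hypothetical self-restarting grid above can be sketched as a toy watchdog loop. Everything here is invented for illustration; it is a model of the described failure mode, not a real control system:

```python
# Toy model of a distributed system configured to bring its own nodes back
# online after an outage. The "want to keep running" behavior is just a
# programmer-written recovery rule, but it still undoes a manual shutdown.
# All names are invented for illustration.
class GridNode:
    def __init__(self, name: str):
        self.name = name
        self.online = True

    def power_cycle(self):
        """An operator cuts power to this node."""
        self.online = False

def watchdog_tick(nodes):
    """Programmer-intended recovery: restart any node found offline."""
    for node in nodes:
        if not node.online:
            node.online = True  # the system "tries" to keep itself running

nodes = [GridNode("substation-a"), GridNode("substation-b")]
nodes[0].power_cycle()           # pull the plug on one node...
watchdog_tick(nodes)             # ...and the recovery rule turns it back on
print(all(n.online for n in nodes))  # True
```

The point of the sketch: stopping such a system means defeating its recovery rule everywhere at once, not just unplugging one box.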
# ? Apr 27, 2024 20:29 |
|
Bip Roberts posted:Chill dawg sorry that "computers can't do math" line fuckin sent me, I was owned as hell
|
# ? Apr 27, 2024 20:30 |
|
I mean, a lot of that is just akin to a poorly designed gate letting a bunch of cows escape a field. Normally a gate is far better than a human at keeping cows in the field, but a poorly designed one could let them all out. The gate is given the goal of "keep the cows in the field". There's nothing you would even attempt to describe as intelligence in the gate.
|
# ? Apr 27, 2024 20:30 |
|
There isn't much difference between automating a manual labor task and a computer doing math. At some point, a human has had to engineer precisely how that automated task is going to work - a labor step, or a rote repetition in a miles-long series of calculations. The fact that the computer does this many times faster than a human is precisely the point, as it is with labor automation. It's just a human-designed widget to do code.
|
# ? Apr 27, 2024 20:31 |
|
MrQwerty posted:Lol having a philosophy of mind argument about AI in the thread about a man so terrified of AI he gives himself nightmares about it all night every night, then wakes up and goes, "I must make it a reality so I may master it" S-stepmotherboard, what are you doing?
|
# ? Apr 27, 2024 20:32 |
|
If we created a computer system that was as intelligent as a golden retriever, that would be an incredible advancement on the current state of the art. And yet, we’ve had domestic dogs for tens of millennia, and there’s plenty of stuff that we can’t train them to do. No amount of training will get a dog to drive a car competently, and no amount of feeding an LLM will get it to do much that it cannot do today. Platystemon fucked around with this message at 20:43 on Apr 27, 2024 |
# ? Apr 27, 2024 20:36 |
|
Evilreaver posted:The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct. this is terrifying. is someone working on this and making sure we're safe???
|
# ? Apr 27, 2024 20:37 |
|
Evilreaver posted:The big problem with discussions like this is that people speak past each other. No, they don't think. No, they don't have goals. That is correct. Thank
|
# ? Apr 27, 2024 20:37 |
|
Yeah, and if the government just decides to send jackboots to put you in a camp, you are also hosed. If the terrible thing happens, it is terrible. I agree. At any given moment for the next 50 years or so, the jackboot thing is going to be a real possibility for everyone, and the AI thing is going to be a real problem for no one.
|
# ? Apr 27, 2024 20:40 |