|
I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist.
|
# ? Apr 19, 2023 01:55 |
|
|
SaTaMaS posted:Yes exactly Alright, that makes sense. It would be reasonable to take an intentional stance toward something biological or mechanical that has a consciousness. It seems like you think that sensory input is required for consciousness, which could very well be true, and I wouldn't be surprised if we found that out. That makes me wonder what level of sensory input something needs to gain consciousness, and what you think it is. It seems like the major input that's needed is "touch." I'm just thinking about people who don't have sight or hearing, and they are clearly conscious. I don't know if hooking up ChatGPT to a pressure sensor and thermometer would give it sufficient information, but I don't have perfect sensory information either and am very conscious. I think the necessity of using the intentional stance would depend on whether you think it requires complex input like humans receive, or a lesser amount. Or it would not be necessary to use the intentional stance at all if you don't think anything derived from current AI technology could ever become conscious, even with sensory inputs. gurragadon fucked around with this message at 02:29 on Apr 19, 2023 |
# ? Apr 19, 2023 02:27 |
|
Clarste posted:I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist. That's a thing, and yeah, that's completely true. But jumping to that kind of conclusion ignores the context of the post. It obviously wasn't meant as design by an intentional designer, just processes that work toward some sometimes arbitrary thing (like, I get it, fitness to environment for evolution). GlyphGryph posted:I genuinely don't think the problem is the words people are using at this point, I think it's the people who insist on interpreting them in the most insane possible way that are the problem. I said pretty much that earlier. Part of it is just that this is SA, but I really think it's something more specific to AI. Maybe it's that people immediately have strong opinions on it, but a lot of the AI questions are actually not things you can be super confident about yet; there are just things that haven't been established or aren't all that knowable now. But that doesn't sit right with people.
|
# ? Apr 19, 2023 02:40 |
|
Clarste posted:I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist. I was using the same words back to make that exact point - "designed" is doing a lot of work here and an AI would work exactly the same if it just happened to spontaneously come into existence through the random arrangement of atoms instead of being purposefully built by a human (however infinitesimally unlikely that would be). It doesn't matter that the people who build it have agency. That doesn't impart some special property of "made by a conscious being" that matters for how it operates. Instead, they took the opposite meaning somehow - that I'm saying we must have been made by a conscious being to be conscious. Which is not at all what I said nor meant. Edit: Also, there's an important point I should have made explicit. I used evolution as the comparison because we didn't actually make these AIs. Not in the same way we made Dijkstra's algorithm. Instead, we made a system that made the AIs. The system works by predetermined rules, but the outcomes of the system are extremely complex and have lots of surprising emergent properties. This is the crux of my point. Yes, a calculator doesn't "want" to do addition any more than a river "wants" to meander. But we know both how and why a calculator works. That it works is not surprising. We know how an AI works - we can describe it with math. But we don't know why it works - why does that math produce results we weren't expecting? (And I can point to lots and lots of quotes from AI researchers backing this up). 
We don't know how OR why our brain works, so I don't know why anyone would make a definitive statement in either the positive or negative about human consciousness, agency or desires beyond subjective experiential things. Double Edit: No I'm still not saying any of these AIs are conscious. Put down the baseball bat. But like... neither are fruit flies and they have real honest to god brains in there. KillHour fucked around with this message at 05:11 on Apr 19, 2023 |
# ? Apr 19, 2023 04:52 |
|
SaTaMaS posted:Having a goal requires consciousness and intentionality In QuakeC, for the videogame Quake, enemies start "idle" until they find an enemy. Then they set the field "enemy" to that enemy. There's a function called "movetogoal" that uses a small heuristic to move the monster towards its goal. https://quakewiki.org/wiki/movetogoal Lucid Dream posted:The LLMs don't have goals, but they do predict pretty darn well what a human would say if you asked them to come up with goals about different things. The videogame Civilization says that the advance of civilization is through technology and wars. It also says that if you want to have tanks, first you have to invent monotheistic religion. But none of these things are intended by Sid Meier, the designer; they are built into the design anyway. LLMs might have goals built into the design, even if they are not intended by the creators. At the very least, to produce an interesting output, or any output. Tei fucked around with this message at 07:29 on Apr 19, 2023 |
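The idle → acquire-enemy → move-toward-goal loop described above can be sketched in a few lines. To be clear, this is a toy illustration, not the actual engine code (the wiki link has the real thing) — `Monster`, `step_toward`, and the 2D coordinates are all invented here:

```python
def step_toward(pos, goal, speed=1.0):
    # Tiny stand-in for movetogoal's heuristic: take one step
    # of length `speed` that reduces distance to the goal.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return goal
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

class Monster:
    def __init__(self, pos):
        self.pos = pos
        self.enemy = None   # mirrors the QuakeC "enemy" field

    def think(self, visible_targets):
        if self.enemy is None:          # "idle" until a target is found
            if visible_targets:
                self.enemy = visible_targets[0]
        else:                           # then head toward the goal
            self.pos = step_toward(self.pos, self.enemy)

m = Monster((0.0, 0.0))
m.think([(3.0, 4.0)])  # acquires the enemy
m.think([])            # takes one step toward it
print(m.pos)           # (0.6, 0.8)
```

Nobody would call that a desire, but the monster observably pursues a goal, which is the point.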
# ? Apr 19, 2023 07:25 |
|
Yeah, isn’t this the whole point of AI, really? It’s given a goal defined by a measure of some sort, and the AI works out which of its possible outputs best achieve it. I wouldn’t say it’s desire as such, but saying an AI has goals and is programmed to maximise them is perfectly reasonable.
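That framing — a measure plus a search over possible outputs — can be made concrete in a couple of lines. A deliberately tiny sketch (the `best_action` helper and the toy "get close to 10" measure are both made up for illustration; real systems optimize over vastly larger action spaces with gradient methods rather than enumeration):

```python
def best_action(state, actions, transition, score):
    # Enumerate the possible actions and pick the one whose
    # resulting state maximizes the goal measure.
    return max(actions, key=lambda a: score(transition(state, a)))

# Toy goal: get a number as close to 10 as possible.
target = 10
score = lambda s: -abs(s - target)   # the "measure"
transition = lambda s, a: s + a      # what an action does to the state
print(best_action(7, [-1, 0, 1, 2], transition, score))  # 2
```

Whether maximizing a measure counts as "having a goal" is exactly the semantic dispute in the posts above, but the mechanism itself is this simple.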
|
# ? Apr 19, 2023 09:58 |
|
Tei posted:LLM might have goals builtin in the design, even if are not intended by the creators. The very least, to produce a interesting output or any output. Ok sure, but if I describe a situation I can still ask an LLM what a person’s goals might be given the context and it will respond with something plausible. I’m not saying anything about the subjective experience of the LLM or how interesting the output is, but rather that the system has the capability to predict what a human might say if given a situation and asked to define goals to solve the problem. The semantics don’t matter as much as the actual capability.
|
# ? Apr 19, 2023 13:36 |
|
Quinch posted:Yeah isn’t this the whole point of AI really? It’s given a goal defined by a measure of some sort and the AI works out the correct actions of its possible outputs to achieve this. I wouldn’t say it’s desire as such but saying an AI has goals and it’s programmed to maximise them is perfectly reasonable. Sure in casual conversation it doesn't really matter, and even in AI systems things like beliefs, desires, and intentions are employed as metaphors. However in any somewhat serious discussion about AI it's important to distinguish between things that are determined by their design and training data, and where something resembling personal motivations and intentions start to determine its goals, assuming such a thing is even possible for an AI.
|
# ? Apr 19, 2023 16:54 |
|
I am enjoying Drake's new AI songs almost as much as the record labels desperately trying to scrub it from the internet.
|
# ? Apr 20, 2023 17:03 |
|
Pleasant Friend posted:I am enjoying Drake's new AI songs almost as much as the record labels desperately trying to scrub it from the internet. That poo poo is fuckin hilarious and I'm here for it. In case someone hasn't seen it: https://www.youtube.com/watch?v=Po2BHFHtKgQ https://www.theverge.com/2023/4/19/23689879/ai-drake-song-google-youtube-fair-use posted:After the song went viral on TikTok, a full version was released on music streaming services like Apple Music and Spotify, and on YouTube. This prompted Drake and The Weeknd’s label Universal Music Group to issue a sternly-worded statement about the dangers of AI, which specifically says that using generative AI infringes its copyrights. Here’s that statement, from UMG senior vice president of communications James Murtagh-Hopkins: As far as I know, this is the first instance of a particular AI-related* copycat being targeted as a specific final product, and not just an argument about training the model on copyrighted stuff in general. *The song wasn't written by AI. It was written and produced and sung by a human and the vocals were deepfaked.
|
# ? Apr 20, 2023 17:52 |
|
Already dead
|
# ? Apr 21, 2023 17:39 |
|
Monglo posted:Already dead The hilarious thing is that because the DMCA claim is over the short sample at the beginning, Universal can't use Content ID to match the song automatically and has to issue manual takedowns, so it's just a game of cat and mouse where the song gets reuploaded basically immediately. https://www.youtube.com/watch?v=utzJJjaSs64 I'm sure this one will be dead in a day or two.
|
# ? Apr 21, 2023 17:45 |
|
Looks like Google is in a dilemma about this one. The Verge posted:If Google agrees with Universal that AI-generated music is an impermissible derivative work based on the unauthorized copying of training data, and that YouTube should pull down songs that labels flag for sounding like their artists, it undercuts its own fair use argument for Bard and every other generative AI product it makes — it undercuts the future of the company itself. I also wonder who actually wrote and recorded this song. Verge article about it. It would be pretty funny if the creator was just Drake or UMG. I guess a real-world application of AI-generated content that possibly violates copyright needed to be released in a widespread manner, though, so this conversation about copyright could happen.
|
# ? Apr 22, 2023 16:31 |
|
gurragadon posted:Looks like google is in a dilemma about this one. I am sure this will be resolved in the way most beneficial to corporations and worst for authors and creators.
|
# ? Apr 22, 2023 18:03 |
|
Okay, ChatGPT is a drat good rubber duck. I spent way too long on this issue so I asked ChatGPT:

quote:I have the following code:

quote:The error you are getting is because the input signature of the vertex shader expects an input element with semantic name "COLOR" and index 0, but your input layout does not have a matching element.

quote:Here is my vertex shader struct. There is no COLOR semantic

quote:In that case, it's likely that the issue is related to the vertex shader bytecode being passed to the `CreateInputLayout` function. Make sure that `kVertexShaderCode` points to the correct shader bytecode and its size is properly calculated.

And I thought "No, there's no way I'm passing in the wrong vertex shader. That would be crazy." But then I thought "Well, I'll just check." I forgot to comment out some old code that was reassigning my variable. Edit: Technically, this isn't super impressive - ChatGPT has read through a bajillion StackOverflow responses telling people to check the basic poo poo. But even for an experienced developer, having the basics reframed into the context of what you're doing is a huge help. Importantly, HLSL is somewhat new for me, so I was overly focused on thinking the issue must be a lack of knowledge, not a typo somewhere else. KillHour fucked around with this message at 22:14 on Apr 22, 2023 |
# ? Apr 22, 2023 21:52 |
|
I needed to write some functions for coordinate transformations using quaternions. It seemed like a good opportunity to try ChatGPT, because quaternion math is fairly straightforward, just tedious to get all the indexing and signs right. It spit out just what I asked for, and even gave some sample inputs and outputs to check. The functions worked perfectly. The sample outputs it gave as a check were wrong.
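For anyone wanting to sanity-check that kind of output by hand: the whole transform is just two Hamilton products. A minimal sketch in plain Python — this is an illustration using the (w, x, y, z) convention and assuming a unit quaternion, not the code ChatGPT actually produced:

```python
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def rotate(v, q):
    # Rotate 3-vector v by unit quaternion q: v' = q * (0, v) * conj(q).
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return (x, y, z)

# 90-degree rotation about the z axis: half-angle goes in the quaternion.
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(rotate((1.0, 0.0, 0.0), q))  # ≈ (0, 1, 0)
```

The indexing and signs in `quat_mul` are exactly the tedious part the post is talking about — easy to state, easy to transpose one term and get a silently wrong rotation.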
|
# ? Apr 22, 2023 22:58 |
|
Inferior Third Season posted:quaternion math is fairly straightforward Mods!? Edit: I mean other mods who haven't had their soul devoured by math.
|
# ? Apr 22, 2023 23:06 |
|
KillHour posted:Mods!?
|
# ? Apr 23, 2023 16:24 |
|
People get cranky when the dimensions get higher than three
|
# ? Apr 23, 2023 18:50 |
|
KillHour posted:Mods!? So I just looked it up when I had a need for these things a few days ago, and the equations were right there. The equations themselves are very straightforward, if you just take them as given. I just had ChatGPT write the code for me instead of doing it myself. The closest I came to doing math here was, when I considered actually conceptualizing what the equations meant, I remembered that "a solution exists", and that I didn't actually need to go any further down that path.
|
# ? Apr 23, 2023 20:41 |
|
I really should move my shader over to quaternions from matrix rotations...
|
# ? Apr 23, 2023 22:55 |
|
https://www.nytimes.com/2023/05/01/...ce=articleShare Hinton has left Google and is now speaking out about the dangers of AI.
|
# ? May 1, 2023 16:27 |
|
Bar Ran Dun posted:https://www.nytimes.com/2023/05/01/...ce=articleShare His argument kind of boils down to "Don't make AI evil," which is not really a useful assessment on a practical level. His argument is essentially: AI will be able to do many great things that help society, but bad people exist and could use AI for bad purposes. Therefore, we should not pursue advanced AI. The same thing could be said of computers and the internet, but we didn't shut those down even though they enabled trillions of dollars in fraud and scams over the course of their existence. His other concern is the "Terminator scenario," which seems like a crazy reason to shut down research decades in advance of the situation. It also doesn't really prevent actors like China, or the U.S. government in secret, from pursuing this kind of research if you ban it from the public. quote:Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality. The concerns that "if AI is available to everyone, then bad people can use AI" and "it is possible for the worst outcome to happen" are both obviously true, but not useful in determining what to do about it.
|
# ? May 1, 2023 16:39 |
|
Leon Trotsky 2012 posted:The same thing could be said of computers and the internet, but we didn't shut those down because they enabled trillions of dollars in fraud and scams over the course of their existence. The same could also be said of human cloning and genetic engineering, and we did largely shut that down.
|
# ? May 1, 2023 16:45 |
|
It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of a truthful internet, free of conspiracy theories, money scams, hate speech..? On a different topic, the company I work for produces audio books, among other things. We used to work with voice talent, but now we've started with synthetic voices. We've been experimenting with them occasionally but they were too stiff and tiring until recently. Now we're working with a small company whose voices are indistinguishable from human. And when the big players catch up I expect it will become uncommon to hire a human to narrate longer texts. I don't like being on the side of AI, facilitating this process. And I'm sad that I have to stop contacting some voice actors with whom I've developed a relationship over the years. I still keep getting their updates by email (narrated this book, attended that voice over conference, worked with a mentor... etc) and I wonder how long this industry will survive. BTW you can see that this is AI because the voices have some stochastic behavior. When they encounter an abbreviation or something difficult to pronounce, they might get it right in one sentence and wrong in the very next. And sometimes they produce a short noise out of the blue. But that will become less frequent as models improve.
|
# ? May 1, 2023 17:11 |
|
GlyphGryph posted:The same could also be said of human cloning and genetic engineering, and we did largely shut that down. Didn't read the article because my NY sub is suspended at the moment but at first blush these feel only comparable in the limit. I don't think we today are anywhere close enough to a true AGI for these topics to not be apples and oranges. One is creating human-like intelligence/consciousness artificially (AI) and the other is artificially growing intelligent/conscious human beings (cloning); the stress there is very relevant because of how young these technologies are IMO.
|
# ? May 1, 2023 19:43 |
|
Doctor Malaver posted:For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..? Volume and speed of reply. Right now you have to have a person pretend to be a bunch of people and sling social media bullshit.
|
# ? May 1, 2023 19:54 |
|
The wonderful thing about AI is that it lowers the barrier to entry for everything. The existentially terrifying thing about AI is that it lowers the barrier to entry for everything.
|
# ? May 1, 2023 20:07 |
|
Doctor Malaver posted:It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..? No one is arguing AI is inventing this stuff. Just that it lowers the bar and allows much more to flood the internet than ever before. Clarkesworld shutting down submissions is an early look at the new issues these systems are going to bring.
|
# ? May 2, 2023 10:44 |
|
The difference between 90% of everything being chaff and 99.99% of everything being chaff is pretty significant. If there is so much AI generated nonsense that it becomes nearly impossible to find anything that isn't, then the internet simply becomes unusable.
|
# ? May 2, 2023 11:34 |
|
Leon Trotsky 2012 posted:His argument kind of seems like a situation where he is saying: "Don't make AI evil" and is not really a useful assessment on a practical level. He doesn't seem to be saying anything about shutting down research, at least not in this particular article. It's hard to say, because there doesn't seem to be a transcript of his actual words anywhere and all the articles are mostly just paraphrasing him, but I don't see anything saying he's calling for a research halt. Rather, what he seems to be concerned about is companies buying into the AI hype, abandoning all safeguards and ethical considerations, and widely deploying it in increasingly irresponsible and uncontrolled ways in a race to impress the investors. quote:Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said. Moreover, he seems particularly concerned about the risk of companies letting AI tools operate on their own without humans checking to make sure their output doesn't have unexpected or undesirable side effects. quote:Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
|
# ? May 2, 2023 19:33 |
|
Main Paineframe posted:He doesn't seem to be saying anything about shutting down research, at least not in this particular article. It's hard to say, because there doesn't seem to be a transcript of his actual words anywhere and all the articles are mostly just paraphrasing him, but I don't see anything saying he's calling for a research halt. He says he didn't sign on to one of the letters calling for a moratorium on AI research because he was still working at Google, but he agreed with it. quote:Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
|
# ? May 2, 2023 19:35 |
|
Leon Trotsky 2012 posted:He says he didn't sign on to one of the letters calling for a moratorium on AI research because he was still working at Google, but he agreed with it. It says that he didn't sign those letters, and it also says that he didn't want to publicly criticize Google until he quit his job. It doesn't actually state that those two points are related, nor does it actually state that he agreed with the letter. This goes back to my point about how it's difficult to work out the details of his stance because the reporters are paraphrasing him rather than reporting his words directly.
|
# ? May 2, 2023 19:39 |
|
For those who have the time to listen to (very) lengthy audio, this guy’s YouTube channel has a lot of interviews with leading people in AI research and industry. Some of his recent guests were the Boston Dynamics CEO, a computational biology professor from MIT, and the CEO of OpenAI, which makes the GPT software (video below). I’m not qualified to provide any commentary on these, but they seem intended for the general public and I’m finding them super interesting and educational. The OpenAI CEO interview includes topics like political bias, AI safety, and AGI. https://www.youtube.com/watch?v=L_Guz73e6fw
|
# ? May 2, 2023 22:02 |
|
I watched until the interviewer started talking about how he's such good friends with Jordan Peterson. He comes off like a dorky Joe Rogan and I want to give him a wedgie and stuff him in a locker.
KillHour fucked around with this message at 01:35 on May 3, 2023 |
# ? May 3, 2023 01:33 |
|
the other hand posted:For those who have the time to listen to (very) lengthy audio, this guy’s YouTube channel has a lot of interviews with leading people in AI research and industry. Some of his recent guests were the Boston Dynamics CEO, a computational biology professor from MIT, and the CEO of OpenAI, which makes the GPT software (video below). I wasn't previously familiar with Lex Fridman, so I Googled him and it took about ten seconds to find that he's an ex-researcher who'd been demoted from research scientist to unpaid intern after some "controversial" studies. So I searched to find out what kinds of controversy he'd been involved in, and it took another ten seconds to find an article titled Peace, love, and Hitler: How Lex Fridman's podcast became a safe space for the anti-woke tech elite. Hell of a title! A little more Googling brings up plenty of results suggesting that people in the AI and machine learning industries largely regard him as a grifter who doesn't understand half as much as he claims to. That Business Insider article is by far the best source I've found, so let's pull it out from behind that paywall: quote:Peace, love, and Hitler: How Lex Fridman's podcast became a safe space for the anti-woke tech elite I don't care who guest-stars on this show, I'm not going to listen to Mr. Empathize With Hitler talk about anything with anyone for two hours. It seems like common sense to vet a source before you waste hours listening to them.
|
# ? May 3, 2023 02:37 |
|
Fridman also claims he's an MIT lecturer. Which is one of those 'technically true' statements. It's very common for people to give 'lectures' as open forum discussions that people can walk into. But he plays it off like he teaches there. A lot of his claims to expertise are like this: small nuggets of truth wrapped in bullshit. He's a grifter and an awful podcast host; his voice is incredibly boring and he asks very dumb questions. I have no idea how he manages to get so many high-quality guests or so many listens. He's the Joe Rogan of tech. Mega Comrade fucked around with this message at 10:06 on May 3, 2023 |
# ? May 3, 2023 09:59 |
|
I feel like it's a tad optimistic to think that AI can replace entertainment writers, but that certainly is the talking point among labor haters right now. Has anyone made any content with AI that is actually any good?
|
# ? May 3, 2023 18:55 |
|
Regarding the fears of AI superintelligence and world domination, I'll be a lot more concerned if Paradox can ever develop an "AI" that can beat an average human player without cheating. Those games are pretty complicated, but not nearly as complicated as the real world, so...
|
# ? May 3, 2023 18:59 |
|
|
Delthalaz posted:Regarding the fears of AI superintelligence and world domination, I'll be a lot more concerned if Paradox can ever develop an "AI" that can beat an average human player without cheating. Those games are pretty complicated, but not nearly as complicated as the real world, so... I know you're joking but they do that on purpose because predictable computer players are more fun and easier to balance. It would be very annoying for new players if every time you changed your strategy, the AI adapted to cut you off. Expert players might appreciate that, but it's just not worth the effort. https://www.deepmind.com/blog/alphastar-grandmaster-level-in-starcraft-ii-using-multi-agent-reinforcement-learning
|
# ? May 3, 2023 19:12 |