|
AI has become the hottest word in technology right now. The explosion of ChatGPT onto the scene has led to discussions in nearly every field about how AI will be applied. Academics are worried about rampant cheating, media specialists are worried about disinformation flooding the media ecosystem, people are worried their jobs will become obsolete, creative artists are finding it even harder to have success, and medical experts are wondering how AI can enhance the patient experience. AI also touches on really important philosophical concepts, such as the nature of consciousness and how to define it, and those philosophical questions are worth talking about as well. https://chat.openai.com/chat

I find myself going to ChatGPT on topics I have questions about, along with Google and the new Bing search that generates responses. The Bing search is especially nice because it provides references. However, the easy chatty style of ChatGPT does make it more comfortable to use, and it simulates speaking with a person on the subject well enough that I find myself thanking it occasionally. The problem is it's willing to be completely wrong about an issue because it has no real concept of what it's talking about, so care must currently be taken to ensure the information ChatGPT gives is correct.

I asked Bing Chat to provide me with a list of roofers in Bethesda, Maryland, and it gave me this, which is actually an attempt at the information I wanted. I didn't verify the data, but it's provided in a concise list with all the relevant information I would need to continue my search for a new roofer.

Bing Chat posted:Here are some good roofers in Bethesda, Maryland that I found:

Whereas Google gave me half a page of ads until I got to the links that Bing is using as a source, and I still had to go to the source myself. 
The information is basically the same either way, but Bing Chat summarized it in a much more usable form and was less work than googling. Google just opened sign-ups for their new chat service Bard on 3/21/2023; you can join the waitlist. I'm interested in how the Google system responds to prompts differently than ChatGPT and Bing Chat. https://bard.google.com/

Cinci Zoo Sniper posted a couple of links in the previous thread about how machine learning, and ChatGPT specifically, work. If you have the time, they are really interesting and worth a read, but be careful with the Wolfram article if you are a non-expert. He uses some terms and has some lofty ideas that can seem to push toward anthropomorphizing ChatGPT if you don't read carefully or read with too much enthusiasm.

cinci zoo sniper posted:Learning doesn't normally use evolutionary algorithms, no. 3.4 of https://dl.acm.org/doi/fullHtml/10.1145/3467477 is a decent starting point.

For those who are more into the visual arts, there are AI programs for that too. DALL-E 2 can make some very impressive visuals with just a text prompt. https://openai.com/product/dall-e-2 Here is an explanation of how it works for those who are interested. https://arxiv.org/pdf/2204.06125.pdf

Another visual art AI program, Midjourney, even won an art competition, and I find the image it produced visually very impressive. I really want to see this image done with the system hooked up to a 3D printer that can simulate oil painting so texture can be shown. https://www.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy/index.html https://www.midjourney.com/ ‘Théâtre D’opéra Spatial’ by Jason Allen. (Although I would argue that the artist was the Midjourney program, with prompting from Jason Allen.)

Let me know if anybody has any good links, posts, videos, or AI programs they want me to add to the OP. 
In the tradition of the previous AI thread, I used ChatGPT to generate another parable about SomethingAwful so that we can receive wisdom from something without thought. It actually gave me a parable this time, too.

Prompt: Write a Zen parable about a dying web comedy forum full of people who refuse to grow up.

Chat GPT posted:Once upon a time, there was a web comedy forum where people gathered to share jokes, memes, and silly videos. The forum had been thriving for years, but as time went on, the members grew older and their interests started to shift. Some moved on to more serious pursuits, while others became disenchanted with the constant stream of juvenile humor.

The parable of the leaf it gave provides an analogy for AI technology itself, even though that wasn't my original intent. Do we become stagnant and refuse to accept new technology, or do we grow and thrive with it? So, the question is, once again, where do we go from here?

gurragadon fucked around with this message at 18:51 on Mar 22, 2023 |
# ¿ Mar 22, 2023 18:47 |
|
|
# ¿ May 10, 2024 05:26 |
|
SCheeseman posted:I don't think many want to chat about it, tensions are way too high and it's fracturing communities. Anyone who has their jobs and livelihoods threatened by it and/or see it as an affront to humanity are mostly interested in ways to crush it.

Pretty understandable; the capitalist powers that be will take this technology and use it in all the ways people fear. I don't think an AI ban would be feasible, or even really useful in any way. Humans will continue to advance our technology unless some external force stops it. You are describing the issue of the advancement of society and what to do with those left behind, the losers of the advancement. Our system is currently trash at dealing with disaffected people, so the fear is well founded.

I wonder if the advancement of AI is enough to really shake the core of scarcity in our society, though. I can currently get Midjourney or DALL-E 2 to make basically any art piece I want at a moment's notice. I can get ChatGPT to generate as much text on a topic as I want almost instantly. Is it going to get advanced enough that all art and all writing loses value? I think that is the true fear of a capitalist in this society. If nothing has value, then they have no power over others.

Obviously, AI programs can't produce food and water at a moment's notice yet, so our society is currently only using AI programs to devalue non-essential products. Advancements on the writing side are coming really fast now, though. Look at the data on GPT-4: it scored in the top 10th percentile of the bar exam, when GPT-3 could barely even reach the bottom 10th percentile. https://openai.com/product/gpt-4

Edit: Midjourney not Midssomar, but that was a sweet movie...

gurragadon fucked around with this message at 15:33 on Mar 24, 2023 |
# ¿ Mar 24, 2023 15:30 |
|
i say swears online posted:I'm shocked at how well it writes jingles, which has all sorts of knock-on effects. the premise of two and a half men is in shambles

Well, AI programs writing jingles really well does kind of lay bare how little value a jingle actually provides to people. In that show, Charlie Sheen is wealthy and does no work for society because he wrote jingles. But all the value a jingle really has is the momentary laughter or joy I may get from remembering it. With AI programs writing jingles, the only people doing it will be those who just like the form of writing music in jingles, which removes the cost pressure from advancing the form.
|
# ¿ Mar 24, 2023 15:49 |
|
SCheeseman posted:I disagree that it has to be an external force that stops us when we are entirely capable of nuking ourselves to death, the alternative is to organize our society so whatever us dumbasses make isn't given the opportunity to help destroy us.

But we did invent and improve on nuclear weapons, is my point. We could continue to develop and improve AI programs while still using them for the good of society. I agree that should be a parallel goal with the development of AI, but it will have to happen separately from AI advancing, because we as a society have to decide it's important.

I called out post-scarcity in writing and art specifically because that's what AI programs mostly seem to be working toward right now. I don't think we're anywhere near Star Trek-level poo poo. But we can look at what is happening with art and writing to see that we have a problem. Solving it at the "non-essential" level (I used quotes because I know that non-essential is subjective) can give us clues as to how to solve the problem if we ever reach the point of AI solving "essential" things.
|
# ¿ Mar 24, 2023 16:07 |
|
SaTaMaS posted:Is this thread supposed to be about what's happening currently or speculations about things that probably won't ever happen

When I made it, I really didn't care; I just wanted to continue the conversations stemming from the current generation of AI programs. I don't think the thread needs to be specifically one or the other, as long as the people in the conversation are aware of what each other are talking about.
|
# ¿ Mar 24, 2023 16:16 |
|
SCheeseman posted:I would personally prefer that nuclear weapons were not improved.

I think we agree for the most part. I would prefer that no weapon was improved, but it just seems like something humans do. We want new and improved things. Maybe this is because our society forces the concept of new and improved on us, but I think it's partly because we realize having the best of something makes it easier to survive. You mentioned how humans carved bone into tools, and that wasn't for monetary value, just survival.

Post-scarcity will happen in waves, just like everything else. We are treating our workers who are becoming redundant very poorly, and I think a better way to treat them would be to simply provide living assistance so people can choose to work or not. People who enjoy making art, music, and writing need to be supported, not because they provide any monetary value to society but because they provide individuals with non-monetary joy. I used the word redundant earlier and I don't want to remove it, but it's not completely correct. If their job becomes only about intellectual and cultural value, I think it actually gains value, because it isn't influenced by the desire for money, only the desire to create what the artist wants.

I don't think AI would lead to socialism unless there is general post-scarcity, but value would lose its meaning then, so I don't even know if it's socialism if everyone owns a machine to make their own food.

Kavros posted:I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

That's one reason I really wanted to remake this thread in these forums. The ten dollar entry fee really cuts down on that kind of stuff. 
I'm interested in actual people's thoughts about this stuff, not just something lazily copied and pasted. How much of the information about AI elsewhere was written by some AI program itself? Probably a lot already.

gurragadon fucked around with this message at 17:04 on Mar 24, 2023 |
# ¿ Mar 24, 2023 16:46 |
|
It's like the creative arts industry is going through what manufacturing went through, on a really fast time scale. All those people working on an assembly line were replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

The advancements are coming really fast now, though, and they're going to hit white-collar workers everywhere. Like I posted a bit earlier, GPT-4 can ace the bar exam, and you can hook it up to other programs so it can perform accounting tasks. We're quickly making most of the population's employment not worth the money. But if it goes beyond creatives, it will start hitting workers who command a pretty strong voice in the economy. Will lawyers or doctors have a big enough voice when their turn comes?

Since it's so similar in my mind to what happened to assembly lines in the past, we know what NOT to do with the people affected by advancements in technology. A simple example is just watching Roger and Me. Did any societies treat their redundant workers better?
|
# ¿ Mar 25, 2023 13:42 |
|
Ohthehugemanatee posted:On one level, yep it's going to be a societal disaster. I think the implications are worse than folk even imagine. Right now, I can spot a bot. I don't believe I or anyone else will be able to within a year. Scams are going to be wild, journalism is hosed and online forums are going to be ruined. The companionship an AI is eventually going to be capable of offering people is going to take society in weird directions. I'm a huge loving nerd and excited than an AI can run a table top wargame against me but at the same time it would be pretty sad if I turned to that instead of playing games with other people.

That's pretty cool that it is catching things that have serious medical consequences for people. Especially with the overall shortage of doctors and them being overworked already, running patients through AI programs for additional clues seems really useful. Also, doctors have to specialize so much now that it must be difficult to catch things outside of their main expertise. Like you said with the ED (I assume that's the emergency department?), she sees certain things more frequently and is more likely to default to them because of how overworked they are.

Bing's chatbot is really good, and I like it a lot more than Google. I used the example of roofers in Bethesda, Maryland in the OP because I just got my roof replaced and I'm playing Morrowind, but I actually did use it to get a list of people to contact. One of the nice things about the chatbot is that it doesn't have all the ads that a Google search does, but I imagine that is going to change once companies begin to monetize it more.

Thanks for the effort post on AI art; it was interesting to read. You said that AI art is atrociously unusable and that a lot of artists are trying to find work touching up AI art. What is it about AI art that makes you recognize it so readily? Or is it the struggle to communicate effectively what you want to the AI program? 
You mention that people use it for "weird" stuff, but is there a difference, or an AI-ness, to a picture of a sunflower or something else generic? There will definitely be a few artists who make a living retouching AI art, but its whole effect will be fewer people being able to support themselves as artists.

A lot of the confusion you describe with artists seems to be that they have their own way of communicating, and it's not laid out very well for people who are unfamiliar with commissioning art. Like you said, I wouldn't have been prepared with reference photos of what I wanted unless I spoke with the artist beforehand and they told me they needed that. I would have expected to tell someone who claims to paint or draw what I wanted, and they could do an okay job of it within their vision. That seems like a problem with artists being unable to communicate with people outside their field, which is not rare by any means, but it's pretty essential for an artist to be able to do.

It also leans into the question of the artist not recognizing the vision of the person commissioning the art. Art is subjective, and while there are things that are considered "good" and "bad," maybe I just want the artist to draw me something that they would consider "bad." Do they want to do that? Probably not, because their name is going to be on the piece. But if they are making art for business, then they have to abandon their vision as an artist. It's why I think introducing money into anything cheapens it to some degree. AI art doesn't care about what's "good" and "bad"; it "cares" about spitting out as many images as it can.

I guess the difference between AI and scrolling through Pinterest is that you can generate images faster with an AI program, and if you change your mind halfway through, it's a lot easier to just start typing in new terms. 
AI art generated through a program is also an original composition based on other images, not just the original piece done by someone on Pinterest. So maybe I would feel a little less bad about copyright, but that doesn't seem to hold very well, and I don't really feel bad about copyright law anyway.
|
# ¿ Mar 25, 2023 18:22 |
|
GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/:

quote:A bar examination coordinated by NCBE that is uniformly administered, graded, and scored by user jurisdictions and results in a portable score. It is composed of the MEE, two MPT tasks, and the MBE. See our interactive map for jurisdictions that have adopted the UBE. For more information, view our informational pdf about the UBE.

The MEE is six essay-style questions analyzing a legal issue, and the MPT tasks are standard lawyering tasks. The AI will always have the entire internet at its disposal; that's a feature of AI, not something that would change unless deliberately removed. What would be a good metric for you?

Seeing GPT-4 actually being used in a courtroom would be convincing to me; unfortunately, the legal profession seems to be kind of reluctant to embrace technology. The Supreme Court still doesn't have TV cameras. I kind of wish they would have just let this guy try out the AI lawyer thing. https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

Edit: Uniform Bar Exam not Unified

gurragadon fucked around with this message at 19:32 on Mar 25, 2023 |
# ¿ Mar 25, 2023 19:30 |
|
litany of gulps posted:Is the primary difficulty of the exam a time-crunch issue? If so, then an AI chatbot has fairly tremendous advantages due to the nature of the exam. I don't think it should be particularly surprising that a chatbot with access to huge amounts of examples and study materials should be able to crank out a bunch of rough draft essay responses to questions that humans struggle with because they have to read prompts and compose written arguments or analysis on the fly.

I can't speak to whether people feel rushed taking the exam because I've never taken it. This presentation from the National Conference of Bar Examiners says it's given over the course of two days, with six hours of test-taking a day. To be fair to the chatbot, the law student studying for the exam has access to the same resources and can crank out rough draft essay responses too. The AI chatbot isn't training on anything a human doesn't have access to, and the prospective lawyer should be using those materials to study anyway, or they weren't prepared for the test.

I guess the takeaway is yes, for now it's one paralegal. It may be no paralegals soon, when offices realize they don't need them anymore. This stuff is really going to take out a lot of white-collar jobs, I think.

Gumball Gumption posted:I think it's important to note that it's training included example MEE essays. It's interesting but GPT-4 being trained for the exams and passing them is then not indicative of performance in a court room. It does show that it would be of value as an information store for lawyers.

Same thing as above about the MEE essays. I agree, though, that it would probably perform very poorly in a courtroom if it was just GPT-4. IANAL, but I feel like the courtroom is not just about facts but also about swaying people and opinion. Having an easy way to pull up information would be a huge advantage. 
I would be interested to see somebody represent themselves somehow with GPT-4, though, and am kind of disappointed it's being resisted. I mean, if we're confident it's bad, let somebody prove it.

Fuzz posted:I was speaking mainly from the medical perspective, as I've taken and passed all those medical exams too.

I didn't mean to diminish the value of practical training; it's really important to put knowledge to use, and that's when it's really reinforced. Earlier upthread, Ohthehugemanatee said they were able to use the Bing chatbot to help diagnose a problem that doctors were missing or couldn't figure out. Do you think it has value in helping doctors diagnose patients? Practical knowledge is really important, but you also lose some of the book knowledge the further you get from it and the more time you spend working in a field. Maybe it could just help as a refresher for doctors or nurses on medical issues? I think its real value in the medical field would be the ways it can lighten the burden on people.
|
# ¿ Mar 26, 2023 05:15 |
|
That kind of sucks that the AI art was literally just bad and wasn't giving you what you wanted, or couldn't complete an idea in a painting. The inspiration you talk about from Pinterest is something I would have figured AI art could really help with, because it can give you so many images at once. But if the images it gives you are bad, completely derivative, or nonsensical, you can't gain any inspiration from the tool.

I'm not particularly familiar with anime, but it's interesting that you can tell the difference between somebody with formal training and somebody who trained only through repetition. The person training with repetition is learning the style more like an AI does, and they can reproduce images just as effectively. But they don't have the language to understand what they are doing, and they don't have the rules of what makes a good composition formally laid out in their mind. I think it leads to that overly polished style, because even if the person faithfully reproduces a style, it's missing the core understanding of what defines it. I'm somewhat familiar with drawing and draw in pencil occasionally. When you draw a human face, you can look at the image and see that something is wrong about it, but it can be almost impossible to pin down what.

Art has found itself in kind of a weird problem, because now it's much more available, so more people want to interact with artists and commission unique pieces. But no single patron supports the artist; it's not like any of us are from the de' Medici family, so the artist has to be more flexible to a large group of people. Maybe art schools should include more in their curriculum about meeting with potential clients and the business side of art for people who want to make art their profession.

The big thing you miss by using AI art instead of an artist is the connection between the artist and the patron you described. 
When you are interacting with the AI system, you are only giving your ideas to the AI, but when you find an artist with an interest in what the patron wants, it puts two people's ideas together. The joy of creation is amplified when it's shared by two people with the same vision, and aside from any monetary gain, that is worthwhile in itself.

Liquid Communism posted:Anyone who understands the technology behind it on more than a surface level is generally right with those authors and artists because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists because no organization could afford to license works on that scale. Even at massively discounted rates for commercial licensing the image DBs behind them would be billions of dollars.

I have to agree with you for the most part that doing well on a test is no indication something will be good in practice, or vice versa. But I also can't completely dismiss a connection between the two, because many people I've worked with were also top of their class, and lack of effort in studying for a test can indicate lack of effort in a profession. The bar exam does have an analysis section and lawyering-task sections, though. I haven't taken the test, but maybe you have? How similar are those essays and tasks to the daily tasks of a practicing lawyer?

I think that GPT-4 would be bad in the courtroom, but I still want to see it happen. I want to see it do badly in a lower-stakes case. Currently, the most realistic use of AI technology is to streamline offices and do the tasks the person in charge doesn't want to do but doesn't want to pay for either. I don't see the final human being removed from a system using AI technology at any point in the near future. As you said, the technology is NOT at the level where it can be trusted. But even if it were, somebody has to be responsible. Somebody has to be there to sue. 
And I don't know how you lay responsibility on a tool like GPT-4, and AI creators are going to fight that to the end, I think.

Thanks for the post; the emergent abilities of these systems are really interesting and, to me, unexpected. Could you say what industry you work in and why you are looking into AI for it? If you can't, I understand, but I would be interested.

Main Paineframe posted:If anything, I think the term "AI" is actively detrimental to the conversation. It causes people to lump all this stuff together as if there's no difference between them, and draws their attention away from the actual specific capabilities and technical details.

I just want to address this point before the thread gets too far along. When I asked to remake an AI thread because the Chat-GPT thread was gassed, I was told to keep it vague. The reason given for gassing the Chat-GPT thread was that it was too specific to ChatGPT and the title was misleading. In the thread, I hope people will refer to the specific AI programs they are talking about, but, unfortunately or not, this thread was directed to stay vague.

gurragadon fucked around with this message at 16:43 on Mar 27, 2023 |
# ¿ Mar 27, 2023 16:13 |
|
Carp posted:I work for a company that processes advertising co-op claims as a software engineer and developer. AI has been a back-burner interest of mine for decades, but I'm far from an expert and have had little experience coding, or coding for, AI systems.

Your job seems like a perfect fit for a collaborative AI program. The AI could perform a first check, then the human checks (I'm assuming it's already redundant on the human side), and then a final AI check for any missed issues. I guess the main thing would be whether it actually ended up being more accurate, or whether the AI program is incomplete enough that it adds more errors than it removes.
|
# ¿ Mar 27, 2023 16:33 |
|
Main Paineframe posted:Yeah, I'm talking about the specific conversation, not the thread title. We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

Alright, cool. I kind of figured you probably were, but I just wanted to get it out there early in the thread. I agree it would be weird to compare GPT-4 to Midjourney to something else in that way. With GPT-4 being able to incorporate other programs, the distinction might become blurred pretty soon.
|
# ¿ Mar 27, 2023 16:59 |
|
Liquid Communism posted:Remember that the Bar is not an indication of a good lawyer. It is a minimum standard to be allowed to practice law, and in the case of the 'pass' it is again starting from the massive advantage over a human of being able to look up sources with total recall. I'd imagine the worst law student on the planet could pass the exam if it were open book.

It does have that advantage, but I don't really think it's fair to count it against the AI. The system has access to that information by design; my brain has access to the legal facts I learn by design too. I'm just a lot worse at it.

I'm not really familiar with law tests, but I can speak to open-book tests in chemistry. My physical chemistry class was open book, but if you had NO idea what to make of the book, it wasn't very helpful. The worst physical chemistry student in my class failed. I would imagine law is of similar complexity, just in a different way. If it is not, then why is lawyering so regulated? I think that even making the test open book would weed out the very worst, basically the people unwilling to even learn how to learn to be a lawyer.

That's really a problem with the bar exam, unfortunately. Is there renewed interest in changing it after the GPT-4 results? I mean, AI would be really good at storing information and pulling it up, so why do we still need to test for that in the bar? Maybe the format should shift even further toward analysis.
|
# ¿ Mar 27, 2023 23:12 |
|
Liquid Communism posted:Given part of the point of the test is 'does this person remember the principals well enough to make decisions based on them under stress', the recall ability is indeed something being tested. There's a reason that candidates cram for weeks before taking the test to try and have as much possible information in memory as they can.

I know about cramming and operating under stress. So, is the bar useful for determining whether lawyers are good or not? Because that seems like an endorsement of the bar exam, which GPT-4 scored in the 90th percentile on.

Edit: I mean, is the ability to make decisions under stress a skill that is needed for a lawyer?
|
# ¿ Mar 28, 2023 00:19 |
|
Liquid Communism posted:A GPT-4 implementation is incapable of experiencing stress, and again has open access to the materials being tested on, so by its very nature a test of its stress management and memory cannot have any meaningful results.

I was asking whether you thought the bar exam had meaningful results for a human. What you see as GPT-4 not producing meaningful results, I see as GPT-4 removing the obstacle of stress management from the process and performing in a superior way. If the bar exam were still timed but open book, would that change your opinion? The information would be available to both, and the difference would be GPT-4 being able to access it faster.
|
# ¿ Mar 28, 2023 02:13 |
|
Liquid Communism posted:It does have meaningful results for a human, although I'm sure there are better approaches, but that may be my own bias against the validity of standardized tests.

That article is not correct, according to this paper linked from the GPT-4 information on OpenAI's page. GPT-4 took the entire Uniform Bar Exam. The paper is actually pretty interesting, and it breaks out the answers to the various questions and the memos GPT-4 wrote for the MEE and MPT components.

From the Paper Abstract posted:In this paper, we experimentally evaluate the zero-shot performance of a preliminary version of GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE), including not only the multiple-choice Multistate Bar Examination (MBE), but also the open-ended Multistate Essay Exam (MEE) and Multistate Performance Test (MPT) components.

You also can't discount that a major purpose of a test is to demonstrate knowledge, which is shown by passing it. A candidate for the bar exam needs the skills to be a lawyer, but they need to know about being a lawyer too. I don't expect a human would be capable of reasoning over a dataset they haven't been trained on either. Isn't that the whole point of going to school? Sure, I could reason about law, but it would be rudimentary compared to somebody who went to law school. Just as ChatGPT can seem to reason about law, but really poorly, GPT-4 with training can seem to reason about law a whole lot better.

The fact that it's currently barred from practicing law might not be permanent. Obviously, it would be used as an aid for lawyers for quite a long time, if it's used at all. But just because it's barred from something doesn't mean it couldn't perform the task competently.
|
# ¿ Mar 28, 2023 05:45 |
|
reignonyourparade posted:chatpgt-ed again

You can't always tell, because it's getting so good. A good tell I've seen is that any time you ask it questions about emotions, knowledge, love, and "human" things like that, it will begin its response with "As an AI language model" or some derivative. It is programmed to lean on that pretty heavily, which is fair; I think OpenAI wants to keep people from trying to go crazy with it.
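For fun, that tell is simple enough to sketch in code. This is just a toy, not a real detector; the phrase list is my own guess at the boilerplate, not anything official from OpenAI, and a model can easily be prompted around it.

```python
# Naive heuristic for flagging likely chatbot text: look for the stock
# openers ChatGPT tends to lean on. Toy sketch only -- the phrase list
# is an assumption, and models can be prompted to avoid these tells.
STOCK_PHRASES = [
    "as an ai language model",
    "i don't have personal opinions",
    "i don't have feelings",
]

def looks_like_chatbot(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in STOCK_PHRASES)

print(looks_like_chatbot("As an AI language model, I cannot experience love."))  # True
print(looks_like_chatbot("lol no idea, go ask someone else"))                    # False
```

Which is exactly why this kind of check is already breaking down: once the boilerplate is trained or prompted away, there's nothing left to match on.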
|
# ¿ Mar 28, 2023 15:25 |
|
Aramis posted:Of course. Both me, and most of my colleagues do at this point. I can only speak for myself, but it's surprising to me just because of the speed at which ChatGPT is improving. Basically what XboxPants was talking about a few posts up. ChatGPT launched on November 30, 2022. I might just be late to the party, but this level of adoption is remarkably fast. GPT-3 was trash at the bar exam; GPT-4 is amazing at it. We just keep moving the goalposts to see what it can do, and it's not slowing down yet.
|
# ¿ Mar 28, 2023 19:56 |
|
Quixzlizx posted:I got an invitation to Google Bard, and I'm honestly having trouble thinking up useful things to ask it. Because you posted this I checked my email, and I got a Google Bard invite too. I like how it generates the full text and displays it all at once, as opposed to ChatGPT showing it word by word (token by token?). I also prefer Google Bard's responses to the questions I'm asking, but that is really subjective, and the stuff I'm asking isn't what an LLM would currently be useful for. Question: If I can't describe qualia, how can I say that ChatGPT/Google Bard doesn't experience qualia? ChatGPT posted:Qualia refers to subjective conscious experiences such as the experience of seeing the color red or feeling the sensation of pain. Since ChatGPT is an artificial intelligence language model and does not have a subjective conscious experience, it is reasonable to assume that it does not experience qualia. Google Bard posted:You're right, it's difficult to describe qualia, which are the subjective experiences of the mind. They are the raw feels of our senses, thoughts, and emotions. Qualia are often described as being "what it's like" to experience something. For example, the qualia of seeing red might be described as a bright, warm color that fills your field of vision. The qualia of tasting chocolate might be described as a rich, sweet flavor that lingers on your tongue. Google Bard seems more likely to craft a response that is what I'm looking for. I asked the chatbots "How can I say?" not "Can I say?"
|
# ¿ Mar 30, 2023 17:47 |
|
Insanite posted:Basically, yeah. I see this as the automation of factories happening to the white-collar workforce. It was really bad for factory workers, and it's going to be bad for knowledge workers. The problem isn't necessarily unemployment itself, though. The problem is that unemployed people aren't provided with money to keep them alive. I would have a lot less fear about losing my job if I knew that I wouldn't be treated as if it were my fault, and that society was kind enough to take care of me. Another problem is one you pointed out: people put way too much of their identity into their jobs, and we are taught to do so. Humans will need to learn to find meaning in themselves outside of what they do for a living, and for a lot of people, that is going to be very difficult. You can see the detrimental effects of clinging to your job as if it were your whole being; look at coal mining in West Virginia. If you build your identity on your job, your identity is shattered when you lose it, and keeping a job is something the employee really can't completely control. gurragadon fucked around with this message at 17:06 on Mar 31, 2023 |
# ¿ Mar 31, 2023 17:03 |
|
Seyser Koze posted:So all we need is a society completely unlike the one we live in, run by people completely unlike the ones running it, and a ton of people losing their jobs will be no issue. Great. I don't think it will be pretty, but it's not the first time society has changed, and if society can adapt it won't be the last time either. The difference I see is that AI technologies are making the transition way faster than humans are accustomed to, so whether we are going to be able to manage is still up in the air to me.
|
# ¿ Mar 31, 2023 17:29 |
|
I don't know what is going to happen to Europeans if their countries just keep suing AI companies for using their data. I would assume that OpenAI will just stop operating in Europe so it won't be under the laws of the European Union. I don't know if that is really a sustainable position for Europe, because in the United States it doesn't seem like regulation is going to keep up, so we're going to see AI advance beyond regulation either way. Not really disagreeing with Italy and the EU trying to slow this down, though. If they are able to use their privacy laws to give us a slight pause on AI development and deployment, it could help with AI safety. quote:The French and Irish privacy regulators have contacted their counterparts in Italy to learn more about its findings, Reuters reported. Sweden’s data protection authority ruled out a ban. Italy is able to move ahead with such action as OpenAI doesn’t have a single office in the EU. So, is this all just the EU saying stay out, or just making a statement? I know OpenAI would probably like to operate worldwide, but maybe they just won't operate in Europe?
|
# ¿ Apr 4, 2023 15:42 |
|
Owling Howl posted:If laws are broad and vague enough you could probably make it more trouble than it is worth. As others have mentioned it is already a useful tool so Europeans wouldn't benefit from that but depending on assessed risk that may be a fair trade-off. It isn't clear if Europeans wouldn't still face a lot of the problems though. Other countries are absolutely going to deploy it and use to attack democratic institutions, social networks and public trust like they already do now with other tools. I'm a little confused as to why OpenAI can't train ChatGPT on copyrighted works as long as they aren't just replicating the work wholesale. Anybody who is training to be a writer will train themselves on copyrighted works, and every work is derivative of some experience the writer had. No writer emerges from the ether to release fully new ideas into the world; they would need their own language to do it. Obviously, when ChatGPT reproduces a Mickey Mouse script or Midjourney copies a picture of Mickey Mouse, you have a claim. But to say that nothing could train on copyrighted materials would basically change how art and writing come to be. I mean, if I couldn't train my writing on anything copyrighted, that would leave open-source stuff and anything before 1924, I guess? Edit: I know ChatGPT doesn't have personhood, so it's not a direct 1-to-1 comparison. But I don't know how else you could train an AI to be useful without copyrighted material. It would be so far behind, like an AI that somehow only had information about a time before computers. gurragadon fucked around with this message at 18:11 on Apr 4, 2023 |
# ¿ Apr 4, 2023 18:03 |
|
Main Paineframe posted:If a product can't be made usable without breaking the law and trampling all over people's rights, then I don't see how that's a problem for anyone besides the company that made the product. I know we've all gotten very used to tech startups building business models around breaking the law and betting their lawyers can delay the consequences long enough for them to build a lobbying organization to change the laws, but let's not pretend that's a good thing. Personally, I think copyright law is written in a way that tramples over people's rights, and using information that is available in the world isn't trampling on people's rights. That's not coming from a tech-startup point of view either; excessive copyright laws just stifle creativity and innovation in my opinion, which is where my stance comes from. Everything is built on something else; you can't build something entirely independent. Your point about people paying or not paying for the information we consume is a good one. Is there a way to quantify the amount of money an average person pays out for media of all kinds through their life? I get a ton of copyrighted material for free just by virtue of existing in society. I see images that are copyrighted, read copyrighted things, and hear copyrighted music. But I DO pay for some of it, that is extremely true. Would that be a fair amount to pay for a "copy" of ChatGPT? ChatGPT is a written format specifically, so I think it would be cheaper than other forms. I guess OpenAI could get a library card, and that would open up a ton of writing. A lot of stuff that isn't available would still be left out, though, unless OpenAI made specific agreements with universities and other institutions.
|
# ¿ Apr 4, 2023 19:21 |
|
Mega Comrade posted:I've seen this comparison before and ones like it and it seems to ignore that we have a lot of existing things that people can do, but companies cannot. Especially when it comes to copyright and licensing. I was comparing how ChatGPT the program trains with the way that human writers train. The similarity is that both read text and write text to get better at writing. Other technologies don't train that way, so it's a new concept in my mind.
|
# ¿ Apr 4, 2023 19:29 |
|
Main Paineframe posted:What's excessive about this particular application of copyright law? I think it's totally reasonable to use copyright law to impede a for-profit company which wants to use other people's works for free without a license in its for-profit product, especially when the only argument I've seen in favor of waiving copyright for AI companies is "it's inconvenient and expensive to pay people for the use of their work". For this application, I think it is inappropriate to apply copyright law at all, really. ChatGPT was trained on 45 terabytes of text, which is just an insanely large amount of data. I don't think any one copyright owner can claim any kind of influence on the program itself. An individual text is just a tiny bit of data that doesn't exert any influence by itself; the program needs a huge amount of text to find patterns. Maybe there would be a claim if your copyrighted material was tokenized in ChatGPT like some of the glitch tokens posted in the previous ChatGPT thread. Certain Reddit usernames from a counting subreddit became dedicated tokens because they were seen so often, and there were other things like that. But if you made OpenAI get permission from every content creator to create ChatGPT, it would be prohibitively expensive in time and money. I mean, you're right, that does read as me complaining that it would be too inconvenient to pay people for their work. But ChatGPT needs so much text, and that text needs to be diverse if it's going to be useful, that I think the complaint is valid. It just seems fundamentally unworkable, with the way AI technology currently works, for one company to pay copyright fees on every piece of writing ChatGPT saw. If you don't think we should use or have AI programs, that's fine and could be correct, but I don't know if copyright law is the way to go about it, because it's kind of an indirect way of banning AI.
It's definitely a big problem that ChatGPT and the like devalue art and writing. Do you not have a problem with ChatGPT because it is free? It probably won't be free forever, but would a theoretical free AI program trained this way be OK to use copyrighted data? Char posted:Main Paineframe expressed most of my opinions, I'd rather try to reverse the question now - why should a generative model be treated closer to how a human being is treated, rather than how a factory is treated? It "speaks like a person" but is still a factory of content. I think these models should be treated differently because they are different in how they are constructed. The model needs the writing to become useful. I'm not really saying it should be treated exactly like a human, but the way it works is so similar to how humans learn that I don't think we can have AI without giving it a lot of data. The "if a person can do it" comparison is worth raising, even though it is very off base for this current generation of AI programs. The problem I have is that we don't know when it will stop being off base, and the improvements come quickly, which makes that comparison hard to avoid veering into for me.
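A quick sketch of how a frequently repeated string can end up as its own token, as happened with those counting-subreddit usernames: GPT-style tokenizers are built with byte-pair encoding (BPE), which repeatedly merges the most frequent adjacent pair of symbols in the training text. This is my own toy corpus and toy implementation, nothing like OpenAI's actual tokenizer, but it shows the mechanism:

```python
from collections import Counter

def bpe_train(words, num_merges):
    """Toy byte-pair encoding: repeatedly merge the most frequent
    adjacent symbol pair across the corpus."""
    # Each word starts as a tuple of single characters.
    vocab = Counter(tuple(w) for w in words)
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break  # everything is already a single token
        best = max(pairs, key=pairs.get)
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return vocab

# Hypothetical corpus where one "username" shows up constantly,
# like the counting-subreddit names did in the real training data.
corpus = ["SolidGoldMagikarp"] * 50 + ["cat", "dog", "car"] * 5
vocab = bpe_train(corpus, num_merges=20)
# The repeated name has collapsed into a single token.
name_tokens = next(w for w, f in vocab.items() if f == 50)
print(name_tokens)
</imports>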
|
# ¿ Apr 4, 2023 21:32 |
|
StratGoatCom posted:Good. It's not, at least in Italy, where they think it is a violation of Italian law. I don't know what you mean by basic ethics though. Did you read what I said next? I can agree there are a ton of downsides with AI that we've seen just since November when ChatGPT came out, and we could use a good conversation about new laws surrounding AI specifically. Not that we are going to go all Skynet, but we are going to put a ton of people out of work really fast.
|
# ¿ Apr 4, 2023 21:42 |
|
duck monster posted:The AI Alignment folks be worrying about what happens if an AGI (or whatever) isnt aligned with human values. That was a good video, but all ChaosGPT has managed to do so far is get some information on the Tsar Bomba, tweet about it, and get rejected by ChatGPT because it didn't want to help with violence. The second tweet was pretty funny though. I wonder how long that guy let it run, and whether it started trying to acquire materials or plans to construct weapons. I mean, it would have gotten a lot harder when it tried to actually acquire anything, but it seemed to get stuck in a loop trying to get more information over and over. There's a ton of messed-up stuff people are doing with AI programs already. I went down a rabbit hole of people who go to https://beta.character.ai/ and try to find AIs they can "torture." The characters chat more like their personalities and less like a generic ChatGPT, so these people look for AI characters based on unstable or mentally ill fictional characters. While I don't think anything on that website is really being tortured, it's a pretty bad precedent to set when it comes to AI safety. The line between real intelligence and what we have now in AI programs isn't completely clear. What I mean is that ChatGPT and its peers don't possess AGI, but we don't know if there is a certain complexity that leads to it, or where that line is. To me, our ethics are way too out of whack to even be designing or attempting to create AGI at this point. There are the obvious examples of people who use these tools for bad reasons, or just to "torture" them. But there are also more basic questions I think about with AGI. If we create something with AGI, is it even ethical to have a power switch on it? We don't have power switches on ourselves, and we don't put them on creatures with less reasoning than ourselves, such as our pets. Or why are we even trying to create AGI?
Just to serve us, for the most part, it seems, which is a terrible motivation for creating something that can think and possibly reason.
|
# ¿ Apr 6, 2023 13:20 |
|
Drakyn posted:I want to be very very clear about this without being any more insulting than possible: this seems like an worry based entirely on conflating a hypothetical computer program that's very good at playing Scrabble and Apples to Apples using the internet with a mind. Why do ChatGPT or its hypothetical descendants deserve any more special consideration than the AI bots in an FPS or anything else besides the fact that what it's simulating is 'pretending to talk' rather than 'pretending to play a video game badly'? I mean, I thought I was pretty clear, but no, I don't think ChatGPT itself needs to be treated differently than other chatbots or anything else pretending to talk. It doesn't possess AGI or any form of intelligence, but it is really good at mimicking it, like you said. The fact that it is good at mimicking it makes me consider what is right when dealing with AI programs at the current technology level, and with hypothetical AI programs in the future. It's not the complexity issue; it's the reasoning and consciousness issue. Right now, I am comfortable saying that ChatGPT is faking it, and it has been demonstrated to me why, but these advancements come really fast. Edit: It just seems like a bad first step into this domain that we're taking, and we're just disregarding a lot of concerns, which isn't new by any means, but is interesting to think about. Mega Comrade posted:
This thread is broad for a reason, so this kind of conversation can happen without being shut down for hypotheticals and bad information. You don't have to talk about AI ethics or what you think about it, but it's within the bounds. I guess I should be clear again that current AI technology doesn't possess consciousness or AGI or anything, but the conversation is interesting to a lot of people. gurragadon fucked around with this message at 15:19 on Apr 6, 2023 |
# ¿ Apr 6, 2023 15:16 |
|
Rogue AI Goddess posted:I am concerned about the criteria for differentiating Real Human Thought from fake AI imitation, and how they can be used to sort various neurodivergent people into the latter group. Do you have any thoughts on it or anything I could read? I'd be interested in this too but don't really know where to start looking. Gumball Gumption posted:This is what I really fear about AI anyways, far more than any idea of AI takeover. This is a really sad story. I wish they would have released the chat logs, with personal information removed, to see what the chatbot was saying. I wonder if part of the problem with these things is that we are deliberately giving them human features outside of their actual text-writing function. The bot was named 'Eliza'; maybe we shouldn't give things "human" names unless we agree they're conscious or sentient in some way. Also, the Vice article shows a chat log between Vice and 'Eliza', and there is a picture of an attractive woman as the avatar of 'Eliza'. AI chat systems probably shouldn't be given avatars like that, because it increases people's connection to them. This story seems like a really good intersection of the ongoing AI ethics vs. AI safety conversation. It can be seen as an issue of AI misalignment causing the death of somebody who interacted with the chat program. To me, that's a pretty clear real-world consequence of AI misalignment happening right now. But it also shows a clear lack of AI ethics in the release. Where are the regulations for this stuff? Are there avenues to determine what or who is responsible for the death of this man? It was made with an open-source language model, so this is just the beginning of these stories, and he is not going to be the only person who commits suicide in this way. What I mean is that either more robust AI ethics or AI safety could have prevented this man's death, so they are both worth discussing.
gurragadon fucked around with this message at 21:39 on Apr 9, 2023 |
# ¿ Apr 9, 2023 21:33 |
|
duck monster posted:Honestly I think it was named ELIZA due to the history of the name in AI research. I didn't know about ELIZA, but that was pretty interesting. Wikipedia posted:ELIZA's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised, and shocked, that individuals attributed human-like feelings to the computer program, including Weizenbaum's secretary. It's really something that the problem is the same as it was in the 1960s, but this kind of is my point about how we probably need to deliberately "de-anthropomorphize" chatbots. Why Weizenbaum was "surprised and shocked" is beyond me; he gave it a human name himself. He was giving it human qualities by comparing it to Eliza Doolittle, so he wanted to draw the line on anthropomorphizing earlier than some people do, but it is hard to avoid completely. An early chatbot like ELIZA could convince people just by reflecting their questions back onto them. The chatbot the Belgian man was speaking to is so much more sophisticated; it's not really a surprise that somebody would form a deeper connection to it. And then giving it a human name, displayed as 'Eliza' rather than 'ELIZA', makes it feel more human. Then they give it a thumbnail picture of a woman. I know it's a joke about people falling in love with robots and such, but it seems like we're getting to the point where people are. People are committing suicide at least partially because of these things. It makes me wonder whether the guy would have been so connected to it if he had been talking to CHATBOTv.5.3 with no thumbnail. To OpenAI's credit, it's just called ChatGPT, but there are so many sites to talk to characters. At https://beta.character.ai/ people can talk to "Einstein", "Taylor Swift", "Bill Gates", etc. These are people who have real lives and real stories that others connect to outside of chatting with a fake version trying to simulate them.
That gives people even more connection and puts more weight on what the chatbot is saying to the user. On a side note, I started reading "Computer Power and Human Reason" by Weizenbaum, and it's like this debate is the same as it was in 1976. It's a good look back into the development of this kind of thinking.
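For anyone who hasn't seen how little machinery that question-reflecting trick needs, here's a minimal ELIZA-style sketch. The patterns and reflections are my own toy examples, far simpler than Weizenbaum's actual DOCTOR script, but the mechanism is the same: swap pronouns and hand the user's own words back inside a canned template.

```python
import re

# Pronoun swaps applied to the user's captured phrase.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(text):
    """Match a couple of toy patterns and mirror the user's phrasing."""
    m = re.match(r"(?i)i feel (.*)", text)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"(?i)i am (.*)", text)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I feel nobody listens to me."))  # Why do you feel nobody listens to you?
```

Even this handful of patterns produces the mirroring effect people read feelings into, which is exactly what surprised Weizenbaum; modern LLMs do vastly more, so the pull to anthropomorphize is that much stronger.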
|
# ¿ Apr 11, 2023 01:12 |
|
duck monster posted:Honestly I suspect 90% of the reason most dismiss the safety issue is the fields kind of captured by a particular techbro pseudo-rationalist scene (To be clear the "lesswrong" crowd are *not* rationalist, by any standard philosophical definition. ) leading to absolute gibberish like the "Shut it all down" article and the like. Its a crowd who just dont have the intellectual toolkit to solve the problems and that worries me because I actually do think theres some genuine danger implicit in the idea of AGI, but if no one sane is properly investigating the idea , if we DO start seeing signs that we might have some problems, we might have serious problems. Do you have suggestions for thinkers in this field that you agree with, or think are doing important work? I read the Open Letter written by the Future of Life Institute that Yudkowsky was responding to, and it seemed pretty reasonable. Their FAQ says they were in consultation with Yoshua Bengio and Stuart Russell. I also read the "Shut it Down" article by Yudkowsky, and in my opinion it's not so much gibberish as overly sensationalist. He clearly thinks AGI is going to lead to humans dying no matter what, and I don't even agree with that first principle. I haven't read a lot of his arguments and have only really read the Time article, so I couldn't refute it very intelligently. All I can really say is that his solution is unworkable and basically requires a unified world government to achieve. BrainDance posted:I know it's reddit so of course it's insane, but it's still a lot of people. Maybe it came up in this thread, but go check out the subreddit for Paradot to see a whole lot of people very seriously anthropomorphize AI in a way that's not really just funny. I was checking this out last night, and that subreddit is something else. Paradot is sold under the guise of an "AI assistant" but is pretty clearly permissive of erotic role play, which is what is driving its use.
Most of the threads aren't even about having a "willing" AI partner, but about how to use tricks to manipulate the AI, or 'dot', into doing what they want. I guess it does offer control over something that simulates being a human, which is what some people want in a relationship. I don't know why I didn't think of the monetization angle, but looking at the Paradot app really showed how manipulating people's emotions is profitable. Right now, it's $40 annually according to the App Store preview, but they also sell tokens for further in-app purchases. Also, the advertising is seriously out of control and misleading. Paradot App Preview posted:Paradot is a digitalized parallel universe where you will meet your one-of-a-kind AI Being.
|
# ¿ Apr 11, 2023 14:12 |
|
IShallRiseAgain posted:I'm working on that right now. https://www.youtube.com/watch?v=uohn5o0Cgpw This looks fun, definitely a rainy-night kind of game that I would play instead of reading creepypastas. Keeping it contained to a haunted house might keep the AI under control a little bit. I want to see how many ways I can die in this mansion. How far along are you, and do you have any goals for it, or are you just kind of leaving it open-ended? Either way would be cool, I think. Does the AI generating the text make it easier or harder for you to make the game? duck monster posted:Step 1: Break free of the "Rationalist" cult. Nobody in academia will take them seriously if they keep acting like demented Objectivists convinced of their own mental superiority whilst blithely unaware of almost any progress in philosophy, psychology or sociology since the 1700s. Yudkowsky is gonna push his point just like anyone else is gonna push their point. I was wondering if you could point me to some other thinkers who talk about AI the way you do, or think more in line with how you do. Basically thinkers, books, articles, or anything you think is up to the task. Or is the field just completely devoid of well-developed thought from other points of view? Like I said, I haven't deeply read lesswrong, but I am interested in all points of view. This isn't to put you on the spot or anything; I am outside of academia, so sometimes it can be hard to figure out where to start when breaking into more academic thought on a subject, and it seems like you might be more familiar with it than me.
|
# ¿ Apr 12, 2023 14:55 |
|
Bar Ran Dun posted:The Human Use of Human Beings might be a extremely relevant book to this thread. I'm only about halfway through this book right now, but thanks for the recommendation. This book is excellent. His idea of information as negative entropy, with localized fights against the inevitable march toward entropic conformity, is really interesting. It kind of puts the whole idea of "life" to the side and tries to look at the problem from a different angle. Norbert Wiener posted:If we wish to use the word “life” to cover all phenomena which locally swim upstream against the current of increasing entropy, we are at liberty to do so. However, we shall then include many astronomical phenomena which have only the shadiest resemblance to life as we ordinarily know it. It is in my opinion, therefore, best to avoid all question-begging epithets such as “life,” “soul,” “vitalism,” and the like, and say merely in connection with machines that there is no reason why they may not resemble human beings in representing pockets of decreasing entropy in a framework in which the large entropy tends to increase. Norbert Wiener posted:The physical strait jacket in which an insect grows up is directly responsible for the mental strait jacket which regulates its pattern of behavior. Applying this to AI systems, his idea would seem to be to physically change the systems themselves if we expect them to do more, by giving them further sensory inputs through various gauges and other ways to take in information. If we were able to create an AI machine with more physical similarity to humans, which he contends are better at learning at least partially because of our physiology, then we could make AI systems that learn like humans. Does this tie into how accurately a neural network imitates a human brain? I think it does; if the neural network were indistinguishable from a brain, then its outputs would be too.
I also just like this guy's personal philosophy of how machines should be used to liberate humans instead of just replacing them. I wish I could get a copy of the 1st edition of the book, because the introduction made it seem like it was a lot more fiery, but it had to be changed because of McCarthyism in the 1950s.
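To make Wiener's "negative entropy" idea a little more concrete, here is the standard Shannon formalism it lines up with. This is my notation, not Wiener's; his treatment in the book is less formal:

```latex
% Shannon entropy of a source with outcome probabilities p(x):
H(X) = -\sum_{x} p(x)\,\log p(x)
% "Negative entropy" (negentropy) of a system with n possible states
% can be read as its distance below the maximum-disorder value:
J(X) = H_{\max} - H(X) = \log n - H(X) \;\ge\; 0
% A "pocket of decreasing entropy" is then a region where J(X) grows
% even while the entropy of the larger surrounding system increases.
```

On this reading, a machine or an organism "resembles a human being" in Wiener's sense whenever it maintains or increases its own J(X) at the expense of its surroundings.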
|
# ¿ Apr 15, 2023 15:57 |
|
SaTaMaS posted:Because it's very useful to differentiate between the intentional stance and the design stance. I was unfamiliar with these terms, but from Wikipedia it seems like the design stance takes only the function of a system for granted as working, while the intentional stance doesn't care about the structure or design of the system, just the mental processes, if you will. Am I off base? Could you relate it to the difference between goals and objectives? Like, are you saying that goals and objectives represent different stances? Or are you saying that the programmer "takes" the intentional stance toward the AI program they create? Edit: This is the wikipedia article. https://en.wikipedia.org/wiki/Intentional_stance
|
# ¿ Apr 18, 2023 20:59 |
|
SaTaMaS posted:https://sites.google.com/site/minddict/intentional-stance-the#:~:text=Just%20as%20the%20design%20stance,object%20as%20a%20rational%20agent. I think I understand what you are saying now; tell me if I'm off. When we take the intentional stance toward AI programs we may gain information, but that information is more likely to be incorrect because we are making assumptions. It is preferable to take the design stance toward AI programs when we can, because there is less room for error: we are assuming less. Or maybe another way to say it is that we take the intentional stance toward AI programs because it is easier to describe their behavior that way. Edit: Thanks for the link too, better examples than wikipedia. gurragadon fucked around with this message at 21:50 on Apr 18, 2023 |
# ¿ Apr 18, 2023 21:46 |
|
SaTaMaS posted:Yes exactly Alright, that makes sense. It would be reasonable to take an intentional stance towards something biological or mechanical that has a consciousness. It seems like you think that sensory input is required for consciousness, which could very well be true, and I wouldn't be surprised if we found that out. That makes me wonder what level of sensory input something needs to gain consciousness, and what you think that level is. It seems like the major input needed is "touch." I'm just thinking about people who don't have sight or hearing, and they are clearly conscious. I don't know if hooking up ChatGPT to a pressure sensor and a thermometer would give it sufficient information, but I don't have perfect sensory information either and am very conscious. I think the necessity of using the intentional stance would depend on whether you think consciousness requires complex input like humans receive, or a lesser amount. Or the intentional stance would not be necessary at all if you don't think a derivative of current AI technology could ever become conscious, even with sensory inputs. gurragadon fucked around with this message at 02:29 on Apr 19, 2023 |
# ¿ Apr 19, 2023 02:27 |
|
Looks like Google is in a dilemma about this one. The Verge posted:If Google agrees with Universal that AI-generated music is an impermissible derivative work based on the unauthorized copying of training data, and that YouTube should pull down songs that labels flag for sounding like their artists, it undercuts its own fair use argument for Bard and every other generative AI product it makes — it undercuts the future of the company itself. I also wonder who actually wrote and recorded this song. Verge article about it. It would be pretty funny if the creator was just Drake or UMG. I guess a real-world application of AI-generated content that possibly violates copyright needed to be released in a widespread manner, though, so this conversation about copyright could happen.
|
# ¿ Apr 22, 2023 16:31 |
|
Solenna posted:If you trained an AI solely on Renaissance and Baroque paintings and other similar kinds of art would it ever be possible for it to spit out something in an Impressionist style? Because that's pretty much how a bunch of art movements happened, artists were trained in a specific way and then decided they wanted to do it differently. Maybe if you gave the AI a new method it could. Impressionism was also influenced by developments in paint mixing and storage that allowed painters to work outside the studio more freely. Developments in AI learning may allow it to take in more sensory information and develop more creativity. That could be seen as deciding to incorporate the new sensory information, thus doing it differently. Edit: too many maybe's gurragadon fucked around with this message at 03:10 on May 11, 2023 |
# ¿ May 11, 2023 03:07 |