|
SCheeseman posted:I don't think many want to chat about it, tensions are way too high and it's fracturing communities. Anyone who has their jobs and livelihoods threatened by it and/or see it as an affront to humanity are mostly interested in ways to crush it. Pretty understandable, the capitalist powers that be will take this technology and use it in all the ways people fear it will be.

Anyone who understands the technology on more than a surface level is generally right with those authors and artists, because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists; no organization could afford to license works at that scale. Even at massively discounted commercial rates, the image databases behind them would cost billions of dollars.

It is hilarious how clearly this is designed to benefit the rentier class, though. Using billions of dollars of supercompute hardware, at a cost of millions a month in power, cooling, and network capacity, to avoid paying a few graphic designers $40k a year.

gurragadon posted:It's like the industry of creative arts is going through what other manufacturing has been going through in a really fast time scale. All those people working on an assembly line are replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

GPT-4 being able to pass the Bar is more a function of how the test is written than of the ability of the AI. The exam is specifically designed to test applicants' memory in a stressful, long-form format, and a search algorithm with effectively perfect recall will do well at that. Doing a lawyer's actual job, analyzing the law and presenting facts in light of that analysis in a convincing and logical manner aligned to precedent, is far outside its scope.

The same goes for doctors: being able to cross-reference WebMD really fast is no replacement for the people skills required to get an accurate description of symptoms from a patient, or to create a treatment plan across multiple conditions that balances quality of life, patient comfort, and effectiveness of treatment.

Hell, GitHub's implementation for writing code is going to go hilariously badly, because it is trivial to poison the data sources in ways that make the outputs useless, or to inject exploits such that the generated scripts recreate them. Liquid Communism fucked around with this message at 05:24 on Mar 27, 2023 |
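To put toy numbers on that licensing claim (both figures below are my assumptions for scale, not anything from a real rate card): a LAION-scale training set runs to billions of images, and even a heavily discounted per-image license adds up fast.

```python
# Back-of-envelope only; both numbers are assumptions for scale,
# not real licensing figures.
images = 5_000_000_000          # assumed dataset size, roughly LAION-5B scale
per_image_license = 0.50        # assumed deeply discounted commercial rate

total = images * per_image_license
print(f"${total:,.0f}")         # $2,500,000,000, before legal and admin costs
```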
# ¿ Mar 27, 2023 05:14 |
|
Even on factual questions, if there isn't enough context or your query takes too long, it'll just make poo poo up. It's just a search engine bolted to the sort of word predictor your cell phone uses to suggest text replies.
|
# ¿ Mar 27, 2023 15:47 |
|
gurragadon posted:I do have to agree with you for the most part that doing good on a test is no indication something will be good in practice or vice versa. But I also can't completely dismiss a connection between the two because many people I've worked with were also top of their class and lack of effort in studying for a test can indicate lack of effort in their profession. The bar exam does have an analysis section and lawyering task sections though. I haven't taken the test but maybe you have? How similar are those essays and tasks to the type of daily tasks of a practicing lawyer?

Remember that the Bar is not an indication of a good lawyer. It is a minimum standard to be allowed to practice law, and in the case of this 'pass' the model is again starting from the massive advantage over a human of being able to look up sources with total recall. I'd imagine the worst law student on the planet could pass the exam if it were open book.
|
# ¿ Mar 27, 2023 22:57 |
|
gurragadon posted:It does have that advantage, but I don't really think it's fair to count it against the AI. I mean the system has access to that information by design, my brain has access to legal facts that I learn by design. I'm just a lot worse at that.

Given that part of the point of the test is 'does this person remember the principles well enough to make decisions based on them under stress', the recall ability is indeed something being tested. There's a reason candidates cram for weeks before taking the test, trying to hold as much information in memory as they can.
|
# ¿ Mar 27, 2023 23:20 |
|
gurragadon posted:I know about cramming and operating under stress. So, is the Bar useful for determining if lawyers are good or not? Because that seems like an endorsement of the Bar Exam, which GPT-4 got in the 90th percentile on. A GPT-4 implementation is incapable of experiencing stress, and again has open access to the materials being tested on, so by its very nature a test of its stress management and memory cannot have any meaningful results.
|
# ¿ Mar 28, 2023 01:58 |
|
gurragadon posted:I was asking if you thought the bar exam had meaningful results for a human? What you see as not having meaningful results from GPT-4, I see as GPT-4 removing the obstacle of stress management from the process and performing in a superior way.

It does have meaningful results for a human, although I'm sure there are better approaches; that may be my own bias against the validity of standardized tests.

If passing the test were the purpose of the test, you would have a point about the GPT-4 results. It is not. The purpose of the bar exam is for a candidate to demonstrate that they have the skills necessary to practice law at the standard set by the bar association, as a condition of admission to the bar and licensure to practice law. The ability to manage that stress is part of the point of the test. That GPT-4 cannot experience that stress is not an indicator of superiority so much as a demonstration that it lacks the basic capabilities being tested in the first place.

So far as I can tell from the article, it was also only tested on the first two portions of the bar exam, the multiple-choice and essay portions, and not the significantly more important performance test, where an aspiring lawyer is given a standard task such as generating a brief or memo for a case file set in a fictional state, along with a library of that fictional state's laws. I do not expect that GPT-4, by its design, is capable of even that relatively simple task of reasoning over a dataset on which it has not been trained.

All somewhat beside the point, as GPT-4 cannot in point of fact practice law: by definition a lawyer requires personhood, which a chat algorithm is incapable of.
|
# ¿ Mar 28, 2023 05:15 |
|
IShallRiseAgain posted:Honestly, I think once it reaches the point an AI can replace programmers, it's basically capable of replacing any job, except for jobs that strongly rely on social interaction. There might be a slight delay for physical labor, but the tech is almost already there.

I honestly don't think it'll ever be a functional replacement for programmers. It can only interpolate from what it is trained on, so it can't find novel solutions to problems, and it is incapable of any but the shallowest context. It'll speed up routine code much like predictive text speeds up messaging, but it won't replace human experience and knowledge.

I've been in IT long enough to also understand that a lot of businesses will try to sell it as a replacement for coders, then spend a poo poo ton of money hiring people to unfuck the code the AI was confidently wrong about and they implemented without sanity checks.

Doctor Malaver posted:It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..?

I think most of us old grumpy tech folks are well aware of the vast signal-to-noise change social media brought to the internet, and understand that a tool that lets individual bad actors generate that noise vastly more efficiently will poison the usefulness of the internet as a platform, in much the same way that junk mail, robocalls, and email spam either crippled those channels or required extensive legal and/or programmatic solutions to filter. Half the reason conspiracy theories get so much traction is that they're spread at great volume, and most people absolutely don't have the time to research and critically evaluate every piece of information they consume.
Liquid Communism fucked around with this message at 11:48 on May 8, 2023 |
# ¿ May 8, 2023 11:09 |
|
Owling Howl posted:This discussion has been ongoing for like 50 years with people submitting works made by children, animals or computers to art galleries.

The bigger point to me is that it's a force of stagnation. AI can't create; it can only interpolate from its training set. That means you're never going to get an AI Andy Warhol, or see new mediums or forms of expression evolve out of emerging art styles, because without references it cannot duplicate them. Hell, the first major use of deepfake tech outside porn was duplicating dead actors for Disney, so they didn't have to recast characters they were relying on for a nostalgia bump.

As AI art makes it impossible for artists to find paying work, we lose any future works they would have made, and the works those would have inspired. There's a reason the New Deal included funding for the arts.
|
# ¿ May 10, 2023 06:48 |
|
BrainDance posted:But, how is that different from what humans do?

If you can't understand the difference between a script that puts pixels together based on human-labeled training datasets, then waits to see whether the result triggers enough human pattern recognition for the user to decide it's what they wanted, and a human being creating representational art to convey their lived experience to others, I'm not sure I can help you.

AI doesn't create. It does statistical prediction over a massive dataset to produce a pattern the user will recognize. It no more understands what a 'dog' is than a toaster does, and can only contextualize one through the files in its training dataset labeled (by humans) as 'dog'. PBS, of all mainstream sources, did a pretty good segment on just how much human labor (mostly an exploited developing-world workforce) actually goes into making current AI tools look smart.

https://www.pbs.org/newshour/show/concerns-rise-over-treatment-of-human-workers-behind-ai-technology

Liquid Communism fucked around with this message at 14:50 on May 10, 2023 |
# ¿ May 10, 2023 14:45 |
|
Lemming posted:Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

The key is the difference between interpolation and extrapolation. An AI can guess the next point in a pattern based on all the other, similar patterns it has been trained on, but it is limited by the outer bounds of that training data. It will also be confidently wrong, as it is incapable of second-guessing its own work. A human can take a series of data points and make inferences based on data not actually present. Liquid Communism fucked around with this message at 03:26 on May 11, 2023 |
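A minimal numeric sketch of that interpolation/extrapolation gap (my own toy construction, nothing from the post): fit a straight line to points sampled from a quadratic, then query the model inside and far outside the sampled range.

```python
# Sample y = x**2 only on [0, 3], where a straight line looks adequate.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]

# Ordinary least-squares line fit; for this data it comes out y = 3x - 1.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(slope * 2.5 + intercept, 2.5 ** 2)  # 6.5 vs 6.25: interpolation is close
print(slope * 10 + intercept, 10 ** 2)    # 29.0 vs 100: extrapolation fails, with no warning
```

The model answers confidently at x = 10 even though nothing in its data supports that region, which is the "confidently wrong outside the training bounds" point in miniature.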
# ¿ May 11, 2023 03:20 |
|
BoldFace posted:I'm only familiar with interpolation and extrapolation in mathematical context involving things like numbers or geometric objects. I'm struggling to understand how you use these terms with language models. If I ask GPT-4 to come up with a new word that doesn't exist in its training data, in what sense is this interpolation rather than extrapolation? Similarly, I can ask it to create a natural number larger than any other present in the training data (which is finite). You say that the training data imposes limits on the output of the model. I would like to know how these limits manifest in practice. Is there a simplest task an AI fails because of these limits, but a human doesn't?

If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles.

A better and more common example is troubleshooting programming code. ChatGPT is absolutely terrible at this, because it is both confidently wrong and incapable of inferring intent. A human coder can look at a piece of code and the use it was meant for, work out what the writer intended, and see where they hosed up, even if the syntax is fine. This is such a basic thing that it's elementary-level problem solving, and large language models are utterly poo poo at it, because all they can do is compare the code to other examples of syntax they were trained on and vomit up something linguistically similar.
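To illustrate the intent-inference point with a hypothetical snippet (the function and its bug are mine, purely for illustration): the code below is syntactically valid and runs without error, and the only way to spot the bug is to infer from the name what the author meant.

```python
def average(xs):
    # Intended: sum(xs) / len(xs). The divisor below is the bug; a human
    # catches it by reading the name and inferring intent, while a pure
    # pattern-matcher sees only valid, plausible-looking syntax.
    return sum(xs) / (len(xs) - 1)

print(average([2, 4]))  # prints 6.0, though the name promises 3.0
```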
|
# ¿ May 11, 2023 06:42 |
|
Tei posted:1- That AI art can't be copyrighted. I don't expect #2 to hold up to any court challenge. Too many companies' main profit center is rent seeking on copyright ownership, and Disney has better lawyers than the government, and the usual 'we're just a carrier' argument isn't going to fly when they're producing derivative works.
|
# ¿ May 12, 2023 15:19 |
|
SCheeseman posted:The argument will probably be a fair use one, pointing to Google Books being ruled a transformative use in spite of the entire business model of that service being reliant on creating unauthorised scans of copyrighted books, serving unedited extracts made available for free, supported by advertising. Google attribute works to the authors but that didn't stop a lawsuit from rights holders, and that AI generators don't spit out explicit copies is arguably as much an extenuating circumstance. AI generators being capable of spitting out explicit copies will be enough to prove that they're using copyrighted materials to create the backend for their for-profit service without licensing. It'll go over about as well as someone trying to start a new Netflix with nothing but DVD rips.
|
# ¿ May 13, 2023 05:17 |
|
roomtone posted:There is still profit being generated from your art - the difference is, now instead of a % of the take, and related ways to support yourself, you get nothing, the tech companies take all of it including even the credit. It could be the same with any creative format, if it's just allowed to happen. Somebody is still going to profit from human creativity, it's just going to become people who have nothing to do with the creativity.

Yep. Always fun to watch the rentier class once again engage in outright theft, then turn around and offer to sell back the work they stole at a premium. That's the thing about every last one of these techbro 'disruption' schemes: at root they're all attempts at landlordism. The people running these content-generation AIs have no desire to produce art or literature. They simply want to make sure they get a cut of the value of any that is produced. Liquid Communism fucked around with this message at 09:13 on May 14, 2023 |
# ¿ May 14, 2023 09:09 |
|
Count Roland posted:Here's a post from Slashdot which I thought was interesting: It in no way follows that it is then a social good to allow corporations, who are explicitly not people and thus cannot create, to violate copyright laws for profit while also exploiting said laws to attack creators.
|
# ¿ May 15, 2023 19:09 |
|
KillHour posted:You might be speaking from a personal values perspective, but as far as the law is concerned, I'm pretty sure this is explicitly not true.

From a legal perspective it remains true, outside of personhood in some technical standings. People create works for hire which are owned by the corporation, but the corporation itself, not being a human, is incapable of authorship.

GlyphGryph posted:Maybe in some other copyright regime this would make sense, but in the one we live in copyright exclusively belongs to the corporation creators work for and the money-men who buy them and not the creators themselves the vast majority of the time, often specifically so they can use those copyright laws to deprive the creators of the right to their work. (ZA/UM is a great recent example of this)

As said above, are you criticizing the existence of copyright because the creators have not destroyed capitalism yet? Creators own their work unless forced (mostly by financial coercion) to transfer that ownership, either via 'work for hire' or outright IP sales. Or, as is common and is the foundation of most LLMs currently in use, by outright theft that the creators lack the funds to successfully oppose in court, because the rentier class has won at capitalism and amassed resources more surely than the old nobility ever did.
|
# ¿ May 16, 2023 09:01 |
|
Count Roland posted:I believe LLMs are poor at logic. Dealing with facts requires the AI to state things are true or false. Such statements can be logically modified ie if x is true then y. A model that is guessing the next symbols in a phrase will sometimes pull this off but can't itself be reliable. The AI needs to do logical operations. Which I assume is possible, given how logic-based computing is.

They're not so much poor at it as incapable of it. There's no logical processing going on, just a query for the most likely response based on what is related in their training data. It's the same for fiction writing: it's honestly more work to make anything beyond a very short LLM response readable than to write your own rough draft, because the model is incapable of context and confidently wrong often enough that you'll lose a ton of time editing for continuity, factuality, and catching plagiarism.
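For what "most likely response based on training data" means mechanically, here's a toy bigram predictor (my own sketch, far cruder than a real LLM, but the same basic move: no logic, no truth values, just frequency counts):

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a tiny "training set".
training = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def predict(prev_word):
    # Emit the most frequent continuation seen in training; for a word
    # never seen, the model simply has nothing to say.
    options = follows.get(prev_word)
    return options.most_common(1)[0][0] if options else None

print(predict("the"))  # "cat": seen twice, beats "mat" and "fish"
print(predict("dog"))  # None: outside the training data
```

Nothing in this process evaluates whether "the cat" is true of anything; it is purely relative frequency, which is the sense in which there's no logical operation happening.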
|
# ¿ May 22, 2023 11:45 |
|
GlyphGryph posted:They don't reference things, though? Once the AI is created and trained it doesn't have any of the original artwork to reference, and conceptually it doesn't make any sense anyhow.

That's not how these things work. The AI's entire 'memory' consists of its training set, which is why you cannot remove something from that set without retraining the AI; otherwise it will continue to use what has been indexed. It is incapable of creativity. It is simply pulling elements from training data that is tagged similarly to the prompt given. This is a large part of why the EU is looking at it sideways: present designs can neither prove they contain no PII nor honor the right to be forgotten, as the GDPR requires.
|
# ¿ May 22, 2023 14:26 |
|
Clarste posted:The idea is to define the program as something that cannot "learn" and can only "copy" so therefore anything in its training set is copying by definition. Like tracing. A computer cannot have a style, it can only trace things.

Yep. Even the 'draw a thing in Bob Ross' style' prompt is a dodge, because the algorithm has no idea what Bob Ross' style is. It knows there were files in its training set that were human-tagged as being produced by or similar to Bob Ross, and it will iterate on parts of them to generate an image that the human user will then decide is or is not what they wanted. The only conception of style here is in the prompt giver and in those creating the metadata for the training set.
|
# ¿ May 22, 2023 18:04 |
|
SubG posted:And if you want to be fully confused about the issue, look up the images involved in Leibovitz v. Paramount, which went the other way.

Not much confusing there; Leibovitz v. Paramount was ruled parody, made for humor value and not in competition with the original photographer's work.
|
# ¿ May 23, 2023 06:03 |
|
https://twitter.com/robertkneschke/status/1662034837618786304 Gee, I wonder why they'd be concerned about liability if forced to disclose their training data.
|
# ¿ May 26, 2023 19:12 |
|
StratGoatCom posted:Oh please, you always have to be clear in chain of title you dork. The AI weirdos made this problem, not us. And yes, we will watermark, yes we will do things to choke your models because you did not ask permission for usage and theft invites retaliation.

Speaking of this, a lawyer went and did just that.

https://twitter.com/questauthority/status/1662273759259295746

Peter LoDuca, arguing a case before the Southern District of New York, submitted an affidavit generated by ChatGPT that outright made up citations to cases that never existed. When pressed by the Court, he then generated those decisions with ChatGPT and submitted them to the court, complete with a forged notary stamp. To say the judge is a bit upset would be the understatement of the year.
|
# ¿ May 27, 2023 17:48 |
|
BrainDance posted:I have no idea how paying a person for their stuff showing up in an output would work or make sense. It's not pulling from the "number I made to represent this artwork in the model", it's pulling from "averages but not really averages I got from denoising into a whole bunch of images that seemed related in a way to this token" so, what, you pay every artist who ever drew or photographed a cat a billionth of a penny every time someone uses AI to generate anything that may have been influenced by its understanding of a cat, which actually includes an incredible amount more (which we probably can't tell) than just the prompt "cat"?

It doesn't make sense, which is why that isn't what anyone is angling for; the goal is instead to require licensing of the works used to train the model in the first place.
|
# ¿ May 28, 2023 07:14 |
|
Gee, almost like there's a reason ethics of technology has a lot to say about why it's a net negative for society to create systems that rely on mass appropriation of others' work for commercial purposes.
|
# ¿ May 28, 2023 12:32 |
|
If you let the AI company set your rates, sure. They need your content, you don't necessarily need to sell to them, and should be pricing in the damage their business will do to your future income if you choose to do so.
|
# ¿ May 28, 2023 14:56 |
|
I'm not against banning its application as a content generation machine, but given that we haven't managed to stop illegal hotels in residential areas (Airbnb et al) or scab taxi services in places where taxis are regulated, despite existing laws that could be enforced against them, I'm not sure regulation can get there fast enough.
|
# ¿ May 28, 2023 16:27 |
|
KwegiboHB posted:There we go, I knew you were in there somewhere. I'm not. One doesn't need to ban the underlying technology to make a given use of it verboten.
|
# ¿ May 28, 2023 17:19 |
|
Owling Howl posted:
I'd consider that a pretty heavy assumption. We can look to how often corporate tech builds on open source to vast profit, then takes their solutions closed source to protect that profit as an example of why. Along with the last ten plus years of VC influenced business operating on the base concept that since you can't jail a corporation, breaking the law is fine as long as the fines for doing so are less than the profit you make in the meantime. Some of your patsies may see fraud charges but the investors rake in the cash and that's what matters.
|
# ¿ May 29, 2023 10:37 |
|
The artist is playing every instrument at an EDM concert, as they're generally solo sets. What the hell are you on about?
|
# ¿ May 29, 2023 20:06 |
|
BrainDance posted:At the vast majority of them no the DJ is not and at least in the early and mid 2000s (and probably before? And probably now? Those were my partying years) the producers were seen as musicians but a large, large chunk of the community absolutely did not see what the DJs were doing as any kind of artistry or playing anything. It was actually a very similar situation to this.

Just because an audience who has never tried to create an art form lacks understanding of the technical skill it requires does not mean that skill is not being used. Which connects right back to AI-generated art, where users want art made to spec without either investing themselves in gaining the skills to create it or the resources to hire someone who has. (And as someone's inevitably going to Kramer on with it: writing an AI prompt is in no way similar; if anything it's less effort than writing a proper spec for commissioned work.)
|
# ¿ May 29, 2023 23:42 |
|
Jaxyon posted:It would also be more viable if the recording industry hadn't more or less instantly decided that it was entitled to more than half of an artists income by virtue of owning a microphone and a recording device. Almost like the rentier class are parasites making their profit off of others' work because they lack the drive to learn a skill! Much like AI 'artists'. Liquid Communism fucked around with this message at 14:11 on May 31, 2023 |
# ¿ May 31, 2023 14:09 |
|
Not particularly, because as you say, I don't recognize writing Google Image Search queries aimed at others' material as a form of artistry, even if you abstract it out a layer and use an overgrown predictive texting algorithm as a middleman.
|
# ¿ May 31, 2023 14:47 |
|
karthun posted:Why not? Is any expression of text a form of artistic expression? If so, why not this one?

Because ideas are a dime a dozen, and the least important part of artistic expression. Text absolutely can be a form of artistic expression when it is used as an artistic medium, but holding up a search query like 'elf paladin character design blonde with sword' as equivalent to even a quick sketch by someone without developed art skills is flatly absurd.
|
# ¿ May 31, 2023 21:33 |
|
gurragadon posted:I think a lot of this is just that we are in the nascent stages of this kind of technology. ChatGPT was the first impressive LLM that I've ever seen, and even though I know it makes mistakes, I still find it incredibly impressive. I don't know what the foreseeable future is and how fast breakthroughs will happen. Is it fusion, where we're always almost there, or is it really going to keep improving really quickly now?

Deeply unlikely. ChatGPT isn't really much of a breakthrough algorithm-wise; they just threw a massive pile of gear at it. Getting it to behave as well as it does takes millions of dollars of hardware, plus ongoing power and cooling investments to run it. MS talked about what they threw behind OpenAI earlier this year:

https://www.networkworld.com/article/3691289/microsoft-details-its-chatgpt-hardware-investments.html

$1bn in 2019 to get here, with another $10bn this year. The hardware involved is reported as Azure supercompute, running on tens of thousands of NVIDIA A100 GPUs quoted at $10k each. That's not particularly scalable unless you're a megacorporation.
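Rough arithmetic on those quoted figures (the GPU count is my reading of "tens of thousands", not a disclosed number):

```python
# Back-of-envelope; gpu_count is an assumed midpoint for "tens of
# thousands" of cards, gpu_unit_cost is the ~$10k A100 figure quoted above.
gpu_count = 20_000
gpu_unit_cost = 10_000

hardware_cost = gpu_count * gpu_unit_cost
print(f"${hardware_cost:,}")  # $200,000,000 in GPUs alone, before power and cooling
```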
|
# ¿ Jun 2, 2023 01:39 |
|
KillHour posted:You mean the military wrote a report on theoretical risks given a scenario - something they do constantly for everything you could imagine - and it was reported as something that actually happened? I'm shocked. This is my shocked face. It's also the plot from Terminator, where Skynet starts a global thermonuclear war because the easiest way to stop locals from turning it off was to provoke a mutually assured destruction nuclear counterstrike.
|
# ¿ Jun 2, 2023 15:26 |
|
SCheeseman posted:https://www.youtube.com/watch?v=_MrGNlXRi9M Not remotely surprising to anyone with the slightest idea of the difference between a jumped up autocorrect and an actual GPAI. Corvids are orders of magnitude beyond current 'AI', much less actual human level thought from a machine.
|
# ¿ Jun 13, 2023 22:23 |
|
Doctor Malaver posted:I'm trying to be impressed but these stats don't mean anything to me. For context, you can always go with Bo being good enough to be playing at the professional level in 2 of the 3 primary US pro sports (baseball and football) at the same time.
|
# ¿ Jul 11, 2023 14:05 |
|
DeeplyConcerned posted:I think people will tend to treat these things as conscious when they start possessing enough qualities that we associate with consciousness. Particularly facial expressions and the expression of emotion.

That's just the very human tendency to anthropomorphize literally everything. It's one of the psychological factors AI developers now lean on to try to convince less tech-savvy users that what they're interacting with is an AGI rather than a predictive text algorithm incapable of contextual memory, much less emotional states.

Rappaport posted:Possessing a credible theory of mind, for me.

As for what I'd find to be convincing proof of AGI? An AI that can learn a task outside its training data without being fed specific instruction, and then, unprompted, apply what it learned to a different but conceptually related task, would be a minimal start.
|
# ¿ Jul 12, 2023 10:50 |
|
Tei posted:I am way better programmer than ChatGPT. Sometimes I get frustrated at how obtuse and mediocre ChatGPT is.

That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs.
|
# ¿ Nov 28, 2023 04:52 |
|
|
Lord Of Texas posted:This is not remotely true, and it's such a blatantly false/low-effort take that I don't feel it's even worth refuting. Just google any number of LLM benchmark papers on semantic scholar.

My dude, you're trusting an AI-based research tool to provide you with accurate summaries of scientific literature, when the technology is well known for outright making stuff up that is trivially wrong? Continually yelling 'nuh uh' is not a refutation.
|
# ¿ Dec 6, 2023 08:29 |