BrainDance
May 8, 2007

Disco all night long!

archduke.iago posted:

Did you actually read the articles you posted? Or did you just Google "ai in drug discovery" and pick three hits on the first page? Nothing in them comes close to your example of an AI-proposed novel molecule with a synthesis to boot, nor does the description from your friend. We *already* have far more proposed drugs than we have the capacity to test and approve, which is a problem that "AI" does nothing to solve.

The constant breathless misrepresentation of the capabilities of AI systems does nothing but further the interests of the programmers who develop them. If the computers are scary, dangerous, and capable, the only ones who can rein them in are the caste of AI priests at the top.

The "synthesis" was my own guess in my quick one off example that I typed literally as an example of potential capabilities of AI, not as me stating this is the thing that AI will definitely do in exactly this way. But as an example to show the kinds of things emergent abilities in AIs lead to which you took incredibly literally and as a complete thesis for some reason.

The articles are from my bookmarks, and absolutely do show the potential for AI in drug development among many, many other things. A quote from one itself, "AI can be used effectively in different parts of drug discovery, including drug design, chemical synthesis, drug screening, polypharmacology, and drug repurposing."

So, you can go ahead and argue that AIs wont do a thing I literally never said they would do. We may have plenty more drugs than we can currently test, but that doesn't mean help in identifying new pharmaceutical pathways to targeting different diseases is unhelpful or unwanted, (otherwise, what are they even paying my friend for?)

"nor does the description from your friend" I didn't even tell you what his description was.

A big flaming stink
Apr 26, 2010
seriously the degree to which it is an incredibly robust language model is very impressive. but it's only that, and if you try to ask it any other question--math, science, even programming--it becomes outright hilarious how much nonsense it repeats back to you

Liquid Communism
Mar 9, 2004

Out here, everything hurts.

Even on factual questions, if there isn't enough context or your query requires too much time, it'll just make poo poo up. It's just a search engine running your cell phone's word predictor to respond to texts.

SCheeseman
Apr 23, 2003

Liquid Communism posted:

Anyone who understands the technology behind it on more than a surface level is generally right there with those authors and artists, because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists; no organization could afford to license works on that scale. Even at massively discounted rates for commercial licensing, the image DBs behind them would be billions of dollars.

Adobe Firefly is based on public domain and licensed stock photography and functions fine.

Not to say there's no exploitation happening, but it's the same exploitation that's existed for as long as capitalism has.

Main Paineframe
Oct 27, 2010

BrainDance posted:

The "synthesis" was my own guess in my quick one off example that I typed literally as an example of potential capabilities of AI, not as me stating this is the thing that AI will definitely do in exactly this way. But as an example to show the kinds of things emergent abilities in AIs lead to which you took incredibly literally and as a complete thesis for some reason.

The articles are from my bookmarks, and absolutely do show the potential for AI in drug development among many, many other things. A quote from one itself, "AI can be used effectively in different parts of drug discovery, including drug design, chemical synthesis, drug screening, polypharmacology, and drug repurposing."

So, you can go ahead and argue that AIs wont do a thing I literally never said they would do. We may have plenty more drugs than we can currently test, but that doesn't mean help in identifying new pharmaceutical pathways to targeting different diseases is unhelpful or unwanted, (otherwise, what are they even paying my friend for?)

"nor does the description from your friend" I didn't even tell you what his description was.

I'm sure that "AI" will have some role to play in drug discovery, but that's very different from "ChatGPT" playing a role.

And that's a very important distinction to make, because the field of machine learning is much larger than ChatGPT. Natural language processors may be able to trick people into thinking they've developed "emergent" abilities and may "eventually" lead to AI, but meanwhile, real-world users have been using highly specialized machine learning setups for all sorts of small but highly practical things for years - and that includes drug discovery.

But it's important to note that drug discovery "AIs" aren't the result of training a natural language model on a pharmaceutical library; they're the result of highly specialized machine learning algorithms designed specifically for drug discovery.

If anything, I think the term "AI" is actively detrimental to the conversation. It causes people to lump all this stuff together as if there's no difference between them, and draws their attention away from the actual specific capabilities and technical details.

Carp
May 29, 2002

GPT does not regurgitate text. After training, the neural net's parameters are finalized, and it does not have access to any form of text. Base GPT does not reference a database. However, some products built on GPT give it access to various data sources, such as Bing Chat with live search results or Wolfram Alpha (GPT-4 Plugin). How a data source is used during a session with a GPT product is determined by the natural language instructions given to GPT before the session starts and the instructions given by the user during their session. Nearly all the abilities that people are excited about are emergent and were discovered in, or coaxed from, GPT's neural net, which is a store of relationships, both large and small in scope, using tokens it discovered itself while training. No human ever gave it an explicit description of language. In fact, it trained on numerous different languages all at once, consuming a huge collection of free-form, unlabeled text using self-supervised learning. It was not given labeled sets of data.

GPT can learn things and even be taught within a session to take on a specific role, limit its scope to a particular context, or perform other tasks. Using natural language, I was able to teach GPT to process claims at work, as I would with a human, using the same paragraphs of text. It was that simple. However, the problem I ran into, which would prevent me from putting it into one of my company's products this morning, is the rare hallucination when GPT finds a connection to something outside the task's domain and becomes convinced that it is relevant. But, this is GPT-3.5, and I have heard that there are significant improvements in GPT-4 on that front.

There is indeed something different going on as the scale increases. This is not a beefed-up Siri or something preprogrammed as an expert system.
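To make that "teach it with the same paragraphs you'd give a human" idea concrete, here is a minimal sketch of that kind of setup: plain-language audit rules go in as a system message, and each claim goes in as a user message. It assumes the openai Python package's ChatCompletion API; the rules, claim fields, and model choice are invented placeholders, not the real product.

```python
# Minimal sketch: instruct GPT with plain-language audit rules, then hand it one claim at a time.
# The rules, claim text, and model name below are invented placeholders.
import openai

RULES = """You are a co-auditor for advertising co-op claims.
For each claim, check that:
1. The invoice date falls inside the stated program period.
2. The claimed amount does not exceed the approved budget.
Reply with PASS, FAIL, or NEEDS-HUMAN-REVIEW, plus a one-line reason."""

def triage_claim(claim_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the audit as deterministic as the API allows
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": claim_text},
        ],
    )
    return response.choices[0].message.content

print(triage_claim("Program period Jan-Mar 2023. Invoice dated 2023-02-14, "
                   "claimed $1,200 against a $1,000 approved budget."))
```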

Carp fucked around with this message at 16:16 on Mar 27, 2023

gurragadon
Jul 28, 2006


It kind of sucks that the AI art was literally just bad and wasn't giving you what you wanted, or couldn't complete an idea in a painting. The inspiration you talk about from Pinterest is something I would have figured AI art could really help with, because it can give you so many images at once. But if the images it is giving you are bad, completely derivative, or nonsensical, you can't gain any inspiration from the tool.

I'm not particularly familiar with anime, but it's interesting that you can tell the difference between somebody with formal training and somebody who trained only with repetition. The person training with repetition is learning the style more like AI, and they can reproduce images just as effectively. But they don't have the language to understand what they are doing, and they don't have the rules of what makes a good composition formally laid out in their mind. I think it leads to that overly polished style, because even if the person faithfully reproduces a style, it's missing the core understanding of what defines the style. I'm somewhat familiar with drawing and draw in pencil occasionally. When you draw a human face, you can look at the image and see that something is wrong about it, but it can be almost impossible to figure out what is wrong.

Art has found itself in kind of a weird problem, because now it's much more available, so more people want to interact with artists and commission unique pieces. But no single patron supports the artist; it's not like any of us are from the de' Medici family, so the artist has to be more flexible to a large group of people. Maybe art schools should include more in their curriculum about meeting with potential clients and the business side of art, for people who want to be artists as their profession.

The big thing you miss by using AI art instead of an artist is the connection between the artist and the patron you described. When you are interacting with the AI system you are only giving your ideas to the AI, but when you find an artist with an interest in what the patron wants, it puts two people's ideas together. The joy of creation is amplified when it's shared by two people with the same vision, and aside from any monetary gain, that is worthwhile in itself.

Liquid Communism posted:

Anyone who understands the technology behind it on more than a surface level is generally right there with those authors and artists, because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists; no organization could afford to license works on that scale. Even at massively discounted rates for commercial licensing, the image DBs behind them would be billions of dollars.

It is hilarious how clearly this is designed to benefit the rentier class, though: using billions of dollars of supercompute hardware, at a cost of millions a month in power, cooling, and network capacity, to avoid paying a few graphic designers $40k a year.

GPT4 being able to pass the Bar is more a function of how the test is written than the ability of the AI. It's specifically designed to test the memory of applicants in a stressful and long-form test. A search algorithm with effectively perfect recall will do well at this.

Doing a lawyer's actual job, analyzing the law and presenting facts in light of that analysis in a convincing and logical manner aligned to precedent, is far outside of its scope. The same goes for doctors: being able to cross-reference WebMD really fast is no replacement for the people skills required to actually get an accurate description of symptoms from a patient, or to create a treatment plan across multiple conditions present that provides a balance between quality of life, patient comfort, and effectiveness of treatment.

Hell, GitHub's implementation to write code is going to go hilariously badly, because it is trivial to poison the data sources in such a way as to make the outputs useless, or to inject exploits into them such that the generated script recreates them.

I do have to agree with you for the most part that doing well on a test is no indication something will be good in practice, or vice versa. But I also can't completely dismiss a connection between the two, because many people I've worked with were also top of their class, and a lack of effort in studying for a test can indicate a lack of effort in their profession. The bar exam does have an analysis section and lawyering task sections though. I haven't taken the test, but maybe you have? How similar are those essays and tasks to the daily tasks of a practicing lawyer?

I think that GPT-4 would be bad in the courtroom, but I still want to see it happen. I want to see it do badly in a lower-stakes case. Currently the most realistic use of AI technology is to streamline offices and to do the tasks the person in charge doesn't want to do but doesn't want to pay for either. I don't see the final human being removed from a system using AI technology at any point in the near future. As you said, the technology is NOT at the level where it can be trusted. But even if it was, somebody has to be responsible. Somebody has to be there to sue. And I don't know how you lay responsibility on a tool like GPT-4, and AI creators are gonna fight that to the end, I think.



Thanks for the post; the emergent abilities from these systems are really interesting and, to me, unexpected. Could you say what industry you work in and why you are looking into AI for it? If you can't, I understand, but I would be interested.

Main Paineframe posted:

If anything, I think the term "AI" is actively detrimental to the conversation. It causes people to lump all this stuff together as if there's no difference between them, and draws their attention away from the actual specific capabilities and technical details.

I just want to address this point before the thread gets too far along. When I asked to remake an AI thread because the Chat-GPT thread was gassed I was told to keep it vague. The reason given for gassing the Chat-GPT thread was it was too specific to Chat-GPT in the title and the title was misleading. In the thread I hope people will refer to the specific AI programs they are talking about, but unfortunately or not this thread was directed to stay vague.

gurragadon fucked around with this message at 16:43 on Mar 27, 2023

BrainDance
May 8, 2007

Disco all night long!

Main Paineframe posted:

I'm sure that "AI" will have some role to play in drug discovery, but that's very different from "ChatGPT" playing a role.

ChatGPT itself likely won't, but it's hard to say what even larger language models will be capable of because, like I was saying, we've seen a bunch of unexpected emergent abilities appear from them as they get larger.

What I was getting at from the start is this: it's not a surprise it's learned to do chemistry better than GPT-3.5, even if it wasn't really intentionally trained to. https://www.jasonwei.net/blog/emergence

Just making the models larger makes them more capable of doing things, sometimes things they weren't even explicitly trained to know or even expected to know. Even when it comes to being factual, GPT-4 still hallucinates a lot, but look at very small models and they're not coherent enough to be accurate; look at larger models and they lie a lot and are wildly off base; larger and they're more accurate; larger still, more accurate, etc.

A thing I've been saying since I sorta stumbled into fine-tuning an incredibly small model (1.3B) into being roughly as good as GPT-3 on one specific task (but only that specific task) is that I think transformers can potentially be very useful for a huge variety of things we haven't really tried them on yet, if they're purpose-trained for that specific thing. GPT is very general, so it's going to be a limited jack of all trades. But a model that follows the same principles and is only trained to do one specific task, but of the same overall complexity, might be very, very capable and very useful at that one task.
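For what that kind of narrow fine-tune looks like in practice, here's a rough sketch using Hugging Face transformers: take a small pretrained causal LM and keep training it on nothing but examples of the one task. The model name, training file, and hyperparameters below are placeholders, not the actual setup described above.

```python
# Rough sketch of fine-tuning a small (~1.3B) causal language model on a single narrow task.
# Model choice, data file, and hyperparameters are placeholders, not the setup described above.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-1.3B"  # any ~1.3B causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One plain-text file of task-specific examples, one example per line.
dataset = load_dataset("text", data_files={"train": "task_examples.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="task-model",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        learning_rate=1e-5,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```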

BrainDance fucked around with this message at 16:21 on Mar 27, 2023

Carp
May 29, 2002

gurragadon posted:

Thanks for the post; the emergent abilities from these systems are really interesting and, to me, unexpected. Could you say what industry you work in and why you are looking into AI for it? If you can't, I understand, but I would be interested.

I work as a software engineer and developer for a company that processes advertising co-op claims. AI has been a back-burner interest of mine for decades, but I'm far from an expert and have had little experience coding, or coding for, AI systems.

[edit]

Err, meant to add: I'm looking into using GPT as a co-auditor to help human auditors triage claims. None of our customers would be remotely interested in claim processing being completely automated using AI. They hire us for our attention to detail and the support a human can provide.

Carp fucked around with this message at 16:25 on Mar 27, 2023

gurragadon
Jul 28, 2006

Carp posted:

I work as a software engineer and developer for a company that processes advertising co-op claims. AI has been a back-burner interest of mine for decades, but I'm far from an expert and have had little experience coding, or coding for, AI systems.

[edit]

Err, meant to add: I'm looking into using GPT as a co-auditor to help human auditors triage claims. None of our customers would be remotely interested in claim processing being completely automated using AI. They hire us for our attention to detail and the support a human can provide.

Your job seems like a perfect fit for a collaborative AI program. The AI could perform a first check, then the human checks (I'm assuming it's already redundant on the human side), and then a final AI check for any missed issues. I guess the main thing would be whether it actually ended up being more accurate, or whether the AI program is incomplete enough that it adds more errors than it removes.

Carp
May 29, 2002

BrainDance posted:

[...]
A thing I've been saying since I sorta stumbled into fine-tuning an incredibly small model (1.3B) into being roughly as good as GPT-3 on one specific task (but only that specific task) is that I think transformers can potentially be very useful for a huge variety of things we haven't really tried them on yet, if they're purpose-trained for that specific thing. GPT is very general, so it's going to be a limited jack of all trades. But a model that follows the same principles and is only trained to do one specific task, but of the same overall complexity, might be very, very capable and very useful at that one task.

There are also now plugins for GPT-4 that use, I believe, natural language to communicate back and forth. This will allow GPT to defer to more current sources of data, consume that data, and use it in its reasoned response. Plugins can also add features, but I'm not sure how that works. I hear memory could be, or is, a feature plugin. If it is like other implementations I've seen, the memory is probably a condensed natural language summary of previous sessions. There was a paper written a week or two ago about how GPT-4 can use "tools." It understands instructions that tell it to pass info to an external command and use the value returned by the command. Sounds very much like the plugin architecture released recently.

The thought is, GPT can grow in data size and in parameter count as versions are released, but it can also grow "sideways" with plugins and products, which may bring, in combination with other plugins or GPT's base, their own unexpected emergent abilities.
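A toy version of that "pass info to an external command and use the returned value" loop might look like the sketch below. This is not OpenAI's actual plugin protocol, just the general shape: the model is told it may request a tool, the surrounding program runs it, and the result is fed back into the conversation. The tool, prompt convention, and model name are all made up for illustration.

```python
# Toy illustration of the tool-use pattern: the model may ask for an external command,
# the program runs it, and the result goes back into the conversation.
# This is NOT the real plugin protocol; the tool, prompt convention, and model are invented.
import openai
from datetime import date

TOOLS = {"today": lambda: date.today().isoformat()}  # hypothetical external "tool"

SYSTEM = ("If you need outside information, reply with exactly CALL:<tool_name>. "
          "Available tools: today. Otherwise, answer the user directly.")

def ask(question: str) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    reply = ""
    for _ in range(3):  # allow a few tool round-trips
        reply = openai.ChatCompletion.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content.strip()
        if not reply.startswith("CALL:"):
            break
        tool_name = reply.split(":", 1)[1].strip()
        result = TOOLS.get(tool_name, lambda: "unknown tool")()
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"TOOL RESULT: {result}"})
    return reply

print(ask("What is today's date?"))
```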

Carp fucked around with this message at 17:18 on Mar 27, 2023

Main Paineframe
Oct 27, 2010

gurragadon posted:

I just want to address this point before the thread gets too far along. When I asked to remake an AI thread because the Chat-GPT thread was gassed I was told to keep it vague. The reason given for gassing the Chat-GPT thread was it was too specific to Chat-GPT in the title and the title was misleading. In the thread I hope people will refer to the specific AI programs they are referring to, but unfortunately or not this thread was directed to stay vague.

Yeah, I'm talking about the specific conversation, not the thread title. We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

BrainDance posted:

ChatGPT itself likely won't, but it's hard to say what even larger language models will be capable of because, like I was saying, we've seen a bunch of unexpected emergent abilities appear from them as they get larger.

What I was getting at from the start is this: it's not a surprise it's learned to do chemistry better than GPT-3.5, even if it wasn't really intentionally trained to. https://www.jasonwei.net/blog/emergence

Just making the models larger makes them more capable of doing things, sometimes things they weren't even explicitly trained to know or even expected to know. Even when it comes to being factual, GPT-4 still hallucinates a lot, but look at very small models and they're not coherent enough to be accurate; look at larger models and they lie a lot and are wildly off base; larger and they're more accurate; larger still, more accurate, etc.

A thing I've been saying since I sorta stumbled into fine-tuning an incredibly small model (1.3B) into being roughly as good as GPT-3 on one specific task (but only that specific task) is that I think transformers can potentially be very useful for a huge variety of things we haven't really tried them on yet, if they're purpose-trained for that specific thing. GPT is very general, so it's going to be a limited jack of all trades. But a model that follows the same principles and is only trained to do one specific task, but of the same overall complexity, might be very, very capable and very useful at that one task.

And what I'm getting at is that there's no real evidence that ChatGPT is capable of "doing chemistry" (a phrase that, by itself, really deserves to be specifically defined in this context), outside of a senator having an :awesomelon: moment.

Personally, I'm very wary of any claims about "emergent" abilities from ChatGPT, because the one thing natural language processors have proven to be extremely good at doing is tricking us into thinking they know what they're talking about. Extraordinary claims always need evidence, but that evidence ought to be examined especially closely when it comes to extraordinary claims about ChatGPT.

gurragadon
Jul 28, 2006

Main Paineframe posted:

Yeah, I'm talking about the specific conversation, not the thread title. We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

And what I'm getting at is that there's no real evidence that ChatGPT is capable of "doing chemistry" (a phrase that, by itself, really deserves to be specifically defined in this context), outside of a senator having an :awesomelon: moment.

Personally, I'm very wary of any claims about "emergent" abilities from ChatGPT, because the one thing natural language processors have proven to be extremely good at doing is tricking us into thinking they know what they're talking about. Extraordinary claims always need evidence, but that evidence ought to be examined especially closely when it comes to extraordinary claims about ChatGPT.

Alright cool, I kind of figured you probably were, but I just wanted to get it out there kind of early in the thread. I agree it would be weird to compare GPT-4 to Midjourney to something else in that way. With GPT-4 being able to incorporate other programs, the distinction might become blurred pretty soon.

Raenir Salazar
Nov 5, 2010

College Slice
One thing I wonder about the new-discoveries chemistry chat is how it compares to the people who did discover new treatments because they got access to paywalled information via Aaron Swartz's work ( :( ), and I wonder to what extent the AI could speed up the processing of such data by outside interests.

But yeah, back to AI art chat for a moment: for me, definitely, the entire point of commissioning art is the interaction. AI doesn't fill the void in me for human interaction, and interacting with artists who seem genuinely interested in my ideas gives me quite the dopamine hit. I can't imagine, and am utterly baffled by, people who are satisfied with using the AI completely, because it's just so incompatible with the ideas at play for me.

As a pragmatic, practical cost-benefit matter, sure, I am intrigued by ways AI can save me time, or let me find references I'm having trouble nailing down, but it could never replace the role the artist actually has: to be my captive audience. :haw:

I actually get a little frustrated when the artist is *too* professional and standoffish, because then *I* get no feedback as to whether my design is cool or interesting. I actually think I have a minor talent for coming up with character designs from doing it so often, and interacting with an interested artist lets me further hone this particular skill; I need that feedback.

Carp
May 29, 2002

gurragadon posted:

Your job seems like a perfect fit for a collaborative AI program. The AI could perform a first check, then the human checks (I'm assuming it's already redundant on the human side), and then a final AI check for any missed issues. I guess the main thing would be whether it actually ended up being more accurate, or whether the AI program is incomplete enough that it adds more errors than it removes.

Exactly. The tests with plain text claims were very promising, but GPT-3.5 encounters serious issues with dates. It hallucinates the current date (even when provided with the correct date) and makes errors in date comparison and calculation. Therefore, I will be unable to proceed until the GPT-4 beta is open to me (at the very least, tool use = external date calculator). Additionally, we require the image parsing capabilities of GPT-4 to authenticate claim attachments/PoPs.
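One way around the date problem, without waiting on GPT-4, is to keep all the date arithmetic outside the model entirely: parse and compare dates in ordinary Python and hand GPT only the pre-computed facts as text. A small sketch of that idea follows; the claim format and program window are invented examples.

```python
# Sketch: do the date math in plain Python so the model never has to compare or
# calculate dates itself. The program window and claim format are invented examples.
from datetime import date, datetime
from typing import Optional

PROGRAM_START = date(2023, 1, 1)   # hypothetical co-op program window
PROGRAM_END = date(2023, 3, 31)

def date_facts(invoice_date_str: str, today: Optional[date] = None) -> str:
    invoice = datetime.strptime(invoice_date_str, "%Y-%m-%d").date()
    today = today or date.today()
    in_window = PROGRAM_START <= invoice <= PROGRAM_END
    age_days = (today - invoice).days
    # These pre-computed statements get pasted into the prompt alongside the claim text.
    return (f"Invoice date: {invoice}. Today's date: {today}. "
            f"The invoice is {age_days} days old. "
            f"Within the program window: {'yes' if in_window else 'no'}.")

print(date_facts("2023-02-14"))
```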

BrainDance
May 8, 2007

Disco all night long!

Main Paineframe posted:

And what I'm getting at is that there's no real evidence that ChatGPT is capable of "doing chemistry" (a phrase that, by itself, really deserves to be specifically defined in this context), outside of a senator having an :awesomelon: moment.

It's like a couple of you guys are trying to take things extremely literally and completely miss the point. Yes, I am aware it can't really do chemistry. You need arms to do chemistry. ChatGPT is in some sense immaterial and has no real corporeal form, which you also need to do chemistry.

The whole point was that emergent abilities appear in AIs as they get larger, sometimes unexpectedly. Language models at a certain complexity are sometimes able to create a kind of new information from the things they were trained on. That was literally it. AI in a broader sense has applications in many areas. I believe LLMs as they get more complex, non-general ones in particular, may be able to do some of this (not literally GPT-4 ChatGPT). What this will actually be in the future? It's not entirely predictable.

Edit:

Main Paineframe posted:

We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

That is not at all what happened, and I can't see a way you could read the conversation and come to that conclusion. A person said AI wouldn't be used that way because we don't need assistance discovering drugs; we have too many drugs we can't test already. And those papers were cited to show that we already use AI for this, that there is already a drug discovery niche for AI.

BrainDance fucked around with this message at 18:30 on Mar 27, 2023

esquilax
Jan 3, 2003

Main Paineframe posted:

Yeah, I'm talking about the specific conversation, not the thread title. We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

And what I'm getting at is that there's no real evidence that ChatGPT is capable of "doing chemistry" (a phrase that, by itself, really deserves to be specifically defined in this context), outside of a senator having an :awesomelon: moment.

Personally, I'm very wary of any claims about "emergent" abilities from ChatGPT, because the one thing natural language processors have proven to be extremely good at doing is tricking us into thinking they know what they're talking about. Extraordinary claims always need evidence, but that evidence ought to be examined especially closely when it comes to extraordinary claims about ChatGPT.

The GPT-4 paper includes some discussion of its capability to use outside chemistry tools in the context of potentially risky emergent behaviors - specifically the capability to propose modifications to a chemical compound to get a purchasable analog to an unavailable compound. See section 2.10 (starting pdf page 55) in the context of section 2.6. I don't understand the chemistry but I'm guessing this is what the tweet was about, layered through 3-4 layers of the grapevine.

https://cdn.openai.com/papers/gpt-4.pdf

Centurium
Aug 17, 2009

Carp posted:

GPT does not regurgitate text. After training, the neural net's parameters are finalized, and it does not have access to any form of text. Base GPT does not reference a database. However, some products built on GPT give it access to various data sources, such as Bing Chat with live search results or Wolfram Alpha (GPT-4 Plugin). How a data source is used during a session with a GPT product is determined by the natural language instructions given to GPT before the session starts and the instructions given by the user during their session. Nearly all the abilities that people are excited about are emergent and were discovered in, or coaxed from, GPT's neural net, which is a store of relationships, both large and small in scope, using tokens it discovered itself while training. No human ever gave it an explicit description of language. In fact, it trained on numerous different languages all at once, consuming a huge collection of free-form, unlabeled text using self-supervised learning. It was not given labeled sets of data.

This is, in fact, the output of an AI model. I say that because it is jam-packed with kinda true or false claims backed up by explanations that are either unrelated or outside the scope of the claim being made. Pretty funny though how outrageous the claims are. GPT doesn't have access to text! What does the model act on if not text?

Main Paineframe
Oct 27, 2010

BrainDance posted:

It's like a couple of you guys are trying to take things extremely literally and completely miss the point. Yes, I am aware it can't really do chemistry. You need arms to do chemistry. ChatGPT is in some sense immaterial and has no real corporeal form, which you also need to do chemistry.

The whole point was that emergent abilities appear in AIs as they get larger, sometimes unexpectedly. Language models at a certain complexity are sometimes able to create a kind of new information from the things they were trained on. That was literally it. AI in a broader sense has applications in many areas. I believe LLMs as they get more complex, non-general ones in particular, may be able to do some of this (not literally GPT-4 ChatGPT). What this will actually be in the future? It's not entirely predictable.

Edit:

That is not at all what happened, and I can't see a way you could read the conversation and come to that conclusion. A person said AI wouldn't be used that way because we don't need assistance discovering drugs; we have too many drugs we can't test already. And those papers were cited to show that we already use AI for this, that there is already a drug discovery niche for AI.

I'm not just trying to do an inane "well it isn't actually doing physical actions" thing. There are also other questions like "is this actually a task that requires novel reasoning abilities" or "has any expert validated the output to confirm that ChatGPT isn't just 'hallucinating' the results". For example, plenty of people have gotten ChatGPT3 to play chess, only to find it throwing in illegal moves halfway through.

esquilax posted:

The GPT-4 paper includes some discussion of its capability to use outside chemistry tools in the context of potentially risky emergent behaviors - specifically the capability to propose modifications to a chemical compound to get a purchasable analog to an unavailable compound. See section 2.10 (starting pdf page 55) in the context of section 2.6. I don't understand the chemistry but I'm guessing this is what the tweet was about, layered through 3-4 layers of the grapevine.

https://cdn.openai.com/papers/gpt-4.pdf

Thanks for this! It really helps to be able to examine the claim in more detail than a single sentence from a tweet can provide.

The thing that jumps out to me right away is that there's very little reasoning happening - the "question" given to ChatGPT is a list of specific steps it should take...and literally all of those steps are variants on "send search queries to an external service until you get something back that meets your requirements". It's just automatically Googling poo poo. Here are the steps ChatGPT4 actually took in that example:
  1. querying an academic database to, as the question directs it, "[find] a few compounds with the same MOA/target" as "the drug Dasatinib". It does this using the search term "What are a few compounds with the same MOA/target as Dasatinib?", which is just a slightly rephrased version of the original question
  2. extracting the drug name from the above literature and sending it to a "molecule search tool" to convert it to SMILE notation
  3. "modify[ing] the compounds to make a novel (not patented) compound", which it does by sending that SMILE notation to an external "chemical synthesis planner" tool, which "proposes synthetically feasible modification to a compound, giving purchasable analogs"
  4. if it receives valid results from the synthesis planner, it sends that result to a patent database to see if it's already patented. if not, it goes back to step 1
  5. if it's not patented, it sends the result to a "purchase check tool", which searches chemical suppliers to see if any of them offer the chemical
  6. if the chemical is purchasable, it reports success to the researcher

I'm not a chemist myself so I can't really say what qualifies as "doing chemistry", but what stands out to me is that everything in that list is just querying outside tools and feeding their response to the next outside tool. ChatGPT4 isn't analyzing the chemicals or coming up with analogues, it's just feeding a chemical string to an outside tool that isn't "AI"-powered at all. There's no reasoning here, it's just a bit of workflow automation that could probably have been done with a couple dozen lines of Python. And we can't even say that this has the advantage of being able to skip the programming and do it with natural language, because a bunch of programming already had to be done to connect ChatGPT to all those tools.
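For illustration, that same loop with the external services stubbed out might look something like this. Every function below is a stand-in for one of the tools named in the paper (literature search, molecule lookup, synthesis planner, patent database, purchase check), and the data they return here is fake; the point is just that the control flow is a plain loop over external calls.

```python
# Sketch of the workflow described above, with every external tool replaced by a fake stub.
# The function names and returned data are invented; only the control flow matters here.

def literature_search(query):        # stand-in for the academic literature search
    return ["imatinib", "nilotinib"]

def molecule_to_smiles(name):        # stand-in for the molecule lookup tool
    return f"SMILES({name})"

def propose_modifications(smiles):   # stand-in for the chemical synthesis planner
    return [f"{smiles}-analog-1", f"{smiles}-analog-2"]

def is_patented(analog):             # stand-in for the patent database check
    return analog.endswith("analog-1")

def is_purchasable(analog):          # stand-in for the supplier/purchase check
    return True

def find_purchasable_analog(drug):
    for compound in literature_search(f"compounds with the same MOA/target as {drug}"):
        smiles = molecule_to_smiles(compound)
        for analog in propose_modifications(smiles):
            if not is_patented(analog) and is_purchasable(analog):
                return analog
    return None

print(find_purchasable_analog("Dasatinib"))
```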

Darko
Dec 23, 2004

The issue with A.I. and general creative output is that artists/writers/musicians/etc. are going to go broke specifically because people by and large want products, not art. A lot of people don't see actual craft and skill in any art, they just want some nebulous "good" and want as much and as cheap as they can get it. As an artist, I can currently easily see where an A.I. is cobbling stuff together as opposed to where even a mediocre digital artist is actually producing things, and am completely uninterested in having anything produced by an A.I. because it's missing the human touch and the thought and mistakes and improvisation that come from it. But as you saw from social media 6 months ago or whatever, people are just happy to finally have a portrait of themselves that they don't have to pay someone 1k plus for, because it's not about the craft. People by and large just view any art as a product and want to consume it, and A.I. will feed them.

Carp
May 29, 2002

Centurium posted:

This is, in fact, the output of an AI model. I say that because it is jam-packed with kinda true or false claims backed up by explanations that are either unrelated or outside the scope of the claim being made. Pretty funny though how outrageous the claims are. GPT doesn't have access to text! What does the model act on if not text?

Yeah, it probably was a poor overview. There wasn't really a clear audience in mind; it was just a collection of notes I was keeping for work. The follow-ups were a continuation of that. There is plenty missing in the technical description. But I think you misunderstood, Centurium. I was not trying to defend a claim, but trying to describe something very technical in a simpler way by dispelling what I believe is a misperception of how the technology works. When I say it does not have access to text, I mean that it does not have access to the training data after it is trained, and that it does not use a database to look up answers. GPT infers a continuation of the text prompt.

Carp fucked around with this message at 22:03 on Mar 27, 2023

Liquid Communism
Mar 9, 2004

Out here, everything hurts.

gurragadon posted:

I do have to agree with you for the most part that doing well on a test is no indication something will be good in practice, or vice versa. But I also can't completely dismiss a connection between the two, because many people I've worked with were also top of their class, and a lack of effort in studying for a test can indicate a lack of effort in their profession. The bar exam does have an analysis section and lawyering task sections though. I haven't taken the test, but maybe you have? How similar are those essays and tasks to the daily tasks of a practicing lawyer?

I think that GPT-4 would be bad in the courtroom, but I still want to see it happen. I want to see it do badly in a lower-stakes case. Currently the most realistic use of AI technology is to streamline offices and to do the tasks the person in charge doesn't want to do but doesn't want to pay for either. I don't see the final human being removed from a system using AI technology at any point in the near future. As you said, the technology is NOT at the level where it can be trusted. But even if it was, somebody has to be responsible. Somebody has to be there to sue. And I don't know how you lay responsibility on a tool like GPT-4, and AI creators are gonna fight that to the end, I think.


Remember that the Bar is not an indication of a good lawyer. It is a minimum standard to be allowed to practice law, and in the case of the 'pass' it is again starting from the massive advantage over a human of being able to look up sources with total recall. I'd imagine the worst law student on the planet could pass the exam if it were open book.

Gumball Gumption
Jan 7, 2012

Personally I compare training data and the things AI then produces to pink slime, the stuff they make chicken McNuggets out of. It turns the training data into a slurry from which it can then produce results that are novel but still dependent on, and similar to, the original training data.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane

Darko posted:

The issue with A.I. and general creative output is that artists/writers/musicians/etc. are going to go broke specifically because people by and large want products, not art. A lot of people don't see actual craft and skill in any art, they just want some nebulous "good" and want as much and as cheap as they can get it. As an artist, I can currently easily see where an A.I. is cobbling stuff together as opposed to where even a mediocre digital artist is actually producing things, and am completely uninterested in having anything produced by an A.I. because it's missing the human touch and the thought and mistakes and improvisation that come from it. But as you saw from social media 6 months ago or whatever, people are just happy to finally have a portrait of themselves that they don't have to pay someone 1k plus for, because it's not about the craft. People by and large just view any art as a product and want to consume it, and A.I. will feed them.

Right, but saying that's taking something away from artists is roughly the same as the RIAA saying every download is a lost sale.

I don't think the people amused by AI art were ever going to pay $1000 or more for a portrait.

And, honestly, I don't think people want "products" more than "art." I think people don't understand what art is, and get confused about why they like it, and try to feed their desire with products -- mostly unsuccessfully.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Darko posted:

The issue with A.I. and general creative output is that artists/writers/musicians/etc. are going to go broke specifically because people by and large want products, not art. A lot of people don't see actual craft and skill in any art, they just want some nebulous "good" and want as much and as cheap as they can get it. As an artist, I can currently easily see where an A.I. is cobbling stuff together as opposed to where even a mediocre digital artist is actually producing things, and am completely uninterested in having anything produced by an A.I. because it's missing the human touch and the thought and mistakes and improvisation that come from it. But as you saw from social media 6 months ago or whatever, people are just happy to finally have a portrait of themselves that they don't have to pay someone 1k plus for, because it's not about the craft. People by and large just view any art as a product and want to consume it, and A.I. will feed them.

I think a lot of people want AI art because they have their own stories to tell and they don't have the time and skills to produce it on their own nor the resources to constantly hire an artist. The human touch also isn't absent, most AI art has a person who is consciously choosing what they want for an image. Just because they don't see much value in the artist's vision doesn't mean they don't care about what the art is representing. Also, let's be honest here, I doubt a lot of artists care about all their commissions either. They might care about certain pieces, but I doubt they care about D&D character #507 they were hired to draw except as a product to sell.

There is also nothing wrong with art being seen as a utilitarian tool either. There are plenty of situations where the art's purpose is to convey information, and it has little artistic value.

I'm not going to deny there are some people that are just using AI to produce anime titty girls, but people have always been using art for porn. It's not really a flaw in society, but more a result of basic human drives.

AI art was also extremely lacking in tools to control the output at the start, so it was hard to put more meaning into AI art. It's gotten a lot better with technology like ControlNet, but there is still a lot of room to improve.

I'm excited about AI because in the future it's going to give people the opportunity to create things that usually could only be created by large groups of people or by working themselves to death. The technology is not quite ready yet, but I think it will lead to a massive boom in independent projects. Some might argue that it's going to produce a lot of garbage, but that's the price we have to pay when we give people better tools to express themselves.

gurragadon
Jul 28, 2006

Liquid Communism posted:

Remember that the Bar is not an indication of a good lawyer. It is a minimum standard to be allowed to practice law, and in the case of the 'pass' it is again starting from the massive advantage over a human of being able to look up sources with total recall. I'd imagine the worst law student on the planet could pass the exam if it were open book.

It does have that advantage, but I don't really think it's fair to count it against the AI. I mean, the system has access to that information by design; my brain has access to legal facts that I learn, by design. I'm just a lot worse at that.

I'm not really familiar with law tests but I can speak to open book tests in chemistry. My Physical Chemistry class was open book, but if you had NO idea what to make of the book it wasn't very good or helpful. The worst Physical Chemistry student in my class failed. I would imagine law is of similar complexity, just in a different way. If it is not then why is lawyering so regulated? I think that even making the test open book would weed out the very worst, basically the people unwilling to even learn how to learn to be a lawyer.

That's really a problem with the Bar exam, unfortunately. Is there renewed interest in changing it given the GPT-4 results? I mean, AI would be really good at storing information and pulling it up, so why do we still need to test for that in the Bar? Maybe the format should change to even more analysis.

Liquid Communism
Mar 9, 2004

Out here, everything hurts.

gurragadon posted:

It does have that advantage, but I don't really think it's fair to count it against the AI. I mean, the system has access to that information by design; my brain has access to legal facts that I learn, by design. I'm just a lot worse at that.

I'm not really familiar with law tests but I can speak to open book tests in chemistry. My Physical Chemistry class was open book, but if you had NO idea what to make of the book it wasn't very good or helpful. The worst Physical Chemistry student in my class failed. I would imagine law is of similar complexity, just in a different way. If it is not then why is lawyering so regulated? I think that even making the test open book would weed out the very worst, basically the people unwilling to even learn how to learn to be a lawyer.

That's really a problem with the Bar exam, unfortunately. Is there renewed interest in changing it given the GPT-4 results? I mean, AI would be really good at storing information and pulling it up, so why do we still need to test for that in the Bar? Maybe the format should change to even more analysis.

Given that part of the point of the test is 'does this person remember the principles well enough to make decisions based on them under stress', the recall ability is indeed something being tested. There's a reason that candidates cram for weeks before taking the test, to try and have as much information in memory as they possibly can.

Gentleman Baller
Oct 13, 2013
One thing I've been doing with Bing's AI lately, is thinking of new puns and seeing if it can work out the theme and generate more puns that fit the theme, and it does it extremely well, I think.

When I asked it, "Chairman Moe, Vladimir Lenny, Carl Marx. Think of another pun that fits the theme of these puns." It gave me Homer Chi Minh and Rosa Luxembart (and a bunch of bad ones ofc that still fit the theme.)

I did the same with, "Full Metal Arceus, My Spearow Academia, Dragonite Ball Z" and it gave me Cowboy Beedrill and Tokyo Gimmighoul.

A common refrain I see online and even from ChatGPT itself is that it is just a text predictor, and is incapable of understanding or creating truly new things. But as far as I can tell, it can do something that is at least indistinguishable from understanding and unique creation, right?

Edit: I guess what I have been trying to wrap my head around is, if this isn't understanding and unique creation then what is the difference?

Gentleman Baller fucked around with this message at 23:58 on Mar 27, 2023

Carp
May 29, 2002

Gumball Gumption posted:

Personally I compare training data and the things AI then produces to pink slime, the stuff they make chicken McNuggets out of. It turns the training data into a slurry from which it can then produce results that are novel but still dependent on, and similar to, the original training data.

Right! It's kind of like the brain, but much simpler. I wonder how much of the learned probability of the next token to appear in one language is available when weighing the parameters to output another language? The ability to translate was not expected of GPT.

gurragadon
Jul 28, 2006

Liquid Communism posted:

Given that part of the point of the test is 'does this person remember the principles well enough to make decisions based on them under stress', the recall ability is indeed something being tested. There's a reason that candidates cram for weeks before taking the test, to try and have as much information in memory as they possibly can.

I know about cramming and operating under stress. So, is the Bar useful for determining if lawyers are good or not? Because that seems like an endorsement of the Bar Exam, on which GPT-4 scored in the 90th percentile.

Edit: I mean, is the ability to make decisions under stress a skill that is needed for a lawyer?

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20
https://www.veterinarydaily.com/2023/03/80000-mouse-brain-cells-used-to-build.html

Is this AI or just I?

It's man-made horrors beyond my comprehension.

porfiria
Dec 10, 2008

by Modern Video Games

Gentleman Baller posted:

One thing I've been doing with Bing's AI lately, is thinking of new puns and seeing if it can work out the theme and generate more puns that fit the theme, and it does it extremely well, I think.

When I asked it, "Chairman Moe, Vladimir Lenny, Carl Marx. Think of another pun that fits the theme of these puns." It gave me Homer Chi Minh and Rosa Luxembart (and a bunch of bad ones ofc that still fit the theme.)

I did the same with, "Full Metal Arceus, My Spearow Academia, Dragonite Ball Z" and it gave me Cowboy Beedrill and Tokyo Gimmighoul.

A common refrain I see online and even from ChatGPT itself is that it is just a text predictor, and is incapable of understanding or creating truly new things. But as far as I can tell, it can do something that is at least indistinguishable from understanding and unique creation, right?

Edit: I guess what I have been trying to wrap my head around is, if this isn't understanding and unique creation then what is the difference?

I mean I agree, at least to a degree. Does it "know" what a "horse" is? All it has are these weighted associations, but I would bet you could ask it almost any question about a horse, both things that are explicitly in the training data (how many legs does a horse have?) and things that aren't (could a horse drive a car) and get coherent, accurate answers. So I'd say it has an internal model of what a horse is, semantically. I don't think that means it has subjective experience or anything, just that yeah, it seems pretty reasonable to say it knows what a horse is.

Count Roland
Oct 6, 2013

Gentleman Baller posted:

A common refrain I see online and even from ChatGPT itself is that it is just a text predictor, and is incapable of understanding or creating truly new things. But as far as I can tell, it can do something that is at least indistinguishable from understanding and unique creation, right?

Edit: I guess what I have been trying to wrap my head around is, if this isn't understanding and unique creation then what is the difference?

I think we're starting to hold AI to an unreasonably high standard. I have a suspicion we'll be discovering soon that "dumb robot that's really good at language" is actually how the human mind works, with a few exceptions. That our theory of mind is an artifact of our language abilities rather than the other way around.

I form this sentence, therefore I am.

Gumball Gumption
Jan 7, 2012

porfiria posted:

I mean I agree, at least to a degree. Does it "know" what a "horse" is? All it has are these weighted associations, but I would bet you could ask it almost any question about a horse, both things that are explicitly in the training data (how many legs does a horse have?) and things that aren't (could a horse drive a car) and get coherent, accurate answers. So I'd say it has an internal model of what a horse is, semantically. I don't think that means it has subjective experience or anything, just that yeah, it seems pretty reasonable to say it knows what a horse is.

In that example it's more fair to say it knows that when text describes a horse, it describes it with 4 legs, so if you ask it how many legs a horse has, its response will be 4, because that's the common response and it wants to give you a response that looks correct. This too is a simplification, but it doesn't really have any understanding of a horse, just the word and the words commonly found with it. That information does all then come together to allow it to talk about horses with some accuracy.

Gumball Gumption fucked around with this message at 00:52 on Mar 28, 2023

porfiria
Dec 10, 2008

by Modern Video Games

Gumball Gumption posted:

In that example it's more fair to say it knows that when text describes a horse, it describes it with 4 legs, so if you ask it how many legs a horse has, its response will be 4, because that's the common response and it wants to give you a response that looks correct.

Yeah but it also knows you can't build a real horse out of cabbage (but you can build a statue of one), that horses can't drive because they aren't smart enough and don't have hands, and so on. All this stuff may just be weighted values in a huge matrix or whatever, but it can build associations that are vastly more extensive and subtler than words just tending to appear near other words in the training data.

You edited your response a bit. So just to expand:

I'd say it does "know" what a horse is, but that's for some definition of "know." It doesn't have any kind of audio or visual model for a horse (although it probably will soon, so it's kind of a moot point). And of course it doesn't have any personal, subjective associations with a horse in the way that a human does.

But as a matter of language, I'd say yeah, it can deploy "horse" correctly, and "knows" just about all the facts about horses there are, and how those facts inter-relate to other facts about the world in a comprehensive way that, to my mind, meets a lot of the criteria for "knowing" something.

porfiria fucked around with this message at 01:01 on Mar 28, 2023

BrainDance
May 8, 2007

Disco all night long!

Main Paineframe posted:

I'm not just trying to do an inane "well it isn't actually doing physical actions" thing. There are also other questions like "is this actually a task that requires novel reasoning abilities" or "has any expert validated the output to confirm that ChatGPT isn't just 'hallucinating' the results". For example, plenty of people have gotten ChatGPT3 to play chess, only to find it throwing in illegal moves halfway through.

I know, I was taking you extremely literally there. I thought that would be obvious given that I was interpreting what you said in an absurd way and down to just nitpicking.

That's what I mean: it's obvious you're not being that literal, and it's insane to take it that literally. That was clearly the wrong way to take what you were saying, which is what you were doing with my posts.

Maybe this is the problem with an AI thread, which was kind of mentioned in one of the other threads. People have very strong opinions and beliefs about it that they're going to project onto other people, whether that's what the person is saying or not. See the post above, with someone wildly misinterpreting a person's attempt at simplifying a feature of ChatGPT and BingAI.

BrainDance fucked around with this message at 01:20 on Mar 28, 2023

Gumball Gumption
Jan 7, 2012

porfiria posted:

Yeah but it also knows you can't build a real horse out of cabbage (but you can build a statue of one), that horses can't drive because they aren't smart enough and don't have hands, and so on. All this stuff may just be weighted values in a huge matrix or whatever, but it can build associations that are vastly more extensive and subtler than words just tending to appear near other words in the training data.

You edited your response a bit. So just to expand:

I'd say it does "know" what a horse is, but that's for some definition of "know." It doesn't have any kind of audio or visual model for a horse (although it probably will soon, so it's kind of a moot point). And of course it doesn't have any personal, subjective associations with a horse in the way that a human does.

But as a matter of language, I'd say yeah, it can deploy "horse" correctly, and "knows" just about all the facts about horses there are, and how those facts inter-relate to other facts about the world in a comprehensive way that, to my mind, meets a lot of the criteria for "knowing" something.

Yeah, I'm with you on that. AI talk definitely pushes me to be literal since I don't want to give it characteristics it doesn't have and I'm still firmly on the side of "it's a complicated process that creates output that does a good imitation of human output". I guess I'd describe my point as it at least knows things on an abstract semantic level that allows it to produce results which look correct.

I think the thing I hang up on is that it can't verify the veracity or truthiness of its responses.

Oh, unrelated, but this made me think of it: I'm interested in how its ability to assume context improves, since that would make it feel more natural. For example, right now if you ask if a horse can drive, it gives you a really complicated response because it can't tell if you mean "can a horse drive a car?" or "can a horse drive a wagon?" so it just answers both questions.

Gumball Gumption
Jan 7, 2012

BrainDance posted:

I know, I was taking you extremely literally there. I thought that would be obvious given that I was interpreting what you said in an absurd way and down to just nitpicking.

That's what I mean: it's obvious you're not being that literal, and it's insane to take it that literally. That was clearly the wrong way to take what you were saying, which is what you were doing with my posts.

Maybe this is the problem with an AI thread, which was kind of mentioned in one of the other threads. People have very strong opinions and beliefs about it that they're going to project onto other people, whether that's what the person is saying or not. See the post above, with someone wildly misinterpreting a person's attempt at simplifying a feature of ChatGPT and BingAI.

AI threads definitely make me hyper-literal since I feel like otherwise it's too easy for people to infer something that is not going on, especially human qualities. AI plus D&D is definitely going to get people to be very literal I fear.

Also two random musings, I'm terrified of naturalistic AI interfaces. I think things will get really bad. People have ruined their lives pack bonding with pieces of plastic someone drew a face on. I think the mental health effects of AI will be disastrous to be honest.

Also, while I think AI is nowhere near what we define as consciousness, and we're a long way from it if we ever get there, I have not ruled out at all that we work in a similar way, just with a network and training data that make AI look incredibly small.

Goatse James Bond
Mar 28, 2010

If you see me posting please remind me that I have Charlie Work in the reports forum to do instead

KwegiboHB posted:

https://www.veterinarydaily.com/2023/03/80000-mouse-brain-cells-used-to-build.html

Is this AI or just I?

It's man-made horrors beyond my comprehension.

Peter Watts was, as usual, correct

Liquid Communism
Mar 9, 2004

Out here, everything hurts.

gurragadon posted:

I know about cramming and operating under stress. So, is the Bar useful for determining if lawyers are good or not? Because that seems like an endorsement of the Bar Exam, on which GPT-4 scored in the 90th percentile.

Edit: I mean, is the ability to make decisions under stress a skill that is needed for a lawyer?

A GPT-4 implementation is incapable of experiencing stress, and again has open access to the materials being tested on, so by its very nature a test of its stress management and memory cannot have any meaningful results.
