|
https://blogs.windows.com/windowsde...t-and-dev-home/ Microsoft is bringing ChatGPT to Windows 11. Actually, some of the stuff in that preview looks pretty good, but I'm reminded of one of my colleagues who lives in Belgium and works in Germany, so he makes this commute every single day. All of Belgium and all of Germany share the same time zone, yet his Outlook constantly nags him about whether he wants to update the time zone after crossing countries. IDK if his settings are hosed up to do this, but the fact that Windows can't figure this out doesn't make me excited for putting AI directly into Windows 11.
|
# ¿ May 23, 2023 19:59 |
|
Count Roland posted:But you have people asking that you don't use AI at all? If you'd told me you're an artist that would have made some sense, but database work *should* be machine work. A lot of corporations are now starting to ban the use of ChatGPT and the like for work stuff because people inevitably start posting corporate secrets and/or classified material. Aside from that, no company wants their data to be used to train the next generation of ChatGPT.
|
# ¿ May 23, 2023 20:25 |
|
Being published doesn’t mean much in and of itself fyi. I’m also published, and I’ve seen my fair share of “why the gently caress is this published” papers. Probably including mine. Idk.
|
# ¿ May 23, 2023 23:16 |
|
Tei posted:Maybe they can research HOW to keep track of how documents affect the training data, then subtract that training. Correct me if I'm wrong, but isn't the thing with AI/ML/NNs/etc that this is, at least so far, impossible to do? Something something black box, can't look into the weights to figure out why specifically it does this and not that?
|
# ¿ May 23, 2023 23:34 |
|
KillHour posted:Math isn't an opinion, and I don't own it. Wait, is AI literally just another Ax = b problem? (I don’t do AI but I do numerical computation stuff, and everything we do is solving Ax = b lol)
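For what it's worth, a lot of it really does bottom out in linear algebra: fitting a linear model is literally a least-squares solve of Ax = b. A toy sketch in plain Python, with made-up data (three points that happen to lie exactly on the line b = 1 + t):

```python
# Solve the normal equations (A^T A) x = A^T b for a tiny least-squares fit,
# in pure Python. Fit b = x0 + x1*t through three made-up points.
A = [[1.0, 1.0],
     [1.0, 2.0],
     [1.0, 3.0]]   # design matrix: bias column + the feature t
b = [2.0, 3.0, 4.0]  # observed targets

# Form A^T A (2x2) and A^T b (2-vector).
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Cramer's rule on the 2x2 system.
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x0 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
x1 = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det

print(x0, x1)  # the fit b = 1 + 1*t is exact here, so x0 = x1 = 1
```

In practice you'd call a LAPACK-backed least-squares routine rather than hand-rolling Cramer's rule, but the problem being solved is the same Ax = b.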
|
# ¿ May 24, 2023 00:13 |
|
KillHour posted:Actually, one of the most important requirements for a neural network is that you need to use a non-linear activation function, specifically because otherwise the entire system will be reducible to a linear equation I do remember one of my former office mates who dabbled in AI/ML talking over and over about sigmoid this and sigmoid that.
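KillHour's point is easy to check numerically: composing two linear layers with no activation in between gives exactly one linear layer, while putting a sigmoid between them does not. A toy sketch with made-up weights:

```python
import math

def matvec(W, v):
    """Apply a linear layer: matrix W times vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def sigmoid(v):
    """Elementwise logistic sigmoid, the classic non-linear activation."""
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

W1 = [[1.0, 2.0], [3.0, 4.0]]   # first "layer" (arbitrary made-up weights)
W2 = [[0.5, -1.0], [2.0, 0.0]]  # second "layer"
v = [1.0, -2.0]

# Without an activation, layer2(layer1(v)) equals the single matrix (W2 @ W1) applied to v.
composed = matvec(W2, matvec(W1, v))
W21 = [[sum(W2[i][k] * W1[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
collapsed = matvec(W21, v)
print(composed == collapsed)  # True: two linear layers are just one linear layer

# With a sigmoid in between, the collapsed matrix no longer reproduces the output.
nonlinear = matvec(W2, sigmoid(matvec(W1, v)))
print(nonlinear != collapsed)  # True (for this input)
```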
|
# ¿ May 24, 2023 00:39 |
|
Count Roland posted:Can someone speak to what this "race science paper" is? I don't even know what that is, nor is it clear why it is in a computing paper, nor why it's being brought up in YouTube videos. https://twitter.com/TimoPG/status/1645567315407409155?s=20 I have no idea who this Twitter guy is, but it came up first on search. I was curious about it too. E: https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf Boris Galerkin fucked around with this message at 01:03 on May 24, 2023 |
# ¿ May 24, 2023 01:01 |
|
You both are taking it super personally, tbh.
|
# ¿ May 24, 2023 01:18 |
|
Does anyone know/have information on whether or not “AI” can be used in linguistics, particularly with deciphering untranslated scripts like Linear A? In my mind, languages have rules, right? And we have textual examples of languages that we haven’t translated, right? So train an AI on languages and let it pick up on the rules and tell us finally what they say?
|
# ¿ May 24, 2023 02:07 |
|
Reveilled posted:I think the major problem with using an AI to decipher Linear A is that the entirety of all Linear A text ever uncovered would fit inside one very long twitter thread (it's something like 7400 characters). That's spread across lots of separate inscriptions, many of which we can only read like one or two words on each line, and many of which contain words that appear exactly once in the entire corpus. Some of the longest texts we have appear to just be lists, meaning they're essentially only nouns, names and numbers, with none of the rest of the language's connective tissue to determine what they're talking about. The numbers appear often enough that they've been deciphered, but if the word QA-QA-RU is on a list and literally nowhere else there's virtually no way to tell if a QA-QA-RU is a goat, an amphora of oil, or a low-quality copper ingot. I see. It makes sense for early writing to be simple: lists, who owes whom, etc. I didn't realize that that was kinda all we had on Linear A. I don't know much about it, I just pulled it out of my rear end as a classic example of an undeciphered script that may or may not have been related to languages we know today. What about stuff like the Voynich manuscript? IIRC it's an entire book or two of undeciphered writing that may or may not have been the bored doodles of some random guy.
|
# ¿ May 24, 2023 13:31 |
|
NoiseAnnoys posted:languages don't work like that, for starters. they're not programming languages, they're a continuum of communication with tons of variation and imperfection even among native speakers. there's a debate at the moment if widespread literacy and recorded sound have slowed the drift of languages, but basically any language drifts instantly from this proposed ideal grammar and into imperfection. even something like english isn't really one definite language, it's a collection of mutually intelligible dialects or microlanguages that reflect other areas of linguistic contact. Ancient languages may not work like that, but maybe modern ones do? This is 100% anecdote, but I have Russian-speaking friends from Russia, Kazakhstan, Ukraine, etc. from back in grad school, from different social groups that didn't socialize with each other. I swear that every single Russian-speaking person I have ever asked about their language in terms of dialects or pronunciation has said that they all sound the same: that there is no dialect and that they all say and spell things the exact same way. (Contrast this with asking a random American if they think they have an accent, where some might say no because they think they sound "neutral" or whatever. These Russian-speaking friends straight up said, to some extent, that if they, a Russian-speaking Kazakh, were dropped off in some backwater place in Russia, they would still sound "local.") I took one semester of Russian 101 in grad school just to learn how to read the alphabet, because I wanted to know how to pronounce the names of these scientists and engineers whose equations I use so often. (Don't worry, this was just for me. I never butted into a convo with AKSHULLY the stress falls on the second syllable so you're saying his name wrong.) From what I did learn, Russian is hyper rules-based.
Like, there is a simple and easy explanation for everything in a sentence: why this word takes this ending instead of that one, etc. More rules-based than German but less than Latin. I just assumed this meant Latin was very structured without variation as well. E: I guess I should also clarify that the Russian-speaking friends I'm thinking of were all met in grad school, so they were all highly educated and possibly above-average wealthy, as they all had the money, time, and ability to gently caress off to western Europe for school. So my anecdote here is going to be biased towards that end of the spectrum. Boris Galerkin fucked around with this message at 13:52 on May 24, 2023 |
# ¿ May 24, 2023 13:35 |
|
By rules I'm referring to grammatical rules like declension and the conjugation of verbs. English is not really one of these languages, but there are languages that are so highly structured that from how a certain word ends you can 100% infer whether that word is an adjective and, if so, what age/sociogroup it's referring to and how many, just by looking at that one word.
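The "the ending alone tells you the grammar" idea can be sketched mechanically. This toy uses a few real Russian nominative adjective endings mapped to gender and number; it's a heavy simplification of actual declension (other cases and exceptions are ignored), purely for illustration:

```python
# Toy illustration: in a heavily inflected language, the word ending alone
# carries grammatical information. A few real Russian adjective endings
# (nominative case only) mapped to what they signal.
ENDINGS = {
    "ый": ("adjective", "masculine", "singular"),
    "ая": ("adjective", "feminine", "singular"),
    "ое": ("adjective", "neuter", "singular"),
    "ые": ("adjective", None, "plural"),
}

def classify(word):
    """Guess grammatical info from the ending; None if not in our tiny table."""
    for ending, info in ENDINGS.items():
        if word.endswith(ending):
            return info
    return None

print(classify("красный"))  # ("adjective", "masculine", "singular")
print(classify("красная"))  # ("adjective", "feminine", "singular")
```

A real morphological analyzer is vastly more involved, but the principle is the same: the inflection is a predictable, rule-governed signal.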
|
# ¿ May 24, 2023 16:53 |
|
Do you even have an argument other than "ai bad" on repeat? And if that's your take then so what? It's not constructive to any discussion when you just shut it down and address literally every single point with: "ai bad."
|
# ¿ May 26, 2023 23:30 |
|
In the future MegaDisneyCorp will copyright all text, art, videos, and music preemptively, so that you NEED to use an AI to generate even an SMS lest you be sued on the spot.
|
# ¿ May 27, 2023 00:47 |
|
How can anyone seriously look at one of these examples from the new Photoshop AI fill thing and conclude "yep, this is proof that AI-generated art is all crap"? Like, how? In what world is something like this https://twitter.com/ciguleva/status/1663515783828508672?s=20 not impressive, and just a showcase of how AI art is poo poo? Jesus gently caress, this thread is the worst. Gas it already, mods, or step in and remind people of the loving rules.
|
# ¿ May 31, 2023 11:57 |
|
Lemming posted:People are responding negatively because it wasn't portrayed as "here's a fun experiment," the Twitter thread that started it literally began with "Ever wonder what the rest of the Mona Lisa looks like?" I feel like this is picking nits, because what if they were to have said this instead? quote:1. Ever wonder what the rest of the Mona Lisa looks like? I mean, you wouldn't read that and assume that this particular artist is the reincarnation of Da Vinci sent to us to "finish" his art, would you? Or that this version of an "expanded Mona Lisa" is the one true expanded Mona Lisa, as if another artist couldn't do something different? A reasonable person would see the tweet and think "oh, this is just a demo to showcase how it can handle different art styles!" Not "oh poo poo, someone call the Louvre, we just uncovered the missing pieces." Boris Galerkin fucked around with this message at 15:19 on May 31, 2023 |
# ¿ May 31, 2023 15:16 |
|
Lemming posted:If there was a zeitgeist promoting that random person as being someone who could possibly replace artists and creatives and people in all kinds of industries and it was phrased like that I think there would be a pretty similar response to it, yeah I understand it, but that's not my point or the problem I have. The issue for me is that some of the people ITT refuse to debate and discuss, and instead use this thread as an outlet for "ai bad faaaaart," shutting down any conversation because "ai baaaaaaad." People who refuse to open their ducking eyes and admit "ok yeah this is impressive tech" just because "ai baaaaaad." If people aren't going to post and debate and discuss in good faith, then what the duck is the point of this thread? For the record, I don't give a drat what slapfights are happening in random twitter threads. But if you're going to post takes here, then they should be open for discussion, and if you're just going to jam "ai bad 💩" into every response then just gently caress off.
|
# ¿ May 31, 2023 16:01 |
|
Was Watson even an LLM? I think the concept of an LLM wasn't a thing until the late 2010s. But I don't know if this means proto-LLMs didn't exist.
|
# ¿ May 31, 2023 16:12 |
|
Liquid Communism posted:Because ideas are a dime a dozen, and the least important part of artistic expression. Text absolutely can be a form of artistic expression, when it is used as an artistic medium, but holding up a search query for 'elf paladin character design blonde with sword' as equivalent to even someone without developed art skills' quick sketch is flatly absurd. It seems like you think AI art is just "type something in" → "get picture out," print it and ship it, bingo bango. I admit I thought the same way as well. But if you look up videos of people's workflows, you see it's really not like that. These people are still layering things together and compositing their art using whatever artistic principles they know. With this in-filling poo poo you can think "I want a tree here to balance out the scene" and get your AI generator to draw a tree there. It seems the only difference is that AI artists aren't browsing for stock images or drawing their own trees to compose into their scenes, but are instead using another tool to do it for them. But at the end of the day, they are still the ones composing the scene. If that doesn't qualify as doing art, then idk, man.
|
# ¿ May 31, 2023 21:50 |
|
https://technomancers.ai/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/quote:Japan Goes All In: Copyright Doesn’t Apply To AI Training I have no idea what that blog is but they source https://go2senkyo.com/seijika/122181/posts/685617 which is in Japanese and I can't read Japanese but I assume it's a more reputable website. Anyway, looks like while all you guys have been sitting there arguing about copyrights and hypotheticals, the Japanese government has decided to pull off the bandages and let Japanese AI development go hog wild. Now your lovely fanfics and coverarts are fair game to ingest and use to train models to create fanfics and coverarts in your style. Now what?
|
# ¿ Jun 4, 2023 15:59 |
|
Ansys, the engineering analysis software suite that pretty much every single engineering company in the world uses, wants to put ChatGPT into their products. Or rather, they have already done so and want to expand AI features. "You are an engineer and specialist in hypersonic cruise missiles. Here is access to my entire design catalogue; please design me a better missile." I mean, surely that's how the execs think it works, but lmao.
|
# ¿ Feb 7, 2024 05:08 |
|
How good are the LLMs at being multilingual, like how a person raised in a multilingual household would be? Been thinking about this because the other day I was watching TV and there was a random scene where the language switched to [non-English language]. I understand and speak this language, but there was a word I didn't know. I said "Siri, what does [foreign word] mean in English?" and Siri couldn't understand that I was switching language for that word. I would have accepted an explanation in English. I tried "Siri, [in foreign language: what does this word mean in English]?" and Siri transcribed my words into the closest approximate English syllables, I guess, which was gibberish. I would have accepted an explanation in the foreign language. I asked about this in the iPhone thread since it was a Siri question and I know Siri isn't an LLM (right?), but it spurred some additional discussion about how Siri sometimes can't distinguish between numbers like "15" and "50" for some people. This is just an example. But in the real world, when I talk to my family or some friends, we do switch language like that and it's completely natural/normal. Should clarify that this is in the context of voice assistants.
|
# ¿ Mar 14, 2024 13:12 |
|
Tei posted:I think better than a human being. A human works more in "modes", is thinking in english or thinking in spanish or german. A person thinking in german will try to understand in german a word he heard. In the context of people raised multilingual, this is probably not true. It's not true for me, and I assume it's not true for people who have grown up speaking 2+ languages and can switch between them fluently on a word-by-word and phrase-by-phrase basis. I'm not talking about people who learned a language in adulthood and use the crutch of thinking in English and translating to Spanish. Also, I should have made it more clear, but I'm only talking about voice inputs to LLMs. I know Siri isn't an LLM, but I use Siri via voice for a lot of things. I'm just wondering what the progress is on voice assistants being able to recognize multiple languages being used in the same context/conversation.
|
# ¿ Mar 14, 2024 15:33 |
|
Tei posted:Heres this, somewhat has a joke. I have no idea what you mean by this but clearly I didn't repeat myself enough if you misunderstood what I posted in the first place.
|
# ¿ Mar 15, 2024 01:13 |
|
Lucid Dream posted:If the LLM enables the problem to be solved then it kinda solved it. If I use a calculator to help me do a complicated math problem, I still solved it even if the calculator helped. If the LLM is smart enough to choose to use the calculator then I think it counts. I get what you're trying to say here, but it's also wrong. Every day I use a fancy calculator to solve an equation similar to this: [equation image]

There is a difference between saying I solved it and saying that I used a computer to solve it. If I said I solved it, and I actually did, then I'd probably get grant and research money thrown at me left and right and a fast track to a tenured professorship. Technically the computer doesn't even solve it, because there are no solutions to this equation. I use the computer to tell me what the answer could be, but not what it actually is, because again there is no solution, and it turns out the inputs I give it are like the most important thing.

That weather prediction model someone posted is interesting. People have been using machine learning to fit their data and to get appropriate inputs since forever. I haven't read that paper, but from glancing at it, it seems to be an extension of that.

At the end of the day, I have Feelings about using AI in these types of numerical computations. The most important thing about the results that come out of these systems of equations (which are well known!) is that they are only as good as their inputs. People have been making guesses and assumptions about what these inputs are for centuries. It's nothing new. New techniques come along now and then, or old techniques are discovered to be applicable to other fields. But ultimately, the inputs used are well explained and well reasoned. Unless the AI can explain why it decided that the parameter alpha = 3.4 is the best and most appropriate value to use, and how it arrived at this conclusion, it is entirely not useful.

For example, the AI that plays Go may be able to say "this move is the best," and that works if you just want to crush your opponent, but why is it the best move? Nobody knows and nobody can explain it.
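On the "only as good as their inputs" point, the classic demonstration is sensitivity to initial conditions in a chaotic system, which is exactly why inputs matter so much in weather-style problems. A toy sketch with the logistic map (the parameter value 3.9 is just an arbitrary choice in the chaotic regime, not from any real model):

```python
# Iterate the logistic map x -> r*x*(1 - x), a standard toy chaotic system,
# from two initial conditions that differ by one part in a billion.
r = 3.9  # arbitrary parameter value in the chaotic regime

x, y = 0.2, 0.2 + 1e-9
diffs = []
for _ in range(80):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    diffs.append(abs(x - y))

print(diffs[0])    # still tiny after one step
print(max(diffs))  # order-one divergence within 80 steps
```

Same equation, same solver, near-identical inputs, yet the trajectories end up completely different, which is the whole headache with choosing and justifying inputs for these systems.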
|
# ¿ Mar 16, 2024 21:58 |
|
You made a comment on semantics, and it's obviously clear that semantics matter, because it seems like you're conflating LLMs with "computers" or "software" in general. Control logic in programming has been a thing since the first programming languages were invented. Fortran programs on punch cards were capable of "knowing" whether to multiply two vectors or multiply a vector by a matrix or whatever. The LLM that takes your natural language input and delegates smaller bite-sized commands to other systems does the same thing, just differently. Also, you keep talking about "it" doing things, and it's sounding awfully close to you thinking that the system is acting like a person with agency instead of just following code.
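To make the Fortran-era comparison concrete, here is what shape-based dispatch looks like: ordinary control logic "decides" which routine to run, and nobody would call it agency. A minimal sketch (the dispatch rule is made up for illustration):

```python
def multiply(a, b):
    """Dispatch on input shape, old-fashioned control logic: no model, no agency."""
    a_is_matrix = isinstance(a[0], list)
    b_is_matrix = isinstance(b[0], list)
    if not a_is_matrix and not b_is_matrix:
        # vector . vector -> dot product
        return sum(x * y for x, y in zip(a, b))
    if a_is_matrix and not b_is_matrix:
        # matrix @ vector -> matrix-vector product
        return [sum(x * y for x, y in zip(row, b)) for row in a]
    raise NotImplementedError("matrix @ matrix left out of the sketch")

print(multiply([1, 2, 3], [4, 5, 6]))      # 32
print(multiply([[1, 0], [0, 2]], [3, 4]))  # [3, 8]
```

An LLM front end that routes a natural-language request to a calculator or search tool is doing the same kind of branching, just with a learned classifier in place of the isinstance check.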
|
# ¿ Mar 17, 2024 13:01 |
|
quote:A computer doing math, I feel like there's some difference between a calculator doing math and a brain doing math that you just kinda know in your gut. A brain doing math is going through a non-deterministic process to figure it out. If by math you mean calculation, then there is, or should be, no difference between how a computer does it and how a person does it. Remember that calculation methods are programmed by people, who developed a method/algorithm and implemented it as code. If by math you mean coming up with these methods in the first place, then perhaps. But as I said before, having the "here's what you do" is not entirely useful until you know why it's done this way.
|
# ¿ Mar 17, 2024 13:04 |
|
Ok it can do math. E: I think it's impressive that the models can take natural language and make calls to other models to write a simple math program etc but I still don't consider this to be the LLM "doing" math. Boris Galerkin fucked around with this message at 17:43 on Mar 17, 2024 |
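The "LLM calls a tool" pattern the edit describes can be sketched without any model at all; here a regex stands in for the LLM's routing decision, and everything (function names included) is hypothetical:

```python
import re

def calculator_tool(expression):
    """The 'tool': an ordinary evaluator that actually does the arithmetic."""
    # Restrict to digits and basic operators before eval'ing, for the sketch's sake.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("not arithmetic")
    return eval(expression)

def toy_assistant(prompt):
    """A stand-in for the LLM's routing step: it recognizes a math question
    and delegates to the tool, rather than computing anything itself."""
    match = re.search(r"what is ([0-9+\-*/(). ]+)\?", prompt.lower())
    if match:
        return f"calling tool: {calculator_tool(match.group(1).strip())}"
    return "no tool needed"

print(toy_assistant("What is 12 * (3 + 4)?"))  # calling tool: 84
print(toy_assistant("Tell me a joke"))         # no tool needed
```

In this framing, the "assistant" never does math; it only decides when to hand the expression off, which is the distinction being argued about.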
# ¿ Mar 17, 2024 17:40 |