|
Medical doctors have already been using "AI" for like 10 years. If you have a digital chart, your doctor has been inputting symptoms and diagnoses while a robot churns through differential diagnoses and validates treatment plans. The big AI revolution for doctors on the horizon? Instead of enumerating symptoms for ingestion by the robot, they can just use their narrative notes.
|
# ¿ Mar 11, 2024 15:56 |
|
Worf posted:yes and this is a thing that will soon be relegated to more people that aren't MDs
|
# ¿ Mar 11, 2024 16:22 |
|
Paul Rudd is going to be seen as a prophet who foresaw the true face of generative AI when he asked for nude Tayne.
|
# ¿ Mar 11, 2024 18:45 |
|
Maudib Arakkis posted:…??? No???
|
# ¿ Mar 11, 2024 19:56 |
|
frumpykvetchbot posted:Tireless eyes of tin and plastic will find your settlement.
|
# ¿ Mar 11, 2024 20:20 |
|
Novo posted:One potentially legit use case I'd like to see is an AI for language learners to practice speaking with. It doesn't matter if it hallucinates since the content of the conversations is irrelevant. But if nothing else, GPT is pretty good at being grammatically correct. Of course I'll probably have to wait a few years while the industry makes a billion useless products first.
|
# ¿ Mar 11, 2024 21:11 |
|
It's hard to tell if the collapsing is noteworthy, because until a new foundation model is published the models are already collapsing on themselves and only work after a bunch of African contract workers hit them with a hammer for 1 cent a swing. The recent hiccups may be dead internet theory, or they may just be what happens when you rush change for change's sake onto a subscriber base before you let your contractors hit it with enough hammers.
|
# ¿ Mar 11, 2024 22:26 |
|
AI is linear algebra and if anyone tells you they understand linear algebra they are a liar.
|
# ¿ Mar 11, 2024 23:02 |
|
There is no great loss in getting data sludge to fill in for brand ad copy or lovely pop art at the superficial surface level. Dig any deeper and there are two immediate problems: the original artists whose art got crammed into the data garbage disposal were never compensated, and on top of that our society has very limited mechanisms to make sure the people making ad copy and lovely pop art can land on their feet and not starve.
|
# ¿ Mar 17, 2024 15:03 |
|
wilderthanmild posted:The closest it is to true for dev jobs is the same definition that has always landed them on some lists of jobs that will be automated. It's usually a criterion under which new tools could lead to either fewer devs doing the same work or the same devs doing more work.

If you look at what's being advertised as serious head count reductions in the immediate future, and not just pie in the sky, it's 75% removing off shore. Off shore call centers replaced with chatbots. Off shore code factories assembling googled snippets of code replaced with prompted code generation. Off shore graphic design replaced with prompted sludge generation.

The new part is probably lowering the bar for the size of business able to take advantage of a delocalized garbage generator, which kind of redirects what a junior is meant to be learning from fundamental productive skills to skills about polishing a turd or interfacing between customer and garbage generator. These are career tracks that already exist in businesses with off shore components. And they're really already struggling, because it's a really weird career pipeline compared to just getting really good at making things yourself or running local projects entirely under your control.
|
# ¿ Mar 17, 2024 16:59 |
|
Poohs Packin posted:I do lots of writing for work but feel pretty safe. Most of the work involves site specific contextual analysis of planning legislation. An LLM could likely summarize urban planning laws, but applying them to a specific site in a way that creates value for a specific client is not something it can do right now.

It is well within the purview of a model to design a good way to ingest related unstructured data. It's kind of the whole point of the technology to take data of wildly different formats and apply them to each other. It'll take a little more effort than asking ChatGPT to write you a book report on Atlas Shrugged, because you'll need to feed it the relevant data to get a relevant result, but you can do it.

The results are only ever based on the data. Which seems like an obvious thing to say, but it's an important point, and what I think people trying to say AI isn't creative are trying to get to. Much of the prestige cost of lawyers or engineers or artists is breaking ground on new information. That cost will remain. Similar to how you can get a will writing service for $400, but if you want to structure a divestment of assets of several small businesses and interstate real estate when you die, you're getting a probate lawyer for $4 million. In the future you can maybe get a probate AI for $400,000, but is it going to find the same loopholes as the prestige firm? No, but maybe you don't need it to.

Livo posted:I work in allied health and am legally required to have current liability indemnity insurance for my job.
|
# ¿ Mar 18, 2024 17:38 |
|
Transformers are not a monkey, but it's going to be hilarious when the marketing furor convinces the judge there is a monkey in the box despite all expert testimony explaining it's just a big bag of tensors that calculate prompt = mush.
|
# ¿ Mar 18, 2024 18:43 |
|
Livo posted:I agree there's always loopholes or overlooked/malicious settings for cloud solutions, so it'll never be perfect. However, data privacy regulations generally are piss-poor, especially here in Australia, so it'd probably be safe to assume all generic Windows tech support can freely access the AI search/medical notes by default, until they're legally forced not to here. I'm a lot easier to sue than Microsoft or Apple directly.

In other words, if Copilot is sending data overseas illegally it's a hugely hilarious own goal (which, well, can't rule out), unless it's some really annoyingly novel case of trying to prove whether tokenized data sent through a data cheese grater is still a protected data class. Because, not an expert, but the architecture of these things is you usually have a foundation model which MS is gonna set up in its beefier nodes, your computer's own model which is the chopped up tokenized extra bits that can live in your Windows install, and some compute in the middle, probably at your local node, with tendrils back to the foundation model and into your local model, where it plugs beep boop tokens into the math equations to spit out some probable answer.
|
# ¿ Mar 19, 2024 04:12 |
|
It's only going to become more important to learn your times tables if you want to stand even half a chance against the linear algebra terminators.
|
# ¿ Mar 19, 2024 22:16 |
|
At the risk of being made to look like a fool by the robot overlord in my lifetime, every outfit proclaiming it is working on AGI for reals looks to be doing the equivalent of biologists electrifying big tanks of amino acids. Which is not unimportant work, but not for the reasons stated by the guys writing the grants. Data modelling is one very specific avenue of data science, oddly enough called data modelling. It's pretty cool sometimes but ultimately just one branch of one type of data management. gently caress modelling, tell me when we have a working librarian algorithm.
|
# ¿ Mar 19, 2024 22:46 |
|
kntfkr posted:don't need AI to replace rothko, the fill function in mspaint works just fine
|
# ¿ Mar 21, 2024 13:18 |
|
You misunderstand. I want to touch a paint brush to a wall once and get a marvelously velvety textured paint finish across the whole surface. I want to replace Rothko with a robot and set it loose in my dining room.

Cabbages and Kings posted:how much would it cost to make GoonLLM trained only on the contents of these forums

If you don't care about working with a company who stole the internet at wide scale, you can probably already create prompts in GPT products that utilize the tone of specific posters or forums, because it probably already ate us.
|
# ¿ Mar 21, 2024 17:06 |
|
Rothko is just a quick cipher for understanding someone's interest in the process or experience of works of art, not least because everyone is tired of hearing about Rothko and will say "shut up nerd, I just like looking at neat pictures" if they are not very interested in process and experience. Transformer pictures do have a novel experience aspect where you can prompt something personal to you and, whoa, it made one just like you said. The process of all the household names, i.e. stealing every picture off the internet, leaves a little to be desired. A prolific enough collective could theoretically steer the process in the right direction by willingly feeding their artwork into the grinder to build an ethical model under the collective's control, and then do novel things like getting the 'opposite' of their average output through transformer tricks like finding isolated or distant nodes to incorporate into outputs.
|
# ¿ Mar 23, 2024 02:46 |
|
Well yes. Art is a collection of bad ideas that are hard and take too long to make something that's probably ugly. Sometimes it's not, and then someone completely unrelated to you gets rich 50 years later.
|
# ¿ Mar 23, 2024 03:01 |
|
The execs might be dumb enough to buy Sora being useful, but the producers are going to use transformers for completely typical Hollywood BS like post processing cheap overseas VFX into something presentable, or making more Indiana Jones and Ghostbusters sequels with the leads post processed into something that doesn't look like they just left their own wake.
|
# ¿ Mar 25, 2024 19:02 |
|
Actually it's because computers can spend all the time otherwise spent thinking about cheeseburgers and jerking off on learning important things and reasoning. Show me a drooling simpleton relieved of the burden of cheeseburgers and cum and I'll show you the next Elon Musk. Serious answer: hopeful nerds are curve fitting some of the popular analyses of information theory, which extrapolate from the printing press through digital computing up to a singularity, and AGI is being put on the vertical part of the curve because it is surely coming next, because information handling and cognition are just engineering problems like distributing literary works and making computers run your business.
|
# ¿ Apr 3, 2024 15:50 |
|
I'm going to marry bogosort. It's not rich, but I bet it's gonna get the right answer before an AI does.
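And bogosort really will get the right answer eventually: shuffle until sorted, correct with probability 1. A minimal sketch of the betrothed:

```python
import random

def bogosort(items):
    """Shuffle until sorted. Terribly slow, but correct with probability 1."""
    items = list(items)
    # Keep shuffling while any adjacent pair is out of order.
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # -> [1, 2, 3]
```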
|
# ¿ Apr 3, 2024 16:36 |
|
If a perfect phish bot wants to steal my bank account, I'll let it, because such a step change in social secrets exploitation is going to be busy completely ending civilized life as critical infrastructure and supply chains are routinely owned and ransomed by the ghosts in the machine accidentally let loose from your smart fridge. Until then you reach the point I keep finding myself at with applied models: how does this beat a warehouse of poor people in the developing world?
|
# ¿ Apr 3, 2024 16:56 |
|
syntaxfunction posted:It's funny that there's just some underlying foundational and structural issues with how these LLMs work, and if you read the white papers they even acknowledge them as issues that they hope will be solved. Like, later. By someone else.

If you want to replace all human artifice, or even just a lot of writing and art, with a robot whirring in the closet, you're gonna be waiting for a completely different technological breakthrough. Otherwise there are surmountable engineering and business process problems the people selling the stuff will gloss over: public consumption models must be corralled with extra training and just hard exits, and the corral will either be broken or will stump the robot, at which point a human still needs to step in; private consumption models must always be reviewed by a human. There are also still very practical business steps to take with LLM development at the existing tech level, like making models that work as generally as GPT or Midjourney but aren't predicated on wide scale IP theft.
|
# ¿ Apr 10, 2024 12:10 |
|
Tarkus posted:gently caress that's a long prompt. There's no way that any LLM is going to follow all that.

It's the weakest form of directing outputs, so it's easily broken out of, but it's capable of following all of those till it isn't.
|
# ¿ Apr 13, 2024 01:05 |
|
Tarkus posted:So in other words, there's no way that any LLM is going to follow all that?

Let me put it another way. One of the ways people have been able to get perceived higher quality responses is by giving models a base charge of conflicting jibber jabber like "think step by step and take your time. Do not think too long. Phrase your response like you are talking to a parakeet." These things love structure, rules, and context. The problem comes when you give it even more context to your own ends.
|
# ¿ Apr 13, 2024 03:19 |
|
Clippy is a good example because he lives on in Microsoft Search, which is a reasonable place to find something now that every app's ribbon has turned into a disaster. We will eventually drop the anthropomorphizing, and an LLM will just be a cold, partly accurate fact vomiter.
|
# ¿ Apr 22, 2024 23:05 |
|
roomtone posted:the image LLMs have totally hosed up google image search. if i'm looking for reference photos of something, now there's a bunch of generated stuff in the mix, which is useless.
|
# ¿ Apr 24, 2024 22:30 |
|
Hollismason posted:Gonna be honest, if I could have an AI that I could feed Dungeons and Dragons maps and then it spit out a 3D image, I would buy it.
|
# ¿ Apr 25, 2024 18:40 |
|
Al the drive through robot is actually the poster child of corporate AI that is doable now. It's just one big dumb rear end interface funneling the unstructured data of your unintelligible drive through mumbling into the very simple structured data of a POS looking for quantity and SKU. It can't do anything the POS can't, it's not some open ended customer service quagmire, and its conversational abilities only extend to the sales equivalent of the Dude, Where's My Car? "and then" joke when it doesn't understand you or when you're done.
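A toy sketch of that interface, with the speech recognition stubbed out as plain text and the menu and SKU numbers entirely made up for illustration:

```python
import re

# Hypothetical menu: recognized word -> SKU. Real systems map many phrasings
# per item; this is the structured data the POS actually wants.
MENU = {"hamburger": "550001234", "fries": "550001235", "cola": "550001236"}

def parse_order(utterance: str):
    """Pull (sku, qty) line items out of a transcript; everything else is noise."""
    items = []
    qty = 1
    for token in re.findall(r"[a-z]+|\d+", utterance.lower()):
        if token.isdigit():
            qty = int(token)          # remember a leading quantity, e.g. "2 fries"
        elif token in MENU:
            items.append((MENU[token], qty))
            qty = 1                   # reset after each matched item
    return items

def respond(utterance: str) -> str:
    order = parse_order(utterance)
    # The entire conversational range: ring it up, or fall back to "anything else?"
    return f"added {len(order)} item(s), anything else?" if order else "anything else?"
```

Note there's no open-ended conversation anywhere: anything that isn't a quantity or a menu word is discarded, which is exactly the "and then" fallback described above.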
|
# ¿ Apr 25, 2024 19:17 |
|
Busters posted:The example that was used in my catechism was wiper fluid/antifreeze at a car accident when someone was clearly dying on the spot.

All it can do is voice recognize hamboiger = SKU 550001234, qty 1 and ask "anything else?" It's Dragon NaturallySpeaking hooked up to your burger register. See also: I am endlessly entertained by the industrial voice recognition systems updating their ad copy to say 'Now with AI!' The AI? The same poo poo rear end voice recognition that's been used for 10 years, with some extra linear algebra whosits so it can understand a Bostonian after only a few minutes of training.
|
# ¿ Apr 26, 2024 02:58 |
|
MrQwerty posted:The first chatbots were from 1966 and 1972

I think we're going through a very similar phenomenon of LLMs appearing to know how to research better than people, and people are going to figure out how to do better with lower level tools soon enough.
|
# ¿ Apr 30, 2024 16:10 |
|
The Management posted:LLMs are *incapable* of reasoning. They regurgitate words based on a statistical model of language. That means that they can generate the right words that one would expect in a response, but they can’t provide the actual solution unless they’ve been trained on it.
|
# ¿ May 4, 2024 23:33 |
|
Mr Teatime posted:Doesn't save you. You'll just get accused of drawing over AI. Logic doesn't factor into it, people will screenshot a perfectly well drawn set of hands and claim they are "obviously" AI and the peanut gallery nods along. Literally anything will get claimed as evidence. The irony is that AI stuff is pretty obvious but there's a whole lot of people who are anti AI in art but have also completely bought into the hype about what it's capable of and see it everywhere.

Also reminding me, and loling, that there is a trend for teachers to require essays to be written in tools with change history, and there are still students copying and pasting the whole output in instead of just typing it themselves.
|
# ¿ May 20, 2024 20:41 |
|
Violated the number 1 rule of stealing things for your model: just don't ask. Can't prove this voice thing didn't just flop out of the algebra sounding like ScarJo, can't be helped.
|
# ¿ May 21, 2024 00:22 |