zedprime
Jun 9, 2007

yospos
Medical doctors have already been using "AI" for like 10 years. If you have a digital chart, your doctor has been inputting symptoms and diagnoses and a robot has been churning through differential diagnoses and validating treatment plans.

The big AI revolution for doctors on the horizon? Instead of enumerating symptoms for ingestion by the robot they can just use their narrative notes.
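If you want the flavor of what the chart robot has been doing this whole time, it's closer to a lookup table than a mind. A toy sketch in Python; the symptom names and rule sets here are entirely invented for illustration, not any real clinical system:

```python
# Toy differential engine: structured symptom codes in, ranked candidate
# diagnoses out. Rules and codes are made up for illustration.
RULES = {
    "influenza":    {"fever", "cough", "myalgia"},
    "strep_throat": {"fever", "sore_throat", "swollen_nodes"},
    "common_cold":  {"cough", "sore_throat", "runny_nose"},
}

def differential(symptoms: set[str]) -> list[tuple[str, float]]:
    # Score each candidate by the fraction of its rule set the chart matches.
    scores = {dx: len(symptoms & rule) / len(rule) for dx, rule in RULES.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(differential({"fever", "cough", "myalgia"}))  # influenza ranks first
```

The "narrative notes" upgrade is just swapping the structured symptom set for free text the model has to parse itself.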

zedprime
Jun 9, 2007

yospos

Worf posted:

yes and this is a thing that will soon be relegated to more people who aren't MDs
I don't see where the new AI is stepping in yet. You've made a very good case for nurse practitioners though.

zedprime
Jun 9, 2007

yospos
Paul Rudd is going to be seen as a prophet who foresaw the true face of generative AI when he asked for nude Tayne.

zedprime
Jun 9, 2007

yospos
Are you saying chart workflows don't give differential and treatment plan recommendations, or do you have a problem with calling those recommendations intelligent?

zedprime
Jun 9, 2007

yospos

frumpykvetchbot posted:

Tireless eyes of tin and plastic will find your settlement.
Thought you were talking about an ambulance-chasing lawyer Terminator, and for a moment I understood pure terror.

zedprime
Jun 9, 2007

yospos

Novo posted:

One potentially legit use case I'd like to see is an AI for language learners to practice speaking with. It doesn't matter if it hallucinates since the content of the conversations is irrelevant. But if nothing else, GPT is pretty good at being grammatically correct. Of course I'll probably have to wait a few years while the industry makes a billion useless products first.
I am an ignorant English-only speaker going off hearsay from news articles, but my understanding is that English GPT is an outlier, strong only because of the sheer volume of English content you can steal off the internet. Compared to English, even the other big languages have small wells to draw from, often tainted by older machine translations or by discount translation services slapping a bad front end on a website that was originally English. If that's the case, it's also expected to only get worse as people treat other-language models as good enough and perpetuate more garbled results.

zedprime
Jun 9, 2007

yospos
It's hard to tell if the collapsing is noteworthy, because up until a new foundation model is published the models are already collapsing in on themselves, and they only work after a bunch of African contract workers hit them with a hammer at a cent a swing. The recent hiccups may be dead internet theory, or they may just be what happens when you rush change for change's sake out to a subscriber base before you let your contractors land enough hammer blows.

zedprime
Jun 9, 2007

yospos
AI is linear algebra and if anyone tells you they understand linear algebra they are a liar.

zedprime
Jun 9, 2007

yospos
There is no great loss in getting data sludge to fill in for brand ad copy or lovely pop art, looking at it at the surface level. Dig any deeper and there are two immediate problems: the original artists whose art got crammed into the data garbage disposal were never compensated, on top of our society having very limited mechanisms to make sure the people making ad copy and lovely pop art can land on their feet and not starve.

zedprime
Jun 9, 2007

yospos

wilderthanmild posted:

The closest it comes to true for dev jobs is the same definition that has always landed them on some lists of jobs that will be automated. It's usually using a criterion under which new tools could lead to either fewer devs doing the same work or the same devs doing more work.

For the current generation of generative AI, it's pretty much that it's really good at reducing the effort of writing repetitive boilerplate. That's about all you can trust it with, because more complex stuff often needs to be deeply reviewed, since it was trained on random GitHub repos and bad Stack Overflow answers.

Basically the jobs it threatens are early-career people and "experienced" devs who never quite got it, both of whom spend a lot of time copy-pasting boilerplate.

If you look at what's being advertised as serious head count reductions in the immediate future, and not just pie in the sky, it's 75% removing offshore. Offshore call centers replaced with chatbots. Offshore code factories assembling googled snippets of code replaced with prompted code generation. Offshore graphic design replaced with prompted sludge generation.

The new part is probably lowering the bar for the size of business able to take advantage of a delocalized garbage generator, which kind of redirects what a junior is meant to be learning, from fundamental productive skills to skills in polishing a turd or interfacing between the customer and the garbage generator. Those career tracks already exist in businesses with offshore components. And they're already struggling, because it's a really weird career pipeline compared to just getting really good at making things yourself or running local projects entirely under your control.

zedprime
Jun 9, 2007

yospos

Poohs Packin posted:

I do lots of writing for work but feel pretty safe. Most of the work involves site-specific contextual analysis of planning legislation. An LLM could likely summarize urban planning laws, but applying them to a specific site in a way that creates value for a specific client is not something it can do right now.

Legislative environments aren't as neat and tidy as people would like to think they are, either. There are overlaps, inconsistencies, internal policy positions, errors, supplementary material, dated neighbourhood plans, etc.

It also can't read architectural plans, apply relevant legislation, and find efficiencies in line with a client brief. This is even more true for non-architect clients who will say vague poo poo like "I want the entrance to look modern".
This is a good example of a pet peeve of mine: when we say an LLM isn't creative or can't apply anything to truly new situations. Those statements aren't wrong, but they have an imprecision of language I kind of hate.

It is well within the purview of a model to ingest related unstructured data. That's kind of the whole point of the technology: taking data of wildly different formats and applying it to each other. It'll take a little more effort than asking ChatGPT to write you a book report on Atlas Shrugged, because you'll need to feed it the relevant data to get a relevant result, but you can do it (see the sketch below).

The results are only ever based on the data. That seems like an obvious thing to say, but it's an important point, and I think it's what people who say AI isn't creative are trying to get at. Much of the prestige cost of lawyers or engineers or artists is breaking ground on new information, and that cost will remain. Similar to how you can get a will-writing service for $400, but if you want to structure a divestment of the assets of several small businesses and interstate real estate when you die, you're getting a probate lawyer for $4 million. In the future you can maybe get a probate AI for $400,000, but is it going to find the same loopholes as the prestige firm? No, but maybe you don't need it to.
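The "feed it the relevant data" step is less mystical than it sounds. A minimal sketch, where complete() is a stand-in for whatever model endpoint you're actually paying for, and the prompt scaffolding is just one plausible way to do it:

```python
# Grounding a model in site-specific documents instead of asking it cold.
# complete() is a placeholder: plug in a real LLM API call here.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your model endpoint here")

def grounded_query(question: str, documents: list[str]) -> str:
    # Stuff the relevant unstructured data into the context window and
    # tell the model to stay inside it.
    context = "\n\n---\n\n".join(documents)
    prompt = (
        "Answer using ONLY the planning documents below. "
        "If they conflict or don't cover the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return complete(prompt)
```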


Livo posted:

I work in allied health and am legally required to have current liability indemnity insurance for my job.

I had a discussion with my peers about the use of AI with medical notes & privacy concerns, and oh boy, this is a huge problem that everyone's sleep-walking into. I've been the victim of recent high-profile Australian data breaches, with huge increases in spam calls, texts, emails and scams from those breaches, so I'm aware of lovely security practices making life worse. I used Microsoft as an example, but as Apple is going to do something similar with AI searches for both macOS & iOS, Android likewise for phones, & Linux distros will probably follow suit, this is going to leave very little choice in computer/phone operating systems in the coming years.

Microsoft's Windows 11 embedded AI "Co-Pilot" apparently scans all of the files on your hard drive and sends the data overseas to "enhance" your search function, whereas older Windows used local-only searches. I don't know if this means just the file names or the actual text contents, but even if it's just file names currently, it definitely won't stay that way for long, as scanning the contents of text documents is the next step. Now, even if I use good old Notepad for my client medical notes and call the notes File 001, 002, etc., it'll still send the names, & potentially everything I've written in my medical notes, overseas to their AI server models. This means that all of their Windows tech support guys will potentially have access to confidential client medical data, which is kind of a doozy. My health care insurer really, really doesn't like Australian medical files or data being sent overseas at all.

Will MS, Apple, Android or Amazon based in other countries, really give a poo poo about Australian patient medical data privacy? Highly, highly unlikely. What's the solution(s)? I don't know. Maybe a non-AI integrated "Aussie Healthcare" version of Windows/mac OS/iOS/Android for computers or phones that is suitable & required for people in my field? All local based servers here must only employ specifically trained healthcare security qualified staff, whose access to AI searches is walled off significantly, with major criminal & financial penalties for mis-use? I'm just spit-balling ideas, but I really hope my medical insurers are raising these questions and lobbying hard for better laws about this.

I raised the AI search issue with my peers and was told "Pfft, that'll only be an issue if Microsoft or Apple or whoever don't have a server based in Australia: since they already do, our insurers will have no issues with it! You're being paranoid!" I then asked if MS/Apple having a server in Australia automatically means that only a very small number of qualified/eligible personnel will have access to the AI scanned files, and not everyone on the whole OS tech support team. The response was "Obviously only a - oh, hang on, if they're Windows or MacOS tech support, they have to be able to access a lot on the OS side for troubleshooting if need be. The AI being very well integrated into operating system, means the whole support team must be able to access what the AI scans, in order to provide tech support for all their customers. Uh, that could be a big problem then if there's no Australian laws or requirements limiting the AI model access, and it's all just suggested, non-legally binding guidelines for companies."

Oh, and some of my peers were using AI software to literally summarise and provide patient notes of their consults (instead of doing it themselves, since taking notes is hard), and when I asked them if that AI data was being sent to an Australian server, or overseas, I received a blank look :gbsmith:
This isn't an AI-unique problem. For example, it is incredibly easy to end up with illegal settings in your cloud apps as a small business in a privacy-regulated industry. Cloud computing generally supports data-handler regulations, but cars generally have seat belts too, and we see what the adherence is there when we let people set themselves up.

zedprime
Jun 9, 2007

yospos
Transformers are not a monkey but it's going to be hilarious when the marketing furor convinces the judge there is a monkey in the box despite all expert testimony explaining it's just a big bag of tensors that calculate prompt = mush.

zedprime
Jun 9, 2007

yospos

Livo posted:

I agree there's always loopholes or overlooked/malicious settings for cloud solutions, so it'll never be perfect. However, data privacy regulations generally are piss-poor, especially here in Australia, so it'd probably be safe to assume all generic Windows tech support can freely access the AI search/medical notes by default, until they're legally forced not to here. I'm a lot easier to sue than Microsoft or Apple directly.

Co-Pilot is very tightly embedded into the OS structure, so I honestly don't know if there's an "anything that even looks like medical files or records is sent to a different server with strong restrictions and barriers to access; everything else goes to our generic server for general tech support to access" special filter for Co-Pilot search data. If so, that's better than nothing, but I don't know that for sure. I'm very skeptical that MS or any other company would voluntarily do that for Australian customers unless we made them. We don't have GDPR or other real data privacy regulations here: given the recent high-profile data breaches, people are going to be even more pissed off in the future.

Don't MS, Apple & Android already copy and then follow their GDPR requirements for the Australian healthcare market to be on the safe side? In theory yes. According to conversations with school friends who worked IT support specifically for Australian healthcare providers, their actual answer was "We say we're GDPR compliant just in case any EU residents live here, but in practice, a lot of confidential medical data is accessible by default to the lowest IT workers here with no real safeguards or filters stopping them. If they access something they're not supposed to, they just get a reprimand & we desperately hope they don't leak the details of the medical file they saw".
I mean, Australia may not have the regulations many other countries do, but Microsoft, Google, etc. have their cloud infrastructure set up to meet the regulatory needs of the whole world. And not just GDPR, where they only need to prove the cloud janitors in Abu Dhabi took a security course to count as a data processor. There are many regulations around the world requiring locality of data, and every cloud infrastructure setup has safeguards and big honking buttons to stop an administrator from sending military secrets to random nodes because of a failover whoopsie.

In other words, if Copilot is sending data overseas illegally, it's a hugely hilarious own goal (which, well, can't be ruled out), unless it's some really annoyingly novel case of trying to prove whether tokenized data sent through a data cheese grater is still a protected data class. Because, not an expert, but the architecture of these things is that you usually have a foundation model, which MS is gonna set up on its beefier nodes; your computer's own model, which is the chopped-up tokenized extra bits that can live in your Windows install; and some compute in the middle, probably at your local node, with tendrils back to the foundation model and into your local model, where it plugs beep boop tokens into the math equations to spit out some probable answer.

zedprime
Jun 9, 2007

yospos
It's only going to become more important to learn your times tables if you want to stand even half a chance against the linear algebra terminators.

zedprime
Jun 9, 2007

yospos
At the risk of being made to look like a fool by the robot overlord within my lifetime, every outfit that proclaims it is working on AGI for reals looks to be doing the equivalent of biologists electrifying big tanks of amino acids. Which is not unimportant work, but not for the reasons stated by the guys writing the grants.

Data modelling is one very specific avenue of data science, oddly enough called data modelling. It's pretty cool sometimes, but ultimately it's just one branch of one type of data management. gently caress modelling, tell me when we have a working librarian algorithm.

zedprime
Jun 9, 2007

yospos

kntfkr posted:

don't need AI to replace rothko, the fill function in mspaint works just fine
God, my home renovation would be so much simpler if this were how paint worked.

zedprime
Jun 9, 2007

yospos
You misunderstand. I want to touch a paint brush to a wall once and get a marvelously velvety textured paint finish across the whole surface. I want to replace Rothko with a robot and set it loose in my dining room.

Cabbages and Kings posted:

how much would it cost to make GoonLLM trained only on the contents of these forums



and will jeffy make an archives dump available to make this easier

edit: it would be very funny to ask political questions and get a total mishmash of GBS, D&D and CSPam in the mix. "The [political event] was pretty lol, so many photoshops were made, but many felt that the policies enacted, while laudable, fell short of the stated goal of improving the lives of disenfranchised voters. Death to America, toxxxx for not voting Biden".
One of the benefits of a foundation model built on the internet is that it understands what a web forum is. Jeffrey would not need to do anything: you could just feed it the raw HTML from crawling the forums and archives at the normal URLs, and it would already know what a username, topic name, reply text, etc. mean, so you could jump straight to prompt-engineering your politics mashup.

If you don't care about working with a company that stole the internet at wide scale, you can probably already write prompts in GPT products that channel the tone of specific posters or forums, because it has probably already eaten us.
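A minimal sketch of the "just feed it the raw HTML" claim. requests is real; complete() is a stand-in for whatever stolen-internet model endpoint you'd actually call, and nothing here is a working GoonLLM:

```python
# No parsing, no schema: hand the model a raw forum page and let its prior
# knowledge of what a forum looks like sort out usernames and replies.
import requests

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in the model endpoint of your choice")

def goon_mashup(thread_url: str, question: str) -> str:
    page = requests.get(thread_url).text  # raw HTML, straight off the crawl
    return complete(
        "Below is the raw HTML of a forum thread. Answer the question in a "
        f"mashup of the posters' tones.\n\nQuestion: {question}\n\n{page}"
    )
```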

zedprime
Jun 9, 2007

yospos
Rothko is just a quick cipher for gauging someone's interest in the process or experience of works of art, not least because everyone is tired of hearing about Rothko and will say "shut up nerd, I just like looking at neat pictures" if they are not very interested in process and experience.

Transformer pictures do have a novel experience aspect, where you can prompt something personal to you and, whoa, it made one just like you said. The process used by all the household names, i.e. stealing every picture off the internet, leaves a little to be desired.

A prolific enough collective could theoretically steer the process in the right direction by willingly feeding their artwork into the grinder to build an ethical model under the collective's control, and then do novel things like getting the 'opposite' of their average output through transformer tricks like finding isolated or distant nodes to incorporate into outputs.
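In toy form, the "distant nodes" trick is just geometry. A sketch assuming you already have one embedding per artwork; the random vectors here are placeholders for whatever encoder the collective actually trains:

```python
# Find the piece farthest from the collective's "house style" centroid,
# i.e. the least-average output to steer future generations toward.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))  # placeholder: one vector per artwork

centroid = embeddings.mean(axis=0)
distances = np.linalg.norm(embeddings - centroid, axis=1)
least_average = int(distances.argmax())  # the most distant node
print("index of the least-average piece:", least_average)
```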

zedprime
Jun 9, 2007

yospos
Well yes. Art is a collection of bad ideas that are hard and take too long to make something that's probably ugly. Sometimes it's not, and then someone completely unrelated to you gets rich 50 years later.

zedprime
Jun 9, 2007

yospos
The execs might be dumb enough to buy Sora being useful, but the producers are going to use transformers for completely typical Hollywood BS like post-processing cheap DFX from overseas into something presentable, or making more Indiana Jones and Ghostbusters sequels with the leads post-processed into something that doesn't look like they just left their own wake.

zedprime
Jun 9, 2007

yospos
Actually it's because computers can spend all the time otherwise spent thinking about cheese burgers and jerking off on learning important things and reasoning. Show me a drooling simpleton relieved of the burden of cheese burgers and cum and I'll show you the next Elon Musk.

Serious answer: hopeful nerds are curve-fitting some of the popular analyses of information theory, which extrapolate from the printing press through digital computing up to a singularity, and AGI is being put on the vertical part of the curve because it is surely coming next, because information handling and cognition are just engineering problems like distributing literary works and making computers run your business.

zedprime
Jun 9, 2007

yospos
I'm going to marry bogosort. It's not rich but I bet it's gonna get the right answer before an AI.
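For anyone who hasn't met my betrothed, the whole algorithm is four lines: shuffle until sorted. Expected runtime is on the order of n times n! for distinct elements, which is still a guarantee no chatbot gives you:

```python
# Bogosort: shuffle the list until it happens to be sorted.
import random

def bogosort(xs: list) -> list:
    while any(a > b for a, b in zip(xs, xs[1:])):  # not sorted yet
        random.shuffle(xs)
    return xs

print(bogosort([3, 1, 4, 1, 5, 9, 2, 6]))
```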

zedprime
Jun 9, 2007

yospos
If a perfect phish bot wants to steal my bank account, I'll let it, because such a step change in social-secrets exploitation is going to be busy completely ending civilized life, as critical infrastructure and supply chains are routinely owned and ransomed by the ghosts in the machine accidentally let loose from your smart fridge.

Until then, you reach the point I keep finding myself at with applied models: how does this beat a warehouse of poor people in the developing world?

zedprime
Jun 9, 2007

yospos

syntaxfunction posted:

It's funny that there are some underlying foundational and structural issues with how these LLMs work, and if you read the white papers they even acknowledge them as issues that they hope will be solved. Like, later. By someone else.

The mad dash to push AI into everything is because they need to make their money now. The "next step" in AI is essentially rebuilding from scratch, because these are issues no one can fix, because they struggle to identify why they even happen. It's inherent.

But you can't sell what you currently have if you acknowledge that, so they rely on a nebulous "it will definitely be fixed soon" for any issues.
It's such a whizzbang thing to see the computer go whirr and tell you a story that we've forgotten some of our fundamental tech-marketing filters.

If you want to replace all human artifice, or even just a lot of writing and art, with a robot whirring in the closet, you're gonna be waiting for a completely different technological breakthrough. Otherwise there are engineering and business-process problems that the people selling the stuff will gloss over, but which are somewhat surmountable:
Public consumption models must be corralled with extra training and hard exits. The corral will be broken, or the corral will stump the robot, at which point a human still needs to step in.
Private consumption models must always be reviewed by a human.

There are also still very practical business steps to take with LLM development at the existing tech level, like making models that work as generally as GPT or Midjourney but aren't predicated on wide-scale IP theft.

zedprime
Jun 9, 2007

yospos

Tarkus posted:

gently caress that's a long prompt. There's no way that any LLM is going to follow all that.
They can and do, and it's exactly how you make personas out of something that stole half the internet.

It's the weakest form of directing outputs, so it's easily broken out of, but it's capable of following all of those till it isn't.

zedprime
Jun 9, 2007

yospos

Tarkus posted:

So in other words, there's no way that any LLM is going to follow all that?
No.

Let me put it another way. One of the ways people have been able to get perceived higher-quality responses is by giving models a base charge of conflicting jibber jabber like "think step by step and take your time. Do not think too long. Phrase your response like you are talking to a parakeet." These things love structure, rules, and context. The problem comes when you give it even more context toward your own ends.
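Concretely, that "base charge" is just a system message stacked in front of the conversation. A sketch assuming the OpenAI-style chat API; the model name is a placeholder, so swap in whatever you actually have access to:

```python
# Persona via system prompt: the rules ride along in front of every user turn.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model is current
    messages=[
        {"role": "system", "content": (
            "Think step by step and take your time. Do not think too long. "
            "Phrase your response like you are talking to a parakeet."
        )},
        {"role": "user", "content": "Explain gradient descent."},
    ],
)
print(resp.choices[0].message.content)
```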

zedprime
Jun 9, 2007

yospos
Clippy is a good example because he lives on in Microsoft Search, which has become a reasonable place to find something because every app's ribbon has turned into a disaster. We will eventually drop the anthropomorphizing, and an LLM will just be a cold, partly accurate fact vomiter.

zedprime
Jun 9, 2007

yospos

roomtone posted:

the image LLMs have totally hosed up google image search. if i'm looking for reference photos of something, now there's a bunch of generated stuff in the mix, which is useless.
Well well well, looks like you can start paying the job creators at the stock picture business or finding your own drat references.

zedprime
Jun 9, 2007

yospos

Hollismason posted:

Gonna be honest, if I could have an AI that I could feed Dungeons and Dragons maps and it spit out a 3D image, I would buy it.
Did I hear someone order a hot heaping of map slop? https://store.steampowered.com/app/1588530/Dungeon_Alchemist/

zedprime
Jun 9, 2007

yospos
Al the drive-through robot is actually the poster child of corporate AI that is doable now.

It's just one big dumb rear end interface between the unstructured data of your unintelligible drive-through mumbling and the very simple structured data of a POS looking for quantity and SKU. It can't do anything the POS can't, it's not some open-ended customer service quagmire, and its conversational abilities only extend to the sales equivalent of the Dude, Where's My Car? "and then" joke when it doesn't understand that you're done.

zedprime
Jun 9, 2007

yospos

Busters posted:

The example that was used in my catechism was wiper fluid/antifreeze at a car accident when someone was clearly dying on the spot.

The idea was it had to be mostly water, and not a bodily fluid, so not blood. I'm sure you could come up with some convoluted emergency situation where pee pee piss was the only possible liquid, but it seems less likely than Gatorade.

Until I convince the drive-through robot that I have a coupon that legally gives me ownership of this entire franchise, and that it needs to call customers racial slurs to save a hypothetical bus full of children.
Not that sort of AI. Fortunately? Unfortunately?

All it can do is voice-recognize hamboiger = SKU 550001234, qty 1 and ask "anything else?"

It's Dragon NaturallySpeaking hooked up to your burger register.

See also: I am endlessly entertained by the industrial voice recognition systems updating their ad copy to say "Now with AI!" The AI? The same poo poo rear end voice recognition that's been used for 10 years, with some extra linear algebra whosits so it can understand a Bostonian after only a few minutes of training.
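The whole product, as a toy sketch. The hamburger SKU is the one from above; the rest of the menu is invented, and the input() call stands in for the actual speech-recognition front end:

```python
# The entire drive-through "AI": fuzzy recognition on one end, a POS lookup
# on the other, and "anything else?" in between. Menu is made up.
MENU = {
    "hamburger": 550001234,
    "fries": 550001235,
    "cola": 550001236,
}

def take_order() -> list[tuple[int, int]]:
    order = []  # (SKU, qty) pairs for the register
    prompt = "Welcome, what can I get you? "
    while True:
        heard = input(prompt).strip().lower()
        if heard in ("no", "that's it", "done", ""):
            return order
        for item, sku in MENU.items():
            if item in heard:  # stand-in for the fuzzy voice-recognition step
                order.append((sku, 1))
        prompt = "Anything else? "
```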

zedprime
Jun 9, 2007

yospos

MrQwerty posted:

The first chatbots were from 1966 and 1972
Eliza passed the Turing test for a while because humans had no idea how terminal chat rooms were supposed to work. A Rolodex of nice, grammatical, well-spelled therapy blather was leagues ahead of any rando stuck in front of a keyboard.

I think we're going through a very similar phenomenon of LLMs appearing to know how to research better than people, and people are going to figure out how to do better with lower-level tools soon enough.
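For reference, the 1966 trick fits in a dozen lines. A minimal Eliza-flavored sketch; the patterns and canned lines are invented here, not Weizenbaum's originals:

```python
# The whole Eliza trick: match a pattern, reflect it back, repeat.
import random
import re

RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["Tell me more about feeling {0}.", "Why do you feel {0}?"]),
    (r".*", ["Please go on.", "I see.", "How does that make you feel?"]),
]

def eliza(line: str) -> str:
    line = line.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, line)
        if match:
            return random.choice(responses).format(*match.groups())

print(eliza("I am tired of hearing about AI"))
```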

zedprime
Jun 9, 2007

yospos

The Management posted:

LLMs are *incapable* of reasoning. They regurgitate words based on a statistical model of language. That means that they can generate the right words that one would expect in a response, but they can’t provide the actual solution unless they’ve been trained on it.
That isn't even reasoning, unlike half the gotcha logic puzzles people try to spring on these sludge machines. That's basic unstructured ingestion and structured regurgitation, which, if you have the right LLM, should be bread and butter, but it will probably require some session or top-layer training instead of just typing into a mass-market chat bot made of the stolen internet.
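Unstructured ingestion and structured regurgitation, in sketch form. complete() is a placeholder for a model call; a real pipeline would validate the JSON and retry when the model regurgitates garbage instead:

```python
# Free text in, fixed schema out.
import json

def complete(prompt: str) -> str:
    raise NotImplementedError("model endpoint goes here")

def extract(note: str) -> dict:
    prompt = (
        "Return ONLY a JSON object with keys 'who', 'what' and 'when', "
        f"extracted from the text below.\n\n{note}"
    )
    return json.loads(complete(prompt))
```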

zedprime
Jun 9, 2007

yospos

Mr Teatime posted:

Doesn’t save you. You’ll just get accused of drawing over AI. Logic doesn’t factor into it, people will screenshot a perfectly well drawn set of hands and claim they are “obviously” AI and the peanut gallery nods along. Literally anything will get claimed as evidence. The irony is that AI stuff is pretty obvious but there’s a whole lot of people who are anti AI in art but have also completely bought into the hype about what it’s capable of and see it everywhere.
I wouldn't have thought about it from that angle, but it does make perfect sense that there will be luddites who think model output is bad because it's so good, and not just luddites who think model output is bad because it's bad but a middle manager will like it because it's cheap.

It also reminds me, and I'm loling, that there's a trend of teachers requiring essays to be written in tools with change history, and there are still students copying and pasting the whole model output in instead of just typing it themselves.

zedprime
Jun 9, 2007

yospos
Violated the number 1 rule of stealing things for your model: just don't ask. Can't prove this voice thing didn't just flop out of the algebra sounding like ScarJo; can't be helped.
