cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
not connecting things to the internet, ever, is always a winning move

Weatherman
Jul 30, 2003

WARBLEKLONK

cat botherer posted:

not connecting things to the internet, ever, is always a winning move

shame it didn't work for your posts

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I posted about this in the Game Development thread, but today I decided to ask ChatGPT to write me a demonstration program to do some file operations in Unreal Engine 5.3. I was not finding consistent documentation on how to work with it, and I'm compelled to use its abstraction over standard library stuff since it'll properly take care of different platforms (PC, Mac, console, my butt, who knows?). I wrote up a little paragraph telling it to specifically use one thing to: check if a directory exists; if it doesn't, create it; write a file to the directory that contains the text "Hello, World!"; then open the file and print its contents to the console.

It hemmed and hawed for a bit, and then pooped up a perfectly functional program. Compiled and ran perfectly fine. It did exactly what I requested. I figured out a few things that I couldn't sort out from Googling.
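
For the curious, here's a minimal sketch of that kind of program, assuming Unreal's FPlatformFileManager and FFileHelper abstractions (the "DemoOutput" folder name and the LogTemp category are placeholders for illustration, not what ChatGPT actually produced):

```cpp
#include "HAL/PlatformFileManager.h"
#include "Misc/FileHelper.h"
#include "Misc/Paths.h"

void DemoFileOps()
{
    IPlatformFile& PlatformFile = FPlatformFileManager::Get().GetPlatformFile();

    // Build a platform-independent path under the project's Saved directory.
    const FString Dir = FPaths::Combine(FPaths::ProjectSavedDir(), TEXT("DemoOutput"));
    const FString FilePath = FPaths::Combine(Dir, TEXT("Hello.txt"));

    // Check if the directory exists; if it doesn't, create it.
    if (!PlatformFile.DirectoryExists(*Dir))
    {
        PlatformFile.CreateDirectory(*Dir);
    }

    // Write a file containing the text "Hello, World!".
    FFileHelper::SaveStringToFile(TEXT("Hello, World!"), *FilePath);

    // Open the file and print its contents to the console/log.
    FString Contents;
    if (FFileHelper::LoadFileToString(Contents, *FilePath))
    {
        UE_LOG(LogTemp, Log, TEXT("%s"), *Contents);
    }
}
```

The point being: everything goes through the engine's abstraction, so the same code should behave the same on PC, Mac, or console.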

Tiny Timbs
Sep 6, 2008

Rocko Bonaparte posted:

I posted about this in the Game Development thread, but today I decided to ask ChatGPT to write me a demonstration program to do some file operations in Unreal Engine 5.3. I was not finding consistent documentation on how to work with it, and I'm compelled to use its abstraction over standard library stuff since it'll properly take care of different platforms (PC, Mac, console, my butt, who knows?). I wrote up a little paragraph telling it to specifically use one thing to: check if a directory exists; if it doesn't, create it; write a file to the directory that contains the text "Hello, World!"; then open the file and print its contents to the console.

It hemmed and hawed for a bit, and then pooped up a perfectly functional program. Compiled and ran perfectly fine. It did exactly what I requested. I figured out a few things that I couldn't sort out from Googling.

I mentioned elsewhere that in some circles the discussion on AI has morphed from how damaging it can be to digital artists to claiming that it's not good for anything, at all, and anyone saying it is is either deluded or a bitcoin grifter. Using natural language prompts to build and tailor code in an easily-verifiable way is one of its strongest applications and it's one I've used to save me a lot of time and grunt-work spent sifting through StackExchange.

The Artificial Kid posted:

The Chinese Room thought experiment is a bullshit appeal to incredulity. The sleight of hand lies in declaring that we’ve laid out rules for the person in the room that produce responses that represent an understanding of Chinese. We are supposed to believe that those rules can do the task perfectly but at the same time are so rote and mechanical as to prevent any room-scale “understanding” of Chinese. And yet nothing provably happens in our brains that can’t be reduced to a rule being followed in a room (even if you needed enough of a room to simulate every atom in our brains).

The problem isn’t your concept of how limited AI is, it’s the idea that we are somehow doing something different and special in our heads. I would argue it’s more likely we are just doing something more complex and cross-domain integrated.

This has always been my problem with the Chinese Room. People go through the thought experiment and draw one set of conclusions, and I go through it and think "Yeah no poo poo, that's just a model of how intelligence works."

Tiny Timbs fucked around with this message at 14:34 on Feb 2, 2024

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Tiny Timbs posted:

"Yeah no poo poo, that's just a model of how intelligence works."

What? No it isn't :psyduck:

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
Again the problem is that "intelligence" is poorly defined. But from a materialistic perspective, the brain is pretty clearly a collection of dumb parts that don't seem to have any idea what they're doing, but somehow manage to get things done as a whole.

Main Paineframe
Oct 27, 2010
However much individual neurons may or may not understand, the brain as a whole understands words as logical concepts which relate to each other primarily via their representation of real-world things, rather than as snippets of text that relate to each other primarily via how likely they are to be used in the same sentence.

The Chinese Room thought experiment isn't about intelligence, it's about understanding and consciousness. The entire point of it is that the machine can produce intelligent output without actually being strong AI or even having any real comprehension of the concepts it's handling.

It's an excellent thought experiment to apply to LLMs, which are not designed to link text to real-world concepts and objects. They're fundamentally just text-prediction systems that derive statistical relationships between words based on how often they're used together in an extensive dataset of human-authored text. This allows them to produce a passable imitation of human understanding even though they do not truly understand - they're merely running statistical analysis on examples of human understanding.
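
To make "statistical relationships between words" concrete, here's a toy sketch: a bigram counter. It is nothing like a real LLM internally, but it shows the flavor of next-word prediction from co-occurrence statistics:

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Toy bigram model: count how often each word follows another in a
// corpus, then "predict" the most frequent successor. Real LLMs learn
// neural representations over long contexts, but the objective has the
// same flavor: predict the next token.
int main()
{
    const std::string corpus = "the cat sat on the mat the cat ate the fish";

    std::map<std::string, std::map<std::string, int>> follows;
    std::istringstream in(corpus);
    std::string prev, word;
    while (in >> word)
    {
        if (!prev.empty()) ++follows[prev][word];
        prev = word;
    }

    // The most likely word after "the" in this corpus is "cat" (2 of 4 times).
    std::string best;
    int bestCount = 0;
    for (const auto& [w, n] : follows["the"])
        if (n > bestCount) { best = w; bestCount = n; }

    std::cout << "after 'the': " << best << "\n";
}
```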

Barrel Cactaur
Oct 6, 2021

The Chinese room and China brain are essentially descriptions of expert systems and neural networks, respectively. The ideal way to understand why the Chinese room is not intelligent is to cast it as what it is in implementation: a machine that takes a binary-coded input, runs it through a set of math transformations, and gives an output. The least complex possible form of this is a logic gate, progressing to an expert system as you make the input more complex and add more rules. As the obstinate literalist noted earlier, a sufficiently complex expert system's instructions are essentially a complete manual for the task, so studying the Chinese room manual long enough would eventually teach you Chinese, though in practice the best expert systems tend to have a comical error rate at tasks as open as translation. The neural net is a logical abstraction of how brains think: an interconnected web of simpler systems that essentially go step by step through the process. Instead of having a truly comprehensive ruleset, you have tiny pieces doing tiny things. Manually making a neural net is possible; it's how most visual programming languages work. To truly simulate brain-like behavior you need a training algorithm, essentially a way of optimizing your neural net. This is still a static process for fixed neural nets.
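
As an aside, "manually making a neural net" can be as small as this: three threshold neurons with hand-picked weights wired into XOR, no training algorithm involved (a human chose every weight, which is the point):

```cpp
#include <iostream>

// One artificial neuron: weighted sum of inputs, then a hard threshold.
double neuron(double x, double y, double wx, double wy, double bias)
{
    return (wx * x + wy * y + bias) > 0.0 ? 1.0 : 0.0;
}

// A hand-wired two-layer net computing XOR. No training: a human picked
// every weight, the same way you'd write rules in an expert system.
int xorNet(int a, int b)
{
    double h1 = neuron(a, b, 1.0, 1.0, -0.5);    // fires on a OR b
    double h2 = neuron(a, b, 1.0, 1.0, -1.5);    // fires on a AND b
    return (int)neuron(h1, h2, 1.0, -1.0, -0.5); // h1 AND NOT h2
}

int main()
{
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::cout << a << " XOR " << b << " = " << xorNet(a, b) << "\n";
}
```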

Model-based neural nets add an additional step, generating a weighted map for each neuron that modifies the math that neuron is doing differently based on the tags. This is trained with an output validator that's only good at sorting tags, comparing the input request to the output for tag correlation. Additionally, most of these systems work across the input iteratively, playing guess-and-check with their validation algorithms. So your neural net can modify its weighting to match the tags detected in the input and re-modify the output to catch localized errors. This is pretty cool; it means, for example, your image denoiser can match more than one exact style of painting or camera setup, and do it not just from valid images but from complete noisemaps. It's not more intelligent, just more useful.

And btw, the trick to an LLM remembering enough to hold a coherent conversation is just feeding past inputs and outputs into your next request. It's fairly trivial to do.
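
A sketch of that trick, where sendToModel is a hypothetical stand-in for whatever completion API you're actually calling:

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Turn { std::string role, content; };

// Hypothetical stand-in for a real model API call.
std::string sendToModel(const std::string& prompt)
{
    return "(model reply to: " + prompt + ")";
}

// The "memory" trick: every request re-sends the whole conversation so
// far, so the model sees its own past answers as part of the prompt.
std::string chat(std::vector<Turn>& history, const std::string& userMsg)
{
    history.push_back({"user", userMsg});

    std::string prompt;
    for (const Turn& t : history)
        prompt += t.role + ": " + t.content + "\n";

    std::string reply = sendToModel(prompt);
    history.push_back({"assistant", reply});
    return reply;
}

int main()
{
    std::vector<Turn> history;
    chat(history, "Remember the number 42.");
    // The second request carries the first exchange along with it, which
    // is all the "remembering" there is.
    std::cout << chat(history, "What number did I tell you?") << "\n";
}
```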

For context, to simulate a human brain we currently estimate we would need 100,000,000,000 neuron-simulating neural nets, each individually trained to mirror a single human neuron and each consisting of a 7-layer neural net with quite complex inputs and outputs. Now, a lot of that is taken up running the body and inefficiently remembering things, and hopefully you can make neuron templates a little bit generic. Take that as a worst-case estimate, but you can see why I still call it a robot parrot. It sure did learn those words, and it sure knows how to remix them, but it's simply a really fancy imitation of a small part of a brain optimizing for a small task. And it still can't really learn truly new information at runtime.

So essentially: do that on the fly, learning as you go, alongside every other possible human task, preferably using physical tools as well as digital ones, and maintain a coherent model of the universe based on it that you reference across tasks, and keep it up for 40-80 years. Simple, right? We're almost there!

Hopefully we obliterate capitalism before they decide the proles are obsolete enough to starve and replace with robots that are bad at their jobs but don't complain.

Tiny Timbs
Sep 6, 2008

Mega Comrade posted:

What? No it isn't :psyduck:

What? Yes it is :psyduck:

All models are wrong, some are useful. It's a model for intelligence in the sense that it describes a system that takes sensory inputs, applies rules and heuristics, and then provides desired outputs. This is similar to early black box models for human cognition, which still see some use. The reason it's an evocative thought experiment is that it's representing mundane activities that occur at a very small scale as happening at a large, human scale, which makes us think about them in a different way.
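
The rote extreme of the room is easy to caricature in code: a pure lookup table from input symbols to output symbols, with nothing anyone would call understanding inside it. (A deliberately crude sketch; the thought experiment's rulebook would be unimaginably larger, which is part of the trick.)

```cpp
#include <iostream>
#include <map>
#include <string>

// The room as a literal rulebook: the "person inside" matches incoming
// symbols against rules and copies out the prescribed response, without
// understanding either side of the exchange.
int main()
{
    const std::map<std::string, std::string> rulebook = {
        {"你好", "你好!"},            // "hello" -> "hello!"
        {"你会说中文吗?", "当然会."}, // "do you speak Chinese?" -> "of course."
    };

    const std::string slip = "你好";  // note passed into the room
    const auto rule = rulebook.find(slip);
    std::cout << (rule != rulebook.end() ? rule->second : std::string("..."))
              << "\n";                // note passed back out
}
```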

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Vending machines are intelligent.

Nervous
Jan 25, 2005

Why, hello, my little slice of pecan pie.
Sounds like we need an intelligence alignment chart.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Tiny Timbs posted:

What? Yes it is :psyduck:

All models are wrong, some are useful. It's a model for intelligence in the sense that it describes a system that takes sensory inputs, applies rules and heuristics, and then provides desired outputs. This is similar to early black box models for human cognition, which still see some use. The reason it's an evocative thought experiment is that it's representing mundane activities that occur at a very small scale as happening at a large, human scale, which makes us think about them in a different way.

Proteins in our body take sensory inputs (physical contact with other poo poo), apply rules (does my protein part fit into this other part?), and perform a function to provide a desired output (the entire shape of the protein exists to perform a specific function).

Would you consider proteins to be intelligent?

karthun
Nov 16, 2006

I forgot to post my food for USPOL Thanksgiving but that's okay too!

Tiny Timbs posted:

I mentioned elsewhere that in some circles the discussion on AI has morphed from how damaging it can be to digital artists to claiming that it's not good for anything, at all, and anyone saying it is is either deluded or a bitcoin grifter. Using natural language prompts to build and tailor code in an easily-verifiable way is one of its strongest applications and it's one I've used to save me a lot of time and grunt-work spent sifting through StackExchange.

I've also used it to make a bunch of boilerplate code for different APIs that I have never used. It's really easy to get the initial part working and then work on the greater implementation. Where ChatGPT can get into problems is that it's super confident, even when it gets the fundamentals very wrong. The closest analogy I have is that it's a college sophomore: it can make a lot, but when it's wrong it doesn't know that it's wrong and it can't correct itself. The other real issues are that it can't learn from its mistakes, and that we don't know how much ChatGPT is going to cost after Microsoft is done subsidizing it.

karthun fucked around with this message at 19:14 on Feb 2, 2024

Baronash
Feb 29, 2012

So what do you want to be called?

Boris Galerkin posted:

Proteins in our body take sensory inputs (physical contact with other poo poo), apply rules (does my protein part fit into this other part?), and perform a function to provide a desired output (the entire shape of the protein exists to perform a specific function).

Would you consider proteins to be intelligent?

Baronash posted:

How tech companies are using/selling ML (or whatever the current industry branding campaign is calling it) is probably relevant in this thread, but we have a whole thread to talk about AI, how it works, and how "intelligent" it is right here.

Baronash fucked around with this message at 18:20 on Feb 2, 2024

LASER BEAM DREAM
Nov 3, 2005

Oh, what? So now I suppose you're just going to sit there and pout?
I wonder if services like GitHub Copilot are profitable, or at least break even to run currently. The large logistics company that I work for just opened up Copilot to any developer that wants a license. I can’t imagine that comes cheap.

Tiny Timbs
Sep 6, 2008

karthun posted:

I've also used it to make a bunch of boilerplate code for different APIs that I have never used. It's really easy to get the initial part working and then work on the greater implementation. Where ChatGPT can get into problems is that it's super confident, even when it gets the fundamentals very wrong. The closest analogy I have is that it's a college sophomore: it can make a lot, but when it's wrong it doesn't know that it's wrong and it can't correct itself. The other real issues are that it can't learn from its mistakes, and that we don't know how much ChatGPT is going to cost after Microsoft is done subsidizing it.

Yeah that becomes a huge problem when you start trying to use it to do complex things that aren’t easily verifiable. ChatGPT will do things like tell you when something is missing in its dataset or when it’s not confident in an answer, and transparency like that is a pretty big deal for building trust in human-machine systems. The over-confidence kills that trust though, because it can really screw you over.

Edit: uh I saw the note about moving AI discussion to the AI thread but this seems borderline since it’s about how people use the product

Tiny Timbs fucked around with this message at 19:29 on Feb 2, 2024

Baronash
Feb 29, 2012

So what do you want to be called?

Tiny Timbs posted:

Yeah that becomes a huge problem when you start trying to use it to do complex things that aren’t easily verifiable. ChatGPT will do things like tell you when something is missing in its dataset or when it’s not confident in an answer, and transparency like that is a pretty big deal for building trust in human-machine systems. The over-confidence kills that trust though, because it can really screw you over.

Edit: uh I saw the note about moving AI discussion to the AI thread but this seems borderline since it’s about how people use the product

Yeah, machine learning and AI are dominating tech investment, even if most of the market is built on rent-seeking reskins of ChatGPT, so it's bound to come up in a thread about tech industry skepticism. I just think the more philosophical questions are best left to the dedicated thread.

Ruffian Price
Sep 17, 2016

Boris Galerkin posted:

Proteins in our body take sensory inputs (physical contact with other poo poo), apply rules (does my protein part fit into this other part?), and perform a function to provide a desired output (the entire shape of the protein exists to perform a specific function).

Would you consider proteins to be intelligent?

My gut feeling says yes, but

Weatherman
Jul 30, 2003

WARBLEKLONK

Ruffian Price posted:

My gut feeling says yes, but

:lmao:

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

LASER BEAM DREAM posted:

I wonder if services like GitHub Copilot are profitable, or at least break even to run currently. The large logistics company that I work for just opened up Copilot to any developer that wants a license. I can’t imagine that comes cheap.

They are not.
It's estimated to be costing Microsoft at least $40 per month per user, and they charge $10/$19 for it.

Everyone is going for the "lose money and outlast the competition" approach to pricing.

Kwyndig
Sep 23, 2006

Heeeeeey


Which is the dumb way to do it. How much money has Uber set on fire at this point?

jaete
Jun 21, 2009


Nap Ghost

Kwyndig posted:

Which is the dumb way to do it. How much money has Uber set on fire at this point?

Interesting definition of "dumb". Isn't the Uber founder a literal billionaire? And besides, Uber still exists, doesn't it?

Kwyndig
Sep 23, 2006

Heeeeeey


Uber is a money pit: they throw investor money in with the promise that "any day now" they'll corner the market and can jack the rates up above traditional taxi cabs and into true profitability. Uber has literally destroyed 30 billion dollars in capital in this quest since its founding. Some success story.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Kwyndig posted:

Uber is a money pit: they throw investor money in with the promise that "any day now" they'll corner the market and can jack the rates up above traditional taxi cabs and into true profitability. Uber has literally destroyed 30 billion dollars in capital in this quest since its founding. Some success story.

Well yeah it is for them? The execs and shareholders made a bunch of money which is what matters

OddObserver
Apr 3, 2009
At any rate, it's more reasonable for Microsoft to subsidize stuff, since they have other products (like Office) with more solid business models.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
This was Microsoft's MO all through the '90s and early '00s.
They are very experienced at it, and their shareholders are probably fine with it (as long as it pays off eventually).

It's probably gonna be like the dotcom bubble: like the internet, this technology is too useful to go away, but when it pops, loads of companies will be ruined, only a handful will make it through, and those few will then have huge control over the market.

Mega Comrade fucked around with this message at 14:20 on Feb 3, 2024

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Clarste posted:

pretty clearly a collection of dumb parts that don't seem to have any idea what they're doing, but somehow manage to get things done as a whole.

This really does describe most bureaucracies.

Staluigi
Jun 22, 2021

Volmarias posted:

This really does describe most bureaucracies.

That's because somewhere in every bureaucracy, buried even beneath the notice of the companies that unknowingly depend on them to function, there's some 64-year-old lady named Ethel or Vanna with literal actual file cabinets and a tea mug warmer, and they are the silk strand between normal weekly operations and the company going up in flames with a cartoon "fwoomp" sound.

BabyFur Denny
Mar 18, 2003

Kwyndig posted:

Uber is a money pit: they throw investor money in with the promise that "any day now" they'll corner the market and can jack the rates up above traditional taxi cabs and into true profitability. Uber has literally destroyed 30 billion dollars in capital in this quest since its founding. Some success story.

I thought Uber was profitable? What's your point?

Kwyndig
Sep 23, 2006

Heeeeeey


My point is no it isn't, and that Wall Street is insane.

The Lone Badger
Sep 24, 2007

I mean, bringing in investor money is like profiting. Money is money.

Kwyndig
Sep 23, 2006

Heeeeeey


Investors expect a return, it's not free money.

The Lone Badger
Sep 24, 2007

Kwyndig posted:

Investors expect a return, it's not free money.

They can expect all they like; they're not gonna get it. And in the meantime, their money spends like anyone else's.

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

BabyFur Denny posted:

I thought Uber was profitable? What's your point?

IIRC they had their first profitable quarter ever last year but ended the year at a loss (and many, many billions in the hole overall). Though it has technically been profitable for certain people. :capitalism:

nachos
Jun 27, 2004

Wario Chalmers! WAAAAAAAAAAAAA!
Everyone who needed to make money from Uber going public already has. The system worked exactly as intended for early investors.

Jose Valasquez
Apr 8, 2005

Uber made a profit of about a billion dollars from Q3 2022 to Q3 2023, their first profitable 12-month period.
Another 27 years like that and they'll be back in the green.

BabyFur Denny
Mar 18, 2003

Evil Fluffy posted:

IIRC they had their first profitable quarter ever last year but ended the year at a loss (and many, many billions in the hole overall). Though it has technically been profitable for certain people. :capitalism:

The last two reported quarters were profitable, and I wonder how you know the total profit/loss for the whole year, considering their 2023 annual report is still not out. Do you have insider knowledge?

Main Paineframe
Oct 27, 2010

BabyFur Denny posted:

I thought Uber was profitable? What's your point?

They only had their first real profitable quarter last year, after running deeply in the red for more than a decade.

The problem is that it was smart to run things that way. The tech industry has developed market dynamics and incentives that cause "running a deeply unsustainable business model and losing a billion dollars every quarter" to actually be an optimal way of doing things for companies. The ones that try to run things like normal companies and actually make a profit mostly get outcompeted by the ones running at a loss.

That leads to a number of adverse side effects that wouldn't be nearly as common in a healthier industry. For example, it's the basic root cause of enshittification: companies build their services better and cheaper than they can actually afford to, relying on investor money to forever subsidize the shortfall. When the investor money starts to run dry, the company has to hurriedly rush to close those gaping holes in their balance sheet by making everything worse and more expensive. The same effect applies to workforces, too - the tech industry's been shedding jobs at a rapid pace since interest rates drove away the investors.

Another effect is trend-chasing. The investors get hooked on trendy buzzwords, and as soon as the industry realizes which buzzword loosens investors' pursestrings, companies are lining up around the block to announce a new NFT feature or ChatGPT integration. Even if these things are a big waste of money, they pay for themselves by bilking investors.

There's also an impact in that "tech" companies tend to outperform "non-tech" companies just from the massive advantage of being able to run at a loss for so long. That's a big part of how Uber was able to take on the taxi companies. It undercut them on price while paying drivers more and handing out plenty of free ride coupons, using investor money to run at a loss the taxi companies couldn't afford to match. At the same time, they were able to burn even more money on building up services well beyond what the taxi companies could manage.

nachos
Jun 27, 2004

Wario Chalmers! WAAAAAAAAAAAAA!
FWIW those incentives are no longer there, because high interest rates dried up VC money around January 2023, which coincidentally is when every startup started to focus on layoffs, preventing churn, and sustainable unit economics. The only exception is AI, which is why every new business or product seemingly has some kind of AI feature: it's the only viable way to attract and retain investor capital right now.

Startups that went from pre-seed or seed to closing a Series A round between 2022 and 2023, and WITHOUT any AI features, are probably going to be very good businesses, because they were likely able to establish strong unit economics at a much earlier stage than startups in the prior bull market.

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

BabyFur Denny posted:

The last two reported quarters were profitable and I wonder how you know the total profit/loss for the whole year considering their annual report 2023 is still not out. Do you have insider knowledge?

I've only seen Q3, and it was a smaller gain than the Q1 loss, though if Q2 was profitable then sure, it's probably net positive going into Q4. So maybe, after going tens of billions of dollars into the hole, they'll finally have a small win that, if repeated for the next decade and change, might make the company a net positive.


If I had insider knowledge I wouldn't be using it to argue on a dead gay comedy forum.
