shoeberto
Jun 13, 2020

which way to the MACHINES?

Riven posted:

Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.

That's what has been eating me. We already have many systems built on knowledge graphs, which I would argue are closer to how human brains work with expertise. Even if you don't know the answer, you know where to look to verify the knowledge. I don't know Jennifer Aniston's birthday, but I know that I don't know it, and I also know where to look up the specific fact.

These systems work much more like very convincing bullshitters, but without the ability to navigate the social situations where they would get called out. They're just coldly giving out probabilistic answers with zero determinism. Most of these companies are going to find this out the hard way when the rubber meets the road, just like we went through with the blockchain hype.
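That "likelihood of the next word" framing is easy to see in miniature. This is a toy bigram counter, nothing like a real transformer, and the corpus is made up, but it shows the basic move of emitting the statistically most likely continuation rather than "knowing" anything:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies; a toy stand-in for a learned distribution."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def next_word(counts, word):
    """Return the most frequent next word, or None if the word was never seen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(next_word(model, "the"))  # "cat" follows "the" most often in the corpus
```

The model has no concept of cats or fish, just co-occurrence counts, which is the whole point of the complaint.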

I mentioned to my friend yesterday that we'll probably see GPT-based startups getting naming rights to sports stadiums in the next year or so because we're getting back on this roller coaster, apparently.


shoeberto
Jun 13, 2020

which way to the MACHINES?

cinci zoo sniper posted:

GPT-3’s primary improvement came from it being more than 10 times larger than GPT-2. Model architecture has not changed significantly since “GPT-1”, in fact - OpenAI “just” keeps getting better at training and fine-tuning the same model, and keeps getting more money to spend on GPUs.

Yeah, this is my best understanding of what's considered the quantum leap here. The algorithms are not fundamentally all that different from what we have seen for a decade-plus. More refined maybe, but I haven't seen anything to suggest they have stumbled onto the language model equivalent of fusion energy. It's primarily improved due to the scale and scope of the data being used to train the model.

Probabilistic language models aren't the AI that people think they are. I don't think we'll get there until you can tie it in with some sort of meaningful knowledge graph that can keep the responses on rails, which is a very hard problem to solve. Not impossible, but my personal feeling is that we're a decade out from this being reliable enough to match the expectations being set for the tech today.

shoeberto
Jun 13, 2020

which way to the MACHINES?

cat botherer posted:

Yeah, I'm a data scientist who has used GPT among other large language models to augment/engineer new features for more specialized data for use in downstream models. They are useful in some situations, but there is nothing particularly amazing about them and they give stupid results all the time. All they really do is find some projective subspace where semantically "close" things are close together (again, this often is wildly off). I think non-specialists tend to fill in the gaps of what they don't understand about ML with things that are more optimistic/cooler than reality.

Not a data scientist but I am working in a similar context, and that's been my prevailing experience as well.

That's why it feels so much like blockchain to me. Hype because of what people imagine it can do, when it's really only useful in highly specific situations (which are super technical and nitty-gritty). But a lot of people are going to try to get very rich off of it, and may even achieve that, until they have to actually start generating revenue.

shoeberto
Jun 13, 2020

which way to the MACHINES?

SniHjen posted:

Edited from the original:

This is the problem I have with this discussion, asking ChatGPT a question, is the same as asking anyone a question.
It's not even a question of accurate, good, or perfect. Why are you trusting random people on a dead forum?

The problem with this comparison is that it's not controlling for the perception of expertise. It's realistic to assume humans who are otherwise discerning about human expertise will perceive that a very advanced generative AI also inherently has expertise. A big part of the hype cycle right now seems to be built around the assumption that expert systems can trivially be built with this tech, which just isn't the case.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Main Paineframe posted:

Cherry-picked or not, they're examples of how ChatGPT is unreliable. And more than that, it shows how they're unreliable: because ChatGPT does not actually understand the question and is just mashing together sentences based on how often words are used next to each other on the internet. It's not really much different from Googling stuff and just taking the top result, except that it's billed as AI and rephrases everything into natural language so people think it's actually useful for anything.

Following up on the expert system thing, the really loving hard problem is how do you vet anything that it says for accuracy at scale?

We have a hard enough time moderating human generated content at scale. A nondeterministic algorithm that spits out convincing-enough misinformation could be an absolute trainwreck. I'm not sure if it's dangerous per se, but a lot of companies betting the farm on this are going to learn some hard lessons very quickly. See: CNET

shoeberto
Jun 13, 2020

which way to the MACHINES?

Boris Galerkin posted:

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I had a follow up post I made separately:

shoeberto posted:

Following up on the expert system thing, the really loving hard problem is how do you vet anything that it says for accuracy at scale?

We have a hard enough time moderating human generated content at scale. A nondeterministic algorithm that spits out convincing-enough misinformation could be an absolute trainwreck. I'm not sure if it's dangerous per se, but a lot of companies betting the farm on this are going to learn some hard lessons very quickly. See: CNET

I think we agree on some level. I don't think this is the end of the world, but I think a lot of money and effort is going to be wasted on this, just like blockchain.

shoeberto
Jun 13, 2020

which way to the MACHINES?
How many folks itt debating ChatGPT have been involved in shipping a product to market based on ML/AI/big data?

shoeberto
Jun 13, 2020

which way to the MACHINES?

Sagacity posted:

Perhaps the scams are going to be harder to execute now that free investor money is starting to dry up

Crypto was tailor-made for con artists and pyramid schemes, so the comparison gets muddy at a point. Serious orgs are throwing real money at this - see Microsoft. But I do think that tighter capital markets, less opportunity to get individual investors involved, and an overall higher technical barrier to entry are going to keep it from being a shitshow of that magnitude.

Still, I'm maintaining my prediction that a startup will get naming rights to a pro sports stadium. It's going to be a company that makes generative content for dogs to watch when their owners are out of their house, something loving stupid like that.

shoeberto
Jun 13, 2020

which way to the MACHINES?
It's an example of using the wrong tool for the job because the wrong tool has a lot of zeitgeist around it, just like blockchain. There are many, many better ways to model the problem of differential diagnoses in software than with a generative language model.

To use an example of a problem space that's similar enough: You could use generative AI to re-implement a taxonomic tree when identifying species of plants. But why? The taxonomy exists; it's already a tree; tree structures are easy to build and allow people to navigate via UI. Would you rather have a probabilistic answer about what the plant species might be based on a language model, or a deterministic answer by following a well-defined taxonomy?
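To make that concrete, here's a sketch of the deterministic version with a completely made-up two-level taxonomy (the species and traits are invented for illustration); the point is that the same input always walks to the same answer:

```python
# A tiny hand-rolled taxonomy: deterministic navigation, no model required.
# Species and traits are fictional examples, not real botany.
taxonomy = {
    "needle-like leaves": {
        "cones present": "pine",
        "berries present": "juniper",
    },
    "broad leaves": {
        "lobed edges": "oak",
        "smooth edges": "magnolia",
    },
}

def identify(leaf_type, trait):
    """Walk the tree; returns a species, or None if the path doesn't exist."""
    return taxonomy.get(leaf_type, {}).get(trait)

print(identify("broad leaves", "lobed edges"))  # oak, every single time
```

No probability involved: either the path exists in the taxonomy or it doesn't, which is exactly what you want for identification.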

shoeberto
Jun 13, 2020

which way to the MACHINES?

Doggles posted:

I just got a notification from a f2p game I play announcing their "login with Twitter" option won't work after the API change. They're telling all users using that option to change to another login method this week.

That's a wild one. The cost of being an oauth provider has to be negligible compared to the benefit of having users associate their account on your service with everything else they do. Like it's not even some high traffic, easily abusable thing. It's an entirely different stack.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Motronic posted:

And you're 100% correct that there will be/already is a screen scraping python library that will make the API 100% unnecessary for low volume accounts like this.

No way - Twitter's army of SREs and data scientists will quickly spot the traffic patterns and deploy mitigation measures. They probably have at least a few services dedicated to actively tracking and reporting this very threat.

shoeberto
Jun 13, 2020

which way to the MACHINES?

woke kaczynski posted:

I vaguely recall reading that just putting static, subject-based ads on relevant websites is more effective than all the algorithmic microtargeting, but of course there's an entire ecosystem built on convincing people otherwise.

drat near all big data/data-driven intelligence has a lot of motivated reasoning behind it. Real data scientists care an awful lot about things like "confidence measures", but BD and ad sales reps do not give a poo poo. My company does just fine with contextual ads.

shoeberto
Jun 13, 2020

which way to the MACHINES?

cat botherer posted:

It's a huge amount of work to do that properly, and probably out of reach for non-monetized joke stuff like this. (I'm not nearly up on the ins and outs of ChatGPT training data, etc., but this sort of thing is generally just a lot of work)

it's a lot of mostly terrible work

shoeberto
Jun 13, 2020

which way to the MACHINES?

Morrow posted:

It's related to interest rates. A lot of tech companies took on debt and borrowed to expand rapidly and now that the era of easy money is over they're trying to cut down.

I forget where I saw it, but I remember seeing something about how Google's habit of acquiring companies just to lock up IP was resulting in them having a lot of engineers on payroll with no actual job responsibilities. Like, taking on the overhead of new employees was just the cost of doing business to keep potentially useful IP out of the hands of their competitors.

As a separate anecdote, my coworker talked about a friend at some other large tech company (I forget which) that had some mandate to grow staffing by, like, thousands. There wasn't a specific project requiring the staffing, just a stated business objective to grow staffing.

Anyways, it's not entirely shocking to see that balloon contract when you're no longer gambling with free money.

shoeberto
Jun 13, 2020

which way to the MACHINES?

StumblyWumbly posted:

I don't think there were folks conspiring to make sure something that impacted 6% of the work force did not impact your dad specifically.

Managers have to make recommendations for cuts; it's not as wild as you might think. I was just having a conversation this week with a coworker who knew of big orgs that mandate a distribution of negative performance reviews, to keep managers from singing the praises of all their direct reports and thereby always pushing for raises/promotions while also shielding them from being laid off. Apparently Google is literally doing that, too.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Jose Valasquez posted:

There was speculation that this was being done to set up performance based layoffs at Google, but that is not what happened. There was no discernible pattern based on performance or any other criteria with the layoffs. To my knowledge direct managers were not consulted on the cuts

Oh, cool.

shoeberto
Jun 13, 2020

which way to the MACHINES?

OddObserver posted:

I suspect you have to believe that to get an MBA.

Edit: and I don't just mean layoffs. I have seen non-firing reorgs that just waste people's expertise making people re-learn stuff to make one org chart or another look nicer.

My first career job was at a big-ish defense contractor (no, not Raytheon or Northrop), and shortly after I started there was a new head for our division. He immediately shuffled the org chart for ??? reasons.

The guy who mentored me said it happens every time. MBA types have no real ideas or skills so they re-arrange the chairs to prove to their bosses that they're doing something. The only thing that matters is that it helps them climb the ladder.

shoeberto
Jun 13, 2020

which way to the MACHINES?

dr_rat posted:

I still really hate the term human resources. I don't know who came up with it, but gently caress em, now and forever. The only "good" thing you can say about it is that it reflects the way a lot of people in the corporate world think about employees.

My company calls it "people ops" which I kinda like as an alternative. Most HR that I've experienced otherwise has been pretty dehumanizing.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Boris Galerkin posted:

Although it could also go wrong and give wrong information, as with Google’s recent JWST fiasco.

But like personally, a lot of times I google something relating to a person, place, or thing and I don't really want to click on any links, I just wanna know briefly what it is and go on with my life. The way Google pulls info from Wikipedia is immensely useful for me.

The fundamental issue for me with this is:
1. Yes, snippets of relevant info at a glance are useful.
2. In industry, we are generally pretty good at full text document search and extracting relevant info from something as well-structured as Wikipedia. This type of feature is reliable and suits the needs for most use cases.
3. LLMs are entirely nondeterministic and probabilistic, which means they may accurately summarize the information that you want. But they also might probabilistically choose to summarize entirely wrong information. Either way, it's going to look coherent enough that it's hard to tell the difference.
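Point 2 in miniature: a toy inverted index. The documents and queries below are invented for illustration, but the behavior is what matters; identical queries always return identical results, which is the property an LLM summary gives up:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of doc ids containing it: same input, same output."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term (boolean AND)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {
    1: "JWST is a space telescope",
    2: "Wikipedia summarizes the JWST mission",
    3: "search engines index documents",
}
idx = build_index(docs)
print(search(idx, "JWST"))  # matches docs 1 and 2, deterministically
```

Real full-text search adds ranking, stemming, and so on, but it stays a deterministic pipeline from query to result set.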

I work in search. I feel like I understand the tech. I do not see how this moves the expectation of how search operates for the average consumer in the way that the hype suggests. The way that Google and Bing are betting the farm feels like it's all about investors, not about addressing a real need. I have not spoken to a single person who isn't mainlining tech news who gives a poo poo, or even is aware, about any of this.

shoeberto
Jun 13, 2020

which way to the MACHINES?

cat botherer posted:

Charging for search is a pretty big ask for most people though. Of course that does better enable them to not be lovely.

It's an interesting model. Neeva is another one trying it out. I get the intent, but it seems niche, which in turn makes it hard to get enough traction to make results amazingly good.

At any rate, all of these ideas of shaking up search seem to not address some of the more fundamental issues - like how a lot of people just don't use search engines, and instead use Facebook/TikTok/Reddit to find what they want.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Oxyclean posted:

Is AI that resource/energy intensive? I know it takes a good deal of processing power, but I have the impression it's very far from "crypto farm" levels.

Training the models is usually the worst part ime. I haven't specifically worked with some of the new LLMs, but the vast majority of ANN stuff requires GPU computing to train models in less than a century's worth of CPU time. Assuming model training is an ongoing process, that's a lot of ongoing GPU resources, but it's not decentralized the same way that crypto mining was.
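Some back-of-envelope numbers on the "century's worth of CPU time" point, using the common ~6 × params × tokens FLOP approximation. All figures here are ballpark illustrations (the rates especially), not exact measurements:

```python
# Rough training-compute arithmetic for a GPT-3-scale model.
params = 175e9        # GPT-3-scale parameter count
tokens = 300e9        # roughly the reported training-token count
total_flops = 6 * params * tokens  # common rule-of-thumb estimate

cpu_rate = 1e11       # optimistic modern multicore CPU, FLOPs/sec
cluster_rate = 1e17   # very roughly a thousand data-center GPUs, FLOPs/sec

year = 3600 * 24 * 365
print(f"CPU-only: {total_flops / cpu_rate / year:,.0f} years")
print(f"GPU cluster: {total_flops / cluster_rate / 86400:.0f} days")
```

Even with generous assumptions the CPU-only figure lands in the tens of thousands of years, which is why this class of model only exists where someone is paying for racks of GPUs.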

shoeberto
Jun 13, 2020

which way to the MACHINES?

Humphreys posted:

I had a little chuckle thinking that the writers of places like Ars Technica might be really worried about even the shittiest AI writing better articles than the crap they spew out.

Plenty of tech news places will be blown out of existence by this, but I'm not sure Ars is specifically a good example?

shoeberto
Jun 13, 2020

which way to the MACHINES?

Electric Wrigglies posted:

How do you guys go with dual sim phones? Do you need to register two plans on the one phone?

Used to have issues with it but I solved it by switching to this strategy

shoeberto
Jun 13, 2020

which way to the MACHINES?
I think it's important to realize the amount of smoke and mirrors required in game dev to give an experience that feels "organic". Things that feel surprising are almost always designed to feel that way without you realizing it. Devs do an awful lot of work to keep you on rails while making you feel empowered. Generative AI just isn't capable of matching that experience without an incredible amount of hand-holding, which means:

Ghost Leviathan posted:

All this sounds like way more trouble than its worth for an inferior experience. Games are bloated with generic filler content far too much already. They don't need more.

I mean maybe it's a useful tool in some limited contexts. But also think about any memorable side quest you've ever played in an Elder Scrolls or Fallout or Baldur's Gate or whatever. An AI just isn't going to be able to make free-associative zany poo poo like that, quite literally it's a fundamental constraint of the way the model is designed.

shoeberto
Jun 13, 2020

which way to the MACHINES?
Yeah that's basically just the next evolution of asset swap games.

shoeberto
Jun 13, 2020

which way to the MACHINES?
You have to have liquid capital (literally physical cash) to give to customers when they withdraw. Loans take time and paperwork. The feds don't want banks saying "hey customer I can get you your cash but I'm going to need about 2-3 weeks".

shoeberto
Jun 13, 2020

which way to the MACHINES?
Speculation on Meta but my take is:
* TikTok is eating their lunch
* They are in a terrible position wrt appeasing far-right users, either catering to them and alienating normies or blocking controversial content and having really public backlashes
* The metaverse is just lmao. LLMs are, in some ways, a solution in search of a problem, but you can at least squint your eyes and see the shape of how they could turn into something consumers want.
* Huge target on their back for regulation from all sides, from "conservatives are being silenced!!" to data protection to corporate responsibility wrt foreign governments amplifying disinformation.

Absolutely nobody *likes* FB as a platform, governments are gunning to totally hollow out their core business model, and Zuck decides to just light $10bn on fire for a vanity project.

shoeberto
Jun 13, 2020

which way to the MACHINES?
Google's growth was all about exploiting data, but to their credit, they at least did diversify and had a few hit products beyond search/ads. They have room for a soft (if bumpy) landing with all of this regulatory pressure.

Meta's only play for real revenue is harvesting data and being unethical, and everyone is sick of it. There is no soft landing. It's astonishing that there isn't a single person at the executive level who can talk Zuck down from VR as their pivot.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Vegetable posted:

People are overstating the problem. Meta is akin to Microsoft in the 2000s. They made some dumb bets (Windows Phone) but their core products will continue to be market leaders. TikTok is big but it isn't going to take down Facebook and Instagram altogether. The market is big enough for all three. That's assuming TikTok even survives the political risks coming its way.

Extending this analogy, in what ways does Meta make a product that draws in massive enterprise contracts that businesses absolutely 100% do not have a viable alternative to?

shoeberto
Jun 13, 2020

which way to the MACHINES?
Someone I work with said that he doesn't think any of these LLMs are going to be anything beyond a common commodity before too long. All the secret sauce is how you build on top of it.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Aramis posted:

Before long? You can, today, download the fully trained weights for a model surprisingly close to GPT3.5 and run it at a perfectly reasonable speed on a garden variety CPU. The only thing holding the floodgates closed is licensing liability concerns.

Liability concerns, and also just ignorance. ChatGPT is "AI" for an enormous amount of people who think it's cool but don't actually understand the tech. As the hype cycle cools they're going to lose that monopoly on being The AI for the public.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Needs more drool

shoeberto
Jun 13, 2020

which way to the MACHINES?
There was a recent article where manufacturers were, in fact, lamenting that consumers didn't want to connect their fancy smart appliances to wifi: https://arstechnica.com/gadgets/2023/01/half-of-smart-appliances-remain-disconnected-from-internet-makers-lament/

shoeberto
Jun 13, 2020

which way to the MACHINES?
Home automation can be cool for certain applications, like lights and thermostats, but the problem is that those were popular and all the companies thought that everything in the home should be "smart", no matter the impedance mismatch between the function of the appliance and the need for automation. :capitalism:

shoeberto
Jun 13, 2020

which way to the MACHINES?
Hey! A few technical topics that I have some expertise in even though I have no idea what y'all are talking about. Perfect time to inject my opinion.

Baronash posted:

Sure, but the context seems to be whether the result could be a privacy concern like the output of a camera might. I think any rendering of a point cloud is stripped of enough visual information that it wouldn't be of much interest to anyone.

The answer is it depends, mostly on the resolution of the sensor but also how much information is being captured. Like if a LiDAR sensor was doing a recorded sequence of scans that captured someone walking around naked, you probably won't be able to tell whether the carpet matches the drapes. But you could get a pretty accurate 3D representation of their movements and the space that they're in, and if the resolution is sufficiently high, you can get a lot of spatial details. It's not (likely) to be spank bank material but I still don't think anyone is going to want a highly accurate 3D capture of their otherwise private spaces and movement.

source: my first real job was working with LiDAR at a defense contractor that specialized in remote sensing tech (satellites, radar etc)
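A rough sense of what sensor resolution means spatially: angular resolution times distance gives you the spacing between captured points. The sensor numbers below are invented for illustration, not specs of any real device:

```python
import math

# Point spacing on a target, from a scanner's angular resolution.
angular_res_deg = 0.1   # hypothetical beam spacing, in degrees
distance_m = 5.0        # target 5 m from the sensor

# Chord length between two adjacent beams at that distance.
spacing_m = 2 * distance_m * math.tan(math.radians(angular_res_deg) / 2)
print(f"~{spacing_m * 100:.1f} cm between points at {distance_m} m")
```

Sub-centimeter spacing at room distances is enough to reconstruct bodies and interiors in the kind of detail the privacy concern is about.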

Boris Galerkin posted:

But why exclude those? I want them included because it doesn't make sense to exclude them. I'm basically saying I want every adult (18+) in America to be asked "do you own or rent the place in which you live" and the answer being own or rent. I'll accept that a couple living together can both be counted as being in the "own" category but I don't see why adults living with roommates and such should be excluded.

e: I don't care about the household as a unit, I care about the population. A standard college apartment situation with 2 people sharing a single unit should be counted as +2 to the rent category; a house where the owner lives in and rents out the extra room to a friend should be counted as +1 to own and +1 to rent. Someone who lives in their own condo alone and owns another condo that they rent out should still count as +1 to own, because the number of housing units he owns is irrelevant as he's still 1 person.


Blue Footed Booby posted:

I found articles that actually delved into that. I'm too lazy to dig through my browser history at this point, but from what I can tell it is in fact false. Basically, "housing units" generally includes apartments and other multi-unit buildings (thus the name). Also, there are stats on the number of people in the average household and the number of heads of household with dependents. For roommates to take renters from ~35% to a clear majority would require one of those numbers to be extremely wrong.

It's all a huge pain in the rear end, though, because none of the articles I could find defined all their terms. I'm not confident enough to fight anyone, but it's enough I wouldn't take on faith any claims that actually hinge on most people being renters.

There's a few things to clarify here.
1. The actual Census survey form isn't very useful for understanding what's being captured because the decennial Census itself doesn't ask a whole lot of detailed questions - they go for volume rather than detail. The American Community Survey is the ongoing survey that asks detailed questions annually on a much smaller sample, and they use the decennial Census as a baseline for imputing updated estimates for the population.
2. The Census Bureau generally splits measures between the population of individuals and the population of housing units. Within those universes, they further subdivide based on the details. Households are a subset of housing units (with specific criteria that I forget), and owner-occupied households are a further subset of households, etc. data.census.gov kinda sucks poo poo for navigating this stuff and I had a hard time finding the specific table that I wanted from ACS, but you really need to find specifically owner-occupied households vs renter-occupied.

source: I spent 7 years working at a company that bundled up Census data to make it easier to explore than the actual government's website. data.census.gov is an improvement over the old American Factfinder service but it's still not very easy to use.
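For what it's worth, the tenure math itself is trivial once you're in the right universe of the right table; the counts below are made up purely to show the arithmetic, not real ACS figures:

```python
# Hypothetical ACS-style tenure counts for some geography (made-up numbers).
owner_occupied = 80_000
renter_occupied = 45_000

# The denominator is occupied households only, not all housing units
# (vacant units and group quarters live in different universes).
occupied_households = owner_occupied + renter_occupied
owner_share = owner_occupied / occupied_households
print(f"owner-occupied share: {owner_share:.1%}")
```

Most of the arguments in those articles come down to people quietly using different denominators, which is why the universe definitions matter more than the division.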

shoeberto
Jun 13, 2020

which way to the MACHINES?

Mister Facetious posted:

He'd just say, "No. It'll be decided by the moderators whether it was meant as a joke or seriously, depending on the context of the discussion in the thread, and they will take appropriate actions to either probate or high-five the poster. Either way, it will be left as evidence if the poster doesn't edit it."

Yeah I don't know why the guy didn't at least punt to that. "We have policies about hate speech and our moderation team would review it and act accordingly." There are softball answers that should be part of your playbook for interviews like this. To instinctively avoid answering it really looks like you're avoiding an answer that will piss off racists, or worse, that you just don't want to moderate hate speech.

shoeberto
Jun 13, 2020

which way to the MACHINES?
tl;dr Die Bart, Die

shoeberto
Jun 13, 2020

which way to the MACHINES?

Blut posted:

Who checks voicemails on their personal phone these days? Most people I know (in their 30s, never mind younger generations) have voicemails that just say "I'm unable to check my voicemails, please text me about whatever this is regarding". And that's if they've even bothered to record a message for it.

Really? Also mid-30s here, I've never heard of this.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Blut posted:

Do you use your phone for work? Or do most of your peer group? They're the only people I know of who check/leave/use voice mails.

In people's personal lives there's just no reason to leave a voicemail instead of sending a text to explain why you were calling, or at worst a voice note. Voicemails are incredibly less convenient.

I guess for my peer group it's all almost exclusively text or other chat. I don't use my phone for work, but when dealing with vets, pharmacies, doctors, or really any professional services, it involves something going to voice mail at some point. I do usually use visual voice mail or read the transcripts though.

I guess I don't really use it, but it's hard to imagine disabling it outright. Just like I don't use the postal service to snail mail stuff, but that doesn't mean I don't want a mailbox or that I've stopped checking the mail. Important stuff still comes that way.


shoeberto
Jun 13, 2020

which way to the MACHINES?
For cloud services, you can pick the type of hardware you're hosted on to optimize for what you're trying to do - more CPU, more memory, faster disk, etc. Getting a cloud hosted VM with attached GPU capabilities is one of those classes but it's been a bit niche for a few years compared to everything else. But it's a cost effective way to basically rent GPU compute time if you need it.

With LLMs, it's likely that cloud providers are going to massively increase their capacity for GPU VMs, which seems likely to be a short sighted investment. They could theoretically have entire data centers of racks with GPUs attached that just aren't being used.
