Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist.

gurragadon
Jul 28, 2006

SaTaMaS posted:

Yes exactly

Alright that makes sense. It would be reasonable to take an intentional stance towards something biological or mechanical that has a consciousness. It seems like you think that sensory input is required for consciousness, which could very well be true, and I wouldn't be surprised if we found that out.

That makes me wonder what level of sensory input something needs to gain consciousness and what you think it is. It seems like the major input that's needed is "touch." I'm just thinking about people who don't have sight or hearing, and they are clearly conscious. I don't know if hooking up ChatGPT to a pressure sensor and thermometer would give it sufficient information, but I don't have perfect sensory information either and am very conscious.

I think the necessity of using the intentional stance would depend on whether you think it would require complex input like humans receive, or a lesser amount of input.

Or it would not be necessary to use the intentional stance at all if you think that nothing derived from current AI technology could ever become conscious, even with sensory inputs.

gurragadon fucked around with this message at 02:29 on Apr 19, 2023

BrainDance
May 8, 2007

Disco all night long!

Clarste posted:

I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist.

That's a thing, and yeah, that's completely true. But jumping to that kind of conclusion ignores the context of the post. It obviously wasn't meant as design by an intentional designer, as opposed to just some processes that work toward some sometimes-arbitrary thing (like, I get it, fitness to environment for evolution).

GlyphGryph posted:

I genuinely don't think the problem is the words people are using at this point, I think it's the people who insist on interpreting them in the most insane possible way that are the problem.

I said pretty much that earlier. Part of it is just that this is SA, but I really think it's something more specific to AI. Maybe it's that people immediately have strong opinions on it, but a lot of the AI questions are actually not things you can be super confident about yet; there are things that haven't been established or aren't all that knowable right now. But that doesn't sit right with people.

KillHour
Oct 28, 2007


Clarste posted:

I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist.

I was using the same words back to make that exact point - "designed" is doing a lot of work here and an AI would work exactly the same if it just happened to spontaneously come into existence through the random arrangement of atoms instead of being purposefully built by a human (however infinitesimally unlikely that would be). It doesn't matter that the people who build it have agency. That doesn't impart some special property of "made by a conscious being" that matters for how it operates.

Instead, they took the opposite meaning somehow - that I'm saying we must have been made by a conscious being to be conscious. Which is not at all what I said nor meant.

Edit: Also, there's an important point I should have made explicit. I used evolution as the comparison because we didn't actually make these AIs. Not in the same way we made Dijkstra's algorithm. Instead, we made a system that made the AIs. The system works by predetermined rules, but the outcomes of the system are extremely complex and have lots of surprising emergent properties. This is the crux of my point. Yes, a calculator doesn't "want" to do addition any more than a river "wants" to meander. But we know both how and why a calculator works. That it works is not surprising. We know how an AI works - we can describe it with math. But we don't know why it works - why does that math produce results we weren't expecting? (And I can point to lots and lots of quotes from AI researchers backing this up). We don't know how OR why our brain works, so I don't know why anyone would make a definitive statement in either the positive or negative about human consciousness, agency or desires beyond subjective experiential things.

Double Edit: No I'm still not saying any of these AIs are conscious. Put down the baseball bat. But like... neither are fruit flies and they have real honest to god brains in there.

KillHour fucked around with this message at 05:11 on Apr 19, 2023

Tei
Feb 19, 2011
Probation
Can't post for 48 hours!

SaTaMaS posted:

Having a goal requires consciousness and intentionality


In QuakeC, for the videogame Quake, enemies start "idle" until they find an enemy. Then they set the field "enemy" to that enemy.
There's a function called "movetogoal" that uses a small heuristic to move the monster towards its goal.

https://quakewiki.org/wiki/movetogoal
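
Roughly, the pattern is something like the sketch below (a minimal C++-style illustration of the QuakeC idea, with made-up names like findTarget, not the actual id Software source):

code:
// Minimal C++-style sketch of the QuakeC pattern described above (made-up
// names, not the actual id Software code): a monster idles until it notices
// a target, stores it in its "enemy" field, then each think tick takes a
// small heuristic step toward it, like movetogoal().
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct Entity {
    Vec3    origin;
    Entity* enemy = nullptr;   // the "enemy" field
};

// Hypothetical stand-in for QuakeC's movetogoal builtin: step dist units toward the goal.
void moveTowardGoal(Entity& self, const Entity& goal, float dist)
{
    Vec3 d { goal.origin.x - self.origin.x,
             goal.origin.y - self.origin.y,
             goal.origin.z - self.origin.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len < 1e-6f) return;
    self.origin.x += d.x / len * dist;
    self.origin.y += d.y / len * dist;
    self.origin.z += d.z / len * dist;
}

Entity* findTarget(Entity&) { return nullptr; }   // stub; the real game does sight/sound checks

void monsterThink(Entity& self)
{
    if (!self.enemy)
        self.enemy = findTarget(self);            // idle until a target is found
    else
        moveTowardGoal(self, *self.enemy, 20.0f); // then chase the goal
}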


Lucid Dream posted:

The LLMs don't have goals, but they do predict pretty darn well what a human would say if you asked them to come up with goals about different things.

The videogame Civilization says that the advance of civilization is through technology and wars.
It also says that if you want to have tanks, first you have to invent monotheistic religion.

But none of these things were intended by Sid Meier, the designer; they are built into the design anyway.

LLMs might have goals built into the design, even if they are not intended by the creators. At the very least, to produce an interesting output, or any output at all.

Tei fucked around with this message at 07:29 on Apr 19, 2023

Quinch
Oct 21, 2008

Yeah, isn't this the whole point of AI really? It's given a goal defined by a measure of some sort, and the AI works out which of its possible outputs best achieve it. I wouldn't say it's desire as such, but saying an AI has goals and is programmed to maximise them is perfectly reasonable.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Tei posted:

LLMs might have goals built into the design, even if they are not intended by the creators. At the very least, to produce an interesting output, or any output at all.

Ok sure, but if I describe a situation I can still ask an LLM what a person’s goals might be given the context and it will respond with something plausible. I’m not saying anything about the subjective experience of the LLM or how interesting the output is, but rather that the system has the capability to predict what a human might say if given a situation and asked to define goals to solve the problem. The semantics don’t matter as much as the actual capability.

SaTaMaS
Apr 18, 2003

Quinch posted:

Yeah, isn't this the whole point of AI really? It's given a goal defined by a measure of some sort, and the AI works out which of its possible outputs best achieve it. I wouldn't say it's desire as such, but saying an AI has goals and is programmed to maximise them is perfectly reasonable.

Sure, in casual conversation it doesn't really matter, and even in AI systems things like beliefs, desires, and intentions are employed as metaphors. However, in any somewhat serious discussion about AI it's important to distinguish between goals that are determined by the system's design and training data and the point where something resembling personal motivations and intentions starts to determine its goals, assuming such a thing is even possible for an AI.

Pleasant Friend
Dec 30, 2008

I am enjoying Drake's new AI songs almost as much as the record labels desperately trying to scrub it from the internet.

KillHour
Oct 28, 2007


Pleasant Friend posted:

I am enjoying Drake's new AI songs almost as much as the record labels desperately trying to scrub it from the internet.

That poo poo is fuckin hilarious and I'm here for it.

In case someone hasn't seen it:
https://www.youtube.com/watch?v=Po2BHFHtKgQ

https://www.theverge.com/2023/4/19/23689879/ai-drake-song-google-youtube-fair-use posted:

After the song went viral on TikTok, a full version was released on music streaming services like Apple Music and Spotify, and on YouTube. This prompted Drake and The Weeknd’s label Universal Music Group to issue a sternly-worded statement about the dangers of AI, which specifically says that using generative AI infringes its copyrights. Here’s that statement, from UMG senior vice president of communications James Murtagh-Hopkins:

[corporate bullshit]

What happened next is a bit mysterious. The track came down from streamers like Apple Music and Spotify which are in tight control of their libraries and can pull tracks for any reason, but it remained available on YouTube and TikTok, which are user-generated content platforms with established DMCA takedown processes. I am told by a single source familiar with the situation that UMG didn’t actually issue takedowns to the music streamers, and the streaming services so far haven’t said anything to the industry trade publications. Neither has Drake or The Weeknd. It’s weird – it does seem like Ghostwriter977 pulled the track themselves to create hype, especially while the song remained on YouTube and TikTok.

But then TikTok and YouTube also pulled the track. And YouTube, in particular, pulled it with a statement that it was removed due to a copyright notice from UMG. And this is where it gets fascinatingly weedsy and probably existentially difficult for Google: to issue a copyright takedown to YouTube, you need to have… a copyright on something. Since “Heart on my Sleeve” is an original song, UMG doesn’t own it — it’s not a copy of any song in the label’s catalog.

So what did UMG claim? I have been told that the label considers the Metro Boomin producer tag at the start of the song to be an unauthorized sample, and that the DMCA takedown notice was issued specifically about that sample and that sample alone. It is not clear if that tag is actually a sample or itself AI-generated, but YouTube, for its part, doesn’t seem to want to push the discussion much further.

*snip*

As far as I know, this is the first instance of a particular AI-related* copycat being targeted as a specific final product, and not just an argument about training the model on copyrighted stuff in general.


*The song wasn't written by AI. It was written and produced and sung by a human and the vocals were deepfaked.

Monglo
Mar 19, 2015

Already dead

KillHour
Oct 28, 2007


Monglo posted:

Already dead

The hilarious thing is that because the DMCA claim is over the short sample at the beginning, Universal can't use Content ID to match the song automatically and has to issue manual takedowns, so it's just a game of cat and mouse where it gets reuploaded basically immediately.

https://www.youtube.com/watch?v=utzJJjaSs64

I'm sure this one will be dead in a day or two. :shrug:

gurragadon
Jul 28, 2006

Looks like Google is in a dilemma about this one.

The Verge posted:

If Google agrees with Universal that AI-generated music is an impermissible derivative work based on the unauthorized copying of training data, and that YouTube should pull down songs that labels flag for sounding like their artists, it undercuts its own fair use argument for Bard and every other generative AI product it makes — it undercuts the future of the company itself.

If Google disagrees with Universal and says AI-generated music should stay up because merely training an AI with existing works is fair use, it protects its own AI efforts and the future of the company, but probably triggers a bunch of future lawsuits from Universal and potentially other labels, and certainly risks losing access to Universal’s music on YouTube, which puts YouTube at risk.

I also wonder who actually wrote and recorded this song. Verge Article about it. It would be pretty funny if the creator was just Drake or UMG.

I guess a real-world example of AI-generated content that possibly violates copyright needed to be released widely, though, so this conversation about copyright could happen.

Tei
Feb 19, 2011
Probation
Can't post for 48 hours!

gurragadon posted:

Looks like Google is in a dilemma about this one.

I also wonder who actually wrote and recorded this song. Verge Article about it. It would be pretty funny if the creator was just Drake or UMG.

I guess a real-world example of AI-generated content that possibly violates copyright needed to be released widely, though, so this conversation about copyright could happen.

I am sure this will be resolved in the way that is most beneficial for corporations and worst for authors and creators.

KillHour
Oct 28, 2007


Okay, ChatGPT is a drat good rubber duck. I spent way too long on this issue so I asked ChatGPT:

quote:

I have the following code:

code:
D3D11_INPUT_ELEMENT_DESC s_DX11InputElementDesc[] =
		{
			{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0                           , D3D11_INPUT_PER_VERTEX_DATA, 0 },
			{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT   , 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		};
		int iNumElements = sizeof(s_DX11InputElementDesc) / sizeof(D3D11_INPUT_ELEMENT_DESC);
		m_Device->CreateInputLayout(
			s_DX11InputElementDesc,
			iNumElements,
			kVertexShaderCode,
			sizeof(kVertexShaderCode),
			&m_InputLayout
		);
I am getting the error:
pre:
D3D11 ERROR: ID3D11Device::CreateInputLayout: The provided input signature expects to read an element with SemanticName/Index: 'COLOR'/0, but the declaration doesn't provide a matching name.
I haven't declared COLOR anywhere in the vertex shader signature. Why is it giving me that error?

quote:

The error you are getting is because the input signature of the vertex shader expects an input element with semantic name "COLOR" and index 0, but your input layout does not have a matching element.

First, check your vertex shader code to see if there's an input variable with semantic "COLOR". It would look like this:

code:
struct VS_INPUT
{
    ...
    float4 color : COLOR;
    ...
};
If there is no such variable with the semantic "COLOR" in the vertex shader, it is possible that the error is due to incorrect shader bytecode being passed to CreateInputLayout function. Make sure that `kVertexShaderCode` points to the correct shader bytecode and its size is properly calculated.

If you find a variable with semantic "COLOR" in your vertex shader, but you don't want to use it or don't need it, simply remove it from the vertex shader input structure.

If you do need a color input in your vertex shader, add an appropriate input element to your `s_DX11InputElementDesc` array:

code:
D3D11_INPUT_ELEMENT_DESC s_DX11InputElementDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0                           , D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT   , 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR"   , 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
This will provide the color input to your vertex shader with DXGI_FORMAT_R32G32B32A32_FLOAT format, which corresponds to a float4. Make sure to adjust the data structure you use to store vertex data accordingly, so that the vertex buffer you create includes color information.

quote:

Here is my vertex shader struct. There is no COLOR semantic

code:
struct vs_in
{
    float4 pos : POSITION;
    float4 uv  : TEXCOORD0;
};

quote:

In that case, it's likely that the issue is related to the vertex shader bytecode being passed to the `CreateInputLayout` function. Make sure that `kVertexShaderCode` points to the correct shader bytecode and its size is properly calculated.

When you compile the vertex shader, you should obtain the shader bytecode and its size. If you are using the D3DCompile function, the compiled shader bytecode can be accessed using the `ID3DBlob` interface, like this:

code:
ID3DBlob* pVertexShaderBlob = nullptr;
HRESULT hr = D3DCompile(/* ... parameters ... */, &pVertexShaderBlob);
if (SUCCEEDED(hr)) {
    m_Device->CreateInputLayout(
        s_DX11InputElementDesc,
        iNumElements,
        pVertexShaderBlob->GetBufferPointer(),
        pVertexShaderBlob->GetBufferSize(),
        &m_InputLayout
    );
}
If you are using some other method to compile the shader, make sure you correctly obtain the shader bytecode and its size before passing them to the `CreateInputLayout` function.

Additionally, ensure that you are creating the vertex shader using the same shader bytecode:

code:
m_Device->CreateVertexShader(
    pVertexShaderBlob->GetBufferPointer(),
    pVertexShaderBlob->GetBufferSize(),
    nullptr,
    &m_pVertexShader
);
If you have verified the shader bytecode and its size, and you are still getting the error, please double-check the build settings and make sure that the correct shader source file is being compiled. It's possible that there might be an issue with the build process, and a different shader file is being compiled, leading to the error you are encountering.

And I thought "No, there's no way I'm passing in the wrong vertex shader. That would be crazy." But then I thought "Well, I'll just check."


I forgot to comment out some old code that was reassigning my variable :doh:



Edit: Technically, this isn't super impressive - ChatGPT has read through a bajillion StackOverflow responses telling people to check the basic poo poo. But even for an experienced developer, having the basics reframed into the context of what you're doing is a huge help. Importantly, HLSL is somewhat new for me, so I was overly focused on thinking the issue must be a lack of knowledge, not a typo somewhere else.

KillHour fucked around with this message at 22:14 on Apr 22, 2023

Inferior Third Season
Jan 15, 2005

I needed to write some functions for coordinate transformations using quaternions. It seemed like a good opportunity to try ChatGPT, because quaternion math is fairly straightforward, just tedious to get all the indexing and signs right. It spit out just what I asked for, and even gave some sample inputs and outputs to check.

The functions worked perfectly. The sample outputs it gave as a check were wrong.
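
For reference, the core of it is rotating a vector v by a unit quaternion q as v' = q v q*; here's a minimal C++ sketch of just that operation (my own illustration, not the code ChatGPT produced):

code:
// Minimal quaternion rotation sketch (illustrative only, not the generated code).
// Rotates vector v by a unit quaternion q = (w, x, y, z) via v' = q * v * conj(q).
struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

Quat qmul(const Quat& a, const Quat& b)
{
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

Vec3 rotate(const Quat& q, const Vec3& v)
{
    Quat p  = { 0.0f, v.x, v.y, v.z };        // embed the vector as a pure quaternion
    Quat qc = { q.w, -q.x, -q.y, -q.z };      // conjugate = inverse for a unit quaternion
    Quat r  = qmul(qmul(q, p), qc);
    return { r.x, r.y, r.z };
    // Sanity check: q = {0.7071f, 0, 0, 0.7071f} (90 degrees about Z) maps (1,0,0) to (0,1,0).
}

All the tedium is in those sign patterns, which is exactly the sort of thing worth checking against known inputs rather than trusting the generated sample outputs.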

KillHour
Oct 28, 2007


Inferior Third Season posted:

quaternion math is fairly straightforward

:catstare: Mods!?

Edit: I mean other mods who haven't had their soul devoured by math.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

KillHour posted:

:catstare: Mods!?

Edit: I mean other mods who haven't had their soul devoured by math.
Quaternions are straightforward, but miserably so.

Xand_Man
Mar 2, 2004

If what you say is true
Wutang might be dangerous


People get cranky when the dimensions get higher than three

Inferior Third Season
Jan 15, 2005

KillHour posted:

:catstare: Mods!?

Edit: I mean other mods who haven't had their soul devoured by math.
The conceptual part of it, and deriving the equations from scratch, were done 20 years ago in university. The ability to do those things faded away within a few weeks of passing the class, and all that was left was remembering that quaternions 1) can be used instead of rotation matrices, 2) are more computationally efficient than rotation matrix math, and 3) do not suffer from gimbal lock.

So I just looked it up when I had a need for these things a few days ago, and the equations were right there. The equations themselves are very straightforward, if you just take them as given. I just had ChatGPT write the code for me instead of doing it myself.

The closest I came to doing math here was, when I considered actually conceptualizing what the equations meant, I remembered that "a solution exists", and that I didn't actually need to go any further down that path.

KillHour
Oct 28, 2007


I really should move my shader over to quaternions from matrix rotations... :negative:

Bar Ran Dun
Jan 22, 2006

https://www.nytimes.com/2023/05/01/...ce=articleShare

Hinton has left Google and has come out to warn about the dangers of AI.

Leon Trotsky 2012
Aug 27, 2009

YOU CAN TRUST ME!*


*Israeli Government-affiliated poster

Bar Ran Dun posted:

https://www.nytimes.com/2023/05/01/...ce=articleShare

Hinton has left Google and has come out to warn about the dangers of AI.

His argument kind of seems like a situation where he is saying: "Don't make AI evil" and is not really a useful assessment on a practical level.

His argument is essentially: AI will be able to do many great things that help society, but bad people exist and could use AI for bad purposes. Therefore, we should not pursue advanced AI.

The same thing could be said of computers and the internet, but we didn't shut those down because they enabled trillions of dollars in fraud and scams over the course of their existence.

His other concern is the "Terminator scenario," which seems like a crazy reason to shut down research decades in advance of that situation. Banning it publicly also doesn't really prevent someone like China, or the U.S. government in secret, from pursuing this kind of research.

quote:

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

The concerns of "if AI is available to everyone, then bad people can use AI" and "it is possible for the worst outcome to happen" are both obviously true, but not useful in determining what to do about it.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Leon Trotsky 2012 posted:

The same thing could be said of computers and the internet, but we didn't shut those down because they enabled trillions of dollars in fraud and scams over the course of their existence.

The same could also be said of human cloning and genetic engineering, and we did largely shut that down.

Doctor Malaver
May 23, 2007

What happened made you stronger
It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..?

On a different topic, the company I work for produces audio books, among other things. We used to work with voice talent, but now we've started using synthetic voices. We'd been experimenting with them occasionally, but until recently they were too stiff and tiring to listen to. Now we're working with a small company whose voices are indistinguishable from human, and when the big players catch up I expect it will become uncommon to hire a human to narrate longer texts.

I don't like being on the side of AI, facilitating this process. And I'm sad that I have to stop contacting some voice actors with whom I've developed a relationship over the years. I still keep getting their updates by email (narrated this book, attended that voice-over conference, worked with a mentor... etc.) and I wonder how long this industry will survive.

BTW you can see that this is AI because the voices have some stochastic behavior. When they encounter an abbreviation or something difficult to pronounce, they might get it right in one sentence and wrong in the very next. And sometimes they produce a short noise out of the blue. But that will become less frequent as models improve.

Serotoning
Sep 14, 2010

D&D: HASBARA SQUAD
HANG 'EM HIGH


We're fighting human animals and we act accordingly

GlyphGryph posted:

The same could also be said of human cloning and genetic engineering, and we did largely shut that down.

Didn't read the article because my NY sub is suspended at the moment but at first blush these feel only comparable in the limit. I don't think we today are anywhere close enough to a true AGI for these topics to not be apples and oranges. One is creating human-like intelligence/consciousness artificially (AI) and the other is artificially growing intelligent/conscious human beings (cloning); the stress there is very relevant because of how young these technologies are IMO.

Bar Ran Dun
Jan 22, 2006

Doctor Malaver posted:

For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..?

Volume and speed of reply. Right now you have to have a person pretend to be a bunch of people and sling social media bullshit.

Lucid Dream
Feb 4, 2003

That boy ain't right.
The wonderful thing about AI is that it lowers the barrier to entry for everything. The existentially terrifying thing about AI is that it lowers the barrier to entry for everything.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Doctor Malaver posted:

It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..?


No one is arguing AI is inventing this stuff, just that it lowers the bar and allows much more of it to flood the internet than ever before.
Clarkesworld shutting down submissions is an early look at the new issues these systems are going to bring.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
The difference between 90% of everything being chaff and 99.99% of everything being chaff is pretty significant: that's the difference between one result in ten being real and one in ten thousand. If there is so much AI-generated nonsense that it becomes nearly impossible to find anything that isn't, then the internet simply becomes unusable.

Main Paineframe
Oct 27, 2010

Leon Trotsky 2012 posted:

His argument kind of seems like a situation where he is saying: "Don't make AI evil" and is not really a useful assessment on a practical level.

His argument is essentially: AI will be able to do many great things that help society, but bad people exist and could use AI for bad purposes. Therefore, we should not pursue advanced AI.

The same thing could be said of computers and the internet, but we didn't shut those down because they enabled trillions of dollars in fraud and scams over the course of their existence.

His other concern is the "Terminator scenario," which seems like a crazy reason to shut down research decades in advance of that situation. Banning it publicly also doesn't really prevent someone like China, or the U.S. government in secret, from pursuing this kind of research.

The concerns of "if AI is available to everyone, then bad people can use AI" and "it is possible for the worst outcome to happen" are both obviously true, but not useful in determining what to do about it.

He doesn't seem to be saying anything about shutting down research, at least not in this particular article. It's hard to say, because there doesn't seem to be a transcript of his actual words anywhere and all the articles are mostly just paraphrasing him, but I don't see anything saying he's calling for a research halt.

Rather, what he seems to be concerned about is companies buying into the AI hype, abandoning all safeguards and ethical considerations, and widely deploying it in increasingly irresponsible and uncontrolled ways in a race to impress the investors.

quote:

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

Moreover, he seems particularly concerned about the risk of companies letting AI tools operate on their own without humans checking to make sure their output doesn't have unexpected or undesirable side effects.

quote:

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.

Leon Trotsky 2012
Aug 27, 2009

YOU CAN TRUST ME!*


*Israeli Government-affiliated poster

Main Paineframe posted:

He doesn't seem to be saying anything about shutting down research, at least not in this particular article. It's hard to say, because there doesn't seem to be a transcript of his actual words anywhere and all the articles are mostly just paraphrasing him, but I don't see anything saying he's calling for a research halt.

Rather, what he seems to be concerned about is companies buying into the AI hype, abandoning all safeguards and ethical considerations, and widely deploying it in increasingly irresponsible and uncontrolled ways in a race to impress the investors.

Moreover, he seems particularly concerned about the risk of companies letting AI tools operate on their own without humans checking to make sure their output doesn't have unexpected or undesirable side effects.

He says he didn't sign on to one of the letters calling for a moratorium on AI research because he was still working at Google, but he agreed with it.

quote:

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Main Paineframe
Oct 27, 2010

Leon Trotsky 2012 posted:

He says he didn't sign on to one of the letters calling for a moratorium on AI research because he was still working at Google, but he agreed with it.

It says that he didn't sign those letters, and it also says that he didn't want to publicly criticize Google until he quit his job. It doesn't actually state that those two points are related, nor does it actually state that he agreed with the letter.

This goes back to my point about how it's difficult to work out the details of his stance, because the reporters are paraphrasing him rather than reporting his words directly.

the other hand
Dec 14, 2003


43rd Heavy Artillery Brigade
"Ultima Ratio Liberalium"
For those who have the time to listen to (very) lengthy audio, this guy’s YouTube channel has a lot of interviews with leading people in AI research and industry. Some of his recent guests were the Boston Dynamics CEO, a computational biology professor from MIT, and the CEO of OpenAI, which makes the GPT software (video below).

I’m not qualified to provide any commentary on these, but they seem intended for the general public and I’m finding them super interesting and educational. The OpenAI CEO interview includes topics like political bias, AI safety, and AGI.

https://www.youtube.com/watch?v=L_Guz73e6fw

KillHour
Oct 28, 2007


I watched until the interviewer started talking about how he's such good friends with Jordan Peterson. He comes off like a dorky Joe Rogan and I want to give him a wedgie and stuff him in a locker.

KillHour fucked around with this message at 01:35 on May 3, 2023

Main Paineframe
Oct 27, 2010

the other hand posted:

For those who have the time to listen to (very) lengthy audio, this guy’s YouTube channel has a lot of interviews with leading people in AI research and industry. Some of his recent guests were the Boston Dynamics CEO, a computational biology professor from MIT, and the CEO of OpenAI, which makes the GPT software (video below).

I’m not qualified to provide any commentary on these, but they seem intended for the general public and I’m finding them super interesting and educational. The OpenAI CEO interview includes topics like political bias, AI safety, and AGI.

https://www.youtube.com/watch?v=L_Guz73e6fw

I wasn't previously familiar with Lex Fridman, so I Googled him and it took about ten seconds to find that he's an ex-researcher who'd been demoted from research scientist to unpaid intern after some "controversial" studies. So I searched to find out what kinds of controversy he'd been involved in, and it took another ten seconds to find an article titled Peace, love, and Hitler: How Lex Fridman's podcast became a safe space for the anti-woke tech elite. Hell of a title! A little more Googling brings up plenty of results suggesting that people in the AI and machine learning industries largely regard him as a grifter who doesn't understand half as much as he claims to.

That Business Insider article is by far the best source I've found, so let's pull it out from behind that paywall:

quote:

Peace, love, and Hitler: How Lex Fridman's podcast became a safe space for the anti-woke tech elite

In early October, Ye culminated a dayslong spree of inflammatory comments about Black and Jewish people with a tweet declaring, "I'm a bit sleepy tonight but when I wake up I'm going death con 3 On JEWISH PEOPLE." The outrage was immediate. Companies like Balenciaga and JPMorgan announced they were cutting ties with the rapper formerly known as Kanye West. Twitter blocked him.

The scientist turned podcast host Lex Fridman, meanwhile, invited Ye to appear on his podcast.

During the 2 ½-hour interview, Ye doubled down, comparing Planned Parenthood to the Holocaust and blaming his personal and financial problems on Jewish doctors and record-label executives. Fridman, who's Jewish, pushed back at times, telling him, "When you say 'Jewish media,' there's an echo of a pain that people feel."

But Fridman still aired the interview in its entirety.

"I believe in the power of tough, honest, empathetic conversation to increase the amount of love in the world," Fridman tweeted alongside the link.

Over the past few years, Fridman, 39, has gone from an unknown academic researcher to a social-media celebrity, professional agitator, and member of Elon Musk's inner circle. In 2019, while working at MIT, he coauthored a controversial study of Tesla's Autopilot concluding that human drivers remained focused while using the semiautonomous system. Musk, Tesla's CEO, was so enamored that he flew Fridman to the company's headquarters to tape an interview. Seemingly overnight, episodes of Fridman's podcast began racking up millions of views.

In his podcast, Fridman asks world-renowned scientists, historians, artists, and engineers a series of wide-eyed questions ("Who is God? "What is the meaning of life?"). It all seems innocent enough. But recently, "The Lex Fridman Podcast" has become a haven for a growing — and powerful — sector looking to dismantle years of "wokeness" and cancel culture. Episodes include Joe Rogan talking about being spied on by intelligence agencies (8.7 million views); Jordan Peterson questioning the validity of climate-change models (9.5 million views); and, of course, Ye ranting about Jews running the media (4.6 million views). And as some of tech's most vocal leaders have taken a sharp right turn — the bitcoin mogul Balaji Srinivasan's slamming the Food and Drug Administration's regulations, Musk's blaming "communism" for turning his transgender daughter against him — Fridman has become their mouthpiece.

His podcast provides a reputable forum (he's a scientist, after all) to discuss vaccine safety, race science, and the importance of traditional gender roles. Fridman has said his ultimate goal is "to allow each other, all of us, to make mistakes in conversation." But Lior Pachter, a computational biologist at Caltech who's become one of Fridman's most outspoken critics, said some scientists and academics fear Fridman is contributing to the "cacophony of misinformation."

As Fridman's career has blossomed, his ambitions have grown. He's pitched a startup to "put robots in every home" and talked about launching a social-media company (though there doesn't appear to be movement on either front). In December, Fridman asked Musk if he could run Twitter. "No salary. All in. Focus on great engineering and increasing the amount of love in the world. Just offering my help in the unlikely case it's useful," Fridman tweeted at the CEO, to which Musk replied: "You must like pain a lot. One catch: you have to invest your life savings in Twitter and it has been in the fast lane to bankruptcy since May. Still want the job?"

Though Fridman has touted affiliations with MIT and Google, AI and machine-learning experts who spoke with Insider said Fridman lacks the publications, citations, and conference appearances required to be taken seriously in the hypercompetitive world of academia. When AI professionals on Twitter challenged Fridman's research methods for the Tesla study, he blocked them en masse. Soon after the study was published, Fridman moved from his prestigious MIT lab to an unpaid research role.

But to the hordes of young men fed up with the so-called mainstream media, Fridman is a truth teller. He's a genius scientist who doesn't talk down to them and who's challenging the status quo, one interview at a time.

"If you're into flat Earth and you feel very good about it, that you believe that Earth is flat, the idea that you should censor that is ridiculous," Fridman said on the neuroscientist Andrew Huberman's podcast. "If it makes you feel good and you're becoming the best version of yourself, I think you should be getting as much flat Earth as possible."

---

At the beginning of each episode, Fridman addresses his listeners from his home studio, wearing a boxy black suit and sitting in front of a black drape. (He's joked that he looks like a "Russian hitman.") He welcomes fans in an ASMR voice before launching into hourslong conversations about everything from extraterrestrial life to sex robots. Since launching the show in 2018, he's taped more than 350 episodes.

When it comes to Fridman's interviews, he's all peace and love. "I'm a silly little kid trying to do a bit of good in this world," he's told his listeners. He presents himself as a hopeless romantic striving for human connection. His only foes are the critics and journalists intent on spreading cynicism and negativity — and even to those people Fridman responds with heart-emoji tweets.

The Hallmark-card persona Fridman has cultivated makes it all the more jarring when he muses about whether Hitler would make a good guest. "If you talk to Hitler in 1941, do you empathize with him, or do you push back?" he asked on a recent episode. "Because most journalists would push, because they're trying to signal to a fellow journalist and to people back home that this, me, the journalist, is on the right side. But if you actually want to understand the person, you should empathize. If you want to be the kind of person that actually understands in the full arc of history, you need to empathize."

He's also expressed interest in interviewing Andrew Tate — the misogynist influencer Fridman's described as "an interesting person who raises questions of what it means to be a man in the 21st century" — even after Tate was detained in Romania on human-trafficking and rape charges.


Fridman's podcast wasn't always so incendiary. It launched in 2018 as "The Artificial Intelligence Podcast," drawing a modest audience of autodidacts with conversations about self-driving cars, robotics, and deep learning, a form of AI that aims to use neural networks to replicate the processes of the human brain. As the podcast gained momentum, bigger names like Mark Zuckerberg and Jack Dorsey came on to talk about topics such as the metaverse and life in a simulation.

"The Lex Fridman Podcast" offered a rare opportunity to listen to four-hour conversations with luminaries of tech and science. Fridman's open-ended, nontechnical questions made complex topics feel accessible to layman listeners. To this day, many credit Fridman for inviting guests from across the political spectrum, even when he disagrees with their views. Bhaskar Sunkara, the founder and publisher of the socialist magazine Jacobin who appeared on Fridman's podcast in December, praised Fridman's interviewing style.

"I really do think there's a deep earnestness," Sunkara said. "That's, I think, part of his strength, the fact that he can just have these 'aw, shucks' conversations with people."

But in the past year or so, there's been an undeniable shift as a rage has built among tech magnates who want to say it like it is, with no repercussions. They've decried identity politics and political correctness in the mainstream media. They want unfettered capitalism and deregulation. They don't want to hear the whining about how their products affect people's health or happiness or well-being. They don't want to be criticized for hoarding wealth. Scientific facts, they argue, don't care about your feelings.

Fridman has latched onto the momentum. His guest list these days has turned into a who's who of the "intellectual dark web," a movement of controversial thinkers claiming to offer an alternative to woke culture.

Fridman makes the perfect spokesman for these ideas because he seems so harmless. He's not vitriolic like Ben Shapiro, creepy like Jordan Peterson, or crude like Joe Rogan. He's just a sweet-seeming, self-deprecating guy who wants to ask questions and hear everyone out. Even Hitler.

But Pachter warns that Fridman is pulling the wool over people's eyes.

"He says, 'I'm just asking questions,'" Pachter said, "but actually he's sort of two-siding issues where they don't really have two sides."


---

Fridman was raised in Moscow in the 1980s, where he developed a deep suspicion of socialism and communism. His grandfather was a decorated World War II gunner, his grandmother a survivor of the 1930s Stalinist famine that killed millions of Ukrainians.

When Fridman was about 11, his family relocated to the Chicago suburbs. He's said the adjustment to American culture was difficult. Years later, when Nan Xie, Fridman's fellow doctoral candidate at Drexel University, asked Fridman if he still spoke any of his native tongue, he replied, "I speak 120% Russian," Xie recalled.

By the time Fridman entered academia, his family name was well established. His brother, Gregory, and his father, Alexander, who was one of the Soviet Union's most accomplished plasma physicists, were both star professors at Drexel, where Fridman earned his undergraduate and doctoral degrees.

Fridman majored in computer science, focusing his studies on active authentication, a way to identify a device's user based on their habits and gestures. Fridman's Ph.D. advisor, Moshe Kam, joked that he often had to pull Fridman's head out of the clouds, as his lab desk was stacked with books on philosophy, beat poetry, and Russian expressionism.

In 2014, around the time he was finishing his doctorate, Fridman debuted a YouTube channel: a dry mix of AI lectures and interviews with martial-arts experts. He also tried his hand at poetry. The works, published on a since deleted blog, mostly focused on love and longing, which would become omnipresent themes in Fridman's podcast, with titles including "Oversexed Wolf Among Undersexed Sheep" and "You Can't Quote Nietzsche While Exposing Your Breasts."

One haiku, titled "A Blonde College Girl," reads: "a blonde college girl / climbs a tree in a tight shirt / and black yoga pants."

While he pursued his side projects, Fridman joined Google's secretive Advanced Technology and Projects team. He worked for Project Abacus, a task force studying how to use active authentication to replace traditional passwords. Fridman left the company after six months. (Google did not respond to requests for comment about Fridman's tenure.)



At his next stop, at MIT's prestigious AgeLab in 2015, Fridman really began to cultivate his public persona. His team aimed to use psychology and big-data analytics to understand driver behavior. An AgeLab colleague of Fridman's told Insider that Fridman increasingly focused on attention-grabbing studies with results that were harder to quantify. Many of these studies were published only on ArXiv, an open-access service that accepts scholarly articles without the kind of quality control required by other publications.

The former AgeLab staffer recalled Fridman telling him that he wanted "to amass a giant amount of followers."

Fridman started to do just that. He opened up his lectures to the wider Cambridge community, packing people in so tightly that they'd have to sit on the stairs. He uploaded the lectures on his personal YouTube channel with MIT's logo prominently displayed, and he updated his Twitter profile photo to one of him looking professorial, standing in front of a blackboard full of equations. (Pachter noted that the equations were from a course entirely unrelated to Fridman's own field of study.)

Filip Piękniewski, a computer-vision and AI researcher who's become one of Fridman's most vocal critics, said it was "clear from the very beginning that he's positioning himself to become one of the celebrities in this space."

---

In 2019, while working at MIT's AgeLab, Fridman posted his controversial Tesla study online. It found that "patterns of decreased vigilance, while common in human-machine interaction paradigms, are not inherent to AI-assisted driving" — in other words, that drivers using semiautonomous vehicles remain focused. The findings were a shock to the industry, contradicting decades of research suggesting that humans generally become distracted when partially automated systems kick in.

The MIT seal of approval was likely enormously valuable to Tesla. Its Autopilot feature had come under intense scrutiny over several widely publicized fatal crashes involving Tesla vehicles. The company was hit with a class-action lawsuit that described Autopilot as "essentially unusable and demonstrably dangerous." But Musk insisted his technology did not require human intervention, going so far as to brand the feature "Autopilot" instead of "Copilot."

Academics in AI began to pick apart the study's methodology. They criticized the small sample size and suggested the participants likely performed differently because they knew they were being observed, a phenomenon known as the Hawthorne effect. Missy Cummings, a former MIT professor who's served as the senior advisor for safety at the National Highway Traffic Safety Administration, has called the report "deeply flawed."

"What happens is people like Elon Musk take this and then try to make a big deal about it and say, 'I've got science on my side,'" Cummings said.

"I personally do not consider Lex a serious researcher," Cummings recently told Insider.

When Anima Anandkumar, a well-known AI expert, tweeted in 2019 that Fridman ought to submit his work for peer review before seeking press coverage, Fridman blocked her and many of her colleagues, some of whom had never even engaged in the discussion. (Fridman did not respond to several interview requests and requests for comment. He told the historian Dan Carlin during one episode: "Journalists annoy the hell out of me ... So I understand from Putin's perspective that journalism, journalists can be seen as the enemy of the state.")

Many of Fridman's peers had another reason to be suspicious of the study. Fridman's admiration for Tesla's CEO was well documented: He was an active participant in Tesla fan forums, he'd been photographed with Musk's Boring Company flamethrower, and in a 2018 tweet that's since been deleted he asked Musk to collaborate on a fully autonomous cross-country drive. Musk had even tweeted about Fridman's Tesla-friendly research in the past.


One former MIT colleague of Fridman's said many people in their field believed that being closely associated with Musk could be a career boon. "Lex was relatively excited to get in touch with Elon Musk and get into his good graces," said the former colleague, who asked to remain anonymous to avoid professional repercussions.

A week after Fridman posted about the study on Twitter and Tesla message boards, Musk invited Fridman to Tesla's offices. There, Musk sat for a 32-minute interview for Fridman's podcast in which he argued that within the next year Tesla's semiautonomous systems would be so reliable that if a human intervened while driving, that could actually decrease safety.

Soon, coverage of Fridman's study appeared in tech and business publications, including this one. Fridman's show became a sensation. Before the Musk interview, the podcast's catalog had a total of 1 million downloads. Suddenly, it wasn't rare for episodes to garner millions of views, with guests including Bridgewater's Ray Dalio and Facebook's chief AI scientist, Yann LeCun.

But as well-known billionaires flew out to chat with Fridman, the study — along with a second Tesla-centric study Fridman published — was removed from MIT's website without explanation. That year, Fridman quietly switched from AgeLab to an unpaid role in the department of aeronautics and astronautics. In 2020, Fridman rebranded his show as "The Lex Fridman Podcast."

An autonomous-vehicle investor and strategist described the removal of the study as disastrous. "Is there anything more damning than being disappeared from the MIT website? Like, post-Epstein, that's basically the next worst thing to actually being arrested," the investor said, referring to the school's scandalous history with Jeffrey Epstein. (The investor asked to remain anonymous, saying: "I've already been victimized, threatened, and harassed by the army of Musk people. He's Musk's guy and I just don't need it.")


None of Fridman's current MIT colleagues that Insider contacted agreed to be interviewed for this story. Only Sertac Karaman, the director of the Laboratory for Information and Decision Systems, where Fridman has since relocated, provided a brief statement: "Dr. Fridman has been a research scientist at MIT LIDS and in our research group. I have known him for many years, and been very impressed by his ideas and his research accomplishments."

The automotive journalist E.W. Niedermeyer told Insider that the 2019 study helped create the enduring perception that an automated system is ultimately safer.

An AI expert who knows Fridman thought that perhaps the podcast host abandoned academic rigor in pursuit of fame. (The expert was also granted anonymity so as to avoid harassment from Fridman's fans.)

"It's almost as if he sold his academic soul to get an interview with Musk," the expert said.

---

Today, Fridman lives in a spartan four-story townhouse in Austin, Texas. He shares the house with only a fleet of Roombas he says he once trained to scream in pain as a test of human empathy toward robots. (He joked on Huberman's podcast that he decided not to upload the experiment on his YouTube channel because it risked affirming some people's perception of him as a "dark person.")

Fridman often refers to the "Russian soul," an identity that helps explain his reverence for both science and romanticism. In 2019, Fridman incorporated a startup rooted in the idea that everyone should have a personalized AI system that would not only know them inside and out but provide tenderness and emotional support.

The company, called Lexhal, would encompass a more advanced version of the algorithms that already pervade our lives — like Netflix's recommendations for what to watch or Amazon's suggestions for what to buy. Fridman told Huberman that the goal was "exploring your own loneliness by interacting with machines."

Fridman offered the example of a smart fridge that witnesses its owner's late-night ice-cream binges. "You shared that moment — that darkness or that beautiful moment — where you were just, like, heartbroken," he said. "And that refrigerator was there for you. And the fact that it missed the opportunity to remember that is tragic. And once it does remember that, I think you're going to be very attached to the refrigerator."

While it doesn't appear Fridman has made much progress with Lexhal beyond ideations, Fridman seems obsessed with the idea of what it means to be a person in this age of technological advancement — and more specifically, what it means to be a man. He's embraced the kind of hypermasculine grind culture that has swept Silicon Valley in the past few years in which primarily men are looking to extend their lives and increase their sperm count. "I love fighting," Fridman told the philosopher William MacAskill on an episode of his podcast that's since been deleted. "I've been in street fights my whole life. I'm as alpha in everything I do as it gets."


Fridman has said he maintains a rigorous schedule of long, intensive work sessions followed by 10-minute social-media breaks. He eats a carnivore-keto diet complemented by supplements from brands like Athletic Greens that sponsor his show. His punishing exercise routine includes judo, Brazilian jiujitsu, and long runs where his only soundtrack is brown noise that allows him to meditate.

As he wrote in a 2014 poem called "Instructions for Being a Man": "a man should be alone and quiet. / exploring but certain: a dog without a chain. / a man should work hard, even when tired, / and never, never, never complain."

For Fridman, Musk appears to be a guiding light. Musk has appeared on the podcast (three times, in fact). So has Grimes, Musk's former partner, as well as her close friends Liv Boeree, the poker champion, and Aella, the sex researcher. Even conversations with other guests that ostensibly have nothing to do with the Tesla CEO often turn to praise of his various projects. Meanwhile, Fridman's profile has risen with every Musk retweet. (Musk's account is Fridman's most-interacted-with account, while Fridman's is in Musk's top 15.)

The two men are at the helm of a new wave of techno-optimism positing that the quicker we bring along bold technological advances, the better. Piękniewski described it as a "semireligious movement" where anyone who calls for guardrails is seen as an enemy of progress.

In recent months, as the general population has interacted with artificial intelligence in unprecedented ways, the popular narrative has taken on an increasingly fearful tone. Chatbots trained on large language models like ChatGPT have gone on deranged rants and threatened users. Pundits have predicted that automation will replace millions of jobs and produce a flood of misinformation. Doomsayers say AI could spell the end of humanity. In the past two weeks, Fridman has hosted both Sam Altman, the CEO behind ChatGPT, and Eliezer Yudkowsky, a researcher convinced that AI will wipe out the human race.

Yet Fridman remains bullish. His body of work seems to center on the idea that individuals can be trusted to use technology to become better versions of themselves. Just as he argued that Autopilot drivers wouldn't fall asleep at the wheel, he maintains that his fans can separate fact from fiction in an algorithmically driven mess of information.

As Fridman's influence grows, so does the criticism of him. The autonomous-vehicle investor and strategist called him a "useful idiot." Reddit and Hacker News, Silicon Valley's favorite message board, went wild after Fridman posted a 2023 reading list including "The Little Prince," "1984," and Fridman's personal favorite, "Animal Farm." Both platforms are full of frustrated fans accusing Fridman of relying too much on the MIT name and lacking expertise on complex topics.

Nassim Nicholas Taleb, the mathematical statistician known for writing the book "The Black Swan," blasted Fridman on Twitter in January. "Sending mail with heading 'Quick Note From MIT' when podcast has nothing to do with MIT is shoddy as hell," Taleb tweeted.

Taleb said he'd turned down 10 invitations from Fridman to appear on his podcast. "I have been convinced that there is something that is not solid about the person," Taleb told Insider.

Whether Fridman has the credentials or smarts to be this tech figurehead remains to be seen. But one thing's for certain: He has the support of some very powerful people.

"How do you become a priest of the religion?" Piękniewski said. "Well, the best way is to get blessed by an even bigger priest."

I don't care who guest-stars on this show, I'm not going to listen to Mr. Empathize With Hitler talk about anything with anyone for two hours. It seems like common sense to vet a source before you waste hours listening to them.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Fridman also claims he's an MIT lecturer, which is one of those "technically true" statements. It's very common for people to give "lectures" as open forum discussions that anyone can walk into, but he plays it off like he teaches there. A lot of his claims to expertise are like this: small nuggets of truth wrapped in bullshit.

He's a grifter and an awful podcast host; his voice is incredibly boring and he asks very dumb questions. I have no idea how he manages to get so many high-quality guests or so many listens.

He's the Joe Rogan of tech.

Mega Comrade fucked around with this message at 10:06 on May 3, 2023

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.
I feel like it's a tad optimistic to think that AI can replace entertainment writers, but that certainly is the talking point among labor haters right now.

Has anyone made any content with AI that is actually any good?

Delthalaz
Mar 5, 2003

Slippery Tilde
Regarding the fears of AI superintelligence and world domination, I'll be a lot more concerned if Paradox can ever develop an "AI" that can beat an average human player without cheating. Those games are pretty complicated, but not nearly as complicated as the real world, so...

KillHour
Oct 28, 2007


Delthalaz posted:

Regarding the fears of AI superintelligence and world domination, I'll be a lot more concerned if Paradox can ever develop an "AI" that can beat an average human player without cheating. Those games are pretty complicated, but not nearly as complicated as the real world, so...

I know you're joking but they do that on purpose because predictable computer players are more fun and easier to balance. It would be very annoying for new players if every time you changed your strategy, the AI adapted to cut you off. Expert players might appreciate that, but it's just not worth the effort.

https://www.deepmind.com/blog/alphastar-grandmaster-level-in-starcraft-ii-using-multi-agent-reinforcement-learning
