Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
This is a very pedestrian question, but I wanted to try out the ChatGPT demo, and when I try to log in it just says "The email you provided is not supported". This occurs consistently across several ways of logging in:

- with my personal Microsoft account
- with my work Microsoft account
- trying to create an account with my work email address

For the Microsoft account options, I had to click through a page granting OpenAI access to my profile, and I got notifications that the app was associated with my account. But even after granting that access, I still got the "email not supported" message when trying to log in.

Has anyone encountered this problem and got past it?

I googled 'chatGPT "The email you provided is not supported"' and got a load of SEO spam. I also googled the same phrase with "reddit" appended and found a couple of Reddit threads where people complained about the same problem, but there was no consensus on a solution. One guy described a crazy procedure involving connecting to VPNs and using Tor; if that's what I need to do to get to this thing, then gently caress that.


Hammerite
Mar 9, 2007


Macichne Leainig posted:

I hate to ask the obvious but have you tried a different browser? For whatever reason I can't log in to my work's lovely monorepo app in Firefox but anything Chromium-based is fine

That is in no way obvious. It did, however, work: I can log in just fine in Firefox. The fact that they issue an error message claiming there is a problem with my email address, when the problem is actually something else entirely, makes them look extremely incompetent. Once I logged in, they showed a message inviting me to join their Discord server and provide any feedback I had, so I did.

Hammerite
Mar 9, 2007

(this has no relation to my previous posts in this thread) (also it's me, someone with no actual knowledge of ML speaking my thoughts aloud and rubber-ducking, and you shouldn't read it because it'll be nonsense)

I find the idea of getting a computer to play strategy games using machine learning really interesting, but I haven't actually done anything to get experience in this. I'm a software developer, but I don't do anything to do with ML in my day job. I used to run a lovely website that implemented a niche boardgame so that people could play it online (I've taken it down due to concerns about personal legal exposure and users' data - the site was launched some years before GDPR was a reality, and was poorly coded). I would like to re-create the website, but to do it better with several years of working professionally as a developer under my belt. I'd also really like to implement an AI opponent - something the original implementation never had - and that's the idea that really interests me.

I recall hearing on the news a few years ago about how prominent teams in the field had created AIs that play Chess and Go at a high level, and how they did it without external training data: they would have their AIs play against one another and generate the training data from those games (it's possible I misunderstood aspects of this). I found this project (and an article by its author promoting it) which is an implementation of this sort of idea. There's a lot of jargon I don't have a handle on. If I understand correctly, the idea is that after a move has been made, the board state is evaluated for the probability that the AI player wins the game. Higher probability of winning => higher reward. And this is the basis for the training process. I could be fundamentally misunderstanding the ideas here and inventing details to fill the gaps.
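
To rubber-duck the self-play idea as I currently understand it, here's a toy sketch (entirely my own invention, not the project's actual code): two copies of the same policy - here just a random one - play a trivial "race to 10" game, and every position gets labelled with the game's eventual outcome, so you get win-probability training targets without any external data.

```python
import random

def play_toy_game(rng):
    """Race to 10: players alternate adding 1 or 2; whoever reaches 10 wins.
    Returns the list of (total, player_to_move) positions and the winner."""
    positions = []
    total, player = 0, 0
    while total < 10:
        positions.append((total, player))
        total += rng.choice([1, 2])       # the "policy" is just random here
        if total >= 10:
            return positions, player      # the player who moved last wins
        player = 1 - player
    return positions, player

def make_training_data(n_games, seed=0):
    """Label every position in every self-play game with the final outcome."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_games):
        positions, winner = play_toy_game(rng)
        for total, player in positions:
            target = 1.0 if player == winner else 0.0   # win-probability label
            data.append(((total, player), target))
    return data
```

If that's roughly right, the interesting part is that the labels come from games the model itself generated, so the data improves as the model does.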

The author has an example of training an AI to play "Sushi Go", which is a card game of fairly low complexity. I noticed that the logic for creating input to the AI basically turns information about the game into a huge 1-dimensional vector of floating-point numbers (which are mostly booleans in disguise). But this implies that the AI will be trained to assume a very specific deck composition. As soon as you play with a slightly different deck - or, more realistically for what I might use this idea for, a different board with different action spaces - the AI is totally useless.

Are there models that allow you to associate different game resources and concepts with each other? Things like "there are n cards associated with this space on the board, and this many of them have been seen, and this many are in my hand"? Or "there are connections to this location on the board from such-and-such a set of other locations", combined in some way with information on what control I have over each of those locations? I'm sorry, I'm probably being hopelessly vague. I guess what I'm trying to ask is: are there models that allow you to pass in a set of inputs of variable size, or even have layers of the network that are instanced with one instance for each member of an index set? (I am aware of the concept of convolutional layers in image classification networks, so really what I'm thinking of is something like that, but indexed by a discrete index set rather than by spatial extent.)
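
To make concrete what I mean by "huge flat vector of booleans in disguise", here's a made-up miniature of that encoding style (the card list and function are my own invented example, not the library's real code). The vector layout hard-codes the card list, which is exactly the problem: add or remove a card type and every index shifts, so the trained model is useless.

```python
# Assumed, invented card list - a real Sushi Go encoder would differ.
CARD_TYPES = ["tempura", "sashimi", "dumpling", "maki", "pudding"]

def encode_hand(hand_counts, max_count=10):
    """Encode a hand as one float per card type (counts scaled to [0, 1]).
    The fixed iteration order means fixed vector indices per card type."""
    vec = []
    for card in CARD_TYPES:
        vec.append(hand_counts.get(card, 0) / max_count)
    return vec

# encode_hand({"tempura": 2}) gives a 5-element vector; introduce a sixth
# card type and any model trained on 5-element inputs no longer applies.
```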

Also, I work day to day in C# and I don't like Python as a language to write more than small scripts in. Has anyone used ML.NET? Is it any good?

Hammerite fucked around with this message at 22:02 on Apr 23, 2023

Hammerite
Mar 9, 2007

I have edited my post to be less rude about Python, because I don't want all responses to my post to focus on that part of it.

Hammerite
Mar 9, 2007

I went back and tried to read the material explaining that library more carefully. I still don't understand what's going on, but I think I understand a little better what it is I don't understand.

My existing mental model for how all this works is the "supervised learning" model, having read some explanations of it a few years ago and having found that it made sense to me. I read some "neural networks for beginners" free online text, which explained supervised learning and gradient descent through a metaphor of a ball rolling around a multi-dimensional landscape (representing a loss function) descending to a local minimum.

But that library doesn't use supervised learning, or at least not in the simple way I understand it. It uses something called "proximal policy optimization", which per Wikipedia is a kind of "reinforcement learning", a paradigm distinct from supervised learning. What I don't understand is how proximal policy optimization works - how the neural net is trained. It seems more complex than the basic supervised-learning idea I read about a few years ago. I need to find an explanation of how it works aimed at a non-specialist, ideally one I can understand via some kind of metaphor like the "rolling ball" one.
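
From what I can gather (and I may well be garbling this), the "proximal" part at least is mechanically simple: the objective is clipped so that the policy gets no extra credit for shifting an action's probability too far in a single update. A toy version of that clipped surrogate, with made-up numbers:

```python
def clip(x, lo, hi):
    return max(lo, min(hi, x))

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """PPO-style clipped surrogate: take the more pessimistic of the raw
    and clipped objectives, so large policy shifts earn nothing extra.
    ratio = new_prob / old_prob of the action actually taken."""
    return min(ratio * advantage,
               clip(ratio, 1 - eps, 1 + eps) * advantage)

# With a positive advantage, pushing the ratio past 1 + eps = 1.2 earns
# nothing beyond what a ratio of exactly 1.2 would:
# ppo_clipped_objective(1.5, 1.0) == ppo_clipped_objective(1.2, 1.0)
```

How that objective interacts with the rest of the training loop is precisely the part I still don't understand.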

My assumption - from a position of ignorance - was that an effective way to select actions for an AI would be:
- iterate over all possible actions (or a sample of them, if there are very many)
- for each action, use the state of the board afterwards as the input to a neural network
- the outputs of the neural network represent a prediction of which player will win
- choose the action that gives the greatest likelihood that the AI itself will win
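
The scheme in that list, written out as I imagine it (all names are my own stand-ins; the "network" here is just any function scoring a state in [0, 1]):

```python
def choose_action(state, legal_actions, apply_action, win_probability):
    """One-step lookahead: try each legal action, score the resulting
    position with the value network, and keep the best-scoring action."""
    best_action, best_score = None, -1.0
    for action in legal_actions(state):
        next_state = apply_action(state, action)
        score = win_probability(next_state)   # the network's prediction
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```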

The issue with that of course is that if you are doing "supervised learning" then you need a canonical answer, the "correct answer", as to which player is most likely to win... and how do you get to that with no data? That's probably why it's not done that way, I guess.

Hammerite
Mar 9, 2007


CarForumPoster posted:

I don’t think statements of fact derived from a credible source and presented in good faith on a public discussion on that topic should be particularly controversial. It’d be extremely inappropriate to say this in a work environment, especially when you’re a CEO or manager, but Twitter isn’t the workplace and I don’t have any reason to know they work together.

it's a weird thing to talk about and he's being weird about it. especially as a prominent corporate guy who's in the news

it's only half about what he's saying about it, it's at least that much about the fact he's talking about it at all. and that's partly because people infer from the fact he's talking about it, that he might have strongly held and off-putting opinions in that area

like, I don't know how many women have rape fantasies. And I don't particularly care, so I wouldn't get into an animated discussion about it online. The fact that he did, and under his IRL name, makes his behaviour Weird with a capital W.


Hammerite
Mar 9, 2007

Anyway, has anything at all come out about why they decided to boot Altman in the first place? They must surely have had some compelling reason to do it.
