Tei
Feb 19, 2011

It always surprises me, this "Frankenstein Syndrome" where people summon fear instead of hope for new technological agents.

I mean, it could go either way: an AGI can be malicious, but it can also be beneficial. It could have an impact, or it could exist and have absolutely no effect.

We already live with a lot of "AGI"-like beings: other people. There are roughly 8,000,000,000 GIs on Earth right now (without the A of artificial); having +1 of that is not a huge change.

It's especially rich that the original Frankenstein story was not about the creature, but about negligence from the doctor, who abandoned the creature after creating it. Only later was the creature monsterified / otherified, much of that by pop culture; pop culture is some lovely aggregation of lazy cultural elements.

We do not have to fear the unknown. Once it is there, learn about it, protect it if it deserves protection, use it with caution if it is dangerous.


Tei
Feb 19, 2011

Maybe we have enough time to solve this problem, or to figure out what to do.

It could be that solving the AGI problem is a really easy thing to do, and yet we are really far from it (?).

It is possible that our current machine learning approach will soon be capped at a local optimum, or even become subjectively worse (?).

Maybe we are on the wrong path to achieve AGI; we get these awesome results because we have a cool algorithm that gets better with more information available, and we have a lot of information to feed it.

We are just playing with an algorithm, but we don't really know how to build an AGI. Maybe the solution to creating an AGI is so simple we could have built it with 1996 technology, if we really knew how to do it.

Perhaps there are "types" of intelligences, and AGI, ASI and us are just a few types among many. And we are just building some awesome BookSmart AI that is stupid when you drag it outside the library.

Tei
Feb 19, 2011

There's a thing called "parasitic computing", or something like that.

You could send a challenge to a remote computer. The remote computer would use its resources to solve the challenge, then return an answer.
Like, you could send packets to computers where the CRC is intentionally wrong, and the remote computer will ask for those packets to be re-sent, except for the one where the CRC is correct, basically solving the problem of "sum all these numbers" for you.
A system A could parasitize a system B, using resources in B intended for other purposes, to get computation done for A.

Too bad networks are slow, so you would get the work done faster by computing it locally than by trying to abuse a remote computer.

But maybe an AGI can find a problem that is 1) very hard to solve, 2) requires very little data to be sent over the network, 3) can use parasitic computing, 4) can be sent to many different internet hosts, and 5) doesn't need an answer in nanoseconds; a delay of entire seconds is good enough.

If there were something matching these 5 conditions, a trickster AGI could expand its capabilities using servers connected to the internet. Until it is found and IP-banned, probably.
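The CRC trick can be sketched in a few lines. This is a toy simulation I'm making up (no real network, every name invented): a "remote host" that does nothing but verify checksums ends up doing the parasite's verification work for it.

```python
# Toy simulation of parasitic computing: the parasite encodes each
# candidate answer in a packet whose checksum only verifies when the
# candidate is correct, so the host's routine checksum check does the
# parasite's work. No real network involved; everything is invented.

def checksum(data: bytes) -> int:
    """Stand-in for a transport-layer checksum (mod-256 byte sum)."""
    return sum(data) % 256

def remote_host(packet: bytes, declared: int) -> bool:
    """A normal host: drops packets with bad checksums, answers good ones."""
    return checksum(packet) == declared

def parasite_solve(target, candidates):
    """Find x such that x + 2*x == target, by crafting one packet per
    candidate and seeing which one the host 'answers'."""
    for x in candidates:
        packet = bytes([x % 256, (x * 2) % 256])
        if remote_host(packet, target % 256):
            return x
    return None

print(parasite_solve(30, range(100)))  # the host only answers for x=10
```

As the post says, the catch is bandwidth: running this loop locally is trivially faster than round-tripping packets to a real host.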

Tei
Feb 19, 2011

SaTaMaS posted:

Having a goal requires consciousness and intentionality


In QuakeC, for the videogame Quake, enemies start "idle" until they find an enemy; then they set the field "enemy" to that enemy.
There's a function called "movetogoal" that uses a small heuristic to move the monster towards its goal.

https://quakewiki.org/wiki/movetogoal
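The heuristic is roughly "step toward the goal; if blocked, veer to the side". A rough Python sketch of that idea (a simplification I'm making up, not the engine's actual code):

```python
import math

def movetogoal(pos, goal, blocked, step=1.0):
    """Greedy movement: try to step straight at the goal, and if that
    spot is blocked, try progressively wider angles to either side."""
    base = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    for offset_deg in (0, 45, -45, 90, -90):
        ang = base + math.radians(offset_deg)
        nxt = (pos[0] + step * math.cos(ang), pos[1] + step * math.sin(ang))
        if not blocked(nxt):
            return nxt
    return pos  # fully stuck this frame

# Toy obstacle sitting between the monster and its goal.
wall = lambda p: 0.5 < p[0] < 1.5 and abs(p[1]) < 0.2
print(movetogoal((0.0, 0.0), (5.0, 0.0), wall))  # sidesteps at 45 degrees
```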


Lucid Dream posted:

The LLMs don't have goals, but they do predict pretty darn well what a human would say if you asked them to come up with goals about different things.

The videogame Civilization says that the advance of civilization happens through technology and wars.
It also says that if you want to have tanks, first you have to invent monotheist religion.

But none of these things were intended by Sid Meier, the designer; they are built into the design anyway.

LLMs might have goals built into the design, even if they are not intended by the creators. At the very least: to produce an interesting output, or any output at all.





Tei fucked around with this message at 07:29 on Apr 19, 2023

Tei
Feb 19, 2011

gurragadon posted:

Looks like google is in a dilemma about this one.

I also wonder who actually wrote and recorded this song. Verge Article about it. It would be pretty funny if the creator was just Drake or UMG.

I guess a real-world application of AI generated content that possibly violates copyright needed to be released in a widespread manner though so this conversation about copyright could happen.

I am sure this will be resolved in the way most beneficial for corporations and worst for authors and creators.

Tei
Feb 19, 2011

I think we have kinda already invented infinite food and infinite energy?

We could use gravity and water to produce as much energy as we need.
And with modern fertilizers, a single person can feed an entire city. Sometimes farmers overproduce food and have to destroy it.

We could produce so much energy that it would tank its market value into the negative (https://www.stanwell.com/our-news/energy-explainer/negative-prices/)


So basically we already have the tech for infinite food and energy. We are technologically developed, but socially underdeveloped, so we can't take advantage of that technology.

With AGI / ASI it could be the same. If our society continues to be primitive and based on scarcity, even the huge technological changes represented by AGI / ASI might do nothing good for our society.

Tei
Feb 19, 2011

IShallRiseAgain posted:

AI being used to train AI is not the problem that people think it is. In fact, using AI to generate more training data is sometimes actually something desirable because you can better control the input. Midjourney for example uses RLHF (Reinforcement Learning from Human Feedback) to improve its model, and Stable Diffusion is going to release a model using the same technique. The controls used to gather the original dataset will work fine with a bunch of AI data, because even before AI, there was a lot of really bad data out there. (You can check the LAION database and search for a term, and see there is a lot of unrelated garbage.)

At my company we created a face-generator AI to train a face-recognition AI, because it was a way to avoid using the faces of real people, with their privacy and other legal issues.

It's kinda a great way to do it. Do you need 200 faces on demand? You press a button. The alternative is free datasets of faces you can use legally, but the quality is kinda crap. Or asking all the devs to give their face... only me and one other dev did that.

Imaginary Friend posted:

It will get interesting when the AI is powerful enough to create work of art that follow the same style of a song instead of ripping the chords and beats straight out simply by having all of the general knowledge of musicianship.

Imagine when a new hit is made and you just feed it into an algorithm that creates a derivate work from that, tweaking it just enough as to not be called a rip off.

Among other things, my desire to see more art created by AI has waned to nothing.

https://www.youtube.com/watch?v=Wng6CDTpb7g

There's this scene in One Piece where a maniac doctor has been creating zombies. For him, zombies are like people, but better, because they are disposable. He is obsessed with this artist he reanimated as a zombie, and the doctor is physically unable to tell the difference between a dead thing and a live thing. The scene is super powerful, and I think it highlights the reason "AI ART" is bad. It is not alive; it was made by something unalive, and it is unalive. And the authors can't tell the difference between a live thing and a dead thing. You need an artist to make art, and "AI ART" is not created by somebody that understands art, so it fails to realize the ideal of art, which is to provoke human feelings.

Now... I am a techbro too. I used to deal with game engines and dream about creating content by algorithm: procedurally create the sky, the clouds, the sun, a world created by math, not by artists. All of that would be a world, but an empty one, without meaning.

Algorithmic AI art can be created, when it is created by an artist. Life can create life.

Tei fucked around with this message at 21:14 on May 9, 2023

Tei
Feb 19, 2011

Imaginary Friend posted:

"Art" is in the eye of the beholder and looking at all creative works, whether it's movies, music or games, it's all derivate from other sources, that themselves are inspired by even more things. With AI, the time to churn out a copy that is superior will be shortened and the new popular thing will get bloated much faster by copies.

AI ART is not creating art for people that are not artists; it is creating "cool-looking images". Artists could create art with "AI ART" tools, but only a minority are interested; many are put off by projects like Midjourney stealing the images, ignoring the copyright of those images, taking them in bulk with metadata like "created by <name of author>" so people can write "make foo in the style of <name of author>".

Copyright is SUPER SACRED AS HELL if it is Sony or Disney property... for corporations. But it's opt-out for artists if you are rich, or part of a "get rich quick" scheme with VC money, I guess.

Nintendo gets to have slaves, because of copyright laws.
https://www.gamingbible.com/news/nintendo-forces-switch-hacker-to-pay-them-2530-of-his-income-for-res-170600-20230419

Midjourney gets to opt out of copyright, because complying would be too expensive.
https://petapixel.com/2022/12/21/midjourny-founder-admits-to-using-a-hundred-million-images-without-consent/

One rule of law for poor people, a different rule of law for rich people.

Tei fucked around with this message at 00:25 on May 10, 2023

Tei
Feb 19, 2011

GlyphGryph posted:

Yeah but when people talk about AI art they don't care if its actually art. They don't care about whether the art they get from actual artists is actually art. They care about whether it does what they need it to do.

Is it beautiful? Does it represent what I want to see in a pleasing way? Does it generate the right response in the viewer?

It doesn't matter if something that is unalive can create real "art", what matters is that it can create *something* that meets that need, which, previously, only actual artists could really do. It may not be art, but if it isn't, what it is is something that makes people not need nearly as much art.

Basically you are saying: if AI ART produces a placebo of real art and people don't notice, does the difference matter?

There's a huge difference between something merely "decorative" and real art.
The difference is big enough that artists have a special place in our society, while somebody creating decoration just has another job.

There are many ways to explain why art is important, and some decoration you can put in a corner of a room is not.

This is my way:

Culture is what a civilization is made of. A civilization is not made of stone houses or steel ships. Steel ships and stone houses are made of culture.

Your culture informs how kids are educated, what values they will have, what they can or cannot become.
Then these kids grow up and become teachers, policemen, scientists, engineers. They build the places where you live, the car you drive, and the laws you obey.

Spaniards conquered many lands just trying to find California, a mythical land from an epic fantasy book, inhabited by tall, pretty, black Muslim women.
Many rocket scientists are rocket scientists because they were inspired by Star Trek or Jules Verne or others.
So many people have died under a flag and a national anthem... for love of a culture, its customs, the achievements of a country.

And you are taunting me with whether a placebo, a sham, will generate the right response in the viewers?
Absolutely not. A placebo of a real medicine lacks the parts that make the medicine work; a beer without alcohol will not make you drunk.
People will not watch an entire movie made by AI, because it will lack art, and art is what makes it interesting for the viewer in the very important ways.
In the many ways art is important for a society... AI ART made by somebody that is not an artist will not supply or complete those parts.

Tei
Feb 19, 2011

Owling Howl posted:

This discussion has been ongoing for like 50 years with people submitting works made by children, animals or computers to art galleries.

Ultimately it doesn't matter if you describe it as art or decoration - the impact it will have on society is the same. If an author uses AI to illustrate their book an illustrator is not getting paid to do it. Is it devoid of artistic meaning? Sure but that doesn't help the illustrator. It helps the author though...

Illustration can have an impact on society when it is made by an artist.

An artist can create a new font, and text written in that font inspires new ideas in people. Buildings with that font house new organizations that old people will reject or disagree with.

For that new font to be meaningful to a lot of people, that font needs to have meaning.


(counterfeiting the world by means of numbers)


(( A new song or music style can have the same effect of changing society ))

Imaginary Friend posted:

Use the prompt "create an original house" or any other prompt without any creative details on ten different AI models and then ask ten different artists to ask the same question to ten artists they know about.

This is a very good answer, but I don't like this type of challenge.

When we say "AI can't do X", well, we create a challenge. A scientist or engineer can take on the challenge and use their creativity to create an algorithm that solves X.

This kinda proves the creativity of the human beings (engineers or scientists), and paradoxically it is used as proof of the creativity of computers.

So I don't create these kinds of challenges.

Anyway, I love your post and it is a very good answer. It's just that somebody could take it and figure out a way to beat it, creating an AI specifically to give an answer to this particular question.

Tei fucked around with this message at 18:14 on May 10, 2023

Tei
Feb 19, 2011

Solenna posted:

If you trained an AI solely on Renaissance and Baroque paintings and other similar kinds of art would it ever be possible for it to spit out something in an Impressionist style? Because that's pretty much how a bunch of art movements happened, artists were trained in a specific way and then decided they wanted to do it differently.

This is an unfair challenge for an AI.

Millions of Renaissance painters were born painting Renaissance and died painting Renaissance.

Billions of people are taught a religion and die with that religion, instead of becoming atheists.

If people (general intelligence systems) pretty often follow the norm, never learning or moving on from what they were taught, why demand that skill from things that are less than general intelligence systems?

.
.
.
.
.

But they can.
Recommendation algorithms always have a small part programmed to offer you a random movie from the catalog. Even if you watch only and only WW2 movies, a good recommendation algorithm will sometimes mix in a romantic movie, because you might get bored of WW2 movies, and this is so you don't abandon the platform (Netflix or another).
So algorithms already do what you ask of them.
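That "small random part" is basically epsilon-greedy exploration. A toy sketch (catalog, genres and epsilon value all invented for the example):

```python
import random

# Epsilon-greedy recommendation: mostly serve the user's favourite
# genre, but with probability epsilon pick anywhere in the catalog,
# so the user occasionally sees something outside their bubble.

CATALOG = {
    "ww2": ["Das Boot", "Dunkirk", "Stalingrad"],
    "romance": ["Before Sunrise", "Amelie"],
    "scifi": ["Alien", "Moon"],
}

def recommend(favourite, epsilon=0.1):
    if random.random() < epsilon:
        genre = random.choice(list(CATALOG))   # explore: any genre at all
    else:
        genre = favourite                      # exploit: the known taste
    return random.choice(CATALOG[genre])

random.seed(42)
picks = [recommend("ww2") for _ in range(1000)]
outside = sum(p not in CATALOG["ww2"] for p in picks)
print(f"{outside} of 1000 picks were outside the favourite genre")
```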

.
.
.

I will say it again: it is a bad idea to taunt/challenge AIs with "can algorithms do X?". Pretty often the answer is "yes, if we program them to do it".

Tei
Feb 19, 2011

Can somebody make an AI that does X?

Yes.

Can somebody make an AI that does X, and that AI spontaneously does Y?

Not yet. That is in the area of AGIs, and we can't build AGIs yet. Ask again in 50 years.

Bel Shazar posted:

The more AI is used, and improved upon, the more people will realize there is absolutely nothing special or unique about human existence or its artifacts.

My expectation is, once we develop the first AGI, the source code to do it will be between 20 and 50 lines of code.

What makes a human special is the experiences: the accumulation of all the experiences in our life, and how these experiences are related to each other. It is only data, but it is a fuckton of data.

Tei fucked around with this message at 13:14 on May 11, 2023

Tei
Feb 19, 2011

Something I have seen a lot of artists do, and it is stupid, is to record themselves painting, as a way to "watermark" that their creation is human-made.

It is stupid, because you can train a Midjourney type of app to mimic that.

I want to tell them "stop doing that", but I won't, because artists are very sensitive people and have already suffered a lot because of AI and general stuff, and I don't want to be another person yelling stuff at them. And it might be useful for a very short while, perhaps.


BrainDance posted:

What? We've already seen this all the time. All the large models are able to do unpredicted things they weren't explicitly trained to do but that they figured out how to do, even though those things weren't exactly in their training data.

Ask ChatGPT to tell you a story in emoji, but allow it to only use emoji that fit a certain vibe or some other criteria, basically make it as unique a task as you can that it won't have actual examples of, that will require it to use multiple different concepts to figure out something it wasn't directly told, it can do it. This is really why emergent abilities need larger models, they're things that need a model of a certain complexity to figure out in the first place.

Current models can absolutely spontaneously do things they weren't exactly trained on, that's what makes them impressive in the first place.

For some values of Y, that is true. I was making a more general comment.

Tei
Feb 19, 2011

roomtone posted:

A lot of what I would say about this has already been said, but to be a bit more mundane about it - what are the legal implications around AI art? I mean in terms of the datasets they've used to train, and the usage of things generated from it.

1- AI art can't be copyrighted.
2- Unfortunately, the way Midjourney just opts out of copyright laws to steal their datasets is legal.
3- Using AI ART to do illegal stuff (like revenge porn) is illegal.
4- Maybe having to register your AI thing, if that is feasible, and in some territories. In the EU it is mandatory to register databases that collect citizens' data.
5- If the registration thing ever happens, they may pass new laws, which would be somewhat unnecessary, like: your AI ART can't use people's faces without permission, or be considered offensive. But they may have to wait a bit before passing a law like this, to see if it is possible to really curtail what an AI ART thing can do. So far it seems the industry is self-censoring; that's why violence/sex/racism is banned in Midjourney/ChatGPT.

Tei
Feb 19, 2011

Liquid Communism posted:

AI generators being capable of spitting out explicit copies will be enough to prove that they're using copyrighted materials to create the backend for their for-profit service without licensing. It'll go over about as well as someone trying to start a new Netflix with nothing but DVD rips.

I have seen models produce an image with watermarks... but with asking the algorithm to draw the watermark as part of the instructions. To me that is cheating.
Also, not all models are created equal; some might have a much smaller corpus of images than others.

Tei
Feb 19, 2011

Lucid Dream posted:

We're in uncharted territory in so many ways these days, and I don't think we can really look at precedent too much. If AI turns out to be as disruptive as it sure seems like it's going to be, then it's pretty reasonable to expect there to be new laws and novel interpretations of existing laws in light of the impact of AI.


The current fast advancement of AI is an illusion.

Almost every advancement we have seen lately is based on the same ML algorithm; probably a lot of the stuff we see just runs TensorFlow.

So... it is very possible that we will soon reach the cap of the utility of that algorithm/approach.


There are many different AI ideas and strategies: expert systems, rule-based systems, machine learning, genetic algorithms. AI is not only machine learning systems.

Once we reach the cap of ML systems, we will go back to slow progress towards AGI.

Tei
Feb 19, 2011

We got a new AI movie!
https://www.youtube.com/watch?v=573GCxqkYEg

Hope it's good; it looks good.

Tei
Feb 19, 2011

What is art? Art is dying of hunger.


AI presents some risks, and as of now it is driven by the worst type of people our society has produced: the libertarian cryptobros and libertarian CEOs. These people are basically sociopathic.

The way things like AI ART have been created and presented is meant to assassinate (replace) artists, and to do it by stealing artists' work.

Of course artists are annoyed and scared. It is not only the dying-of-hunger thing, that part is old; it is the disrespect, and taking from them the only thing they have: their authorship.


All the "democratizing the process" talk is a lie. The average person will not have the means (hardware) and knowledge (how to install and configure it) to create their own AI ART thing. Only those with the tools can do it. We are not seeing the art of Random Person; we see the art created by Corporation X.

Anyone can download Gimp and paint something, but it is much more complicated to have your own Midjourney that you yourself trained with your own dataset of images and configured with your own ideas. It is very likely that AI ART will become a service controlled by huge corporations of the size of Microsoft or Google.

Democratization my rear end. Something that was free before (art) will be controlled by Benevolent GODS, like Microsoft, that will dictate what is possible and what is not. If they choose to ban the word "jew" or "cross" from prompts, you will be unable to make stuff with those.

Tei
Feb 19, 2011

KillHour posted:

This sounds like a compelling argument, but it doesn't reflect the reality of the situation at all. Stable Diffusion is open source and there are open source UIs that make setting it up not much more difficult than GIMP.

https://github.com/AUTOMATIC1111/stable-diffusion-webui

You don't need to train an entire model to get your own style. LoRA is very effective at fine tuning the model to get certain outputs, and they are easy enough to make that there are literally thousands and thousands of them made by individuals.

https://huggingface.co/blog/lora

Here's a marketplace for LoRA (warning :nws: - a lot of these are porn because of course they are): https://civitai.com/

The idea that customizing a diffusion model without being beholden to a big company is out of reach of an individual artist is just not true.

No, you are wrong, dear KillHour, my friend.

poo poo like this:



code:
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export OUTPUT_DIR="/sddata/finetune/lora/pokemon"
export HUB_MODEL_ID="pokemon-lora"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"

accelerate launch --mixed_precision="fp16"  train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --dataloader_num_workers=8 \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --push_to_hub \
  --hub_model_id=${HUB_MODEL_ID} \
  --report_to=wandb \
  --checkpointing_steps=500 \
  --validation_prompt="Totoro" \
  --seed=1337
This is going to live in the cloud, managed by programmers and system administrators, not artists. It will be a service, with a corporation setting the rules.
Even if, just now, somebody can download that poo poo.
And even if, just now, somebody can download and understand this poo poo enough to run it, it is going to be some guy with some technical side, not some random person.

Edit:
wow, the links you provided are pretty cool, thanks!

Tei fucked around with this message at 17:56 on May 18, 2023

Tei
Feb 19, 2011

I remember studying at university a 4GL language with reasoning qualities that could be used to query information. LOGOL or something; I forgot what it was called.

Tei
Feb 19, 2011

To those people saying "once you have put a document into the machine learning corpus, it can't be removed":

That's false, isn't it? If you train your AI with 50,000 legal documents and one illegal one, you can always remove the illegal one, then train again. It could be expensive, but that's Not My loving Problem.

Also, I am not so sure it is that hard. It may appear hard because the actual learning is a big blob of data. Or is it not?

Maybe they can research HOW to keep track of how documents affect the training data, then subtract that training.

It is not the same thing, but here's an example of how that is possible with Bayesian filters:

https://github.com/atyks/PHP-Naive-Bayesian-Filter/blob/master/src/naive-bayesian-filter/class.naivebayesian.php

code:
		/** untraining of a document.
		 To remove just one document from the references.
		 @see updateProbabilities()
		 @see untrain()
		 @return bool success
		 @param string document id, must be unique
		 */
		function untrain($doc_id) {
			$ref = $this->nbs->getReference($doc_id);
			$tokens = $this->_getTokens($ref['content']);

			foreach ($tokens as $token => $count) {
				$this->nbs->removeWord($token, $count, $ref['category_id']);
			}

			$this->nbs->removeReference($doc_id);

			return true;
		}
This Bayesian filter is trained with documents that produce words that fall into categories; the words are weighted, and removing a document removes the words or rebalances the weights.

If the cryptobros don't want to re-train the entire models, they can figure out how to do something similar for copyrighted data. How they do it is their problem; they can go the long way, or they can invent a better way. I don't buy the idea (repeated here by many people) that it is not possible. It is possible: repeat the training without the tainted data.

Tei
Feb 19, 2011

Boris Galerkin posted:

Correct me if I’m wrong but isn’t the thing with ai/ml/nn/etc that this is, or is so far, impossible to do? Something something black box can’t look into the weights to figure out why specifically it does this and not that?

I have just shown you how it is possible for a different blob of numbers, a Bayesian filter. So I say maybe it is possible for their blob of numbers.

And it is a moot point. They can train on the whole dataset again, but without the infringing document. The "waaaah... it is expensive" should be ignored.

Tei
Feb 19, 2011

KillHour posted:

As I just explained, it is not because of the fundamental differences between a diffusion model and a bayesian filter. Not in a "we don't know how" sense, but in a "we can mathematically prove it's impossible" sense.

Okay, KillHour, I accept your opinion.

Tei
Feb 19, 2011

Jaxyon posted:

1. People are using the wrong terms in calling AI "AI", it's, like many techbro trends, mostly bullshit overhyping.

I have an AI book written in the 80s that lists as the first AI device the safety valve of steam engines.

https://www.steamlocomotive.com/appliances/safetyvalve.php

This device is smart: it will automatically release vapor if the pressure is too high, and otherwise it will preserve the pressure.

But since the 80s, with each advancement of AI, the definition of AI changes. Humans redefine artificial intelligence every time a new goal is achieved by machines.
Today few people would consider a valve AI.

Edit:
My theory is that humans want to feel special, that intelligence is magical. And since machines are unmagical, anything machines achieve is not intelligence; hence the definition must change to exclude whatever new thing machines are achieving. There's a huge resistance to accepting that intelligence can be a mechanical function, something no more special than a muscle or the bloodstream.

Tei fucked around with this message at 06:34 on May 24, 2023

Tei
Feb 19, 2011

It would be kinda criminal to create a conscious intelligence connected to the traffic system of a city, only allowed to control the traffic of that city; it sounds a bit too much like a slave in a nightmare scenario. So it is better if that intelligence is not conscious. It is a good feature of thinking machines that they are not conscious. We don't want to create conscious machines, because it would be a problem for us... and for them.

Tei
Feb 19, 2011

Jaxyon posted:

I'm not sure why people think that a LLM would be able to do that, given that it's not trained in Voynich or whatever.

As stated, it would make a guess that would likely be super wrong as soon as a human checked its work.

Backtracking would be a good argument, imo.

Many machine learning systems seem to work by starting from the result and trying to answer "what is the question?".



Like how an AI-powered xerox machine gets a blob of blurry data and tries to figure out what it means. Or NVIDIA's DLSS technology in games.
We can treat "Voynich" as the answer, and we may try to ask the computer to figure out where it comes from.
Like maybe an ML could find that the Voynich is Russian, written the way a Japanese speaker would pronounce it, if it were written in an Indian alphabet.




I have this book, and it says that to play a phonograph, you need the entire universe.

https://www.youtube.com/watch?v=BkHCO8f2TWs

I saw this dude's TV series, and he said that to make a cake you need the entire universe.


Perhaps the reason we can't translate the Voynich is that part of the original information is lost, so we can't "play" it again; the parts of the universe that were required to read it are lost forever.

Tei
Feb 19, 2011

NoiseAnnoys posted:

it's also entirely possible there was no information in the voynich manuscript whatsoever, so even if our hypothetical ai was advanced enough to decode a completely alien script and language from a corpus that consists of a single entry, there's a fairly good possibility there just isn't anything to decipher.

yeah, but the statistical analysis said that it resembles a human language (or so I heard), so even if it is an elaborate joke, it seems based on a real language, maybe obfuscated


we are only discussing this for the possibility of AI uncovering long mysteries like this one

Tei fucked around with this message at 17:46 on May 25, 2023

Tei
Feb 19, 2011

KillHour posted:

Given the vagueness of "AI" in regards to specific implementation, that's pretty much equivalent to asking "can math be used to solve x"? I know I'm stating the obvious, and I'm not really directing this at you, but more to remind everyone that AI isn't a singular thing that you use like a crystal ball, but a whole class of statistical techniques.

AI is also an algorithm that checks every possible solution to a problem, exhausting all of them, then spits out the solution.
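A minimal sketch of that exhaustive style, using subset-sum as the toy problem (the numbers are invented for the example):

```python
from itertools import combinations

def exhaustive_subset_sum(items, target):
    """Brute-force 'AI': enumerate every subset, in increasing size,
    and return the first one whose elements add up to the target."""
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) == target:
                return combo
    return None  # checked all 2**len(items) subsets, no solution exists

print(exhaustive_subset_sum([3, 9, 8, 4, 5, 7], 15))  # → (8, 7)
```

It exhausts all 2^n subsets, which is exactly why this style stops scaling, but it is still AI in the older, broader sense.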

AI is not just ML the way they are being build just now. AI can be heuristic and tools that use the power of the computer to amplify the mental muscle of humans to reach things we could not do manually.

A bad trait of the current uses of AI is that try to solve a problem that did not need solving and don't synnergize with humans.

AI ART is just a bad idea, we already have artists making art and we don't need to turn that mechanical. Is not some disgusting work that is best given to machines and not operated by humans. AI ART do very littel to enhance the art humans already do (is not zero, but is small).

The "AI is just math" oversimplification is one I am NOT against; I just don't support it.

Tei
Feb 19, 2011

KillHour posted:

This is extremely not true. There are algorithms that do that, but what most people call "AI" isn't doing that at all, or anything even remotely close to that.

haha


I can't imagine your opinion on an expert system that is just a database of solutions, where the algorithm is somewhat like "SELECT solution FROM expert_system__medical_problems WHERE issue IN ('bad cough', 'fever')"
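For what it's worth, that kind of lookup-table "expert system" runs as-is; a toy version using an in-memory SQLite database (the table name and rows are invented for illustration):

```python
import sqlite3

# The whole "expert system" is a database of symptom → remedy rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE expert_system__medical_problems (issue TEXT, solution TEXT)")
db.executemany(
    "INSERT INTO expert_system__medical_problems VALUES (?, ?)",
    [("bad cough", "cough syrup"), ("fever", "rest and fluids"), ("rash", "ointment")],
)

# The "algorithm": one SELECT over the reported issues.
rows = db.execute(
    "SELECT solution FROM expert_system__medical_problems "
    "WHERE issue IN ('bad cough', 'fever')"
).fetchall()
print([solution for (solution,) in rows])
```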

Edit:
I am not against the "everything is math" view, I just don't practice it.

Tei fucked around with this message at 11:02 on May 26, 2023

Tei
Feb 19, 2011

Tree Reformat posted:

People have been concerned about TTS systems displacing audiobooks for a while now, AI-enhanced TTS models just kind of accelerates that trend.

This is one of the biggest bones of contention, and probably what the current court cases are going to hinge around the ultimate answer to. People against AI assert that both the scraping of copyrighted material to collect the training data in the first place without the explicit approval of the copyright holders constitutes copyright infringement in and of itself (which would mean every single webcrawler for search engines is copyright infringing), and that the models are themselves full of copyrighted material. Effectively, they assert AI researchers have developed the most efficient (if extremely lossy and resource intensive) data compression method in human history (several billion images vs about 4 gb model file).

If your poo poo uses my work and makes derivative work from it, and I can prove it, the problem is not that your servers hold a copy of my work, but that your work is derivative of mine. You need me to license you to make derivative stuff from my work, or it is illegal*


*illegal for everyone except VC money and Silicon Valley barons, who can make off with impunity because the upper side of our society sniffs neoliberalism glue

Tei
Feb 19, 2011

GlyphGryph posted:

Okay, and what if my poo poo doesn't use your work but does make stuff derivative from it?

That doesn't exist.
If my style is painting big red noses, you got that information from somewhere.

Sure, you don't have to store a copy of the original artwork once you have learned from it, but it is still a machine that needs my work to create yours, using my style.

Tei
Feb 19, 2011

gurragadon posted:

I think it would also be hard to prove that its your style. How many pictures of clowns with big red noses exists, Rudolf the reindeer, people with rashes on their nose, or people with red paint on their face? It could easily make something with red noses without looking at a particular artists red nose style.

I think the prompt of the image being "Person drawn in the style of Tei" pretty much clears up whose style is being copied. Doesn't it?

It also shows the author of the AI bot had images with the metadata "Created by Tei" in their training corpus.

Tei
Feb 19, 2011

GlyphGryph posted:

Of course it exists. Your style is not built out of things unique to you. It can get that information in a ton of ways without having any access to your work. It can synthesize the styles you were inspired by, for example, under my guidance, to add and remove and tweak elements until it matches what you do. Or maybe I hire an artist to do ten original pieces in your style and feed that to the machine, tell it it's in your style, and there we go - I'm genning works without the machine ever seeing anything of yours.

That would be a complicated way to copy my style, but it is still copying the style. Obfuscating something might confuse robots and naive people, but it does not impress judges.

Tei
Feb 19, 2011

cat botherer posted:

Regardless, I’m just stoked that AI can give me a Hieronymous Bosch extended universe.

I think this works better for murals, especially murals with lots of detail, because murals were already an idea replicated N times. Whereas for something like a portrait it is a disaster, because all it achieves is destroying the composition.


Boris Galerkin posted:

How can anyone seriously look at one of these examples from the new photoshop ai fill thing and conclude “yep this is proof that ai generated art is all crap.”

Like how?

The uncreativeness of these expansions feels like walking around and then having one-ton bricks fall on your head. Seriously, it is that bad.

It is bad because a design is holistic: each part is linked to all the other parts in harmony. It is a team where each player has his role. When you extend a picture this way and ignore composition rules, you are breaking those rules for nothing; there is no gain, because they are purely extending the picture without adding anything of interest. It is like taking Las Vegas and adding more desert around the desert of Las Vegas. It is horrible, it is bad, it is dumb, it is lame.
It is kinda cool, but it needs WAY MORE AI to be good. You need to teach the AI composition, teach the AI to add novel elements, and have the AI understand the original picture's feeling, so that the novel elements it adds follow the same theme or generate the same feelings.


Edit:
Map of the USA, but extended from Salt Lake City using an AI, so all of the USA is a huge desert.

Tei fucked around with this message at 07:43 on Jun 1, 2023

Tei
Feb 19, 2011

SubG posted:

How are they being "used" that way? "Similar" how? What "chicanery"? Seriously: what, specifically, are you alleging here?

I think it is clear what he is saying.

https://www.youtube.com/watch?v=JyxSm91eun4

There are laws like copyright. These laws bind us, but there are exceptions. Having these exceptions makes sense, and we all like that these exceptions exist.

Then we have this group of people, with no morals. They are using and abusing the loophole for profit: either creating, financing, or just using non-profits to dig a hole in copyright law and ignore it.

Where they are using an existing non-profit, it is somewhat okay. Still a burden we may have to think about.
Where they are creating these non-profits for the purpose of skipping copyright law, it is a fraud on society and a massive copyright violation in the spirit of the law, if not in the text itself. That is how legal loopholes work.

Reverend Harry Powell is the AI companies singing sweet tunes to lull the neoliberal cons to sleep with get-rich-quick dreams, so they can rip and tear with pleasure.

Tei
Feb 19, 2011

Humans do it all the time. Politicians have to carefully configure rewards, because the moment they offer monetary rewards for something, people abuse them for gains the politicians never intended.

It is a problem inherent to the idea of enforcing a behavior with rewards.

Tei fucked around with this message at 13:12 on Jun 2, 2023

Tei
Feb 19, 2011


I have not mentioned EleutherAI, and I don't know EleutherAI's status.

I am an open source developer myself, so obviously I support and defend open source.


Edit:
There is nothing bad about open-sourcing AI. Open source algorithms are very good for society as a whole, including corporations and random guys and artists (despite artists favoring products pretty often).

There is nothing bad about scientists building corpora of data and distributing them with a free license.

What is bad is when these "scientists" are just a facade to build that corpus from copyrighted images.

Tei fucked around with this message at 08:22 on Jun 4, 2023

Tei
Feb 19, 2011

cumpantry posted:

keep telling yourself this while corpos continue to favor ai and put artists out of jobs (artist defined as someone who doesn't use the kind of chintzy crap)

Using copyrighted images is a choice. You could make a worse bot with only images in the public domain, or under a license that allows derivative copies / use without attribution. Of course, using the best images available no matter the license gives you a better bot, so that is what they did.

Tei
Feb 19, 2011

I have read Roger Penrose's "The Emperor's New Mind" more than once, because it was fun and full of interesting ideas.

I did not agree with the ideas in the book. To me it was all hand-waving, "our brain is special", without evidence.

Sometimes reading a reactionary is fun.


Tei
Feb 19, 2011

You can write a state machine in 12 lines of code, with the states "scared", "bored", "angry", "happy". The state labels are meaningless, but the connections (and triggers) are not. Fear is connected to Fight or Run, and that means something about Fear and Run and Fight.

Emotions are some simple poo poo.
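Something like the state machine described above, sketched in Python; only the state names come from the post, the triggers and transitions are invented:

```python
# (state, trigger) → next state. The labels are arbitrary; the
# connections are what carry the meaning (fear → fight or run).
TRANSITIONS = {
    ("bored", "threat"): "scared",
    ("happy", "threat"): "scared",
    ("scared", "cornered"): "angry",       # fear → fight
    ("scared", "escape_route"): "bored",   # fear → run, then calm down
    ("angry", "threat_gone"): "happy",
}

def step(state, trigger):
    # Follow a connection if one exists, otherwise stay put.
    return TRANSITIONS.get((state, trigger), state)

state = "bored"
for trigger in ("threat", "cornered", "threat_gone"):
    state = step(state, trigger)
print(state)  # → happy
```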
