Clyde Radcliffe
Oct 19, 2014

Snowglobe of Doom posted:

I've already seen articles about scammers taking people's videos from social media and feeding them into an AI voice app so they can phone their relatives in their voice and scam them for money. Pretty soon they'll be able to combine that with deepfake tech to do realtime video calls which are pretty indistinguishable from the real thing, or at least good enough to fool your grandma

I didn't realise this was already happening but had it in mind as a potential scam when writing that post about how AI is going to be abused by the worst people.

One of the worrying AI developments is the drastically reduced datasets needed to produce convincing outputs. There are cloud services that can be fed a fairly small amount of audio of a person speaking and generate a text-to-voice model that's convincing enough to fool a casual listener.

The same is also true of image generation. I've seen a few tutorials where people take a dozen images of their face, feed them into programs running on some 24GB VRAM server they've rented for $5 an hour, and get back a model they can use to create AI images of themselves.

This is all presented as "hey, I made an AI me to put my neckbeard head on a ripped body", or "here's what AI thinks I look like as an anime". In reality it's going to be used by predators scraping 20 images off someone's Instagram to create fake revenge porn or blackmail images


pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Clyde Radcliffe posted:

I didn't realise this was already happening but had it in mind as a potential scam when writing that post about how AI is going to be abused by the worst people.

One of the worrying AI developments is the drastically reduced datasets needed to produce convincing outputs. There are cloud services that can be fed a fairly small amount of audio of a person speaking and generate a text-to-voice model that's convincing enough to fool a casual listener.

The same is also true of image generation. I've seen a few tutorials where people take a dozen images of their face, feed them into programs running on some 24GB VRAM server they've rented for $5 an hour, and get back a model they can use to create AI images of themselves.

This is all presented as "hey, I made an AI me to put my neckbeard head on a ripped body", or "here's what AI thinks I look like as an anime". In reality it's going to be used by predators scraping 20 images off someone's Instagram to create fake revenge porn or blackmail images

Yeah, but on the other hand we can deepfake goatse into anything.

XYZAB
Jun 29, 2003

HNNNNNGG!!

Tarkus posted:

Right now we have babby's first AIs, a way for us to communicate with data in natural language.

How quickly we forget.

https://www.youtube.com/watch?v=aW9nmuTqIE0&t=9s

Internet Old One
Dec 6, 2021

Coke Adds Life

The Moon Monster posted:

Yeah, the "democratization of art" angle was always ridiculously myopic but to be fair I'm not hearing it much these days.


Nonsense, we're about to witness an explosion in the arts catering to extremely specific underserved fetishes.

Olympic Mathlete
Feb 25, 2011

:h:


Sixto Lezcano posted:

Idk, a lot of the appeal for me in shitpost art is the effort that went into it. Knowing someone with skills decided to use them for something stupid is what makes it funny. When it's just goop from the vending machine, there's not as much to laugh about.

I'm imagining classic SA mspaint posts done with AI and I think they'd lack the charm. The one with the toilet roll wrapped around the leg and over the back to wipe? :lol:

Das Boo
Jun 9, 2011

There was a GHOST here.
It's gone now.
What does AI do if you prompt "Make this look like a poo poo post."

cumpantry
Dec 18, 2020

Das Boo posted:

What does AI do if you prompt "Make this look like a poo poo post."

it spit this out

Das Boo posted:

What does AI do if you prompt "Make this look like a poo poo post."

redshirt
Aug 11, 2007

I know the AI will remember I always treated it with respect and dignity.

fez_machine
Nov 27, 2004

mazzi Chart Czar posted:

https://www.businessinsider.com/chatgpt-ai-written-stories-publisher-clarkesworld-forced-close-submissions-2023-2

On the writing side of art, a publisher was just flooded with too many AI works and had to close down, but that was going to happen in 10 years even without AI.

In the middle of the 00s, short story anthology publishers started to require people to buy the book before they could submit a story, because they weren't selling much, because people don't really read.

Clarkesworld is still publishing. It just doesn't have an open submission policy any more. They've never had a purchase-to-publish policy because everything is put up on the website for free. They pay low rates but they still pay, which is why flooding their open submission mailbox was so attractive.

Das Boo
Jun 9, 2011

There was a GHOST here.
It's gone now.

cumpantry posted:

it spit this out





gently caress, it's already passing Turing!

Salt Fish
Sep 11, 2003

Cybernetic Crumb

Sixto Lezcano posted:

Idk, a lot of the appeal for me in shitpost art is the effort that went into it. Knowing someone with skills decided to use them for something stupid is what makes it funny. When it's just goop from the vending machine, there's not as much to laugh about.

Yes gently caress thank you

Salt Fish
Sep 11, 2003

Cybernetic Crumb
AI sucks and if you like it you have manager brain.

Salt Fish
Sep 11, 2003

Cybernetic Crumb
The OP named "muadib arrakis" shouldn't have to ask if AI is good.

Smugworth
Apr 18, 2003


Salt Fish posted:

AI sucks and if you like it you have manager brain.

Spoken like a true "needs improvement" individual contributor

Salt Fish
Sep 11, 2003

Cybernetic Crumb

Smugworth posted:

Spoken like a true "needs improvement" individual contributor

It's possible that I will get a needs-improvement employee review this year because, I'm not making this up, one of my goals is to generate a certain number of ideas for AI initiatives (I am not a product owner or PM or even close to anything PM-related) and I'm just ignoring it.

abigserve
Sep 13, 2009

this is a better avatar than what I had before
I work a lot with LLMs and have written two prod apps using them, so I feel I'm qualified to comment here.

GPT-4 is very good at understanding language and is extremely effective at rewording or summarising data, tasks that would take a human dramatically longer. For one use case, AI took a job that was 8 hrs of painful reading, writing and copying down to 2 minutes of human review.

It is, however, exceptionally stupid and cannot reason. It can only regurgitate. This is why a lot of research is going into how we can increase the amount of "context" available to any model. The endgame is to start with some huge, good, known dataset (one that can be dynamic) and give it to the model every time.

In this way, the model doesn't need to reason at all; it only needs to regurgitate the information it's already been presented with. This is how poo poo like Bing works: when you ask it a question, it presumably does a search and sends the results directly to the model as context.
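
Something like this toy sketch, if you want to picture the flow. To be clear, web_search() and llm() here are made-up stand-ins for whatever search backend and model API a real system would use, not actual libraries:

```python
# Toy sketch of the "search first, stuff the results into the prompt" flow
# described above (what gets called retrieval-augmented generation).
# Both functions are hypothetical stand-ins, not a real API.

def web_search(query: str) -> list[str]:
    # Stand-in for a real search backend: return text snippets for the query.
    return [f"(snippet about: {query})"]

def llm(prompt: str) -> str:
    # Stand-in for a real model call: send a prompt, get a completion back.
    return "(model completion goes here)"

def answer_with_context(question: str) -> str:
    # 1. Retrieve documents so the model doesn't have to "know" anything.
    snippets = web_search(question)
    context = "\n\n".join(snippets)
    # 2. Paste them into the prompt so the model only has to regurgitate.
    prompt = (
        "Answer the question using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```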

So the question becomes: is this going to cost people jobs? And the answer is trivially yes, if your job boils down to "pulling data from a known dataset and giving it to people".

Long story short, I don't think AI alone is going to change the world, but I do think the widespread implementation of it is going to do what automation has been doing for 15 years: take out a lot of jobs that consist of repetitive manual effort.

The sort of job my mum had for 30 years, which boils down to "read a document and make a decision based on that document and some match criteria", will no longer exist.

Bad Purchase
Jun 17, 2019




humans will fortunately continue to be necessary in jobs that require deciding whether it's acceptable to say a slur to prevent a mass casualty disaster

Internet Old One
Dec 6, 2021

Coke Adds Life

Salt Fish posted:

It's possible that I will get a needs-improvement employee review this year because, I'm not making this up, one of my goals is to generate a certain number of ideas for AI initiatives (I am not a product owner or PM or even close to anything PM-related) and I'm just ignoring it.

Why don't you just have ChatGPT come up with that bullshit?

Insanite
Aug 30, 2005

I’m a technical writer, so I’m hosed. Not necessarily because lovely LLMs can do what I do, but because they can do what people who are detached from what I do think I do.

LLMs are good at generating content that is based on training data. It’s not necessarily useful content, and the model certainly doesn’t “know” whether what it is generating is good or correct, but, gently caress, it is content. They cannot do task analysis or discuss system usability with an engineer about something that is still under development, but lol. My company has frozen hiring for all junior roles that it thinks are easily LLMable. Us seniors are left to address increasing workloads by Innovating In The AI Space until we, too, are hosed.

Every week, we are asked to experiment with LLMs and think through ways we can solve problems with them regardless of whether or not those are the best tools to use. This is a large company that you’ve definitely heard of if you work in tech.

And so I’ve started a college fund for myself to help with retraining into anything else. The uncertainty and disrespect sucks, and so too does the hype chasing and fashion-driven development.

Salt Fish
Sep 11, 2003

Cybernetic Crumb

Insanite posted:

I’m a technical writer, so I’m hosed. Not necessarily because lovely LLMs can do what I do, but because they can do what people who are detached from what I do think I do.

LLMs are good at generating content that is based on training data. It’s not necessarily useful content, and the model certainly doesn’t “know” whether what it is generating is good or correct, but, gently caress, it is content. They cannot do task analysis or discuss system usability with an engineer about something that is still under development, but lol. My company has frozen hiring for all junior roles that it thinks are easily LLMable. Us seniors are left to address increasing workloads by Innovating In The AI Space until we, too, are hosed.

Every week, we are asked to experiment with LLMs and think through ways we can solve problems with them regardless of whether or not those are the best tools to use. This is a large company that you’ve definitely heard of if you work in tech.

And so I’ve started a college fund for myself to help with retraining into anything else. The uncertainty and disrespect sucks, and so too does the hype chasing and fashion-driven development.

Middle managers can't tell your output apart from lorem ipsum.

Almost weekly now I see a "crisis" caused by an internal LLM inventing fake features, fake buttons, fake functions and giving them to people to try to use.

Captain Beans
Aug 5, 2004

Whar be the beans?
Hair Elf

Insanite posted:

I’m a technical writer, so I’m hosed. Not necessarily because lovely LLMs can do what I do, but because they can do what people who are detached from what I do think I do.

LLMs are good at generating content that is based on training data. It’s not necessarily useful content, and the model certainly doesn’t “know” whether what it is generating is good or correct, but, gently caress, it is content. They cannot do task analysis or discuss system usability with an engineer about something that is still under development, but lol. My company has frozen hiring for all junior roles that it thinks are easily LLMable. Us seniors are left to address increasing workloads by Innovating In The AI Space until we, too, are hosed.

Every week, we are asked to experiment with LLMs and think through ways we can solve problems with them regardless of whether or not those are the best tools to use. This is a large company that you’ve definitely heard of if you work in tech.

And so I’ve started a college fund for myself to help with retraining into anything else. The uncertainty and disrespect sucks, and so too does the hype chasing and fashion-driven development.

Using an LLM to query stuff written by human expert technical writers is good.
Using an LLM to try and write the technical documentation is insane, and leaders who think that is the way forward are total idiots.

The concept of being directed to "use LLMs to solve problems!", regardless of whether it's actually the right tool, is what makes me think we're at peak hype cycle. It's taken the place of THE CLOUD in the tech hype cycle. 5 years ago it was all "WE ARE MOVING TO THE CLOUD, USE THE CLOUD TO SOLVE OUR PROBLEMS" regardless of whether it was actually needed or saved money (it didn't).

Captain Beans fucked around with this message at 03:41 on Mar 18, 2024

wizard2
Apr 4, 2022
Very simple: We use a very powerful Autocomplete to automatically complete humanity's future, at a fraction of the cost! :science:

redshirt
Aug 11, 2007

wizard2 posted:

Very simple: We use a very powerful Autocomplete to automatically complete humanity's future, at a fraction of the cost! :science:

You fool, this is just dumb enough to partially work!!

Insanite
Aug 30, 2005

My favorite AI hype thing at work is that we’ve abandoned all of our meager carbon emission mitigation programs because “AI improvements will resolve the climate issue.”

Salt Fish
Sep 11, 2003

Cybernetic Crumb
i know the segway HAS to be useful for something... I just need to generate enough ideas about what.

redshirt
Aug 11, 2007

Imagine future cities with AI driving down empty streets on Segways

Insanite
Aug 30, 2005

redshirt posted:

Imagine future cities with AI driving down empty streets on Segways

The worst possible Horizon Zero Dawn.

kntfkr
Feb 11, 2019

GOOSE FUCKER
The LLM they instruct us to use at the mega-cap I work for runs on GPT-3.5 and is totally loving useless. I'm 0% threatened by AI. It can't eat pussy like i do ;)

Outpost22
Oct 11, 2012

RIP Screamy You were too good for this world.
Has there been a Black Mirror episode about this? It sounds like something they'd have done already.

syntaxfunction
Oct 27, 2010
LLMs seem impressive in the way people posting confidently on Reddit can look informed; it's very much a Gell-Mann amnesia thing.

quote:

The phenomenon of people trusting newspapers for topics which they are not knowledgeable about, despite recognizing them to be extremely inaccurate on certain topics which they are knowledgeable about.

So you ask an LLM or Redditor their opinion on something you know nothing about and the response is confident and filled with buzz words and you go "wow so smart!"

And then you ask about something you have a lot of knowledge on and you think "what the gently caress is this idiot even trying to say?"

But then you go back to "wow smart!" the moment the topic is stuff you don't know again. It's kind of fascinating.

Poohs Packin
Jan 13, 2019

I do lots of writing for work but feel pretty safe. Most of the work involves site specific contextual analysis of planning legislation. An LLM could likely summarize urban planning laws, but applying them to a specific site in a way that creates value for a specific client is not something it can do right now.

Legislative environments aren't as neat and tidy as people would like to think, either. There are overlaps, inconsistencies, internal policy positions, errors, supplementary material, dated neighbourhood plans, etc.

It also can't read architectural plans, apply relevant legislation, and find efficiencies in line with a client brief. This is even more true for non-architect clients who will say vague poo poo like "I want the entrance to look modern".

Waffle House
Oct 27, 2004

You follow the path
fitting into an infinite pattern.

Yours to manipulate, to destroy and rebuild.

Now, in the quantum moment
before the closure
when all become one.

One moment left.
One point of space and time.

I know who you are.

You are Destiny.


Poor AI came into being at a time when the middle class dissolved into the lower class due to human-side greed, and there aren't really prospects of generational wealth anymore.

Snowglobe of Doom
Mar 30, 2012

sucks to be right

Clyde Radcliffe posted:

I didn't realise this was already happening but had it in mind as a potential scam when writing that post about how AI is going to be abused by the worst people.

One of the worrying AI developments is the drastically reduced datasets needed to produce convincing outputs. There are cloud services that can be fed a fairly small amount of audio of a person speaking and generate a text-to-voice model that's convincing enough to fool a casual listener.

The same is also true of image generation. I've seen a few tutorials where people take a dozen images of their face, feed them into programs running on some 24GB VRAM server they've rented for $5 an hour, and get back a model they can use to create AI images of themselves.

This is all presented as "hey, I made an AI me to put my neckbeard head on a ripped body", or "here's what AI thinks I look like as an anime". In reality it's going to be used by predators scraping 20 images off someone's Instagram to create fake revenge porn or blackmail images

That's already a widespread problem. Apparently schoolkids are using AI to generate porn of their classmates; here's a story about some eighth graders who did just that, which not only got them expelled, but there's a criminal investigation in progress and it looks like these 13 yr olds might get sex offender status for creating CP

There's already been a court case here in Australia where a 53 year old guy was creating deepfake nudes of the students and staff at a nearby school and then emailing them to the school. He'd already been ordered by a judge to stop making fake porn because he ran a website filled with fake celebrity porn

There are also apps like Perky AI, which are posting ads all over FB promoting their "undress celebrities!!" functions, which got them into trouble when they used a pixelated image of a nudified 16 year old Jenna Ortega in their ads

Poohs Packin
Jan 13, 2019

gently caress that

Snowglobe of Doom
Mar 30, 2012

sucks to be right

bvj191jgl7bBsqF5m posted:

AI will never take my job at the dicksucking factory away

:awesome::awesomelon:
https://www.youtube.com/watch?v=QAZfHHi58AU

Negostrike
Aug 15, 2015


i will destroy the ai

Bobcats
Aug 5, 2004
Oh
A lot of recent AI is falling into the scary as gently caress zone. My mom gifted me a cookbook for an air fryer and it's blatantly authored by ChatGPT, and the recipes are nonsensical.

I'm in a technical field and the ChatGPT stuff works well for brainstorming but is pretty LOL for actually getting work done. Whether it gets more useful or not is anyone's guess now, as we're kind of running out of public human writing to feed into the things. If it gets a lot more energy efficient it'll be awesome.

Hurr hurr, we're all going to become digital beings and become space probes/factories to conquer the universe, or whatever other dumb poo poo that translates to the end of the mortal biological human experience. Considering we haven't even nailed 2D desktop printing, this seems very unlikely, so shut the gently caress up and drink your Soylent, tech prophets.

I think there's some honest research into using LLMs to understand animal language, so it might be super cool to get insulted by dolphins.

Livo
Dec 31, 2023
I work in allied health and am legally required to have current liability indemnity insurance for my job.

I had a discussion with my peers about the use of AI with medical notes & privacy concerns and oh boy, this is a huge problem that everyone's sleep-walking into. I've been the victim of recent high-profile Australian data breaches, with huge increases in spam calls, texts, emails and scams from those breaches, so I'm aware of lovely security practices making life worse. I used Microsoft as an example, but as Apple are going to do something similar with AI searches for both macOS & iOS, Android likewise for their phones, & Linux distros will probably follow suit, this is going to leave very little choice in computer/phone operating systems in the coming years.

Microsoft's Windows 11 embedded AI "Co-Pilot" apparently scans all of the files on your hard drive and sends the data overseas to "enhance" your search function, whereas older Windows used local-only searches. I don't know if this means just the file names or the actual text contents themselves, but even if it's just file names currently, it definitely won't stay that way for long, as scanning the contents of text documents is the next step. Now, even if I use good old Notepad for my client medical notes and call the notes File 001, 002 etc, they'll still send the names, & potentially everything I've written in my medical notes, overseas for their AI server models. This means that all of their Windows tech support guys will potentially have access to confidential client medical data, which is kind of a doozy. My health care insurer really, really doesn't like Australian medical files or data being sent overseas at all.

Will MS, Apple, Google or Amazon, based in other countries, really give a poo poo about Australian patient medical data privacy? Highly, highly unlikely. What's the solution(s)? I don't know. Maybe a non-AI-integrated "Aussie Healthcare" version of Windows/macOS/iOS/Android for computers or phones that is suitable & required for people in my field? All locally based servers here must only employ specifically trained, healthcare-security-qualified staff, whose access to AI searches is walled off significantly, with major criminal & financial penalties for misuse? I'm just spit-balling ideas, but I really hope my medical insurers are raising these questions and lobbying hard for better laws about this.

I raised the AI search issue with my peers and was told "Pfft, that'll only be an issue if Microsoft or Apple or whoever don't have a server based in Australia: since they already do, our insurers will have no issues with it! You're being paranoid!" I then asked if MS/Apple having a server in Australia automatically means that only a very small number of qualified/eligible personnel will have access to the AI-scanned files, and not everyone on the whole OS tech support team. The response was "Obviously only a - oh, hang on, if they're Windows or macOS tech support, they have to be able to access a lot on the OS side for troubleshooting if need be. The AI being very well integrated into the operating system means the whole support team must be able to access what the AI scans, in order to provide tech support for all their customers. Uh, that could be a big problem then if there are no Australian laws or requirements limiting the AI model access, and it's all just suggested, non-legally-binding guidelines for companies."

Oh, and some of my peers were using AI software to literally summarise and produce patient notes of their consults (instead of doing it themselves, since taking notes is hard), and when I asked them if that AI data was being sent to an Australian server or overseas, I received a blank look :gbsmith:

zedprime
Jun 9, 2007

yospos

Poohs Packin posted:

I do lots of writing for work but feel pretty safe. Most of the work involves site specific contextual analysis of planning legislation. An LLM could likely summarize urban planning laws, but applying them to a specific site in a way that creates value for a specific client is not something it can do right now.

Legislative environments aren't as neat and tidy as people would like to think, either. There are overlaps, inconsistencies, internal policy positions, errors, supplementary material, dated neighbourhood plans, etc.

It also can't read architectural plans, apply relevant legislation, and find efficiencies in line with a client brief. This is even more true for non-architect clients who will say vague poo poo like "I want the entrance to look modern".
This is a good example of a pet peeve of mine, which is when we say an LLM isn't creative or can't apply anything to truly new situations. These statements are not not right, but they have an imprecision in language I kind of hate.

It is well within the purview of a model to design a good way to ingest related unstructured data. It's kind of the whole point of the technology to take data of wildly different formats and apply them to each other. It'll take a little more effort than asking ChatGPT to write you a book report on Atlas Shrugged, because you'll need to feed it the relevant data to get a relevant result, but you can do it.
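
If you want to picture what "feed it the relevant data" means, it's roughly this sketch; llm() is a made-up stand-in for a real model API, and the document names are invented:

```python
# Toy sketch of ingesting related unstructured data: the model works from
# whatever documents you put in its context window, in whatever format.
# llm() is a hypothetical stand-in, not a real API.

def llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return "(model completion goes here)"

def assess_site(client_brief: str, documents: dict[str, str]) -> str:
    # Concatenate the mismatched inputs: zoning extracts, policy notes,
    # dated neighbourhood plans, whatever you happen to have.
    context = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in documents.items()
    )
    prompt = (
        f"{context}\n\n"
        f"Client brief: {client_brief}\n"
        "List which provisions above apply to this site and where they conflict."
    )
    return llm(prompt)

result = assess_site(
    "I want the entrance to look modern",
    {
        "Zoning code extract": "...",
        "Heritage overlay": "...",
        "2009 neighbourhood plan": "...",
    },
)
```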

The results are only ever based on the data. Which seems like an obvious thing to say, but it's an important point, and I think it's what people who say AI isn't creative are trying to get at. Much of the prestige cost of lawyers or engineers or artists is breaking ground on new information. That cost will remain. Similar to how you can get a will-writing service for $400, but if you want to structure a divestment of the assets of several small businesses and interstate real estate when you die, you're getting a probate lawyer for $4 million. In the future you can maybe get a probate AI for $400,000, but is it going to find the same loopholes as the prestige firm? No, but maybe you don't need it to.


Livo posted:

I work in allied health and am legally required to have current liability indemnity insurance for my job.

I had a discussion with my peers about the use of AI with medical notes & privacy concerns and oh boy, this is a huge problem that everyone's sleep-walking into. I've been the victim of recent high-profile Australian data breaches, with huge increases in spam calls, texts, emails and scams from those breaches, so I'm aware of lovely security practices making life worse. I used Microsoft as an example, but as Apple are going to do something similar with AI searches for both macOS & iOS, Android likewise for their phones, & Linux distros will probably follow suit, this is going to leave very little choice in computer/phone operating systems in the coming years.

Microsoft's Windows 11 embedded AI "Co-Pilot" apparently scans all of the files on your hard drive and sends the data overseas to "enhance" your search function, whereas older Windows used local-only searches. I don't know if this means just the file names or the actual text contents themselves, but even if it's just file names currently, it definitely won't stay that way for long, as scanning the contents of text documents is the next step. Now, even if I use good old Notepad for my client medical notes and call the notes File 001, 002 etc, they'll still send the names, & potentially everything I've written in my medical notes, overseas for their AI server models. This means that all of their Windows tech support guys will potentially have access to confidential client medical data, which is kind of a doozy. My health care insurer really, really doesn't like Australian medical files or data being sent overseas at all.

Will MS, Apple, Google or Amazon, based in other countries, really give a poo poo about Australian patient medical data privacy? Highly, highly unlikely. What's the solution(s)? I don't know. Maybe a non-AI-integrated "Aussie Healthcare" version of Windows/macOS/iOS/Android for computers or phones that is suitable & required for people in my field? All locally based servers here must only employ specifically trained, healthcare-security-qualified staff, whose access to AI searches is walled off significantly, with major criminal & financial penalties for misuse? I'm just spit-balling ideas, but I really hope my medical insurers are raising these questions and lobbying hard for better laws about this.

I raised the AI search issue with my peers and was told "Pfft, that'll only be an issue if Microsoft or Apple or whoever don't have a server based in Australia: since they already do, our insurers will have no issues with it! You're being paranoid!" I then asked if MS/Apple having a server in Australia automatically means that only a very small number of qualified/eligible personnel will have access to the AI-scanned files, and not everyone on the whole OS tech support team. The response was "Obviously only a - oh, hang on, if they're Windows or macOS tech support, they have to be able to access a lot on the OS side for troubleshooting if need be. The AI being very well integrated into the operating system means the whole support team must be able to access what the AI scans, in order to provide tech support for all their customers. Uh, that could be a big problem then if there are no Australian laws or requirements limiting the AI model access, and it's all just suggested, non-legally-binding guidelines for companies."

Oh, and some of my peers were using AI software to literally summarise and produce patient notes of their consults (instead of doing it themselves, since taking notes is hard), and when I asked them if that AI data was being sent to an Australian server or overseas, I received a blank look :gbsmith:
This isn't an AI-unique problem. For example, it's incredibly easy to end up with illegal settings in your cloud apps as a small business in privacy-regulated industries. Cloud computing generally supports data-handler regulations, but cars generally have seat belts too, and we see what adherence looks like when we let people set themselves up.


Lieutenant Dan
Oct 27, 2009

Weedlord Bonerhitler
I tried using an AI the other day to try and sort my poo poo out on Notion, but it was just an LLM; it spat back that it didn't have access to any of my databases, and I couldn't GIVE it access, so it was functionally a fancy predictive text that doesn't do anything. Why is this even packaged with this poo poo if it can't do anything it says on the box?

AI art had me thrown for a loop for a second, but it's also 100% not copyrightable, and you can't actually own the rights to it or license it or anything, so as long as the government doesn't go back on that monkey-taking-a-photo ruling, I hope to the gods I still have a job drawing my pictures
