chaosapiant
Oct 10, 2012

White Line Fever

manero posted:

My Quake claim to fame is that I did the "Shambler Ate My Balls" page way back when.

Sadly, it looks lost to time

What is this page you refer to? What was it? What did it used to be?


skasion
Feb 13, 2012

Why don't you perform zazen, facing a wall?
Like Mr T and Chewbacca, but with shambler. Duh

haveblue
Aug 15, 2005



Toilet Rascal

chaosapiant posted:

What is this page you refer to? What was it? What did it used to be?

https://knowyourmeme.com/memes/ate-my-balls :nws:

Al Cu Ad Solte
Nov 30, 2005
Searching for
a righteous cause
Underrated Half-Life 2 moment: Breen chastising the Overwatch troops during your assault on Nova Prospekt. "The man you have failed to slow let alone capture is by all means, simply that: an ordinary man."

Gordon's a wrecking ball shaped like a nerd.

Johnny Joestar
Oct 21, 2010

Don't shoot him?





agdq is about to have doom and doom 2 runs run by kingdime soon enough: https://gamesdonequick.com/

he's really good

after that will be chex quest by peaches, who is a goon

GUI
Nov 5, 2005

Quake 4 was a fun game for the era it came out in. i.e. when every shooter was starting to give up on varied enemy types and it was all about guy holding machinegun, sniper and guy holding shotgun.

Mak0rz
Aug 2, 2008

😎🐗🚬

https://twitter.com/fanfiction_txt/status/1083471776879804418

Cream-of-Plenty
Apr 21, 2010

"The world is a hellish place, and bad writing is destroying the quality of our suffering."

He give eyes to the SPIDER MASTERMIND. BFG 9000 still tremble in brawny Berzerkor hands. "Time to fall to pieces you hellish Erection Set!" He fire the gun and the SPIDER MASTERMIND collapse; erection set pieces everywhere. "poo poo rear end demon spider scum!"

manero
Jan 30, 2006

Cream-of-Plenty posted:

He give eyes to the SPIDER MASTERMIND. BFG 9000 still tremble in brawny Berzerkor hands. "Time to fall to pieces you hellish Erection Set!" He fire the gun and the SPIDER MASTERMIND collapse; erection set pieces everywhere. "poo poo rear end demon spider scum!"

It reads like the script of The Room

the nucas
Sep 12, 2002

Al Cu Ad Solte posted:

Underrated Half-Life 2 moment: Breen chastising the Overwatch troops during your assault on Nova Prospekt. "The man you have failed to slow let alone capture is by all means, simply that: an ordinary man."

this flashes me back to nicole horne chastising her high paid mercenaries in max payne. "What do you mean, "he's unstoppable"? You are superior to him in every way that counts. You are better trained, better equipped, and you outnumber him at least twenty-to-one. Do. Your. JOB."

Mister No
Jul 15, 2006
Yes.
Things that make the player feel powerful are great, whether in gameplay (super shotguns and riveters) or narrative. That was also my favorite part of hl2 and Max Payne.

Jose Mengelez
Sep 11, 2001

by Azathoth
poor bastards have no idea they're mooks in a videogame.

Gobblecoque
Sep 6, 2011

Jose Mengelez posted:

poor bastards have no idea they're mooks in a videogame.

Funny as hell, it was the most horrible thing I could think of.

Pathos
Sep 8, 2000

Oh man - Take No Prisoners. That’s a game I haven’t thought about in loving forever. Is that still playable on Windows 10? It looks like it’s not on GOG and a few people mention it not working but I figure you guys might know

SparkTR
May 6, 2009

Pathos posted:

Oh man - Take No Prisoners. That’s a game I haven’t thought about in loving forever. Is that still playable on Windows 10? It looks like it’s not on GOG and a few people mention it not working but I figure you guys might know

It works fine with dgVoodoo2. Same as MageSlayer, another game from Raven that used the same engine.

Dewgy
Nov 10, 2005

~🚚special delivery~📦

manero posted:

It reads like the script of The Room

o hai doomguy

Rupert Buttermilk
Apr 15, 2007

🚣RowboatMan: ❄️Freezing time🕰️ is an old P.I. 🥧trick...

Dewgy posted:

o hai doomguy

"You're ripping and tearing me APART, LISAAAAAAAA!"

Instruction Manuel
May 15, 2007

Yes, it is what it looks like!

Pathos posted:

Oh man - Take No Prisoners. That’s a game I haven’t thought about in loving forever. Is that still playable on Windows 10? It looks like it’s not on GOG and a few people mention it not working but I figure you guys might know



SparkTR posted:

It works fine with dgVoodoo2. Same as MageSlayer, another game from Raven that used the same engine.

I always wanted to play those games but I had completely forgotten about them.

site
Apr 6, 2007

Trans pride, Worldwide
Bitch
https://twitter.com/Nibellion/status/1083815618045005826

https://twitter.com/Bro_Team_Pill/status/1083832785188544512

site fucked around with this message at 22:33 on Jan 11, 2019

Cat Mattress
Jul 14, 2012

by Cyrano4747
I'm glad I never bought any Gearbox game.

Guillermus
Dec 28, 2009



I was just replaying the Brothers in Arms games... :gonk:

Convex
Aug 19, 2010
Uh why exactly did this guy not go straight to the police with this evidence?

Gobblecoque
Sep 6, 2011
To absolutely no one's surprise, Randy Pitchford turns out to be a piece of poo poo.

Solaris 2.0
May 14, 2008

Convex posted:

Uh why exactly did this guy not go straight to the police with this evidence?

Yea gently caress Randy Pitchford and all but if that allegation is true he is also guilty by staying silent until now. Just like Joe Paterno.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
*trying to remove the last couple of posts out of my head from being remembered*


Over the last two days, I've been looking at ESRGAN. I found the repo for running ESRGAN on my own computer, and it did work on sprites, but there were issues with the provided pretrained generator. That generator was trained on images from the DIV2K image set (https://data.vision.ee.ethz.ch/cvl/DIV2K/) combined with the Flickr2K series, which are all 2K photos of real-life scenes. It was giving me back sprites that looked good, except for lots of streaks, which I think came from it seeing straight lines and treating them as ledges/windows on buildings.

So I started a new training set built from more recent Magic: The Gathering card art, which focuses on characters and objects and is painted in a much less abstract style than older cards (like Stasis).



I focused on pictures of people, but also creatures and things casting magic.

To break down how a GAN works: you are actually training two neural networks, the generator and the discriminator.

The generator's sole goal is to take the training data and figure out how to counterfeit its style.

One of the discriminator's jobs is to hand the generator an image and ask it to recreate it using the generator's learned methods.

After several generations, the training is tested (I had it do this every 2,000 iterations). The discriminator is given 'truths', which in this case are both the shrunk and the original HR images, drawn from its own unique image set that the generator never trains on (otherwise it would be like giving a student the exam answers in advance: the student passes not through work, but by remembering the answers). The generator is given the shrunk image and told to scale it up.

The discriminator's goal is to tell which one is real: the original HQ image or the one the generator makes.

So the discriminator has to tell which of the two images below is the real one.



Now it seems rather obvious which one was made up, but the human mind has a lot going for it, including pattern recognition and knowing that faces aren't usually made of straps; the generator and discriminator don't understand things like that. They have to learn rules like "if you see a large dark spot with a thin line of white on top, there should be a line on the bottom." The discriminator tells the generator how it was wrong, and the generator eventually figures it out.
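To make that loop concrete, here's a toy numpy sketch of the alternating updates described above — a linear "upscaler" as the generator and a logistic scorer as the discriminator. This is nothing like the real ESRGAN networks, it's just the shape of the training loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two networks (NOT the real ESRGAN architecture):
# a linear "generator" upscales a flattened 2x2 patch to 4x4, and a
# logistic "discriminator" scores a 4x4 patch as real (1) or fake (0).
G = rng.normal(scale=0.1, size=(4, 16))   # generator weights: 2x2 -> 4x4
D = rng.normal(scale=0.1, size=16)        # discriminator weights: 4x4 -> score

def generate(lr_patch):
    """Upscale a low-res 2x2 patch into a fake flattened 4x4 patch."""
    return lr_patch.reshape(-1) @ G

def discriminate(hr_patch):
    """Score a flattened 4x4 patch: close to 1 means 'looks real'."""
    return 1.0 / (1.0 + np.exp(-(hr_patch.reshape(-1) @ D)))

for step in range(200):
    hr = rng.random((4, 4))        # a real high-res "truth" patch
    lr = hr[::2, ::2]              # its shrunk version, fed to the generator
    fake = generate(lr)

    # Discriminator step: push real scores toward 1 and fake scores toward 0
    # (gradient ascent on log d(real) + log(1 - d(fake)) for a logistic scorer).
    d_real, d_fake = discriminate(hr), discriminate(fake)
    D += 0.05 * ((1 - d_real) * hr.reshape(-1) - d_fake * fake)

    # Generator step: nudge G so its fakes score higher with the discriminator
    # (gradient ascent on log d(fake) with respect to G).
    G += 0.05 * np.outer(lr.reshape(-1), (1 - d_fake) * D)
```

The real thing swaps these linear maps for deep convolutional networks and adds perceptual losses on top, but the back-and-forth between the two updates is the same.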

34,000 iterations later, during a validation session, the generator submitted this. Remember, this is from upscaling a 75x75 image to 300x300.

There are other awesome things it learned to fake, like the background detail; the dude's belts are more defined, and the tree branch actually looks... better than the original?? The guy's hands took a turn for the worse, though.

I did all of this because the game sprites I was trying to upscale were from Heretic, with lots of magic, demons, and undead. Here are some of the better results.



- kind of oversaturated


- posted this one because the guy's chest symbol came through pretty well

Most of the issues I found in the others were because I made the alpha channel black. I should have replaced it with bright pink or something easy to remove.
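For what it's worth, that keying fix is only a few lines of numpy. The helper names here are made up for illustration, not from any ESRGAN script:

```python
import numpy as np

# Key transparent sprite pixels to bright magenta before upscaling, instead
# of leaving them black, so the background is easy to mask back out after.
KEY = np.array([255, 0, 255], dtype=np.uint8)  # magenta: unlikely in the art

def key_out_alpha(rgba):
    """rgba: (H, W, 4) uint8 sprite -> (H, W, 3) with transparent pixels keyed."""
    rgb = rgba[..., :3].copy()
    rgb[rgba[..., 3] == 0] = KEY
    return rgb

def restore_alpha(rgb, tol=40):
    """Rebuild the alpha channel after upscaling: pixels near the key colour
    become transparent again (tol absorbs colours the upscaler blended in)."""
    dist = np.abs(rgb.astype(int) - KEY.astype(int)).sum(axis=-1)
    alpha = np.where(dist < tol, 0, 255).astype(np.uint8)
    return np.dstack([rgb, alpha])

# Tiny demo: a 2x2 sprite whose bottom-right pixel is transparent black.
sprite = np.zeros((2, 2, 4), dtype=np.uint8)
sprite[..., 3] = 255
sprite[0, 0, :3] = [200, 30, 30]   # an opaque red pixel
sprite[1, 1, 3] = 0                # the transparent pixel (RGB is 0,0,0)

keyed = key_out_alpha(sprite)      # transparent pixel is now magenta
restored = restore_alpha(keyed)    # and becomes transparent again
```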

Anyway, I am going to start over with a better training set of images taken from Wizards of the Coast pages instead of 400x300 card art.

EVIL Gibson fucked around with this message at 23:44 on Jan 11, 2019

juggalo baby coffin
Dec 2, 2007

How would the dog wear goggles and even more than that, who makes the goggles?


EVIL Gibson posted:

Over the last two days, I've been looking at ESRGAN. [big ESRGAN training write-up snipped]

this is really neat, thanks for the insight into how this stuff works

HolyKrap
Feb 10, 2008

adfgaofdg
This is neat, haven't tried it yet but there's a download link in the description

https://www.youtube.com/watch?v=FsoyC9j9MV0

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

HolyKrap posted:

This is neat, haven't tried it yet but there's a download link in the description

https://www.youtube.com/watch?v=FsoyC9j9MV0

That link in the video description is the mod contents, not the training/testing code.

To use ESRGAN (no training), this GitHub repo is what you want: https://github.com/xinntao/ESRGAN. Put your textures in the LR directory and put the ESRGAN pretrained models (they end with .pth and can be found via a link under the "test models" section) in the models directory.

Run

python test.py models/RRDB_ESRGAN_x4.pth or
python test.py models/RRDB_PSNR_x4.pth

for a pure scale-up. There will be artifacting, so you can also pass the two models through an interpolation, following the instructions below that.
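That interpolation is, if I remember the repo right, just a weighted average of the two models' weights (the ESRGAN repo ships a net_interp.py that does this on PyTorch state dicts). A minimal sketch, with plain dicts of arrays standing in for state dicts:

```python
import numpy as np

# Blend two pretrained models weight-by-weight. alpha=0 gives the smooth
# PSNR model, alpha=1 the sharper but artifact-prone ESRGAN model; values
# in between trade fine detail against artifacts.
def interpolate_models(psnr_weights, esrgan_weights, alpha):
    return {name: (1 - alpha) * psnr_weights[name] + alpha * esrgan_weights[name]
            for name in psnr_weights}

# Toy stand-in "state dicts" with a single layer each:
psnr = {"conv1.weight": np.zeros((3, 3))}
esrgan = {"conv1.weight": np.ones((3, 3))}
blended = interpolate_models(psnr, esrgan, alpha=0.8)
```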


If you want to build your own GAN then you go to the BasicSR github https://github.com/xinntao/BasicSR

The instructions are long, require an Nvidia card (you can use a CPU but it will take vastly longer), and it's better if you have an SSD, but you can replace the training images with your own. There are a lot of little things you need to remember (square training images, for example), but if you're actually interested I can help you out a bit over PM.

Flannelette
Jan 17, 2010


EVIL Gibson posted:

Over the last two days, I've been looking at ESRGAN. [big ESRGAN training write-up snipped]

How does this method compare to the one the upscaled Doom and Hexen packs use (nvidiaGW + Gigapixel)?


Oh so he really was an rear end in a top hat the whole time, 20 years of not liking him justified I guess :toot:

juggalo baby coffin
Dec 2, 2007

How would the dog wear goggles and even more than that, who makes the goggles?


randy pitchford is reaping the karma for not paying that duke nukem guy with the sick wife

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Flannelette posted:

How does this method compare to the one the upscaled Doom and Hexen packs use (nvidiaGW + Gigapixel)?


Exact same tech in general (some sort of GAN), but the difference is that you can't retrain Gigapixel (maybe nvidiaGW).

If you read Gigapixel's page, you'll see they trained "using millions of images", but that image set could give unexpected results on unusual inputs. On the kind of photos you see day to day it's probably awesome, but asking it to upscale a bloody rabbit head on a stake might throw it off.


Now for nvidiaGW, I'm interested in the "photorealistic hallucination", and now I know why it's great for a generator. When training my network, it could only pull samples from the images I provided. The hallucination adds a little guess out of the blue.

The downside with these products is that there is no discriminator helping the generator make better images. This makes sense, since customers use the tools precisely because they don't have the HQ images on hand; if they did, they'd just use those. Nvidia and Topaz need to train their networks like hell with as many HQ/LQ pairs as possible while making sure the network doesn't get funny. The best example is an article I read (https://hackaday.com/2019/01/03/cheating-ai-caught-hiding-data-using-steganography/) where CycleGAN was trained on satellite images and maps of those images. It was then given only maps it had created and told to recreate the satellite images. It could, but it was somehow recreating things that weren't even on the map, like cars and smokestacks. On closer inspection, the AI turned out to be packing these details into the background noise of the map (steganography) and rebuilding the original image from that.

Diabetes Forecast
Aug 13, 2008

Droopy Only
I feel bad that a sick fucko is causing a bunch of other talented people's work to get thrown under the bus because he's a loudmouth piece of poo poo. (though I'll never have anything good to say about anything they made post-borderlands) but it's a good thing this is finally getting exposed.

TOOT BOOT
May 25, 2010

Keep reading about the Gearbox thing, it somehow gets even more bizarre from there.

the nucas
Sep 12, 2002
if you'd told me 20 years ago i would grow to hate both developers that had touched the half-life franchise i would have laughed in your face but here we are.

Barudak
May 7, 2007

the nucas posted:

if you'd told me 20 years ago i would grow to hate both developers that had touched the half-life franchise i would have laughed in your face but here we are.

3/4 developers who worked on Half-Life games at this point suck.

Taito, I'm watching you.

meteor9
Nov 23, 2007

"That's why I put up with it."

Barudak posted:

3/4 developers who worked on Half-Life games at this point suck.

Taito, I'm watching you.

If anything bad comes out of Zuntata I'll...

Well I will be very annoyed and do nothing but still.

Flannelette
Jan 17, 2010


EVIL Gibson posted:

Exact same tech in general (some sort of GAN), but the difference is you will not be able to retrain Gigapixel (maybe nvidiaGW). [rest snipped]

So mainly it's either: have your thing trained by a huge company that can run gigabytes through it, but use whatever they come up with, or hand-tune it yourself, but on your own hardware?

the nucas posted:

if you'd told me 20 years ago i would grow to hate both developers that had touched the half-life franchise i would have laughed in your face but here we are.

Randy liked making first-person jumping segments when he was at 3D Realms, so the writing was on the wall.

Flannelette fucked around with this message at 08:49 on Jan 12, 2019

Cream-of-Plenty
Apr 21, 2010

"The world is a hellish place, and bad writing is destroying the quality of our suffering."

TOOT BOOT posted:

Keep reading about the Gearbox thing, it somehow gets even more bizarre from there.

"Okay so yeah, I had porn on a thumb drive, and yeah, the thumb drive had industry secrets on it, too, and yeah, I lost the thumb drive at a 'Medieval Times' themed restaurant, but the truth is that the porn was only on there because I was trying to figure out how that bitch did that magic trick."


I just want to know what the magic trick was. Was it making something...disappear?

szary
Mar 12, 2014

Cream-of-Plenty posted:

"Okay so yeah, I had porn on a thumb drive, and yeah, the thumb drive had industry secrets on it, too, and yeah, I lost the thumb drive at a 'Medieval Times' themed restaurant, but the truth is that the porn was only on there because I was trying to figure out how that bitch did that magic trick."


I just want to know what the magic trick was. Was it making something...disappear?

https://arstechnica.com/gaming/2019/01/gearbox-ceo-allegedly-kept-underage-porn-on-usb-stick-new-lawsuit-alleges/

It was 'female ejaculation'


CJacobs
Apr 17, 2011

Reach for the moon!

That is... extremely strange. From this he seems like the kinda guy that would pull the "well technically it's called ephebophilia" schtick, but who the hell knows about the "peacock party" thing until the police get involved (which they hopefully will to clear all this poo poo up).
