|
This first example isn't really glitch art, since the way I change the animation is very deliberate and predictable.

Starting from this gif: If you only draw every other pixel per frame in a checkerboard pattern, you get this: If you then expand that to draw every fourth pixel per frame, in this pattern (numbers mean frame numbers):

13
42

you get this: It's kind of like 2D interlacing. Unsurprisingly, it also dramatically reduces the gif file size, since you're only drawing (roughly) half or a quarter of the data you used to draw.

With that out of the way, I'm going to talk a bit about ImageMagick (I also made the above animations with it). I think all the examples in the thread where Audacity is used to distort images use RGB images. If you're comfortable with the command line, you can use ImageMagick to extract the color channels as raw data, manipulate them in Audacity and then recombine the results.

Here I'm going to use the Lab color space instead of RGB. Lab represents pixels as lightness (L) and two color channels (a and b), which is closer to how our vision works than RGB. Anyway, to convert an image to raw Lab channels (so no headers to worry about), do this:
code:
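The command itself didn't survive in this post, but judging from the flags explained just below, it was presumably something like this (filenames taken from the surrounding text):

```shell
# Presumed reconstruction: convert to Lab, split the channels into layers,
# and save each one as headerless 8-bit raw gray data
convert bean-small.png -colorspace Lab -separate -depth 8 gray:bean-lab%d.raw
```

This needs ImageMagick installed and the source image on hand, so treat it as a sketch rather than the author's exact invocation.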
-separate separates the three channels into their own layers, -depth 8 sets the output color depth to 8 bits and the gray: prefix tells ImageMagick to save the output files as gray values. Finally, the "%d" in bean-lab%d.raw tells ImageMagick to insert the channel number there, so the files will be "bean-lab0.raw", "bean-lab1.raw" and "bean-lab2.raw".

Now you can open those three files as raw data in Audacity, do some processing and then export them again as raw data. I pitch-shifted the L channel (and then padded the end of the file with random data using a hex editor). The output file I named bean-lab0-pitch1.raw.

To combine the processed files into a new image, I used this command line:
code:
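The actual command is missing here as well; based on the explanation that follows, it was probably along these lines. The image dimensions have to be given explicitly because raw gray data has no header — `<width>` and `<height>` are placeholders:

```shell
# Presumed reconstruction: read the three raw gray channels, combine them,
# then reinterpret the result as Lab, per the flag order described below
convert -size <width>x<height> -depth 8 \
  gray:bean-lab0-pitch1.raw gray:bean-lab1.raw gray:bean-lab2.raw \
  -combine -set colorspace lab bean-pitch1.png
```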
"-combine" combines the three channels into a single image (which defaults to the sRGB colorspace); the colorspace is then overridden and forced to Lab with "-set colorspace lab". bean-pitch1.png is just the output filename.

This is the original "bean-small.png": This is the image where I altered the pitch of the L channel. You can see how the colors of the buildings have not moved; only the lightness data has been distorted. And here I added echo to the a and b channels. The effect is a lot more subtle because our eyes are far more sensitive to differences in lightness (luma) than color (chroma).
|
# ¿ Aug 12, 2013 12:31 |
|
Also, if you have an FFT-enabled (Fast Fourier transform) build of ImageMagick (by compiling it yourself from source), you can make glitch art using mathematics. Using "bean-small.png" again as the starting point, you can convert it to magnitude and phase components:
code:
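The command was lost from this post; with an FFT-enabled build it would presumably be the -fft operator (the output filename pattern is my assumption; index 0 is the magnitude image, index 1 the phase):

```shell
# Presumed reconstruction: forward FFT into magnitude (0) and phase (1) images
convert bean-small.png -fft bean-fft-%d.png
```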
The resulting files are below.

Magnitude: Almost all the magnitude data is in the center of the image, where you can see the single lighter pixel. It is surrounded by not-quite-black pixels that you are probably unable to see.

Phase:

First, altering the "magnitude" image: If you add a tiny, tiny bit of noise to a limited area of the image (so a color very close to black, and a small brush if you do it by hand), you might get something like this after doing the inverse transform:
code:
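The inverse-transform command is also missing; it was presumably the -ift operator, fed the edited magnitude plus the untouched phase (all filenames here are my assumption):

```shell
# Presumed reconstruction: inverse FFT from the edited magnitude
# and the untouched phase
convert bean-magnitude-edited.png bean-phase.png -ift bean-ift1.png
```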
Second, altering the "phase" image: You have to add a lot more noise to the phase image to get visible results. code:
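For this variant the missing command would presumably be the mirror image of the previous step — original magnitude, noisy phase (filenames assumed):

```shell
# Presumed reconstruction: untouched magnitude plus the noise-edited phase
convert bean-magnitude.png bean-phase-edited.png -ift bean-ift2.png
```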
Of course, you can do the same with any tool that can do forward and inverse FFTs. And these were done in the default sRGB colorspace. You could probably do some weird poo poo by combining some other colorspace with the FFT.
|
# ¿ Aug 13, 2013 18:52 |
|
Here's a tool to easily glitch jpegs. http://snorpey.github.io/jpg-glitch/
|
# ¿ Sep 13, 2013 17:11 |
|
This was posted in the political cartoons thread, but I thought it would fit here as well
|
# ¿ Oct 13, 2013 12:23 |
|
Sparr posted: I spent a long time trying to figure out what Wheany was talking about with his glitch images, but after bugging him for a few days and some tinkering I eventually got it. Here is a method more suitable for fleshy human beings instead of beep boop computer programmers:

Install Gimp, and then this FFT plugin: http://registry.gimp.org/node/19596 (On Windows, unzip the... uh... zip, and copy fourier.exe under C:\Users\<Yourname>\.gimp-2.8\plug-ins)

I'm using this image as a base: http://flic.kr/p/fHKrXk

Open the image, and run Filters -> Generic -> FFT Forward. Then run Filters -> Noise -> Hurl... The random seed doesn't matter much, but it lets you get repeatable results. Set randomization to 1%; you really don't need a lot. (My parameters here are Random seed: 10, Randomization: 1%, Repeat: 1.) Click OK, then run Filters -> Generic -> FFT Inverse.

So starting from this: And following those steps above, you get this: That's a bit boring, so I tried other filters after running FFT forward. This is the result of Layer -> Transform -> Offset, X: 5, Y: 5 px, with wrap around.
|
# ¿ Jan 6, 2014 19:19 |
|
PHIZ KALIFA posted: So, for some reason nothing I try with Audacity works. I downloaded this image, imported it in Audacity, selected everything from .2 to the end, ran an echo over it, exported it with the same U/A-Law setting, then tried to open it as a JPG again, and it didn't work. Same with reverb and every other effect. I have zero idea why this won't work. Even the images I don't alter, just import/export through audacity won't open.

JPEG is a compressed format, so something like reverb, which affects everything following it, will probably completely ruin the file. You have to use an uncompressed format, like BMP. Altering just a few characters here and there with notepad will probably only create localized corruption, so the file will remain mostly readable.
|
# ¿ Feb 6, 2014 21:08 |
|
Content-aware scaling/liquid rescaling. It's what turns this into this. What if you apply it to an animation, like this classic scene from The Matrix?

Just applying the liquid rescale to individual frames gets you this: But what if you were to apply the scaling in the time domain? If you take the same row of pixels from each frame of the animation and stack them on top of each other, you'll get something like this:

Now if you scale that down vertically by half and then reconstruct the images, you get this animation: It doesn't look that different from the original, just twice as fast. But here is an individual frame that shows how several frames have blended together:

If you liquid rescale the same images and then reconstruct the animation, you get this: The first two gifs have 338 frames, the last two have 169.
|
# ¿ Apr 1, 2014 19:24 |
|
sigma 6 posted: Can you tell me the difference between this and slit scan?

I think slit scan is taking row 1 (or column 1) from frame 1, row 2 from frame 2 and so on, and then having each row/column advance normally. I guess the main difference is how intentional the effects are. With the technique I used, you're at the mercy of what liquid rescale considers the least important information. And in ImageMagick (which I used), the results of liquid rescale depend on the colorspace used: sRGB produces different results from HSB or Lab.
|
# ¿ Apr 2, 2014 13:34 |
|
More on content-aware scaling, especially when it comes to video: https://www.youtube.com/watch?v=AJtE8afwJEg They suggest a technique where you carve a continuous horizontal or vertical surface from the video "cube" to scale it down. I actually wanted to implement this temporally, so instead of removing (an approximation of) pixel columns or rows from the video, it would remove frames. It's "easy" in theory, you "just" have to make a minimum graph cut through the video cube, but my feeble mind was not able to understand the math behind it well enough to actually implement it in code.
|
# ¿ Apr 2, 2014 15:41 |
|
President Kucinich posted: Also, just to be clear, when you say take the same row of pixels from each frame of the animation and stack them on top of each other

I mean taking the same row from each frame, or in ImageMagick:

convert frame1.png frame2.png frame3.png -crop <width>x1+0+<row>! -append output_<row>.png

where <width> is the width of the frame and <row> is the row number. You probably want to make a script that automates this for each row.

Edit: I actually just realized that there is no need to do this individually line-by-line, but you will get a shitload of temporary files if you don't:

convert frame1.png frame2.png frame3.png -crop <width>x1 +repage rows%04d.png

That will save the rows of frame1.png as <width>x1 png files, followed by frame2.png's rows and then frame3.png's. Then you can take rows0000.png, rows<height>.png, rows<height*2>.png and so on and stack those. This should be a lot faster, assuming the filesystem doesn't poo poo itself with the likely tens of thousands of tiny files.

Wheany fucked around with this message at 11:34 on Apr 3, 2014
# ¿ Apr 2, 2014 18:27 |
|
Have you ever content-aware scaled... different color channels... temporally? But seriously, I want to try some animation with separated color channels, but using some other colorspace and not RGB.
|
# ¿ Apr 24, 2014 09:24 |
|
Now that's a neat effect
|
# ¿ May 26, 2014 22:23 |
|
How about making pictures where every pixel has a unique color? As long as the picture is under 16.7 megapixels, you will have enough colors for every pixel (in a 24-bit colorspace). The basic algorithm could be: take a random pixel from the source image and assign the closest unused color to it; keep going until there are no pixels left.
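As a toy illustration of that closest-unused-color assignment (a 1-D simplification: ten gray values stand in for the full 24-bit palette, and the pixels are visited in a fixed order instead of randomly):

```shell
# Toy sketch of "every pixel gets a unique color": four source "pixels"
# with values 3 3 3 7, a palette of 0-9, and each pixel takes the closest
# palette value that has not been used yet.
pixels="3 3 3 7"
used=""
result=""
for p in $pixels; do
  best=""; bestdist=999
  for c in 0 1 2 3 4 5 6 7 8 9; do
    # skip colors that were already handed out
    case " $used " in *" $c "*) continue ;; esac
    d=$((p - c))
    [ "$d" -lt 0 ] && d=$((-d))
    if [ "$d" -lt "$bestdist" ]; then bestdist=$d; best=$c; fi
  done
  used="$used $best"
  result="$result $best"
done
echo "$result"   # the three identical 3s end up as three different nearby values
```

With a real image you'd do the same search in 3-D RGB space (and with a spatial index, since a linear scan over 16.7 million colors per pixel would be painfully slow).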
|
# ¿ Sep 22, 2014 15:55 |
|
TheLastManStanding posted: Added a new tool. This one performs a content-aware-scale directly on the time dimension of a sequence of images.

I love you. No, that came out wrong. I'm in love with you.
|
# ¿ Oct 18, 2014 23:26 |
|
MiketheGreat posted:Are there any good collections and central resources for things like this? One thing you could use is a recording of http://fediafedia.com/neo/ and other "hacker typing" sites.
|
# ¿ Oct 20, 2014 20:37 |
|
Also, I noticed the "paulstretch" effect in Audacity yesterday; it can be used to stretch audio by extreme amounts (like 10x), which makes it sound all echo-y and sort of creepy.
|
# ¿ Oct 21, 2014 11:55 |