sigma 6
Nov 27, 2004

the mirror would do well to reflect further

DreamingofRoses posted:

I'm sorry if this is the wrong place to ask this but I'm an absolute beginner at Blender/cg in general and I've done the most basic tutorials but I'm trying to get better at modelling from a 2d picture. For some reason this is breaking my brain, is there anything in particular y'all would recommend looking at specifically about modelling off of photos?

Have multiple angles. If modeling a character, check out the old Joan of Arc tutorial for very basic box modeling and edge modeling techniques. It's insanely old but gives you an idea of modeling from front and side views. Use good reference:
https://avatars.mds.yandex.net/i?id=d8012813126fbfe8186fee47c2044ebf70e5d313-8185861-images-thumbs&n=13


sigma 6
Nov 27, 2004

the mirror would do well to reflect further

lord funk posted:

Question for those of you who do character modeling:

Is it common to keep the head / body in separate meshes? It seems like so much more resolution is needed for the head and face details than the rest of the body. Remeshing at a super high resolution for the head seems like overkill for the body, and kind of bogs down my machine.

I'm learning character sculpting in Blender, and was just curious about actual best practices.

Heads are often separate objects because, being separate, they can get more texture space. This isn't always the case for low-poly characters, but it usually is.

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

EoinCannon posted:

I've been subcontracted to model an AI-generated abstract design in 3D so it can be fabricated and hung on a wall as a piece of art. Makes me die inside, but the fabrication client mostly sends me interesting animals and stuff for museums, so I'm doing it. Also, money. It's literally something that someone spat out from a prompt, probably took 5 minutes, but translating it to 3D by hand takes ages. Something about it is insulting.

Doing the same thing now, but just wait until "AI-driven 3D" produces usable results for a base mesh. Right now it's kinda bad and usually creates triangle soup. Kaedim might be the exception, but I haven't tried it. Also, a good result from AI in 5 minutes is usually not a thing. Even iterating can take hours, and photoshopping that (with or without AI) can take hours more. I know painters who use AI to generate their reference now, and it drastically speeds up their workflow. If you are not ready for the automation that AI offers, sorry... it's already here and isn't going anywhere. Adapt and overcome. It's coming to Maya and is already widely used in Blender. Maybe ZBrush soon too; I doubt Maxon is sleeping on this.

sigma 6 fucked around with this message at 21:37 on Nov 20, 2023

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

EoinCannon posted:

I guess it depends on your reasons for doing what you do in the first place.
I'd rather go live in the woods than polish up the outputs of a machine that digests and spits out other people's art

I might not have the luxury of standing on my principles though, I have a mortgage in one of the most expensive cities in the world and cost of living is hosed. All well and good to say adapt and overcome but that pretty much means adapt to a job that is worse than it used to be and overcome your disgust.

Same. I can't afford not to take the work. Point is, I did not make the concept based on my own drawings, paintings, or photo reference, as I have done in the past. Instead, I used AI to generate concepts iteratively until the client was happy, then moved on to the 3D part faster. Which obviously still takes a lot of time to get right.

You can also feed your own paintings or drawings in as a dataset if you are concerned about using other people's work. AI has radically changed how an image is iterated. It's a little like when Photoshop became very common and people couldn't believe what was real and what wasn't when they were shown an image. People thought it was computer graphics "magic," and some were very upset about it.

Just found multiple AI painting tools, and so far they have been pretty fun.

This Canvas beta is free so far and spits out panorama EXRs, which can be used for skydomes and (at least in Maya) are already mapped properly by default.
https://www.nvidia.com/en-us/studio/canvas/
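An aside on why those panorama EXRs "just work" as skydomes: lat-long (equirectangular) images map a view direction to UVs with a simple atan2/asin formula. A minimal sketch of that mapping; the function name and axis convention here are my own assumptions, not Maya's API:

```python
import math

def latlong_uv(x, y, z):
    """Map a unit direction vector (x, y, z) to (u, v) in a lat-long
    panorama. y is up, -z is forward. This is the standard
    equirectangular mapping renderers use for skydome images."""
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)   # longitude -> horizontal
    v = 0.5 + math.asin(y) / math.pi              # latitude  -> vertical
    return u, v

# Looking straight ahead lands in the center of the panorama:
print(latlong_uv(0.0, 0.0, -1.0))  # → (0.5, 0.5)
```

Because the renderer does this per-ray, the EXR wraps the whole sphere with no extra setup.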

This is the other one, which looks far more advanced. Krea.ai https://www.krea.ai/


floofyscorp posted:

Wasn't Kaedim just using incredibly underpaid artists to churn out assets as fast as humanly possible and calling it 'AI'?


Not sure, but it looks like Reddit says so. I haven't used it, though I almost beta tested it. Results look clean compared to others I have seen.

sigma 6 fucked around with this message at 23:14 on Nov 20, 2023

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

mutata posted:

Results look clean compared to other abusive content mills you've seen? Cool, man. Nice AI pitch, by the way.

I don't really care what your opinion of AI is. If you don't use the tool, someone else will; CG has historically had no room for Luddites.
Train your own AI on your own art if you feel that strongly, but at this point these kinds of reactions are like a vegan telling me not to eat meat.

Didn't know about Kaedim's bad business practices until now and frankly I wouldn't be surprised if similar companies popped up. Sadly.

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

TooMuchAbstraction posted:

The point is that it's literally not AI. It's just sweatshops. Completely orthogonal to the AI question, this is a very straightforward "do you support paying people pennies to work very hard" kind of question.

also re: luddites, the historical Luddites had legitimate concerns about how the industrial revolution was hurting people and exacerbating societal inequality. Calling people luddites when they have concerns about the harmful side-effects of technology is not the burn you think it is.

Not a burn, but pretty apt. Again, I don't condone bad business practices, but I don't think using technology that is becoming more and more readily available is a bad business practice. You don't condemn a photographer for not being a better painter, even if they are creating the same image. It's more about what the client wants. Also, this kind of argument just reinforces groupthink:
AI = bad. *rolleyes* Ridiculous, considering the pace at which things are moving.

sigma 6 fucked around with this message at 21:04 on Nov 21, 2023

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

EoinCannon posted:

Lots of things are "good business practices" in that they look good on a balance sheet, but are not good for workers or society at large

Things that are moving fast and seemingly inevitable are not by definition good

Well, Microsoft hiring the OpenAI CEO, and then the OpenAI staff threatening to quit if the board doesn't rehire him... it has been interesting.

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

JuniperCake posted:

I mean okay you are talking about a product that literally can not function without being fed other people's work and is only capable of producing derivatives (very fancy ones sure) because it is incapable of replicating something it hasn't seen. And you are championing a product that is made to put a whole slew of people out of work so executives at the top can cut costs and make money at the cost of every loving thing else. But once you put those "luddites" out of work, there will be no novel material to feed into it and your content will just constantly feed into itself and continue to grow stagnant. All you've really done is sabotaged the careers of a bunch of folks who just wanted to make a modest living doing creative work. And all to just churn out a stream of meaningless "content" so a company can make more money and look good to shareholders.

If you want a demonstration of what something like this looks like, look at what AI translation has done to the translating industry. Like really take a look at that. Both what happened to translators and the resulting massive drop in quality of accessible translations for books. It sucks for the people who make translations, the people who read translations, the original author, everyone. Everyone except the companies making money off of it of course.

This isn't actually a new thing, it's just automation and we know the history of that. And this one has an additional fun flavor of being the perfect tool for deep fakes and other malicious use cases on top of that. And if you're going to defend it, then defend it but do it with an actual argument and not just "new thing is good because its new."

You don't hate the tool; you hate the user, or the way it is implemented. People have been photobashing very well for years. Do some people hate Photoshop? Probably. Most, though, recognize it as a powerful tool, and now that Photoshop has AI in it, it's even more powerful. Is that a reason to hate a technological tool? Um... no. Also, new doesn't always mean good, but this has certainly been a game changer for some and a disaster for others. A new kind of industrial revolution, maybe? This has always been about automation and how it is applied. As for the deepfakes: this tech is now being used for facial replacement, versus modeling and animating a 3D head, which is tedious by comparison. I am not saying everything being done with deepfake tech is good. Just pointing out that you can't put the genie back in the bottle.

sigma 6 fucked around with this message at 01:42 on Nov 22, 2023

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

TooMuchAbstraction posted:

Just gonna put this out there: it is valid to dislike a tool. In particular, two big options stand out:

1. You spent a great deal of time and effort on developing the skills that the tool (to a greater or lesser extent) replaces, and thus the tool harms your competitive advantage on the marketplace, threatening your livelihood.
2. You're only in the industry in the first place because it allows you to exercise the skills that the tool replaces. The remaining work is not of interest to you personally.

You can dislike a tool all you want, but that won't stop others from using it. Fortunately for some, and unfortunately for others, automation and capitalism go hand in hand. Famously, programmers often make more than artists who can't program, because programmers can automate repetitive tasks, whereas artists without programming skills end up repeating the same steps by hand.
This leads to many job postings asking artists to know Python, or some other scripting language, to get a technical art job.
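As a toy example of the kind of automation meant here, a short Python sketch that batch-renames exported meshes to a consistent naming convention; the convention, prefix, and folder layout are invented purely for illustration:

```python
from pathlib import Path

def normalize_asset_names(folder, prefix="prop"):
    """Rename exported .obj files to '<prefix>_<cleaned-name>_v001.obj'.

    A hypothetical studio convention for this sketch -- the point is
    that one loop replaces renaming every file by hand."""
    renamed = []
    for f in sorted(Path(folder).glob("*.obj")):
        clean = f.stem.lower().replace(" ", "_")   # "Old Chair" -> "old_chair"
        target = f.with_name(f"{prefix}_{clean}_v001.obj")
        f.rename(target)
        renamed.append(target.name)
    return renamed
```

Ten seconds of scripting versus clicking through a hundred files is the whole pitch behind those "artists who know Python" job postings.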

tango alpha delta posted:

Pshaw at all you scrubs using fancy apps and artificially flavoured intelligence. I do all my art by typing binary into an ascii file, like a real masochist.

lol

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

Photobashing an image or using a bot to do the compositing for you? Did you think this wasn't inevitable? (re: Wonderstudio)

Are you going to ban using Photoshop because it uses AI, or just the AI features of Photoshop? (...which Adobe says only draw from its Adobe Stock library.)

As for legality: yes, ChatGPT has been made illegal in a few countries. As far as I know, no image generators have been banned, however, and what's more, "AI Images Aren't Protected By Copyright Law According To U.S. Copyright Office." But as the tech gets better, who is to say what was generated by DALL-E versus some time in Photoshop and/or Firefly, or a combination of both?

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

mutata posted:

Bxnfksjfirksuavsvxjvgkmf sbah jmf f sjajajc g fjwjqh.

Sorry you glitched out.

Computers are just tools, after all.

https://www.instagram.com/reel/Czl4VtEx0s5

floofyscorp: Is this the same company? ... "EA Spouse." How did that go again? Oh yeah... old news.

I am sure EA has gotten better about their ethics since then.

Some irony for you guys: the client rejected the AI ref in favor of photo ref. I still ended up building the model from the AI ref first, though. *sigh*

cubicle gangster posted:

In the interests of sharing more content, the talk i did at unreal fest is live, as of 3 hours ago.

https://www.youtube.com/watch?v=6la2yieiCG0

It was my first time doing one so I decided that reading from a script would be better than rambling or potentially making a mistake, but them putting the prompter on a screen down by my feet made that a poo poo option.
My part is just to give context to the development, it's short, but important to say why Jose and Alex had such a hard job ahead of them. Alex's section is where it gets really spicy, they did a lot of fantastic work pulling this off.

That's really impressive.

sigma 6 fucked around with this message at 09:53 on Nov 23, 2023

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

ImplicitAssembler posted:

If we talk about 'machine learning', my experience is very much the opposite. Very talented people seeing/making a tool to make their/our work even better.

Thank you. The fact that there is a stigma attached is very telling. On that note:
This is very impressive IMO.
Sorry for the insta link OP.

https://www.instagram.com/reel/C0Opo-sqs05/?utm_source=ig_web_copy_link&igshid=MzRlODBiNWFlZA==

quote:

I'm using Womp 3d for the modeling, Krea ai for the realtime ai view, Magnific ai for up scaling and details and Stable Diffusion video and Runway for the animation.

Sagebrush posted:

i've been poking around with AI tools to generate texture maps and it's uhhhhhh surprisingly good at it. the resolution isn't quite there yet, but it's usable for many purposes

like i was putting together a kitchen scene and i wanted a hand-painted enamel gilded art nouveau plate.

"art nouveau pattern, vines, green and gold, gilded, circular outline"



erase the background, a little processing to crisp it up and make the metallic and bump maps out of that, and baby you got a stew goin.

it's the world now. parts of it really suck but it isn't going away. gotta figure out how to use it well rather than just going all crunchy and living in the woods.

This is really cool! Some of the most impressive stuff I have seen is AI processing textures.
CGMonk / CGmonsastary has some cool tools.
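One simple way to read "make the bump map out of that" from the post above: convert the color texture to a grayscale height map via luminance. A minimal sketch using Rec. 709 luma weights; this is one common approach, not necessarily what any particular tool does:

```python
def bump_from_rgb(pixels):
    """Convert RGB texture pixels (0-255 per channel) to grayscale
    height values using Rec. 709 luminance weights. Bright enamel/gilt
    areas come out high; dark painted areas come out low."""
    return [round(0.2126 * r + 0.7152 * g + 0.0722 * b)
            for r, g, b in pixels]

# Gold highlight vs. dark green vine from the plate texture:
heights = bump_from_rgb([(212, 175, 55), (20, 60, 20)])
```

In practice you would run this per-pixel over the whole image (e.g. with NumPy or an image editor's desaturate filter) and then adjust contrast before using it as a bump or displacement input.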

sigma 6 fucked around with this message at 09:26 on Dec 1, 2023

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

Koramei posted:

Not entirely wrong there, although i highly doubt it’ll ever get found out, at least the use cases I’ve seen.
Just very much disagreeing with the notion using the tools implies a lack of talent. The artists I know using it that way could draw all of this and better (and have, many times), but it’s such a timesaver and useful ideation tool that they took to it quickly.

Thank you again. I have an airbrush artist friend who switched to AI for reference because downloading copyright-free 4K images was a drag and lacked creative control. Recently his work was featured in an international airbrush magazine, and truth be told, it's about the craft, not the reference.


sigma 6
Nov 27, 2004

the mirror would do well to reflect further

haddedam posted:

Maya rose menu takes longer to learn but it gets you carpal tunnel way faster than any hotkey ever could.

I don't know about that. One of the fastest modelers I ever met just remapped his hotbox menu and swiped in the direction he needed (i.e., up-right, middle, left, or down-right, middle, left). If you use a stylus with a tablet, your chances of getting carpal tunnel are much lower. Also, one can easily remap any keystrokes in Maya, ZBrush, or really most packages.

It comes down to efficiency of workflow more than anything else. A lot of studios allow artists to use the package of their choice for making assets these days although many still stick primarily to one main package for output to the renderer. Sony famously has a custom built version of Maya which Autodesk makes specifically for them.

Personally I can't stand Blender's UI or shortcuts, but I didn't start with them either. QWERTY shortcuts are second nature in Maya / Max / ZBrush, but of course Blender doesn't use those same shortcuts. I am sure I could remap them, but... ugghh, why can't these packages use universal transform, camera, and selection shortcuts? I remember when ZBrush finally got a gizmo and so many people wondered, "Why did this take so long?" I actually like that you can still switch between the gizmo and the transpose tool with the Y key, because some things are just still easier with the transpose tool IMO, like masking by topology.

Again, it comes down to how fast your workflow is. Keystrokes are generally faster than anything else, so it pays to learn them. However, learning shortcuts across three or more packages gets confusing quickly: R is rotate in ZBrush, for example, vs. E in Maya, and that one always gets me. It gets even more confusing when the keystrokes change between versions of the same software.

sigma 6 fucked around with this message at 05:37 on Mar 5, 2024
