Author: MastaMan
Date: October 30, 2024
Updated on: November 7, 2024

Introduction

I decided to continue exploring the topic of AI and how it helps 3D artists in their work. You really liked the previous article, "How ChatGPT and DALL-E can help 3D artists in their work", and many users left interesting comments on it.

How ChatGPT and DALL-E can help 3D artists in their work

This article will show in more detail how you can use an image generator directly in 3ds Max thanks to tyDiffusion, and how to get a realistic background in just a few clicks with minimal effort.

[Image: tyFlow tyDiffusion main image]

Preface

While collaborating with MaryGold Studio, I had the chance to talk with some of their employees. While discussing their projects, they showed me some incredible renders.

The conversation gradually turned into an interview, and I managed to learn a few secrets. Many thanks to Oleh for the information he provided and for his willingness to share such valuable knowledge with the audience.

If you are only interested in the master class itself, you can skip the following dialogue.
Oleh shows renders of the latest projects.

MastaMan: How do you manage to create such amazing images at such a scale? They look hyper-realistic, and what amazes me most is how detailed the background is!

Oleh: Yes, it became possible thanks to the latest AI technologies.

MastaMan: Oh, that's interesting. You probably use Stable Diffusion?

Oleh: Yes and no. We use tyDiffusion, part of the tyFlow plugin and based on Stable Diffusion, and generate images directly in the 3ds Max viewport.

MastaMan: Wow, I didn't even know that you can generate images directly in the Viewport.

Oleh: But that's not all: we also use the paid tool Magnific AI.

MastaMan: Do you use it in conjunction with tyDiffusion?

Oleh: Yes, it helps to fix some of the "crap" left behind after tyDiffusion.

MastaMan: And how complicated is the process of generating, for example, the background for rendering?

Oleh: In fact, the whole process is not complicated; you need to know a few basic settings and compose the prompt correctly.

MastaMan: As I understand it, AI will soon become an integral part of a 3D artist's work; the tools are becoming simpler and more accessible.

Oleh: This is a natural process, and everything is moving in this direction. But there are still some routine tasks we can't get rid of yet.

MastaMan: Can this be solved by writing a script?

Oleh: I think so. The problem is that after generation we get a PNG image of more than 100 MB, and applying such a material to the background is problematic: it simply does not display.

MastaMan: Oh, I have encountered such problems before; a script will cope with this task perfectly.

Oleh: So the process of creating a background will be really simple! Let's call the script AI Texture Projection.

Generating a city background image with tyDiffusion for your renders



To follow along, we will need tyFlow with tyDiffusion pre-installed. The tool can be downloaded for free.

If you have any difficulties, there is a separate video that shows the installation process. All the necessary links are attached at the end of the article.

Preparation

The scene needs to be prepared before generating images. You can simply assign colors to certain objects to let the AI know where the river is, where the grass and green areas are, and where the roads and buildings are. This way, without any loss of generation quality, we save time on rendering the initial frame from which the image will be generated.
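If the scene contains many objects, this color-coding step can be semi-automated. Below is a minimal sketch of the idea in Python using 3ds Max's pymxs module; the name patterns and colors are purely illustrative assumptions, not part of the studio's actual workflow.

```python
# Minimal sketch: color-code scene objects by name so the AI can "read" the zones
# (water, greenery, roads, buildings) in the source render.
# The keyword-to-color mapping below is an assumption for illustration.
from pymxs import runtime as rt

zone_colors = {
    "river":    rt.Color(40, 90, 200),    # blue  -> water
    "grass":    rt.Color(70, 160, 70),    # green -> vegetation
    "road":     rt.Color(90, 90, 90),     # grey  -> roads
    "building": rt.Color(200, 175, 150),  # beige -> buildings
}

for obj in rt.objects:
    name = obj.name.lower()
    for keyword, col in zone_colors.items():
        if keyword in name:
            obj.wirecolor = col  # a flat viewport color is enough as a hint for the AI
            break
```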

As you can see in the screenshot below, there is quite a massive city model in the scene, covering exactly the area for which the final render was created. In your own projects, you can use 3D models from Google Maps or from paid services that provide this kind of data.

We will not focus on this, since the purpose of this master class is to show the basic principles of working with AI.

[Image: whole city in 3ds Max]

For a better understanding, below is a screenshot of what we should end up with.

[Image: what we should prepare first]

Positioning the camera and finding the lighting

In this workflow we are generating the background and environment, so we need to determine where the boundary lies: where we will use a plane with a background texture and where we will keep the high-poly models.

It all depends on your perspective and situation. In this example, the river will be the boundary, so we build a plane along the coastline.

At the same time, it is important that the plane has enough polygons for correct UVW projection using the modifier from tyFlow.
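For example, rather than leaving the plane as a default single quad, give it a few dozen length and width segments so the projection modifier has enough vertices to map the texture onto.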

[Image: plane for projecting the background texture]

Set up the lighting to match the future scene, so that the generated AI image matches your lighting exactly.

[Image: test render for finding the lighting]

Render the angle for generation

Hide the previously created plane and other objects that should not be included in the frame.

Although tyDiffusion generates an image no larger than 1280 px on one side, the source image should be rendered in high resolution to give the AI as much detail as possible; this directly affects the generation result.

[Image: test render of the city]

Launching tyDiffusion

Now we get to the most interesting part. To run tyDiffusion, click the viewport drop-down menu: Standard > tyDiffusion.

After that, a Windows terminal (Command Prompt) will open, where all the necessary processes are launched automatically.

If you have not yet installed the tyFlow plugin, go to the end of the article, where you will find a link to a video on how to quickly install this plugin.

[Image: running tyDiffusion]

Basic tyDiffusion settings

Prompt

Here we write positive and negative prompts for the AI. The main part of the prompt should contain a specific description of the subject we want to receive.

It is also worth adding some quality-related keywords to improve the generation. Specify a negative prompt as well; this will help avoid some artifacts during generation.
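For example (purely illustrative, not the studio's exact wording), a positive prompt might read: "aerial view of a modern city on a river embankment, summer afternoon, photorealistic, highly detailed, sharp focus", while the negative prompt lists what to avoid: "blurry, low quality, distorted buildings, oversaturated, cartoon".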

[Image: tyDiffusion prompt]

Models

In Basic settings, we select the model that comes with tyDiffusion. The list of models is small, but sufficient for good results. We will use Juggernaut Reborn (1.5).

[Image: choosing the Juggernaut Reborn model]

VAE (Variational Autoencoder)

One of the important components of the pipeline, used to encode images into latent space and decode them back during processing. The default settings are quite good, but I recommend experimenting with the additional options.

[Image: VAE (variational autoencoder) settings]

Resolution

I recommend using the values on which the model was trained, but you can experiment.
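As a rough reference, SD 1.5-based checkpoints such as Juggernaut Reborn were trained on images around 512 px, so resolutions in the 512-768 px range per side usually behave most predictably.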

[Image: resolution settings]

Mode

Generation type. In our case, select Image to image.

[Image: setting the mode in tyDiffusion]

Source

The source of the input image. Select Render Color.

Sampler

Here we leave everything as default.

Conserve VRAM

Enable this checkbox if you have limited video memory. I have 24 GB, and without this option the program still crashes.

Steps

The number of sampling steps the model performs during generation: around 20 for a test, 100+ for the final result.

CFG scale (Classifier-Free Guidance Scale)

This is an important parameter in tyDiffusion that controls the balance between creativity and accuracy of the image relative to the prompt.

Essentially, it controls how closely the model will follow your text instructions.
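As a rule of thumb (typical Stable Diffusion behavior, not values from this specific scene), low CFG values of around 3-5 give the model more creative freedom, values of around 7-12 keep it closer to the prompt, and very high values tend to produce oversaturated, over-contrasted images.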

[Image: tyDiffusion CFG scale]

Denoise

The Denoise parameter determines how strongly the source image is allowed to change during generation. For example, if you want to keep the overall look of an image you've already created but add new elements, lowering the Denoise value keeps the intervention minimal.

This option gives you more control over the final result, depending on how much you want to interfere with the image.
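As a rough guide (typical image-to-image behavior rather than settings from this project), Denoise values around 0.2-0.4 keep the render almost untouched, values around 0.5-0.6 restyle it while preserving the layout, and values close to 1.0 let the AI repaint the frame almost from scratch.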

Seed

Controls the variability of the generation. You can always return to a previous result by specifying the exact seed.
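For example, if one generation turned out well and you only want to tweak it, note its seed, reuse it, and change just the prompt or the Denoise value; with the same seed and settings you will get the same base image.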

Configuring ControlNet for accurate generation

Depth

This module uses a depth map, which helps the model better understand the space and distance of objects from the camera.

Edge

Works with contours and lines in an image to control the shape or boundaries of objects. This is ideal for generating well-defined structures or silhouettes.

IP-Adapter

An adapter module that lets the model take reference images from other sources into account, preserving their key features while letting them influence the final result.

Bake Result

After you have clicked Generate image and the result has appeared in the viewport, select the plane and click the Bake result button, as shown in the screenshot below. This saves the image to your computer.

[Image: tyDiffusion Bake result]

AI Texture Projection Script

Let's use a script from MastaMan - AI Texture Projection, which will help you create a material with a baked background texture.

In addition, this script compresses the image from more than 100 megabytes down to a few kilobytes, which allows the material to display in the viewport without problems and to render correctly.
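To illustrate what such a script does under the hood, here is a minimal sketch of the same idea in Python (Pillow for the compression, pymxs for the material). This is not the actual MG AI Texture Projection code; the file paths, material type and JPEG quality are assumptions for illustration.

```python
# Minimal sketch: shrink the heavy baked PNG, then apply it to the selected plane
# as a simple self-illuminated bitmap material.
# Not the actual MG AI Texture Projection script; paths and settings are illustrative.
from PIL import Image
from pymxs import runtime as rt

src = r"C:\temp\tyDiffusion_bake.png"        # hypothetical path to the baked image
dst = r"C:\temp\tyDiffusion_bake_small.jpg"  # compressed copy used by the material

# Re-save the huge PNG as a much lighter JPEG so the viewport can actually display it
Image.open(src).convert("RGB").save(dst, "JPEG", quality=85)

# Build a standard material with the compressed texture and assign it to the selection
mat = rt.StandardMaterial(name="AI_Background")
mat.diffuseMap = rt.BitmapTexture(fileName=dst)
mat.selfIllumAmount = 100          # the background should not react to scene lighting
for obj in rt.selection:
    obj.material = mat
rt.showTextureMap(mat, mat.diffuseMap, True)  # display the map in the viewport
```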

Download AI Texture Projection

Download the archive and extract the script. Drag MG AI Texture Projection.ms into the viewport to run it, or drag and drop Install.mcr into the viewport if you want to add a button to the toolbar or Quad Menu.

[Image: running MG AI Texture Projection]

In Projection Image, select the generated background image; in the viewport, select the plane (or whichever object you baked for). Click the Project On Object button. Done!

The image will be applied to the plane, and you can now remove the entire city from the scene, which will greatly lighten it.

[Image: projecting the generated image onto the plane]
[Image: render result]

Final result

In this way, you can have a detailed background image even at the draft stage, significantly lighten the scene, and speed up rendering.

If you're interested in AI and want to see other ways to use it, check out Patreon, where Oleh shares great videos and personal experiences. The link to Patreon will be at the end of the article.

Result before tyDiffusion

[Image: result before tyDiffusion]

Result after tyDiffusion

[Image: result after tyDiffusion]

If you are interested in this format, where large companies share a look into their "kitchen", and you would like more technical interviews, be sure to write about it in the comments!

Conclusion

Thanks to modern technology and tyDiffusion AI, you can create a realistic background in minutes.

This will not only lighten your scene and speed up rendering, but also take the quality of your images to a new level. And all of this without leaving 3ds Max!

If you liked the result or want to tell us about your experience using AI - share it in the comments. It will be interesting to read!

https://pro.tyflow.com/#lets-go
https://docs.tyflow.com/tyflow_AI/tyDiffusion/installation/
https://www.patreon.com/posts/eng-ai-artist-1-111719490
https://www.patreon.com/c/visualpraxis/posts
https://marygold-studio.com/

