Sep 17, 2011

Optimizing Memory When Rendering with V-Ray in 3ds Max

I found this on a website but do not know the author. The original author deserves full credit for this valuable tutorial.

V-Ray uses both physical and virtual memory but let's keep this simple. V-Ray uses MEMORY, period.
It doesn't separately access or allocate physical or virtual memory; your operating system handles how physical and virtual memory are allocated, not V-Ray. So when V-Ray starts a render, it allocates MEMORY to satisfy its needs (based on a lot of settings, the most important and most common of which I will explain later in this article). It neither knows nor cares what type of memory is being used - physical or virtual. That's handled by the OS.
Now, there's a limit to how much memory V-Ray can use based on what operating system you are using.
A default Windows 32 bit installation (XP / Vista) will not allow you to use more than 2GB per application because it's not able to access larger memory blocks (it's a mathematical limitation of 32 bits). In practice this means you will run into problems once your render is nearing the 1700MB - 1800MB range, since there's also a lot of overhead in each process (= application). If you use the /3GB switch in your boot.ini you can have your applications use up to 3GB of RAM. This limit includes virtual memory and applies on a PER APPLICATION level. Remember though, if the application doesn't support the IMAGE_FILE_LARGE_ADDRESS_AWARE value in the .exe header it will never be able to access memory blocks over 2GB and the /3GB switch is rendered useless! This executable header value is a special flag made available by Microsoft to allow 32 bit software to access memory blocks larger than 2GB.
So if you have say 4GB of physical memory and a 6GB page file set up (and no additional switches enabled in the boot.ini), your application could for instance use 800MB physical memory and 1200MB page file space depending on what other applications are running at the same time. These other apps also use physical memory of course (as do the OS and kernel!) so you will never have the entire 2GB - 3GB (depending on boot.ini switches) available for your application! Your operating system controls how memory is allocated to applications.
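The limits described above can be summed up in a small Python sketch. This is a simplified, illustrative model only (Windows doesn't expose its memory split this way); it just encodes the per-process figures the article mentions:

```python
def user_address_space_mb(bits=32, large_address_aware=False, three_gb_switch=False):
    """Rough per-process user address space on Windows (simplified model)."""
    if bits == 64:
        # x64 Windows gives large-address-aware processes a TB-range virtual
        # address space; physical RAM is the practical limit.
        return 8 * 1024 * 1024  # 8TB expressed in MB
    if large_address_aware and three_gb_switch:
        # /3GB boot switch AND the IMAGE_FILE_LARGE_ADDRESS_AWARE header flag
        return 3 * 1024
    # Default 32 bit split: 2GB for the application, 2GB for the kernel
    return 2 * 1024

print(user_address_space_mb())                                                # 2048
print(user_address_space_mb(large_address_aware=True, three_gb_switch=True))  # 3072
```

Note that the 3GB case requires both conditions: without the header flag in the .exe, the boot switch does nothing for that application.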
NOTE : the /3GB switch is a notorious OS killer! It introduces instability in your OS. Some people are lucky and never run into problems but most will become victim of very unusual and unexpected behavior in both OS and applications. Issues range from not being able to run applications to blue screens indicating a hardware problem. If you are actively using this switch and you experience problems you can't explain, start by removing that switch! 9 out of 10 times this is your problem.
64 bit versions of Windows operating systems don't have this memory limitation. Windows XP / Vista x64 can access up to 128GB of physical memory, all of which is available to any application that supports the IMAGE_FILE_LARGE_ADDRESS_AWARE value in the executable header (in fact, with this flag enabled a 64 bit OS gives each process a virtual address space in the terabyte range... far more than any main board in the world can physically support at present). 3DS Max supports this value. In fact, any true 64 bit application supports this value. However, if the application doesn't support this header value (as with most 32 bit applications), only 2GB of memory can be allocated. Even on a 64 bit OS!
Of course what I have described here is just the tip of the Iceberg. Memory management in your OS depends on a lot of factors and also involves several different layers of memory types (kernel, paged pool, non-paged pool, etc. etc.).
For more information on how the individual Windows OSes manage memory, please refer to Microsoft's MSDN documentation or use Google. There are tons of articles out there on the net describing in detail how memory is handled (just make sure you read articles that come from trusted sources!).
V-Ray and Memory
Now, as for V-Ray, how to overcome memory related issues?
In the next couple of paragraphs I will describe a number of methods to reduce memory usage. You can use most of these methods individually or combined to achieve best results.
Disable unnecessary applications and processes
The first and simplest thing to do to conserve memory is to disable any application you will not need, such as anti-virus software and some Windows services (secondary logon, remote access, IPSec, automatic updates and so on).
Disable / change settings of the Frame Buffer
Another thing to do is to disable the Frame Buffer (of both V-Ray and Max) by disabling the Rendered Frame Window checkbox in the Common Parameters of the Render Scene dialog. The frame buffer uses a lot of memory since the entire render needs to be kept in memory. With V-Ray an 800x600 render with a lot of bright colors can easily take as much as 150MB - 200MB.
IMPORTANT! If you really do need the V-Ray Frame Buffer you should at least set Max's own frame buffer to a very small size, e.g. 50x50, and disable the Get Resolution from Max checkbox in the V-Ray Frame Buffer rollout. The reason is that even though you are using the V-Ray frame buffer, due to the way Max was built, it will STILL create the Max frame buffer in memory! Changing this setting alone can save a LOT of memory in many cases!
Dynamic geometry
Another memory conservation option is to use Dynamic geometry (set the Default Geometry list box to Dynamic in the V-Ray :: System rollout). This setting works in close conjunction with the Dynamic Memory Limit setting! The way this works is that V-Ray only loads the objects needed to render the current buckets and unloads all others. Using this method can result in slower rendering, although you can tweak other settings to counteract this slowdown: increase/decrease the Max. Tree Depth and Face/Level Coef., use larger Bucket Sizes, and set the Dynamic Memory Limit to an appropriate size. See the descriptions below.
Max. Tree Depth:
I will call this MTD from now on. To be able to render objects V-Ray creates so-called BSP trees (Binary Space Partitioning trees), which are basically (and very simply put) a hierarchical data structure of the scene, starting from the Root Node (the entire scene) down to the Leaves (the actual triangles of the meshes). The MTD option influences how many nodes the tree will be able to contain. The default setting for MTD is 60. Setting this to a higher value (like 90) will take more memory but will also render faster. Obviously, setting it to a lower value will take less memory but will also render slower. Mind you, setting this value too high or too low will cause the effect to be inverted! So setting the MTD value too high could cause slower render times compared to the default value while still taking more memory. This value is scene dependent, so it takes a bit of experimenting to find which values work for your particular scene.
Face/Level Coef.:
The Face/Level Coef. option (I will call this FLC for short) works closely together with the MTD value. It controls the maximum number of triangles in each Leaf node. According to ChaosGroup, setting this to a value lower than 2.0 (the default) will cause the BSP tree to take more memory but render faster. Setting it to a higher value will take less memory but render slower. Again, the optimal value greatly depends on your particular scene.
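To get a feel for the MTD/FLC trade-off, here is a toy Python model. It is purely illustrative: V-Ray's real BSP builder is far more sophisticated, and FLC is a coefficient rather than a literal face count. The sketch counts nodes in an idealized binary tree given a depth cap (standing in for MTD) and a leaf size (standing in for FLC):

```python
def bsp_node_count(num_faces, max_depth, faces_per_leaf):
    """Count nodes in an idealized binary tree that keeps splitting the faces
    in half until a leaf holds <= faces_per_leaf faces or max_depth is hit.
    More nodes = more memory used by the acceleration structure."""
    if max_depth == 0 or num_faces <= faces_per_leaf:
        return 1  # this node becomes a leaf
    half = num_faces // 2
    return (1 + bsp_node_count(half, max_depth - 1, faces_per_leaf)
              + bsp_node_count(num_faces - half, max_depth - 1, faces_per_leaf))

# Smaller leaves (think: lower FLC) -> deeper tree -> more nodes -> more memory:
print(bsp_node_count(100000, 60, 8))
print(bsp_node_count(100000, 60, 2))
# A hard depth cap (think: lower MTD) limits memory regardless of leaf size:
print(bsp_node_count(100000, 5, 2))
```

The model also shows why the render-time side of the trade-off exists: deeper trees mean fewer triangles tested per ray, at the cost of storing and traversing more nodes.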
Render Region Division (a.k.a. Bucket Size):
Buckets are the small squares you see in the VFB when you're rendering your scene. These buckets represent a part of the BSP (see above). To decrease memory usage you can set the bucket size (in the V-Ray :: System rollout -> Render Region Division) to a smaller value. The default is 64, which is a fairly good size for most scenes. It is recommended to set the values in binary steps: 8, 16, 32, 64 and 128. Powers of two align nicely with how the image is subdivided internally. Extremely small sizes like 8 and 16 are NOT recommended however, since it will take V-Ray a lot longer to calculate how to render each bucket. V-Ray has to sample parts of the edges of surrounding buckets to prevent seams, so the more (and smaller) buckets you have, the more V-Ray has to calculate. It DOES take less memory though! But be prepared to accept much longer render times.
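That border overhead is easy to quantify with a quick Python sketch. The fixed 2-pixel border here is an assumption for illustration; the real overlap depends on your AA filter size:

```python
import math

def bucket_overhead(image_w, image_h, bucket, border=2):
    """Fraction of extra pixels sampled because each bucket is padded by a
    small border (assumed 2px here) so it blends seamlessly with neighbours."""
    nx = math.ceil(image_w / bucket)
    ny = math.ceil(image_h / bucket)
    padded_pixels = nx * ny * (bucket + 2 * border) ** 2
    return padded_pixels / (image_w * image_h) - 1.0

# Smaller buckets -> many more borders -> much more redundant work:
for size in (8, 16, 32, 64, 128):
    print(size, round(bucket_overhead(1024, 768, size), 2))
```

With these assumptions, 8-pixel buckets more than double the pixel work on a 1024x768 frame, while 64-pixel buckets add only a small fraction - which is why the tiny sizes trade a lot of render time for their memory savings.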
Dynamic Memory Limit
The DML setting is a very important one, as setting wrong values here can seriously cripple V-Ray! The amount of memory you enter in the value box is the TOTAL amount of memory that will be used by ALL render threads (each render thread is represented by a bucket). By default V-Ray (or actually 3DS Max) assigns one thread per physical processor core. You can change the number of render threads with a MaxScript command:
renderers.current.system_numThreads = x
Where x is the number of threads you want the renderer to use.
But since V-Ray already makes optimal use of the available processor cores it is usually wise to just keep the default of one thread per core. Actually, setting up multiple threads per core can make the render process take longer because it now has to divide its processing power amongst several threads which of course also leads to some overhead.
Setting up the DML requires some thought. You normally enter a value that represents the total amount of physical memory you want to assign to all of the render threads. So if you have 4GB of RAM, a Dual Core processor and a 32 bit OS, you would enter a value of 2000MB (under normal circumstances the maximum amount of memory a 32 bit OS can access). That way each render thread (bucket) can use up to 1000MB of physical memory. If a render thread exceeds that threshold (in this case 1000MB) it will start using the page file and swap to disk. This swapping will slow down the render process. And because of the 2GB limitation of 32 bit OSes, if the threads exceed the 2GB limit it will actually crash Max... since it can't access more RAM. Remember, the 2GB limit is per process / application and INCLUDES virtual memory! Replace 2GB with 3GB in the previous text if you're using the /3GB switch.
If you have a 64 bit OS you can enter higher values as the OS is not limited to 2GB.
Do not enter values which are too low as this will cause excessive page file swapping. In the example above, if you enter a value of 1000, each render thread can only use up to 500MB of phys. memory. If it needs more it will use the page file.
If you are using a Quad Core processor, 8GB phys. memory and 64 bit OS, you could enter any value up to the maximum amount of available phys. memory. As for me, I have set it to 6000MB and I'm using 4 render threads. That allows each thread to use 1500MB which is plenty of RAM for virtually all of my projects.
VRayProxies
Now, another excellent method for optimizing memory usage is using VRayProxies. VRayProxies are instances of your original geometry stored in an external file. Not only do they take less memory, they also render faster since they are already in a (semi) render-ready format. Even if you have just one large object a VRayProxy can be beneficial! It "crunches" the large object into a much smaller one, which both benefits render times and makes it easier to navigate the viewport, because of the way a VRayProxy is displayed.
You can create a VRayProxy by simply right-clicking the object, then:
-Select V-Ray Mesh Export
-In the dialog select the folder where you want to save the vrmesh (V-Ray Mesh)
-Either select Export as Single File if you selected one object (or selected multiple objects but want just one VRayProxy object), or Export as Multiple Files if you want each selected object saved in its own vrmesh file
-Tick the Automatically Create Proxies checkbox
-Click OK and you’re good to go
Render to vrimg file
This is a somewhat advanced concept which involves rendering the raw V-Ray render engine output to a file instead of memory. The advantage of this method is that the rendered buckets will be streamed to a file on the hard disk (the vrimg file format) and will be immediately released from memory once they have been saved. Of course this is a HUGE memory saver and is often used as the preferred method when rendering very large format images which would require very large amounts of RAM or when experiencing memory related problems in general.
Considering that rendering an RGB image of 7000x5000 can take up to 700MB of memory for the Frame Buffer alone, you can imagine how much you need a method like this to be able to successfully render such images.
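You can sanity-check that figure with a quick calculation (assuming a 32-bit float RGBA buffer; additional render channels push the total higher, toward the 700MB mentioned above):

```python
def raw_image_mb(width, height, channels=4, bytes_per_channel=4):
    """Size of one uncompressed 32 bit float image buffer, in MB."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)

# A single float RGBA buffer for the 7000x5000 example - over 500MB already
print(round(raw_image_mb(7000, 5000)))
```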
All you need to do to make use of this option is to disable all Frame Buffers (Max and V-Ray), browse to the V-Ray:: Frame Buffer rollout in the Render Scene dialog, enable Render to V-Ray raw image format and select a folder and filename where to save the output to.
Once the render is done you can convert the resulting vrimg file to OpenEXR using the vrimg2exr tool. You can find this tool in the Windows Start Menu > All Programs > ChaosGroup > V-Ray Advanced for 3ds Max > Tools > VRImg to OpenEXR Converter.
You can then use PhotoShop or any other image editing application that supports OpenEXR to open and edit the image / convert it to other formats.
Using Backburner to render V-Ray jobs
Another method that has become common since the latest V-Ray versions were released is starting V-Ray render jobs as Backburner jobs. This way you don't have to start 3DS Max, which of course saves a lot of memory. This was not an option in previous releases of V-Ray since the DR (Distributed Render) system wasn't compatible with Backburner. Now you can, and it's a nice option, especially in combination with the Render to vrimg file method described above.
Bitmap Pager
This is one of those options no one remembers or has ever heard of.
You can find it in the menu Customize > Preferences > Rendering tab.
What this does basically is create a (series of) temporary file(s) on your hard disk to store parts of the bitmaps you want to render. This does a good job when you have a lot of textures or very large textures. I would suggest reading up on the specific settings of this option in the Max Help files. It’s a very useful option!
Manually clear out unused bitmaps from memory
Using the MaxScript command below you can clear out any unused cached bitmap from the memory. Sometimes when you have been doing a lot of test renders and / or work on materials stuff gets “left behind” in memory. This command is useful to get rid of that:
Open the MaxScript Listener (F11) and type: FreeSceneBitmaps()
That’s it.
Keep things clean and think things through!
One of the simplest things you can do to save memory is keep your scene files clean.
A lot of times you see people with tons of unused materials in their scenes, with large bitmaps, or (large) objects in hidden layers. All of those things eat memory. By simply deleting items the moment you know you're not going to need them anymore, you can save some costly memory.
It’s also advisable before doing a heavy render to save your work, close Max, restart and open the scene file and render. It’s not uncommon for Max to keep a lock on parts of the memory (for no reason at all) until you close it down and restart. The amount of locked memory is sometimes quite considerable!
Another thing you could do is resize your bitmap textures to represent the actual maximum size they will be rendered in. A lot of people tend to create their texture in the largest size they can think of, e.g., 4096x4096, which of course is not a problem. It’s actually a good thing to do that. But then they take the same resolution image into Max and apply it to an object that will be no larger than 50x50 pixels when rendered. That’s not smart memory management. If you render your images at a resolution of 1024x768 you will never need a texture map that is larger than 1024x768.
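The savings from resizing are easy to quantify (assuming uncompressed 8 bit RGB, which is roughly what a bitmap occupies once loaded):

```python
def texture_mb(size_px, channels=3, bytes_per_channel=1):
    """In-memory size of an uncompressed square 8 bit RGB texture, in MB."""
    return size_px * size_px * channels * bytes_per_channel / (1024 ** 2)

print(texture_mb(4096))  # 48.0 MB for the full-size source texture
print(texture_mb(1024))  # 3.0 MB once resized to the largest size actually rendered
```

A 16x memory saving per texture, multiplied over dozens of maps in a typical scene.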
Also, a common mistake is to think that a jpg or other compressed image format will take less memory than, for instance, a PhotoShop psd file because it has a smaller file size. WRONG! First of all, loading it costs precious cpu time because the image needs to be decompressed, and once decompressed a jpg of 1MB can actually use 15MB of memory! Depending on the jpg of course. The more colors an image has and the fewer repeated regions it contains, the less it can be compressed (= larger file size). But you get the picture (no pun intended).
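You can demonstrate the file-size-versus-memory difference with Python's zlib standing in for jpg compression; the principle is identical:

```python
import zlib

# 1MB of highly repetitive "pixel" data - compresses very well, like a flat image
data = bytes(range(256)) * 4096
compressed = zlib.compress(data)    # analogous to the small jpg on disk
restored = zlib.decompress(compressed)  # what actually sits in RAM

print(len(compressed) < 100_000)  # True: tiny "file size" on disk
print(len(restored))              # but the full 1048576 bytes are back in memory
```

The compressed blob is a few KB, yet the moment it is used it expands back to the full megabyte: disk size tells you nothing about memory footprint.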
The tips and tricks mentioned here are just a small collection of things you can do to save memory / optimize performance. Also, some tips are V-Ray specific while others will work with and for all render engines out there. It’s truly impossible to list all of the available options to tweak and configure 3DS Max and V-Ray to perform optimally. Read up on and experiment with the options available and you will eventually start to understand how they work. Understanding your render engine is imperative to be able to achieve quality results while maintaining decent render times / memory flow. The ability to successfully render certain scenes might actually depend on your knowledge of your render engine!
I hope this document gives you some food for thought and that some of these tips will actually solve some of your problems.
