About Normal Maps

Refer to http://tech-artists.org/wiki/Normal_map

Normal Map


A normal map is usually used to fake high-res geometry detail on what is actually a low-res mesh. Each pixel of a normal map is used to transfer the normal that's on the high-res mesh onto the surface of the low-res mesh. The red, green, and blue channels of the texture are used to control the direction of each pixel's normal. The pixels in the normal map basically control what direction each of the pixels on the low-poly model will be facing, controlling how much lighting each pixel will receive, and thus creating the illusion of more surface detail or better curvature. The process of transferring normals from the high-res model to the in-game model is often called baking.

A model with a normal map. (image by James Ku)
The low-resolution wireframe. (image by James Ku)
The high-resolution model baked into the normal map. (image by James Ku)

Tangent-Space vs. Object-Space

Normal maps come in two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space; the terms are interchangeable. World-space is essentially the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so world-space is rarely (if ever) used.

Tangent-space normal map

A tangent-space normal map. (image by Eric Chadwick)
  • Predominantly-blue colors.
  • Object can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.
  • Map can be reused on differently-shaped meshes.
  • Map can be tiled and mirrored easily, though some games might not support mirroring very well (see UV Coordinates).
  • Easier to overlay painted details (see Painting).
  • More difficult to avoid any smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).
  • Slightly slower performance than an object-space map (but not by much).


Object-space normal map

An object-space normal map. (image by Eric Chadwick)
  • Rainbow colors.
  • Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation. Good for rotating game elements, like weapons, doors, vehicles, etc.
  • Each mesh shape requires a unique map, can't easily reuse maps.
  • Difficult to tile properly, mirroring requires specific shader support (see Luxinia shader).
  • Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.
  • Harder to overlay painted details because the base colors vary across the surface of the mesh.

Joe Wilson aka EarthQuake wrote: "We have a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to "bend" the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this "combo" texture over my orig map in Photoshop." 

RGB Channels

The red, green, and blue channels of a tangent-space normal map. (image by Eric Chadwick)

Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards, away from the surface).
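To make that concrete, here is a minimal sketch (in Python with NumPy; not from the original article) of how a tool or shader decodes those 8-bit channels back into unit-length normals, assuming the R=X, G=Y, B=Z convention described above:

```python
import numpy as np

def decode_normal_map(rgb):
    """Decode 8-bit RGB texels (0-255) into unit normals (-1 to +1).

    rgb: uint8 array of shape (height, width, 3), where R=X, G=Y, B=Z.
    """
    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
    # Re-normalize to undo the small error introduced by 8-bit quantization.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

A flat "straight up" normal (0, 0, 1) encodes to RGB (128, 128, 255), which is why tangent-space maps look predominantly blue.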

If you see lighting coming from the wrong angle when you're looking at your normal-mapped model, and the model is using tangent-space normal maps, the normal map shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this, either change the shader, or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.
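Inverting a channel is a one-liner in any image library; a sketch along the same lines as the decode example above, assuming a uint8 array:

```python
def flip_green(rgb):
    """Invert the green (Y) channel in place: black pixels become white
    and white pixels become black, matching the opposite convention."""
    rgb[..., 1] = 255 - rgb[..., 1]
    return rgb
```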

Some shaders expect the color channels to be swapped or re-arranged to work with a particular compression format. For example, the DXT5_nm format usually expects the X axis in the alpha channel, the Y axis in the green channel, and the red and blue channels to be empty.
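Because only two channels carry data in DXT5_nm, the shader rebuilds Z from the fact that normals have unit length. A sketch of that reconstruction (continuing the NumPy examples; the exact channel layout varies per engine):

```python
import numpy as np

def decode_dxt5nm(texel):
    """Rebuild a normal from a DXT5_nm-style texel (float RGBA in 0-1)."""
    x = texel[..., 3] * 2.0 - 1.0  # X stored in the alpha channel
    y = texel[..., 1] * 2.0 - 1.0  # Y stored in the green channel
    # Z isn't stored; recover it from the unit-length constraint x²+y²+z²=1.
    z = np.sqrt(np.maximum(0.0, 1.0 - x * x - y * y))
    return np.stack([x, y, z], axis=-1)
```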

Tangent Basis

When shared edges are at different angles in UV space, different colors will show up along the seam. The tangent basis works with these colors to light the model properly. (image by Eric Chadwick)

Tangent-space normal mapping doesn't just use a map; it also uses a special kind of vertex data called the tangent basis. This is similar to UV coordinates, except it provides directionality across the surface. Three vectors are created for each vertex: normal, tangent, and bitangent (aka binormal). Together these three vectors form an axis frame for each vertex, giving it a specific orientation. These axes are used to transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.

For example, when you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.

When the renderer (or game engine) renders your game model, the shader must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting. In fact you'll get seams all over the place without it. For the best lighting the shader should be written to extract the tangent basis stored in the mesh, and the mesh importer must either import the tangent basis created by the baker, or use the same method to recreate the tangent basis itself. If the shader doesn't use the same tangent basis as the baker, the lighting might be correct in some places but incorrect in others.

There are a few different methods programmers can use to calculate the tangent basis: DirectX, NVIDIA mesh mender, a custom solution, etc. The baking app xNormal supports custom tangent-basis generators to make sure the engine and the baker match.

The tangent basis is calculated from the UV layout and the smoothing groups (hard edges), because each vertex's basis is typically a combination of three things: the vertex normal (influenced by smoothing), the vertex tangent (usually derived from the direction of the U texture coordinate), and the vertex bitangent (derived by coders using that lovely thing called math). This means the UVs and the normals on the low-res mesh directly influence the coloring of a tangent-space normal map when it is baked. For this reason you should avoid changing the UVs without re-baking the map, because the map probably won't match the tangent basis anymore and you'll see lighting problems.
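For the curious, here is a sketch of one common way to build a per-triangle tangent and bitangent from positions and UVs (Lengyel's method; in practice bakers average these across each vertex's triangles, and the exact math varies by tool):

```python
import numpy as np

def triangle_tangent_basis(p0, p1, p2, uv0, uv1, uv2, normal):
    """Tangent/bitangent for one triangle, from positions and UVs."""
    e1, e2 = p1 - p0, p2 - p0                    # edge vectors in 3D
    d1, d2 = uv1 - uv0, uv2 - uv0                # edge vectors in UV space
    r = 1.0 / (d1[0] * d2[1] - d2[0] * d1[1])    # signed inverse UV area
    tangent = (e1 * d2[1] - e2 * d1[1]) * r      # follows the U direction
    bitangent = (e2 * d1[0] - e1 * d2[0]) * r    # follows the V direction
    # Gram-Schmidt: keep the tangent perpendicular to the vertex normal.
    tangent -= normal * np.dot(normal, tangent)
    return tangent / np.linalg.norm(tangent), bitangent
```

Notice that the UV edge vectors appear directly in the math: rotate or mirror the UVs and the tangent rotates or flips with them, which is exactly why a baked map stops matching the basis after UV edits.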

  • Q: Why do tileable normal maps work on models? It seems to work correctly on any model I have tried and I'm not talking just walls here. I'm wondering how a normal map generated from one mesh works fine on a different mesh?
  • A: A normal map baked from a flat, tileable 0-1 UV layout doesn't contain those directional gradients, because the tangent space is uniform across the whole surface.


UV Coordinates

See also: Texture coordinate

The mirrored UVs (in red) are offset 1 unit before baking. (image by Eric Chadwick)

If you want to mirror the UVs, or reuse parts of the normal map by overlaying multiple bits in the same UV space, simply move all those overlapped/mirrored bits one unit away before you capture the normal map. Whether you move them to the side, up, or down doesn't matter, as long as only one copy of forward-facing UVs remains in the 0-1 UV box when you bake. For example, in the 3ds Max Edit UVWs window you can use the Select Inverted Faces command to select all the faces to offset.

Normal map baking tools only capture normals within the 0-1 UV box; any UV bits outside this area are ignored (see Address Modes). If you move all the overlaps exactly 1 UV unit away, you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want; it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.
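In script form the offset trick is trivial; a hypothetical sketch, where `mirrored_verts` stands in for whatever selection your tool gives you for the mirrored shells:

```python
import numpy as np

def offset_mirrored_shells(uvs, mirrored_verts):
    """Shift mirrored/overlapping UV shells exactly 1 unit in U.

    The baker only samples the 0-1 box, and because UVs wrap at whole
    units the shells can stay offset after the bake with no visual change.
    """
    uvs[mirrored_verts] += np.array([1.0, 0.0])
    return uvs
```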

Many games have a difficult time solving the seam when a normal map is mirrored down the center of the mesh. This can be avoided by offsetting the mirror point. Ben Regimbal aka b1ll has some examples of offset-mirroring on his site (especially the blond at far right).

If you change the UV layout after baking you should re-capture the normal map, because rotating or mirroring UVs after baking may cause the map to no longer match the tangent basis, which will likely cause lighting problems. The best advice is to experiment with your engine to see what works in your case. If your engine doesn't give you incorrect lighting when you do it, you might want to use Will Fuller's normal map actions to rotate and flip areas of the normal map right inside Photoshop, without having to re-bake.

Modeling The High Poly Mesh

Subdivision Surface modeling is the technique used most often for normal map modeling. The subdivision cage or basemesh is typically not the same as the in-game mesh used for baking. The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see Smoothing Groups).

Some artists prefer to model the in-game mesh first, some prefer to model the high-res mesh first, and some start somewhere in the middle. The modeling order is ultimately a personal choice; all three methods can produce excellent results.

  • Build the in-game model, then up-res it and sculpt it.
  • Build and sculpt a high resolution model, then build a new in-game model around that.
  • Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.

If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.

JPG tutorial: Modeling High/Low Poly Models for Next Gen Games by João "Masakari" Costa

Smoothing Groups or Hard Edges

Smoothing groups split the vertex normals, causing ray misses (red area). (image by Eric Chadwick)
Bevels interpolate across the vertex normals, minimizing raycast errors. (image by Eric Chadwick)

It is generally better not to use smoothing groups or hard edges to add definition to the low-poly game mesh. Smoothing groups cause the vertex normals to be split along the hard edge. This can cause normal map baking errors because the split normals each "see" a different part of the high-res mesh during the raycasting process.

A single smoothing group should be applied to the entire in-game mesh before baking. However this can produce extreme shading differences across the model, as the lighting is interpolated across the extreme differences between the vertex normals. It is usually better to reduce these extremes when you can (mostly by adding bevels) because the tangent basis can only do so much to counteract the extreme lighting variations. Less extreme gradients are also better if your game engine doesn't use the same tangent basis as the baker (or doesn't make its own properly).

When you use object-space normal maps the vertex normal problem goes away since you're no longer relying on the crude vertex normals of the mesh. An object-space normal map completely ignores vertex normals. 

Baking

When you use normal map baking software to create your normal map, it grabs the normals from your high-poly mesh and puts them into a normal map for the low-poly mesh. The baker usually starts projecting a certain numerical distance out from the low-poly mesh, and sends rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records the mesh's surface normal into your normal map.
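A heavily simplified sketch of that per-texel projection loop (the `highpoly.raycast` API here is hypothetical, standing in for whatever your baker uses internally):

```python
import numpy as np

def bake_texel(texel_pos, texel_normal, highpoly, ray_distance):
    """One texel of an object-space bake: cast inward, record the hit normal.

    texel_pos/texel_normal: the position and interpolated normal on the
    low-poly surface that this texel maps to.
    """
    origin = texel_pos + texel_normal * ray_distance  # start outside the surface
    hit = highpoly.raycast(origin, -texel_normal)     # cast inward at the high-poly
    if hit is None:
        return None                                   # ray miss; fill with padding later
    # A tangent-space baker would first rotate hit.normal into the texel's
    # tangent basis; either way the vector is packed into 8-bit RGB.
    return np.round((hit.normal * 0.5 + 0.5) * 255.0).astype(np.uint8)
```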

To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can learn the settings that really matter.

An in-game mesh with split normals causes ray misses (yellow) and ray overlaps (cyan). (image by Diego Castaño)
An in-game mesh using a single smoothing group minimizes ray-casting errors. (image by Diego Castaño)

Working with Cages

Cage has two meanings in the normal-mapping process: a low-poly basemesh for subdivision surface modeling, or a ray-casting mesh used for normal map baking. In this section we'll talk about the cage for baking.

Instead of using a numerical distance to start ray-casting from, some software allows you to use a ballooned-out copy of the low-poly mesh to control that starting distance. This ballooned-out mesh is the cage.
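Conceptually, the default cage is just the low-poly mesh pushed out along its averaged vertex normals; a sketch (`push_distance` is a hypothetical per-bake setting):

```python
def make_cage(lowpoly_verts, lowpoly_normals, push_distance=0.05):
    """Balloon the low-poly mesh outward along its averaged vertex
    normals to form a starting cage; bakers then let you edit
    individual cage vertices to solve tight areas."""
    return lowpoly_verts + lowpoly_normals * push_distance
```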

Solving Intersections

The projection process often causes problems like misses, or overlaps, or intersections. It can be difficult generating a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like in between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting the ray distance too small will lead to problems at other places on the mesh where the distances between in-game mesh and high-poly mesh are greater.

Fortunately there are several methods for solving these problems.

  1. Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.
  2. Limit the projection to matching materials, or matching UVs.
  3. Explode the meshes.
  4. Bake two or more times using different cage sizes, and combine them in Photoshop.

Solving Wavy Lines

When capturing from a cylindrical shape, often the differences between the low-poly mesh and the high-poly mesh will create a wavy edge in the normal map. There are a couple ways to avoid this:

  1. Adjust the shape of the cage to influence the directions the rays will be cast. At the bottom of this page of his normal map tutorial, Ben Mathis aka poopinmymouth shows how to do this in 3ds Max. The same idea is shown in the image below.
  2. Subdivide the low-res mesh so it more closely matches the high-res mesh. Jeff Ross aka airbrush has a video tutorial that shows how to do this in Maya.
  3. Paint out the wavy line. The normal map process tutorial by Ben Mathis aka poopinmymouth includes an example of painting out wavy lines in a baked normal map.
  4. Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. For example, to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can be layered onto the bake from the cylindrical tire mesh in a paint program.
  5. The polycount thread "approach to techy stuff" has some good tips for normal-mapping cylindrical shapes, where you often see the wavy-line problem.
Adjust the shape of the cage to remove distortion. (image by Timothy Evison aka tpe)

Solving Pixel Artifacts

Random pixel artifacts in the bake. (image by Eric Chadwick)

If you are using 3ds Max's Render To Texture to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.

There are two solutions for this:

  • Add a Push modifier to the copied mesh, and set it to a low value like 0.01.

- or -

  • Turn off Filter Maps in the render settings (Rendering menu > Render Setup > Renderer tab > uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.

Baking Transparency

Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.

3ds Max's RTT baker causes transparency errors. (image by Joe Wilson aka EarthQuake)
The lighting method bakes perfect transparency. (image by Joe Wilson aka EarthQuake)

To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map. Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.

On the high-poly mesh, either use a specific lighting setup or use a special material shader...

  1. The lighting setup is described in these tutorials:
  2. The material shader does the same thing, but doesn't require lights:
The lighting setup for top-down rendering. (image by Ben Cloward)

Anti-Aliasing

Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix any jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this tends to render much, much slower, and takes more memory.

One trick to speed this up is to render at 2x the intended image size, then scale the normal map down to 1/2 size in a paint program like Photoshop. The pixel resampling during the reduction adds anti-aliasing for you in a very quick process. After scaling, make sure to re-normalize the map if your game doesn't do that already, because un-normalized pixels in your normal map may cause speckled artifacts in your specular highlights. Re-normalizing can be done with NVIDIA's normal map filter for Photoshop.
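A sketch of that downscale-and-renormalize step with Pillow and NumPy (the file names are placeholders):

```python
import numpy as np
from PIL import Image

# Bake at 2x resolution, then downsample; the resampling averages
# neighboring normals, which acts as cheap anti-aliasing.
img = Image.open("bake_2048.png").convert("RGB").resize((1024, 1024), Image.LANCZOS)

# Averaging shortens the encoded vectors, so re-normalize afterwards.
n = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0
n /= np.linalg.norm(n, axis=-1, keepdims=True)
Image.fromarray(np.round((n * 0.5 + 0.5) * 255.0).astype(np.uint8)).save("bake_1024.png")
```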

3ds Max's supersampling doesn't work nicely with edge padding; it produces dark streaks in the padded pixels. To work around this, turn off padding during the supersampled bake and re-create the padding afterward, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with xNormal.

Edge Padding

Edge padding grows the border pixels of each UV chart outward into the empty space of a texture. Without it, texture filtering at render time lets the empty-space color (the background/fill color) bleed into samples taken near UV borders, creating artifacts. The same thing happens when mipmaps are created for the texture: the fill color distorts the normals in the smaller mips. A few passes of edge padding therefore noticeably raise texture quality at runtime.
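One simple way to implement padding is repeated dilation, where each pass copies border pixels one texel outward. A sketch, assuming you have a boolean `coverage` mask of which texels the bake actually wrote:

```python
import numpy as np

def pad_edges(rgb, coverage, passes=4):
    """Grow each UV chart's border pixels into the empty background.

    rgb: (H, W, 3) texture array; coverage: (H, W) bool mask, True
    where the baker wrote real data. Each pass dilates by one pixel.
    """
    for _ in range(passes):
        grown = coverage.copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            src = np.roll(coverage, shift, axis=(0, 1))
            fill = src & ~coverage               # empty texels next to the chart
            rgb[fill] = np.roll(rgb, shift, axis=(0, 1))[fill]
            grown |= fill
        coverage = grown
    return rgb
```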

Painting

Don't be afraid to edit normal maps in Photoshop. After all it is just a texture, so you can clone, blur, copy, blend all you want... as long as it looks good of course. Some understanding of the way colors work in normal maps will go a long way in helping you paint effectively.

A normal map sampled from a high-poly mesh will nearly always be better than one converted from a 2D texture, since you're grabbing "proper" normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.

If you simply convert an image into a normal map, the result can look very flat, and in some cases it can be completely wrong unless you're careful about your value ranges. Most image-conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture you've painted, the results are often very poor. The best results usually come from baking the large and mid-level details from a high-poly mesh, then combining them with photo-sourced "fine detail" normals for surface details such as fabric weave, scratches, and grain.

However... sometimes creating a high-poly surface takes more time than your budget allows. For characters and significant environment assets the high-poly route is still the best choice, but for less significant environment surfaces, working from a heightmap-based texture can provide a good enough result for a much smaller time commitment.

Some tutorials for painting normal maps or creating them from 2D sources (paintings, photos, displacement maps):

Re-Normalizing

See also: Unit vector

The re-normalize option in the NVIDIA filter. (image by Scott Warren)

Re-normalizing means resetting the length of each normal in the map to 1.

A normal map shader combines the three color channels of a normal map to create the direction and length of each pixel's normal, which is then used to calculate lighting. When you alter normal maps, either by blending them together or editing them by hand, the lengths can change. Some shaders are written to re-normalize the normal map; others are not, and expect the length of the normals to be 1.

If the lengths of the normals are not normalized to 1, and the shader doesn't re-normalize, you may see artifacts on the shaded surface... the specular highlight may speckle like crazy, the surface may get patches of odd shadowing, etc.

NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing, just use the Normalize Only option.
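In script form, re-normalizing is the same operation the earlier decode example performed; a sketch of a standalone version:

```python
import numpy as np

def renormalize(rgb):
    """Reset every encoded normal to length 1 (what the NVIDIA filter's
    'Normalize Only' option does inside Photoshop)."""
    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
    length = np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-6)
    return np.round((n / length * 0.5 + 0.5) * 255.0).astype(np.uint8)
```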

An ambient occlusion pass can be multiplied into the blue channel of a tangent-space normal map, which shortens the normals in the crevices of the surface. However, the shader must be altered to actually use the lengths of your custom normals; most shaders simply assume all normals have a length of 1.
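Continuing the NumPy sketches, baking AO into the normal lengths is a single multiply (`ao` here is a hypothetical (H, W) occlusion array in 0-1):

```python
def bake_ao_into_blue(rgb, ao):
    """Multiply an AO pass into the blue (Z) channel, shortening normals
    in occluded crevices. Only shaders that read the normal's length
    (instead of re-normalizing) will actually show this darkening."""
    rgb[..., 2] = (rgb[..., 2] * ao).astype(rgb.dtype)
    return rgb
```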

Some shaders use compressed normal maps, with a technique called swizzling. Usually this means the blue channel is thrown away completely, and it's recalculated in the shader. The shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will probably be ignored completely. 

Backlighting Example

If the shader doesn't re-normalize the normal map, you can customize the normal map for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.

Tree simulating subsurface scattering (front view). (image by Eric Chadwick)
Tree simulating subsurface scattering (back view). (image by Eric Chadwick)
The maps used for the leaves. The 2nd diffuse was simply color-inverted, hue-shifted 180°, and saturated. (image by Eric Chadwick)

The tree leaves use a shader that adds together two diffuse maps: one uses a regular tangent-space normal map, the other uses the same normal map with the blue channel inverted. The diffuse map using the regular normal map only gets lit on the side facing the light (front view), while the diffuse map using the inverted normal map only gets lit on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, the leaves in shadow receive no direct lighting, which means their backsides do not show the inverted normal map; the fake subsurface-scattering effect only appears where the light directly hits the leaves. This wouldn't work for a whole forest, because of the computational cost of self-shadowing and double normal maps, but it could be useful for a single "star" asset.

Shaders and Seams

You need to use the right kind of shader to avoid seeing seams. The shader must be written to extract the tangent basis stored in the mesh, and the mesh importer must either import the tangent basis created by the baker or use the same method to recreate the tangent basis itself. If the shader doesn't use the same tangent basis as the baker, the lighting will be inconsistent across the UV borders.

3ds Max Shaders

These shaders use the tangent basis, solving lighting properly:

Load the .FX files using the DirectX Shader material. Make sure to set up your lights in the shader, or it may just render black. Often you also need to specify a diffuse bitmap; you can't just leave that slot blank.

These shaders do not use the tangent basis, so they cause lighting seams:

  • Standard material, bitmap loaded in the Bump channel via Normal Bump, using the checkbox DX Display of Standard Material.
  • Standard material, Enable Plugin Material = Metal Bump9.
  • DirectX Shader material, with any of the .FX files that ship with Max 9.

Maya Shaders

  • BRDF shader by Brice Vandemoortele and Cedric Caillaud (more info in this thread on Polycount.net). Update: new version here with many improvements, including object-space normal maps, relief mapping, self-shadowing, etc.
  • If you want to use the software renderer, use mental ray instead of Maya's software renderer, because mental ray correctly interprets tangent-space normals. The Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognize a gloss map plugged into the "cosine power" slot. The slider still works, if you don't mind having a uniform value for gloss. Spec maps work fine too. Just use the same setup as you would for viewport rendering. You'll need to have your textures saved as TGAs or similar for mental ray to work though. - from CheeseOnToast

Normal Map Compression

Normal maps can take up a lot of memory. Compression can reduce the size of a map to 1/4 or less of the uncompressed file, which means you can either increase the resolution or you can use more maps.

See: Normal map compression for normal-map-specific compression considerations and types.
See also: DirectDraw Surface (.dds) for a list of all DDS compression types.

3D Tools

  • xNormal by Santiago Orgaz et al. is a free application to generate normal, ambient occlusion, parallax displacement, and relief maps. It can also project the texture of the high-poly model onto the low-poly mesh, even with different topologies. It includes an interactive 3D viewer with multiple mesh and texture format support, shaders, realtime soft shadows, and a glow effect. It also includes useful tools like height map, normal map, cavity map, and occlusion map generation, and tangent-space/object-space conversion.
  • GPU MeshMapper by Advanced Micro Devices, Inc. bakes normal, displacement, and ambient occlusion maps from high- to low-polygon meshes within an easy-to-use GUI. Vector displacement preview and GPU acceleration work only with ATI graphics cards, but an ATI graphics card is not required to run and use the tool. MeshMapper is free software.
  • Turtle by Illuminate Labs is a commercial baking and rendering plugin for Maya.
  • SHTools for UE3 (restricted access) is a baking application included with Unreal Engine 3.
  • PolyBump2 by Crytek is a baking application included with CryEngine2.
  • Renderbump by id software is a baking application included with Doom 3.
  • Kaldera by Mankua is a commercial baking plugin for 3ds Max, but it hasn't been updated since 2005.
  • Melody by NVIDIA is a free baking application, but it hasn't been updated since 2005.
  • ORB (Open Render Bump) by Martin Fredriksson and C. Seger is a free baking application, but it hasn't been updated since 2003.
  • The major 3D apps (3DS Max, Blender, Cinema 4D, Maya, XSI) and the major 3D sculpting tools (Modo, Mudbox, ZBrush, 3D-Coat) each have their own integrated normal map baking tools.

2D Tools

  • Crazy Bump by Ryan Clark is a commercial tangent-space normal map converter for 2D images. It is very likely the best and the fastest of them all. It also creates displacement maps, specular maps, fixes problems with diffuse maps, layers multiple normal maps, etc. A must-have tool for normal mapping.
  • NVIDIA normal map filter for Photoshop is a free tangent-space normal map converter for 2D images. It also re-normalizes, converts to height, and creates DuDv maps.
  • NVIDIA DDS texture compression plugin for Photoshop is also free, and has the same options as the NVIDIA normal map filter. Additionally it lets you create the best-quality mip maps for a normal map, by filtering each mip independently from the original source image, rather than simply scaling down the normal map.
  • NormalMapper by Advanced Micro Devices, Inc. A very simple tool to convert greyscale TGA images to normal maps, includes full C++ source code.
  • ShaderMap by Rendering Systems is a commercial normal map converter for photos and displacement maps. It has a free command-line version and a low-cost GUI version.
  • Normal Map Actions for Photoshop by Will Fuller aka sinistergfx:
    1. Overlay: Levels blue channel of current layer to 127 and sets the blend mode to overlay. Used for overlaying additional normal map detail.
    2. Normalize: Just does a nVidia normal map filter normalize on the current layer.
    3. Normalize (flatten): Flattens the image and does a nVidia normal map filter normalize.
    4. Rotate 90 CW: Rotates current normal map layer 90 degrees clockwise and fixes your red and green channels so that it doesn't break your normal map.
    5. Rotate 90 CW (inverted Y): Rotates current normal map layer 90 degrees clockwise and fixes your red and green channels so that it doesn't break your normal map. For normal maps that use the inverted Y convention.
    6. Rotate 90 CCW: Rotates current normal map layer 90 degrees counter-clockwise and fixes your red and green channels so that it doesn't break your normal map.
    7. Rotate 90 CCW (inverted Y): Rotates current normal map layer 90 degrees counter-clockwise and fixes your red and green channels so that it doesn't break your normal map. For normal maps that use the inverted Y convention.
    8. Rotate 180: Rotates current normal map layer 180 degrees and fixes your red and green channels so that it doesn't break your normal map.

Reposted from blog.csdn.net/cnjet/article/details/55095785