Followup: Normal Mapping Without Precomputed Tangents

This post is a follow-up to my 2006 ShaderX5 article [4] about normal mapping without a precomputed tangent basis. In the time since then I have refined this technique with lessons learned in real life. For those unfamiliar with the topic, the motivation was to construct the tangent frame on the fly in the pixel shader, which ironically is the exact opposite of the motivation from [2]:

Since it is not 1997 anymore, doing the tangent space on the fly has some potential benefits, such as reduced complexity of asset tools, less per-vertex bandwidth and storage, fewer attribute interpolators, less transform work for skinned meshes and, last but not least, the possibility to apply normal maps to any procedurally generated texture coordinates or non-linear deformations.

Intermission: Tangents vs Cotangents

The way that normal mapping is traditionally defined is, I think, flawed, and I would like to point this out with a simple C++ metaphor. Suppose we had a class for vectors, for example called Vector3, but also a different class for covectors, called Covector3. The latter would be a clone of the ordinary vector class, except that it behaves differently under a transformation (Edit 2018: see this article for a comprehensive introduction to the theory behind covectors and dual spaces). As you may know, normal vectors are an example of such covectors, so we are going to declare them as such. Now imagine the following function:

Vector3 tangent;
Vector3 bitangent;
Covector3 normal;

Covector3 perturb_normal( float a, float b, float c )
{
    return a * tangent +
           b * bitangent +
           c * normal;
           // ^^^^ compile error: type mismatch for operator +
}

The above function mixes vectors and covectors in a single expression, which in this fictional example leads to a type mismatch error. If the normal is of type Covector3, then the tangent and the bitangent should be too, otherwise they cannot form a consistent frame, can they? In real-life shader code, of course, everything would be declared as float3 and would compile just fine, which is exactly the problem.

Mathematical Compile Error

Unfortunately, the above mismatch is exactly how the 'tangent frame' for the purpose of normal mapping was introduced by the authors of [2]. The type mismatch is invisible as long as the tangent frame is orthogonal. When the exercise is, however, to reconstruct the tangent frame in the pixel shader, as this article is about, then we have to deal with a non-orthogonal screen projection. This is the reason why in the book I introduced both \vec{T} (which should be called the cotangent) and \vec{B} (now it gets somewhat silly, it should be called the co-bitangent) as covectors, otherwise the algorithm does not work. I have to admit that I could have been more articulate about this detail, and it has caused real confusion among readers.

The discrepancy is explained above, as my 'tangent vectors' are really covectors. The definition on page 132 of [4] is consistent with that of a covector, and so the frame \left(\vec{T}|\vec{B}|\vec{N}\right) should rightly be called a cotangent frame.

Intermission 2: Blinn's Perturbed Normals (History Channel)

In this section I would like to show how the definition of \vec{T} and \vec{B} as covectors follows naturally from Blinn's original bump mapping paper [1]. Blinn considers a curved parametric surface, for instance a Bézier patch, on which he defines tangent vectors \vec{p}_u and \vec{p}_v as the derivatives of the position \vec{p} with respect to u and v.

In this context it is a convention to use subscripts as a shorthand for partial derivatives, so he is really saying \vec{p}_u = {\partial\vec{p} \over \partial u}, etc. He also introduces the surface normal \vec{N} = \vec{p}_u \times \vec{p}_v and a bump height function f, which is used to displace the surface. In the end, he arrives at a formula for a first-order approximation of the perturbed normal:

    \[\vec{N}' \simeq \vec{N} + \frac{f_u \vec{N} \times \vec{p}_v + f_v \vec{p}_u \times \vec{N}}{|\vec{N}|} ,\]

I would like to draw your attention to the terms \vec{N} \times \vec{p}_v and \vec{p}_u \times \vec{N}. They are the perpendiculars to \vec{p}_u and \vec{p}_v in the tangent plane, and together they form a vector basis for the displacements f_u and f_v. They are also covectors (this is easy to verify, as they behave like covectors under transformation), so adding them to the normal does not raise said type mismatch. If we divide these terms one more time by |\vec{N}| and flip their signs, we arrive at the ShaderX5 definition of \vec{T} and \vec{B} as follows:

    \begin{align*} \vec{T} &= -\frac{\vec{N} \times \vec{p}_v}{|\vec{N}|^2} = \nabla u, & \vec{B} &= -\frac{\vec{p}_u \times \vec{N}}{|\vec{N}|^2} = \nabla v, \end{align*}

    \[\vec{N}' \simeq \hat{N} - f_u \vec{T} - f_v \vec{B} ,\]

where the hat (as in \hat{N}) denotes the normalized normal. \vec{T} can be interpreted as the normal to the plane of constant u, and likewise \vec{B} as the normal to the plane of constant v. Therefore we have three normal vectors, or covectors, \vec{T}, \vec{B} and \vec{N}, and together they form the basis of a cotangent frame. Equivalently, \vec{T} and \vec{B} are the gradients of u and v, which is the definition I used in the book. The magnitude of the gradient therefore determines the bump strength, a fact that I will discuss later when it comes to scale invariance.
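The covector nature of \vec{T} and \vec{B} can also be verified directly from the definitions, using the scalar triple product and \vec{N} = \vec{p}_u \times \vec{p}_v:

    \begin{align*} \vec{T} \cdot \vec{p}_u &= -\frac{\left(\vec{N} \times \vec{p}_v\right) \cdot \vec{p}_u}{|\vec{N}|^2} = \frac{\vec{N} \cdot \left(\vec{p}_u \times \vec{p}_v\right)}{|\vec{N}|^2} = 1 , & \vec{T} \cdot \vec{p}_v &= 0 , \\ \vec{B} \cdot \vec{p}_u &= 0 , & \vec{B} \cdot \vec{p}_v &= 1 , \end{align*}

so \left(\vec{T},\vec{B}\right) is precisely the dual (reciprocal) basis of \left(\vec{p}_u,\vec{p}_v\right), and the two coincide only when \vec{p}_u and \vec{p}_v are orthonormal.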

A Little Unlearning

The mistake of many authors is to unwittingly take \vec{T} and \vec{B} for \vec{p}_u and \vec{p}_v, which only works as long as the vectors are orthogonal. Let's unlearn 'tangent', relearn 'cotangent', and repeat the historical development from this perspective: Peercy et al. [2] precompute the values f_u and f_v (the change of bump height per change of texture coordinate) and store them in a texture. They call it a 'normal map', but it is really something like a 'slope map', and such maps have been reinvented recently under the name of derivative maps. A slope map cannot represent horizontal normals, as this would need an infinite slope. It also needs some 'bump scale factor' stored somewhere as metadata. Kilgard [3] introduces the modern concept of a normal map as an encoded rotation operator, which does away with the approximation altogether and instead defines the perturbed normal directly as

    \[\vec{N}' = a \vec{T} + b \vec{B} + c \hat{N} ,\]

where the coefficients a, b and c are read from a texture. Most people would think that a normal map stores normals, but this is only superficially true. Kilgard's idea was that, since the unperturbed normal has coordinates (0,0,1), it is sufficient to store the last column of the rotation matrix that would rotate the unperturbed normal to its perturbed position. So yes, a normal map stores basis vectors that correspond to perturbed normals, but it really is an encoded rotation operator. The difficulty starts to show when normal maps are blended, since this then amounts to an interpolation of rotation operators, with all the complexity that goes with it (for an excellent review, see the article about Reoriented Normal Mapping [5]).

Solution of the Cotangent Frame

The problem to be solved for our purpose is the opposite of Blinn's: the perturbed normal is known (from the normal map), but the cotangent frame is unknown. I'll give a short revision of how I originally solved it. Define the unknown cotangents \vec{T} = \nabla u and \vec{B} = \nabla v as the gradients of the texture coordinates u and v as functions of position \vec{p}, such that

    \begin{align*} \mathrm{d} u &= \vec{T} \cdot \mathrm{d} \vec{p} , & \mathrm{d} v &= \vec{B} \cdot \mathrm{d} \vec{p} , \end{align*}

where \cdot is the dot product. The gradients are constant over the surface of an interpolated triangle, so introduce the edge differences \Delta u_{1,2}, \Delta v_{1,2} and \Delta \vec{p_{1,2}}. The unknown cotangents have to satisfy the constraints

    \begin{align*} \Delta u_1 &= \vec{T} \cdot \Delta \vec{p_1} , & \Delta v_1 &= \vec{B} \cdot \Delta \vec{p_1} , \\ \Delta u_2 &= \vec{T} \cdot \Delta \vec{p_2} , & \Delta v_2 &= \vec{B} \cdot \Delta \vec{p_2} , \\ 0 &= \vec{T} \cdot \Delta \vec{p_1} \times \Delta \vec{p_2} , & 0 &= \vec{B} \cdot \Delta \vec{p_1} \times \Delta \vec{p_2} , \end{align*}

where \times is the cross product. The first two rows follow from the definition, and the last row ensures that \vec{T} and \vec{B} have no component in the direction of the normal. The last row is needed because otherwise the problem would be underdetermined. It is then straightforward to express the solution in matrix form. For \vec{T},

    \[\vec{T} = \begin{pmatrix} \Delta \vec{p_1} \\ \Delta \vec{p_2} \\ \Delta \vec{p_1} \times \Delta \vec{p_2} \end{pmatrix}^{-1} \begin{pmatrix} \Delta u_1 \\ \Delta u_2 \\ 0 \end{pmatrix} ,\]

and analogously for \vec{B} with \Delta v.

Into the Shader Code

The above result looks daunting, as it calls for a matrix inverse in every pixel in order to compute the cotangent frame! However, many symmetries can be exploited to make that work almost disappear. Below is an example of a function written in GLSL to calculate the inverse of a 3×3 matrix. A similar function written in HLSL appeared in the book, and back then I tried to optimize the hell out of it. Forget that approach, as we are not going to need it at all. Just observe how the adjugate and the determinant can be made from cross products:

mat3 inverse3x3( mat3 M )
{
    // The original was written in HLSL, but this is GLSL,
    // therefore
    // - the array index selects columns, so M_t[0] is the
    //   first row of M, etc.
    // - the mat3 constructor assembles columns, so
    //   cross( M_t[1], M_t[2] ) becomes the first column
    //   of the adjugate, etc.
    // - for the determinant, it does not matter whether it is
    //   computed with M or with M_t; but using M_t makes it
    //   easier to follow the derivation in the text
    mat3 M_t = transpose( M );
    float det = dot( cross( M_t[0], M_t[1] ), M_t[2] );
    mat3 adjugate = mat3( cross( M_t[1], M_t[2] ),
                          cross( M_t[2], M_t[0] ),
                          cross( M_t[0], M_t[1] ) );
    return adjugate / det;
}

We can substitute the rows of the matrix from above into the code, then expand and simplify. This procedure results in a new expression for \vec{T}. The determinant becomes \left| \Delta \vec{p_1} \times \Delta \vec{p_2} \right|^2, and the adjugate can be written in terms of two new expressions, let's call them \Delta \vec{p_1}_\perp and \Delta \vec{p_2}_\perp (with \perp read as 'perp'), which becomes

    \[\vec{T} = \frac{1}{\left| \Delta \vec{p_1} \times \Delta \vec{p_2} \right|^2} \begin{pmatrix} \Delta \vec{p_2}_\perp \\ \Delta \vec{p_1}_\perp \\ \Delta \vec{p_1} \times \Delta \vec{p_2} \end{pmatrix}^\mathrm{T} \begin{pmatrix} \Delta u_1 \\ \Delta u_2 \\ 0 \end{pmatrix} ,\]

    \begin{align*} \Delta \vec{p_2}_\perp &= \Delta \vec{p_2} \times \left( \Delta \vec{p_1} \times \Delta \vec{p_2} \right) , \\ \Delta \vec{p_1}_\perp &= \left( \Delta \vec{p_1} \times \Delta \vec{p_2} \right) \times \Delta \vec{p_1} . \end{align*}

As you might have guessed, \Delta \vec{p_1}_\perp and \Delta \vec{p_2}_\perp are the perpendiculars to the triangle edges in the triangle plane. Say hello! They are, again, covectors and form a proper basis for cotangent space. To simplify things further, observe:

  • The last row of the matrix is irrelevant, since it is multiplied by zero.
  • The other matrix rows contain the perpendiculars (\Delta \vec{p_1}_\perp and \Delta \vec{p_2}_\perp), which after transposition just multiply the texture edge differences.
  • The perpendiculars can use the interpolated vertex normal \vec{N} instead of the face normal \Delta \vec{p_1} \times \Delta \vec{p_2}, which is simpler and looks even nicer.
  • The determinant (the expression \left| \Delta \vec{p_1} \times \Delta \vec{p_2} \right|^2) can be handled in a special way, which is explained below in the section about scale invariance.

Taken together, the optimized code is shown below, which is even simpler than the one I had originally published, and yet of higher quality:

mat3 cotangent_frame( vec3 N, vec3 p, vec2 uv )
{
    // get edge vectors of the pixel triangle
    vec3 dp1 = dFdx( p );
    vec3 dp2 = dFdy( p );
    vec2 duv1 = dFdx( uv );
    vec2 duv2 = dFdy( uv );
    // solve the linear system
    vec3 dp2perp = cross( dp2, N );
    vec3 dp1perp = cross( N, dp1 );
    vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;
    // construct a scale-invariant frame
    float invmax = inversesqrt( max( dot(T,T), dot(B,B) ) );
    return mat3( T * invmax, B * invmax, N );
}

Scale invariance

The determinant \left| \Delta \vec{p_1} \times \Delta \vec{p_2} \right|^2 was left over as a scale factor in the above expression. This has the consequence that the resulting cotangents \vec{T} and \vec{B} are not scale invariant, but vary inversely with the scale of the geometry. This is the natural consequence of them being gradients. If the scale of the geometry increases, and everything else is left unchanged, then the change of texture coordinate per unit change of position gets smaller, which reduces \vec{T} = \nabla u = \left( \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial u}{\partial z} \right) and similarly \vec{B} in relation to \vec{N}. The effect of all this is a diminished perturbation of the normal when the scale of the geometry is increased, as if a heightfield were stretched.

Obviously this behavior, while totally logical and correct, would limit the usefulness of normal maps applied to geometry of different scales. My solution was, and still is, to ignore the determinant and simply normalize \vec{T} and \vec{B} by whichever of them is largest, as seen in the code. This solution preserves the relative lengths of \vec{T} and \vec{B}, so that a skewed or stretched cotangent space is still handled correctly, while achieving overall scale invariance.

Non-perspective optimization

As the ultimate optimization, I also considered what happens when we can assume \Delta \vec{p_1} = \Delta \vec{p_2}_\perp and \Delta \vec{p_2} = \Delta \vec{p_1}_\perp. This means we have a right triangle and the perpendiculars fall on the triangle edges. In the pixel shader, this condition holds whenever the screen projection of the surface is free of perspective distortion. There is a nice figure demonstrating this fact in [4]. This optimization saves another two cross products, but in my opinion the quality suffers heavily should there actually be a perspective distortion.

Putting it together

To make the post complete, I'll show how the cotangent frame is actually used to perturb the interpolated vertex normal. The function perturb_normal does just that, using the backwards view vector in place of the vertex position (this is OK because only differences matter, and the eye position drops out of the difference, as it is constant).

vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
{
    // assume N, the interpolated vertex normal, and
    // V, the view vector (vertex to eye)
    vec3 map = texture2D( mapBump, texcoord ).xyz;
    map = map * 255./127. - 128./127.;
    map.z = sqrt( 1. - dot( map.xy, map.xy ) );
    map.y = -map.y;
    mat3 TBN = cotangent_frame( N, -V, texcoord );
    return normalize( TBN * map );
}

varying vec3 g_vertexnormal;
varying vec3 g_viewvector;  // camera pos - vertex pos
varying vec2 g_texcoord;

void main()
{
    vec3 N = normalize( g_vertexnormal );
    N = perturb_normal( N, g_viewvector, g_texcoord );
    // ...
}

The green axis

Both OpenGL and DirectX place the texture coordinate origin at the start of the image pixel data. The texture coordinate (0,0) is in the corner of the pixel where the image data pointer points to. Contrast this with most 3D modeling packages, which place the texture coordinate origin at the lower left corner in the uv-unwrap view. Unless the image format is bottom-up, this means the texture coordinate origin is in the corner of the first pixel of the last image row. Quite a difference!

An image search on Google reveals that there is no dominant convention for the green channel in normal maps. Some have green pointing up and some have green pointing down. My artists prefer green pointing up for two reasons: it is the format that 3ds Max expects for rendering, and it supposedly looks more natural with the 'green illumination from above', so this helps with eyeballing normal maps.

Sign Expansion

The sign expansion deserves a little elaboration, because I try to use signed texture formats whenever possible. With the unsigned format, the value ½ cannot be represented exactly (it falls between 127 and 128). The signed format does not have this problem, but in exchange has an ambiguous encoding for −1 (it can be either −127 or −128). If the hardware is incapable of signed texture formats, I want to be able to pass the texture in an unsigned format and emulate the exact sign expansion in the shader. This is the origin of the seemingly odd values in the sign expansion.

In Hindsight

The original article in ShaderX5 was written as a proof of concept. Although the algorithm was tested and worked, it was a little expensive for that time. Fast forward to today and the picture has changed. I am now employing this algorithm in real-life projects to great benefit. I no longer bother with tangents as vertex attributes and all the associated complexity. For example, I don't care whether the COLLADA exporters of Max or Maya (yes, I'm relying on COLLADA these days) output usable tangents for skinned meshes, nor do I bother to import them, because I don't need them! For the artists, it doesn't even occur to them that an aspect of the asset pipeline is missing, because it's all natural: there is geometry, there are texture coordinates and there is a normal map, and it just works.

Take Away

There are no 'tangent frames' when it comes to normal mapping. A tangent frame which includes the normal is logically ill-formed. All there are, are cotangent frames in disguise, as long as the frame is orthogonal. When the frame is not orthogonal, tangent frames stop working. Use cotangent frames instead.


[1] James Blinn, “Simulation of Wrinkled Surfaces”, SIGGRAPH 1978

[2] Mark Peercy, John Airey, Brian Cabral, “Efficient Bump Mapping Hardware”, SIGGRAPH 1997

[3] Mark J. Kilgard, “A Practical and Robust Bump-mapping Technique for Today’s GPUs”, GDC 2000

[4] Christian Schüler, “Normal Mapping without Precomputed Tangents”, ShaderX 5, Chapter 2.6, pp. 131–140

[5] Colin Barré-Brisebois and Stephen Hill, “Blending in Detail”

111 thoughts on “Followup: Normal Mapping Without Precomputed Tangents”

  1. Excellent! Thank you so much for taking the time to answer my questions. I had started to suspect that the inverse-transpose of the Pu|Pv|N matrix was a roundabout way of computing what you compute directly, and I’m very glad to hear that’s the case. I still have some learning to do to comfortably manipulate covectors, but it helps a great deal that the end result is something I recognize… and understanding covectors more thoroughly will finally take the “black magic” out of why the inverse-transpose of Pu|Pv|N also works (much less efficiently, of course).

  2. Pingback: Spellforce 2 Demons of the Past | The Tenth Planet

  3. Pingback: Decals (deferred rendering) | IceFall Games

  4. Pingback: Balancing | Spellcaster Studios

  5. Hi there! I’m having trouble using the functions dFdx() and dFdy()… I tried adding this line:
    #extension GL_OES_standard_derivatives : enable
    in my shader, but it gives me the message that the extension isn’t supported. Do you have any idea how I’m supposed to do this?

  6. Hi Christian,
    I’m researching run-time generated TBN matrix topics recently, and I found your article very interesting. I’m wondering, if I want to integrate your GLSL shader with my code, which is written in DirectX/HLSL, should I use −N instead in the final result of the cotangent frame matrix (to deal with the right-handed and left-handed issue)? If this is not the solution, what should I do? Thanks.

    • Hi Sherry,
      the one gotcha you need to be aware of is that HLSL’s ddy has a different sign than dFdy, due to OpenGL’s window coordinates being bottom-to-top (when not rendering into an FBO, that is). Other than that, it is just syntactical code conversion. For testing, you can substitute N with the face normal generated by crossing dp1 and dp2; that must work in every case, whatever sign convention the screen space derivatives have.

  7. Hi Christian!
    First, thanks for an explanatory article, it’s great!
    I still have a couple of questions. Let’s list some facts:
    a) You said in response to Michael that transpose(TBN) can be used to transform e.g. the V vector from world space to tangent space.
    b) dFdx and dFdy, and hence dp1/2 and duv1/2, are constant over a triangle.
    Based on these facts, can your TBN be computed in a geometry shader on a per-triangle basis, and its transpose used to transform V, L, Blinn’s half vector, etc. to tangent space, in order to do “classic” lighting in the pixel shader?
    I’ve found something similar in:
    but there is nothing about a common tangent-basis calculation in the texture baking tool and the shader. The latter is necessary, since substituting the normal map with a flat normal map (to get simple lighting) produces a faceted look, as in Kilgard’s approach. The second question: do you have a plugin for xNormal which can compute a correct per-triangle tangent basis?

    • Hi Andrzej,
      yes, you can (and I have) do the same calculations in the geometry shader. The cotangent basis is always computed from a triangle. In the pixel shader, the ‘triangle’ is implicitly spanned between the current pixel and neighboring pixels. In the geometry shader, you use the actual triangle, i.e., dFdx and dFdy are substituted with the actual edge differences. The rest will be identical.
      You can then pass TBN down to the pixel shader, or use it to transform other quantities and pass these.

      Faceting: As I said before, the T and B vectors will be faceted, but the N vector can use the interpolated normal to give a smoother look. In practice, if the UV mapping does not stray too far from the square patch assumption, the faceting will be unnoticeable.

  8. Thank you very much, Christian!

    I implemented the technique and wrote an xNormal plugin, yet when transforming the generated tangent-space normal map back to object space (by the xNormal tool), I didn’t get the expected result. I’ll try to make the per-triangle TBN computation order-independent and check the code.

  9. Really nice post, I’m working on some terrain at the moment for a project in my spare time. I came across your site yesterday while looking for an in-shader bump mapping technique and implemented it right away.
    I really like the results, and performance also seems good… but I was wondering about applying such techniques to large scenes. I don’t need the shader applied to certain regions (black/very dark places), but my understanding is that it will be applied to every pixel on the terrain regardless of whether I want it lit or not.
    This is a bit off-topic, so apologies for that, but in terms of optimization, if applied to a black region, would GLSL avoid processing that, or do you have any recommendations for performance tweaking, conditionals/branching etc.? I understand a stencil buffer could assist here, but these types of optimizations are really interesting; it could make a good post in its own right.
    Thanks again for your insightful article!

    • Hi Cormac,
      since the work is done per pixel, the performance doesn’t depend on the size of the scene, but only on the number of rendered pixels. This is a nice property called “output sensitivity”, i.e. the performance depends on the size of the output, not the size of the input. For large scenes, you want all rendering to be output sensitive, so you do occlusion culling and the like.
      Branching optimizations only pay off if the amount of work that is avoided is large, for instance when skipping over a large number of shadow map samples. In my experience, the tangent space calculation described here (essentially on the order of 10 to 20 shader instructions) is not worth the cost of a branch. But if in doubt, just profile it!

  10. Thanks Christian!!
    The performance is actually surprisingly good. The card I’m running on is quite old and the FPS drop is not significant, which is impressive.
    It’s funny, when I applied it to my scene I started getting some strange artifacts in the normals. I didn’t have time to dig into it yet; perhaps my axes are mixed up or something, I’ll check it again tonight.

    • If you mean the seams that appear at triangle borders, these are related to the fact that dFdx/dFdy of the texture coordinate is constant across a triangle, so there will be a discontinuous change in the two tangent vectors when the direction of the texture mapping changes. This is expected behavior, especially if the texture mapping is sheared/stretched, as is usually the case on a procedural terrain. The lighting will be/should be correct.

  11. Hi Christian!
    I thought about seams and lighting discontinuities and have another couple of questions in this area. But first some facts.
    1. When we consider a texture with normals in object space, there are almost no seams; distortions are mostly related to inexact interpolation over an edge in a mesh whose sides are unconnected in the texture (e.g. due to different lengths, angles, etc.)
    2. No matter how a tangent space is defined (per-pixel with T, B and N interpolated over the triangle, with only N interpolated, or even a constant TBN over the triangle), tangent-space normals should always decode to their object-space counterparts.

    Why isn’t this the case in your method? I think it’s because triangles adjacent in a mesh are also adjacent in texture space. Then, when rendering an edge, you read the normal from a texture, but you may ‘decode’ it with a wrong tangent-space basis, namely that of the triangle on the other side (with p=0.5). Things get worse with a linear texture sampler applied instead of a point sampler. Note how the trick with ‘the same’ tangent space on both sides of the edge works well in this situation.

    So, the question: shouldn’t all triangles in a mesh be unconnected in texture space for your method to work? (It shouldn’t be a problem to code an additional ‘texture breaker’ for a tool chain.)

    • Hi Andrzej,
      have a look at the very first posts in this thread. There is a discussion about the difference between “painted normal maps” and “baked normal maps”.

      For painted normal maps, the method is, in principle, correct. The lighting is always exactly faithful to the implied height map, the slope of which may change abruptly at a triangle border, depending on the UV mapping. That’s just how things are.

      For baked normal maps, where you want a result that looks the same as high-poly geometry, the baking procedure would have to use the same TBN that is generated in the pixel shader, in order to match perfectly.
      This can be done; however, due to texture interpolation, the discontinuity can only be approximated down to the texel level. So there would be a slight mismatch within the texel that straddles the triangle boundary. If you want to eliminate even that, you need to pay 3 times the number of vertices to make the UV atlas for each triangle separate.

      In my practice, I really don’t care. We have been using what everyone else does: xNormal, CrazyBump, Substance B2M, etc., and I have yet to receive a single ‘complaint’ from artists. :)

  12. Somehow I forgot that normal maps can also be painted :-)
    Maybe separating triangles in the texture domain is unnecessary in practice, but as I am teaching about normal mapping, I want to know every aspect of it.

    Thanks a lot!

  13. Christian, please help me with another issue. In many NM tutorials around the net, lighting is done in tangent space, i.e., the light and Blinn’s half vectors are transformed into tangent space at each vertex, to be interpolated over a triangle. Considering each TBN forms an orthonormal basis, why does this work only for L and not for H? It gives me strange artifacts, while interpolating the ‘plain’ H and transforming it at each fragment by the interpolated & normalized TBN works well (for simplicity, I consider world space \equiv object space).

    • Hi Andrzej,
      there are some earlier comments on this issue. One of the salient points of the article is that the TBN frame is, in general, not orthonormal. Therefore, dot products are not preserved across transforms.

  14. Hi Christian,
    Thanks! But in the main case I do have an orthonormal basis at each vertex (N from the model, T computed by MikkTSpace and orthogonalized by the Gram-Schmidt method, B as a cross product). According to many tutorials this should work, yet in [2] they say this can be considered an approximation only. Who is right then? I don’t know if there is a bug in my code or if the triangles in the low-poly model are too big for such an approximation.

  15. Pingback: mathematics – Tangent space – Lengyel vs ShaderX5 – difference | Asking

  16. Hi Christian,
    Very fascinating article, and it looks to be exactly what I was looking for. Unfortunately, I can’t seem to get this working in HLSL: I must be doing something really stupid, but I can’t figure it out. I’m doing my lighting calculations in view space, but having read through the comments, it shouldn’t affect the calculations regardless?

    Here’s the “port” of your code:


    • Hi Peter,
      when translating from GLSL to HLSL you must observe two things:
      1. Matrix order (row- vs column-major). The mat3(…) constructor in GLSL assembles the matrix column-wise.
      2. dFdy and ddy have different signs, because the coordinate system in GL is bottom-to-top.

  17. Pingback: WebGL techniques for normal mapping on morph targets? – Blog 5 Star

  18. Pingback: Rendering Terrain Part 20 – Normal and Displacement Mapping – The Demon Throne

  19. Hi, I’d like to ask if this method can be used with the Ward anisotropic lighting equation, which uses the tangent and binormal directly.

    I found the result to be faceted, because the tangent and binormal are faceted. Does that mean I can’t use your method in this situation? Sorry about my bad English.

    • Hey Maval,

      I just came across this post while investigating computing our tangents in the fragment shader.

      Since T and B are faceted with this technique, it cannot be used with an anisotropic BRDF, as long as the anisotropy depends on the tangent frame.

      I also do not think it is possible to correct for this without additional information that is non-local to the triangle.


    • Hi Maval,
      the faceting of the tangents is not very noticeable in practice, so I would just give it a try and see what happens. It all depends on how strongly the tangents curve within the specific UV mapping at hand.

  20. Wow, thank you very much for this. I was supposed to implement normal mapping in my PBR renderer but couldn't get over the fact that the tangents and bi-tangents were a huge pain to transfer to the vertex shader (with much data duplicated). This is so much better, thank you :D.

  21. Hi Christian. I was trying to produce similar formulas in a different way, and somehow the result is different and does not work. As a challenge for myself I am trying to find the error in my approach, but I cannot.

    As the basis of my approach to deduce the gradient du/dp, I assume that both the texture coordinate u and the world-space point p on the triangle are functions of the screen coordinates (sx, sy). So to differentiate du(p(sx, sy))/dp(sx, sy) I use the chain rule for partial derivatives:
    du/dp = du/dsx * dsx/dp + du/dsy * dsy/dp. From here, (du/dsx, du/dsy) is basically (duv1.x, duv2.x) in your code.

    To compute dsx/dp, I try to invert the derivatives: dsx/dp = 1 / (dp/dsx), which would be (1 / dFdx(p.x), 1 / dFdx(p.y), 1 / dFdx(p.z)), but somehow that does not work: the result differs from dp2perp in your code. Can you clarify why?

    • Your terms ds_x/dp and ds_y/dp are vector-valued and together form a 2-by-3 Jacobian matrix, which contains the contribution of each component of the world-space position to a change in screen coordinate.

      To get the inverse, you would need to invert that matrix, but you cannot, because it is not square: the problem is underdetermined. Augment the Jacobian with a virtual third screen coordinate (the screen z, or depth, coordinate) ds_z/dp and equate its change to zero, because it must stay constant for the solution to lie in the screen plane. Et voilà, you have arrived at an exactly equivalent formulation of the problem described in the section "solution to the co-tangent frame".
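      The difference between a component-wise reciprocal and a proper matrix inverse is easy to check numerically. A minimal Python sketch (the Jacobian entries below are made up for illustration):

```python
# Minimal sketch (hypothetical numbers): recovering dp/dsx requires
# inverting the augmented 3x3 Jacobian; the component-wise reciprocal
# of the row dsx/dp is NOT that inverse.

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def mat3_inverse(m):
    a, b, c = m                                      # rows of m
    cols = [cross(b, c), cross(c, a), cross(a, b)]   # columns of the adjugate
    det = dot(a, cols[0])
    return [[cols[j][i] / det for j in range(3)] for i in range(3)]

# Rows: dsx/dp, dsy/dp, and an augmented third row for the screen-depth
# direction, along which the solution is constrained not to move.
J = [[2.0,  1.0,  0.5],
     [0.5,  1.0,  0.25],
     [0.25, 0.5,  1.0]]

Jinv = mat3_inverse(J)
dp_dsx = [row[0] for row in Jinv]    # first column of the inverse: dp/dsx
naive  = [1.0 / x for x in J[0]]     # component-wise reciprocal of dsx/dp
# dp_dsx and naive disagree: only the matrix inverse is correct
```

      This is why 1 / dFdx(p) cannot reproduce dp2perp: each component of dp/dsx depends on all entries of the Jacobian, not just its own reciprocal.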

  22. Pingback: Followup: Normal Mapping Without Precomputed Tangents via @erkaman2 | hanecci's blog : はねっちブログ

  23. In my library, I have a debug shader that lets me visualize the normal, tangent, and (reconstructed) bi-tangent in the traditional "precomputed per-vertex tangents" scenario.

    I'm trying to work out what the equivalent of the per-vertex tangent is with a calculated TBN. The second and third row vectors don't seem to match the original tangent/bi-tangent very closely.

    • Sorry, make that the first and second row vectors. The third row vector is obviously the original per-vertex normal.

    • Hi Chuck,
      to get something that is like the traditional per-vertex tangent, you would have to take the first two columns of the inverted matrix.

      The tangents and the co-tangents each answer a different question. The tangent answers: what is the change in position for a change in uv? The co-tangents are normals to the planes of constant u and constant v, so they answer: which direction is perpendicular to keeping u (or v) constant? The two are equivalent as long as the tangent frame is orthogonal, but they diverge when that is not the case.
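      This divergence can be demonstrated with a small Python sketch (the edge vectors and UV deltas below are made-up triangle data, not from the article): the classical tangent dp/du and the co-tangent, i.e. the in-plane gradient of u, coincide for an orthogonal UV mapping but differ for a sheared one.

```python
# Sketch (hypothetical triangle data): classical tangent vs. co-tangent.
# dp1, dp2 are world-space edge vectors; duv1, duv2 the matching UV deltas.

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def tangent(dp1, dp2, duv1, duv2):
    # classical tangent: solve [dp1; dp2] = [duv1; duv2] * [T; B] for T
    det = duv1[0]*duv2[1] - duv1[1]*duv2[0]
    return [(duv2[1]*a - duv1[1]*b) / det for a, b in zip(dp1, dp2)]

def cotangent(dp1, dp2, duv1, duv2):
    # co-tangent: gradient of u over the triangle, perpendicular to
    # the directions in which u stays constant
    N = cross(dp1, dp2)
    return [a*duv1[0] + b*duv2[0]
            for a, b in zip(cross(dp2, N), cross(N, dp1))]

duv1, duv2 = (1.0, 0.0), (0.0, 1.0)
# sheared edges: the second world-space edge leans into the first
T_t = tangent([1, 0, 0], [1, 1, 0], duv1, duv2)    # -> [1.0, 0.0, 0.0]
T_c = cotangent([1, 0, 0], [1, 1, 0], duv1, duv2)  # -> [1.0, -1.0, 0.0]
```

      With orthogonal edges both functions return the same direction; with the sheared edges above, the co-tangent stays perpendicular to the constant-u edge [1, 1, 0] while the classical tangent does not.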
