Yes, sRGB is like µ-law encoding

I vaguely remember someone making a comment in a discussion about sRGB that ran along the lines of

So then, is sRGB like µ-law encoding?

This comment was not about the color space itself but about the specific pixel formats nowadays branded as ‘sRGB’. In this case, the answer should be yes. And while the technical details are not exactly the same, the analogy with µ-law very much nails it.

When you think of sRGB pixel formats as nothing but a special encoding, it becomes clear that using such a format does not automatically make you “very picky about color reproduction”. This assumption was used by hardware vendors to rationalize the decision to limit the support of sRGB pixel formats to 8-bit precision, because people “would never want” sRGB support for anything less. Not true! I’m going to make a case for this later. But first things first.

µ-law encoding

In the audio world, µ-law encoding is a way to encode individual sample values with 8 bits while the dynamic range of the represented value is around 12 bits. It is the default encoding used in the .au file format. The following image shows the encoded sample value for each code point from 0 to 255.
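As a rough sketch of the idea (the real G.711 codec uses a piecewise-linear approximation of this curve, and the function names here are my own), the continuous µ-law companding functions look like this:

```python
import math

MU = 255  # µ = 255 in North American/Japanese telephony (ITU-T G.711)

def mulaw_compress(x: float) -> float:
    """Compress a linear sample in [-1, 1] with the continuous µ-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y: float) -> float:
    """Invert the compression: map a value in [-1, 1] back to a linear sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def encode_8bit(x: float) -> int:
    """Quantize the *compressed* value to 8 bits.

    This spends most code points on small amplitudes, where hearing
    is most sensitive -- that is the whole point of the scheme.
    """
    return round((mulaw_compress(x) + 1.0) / 2.0 * 255)

def decode_8bit(code: int) -> float:
    return mulaw_expand(code / 255 * 2.0 - 1.0)
```

Note how a small sample like 0.01 survives the 8-bit round trip with an error well under one part in a thousand, which a linear 8-bit encoding (step size 1/127 for signed samples) could never do.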


It is worth noting that the technique was originally invented in the analog world to reduce transmission noise. A logarithmic amplifier would non-linearly compress the signal before transmission. Any noise introduced during transmission would then be attenuated at the receiving station when the signal is expanded to its original form. The gist of this method is to distribute the noise in a perceptually uniform way. Without the compression and expansion, the noise would seem to disproportionally affect the low levels. A similar noise-reduction effect is achieved in the digital domain when using a non-linear encoding like µ-law, but this time it is the apparent quantization noise that is reduced.

Gamma and sRGB

Back in the visual world we have the concept of display gamma, which was developed for television in the 1930s for entirely the same reasons [1]. The broadcaster would gamma-compress the signal before it is aired and the receiver gamma-expands it back. The triode characteristic of the electron gun was used to implement an approximate 2 to 2.5 power law that follows the curve of human perception (as you can see in the figure below, the implementation of this power law in a CRT is very approximate!). A ‘brightness’ knob on the television set allows the viewer to fine-tune the effective exponent of this relation to the situation at hand. As with audio over telephone lines, the goal was to distribute the noise in a perceptually uniform way.


Triode characteristic for different grid voltages.
CC-BY-SA from Wikipedia

And then it happened: people started plugging television sets into computers.

But the poor TV has no idea that it is now connected to a computer and faithfully gamma-expands all the RGB values as if they were still being sent from a broadcaster. This implicit gamma-expansion has been a reality for decades and has shaped how people expect RGB colors to look. Then in 1995, at the dawn of the web, Microsoft and HP gave a name to this mess and called it “standard RGB”, which is nothing more than a euphemism for “how your average uncalibrated monitor displays your RGB values”.

It should be clear that the advantage of having a display gamma in the digital domain is similar to the advantage of µ-law encoding for audio: a large dynamic range can be encoded with a lower number of bits.

How the ARB missed an important point

Modern graphics hardware usually has the ability to do gamma-expansion as part of the texture fetch process. This feature is enabled by selecting an appropriate pixel format for the texture, like GL_SRGB8_ALPHA8 or DXGI_FORMAT_R8G8B8A8_UNORM_SRGB. This makes sure the pixel shader sees the original color values, not the gamma-compressed ones. But note what the OpenGL sRGB extension has to say about this matter:

6)  Should all component sizes be supported for sRGB components or
    just 8-bit?

    RESOLVED:  Just 8-bit.  For sRGB values with more than 8 bit of
    precision, a linear representation may be easier to work with
    and adequately represent dim values.  Storing 5-bit and 6-bit
    values in sRGB form is unnecessary because applications
    sophisticated enough to sRGB to maintain color precision will
    demand at least 8-bit precision for sRGB values.

    Because hardware tables are required sRGB conversions, it doesn't
    make sense to burden hardware with conversions that are unlikely
    when 8-bit is the norm for sRGB values.

If this was meant as a subtle form of flattery, it doesn’t get to me. Yes, I am sophisticated enough for sRGB in my applications, but no, I do not demand at least 8-bit precision in all cases. Quite to the contrary: while the motivation for the hardware guys is understandable, I would make it mandatory to have sRGB support for 4-bit and 5-bit precisions, so that dynamic textures can save bandwidth while at the same time benefiting from the conversion hardware.

There was a brief window in time when 5-bit sRGB support was available in DirectX 9 hardware like the R300, until about late 2005. Then, suddenly and silently, sRGB support for anything but 8-bit precision was removed in newer hardware, and was also disabled for existing hardware in newer drivers. What the …? Yes, exactly, that’s what I thought. At that time we were in the final stages of Spellforce 2 development, and breaking changes like these are of course always welcome.

The story about dynamically baked 5:6:5 sRGB terrain textures

Spellforce 2 is one of the few games from 2006 to implement a linear lighting model via the aforementioned hardware features. The colors from any textures are gamma-expanded prior to entering the lighting equations in the shader, and then gamma-compressed again before they are written out to the frame buffer. This is the standard linear lighting process.


The Spellforce 2 terrain system could dynamically bake and cache distant tiles into low resolution 16-bpp textures (RGB 5:6:5). It was absolutely crucial for performance to have such a cache, but the cache was taking a significant chunk of VRAM. As with any textures, the terrain textures had to be gamma-expanded prior to entering the lighting equations. In DirectX 9, this conversion was enabled via a “toggle switch” (D3DSAMP_SRGBTEXTURE), as opposed to the more modern concept of being a separate pixel format. You could switch it on for any bit depth (4-bit, 5-bit, 8-bit, didn’t matter). That is, it was possible until the hardware vendors changed their mind.
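For reference, a 5:6:5 texel packs into 16 bits roughly like this (an illustrative sketch with names of my own; actual hardware and bakers may round rather than truncate on pack):

```python
def pack_565(r: int, g: int, b: int) -> int:
    """Pack 8-bit channels into a 16-bit RGB 5:6:5 word (truncating)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_565(word: int) -> tuple[int, int, int]:
    """Expand back to 8 bits per channel by bit replication."""
    r5 = (word >> 11) & 0x1F
    g6 = (word >> 5) & 0x3F
    b5 = word & 0x1F
    return ((r5 << 3) | (r5 >> 2),
            (g6 << 2) | (g6 >> 4),
            (b5 << 3) | (b5 >> 2))
```

Half the memory of an 8-bit format, and with the hardware doing the sRGB expansion at fetch time, the coarse 5-bit steps land where perception is least sensitive.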

This change forced me to either a) do arithmetic gamma-expansion in the shader or b) use 8-bit precision for the terrain tile cache. Since b) was not an option, I implemented a) via a cheap squaring (multiplying the color with itself). This trick effectively implements a gamma 2.0 expansion curve, which is different from the gamma 2.2 expansion curve that is built into the hardware tables. Therefore, the colors of distant terrain tiles never exactly match their close-up counterparts.
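The mismatch between the cheap squaring and the hardware curve can be quantified in a few lines (a sketch; `expand_srgb` stands in for the hardware table):

```python
def expand_cheap(c: float) -> float:
    """Shader-style gamma 2.0 expansion: a single multiply."""
    return c * c

def expand_srgb(c: float) -> float:
    """Reference sRGB expansion for comparison (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Worst-case mismatch across all 8-bit encoded values:
worst = max(abs(expand_cheap(i / 255) - expand_srgb(i / 255))
            for i in range(256))
```

The worst case lands in the midtones, at roughly 0.04 in linear units, small enough to get away with for distant terrain, but large enough that the tiles never match exactly.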

You can say we were lucky that we didn’t have to make such a change after Spellforce 2 was released!

Conclusion and Recommendation

The conclusion of this article is: yes, sRGB texture formats are like µ-law encoding. They are about encoding and nothing but encoding. They are not about gamma correction, because using sRGB textures doesn’t “correct” anything except to undo one very specific gamma-compression curve. Using the term “correction” implies being nit-picky about that last bit of color precision, which isn’t the matter here. The matter is a difference in encoding. And under no circumstances are users “sophisticated enough for sRGB in their apps” so “picky about color reproduction” that they would “never want” anything less than 8-bit precision (emphasis mine)!

In the light of mobile GPUs and the never-ending quest to reduce memory traffic, it would be very useful to have automatic sRGB conversion for lower bit depths. Imagine things like dynamic cube maps or—terrain tile caches 😉

EDIT: I wouldn’t care if the conversion from lower bit depths was implemented in terms of first expanding to 8 bits and then using the 8-bit LUT. Precision-wise, I would regard these as one and the same, so there is no need for additional 4-bit and 5-bit tables. Circuitry for such a process has to be built into hardware anyway if S3TC/DXT texture compression formats are used together with sRGB conversion, and could be reused for uncompressed low bit depth formats.
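The expand-then-LUT scheme described above can be sketched as follows (`srgb_decode` is the standard IEC 61966-2-1 expansion; bit replication is one common way to widen 5 bits to 8, and the names are mine):

```python
def srgb_decode(encoded: float) -> float:
    """Standard sRGB expansion to linear (IEC 61966-2-1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# The one table hardware already needs: 8-bit sRGB code -> linear.
LUT8 = [srgb_decode(i / 255) for i in range(256)]

def decode_5bit(v5: int) -> float:
    """Widen 5 bits to 8 by bit replication, then reuse the 8-bit LUT."""
    v8 = (v5 << 3) | (v5 >> 2)
    return LUT8[v8]
```

No separate 5-bit table is needed: the 5-bit codes simply land on 32 of the 256 existing LUT entries, with 0 mapping to 0.0 and 31 mapping to 1.0.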

[1] James A. Moyer, 1936, “Radio Receiving and Television Tubes”
