Elite Dangerous: Impressions of Deep Space Rendering

I am a backer of the upcoming Elite Dangerous game and have participated in their premium beta programme from the beginning, thoroughly enjoying what was there at that early stage. ‘Premium beta’ sounds like an oxymoron, paying a premium for an unfinished game, but it is nothing more than purchasing the same backer status as that from the Kickstarter campaign.

I first came into contact with the original Elite during Christmas in 1985. Compared with the progress I made back then in just two days, my recent performance in ED is lousy; I think my combat rating now would be ‘competent’.

But this will not be a gameplay review; instead I’m going to share thoughts that were inspired while playing ED, mostly about graphics and shading, things like dynamic range, surface materials, phase curves, ‘real’ photometry, and so on. So, after I loaded the game and jumped through hyperspace for the first time (actually the second time), I was greeted by this screen-filling disk of hot plasma:

[Screenshot ED001: a screen-filling disk of hot plasma]

Continue reading

X‑Plane announces Physically Based Rendering

I always wondered when X-Plane would jump on the PBR bandwagon. I like X-Plane; I think it’s the best actively-developed (*) flight simulator out there, but I always felt that the shading could be better. For instance, there is this unrealistic ‘Lambert-shaded’ world terrain texture, which becomes too dark at sunset; another is the dreaded ‘constant ambient color’ that plagues the shading of objects.

Now, in this post on the X-Plane developer blog, Ben announces that Physically Based Rendering is a future development goal, yay! He then goes on to say that, while surface shading will be a solved problem™ thanks to PBR, other areas like participating media (clouds, atmosphere) would still need magic tricks for the foreseeable future. Continue reading

Journey into the Zone (Plates)

I have experimented recently with zone plates, which are the 2-D equivalent of a chirp. Zone plates make for excellent test images to detect deficiencies in image processing algorithms or in display and camera calibration. They have interesting properties: each point on a zone plate corresponds to a unique instantaneous wave vector, and, like a Gaussian, a zone plate is its own Fourier transform. A quick image search (Google, Bing) turns up many results, but I found all of them more or less unusable, so I made my own.

Zone Plates Done Right

I made the following two 256×256 zone plates, which I am releasing into the public domain so they can be used by anyone freely.

[Image: Cosine zone plate with contrast weighting (CC0)]

[Image: Sine zone plate with contrast weighting (CC0)]
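
For anyone who wants to roll their own, here is a minimal NumPy sketch of how a cosine zone plate of this kind can be generated. It is not the exact construction used for the plates above; in particular, the linear contrast fall-off towards the Nyquist ring is just one plausible interpretation of ‘contrast weighting’.

```python
import numpy as np

def cosine_zone_plate(n=256):
    """Cosine zone plate: the instantaneous frequency grows linearly with
    the distance from the center and reaches Nyquist (pi rad/pixel) at r = n/2."""
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    r2 = x * x + y * y
    phase = np.pi * r2 / n                 # d(phase)/dr = 2*pi*r/n
    # Hypothetical contrast weighting: fade the contrast as the local
    # frequency approaches Nyquist, so the outermost rings alias less harshly.
    freq = 2.0 * np.pi * np.sqrt(r2) / n   # local frequency in rad/pixel
    contrast = np.clip(1.0 - freq / np.pi, 0.0, 1.0)
    return 0.5 + 0.5 * contrast * np.cos(phase)

if __name__ == "__main__":
    img = cosine_zone_plate(256)
    import imageio.v3 as iio               # any image writer will do
    iio.imwrite("zone_plate.png", np.round(img * 255).astype(np.uint8))
```

Because the phase grows quadratically with the radius, the local frequency grows linearly and hits Nyquist exactly at half the image width, which is what makes such an image sweep through every representable spatial frequency and direction.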

Continue reading

Ego mecum conjungi …

… Twitter!

[Image: Twitter logo]

So on a whim I just embarrassed myself and tried to write in (probably wrong) Latin that I joined Twitter. You can follow me at @aries_code.

If you wonder how this came about, this was my train of thought:

  • Twitter has something to do with birds
  • Birds have fancy Latin species names
  • The species name for the sparrow is Passer domesticus
  • This doesn’t sound too fancy …
  • How do you say ‘I joined Twitter’ in Latin anyway?

But then I discovered that I am onto something: according to one argument, the brand name of Twitter should have been ‘Titiatio’, had it existed in antiquity. And according to another argument, Latin should be an ideal Twitter language, because it is both short and expressive.

But I digress. If you are into computer graphics, then you know of Johann Heinrich Lambert, the eponym of our beloved Lambertian reflectance law. The book in which he established this law, Photometria, is written entirely in Latin; now this is hardcore!

So, now you know what to do if you want to stand out in your next SIGGRAPH paper …

Yes, sRGB is like µ‑law encoding

I vaguely remember someone making a comment in a discussion about sRGB that ran along the lines of

So then, is sRGB like µ‑law encoding?

This comment was not about the color space itself but about the specific pixel formats nowadays branded as ‘sRGB’. In this case, the answer should be yes. And while the technical details are not exactly the same, that analogy with the µ-law very much nails it.

When you think of sRGB pixel formats as nothing but a special encoding, it becomes clear that using such a format does not automatically make you "very picky about color reproduction". This assumption was used by hardware vendors to rationalize the decision to limit support for sRGB pixel formats to 8-bit precision, because people "would never want" sRGB support for anything less. Not true! I’m going to make a case for this later. But first things first.
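
To make the analogy concrete, here is a small Python sketch of the two transfer functions side by side, using the standard sRGB encoding constants and the µ = 255 telephony variant of the µ-law. Both are nonlinear encodings that spend more code values on small input amplitudes, which is precisely what makes 8 bits stretch further than a linear encoding would.

```python
import numpy as np

def srgb_encode(c):
    """Linear light in 0..1 -> sRGB-encoded value in 0..1 (IEC 61966-2-1)."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)

def mu_law_encode(x, mu=255.0):
    """mu-law companding of a signal in -1..1 (G.711-style, continuous form)."""
    x = np.asarray(x, dtype=np.float64)
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

if __name__ == "__main__":
    # Both curves spend most of the output range on small inputs:
    for v in (0.001, 0.01, 0.1, 0.5, 1.0):
        print(f"{v:5.3f}   sRGB -> {float(srgb_encode(v)):.3f}"
              f"   mu-law -> {float(mu_law_encode(v)):.3f}")
```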

Continue reading

Spellforce 2 Demons of the Past

Spellforce 2 was released in 2006 and will be 8 years old by April. Nevertheless, the third add-on of the series shipped a month ago. Talk about a long seller!

Of course I’m attached to SF2 because I wrote many parts of its engine back then. This time I was briefly involved to help the developers include my attribute-less normal map algorithm. The original SF2 did not have any normal maps, and therefore none of the original art assets comes with tangent space information. This is an ideal scenario to pimp up the visuals without touching the geometry, simply by making a shader change and adding normal maps. Continue reading

Slides of my FMX 2013 presentation on Physically Based Shading

EDIT 2019: I have converted the original slides to PDF format and also made minor corrections. See this post for details. The download is at the end of this page.

I was kindly invited by Wolfgang from Confetti FX to speak at the FMX 2013 conference about physically based shading (within the scope of the Real Time Rendering day). Since I remembered FMX as a conference for visual arts, I made the presentation intentionally non-technical, for fear of alienating the listeners. In retrospect, my guess was a bit too conservative, as there were quite a number of programmers in the audience.


Nevertheless, here are the slides for download (with all notes included). The Keynote format is the original; the PowerPoint format was exported from that and is a little broken, so you should use the Keynote version if you can read it.

Download: "FMX 2013 Slides PDF with Notes" (fmx-11-revised.pdf, 15 MB)

Followup: Normal Mapping Without Precomputed Tangents

This post is a follow-up to my 2006 ShaderX5 article [4] about normal mapping without a pre-computed tangent basis. In the time since then I have refined this technique with lessons learned in real life. For those unfamiliar with the topic, the motivation was to construct the tangent frame on the fly in the pixel shader, which ironically is the exact opposite of the motivation from [2]:

Since it is not 1997 anymore, doing the tangent space on-the-fly has some potential benefits, such as reduced complexity of asset tools, per-vertex bandwidth and storage, attribute interpolators, transform work for skinned meshes and, last but not least, the possibility to apply normal maps to any procedurally generated texture coordinates or non-linear deformations. Continue reading
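
To give a flavor of what the article is about, here is the general idea written out in NumPy rather than shader code: derive a tangent frame directly from position and UV derivatives (in a pixel shader these would come from dFdx/dFdy; here two triangle edges serve the same purpose). This is only an illustration of the underlying math, not necessarily the exact formulation from the article.

```python
import numpy as np

def cotangent_frame(n, dp1, dp2, duv1, duv2):
    """Derive a tangent frame (T, B, N) from position derivatives dp1, dp2
    and the matching UV derivatives duv1, duv2, without any precomputed
    per-vertex tangents."""
    # Directions perpendicular to the other position derivative, within the
    # plane defined by the normal n.
    dp2perp = np.cross(dp2, n)
    dp1perp = np.cross(n, dp1)
    # Combine so that T points along increasing u and B along increasing v.
    t = dp2perp * duv1[0] + dp1perp * duv2[0]
    b = dp2perp * duv1[1] + dp1perp * duv2[1]
    # Rescale so the longer of T/B has unit length (preserves their ratio).
    inv_max = 1.0 / np.sqrt(max(t @ t, b @ b))
    return t * inv_max, b * inv_max, n

if __name__ == "__main__":
    # A single triangle with positions and texture coordinates.
    p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
    uv0, uv1, uv2 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
    normal = np.array([0., 0., 1.])
    t, b, n = cotangent_frame(normal, p1 - p0, p2 - p0, uv1 - uv0, uv2 - uv0)
    print("T =", t, " B =", b, " N =", n)
```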