I just got the news about the OpenGL 4.3 spec, which was released today and is available at http://www.opengl.org/registry/. The spec document has been reorganized and cleaned up considerably, and it is a lot easier to follow than the previous specifications. New features include (ordered by importance for my projects):
- Queries for internal texture format parameters
- Debug output callbacks
- Compute shaders
- Texture views
- and others
I’m currently on a project where compatibility and scalability are prime concerns, so the first two features are very welcome as development aids to make the code run robustly on a variety of platforms. Compute shaders and texture views are of course cool, but they require the newest hardware, so they are lower on my list.
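To illustrate why those first two features help with robustness, here is a minimal sketch of how they might be wired up at startup. The GL entry points and enums are the real 4.3 ones (from ARB_internalformat_query2 and KHR_debug); the surrounding setup, the fallback logic and the format choice are just illustrative assumptions.

```cpp
// Sketch: GL 4.3 development aids. Assumes a 4.3 context and a loader
// (GLEW, glad, ...) have already been set up; error handling is omitted.
#include <cstdio>

static void APIENTRY DebugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* user)
{
    // Driver errors and performance warnings arrive here as readable text,
    // instead of having to poll glGetError() after every call.
    std::fprintf(stderr, "GL debug: %s\n", message);
}

void SetupDevelopmentAids()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // callback fires inside the faulty GL call
    glDebugMessageCallback(DebugCallback, nullptr);

    // Ask the driver directly whether a format is fully usable on this
    // platform, rather than guessing from the spec's support tables.
    GLint supported = 0;
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA16F,
                          GL_FRAMEBUFFER_RENDERABLE, 1, &supported);
    if (supported != GL_FULL_SUPPORT) {
        // pick a safer fallback format here
    }
}
```

With the synchronous flag set, a breakpoint inside the callback lands on the exact GL call that triggered the message, which is what makes this so valuable as a development aid.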
A nice touch by Nvidia to expose the new functionality as extensions on older hardware.
I found my copy of the book in the mail today. I was a little surprised by the moderate size—other volumes of this series were just that: volumes! I think this one is about half the size of the previous tomes. By the way, this post is a shameless plug because there is an article written by me in it. Thanks go to Wolfgang Engel, the series editor, Christopher Oat, my section editor, and CRC Press for making it possible! I will post some comments on the other articles once I have read through them.
I found one very good and comprehensive article on data-driven engine design by Donald Revie. The ideas presented there resonate very well with the designs that I found worked well in the past, so this part gets a solid +1 from me.
(I ended up with a triad of IRenderObject, IRenderGeometry and IRenderProgram, where the render object would AddBatches() to a draw list, each batch referring to one geometry containing the mesh and one program for setting the render state. For instance, a SpeedTree™ render object would typically add three batches, one each for the branches, fronds and leaves, where each batch pairs the specific geometry with an appropriate program. The draw list is then sorted in one go via a general 128-bit sort key, and the batches are rendered in order. This system is general enough that the UI system can also just AddBatches() to this list (so the UI system, as a whole, is just one instance of a render object). In this case, the individual UI geometry objects just represent views into one big dynamic vertex buffer. I will probably explain this system in detail in another post.)
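The 128-bit sort key could be sketched like this. The field layout below (layer first, then program, depth, geometry id) is my guess at a plausible split, not the exact layout used in the engine described above:

```cpp
// Sketch of a 128-bit draw-list sort key. Field widths and ordering are
// illustrative: most significant fields dominate the sort, so batches are
// grouped by layer (e.g. world vs. UI), then by program to minimize state
// changes, then by depth, with the geometry id as the final tie-breaker.
#include <cstdint>

struct SortKey {
    std::uint64_t hi, lo;
    bool operator<(const SortKey& o) const {
        return hi != o.hi ? hi < o.hi : lo < o.lo;
    }
};

SortKey MakeKey(std::uint32_t layer, std::uint32_t program,
                std::uint32_t depth, std::uint64_t geometryId)
{
    SortKey k;
    k.hi = (std::uint64_t(layer   & 0xFFFF) << 48) |
           (std::uint64_t(program & 0xFFFF) << 32) |
            std::uint64_t(depth);
    k.lo = geometryId;
    return k;
}
```

Sorting the whole list by this key with a single std::sort then yields the batch order in one go, with no special-casing per subsystem.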
Two more very inspiring articles so far are the one about geometric post-process antialiasing by Emil “Humus” Persson, and the piece about global illumination using a voxel grid from the teams at the universities of Koblenz and Magdeburg. I had already read their ACM paper, and I think it is a viable method.
While googling, I found this remarkable piece: a patent application from 2010. From the drawings, it seems that the inventor had the ‘revolutionary’ idea of calculating an adaptive depth bias based on the depth map to improve shadow mapping.
The noteworthy part of the story is that during the official search, the patent examiner apparently turned up my ShaderX4 article from 2006, as can be seen further down on the page. The page also states that the application has been abandoned. So maybe the existence of this article has prevented a patent on shadow maps, or at least discouraged the applicant from pursuing the application. In either case, let’s hope it stays that way!
We can draw two conclusions from this.
- Conclusion 1: ShaderX seems to be recognized by the EPO as a source of state of the art.
- Conclusion 2: If you have an idea, write about it and publish it! It doesn’t matter if it is just a blog post. If it can be tracked down, it can prevent a patent on the same idea in the future!
I finally got around to writing some comments on this year’s Advances in Real-Time Rendering course held at SIGGRAPH 2011. Thanks to the RTR team for making the notes available. The talk about physically-based shading in Call of Duty has already been mentioned in my previous post. So, in no particular order:
Rendering in Cars 2
Christopher Hall, Robert Hall, David Edwards (AVALANCHE Software)
At one point, the talk about rendering in Cars 2 describes how they use pre-exposed colors as shader inputs to avoid precision issues when doing the exposure after the image has been rendered. I have employed pre-exposed colors with dynamic exposure in the past, and I found them tricky to use. Since there is a delay in the exposure feedback (you must know the exposure of the previous frame to feed the colors for the next frame), you can even get exposure oscillation!