Highlights from GDC 2014 presentations

This page is my personal collection of highlights from GDC 2014. I was not able to attend in person, so I had to rely on Twitter to stay updated. The immersion was not perfect, but some of the thrill definitely carried over. So here goes (in release order):

Math for Game Programmers: Inverse Kinematics

Gino van den Bergen

Gino “dual numbers” van den Bergen describes an algebraic framework for kinematics, based on—you guessed it—dual numbers. It is worth taking seriously, though. The approach rests on the fact that the group of rigid-body transformations is covered by dual quaternions (quaternions whose components are dual numbers, as in typedef Quaternion<DualNumber> DualQuaternion, or equivalently \mathrm{C}\ell^+_{3,0,1}), and so an IK chain is modeled algebraically as a product of dual quaternions, which can be solved more easily than with matrices. Pretty clever.
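To make the building block concrete (this is my own illustration, not code from the talk): a dual number a + bε obeys the defining rule ε² = 0, which is exactly what makes derivatives fall out of ordinary arithmetic. A minimal sketch:

```cpp
#include <cassert>

// Minimal dual number a + b*eps, with the defining rule eps*eps = 0.
// A dual quaternion is then simply a quaternion built over this type.
struct Dual {
    double r;  // real part
    double d;  // dual (infinitesimal) part
};

Dual add(Dual x, Dual y) { return { x.r + y.r, x.d + y.d }; }

// The eps*eps cross term (x.d * y.d) vanishes by definition.
Dual mul(Dual x, Dual y) { return { x.r * y.r, x.r * y.d + x.d * y.r }; }
```

For example, pushing x = 3 + 1ε through f(x) = x·x yields 9 + 6ε; the dual part is f′(3), obtained without any explicit differentiation.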

Math for Game Programmers: Grassmann Algebra in Game Development

Eric Lengyel

This talk provides background on exterior algebra and the projective split, and is somewhat tangential (no pun intended) to the dual quaternion talk mentioned above, because it is all based on the same 19th-century work (Grassmann, Hamilton, Clifford) that is enjoying a renaissance these days. Lengyel's first message is “know your vectors”: he dissects ordinary vectors, bivectors (a.k.a. axial vectors), and covectors (a.k.a. pseudovectors). The second part shows how to easily carry concepts from 3-D to homogeneous 4-D (for example, a 4-D cross product, join and meet, etc.).
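As an illustration of the 4-D machinery (my own sketch, not Lengyel's code): the wedge product of three homogeneous points is a trivector whose four components are the coefficients of the plane through those points — the “join” operation. Each component is a 3×3 determinant:

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<double, 4>;  // homogeneous point (x, y, z, w)

// 3x3 determinant helper.
double det3(double a, double b, double c,
            double d, double e, double f,
            double g, double h, double i) {
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
}

// Wedge product P ^ Q ^ R of three homogeneous points ("4-D cross
// product"): the result (a, b, c, d) is the plane a*x + b*y + c*z + d*w = 0
// through the three points -- their join.
Vec4 wedge3(const Vec4& P, const Vec4& Q, const Vec4& R) {
    return {
         det3(P[1], P[2], P[3],  Q[1], Q[2], Q[3],  R[1], R[2], R[3]),
        -det3(P[0], P[2], P[3],  Q[0], Q[2], Q[3],  R[0], R[2], R[3]),
         det3(P[0], P[1], P[3],  Q[0], Q[1], Q[3],  R[0], R[1], R[3]),
        -det3(P[0], P[1], P[2],  Q[0], Q[1], Q[2],  R[0], R[1], R[2]),
    };
}
```

Joining the three unit points (1,0,0), (0,1,0), (0,0,1) yields the plane (1, 1, 1, −1), i.e. x + y + z = 1, with no special-case code for normals versus offsets.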

Assassin’s Creed 4: Black Flag – Road to Next-Gen Graphics

Bartłomiej Wroński

This talk discusses three next-gen techniques used in Black Flag: global illumination, fog irradiance volumes and screen-space reflections. For me, the fog volumes were the most interesting part, as the team devised a clever compute-based approach to solve the problem. The talk also has some practical details on the inner workings of AMD’s GCN graphics chips.
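The core of any such volumetric fog scheme (a minimal CPU-side sketch of the standard front-to-back integration, not Ubisoft's actual shader) is marching one column of fog cells, accumulating in-scattered light attenuated by the transmittance built up so far:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// One cell of a fog volume column: how much light is scattered toward the
// camera, and how strongly the medium extinguishes light, over the cell.
struct FroxelCell {
    double inscatter;   // in-scattered radiance per unit depth
    double extinction;  // extinction coefficient (sigma_t)
};

// Front-to-back integration along one column: each cell contributes its
// in-scattering attenuated by the transmittance accumulated before it.
// Returns (accumulated in-scattered light, final transmittance).
std::pair<double, double> integrate(const std::vector<FroxelCell>& column,
                                    double cellDepth) {
    double L = 0.0;  // accumulated in-scattered light
    double T = 1.0;  // transmittance from the camera so far
    for (const FroxelCell& c : column) {
        L += T * c.inscatter * cellDepth;
        T *= std::exp(-c.extinction * cellDepth);
    }
    return { L, T };
}
```

On the GPU this runs as a compute pass over a low-resolution frustum-aligned 3-D texture, so every lit pixel can later fetch its fog by a single trilinear lookup.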

Approaching Zero Driver Overhead (a.k.a. →0)

Cass Everitt et al.

This presentation, and a similar one given at the Steam Dev Days, shows a concerted effort by the major graphics chip vendors to evolve OpenGL further and reduce driver overhead, in order to break the draw-call barrier and enable a push in scene complexity. The important bits here are bindless textures, persistently mapped buffers and MultiDrawIndirect, together with the fact that you can fill multiple buffers concurrently from multiple threads (multithreading has been in OpenGL since version 1, and buffers can be shared among contexts). This already works with existing drivers using existing extensions. I am a firm believer in and practitioner of Never Start Over, therefore I welcome a more evolutionary path over radical new designs like Mantle or #DirectX12 (though competition is always healthy).
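The multi-threaded filling part is plain C++ once the buffer is persistently mapped. A sketch (in real code the pointer would come from glMapBufferRange with GL_MAP_PERSISTENT_BIT; here it is ordinary memory so the example runs anywhere):

```cpp
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Fill disjoint ranges of one mapped buffer from several threads. Because
// the ranges never overlap, no synchronization is needed between the
// writers -- only a fence/flush before the GPU consumes the data.
void fillParallel(uint32_t* mapped, std::size_t count, unsigned numThreads) {
    std::vector<std::thread> workers;
    std::size_t chunk = count / numThreads;
    for (unsigned t = 0; t < numThreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == numThreads) ? count : begin + chunk;
        workers.emplace_back([=] {
            for (std::size_t i = begin; i < end; ++i)
                mapped[i] = static_cast<uint32_t>(i);  // e.g. per-instance data
        });
    }
    for (std::thread& w : workers) w.join();
}
```

The point is that no GL calls appear in the inner loop at all; the driver is only involved when the ranges are made visible to the GPU.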

Rendering Battlefield 4 with Mantle

Johan Andersson

Nevertheless, it was interesting to see details on the proposed Mantle API. I think it is no coincidence that they chose to prefix their entry points with “gr”, similar to Glide (the only viable spiritual predecessor to Mantle, if there should be any). Doing graphics the Mantle way revolves around pipeline objects and descriptor tables. Both are larger and have even coarser granularity than the render state objects introduced in DirectX 10 (those turned out not to match actual hardware at all, so they never delivered the performance boost they were thought to bring). Interesting read overall.

Crafting a Next Gen Content Pipeline for The Order: 1886

David Neubelt and Matt Pettineo

This presentation is similar to the one they gave at SIGGRAPH last year (as part of the Physically Based Shading course). This time the focus is more on engineering issues and less on PBS. The salient points are Tiled Forward+, the H-basis, directional AO maps, the GGX distribution (I have my gripes with this; I’d rather overlay multiple cosine-power lobes, giving a material more than one glossiness parameter, to flexibly represent NDFs at different length scales), Kajiya-Kay-style hair, an inverted Gaussian for cloth, normal map antialiasing via reduction of glossiness in the mip levels, and a rather cool hardware setup plus editor tools to acquire BRDFs and paint with them.
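The glossiness-reduction trick can be implemented in several ways; the classic one is Toksvig's factor (shown here as a sketch; I am not claiming this is the exact formula used for The Order). When a mip level averages divergent normals, the averaged normal shortens, and that shortening measures the normal variance used to widen the highlight:

```cpp
#include <cassert>

// Toksvig-style gloss reduction: avgNormalLen is the length of the
// averaged (unnormalized) normal in a mip texel, in (0, 1]. The shorter
// it is, the more the normals diverge, and the more the specular power
// is reduced so the highlight widens instead of aliasing.
double toksvigGloss(double avgNormalLen, double specPower) {
    double ft = avgNormalLen /
                (avgNormalLen + specPower * (1.0 - avgNormalLen));
    return ft * specPower;
}
```

In practice the adjusted gloss is baked into the mip chain offline, so the shader pays nothing at runtime.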

Physically Based Shading in Unity

Aras Pranckevičius

This caught my attention because, in a side note buried deep inside the paper, they mention they’re using my rusty old shading model from ShaderX7 (which I later dubbed “Minimalist Cook-Torrance”) on mobile targets. Back then I really had to squeeze out every arithmetic operation I could. And now the history of yesterday’s desktop repeats itself in the form of today’s mobile. Check out their demo for iPad and also the pictures on their blog; it’s looking really nice.

Introduction to PowerVR Raytracing

James A. McCombe

In a bold move, Imagination Technologies introduces hardware support for ray tracing in their latest PowerVR graphics chips. This evolutionary step mostly uses the already existing compute units to process rays in parallel, but also adds dedicated fixed-function hardware for ray-box intersection tests and scene traversal. They advertise this tech to accelerate secondary rays emitted from the positions of rasterized pixels, allowing better reflections, refraction, order-independent transparency, high-quality shadows and generally anything that rasterization sucks at. (Yes, this means you need to “upload” your scene to the graphics chip and keep two representations.) In the off-line domain, the Unity engine ships with support for this hardware to preview light map baking. I find this innovation highly refreshing. Ray tracing is no longer the eternal “technology of the future”; it’s coming now.
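For reference, the ray-box test that such traversal hardware performs against bounding-volume nodes is the classic slab method (my own sketch of the textbook algorithm, not PowerVR's implementation; zero direction components are ignored for brevity):

```cpp
#include <algorithm>
#include <array>
#include <limits>

using Vec3 = std::array<double, 3>;

// Slab-method ray vs. axis-aligned box test: intersect the ray with the
// pair of parallel planes (the "slab") on each axis, and keep the overlap
// of the three entry/exit intervals. A non-empty overlap means a hit.
bool rayIntersectsBox(const Vec3& orig, const Vec3& dir,
                      const Vec3& boxMin, const Vec3& boxMax) {
    double tNear = 0.0;
    double tFar = std::numeric_limits<double>::max();
    for (int axis = 0; axis < 3; ++axis) {
        double t0 = (boxMin[axis] - orig[axis]) / dir[axis];
        double t1 = (boxMax[axis] - orig[axis]) / dir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);  // latest entry across the slabs
        tFar  = std::min(tFar, t1);   // earliest exit
    }
    return tNear <= tFar;
}
```

Each test is three divides, a few compares and min/max operations, which is exactly the kind of small, regular workload that pays off as fixed-function silicon.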

