A new binary release of Brigade 2 is now available from the
download page (or download the package directly). This release comes with three demos
(including source code), and includes a 64-bit build of the engine, which allows for
larger scenes and textures.
Brigade 2 was featured on Kotaku last weekend: kotaku.com/5899802, thanks to the amazing work of
the artist who rendered the Asian City scene in Brigade. The renders almost look like photographs.
A trailer for the student game "It's About Time" was released
during the Game Release Party at the Pakhuis, the building of the IGAD game development
program. The trailer can be viewed on YouTube:
Friday, January 27th, 2012
Reflect has been released. You can obtain it
from the download page.
The package includes source files, as well as the latest binaries for Brigade 2.
Reflect features a Portal-style world, with lasers, mirrors, doors and elevators.
It's About Time is set in an Aztec world, and uses the time of day in many of its puzzles.
Thursday, January 26th, 2012
Some new Brigade footage has been recorded and placed on YouTube:
This video shows 5 minutes of gameplay from Reflect, a game
built by students of the IGAD program of the NHTV University of
Applied Sciences. This version is rendered by the Brigade 2 engine,
using two GTX 470 GPUs.
First of all: Brigade is a research project. It is designed to facilitate
the production of 'proof-of-concept' games that use path tracing as the
primary rendering algorithm, as opposed to rasterization, which is what
GPUs normally do. Path tracing uses ray tracing, and extends basic ray
tracing by supporting indirect lighting and soft shadows. More complex
effects, such as caustics, are also supported, but suffer from noise.
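To make the idea concrete, here is a tiny Python sketch (illustrative only, nothing like Brigade's actual code) of the core of a path tracer: a single diffuse ground plane under a sky dome that acts as the only light, with cosine-weighted diffuse bounces averaged over many noisy samples, just as a path tracer averages samples per pixel.

```python
import math
import random

def sky(direction):
    # the sky dome is the only light source; brighter straight up
    return max(0.0, direction[1])

def sample_cosine_hemisphere():
    # cosine-weighted direction about the +y normal of the ground plane
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), math.sqrt(1.0 - u1), r * math.sin(phi))

def radiance(direction, depth=0, albedo=0.5, max_depth=4):
    # the whole 'scene' is an infinite diffuse plane at y == 0
    if direction[1] >= 0.0 or depth >= max_depth:
        return sky(direction)            # ray escapes: light enters the path
    bounce = sample_cosine_hemisphere()  # diffuse indirect bounce
    # cosine-weighted sampling cancels the cosine and pdf terms,
    # leaving just the albedo as the path throughput factor
    return albedo * radiance(bounce, depth + 1)

# average many noisy samples, as a path tracer does per pixel
random.seed(7)
estimate = sum(radiance((0.0, -1.0, 0.0)) for _ in range(20000)) / 20000
# converges to albedo * (mean sky radiance) = 0.5 * 2/3, i.e. about 0.333
```

With fewer samples, the per-sample variance shows up as exactly the kind of noise mentioned above.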
Brigade does not aim to produce photo-realistic images using complex
scenes and materials. The emphasis is on speed, and on getting
the highest possible quality within a strict time budget. Higher quality
graphics are the goal, but never at the expense of interactivity.
Performance is achieved by employing state-of-the-art algorithms, often
based on very recent research. Besides this, engineering and optimization
obviously play an important role. The greatest gains, however, are obtained
by using not just a GPU or CPU but both: Brigade splits the rendering task
seamlessly over available compute units. Depending on your configuration,
Brigade will render using the CPU only, mostly on the GPU, or anything
in between. This balance is reconsidered every frame. The plan is to add
support for multiple GPUs as well as network rendering. Multiple CPUs are
already supported, and all your cores will be put to work optimally.
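A minimal sketch of how such a per-frame rebalancing step could look (hypothetical logic, not Brigade's actual scheduler): measure how long each device took last frame, compute the split that would make both finish simultaneously, and move toward it.

```python
def rebalance(gpu_fraction, t_gpu, t_cpu, smoothing=0.5):
    # per-device throughput, derived from last frame's measured times
    r_gpu = gpu_fraction / t_gpu          # work per unit time on the GPU
    r_cpu = (1.0 - gpu_fraction) / t_cpu  # work per unit time on the CPU
    target = r_gpu / (r_gpu + r_cpu)      # split that equalizes finish times
    # damp the adjustment so noisy timings don't make the split oscillate
    return gpu_fraction + smoothing * (target - gpu_fraction)

# simulate a GPU that is 3x faster than the CPU at this workload
frac = 0.5
for _ in range(20):
    t_gpu = frac / 3.0          # hypothetical timing model
    t_cpu = (1.0 - frac) / 1.0
    frac = rebalance(frac, t_gpu, t_cpu)
# frac converges toward 0.75: the GPU ends up doing 3 of every 4 units
```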
Brigade is the successor to the Arauna real-time ray tracer, which has
been used for a large number of student projects at the 'International
Game Architecture & Design' program of the NHTV University of Applied
Sciences, Breda, The Netherlands. As such, Brigade mimics the Arauna API
to ease the transition to a new rendering engine. If you are familiar
with Arauna, you will feel right at home.
Brigade implements a basic path tracer, with a fixed shading path. This
path supports diffuse materials with textures, as well as specular and
dielectric materials (metal, glass). Dielectrics may have absorption,
and an adjustable index of refraction. Lighting is entirely done using
area lights, of which an unlimited number may be specified. For Brigade,
a light is simply a polygon with a material named 'emm0' through 'emm9'. This
way, entire meshes can be turned into light sources.
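As a sketch of how this naming convention can be used (this is my reading of the 'emm0-9' rule; the helper names here are made up):

```python
def is_emissive(name):
    # convention as I read it: light materials are named 'emm0'..'emm9'
    return len(name) == 4 and name.startswith("emm") and name[3].isdigit()

def collect_area_lights(triangles):
    # every polygon carrying an emissive material becomes an area light,
    # which is how an entire mesh can be turned into a light source
    return [tri for tri in triangles if is_emissive(tri["material"])]
```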
Brigade aims to offer a simple API to the developer. To facilitate this,
the initial scene is loaded using a text file, named scene.txt. In this
file, you can specify the resolution, the number of samples per pixel,
whether or not GPU rendering should be used, and so on. This lets you
experiment with the path tracer without ever looking at the source code.
Check scene.txt to see how to add more planes to the demo scene, for example.
Internally, Brigade uses a BVH that is constructed per-frame from BVHs
that are built per scene graph object. The per-object BVHs are updated
as needed, for optimal performance. The BVH is then synchronized with
the version on the GPU, and also converted to other structures that are
needed for optimal CPU rendering. Because of the intention to use
Brigade for games, fast support for animated scenes is a priority.
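A toy version of the top-level rebuild (illustrative only; Brigade's real builder is far more elaborate): treat each object's cached per-object BVH root bounds as a primitive, and build a small tree over those bounds every frame.

```python
def merge_bounds(a, b):
    # union of two axis-aligned bounding boxes: (min corner, max corner)
    return ([min(p, q) for p, q in zip(a[0], b[0])],
            [max(p, q) for p, q in zip(a[1], b[1])])

def build_top_level(objects, axis=0):
    # rebuilt per frame over the cached per-object BVH root bounds;
    # a real builder would pick the split axis by extent or use SAH,
    # this is a simple round-robin median split for illustration
    if len(objects) == 1:
        return {"bounds": objects[0]["bounds"], "object": objects[0]}
    objects = sorted(objects, key=lambda o: o["bounds"][0][axis])
    mid = len(objects) // 2
    left = build_top_level(objects[:mid], (axis + 1) % 3)
    right = build_top_level(objects[mid:], (axis + 1) % 3)
    return {"bounds": merge_bounds(left["bounds"], right["bounds"]),
            "left": left, "right": right}
```

Because only the small top-level tree is rebuilt each frame, animated scenes stay cheap: per-object BVHs are refit or rebuilt only when their object actually changes.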
Brigade currently requires a recent NVIDIA card to run (although you may
be able to run using the CPU only by altering scene.txt). Your GPU must
support CUDA. Two executables are provided: One is optimized for GTX2xx
class devices, the other one targets Fermi devices specifically. The first
one will obviously work on a broader range of hardware, but the Fermi
version is optimal for that architecture and should be used if possible.
A version of Brigade that automatically detects supported features will
be released at a later date.
Friday, May 14th, 2010
Some more screenshots of the new path tracer that is under development. The path tracer
is currently called 'Brigade'. Brigade renders scenes interactively, using CPUs and GPUs together in a hybrid
system (CUDA GPU subsystem programmed by Dietger van Antwerpen). The CPU tracer uses BVH and MBVH acceleration
structures, and combines large packets for primary rays with MBVH packet traversal for secondary rays for
optimal rendering performance. Click the images for larger versions.
The image on the left is a scene modelled by Nils Ruisch, a student of the visual art branch of the IGAD program
for game development. The crossbow is a model by Robert Joosten. The image on the right is a reconstruction of a
scene by M.C.Escher, modelled by Gabrian van Houdt.
Thursday, January 28th, 2010
Here is a screenshot of some new technology that is currently under development. The model
is made by Nils Ruisch, an IGAD student.
A new version of the Arauna ray tracer is now available. This is mostly a service release, which includes
the latest features and some minimal example projects. The current version of Arauna now renders Sponza at
40 fps or more at 1024x600 resolution, on all cores of an Intel i7 processor. Go to the downloads page for
this package (61 MB).
The above image is from the demo that is included with this source distribution.
Friday, July 3rd, 2009
IGAD students have created a new game using the Arauna ray tracer: Time Of Light, a pinball game.
Get the game here: download
Friday, April 3rd, 2009
A new demo is available, showing off the new global illumination code in Arauna. This demonstration is based on the ideas explained in the Eurographics 2009 paper "A Precalculated Pointset for Caching Shading Information", by Roel Reijerse and me. In this demo, the pointset is used to store shading information calculated using virtual point lights and lightcuts. Some of the points in the set are updated per frame (on the animated model).
Tuesday, October 7th, 2008
Here is a high-quality render from the experimental system for cached illumination. The system is used for indirect lighting only in this shot; shadows are still traced per pixel using the ray tracer.
Other news: A new project has been added to the projects page: Pirates on the Edge. More information about this game will be available soon.
Tuesday, October 7th, 2008
As a test, I added sphere support to Arauna. Spheres are interesting in a ray tracer, because they can be 'perfect', unlike spheres in a GPU engine, where they are usually represented by triangles. This especially matters for reflections, as can be seen in the screenshot below (click for a larger version).
The sphere code in Arauna is efficient, and does not reduce the performance of scenes that have no spheres. Currently, spheres cannot be textured or normal mapped.
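For reference, the 'perfect' sphere comes from solving the ray/sphere equation analytically rather than intersecting a triangle approximation. A minimal sketch (not Arauna's actual code):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    # analytic ray/sphere test: solve |o + t*d - c|^2 = r^2 for t;
    # assumes the ray direction is normalized
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                  # the ray misses the sphere
    t = -b - math.sqrt(disc)         # nearest of the two roots
    return t if t > 1e-6 else None   # ignore hits behind the origin
```

Because the hit point is exact, the surface normal (hit minus center, normalized) is exact too, which is why reflections on such a sphere are flawless.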
Tuesday, September 9th, 2008
Another new version of Arauna has been released. This version has some very substantial improvements over the previous version (released only two months ago): Performance improved by 50% (on average), the shading path is completely floating point again (resulting in better image quality), and support for dynamic geometry has been replaced completely: Arauna now handles highly dynamic environments effortlessly.
The full source package (including a basic demo application) can now be obtained from the download page. Also note that the source code for Let there be Light has been released; it can be downloaded from the download page as well. Note that this version has been adapted to work with the latest version of Arauna, and performs much better than the original version. However, due to the changes in the shading pipeline, some surfaces will be rendered much too bright. This release should therefore be used only as an example of how to use Arauna.
On a related note: Over the next six months, two teams of IGAD students will be using the Arauna engine to develop two games as part of the GameLab of our game development course. Screenshots and demos will be posted on this site.
Friday, July 11th, 2008
A new version of the Arauna ray tracer source distribution is now available from the downloads page. This is the version that has been used to build 'Let there be Light'. This version is stable and fast and can be used freely to build non-commercial applications.
Marketing blurb: Arauna is a real-time ray tracer developed for game development. Being a real-time ray tracer, it is experimental, and does not yet deliver the performance needed to produce graphics of the quality modern games achieve using a GPU. However, it is one of the fastest (probably the fastest) renderers in its class. Two games have already been developed using Arauna, both by students of the IGAD program of the NHTV University of Applied Sciences (Breda, The Netherlands).
Real-time ray tracing of large triangle meshes (up to 2M triangles per GB of memory)
Full HDR pipeline with post-processing for HDR glow
Recursive reflection and refraction
Accurate shadows from an unlimited number of point lights
Texturing with bilinear filtering and normal mapping
Support for dynamic objects
Unlimited light sources (though the number of lights affecting one pixel should be limited)
Full Phong shading model (ambient, diffuse, specular) with attenuation
Hand-tuned SIMD code for extremely efficient ray tracing and shading
Loads OBJ files for meshes and TGA files for textures
System requirements: Depending on scene complexity, enabled features, material complexity (reflections, refractions) and the number of light sources, Arauna runs on any dual-core processor at a resolution of 512x384. Faster dual-core processors, quad-core processors and systems with many cores will run Arauna well at higher resolutions.
Arauna has been under development for four years now, and is stable.
Thursday, July 3rd, 2008
The final version of Let there be Light has been released. You can download the installer from the downloads page.
Monday, June 9th, 2008
An alpha build of the second student game that uses the Arauna ray tracer is now available for download: Get it from the downloads page. In this alpha, you can explore one room of the game. The final game is expected within weeks, and features five full rooms, puzzles, better performance, automatic fly-throughs and much more. More information is available from the projects page.
Monday, May 26th, 2008
Second update today: Vadim Kindl sent me an e-mail with a port of Arauna for OS X. The file has been added to the download page, and includes an Xcode project.
Monday, May 26th, 2008
The full data set for Outbound is available now from the downloads page. This includes the design document, scenery, dynamic objects, textures, original high-resolution textures and additional material properties that cannot be stored in obj files. The files are free to be used for non-commercial purposes. If you need a large, detailed, hand-made data set to put your ray tracer through its paces, look no further: Outbound comes with a 1M polygon scene that has been hand-crafted by IGAD students.
Friday, May 24th, 2008
Outbound can now be downloaded from the downloads page. Here's the story, from the projects page:
Harrold C. Babbage, grandchild of Charles Babbage, the father of the idea of a programmable computer, has built a time machine. In a Jules Verne setting, combined with the folklore of mid-19th-century time-travel ideology, our story of "Outbound" has taken shape. A young and inspired scientist, Harrold C. Babbage, has successfully built a time machine, and has taken the risky step of travelling from his own time (1870) to one hundred years later.
Harrold has arrived in a strange-looking place, a rock towering countless meters above foggy scenery. In the distance, multiple smaller rocks can be seen and seagulls can be heard, along with, far away, a catchy tune. You wonder where you are and, most importantly, when.
Outbound is a Myst/Riven style game, developed by second year students of IGAD, a game development program of the NHTV University of Applied Sciences. Enjoy a detailed environment rendered using real-time ray tracing: more than one million triangles, reflective and refractive objects, and of course, some challenging puzzles.
The full data set for Outbound will be available for the research community early next week.
Tuesday, May 20th, 2008
Outbound, the first game based on the Arauna renderer, is about to be released. Students are preparing the final package; release is expected early June. When it is released, the game can be downloaded for free from this site.
In the meantime, you can enjoy some screenshots from the game at the projects page.
Tuesday, May 13th, 2008
Returned from a brief vacation in France. Arauna can now be compiled for x64. I hoped that this would improve performance, but it did not: in fact, performance is slightly lower using the Intel compiler. Interestingly, using the Visual Studio compiler, performance did improve, by no less than 10%, although that build is still slower than the icc build. Of course, the x64 build now allows scenes that exceed 2 GB, so my time wasn't completely wasted...
Wednesday, April 16th, 2008
Update to yesterday's post: I finally managed to make a significant improvement to the GI calculations. By dismissing all VPLs that have an influence below a preset value (not considering visibility), I doubled the speed, without perceivable loss of quality. The Sponza scene is now illuminated in 4.5 seconds. I think this concludes my journey; I will now proceed to the land of dynamic GI...
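The culling step can be sketched as follows (illustrative, not Arauna's actual code): compute a cheap unoccluded upper bound on each VPL's contribution, and only spend a shadow ray on the VPLs whose bound exceeds the preset value.

```python
def significant_vpls(point, vpls, threshold=0.01):
    # cheap upper bound on each VPL's contribution, before any shadow
    # ray is cast: intensity over squared distance, visibility ignored
    kept = []
    for pos, intensity in vpls:
        d2 = sum((p - q) ** 2 for p, q in zip(point, pos))
        if intensity / max(d2, 1e-6) >= threshold:
            kept.append((pos, intensity))  # only these get shadow rays
    return kept
```

Since visibility can only reduce a VPL's contribution, dismissing VPLs whose unoccluded bound is already negligible cannot introduce large errors, which matches the 'without perceivable loss of quality' observation.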
Tuesday, April 15th, 2008
My new laptop is in, which is cool: it's an Asus C90, which comes with a desktop processor that can be 'overclocked' to nearly 3 GHz. Plus, it has all the other benefits of a desktop processor, such as SSE3 and 64-bit support.
On the research side, I have been searching for coherency in many places, to speed up the global illumination precomputations. Currently, the test scene can be processed in 3.5 seconds; Sponza takes nearly 10 seconds. I was hoping
to cut that in half: Ray queries between sampling points and VPLs exhibit some coherency, so I hoped for a 1.5 - 2x speedup. Sadly, it doesn't work like that, because virtually every sampling point uses a different set of VPLs. Add
to that the fact that I approximate occlusion by reusing query results if two rays use children of the same node in the VPL BVH, and coherency (that can be exploited) suddenly becomes rare.
I will probably look a bit further before I give up... In the meantime, I work with my students on their game. In the game, the player pushes a ball that casts 12 beams of light. This is implemented using a ball with holes. The
shadow boundaries of the beams are hard of course, and so I added a custom light source that has 12 beams, with the same properties as the existing spotlights. The beams are round and soft now, which dramatically improves the
visual quality of the game. It also shows that low-level access to an engine will always result in better solutions for practical problems, something I also noticed when working with GPU-based 3D engines.
Tuesday, April 1st, 2008
The GI calculations now use a BVH to cluster distant light sources that are close together (similar to the 'Lightcuts' approach by Walter et al.). This brings down the number of VPLs that need to be evaluated tremendously: for larger numbers of VPLs, roughly 10% need to be queried. There is an impact on quality, as can be seen in these shots:
The left image shows the approximated GI (10 seconds on a 1 GHz dual-core CPU). The right image is the original global illumination result based on 2000 virtual point lights. The centre image shows the exaggerated difference.
Monday, March 31st, 2008
I have been working with the GameLab team on their game. One of the things they needed is a simple mechanism to add floating debris. To facilitate this, I added a 'breakable' node to the scene graph, which turns an arbitrary mesh into double-sided polygons that can fly in any direction, with an individual rotation and direction per polygon. This is also a nice stress test for the dynamic BVH that is rebuilt per frame.
A simple stand-alone demo showing the effect can be downloaded here:
In the package, you will find one 'high quality' version, with GI and reflections enabled, and one basic version, that obviously runs faster. Screenshot:
Tuesday, March 18th, 2008
Another quick update: Here is a screenshot of the 'BugToad' in Sponza. Bugtoad was modelled by Son Kim.
Tuesday, March 11th, 2008
Quick update: The GameLab team (year 1 of the IGAD program) requested proper emissive materials. So here it is, along with a tiny teaser picture of the project they are working on:
Monday, March 10th, 2008
As expected, global illumination is even lower-frequency than ambient occlusion. Using the set of sampling points to capture the global illumination proves to yield a good approximation of diffuse interreflection, as can be seen in the following screenshots:
From left to right: a simple test scene lit by 4096 virtual point lights (Keller) but without VPL shadows; the same scene, this time with VPL shadows; and finally, Sponza, lit by 1024 VPLs with shadows.
The artifacts near the foot of the pillars are caused by what could be considered 'bad geometry': the floor plane does not have edges at those locations, so in the sample point generation phase, no extra points are calculated on the floor plane near the pillars. Also, the density of the point set is not increased there, as it should be.
For the record: Global illumination is not calculated in real-time yet.
Tuesday, March 4th, 2008
The Voronoi smoothing works out really well in an actual low-frequency situation, such as a scene shaded with ambient occlusion, as can be seen in the screenshot below:
This is the same technique used on the 2D Voronoi diagram yesterday, but this time applied to 3D sample points, with the added restriction that included points must (roughly) lie on the same plane.
Monday, March 3rd, 2008
I have been experimenting with ways to smooth the Voronoi diagram, ideally without adding the requirement of a mesh (e.g., a triangulation of the sample points). The Voronoi diagram that results from finding the closest point in the sample set has hard edges, but adding some noise to the distance calculation results in smoother transitions. Taking this further leads to the following output (in 2D):
To the left, the original Voronoi diagram, which is the result of finding the closest point for each pixel of the image. The blue dots are 'dark' samples; the black dots represent 'red' samples. To test the quality of smoothing, the right half of the samples is red.
On the right side, the pixels of the image are coloured based on randomized distances. The distance to each sample point is randomized with a maximum value of 90 (measured in pixels). In some cases, this distance smooths the discontinuity a bit too much, whereas in other areas, a larger radius could be used. The result is, however, clearly an improvement over a direct visualization of the nearest sample point, and it can be calculated quickly, without using any knowledge about connectivity between the sample points.
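The randomized-distance trick can be sketched in a few lines (illustrative only; the names are mine):

```python
import math
import random

def smoothed_nearest(p, samples, jitter=0.0, rng=random):
    # randomized-distance variant of 'closest sample point': adding noise
    # to each distance before taking the minimum lets samples near a cell
    # boundary win occasionally, which blurs the hard Voronoi edges
    best_value, best_d = None, float("inf")
    for pos, value in samples:
        d = math.dist(p, pos) + rng.uniform(0.0, jitter)
        if d < best_d:
            best_value, best_d = value, d
    return best_value
```

With jitter set to 0 this degenerates to the plain hard-edged Voronoi lookup; increasing it widens the soft transition band.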
Friday, February 22nd, 2008
Some shots of a Voronoi diagram based on the point set. The point set contains ~130k sampling points (an average of two per scene triangle). The samples are shaded using 512 ambient occlusion rays. Some artifacts remain; Sponza is once again proving to be a tough mesh. In this case it's the intersecting triangles that are causing problems...
Images: on the left side, a quick-and-dirty 'closest point' shading of the point set (effectively a Voronoi diagram). On the right side, the same camera position, this time with the sampling points superimposed.
Thursday, February 21st, 2008
Quick update: dart throwing has been improved; it now emphasizes sharp discontinuities.
Wednesday, February 20th, 2008
Here are some screenshots from a recent experiment: dart throwing on a mesh using particle tracing. The resulting set (after tracing the particles) obviously has a very high density in some areas and a very low density in others, so it is postprocessed to remove most particles. The result is (close to) a uniform distribution, which is the input to the second stage of the experiment, which I will be working on for the next week or so.
The left screenshot shows the resulting data set, visualized using the 3D grid code from my previous GI attempt. On the right side, the original scenery, for reference.
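The thinning step can be sketched as a simple rejection pass (illustrative; the real postprocess may differ):

```python
import math

def thin_to_min_distance(points, min_dist):
    # post-process the traced particles: accept a point only if it is not
    # too close to any previously accepted point (naive O(n^2) rejection),
    # which pushes the set toward a uniform, Poisson-disc-like density
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept
```

Dense clusters lose most of their points while sparse regions keep all of theirs, so the surviving set is close to uniform.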
Monday, February 18th, 2008
I added a new file to the download page: the Arauna kD-tree compiler. This is a stand-alone command line tool that will build an optimized kD-tree for an obj file (based on the paper by Wald and Havran, 'On building fast kd-Trees for Ray Tracing, and on doing that in O(N log N)'). The tree is exported as a text file. You can use this tool in your ray tracing experiments; it saves you the hassle of building your own kD-tree compiler. You can also use it to generate a reference data set, in case you are working on your own compiler. More information about using the data can be found in a thread on the ompf forum.
Tuesday, February 5th, 2008
I have been experimenting with 'plotting' discs in the photon map, to reduce low frequency noise. Basically, grid locations around the actual position of a photon are updated with the photon color, scaled by the distance of the grid point to the surface normal. This effectively reduces the noise, but now I'm very much stuck with the places where planes intersect...
To investigate the problem, I made a quick visualization of the photon map: 8 slices of data. I'm not sure why the void is not black; photons are not supposed to be stored there...
Update: I uploaded the current state as an incremental source update. Get the package.
Saturday, February 2nd, 2008
Allow me to do a braindump of what I am working on right now: The 'real-time photon mapper'. First of all, a few thoughts on the target: Global illumination can be implemented with roughly three degrees of freedom:
Completely static (precomputed, i.e. no freedom);
Dynamic lights, static geometry;
Dynamic lights and dynamic geometry.
Using the approach that I implemented (discretized storage of photons in a 128x128x128 grid) precomputation is straightforward and run-time visualization efficient (through trilinear interpolation). Due to L2 cache misses, there is a significant speed impact of ~8%, but that's to be expected, and acceptable.
Dynamic lights are next on my list. As a possible solution, I was hoping to build the photon map once, decrease intensity by x% every frame, and resend x% of the photons every frame. Noise would be acceptable because (100-x)% of the 'light' in the grid would be from the previous frame. However, 'x' needs to be sufficiently high to support moving lights. I can spawn ~800k photons per second, using two cores at 100%. That means that if I devote 25% of the frame time to updating the photon map, I can only update 200k photons. I need roughly 20M photons, so a moving light would be completely updated after 100 seconds. That's too long.
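The decay-and-resend scheme can be sketched as follows (illustrative; the real photon map is a 3D grid of colours, not a dict of scalars):

```python
def update_photon_grid(grid, fresh_photons, x=0.05):
    # fade the stored light by a fraction x, then inject the freshly
    # traced x-fraction of photons; (1 - x) of the energy in the grid
    # survives from previous frames, which hides the update noise
    for cell in grid:
        grid[cell] *= (1.0 - x)
    for cell, energy in fresh_photons:
        grid[cell] = grid.get(cell, 0.0) + energy
    return grid
```

Under static lighting the map converges to a fixed point: the faded energy is exactly replaced by the resent photons, so total brightness stays constant.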
There are several ways to deal with this. Option 1: accept that this will require at least an 8-core machine. Option 2: seriously increase the number of photons per second. Option 3: reduce the number of photons needed for a good estimate. About option 2: I am currently thinking about using the 'first hit' of a limited number of photons as 'virtual photon fountains', similar to Keller's VPLs. That way, the first photon path segment is free, and photon mapping performance would be roughly doubled, to 1.5M per second.
In the meantime, I'm reading up on the topic of global illumination, since this is pretty much completely new to me... More on this later.
Friday, February 1st, 2008
Downloads added: My RT07 paper (and presentation slides) on ray tracing games and the latest demo / source package. You can find the files on the downloads page.
Thursday, January 31st, 2008
Welcome to my new site: Here I will post news about the Arauna real-time ray tracing project, developments in the field, and projects based on Arauna. I think this site was badly needed: Source packages, screenshots, presentations and a lot of other stuff is basically floating around several other places, which makes it hard to find. If I can find the discipline to keep this up to date, the problem is solved.
Random screenshots for today:
This is the new and highly experimental photon mapper in action. I'm not sure that it will stay, but
for the moment, it produces pretty images.