Framework for Real Time Car Visualisation

  • 2019-2020
  • In collaboration with: Business Creation Net
  • Platforms: Desktop (all), Mobile (all)
  • Browsers: Chrome, Firefox, Safari, Edge
  • WebGL 1.0 and WebGL 2.0

We developed a web framework which we used for several projects visualizing new car models from a large car manufacturer, mainly for advertising purposes. The projects are publicly accessible, but for legal reasons I cannot share them here; one of them won the Automotive Brand Contest 2020. Instead, I will go over the most important features we developed and provide images from the initial proof-of-concept built exclusively for in-house purposes.

PBR Pipeline

When we started development, we found that none of the existing mainstream WebGL-based engines/frameworks offered enough from a graphics point of view, and developing on top of them proved too cumbersome and restrictive. That’s why we went ahead and implemented our own PBR pipeline from scratch.

Because we knew ahead of time what our framework would mostly be used for, namely rendering cars, we gave special attention to our car paint shader, which is modeled around concepts from the literature on layered BRDFs. Regular metal and dielectric materials are mostly standard implementations with a few improvements. We also implemented local reflection probes, but used them sparingly.
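To give a flavor of the layered idea behind such a car paint model, here is a minimal scalar sketch (not our production shader): a base metal/dielectric lobe with a smooth dielectric clear coat on top. The function names and constants are illustrative assumptions, not our actual code.

```javascript
// Schlick's approximation of the Fresnel term.
function fresnelSchlick(cosTheta, f0) {
  return f0 + (1 - f0) * Math.pow(1 - cosTheta, 5);
}

// GGX / Trowbridge-Reitz normal distribution function.
function distributionGGX(nDotH, roughness) {
  const a = roughness * roughness;
  const a2 = a * a;
  const d = nDotH * nDotH * (a2 - 1) + 1;
  return a2 / (Math.PI * d * d);
}

// Layered combination: the clear coat reflects some energy first; whatever
// passes through reaches the base layer, so the base lobe is attenuated by
// (1 - coatFresnel) to keep the result roughly energy conserving.
function carPaintSpecular(nDotH, vDotH, baseRoughness, baseF0, coatRoughness) {
  const coatF = fresnelSchlick(vDotH, 0.04); // dielectric coat, F0 ≈ 0.04
  const coat = coatF * distributionGGX(nDotH, coatRoughness);
  const base = fresnelSchlick(vDotH, baseF0) * distributionGGX(nDotH, baseRoughness);
  return coat + (1 - coatF) * base;
}
```

The attenuation of the base lobe by the coat's Fresnel term is the key trait that separates a layered model from simply summing two independent specular lobes.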

Temporal Reprojection Anti-Aliasing

During the early phases of development, we noticed that our rendered images contained heavy specular (shading) aliasing. This was particularly bad because it hurt overall realism and made the images less convincing. At that point in time, mainstream WebGL engines treated specular anti-aliasing in a very primitive way, more or less by increasing roughness on the fly behind the scenes. Other techniques like LEAN Mapping (or any of its derivatives) impacted our asset pipeline, and we wanted to avoid that. Since we needed a better anti-aliasing implementation regardless (FXAA did not really cut it at all), we decided to give temporal anti-aliasing a go.

We built our TAA implementation from the ground up, starting with the most trivial version, which only produces viable anti-aliased images in static scenarios. Because of the nature of our applications, that was not enough: we needed good TAA in fully dynamic scenarios, with as little ghosting and flickering as possible. We found Playdead’s TAA implementation a very good source of inspiration for our own, and in the end we had a viable and fast TAA that runs well even on mobile.
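The step that matters most for suppressing ghosting in this family of techniques is clamping the reprojected history sample to the current frame's local neighborhood before blending. A toy scalar version (per color channel in practice, and with a smaller tolerance box than a real implementation) might look like this:

```javascript
// Blend a reprojected history sample with the current frame, after clamping
// the history to the min/max of the current frame's local neighborhood.
// This rejects stale history that the current frame cannot justify, which
// is what keeps moving objects from leaving ghost trails.
function resolveTaa(historySample, currentSample, neighborhood, blendFactor = 0.1) {
  const lo = Math.min(...neighborhood);
  const hi = Math.max(...neighborhood);
  const clampedHistory = Math.min(Math.max(historySample, lo), hi);
  // Exponential moving average toward the current frame.
  return clampedHistory * (1 - blendFactor) + currentSample * blendFactor;
}
```

With a blend factor of 0.1 the history dominates each frame, so static scenes converge to a stable anti-aliased result over roughly a dozen frames, while the clamp keeps dynamic content responsive.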

HDR and Image Grading

Our lighting is mostly image-based, which makes high dynamic range environments indispensable. Our IBL implementation is mostly run-of-the-mill: Unreal’s split sum approximation with a PMREM for specular, and a diffuse probe for diffuse. We chose the diffuse probe over spherical harmonics due to better quality.
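The essence of the split sum approach is that the specular integral factors into a prefiltered environment map fetch (with the mip level driven by roughness) multiplied by a 2D BRDF lookup term. A sketch of that lookup, with `prefilteredEnv` and `brdfLut` standing in for texture samples (the linear roughness-to-mip mapping is one common choice, not necessarily ours):

```javascript
// Map roughness to a mip level of the prefiltered environment map (PMREM).
function roughnessToMip(roughness, mipCount) {
  return roughness * (mipCount - 1);
}

// Split sum specular IBL: prefiltered radiance times the integrated BRDF,
// which comes back from the LUT as a scale/bias pair applied to F0.
function splitSumSpecular(roughness, nDotV, f0, prefilteredEnv, brdfLut) {
  const mip = roughnessToMip(roughness, prefilteredEnv.mipCount);
  const envRadiance = prefilteredEnv.sample(mip);
  const [scale, bias] = brdfLut(nDotV, roughness);
  return envRadiance * (f0 * scale + bias);
}
```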

Because we render in HDR, we needed solid color grading post-processing. We based our color grading implementation on John Hable’s articles. We toyed with the idea of having fully custom tonemapping, but went with ACES in the end since it looked good enough.
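For reference, the kind of ACES curve meant here is often implemented per channel via Krzysztof Narkowicz's widely used rational fit; the snippet below is that public fit, not necessarily the exact curve we shipped.

```javascript
// Narkowicz's "ACES Filmic Tone Mapping Curve" fit: maps scene-referred HDR
// values to a displayable [0, 1] range with a filmic shoulder and toe.
function acesFilm(x) {
  const a = 2.51, b = 0.03, c = 2.43, d = 0.59, e = 0.14;
  const y = (x * (a * x + b)) / (x * (c * x + d) + e);
  return Math.min(Math.max(y, 0), 1); // clamp to displayable range
}
```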

Mobile AR

Every graphics feature was available as originally intended on both desktop and mobile. In addition, on mobile platforms the framework offered an AR feature, first using a custom in-house AR solution, which was eventually replaced by ARCore on devices which supported it.

Additional Features

We continuously developed the framework as we went from one commercial application to the next. Some of the most notable graphics-related features are:

  • Ray-marched screen space refractions, which we successfully used to simulate the refractions that occur in cars’ head/tail/signal lights, which contain pieces of thick transparent glass.
  • Bloom, which we ended up using in an entirely different way than its original intent, but which ended up looking good.
  • Depth-of-field with circular bokeh. This is an interesting implementation, as it is not physically based, yet it is physically plausible.
  • GPU occlusion testing, used for some UI elements anchored to 3D coordinates.
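The core loop behind a screen space refraction (or any screen space ray-march) can be reduced to stepping a ray and comparing its depth against the depth buffer. A deliberately simplified 1D toy of that loop, with a mock depth array standing in for the real depth texture:

```javascript
// March a ray through a 1D "screen" and stop when the ray's depth passes
// behind the value stored in the depth buffer, i.e. the ray hit a surface.
// Real implementations march a 2D UV across a depth texture; the structure
// of the loop is the same.
function rayMarchDepth(origin, direction, depthBuffer, maxSteps = 32) {
  let x = origin.x, z = origin.z;
  for (let step = 0; step < maxSteps; step++) {
    x += direction.x;
    z += direction.z;
    const texel = Math.min(Math.max(Math.round(x), 0), depthBuffer.length - 1);
    if (z >= depthBuffer[texel]) {
      return { hit: true, texel }; // ray passed behind the stored surface
    }
  }
  return { hit: false, texel: -1 }; // left the screen or ran out of steps
}
```

For refractions, the marched direction is the refracted view vector projected into screen space, and the color at the hit texel is what shows through the glass.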

Making it all work on the web

Because we were displaying actual car models, the meshes needed to meet a particularly high standard from a geometry point of view. This meant we could not use anything low-poly: our scenes typically had over 1 million triangles, reaching almost 2 million in one particular application. At the same time, because our applications were running in the browser, the asset size restrictions were draconian, which forced us to come up with all sorts of solutions and tricks to keep asset sizes as low as possible. We use Draco-compressed .glb files for all meshes, with as few quantization bits as possible without losing visible quality.
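The quantization-bits tradeoff boils down to snapping each vertex attribute to an n-bit grid over its bounding range: fewer bits means smaller files but larger positional error. A small sketch of that math (illustrative, not Draco's internals):

```javascript
// Snap a value to an n-bit grid over [min, max], then dequantize it back.
// The round trip loses at most half a grid cell of precision.
function quantize(value, min, max, bits) {
  const levels = (1 << bits) - 1;
  const t = (value - min) / (max - min);   // normalize to [0, 1]
  const q = Math.round(t * levels);        // snap to the n-bit grid
  return min + (q / levels) * (max - min); // dequantize
}

// Worst-case round-trip error: half of one grid cell.
function maxQuantizationError(min, max, bits) {
  return (max - min) / ((1 << bits) - 1) / 2;
}
```

Tuning bits per asset means lowering `bits` until `maxQuantizationError` over the mesh's bounding box approaches the smallest displacement that would be visible on screen.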

Besides geometry, textures also needed to be cut down drastically. We wanted to use GPU texture compression to reduce bandwidth usage, but needing to run on all platforms meant using a different compression format for each OS. Each GPU-compressed format has its highs, lows, and quirks, and dealing with them is normal; we, however, had to deal with all of them at once. Eventually we gave the .basis format a try and managed to use it successfully, though not universally: some textures still had issues, especially ones with transparency. Another example of textures being cut down is the visible environments we use. Normally we would use an HDR lat-long texture and simply tonemap it along with everything else. However, the HDR texture was too big, so we dropped it completely. Instead, we use pre-tonemapped LDR textures in a GPU-compressed format. This of course complicates our rendering pipeline, especially TAA, since we essentially render everything in a ‘vacuum’.
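The per-platform juggling before a universal container like Basis amounts to probing which compressed-texture extensions the WebGL context exposes and picking one. The extension names below are the real WebGL ones; the preference order is an illustrative assumption:

```javascript
// Real WebGL extension names; the ordering here is illustrative.
const FORMAT_PREFERENCE = [
  'WEBGL_compressed_texture_astc',  // modern mobile GPUs
  'WEBGL_compressed_texture_s3tc',  // desktop (DXT/BC)
  'WEBGL_compressed_texture_etc',   // Android (ETC2/EAC)
  'WEBGL_compressed_texture_pvrtc', // older iOS
];

// In a browser, supportedExtensions would be gl.getSupportedExtensions().
function pickTextureFormat(supportedExtensions) {
  for (const ext of FORMAT_PREFERENCE) {
    if (supportedExtensions.includes(ext)) return ext;
  }
  return null; // fall back to uncompressed textures
}
```

Basis sidesteps this by shipping one transcodable file and converting to whichever of these formats the device supports at load time.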

Probably the worst restrictions we faced were the ones related to the actual graphics API. At the time, iOS and macOS did not support WebGL 2.0 (and, we suspected, never would), but even more painful was iOS’s lack of support for the WEBGL_draw_buffers extension, which cut off any hope of using MRT. This meant no deferred rendering, but also consider TAA: the motion vectors now need to be rendered in a separate pass, which might not sound very bad unless your scene has millions of triangles, like ours did.
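What that separate pass computes is simple in principle: a per-pixel motion vector is the difference between where a surface point projects this frame and where it projected last frame. A deliberately reduced 2D toy of that idea (real code uses the full current and previous view-projection matrices):

```javascript
// Mock "projection": a 2D camera offset stands in for a view-projection
// matrix, purely for illustration.
function project(point, cameraOffset) {
  return { x: point.x - cameraOffset.x, y: point.y - cameraOffset.y };
}

// Motion vector = current projected position - previous projected position.
// With MRT this falls out of the main pass for free; without it, the whole
// scene has to be rasterized a second time just to produce these values.
function motionVector(worldPoint, currentCamera, previousCamera) {
  const curr = project(worldPoint, currentCamera);
  const prev = project(worldPoint, previousCamera);
  return { x: curr.x - prev.x, y: curr.y - prev.y };
}
```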

Resource-intensive applications are never easy to handle, especially when they are cross-platform, and graphics-intensive applications even more so. Regardless, we managed to build a production-ready framework which we successfully used in several commercial applications.