solving transparency with dithering


Using alpha transparency on Quest is discouraged; it can hit performance quite badly. So what are your options?

If it's a very controlled scene, you may still go with it. If you have full control over the performance (nothing unexpected will appear, and you know precisely what the heaviest case in that place is), it might be safe to use. But most cases are not like that.

I was using alpha transparency for particle effects like smoke. Initially, I cut off fragments below a certain alpha threshold and rendered everything above it fully opaque, using GLSL's discard. Performance was much better than with alpha blending, and the visuals were acceptable. But I still wanted something more.
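
In GLSL terms, that kind of cutoff looks roughly like the sketch below (illustrative names, not the game's actual shader):

```glsl
// Hard alpha cutoff: fragments below the threshold are discarded,
// everything else renders fully opaque, so no blending is needed.
uniform sampler2D u_texture;
varying vec2 v_uv;

void main()
{
    vec4 col = texture2D(u_texture, v_uv);
    if (col.a < 0.5)
        discard;
    gl_FragColor = vec4(col.rgb, 1.0);
}
```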

I thought I could keep the discard but offset the threshold with a random value: dithering. And as I already had a pseudo-random value prepared for anti-banding (with so many smooth colour transitions, I really need anti-banding to avoid visible colour patches), I decided to reuse it. Even without any tweaking, it looked promising.

That value was pseudo-random, based on the fragment's position relative to the camera and on time, which meant it was always different for the same pixel (even if the camera/head didn't move). That was fine for fast-changing shapes like steam, and it was quite okay across both eyes, even though the pixels differed; I think it actually helps a bit with the pretend blending.
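
A sketch of that kind of dithered cutoff, using a generic hash of the camera-relative position plus time (illustrative names; not necessarily the game's exact function):

```glsl
// Dithered cutoff: the threshold is a pseudo-random value in [0, 1) derived
// from the fragment's camera-relative position and the current time, so it is
// different every frame. A fragment with alpha 0.3 survives roughly 30% of
// the time, which reads as partial transparency without any blending.
uniform sampler2D u_texture;    // illustrative names
uniform float u_time;
varying vec2 v_uv;
varying vec3 v_cameraRelPos;    // fragment position relative to the camera

// Generic 3D-to-1D hash.
float hash13(vec3 p)
{
    p = fract(p * 0.1031);
    p += dot(p, p.zyx + 31.32);
    return fract((p.x + p.y) * p.z);
}

void main()
{
    vec4 col = texture2D(u_texture, v_uv);
    if (col.a < hash13(v_cameraRelPos + u_time))
        discard;
    gl_FragColor = vec4(col.rgb, 1.0);
}
```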

The larger explosions and smoke were different. The whole random area looked like glitter: instead of slowly fading away and not attracting the eye, it was shimmering.

The first solution I wanted to try was based on the fragment's location on the render target. It worked well. Unless you rotated your head. A minor inconvenience, right? It looked like you had something on your eyes/headset. I expected that but thought it might not be a big deal. It was. And I already knew the solution.

This effect is much more apparent in VR.
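
For reference, a sketch of that screen-space variant, with an ordered-dither (Bayer-style) threshold derived from gl_FragCoord (again illustrative, not the game's actual shader):

```glsl
// Screen-space dithering: the threshold depends only on the pixel position,
// so the pattern is perfectly stable per pixel, but it is glued to the
// render target and appears to slide over the world when the head rotates.
uniform sampler2D u_texture;
varying vec2 v_uv;

// Compact 4x4 ordered-dither (Bayer-style) thresholds.
float bayer2(vec2 a) { a = floor(a); return fract(a.x * 0.5 + a.y * a.y * 0.75); }
float bayer4(vec2 a) { return bayer2(0.5 * a) * 0.25 + bayer2(a); }

void main()
{
    vec4 col = texture2D(u_texture, v_uv);
    if (col.a < bayer4(gl_FragCoord.xy))
        discard;
    gl_FragColor = vec4(col.rgb, 1.0);
}
```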

Around that time, I was going through my backlog and stumbled upon Myst on Quest. In the intro, dithering is used for the disappearing falling body; it is also used for the water splashes when the bridge to the clock appears. It was the right time to play the game, and it only reassured me that I should go with this solution. It still isn't as smooth as alpha blending, but in my opinion it looks a bit better than what I had.

I remembered what Lucas Pope did for Return of the Obra Dinn and covered in detail (https://forums.tigsource.com/index.php?topic=40832.msg1363742#msg1363742). I didn't want to do the sphere mapping, though: the resolution can change dynamically, I didn't want to do a texture read, and I only needed noise. I actually wanted to avoid any patterns.

I used the same approach I have for "voxels". Well, almost. For voxels, I use the model-space location, quantised to discrete values, as the basis for a pseudo-random value. For dithering, I needed the world-space location, but relative to the head's location (position without orientation).

To achieve that, in the vertex shader, when I calculate the vertex location in camera space, I do one more transform to reverse the rotation. Then I feed it to the fragment shader, where the location is normalised (to end up on a sphere, which is a bit costly) and then quantised. The voxel size is based on the camera FOV and the render-target resolution (assuming the sphere's radius is 1, as the location is normalised). One problem is aliasing: it shimmers a bit, and the proper fix would be supersampling, but that's out of the question. I increased the voxel size a bit to make the aliasing less apparent (especially in the foveated areas, where it still happens a bit). Even then, it looked good enough for larger smoke. There is still a problem when the object moves, but I may tweak the explosion smoke to appear faster.
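
Under the assumption that the renderer supplies the rotation part of the camera-to-world transform as a separate uniform, the setup looks roughly like this sketch (illustrative names, not the actual shaders):

```glsl
// ---- vertex shader (sketch) ----
// u_cameraRotation is assumed to hold the rotation part of the camera-to-world
// transform; applying it to the camera-space position undoes the view rotation,
// leaving a world-oriented position relative to the head.
uniform mat4 u_modelView;
uniform mat4 u_projection;
uniform mat3 u_cameraRotation;
attribute vec3 a_position;
attribute vec2 a_uv;
varying vec3 v_headRelPos;
varying vec2 v_uv;

void main()
{
    vec4 viewPos = u_modelView * vec4(a_position, 1.0);
    v_headRelPos = u_cameraRotation * viewPos.xyz;
    v_uv = a_uv;
    gl_Position = u_projection * viewPos;
}

// ---- fragment shader (sketch) ----
uniform sampler2D u_texture;
uniform float u_cellSize;    // derived from the camera FOV and render-target
                             // resolution so one cell covers a few pixels
varying vec3 v_headRelPos;
varying vec2 v_uv;

// Generic 3D-to-1D hash (same idea as before).
float hash13(vec3 p)
{
    p = fract(p * 0.1031);
    p += dot(p, p.zyx + 31.32);
    return fract((p.x + p.y) * p.z);
}

void main()
{
    // Project the head-relative position onto a unit sphere around the head,
    // then quantise it into cells; the hash of the cell is the dither threshold.
    vec3 onSphere = normalize(v_headRelPos);
    vec3 cell = floor(onSphere / u_cellSize);

    vec4 col = texture2D(u_texture, v_uv);
    if (col.a < hash13(cell))
        discard;
    gl_FragColor = vec4(col.rgb, 1.0);
}
```

Because the cells live on a head-centred sphere with world orientation, the pattern stays put when the head rotates, but it is still independent of the objects themselves.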

Of course, I turned it on for the much more dynamic steam too, but then the dithered parts looked static, so I reverted to fully random noise there. This is, again, much more apparent in the headset than on a flat screen.

The downside of this technique is that it is not compression-friendly. Sort of. Or maybe it is. It really depends: it may result in blocky artefacts, but the compression may also help a bit and blend things together. It's just unpredictable.

There's one side-effect that I hadn't anticipated and that I like a lot.

You don't have to worry about sorting objects. With dithered transparency, they come sorted out of the box: every fragment is either fully opaque or discarded, so the depth buffer handles the ordering.


The general aesthetics of the smoke and steam have changed. It kind of mixes the low-poly look with the voxel approach, which is exactly what I was looking for.


Comments


Ohh, so that's why all animations in Quill for Quest look like this.

It's actually the recommended approach for transparency on this hardware.

It's one of the possible solutions.

I've also seen a similar approach used on PC, although there it was used to get water to work with deferred shading. It involved rendering depth info at a lower scale, using deferred shading, and then mixing the results.

In the end, I decided to use dithering for PC too, but mixed with alpha, as it adds fuzziness to the smoke.

Also, in some cases I decided not to use it at all: muzzle flashes, some explosions and particles looked better either scaled down or cut sharp and clean.

great solution! it looks amazing! :D

wow! thank you for sharing this! really really fascinating and well reasoned