How to use RTP’s parallax shaders in Unity VR

If you found your way here and are reading this, it is probably safe to assume you already know about Tomasz Stobierski’s Relief Terrain Pack v3.2 and the issues its parallax effects have had since VR support in Unity became native. In that case you will probably want to skip the next section.

The Problem: Two Cameras vs. One

For everyone who just joined: the RTP pack is one of the most popular assets for Unity developers, known for its outstanding rendering of terrains. Two of its features are parallax mapping (PM) and its slightly more sophisticated brother, parallax occlusion mapping (POM). In a nutshell, they are neat little tricks that add a lot of three-dimensional detail to surfaces without actually increasing the complexity of the meshes involved, so they come comparatively cheap in terms of processing power.

Example of parallax mapping effects
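To give a rough idea of how plain parallax mapping works (this is the textbook version, not RTP’s far more elaborate code): the texture coordinates are shifted along the tangent-space view direction, proportionally to a sampled height value, so flat geometry appears to have depth. A minimal sketch, with placeholder names like _HeightMap and _ParallaxScale:

```hlsl
sampler2D _MainTex;
sampler2D _HeightMap;   // grayscale height map (placeholder name)
float _ParallaxScale;   // typical values around 0.02 - 0.08

// Inside the fragment function; viewDirTS is the normalized
// tangent-space view direction (z pointing towards the viewer).
float height = tex2D(_HeightMap, i.uv).r * _ParallaxScale;
// Shift the UVs along the view direction, scaled by the height,
// so the surface appears displaced without extra geometry.
float2 parallaxUV = i.uv + (viewDirTS.xy / viewDirTS.z) * height;
fixed4 color = tex2D(_MainTex, parallaxUV);
```

The crucial point for what follows: the effect depends entirely on the view direction, so if the shader gets that direction wrong, the illusion falls apart.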

As long as the only VR support in Unity was Oculus’ plugin package for the Rift, everything was fine: the package automatically created two cameras, positioned them where your eyes would be in the virtual world, and had each camera render the output for one side of the Rift’s display. While this worked very well, it was not ideal for everybody, especially people with older hardware, because pretty much all expensive calculations had to be done twice per frame.

So Unity went ahead and added native VR support, which greatly improved performance because it now uses only a single camera. Everything is of course still rendered from two different perspectives, but a lot of other expensive per-frame work can be shared that way. Unfortunately, at the time of writing, Unity does not tell a shader the position of the eye it is currently rendering for, only the position of the camera’s GameObject (in the middle between both eyes). Long story short: while you would still see perfectly working parallax mapping when watching somebody’s VR experience on a computer monitor, the person wearing the headset would only observe a flat surface with a strangely morphing texture.

Unity did react to this problem and implemented the optional use of the old two-camera system. While this of course works, for the reasons above it has a pretty severe impact on framerate.

The Solution (read: Hack)

So here is my way to fix this and get VR parallax mapping back without killing performance. The actual idea came from forum user davidlively, who suggested the overall method used.

First, we need a little helper script that will push information about the currently rendering eye’s position to where the shaders can pick it up.
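Something along these lines should do; this is a minimal sketch assuming multi-pass stereo rendering (so OnPreRender fires once per eye) and the Unity 5.x UnityEngine.VR tracking API. The global name _EyeOffset is my own choice, not anything RTP defines:

```csharp
// ShaderEyeOffset.cs
// Pushes the current eye's offset from the "center eye" into a global
// shader vector, once per eye, just before that eye is rendered.
using UnityEngine;
using UnityEngine.VR; // UnityEngine.XR in later Unity versions

[RequireComponent(typeof(Camera))]
public class ShaderEyeOffset : MonoBehaviour
{
    private Camera cam;

    void Awake()
    {
        cam = GetComponent<Camera>();
    }

    void OnPreRender()
    {
        // Offset of the eye that is about to render, relative to the
        // center eye, in tracking space.
        Vector3 offset = Vector3.zero;
        if (cam.stereoActiveEye == Camera.MonoOrStereoscopicEye.Left)
            offset = InputTracking.GetLocalPosition(VRNode.LeftEye)
                   - InputTracking.GetLocalPosition(VRNode.CenterEye);
        else if (cam.stereoActiveEye == Camera.MonoOrStereoscopicEye.Right)
            offset = InputTracking.GetLocalPosition(VRNode.RightEye)
                   - InputTracking.GetLocalPosition(VRNode.CenterEye);

        // The camera is positioned via its parent GameObject, so the
        // parent's transform turns the tracking-space offset into a
        // world-space one.
        if (transform.parent != null)
            offset = transform.parent.TransformVector(offset);

        Shader.SetGlobalVector("_EyeOffset", offset);
    }
}
```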

Basically, for each eye we store the offset of its rendering position from the “center eye”. Note the use of the parent’s transform: that is the GameObject we use to manipulate our camera’s position. If your setup differs, this script will probably not work correctly. Add the script to all your cameras (not their parents) that will render terrain.

Next, we modify RTP’s parallax shaders (I will assume you can figure out where to put things, so I skip unchanged code with /* ... */):
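Stripped of all RTP specifics, the gist of the change looks something like the sketch below; the exact placement inside RTP’s shader files depends on which features you have enabled, and _EyeOffset is the global vector set by our script:

```hlsl
/* ... */

// Global vector set per eye by ShaderEyeOffset.cs
// (the name _EyeOffset is my choice, not RTP's).
float3 _EyeOffset;

/* ... */

// Before the PM/POM code uses the view direction: the unnormalized
// view vector is (camera position - fragment position), so adding the
// eye offset turns it into (eye position - fragment position).
float3 correctedViewDir = _WorldSpaceCameraPos + _EyeOffset - IN.worldPos;

// RTP's parallax math runs in tangent space, so the corrected vector
// still has to be rotated into tangent space before it can replace
// IN.viewDir in the parallax calculations.

/* ... */
```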

What we are doing is rather simple in principle: we always offset the view direction by whatever our script tells us. It is just the complexity of the RTP shaders and their compiler directives that might be a little intimidating.

If you did everything as described, you should now be able to enjoy RTP’s parallax mapping in all its VR glory again!

Disclaimer

Of course this is just a hack, nothing like a final solution. I am sure Unity will fix it sooner or later, maybe by correcting IN.viewDir for the eye position in the first place. But then again, that might break so many things that they need to come up with something else. A while ago they mentioned adding an eye index to the default shader variables, but I have not heard anything about that since, which is the main reason I am posting my hack now. An eye index alone would not account for different IPDs anyway, the way ShaderEyeOffset.cs does.

So until there is an official way to handle VR situations in shaders, this solution works for me. Please also note that, on the one hand, I am not a shader guy at all, and on the other, I only use a subset of RTP’s features in my current project. Thus it is possible that my changes break something I am not aware of – so while I hope this might be useful to somebody, you have been warned.

There is also still a problem where objects using the blend mesh feature (with PM or POM enabled) do not fade seamlessly into the terrain. Unfortunately I cannot provide a screenshot, because the issue is only visible in VR! I also think it is not related to my changes, but then again, I might be wrong.

Also keep in mind that whenever you update the RTP package, you will lose all of these changes.
