Stereoscopic rendering

Hi All,

I thought I'd start a thread on this to let you guys know what I'm working on, get some input and, who knows, see if this is useful to anyone.

I'm still new to Godot but have been building my own engine for a while. After discovering Godot I thought it would be fun to learn it and see if I can contribute to the project, instead of reinventing the wheel in my own little project in areas where I am not very good (my experience lies mostly in 3D rendering, not in things like sound or GUI).

Also, one of the friends I've made at a regular developer meetup I attend here in Sydney has done a lot of work modding existing games to add stereoscopic support to them, and I was impressed by how well this can work when done properly. It always felt like a gimmick to me until I saw his version of Abe's Oddysee in hi-def stereoscopic 3D, and it is something I've added to my own project as well. Now with VR it's an even more interesting thing. I know that the Godot devs are looking into adding VR to 3.0, maybe I can help here :)

Earlier in the week Lamberto Tedaldi made a post on the Godot Facebook page introducing his solution for split-screen stereoscopic rendering on an Android phone. It gave me a bit of a boost in understanding a few things I hadn't grasped about Godot and a nice starting point to work from.

One of the issues I see with his solution, which can't be solved without diving into the C++ source code, is that it results in a 'toe-in' of the two cameras, as both 'look at' a single convergence point set some distance away in front. Now I haven't fully mastered the theory behind why this is actually a problem, and my info is based on what is written about 3D displays, so I'm not sure to what extent it applies to HMDs as well, but so far I've been led to believe the cameras should look in parallel directions and that the frustum should instead be skewed based on the convergence distance. A good bit of information on the subject can be found in this PDF from NVIDIA: https://www.nvidia.com/content/GTC-2010/pdfs/2010_GTC2010.pdf

Anyways, that is the approach I implemented in my own engine some time ago and it has indeed given much better results on my 3D monitor (I'm using a 3D TV at home that can take split-screen input but also has built-in support for cards with hardware 3D support and separate left and right buffers). Now I want to replicate that in Godot :)

As part of learning more of the internals of Godot I've started modifying the camera code to support configuring a camera as a proper left-eye or right-eye camera implementing an asymmetric frustum. I got a fair way last night and eventually reached a point where I think the correct projection matrix is applied to each eye, but where it seems not to be using that frustum for clipping, falling back to the normal projection frustum instead. It was late and I decided it was time for bed :)

I'll spend some more time on it tonight, time allowing, and am planning on checking a first version into my fork on my GitHub page over the weekend. I just wanted to introduce what I'm doing and see if there are any people interested in looking over my shoulder to check that what I'm doing makes sense and that the approach I'm taking fits in with the overall way Godot is designed to work. I also want to make sure I'm not reinventing the wheel here if others are already going down this path.

My first milestone will simply be getting the left and right camera projection matrices to work in combination with the split-screen approach that Lamberto created, and then adjusting the approach to be able to switch between the side-by-side normal-aspect-ratio output for an HMD and the 3DTV output where the two images get stretched, so I need to double the aspect ratio as a result.

I then want to start looking into what it'll take to tie into hardware support with left/right buffers and see what it'll take to support things like NVIDIA's 3D Vision. One thing here will be ensuring we're not doubling up work, as it should be possible to render to both left and right buffers simultaneously, or at least more in parallel, and remove a lot of the overhead of rendering a scene twice.

Cheers,

Bas

OK, just a small update on my progress here. I finally found the bug in my code that was making everything go haywire. I had simply mixed up the row vs column representation of matrices; I thought they were reversed from what I was doing in my engine, but they actually line up.

To whom it may concern: the commented-out code in CameraMatrix::set_frustum would work if you swap the indices around, and it would be more readable than the code currently being executed (though it does exactly the same thing).

Anyway, calculating a standard mono camera matrix can be done as follows:

    ymax = p_z_near * tan(p_fovy_degrees * Math_PI / 360.0f);
    xmax = ymax * p_aspect;
    set_frustum(-xmax, xmax, -ymax, ymax, p_z_near, p_z_far);
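Just to make those numbers concrete (purely illustrative values): with p_fovy_degrees = 65, p_z_near = 0.05 and p_aspect = 16/9 this gives ymax = 0.05 * tan(32.5°) ≈ 0.0319 and xmax ≈ 0.0319 * 1.778 ≈ 0.0566, so the near plane of the frustum runs from -0.0566 to 0.0566 horizontally and -0.0319 to 0.0319 vertically.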

CameraMatrix::set_perspective has a slightly different approach, but I believe the end result is similar if not identical; for the stereoscopic adjustment, however, I only know the approach of calculating a frustum. I've created a second CameraMatrix::set_perspective that takes three extra parameters:

- p_eye, which is 0 (mono, same output as a normal camera), 1 (left) or 2 (right)
- p_intraocular_dist, which is the distance between the two eyes
- p_convergence_dist, which is the depth at which an object renders the same to both eyes. Anything before this distance will pop out of the screen, and anything behind it will seem to be behind the screen.
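In case it helps to picture it, the declaration I'm working with looks roughly like this (treat it as a sketch; the exact parameter order and types are simply what I happen to be using in my fork):

    void CameraMatrix::set_perspective(real_t p_fovy_degrees, real_t p_aspect,
            real_t p_z_near, real_t p_z_far,
            int p_eye, real_t p_intraocular_dist, real_t p_convergence_dist);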

To create the frustum shift needed for a parallel stereo matrix you calculate a shift value as follows, based on the intraocular distance and the convergence distance:

    frustumshift = (p_intraocular_dist / 2.0) * p_z_near / p_convergence_dist;
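As a quick sanity check with made-up numbers: an intraocular distance of 0.065 (about 6.5 cm), a near plane at 0.05 and a convergence distance of 2.5 gives frustumshift = (0.065 / 2.0) * 0.05 / 2.5 = 0.00065, i.e. a very small horizontal shear of the near plane.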

Then you either add (left eye) or subtract (right eye) this shift like so:

    set_frustum(-xmax + frustumshift, xmax + frustumshift, -ymax, ymax, p_z_near, p_z_far); /* left eye */
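And for the right eye the shift is subtracted instead:

    set_frustum(-xmax - frustumshift, xmax - frustumshift, -ymax, ymax, p_z_near, p_z_far); /* right eye */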

At this point we have the correct shift of our frustum, however our two cameras are still looking forward from the same point in space. Lamberto made this shift by moving the camera, but I'm doing it inside the matrix code. Firstly, it's easy to build in here as we have all our values handy, but there are two future reasons I'm doing this: 1) ideally I want to change this logic at some point so that we don't have two cameras but one camera that is able to render both viewports; 2) when you're using an HMD like an HTC Vive or Oculus Rift, the IOD is either adjusted on the headset itself or set through the middleware tools, and we're actually not going to calculate our camera matrices at all, we'll be getting them from the HMD.

Anyway, I'm getting off topic. We move the cameras by half the IOD. Now this may seem counterintuitive, but we actually add this for the left eye and subtract it for the right (we're not moving the camera, it stays at 0,0,0; we're moving the world):

    modeltranslation = p_intraocular_dist / 2.0; /* left eye */

    // translate the matrix by (modeltranslation, 0.0, 0.0)
    CameraMatrix cm;
    cm.set_identity();
    cm.matrix[3][0] = modeltranslation;
    *this = *this * cm;
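Putting all of the above together, the stereo-aware set_perspective in my fork looks roughly like this (a sketch from memory, assuming the existing set_frustum / set_identity helpers and the matrix[4][4] layout of CameraMatrix; check the fork for the exact code):

    void CameraMatrix::set_perspective(real_t p_fovy_degrees, real_t p_aspect, real_t p_z_near, real_t p_z_far,
            int p_eye, real_t p_intraocular_dist, real_t p_convergence_dist) {

        // half extents of the near plane for the given vertical FOV
        real_t ymax = p_z_near * tan(p_fovy_degrees * Math_PI / 360.0);
        real_t xmax = ymax * p_aspect;

        // shear the frustum so both eyes look parallel but converge at p_convergence_dist
        real_t frustumshift = (p_intraocular_dist / 2.0) * p_z_near / p_convergence_dist;

        real_t left, right, modeltranslation;
        switch (p_eye) {
            case 1: // left eye
                left = -xmax + frustumshift;
                right = xmax + frustumshift;
                modeltranslation = p_intraocular_dist / 2.0;
                break;
            case 2: // right eye
                left = -xmax - frustumshift;
                right = xmax - frustumshift;
                modeltranslation = -p_intraocular_dist / 2.0;
                break;
            default: // mono, same output as a normal camera
                left = -xmax;
                right = xmax;
                modeltranslation = 0.0;
                break;
        }

        set_frustum(left, right, -ymax, ymax, p_z_near, p_z_far);

        // move the world by half the IOD; the camera itself stays at the origin
        CameraMatrix cm;
        cm.set_identity();
        cm.matrix[3][0] = modeltranslation;
        *this = *this * cm;
    }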

Now that's all there is to it. Well, almost. This is where I am at for calculating the correct matrices. There are two more things missing from this part of the code so far: 1) the above works nicely for a side-by-side stereoscopic render for an HMD, but for a 3DTV the image gets stretched so the aspect ratio is half what it should be, and I need to find a nice way to configure that the aspect ratio needs adjusting (one possible tweak is sketched below); 2) the above code has the center of the camera in the center of the viewport, which is perfect for a 3DTV, but for an HMD the center of the camera needs to be aligned with the center of your eye, and your eye is more inwards, so we need another translation to adjust for this. This could possibly be done with the H Offset property of the camera.
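For the first point, the simplest idea I have so far is to feed the frustum calculation a doubled aspect ratio whenever we're rendering stretched side-by-side output for a 3DTV, something along these lines (hypothetical, not in the fork yet):

    xmax = ymax * (p_aspect * 2.0); /* stretched side-by-side output for a 3DTV */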

OK, that's the heart of it, which is why I went into so much detail, but other classes needed to be altered as well. I had to change the VisualServerRaster class and its parent class so they can deal with the extra properties of the camera. I also needed to make a number of changes to the Camera class itself.

I will talk about both of those in more detail tomorrow after I get a good night's sleep. For both classes there are a few things that should be improved but where I simply lack experience with Godot, so I'll be interested in feedback. I will also check the source code so far into GitHub tomorrow.

Finally, I need to adjust Lamberto's example. I've split his example in two so the stereoscopic camera can be used independently of the gyroscope logic he wrote for Android, but I need to clean it up some more. All in all it's looking pretty good on my 3DTV so far :)

To be continued

Hmm, I don't know why, but the second comment I wrote on this discussion last week doesn't seem to have been added.

I'll find some time later this week to redo some of the explanation of the rest of the changes I made however if you want to try out what I've done so far my fork on github with the source code changes is: https://github.com/BastiaanOlij/godot

A sample Godot project that uses the enhancements can be found here: https://github.com/BastiaanOlij/StereoCameraTest

Note that this fork also contains some experimental changes to get the accelerometer/gyro/magnetometer readings on iPhone. These changes are untested as I'm having some trouble compiling Godot for iPhone; it refuses to build the 32-bit/64-bit Intel(?) builds and switches back to ARM for some reason.

To be continued

7 years later

Hi,

I have a question about this. This is for anaglyphic 3D. I am totally new to Godot. I want to create a scene with SBS 3D, because I have a smartphone with a 3D display and I can tell my smartphone which com.. app should be rendered in 3D using the app id in a config file.
I remember I accomplished that once, but had the problem that it wasn't really cross-eyed and the 3D effect was terrible.

I would like to ask how I can create correct SBS viewports where the cameras are actually crossed on a focus point.
I tried using the new XRCamera, but OpenXR is not working and it wasn't really working at all.

thank you very much
kind regards, Thomas