pre On second thought, you could make each such patch the size of a pixel and still render it with a small number of cameras by sending the per-pixel projection matrix to the shader via 4 pre-rendered textures. This may even run in realtime.

^ Forget it. Not going to work. It'll have to be raytracing.

    xyz I mean it won't be; if I can't make it work in Godot then I'd just go back to Unity, where it's working fine.

      pre Which way did you do it in Unity? The multitude of cameras? Whatever you did in Unity can probably be replicated in Godot, but if you already have it set up in Unity and you're happy with it, why move to Godot? Just produce what you need there.

      However, raytracing is the optimal approach for solving this problem in general, imho.

      Looking through the Unity code that I got from somewhere too long ago to remember.

      Looks like it's basically just calling Camera.RenderToCubemap twice, then doing a convert to equirect.

      Confusing. Perhaps it can indeed just be done with 12 squares somehow?

      Though they do pass the render function a MonoOrStereoscopicEye parameter. So I guess that makes the cube-map renderer head-shape aware?

      Can't look at the Unity source code and see what it really does I guess.

        pre Looks like it's basically just calling Camera.RenderToCubemap twice, then doing a convert to equirect.

        Then it does precisely what you said you don't want to do in your initial post, i.e. it rotates each eyeball around its center. So, are the results from this satisfactory in terms of how the stereoscopy looks or not?

          xyz I really think passing StereoscopicEye.Right or StereoscopicEye.Left for that parameter affects what the render does, so that it's not just projected from the origin the way it is when you pass StereoscopicEye.Mono.

          But I won't have access to test with it till the weekend really.

          Seems like it is indeed going to take some hacking on Godot's source-code, which is very likely beyond my immediate skills.

          Wonder if I'd get a bite if I posted a month's wages on Replit Bounties.

            I might be wrong, but I'm pretty sure what you're asking for is logically impossible.
            In the second gif in your original post, you can see that the position of the cameras changes when you rotate. Because of that, the image that they see is different, so you can't combine those images.
            Here's a quick example of what I mean:

            In the left image, the green point will be obstructed by the red point, but once the head turns (the right image), the green point will no longer be obstructed.
            You would need to have a set of images for each head angle. Maybe you could get away with a finite number of them and do some blending/interpolation, but I'm not sure how well it would work.

              pre It probably renders each cubemap image as a regular stereo image. The problem is in stitching them together. Because of the slightly different camera position in each cubemap face (if we assume it rotates both cameras around the same pivot), the faces won't fit together seamlessly. Maybe it just does some edge blending when sampling the cubemap to make the final equirectangular projection. But it's hard to tell without seeing the actual results.
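
              For what it's worth, the cubemap-to-equirect step itself is just a per-pixel direction lookup, so any blending would have to happen where those lookup directions cross cube face edges. A minimal sketch of the mapping in GDScript (my own illustration, not anything taken from the Unity code):

              ```gdscript
              # Map an equirectangular output pixel (u, v in 0..1) to the view direction
              # to sample from the cubemap. Godot convention: -Z is forward, +Y is up.
              func equirect_uv_to_direction(u: float, v: float) -> Vector3:
                  var lon := (u - 0.5) * TAU   # longitude, -PI..PI around the vertical axis
                  var lat := (0.5 - v) * PI    # latitude, +PI/2 at the top of the image
                  return Vector3(cos(lat) * sin(lon), sin(lat), -cos(lat) * cos(lon))
              ```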

              pre Seems like it is indeed going to take some hacking on Godot's source-code

              Why? What could you possibly hack in that you can't get from the regular build?

              I found this article, which describes a one-pixel-columns approach in Unity. This can be done in Godot without source code interventions.
              But this is still an approximation compared to what a raytracer would produce. So again: spare yourself the trouble and do it the easy way with a raytracer. Use the right tool for the job.

              Also why do you need environment maps? If you already have a scene that can be rendered by a game engine, just run it in real time.

              LoipesMas You would need to have a set of images for each head angle. Maybe you could get away with a finite number of them and do some blending/interpolation, but I'm not sure how well it would work.

              Yeah that's exactly what the OP is proposing. It could be done with a programmable ray tracer because you can alter the view ray origin for each rendered pixel. With realtime rendering your rays are pretty much set in stone for the entire rendering pass. So you'll need to stitch a large number of small images, each rendered from a slightly different camera position/angle. Ideally, those images should be only one pixel in size.

              Looks like V-Ray already has a built-in option to render stereo cubemaps. Not sure if Blender's Cycles could be harnessed to do it without too much hassle.

                xyz Interesting article. Could maybe do something like that in Godot if all else fails. Though presumably a C++ function built into the engine, like Unity has there, would be faster.

                The project is a VR project. How are you gonna do VR in a ray-tracer at 90 frames a second?

                Then when things are edited, the user may render to a 360 stereo video for upload to a VR video-hosting platform; that's what the output-to-cubemap function is for.

                I guess output to FBX for import into Blender could be the proposal here? But I don't think my target users are going to be able to do things like that.

                We'll know more about what the Unity render is doing when I can experiment with it more tomorrow.

                  pre The project is a VR project. How are you gonna do VR in a ray-tracer at 90 frames a second?

                  Ok. So you need it to run in realtime and capture it into 360 stereo video? You should have mentioned that in the first post. Then the pixel-columns approach probably won't be fast enough, as your rendering shaders would need to process the whole scene's vertex data as many times per frame as there are columns (e.g. 4096 scene passes for a 4096-pixel-wide panorama).

                  The likely bottleneck here is on the GPU side, so doing it in native code wouldn't make much difference. The whole feature may not really be feasible in realtime, except maybe for very light (vertex-wise) scenes. But I could be wrong in this estimate. Go and implement it using viewports, shaders and GDScript, then profile it and see where the bottlenecks are. If they are indeed on the CPU side then you can think of porting what you have into native code. On the other hand, if they are on the GPU side, there's not much you can do about it other than try a different approach.

                  Do you know if somebody else managed to implement this in a project or a commercial product?

                  I need the editing to run in real-time, but the export to 3d-video doesn't need to be real-time.

                  I am not aware of anyone trying to do what I am trying to do, but what I had been doing in Unity, until a couple of months ago when I decided to try Godot, was working. It has made all the movies at starshipsd.com by essentially acting the parts in VR, editing the results, and then rendering to 360/3D video when the export button is pressed.

                    pre I need the editing to run in real-time, but the export to 3d-video doesn't need to be real-time.

                    Then it should be doable.

                      LoipesMas

                      You’re technically correct. However, in practice it turns out that the blend the OP is proposing (for each angle, render only one vertical slice directly ahead of the camera) works well enough. It’s what the Cardboard Camera app does, and in fact this is how all 3D 360 images work today.
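
                      To make the geometry concrete: for each slice angle the two cameras just get nudged sideways, perpendicular to that slice's view direction, by half the interpupillary distance, and look straight out along the slice. A rough GDScript sketch of that placement (my own illustration; the helper name and IPD value are made up):

                      ```gdscript
                      const IPD := 0.065  # metres; assumed value

                      # Camera transform for one vertical slice of the panorama: look along the
                      # slice's yaw, with the eye pushed sideways along the tangent of a circle
                      # of radius IPD/2 (left eye one way, right eye the other).
                      func slice_camera_transform(yaw: float, left_eye: bool) -> Transform3D:
                          var forward := Vector3(sin(yaw), 0.0, -cos(yaw))
                          var sideways := Vector3(cos(yaw), 0.0, sin(yaw))
                          var origin := sideways * (IPD * 0.5) * (-1.0 if left_eye else 1.0)
                          return Transform3D(Basis.looking_at(forward, Vector3.UP), origin)
                      ```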

                      So here's the difference between Unity rendering an equirectangular output for Mono vs Left-Eye.
                      Mono at the bottom.

                      You can see that the edging on the floor gets closer to the camera in the Left version, presumably because the origin is offset by my exaggerated 0.5m pupil-distance.

                      The technique in that article seems pretty similar to the monstrous system I described, but instead of setting up eight thousand cameras they move one camera around and call a render() function for it in each position. Which I guess would be easier on memory.

                      Neither Godot's Camera3D nor its Viewport class seems to have a render function. But you could always just do one pixel-column per game-frame or something, I guess. Spin the cameras and fill in the texture. Maybe the frame-rate can get really high if you're only rendering a 1x4096 pixel frame.
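
                      Something along those lines might look like this in Godot 4: a narrow SubViewport set to update once, the camera turned to the next column's yaw each frame, and the rendered strip copied into the output image after the draw completes. Just a rough sketch; the node paths, strip size and 4096x2048 output are assumptions:

                      ```gdscript
                      @onready var strip_viewport: SubViewport = $SubViewport      # a few pixels wide, full output height
                      @onready var strip_camera: Camera3D = $SubViewport/Camera3D  # very narrow FOV

                      const OUT_WIDTH := 4096
                      const OUT_HEIGHT := 2048
                      var column := 0
                      var panorama := Image.create(OUT_WIDTH, OUT_HEIGHT, false, Image.FORMAT_RGBA8)

                      # Render one vertical column per engine frame: aim the camera at this column's
                      # yaw, ask the viewport to redraw once, wait for the frame to be drawn, then
                      # copy the middle column of the strip into the panorama.
                      func render_next_column() -> void:
                          strip_camera.rotation.y = -TAU * float(column) / float(OUT_WIDTH)
                          strip_viewport.render_target_update_mode = SubViewport.UPDATE_ONCE
                          await RenderingServer.frame_post_draw
                          var strip := strip_viewport.get_texture().get_image()
                          var mid := int(strip.get_width() / 2.0)
                          panorama.blit_rect(strip, Rect2i(mid, 0, 1, OUT_HEIGHT), Vector2i(column, 0))
                          column += 1
                      ```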

                      xyz

                      pre Yeah, the way Godot handles offscreen buffer rendering (using viewports) is kind of stringent. The devs should think about implementing a more flexible system.

                      You'd have to go with a multitude of cameras/viewports, but I think Godot may handle it well if you run it on powerful enough hardware. And you can always render in several passes, each handling a certain number of cameras.
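
                      A rough sketch of how such a batch could be spawned from code (GDScript again, with made-up sizes; the per-eye sideways offset is the same idea as above):

                      ```gdscript
                      # Build one eye's worth of SubViewport/Camera3D pairs, each covering one
                      # vertical slice of the panorama. eye_sign is -1.0 for the left eye, +1.0
                      # for the right; the slice size and IPD are assumptions.
                      func build_eye_cameras(count: int, eye_sign: float, ipd := 0.065) -> Array[SubViewport]:
                          var viewports: Array[SubViewport] = []
                          for i in count:
                              var yaw := TAU * float(i) / float(count)
                              var vp := SubViewport.new()
                              vp.size = Vector2i(8, 2048)                # a few pixels wide per slice
                              vp.render_target_update_mode = SubViewport.UPDATE_ONCE
                              var cam := Camera3D.new()
                              cam.keep_aspect = Camera3D.KEEP_WIDTH      # treat fov as horizontal FOV
                              cam.fov = 360.0 / float(count)             # each camera covers one slice
                              cam.rotation.y = -yaw
                              cam.position = Vector3(cos(yaw), 0.0, sin(yaw)) * (ipd * 0.5) * eye_sign
                              vp.add_child(cam)
                              add_child(vp)
                              viewports.append(vp)
                          return viewports
                      ```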

                      Right, there we go, that seems to have it.

                      You just gotta accept it's going to take several frames to pan the cameras around, so you're gonna have to freeze your game.

                      Made it so you can adjust the number of camera-pairs it uses for each eye from 4 upwards. I found 64 or 128 is about the best number of cameras before it starts getting too memory-intensive. Render time then is only like 30 frames, but most of them are long ones.

                      https://github.com/revpriest/godotPanoRenderer

                      One interesting thing is that if you set the canvas-size of the subviewports for each camera to just 1 pixel wide, they get no lighting showing. I was fiddling about trying to figure out why, thinking the lights were all on a different layer or something, but it turned out the camera was just too narrow to let light in or something. It needs at least a five-pixel-wide render in order to have any lighting at all really.

                      Thanks for your help folks.