Continuing my venture into MR/AR using the ZED Mini camera mounted on the HTC Vive Pro, today I will work through a tutorial on Spatial Mapping, provided by StereoLabs in their ZED SDK documentation.
After working on this for about 1.5 hours, I have the following observations. First, I think the tutorial in the documentation is outdated: when I try to import the ZED_Spacial_Mapping prefab into my project, it references a script that no longer exists. In fact, all the spatial-mapping logic has been moved into the ZED Manager script, contrary to the way things are presented in the tutorial.
On the positive side, I did manage to generate meshes of the environment around me using the relevant feature of the ZED Manager, and I could even save those meshes to .obj files on my computer and then load them back into the scene, though only at runtime. When I tried importing the mesh directly into my scene, it appeared with a random rotation and much further away from the original environment. That is, it wasn't anchored to the scene origin as I had thought. And maybe that is to be expected, now that I think about it.
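Thinking about why the imported scan drifts: the .obj is saved in the coordinate frame of the tracking session it was captured in, so dropping it into the editor leaves it with whatever pose that frame had. A minimal Unity sketch of one possible workaround, re-anchoring the imported scan under the camera rig's tracking origin (the field names `scannedMesh` and `zedRig` are my own placeholders, not ZED SDK names):

```csharp
using UnityEngine;

// Sketch only, not the official ZED workflow: attach the imported .obj scan
// to the camera rig's tracking origin and zero out its local offset, so the
// captured geometry lines up with the real environment again.
public class AlignScannedMesh : MonoBehaviour
{
    public Transform scannedMesh; // root of the imported .obj (placeholder name)
    public Transform zedRig;      // the ZED camera rig at its tracking origin

    void Start()
    {
        // Parent without keeping world pose, then reset the local transform,
        // so the scan sits exactly at the rig's tracking origin.
        scannedMesh.SetParent(zedRig, false);
        scannedMesh.localPosition = Vector3.zero;
        scannedMesh.localRotation = Quaternion.identity;
    }
}
```

This assumes the rig starts at the same physical spot where the scan began; if not, the scan will still be offset by the difference between the two starting poses.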
The most disappointing part, however, was that I couldn't generate what the tutorial refers to as a Navigation Mesh. Doing so would let me place characters and objects in the scene and have them physically interact with the environment. I could make a ball bounce on the ground, for example, which would be very impressive, and Unity's built-in physics system would be great for that job. However, adding the Nav Mesh Surface component did nothing for me, and I don't know how to generate or use Nav Meshes through the ZED SDK. I think I will e-mail the developers at StereoLabs for some clarification, as this would be a really cool asset to add to my AR arsenal.
I got a reply from StereoLabs about spatial mapping in the SDK! It turns out this is an issue they neglected to update in their most recent documentation, so it's good that I brought it to their attention. Apparently I am close to solving the issue myself, so let's give this another shot.
I think I have now taken all the steps needed to solve this issue, so I should be able to generate something satisfactory soon. There is one catch though: the agent needs enough space to walk. This might be why I don't get results in the limited space of my office. I'll try opening the sample scene to see!
I think I have set up the whole machinery correctly, but I get a message saying the Nav Mesh could not be generated. Perhaps my workspace is too small; the same issue appears in the sample scene included in the SDK, so that may indeed be the case. Still, I feel satisfied with my progress, and I could probably complete the tutorial easily in a wider workspace. Moving on then!
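For reference, baking a NavMesh from a runtime-generated scan can be done with Unity's own NavMeshSurface component, independent of the ZED SDK. A minimal sketch (whether this is what the ZED sample does internally is my assumption; in newer Unity versions `NavMeshSurface` lives in the `Unity.AI.Navigation` package):

```csharp
using UnityEngine;
using UnityEngine.AI; // NavMeshSurface; in recent Unity it's Unity.AI.Navigation

// Sketch: once the spatial-mapping mesh has been finalized at runtime,
// rebuild the NavMesh over it so agents can navigate the scanned geometry.
public class BakeScannedNavMesh : MonoBehaviour
{
    public NavMeshSurface surface; // NavMeshSurface placed on the scan's root object

    public void Bake()
    {
        surface.BuildNavMesh();
        // If the resulting NavMesh comes out empty, the walkable area is
        // probably smaller than the agent radius/height allow: either scan
        // a larger space or shrink the agent size in the Navigation settings.
    }
}
```

The comment about agent size matches my "workspace too small" hypothesis: the baker discards any surface patch the agent physically couldn't stand on.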
First off, on the topic of having no base stations for the Vive, I quote a Vive employee from a forum discussion: “You’ll require basestations to display anything other than that grey screen.” Perhaps there is some way to pipe the image from the camera into the Vive display without using the tracking system, but that seems to require some digging; it certainly isn't easy. As a first step, I sent a message to the HTC Vive Pro support team. We'll see.
My next project will be to follow a tutorial on using the motion controllers. This way, I'll get familiar with building interactive environments for the future. Let's get started!
The first alarming warning I get is that the action system of the new SteamVR may cause problems. In that case, I will just work with the previous version of SteamVR, which I have already downloaded. They actually provide a link to the version they recommend (the deprecated one), so I'll go and grab that too.
Using the controllers worked great and was super easy! All I had to do was attach a script from the ZED SDK to empty objects representing the left and right controllers. I made one into a light and the other into a cube. It was very realistic!
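The idea behind that script is simply to copy the tracked controller pose onto a GameObject each frame. A generic Unity sketch of the same effect (this is not the ZED SDK script itself, just Unity's built-in XR input API, which I'm assuming gives an equivalent result):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sketch: poll a controller's pose through Unity's XR API every frame and
// apply it to this object, so whatever it carries (a light, a cube)
// follows the physical controller.
public class FollowController : MonoBehaviour
{
    public XRNode node = XRNode.RightHand; // or XRNode.LeftHand

    void Update()
    {
        InputDevice device = InputDevices.GetDeviceAtXRNode(node);
        if (device.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 pos))
            transform.localPosition = pos;
        if (device.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rot))
            transform.localRotation = rot;
    }
}
```

Attach one instance per controller on an empty child of the camera rig, then parent the light or cube to it.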
The remaining tutorials on the documentation page are actually not related to my project. They are about Green Screen Motion Capture with VR and using multiple cameras. I will not cover them. What I want now is to collect my work for the last few days into a video! I want to run my demos and export the result from VR into a video. That shouldn’t be too hard to do.
And indeed, it is pretty easy to do: just use the Windows 10 Game Bar recorder (Windows key + G). Now I'll make some videos demoing what the ZED Mini with the Vive Pro can do. I'll also include some things I did with the Vive Hand Tracking SDK and SRWorks. If I can make a demo with virtual text tomorrow, that'd be great.
VIDEO #1: Planting Tutorial: OK
For the next two or three demos, I'm unmounting the ZED Mini from the Vive Pro (I'll have to re-mount it later).
VIDEO #2: SRWorks: Sadly I didn't do any AR stuff, but it worked nevertheless. It's not that important anyway. Emphasize the bigger FOV!
VIDEOS #3,4: Vive Hand Tracking SDK (with and without mesh): OK
(Re-mounting ZEDMini. Works)
VIDEO #5: ZEDMini AR – ball and flashlight! – OK!
Goal for tomorrow: make a scene with text that follows the camera, right in front of the user. Also, make a rotating ball (I have already done that). Make the text change every time the ball rotates. Or make a timer. Or integrate the controllers too. I now have many tools in my AR/MR/VR arsenal, know in general how Unity works, and can definitely do a task like this fast.
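For the head-locked text part, a minimal Unity sketch of the approach I have in mind (field names are my own; assign the tracked ZED/VR camera to `headCamera` and attach this to the world-space text object):

```csharp
using UnityEngine;

// Sketch: keep a world-space text object floating a fixed distance in front
// of the headset camera, always facing the user.
public class FollowHeadText : MonoBehaviour
{
    public Transform headCamera;  // the tracked camera transform
    public float distance = 1.5f; // metres in front of the user

    void LateUpdate()
    {
        // LateUpdate runs after head tracking has updated the camera pose.
        transform.position = headCamera.position + headCamera.forward * distance;
        // Orient the text away from the camera so it reads correctly.
        transform.rotation = Quaternion.LookRotation(transform.position - headCamera.position);
    }
}
```

Swapping `transform.position` for a smoothed (lerped) target would make the text lag gently behind head motion instead of being rigidly head-locked, which tends to be more comfortable.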