VR/MR/AR, C# and Unity – DAY 7

Picking up from where I left off last time, my problem today is mainly hardware-related: I cannot get the VivePro and the ZED to agree in Unity, so the display is just black!

The solution I found was to use ZED_Rig_Stereo instead of ZED_Rig_Mono (later confirmed by staff at StereoLabs), because the latter uses only the left camera, while the headset needs input from both cameras to function. But then my problem became immense glitching and jumping in the image!

I tried so many things: installing new NVIDIA drivers, changing GPU settings, freeing up disk space, etc. But nothing really worked. On top of that, the Vive would not connect easily.

With the help of a staff member at StereoLabs, I finally reached a working solution: disable tracking in the ZEDManager. That way, all the tracking information the ZED uses comes from the VivePro, whose tracking is actually much better, given that it has two points of reference (the base stations). But my VivePro had been having a lot of difficulty tracking. After looking through forums of people with similar problems, I got the idea that the windows, which I had opened to let in more light for the sake of the ZED, might be reflecting the lasers emitted by the base stations and preventing tracking. Sure enough, as soon as I shut the blinds, tracking with the VivePro became reliable again and I could produce a working demo of my code.
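Disabling tracking is just a checkbox on the ZEDManager component in the Inspector, which is how I did it. If one wanted to toggle it from code, I would expect something like the sketch below; note that the field name enableTracking is my assumption based on the inspector label, so verify it against your version of the ZED Unity plugin:

using UnityEngine;

public class DisableZedTracking : MonoBehaviour
{
    void Awake()
    {
        // Assumption: ZEDManager exposes its "Enable Tracking" toggle
        // as a public bool field named enableTracking.
        ZEDManager manager = GetComponent<ZEDManager>();
        manager.enableTracking = false; // let the VivePro provide all tracking
    }
}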

Success!


Plans for the week to come:

  • Ensure that the ZED+VivePro setup works smoothly, and pinpoint which conditions in different environments cause problems and which work particularly well.
  • Attach the 3D printed mount to the VivePro in a stable way.
  • Run the 3D avatar in VR and then in MR, and customize it so that it provides a good teaching experience. Also add some textual instructions around it.
  • Research how to infer the relative position of the hands through the ZED.

 

VR/AR/MR, C# and Unity – DAY 6

Today’s goal is bringing together the work from the previous days and producing a working demo of the AR/MR capabilities of the ZEDMini. I don’t imagine it will be a daunting task, but I shall proceed carefully. I also want to crop the videos I made last time and make one or two more, in order to have a journal of the results I have gotten these past few days.

Right off the bat, I’m facing issues with the VivePro again. It really doesn’t want to track with the base stations and it’s giving me a lot of trouble. I redid the room setup, elevated my desk and base stations, and found that a slight shake sometimes makes it connect. I’m really getting annoyed by these glitches though.

My sample scene with the rotating sphere works, but the sphere is too large and sometimes gets sucked into walls. I also noticed that, testing it standing up and without my chair in the way, the ZED AR demos are much better. I actually had a fun AR experience with it!


So the ball rotates fine with the RotateSphere script I wrote. Maybe I’ll turn up the speed and adjust the position and things like that a little.

Text is very frustrating to place correctly. Unity is glitching a little, the equipment is annoying me a lot by taking a million years to load, and there is some window which hides anything I put behind it. I don’t know what kind of window that is.

Some positive results come from using ZED_Rig_Mono (not stereo) and disabling depth occlusion.

For some reason, two TextMeshPro instances cannot be placed in the scene. One of them behaves absurdly and isn’t even shown in the Game View. We’ll just make do with one instance then.

Now I’ll make a C# script modifying the TextMeshPro instance so that it acts as a timer and counts the sphere’s rotations.

After some time, I finally coded up the scripts and everything is running smoothly! That was my first complete, autonomous scripting experience with C# in Unity. Essentially, I implemented the timer by accumulating Time.deltaTime in the Update() function (though I also considered System.Timers.Timer as an alternative). I used the Math.Floor() function extensively and also investigated C#’s property system. Further, I learned about getting components and game objects from other scripts and objects, which was very helpful in my endeavor to exchange information between different parts of the scene.
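A minimal sketch of how such a timer could look (the structure and object wiring here are illustrative, not my exact script):

using System;
using TMPro;
using UnityEngine;

public class SphereTimer : MonoBehaviour
{
    private TextMeshPro label; // the scene's single TextMeshPro instance
    private float elapsed;     // accumulated seconds

    // A C# property, as mentioned above, exposing whole seconds elapsed.
    public int Seconds
    {
        get { return (int)Math.Floor(elapsed); }
    }

    void Start()
    {
        // Fetch the TextMeshPro component on this same game object.
        label = GetComponent<TextMeshPro>();
    }

    void Update()
    {
        // Time.deltaTime is the time in seconds since the last frame.
        elapsed += Time.deltaTime;
        label.text = "Time: " + Seconds + "s";
    }
}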

Another script counted how many times the ball rotates around the central axis. I store an angle variable which accumulates the total angle the ball traverses, kept mod 360; every time the angle passes 360, I increment a counter and update the TextMeshPro’s text accordingly.
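A sketch of that counting logic, again with illustrative names (it assumes the counter knows the same degrees-per-second rate as the rotation script):

using TMPro;
using UnityEngine;

public class RotationCounter : MonoBehaviour
{
    public float degreesPerSecond = 90f; // must match the rotation speed
    public TextMeshPro label;            // assigned in the Inspector

    private float angle;     // total angle traversed, kept mod 360
    private int rotations;   // completed revolutions

    void Update()
    {
        angle += degreesPerSecond * Time.deltaTime;
        if (angle >= 360f)
        {
            angle -= 360f; // keep the running angle mod 360
            rotations++;
            label.text = "Rotations: " + rotations;
        }
    }
}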

Everything works smoothly with the camera and I am very satisfied, but I am still very frustrated by the Vive’s connectivity. It seriously cannot connect. It takes minutes! I have as clean a setup as I can get, I removed the chair, and I restarted the computer many times. I just can’t get the camera to work with the Vive for my scene! This thing has made me quite angry. The only error (warning) message I do get is:

If your machine has multiple graphics adapters, Unity may have created a WindowContext on the wrong adapter. If you experience a black screen when playing, please restart the Editor.

I don’t know what this means, but it’s not as if this specific scene has never worked! It just decided not to work now. I think I’ll call it quits for the day and try again tomorrow. Lots were gained today, but this Vive issue has annoyed me plenty.

Bertrand’s Postulate – a BOOK proof

I really enjoyed reading Erdős’s proof of Bertrand’s postulate in “Proofs from THE BOOK”, so I shall attempt to write it out here in order to make sure I understand it properly.

Bertrand’s Postulate

For any n>0, there exists a prime number p such that

\boxed{n < p \leq 2n}

Proof

The main idea of the proof is to estimate the binomial coefficient {2n\choose n} using the prime numbers in its factorization. If no prime existed between n and 2n, we will see that our bounds force n to be small; how small, we shall see. We will then verify the statement directly for all n below that bound, using a clever trick. To get there, though, we first have to do some bounding.

PART 1: THE “BIG” PRIMES IN \,2n \choose n

First, let p\geq 2 be a prime. By Legendre’s formula, we know that the largest power of p dividing 2n\choose n is given by:

v_p\left(2n\choose n\right) = v_p\left(\frac{(2n)!}{(n!)^2}\right) = \sum\limits_{k\geq 1}\left(\left\lfloor\frac{2n}{p^k}\right\rfloor-2\left\lfloor\frac{n}{p^k}\right\rfloor\right)

Each summand is at most one since,

\left\lfloor\frac{2n}{p^k}\right\rfloor-2\left\lfloor\frac{n}{p^k}\right\rfloor < \frac{2n}{p^k}-2\left(\frac{n}{p^k}-1\right) = 2

Furthermore, if p^k > 2n, then the summand is equal to zero. Thus,

v_p\left(2n\choose n\right) \leq \max\,\{r\colon p^r \leq 2n\}

This inequality yields some very useful facts:

  • The largest power of p that divides 2n\choose n is not larger than 2n.
  • Primes p > \sqrt{2n} appear at most once in 2n \choose n.

Now, primes p such that \,\frac{2}{3}n < p \leq n cannot divide 2n\choose n (say for n \geq 5). Indeed, 3p > 2n, so p and 2p are the only multiples of p not exceeding 2n, and p^2 > \frac{4}{9}n^2 > 2n, so no higher power of p contributes. Thus, in \frac{(2n)!}{(n!)^2}, p appears exactly twice in the numerator (from the factors p and 2p) and exactly twice in the denominator (once in each n!, since p \leq n < 2p), so the occurrences cancel.


PART 2: BOUNDING \, 2n\choose n

First some background on the binomial coefficients. We know the following properties:

  1. \sum\limits_{k=0}^{n} {n\choose k} = 2^n
  2. {n \choose k} = {n \choose n-k}
  3. {n \choose k} = \frac{n-k+1}{k}{n\choose k-1}. Using this and (2) we can prove that:
  4. 1 = {n\choose 0} < {n \choose 1} <\cdots< {n \choose \lfloor n/2 \rfloor} = {n \choose \lceil n/2 \rceil} >\cdots> {n \choose n-1} > {n \choose n} = 1 (unimodal sequence)

From (1) we know that {n \choose k} \leq 2^n for all k. From (4) we know that {n \choose \lfloor n/2 \rfloor} \geq \frac{2^n}{n}: grouping the two outer terms {n\choose 0} + {n\choose n} = 2 into a single term, 2^n becomes a sum of at most n terms, each no larger than the central coefficient, and the largest term of a sequence is at least its average. Now we can infer that

\boxed{{2n \choose n} \geq \frac{4^n}{2n}}
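Spelled out for the coefficient we need, grouping {2n\choose 0} + {2n\choose 2n} = 2 into one term:

4^n = \sum\limits_{k=0}^{2n}{2n\choose k} = 2 + \sum\limits_{k=1}^{2n-1}{2n\choose k} \leq 2n\cdot{2n\choose n}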

So we have now (CRUX MOVE):

\boxed{\frac{4^n}{2n} \leq {2n \choose n} \leq \prod\limits_{p\leq \sqrt{2n}}{2n}\cdot\prod\limits_{\sqrt{2n} < p \leq \frac{2}{3}n}p\cdot\prod\limits_{n<p\leq 2n}p}

To make sure you understand why this holds, check that you can answer the following:

  • Why do we take primes only up to 2n?
  • Why do we not take any primes between \frac{2}{3}n and n?
  • Why is the exponent of each prime 1?

Now we have:

4^n \leq (2n)^{1+\sqrt{2n}}\cdot \prod\limits_{\sqrt{2n}<p\leq\frac{2}{3}n}p\cdot\prod\limits_{n<p\leq 2n}p

Let’s assume, for the sake of contradiction, that there are no primes between n and 2n. Then the inequality above becomes:

4^n \leq (2n)^{1+\sqrt{2n}}\cdot \prod\limits_{\sqrt{2n}<p\leq\frac{2}{3}n}p

We shall show that for this to hold, n must be small enough that we will actually be able to prove Bertrand’s postulate for such n.

First we need the following lemma:

LEMMA

For all reals x\geq 2, we have that

\boxed{\prod\limits_{p\leq x}p \leq 4^{x-1}}

Proof

Let q be the largest prime with q\leq x. Then \prod\limits_{p\leq x}p = \prod\limits_{p \leq q}p and 4^{q-1} \leq 4^{x-1}, so it suffices to prove our statement for x = q prime. For q=2 the result is trivial and so we consider odd primes q = 2m+1.

Then we inductively compute:

\prod\limits_{p \leq 2m+1}p = \prod\limits_{p \leq m+1}p \cdot \prod\limits_{m+1 < p \leq 2m+1}p \leq 4^m{{2m+1}\choose m} \leq 4^m 2^{2m} = 4^{2m}

There are a few parts to this computation:

  • \prod\limits_{p \leq m+1}p \leq 4^m holds by induction.
  • \prod\limits_{m+1 < p \leq 2m+1}p \leq {{2m+1}\choose m} comes from the observation that {{2m+1}\choose m} = \frac{(2m+1)!}{m!(m+1)!} is an integer, and the primes we consider all divide the numerator (2m+1)! but not the denominator m!(m+1)!, so their product divides the coefficient.
  • Finally, {{2m+1}\choose m} \leq 2^{2m} follows from properties (1) and (2) above: by (2), {{2m+1}\choose m} = {{2m+1}\choose {m+1}}, so this coefficient appears twice in the sum 2^{2m+1} from (1), giving 2{{2m+1}\choose m} \leq 2^{2m+1}.

PART 3: GETTING THE CONTRADICTION

The Lemma we just proved, together with the inequality above, yields:

4^n \leq (2n)^{1+\sqrt{2n}}4^{\frac{2}{3}n} \Leftrightarrow 4^{\frac{1}{3}n} \leq (2n)^{1+\sqrt{2n}}

With some clever algebra we can see that this fails to hold for large n. First, we know by induction that if a\geq 2, then a+1 < 2^a. So:

2n = \left(\sqrt[6]{2n}\right)^6 < \left(\lfloor\sqrt[6]{2n}\rfloor + 1\right)^6 < 2^{6\lfloor\sqrt[6]{2n}\rfloor} \leq 2^{6\sqrt[6]{2n}}

Now:

2^{2n} \leq (2n)^{3\left(1+\sqrt{2n}\right)} < 2^{\sqrt[6]{2n}\left(18+18\sqrt{2n}\right)} \overset{n\geq 50}{<} 2^{20\sqrt[6]{2n}\sqrt{2n}}= 2^{20(2n)^{2/3}}

So we get (2n)^{1/3} < 20 and thus n < 4000. This is a contradiction because we can easily show Bertrand’s postulate to hold for n < 4000. An impressive trick to establish this right away is due to Landau:

It suffices to check that the sequence

2, 3, 5, 7, 13, 23, 43, 83, 163, 317, 631, 1259, 2503, 4001

consists of prime numbers, each smaller than twice its predecessor. Thus every interval \{y\colon n < y \leq 2n\} with n \leq 4000 contains one of these 14 primes, and our proof concludes.
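Just for fun, Landau’s trick is easy to verify mechanically. A throwaway C# check (entirely my own illustration) that each listed number is prime and smaller than twice its predecessor:

using System;

class LandauCheck
{
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int d = 2; d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    static void Main()
    {
        int[] seq = { 2, 3, 5, 7, 13, 23, 43, 83, 163, 317, 631, 1259, 2503, 4001 };
        for (int i = 0; i < seq.Length; i++)
        {
            if (!IsPrime(seq[i]))
                Console.WriteLine(seq[i] + " is not prime!");
            if (i > 0 && seq[i] >= 2 * seq[i - 1])
                Console.WriteLine(seq[i] + " is not smaller than twice " + seq[i - 1]);
        }
        Console.WriteLine("Check complete."); // the only output if all is well
    }
}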

 

VR/MR/AR, C# and Unity – DAY 5

Continuing my venture into MR/AR using the ZED Mini camera mounted on the HTC VivePro, today I will investigate a tutorial on Spatial Mapping, provided by StereoLabs in their ZED SDK documentation.

After working on this for about 1.5 hours, I have the following observations. First, I think the tutorial in the documentation is outdated: when I try to import the ZED_Spatial_Mapping prefab into my project, I get a missing script. In fact, all the spatial mapping logic has been moved into the ZEDManager script, which is contrary to the way things are presented in the tutorial.

On the positive side, I did manage to generate meshes of my surroundings using the relevant feature of the ZEDManager, and I even managed to save those meshes to .obj files on my computer and load them back into the scene, but all of this only worked at runtime. When I tried importing the mesh directly into my scene, it appeared with a random rotation and much farther from the original environment; that is, it didn’t appear anchored to the scene origin as I had thought it would. Maybe that is to be expected, now that I think about it.

The most disappointing part, however, was that I couldn’t generate what the tutorial refers to as a Navigation Mesh. Doing so would let me place characters and objects in the scene and have them physically interact with it. I could make a ball bounce on the ground, for example, which would be very impressive, and Unity’s built-in physics system would be great for that job. However, importing the Nav Mesh Surface component did nothing for me, and I don’t know how to generate or use Nav Meshes through the ZED SDK. I think I will send an e-mail to the developers at StereoLabs for some clarification, as this would be a really cool asset to add to my AR arsenal.


I got a reply from StereoLabs about the Spatial Mapping SDK! It is apparently an issue they neglected to update in their most recent documentation, so it’s good that I brought it to their attention. I am apparently close to solving it by myself, so let’s give this another shot.

I think I have actually taken all the steps needed to solve this issue, so I should be able to generate something satisfactory soon. There is one catch though: the agent needs enough space to walk. This might be why I don’t get results in the limited space of my office. I’ll try opening up the sample scene to see!

I think I have set up the whole machinery correctly, but I get a message saying the Nav Mesh could not be generated. Perhaps my workspace is too small. The issue also appears in the sample scene contained in the SDK, so perhaps that is indeed the case. Well, I do feel satisfied with my progress, and I can probably complete the tutorial easily in a wider workspace. Moving on then!


First off, on the topic of using the Vive without base stations, I quote the words of a Vive employee from a forum discussion: “You’ll require basestations to display anything other than that grey screen.” Perhaps there is some way to pipe the image from the camera into the Vive display without using the tracking system, but that seems to require some digging. It certainly isn’t easy. As a first step, I sent a message to the HTC VivePro support team. We’ll see.


My next project will be following a tutorial on using the motion controllers. This way, I will get familiar with making an environment of user interaction for the future. Let’s get started!

The first alarming warning I get is that the action system of the new SteamVR may cause problems. In that case, I will just work with the previous version of SteamVR, which I have already downloaded. Actually, they provide a link to the version of SteamVR they recommend (the deprecated one), so I’ll just go and grab that.

Using the controllers worked great and was super easy! All I had to do was attach a script that comes with the ZED SDK to empty objects representing the left and right controllers. I made one into a light and the other into a cube. It was very realistic!


The remaining tutorials on the documentation page are actually not related to my project. They are about Green Screen Motion Capture with VR and using multiple cameras. I will not cover them. What I want now is to collect my work for the last few days into a video! I want to run my demos and export the result from VR into a video. That shouldn’t be too hard to do.

And indeed, it is actually pretty easy to do: just use the Windows 10 game recorder (Windows key + G). Now I’ll make some videos demoing what the ZED Mini with the VivePro can do. I’ll also put in some stuff I did with the Vive Hand Tracking SDK and SRWorks. If I can make a demo with virtual text tomorrow, that’d be great.

VIDEO #1: Planting Tutorial: OK

For the next two/three demos, I’m un-mounting the Zed Mini from the VivePro (have to re-mount later).

VIDEO #2: SRWorks: Didn’t do any AR stuff sadly but it worked nevertheless. It’s not that important anyways. Emphasize the bigger FOV!

VIDEOS #3,4: Vive Hand Tracking SDK (with and without mesh): OK

(Re-mounting ZEDMini. Works)

VIDEO #5: ZEDMini AR – ball and flashlight! – OK!

Goal for tomorrow: make a scene with text following the camera, right in front of the user. Also, make a rotating ball (I have already done that). Make the text change every time the ball rotates, or make a timer, or integrate the controllers too. I now have many tools in my AR/MR/VR arsenal, know in general how Unity works, and can definitely do a task like this fast.
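For the “text following the camera” part, a simple head-locked placement script should do. Here is a sketch with illustrative values, just repositioning the text every frame relative to whichever camera transform the rig provides:

using UnityEngine;

public class FollowCamera : MonoBehaviour
{
    public Transform cameraTransform; // e.g. a camera of the ZED rig
    public float distance = 1.5f;     // meters in front of the user

    void LateUpdate()
    {
        // Keep this object (the text) a fixed distance in front of the camera...
        transform.position = cameraTransform.position
                           + cameraTransform.forward * distance;
        // ...and keep it facing the user.
        transform.rotation = Quaternion.LookRotation(
            transform.position - cameraTransform.position);
    }
}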

 


VR, C# and Unity – DAY 4

It’s the start of a new week! There is a lot to do and many ideas to conceive and implement this week, so let’s get to it with enthusiasm for research.

One first goal I should set is finishing up the documentation tutorial provided by StereoLabs, so that I can integrate my usage of Unity with the ZED SDK. The goal is to produce a basic AR environment in Unity and clear up some difficulties presented by the usage of the ZED Mini. By Thursday’s meeting, I need to have built some textual environment, with perhaps some object flying around that the user can interact with.

After doing that, I will finish a mini Unity tutorial on some advanced topics of the platform, as well as expand my knowledge of C#. This will be useful for the future, given that there is no serious pressure to produce any results thus far.


Let’s get started!

First, I will confirm that the Vive works well and that the ZED Mini is strapped in safely and working. I need to remember to swap out its mounting base as soon as possible!

Everything works fine. In fact, I figured out something important about the usage of the ZED camera: its performance depends highly on the objects surrounding the user, and because it often makes mistakes in depth estimation, it is best used in an environment with enough space; otherwise it will glitch. Also, I noticed that opening the blinds and letting more natural light into the room helped the depth estimation. This way I finally got a good AR experience with the ZED World application.

On to Unity now. Going back to Zed Manager, we also see the following properties:

  1. Motion Tracking: when enabled, allows us to place and track the real-world positions of virtual objects without external trackers.
  2. Rendering: enables depth occlusion of virtual objects by real ones and provides controls for AR processing and camera brightness.
  3. Spatial Mapping: allows us to render the real environment into a mesh; useful for collisions and other interactions between real and virtual objects.
  4. Streaming: allows broadcasting of the ZED input so that other devices can use it.

I completed the basic Unity tutorial provided by StereoLabs. It was just putting a virtual sphere in the AR world. That was easy. Now, I want to make the sphere move in circles.

That requires a small script. The two points we should remember from that script are:

  • To get a game object via its name in the scene, call the function GameObject.Find()
  • To make a game object rotate about some point in space and some axis at a rate of some angle per second, call transform.RotateAround(Vector3 point, Vector3 axis, float angle) every frame, with the angle scaled by the frame time (see the sketch below).
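Putting those two calls together, a sketch of how such a rotation script might look (the center object’s name and the speed are illustrative):

using UnityEngine;

public class RotateSphere : MonoBehaviour
{
    // Illustrative: an empty game object marking the center of the orbit.
    public string centerName = "Center";
    public float degreesPerSecond = 90f;

    private Transform center;

    void Start()
    {
        // Look up another game object in the scene by its name.
        center = GameObject.Find(centerName).transform;
    }

    void Update()
    {
        // Orbit the center about the vertical axis, scaling the angle
        // by the frame time to get degrees per second.
        transform.RotateAround(center.position, Vector3.up,
                               degreesPerSecond * Time.deltaTime);
    }
}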

The next tutorial we shall study concerns object placement. That is, we want to be able to place objects on real-life surfaces. That’d be cool, wouldn’t it?

For this tutorial, we shall use ZED_Rig_Stereo from the prefabs. We tilt the directional light towards the ground and add a ZEDLight script component to it. Next, we make an empty game object, name it Plane Detection Manager, and attach the ZEDPlaneDetectionManager script to it. This will allow us to easily identify planes in the mesh our camera records and place objects on them according to the laws of Newtonian physics.

This is a very convenient tool. It detects surfaces anywhere you click and creates gameobjects out of them. It also detects the floor.

A few things to note. First, for floor detection it is important that the camera is actually aimed at the floor and that the floor is well lit. Second, it seems that occlusion problems are very prevalent with the ZED. One should use it in an environment with well-defined, simple surfaces, because in a complex environment such as my office at the moment, the depth calculation (which is based on stereo triangulation) gives very poor results. In any case, I managed to get some good results with this tutorial.

Specifically, I attached to a cube a script which places it wherever the user clicks. The cube has a rigidbody, so it then falls to the ground. But for it to fall to the “floor”, we must first have defined a floor; that is, the user must identify, with the mouse and through the plane detector, the surfaces onto which they want to drop the cube.

A couple of interesting things in the code were the following functions (a rough sketch of how they fit together follows the list):

  • ZEDSupportFunctions.GetWorldPositionAtPixel()
  • ZEDManager.GetInstance()
  • Input.GetMouseButtonDown()
  • Input.mousePosition.x, Input.mousePosition.y
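To tie these together: the sketch below mirrors the tutorial’s flow, but, since I don’t want to misquote the exact ZED signatures, it substitutes Unity’s stock Physics.Raycast (against the colliders of the detected planes) for the ZED depth lookup ZEDSupportFunctions.GetWorldPositionAtPixel. All names are illustrative.

using UnityEngine;

public class PlaceCubeOnClick : MonoBehaviour
{
    public GameObject cubePrefab; // illustrative: a cube prefab with a Rigidbody

    void Update()
    {
        // Was the left mouse button pressed this frame?
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the mouse position
            // (the tutorial reads the ZED's depth at that pixel instead).
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                // Spawn the cube slightly above the clicked surface;
                // its Rigidbody then drops it onto the detected plane.
                Instantiate(cubePrefab,
                            hit.point + Vector3.up * 0.5f,
                            Quaternion.identity);
            }
        }
    }
}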

Next, we shall move on to a tutorial about Lighting and Shadows. This is a fairly short tutorial. We basically learn about Spotlights, Forward Rendering options (use of anti-aliasing) and casting shadows (through the introduction of more directional lights).


 


On the infinitude of primes – Erdős’s proof

In the book “Proofs from THE BOOK”, there are six proofs of the infinitude of primes. All of them are very elegant and beautiful, but I really liked the reasoning behind the following one, due to Paul Erdős.

We shall prove that the series

\sum\limits_{p \in \mathbb{P}}\frac{1}{p}

diverges. That would imply that the set of prime numbers is infinite, because otherwise the sum would converge.

Let \{p_i\}_{i \in \mathbb{N}} be the sequence of prime numbers written in increasing order.

Assume that the series above converges. Then, there must exist some number k such that

\sum\limits_{i \geq k+1}\frac{1}{p_i} < \frac{1}{2}

Call the primes p_i for 1 \leq i \leq k the small primes and the rest the big primes.

For any natural number N, it must then hold that

\boxed{\sum\limits_{i \geq k+1}\frac{N}{p_i} < \frac{N}{2}}

Now, let N_b be the number of positive integers n \leq N which are divisible by at least one big prime and let N_s be the number of positive integers less than or equal to N whose prime divisors are all small primes.

We will show that N_b+N_s < N, thus arriving at a contradiction: since every positive integer n \leq N either has at least one big prime divisor or only small ones, it should be that N_b+N_s = N.

First, note that \lfloor\frac{N}{p_i}\rfloor counts the number of positive integers less than or equal to N that are multiples of p_i.

We have that

N_b \leq \sum\limits_{i \geq k+1}\lfloor\frac{N}{p_i}\rfloor < \frac{N}{2}

and this inequality gives our estimate of N_b. For N_s, let n \leq N be a positive integer whose prime divisors are all small. Write n = a_n b_n^2, where a_n is squarefree. Since a_n is a product of distinct small primes, it can take at most 2^k values. And given that b_n \leq \sqrt{n} \leq \sqrt{N}, we have that

N_s < 2^k \sqrt{N}

If we then take N = 2^{2k+2}, then 2^k\sqrt{N} = 2^k\cdot 2^{k+1} = 2^{2k+1} = N/2, so N_s < N/2.

Adding the two inequalities together yields N_b + N_s < N, the desired contradiction.

 


VR, C# and Unity – DAY 3 – Diving into MR/AR

The Zed Mini Camera is here! I first have to mount it to the VivePro, install the relevant Zed SDK and then I can start developing AR/MR. Wow, things do move fast.

Prior to diving into it, I will first do some research on various resources related to developing MR/AR using the Zed.

The first thing I come across is the page by StereoLabs:  https://www.stereolabs.com/docs/getting-started/. This seems like a good place to start. It contains information about installation, usage instructions and even beginner tutorials about AR/MR development.

So that’s good. But now let’s think a bit about how we are going to proceed and set some smart goals for the day.

  1. Mount the camera on the Vive, install the Zed SDK and successfully demo it or play a game like Ping Pong. This will verify that it works.
  2. Study the ZED manual page and go through their tutorials to make basic AR/MR apps. That will give me a handle on what developing with these tools looks like.

At that point, I will feel safe and sound using the ZED Mini for AR/MR, and I’ll even have some ideas about what kind of app I want to make. But I do believe that this is where I should stop and look around a bit. I am still quite a novice at developing with C# and Unity, and later in this project I will mainly need AR/MR to build an interface on which user interaction can be based. Thus, I will definitely need to dive more deeply into game programming. So the following goals are for the weekend:

  1. Follow a more advanced Unity Game Tutorial and get more acquainted with using Scripts, Prefabs etc.
  2. Read more formal C# from your book, C# Precisely.

Now, looking even further ahead, next week’s tasks will focus on building an AR/MR UI for testing. That will include making clickable events, text, and perhaps an android which can simulate the sign language motions. All of those definitely seem within my grasp at the moment, and they will be even more so after this weekend’s tutorials.

Lastly, the issue of wearing the gloves and exchanging data with the MR/AR application will arrive soon. This is something I will have to discuss with the rest of the team, but for now I believe that as long as there exists a way to process the information, the only thing left is piping it somehow into the Unity application. We’ll see.


OK, so problem #0: the base where the camera is supposed to be mounted isn’t made for the VivePro, so it doesn’t stick very well. I need to be careful and get the replacement printed mount as soon as possible.

Problem #1: the camera needs a USB port and the computer only has two. That means no mouse. I need to bring a multi-USB adapter from home.

Securing camera wire with Vive headset wire: OK!

Zed SDK GitHub: https://github.com/stereolabs/

The ZED SDK required a CUDA installation. Installed the ZED SDK; it required a computer restart. The USB cable is not reversible, so one has to follow the arrows on it; that caused the camera not to be detected at first. Now it works. Downloading and installing the ZED World app, which will let me demo AR/MR.

Ran ZED World. First impressions: not completely what I expected. The camera has a limited FOV and cannot really see things in detail. I also didn’t see any AR/MR things going on. The computer seemed to really strain to handle it; just now it crashed and I got a SYSTEM_THREAD_EXCEPTION_NOT_HANDLED, which isn’t good.

The second time I tried running ZED World, I had a more positive experience. I think the FOV is just that way, and it actually seems enough for our applications. The camera does seem to have limited resolution, which prevents the user from making out very fine details, but that can still be ignored. My biggest issue was the AR: with the exception of some morphed, blurry images flying by in a way that is extremely difficult to detect or interact with, the overall AR experience was disappointing. ZED World glitched and was in general really slow, which is definitely something we do not like. I’ll look into this topic first.

I downloaded and started testing some things with the ZED Unity plugin. It turns out (I think) that you must also import the SteamVR plugin for it to work; otherwise, when you run the scenes, only the camera works (the Vive isn’t involved). After that, I ran some demo scenes and got to see some AR/MR, which was cool. But it wasn’t that great, and the quality was definitely lower than I had anticipated. I suspect that perhaps it’s my room setup and the fact that I am working in a very confined space. But at least it worked most of the time, and now I can dive into developing a simple AR application.


The first thing we learn by studying the documentation of the ZED plugin for Unity is the existence of custom camera rigs made by ZED specifically for AR/MR. These are the

ZED_Rig_Mono, ZED_Rig_Stereo

prefabs, and they appear in place of the camera we were used to before. So it is important to delete the main camera from a Unity project when planning to use the above prefabs.

The camera game objects have access to a script called ZEDManager. It handles a lot of the parameters that go into forming the AR experience. There is a parameter called Input Type: select ‘USB’ and then set the Resolution as you like. Let’s see if something changed by running the scene (the input type was SVO before). The scene doesn’t play. Planetarium plays, but not sampleMR. Let’s try another scene. The drones sorta work and so does disco, which is good. Moving on.

 

VR, C# and Unity – DAY 2

I now have a clear idea of the goal of my work on VR in the context of the research project I am participating in. Once the ZED Mini stereo camera is attached to the VivePro and the ZED SDK is successfully installed, we will be able to work in MR with the Vive. There, I need to make some sort of environment, with text and figures, that will act as feedback for the user trying to learn ASL.

For instance, there could be tabs with various hand gestures on them. The user clicks on a tab and they try that hand gesture. The gloves collect the data, the data is processed and some assessment is made regarding the degree of correctness of the user’s signing. Then some textual feedback is given in MR and maybe some 3D model also helps the user understand the correct motion and their mistakes.

It seems that to make this work, I need to get even better acquainted with events in Unity and with setting up a scene. It definitely seems more reachable now, though. My first goal for tomorrow (7/5) will be setting up the ZED Mini and installing its SDK. Then I will start playing with it.

For today, I achieved my goal with the Vive Hand Tracking SDK. I made a scene in which the user performs some basic hand gestures (Fist, Point, Like, etc.) and the gesture is picked up by a C# script which displays it on some 3D text. It worked quite well.
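From memory, the detection side looked roughly like the sketch below. Treat the SDK names (the ViveHandTracking namespace, GestureProvider, GestureResult and its gesture field) as assumptions about the Vive Hand Tracking API rather than gospel; check them against the SDK documentation.

using TMPro;
using UnityEngine;
using ViveHandTracking; // assumption: the SDK's namespace

public class GestureLabel : MonoBehaviour
{
    public TextMeshPro label; // the 3D text that displays the gesture

    void Update()
    {
        // Assumption: the SDK exposes the tracked right hand as a static
        // GestureResult whose gesture enum covers Fist, Point, Like, etc.
        GestureResult hand = GestureProvider.RightHand;
        label.text = hand != null ? hand.gesture.ToString() : "No hand";
    }
}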

VR, C# and Unity – DAY 1

I am going to start writing a log of my experience learning VR development with Unity and C#. I have spent about one week familiarizing myself with Unity, VR and C#, so I don’t expect my progress to be terribly slow.

I am using a VivePro headset, onto which I will be attaching a ZED Mini camera. For now, I have installed the Hand Tracking SDK provided by Vive and tested it through some demo scenes.

At first, I want to find out how to add 3D text to a scene. With GameObject > 3D Object > 3D Text that is achievable, but the text appears too blurry. There is a solution to this: scale the text to 0.1 and set the font size to something big (like 200). Then another problem appears: the text doesn’t behave like a 3D object occlusion-wise. There are two solutions to this, as I found: one can use a shader program (in OpenGL), or the TextMeshPro package. I did the latter and got pretty good results.

To use TextMeshPro in code, one has to include using TMPro, and then the text can be changed by accessing the public property .text.

Another idea I had was to make the text disappear some time after it appears. That is easy if one writes a coroutine which utilizes WaitForSeconds(). The same mechanism can also be used to make an object fade over time, but I didn’t look into that.
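A minimal sketch of that idea (the delay is arbitrary):

using System.Collections;
using UnityEngine;

public class DisappearingText : MonoBehaviour
{
    public float lifetime = 3f; // seconds before the text vanishes

    void Start()
    {
        StartCoroutine(HideAfterDelay());
    }

    IEnumerator HideAfterDelay()
    {
        // Suspend this coroutine for `lifetime` seconds...
        yield return new WaitForSeconds(lifetime);
        // ...then hide the text object.
        gameObject.SetActive(false);
    }
}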

Sometimes, the hands in the Vive Hand Tracking SDK don’t appear. This is sometimes accompanied by Unity crashing, and rebooting SteamVR doesn’t seem to help. A “sure” solution to this problem is restarting the computer, but restarting Unity seems to work as well.

On another note, there is a way to instantiate a GameObject from a Prefab in C#, using the function

Instantiate(Prefab, Position, Rotation)

For Rotation, Quaternion.identity is often used. I need to study more about quaternions.
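For instance (the prefab and position are illustrative):

using UnityEngine;

public class Spawner : MonoBehaviour
{
    public GameObject prefab; // assigned in the Inspector

    void Start()
    {
        // One copy of the prefab at (0, 1, 2), with no rotation applied.
        Instantiate(prefab, new Vector3(0f, 1f, 2f), Quaternion.identity);
    }
}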

After playing around a lot with text, I didn’t get the results I wanted with the sample scene, even though it wasn’t completely for naught. Time to change plans: I want to make a scene from scratch, in which I extract the hand gesture following the SDK’s documentation and then customize text to show which gesture is performed.

There is also a lot to see and do when the Zed Camera is finally mounted to the VivePro, a task which is presenting some difficulties at the moment.