
SVVR2016


I’ve been fortunate enough lately to attend the largest professional virtual reality event/conference: SVVR. This virtual reality conference has been held every year in Silicon Valley for three years now. This year, it showcased more than 100 VR companies on the exhibit floor and welcomed more than 1,400 VR professionals and enthusiasts from all around the world. As a VR enthusiast myself, I attended the full three-day conference, met most of the exhibitors, and I’d like to summarize my thoughts and the things I learned below, grouped under various themes. This post is by no means exhaustive and consists of my own personal opinions.

CONTENT FOR VR

I realize that content creation for VR is really becoming the one area where most players will end up working. Hardware manufacturers and platform software companies are building the VR infrastructure as we speak (and it’s already comfortably usable), but as we move along and standards become more solid, I’m pretty sure we’re going to see lots and lots of new start-ups in the VR content world, creating immersive games, 360 video content, live VR events, etc. Right now, the range of deployment options for a content developer is not very broad. The vast majority of content creators are targeting Unity3D and its VR plug-ins, since Unity has built-in support for virtually every VR device on the market, like the Oculus family of headsets, the HTC Vive, PlayStation VR, Samsung’s GearVR, and even generic D3D- or OpenGL-based applications on PC/Mac/Linux.

2 types of content

There really are two main types of VR content out there: computer-generated 3D content, and 360 content captured from real life.


The former is what we usually refer to when thinking about VR, that is, computer-generated 3D worlds, e.g. in games, in which the VR user can wander and interact. This is usually the kind of content used in VR games, but also in VR applications, like Google’s great drawing app TiltBrush (more info below). Click here to see a nice demo video!

The latter is everything that’s not generated but rather “captured” from real life and projected or rendered in the VR space, most commonly through spherical projections with post-processing stitching and filtering. Simply said, we’re talking about 360 videos here (both 2D and 3D). Usually, this kind of content does not let VR users interact with the VR world as “immersively” as computer-generated 3D content. It’s rather “played back” and “replayed” just like a regular online television series, for example, except for the fact that viewers can “look around”.

At SVVR2016, there were so many exhibitors doing VR content… like InnerVision VR, Baobab Studios, SculptVR, MiddleVR and Cubicle Ninjas on the computer-generated side, and Facade TV, VR Sports and Koncept VR on the 360 video production side.

TRACKING

Personally, I think tracking is by far the most important factor in the whole VR user experience. You have to actually try the HTC Vive tracking system to understand. The HTC Vive uses two “Lighthouse” base stations placed in the room to let you track a larger space, something close to 15′ x 15′! I tried it many times, and tracking always stayed solid and consistent. With the Vive you can literally walk around the VR space, zig-zag, leap and dodge without losing detection. On that front, I think the competition is doing quite poorly. For example, Oculus’ CV1 only tracks your movement from the front, and the tracking angle is pretty narrow; tracking was often lost when I turned away just a little… disappointing!

Speaking of tracking, one of the most amazing talks was Leap Motion CTO David Holz’s demo of the brand new ‘Orion’, a truly impressive hand-tracking system with very powerful detection algorithms and very, very low latency. We could only “watch” David interact, but it looked so natural! Check it out for yourself!

AUDIO

Audio is becoming increasingly crucial to the VR workflow since it adds so much to the VR experience. It is generally agreed in the VR community that great, well-localized 3D audio that seems “real” can add a lot of realism even to the visuals. At SVVR2016, there were a few audio-centric exhibitors like Ossic and Subpac. The former is releasing a Kickstarter-funded 3D audio headset that “pans” stereo content as you rotate your head left and right. The latter is showcasing a complete body suit using tactile transducers and vibrotactile membranes to make you “feel” audio. The goal of this article is not to review specific technologies, but to discuss every aspect of the VR experience, and when it comes to audio, I unfortunately feel we’re still at the “3D sound is enough” level, and I believe it’s not.

See, proper 3D audio localization is a must, of course. You obviously do not want to play a VR game where a dog appearing on your right is barking on your left, nor do you want the impression that a hovercraft is approaching up ahead when it’s actually coming from behind. Fortunately, we now have pretty good audio engines that correctly render audio coming from anywhere around you, with good front/back discrimination. A good example is 3Dception from TwoBigEars. 3D spatialization of audio channels is a must-have, and yet it’s an absolute minimum in my opinion. Try it for yourself! Most of today’s VR games have spatially coherent sound, but most of the time you just do not believe the sound is actually “real”. Why?

Well, there are a number of reasons, ranging from limited audio diversity (a limited number of objects/details in the audio feed… like missing tiny air flows, the user’s breathing or the room’s ambient noise level) to limited sound cancellation (the ability to suppress high-pitched ambient sounds coming from “outside” the game). But I guess one of the most important factors is simply the way audio is recorded and rendered for our day-to-day cheap stereo headsets. A lot of promise lies in binaural recording and stereo-to-binaural conversion algorithms. Binaural recording is a technique that records audio through two tiny omnidirectional microphones placed inside structures that mimic the human outer ear, so that sound is shaped by those ear structures before reaching the microphones, just as it would be by real ears. The binaural audio experience is striking and the “stereo” feeling is magnified. It is very difficult to explain; you have to hear it for yourself. Speaking of ear structures that have a direct impact on the audio spectrum, I think one of the most promising techniques moving forward for added audio realism will be the whole field of geometry-based audio modeling, where you can basically render sound as if it had actually been reflected off computer-generated 3D geometry. Using such models, a dog barking in front of a tiled metal shed will sound really different from the same dog barking near a wooden chalet. The brain does pick up those tiny details, and that’s why you find companies like Nvidia releasing their brand new “Physically Based Acoustic Simulator Engine” in VRWorks.
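For readers who have never played with spatialized audio, here is a tiny back-to-basics C++ toy of the very first ingredient, equal-power panning driven by the source’s azimuth. This is my own illustration, not 3Dception, binaural rendering or Nvidia’s engine; real spatializers layer HRTF filtering, distance attenuation, front/back cues and geometry on top of this.

#include <cmath>
#include <cstdio>

// Toy illustration only (not any real engine): equal-power stereo panning
// from a horizontal azimuth angle, the most basic ingredient of spatialization.
void panGains(float azimuthDeg, float& left, float& right)
{
    const float kPi = 3.14159265f;
    // Map [-90, +90] degrees (hard left to hard right) onto a pan position in [0, 1].
    float pan = (azimuthDeg + 90.0f) / 180.0f;
    if (pan < 0.0f) pan = 0.0f;
    if (pan > 1.0f) pan = 1.0f;
    // The equal-power law keeps perceived loudness roughly constant while panning.
    left  = std::cos(pan * kPi / 2.0f);
    right = std::sin(pan * kPi / 2.0f);
}

int main()
{
    const float azimuths[] = { -90.0f, -45.0f, 0.0f, 45.0f, 90.0f };
    for (float az : azimuths) {
        float l, r;
        panGains(az, l, r);
        std::printf("azimuth %+5.0f deg -> L %.2f  R %.2f\n", az, l, r);
    }
    return 0;
}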

HAPTICS

Haptics is another very interesting VR domain that consists of letting users perceive virtual objects not through visual nor aural channels, but through touch. Usually, this sense of touch in VR experience is brought in by the use of special haptic wands that, using force feedback and other technologies, make you think that you are actually really pushing an object in the VR world.

You mostly find two types of haptic devices out there: wand-based and glove-based. Gloves are of course more natural to most users. It’s easy to picture yourself in a VR game trying to “feel” raindrops falling on your fingers, or in a flight simulator, pushing buttons and really feeling them. However, from talking to many exhibitors at SVVR, it seems we’ll be stuck at the “feel button pushes” level for quite some time, as we’re very far from being able to render “textures”: the spatial resolutions involved would simply be too high for any haptic technology currently available. There are some pretty cool start-ups with awesome glove-based haptic technologies, like the Kickstarter-funded Neurodigital Technologies GloveOne or Virtuix’s Hands Omni.

Now, I’m not saying wand-based haptic technologies are outdated or unpromising. In fact, I think they are more promising than gloves for any VR application that relies on “tools”, like a painting app requiring you to use a brush or a remote-surgery medical application requiring you to use an actual scalpel! When it comes to wands, tools and the like, the potential for haptic feedback is multiplied because you simply have more room to fit more actuators and gyros. I once tried an arm-based 3D joystick in a CAD application and I could swear I was really hitting objects with my design tool… it was stunning!

SOCIAL

If VR really takes off in the consumer mass market someday soon, it will most probably be social. That’s something I heard at SVVR2016 (paraphrased) in a very interesting talk by David Baszucki titled “Why the future of VR is social”. In essence, just look at how technology is appropriated nowadays and acknowledge that the vast majority of applications rely on the “social” aspect, right? People want to “connect”, “communicate” and “share”. So when VR comes around, why would it suddenly be different? Of course, gamers will want to play really immersive VR games and workers will want to use VR in their daily tasks to boost productivity, but most users will probably want to put on their VR glasses to talk to relatives thousands of miles away as if they were sitting in the same room. See? Even the gamers and workers I referred to above will want to play or work “with other real people”. No matter how you use VR, I truly believe the social factor will be one of the most important ones to consider when building successful software. At SVVR 2016, I discovered a very interesting start-up focused on the social VR experience. In mimesys‘s telepresence demo, using an HTC Vive controller, I collaborated on a painting with a “real” guy hooked to the same system, painting from his home apartment in France, some 9,850 km away, and I had a pretty good sense of his “presence”. The 3D geometry and rendered textures were not perfect, but it was good enough for a true collaboration experience!

MOVING FORWARD

We’re only at the very beginning of this very exciting journey through virtual reality, and it’s really difficult to predict what VR devices will look like even 3-5 years from now because things are moving so quickly… A big area I did not cover in this post, and one that will surely change a lot of parameters in the VR world moving forward, is AR – Augmented Reality 🙂 Check out what MagicLeap‘s up to these days!


Compiz port to GLES2: Supporting the Unity Desktop

The second part of my “Compiz port to OpenGL ES” task was to make the whole thing work seamlessly with Canonical’s brand new Unity desktop. Take a look at the Unity desktop as it originally appeared in Ubuntu 11.04 Natty Narwhal:

This change, of course, promised to bring its share of problems, mainly for 2 reasons:

  • Unity displays its GUI components/widgets using the “Nux” toolkit, which is OpenGL-based. I knew Nux made extensive use of framebuffers, which compiz-gles depended on as well, so ouch… possible interference ahead!
  • The Unity/Compiz running scheme is very peculiar: Unity runs as a Compiz plugin (the “Unityshell” plugin).

Let’s take a look at the main challenges those elements posed to this task:

1) Unity depending on NUX

This, at first, didn’t look like it should be such a big deal because, after all, stacking OpenGL calls at different software layers is a common thing. The problem here, though, is that Nux made unsafe use of framebuffer objects (FBOs). By “unsafe”, I mean that the code was not carefully unbinding FBOs back to their previous owners after use, so any caller (like compiz!) trying to nest Unity/Nux drawings inside its own FBO simply couldn’t do it! This “unsafe” FBO usage comes from a “standalone” point of view and is not really compatible with the compiz plugin scheme.
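To illustrate what “safe” FBO usage means here, a minimal sketch in plain OpenGL ES 2.0 (not actual Nux or Compiz code): query whatever framebuffer the caller had bound, render into your own, then hand the caller’s FBO back instead of binding 0.

#include <GLES2/gl2.h>

// Minimal sketch (not Nux/Compiz code): render into our own FBO without
// clobbering whoever was bound before us -- the behaviour Nux was missing.
void renderIntoPrivateFbo(GLuint myFbo, int width, int height)
{
    // Remember the caller's framebuffer (e.g. compiz's own FBO).
    GLint previousFbo = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &previousFbo);

    // Do our own offscreen rendering.
    glBindFramebuffer(GL_FRAMEBUFFER, myFbo);
    glViewport(0, 0, width, height);
    glClearColor(0.f, 0.f, 0.f, 0.f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw widgets here ...

    // Hand the framebuffer back to the caller instead of binding 0.
    glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)previousFbo);
}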

So what Jay Taoko and I came up with is a new Nux API:

This new API lets us manage a so-called “reference framebuffer” that allows for FBO nesting 🙂 Here it is:

=== modified file 'Nux/WindowCompositor.h'
--- Nux/WindowCompositor.h    2011-12-29 18:06:53 +0000
+++ Nux/WindowCompositor.h    2012-01-05 04:00:19 +0000
@@ -175,6 +175,23 @@
     //====================================  
   public:
+    /*!
+        Set an external fbo to draw Nux BaseWindow into. This external fbo will be
+        restored after Nux completes its rendering. The external fbo is used only in embedded mode. \n
+        If the fbo_object parameter is 0, then the reference fbo is invalid and will not be used.
+
+        @param fbo_object The opengl index of the fbo.
+        @param fbo_geometry The geometry of the fbo.
+    */
+    void SetReferenceFramebuffer(unsigned int fbo_object, Geometry fbo_geometry);
+
+    /*!
+        Bind the reference opengl framebuffer object.
+
+        @return True if no error was detected.
+    */
+    bool RestoreReferenceFramebuffer();
+
     ObjectPtr<IOpenGLFrameBufferObject>& GetWindowFrameBufferObject()
     {
       return m_FrameBufferObject;
@@ -561,6 +578,10 @@
     int m_TooltipX;
     int m_TooltipY;

+    //! The fbo to restore after Nux rendering in embedded mode.
+    unsigned int reference_fbo_;
+    Geometry reference_fbo_geometry_;

All this landed in the Nux 2.0 series as a big “Linaro” merge on 2012-01-06. Take a look! It’s still upstream in more recent versions!
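For a rough idea of how an embedder like the unityshell plugin can use it, here is a hedged sketch. The compiz-side names are invented for illustration and I’m assuming the usual nux namespace; the two WindowCompositor methods are exactly the ones added in the diff above.

#include "Nux/WindowCompositor.h"

// Illustrative sketch only -- the compiz-side names are invented, but the two
// WindowCompositor calls are the new API shown in the diff above.
void EmbedNuxRendering(nux::WindowCompositor& compositor,
                       unsigned int compiz_fbo,       // the FBO compiz currently renders into
                       nux::Geometry compiz_fbo_geo)  // and its geometry
{
    // Tell Nux which framebuffer it must restore once it has finished
    // rendering its BaseWindows (only used in embedded mode).
    compositor.SetReferenceFramebuffer(compiz_fbo, compiz_fbo_geo);

    // ... Nux renders the Unity shell here ...

    // Rebind the reference FBO (in practice Nux can do this itself when it
    // finishes rendering) so compiz keeps drawing into its own FBO.
    compositor.RestoreReferenceFramebuffer();
}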

2) Unity running as a Compiz plugin

It may seem weird at first to think of Unity as a compiz plugin, but let’s think about it for a minute. Compiz, our window manager, makes intensive use of OpenGL ES calls, and Unity also makes intensive use of OpenGL primitives, so we have to find a way to coordinate the two components gracefully. One way is to run Unity as a compiz plugin (the “unityshell” plugin, that is). That way, Unity declares a series of callbacks known to Compiz (like glPaint, glDrawTexture, preparePaint, etc.) and compiz calls them at the right time. Every drawing operation is done, and every framebuffer is filled, at the right time, in the right order, gracefully alongside the other plugins. On second thought, this decision from Canonical to write unityshell is not so foolish after all 😛
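For readers unfamiliar with compiz internals, here is a tiny toy model of the idea, deliberately not the real compiz plugin API: plugins expose hooks, and the compositor owns the frame loop and calls every hook at the right point of each frame.

#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// Toy model only -- NOT the real compiz plugin API. It just illustrates how
// a compositor-owned frame loop can invoke per-plugin hooks in order.
struct Plugin {
    virtual ~Plugin() = default;
    virtual std::string name() const = 0;
    virtual void preparePaint(int msSinceLastPaint) {}  // animations, timers
    virtual void glPaint() {}                           // actual GL drawing
};

struct UnityShellLike : Plugin {
    std::string name() const override { return "unityshell"; }
    void preparePaint(int ms) override { std::printf("unityshell: animate launcher/panel (%d ms)\n", ms); }
    void glPaint() override { std::printf("unityshell: Nux draws launcher, panel, dash\n"); }
};

struct WobblyLike : Plugin {
    std::string name() const override { return "wobbly"; }
    void glPaint() override { std::printf("wobbly: deform window geometry\n"); }
};

int main() {
    std::vector<std::unique_ptr<Plugin>> plugins;
    plugins.push_back(std::make_unique<WobblyLike>());
    plugins.push_back(std::make_unique<UnityShellLike>());

    // One frame of the compositor's paint loop: every hook, in order.
    for (auto& p : plugins) p->preparePaint(16);
    for (auto& p : plugins) p->glPaint();
    return 0;
}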


Compiz port to GLES2: Another Cool Project!

Hi! It’s been a while since I last wrote about my professional highlights in the open-source world.

From October 2011 to March 2012, I worked on porting Compiz – the open-source eye-candy window manager – to OpenGL ES 1/2. This was a very fun and interesting task to tackle! Now, compiz runs under Linux and many of you might never have used it, so here are two videos showing what it’s basically capable of on a desktop Ubuntu machine:

Those videos actually show compiz running on fast desktop boxes, but my task was to port compiz to Texas Instruments’ OMAP4 platform (using the power of its amazing SGX GPU). Now, you don’t find a full OpenGL implementation on these SoCs; rather, you get OpenGL ES 1/2 implementations delivered by Imagination Technologies (remember, it’s embedded!). Since Compiz was originally written using legacy OpenGL calls, conventional immediate-mode glVertex and almost no shaders, the fun work could start!! I ported a lot of ‘compiz core’ code to GLES2, using vertex buffer objects instead of glVertex calls. I also had to completely get rid of the fixed-function pipeline and write cool brand new GLSL shaders for the eye-candy plugins we wanted to include in our demo (e.g. the Wobbly plugin – to be showcased in another blog post).
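To give a flavour of what that kind of port involves, here is a minimal, generic GLES2 sketch (not actual compiz code): a trivial GLSL ES shader pair plus a textured quad drawn from a vertex buffer object, i.e. the sort of code that replaces fixed-function glVertex drawing.

#include <GLES2/gl2.h>

// Minimal, generic GLES2 sketch (not actual compiz code).
static const char* kVertSrc =
    "attribute vec2 a_pos;\n"
    "attribute vec2 a_uv;\n"
    "varying vec2 v_uv;\n"
    "uniform mat4 u_transform;\n"
    "void main() {\n"
    "  v_uv = a_uv;\n"
    "  gl_Position = u_transform * vec4(a_pos, 0.0, 1.0);\n"
    "}\n";

static const char* kFragSrc =
    "precision mediump float;\n"
    "varying vec2 v_uv;\n"
    "uniform sampler2D u_tex;\n"
    "void main() { gl_FragColor = texture2D(u_tex, v_uv); }\n";

static GLuint compileShader(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, 0);
    glCompileShader(shader);           // error checking omitted for brevity
    return shader;
}

GLuint buildProgram()
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compileShader(GL_VERTEX_SHADER, kVertSrc));
    glAttachShader(prog, compileShader(GL_FRAGMENT_SHADER, kFragSrc));
    glLinkProgram(prog);               // error checking omitted for brevity
    return prog;
}

// quad = 4 vertices of (x, y, u, v), e.g. a window's texture-mapped rectangle.
// Assumes u_transform and the texture unit have been set by the caller.
void drawTexturedQuad(GLuint program, GLuint vbo, const float quad[16])
{
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 16 * sizeof(float), quad, GL_DYNAMIC_DRAW);

    GLint pos = glGetAttribLocation(program, "a_pos");
    GLint uv  = glGetAttribLocation(program, "a_uv");
    glEnableVertexAttribArray(pos);
    glEnableVertexAttribArray(uv);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (const void*)0);
    glVertexAttribPointer(uv,  2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                          (const void*)(2 * sizeof(float)));

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}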

During those months, I had the pleasure of working with Travis Watkins from Linaro (a.k.a. Amaranth), who did the first rounds of GLES porting and with whom I worked closely. I also have to mention the work of Pekka Paalanen (from Collabora), who briefly worked on this project but made important contributions, for example to the paint and damage system. For those interested in pulling our Compiz GLES work from the community git, clone the ‘gles’ branch from here:

http://git.compiz.org/compiz/core/log/?h=gles


Electrolysis revealed

I must admit that people around me are starting to show an increasing interest in multi-process Firefox, so I’ve decided to further document the Electrolysis project here.

The Electrolysis project?

Electrolysis is the working name of a Mozilla project whose goal is to re-architect good old single-process Firefox into a multi-process browser. The idea has been around for some time now, all the more so since competitors like Google and Microsoft have released multi-process versions of their browsers! Currently, there are going to be three types of concurrent processes:

  • The main browser process (called the “chrome process”)
  • Plugin processes (called “plugin processes”)
  • Web content and script processes (called “content processes”)

And why are we moving toward multi-process browsers? Because there are several benefits associated with it:

Security

Generally, plugins are a potential threat to the browser’s integrity. In the single-process case, they are loaded into the main address space and can exploit weak entry points by, say, calling functions with forged string arguments. With multi-process Electrolysis, though, each plugin gets loaded into its own process (with its own address space) and is considered an “untrusted” process. These “plugin processes” have to communicate with the main browser process through IPDL, an inter-process/inter-thread communication protocol language developed by Mozilla. Thus, most exploit/attack attempts from plugins are more easily caught, and when one is caught (or, in fact, when any IPDL protocol error is detected), we just shut down the plugin process.
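Here is a small, self-contained C++ toy (not IPDL and not Mozilla code) just to illustrate the policy described above: the chrome process validates every message coming from an untrusted plugin process against the protocol it expects, and a single violation is enough to shut the child process down.

#include <cstdio>
#include <string>

// Toy model only -- not IPDL, not Mozilla code. Every message from an
// untrusted child is checked against an explicit protocol; one violation
// terminates the whole child process.
enum class MsgType { PaintRequest, GetUrl, WriteArbitraryFile };

struct Message { MsgType type; std::string payload; };

static bool allowedByProtocol(const Message& m)
{
    // The "protocol" only permits a small, explicit set of messages,
    // and rejects forged/oversized arguments.
    switch (m.type) {
        case MsgType::PaintRequest:
        case MsgType::GetUrl:
            return m.payload.size() < 4096;
        default:
            return false;
    }
}

struct PluginChannel {
    bool alive = true;

    void shutdownChild()
    {
        alive = false;
        std::puts("protocol error -> plugin process terminated");
    }

    void onMessageReceived(const Message& m)
    {
        if (!alive) return;
        if (!allowedByProtocol(m)) { shutdownChild(); return; }
        std::puts("message accepted by the chrome process");
    }
};

int main()
{
    PluginChannel channel;
    channel.onMessageReceived({MsgType::GetUrl, "http://example.com"});
    channel.onMessageReceived({MsgType::WriteArbitraryFile, "/etc/passwd"});  // violation
    channel.onMessageReceived({MsgType::PaintRequest, "tile 3"});             // ignored, child is gone
    return 0;
}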

Regular web content and malicious scripts may just as well do harm to the system. Thus, each tab you open in the browser is going to load and run its web content in a separate “content process”, also considered “untrusted”, leading to the same security advantages mentioned above.

But IPDL is not the only security feature added by Electrolysis. In fact, multi-process Firefox will also implement sandboxing to keep untrusted processes from accessing certain system resources and features.

Performance

Since we’re seeing more and more multi-processor/multi-core CPUs coming to market nowadays, it is very likely that this chrome/content/plugin process separation will noticeably improve overall performance. Multi-process software is well suited to multi-CPU/multi-core systems, since each process can be scheduled on its own core.

Stability

Previously, with single-process Firefox, if some web content happened to crash, the whole Firefox process would crash. This will no longer be the case with multi-process Firefox. Because every piece of web content and every plugin will run in its own process and address space, crashes caused by scripts, plugins or content on some web site will only affect the associated browser tabs, letting the user keep browsing other web sites normally. This is an important breakthrough for Firefox stability and, because the “chrome” process (the main Firefox “default” process) is self-contained and well tested, the rate of main-process crashes should drop to near zero.

UI responsiveness

UI responsiveness is an important aspect of every user’s browsing experience, especially on mobile devices. UI responsiveness can simply be defined as the time elapsed between, say, a click from the user and the associated visual feedback from the UI. In the browser world, UI responsiveness is especially important when panning the content area around: the user wants to feel like they’re really manipulating the content in real time. In Firefox, the main “chrome” process is responsible for all the UI work, so having all the UI-related management isolated in a single trusted process will surely improve the browser’s responsiveness.


Multi-process Firefox on the N900?

I’m really glad Fennec/Firefox 1.0 RC came out for Maemo! If you’ve got an N810 or N900 but haven’t installed Firefox on it yet, head to firefox.com/m from your device’s browser and click “download”. With it, you’ll be able to unleash all the great plugin power you were used to with the desktop version! Try it.

Here’s a snapshot I took with the “load-applet” app available in the Maemo Select repository. I’m still amazed at the N900’s processing power: it lets me browse Google Maps just as I used to on my desktop Firefox!


I’m even editing this blog entry with Fennec/Firefox running on my N900!

I’m currently working on the Electrolysis project (multi-process Firefox), so I can’t stop thinking about all the implications a multi-process Fennec would have on the N900. Watch https://wiki.mozilla.org/Content_Processes for more news.


An introduction to dual debugging


As a Firefox/Electrolysis developer, I recently hit a little snag with a harder-than-usual part of the code. I had written a patch for it and found it very difficult to see the effects of my patch compared to the expected non-patched behaviour. Dual-debugging was really indicated here, and I therefore spent some time formalizing a Win32 dual-debugging method of my own to increase my efficiency.

But first, what’s dual-debugging?

“Dual-debugging” simply means debugging two versions of the same application *at the same time* (usually a patched and a non-patched version). The major advantage is that it lets you easily see the tiny differences your patch brought about (by stepping through the code in both debuggers). The main drawback is that this method requires more resources from your system (usually two codebases and two profiles).

I’m developing on Windows with the very stable and mighty Microsoft Visual Studio 2008. The following is a step-by-step tutorial explaining how you can easily set up a dual-debugging environment. I’ll take Electrolysis as an example, but you can apply these principles to almost any other application.

Dual-debugging HOWTO:

1. Duplicate your working tree

To trace into your code in both debuggers, you’ll have to duplicate your code directory in a similar, parallel location. For example, let’s say your code is located under:

D:\WORK\codebase

let’s just copy the whole “codebase” directory to this new location

D:\WORK2\codebase

2. Duplicate your MSVC projects and solutions

Same thing here. Your MSVC project files contain important information about the source files that are going to be debugged. You must duplicate all the *.sln and *.vcproj files. For example:

D:\WORK\MSVC\*.*  -->   D:\WORK2\MSVC\*.*

3. Duplicate your profile for the debugged application

Many applications use user preferences for various runtime checks. One of these is making sure the application is not already instantiated when you run it. This is the case with Firefox and Electrolysis. If you keep a single user profile, you won’t be able to debug two instances in parallel because it’ll say: “Firefox is already running in the background…” To solve this problem, just create another profile. For Firefox, or Electrolysis, this is easily done by:

./firefox -P dummy

Using the little firefox profile GUI, create another profile with a different name.

Then run the application once with the newly created profile. For electrolysis, with a newly created profile called “electrolysis2”, do the following :

./firefox -no-remote -P electrolysis2 -chrome chrome://global/content/test-ipc.xul

4. Modify the duplicates properly

Now, you have to make all the duplicated projects and code files point to the new directories and duplicated files.

  • With MSVC’s “Find and Replace In Files” dialog (CTRL-SHIFT-H), change every “\WORK\” to “\WORK2\”:
      • in the duplicated working tree directory;
      • in the duplicated MSVC project directory.
  • In MSVC, change the debugging command arguments to use the newly created profile. For example, if debugging Electrolysis:
  1. Open the duplicated solution (.sln).
  2. Right-click the startup project and choose “Properties”.
  3. Under “Configuration Properties” > “Debugging”, change “-P electrolysis” to “-P electrolysis2”.

Tutorial: Development setup for Electrolysis (E10S)

Over the last month, I’ve exhaustively documented different development aspects of Electrolysis (E10S), the multi-process firefox. This step-by-step documentation can be found here.

The Mozilla codebase is especially huge and complex, and because of the general scarcity of comments in the code and the lack of a single centralized documentation source, it can be overwhelmingly hard for a newcomer to set up a development environment and successfully tackle a first task. This is especially true for the Electrolysis (e10s) project, where many processes run the Mozilla codebase concurrently.

This documentation was written in an attempt to provide the new E10S hacker with basic knowledge about setting up, building and running the E10S project on any of the usual platforms, namely Linux, Maemo and Windows.
