
SVVR2016


I’ve been fortunate enough lately to attend the largest professional virtual reality event/conference : SVVR. This virtual reality conference has been held in Silicon Valley every year for the past 3 years. This year, it showcased more than 100 VR companies on the exhibit floor and welcomed more than 1400 VR professionals and enthusiasts from all around the world. As a VR enthusiast myself, I attended the full 3-day conference, met most of the exhibitors, and I’d like to summarize my thoughts and the things I learned below, grouped under various themes. This post is by no means exhaustive and consists of my own personal opinions.

CONTENT FOR VR

I realized that content creation is really becoming the one area of VR where most players will end up working. Hardware manufacturers and platform software companies are building the VR infrastructure as we speak (and it’s already comfortably usable), but as we move along and standards become more solid, I’m pretty sure we’re going to see lots and lots of new start-ups in the VR content world, creating immersive games, 360 video content, live VR events, etc… Right now, the range of deployment options for a content developer is not very broad. The vast majority of content creators are targeting Unity3D, since it has built-in support for virtually all VR devices there are on the market, like the Oculus family of headsets, the HTC Vive, PlayStation VR, Samsung’s GearVR and even generic D3D or OpenGL-based applications on PC/Mac/Linux.

2 types of content

There really are two main types of VR content out there : first, 3D, artificially-generated virtual content, and second, 360, real-life captured content.


The former is what we usually refer to when thinking about VR, that is, computer-generated 3D worlds, e.g. in games, in which the VR user can wander and interact. This is usually the kind of content used in VR games, but also in VR applications, like Google’s great drawing app called TiltBrush (more info below). Click here to see a nice demo video!

The latter is everything that’s not generated but rather “captured” from real life and projected or rendered in the VR space, most commonly with the use of spherical projections and post-processing stitching and filtering. Simply said, we’re talking about 360 videos here (both 2D and 3D). Usually, this kind of content does not let VR users interact with the VR world as “immersively” as computer-generated 3D content. It’s rather “played back” and “replayed”, just like a regular online television series, for example, except for the fact that watchers can “look around”.

At SVVR2016, there were so many exhibitors doing VR content… like InnerVision VR, Baobab Studios, SculptVR, MiddleVR, Cubicle Ninjas, etc… on the computer-generated side, and Facade TV, VR Sports, Koncept VR, etc… on the 360 video production side.

TRACKING

Personally, I think tracking is by far the most important factor when considering the whole VR user experience. You have to actually try the HTC Vive tracking system to understand. The HTC Vive uses two “Lighthouse” base stations placed in the room to let you track a larger space, something close to 15′ x 15′! I tried it many times and tracking always stayed pretty solid and consistent. With the Vive you can literally walk in the VR space, zig-zag, leap and dodge without losing detection. On that front, I think the competition is doing quite poorly. For example, Oculus’ CV1 only tracks your movement from the front and the tracking angle is pretty narrow… tracking was often lost when I faced away just a little… disappointing!

Speaking of tracking, one of the most amazing talks was Leap Motion CTO David Holz’s demo of his brand new ‘Orion’, a truly impressive hand-tracking camera with very powerful detection algorithms and very, very low latency. We could only “watch” David interact, but it looked so natural! Check it out for yourself!

AUDIO

Audio is becoming increasingly crucial to the VR workflow since it adds so much to the VR experience. It is generally agreed in the VR community that awesome, well 3D-localised audio that seems “real” can add a lot of realism even to the visuals. At SVVR2016, there were a few audio-centric exhibitors like Ossic and Subpac. The former is releasing a Kickstarter-funded 3D audio headset that lets you “pan” stereo audio content by rotating your head left and right. The latter is showcasing a complete body suit using tactile transducers and vibrotactile membranes to make you “feel” audio. The goal of this article is not to review specific technologies, but to discuss every aspect/domain that is part of the VR experience and, when it comes to audio, I unfortunately feel we’re still at the “3D sound is enough” level, but I believe it’s not.
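To give a sense of what that baseline “3D sound” level amounts to, here is a toy sketch of constant-power stereo panning driven by a source azimuth (my own C++ illustration, not code from any product mentioned here); real spatializers layer HRTF filtering, front/back cues and room modelling on top of something like this.

#include <cmath>
#include <utility>

// Toy constant-power panner : maps a horizontal source angle (radians,
// 0 = straight ahead, positive = to the listener's right) to left/right
// gains. This is the bare-minimum "3D sound" level discussed above : no
// front/back discrimination, no HRTF filtering, no room modelling.
std::pair<float, float> panGains(float azimuth)
{
    const float kHalfPi = 1.5707963f;

    // Clamp to the frontal half-plane for this simple illustration.
    if (azimuth < -kHalfPi) azimuth = -kHalfPi;
    if (azimuth >  kHalfPi) azimuth =  kHalfPi;

    // Map [-pi/2, pi/2] to [0, pi/2] and use cos/sin so that
    // left^2 + right^2 == 1 (constant perceived power while panning).
    const float t = (azimuth + kHalfPi) * 0.5f;
    return { std::cos(t), std::sin(t) };   // {left gain, right gain}
}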

See, proper 3D audio localization is a must, of course. You obviously do not want to play a VR game where a dog appearing on your right is barking on your left!… nor do you want to have the impression a hovercraft is approaching up ahead when it’s actually coming from behind. Fortunately, we now have pretty good audio engines that correctly render audio coming from anywhere around you with good front/back discrimination. A good example of that is 3Dception from TwoBigEars. 3D spatialization of audio channels is a must-have and yet, it’s an absolute minimum in my opinion. Try it for yourself! Most of today’s VR games have spatially coherent sound, but most of the time, you just do not believe the sound is actually “real”. Why?

Well, there are a number of reasons, ranging from “limited audio diversity” (a limited number of objects/details in the audio feed… like missing tiny air flows/sounds, the user’s breathing or the room’s ambient noise level) to limited sound cancellation capability (the ability to suppress high-pitched ambient sounds coming from “outside” the game), but I guess one of the most important factors is simply the way audio is recorded and rendered for our day-to-day cheap stereo headsets… A lot of promise comes with binaural recording and stereo-to-binaural conversion algorithms. Binaural recording is a technique that records audio through two tiny omni microphones located under diaphragm structures resembling the human ears, so that audio is bounced back just as if it were being routed through human ears before reaching the microphones. The binaural audio experience is striking and the “stereo” feeling is magnified. It is very difficult to explain, you have to hear it for yourself.

Speaking of ear structure having a direct impact on the audio spectrum, I think one of the most promising techniques moving forward for added audio realism will be the whole geometry-based audio modeling field, where you can basically render sound as if it had actually been reflected off computer-generated 3D geometry. Using such models, a dog barking in front of a tiled metal shed will sound really different from the same dog barking near a wooden chalet. The brain does pick up those tiny details and that’s why you find companies like Nvidia releasing their brand new “Physically Based Acoustic Simulator Engine” in VrWorks.

HAPTICS

Haptics is another very interesting VR domain that consists of letting users perceive virtual objects not through the visual or aural channels, but through touch. Usually, this sense of touch in a VR experience is brought in by the use of special haptic wands that, using force feedback and other technologies, make you think you are actually pushing an object in the VR world.

You mostly find two types of haptic devices out there : wand-based and glove-based. Gloves for haptics are of course more natural to most users. It’s easy to picture yourself in a VR game trying to “feel” raindrops falling on your fingers, or in a flight simulator, pushing buttons and really feeling them. However, from talking to many exhibitors at SVVR, it seems we’ll be stuck at the “feel button pushes” level for quite some time, as we’re very far from being able to render “textures”, since the spatial resolutions involved would simply be too high for any haptic technology that’s currently available. There are some pretty cool start-ups with awesome glove-based haptic technologies, like the Kickstarter-funded Neurodigital Technologies GloveOne or Virtuix’s Hands Omni.

Now, I’m not saying wand-based haptic technologies are outdated and unpromising. In fact, I think they are more promising than gloves for any VR application that relies on “tools”, like a painting app requiring you to use a brush, or a remote-surgery medical application requiring you to use an actual scalpel! When it comes to wands, tools and the like, the potential for haptic feedback is multiplied because you simply have more room to fit more actuators and gyros. I once tried an arm-based 3D joystick in a CAD application and I could swear I was really hitting objects with my design tool… it was stunning!

SOCIAL

If VR really takes off in the consumer mass market someday soon, it will most probably be social. That’s something I heard at SVVR2016 (paraphrased) in the very interesting talk by David Baszucki titled : “Why the future of VR is social”. In essence, let’s just take a look at how technology gets adopted nowadays and acknowledge that the vast majority of applications rely on the “social” aspect, right? People want to “connect”, “communicate” and “share”. So when VR comes around, why would it suddenly be different? Of course, gamers will want to play really immersive VR games and workers will want to use VR in their daily tasks to boost productivity, but most users will probably want to put on their VR glasses to talk to their relatives, thousands of miles away, as if they were sitting in the same room. See? Even the gamers and the workers I referred to above will want to play or work “with other real people”. No matter how you use VR, I truly believe the social factor will be one of the most important ones to consider when building successful software. At SVVR2016, I discovered a very interesting start-up focused on the social VR experience. With mimesys’ telepresence demo, using an HTC Vive controller, they had me collaborate on a painting with a “real” guy hooked up to the same system, painting from his home apartment in France, some 9850 km away, and I had a pretty good sense of his “presence”. The 3D geometry and rendered textures were not perfect, but it was good enough for a true collaboration experience!

MOVING FORWARD

We’re only at the very beginning of this very exciting journey through virtual reality and it’s really difficult to predict what VR devices will look like in only 3-5 years from now because things are just moving so quickly… A big area I did not cover in this post, and that will surely change a lot of parameters in the VR world moving forward, is… AR – Augmented Reality 🙂 Check out what MagicLeap’s up to these days!

 

 

Gecko on Wayland

At Collabora, we’re always on the lookout for cool opportunities involving Wayland, and we noticed recently that Mozilla had started to show some interest in porting Firefox to Wayland. In short, the Wayland display server is becoming very popular for being lightweight and versatile yet powerful, and it is designed to be a replacement for X11. Chrome and WebKit already have Wayland ports and we think that Firefox should have one too.

Some months ago, we wrote a simple proof of concept, basically starting from Gecko’s existing GTK3 paths and stripping all the MOZ_X11 ifdefs out of the way. We did a bunch of quick hacks fixing broken stuff, but rather easily and quickly (a couple of days), we got Firefox to run on Weston (Wayland’s official reference compositor). OK, because of hard X11 dependencies, keyboard input was broken and decorations suffered a little, but that’s a very good start! Take a look at the screenshot below 🙂

[Screenshot : Firefox running on Weston, under Wayland]
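For readers curious about what those guarded paths look like, here is a small illustrative sketch of the MOZ_X11 ifdef pattern that the proof of concept stripped out (made-up example code, not actual Gecko source; the function below is hypothetical):

// Illustrative only : not actual Gecko code. It sketches the kind of
// MOZ_X11-guarded path the proof of concept stripped out so the GTK3
// code could run on a Wayland-backed GDK display.
#include <gtk/gtk.h>
#ifdef MOZ_X11
#  include <gdk/gdkx.h>   // Xlib-specific GDK helpers
#endif

static void describe_display(GdkDisplay* display)
{
#ifdef MOZ_X11
    // X11-only path : reaches for the raw Xlib Display* behind GDK.
    Display* xdisplay = GDK_DISPLAY_XDISPLAY(display);
    g_print("Running on X11, Display* = %p\n", (void*)xdisplay);
#else
    // Backend-agnostic path : no Xlib types, so it works on Wayland too.
    g_print("Running on display '%s'\n", gdk_display_get_name(display));
#endif
}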

Firefox/Gecko : Getting rid of Xlib surfaces

Over the past few months, working at Collabora, I have helped Mozilla get rid of Xlib surfaces for content on the Linux platform. These surfaces were the primary obstacle keeping Mozilla from turning OpenGL layers on by default on Linux, which is one of their long-term goals. I’ll briefly explain this long-term goal and thereafter give details about how I got rid of Xlib surfaces.

LONG-TERM GOAL – Enabling Skia layers by default on Linux

My work was part of a wider, long-term goal that Mozilla currently has : to enable Skia layers by default on Linux (Bug 1038800). For a glimpse into how Mozilla initially made Skia layers work on Linux, see bug 740200. At the time of writing this article, Skia layers are still not enabled by default because there are some open bugs about failing Skia reftests and about OMTC (off-main-thread compositing) not being fully stable on Linux at the moment (Bug 722012). Why is OMTC needed to get Skia layers on by default on Linux? Simply because, by design, users that choose OpenGL layers automatically get OMTC on Linux… and since the MTC (main-thread compositing) path has been dropped lately, we must tackle the OMTC bugs before we can dream about turning Skia layers on by default on Linux.

For a more detailed explanation of the issues and design considerations pertaining to turning Skia layers on by default on Linux, see this wiki page.

MY TASK – Getting rid of Xlib surfaces for content

Xlib surfaces for content rendering have been used extensively for a long time now, but when OpenGL got attention as a means to accelerate layers, we quickly ran into interoperability issues between XRender and the Texture_From_Pixmap OpenGL extension… issues that were assumed insurmountable after initial analysis. Also, and I quote Roc here, “We [had] lots of problems with X fallbacks, crappy X servers, pixmap usage, weird performance problems in certain setups, etc. In particular we [seemed] to be more sensitive to Xrender implementation quality than say Opera or Webkit/GTK+.” (Bug 496204)

So for all those reasons, someone had to get rid of Xlib surfaces, and that someone was… me 😉

The Problem

So the problem was to get rid of Xlib surfaces (gfxXlibSurface) for content on the Linux/GTK platform and, implicitly of course, replace them with image surfaces (gfxImageSurface) so they become regular memory buffers into which we can render with GL/GLES and from which we can composite using the GPU. Now, it’s pretty easy to force the creation of image surfaces (instead of Xlib ones) for all content layers in Gecko’s gfx/layers framework : just force gfxPlatformGTK::CreateOffscreenSurfaces(…) to create gfxImageSurfaces in every case.
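For readers less familiar with these surface types : gfxXlibSurface and gfxImageSurface are essentially wrappers around the corresponding cairo surface types, so a tiny standalone cairo example (illustrative only, not Gecko code) shows the difference in backing store we care about:

#include <cairo.h>

// Minimal illustration (plain cairo, not Gecko code) of the two kinds of
// backing store discussed here. gfxImageSurface wraps a cairo image surface :
// a regular block of client memory we can draw into with Skia/cairo and
// upload to the GPU. gfxXlibSurface wraps a cairo Xlib surface : a
// server-side X pixmap that requires a live X connection.
int main()
{
    // The "image surface" case : pixels live in client memory.
    cairo_surface_t* image =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);

    cairo_t* cr = cairo_create(image);
    cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
    cairo_paint(cr);                      // rendered entirely in memory
    cairo_destroy(cr);

    unsigned char* pixels = cairo_image_surface_get_data(image);
    (void)pixels;                         // this buffer can become a GL texture

    cairo_surface_destroy(image);

    // A cairo_xlib_surface_t, by contrast, would need an open Display* and
    // an X drawable : exactly the dependency we want content to lose.
    return 0;
}

So the naive switch itself is trivial; the interesting part is what happens afterwards.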

The problem is that naively doing so gives rise to a series of performance regressions and sub-optimal paths being taken, for example, copying image buffers around when passing them across process boundaries, or unnecessary copies when compositing under X11 with XRender support. So the real work was to fix everything after having pulled the gfxXlibSurface plug 😉

The Solution

The first glitch on the way was that GTK2 theme rendering, by design, *had* to happen on Xlib surfaces. We didn’t have much choice but to narrow down our efforts to the GTK3 branch alone. What’s nice with GTK3 on that front is that it makes integral use of cairo, thus letting theme rendering happen on any type of cairo_surface_t. For more detail on that decision, read this.

Up front, we noticed that the already-implemented GL compositor was properly managing and buffering image layer contents, which is a good thing, but along the way, we saw that the ‘basic’ compositor did not. So we started streamlining the basic compositor under OMTC for GTK3.

The core of the solution was to implement server-side buffering of layer contents that were using image backends. Since the targeted platform was Linux/GTK3 and since XRender support is rather frequent, the most intuitive thing to do was to subclass BasicCompositor into a new X11BasicCompositor and make it use a new specialized DataTextureSource (which we called X11DataTextureSourceBasic) that basically buffers incoming layer content in ::Update() into a gfxXlibSurface that we keep alive for the TextureSource’s lifetime (unless the surface changes size and/or format).
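As a very rough structural sketch of that subclassing (the class names are the ones mentioned above, but the signatures, members and helper functions below are simplified and hypothetical; the real code is in the two patches linked below):

// Simplified, partly hypothetical sketch : real signatures and members are in
// the two patches linked below. It only shows the shape of the idea, i.e.
// buffering image-backed layer content into a long-lived Xlib surface.
class X11DataTextureSourceBasic : public DataTextureSource
{
public:
  bool Update(gfx::DataSourceSurface* aSurface)        // simplified signature
  {
    // (Re)create the server-side buffer only when size/format changes.
    if (!mBufferSurface || aSurface->GetSize() != mSize) {
      mSize = aSurface->GetSize();
      mBufferSurface = CreateXlibSurface(mSize);        // hypothetical helper
    }
    // Upload the new content into the gfxXlibSurface we keep alive, so the
    // basic compositor can blit it with XRender at composite time.
    CopyToXlibSurface(mBufferSurface, aSurface);        // hypothetical helper
    return true;
  }
private:
  RefPtr<gfxXlibSurface> mBufferSurface;
  gfx::IntSize mSize;
};

// The compositor subclass mostly just hands out the specialized source.
class X11BasicCompositor : public BasicCompositor
{
public:
  RefPtr<DataTextureSource> CreateDataTextureSource()   // simplified signature
  {
    return new X11DataTextureSourceBasic();
  }
};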

Performance results were satisfying. On 64-bit systems, we got around a 75% boost in tp5o_shutdown_paint, a 6% perf gain on ‘cart’, 14% on ‘tresize’, 33% on tscrollx and 12% on tcanvasmark.

For complete details about this effort, design decisions and resulting performance numbers, please read the corresponding bugzilla ticket.

To see the code that we checked in to solve this, look at these 2 patches :

https://hg.mozilla.org/mozilla-central/rev/a500c62330d4

https://hg.mozilla.org/mozilla-central/rev/6e532c9826e7

Cheers !

 

The importance of the ARM architecture

Recently, in a compiz-related IRC channel, I read someone questioning the relevance of OMAP platform development. Basically, he was saying something like :

I still don’t understand what the main purpose of the panda board is. Just a very small hardware with desktop power? Or all about SOC? or what ?

Well, the main purpose is to develop for ARM devices. That’s the bottom line. See, the PandaBoard is a development board that integrates the Texas Instruments OMAP4 system on a chip (SoC). And this OMAP4 chip (either an OMAP4430 or an OMAP4460, depending on whether you’re using the Panda or the Panda ES), of course, is ARM-based. Apart from TI’s OMAP platforms, you can’t imagine how many devices out there use the ARM architecture! Here’s a quick list of devices using ARM processors :

  • Apple iPods, iPhones and iPads
  • Google Galaxy Nexus phones
  • HTC One series phones
  • Samsung Galaxy S series phones
  • Nokia’s N series phones
  • Motorola phones, LG phones, blah blah,
  • Gameboy Advance
  • Nintendo DS/3DS
  • Calculators, peripherals, …
  • … and many others…

So you see just how widespread the ARM architecture is.

In 2005 about 98% of the more than one billion mobile phones sold each year used at least one ARM processor. As of 2009 ARM processors accounted for approximately 90% of all embedded 32-bit RISC processors and were used extensively in consumer electronics, including personal digital assistants (PDAs), tablets, mobile phones, digital media and music players, hand-held game consoles, calculators and computer peripherals such as hard drives and routers [1].

Now, developing on the BeagleBoard, PandaBoard or Blaze really isn’t just about developing for ARM devices, it’s about developing for OMAP devices. Still, there are a lot of Motorola, Panasonic, LG, BlackBerry and Samsung smartphones out there based on OMAP chips, but that’s another story…

 

References

  1. ARM. In Wikipedia. Retrieved June 26, 2012, from http://en.wikipedia.org/wiki/ARM

The SGX stack : a new project for me

When considering X/DDX driver implementation on an ARM platform, Texas Instruments’ Blaze and PandaBoard are quite often considered by engineers and designers because of their smart multicore design, low cost and, of course, their powerful SGX GPU, which makes real-time rendering of complex, high-polygon scenes possible!

Take a look at this user demo which, basically, shows everything the PowerVR SGX device is capable of rendering, from cel shading and particle systems all the way to ambient occlusion, bump maps and image-based lighting with custom shaders. Currently, the SGX 540 fully supports the OpenGL ES 2.0 spec.

So since the beginning of April 2012, I’ve been working on implementing/fixing our DDX drivers to work nicely with the PVR SGX on the PandaBoard ES. This lets me hack at code at various levels of the stack : “high-level” DDX drivers, low-level DRM kernel code and the PVR kernel module.

Currently, only the PVR kernel module (DKMS) is open-source. You can get the code from our TI OMAP release PPA (pvr-omap4-dkms package). However, all the user-space libs, that is, the OpenGL/VG implementations (what we call the “sgxlib”) written by IMG, are closed-source. You can download the binary blobs from the same PPA (libegl1-sgx-omap4, libgles1-sgx-omap4, libgles2-sgx-omap4, libopenvg-sgx-omap4 and pvr-omap4_xxx packages).

That’s it for now.


Compiz port to GLES2 : Putting everything together.

Hi, here’s the last article of the “Compiz port to GLES2” series.

This project was really tons of fun. It led me to :

  • Learn compiz and metacity internals
  • Rewrite part of the opengl plugin
  • Port many cool compiz plugins (Water, Wobbly, Cube, Scale, etc…)
  • Rewrite compiz opengl plugin’s partial damage rendering
  • Harmonize Nux/Compiz framebuffer usage

And all of this working on the really cool PandaBoard platform !

So today, I just wanted to show you the results. Check the following two videos :

1) This video showcases the Unity desktop with the Wobbly, Water and other plugins.

2) At the end of this video, we see the rest of the demo I devised : the Cube plugin !


Compiz port to GLES2 : Porting the water plugin

Yet another fun task I tackled along the way to porting Compiz to OpenGL ES was to completely rewrite the “Water” plugin. But before I talk too much (as always), let’s take a look at the Water plugin on the desktop:

So, for those of you who didn’t already know the water plugin, you’ll quickly understand that its practical applications are… well… limited! I mean, having water drops pop up all over one’s desktop is nice and fun… but only for the first 10 minutes, I’d say! 😛 After this already-long time frame, the excitement is over… and you’ll understand that our interest in the water plugin was for demoing purposes, of course. In fact, the water plugin is a really cool thing to have for a demo. It showcases per-fragment shading capabilities like no other compiz plugin can, and the way the plugin was originally devised is rather interesting : it makes very clever use of framebuffers. In fact, it uses a series of three square FBOs of limited resolution (let’s say 256×256) whose contents aren’t colors, but bump normals (in the x, y, z components) and heights (in the w component). By binding two FBOs of the series (the current and last ones), the water-simulation shader can then process all the framebuffer fragments (it calculates a per-fragment “acceleration” factor by comparing the current/last heights for each fragment) and output newly computed normals and heights into the next FBO in the series!
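To make that FBO ping-pong more concrete, here is a minimal GLES2-style sketch of the update step (my own illustration, not the actual plugin source; the shader program, uniform names, attribute location 0 and quad VBO are assumptions):

#include <GLES2/gl2.h>

// Illustrative GLES2 ping-pong sketch, not the actual Compiz water plugin
// code. Three FBO-attached textures hold {normal.xyz, height.w}; each update
// reads the two most recent states and writes the next one.
struct WaterState {
    GLuint fbo[3];       // the three framebuffer objects
    GLuint tex[3];       // RGBA textures attached to them (normals + height)
    int    current = 0;  // index of the most recent state
};

// 'program' is the compiled water-update shader and 'quadVbo' a fullscreen
// quad; the uniform names and attribute location 0 are assumptions.
void updateWater(WaterState& w, GLuint program, GLuint quadVbo)
{
    const int prev = (w.current + 2) % 3;
    const int curr = w.current;
    const int next = (w.current + 1) % 3;

    glBindFramebuffer(GL_FRAMEBUFFER, w.fbo[next]);   // render into next state
    glViewport(0, 0, 256, 256);
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, w.tex[prev]);
    glUniform1i(glGetUniformLocation(program, "uStatePrev"), 0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, w.tex[curr]);
    glUniform1i(glGetUniformLocation(program, "uStateCurr"), 1);

    // Drawing a fullscreen quad makes the fragment shader run once per texel,
    // computing the new height/normal from the two previous states.
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);             // back to the default FB
    w.current = next;
}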

The only problem was… the original shaders were written in the ARB assembly language! Rewriting this shader in GLSL really was a liberating experience!!

Here’s the ported Water plugin in action! This was run on a PandaBoard ES running Ubuntu 11.04 Natty Narwhal :

You can find the final version of the Compiz water plugin port to GLES here. Have fun !


Compiz port to GLES2 : Supporting the Unity Desktop

The second part of my “Compiz port to OpenGL ES” task was to make the whole thing work seamlessly with Canonical’s brand new Unity desktop. Take a look at the Unity desktop that originally appeared in Ubuntu 11.04 Natty Narwhal :

This change, of course, promised to bring its share of problems, mainly for 2 reasons:

  • Unity displays its GUI components/widgets using the “Nux” toolkit, which is OpenGL-based. I knew Nux made extensive use of framebuffers, which compiz-gles depended on as well, so ouch… possible interference ahead!
  • The Unity/Compiz running scheme is very peculiar : Unity runs as a Compiz plugin (the “Unityshell” plugin).

Let’s take a look at the main challenges those elements posed to this task:

1) Unity depending on NUX

This, at first glance, shouldn’t have been such a big deal because, after all, stacking OpenGL calls at different software layers is a common thing. The problem here, though, is that Nux made unsafe use of framebuffer objects (FBOs). By “unsafe”, I mean that the code was not carefully unbinding FBOs back to their previous owners after use… so any caller (like compiz!) trying to nest Unity/Nux drawings in its own FBO just couldn’t do it! This “unsafe” FBO usage comes from a “standalone” point of view and is somewhat incompatible with the compiz plugin scheme.
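For context, the usual way for nested code to stay “safe” here is to query whichever FBO is currently bound and rebind it when done; a minimal sketch (plain GLES2, my own illustration, not Nux or Compiz code) looks like this:

#include <GLES2/gl2.h>

// Minimal illustration (plain GLES2, not Nux or Compiz code) of "safe" nested
// FBO usage : remember whoever was bound before us, do our own offscreen
// rendering, then hand the binding back.
void renderIntoPrivateFbo(GLuint myFbo)
{
    GLint previousFbo = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &previousFbo);   // save caller's FBO

    glBindFramebuffer(GL_FRAMEBUFFER, myFbo);
    // ... draw this component's offscreen content here ...

    // Rebind the caller's FBO instead of unbinding to 0 : this is exactly the
    // step that was missing and that broke FBO nesting for compiz.
    glBindFramebuffer(GL_FRAMEBUFFER, static_cast<GLuint>(previousFbo));
}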

So what Jay Taoko and I came up with is a new Nux API :

This new API lets us manage a new so-called “reference framebuffer” that allows for FBO nesting 🙂 Here it is :

=== modified file 'Nux/WindowCompositor.h'
--- Nux/WindowCompositor.h    2011-12-29 18:06:53 +0000
+++ Nux/WindowCompositor.h    2012-01-05 04:00:19 +0000
@@ -175,6 +175,23 @@
     //====================================  
   public:
+    /*!
+        Set and external fbo to draw Nux BaseWindow into. This external fbo will be
+        restored after Nux completes it rendering. The external fbo is used only in embedded mode. \n
+        If the fbo_object parameter 0, then the reference fbo is invalid and will not be used.
+
+        @param fbo_object The opengl index of the fbo.
+        @param fbo_geometry The geometry of the fbo.
+    */
+    void SetReferenceFramebuffer(unsigned int fbo_object, Geometry fbo_geometry);
+
+    /*!
+        Bind the reference opengl framebuffer object.
+
+        @return True if no error was detected.
+    */
+    bool RestoreReferenceFramebuffer();
+
     ObjectPtr<IOpenGLFrameBufferObject>& GetWindowFrameBufferObject()
     {
       return m_FrameBufferObject;
@@ -561,6 +578,10 @@
     int m_TooltipX;
     int m_TooltipY;

+    //! The fbo to restore after Nux rendering in embedded mode.
+    unsigned int reference_fbo_;
+    Geometry reference_fbo_geometry_;

All this landed in the Nux 2.0 series as a big “Linaro” merge on 2012-01-06. Take a look! It’s still upstream in more recent versions!
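From the embedding compositor’s point of view, the intended usage of this new API is roughly the following (a hypothetical sketch based on the methods above, not actual Unityshell code; the FBO handle and geometry variables are made up):

#include "Nux/WindowCompositor.h"

// Hypothetical usage sketch of the API added above, not actual Unityshell
// code : the embedding compositor (compiz) tells Nux which FBO to come back
// to once Nux is done rendering its BaseWindows in embedded mode.
void drawUnityIntoCompizFbo(nux::WindowCompositor& compositor,
                            unsigned int compizFbo,          // made-up handle
                            nux::Geometry compizFboGeometry) // made-up geometry
{
    // Register compiz's own FBO as the "reference framebuffer".
    compositor.SetReferenceFramebuffer(compizFbo, compizFboGeometry);

    // ... Nux renders its windows here, binding its internal FBOs as needed ...

    // The reference FBO can then be rebound explicitly when required.
    compositor.RestoreReferenceFramebuffer();
}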

2) Unity running as a Compiz plugin

It may seem weird at first to think of Unity as a compiz plugin, but let’s think about it for a minute. When running compiz as our window manager, which makes intensive use of OpenGL ES calls, it’s not so foolish to make Unity a plugin, because Unity also makes intensive use of OpenGL primitives, and we’ve got to find a way to coordinate the two components in a graceful manner. One way is to run Unity as a compiz plugin (the “Unityshell” plugin, that is). That way, Unity declares a series of callbacks known to Compiz (like glPaint, glDrawTexture, preparePaint, etc…) and compiz calls them at the right time. Every drawing operation is done, and every framebuffer is filled, at the right time, in the right order, in a graceful manner alongside the other plugins. On second thought, this decision from Canonical to write Unityshell is not so foolish after all 😛
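To make the callback idea concrete, here is a heavily simplified sketch of the shape of such a plugin (illustrative, pseudocode-level C++; the real Compiz 0.9 plugin interfaces and signatures are richer and different, so treat every name below as an approximation):

// Heavily simplified, illustrative sketch of the callback idea; the real
// Compiz 0.9 plugin interfaces (GLScreenInterface, CompositeScreenInterface,
// PluginClassHandler, ...) have different, richer signatures. Treat every
// name below as an approximation, not the actual API.
class MyShellPlugin
{
public:
    // Called by compiz before a frame is painted : advance animations, etc.
    void preparePaint(int msSinceLastPaint)
    {
        mAnimationTimeMs += msSinceLastPaint;
    }

    // Called by compiz when an output is painted with GL : this is where a
    // plugin like Unityshell draws its own GL content (launcher, panel, dash)
    // so that it composites in the right order with the other plugins.
    bool glPaintOutput(/* paint attribs, transform, damage region, ... */)
    {
        // 1) let compiz and the other plugins paint the desktop underneath,
        // 2) then draw this plugin's own geometry on top.
        return true;   // painting handled
    }

private:
    int mAnimationTimeMs = 0;
};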


Compiz port to GLES2 : Another Cool Project !

Hi! It’s been a while since I last wrote about my professional highlights in the open-source world.

From October 2011 till March 2012, I worked on porting COMPIZ – the open-source eye-candy window manager – to OpenGL ES 1/2. This was a very fun and interesting task to tackle! Now, compiz runs under Linux and many of you might never have used it. So here are two videos showing what it’s basically capable of on a desktop Ubuntu:

Those videos actually show compiz running on fast desktop boxes, but my task was to port compiz to Texas Instruments’ OMAP4 platform (using the power of its amazing SGX GPU). Now, you don’t find any desktop OpenGL implementation on these SoCs; rather, there are OpenGL ES 1/2 implementations delivered by Imagination Technologies (remember, it’s embedded!). So since Compiz was originally written using legacy OpenGL calls, conventional glVertex and almost no shaders, the fun work could start!! I ported a lot of ‘compiz core’ code to GLES2, using vertex buffer objects instead of glVertex calls. I also had to completely get rid of the fixed-function pipeline and write cool brand new GLSL shaders for the eye-candy plugins we wanted to include in our demo (e.g. the Wobbly plugin – to be showcased in another blog post).
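As a tiny, generic illustration of what that kind of port involves (my own example, not actual compiz code), here is an immediate-mode textured quad and its GLES2 equivalent using a VBO and shader attributes (the attribute names and the shader program are assumptions):

#include <GLES2/gl2.h>

// Generic illustration of the porting pattern, not actual compiz code.
// The legacy desktop-GL path looked roughly like this (immediate mode,
// fixed-function, none of which exists in GLES2) :
//
//     glBegin(GL_TRIANGLE_FAN);
//     glTexCoord2f(0, 0); glVertex2f(x,     y);
//     glTexCoord2f(1, 0); glVertex2f(x + w, y);
//     glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
//     glTexCoord2f(0, 1); glVertex2f(x,     y + h);
//     glEnd();
//
// The GLES2 equivalent becomes a vertex buffer plus shader attributes :
void drawTexturedQuad(GLuint program, float x, float y, float w, float h)
{
    // Interleaved {position.xy, texcoord.uv} for a triangle strip.
    const GLfloat verts[] = {
        x,     y,     0.f, 0.f,
        x + w, y,     1.f, 0.f,
        x,     y + h, 0.f, 1.f,
        x + w, y + h, 1.f, 1.f,
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STREAM_DRAW);

    glUseProgram(program);   // a simple textured-quad GLSL program is assumed

    const GLint posLoc = glGetAttribLocation(program, "aPosition"); // assumed
    const GLint texLoc = glGetAttribLocation(program, "aTexCoord"); // assumed
    glEnableVertexAttribArray(posLoc);
    glEnableVertexAttribArray(texLoc);
    glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(GLfloat), (const void*)0);
    glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(GLfloat), (const void*)(2 * sizeof(GLfloat)));

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDeleteBuffers(1, &vbo);
}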

During those months, I had the pleasure of working with Travis Watkins from Linaro (a.k.a. Amaranth), who did the first rounds of GLES porting and with whom I worked closely. I also have to mention the work of Pekka Paalanen (from Collabora), who briefly worked on this project but made important contributions to the paint and damage system, for example. For those interested in pulling our Compiz GLES work from the community git, clone the ‘gles’ branch from here :

http://git.compiz.org/compiz/core/log/?h=gles


Electrolysis revealed

I must admit that people around me are starting to show an increasing interest in multi-process Firefox, so I’ve decided to further document the Electrolysis project here.

The electrolysis project ?

Electrolysis is the working name of a Mozilla project whose goal is to re-architect good old single-process Firefox into a multi-process one. The idea’s been around for some time now, all the more so since competitors like Google and Microsoft have released multi-process versions of their browsers! Currently, there are going to be three types of concurrent processes :

  • The main browser process (called the “chrome process”)
  • Plugins processes (called “plugin processes”)
  • Web content and script processes (called “content processes”)

And why are we moving toward multi-process browsers ? Because there are several benefits associated with it :

Security

Generally, plugins are a potential threat to the browser’s integrity. In the single-process case, they load themselves into the main address space and can exploit weak entry points by, say, calling functions with forged string arguments. With multi-process Electrolysis, though, each plugin gets loaded into its own process (with its own address space) and is considered an “untrusted” process. Those “plugin processes” have to communicate with the main browser process through IPDL, an inter-process/thread communication protocol language developed by Mozilla. Thus, most exploit/attack attempts from plugins are more easily caught. When caught (or in fact, when any IPDL protocol error is detected), we just shut down the plugin process.

Regular web content and malicious scripts may just as well do harm to the system. Thus, each tab you open in the browser is going to load and run its web content in separate “content processes”, also considered “untrusted”, which brings all of the above-mentioned security advantages.

But IPDL is not the only security feature added by Electrolysis. In fact, multi-process Firefox will also implement sandboxing to keep untrusted processes from accessing certain system resources/features.

Performance

Since we’re seeing more and more multi-processor/multi-core CPUs coming to the market nowadays, it is very likely that this chrome/content/plugin process separation will greatly increase overall performance. Multi-process software is well suited to multi-CPU/multi-core systems because each process can be scheduled on its own core and run truly in parallel.

Stability

Formerly, with single-process Firefox, if some web content happened to crash, the whole Firefox process would crash. This will no longer be the case with multi-process Firefox. Because every piece of web content and every plugin will run in its own process and address space, crashes caused by scripts/plugins/content on some web site will only affect the associated browser tabs, letting the user keep browsing other web sites normally. This is an important breakthrough for increased Firefox stability and, because the “chrome” process (the main Firefox “default” process) is self-contained and well-tested, the rate of main-process crashes should drop to near 0.

UI responsiveness

UI responsiveness is an important aspect of every user’s browsing experience, especially on mobile devices. UI responsiveness can be simply defined as the time elapsed between, say, a click from the user and the associated visual feedback from the UI. In the browser world, UI responsiveness is especially important when panning the content area around : the user wants to feel he’s really manipulating the content in real time. In Firefox, the main “chrome” process is responsible for all the UI work. So, having all the UI-related management isolated in a single trusted process, away from heavy content and plugin work, will surely increase the browser’s responsiveness.
