
Gecko on Wayland

At Collabora, we’re always on the lookout for cool opportunities involving Wayland, and we noticed recently that Mozilla had started to show some interest in porting Firefox to Wayland. In short, the Wayland display server is becoming very popular for being lightweight, versatile yet powerful, and it is designed as a replacement for X11. Chrome and WebKit already have Wayland ports, and we think Firefox should have one too.

Some months ago, we wrote a simple proof of concept, basically starting from Gecko’s existing GTK3 paths and stripping all the MOZ_X11 ifdefs out of the way. With a bunch of quick hacks to fix broken bits, we got Firefox running on Weston (Wayland’s official reference compositor) rather easily and quickly, in a couple of days. OK, because of hard X11 dependencies, keyboard input was broken and decorations suffered a little, but that’s a very good start! Take a look at the screenshot below 🙂

[Screenshot: Firefox running on Weston under Wayland]


Firefox/Gecko : Getting rid of Xlib surfaces

Over the past few months, working at Collabora, I have helped Mozilla get rid of Xlib surfaces for content on the Linux platform. Those surfaces were the primary obstacle keeping Mozilla from turning OpenGL layers on by default on Linux, which is one of their long-term goals. I’ll briefly explain this long-term goal and will thereafter give details about how I got rid of Xlib surfaces.

LONG-TERM GOAL – Enabling Skia layers by default on Linux

My work was part of a wider, long-term goal that Mozilla currently has: to enable Skia layers by default on Linux (Bug 1038800). For a glimpse into how Mozilla initially made Skia layers work on Linux, see Bug 740200. At the time of writing this article, Skia layers are still not enabled by default because there are open bugs about failing Skia reftests and because OMTC (off-main-thread compositing) is not yet fully stable on Linux (Bug 722012). Why is OMTC needed to get Skia layers on by default on Linux? Simply because, by design, users that choose OpenGL layers automatically get OMTC on Linux… and since the MTC (main-thread compositing) path has recently been dropped, we must tackle the OMTC bugs before we can dream of turning Skia layers on by default on Linux.

For a more detailed explanation of the issues and design considerations pertaining to turning Skia layers on by default on Linux, see this wiki page.

MY TASK – Getting rid of Xlib surfaces for content

Xlib surfaces for content rendering have been used extensively for a long time now, but when OpenGL got attention as a means to accelerate layers, we quickly ran into interoperability issues between XRender and the Texture_From_Pixmap OpenGL extension… issues that were assumed insurmountable after initial analysis. Also, and I quote Roc here: “We [had] lots of problems with X fallbacks, crappy X servers, pixmap usage, weird performance problems in certain setups, etc. In particular we [seemed] to be more sensitive to Xrender implementation quality than say Opera or Webkit/GTK+.” (Bug 496204)

So for all those reasons, someone had to get rid of Xlib surfaces, and that someone was… me 😉

The Problem

So the problem was to get rid of Xlib surfaces (gfxXlibSurface) for content on the Linux/GTK platform and, implicitly of course, replace them with image surfaces (gfxImageSurface) so that they become regular memory buffers into which we can render with GL/GLES and from which we can composite using the GPU. Now, it’s pretty easy to force the creation of image surfaces (instead of Xlib ones) for all content layers in Gecko’s gfx/layers framework: just force gfxPlatformGTK::CreateOffscreenSurfaces(…) to create gfxImageSurfaces in every case.
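To illustrate the idea, here is a minimal sketch of what that forcing amounts to. This is not the actual Gecko patch: the signature, the types and the OptimalFormatForContent helper are simplified assumptions made for the example.

// Hypothetical, simplified sketch -- not the real Gecko code or signature.
// The point: regardless of whether we are running under X11 with XRender,
// always hand back a plain memory-backed image surface for content drawing.
already_AddRefed<gfxASurface>
gfxPlatformGtk::CreateOffscreenSurface(const gfxIntSize& aSize,
                                       gfxContentType aContent)
{
  // Previously, an Xlib surface (gfxXlibSurface) could be returned here.
  // Now we unconditionally create a CPU-side image surface instead.
  nsRefPtr<gfxASurface> surface =
      new gfxImageSurface(aSize, OptimalFormatForContent(aContent));  // assumed helper
  return surface.forget();
}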

The problem is that naively doing so gives rise to a series of performance regressions and sub-optimal paths being taken: for example, copying image buffers around when passing them across process boundaries, or unnecessary copies when compositing under X11 with XRender support. So the real work was fixing everything after having pulled the gfxXlibSurface plug 😉

The Solution

The first glitch along the way was that GTK2 theme rendering, by design, *had* to happen on Xlib surfaces. We had little choice but to narrow our efforts down to the GTK3 branch alone. What’s nice about GTK3 on that front is that it makes integral use of cairo, thus letting theme rendering happen on any type of cairo_surface_t. For more detail on that decision, read this.
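As a rough illustration of why GTK3 helps here, the minimal, self-contained sketch below renders a themed background into a plain image surface (Gecko’s actual theming plumbing is more involved; this only shows the underlying GTK3/cairo mechanism):

#include <gtk/gtk.h>
#include <cairo.h>

// Minimal sketch: render a themed widget background into a memory-backed
// cairo image surface -- no Xlib surface involved.
static cairo_surface_t*
render_theme_background(GtkStyleContext* style, int width, int height)
{
  cairo_surface_t* surface =
      cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
  cairo_t* cr = cairo_create(surface);

  // GTK3 theme rendering is cairo-based, so any cairo surface type works here.
  gtk_render_background(style, cr, 0, 0, width, height);
  gtk_render_frame(style, cr, 0, 0, width, height);

  cairo_destroy(cr);
  return surface;  // the caller owns and must destroy the surface
}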

Up front, we noticed that the already-implemented GL compositor was properly managing and buffering image layer contents, which is a good thing, but along the way we saw that the ‘basic’ compositor did not. So we started streamlining the basic compositor under OMTC for GTK3.

The core of the solution here was implementing server-side buffering of layer contents that were using image backends. Since the targeted platform was Linux/GTK3 and XRender support is fairly common, the most intuitive thing to do was to subclass BasicCompositor into a new X11BasicCompositor and make it use a new, specialized DataTextureSource (which we called X11DataTextureSourceBasic) that buffers incoming layer content in ::Update() into a gfxXlibSurface that we keep alive for the TextureSource’s lifetime (unless the surface changes size and/or format).
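Very roughly, the shape of that texture source is sketched below. This is a hedged simplification, not the actual patch: the helper functions and the exact signatures are assumptions, and the real code is in the changesets linked further down.

// Hedged, simplified sketch -- helper names and signatures are assumptions;
// see the linked changesets for the real implementation.
class X11DataTextureSourceBasic : public DataTextureSource
{
public:
  bool Update(gfx::DataSourceSurface* aSurface)
  {
    // Lazily (re)create the server-side Xlib surface, and keep it alive across
    // updates so each frame only pays an upload, not a reallocation.
    if (!mXlibSurface ||
        mXlibSurface->GetSize() != aSurface->GetSize() ||
        mXlibSurface->Format() != aSurface->GetFormat()) {
      mXlibSurface = CreateMatchingXlibSurface(aSurface);   // hypothetical helper
    }
    // Buffer the freshly painted content into the Xlib surface so the basic
    // compositor can composite from it via XRender.
    CopyImageToXlibSurface(aSurface, mXlibSurface);          // hypothetical helper
    return true;
  }

private:
  RefPtr<gfxXlibSurface> mXlibSurface;  // kept alive for the TextureSource's lifetime
};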

Performance results were satisfying. On 64-bit systems, we got around a 75% boost in tp5o_shutdown_paint, a 6% gain on ‘cart’, 14% on ‘tresize’, 33% on ‘tscrollx’ and 12% on ‘tcanvasmark’.

For complete details about this effort, design decisions and resulting performance numbers, please read the corresponding bugzilla ticket.

To see the code that we checked in to solve this, look at these two patches:

https://hg.mozilla.org/mozilla-central/rev/a500c62330d4

https://hg.mozilla.org/mozilla-central/rev/6e532c9826e7

Cheers!

 

The importance of the ARM architecture

Recently, in a Compiz-related IRC channel, I read someone questioning the relevance of OMAP platform development. Basically, he was saying something like:

I still don’t understand what the main purpose of the panda board is. Just a very small hardware with desktop power? Or all about SOC? or what ?

Well, the main purpose is to develop for ARM devices. That’s the bottom line. See, the PandaBoard is a development board that integrates the Texas Instruments OMAP4 system on a chip (SoC). And this OMAP4 chip (either the OMAP4430 or the OMAP4460, depending on whether you’re using the Panda or the Panda ES), of course, is ARM-based. Apart from TI’s OMAP platforms, you can’t imagine how many devices out there are using the ARM architecture! Here’s a quick list of devices using ARM processors:

  • Apple iPods, iPhones and iPads
  • Google Galaxy Nexus phones
  • HTC One series phones
  • Samsung Galaxy S series phones
  • Nokia’s N series phones
  • Motorola phones, LG phones, and so on
  • Gameboy Advance
  • Nintendo DS/3DS
  • Calculators, peripherals, …
  • … and many others…

So you see just how widespread the ARM architecture is.

In 2005 about 98% of the more than one billion mobile phones sold each year used at least one ARM processor. As of 2009 ARM processors accounted for approximately 90% of all embedded 32-bit RISC processors and were used extensively in consumer electronics, including personal digital assistants (PDAs), tablets, mobile phones, digital media and music players, hand-held game consoles, calculators and computer peripherals such as hard drives and routers [1].

Now, developing on the BeagleBoard, PandaBoard or Blaze really isn’t just about developing for ARM devices, it’s about developing for OMAP devices. Still, there are a lot of Motorola, Panasonic, LG, BlackBerry and Samsung smartphones out there based on OMAP chips, but that’s another story…

 

References

  1. ARM. In Wikipedia. Retrieved June 26, 2012, from http://en.wikipedia.org/wiki/ARM

The SGX stack : a new project for me

When considering X/DDX driver implementation on an ARM platform, Texas Instruments’ Blaze and PandaBoards are quite often chosen by engineers and designers because of their smart multicore design, low cost and, of course, powerful SGX GPUs, which make real-time rendering of complex, high-polygon scenes possible!

Take a look at this user demo which, basically, shows everything the PowerVR SGX device is capable of rendering, from cel shading and particle systems all the way to ambient occlusion, bump maps and image-based lighting with custom shaders. Currently, the SGX 540 fully supports the OpenGL ES 2.0 spec.

So, since the beginning of April 2012, I have been working on implementing/fixing our DDX drivers to work nicely with the PVR SGX on the PandaBoard ES. This lets me hack on code at various levels of the stack: “high-level” DDX drivers, low-level DRM kernel code and the PVR kernel module.

Currently, only the PVR kernel module (DKMS) is open-source. You can get the code from our TI OMAP release PPA (the pvr-omap4-dkms package). However, all the user libs, that is, the OpenGL/VG implementations (what we call the “sgxlib”) written by IMG, are closed-source. You can download the binary blobs from the same PPA (the libegl1-sgx-omap4, libgles1-sgx-omap4, libgles2-sgx-omap4, libopenvg-sgx-omap4 and pvr-omap4_xxx packages).

That’s it for now.


Compiz port to GLES2 : Putting everything together.

Hi, here’s the last article of the “Compiz port to GLES2” series.

This project was really tons of fun. It led me to:

  • Learn Compiz and Metacity internals
  • Rewrite part of the OpenGL plugin
  • Port many cool Compiz plugins (Water, Wobbly, Cube, Scale, etc.)
  • Rewrite the Compiz OpenGL plugin’s partial-damage rendering
  • Harmonize Nux/Compiz framebuffer usage

And all of this running on the really cool PandaBoard platform!

So today, I just wanted to show you the results. Check out the following two videos:

1) This video showcases Unity 3D with the Wobbly, Water and other plugins.

2) At the end of this video, we see the rest of the demo I devised: the Cube plugin!


Compiz port to GLES2 : Porting the water plugin

Yet another fun task I tackled along the way to porting Compiz to OpenGL ES was to completely rewrite the “Water plugin”. But before I talk too much (like always), let’s take a look at the Water plugin on the desktop:

So, for those of you who didn’t already know the Water plugin, you’ll quickly understand that its practical applications are… well… limited! I mean, having water drops pop up all over one’s desktop is nice and fun… but only for the first 10 minutes, I’d say! 😛  After this already-long time frame, the excitement is over… and you’ll understand that our interest in the Water plugin was for demoing purposes, of course. In fact, the Water plugin is a really cool thing to have for a demo. It showcases per-fragment shading capabilities like no other Compiz plugin can, and the way the plugin was originally devised is rather interesting: it makes very clever use of framebuffers. It uses a series of three square FBOs of limited resolution (let’s say 256×256) whose contents aren’t colors, but bump normals (in the x, y, z components) and heights (in the w component). By binding two FBOs in the series (the current and previous ones), the water-simulation shader can then process all the framebuffer fragments (it computes a per-fragment “acceleration” factor by comparing the current/previous heights for each fragment) and output the newly computed normals and heights into the next FBO in the series!
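To make the ping-pong scheme concrete, here is a minimal host-side sketch of one simulation step using GLES 2.0 calls. The FBO/texture setup, the water shader program and the drawFullscreenQuad helper are assumed to exist elsewhere; the names are hypothetical and this is not the actual plugin code:

#include <GLES2/gl2.h>

// Hypothetical handles set up elsewhere: three FBOs, each backed by an RGBA
// texture storing (normal.xyz, height.w) per texel, plus the water shader program.
extern GLuint fbo[3], tex[3];
extern GLuint waterProgram;

// One simulation step: sample heights from the 'current' and 'previous' textures,
// write the newly computed normals/heights into the 'next' FBO in the series.
void waterSimulationStep(int frame, int size)
{
  int prev = (frame + 1) % 3;
  int curr = (frame + 2) % 3;
  int next = frame % 3;

  glBindFramebuffer(GL_FRAMEBUFFER, fbo[next]);
  glViewport(0, 0, size, size);

  glUseProgram(waterProgram);
  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, tex[curr]);
  glActiveTexture(GL_TEXTURE1);
  glBindTexture(GL_TEXTURE_2D, tex[prev]);
  glUniform1i(glGetUniformLocation(waterProgram, "uCurrent"), 0);   // assumed uniform names
  glUniform1i(glGetUniformLocation(waterProgram, "uPrevious"), 1);

  // Draw a full-screen quad so the fragment shader runs once per texel, computing
  // a per-fragment acceleration from the current vs. previous heights.
  drawFullscreenQuad();  // hypothetical helper: a simple two-triangle quad

  glBindFramebuffer(GL_FRAMEBUFFER, 0);  // hand the default framebuffer back
}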

The only problem was… the original shaders were written in the ARB assembly language! Rewriting this shader in GLSL really was a liberating experience!!

Here’s the ported Water plugin in action! This was run on a PandaBoard ES running Ubuntu 11.04 Natty Narwhal:

You can find the final version of the Compiz water plugin port to GLES here. Have fun !


Compiz port to GLES2 : Supporting the Unity Desktop

The second part of my “Compiz port to OpenGL ES” task was to make the whole thing work seamlessly with Canonical’s brand-new Unity desktop. Take a look at the Unity desktop, which originally appeared in Ubuntu 11.04 Natty Narwhal:

This change, of course, promised to bring its share of problems, mainly for two reasons:

  • Unity displays its GUI components/widgets using the “Nux” toolkit, which is OpenGL-based. I knew Nux made extensive use of framebuffers, which compiz-gles depended on as well, so ouch… possible interference ahead!
  • The Unity/Compiz running scheme is very peculiar: Unity runs as a Compiz plugin (the “unityshell” plugin).

Let’s take a look at the main challenges those elements posed to this task:

1) Unity depending on Nux

This, at first, wouldn’t have to be such a big deal because, after all, stacking OpenGL calls at different software layers is a common thing. The problem here, though, is that Nux made unsafe use of framebuffer objects (FBOs). By “unsafe”, I mean that the code was not carefully unbinding FBOs back to their previous owners after use… so any caller (like Compiz!) trying to nest Unity/Nux drawing in its own FBO just couldn’t do it! This “unsafe” FBO usage comes from a “standalone” point of view and is somewhat incompatible with the Compiz plugin scheme.

So what Jay Taoko and I came up with is a new Nux API:

This new API lets us manage a new, so-called “reference framebuffer” that allows for FBO nesting 🙂  Here it is:

=== modified file 'Nux/WindowCompositor.h'
--- Nux/WindowCompositor.h    2011-12-29 18:06:53 +0000
+++ Nux/WindowCompositor.h    2012-01-05 04:00:19 +0000
@@ -175,6 +175,23 @@
     //====================================  
   public:
+    /*!
+        Set and external fbo to draw Nux BaseWindow into. This external fbo will be
+        restored after Nux completes it rendering. The external fbo is used only in embedded mode. \n
+        If the fbo_object parameter 0, then the reference fbo is invalid and will not be used.
+
+        @param fbo_object The opengl index of the fbo.
+        @param fbo_geometry The geometry of the fbo.
+    */
+    void SetReferenceFramebuffer(unsigned int fbo_object, Geometry fbo_geometry);
+
+    /*!
+        Bind the reference opengl framebuffer object.
+
+        @return True if no error was detected.
+    */
+    bool RestoreReferenceFramebuffer();
+
     ObjectPtr<IOpenGLFrameBufferObject>& GetWindowFrameBufferObject()
     {
       return m_FrameBufferObject;
@@ -561,6 +578,10 @@
     int m_TooltipX;
     int m_TooltipY;

+    //! The fbo to restore after Nux rendering in embedded mode.
+    unsigned int reference_fbo_;
+    Geometry reference_fbo_geometry_;

All this landed in the Nux 2.0 series as a big “Linaro” merge on 2012-01-06. Take a look! It’s still upstream in more recent versions!
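On the Compiz/unityshell side, using this API looks roughly like the following sketch. The surrounding plumbing and the variable names are assumptions made for illustration; only SetReferenceFramebuffer and RestoreReferenceFramebuffer come from the API above:

// Simplified usage sketch -- names and plumbing are assumptions,
// not the actual unityshell code.
void PaintUnityIntoCompizFbo(nux::WindowCompositor& compositor,
                             unsigned int compiz_fbo,
                             nux::Geometry const& fbo_geometry)
{
  // Tell Nux which FBO to rebind once it has finished its own rendering,
  // so Compiz can keep nesting Nux output inside its own framebuffer.
  compositor.SetReferenceFramebuffer(compiz_fbo, fbo_geometry);

  RenderNuxBaseWindows(compositor);  // hypothetical: Nux draws its BaseWindows here

  // Explicitly bind the reference FBO back before Compiz resumes drawing.
  compositor.RestoreReferenceFramebuffer();
}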

2) Unity running as a Compiz plugin

It may seem weird at first to think of Unity as a Compiz plugin, but let’s think about it for a minute. When we run Compiz as our window manager, making intensive use of OpenGL ES calls, it’s not so foolish to make Unity a plugin, because Unity also makes intensive use of OpenGL primitives and we have to find a way to coordinate the two components in a graceful manner. One way is to run Unity as a Compiz plugin (the “unityshell” plugin, that is). That way, Unity declares a series of callbacks known to Compiz (like glPaint, glDrawTexture, preparePaint, etc.) and Compiz calls them at the right time. Every drawing operation is performed, and every framebuffer is filled, at the right time, in the right order, gracefully alongside the other plugins. On second thought, this decision from Canonical to write unityshell is not so foolish after all 😛
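To give a rough idea of what such a plugin looks like, here is a hedged, minimal sketch in the spirit of the Compiz 0.9 C++ plugin interface. It is not the actual unityshell code and the exact signatures may differ; the point is simply that the plugin implements paint hooks and chains to the next plugin in line:

// Hedged, minimal sketch of a Compiz-0.9-style plugin hooking the paint pipeline;
// not the real unityshell sources, and signatures are approximate.
class SketchShellWindow :
    public GLWindowInterface,
    public PluginClassHandler<SketchShellWindow, CompWindow>
{
public:
  // Called by the opengl plugin when this window is about to be painted;
  // the plugin can issue its own GL drawing here, in order with the other plugins.
  bool glPaint(const GLWindowPaintAttrib& attrib,
               const GLMatrix& transform,
               const CompRegion& region,
               unsigned int mask)
  {
    // ... draw the shell's own GL/Nux content for this window ...
    return gWindow->glPaint(attrib, transform, region, mask);  // chain to the next plugin
  }

private:
  GLWindow* gWindow;  // obtained from the wrapped CompWindow in the real API
};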


Compiz port to GLES2 : Another Cool Project !

Hi! It’s been a while since I last wrote about my professional highlights in the open-source world.

From October 2011 till March 2012, I worked on porting Compiz – the open-source eye-candy window manager – to OpenGL ES 1/2. This was a very fun and interesting task to tackle! Now, Compiz runs under Linux, and many of you might never have used it. So here are two videos showing what it’s basically capable of on a desktop Ubuntu system:

Those videos actually show Compiz running on fast desktop boxes, but my task was to port Compiz to Texas Instruments’ OMAP4 platform (using the power of its amazing SGX GPU). Now, you won’t find a desktop OpenGL implementation on these SoCs; rather, they ship OpenGL ES 1/2 implementations delivered by Imagination Technologies (remember, it’s embedded!). So, since Compiz was originally written using legacy OpenGL calls, conventional glVertex and almost no shaders, the fun work could start!! I ported a lot of ‘compiz core’ code to GLES2, using vertex buffer objects instead of glVertex calls. I also had to completely get rid of the fixed-function pipeline and write cool, brand-new GLSL shaders for the eye-candy plugins we wanted to include in our demo (e.g. the Wobbly plugin – to be showcased in another blog post).
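To give a flavour of that porting pattern, here is a minimal, generic sketch (not actual Compiz code) of what replacing immediate-mode drawing with a vertex buffer object looks like under GLES2; the attribute names and the already-compiled shader program are assumptions:

#include <GLES2/gl2.h>

// Generic sketch of the porting pattern, not actual Compiz code.
// Legacy desktop GL would draw a quad with glBegin/glTexCoord2f/glVertex2f/glEnd;
// GLES2 has no immediate mode, so the same quad becomes a VBO drawn
// with a shader program and vertex attributes.
void drawTexturedQuad(GLuint program, GLuint texture)
{
  // Interleaved vertex data: x, y, u, v for a two-triangle strip covering the quad.
  static const GLfloat verts[] = {
    -1.f, -1.f, 0.f, 0.f,
     1.f, -1.f, 1.f, 0.f,
    -1.f,  1.f, 0.f, 1.f,
     1.f,  1.f, 1.f, 1.f,
  };

  GLuint vbo;
  glGenBuffers(1, &vbo);
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

  glUseProgram(program);
  GLint posLoc = glGetAttribLocation(program, "aPosition");  // assumed attribute names
  GLint texLoc = glGetAttribLocation(program, "aTexCoord");
  glEnableVertexAttribArray(posLoc);
  glEnableVertexAttribArray(texLoc);
  glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (const void*)0);
  glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat),
                        (const void*)(2 * sizeof(GLfloat)));

  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, texture);

  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

  glBindBuffer(GL_ARRAY_BUFFER, 0);
  glDeleteBuffers(1, &vbo);
}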

During those months, I had the pleasure of working with Travis Watkins from Linaro (a.k.a. Amaranth), who did the first rounds of GLES porting and with whom I worked closely. I also have to mention the work of Pekka Paalanen (from Collabora), who worked on this project only briefly but made important contributions to the paint and damage system, for example. For those interested in pulling our Compiz GLES work from the community git, clone the ‘gles’ branch from here:

http://git.compiz.org/compiz/core/log/?h=gles


Working with Collabora

Last week, I started working on the Electrolysis project with Collabora. I’m really excited, and the fun thing about it is that I’m going to be hacking on the Fennec side (not Firefox). So, coming right from my last contract with the Mozilla people working on Fennec performance, it’s almost as if I never stopped working!