November 2009 Archives

November 20, 2009

Comments Enabled

Due to popular demand, I've enabled comments and trackbacks here.  Please don't all spam me at once... :)

November 15, 2009

Internal Memory Bottlenecks and Their Removal

While debugging something different late last night, I noticed some flags in one of Glamo's registers which looked interesting: FIFO settings for the LCD engine.  This reminded me of an observation by Lars a few weeks ago that the LCD engine seems to conflict with Glamo's 2D engine on memory accesses, leading to slower performance of accelerated 2D operations when the screen is switched on.  So I turned the FIFO up to "8 stages" (from 1) to see what happened.  The result was much faster 2D operations - literally twice the speed!

At "8 stages", the price was a jittery, unstable display.  However, the same speed improvement shows up at the "4 stages" setting.  I've seen occasional artifacts there too, so I'm using 2 stages at the moment, where the speed is still right up there.  I'll keep testing and see if things can be tuned further.
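
For the curious, the change boils down to a read-modify-write of one of Glamo's MMIO registers.  The register offset and field layout below are placeholders rather than the real definitions (those live in the Glamo documentation and driver headers), so treat this as a sketch of the idea, not the actual code:

    #include <linux/io.h>
    #include <linux/types.h>

    /* Placeholder names - not the real register or field definitions. */
    #define GLAMO_REG_LCD_FIFO_CTRL   0x0000
    #define GLAMO_LCD_FIFO_DEPTH_MASK 0x0007

    static void glamo_lcd_set_fifo_depth(void __iomem *base, u16 stages)
    {
        u16 val = ioread16(base + GLAMO_REG_LCD_FIFO_CTRL);

        val &= ~GLAMO_LCD_FIFO_DEPTH_MASK;          /* clear the old depth */
        val |= stages & GLAMO_LCD_FIFO_DEPTH_MASK;  /* e.g. 2 stages */
        iowrite16(val, base + GLAMO_REG_LCD_FIFO_CTRL);
    }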

Because we don't yet make the most of the 2D engine, this doesn't immediately translate into a huge increase in UI speed.  But the difference is very obvious with x11perf and some of my test programs. The program from the screenshot I posted recently jumped from 45-48fps right up to 95-98fps!

November 13, 2009

"Look Ma, No Busywaits!"

When the CPU needs to do something that depends on a result the GPU is still working on, it has to wait for the GPU to catch up.  One of the biggest problems with the current architecture of xf86-video-glamo, in both the DRM and non-DRM versions, is that this waiting is done by spinning in a tight loop, checking the GPU's status on each pass, until it has caught up.  This isn't great, for a few reasons.  It makes no use of the parallelism between the CPU and the GPU, so precious CPU time is wasted that could be spent doing something more useful.  And if there's nothing else to do, the CPU could be sleeping instead - reducing power consumption.
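
For illustration, the existing wait amounts to something like this (the identifiers are invented for the sketch; the real code in xf86-video-glamo reads Glamo's status registers):

    #include <stdint.h>

    /* Spin until the GPU reports that it has consumed everything we
     * submitted.  The CPU does nothing useful in the meantime. */
    static void glamo_engine_busywait(volatile const uint16_t *status_reg,
                                      uint16_t target)
    {
        while (*status_reg != target)
            ;  /* burn CPU cycles until the GPU catches up */
    }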

Most GPUs, including Glamo, have a mechanism for being a little smarter.  The kernel can ask the chip to trigger an interrupt when a certain point in the command queue has been reached.  When a process needs to wait, the kernel can send it to sleep and watch out for the interrupt.  When the interrupt arrives, the process can be woken back up quickly, so it gets back to work with very little latency.

This week, I've been implementing this kind of thing for the Glamo DRM driver.  It goes a bit like this:

  • Process submits some rendering commands via one of the command submission ioctls.
  • Kernel driver places rendering commands on Glamo's command queue.
  • Process needs to wait for the GPU to catch up, so calls the wait ioctl.
  • Kernel driver puts an extra sequence of commands, called a fence, onto the command queue.  A unique number is associated with the fence and recorded by the kernel.
  • When the GPU processes the fence, it raises the interrupt and places the fence's number into a certain register.
  • The interrupt handler checks this number and wakes up the corresponding process (a rough sketch of this wait/wake path follows below).
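
Here's a rough sketch of that wait/wake path using the kernel's wait queue machinery.  The helpers declared as placeholders stand in for the real command-queue and register accesses in glamo-drm; don't take the names literally:

    #include <linux/wait.h>
    #include <linux/interrupt.h>
    #include <linux/atomic.h>
    #include <linux/jiffies.h>
    #include <linux/types.h>

    /* Placeholders for the real command-queue and register accessors: */
    extern void glamo_cmdq_write_fence(u32 seq);
    extern u32 glamo_read_fence_reg(void);

    static DECLARE_WAIT_QUEUE_HEAD(glamo_fence_queue);
    static atomic_t glamo_emitted_seq;   /* last fence number emitted */
    static u32 glamo_completed_seq;      /* last fence the GPU has reached */

    /* Append a fence to the command queue and return its number. */
    static u32 glamo_emit_fence(void)
    {
        u32 seq = atomic_add_return(1, &glamo_emitted_seq);

        glamo_cmdq_write_fence(seq);  /* queue the fence commands for 'seq' */
        return seq;
    }

    /* Called from the wait ioctl: sleep until the GPU reaches 'seq'. */
    static int glamo_wait_fence(u32 seq)
    {
        long r = wait_event_interruptible_timeout(glamo_fence_queue,
                        (s32)(glamo_completed_seq - seq) >= 0, HZ);

        if (r < 0)
            return r;               /* interrupted by a signal */
        return r ? 0 : -EBUSY;      /* 0 means we timed out */
    }

    /* Interrupt handler: read the number the GPU just wrote and wake
     * up anyone waiting for it. */
    static irqreturn_t glamo_fence_irq(int irq, void *data)
    {
        glamo_completed_seq = glamo_read_fence_reg();
        wake_up_all(&glamo_fence_queue);
        return IRQ_HANDLED;
    }
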
I wrote a test program which tells Glamo to fill the whole screen with colour as fast as it can, waiting for the GPU to catch up each time.  The goal was to make the program run with close to zero CPU usage while still getting the full framerate I could get using busywaits.  That goal was achieved, and here's a screenshot to prove it.  The framerate - just below 50fps when filling the entire VGA screen - was exactly the same as with busywaits.  It even went up a little (to 50-51fps) when I improved the interrupt handling.
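
The test program itself is little more than a loop.  Roughly like this, with glamo_submit_fill() and glamo_wait_idle() standing in for the real command-submission and wait ioctls (those names are invented for the sketch):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-ins for the real ioctl wrappers. */
    extern void glamo_submit_fill(unsigned int colour);
    extern void glamo_wait_idle(void);

    int main(void)
    {
        struct timespec start, now;
        unsigned long frames = 0;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            glamo_submit_fill(frames & 1 ? 0xffff : 0x001f);  /* fill screen */
            glamo_wait_idle();   /* sleeps in the wait ioctl, no spinning */
            frames++;
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while (now.tv_sec - start.tv_sec < 10);

        printf("%.1f fps\n", frames / 10.0);
        return 0;
    }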

Things aren't always so great.  When the command sequence to be executed is very short, the overheads of fencing and scheduling become significant, and the overall rate drops.  However, it shouldn't be too difficult to design some kind of heuristic to use busywaits as a low-latency strategy in such cases.
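
One possible shape for that heuristic - not something the driver does yet, just a sketch building on the wait function above - is to spin for a short, bounded time first and only fall back to the fence/sleep path if the GPU still hasn't caught up:

    #define GLAMO_SPIN_LIMIT 1000   /* arbitrary tuning knob */

    static int glamo_wait_smart(u32 seq)
    {
        unsigned int i;

        /* Short waits: a brief busywait avoids the fencing and
         * scheduling overhead entirely. */
        for (i = 0; i < GLAMO_SPIN_LIMIT; i++) {
            if ((s32)(glamo_completed_seq - seq) >= 0)
                return 0;
            cpu_relax();
        }

        /* Long waits: go to sleep and let the interrupt wake us. */
        return glamo_wait_fence(seq);
    }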

There are still a few problems to iron out.  The fence mechanism seems to be able to fall out of sync with things, leading to processes waiting for too long (or even forever).  But when it works, some things do seem to feel a little faster in general use.

Geeks may be interested in the actual code.

November 7, 2009

Concrete Blocks

Concrete blocks. Absolutely everywhere. That's my dominant impression of this international particle accelerator research centre after the first month. Concrete blocks shielding the outside world from radiation emitted by the shiny things hiding behind them. And generally, the bigger the pile of concrete blocks, the cooler the thing that's lurking behind.

Here are some photos from today's open day at DESY. Most of the things shown (everything apart from FLASH and XFEL) have nothing to do with what I work on, but they're still exciting to look at. The HERA and PETRA tunnels aren't normally open, least of all to the public, and there probably won't be another opportunity to see them for years. In pictures 38, 40, 42, 45, 46 and 51, you can see the sequence of bits of pipes and coils which guided electrons from PETRA, physically above HERA, into HERA's electron ring. HERA was switched off in September 2007, but almost all of it is still in the tunnels. You can also see wider views of the machine. The cylindrical pipe thing on the top is the superconducting ring of magnets which guided protons, and the pink boxy thing underneath is a normally conducting ring of magnets for the electrons. You can even see what's underneath the pink metal cover, but it's not very exciting. Then there's a spin rotator which alters the polarisation of the electrons. A bit further down, you can see the electron and proton rings being brought closer together (the electron beam pipe is the thin bronze-coloured thing just in front of the yellow thing), and then going through the final focusing magnets before colliding with one another in the next room. Not that you can see anything except concrete blocks, because that bit is just way too cool.

And it needs a whole lot of cryogenic stuff to make it work.

PETRA was previously used for particle physics, before being turned into a pre-accelerator for HERA and more recently (last year or so) into a synchrotron radiation source for (e.g.) protein crystallography. This thing is still used - in fact it's one of the most modern synchrotron X-ray sources in the world - but it wasn't switched on while we were in the tunnel, otherwise we would have been fried. Naturally it's hidden behind a huge wall of concrete blocks.

There are plenty more photos to see beyond the ones linked here...!