GPU Bee-havior

Prior to Nvidia’s public release of CUDA, programming a state machine using graphics shaders was challenging, but certainly not impossible. This project was among the first of its kind to execute a simplistic bee-based artificial intelligence (AI) state machine on a Graphics Processing Unit (GPU).

Without CUDA, the entire project was coded using standard OpenGL and Cg – Nvidia’s proprietary shader language – on a GeForce 7600 GT.

Although the GPU-executed state machine would be considered simplistic by almost any standard today, it was novel at the time and simulated over 1000 autonomous bees engaging in foraging behavior.


Internal publication

  • GPU Bee-havior: Honey Bee foraging AI simulated on a GPU
    Lasse Farnung Laursen
    Local PDF

Quick Facts

  • Three-month project
  • Pre-CUDA GPU state machine(s)
  • Coded in C++ and Cg (C for Graphics) – Nvidia’s proprietary shader language
  • Produces results comparable to empirically observed data from real foraging honey bees

Method overview

In addition to gaining a new understanding and appreciation for pre-CUDA general-purpose GPU programming, I also learned a lot about how bees navigate our world. Empirical studies have demonstrated that honey bees can navigate using a variety of methods, including:

  • Using the sun as a compass
  • Using polarized light as a compass when the sun is obscured by clouds
  • Using landmarks, e.g. fences, trees, etc., if the whole sky is overcast
  • Using distance as communicated by other bees via a dance

I honestly never imagined the average honey bee was capable of navigating via landmarks. I initially thought most of their navigation was scent-based, similar to that of the common ant.

Although the paper includes a fairly elaborate description of the bees’ navigational behavior, it is important to note that the actual foraging behavior simulated on the GPU is very simple. A simplified diagram of the honey bees’ foraging behavior was devised from empirical studies available at the time.

Two AI routines were implemented over the course of the project: a very simple AI and a slightly more advanced version. A rough sketch of how such a state machine might look in a fragment shader follows the diagrams below.

Behavior model
Bee AI – Simple version
Bee AI – Advanced version
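
To make the discussion concrete, here is a minimal sketch of what a per-pixel bee state machine could look like in Cg. This is not the project’s actual shader: the state ids, the texture layout (position in RG, state in B, timer in A), and the beeStep and foodPos names are all assumptions made for illustration.

    // Minimal illustrative Cg fragment shader: one pixel = one bee agent.
    // Assumed state texture layout: R,G = position, B = state id, A = timer.
    // Assumed state ids: 0 = in hive, 1 = flying out, 2 = foraging, 3 = returning.
    float4 beeStep(float2 uv : TEXCOORD0,
                   uniform samplerRECT stateTex,   // previous frame's agent states
                   uniform float2 foodPos,         // known food source position
                   uniform float dt) : COLOR
    {
        float4 bee   = texRECT(stateTex, uv);
        float2 pos   = bee.xy;
        float  state = bee.z;
        float  timer = bee.w;

        if (state < 0.5) {                         // in hive: wait, then depart
            timer -= dt;
            if (timer <= 0.0) state = 1.0;
        } else if (state < 1.5) {                  // flying towards the food source
            pos += normalize(foodPos - pos) * dt;
            if (distance(pos, foodPos) < 0.01) { state = 2.0; timer = 1.0; }
        } else if (state < 2.5) {                  // foraging at the source
            timer -= dt;
            if (timer <= 0.0) state = 3.0;
        } else {                                   // returning to the hive at the origin
            pos += normalize(-pos) * dt;
            if (length(pos) < 0.01) { state = 0.0; timer = 2.0; }
        }
        return float4(pos, state, timer);          // written back via render-to-texture
    }

Rendering a full-screen quad with a shader like this into a second texture, then swapping the two textures each frame (ping-ponging), advances every bee by one step per pass.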

Pre-CUDA general-purpose GPU programming

Prior to this project I had only ever used the GPU for rendering graphics. While the project was enjoyable, I can still recall the struggles of debugging control flow on graphics hardware. There’s a reason CUDA has gained traction and established itself as a cornerstone of GPGPU programming. While coding raw shaders has its uses, I wouldn’t want to go back to relying on it to build a state machine.

The completed code was capable of simulating far more bees in real time than a contemporary CPU implementation. The simulation was basic, and some hard design choices had to be made in order to make the program feasible to execute on the GPU. For example, in 2007, texture lookups were very expensive. Consequently, the bees simulated on the GPU were intentionally visually impaired. Despite this impairment, the code still replicated results close to empirically observed real-world data thanks to a set of heuristic modifications.
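
As an illustration of what “visually impaired” could mean in shader terms, the sketch below limits each bee to a single texture fetch per frame instead of scanning a neighbourhood of a field texture. The senseAhead name, the fieldTex layout, and the probe distance are invented for this example, not taken from the paper.

    // Illustrative lookup budget: sampling an N x N neighbourhood of the
    // flower-field texture costs N*N fetches per bee per frame; probing a
    // single point ahead of the bee costs one. (Names and layout assumed.)
    float senseAhead(float2 pos, float2 dir, samplerRECT fieldTex)
    {
        float2 probe = pos + normalize(dir) * 4.0;  // look a fixed distance ahead
        return texRECT(fieldTex, probe).r;          // r channel = nectar density (assumed)
    }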

The GPU-implemented solution is best viewed as a rough simulation compared to a CPU-based implementation. Given the advances in GPGPU over the past decades, I imagine a GPU could produce results comparable to a CPU implementation these days.

Implementation UML Diagram

Design post-mortem

In addition to an appreciation of the CUDA framework Nvidia offers, another key insight gained during the project concerned optimal shader programming. The state machine was implemented via the fragment shader, with each pixel representing an autonomous agent. The graphics hardware’s execution speed was limited by the slowest-executing fragment shader, meaning the more logic kept in a single fragment shader, the less efficient the entire program was likely to be. Computationally, the most efficient approach was to balance the state machine across multiple rendering passes.
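
To illustrate the idea (again as an assumed sketch rather than the project’s actual code), the monolithic shader sketched earlier could be split into two lighter passes, with the host ping-ponging the state texture between them; each pass handles only a subset of states, so the slowest fragment does less work.

    // Pass 1: movement states only; bees in other states pass through unchanged.
    float4 movementPass(float2 uv : TEXCOORD0,
                        uniform samplerRECT stateTex,
                        uniform float2 foodPos,
                        uniform float dt) : COLOR
    {
        float4 bee = texRECT(stateTex, uv);
        if (bee.z > 0.5 && bee.z < 1.5)             // flying towards the food source
            bee.xy += normalize(foodPos - bee.xy) * dt;
        else if (bee.z > 2.5)                       // returning to the hive
            bee.xy += normalize(-bee.xy) * dt;
        return bee;
    }

    // Pass 2: timer states only (state transitions elided for brevity).
    float4 timerPass(float2 uv : TEXCOORD0,
                     uniform samplerRECT stateTex,
                     uniform float dt) : COLOR
    {
        float4 bee = texRECT(stateTex, uv);
        if (bee.z < 0.5 || (bee.z > 1.5 && bee.z < 2.5))
            bee.w -= dt;                            // waiting in the hive or foraging
        return bee;
    }

On SIMD-style fragment hardware, nearby fragments effectively pay for the most expensive branch taken among them, which is why splitting divergent logic into shorter per-pass shaders tended to help.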