
GPU Bee-havior

Before CUDA was a real thing, we had to do all the hard work by hand. In fact, we'd sit with tiny magnets, poking the hard disk to shift the right bits, and walk 15 miles just to get to the university. Uphill. Both ways.

Ahem. Getting back to reality, this project was one of the first of its kind (if not the very first): executing AI behavior on a GPU. Back in 2007, CUDA did not yet exist (or at least not in any public release), so the entire project was coded using standardized OpenGL and Nvidia's proprietary Cg language on a GeForce 7600 GT. Although the bee AI is relatively simple, at least compared to the AI featured in current AAA games, back then it was a significant technical milestone. Inspired by the surge of bee-related research over the previous decade, I set out to create a blazing-fast simulation of autonomous agents modeling real honey bee foraging behavior.

A screenshot of the bee-havior program executing.

Quick Facts

  • Three month solo project
  • Pioneering AI routines on GPU
  • General Purpose GPU coding prior to CUDA
  • Coded from scratch in C++ and Cg (C for Graphics)
  • Produces results comparable, via heuristics, to empirically observed data on foraging honey bees


Bees are fascinating. I knew virtually nothing about them going into this project back in 2007, and found myself lost in the simple yet complex way they manage to navigate their surroundings. The internally published paper notes a number of these fascinating facts, including the many ways the average honey bee can orient itself in our world:

  • Sun compass
  • Polarized light
  • Landmarks
  • Distance measurement

I'll be honest and confess that I never imagined the honey bee was capable of navigating via landmarks. I initially assumed most of their navigation was scent-based, similar to the common ant.

Although the paper includes a fairly elaborate description of the bees' navigational behavior, it is important to note that the foraging behavior actually simulated on the GPU is fairly simple. A simplified diagram of the honey bee's foraging behavior was devised from the empirical studies available at the time.

Two AI routines were implemented during the course of the project. A very simple AI and a slightly more advanced version.

Bee Behavior

The behavioral control structure of a social insect forager, based on empirical observations available at the time of publication (2007). The solid boxes represent behavioral categories. The dotted-lined boxes describe the information upon which a transition from one behavioral category to another is based. On the left-hand side are the internally driven categories, and on the right are the externally driven ones. As the figure shows, some of the behavioral categories are first entered only once a forager leaves its hive.

Bee AI - Simple Version

A probability model state diagram of the simple bee AI.
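The essence of a probability-model state machine like the one in the diagram can be sketched in a few lines of C++. The state names and transition probabilities below are illustrative stand-ins, not the published values: each bee occupies one behavioral state, and every update it rolls a uniform random number against that state's row of the transition table.

```cpp
#include <array>
#include <cstddef>

// Hypothetical reconstruction of the simple bee AI: one behavioral state
// per agent, advanced each tick by a fixed transition-probability table.
// States and probabilities are illustrative, not the original ones.
enum class BeeState { InHive = 0, Scouting, Foraging, Returning, Count };

using Row = std::array<double, static_cast<std::size_t>(BeeState::Count)>;

// kTransition[s][t] = probability of moving from state s to state t.
// Each row sums to 1.
constexpr std::array<Row, 4> kTransition = {{
    {0.70, 0.20, 0.00, 0.10},  // InHive
    {0.05, 0.55, 0.35, 0.05},  // Scouting
    {0.00, 0.05, 0.60, 0.35},  // Foraging
    {0.80, 0.00, 0.10, 0.10},  // Returning
}};

// 'roll' is a uniform random number in [0, 1); the next state is chosen by
// walking the cumulative distribution of the current state's row.
BeeState NextState(BeeState current, double roll) {
    const Row& row = kTransition[static_cast<std::size_t>(current)];
    double cumulative = 0.0;
    for (std::size_t t = 0; t < row.size(); ++t) {
        cumulative += row[t];
        if (roll < cumulative) return static_cast<BeeState>(t);
    }
    return current;  // guard against floating-point round-off
}
```

On the GPU, the same table lookup and cumulative comparison were the kind of branch-light arithmetic a fragment shader of that era could handle.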

Bee AI - Advanced Version

A probability model state diagram of the advanced bee AI.

Old school general-purpose GPU programming

This project was the first time I ever used a GPU for general-purpose programming, and while I thoroughly enjoyed the challenge, it became clear that the flexibility simply wasn't there yet. While the GPU could simulate far more bees than any CPU of the day, the simulation was very basic, and some hard choices had to be made to make the program worth executing on the GPU at all. Back in 2007, texture lookup calls were quite expensive; as a consequence, the simulated bees had to be slightly impaired in terms of visual perception. Despite this impairment, I was still able to replicate results close to empirical observations via a set of heuristics. These days, the honey bee foraging simulation can at best be used as a quick and dirty tool for understanding how certain parameters affect the overall outcome.
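The pre-CUDA GPGPU pattern behind this works roughly as follows: agent state lives in floating-point textures, each "rendering pass" is a fragment shader that reads the previous texture and writes the next one, and the two textures are ping-ponged between passes. A minimal CPU-side sketch of that idea, with plain arrays standing in for RGBA textures and a trivial placeholder update rule (the real shader did far more):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One RGBA texel per bee: position, heading, and behavioral state packed
// into four floats, exactly as a float texture would hold them.
struct BeeTexel { float x, y, heading, state; };

// Stand-in for the fragment shader: one output texel per bee, computed
// only from the previous pass's texture (no in-place writes).
void UpdatePass(const std::vector<BeeTexel>& src, std::vector<BeeTexel>& dst) {
    for (std::size_t i = 0; i < src.size(); ++i) {
        BeeTexel b = src[i];
        b.x += 0.1f;  // trivial "flight" step, purely illustrative
        dst[i] = b;
    }
}

// Ping-pong loop: each pass's output texture becomes the next pass's input.
void Simulate(std::vector<BeeTexel>& a, std::vector<BeeTexel>& b, int passes) {
    for (int p = 0; p < passes; ++p) {
        UpdatePass(a, b);
        std::swap(a, b);
    }
}
```

The read-only-input, write-only-output constraint is not a style choice; it mirrors what render-to-texture hardware of the era actually enforced, and it is what made the expensive texture lookups such a dominant cost.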

Program UML Diagram

A simple UML diagram showing the various parts that made up the GPU Bee-havior program.


Although implementing autonomous agents on a GPU still strikes me as an interesting notion, the biggest concession remains the lack of optimal interaction with the rest of the machine's components. These days, CUDA is the de facto standard for GPGPU programming, but projects like this one paved the way for what CUDA eventually became.

One of the most interesting aspects of executing AI on the GPU is that each rendering pass lasts as long as its slowest fragment. In other words, while it's tempting to lump a large portion of complex AI into a single state, it's computationally more efficient to have every agent balance its computational load across every state it is likely to traverse.
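A toy cost model makes the point concrete. If a pass costs as much as its most expensive agent, then concentrating all the heavy AI in one state makes every pass pay that worst case whenever any agent occupies it, whereas spreading the work evenly lowers the per-pass maximum. The numbers below are illustrative only:

```cpp
#include <algorithm>
#include <vector>

// Cost of one rendering pass: the *maximum* work of any agent in the pass,
// not the average, since the pass finishes with its slowest fragment.
double PassCost(const std::vector<double>& workPerAgent) {
    return *std::max_element(workPerAgent.begin(), workPerAgent.end());
}

// Total time across a sequence of passes.
double TotalCost(const std::vector<std::vector<double>>& passes) {
    double total = 0.0;
    for (const auto& pass : passes) total += PassCost(pass);
    return total;
}
```

With two agents over two passes, lumped work `{{10, 1}, {1, 10}}` costs 10 + 10 = 20, while the same total work balanced as `{{5.5, 5.5}, {5.5, 5.5}}` costs only 11, which is exactly why the agents should smear their computation across states.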