For tonight’s post, I’m going to include three new examples from my upcoming Nature of Code book. I’ll also excerpt some of the text with these examples below.

This first example expands on the existing Recursive Tree example that comes with Processing.

Chapter 8: Recursion and Fractals

The recursive tree fractal is a nice example of a scenario in which adding a little bit of randomness can make the tree look more natural. Take a look outside and you’ll notice that branch lengths and angles vary from branch to branch, not to mention the fact that branches don’t all have exactly the same number of smaller branches. First, let’s see what happens when we simply vary the angle and length. This is a pretty easy one, given that we can just ask Processing for a random number each time we draw the tree.

void branch(float len) {
  // Each call picks its own random angle for this level's branches
  float theta = random(0, PI/3);

  line(0, 0, 0, -len);
  translate(0, -len);
  len *= 0.66;
  if (len > 2) {
    pushMatrix();
    rotate(theta);
    branch(len);
    popMatrix();
    pushMatrix();
    rotate(-theta);
    branch(len);
    popMatrix();
  }
}

In the above function, we always call branch() twice. But why not pick a random number of branches and call branch() that number of times?

void branch(float len) {

  line(0, 0, 0, -len);
  translate(0, -len);
  len *= 0.66;

  if (len > 2) {

    // Call branch() a random number of times
    int n = int(random(1, 4));
    for (int i = 0; i < n; i++) {

      // Each branch gets its own random angle
      float theta = random(-PI/2, PI/2);
      pushMatrix();
      rotate(theta);
      branch(len);
      popMatrix();
    }
  }
}

The example below takes the above a few steps further. It uses Perlin noise both to generate the angles and to animate them. In addition, it draws each branch with a thickness according to its level and occasionally shrinks a branch by a factor of two to vary where the levels begin.
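To give a sense of the noise-driven angles, here's a sketch in plain Java (so it can run outside Processing). The value-noise function below is a simplified stand-in for Processing's noise(), and all names here are illustrative, not taken from the downloadable example:

```java
public class NoiseAngle {
    // Simple 1D value noise: smoothly interpolate between pseudo-random
    // values at integer lattice points (a stand-in for Processing's noise()).
    static double hash(int i) {
        i = (i << 13) ^ i;
        return ((i * (i * i * 15731 + 789221) + 1376312589) & 0x7fffffff)
               / (double) 0x7fffffff; // in [0, 1]
    }

    static double noise(double x) {
        int i = (int) Math.floor(x);
        double f = x - i;
        double t = f * f * (3 - 2 * f); // smoothstep interpolation
        return hash(i) * (1 - t) + hash(i + 1) * t;
    }

    // Map noise in [0, 1] to an angle in [0, PI/3], analogous to
    // theta = map(noise(xoff), 0, 1, 0, PI/3) in a Processing sketch.
    static double angleAt(double xoff) {
        return noise(xoff) * Math.PI / 3.0;
    }

    public static void main(String[] args) {
        // Advancing xoff a little each frame animates the angle smoothly,
        // unlike random(), which jumps to an unrelated value every call.
        for (double xoff = 0; xoff < 1.0; xoff += 0.25) {
            System.out.println(angleAt(xoff));
        }
    }
}
```

The key difference from the earlier random() version is continuity: nearby xoff values give nearby angles, which is what makes the animated tree sway instead of flicker.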

TreeStochasticNoise.zip

Next up is an excerpt from the genetic algorithms chapter.

Chapter 9: Evolution and Code

In 2009, Jer Thorp released a great genetic algorithms example on his blog entitled “Smart Rockets.” Jer points out that NASA uses evolutionary computing techniques to solve all sorts of problems, from satellite antenna design to rocket firing patterns. This inspired him to create a Flash demonstration of evolving rockets. Here is a description of the scenario:

A population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen (with obstacles blocking a straight line path).

Each rocket is equipped with five thrusters of variable strength and direction. The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence. In this example, we’re going to evolve our own simplified Smart Rockets, inspired by Jer Thorp’s. Implementing some of Jer’s more advanced features is left as an exercise.
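One way to sketch the genetic encoding this scenario implies: each rocket's DNA could be a fixed-length sequence of thrust vectors, with single-point crossover and random mutation. This is a plain-Java illustration with hypothetical names (DNA, crossover(), mutate()); it is not code from Jer Thorp's demo or from the book's example:

```java
import java.util.Random;

public class DNA {
    final double[][] genes; // genes[i] = {x, y} thrust for firing step i
    static final Random rng = new Random();

    DNA(int steps) {
        genes = new double[steps][2];
        for (int i = 0; i < steps; i++) {
            // Random direction and strength for each thruster firing
            double angle = rng.nextDouble() * 2 * Math.PI;
            double mag = rng.nextDouble() * 0.1;
            genes[i][0] = Math.cos(angle) * mag;
            genes[i][1] = Math.sin(angle) * mag;
        }
    }

    // Single-point crossover: the child takes this parent's genes up to
    // the midpoint and the partner's genes after it.
    DNA crossover(DNA partner) {
        DNA child = new DNA(genes.length);
        int mid = genes.length / 2;
        for (int i = 0; i < genes.length; i++) {
            double[] src = (i < mid) ? genes[i] : partner.genes[i];
            child.genes[i][0] = src[0];
            child.genes[i][1] = src[1];
        }
        return child;
    }

    // With a small probability per gene, replace it with a fresh
    // random thrust vector to keep variation in the population.
    void mutate(double rate) {
        for (double[] g : genes) {
            if (rng.nextDouble() < rate) {
                double angle = rng.nextDouble() * 2 * Math.PI;
                double mag = rng.nextDouble() * 0.1;
                g[0] = Math.cos(angle) * mag;
                g[1] = Math.sin(angle) * mag;
            }
        }
    }
}
```

Fitness (how close a rocket gets to the target) would drive which parents get selected for crossover each generation; that selection step is omitted here.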

Our rockets will have only one thruster, and this thruster will be able to fire in any direction with any strength in every single frame of animation. This isn't particularly realistic, but it will make building out the framework a little easier. (We can always make the rocket and its thrusters more advanced and realistic later.)
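Under those assumptions, the per-frame update is just force accumulation and Euler integration. Here is a minimal plain-Java sketch (the Rocket class and its fields are hypothetical, with mass assumed to be 1 so force equals acceleration):

```java
public class Rocket {
    double px, py;          // position
    double vx, vy;          // velocity
    final double[][] genes; // one {x, y} thrust vector per frame
    int frame = 0;

    Rocket(double[][] genes) { this.genes = genes; }

    void update() {
        // The single thruster fires every frame: apply this frame's
        // gene as a force (mass = 1), then integrate.
        vx += genes[frame][0];
        vy += genes[frame][1];
        px += vx;
        py += vy;
        frame++;
    }

    public static void main(String[] args) {
        // Two frames of upward thrust (negative y points up in Processing)
        Rocket r = new Rocket(new double[][] {{0, -0.1}, {0, -0.1}});
        r.update();
        r.update();
        System.out.println(r.py); // y position after two frames
    }
}
```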

Source: SmartRockets.zip

And here's a short excerpt from the beginning of the chapter on neural networks, as well as the example that closes out the chapter demonstrating how to visualize the flow of information through a network.

Chapter 10: The Brain

Computer scientists have long been inspired by the human brain. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” they describe the concept of a neuron: a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.

Their work, and the work of many scientists and researchers that followed, was not meant to accurately describe how the biological brain works. Rather, an artificial neural network (which we will now simply refer to as a “neural network”) was designed as a computational model based on the brain that can solve certain kinds of problems.

It’s probably pretty obvious to you that there are certain problems that are incredibly simple for a computer to solve, but difficult for you. Take the square root of 964,324, for example. A quick line of code produces the value 982, a number Processing computed in less than a millisecond. There are, on the other hand, problems that are incredibly simple for you or me to solve, but not so easy for a computer. Show any toddler a picture of a kitten or puppy and they’ll be able to tell you very quickly which one is which. Say hello and shake my hand one morning and you should be able to pick me out of a crowd of people the next day. But need a machine to perform one of these tasks? People have already spent careers researching and implementing complex solutions.

The most common application of neural networks in computing today is to perform one of these “easy for a human, difficult for a machine” tasks, often referred to as pattern classification. Applications range from optical character recognition (turning printed or handwritten scans into digital text) to facial recognition. We don’t have the time or need to use some of these more elaborate artificial intelligence algorithms here, but if you are interested in researching neural networks, I’d recommend the books Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig and AI for Game Developers by David M. Bourg and Glenn Seemann.

In this chapter, we’ll instead begin with a conceptual overview of the properties and features of neural networks and build the simplest possible example of one (a network that consists of a single neuron). Afterwards, we’ll examine strategies for building a “Brain” object that can be inserted into our Vehicle class and used to determine steering. Finally, we’ll also look at techniques for visualizing and animating a network of neurons.
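As a preview of what a single-neuron network means computationally, here is a minimal perceptron sketch in plain Java, assuming the standard feed-forward rule: a weighted sum of inputs passed through a step activation. The class and method names are illustrative:

```java
public class Perceptron {
    final double[] weights; // one weight per input connection

    Perceptron(double[] weights) { this.weights = weights; }

    // Sum each input times its corresponding weight, then activate.
    int feedforward(double[] inputs) {
        double sum = 0;
        for (int i = 0; i < weights.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return activate(sum);
    }

    // Step activation: output +1 if the sum is positive, -1 otherwise.
    int activate(double sum) {
        return sum > 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        Perceptron p = new Perceptron(new double[] {0.5, -1.0});
        // 1.0 * 0.5 + 0.2 * (-1.0) = 0.3 > 0, so the neuron outputs 1
        System.out.println(p.feedforward(new double[] {1.0, 0.2})); // 1
    }
}
```

Learning, which we'll get to in the chapter, amounts to nudging those weights whenever the output is wrong.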

Source: NetworkAnimation.zip