A neural network for Java Lego robots

Learn to program intelligent Lego Mindstorms robots with Java

  1. Class LMbpn, which encapsulates all features of the backpropagation algorithm:

    • Properties, such as these public arrays:

            input []
            hidden []
            output []
            w1 [][]
            w2 [][]
      
      
    • Methods:

            train(...)
            test(...)
      
      

    Note that this is a generic class, so you can use it in any leJOS program or in any other Java program.

    Listing 1. The LMbpn class

     

    /**
     * <p>Title: Lego Mindstorms Neural Networks</p>
     *
     * @author Julio César Sandria Reynoso
     * @version 1.0
     *
     * Created on April 1, 2005, 06:09 PM
     */

    import java.lang.Math;

    /**
     * LMbpn: Lego Mindstorms Back Propagation Network
     */
    class LMbpn {
      public static int data1[][] = {{0,0,0}, {1,1}};
      public static int data2[][] = {{1,0,0}, {1,0}};
      public static int data3[][] = {{0,0,1}, {0,1}};
      public static int data4[][] = {{0,1,0}, {0,0}};

      public static double input[] = {0,0,0,1};
      public static double w1[][] = {{0,0,0}, {0,0,0}, {0,0,0}, {0,0,0}};
      public static double hidden[] = {0,0,1};
      public static double w2[][] = {{0,0}, {0,0}, {0,0}};
      public static double output[] = {0,0};
      public static double delta2[] = {0,0};
      public static double delta1[] = {0,0,0};

      public static int trainedEpochs = 0;

      LMbpn() {
        byte i, j;
        // Initialize weights randomly between 0.1 and 0.9
        for(i=0; i<w1.length; i++)
          for(j=0; j<w1[i].length; j++)
            w1[i][j] = Math.random()*0.8 + 0.1;

        for(i=0; i<w2.length; i++)
          for(j=0; j<w2[i].length; j++)
            w2[i][j] = Math.random()*0.8 + 0.1;
      }

      public static void train(int e) {
        for(int i=0; i<e; i++) {
          // Call method learn with training data
          learn( data1[0], data1[1] );
          learn( data2[0], data2[1] );
          learn( data3[0], data3[1] );
          learn( data4[0], data4[1] );
          trainedEpochs++;
        }
      }

      public static void learn( int inp[], int out[] ) {
        int i, j;
        double sum, out_j;

        // Initialize input units
        for(i=0; i<inp.length; i++)
          input[i] = inp[i];

        // Calculate hidden units
        for(j=0; j<hidden.length-1; j++) {
          sum = 0;
          for(i=0; i<input.length; i++)
            sum = sum + w1[i][j]*input[i];

          hidden[j] = 1 / ( 1 + Math.exp(-sum) );
        }

        // Calculate output units
        for(j=0; j<output.length; j++) {
          sum = 0;
          for(i=0; i<hidden.length; i++)
            sum = sum + w2[i][j]*hidden[i];

          output[j] = 1 / ( 1 + Math.exp(-sum) );
        }

        // Calculate delta2 errors
        for(j=0; j<output.length; j++) {
          if( out[j] == 0 )
            out_j = 0.1;
          else if( out[j] == 1 )
            out_j = 0.9;
          else
            out_j = out[j];

          delta2[j] = output[j]*(1-output[j])*(out_j-output[j]);
        }

        // Calculate delta1 errors
        for(j=0; j<hidden.length; j++) {
          sum = 0;
          for(i=0; i<output.length; i++)
            sum = sum + delta2[i]*w2[j][i];

          delta1[j] = hidden[j]*(1-hidden[j])*sum;
        }

        // Adjust weights w2
        for(i=0; i<hidden.length; i++)
          for(j=0; j<output.length; j++)
            w2[i][j] = w2[i][j] + 0.35*delta2[j]*hidden[i];

        // Adjust weights w1
        for(i=0; i<input.length; i++)
          for(j=0; j<hidden.length; j++)
            w1[i][j] = w1[i][j] + 0.35*delta1[j]*input[i];
      }

      public static void test(int inp[], int out[]) {
        int i, j;
        double sum;

        // Initialize input units
        for(i=0; i<inp.length; i++)
          input[i] = inp[i];

        // Calculate hidden units
        for(j=0; j<hidden.length-1; j++) {
          sum = 0;
          for(i=0; i<input.length; i++)
            sum = sum + w1[i][j]*input[i];

          hidden[j] = 1 / ( 1 + Math.exp(-sum) );
        }

        // Calculate output units
        for(j=0; j<output.length; j++) {
          sum = 0;
          for(i=0; i<hidden.length; i++)
            sum = sum + w2[i][j]*hidden[i];

          output[j] = 1 / ( 1 + Math.exp(-sum) );
        }

        // Assign output to param out[]
        for(i=0; i<output.length; i++)
          if( output[i] >= 0.5 )
            out[i] = 1;
          else
            out[i] = 0;
      }
    }

  2. Class LMbpnDemoRcx, a demo program for the RCX that shows how to use the LMbpn class:

      ...
       main() {
          LMbpn bpn = new LMbpn();
          ...
          bpn.train(...);
          ...
          bpn.test(...);
          ...
       }
    
    

    Listing 2. The LMbpnDemoRcx class

     

    import josx.platform.rcx.LCD;
    import josx.platform.rcx.TextLCD;
    import josx.platform.rcx.Sound;
    import josx.platform.rcx.Sensor;
    import josx.platform.rcx.SensorConstants;
    import josx.platform.rcx.Motor;
    import josx.platform.rcx.Button;

    public class LMbpnDemoRcx {
      public static LMbpn bpn = new LMbpn();

      public static void main(String args[]) throws InterruptedException {
        int i, white;
        int inp[] = {0,0,0};
        int out[] = {0,0};

        Sound.beep();
        TextLCD.print( "Train" );

        // Train bpn 500 epochs, sit down and wait about 5 minutes!
        for(i=0; i<500; i++) {
          bpn.train(1);
          LCD.showNumber( bpn.trainedEpochs );
        }

        Sensor.S1.setTypeAndMode( SensorConstants.SENSOR_TYPE_TOUCH,
                                  SensorConstants.SENSOR_MODE_BOOL );
        Sensor.S2.setTypeAndMode( SensorConstants.SENSOR_TYPE_LIGHT,
                                  SensorConstants.SENSOR_MODE_RAW );
        Sensor.S3.setTypeAndMode( SensorConstants.SENSOR_TYPE_TOUCH,
                                  SensorConstants.SENSOR_MODE_BOOL );

        Sound.twoBeeps();
        Sensor.S2.activate();
        white = Sensor.S2.readRawValue();

        Motor.A.setPower(1);
        Motor.C.setPower(1);

        Sound.twoBeeps();

        while( !Button.PRGM.isPressed() ) {
          LCD.showNumber( Sensor.S2.readRawValue() );

          if( Sensor.S1.readBooleanValue() )
            inp[0] = 1;  // Sensor 1 on
          else
            inp[0] = 0;  // Sensor 1 off

          if( Sensor.S2.readRawValue() > white + 50 )
            inp[1] = 1;  // Sensor 2 over black floor
          else
            inp[1] = 0;  // Sensor 2 over white floor

          if( Sensor.S3.readBooleanValue() )
            inp[2] = 1;  // Sensor 3 on
          else
            inp[2] = 0;  // Sensor 3 off

          bpn.test( inp, out );

          if( out[0] == 1 )
            Motor.A.forward();
          else
            Motor.A.backward();

          if( out[1] == 1 )
            Motor.C.forward();
          else
            Motor.C.backward();

          Thread.sleep( 500 );
        } // while()

        Sensor.S2.passivate();
        Motor.A.stop();
        Motor.C.stop();
        Sound.beep();
      } // main()
    } // class LMbpnDemoRcx

First, you instantiate an object of class LMbpn. Then you train the backpropagation network for a certain number of epochs. Finally, you test the network with the current sensor states.
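Because the LMbpn class has no leJOS-specific dependencies, you can exercise these same three steps in an ordinary desktop Java program before downloading anything to the robot. The minimal sketch below is not one of the article's listings: the class name LMbpnDesktopTest is invented for illustration, and it assumes LMbpn.java is compiled alongside it in the same (default) package. It trains the network and then tests it with the input pattern stored as data2:

    // Hypothetical desktop test harness; assumes LMbpn.java is in the same directory
    public class LMbpnDesktopTest {
      public static void main(String args[]) {
        LMbpn bpn = new LMbpn();   // weights start at random values
        bpn.train(500);            // 500 epochs over the four training patterns

        int inp[] = {1, 0, 0};     // same input pattern as data2
        int out[] = {0, 0};
        bpn.test(inp, out);        // thresholded outputs are written into out[]

        // data2 maps {1,0,0} to {1,0}, so a successful run should print "1 0"
        System.out.println(out[0] + " " + out[1]);
      }
    }

Because this version runs on the PC rather than the RCX, compile it with an ordinary JDK (javac) instead of lejosc.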

Compile and run

To compile the classes and run your program, first install leJOS and set up the required environment variables in a command window. Then compile the classes with the commands:

 lejosc LMbpn.java
lejosc LMbpnDemoRcx.java

Download your program to the RCX using:

 lejos LMbpnDemoRcx

And run your program by pressing the RCX's Run button.

This example trains the backpropagation network for 500 epochs, which takes about five minutes on the RCX. You could instead train the network on a personal computer (which takes less than five seconds), save the calculated weights, and assign those weights in the code instead of initializing them randomly.
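One rough way to do that is sketched below. It is not from the original article: the class name TrainOnPc and the helper method toJava() are invented for illustration, and the sketch assumes LMbpn.java is compiled alongside it on the PC. It trains the network on the desktop and prints w1 and w2 as Java array initializers that you could paste into LMbpn in place of the random initialization in its constructor:

    // Hypothetical PC-side trainer; assumes LMbpn.java is in the same directory
    public class TrainOnPc {
      public static void main(String args[]) {
        new LMbpn();            // the constructor fills w1 and w2 with random values
        LMbpn.train(500);       // 500 epochs finish in a few seconds on a PC

        // Print the trained weights as Java initializers, ready to paste into LMbpn
        System.out.println("w1 = " + toJava(LMbpn.w1) + ";");
        System.out.println("w2 = " + toJava(LMbpn.w2) + ";");
      }

      // Formats a two-dimensional array as a Java array initializer,
      // for example {{0.12, 0.34}, {0.56, 0.78}}
      static String toJava(double m[][]) {
        StringBuffer sb = new StringBuffer("{");
        for(int i=0; i<m.length; i++) {
          sb.append(i == 0 ? "{" : ", {");
          for(int j=0; j<m[i].length; j++) {
            if( j > 0 )
              sb.append(", ");
            sb.append(m[i][j]);
          }
          sb.append("}");
        }
        return sb.append("}").toString();
      }
    }

With the printed values used as the initializers of w1 and w2 (and the random loops removed from the constructor), the RCX program could skip the training loop entirely.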

Conclusion

Lego Mindstorms robots are cool toys used by hobbyists all around the world, and they are well suited to building mobile robots and programming them with artificial intelligence techniques. The backpropagation network described in this article was implemented as a Java class and used to build an intelligent Lego robot that can learn a basic behavior. With some more work, you can program more complex behaviors with this neural network. Finally, LMbpn is a reusable Java class that can be modified and used in any other Java-based system.

Julio César Sandria Reynoso is a software developer at the Instituto de Ecología, A.C., and a professor of computer programming and artificial intelligence at the Universidad de Xalapa, in Xalapa City, Mexico. He holds a master of science degree in artificial intelligence and has worked with Java since 1998.


This story, "A neural network for Java Lego robots" was originally published by JavaWorld.

Copyright © 2005 IDG Communications, Inc.
