End-course project 3

Date: 25th of May 2013
Participating group members: Morten D. Bech and Thomas Winding
Activity duration: 5.5 hours

Goal
To further develop the behaviors of the autonomous robots and build one or more of them.

Plan
Modify the idle behavior of the robot so that it drives around randomly. Add the remaining behaviors and try to make them behave as we imagined they would.

Progress
We implemented the normal, or “idle”, behavior to have the robot move slowly around in a random manner. Its takeControl() method always returns a motivation value of 20, which should generally be the lowest value returned, as this is the behavior we want when nothing else is happening with the robot. The action() method of the behavior can be seen below.

public void action() {
	suppressed = false;
 
	// Pick one of two random moves: drive a short random distance or turn by a random angle
	int action = random.nextInt(2)+1;
	if (action == 1) {
		// drive 5-24 units, randomly forwards or backwards
		int direction = Math.random() > 0.5 ? -1 : 1;
		AutAnimal.pilot.travel(direction*(random.nextInt(20)+5), true);
	}
	if (action == 2) {
		// turn 10-49 degrees, randomly left or right
		int turn = Math.random() > 0.5 ? -1 : 1;
		int angle = 10 + random.nextInt(40);
		AutAnimal.pilot.rotate(turn * angle, true);
	}
 
	// wait until the move finishes or the behavior is suppressed
	while (!suppressed && AutAnimal.pilot.isMoving()) {
		Thread.yield();
	}
	AutAnimal.pilot.stop();
}
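
For completeness, a minimal sketch of the corresponding takeControl() method, given that the idle behavior always returns a motivation value of 20:

// Sketch: the idle behavior's motivation is a constant, low baseline value
public int takeControl() {
	return 20;
}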

Subsequently we added the remaining behavior classes and started implementing their functionality. The behavior that avoids other autonomous robots and the edges of the arena was fairly straightforward, as the functionality is much akin to avoiding the player robot; however, the robot does not need to move as quickly here, since it is not “scared” of the other robots.

As we have not quite decided what should actually happen when the robot is hit by the player robot, it has for now been set to spin in place for three seconds when it is hit; a rough sketch of this placeholder action is shown below.
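
A minimal sketch of such a placeholder action, assuming the same pilot and suppression pattern as the other behaviors (the rotation angle is an arbitrary choice):

public void action() {
	suppressed = false;
	long end = System.currentTimeMillis() + 3000; // spin for three seconds
	AutAnimal.pilot.rotate(3600, true);           // start a long in-place rotation, returning immediately
	while (!suppressed && System.currentTimeMillis() < end) {
		Thread.yield();
	}
	AutAnimal.pilot.stop();
}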

The hunger behavior of the robot is not quite finalized either. So far we imagine it will function as follows: the hunger of the robot increases over time; when the robot reaches some level of hunger it will start moving forward at a medium speed, still avoiding obstacles and the player robot. When it finds some food it will grab it with its “arms” and try to run away with it. A rough sketch of how such a motivation could be computed is shown below.
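
A minimal sketch of the hunger motivation we have in mind, where hunger simply grows with the time since the robot last ate; the lastFedTime field, the growth rate and the cap are all placeholder assumptions:

// Sketch: hunger grows with time since the robot last "ate"; all constants are placeholders
public int takeControl() {
	long minutesSinceFood = (System.currentTimeMillis() - lastFedTime) / 60000;
	int hunger = (int) Math.min(minutesSinceFood * 10, 150); // capped below the "scared" motivation of 200
	return hunger >= 40 ? hunger : 0;                        // only compete for control once hungry enough
}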

The robot will identify food with a color sensor mounted on the front of the robot, pointing downwards. The “food” will be a block of some sort with a green base that extends out from the block itself, enabling the robot to see it.

The extended base of the food enables the robot to detect it with the color sensor

"Artists" rendition of the food block

“Artists” rendition of the food block
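
A minimal sketch of how the downward-facing color sensor could detect the green base in leJOS; the sensor port (S2) is an assumption (uses lejos.nxt.ColorSensor, lejos.nxt.SensorPort and lejos.robotics.Color):

// Sketch: returns true when the downward-facing color sensor sees the green food base
private final ColorSensor colorSensor = new ColorSensor(SensorPort.S2); // port S2 is an assumption

public boolean seesFood() {
	return colorSensor.getColorID() == Color.GREEN;
}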

Building of the autonomous robot
The robot needs to have three sensors in front, namely the IR sensor, the ultrasonic sensor and the color sensor. Furthermore it should have a grabbing arm for grabbing the food as described above. The robot also needs some kind of pressure plate on top to detect when it has been hit by the player. To make hitting this plate easier we will try to keep the profile of the robot as low as possible.

In the robot's first incarnation the ultrasonic and IR sensors were placed in front of the color sensor, but this gave us a problem with the grabbing arm, which would have had to be very long if we wanted to move it into an upright position. This design was therefore abandoned and the two sensors were placed on top of the color sensor instead. The ultrasonic sensor was placed as the topmost sensor and angled slightly downwards so it can detect the edge of the arena. Right behind the ultrasonic sensor sits the pressure plate, which is connected to a touch sensor.

Due to our construction of the robot it is relatively hard to reach the buttons on the NXT, and connecting the cables to the sensors and the NXT is going to be a tight fit as well; we have not mounted them yet. A solution to not being able to press the buttons could be to have the program autorun when the NXT is turned on and then use the pressure plate on top of the robot as a start button (a small sketch of this idea follows the pictures below). Pictured below is our construction of the robot.

Autonomous Robot

Pictures of the autonomous robot for the game, complete with all the necessary sensors and motors.
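
Returning to the start-button idea: a minimal sketch of how the program could wait for the pressure plate's touch sensor before starting the behaviors, assuming the plate is wired to port S4 (uses lejos.nxt.TouchSensor and lejos.nxt.SensorPort):

// Sketch: wait for the pressure plate to be pressed before starting the arbitrator
TouchSensor plate = new TouchSensor(SensorPort.S4); // port S4 is an assumption
while (!plate.isPressed()) {
	Thread.yield();
}
// ... start the arbitrator here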

End-course project 2

Date: 23rd of May 2013
Participating group members: Kenneth Baagøe, Morten D. Bech and Thomas Winding
Activity duration: 7 hours

Goal
Further refine the player robot described in the previous entry[9] and possibly start constructing the autonomous robots for the game.

Plan
Identify the bug that we noticed last time that caused problems with attacking and fix it. Add the video feed from the player robot directly into the Java controller application that runs on the PC. Possibly start developing the program for the autonomous robots and building them.

Progress
Controlling the robot
The problem with the robot being unable to attack when steering left while moving forward or steering right while moving backwards stumped us last time. The solution, however, was not very complicated and can be explained with the term key blocking/ghosting[1][2]. Key blocking basically means that a keyboard can only handle three keypresses at a time, and even though that was what we were trying to do, it is only certain combinations it can handle, which we inadvertently demonstrated: when pressing e.g. the left arrow and the up arrow, the channel that the space bar was trying to send on was blocked. We solved the problem by moving the controls from the arrow keys to the WASD keys, which did not cause blocking in the keyboard. A rough sketch of such a key mapping on the PC side is shown below.
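
A rough sketch of how the WASD mapping could look on the PC side with a Swing KeyListener; the sendCommand() method is a hypothetical stand-in for whatever command protocol the controller application actually uses:

import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

// Sketch: map WASD (and space for attacking) to robot commands.
// sendCommand() is a hypothetical stand-in for the controller's actual protocol.
public class WasdListener extends KeyAdapter {
	public void keyPressed(KeyEvent e) {
		switch (e.getKeyCode()) {
			case KeyEvent.VK_W: sendCommand("FORWARD"); break;
			case KeyEvent.VK_S: sendCommand("BACKWARD"); break;
			case KeyEvent.VK_A: sendCommand("LEFT"); break;
			case KeyEvent.VK_D: sendCommand("RIGHT"); break;
			case KeyEvent.VK_SPACE: sendCommand("ATTACK"); break;
		}
	}

	public void keyReleased(KeyEvent e) {
		sendCommand("STOP"); // simplistic: any key release stops the current movement
	}

	private void sendCommand(String command) {
		// hypothetical: forward the command to the NXT over Bluetooth
	}
}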

Video feed in the Java application
We tried to add the video feed from an Android smartphone directly into the Java controller application that runs on the PC. We were able to get a feed running using the media functionality of JavaFX, and we also tried the VLCj library[3], which utilizes the streaming functionality of the VLC media player[4]. On the smartphone we used the applications IP Webcam[5] and Spydroid[6].

As mentioned, we were able to get a feed running using the JavaFX library in combination with the IP Webcam application on the smartphone, see the code below. We had to set up a LAN as the application is only able to stream on a local network. The problem we encountered with this solution was a very noticeable delay on the feed, 3-4 seconds. Unfortunately we were not able to figure out a way to circumvent this delay, and thus the solution was not usable, as a delay that long made it too hard to control the robot properly.

Video feed in Java:

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.media.Media;
import javafx.scene.media.MediaView;
import javafx.stage.Stage;
 
public class MediaPlayer extends Application {
	private static final String MEDIA_URL = "http://192.168.10.101:8080";
 
	public void start(Stage primaryStage) throws Exception {
		primaryStage.setTitle("JavaFX Media Player");
		Group root = new Group();
		Scene scene = new Scene(root, 640, 480);
 
		Media media = new Media(MEDIA_URL);
		javafx.scene.media.MediaPlayer mediaPlayer = new javafx.scene.media.MediaPlayer(media);
		mediaPlayer.setAutoPlay(true);
 
		MediaView mediaView = new MediaView(mediaPlayer);
		((Group)scene.getRoot()).getChildren().add(mediaView);
 
		primaryStage.setScene(scene);
		primaryStage.show();
	}
 
	public static void main(String[] args) {
		launch(args);
	}
}

We also tried using the Spydroid application in conjunction with the VLCj library but were unable to get the video feed running in Java: it would connect, but there would be no picture. When we opened the stream in a browser instead, we saw that this application had a delay comparable to the one described above. Thus we chose not to pursue this application any further.

Autonomous robots
We started developing the program for the autonomous robots and decided that they should be motivation based[8]. We had a good idea of how the robots should function and thus we needed to translate that into different behaviors for the robots. The behaviors included:
– When the robot was “idle”
– When the robot was hungry
– When the robot had been hit by the player robot
– When the robot saw the player robot
– When the robot saw an obstacle / other autonomous robot

As we had worked with motivation-based robots before[7], we were able to draw inspiration from the program we developed then and reuse its arbitrator and behavior interface, which boils down to the following.
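
Sketched, the motivation-based behavior interface looks like this (the actual files are the ones referenced in [7]; takeControl() returns an integer motivation rather than a boolean):

// Sketch of the motivation-based behavior interface we reuse
public interface Behavior {
	int takeControl();   // how strongly this behavior wants control right now
	void action();       // what to do when granted control
	void suppress();     // request that a running action stops as soon as possible
}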

We added a simple idle behavior that rotates the robot in place, and added the behavior that makes the robot scared when it sees the player robot, so we could test the IR receiver which we want to use to detect the player robot. Its takeControl() method returns a high value when the robot is relatively close to the player robot and otherwise returns zero.

public int takeControl() {
	// high motivation when the IR reading indicates the player robot is relatively close
	if (ir.getSensorValue(3) > 160) return 200;
	return 0;
}

As the robot moves slowly during its idle behavior, we wanted it to look like it gets scared when it sees the player robot. We therefore made it quickly back off a random distance when it sees the player, followed by a turn either left or right, as can be seen in the code below.

public void action() {
	suppressed = false;
	// switch to the fast movement speeds while fleeing
	AutAnimal.fastRotation();
	AutAnimal.fastTravel();
	int turn = Math.random() > 0.5 ? -1 : 1;
	int angle = 100 + random.nextInt(60);
	// back off a random distance, then turn left or right by a random angle
	AutAnimal.pilot.travel(-(20+random.nextInt(30)), true);
	while (!suppressed && AutAnimal.pilot.isMoving()) Thread.yield();
	AutAnimal.pilot.rotate(-turn * angle, true);
	while (!suppressed && AutAnimal.pilot.isMoving()) Thread.yield();
	// back to the slow idle speeds
	AutAnimal.slowRotation();
	AutAnimal.slowTravel();
}

Backlog
As mentioned, we were unfortunately not able to reach a usable solution for adding the camera feed to the controller application and decided not to spend any more time on it for the time being. We would like to return to this problem at a later stage if we have the time to do so.

We have started implementing the behaviors and will continue working on them next time.

Finally, if we decide that we really want to use the arrow keys on the keyboard to control the robot, we might need to get hold of a mechanical keyboard.

References
[1] http://www.braille2000.com/brl2000/KeyboardReq.htm
[2] http://en.wikipedia.org/wiki/Rollover_%28key%29
[3] http://code.google.com/p/vlcj/
[4] http://www.videolan.org/vlc/
[5] http://ip-webcam.appspot.com/
[6] http://code.google.com/p/spydroid-ipcamera/
[7] Lesson 10, http://bech.in/?p=362
[8] Thiemo Krink (in prep.), Motivation Networks – A Biological Model for Autonomous Agent Control
[9] End-course project 1, http://bech.in/?p=427

End-course project 0

Date: 9th of May 2013
Participating group members: Kenneth Baagøe, Morten D. Bech and Thomas Winding
Activity duration: 6 hours

Prelude
For the exam of the course Digital Control we have to make a project based on some of the topics presented during the course.

Goal
The goal of this lab session is to decide on a project we would like to do and describe it in detail with regards to hardware, software and possibly difficult problems we can foresee in the project at this point.

Plan
First we will brainstorm different ideas, then select three of them and describe these in a bit more detail, and finally we will select the one we will work on.

Ideas
Mapping (Inspired by Maja Mataric[1])
Mapping out the layout of a room and navigating it without bumping into objects that might block the robot's path. Might need to be in a fixed environment, as a random environment could be too demanding.

RC animal / autonomous animals
Consists of a single remote controlled “animal” and a number of autonomous “animals”. The autonomous robots try to steal food from the player (RC animal) while the player tries to incapacitate them to stop them. Additionally, the player animal is controlled through a tablet of some sort which also provides a point of view from the animal.

WRO Senior High School Regular Challenge
Work on the challenge and come up with one or a couple of possible designs for a robot that can complete the challenge. Also compare software possibilities between NXT-G and leJOS, because the only software allowed at the international final is NXT-G. The challenge can be found here.

Flock of robots
A flock of robots that are able to propagate updates to other robots. Basically, if a single robot of the flock receives an update it can then send the update to another robot(s) and they can, in turn, send it to yet another robot.

Zombie robots
Builds on the flock of robots. One robot could get “infected” with a zombie virus and start infecting other robots. The healthy robots try to avoid the zombies.

Evolution/self-learning
A robot that is able to evolve. Could be realized with a line-following PID controller-based robot that would adjust the PID values automatically and compare the run times, thus finding the optimal values for the PID controller. Would probably also need a way to find its way home to the start position in case the PID values throw it off the track.

Soccer/penalty-kick game[2]
A game where the player controls the robot that kicks the ball while an autonomous robot acts as goalkeeper, or vice versa.

Top 3
We have selected the following three ideas as our top 3:

  • Mapping
  • RC animal / autonomous animals
  • Zombie robots

An addition that can probably be used in most of our ideas is a holonomic drive, which allows a robot to drive in all directions without turning; we think this could be fun to incorporate. We will present the three ideas in a little more detail and from a technical perspective.

Mapping
Hardware/physical
For the mapping robot we would need the following hardware:

  • One NXT brick
  • Sensors for detection (Ultrasonic sensor and color/light sensor)
  • Landmarks
  • Environment
  • Movable obstacles

The environment will be a road-like environment, made up of several squares (see http://legolab.cs.au.dk/DigitalControl.dir/city.jpg for an example), which we can most likely make/print ourselves, while obstacles can be almost anything that will block the robot. Landmarks can e.g. be colored lines at the edge of a square, so the robot knows when it enters a new square.

Software
On the software side we will use leJOS to program the robot in Java. Controlling the robot will be handled with a Navigator; a minimal sketch of this is shown below. For an initial run, where the robot discovers what the environment looks like, we will need some sort of discovery protocol. Furthermore, the robot should be able to plan the shortest possible path when it needs to go somewhere, and it should also be able to recalculate a new path if the shortest path is blocked.
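
A minimal sketch of what the Navigator-based control could look like in leJOS; the wheel diameter, track width and motor ports are assumptions:

import lejos.nxt.Motor;
import lejos.robotics.navigation.DifferentialPilot;
import lejos.robotics.navigation.Navigator;

// Sketch: drive to a coordinate with the leJOS Navigator.
// Wheel diameter (5.6), track width (12.0) and motor ports are assumptions.
public class MappingDriveSketch {
	public static void main(String[] args) {
		DifferentialPilot pilot = new DifferentialPilot(5.6, 12.0, Motor.B, Motor.C);
		Navigator navigator = new Navigator(pilot);
		navigator.goTo(50, 0); // drive to (50, 0) in the robot's coordinate system
		while (navigator.isMoving()) Thread.yield(); // wait until the target is reached
	}
}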

Expected difficulties
We believe that the implementation of the planning part of the robot will be somewhat difficult to write.

Expectations
We believe we will be able to fully implement this idea and have the robot working as described in the idea section.

Components needed for the Mapping project.

RC animal/autonomous animals
Hardware
  • 1 NXT for the RC robot + an IR emitter
  • 4 NXT for the animals + IR sensors
  • 1 laptop/PC/tablet
  • 1 smartphone
  • Environment: We’ll need an arena in which the event will take place. The driver shall not be able to see this though.

Software
We’ll need streaming software for a live video stream from the RC robot (using Skype could be a solution in this particular situation). To be able to manage the robot we’ll also have to find or code some protocol(s) that can be used. The autonomous animals will be motivation based.

Expected difficulties
Creating a low-latency live video stream will be a challenge, and we might have to create some sort of overlay using Skype instead of coding everything from scratch. Furthermore, we consider the latency from the controller (laptop/PC/tablet) to the NXT a challenge as well.

Expectations
We’re expecting the robot to be controlled remotely but not necessarily controlled from a tablet. We also expect the animals to have a behavior based “mind”.

Components needed for the RC animal project.

Attack of the Zombies!
As explained earlier, a group of robots “live” in some environment where one of the robots gets infected by a virus and then tries to infect the other robots. We imagine that the environment is some flat surface surrounded by a wall to keep the robots in a confined space; the robots must therefore have some way of sensing the wall. What makes a robot an infected robot, and how does it infect other robots? One solution is to mount an IR emitter and an IR sensor on all the robots; the infected ones turn on their emitter, which gives the non-infected ones a chance to survive. The placement of the IR sensor then becomes crucial: is it placed as on a mammal or as on a predator? Furthermore, how does a zombie infect another robot? It could be done in several ways; one is to communicate it through Bluetooth every time a robot hits another robot, but this might conflict with the wall detection.

Hardware
For this project we are going to need approximately 5 robots, a flat surface with an edge around it, and the same number of IR emitters and IR sensors as robots. The IR emitter could be something we construct ourselves, which includes some LEDs as well to indicate to the viewer which robots are infected and which aren’t.

Software
The software is going to need two states that it can change between; this might be implemented as a Strategy pattern or as a simple state object, as sketched below. We also need some way of changing from one state to the other and back again for resetting, which could be done via Bluetooth, and we need to be able to infect the first zombie. Finally, the software for the non-infected robots could be motivation based.
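
A minimal sketch of the simple state object variant; the infection trigger itself (a Bluetooth message or a bump) is left out:

// Sketch: a minimal state object for switching between healthy and zombie behavior
public class InfectionState {
	private volatile boolean zombie = false;

	public void infect() { zombie = true; }   // e.g. triggered over Bluetooth or by a bump
	public void reset()  { zombie = false; }  // reset the game
	public boolean isZombie() { return zombie; }
}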

Expected difficulties
One of the concerns we have for this project is how we indicate that a robot is infected, both to the other robots and to the viewer; if we were to construct our own emitter it could take up a lot of time. Secondly, we are concerned about the rate at which the infection happens: we don’t want the rate to be too fast nor too slow, because the game should neither be too boring nor over too quickly.

Expectations
We expect that we would be able to complete the project in full, but whether we can get the infection rate just right is another question.

Components needed for the Attack of the Zombies project.

And the winner is
We have selected RC animal as the project we would like to do for our end-course project; however, we won’t commit to it 100% until we have had a chance to talk to Ole. Our reason for selecting it is that we found it the most fun to do of the three, while all of them could be interesting. Furthermore, we have made a small timetable, which can be seen below.

Timetable
So far we have come up with the following timetable for the project:

Tuesday, 14th of May: Talk to Ole regarding our choice of project.
Thursday, 16th of May: Adjust the project depending on the feedback we get from Ole; work on a remote-controlled robot.
Tuesday, 21st of May: Complete the remote control if not already done.
Thursday, 23rd of May: Construction of environment and robots.

References
[1] Maja J. Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots, in IEEE Transactions on Robotics and Automation, 8(3), Jun 1992, 304-312.
[2] World Robot Olympiad Gen II Football, http://wro2013.org/challenges/challenges-football

Lesson 10

Date: 2nd of May 2013
Participating group members: Kenneth Baagøe, Morten D. Bech and Thomas Winding
Activity duration: 5 hours

Goal
The goal of this lesson is to test the BumperCar example provided by leJOS according to the lesson plan, and furthermore to expand upon the BumperCar example, as also described in the lesson plan.

Plan
We will use the basic robot from the previous lesson and add an ultrasonic sensor and a touch sensor to it. We will then run the BumperCar example and observe how it functions and describe that. Afterwards we will modify the program according to modifications described in the lesson plan and observe how they work.

Progress
We added the ultrasonic sensor and the touch sensor to the robot and also added a bumper to increase the detection range of the touch sensor (pictured below).

BumperCar

Running the basic BumperCar example we saw that the robot would back off and turn a bit if the ultrasonic sensor was close to an obstacle and the same thing happened if the touch sensor was pressed. We also tried holding down the touch sensor and this resulted in the robot repeatedly backing off and turning slightly. The same behaviour was observed if we held an object close to the ultrasonic sensor. We did, however, have a slight problem with the ultrasonic sensor detecting even a very low obstacle, so to actually test the functionality of the touch sensor we had to temporarily remove the obstacle detection using the ultrasonic sensor.

Adding the Exit behavior
To add the Exit behavior, described in the lesson plan, we added an additional class in the BumperCar class, aptly named Exit, which, like the other classes, implemented the Behavior interface.

The implementation of the class looked like the following:

Implementation of the Exit behavior
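
The screenshot is not reproduced here, but the idea of the Exit behavior is essentially the following sketch, using the lesson's Behavior interface (lejos.subsumption.Behavior and lejos.nxt.Button):

// Sketch of the Exit behavior: take control when ESCAPE is pressed and end the program
class Exit implements Behavior {
	public boolean takeControl() {
		return Button.ESCAPE.isDown();
	}

	public void action() {
		System.exit(0);
	}

	public void suppress() {
		// nothing to suppress; the program terminates in action()
	}
}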

We also had to modify the DetectWall behavior as the suppress method was not implemented properly: it simply did nothing. We went about the suppression the same way it was done in the DriveForward behavior: we added a _suppressed boolean and implemented the suppress() method to set this boolean to true. Furthermore, the action() method of the DetectWall behavior did not properly check for suppression either, so we also modified it to return immediately from both rotate() calls and added a loop that yields its timeslices until either the rotation is completed or the behavior is suppressed.

Our modification of the DetectWall action
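
In sketch form, the suppressible DetectWall looks roughly like this; the motors and rotation angles follow the original BumperCar example only approximately:

// Sketch of the suppressible DetectWall action: start the rotations with
// immediateReturn = true and yield until they finish or the behavior is suppressed.
private boolean _suppressed = false;

public void action() {
	_suppressed = false;
	Motor.A.rotate(-180, true); // back off, returning immediately
	Motor.C.rotate(-360, true); // rotate C farther to make the turn
	while (!_suppressed && (Motor.A.isMoving() || Motor.C.isMoving())) {
		Thread.yield();
	}
	Motor.A.stop();
	Motor.C.stop();
}

public void suppress() {
	_suppressed = true;
}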

Pressing the escape button seems to exit the program immediately. However, from a previous lesson we are aware that the ultrasonic sensor has a short delay of about 30 ms when pinging; this will, of course, block the entire program, and it is most likely because the delay is relatively short that we are not able to observe it affecting the program.

Functionality of the Arbitrator class
When the triggering condition of the DetectWall behavior is true, the Arbitrator class does not call the takeControl method of the DriveForward behavior. This can be seen in the code snippet below from the Arbitrator class: it starts from the top priority, checking if the takeControl() method returns true; if it does not, it moves on to the second highest priority and so on. If a takeControl() method does return true, however, the loop is exited, as there is no need to check the lower priorities.

The loop that determines the highest priority that wants control of the robot
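
The snippet in the figure is from the leJOS Arbitrator itself; simplified, the selection loop amounts to this sketch:

// Simplified sketch of the Arbitrator's selection loop: scan from the highest
// priority (highest index) downwards and stop at the first behavior that wants control.
int highestPriority = -1;
for (int i = _behaviors.length - 1; i >= 0; i--) {
	if (_behaviors[i].takeControl()) {
		highestPriority = i;
		break; // no need to ask the lower-priority behaviors
	}
}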

Sampling the ultrasonic sensor in an isolated thread
To alleviate the problem of the ultrasonic sensor delaying the program every time it is sampled, we moved the sampling to a thread that updates a variable, which the takeControl() method then evaluates.

Implementation of a thread to remove the delay from the sonic sensor
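
The figure shows our implementation; the idea is a sampler thread of roughly this shape (the sensor port, the 20 ms pause, the 25 cm threshold and the touch field are assumptions here):

// Sketch: sample the ultrasonic sensor in its own thread so takeControl() never blocks
private final UltrasonicSensor sonic = new UltrasonicSensor(SensorPort.S3); // port is an assumption
private volatile int distance = 255;

public DetectWall() {
	Thread sampler = new Thread() {
		public void run() {
			while (true) {
				distance = sonic.getDistance(); // the ~30 ms ping happens here, not in takeControl()
				try { Thread.sleep(20); } catch (InterruptedException e) { /* ignore */ }
			}
		}
	};
	sampler.start();
}

public boolean takeControl() {
	return touch.isPressed() || distance < 25; // touch is the behavior's TouchSensor
}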

Further modifying the DetectWall behavior
To make the robot move backwards for one second before turning, we employed the same basic principles as when we made the DetectWall behavior suppressible, as described above: by way of another loop, the robot moves backwards until either a second has passed or the behavior is suppressed, see the code below.

The DetectWall action that moves backwards for one second before turning
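
In sketch form, the one-second backward move looks like this (the exact motor calls are approximate):

// Sketch: back up for one second (or until suppressed), then turn as before
public void action() {
	_suppressed = false;
	Motor.A.backward();
	Motor.C.backward();
	long end = System.currentTimeMillis() + 1000;
	while (!_suppressed && System.currentTimeMillis() < end) {
		Thread.yield();
	}
	Motor.A.stop();
	Motor.C.stop();
	// ... then perform the turn, also checking _suppressed
}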

We were also asked to modify the code to make it possible to interrupt the DetectWall action if the touch sensor was pressed or the ultrasonic sensor detected an object; this interruption should then start the action of the behavior again. We were not able to make this part work: we tried adding a boolean to tell whether the action was already running, and if it was, the program would stop the motors and start the action over. Unfortunately we couldn’t come up with a working solution for this; however, when we implemented motivation, which is described in the next section, the interruption worked without any problems. Again, we are not exactly sure why.

Motivation
To make the robot motivation based we used the Behavior and Arbitrator implementations provided by Ole Caprani[1][2]. We then changed our modified BumperCar to return integers from the different takeControl() methods instead of booleans. DriveForward was given a low value as it is still a low-priority behavior, while Exit was given a high value as it should still take precedence over other behaviors. To make DetectWall return a somewhat lower value when it was already active, we added a boolean that is set to true when the action method runs and set to false when the method finishes. Thus the takeControl() method is able to differentiate between the two states.

takeControl() for the motivation-based DetectWall behavior
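
The figure shows our implementation; in sketch form the idea is the following (the motivation values and the 25 cm threshold are examples, and touch, distance and active are the fields described above):

// Sketch: return a somewhat lower motivation while the action is already running.
// The active flag is set to true at the start of action() and back to false when it finishes.
public int takeControl() {
	if (touch.isPressed() || distance < 25) {
		return active ? 120 : 180;
	}
	return 0;
}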

Conclusion
We have followed the lesson plan almost to its full extent; the only catch was that we had trouble implementing the DetectWall behaviour so that it could be interrupted while turning if e.g. the touch sensor was activated. However, the change to motivation-based decision making fixed this problem, as we were able to dynamically return higher values if another action was in progress.

Class-files
BumperCar – Arbitrator
BumperCar – Motivation based
Behavior – Motivation based
Arbitrator – Motivation based

References
[1] http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson10.dir/Behavior.java
[2] http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson10.dir/Arbitrator.java

Lesson 6

Date: 14th of March 2013
Participating group members: Thomas Winding, Kenneth Baagøe and Morten Djernæs Bech.
Activity duration: x hours

Overall goal
To build various Braitenberg vehicles, using Lego NXT, and observe what behaviours they exhibit when e.g. excitatory and inhibitory connections are used.

Figure 1

Overall plan
To achieve the goal mentioned above, we will be drawing inspiration from the lesson description and Tom Dean’s notes[1][2].

Braitenberg vehicle 1
Goal
To make vehicle 1 as pictured in Figure 1.

Plan
As a sensor for the Braitenberg vehicle we will be using a sound sensor. First we will try mapping the measured values to a power between 0 and 100. Afterwards we will try mapping the values to a power between -100 and 100. Lastly we will try an inhibitory connection instead of the excitatory one used in the first two experiments.

Results
First experiment: Using raw values from the sensor we map to power in a simple way: since the sensor measures values between 0-1023, we divide the value by 10 to get an approximate value between 0-100 and round down/up in case the value falls outside the 0-100 range (in this case the value can only end up above 100, since the lowest measurement is 0). Using these values we get a robot that moves forward when it registers some sound. The sound has to be of some volume though, since the motors do not run unless they get a power of somewhere around 50-60 or above.

Second experiment: We use the same procedure as above, except that to get the motor power we do the following calculation on the sensor value: (soundValue/5)-100, which gives values in roughly the range [-100, 100]. The resulting behaviour in the robot is that it moves forward when it registers sound and backwards when it doesn’t.

Inhibitory connection: We achieve the inhibitory connection simply by inverting the sensor value compared to the one used in the above experiments. The resulting behaviour in the robot is the opposite of the above results.
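
In code, the three mappings amount to something like this sketch (the variable names are ours and the raw sensor read is left abstract):

// Sketch of the three mappings from a raw sound reading (0-1023) to motor power
int soundValue = readRawSoundValue(); // hypothetical helper returning the raw 0-1023 reading

// Experiment 1: excitatory connection, power in [0, 100]
int power1 = Math.min(soundValue / 10, 100);

// Experiment 2: excitatory connection, power in roughly [-100, 100]
int power2 = (soundValue / 5) - 100;

// Inhibitory connection: invert the reading before mapping
int power3 = Math.min((1023 - soundValue) / 10, 100);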

Braitenberg vehicle 2
Goal
To make vehicles 2a and 2b as pictured in Figure 1 and run them with both inhibitory and excitatory connections.

Plan
We will mount two light sensors on the vehicle and map motor power to the wheels. This will be done in two ways, as the figures (2a and 2b) in Figure 1 show: we will try mapping the sensor value to power on the wheel placed diagonally across from the sensor, or to the wheel on the same side as the sensor.

Results
Running the robot with the sensors controlling either the wheel diagonally across from them or the wheel on their own side results in two behaviours: if they control the wheel diagonally across, the robot moves towards light, and if the sensors control the wheel on their own side, the robot moves away from light.

Braitenberg vehicle 3
Goal
Build vehicle 3 which is pictured in Figure 2. The vehicle should be able to move between a light source and a sound source using its different sensors.

Figure 2

Plan
To detect the light source and sound source we will use sound and light sensors.

Results
We haven’t written the code for the robot yet, so it is going to be the subject of another blog post soon.

References
[1] Tom Dean, Introduction to Machina Speculatrix and Braitenberg Vehicles
[2] Tom Dean, Notes on construction of Braitenberg’s Vehicles, Chapters 1-5 of Braitenberg’s book

Lesson 5

Date: 7th/12th of March 2013
Participating group members Thursday: Thomas Winding and Kenneth Baagøe.
Participating group members Tuesday: Thomas Winding, Kenneth Baagøe and Morten Djernæs Bech.
Activity duration:
6 hours Thursday 7th of March 2013
3 hours Tuesday 12th of March 2013

Overall goal
The overall goal of this lesson is building and programming a Segway-inspired self-balancing NXT robot.

Overall plan
To achieve our goal we will use the information provided here:

Self-balancing robot with a light sensor
Goal
To build a self-balancing robot that will adjust its balance according to measurements from a light sensor.

Plan
We will build a robot inspired by Philippe Hurbain[1] and run the program made by Brian Bagnall[2] on it. Furthermore we will try to adhere to the guidelines that Philippe Hurbain proposes and observe whether that helps the operation of the robot.

Result
The robot tips over very easily, even when we tried in a dark room on a non-uniform surface as suggested by Philippe Hurbain. There are two major influences on this: Philippe Hurbain suggests starting the program when the robot is perfectly balanced, and attaining a perfect balance is hard to do when simply starting the program by hand; even pushing the enter button on the robot will bring it at least somewhat out of balance and thus skew the measurement that the robot balances by. Besides the aforementioned, the program is also configured for Brian Bagnall’s hardware, and we suspect that there might be subtle differences compared to ours, which could cause problems in the balancing.

Changing the values of the parameters from a PC
Goal
Changing the values for the PID controller in the program mentioned above, from a PC via a GUI.

Plan
Drawing inspiration from the provided BTcontrolledCar and PCcarController programs, we will program a GUI that is able to take input from a user and supply these values to the PID controller.

Results
Using the GUI to provide new values for the program over a Bluetooth connection works fine. There are, however, some features we had hoped to implement but haven’t. One thing we might implement later is the ability to change the different values for the PID controller on the fly, instead of having to restart the program on the robot every time a change is wanted.

Changing the values of the parameters from a PC revisited
Revisiting the shortcomings mentioned above, we have made a program for the robot (BTbalance2.java) that is able to change the kP, kI, kD and scale values on the fly. It operates over Bluetooth and works in tandem with PCcarBalance.java. The default values are the ones provided by Brian Bagnall. To ease calibration of the robot, a three-second grace period was added after pressing the enter button, signaled first with two beeps and then with a beep sequence to mark the start of the balancing. Furthermore, the offset set by the calibration can be changed by pressing the left button (decrease) and the right button (increase). However, when exiting the program via a press on the escape button, the robot prints a large number of exceptions, the cause of which we were unable to locate.

Running this program on the robot and trying different values for the PID controller, we were able to get the robot to balance for 10-20 seconds on a white surface in a room lit with fluorescent light. One of the better runs is demonstrated in the video below (values used were: offset: 542, kP: 53, kI: 2, kD: 36, scale: 18). The fluorescent light did cause some trouble in that the robot would cast a shadow which moved as the robot’s position changed relative to the lights. This meant that it would sometimes receive a very different reading if the robot moved so that the shadow was no longer where the light sensor measured. We remedied this by running the robot in a dark room, where we could sometimes achieve a balancing time of 30-40 seconds.
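
For reference, a rough sketch of the kind of PID update the balancer performs each loop, using the parameter names mentioned above (offset, kP, kI, kD, scale); this is a simplification, not Brian Bagnall's actual code, and applyPower() is a hypothetical helper:

// Sketch of one PID step for the light-sensor balancer; a simplification, not the actual program
int value = light.readNormalizedValue(); // normalized light reading
int error = value - offset;              // offset is the calibrated balance point
integral += error;
int derivative = error - lastError;
lastError = error;

int power = (kP * error + kI * integral + kD * derivative) / scale;
if (power > 100) power = 100;
if (power < -100) power = -100;
applyPower(power); // hypothetical helper that drives both motors forward or backward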

Self-balancing robot with gyro sensor
Goal
Balance the robot, with the same design used in the previous exercises, using a gyro sensor.

Plan
Follow the AnyWay NXT-G code and translate it to Java.

Result
After translating the NXT-G example to Java, the program unfortunately did not function properly as the robot would instantly move forward at full speed when started and thus tip over.
We will continue on this assignment on Thursday 14th of March 2013.

References
[1] http://www.philohome.com/nxtway/nxtway.htm
[2] http://variantpress.com/books/maximum-lego-nxt/