Aron Barreira Bordin, Felipe Kutait, Hugo Fernando Waki, Mateus L. Mazziero, Matheus B. de Moraes, Raul Pelegrini Neto, Vinicius F. J. Moretto, Vitor C. Costanzo
Under the supervision of:
André Bicudo and Rene Pegoraro
II - How this project works
Before we start coding, let me explain how it works. We have two separate modules: Android and Arduino.
Arduino - The Arduino has a simple communication layer: it can receive and send messages through Bluetooth. When the Arduino receives a new message, the message is analyzed and an action is executed (using a communication protocol, explained in part #2). This action can be something related to movement (turn right, stop, etc.) or something related to sensors (read the ultrasonic value, for example). Nothing related to the robot's logic is coded on the Arduino; the Arduino only executes actions received via Bluetooth, nothing more. But why? It's simple! If we keep the Arduino code simple and abstract, we'll be able to use the same robot for anything, without recoding it!
Android - All the robot logic is here. When hacking it or creating a new project, we'll write a new Android project and keep the same source code on the Arduino. In this project, Android will constantly read raw camera data to follow the line. Depending on the position of the line, Android will send commands to the connected Arduino. When necessary, Android asks the Arduino for sensor data; for example, every 500 ms we can ask the robot to measure the ultrasonic distance.
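To make the split concrete, here is a minimal sketch of what the command side of such a protocol could look like. This is my own illustration, not the tutorial's actual protocol (which is defined in part #2); the single-character commands and the newline terminator are assumptions.

```java
// Hypothetical command protocol: one character per action, newline-terminated.
// These command letters are illustrative assumptions, not the tutorial's real protocol.
public class RobotProtocol {
    public static final char FORWARD = 'F';
    public static final char STOP = 'S';
    public static final char TURN_LEFT = 'L';
    public static final char TURN_RIGHT = 'R';
    public static final char READ_ULTRASONIC = 'U';

    // Builds the message the Android side would write to the Bluetooth socket.
    // The newline lets the Arduino know where one command ends.
    public static String message(char command) {
        return command + "\n";
    }

    public static void main(String[] args) {
        System.out.print(message(TURN_RIGHT)); // prints "R" followed by a newline
    }
}
```

On the Arduino side, the matching loop would simply read one character at a time and switch on it, which is what keeps the firmware generic and reusable.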
III - Image processing explained
Before we start coding, I just want to make sure you understand how everything works.
We have a 320x240 picture. To get good performance, we'll analyze 15 pictures per second. But we won't need to read every pixel to make a decision.
To detect the line, we'll use a 5x240 slice of the picture. You can choose how many pixels you want to analyze and which ones, but in this tutorial I'll use the pixels from 295x1 to 299x240.
Since the image resolution is 320x240, I'm processing only 5 lines. Check the representation below; it's easier to understand :)
The algorithm will analyze the red area. As you can see, this representation has 5 red blocks; these blocks are used because they make the camera data easier to understand. To identify a line, we'll do the following:
Take a picture;
Select a 5x240 region;
Divide this region in 5 blocks;
Calculate the median value of each block;
Compare the median value with a variable to identify if the block is white or black.
So, we’ll set each block as white or black, represented by 1 or 0.
After analyzing the 5 blocks, we'll have 5 values, forming a binary number between 00000 and 11111, where 00000 is completely black and 11111 is completely white.
I prefer to convert it to a binary value because it's easier to code: each situation can be represented by a number from 0 (binary 00000) to 31 (binary 11111).
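The steps above can be sketched in plain Java. This is a minimal sketch, under a few assumptions: I average the pixels in each block (rather than computing a true median), and `colorDivisor` is the hypothetical white/black threshold that will later come from calibration. The tutorial's actual code may differ.

```java
// Sketch of the block-classification step: 240 luminance values -> 0..31.
public class LineDetector {

    // row: 240 luminance values (0-255) taken from the analyzed 5x240 region.
    // colorDivisor: assumed threshold separating black from white.
    // Returns an integer from 0 (00000, all black) to 31 (11111, all white),
    // leftmost block in the highest bit.
    public static int classify(int[] row, int colorDivisor) {
        int blockSize = row.length / 5; // 240 / 5 = 48 pixels per block
        int result = 0;
        for (int block = 0; block < 5; block++) {
            long sum = 0;
            for (int i = block * blockSize; i < (block + 1) * blockSize; i++) {
                sum += row[i];
            }
            int average = (int) (sum / blockSize);
            int bit = (average > colorDivisor) ? 1 : 0; // 1 = white, 0 = black
            result = (result << 1) | bit;
        }
        return result;
    }

    public static void main(String[] args) {
        int[] row = new int[240];
        java.util.Arrays.fill(row, 200);            // white background
        for (int i = 96; i < 144; i++) row[i] = 20; // black line under the center block
        System.out.println(classify(row, 128));     // prints 27 (binary 11011)
    }
}
```

With a black line under the center block, the result is 11011 (27): the middle bit is 0, the rest are 1.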
If it's still confusing and you want to learn more about the process, feel free to comment here :)
As I told you, we’ll compare the camera value with a number to identify if something is white or black.
This divisor variable will be calculated from the camera, so our app can "learn" to distinguish colors and will be able to work in any environment.
This process is called calibration.
IV - Coding it
I separated the code into a few sections, to make it easier to understand how everything works.
You can follow the tutorial or just copy the source code.
2) User interface
After creating the camera application from the link above, we can work on the UI. The UI is divided into 3 sections:
* a camera preview
* a TextView to log the application
* 5 buttons (Calibrate the camera, Turn flash on/off, Connect to the robot, Stop the robot, Start the robot)
Open your main_activity.xml and replace it with the following code:
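The original layout file isn't reproduced here, but a minimal sketch of it might look like the following. The view ids, dimensions, and ordering are my own assumptions, not the tutorial's actual file:

```xml
<!-- Minimal sketch of a possible main_activity.xml: camera preview on top,
     a log TextView, and a row of 5 buttons. Ids here are assumptions. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <FrameLayout
        android:id="@+id/camera_preview"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <TextView
        android:id="@+id/log_view"
        android:layout_width="match_parent"
        android:layout_height="100dp"
        android:scrollbars="vertical" />

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal">

        <Button android:id="@+id/btn_calibrate" android:text="Calibrate"
            android:layout_width="0dp" android:layout_weight="1"
            android:layout_height="wrap_content" />
        <Button android:id="@+id/btn_flash" android:text="Flash"
            android:layout_width="0dp" android:layout_weight="1"
            android:layout_height="wrap_content" />
        <Button android:id="@+id/btn_connect" android:text="Connect"
            android:layout_width="0dp" android:layout_weight="1"
            android:layout_height="wrap_content" />
        <Button android:id="@+id/btn_stop" android:text="Stop"
            android:layout_width="0dp" android:layout_weight="1"
            android:layout_height="wrap_content" />
        <Button android:id="@+id/btn_start" android:text="Start"
            android:layout_width="0dp" android:layout_weight="1"
            android:layout_height="wrap_content" />
    </LinearLayout>
</LinearLayout>
```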
OK, now you have the UI for this application. So, let's code it.
3) Application logger
In this section, we’ll code a new class to show the log in the TextView that we have created.
Create a new public class, Logger.java. This class is really simple: we just append the log message to the TextView and scroll it.
You need to update your MainActivity to start the Logger.
If you followed my tutorial about the camera, this will be easier. If you didn't, please just check the onCreate method and copy everything related to the Logger.
You can run your application now and check if the Logger is working :)
4) Image processing:
In this section, we’ll start the most important part of the project, the Android camera processing.
I’m not going to use OpenCV or any other computer vision library.
This is a simple project, so we’re going to use the raw camera data to analyze pixel per pixel and take a decision.
To check whether the image is being processed correctly, let's change the background color of each button.
We have 5 buttons at the bottom of the screen, and as I told you, we're going to divide the image into 5 blocks.
So let's represent each block with the background color of a button. This way, if there is a black line in the center, for example, the center button will be black.
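A small sketch of that mapping, assuming the convention from the image-processing section (leftmost block in the highest bit, 1 = white, 0 = black). The ARGB constants match `android.graphics.Color.WHITE`/`BLACK`; the method name and ordering are my own assumptions:

```java
// Maps the 0-31 analysis result to 5 button background colors.
public class BlockColors {
    static final int WHITE = 0xFFFFFFFF; // same value as android.graphics.Color.WHITE
    static final int BLACK = 0xFF000000; // same value as android.graphics.Color.BLACK

    // value: the 0-31 result of the image analysis.
    public static int[] toColors(int value) {
        int[] colors = new int[5];
        for (int i = 0; i < 5; i++) {
            int bit = (value >> (4 - i)) & 1; // leftmost block = highest bit
            colors[i] = (bit == 1) ? WHITE : BLACK;
        }
        return colors;
    }

    public static void main(String[] args) {
        // 27 = binary 11011: black line under the center block only.
        int[] colors = toColors(27);
        System.out.println(colors[2] == BLACK); // prints true
    }
}
```

On the UI thread you would then apply `button.setBackgroundColor(colors[i])` to each of the 5 buttons.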
Let's start by creating a click listener in the MainActivity. First implement View.OnClickListener, then add these lines to onCreate in your MainActivity.java:
Now override onClick method with the following code:
Now, let's edit the CameraView class. First, add some private variables that we are going to use:
In your CameraView constructor, make sure you have the following line:
Inside your surfaceCreated method, add this code:
Now, let's create the switchFlash() method. This method will turn the flashlight on/off. Add this permission:
And add this code:
This method is simple: it just checks whether the flashlight is on or off and changes its state. Add this new method to start a calibration:
We’ll work with the calibration in the next section.
The next step is to get the raw data from the camera and analyze it. Implement Camera.PreviewCallback and add the following line inside your surfaceCreated and surfaceChanged methods:
Now you can override the following method:
There is a lot of code, right?
This is the code necessary to process the image! It's fully commented to help you understand it.
I’ll talk more about it in the part 4, where we are going to edit this method to add new features to the project :)
Just add the following method, so you'll be able to run your app.
If you run it, you'll probably see it working now! (If you have any problems, please just let me know.)
But each pixel has a value between 0 and 255, and this value can be influenced by ambient light.
So, to try to avoid this problem, we need to calibrate the camera.
To calibrate our app, we'll analyze 100 pictures and check the difference between the lowest and highest values.
These values will show us what is black and what is white.
First, add some private variables:
Edit the Calibrate method with the following code, just to reset some variables:
Now, let’s edit the doCalibrate method.
To calibrate, we'll save the highest and lowest color medians captured by the camera. Then we can treat the lowest value as absolute black in this environment, and the highest value as absolute white.
So when we need to check whether something is white or black, we can compare each block against this value calculated during calibration.
After reading 100 pictures, we set the new colorDivisor value:
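As a plain-Java sketch of this idea (the midpoint formula, the variable names, and the per-frame sampling are my assumptions; the tutorial's actual doCalibrate code may differ):

```java
// Tracks the darkest and brightest block medians over 100 frames,
// then sets the black/white threshold to their midpoint.
public class Calibrator {
    private int lowest = 255;        // darkest block median seen so far
    private int highest = 0;         // brightest block median seen so far
    private int frames = 0;
    private int colorDivisor = 128;  // default threshold before calibration

    // Called once per analyzed frame with that frame's block median (0-255).
    public void addSample(int median) {
        if (median < lowest) lowest = median;
        if (median > highest) highest = median;
        frames++;
        if (frames == 100) {
            // Anything above the midpoint counts as white, below as black.
            colorDivisor = (lowest + highest) / 2;
        }
    }

    public int getColorDivisor() { return colorDivisor; }

    public static void main(String[] args) {
        Calibrator c = new Calibrator();
        for (int i = 0; i < 100; i++) {
            c.addSample(i % 2 == 0 ? 30 : 220); // alternating dark/bright samples
        }
        System.out.println(c.getColorDivisor()); // prints 125
    }
}
```

Taking the midpoint of the observed extremes is what lets the same app work under different lighting: the threshold adapts to whatever "black" and "white" actually look like in the current environment.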