[Tutorial #1] Line Follower Robot with Android and Arduino

Welcome, Aron here!

Over the last year I worked on a Line Follower Robot with Android and Arduino, also known as the Timótheo project.

To complete this project, we created a framework, the .PNG Arduino Framework.

Using it, we created an autonomous robot that uses an Android device to analyze the environment with its camera and then send commands to the robot.

Now you can follow a full tutorial to learn how to create a similar project, how to hack it, and how to add custom features.

Check the video below and let's start.

I - About this tutorial

In this tutorial I'll explain everything about the project while we code it together.

I'll show you the logic, the components, and some tips; when we're finished, you'll see how to hack it to get some nice extra features.

The tutorial is divided into 4 parts:

You’ll need the following components:

The full project will cost around $60 :)

Source code: available on GitHub (@aron-bordin); you can star, fork, or download the repository there.

This project was developed by:

  • Aron Barreira Bordin, Felipe Kutait, Hugo Fernando Waki, Mateus L. Mazziero, Matheus B. de Moraes, Raul Pelegrini Neto, Vinicius F. J. Moretto, Vitor C. Costanzo

With the orientation of:

  • André Bicudo and Rene Pegoraro

II - How this project works

Before we start coding, let me explain how it works. We have two separate modules: Android and Arduino.

  1. Arduino - Arduino has a simple communication layer: it can receive and send messages through Bluetooth. When Arduino receives a new message, the message is analyzed and an action is executed (I'll be using a communication protocol, explained in part #2). This action can be something related to movement (turn right, stop, etc.) or something related to sensors (read the ultrasonic value, for example). Nothing related to the robot logic is coded on the Arduino side; Arduino only executes actions received over Bluetooth, nothing more. But why? It's simple! If the Arduino code stays simple and abstract, we'll be able to use the same robot for anything, without recoding it!

  2. Android - All the robot logic lives here. When hacking it or creating a new project, we'll code a new Android project and keep the same source code on the Arduino. In this project, Android will constantly read the raw camera data to follow the line. Depending on the position of the line, Android will send commands to the connected Arduino. When necessary, Android asks the Arduino for sensor data; for example, every 500 ms we can ask the robot to measure the ultrasonic distance. See the sketch right after this list for the general idea.
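To make that concrete, here is a minimal sketch of the Android side sending a command to the robot. The RobotLink class and the single-character command set are hypothetical, just for illustration; the project's actual protocol is explained in part #2.

import android.bluetooth.BluetoothSocket;

import java.io.IOException;
import java.io.OutputStream;

//Hypothetical helper: writes single-character commands to an already-connected socket
public class RobotLink {
    private final BluetoothSocket mSocket;

    public RobotLink(BluetoothSocket socket) {
        mSocket = socket; //the socket must already be connected to the robot
    }

    //send one command character, e.g. 'R' for "turn right" (hypothetical command set)
    public void sendCommand(char command) throws IOException {
        OutputStream out = mSocket.getOutputStream();
        out.write(command); //a single byte is enough for a simple, abstract protocol
        out.flush();
    }
}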

III - Image processing explained

Before we start to code, I just want to make sure that you understand how everything works.

We have a 320x240 picture. To get good performance, we'll analyze 15 pictures per second. But we won't need to read all the pixels to make a decision.

To detect the line, we'll use a 5x240 part of the picture. You can choose how many pixels you want to analyze and which ones, but in this tutorial I'll use the pixels from 295x1 to 299x240.

Since the image resolution is 320x240, I'm processing only 5 lines. Check the representation below; it's easier to understand :)

The algorithm will analyze the red area. As you can see, this representation has 5 red blocks; we use these blocks because they make the camera data easier to understand. To identify a line, we'll do the following:

  • Take a picture;
  • Select a 5x240 region;
  • Divide this region in 5 blocks;
  • Calculate the average value of each block;
  • Compare the average value with a threshold variable to identify whether the block is white or black;
  • Set each block as white or black, represented by 1 or 0;
  • After analyzing the 5 blocks, we'll have 5 values, forming a binary number between 00000 and 11111, where 00000 is completely black and 11111 is completely white.

I prefer to convert to a binary value because it's easier to code: each situation can be represented with a number from 0 (binary 00000) to 31 (binary 11111).
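Here is a minimal sketch of that encoding (the method name is just for illustration): each block contributes one bit, 1 for white and 0 for black.

//convert the 5 white/black flags into a number from 0 (00000) to 31 (11111)
public static int encodeBlocks(boolean[] isWhite) {
    int value = 0;
    for (boolean white : isWhite)
        value = (value << 1) | (white ? 1 : 0); //shift in one bit per block
    return value;
}

For example, a black line exactly under the center block on a white floor gives {true, true, false, true, true}, which encodes to 11011, or 27.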

If it's still confusing and you want to learn more about the process, feel free to comment here :)

As I told you, we'll compare the camera values with a number to identify whether something is white or black.

This divisor variable will be calculated from the camera data itself, so our app can “learn” to distinguish the colors and work in any environment.

This process is called calibration.
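Here is the idea in miniature, with hypothetical values (we'll implement the real thing in section 5): during calibration we track the darkest and brightest block averages, and the divisor is their midpoint.

//hypothetical averages measured during a calibration run
int lowestSeen = 40;   //darkest block average seen: the absolute black here
int biggestSeen = 200; //brightest block average seen: the absolute white here
int colorDivisor = (lowestSeen + biggestSeen) / 2; //120: anything above it is white
boolean blockIsWhite = 150 > colorDivisor; //true: a block averaging 150 counts as white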

IV - Coding it

I separated the code into sections to make it easier to understand how everything works.

1) Camera app

The base of this system is a camera application, so we can read the camera data to make decisions. I created a separate tutorial on how to use the camera with Android Studio.

You can follow the tutorial or just copy the source code.

2) User interface

After creating the camera application from the link above, we can work on the UI. The UI is divided into 3 sections:

  • a camera preview
  • a TextView to log the application
  • 5 buttons (calibrate the camera, turn the flash on/off, connect to the robot, stop the robot, start the robot)

Open your activity_main.xml and replace its content with the following code:

<FrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical"
        android:weightSum="10">
        <FrameLayout
            android:id="@+id/camera_view"
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_weight="5"></FrameLayout>

        <ScrollView
            android:id="@+id/scrLogger"
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_weight="3"
            android:background="#000000">
            <TextView
                android:id="@+id/txtLogger"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:singleLine="false"
                android:textColor="#ff00ff04" />
        </ScrollView>
        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_weight="2"
            android:background="#000000"
            android:orientation="horizontal"
            android:weightSum="5">
            <ImageView
                android:id="@+id/btnCalibrate"
                android:layout_width="0dp"
                android:layout_height="match_parent"
                android:layout_margin="1dp"
                android:layout_weight="1"
                android:background="#FFFFFF"
                android:padding="16dp"
                android:src="@android:drawable/ic_menu_compass" />
            <ImageView
                android:id="@+id/btnFlash"
                android:layout_width="0dp"
                android:layout_height="match_parent"
                android:layout_margin="1dp"
                android:layout_weight="1"
                android:background="#FFFFFF"
                android:padding="16dp"
                android:src="@android:drawable/ic_menu_camera" />
            <ImageView
                android:id="@+id/btnConnect"
                android:layout_width="0dp"
                android:layout_height="match_parent"
                android:layout_margin="1dp"
                android:layout_weight="1"
                android:background="#FFFFFF"
                android:padding="16dp"
                android:src="@android:drawable/ic_menu_share" />
            <ImageView
                android:id="@+id/btnStop"
                android:layout_width="0dp"
                android:layout_height="match_parent"
                android:layout_margin="1dp"
                android:layout_weight="1"
                android:background="#FFFFFF"
                android:padding="16dp"
                android:src="@android:drawable/ic_menu_close_clear_cancel" />
            <ImageView
                android:id="@+id/btnStart"
                android:layout_width="0dp"
                android:layout_height="match_parent"
                android:layout_margin="1dp"
                android:layout_weight="1"
                android:background="#FFFFFF"
                android:padding="16dp"
                android:src="@android:drawable/ic_menu_send" />
        </LinearLayout>
    </LinearLayout>
    <ImageButton
        android:id="@+id/imgClose"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="right|top"
        android:background="@android:drawable/ic_menu_close_clear_cancel"
        android:padding="15dp" />
</FrameLayout>

OK, now you have the UI for this application. So, let's code it.

3) Application logger

In this section, we’ll code a new class to show the log in the TextView that we have created.

Create a new public class, Logger.java. This class is really simple: we just append the log message to the TextView and scroll it down.

package com.wordpress.bytedebugger.timotheo;

import android.util.Log;
import android.view.View;
import android.widget.ScrollView;
import android.widget.TextView;

/**
 * This class will manage the Log messages
 */
public class Logger {
    private static Logger __logger = null;
    protected static final String TAG = "[Logger]";
    private static TextView mTxtLogger;
    private static ScrollView mScrollLogger;

    /**
     * Ensure that we have only one Logger instance running
     * @param parent MainActivity
     * @return Logger
     */
    public static Logger getInstance(MainActivity parent) {
        if (__logger == null) //store the new instance, so there really is only one Logger
            __logger = new Logger(parent);
        return __logger;
    }

    public static Logger getInstance() {
        return __logger; //assumes getInstance(MainActivity) was already called once
    }

    private Logger(MainActivity parent) {
        mTxtLogger = (TextView)parent.findViewById(R.id.txtLogger);
        mScrollLogger = (ScrollView)parent.findViewById(R.id.scrLogger);
    }

    /**
     * Log the message, display it in the TextView area and in the logcat
     * @param txt
     */
    public static void Log(String txt) {
        Log.d(TAG, txt);
        mTxtLogger.append(txt + "\n");

        mScrollLogger.post(new Runnable() {
            @Override
            public void run() {
                mScrollLogger.fullScroll(View.FOCUS_DOWN);
            }
        });
    }

    /**
     * Log an error message, showing it in the TextView and in the logcat
     * @param txt
     */
    public static void LogError(String txt) {
        Log.e(TAG, txt);
        mTxtLogger.append("[ERROR] " + txt + "\n");

        mScrollLogger.post(new Runnable() {
            @Override
            public void run() {
                mScrollLogger.fullScroll(View.FOCUS_DOWN);
            }
        });
    }
}

You need to update your MainActivity to start the Logger.

If you followed my tutorial about the camera, this will be easier. If you didn't, just check the onCreate method and copy everything related to the Logger.

package com.wordpress.bytedebugger.timotheo;

import android.app.Activity;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.View;
import android.widget.FrameLayout;
import android.widget.ImageButton;


public class MainActivity extends Activity implements View.OnClickListener{
    private Camera mCamera = null;
    private CameraView mCameraView = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Logger l = Logger.getInstance(this); //create the logger instance

        Logger.Log("Starting Timótheo...");
        try{
            Logger.Log("\tCreating the camera...");
            mCamera = Camera.open();//you can use open(int) to use different cameras
        } catch (Exception e){
            Logger.LogError("\tFailed to get camera: " + e.getMessage());
        }

        Logger.Log("\tDone!\n\tCreating the camera preview...");
        if(mCamera != null) {
            mCameraView = new CameraView(this, mCamera);//create a SurfaceView to show camera data
            FrameLayout camera_view = (FrameLayout)findViewById(R.id.camera_view);
            camera_view.addView(mCameraView);//add the SurfaceView to the layout
        }
        Logger.Log("Done!");
        //btn to close the application
        ImageButton imgClose = (ImageButton)findViewById(R.id.imgClose);
        imgClose.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                System.exit(0);
            }
        });
        Logger.Log("Timótheo started successfully!");

        findViewById(R.id.btnCalibrate).setOnClickListener(this);
        findViewById(R.id.btnFlash).setOnClickListener(this);
        findViewById(R.id.btnConnect).setOnClickListener(this);
        findViewById(R.id.btnStop).setOnClickListener(this);
        findViewById(R.id.btnStart).setOnClickListener(this);
    }
}

You can run your application now and check if the Logger is working :)

4) Image processing

In this section, we’ll start the most important part of the project, the Android camera processing. I’m not going to use OpenCV or any other computer vision library.

This is a simple project, so we're going to use the raw camera data, analyze the image pixel by pixel, and make a decision.

To be able to see whether the image is being processed correctly, let's change the background color of each button.

We have 5 buttons at the bottom of the screen, and as I told you, we're going to divide the image into 5 blocks.

So let's represent each block with the background color of one button. With this trick, if there is a black line in the center, for example, the button in the center will be black.

Let's start by creating a click listener in the MainActivity. First implement View.OnClickListener and then add these lines to onCreate in your MainActivity.java:

findViewById(R.id.btnCalibrate).setOnClickListener(this);
findViewById(R.id.btnFlash).setOnClickListener(this);
findViewById(R.id.btnConnect).setOnClickListener(this);
findViewById(R.id.btnStop).setOnClickListener(this);
findViewById(R.id.btnStart).setOnClickListener(this);

Now override the onClick method with the following code:

    @Override
    public void onClick(View view) {
        switch (view.getId()){
            case R.id.btnCalibrate:
                mCameraView.Calibrate();
                break;
            case R.id.btnFlash:
                mCameraView.switchFlash();
                break;
            case R.id.btnConnect:
                break;
            case R.id.btnStop:
                break;
            case R.id.btnStart:
                break;
        }
    }

Now, let's edit the CameraView class. First, add some private variables that we are going to use:

    private int width = 320, height = 240; //size of the preview image
    private boolean isRunning = false; //if the robot is running
    private boolean isCalibrating = false; //if the application is calibrating the colors
    private boolean isFlashOn = false; //if the flash is on
    private List<ImageView> mBlocks = new ArrayList<ImageView>(); //list of buttons
    private int[] mBlocksMedian = new int[5]; //average value of each block
    private boolean canProcess = false; //if we can process the current frame
    private int colorDivisor = 126; //if a block's average is greater than colorDivisor, the block is white
    private MainActivity parent;

In your CameraView constructor, make sure you have the following lines:

parent = (MainActivity)context;
mCamera = camera;

Camera.Parameters p = mCamera.getParameters();
p.setPreviewSize(width, height);
mCamera.setParameters(p);

Inside your surfaceCreated method, add this code:

mBlocks.add((ImageView)parent.findViewById(R.id.btnCalibrate));
mBlocks.add((ImageView)parent.findViewById(R.id.btnFlash));
mBlocks.add((ImageView)parent.findViewById(R.id.btnConnect));
mBlocks.add((ImageView)parent.findViewById(R.id.btnStop));
mBlocks.add((ImageView)parent.findViewById(R.id.btnStart));

Now, let's create the switchFlash() method. It will turn the flashlight on and off. Add this permission:

<uses-permission android:name="android.permission.FLASHLIGHT" />

And add this code:

    public void switchFlash() {
        Camera.Parameters p = mCamera.getParameters();
        if(isFlashOn)
            p.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
        else
            p.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
        isFlashOn = !isFlashOn;
        mCamera.setParameters(p);
        Logger.Log("Flash switched");
    }

This method is simple: it just checks whether the flashlight is on or off and toggles the state. Add this new method to start a calibration:

    public void Calibrate() {
        Logger.Log("Calibrating...");
        isCalibrating = true;
    }

We’ll work with the calibration in the next section.

The next step is to get the raw data from the camera so we can analyze it. Implement Camera.PreviewCallback and add the following line inside your surfaceCreated and surfaceChanged methods:

mCamera.setPreviewCallback(this);

Now you can override the following method:

    @Override
    public void onPreviewFrame(byte[] bytes, Camera camera) {
        synchronized (this) {
            if (!isCalibrating && !canProcess && isRunning) {
                return; //if we are calibrating the camera, we analyze every picture;
                //but if we are running the robot (isCalibrating = false), we only analyze a frame when
                //canProcess = true
                //with canProcess, we'll be able to choose how many frames are analyzed per second
            }

            int mBlocksQtd[] = new int[5]; //count the number of rows read per block
            Arrays.fill(mBlocksMedian, 0); //reset the block values

            int initialLine = 295; //the first line to analyze
            int finalLine = 300; //one past the last line to analyze
            int i, j = 5;

            for (i = 0; i < 240; i++) {
                if (i % 48 == 0) //we divide the 240 rows into 5 blocks, each one with 48px
                    j--; //every 48px we move on to the next block
                int value = 0;
                for (int k = initialLine; k < finalLine; k++) //read the pixel value on each of the 5 lines
                    value += (bytes[k + 320 * i]) & 0xFF; //mask to get an unsigned value (0-255)
                value /= (finalLine - initialLine); //average the 5 values for this row

                mBlocksMedian[j] += value; //add the row average to the block total
                mBlocksQtd[j]++; //count the rows read in each block.
                // The total will always be 48; the counter just makes the code easier to change :)
            }

            for (i = 0; i < 5; i++)
                mBlocksMedian[i] = mBlocksMedian[i] / mBlocksQtd[i]; //get the average value of each block

            if(isCalibrating) { //if this method is being called during a calibration, we need to
                doCalibrate(); //recalculate some variables
                return;
            }

            //if we are not calibrating, we display the camera results in the buttons
            for(i = 0; i < 5; i++)
                mBlocks.get(i).setBackgroundColor(mBlocksMedian[i] > colorDivisor ? Color.WHITE : Color.BLACK);

            //after each image processed, we need to wait until the next allowed frame
            canProcess = false;
        }
    }

There is a lot of code, right?

This is the code necessary to process the image! It's fully commented to help you understand it.
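One detail the code above leaves open is who sets canProcess back to true between frames. Here is a minimal sketch of one way to do it with an android.os.Handler; this is just an assumption for illustration, not necessarily how the project triggers it:

//hypothetical throttle: allow roughly 15 frames per second to be processed
private final Handler mHandler = new Handler();
private final Runnable mAllowFrame = new Runnable() {
    @Override
    public void run() {
        canProcess = true; //let onPreviewFrame analyze the next incoming frame
        mHandler.postDelayed(this, 66); //re-arm in ~66 ms, about 15 times per second
    }
};
//start it once, for example in surfaceCreated: mHandler.post(mAllowFrame);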

The raw image is in the YUV420SP format; you can read a little more about it here.
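If the indexing in the loop above looks like magic, this sketch may help: in a YUV420SP (NV21) frame, the first width*height bytes are the Y (luminance) plane, stored row by row, so the brightness of the pixel at column x and row y is:

//luminance (0-255) of the pixel at (x, y) in a YUV420SP/NV21 preview frame
public static int luminanceAt(byte[] frame, int width, int x, int y) {
    return frame[y * width + x] & 0xFF; //mask to get an unsigned value
}

With width = 320, this is exactly the bytes[k + 320 * i] & 0xFF expression used in onPreviewFrame.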

I'll talk more about it in part 4, where we are going to edit this method to add new features to the project :)

Just add the following method, so you'll be able to run your app:

    private void doCalibrate() {
        //we'll code it soon
    }

If you run it, you'll probably see it working now! (If you have any problems, please just let me know.)

But each pixel has a value between 0 and 255, and this value is influenced by the ambient light.

So, to try to avoid this problem, we need to calibrate the camera.

5) Calibration

To calibrate our app, we'll analyze 100 pictures and check the difference between the lowest and highest values.

These values will show us what is black and what is white.

First, add some private variables:

private int mBlocksBiggest = 0; //biggest block value seen
private int mBlocksLowest = 255; //lowest block value seen
private int calibratingCounter = 0; //how many pictures have been processed during the calibration

Edit the Calibrate method with the following code, just to reset some variables:

    public void Calibrate() {
        Logger.Log("Calibrating...");
        mBlocksBiggest = 0;
        mBlocksLowest = 255;
        calibratingCounter = 0; //reset the counter, so we can calibrate more than once
        isCalibrating = true;
    }

Now, let’s edit the doCalibrate method.

To calibrate, we'll save the lowest and highest color averages captured by the camera. Then we can treat the lowest value as the absolute black in this environment, and the highest value as the absolute white in this environment.

So whenever we need to check whether something is white or black, we can compare each block with the value computed during the calibration.

    private void doCalibrate() {
        //this method is called for each frame
        int i;
        for (i = 0; i < 5; i++) { //check the lowest and biggest average of each block
            if (mBlocksMedian[i] > mBlocksBiggest)
                mBlocksBiggest = mBlocksMedian[i];
            if (mBlocksMedian[i] < mBlocksLowest)
                mBlocksLowest = mBlocksMedian[i];
        }

        calibratingCounter++;
        isCalibrating = true; //we do it 100 times, and then we call stopCalibrate
        if (calibratingCounter == 100)
            stopCalibrate();
    }

After reading 100 pictures, we set the new colorDivisor value:

    private void stopCalibrate() {
        //compute colorDivisor as the midpoint between the lowest and biggest values
        colorDivisor = (mBlocksLowest + mBlocksBiggest) / 2;
        isCalibrating = false;

        Logger.Log("New color divisor: " + colorDivisor);
    }

Now run your app and calibrate it; you'll probably get a better result. You can get the full source code created in this part of the tutorial in this link.

V - The next tutorial

These tutorials are really long.

In this one, we covered the basics of image processing and how to identify colors.

In the next one, we'll code the Arduino robot. Then we'll come back to this app in part 3 to code the logic.

If you have any kind of problem, if something is not working or not explained, please just comment here so I can help you :)

Thx for reading, I’ll see you in the next post.

Aron Bordin.
