Introduction

This is a 6DoF inertial navigation example for Android devices.

In this example, the device itself works like a camera that is initially pointed at an imaginary cube located 5 meters away. Therefore, as you move the device, you will start seeing the different faces of the cube. (Technically, the position and orientation of the device are used as the exterior calibration parameters of the OpenGL camera.)
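For concreteness, here is a minimal sketch of how such a device pose could be turned into an OpenGL view matrix. This is only an illustration with my own names, not necessarily how the app organizes it: the orientation is assumed to be available as a row-major 3x3 body-to-navigation rotation matrix Rnb, and the position as a navigation-frame vector pos.

import android.opengl.Matrix;

public final class PoseToView {
    // Builds a column-major 4x4 view matrix from the device pose.
    // Rnb : row-major 3x3 rotation matrix from the body frame to the navigation frame.
    // pos : device position in the navigation frame (meters).
    public static void viewMatrix(float[] Rnb, float[] pos, float[] outView16) {
        Matrix.setIdentityM(outView16, 0);
        // Rotation part: Rnb^T (navigation -> body), written column-major.
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                outView16[c * 4 + r] = Rnb[c * 3 + r];
        // Translation part: view = Rnb^T * T(-pos).
        Matrix.translateM(outView16, 0, -pos[0], -pos[1], -pos[2]);
        // Depending on the axis conventions, an extra fixed rotation between the
        // body frame and the GL eye frame (which looks down -z) may be needed.
        // Combine with a projection matrix (e.g. Matrix.frustumM) before drawing.
    }
}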

About the Application

It has 3 modes:

3DoF Accelerometer+Compass : This is a 3DoF mode. It only computes the orientation based on the accelerometer and compass outputs (actually it uses a built-in function). The position of the cube in the device's body frame of reference is assumed to be constant. In other words, the device is allowed to move on a sphere with the cube located at its center.

3DoF Gyroscope : This is a 3DoF mode too. However, in this mode the attitude is computed based only on the gyroscope outputs. (The initial orientation is taken from the first mode.) In this mode, you will definitely observe the superiority of gyroscope-based orientation over the accelerometer-based one.

6DoF: This is a pure 6DoF inertial navigation mode. In the previous 3DoF modes, the displacement of the device has no effect on the rendered cube image. However, in this mode, as you move the device, you will (hopefully) see a proportional shift in the cube image. (i.e., if you want to see the cube closer, move the device along its z-axis toward the imaginary cube. As you move, the cube image will get closer.) A minimal sketch of the underlying mechanization is given right after this list.
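The following is a minimal sketch of a single strapdown mechanization step under some simplifying assumptions (local-level navigation frame with the z-axis pointing down, attitude kept as a body-to-navigation rotation matrix, simple Euler integration; all names are mine, not the app's):

// Cbn : row-major 3x3 body-to-navigation rotation matrix (maintained from the gyros)
// f   : accelerometer output (specific force) in the body frame (m/s^2)
// vel, pos : velocity and position in the navigation frame
// g   : local gravity magnitude; dt : sampling interval (s)
public static void mechanizeStep(float[] Cbn, float[] f, float[] vel, float[] pos,
                                 float g, float dt) {
    // Rotate the specific force into the navigation frame.
    float[] fn = new float[3];
    for (int r = 0; r < 3; r++)
        fn[r] = Cbn[r * 3] * f[0] + Cbn[r * 3 + 1] * f[1] + Cbn[r * 3 + 2] * f[2];
    // Add gravity to obtain the navigation-frame acceleration
    // (z-down convention; the sign depends on the chosen frame).
    fn[2] += g;
    // Integrate velocity and position.
    for (int i = 0; i < 3; i++) {
        pos[i] += vel[i] * dt + 0.5f * fn[i] * dt * dt;
        vel[i] += fn[i] * dt;
    }
}

The attitude itself is propagated from the gyroscope outputs; a sketch of that step is given at the end of this document.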

In the above modes, the following utilities must also be used accordingly:

Bias removal : Once this is clicked, inertial data is collected for one second, and its average is taken as the inertial sensor biases, which are then automatically subtracted from all subsequent sensor outputs. Obviously, you must use this while the device is resting on a horizontal platform; otherwise, the accelerometer bias estimates will be spoiled by the gravity components leaking into the horizontal axes. The gyroscope corrections, however, will always be valid regardless of the orientation.

ZUPT : Internally, a Kalman filter is used to stabilize the INS. Whenever the system is stationary, ZUPT should be clicked to signal a zero-velocity update. Based on this information, the Kalman filter estimates and corrects the accumulated INS errors. (A generic measurement-update sketch is given after this list.)

CUPT: This is a signal for a coordinate update. When it is clicked, the Kalman filter updates the system using the initial position. Therefore, it must only be used when the system is at the initial location. Obviously, GPS outputs could be used directly instead of the coordinates of the initial location. However, as I primarily want to use this inside buildings, I intentionally did not use GPS as a possible external aiding source.

Reset: Resets the system including the covariance of the Kalman filter.
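To make the ZUPT/CUPT mechanism mentioned above concrete, here is a generic, simplified sketch of a scalar Kalman measurement update (the app's actual state vector and filter code are not reproduced here). For a ZUPT, the scalar measurement is one component of the INS-indicated velocity (which should be zero); for a CUPT, it is one component of the difference between the INS-indicated position and the initial position. With a diagonal measurement covariance, the three components can be processed one after another with this routine.

// x : error-state vector (length n), P : covariance matrix (n x n, row-major)
// h : measurement row vector, z : scalar measurement, r : measurement variance
public static void scalarKalmanUpdate(float[] x, float[] P, float[] h, float z, float r) {
    int n = x.length;
    float[] Ph = new float[n];                 // P * h'
    for (int i = 0; i < n; i++) {
        float acc = 0f;
        for (int j = 0; j < n; j++) acc += P[i * n + j] * h[j];
        Ph[i] = acc;
    }
    float s = r;                               // innovation variance s = h P h' + r
    for (int i = 0; i < n; i++) s += h[i] * Ph[i];
    float innov = z;                           // innovation z - h x
    for (int i = 0; i < n; i++) innov -= h[i] * x[i];
    for (int i = 0; i < n; i++) {
        float k = Ph[i] / s;                   // Kalman gain K = P h' / s
        x[i] += k * innov;                     // state update
        for (int j = 0; j < n; j++)
            P[i * n + j] -= k * Ph[j];         // covariance update P - K s K'
    }
}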

NOTE (i): I do not have an Android phone with a gyroscope. Therefore, I have not been able to check the program on a real system. Furthermore, I simply made up the sensor error model parameters for the Kalman filter, as I had no chance to get sensor data from an Android phone to perform error modeling.

(ii): The application needs a minimum SDK level of 8. However, I did not enforce this in the manifest.


Some Notes About the Android Inertial Sensor Sub-System

In Android 2.3 there are some substantial changes with respect to 2.2. In 2.2, the sensor manager in the framework directly calls libhardware.so via the JNI interface.

On the other hand, in 2.3 a new abstraction layer called the sensor service is introduced. In this model, the sensor service, which runs as a Linux service, is started during Android run-time initialization. The sensor service then reads the inertial sensor data, again using libhardware.so.

Any client which wants to read sensor data (including the Android framework) then uses IPC to register itself with this sensor service. Every time new sensor data becomes available, the sensor service sends this data (or whatever is requested by the client) using Android's weird binder mechanism (which is essentially an IPC method specific to Android). Therefore, in Android 2.3 no client (including the framework) has any direct interface with the sensor hardware.

The most important property of this structure is that it is as easy to get sensor data in the native environment as it is in the framework. That is why everything related to the sensors can now be implemented in C(++).

Another property of this structure is that additional virtual sensors can now be added to the sensor service. (Essentially, the current implementation of the sensor service contains 2 such virtual sensors, namely the so-called linear acceleration and gravity sensors.) Unfortunately, however, you must have root privileges to add such new sensors to your device (you have to recompile the sensor service and then replace the original one). These new virtual sensors can then be handled in any client exactly the same way the existing sensors are used, as sketched below.
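As an illustration, here is how a framework client would request the two existing virtual sensors; a custom virtual sensor added to the sensor service would be used the same way, provided its type constant is known to the client (this is a plain SensorManager sketch, not code from the app):

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class VirtualSensorClient implements SensorEventListener {
    public void start(Context ctx) {
        SensorManager sm = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE);
        // The built-in virtual sensors (API level 9+), requested like any physical sensor.
        Sensor gravity = sm.getDefaultSensor(Sensor.TYPE_GRAVITY);
        Sensor linAcc  = sm.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        sm.registerListener(this, gravity, SensorManager.SENSOR_DELAY_GAME);
        sm.registerListener(this, linAcc, SensorManager.SENSOR_DELAY_GAME);
    }
    @Override public void onSensorChanged(SensorEvent event) {
        // event.sensor.getType() tells which sensor produced event.values[].
    }
    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}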

Currently, the Android framework does not use the gyroscope outputs for anything at all. Every kind of orientation value is still computed based on the accelerometer+compass outputs. People seeing different characteristics for different orientation values usually get confused and wrongly think that some of the orientation values are computed based on gyroscope outputs in 2.3. However, this is not the case at all. The only reason for such differences is the linear filters used in the attitude computations: some orientation values are obtained from heavily filtered accelerometer and compass outputs (not from gyros). That is why these attitude values seem less noisy (and quite unresponsive at the same time).
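In other words, whatever orientation value you read from the framework ultimately boils down to something like the following accelerometer+magnetometer combination, possibly with some extra low-pass filtering on top (a minimal sketch; accel and mag are assumed to hold the latest TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD readings):

import android.hardware.SensorManager;

public final class AccelCompassAttitude {
    // Returns azimuth, pitch and roll (radians) from the latest accelerometer
    // and magnetometer samples. This is the accelerometer+compass route the
    // framework takes; no gyroscope is involved.
    public static float[] orientation(float[] accel, float[] mag) {
        float[] R = new float[9];
        float[] I = new float[9];
        float[] angles = new float[3];
        if (SensorManager.getRotationMatrix(R, I, accel, mag)) {
            SensorManager.getOrientation(R, angles);
        }
        return angles;
    }
}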

As a matter of fact, it seems as if the donkeys working for Google have no idea how to use the gyroscope data to compute orientation. Here is an example proving this. The following explanation is from the Android SDK (the code below was corrected after Android 4):

...

Typically the output of the gyroscope is integrated over time to calculate an angle, for example:

private static final float NS2S = 1.0f / 1000000000.0f;
private float timestamp;
public void onSensorChanged(SensorEvent event)
     {
          if (timestamp != 0) {
              final float dT = (event.timestamp - timestamp) * NS2S;
              angle[0] += event.data[0] * dT;
              angle[1] += event.data[1] * dT;
              angle[2] += event.data[2] * dT;
          }
          timestamp = event.timestamp;
     }

In practice, the gyroscope noise and offset will introduce some errors which need to be compensated for. This is usually done using the information from other sensors, but is beyond the scope of this document.

...

(You can find the above explanation in SensorEvent.java.)

As a navigation engineer, you can immediately recognize what is wrong with this explanation. For everyone else, let me explain why it is so wrong:

The person who wrote this probably did not take any calculus course during his undergraduate education, because if he had attended any lecture he would have known that such integrals are valid only if the frame of reference is fixed. However, the frame of reference obviously changes during the motion of the device, and this is exactly what the gyroscopes sense. Therefore, the plain integral of the gyroscope outputs does not physically correspond to anything meaningful. (At this point, navigation engineers should be capable of stating the necessary condition for this integral to correspond to some meaningful orientation information. If you cannot, then you had better read the subject of the differential equation of the orientation vector once more.)

In the Demo6DoF, the 3DoF gyroscope mode shows one of the correct ways of processing the gyroscope data. If you are planning to use gyroscope data in your project, you had better review the code instead of trusting the explanation in the SDK.
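For reference, here is a minimal sketch of one correct approach (my own names and simplifications; the Demo6DoF code keeps more bookkeeping). Instead of integrating each rate component separately, the rate sample is turned into a small rotation vector over dT, converted into a delta quaternion, and composed with the current attitude quaternion, which respects the fact that rotations about different axes do not commute:

import android.hardware.Sensor;
import android.hardware.SensorEvent;

// Fields and callback of a SensorEventListener registered for the gyroscope.
private static final float NS2S = 1.0f / 1000000000.0f;
private final float[] q = {1f, 0f, 0f, 0f};   // attitude quaternion (w, x, y, z), body to navigation
private long timestamp = 0;

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) return;
    if (timestamp != 0) {
        final float dT = (event.timestamp - timestamp) * NS2S;
        // Rotation vector (angle-axis) accumulated over this interval.
        float rx = event.values[0] * dT;
        float ry = event.values[1] * dT;
        float rz = event.values[2] * dT;
        float angle = (float) Math.sqrt(rx * rx + ry * ry + rz * rz);
        // Delta quaternion of the small rotation.
        float dw = (float) Math.cos(angle / 2f);
        float s = (angle > 1e-9f) ? (float) (Math.sin(angle / 2f) / angle) : 0.5f;
        float dx = rx * s, dy = ry * s, dz = rz * s;
        // Compose: q <- q * dq (body-frame rates multiply on the right).
        float qw = q[0], qx = q[1], qy = q[2], qz = q[3];
        q[0] = qw * dw - qx * dx - qy * dy - qz * dz;
        q[1] = qw * dx + qx * dw + qy * dz - qz * dy;
        q[2] = qw * dy - qx * dz + qy * dw + qz * dx;
        q[3] = qw * dz + qx * dy - qy * dx + qz * dw;
        // Renormalize q every now and then to counter accumulated round-off.
    }
    timestamp = event.timestamp;
}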