The first group is called the PHI-model. It is nothing but a simple perturbation of the equations of kinematics (which are somewhat wrongfully called the INS equations among navigation engineers). Because the difference between the true attitude and the INS-derived attitude is denoted by the letter PHI, this model is called the PHI-model.

The second group is the PSI-model. In fact, this model is also obtained as the result of a certain perturbation. However, in the PSI-model we perturb the equations of kinematics around a fictitious navigation frame called the "computer frame". The difference between the INS-derived attitude and the computer frame is denoted by the letter PSI. That is where the name comes from.

There are a sufficient number of papers in the existing literature describing the derivation of PSI-models. In particular, Benson's short papers on the PSI-model (both the 1975 and the 1978 one) describe everything related to it in plain English. There are other papers that further unify several concepts and derive a bunch of additional models. However, Benson's papers (especially the one titled "A Comparison of Two Approaches to Pure-Inertial and Doppler-Inertial Error Analysis") are all you need to learn everything about PSI-models.

Having learned both the PHI and PSI models, you will face the real question: which model should you use? That is the main topic of this blog note.

The short answer is that you should prefer the PHI model in all your Kalman filter implementations.

This answer may at first be surprising, as the canonical sources on navigation systems usually favour the PSI-model. So, let me elaborate my answer a little.

It is indeed true that the PSI-model is cleaner than the PHI-model, as the transport rate is not perturbed in it. Being cleaner means that fewer floating-point operations are required in the Kalman filter cycles. However, with today's computing capabilities such a tiny reduction in the model computation does not matter at all.

Even though the PSI-model has no clear advantage over the PHI-model, every navigation engineer should definitely learn how to derive the PSI-model even if he never uses it. (I personally was able to grasp the real meaning of the navigation frame only after studying the PSI-model.) Mostly because of this conceptual importance, navigation engineers learn it early in their careers and then continue to use it out of habit. This is in fact the main reason why the PSI-model is more commonly used in navigation systems with high-grade sensors.

On the other hand, the PSI-model has one big disadvantage that makes it not so suitable for low-grade units. One of the significant problems we face in the design of low-cost systems is azimuth initialization. A high-grade system can perform gyrocompassing to reduce the initial attitude uncertainty to levels suitable for the small-angle assumption. However, in a low-cost system we almost always have to perform in-motion alignment starting with a large heading uncertainty. Because of the definition of the "computer frame", the effect of a large heading uncertainty manifests itself in the velocity errors of the PSI-model. Therefore, both the position and the velocity errors are affected by the non-linearity of large attitude errors in the PSI-model. In the PHI-model, on the other hand, a large heading uncertainty affects only the position errors. Therefore, PHI-models behave better than PSI-models under large azimuth errors.

As far as I know, in the entire literature it is only Scherzinger who uses a PSI-model-based large-heading filter. However, in his paper titled "Inertial Navigator Error Models for Large Heading Uncertainty", he also recognizes the aforementioned difficulty of standard PSI-models and therefore proposes a modified version. I find his modified PSI-model unnecessarily complex. I cannot see any advantage of his method over the much easier (and almost standard) method described in "T. M. Pham, Kalman Filter Mechanization for INS Airstart, IEEE, 1992".

As a result, if you are going to design an INS with low-cost sensors, you should only consider using PHI-models as long as you do not have a robust means of attitude initialization. You should always remember that under the small-angle assumption the PHI and PSI models are equivalent. Therefore, there is absolutely no theoretical advantage in choosing one over the other. However, this does not mean that the PSI-model is unimportant. On the contrary, if you are a navigation engineer you have to learn it by heart in order to understand the basic navigation frame concepts.
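For intuition on why the small-angle models coincide: the two error definitions differ only by the small rotation δθ between the true and computer frames, so φ = ψ + δθ to first order. Below is a minimal numerical sketch of that additivity (my own illustration with made-up small angles; it uses first-order DCMs, not any particular paper's full error model):

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def small_dcm(ang):
    """First-order DCM of a small rotation vector: I - [ang x]."""
    return np.eye(3) - skew(ang)

# psi: platform attitude error w.r.t. the computer frame
# dtheta: rotation from the true frame to the computer frame (position error)
psi = np.array([1.0e-3, -2.0e-3, 0.5e-3])      # made-up small angles [rad]
dtheta = np.array([0.4e-3, 1.0e-3, -0.7e-3])

C = small_dcm(psi) @ small_dcm(dtheta)         # total error DCM (true -> platform)
phi = np.array([C[1, 2], C[2, 0], C[0, 1]])    # read the total error angle back

print(np.linalg.norm(phi - (psi + dtheta)))    # second-order small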

PS: See smoother_2filt_fwd and example_wander in the toolkit as introductory examples of the PSI-models. Also, sys_llh_phipsi, sys_metric_phipsi and correctnav_Cen clearly show the difference between PHI- and PSI-model implementations.


“The sensor manufacturers keep everything related with their sensors as confidential. They are even unwilling to share the per unit price of their sensors.”

Sadly, this is true in most cases; it even makes things difficult for us vendors, who want to do a competitive comparison openly. From a competitive standpoint, no one wants to be the first to change this, be at a competitive disadvantage and expose their own IP.

“Intersense is a cheap MEMS inertial sensor manufacturer whose product is much lower quality than the other items in the list.”

BTW, I’ve received multiple confirmations from NAVCHIP customers that Intersense will be pulling the NAVCHIP from the market.

“Systron Donners SDI500 is a 6DOF IMU. The specification in their website is quite promising. However, I find it difficult to believe that that unit will be capable of showing the specified performance in the field.”

I also heard from Systron insiders that the company is going through some financial difficulties and re-orging, but nonetheless I think they make good products, since they have been around for decades now.

Anyway, recently I had to select some inertial sensors for one of the projects I was working on, so I sent a couple of emails to manufacturers asking for their prices. Here is a short list of those prices. If you are also in search of compact inertial sensors, this list will save you from waiting for weeks before those bastards graciously agree to answer your quote requests:

Goodrich

- SiIMU02: $9,000
- AIMU: $11,000

Honeywell (accelerometers only)

- QA650: $1,200
- QA700-010: $1,800
- QA700-020: $1,900
- RBA500: $1,200

Systron Donner

- SDI500: $16,000
- SDG1000: $2,000
- QRS116: $2,900
- QRS14: $1,100
- SDG500: $700

KVH (miniature fiber optic gyroscopes)

- DSP-1750: $4,000 (single axis; dual axis is $8,000)

Intersense

- NavChip: $1,300 (engineering sample: $4,000)

Litef (previously independent, but now a subsidiary of Northrop Grumman)

- µIMU-IC: 12,000 Euro

Among the above sensors, I should say I was impressed by KVH's mini fiber optic gyroscopes. For years we were stuck with MEMS sensors whenever we had very tight space constraints. Finally, a manufacturer has thought of compacting its FOGs as an alternative to MEMS. (In fact, Honeywell has had some compact and rugged RLGs for years, but I do not know if it is possible to buy them.) The DSP-1750 is less than 3 inches in size, which is perfect for many applications with tight space constraints. I hope in the future they will release more FOG units like this so that we can get rid of the other arrogant MEMS manufacturers entirely.

Intersense is a cheap MEMS inertial sensor manufacturer whose product is of much lower quality than the other items in the list. However, I like their design and engineering. (It is at least better than the similar products of Analog Devices, which is a hopeless manufacturer. It is indeed a mystery to me how the engineers at Analog Devices manage to be so dumb.) So if you are looking for a MEMS unit in that price range, you may want to check Intersense's NavChip. (I was planning to build a high-accuracy IMU block in a redundant configuration using this sensor. Due to reasons out of my control, I was not able to realize this plan. I still believe that using 8 NavChips in a single IMU block can be better than buying another, more expensive option.)

Honeywell is almost unrivaled in its accelerometers. If you need an accelerometer, you do not need to look any further than Honeywell. Recently I heard of another company called "Japan Aviation Electronics" which is said to have some good accelerometers too. However, I could not find any meaningful information on their website. Also, those bastards did not return my email, which is why I do not know their accelerometer prices either. In any case, I do not think their accelerometers can match Honeywell's units.

Systron Donner's SDI500 is a 6-DOF IMU. The specification on their website is quite promising. However, I find it difficult to believe that the unit is capable of showing the specified performance in the field. Lab test results of MEMS units can always be misleading. That is why I find it a risk to invest in that product for any project. Instead of using it, I would prefer to go for KVH's fiber optic gyros with Honeywell's accelerometers.

PS: I have to mention that there are plenty of other sensors out there with better performance/price ratios. I inquired about the above sensors with some space limitations in mind. If you have no space constraints in your project, I would advise you to go for older-generation products from Litton or Honeywell. (Though it is really a pain to buy anything from Honeywell. Most probably you won't even be able to get a list of their RLGs.)

Let us see what we have in our hands. The output of every inertial sensor consists of at least 4 components:

- Real acceleration/rotation rate: we can ensure these are 0 during the bias tests by properly aligning our sensors.
- Additive white noise: we can drive its effect arbitrarily close to 0 by computing sufficiently long averages rather than taking a single sample.
- Bias: this is what we try to estimate.
- Flicker noise: this is the noise that puts a lower limit on the Allan variance.

Every time you measure the sensor output, you must assume your readout is at least the sum of the bias and the flicker noise. This brings us to another important question: what is flicker noise? Can we get rid of it too by computing long averages of the sensor outputs?

There are plenty of papers out there investigating the physics behind flicker noise in semiconductors. It can be quite difficult to understand them thoroughly. However, as a navigation engineer you are at least supposed to know the following basic facts about flicker noise, even if you do not fully understand it:

- The power of the difference between two samples of the flicker noise (i.e., of y(T) − y(0)) increases with time in proportion to 2h·ln(T), where h is the flicker noise coefficient.
- The power of the time average of this difference also increases with time, in proportion to h·ln(T).

This second property is extremely important for understanding the concept of bias in low-cost inertial sensors. There are a significant number of people out there who are not even aware of this simple property, yet do not hesitate to write papers (and even books) on INS. Naturally, the texts written by such idiots contain lots of statements that directly contradict the above properties of flicker noise.

So, what should these properties mean to you? Let me list a couple of important consequences so that you do not repeat in your future papers the same mistakes those fools made.

- Property 1 dictates that the flicker noise itself is divergent: the value of the flicker noise can theoretically be anything between −infinity and +infinity.
- Property 2 dictates that the average of the flicker noise is also divergent: you cannot eliminate the flicker noise by averaging the sensor outputs, regardless of the averaging duration.
- (But we have to define a bias value; we cannot throw away an inertial sensor just because we cannot come up with a theoretically consistent bias estimate. We are engineers, not physicists.) Suppose we define the sensor bias as the average of the sensor outputs over T seconds. Property 2 then dictates that this bias value varies more as you increase T. In other words, if you plot time versus the average value (your bias estimate), you will see that your bias estimate changes more with time (and eventually diverges) instead of converging to a fixed point.

You must always remember this third result. It says that it is impossible to define a constant bias value for an inertial sensor, regardless of the averaging duration. In other words, the longer the averaging duration, the more variable the bias estimate you will observe. Do not be tricked by the fact that the Allan variance of flicker noise is constant regardless of the cluster time: although the Allan variance is constant, the power of the flicker noise average grows with the averaging duration. This is in fact quite an interesting result. As an example, suppose you compute 2 different bias values by averaging the sensor outputs for 5 minutes and for 1 hour, respectively. The variation (the power of change) of the 1-hour bias estimate will be bigger than that of the 5-minute estimate. However, this does not mean that one of these estimates is better than the other. They are just 2 different values which are equally valid (or equally wrong), because there is no constant value that you may call the bias. You cannot separate flicker noise from bias. You must assume that the bias itself evolves in time in an unpredictable manner.
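This behaviour is easy to reproduce numerically. The sketch below is only an illustration under an assumed noise model: it approximates flicker noise with the Voss-McCartney scheme (a standard 1/f approximation, not a model of any specific sensor) and shows that the power of the T-sample average of y(t) − y(0) keeps growing with T instead of settling:

```python
import numpy as np

def flicker(n_samples, levels=14, rng=None):
    """Approximate 1/f (flicker) noise with the Voss-McCartney scheme:
    a sum of white-noise generators, each held constant for 2**j samples."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.zeros(n_samples)
    for j in range(levels):
        hold = 2 ** j
        vals = rng.standard_normal(-(-n_samples // hold))  # ceil(n / hold) values
        out += np.repeat(vals, hold)[:n_samples]
    return out

# Power of the T-sample average of y(t) - y(0), estimated over many runs:
# the longer you average, the LARGER the spread of your "bias" estimate.
rng = np.random.default_rng(0)
trials, n = 300, 4096
runs = np.stack([flicker(n, rng=rng) for _ in range(trials)])
diffs = runs - runs[:, :1]                     # y(t) - y(0) in each run
var_of_avg = {T: float(np.var(diffs[:, :T].mean(axis=1)))
              for T in (64, 512, 4096)}
for T, v in var_of_avg.items():
    print(f"T = {T:5d}   variance of the T-sample average: {v:.2f}")
```

The printed variances grow roughly logarithmically in T, which is the numerical face of the h·ln(T) property above: averaging longer does not converge to "the" bias.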

As a result, there is no such thing as a perfect bias calibration. You cannot claim that the bias calibration value you compute now will still be valid some time later. The best bias estimate is the one computed most recently, not the one computed with the longest averaging duration. As a matter of fact, you should consider the bias to be the current state of the flicker noise. As the flicker noise of low-cost MEMS units is very dominant, representing the bias as the state of the flicker noise is a better mathematical model than a random constant.

The next time you see an "INS expert" claiming that he/she obtains better bias estimates by repeating the calibration tests over longer times (or with more positions), you can now easily conclude that he/she is another self-deceiving ignoramus you should stay away from.

Theoretically, it is impossible to detect zero-velocity instants by relying only on accelerometers and gyroscopes. Even if we had perfect, completely noise-free inertial sensors and used the magnitudes of gravity and the Earth rotation rate as our decision thresholds, our detection algorithm would not be able to differentiate a constant-velocity motion from a zero-velocity one.

However, because of its importance, especially for human motion tracking algorithms, we have to improvise some kind of zero-motion detection method. As long as these algorithms do not generate too many false positives (zero-motion detections during motion), we can still use them to diminish the accumulated velocity errors and, to a certain extent, to stabilize the roll/pitch errors.

In addition to this false-positive problem, zero-motion detectors also impose a big burden on the processing side. As I have described in previous blog entries, inertial sensors do not need very high data rates: as long as they provide angle and velocity increments, we can read their outputs at a very low rate without causing any algorithmic error. This is not the case for zero-motion detectors at all. Zero-motion detectors need as many samples as possible to reduce their false-positive probabilities. That is why we need MEMS sensor producers to embed such decision algorithms into their sensors, so that we do not have to deal with all the problems associated with high data rates.

Unfortunately, realizing this approach is not as easy as it may seem. First of all, what is a zero-motion detection algorithm? Has anyone ever answered this question? Has anyone ever been able to come up with an optimal detector? The answer is unfortunately no. As a matter of fact, the answer will always be "no", because theoretically it is impossible to find one. All we can do is adopt some ad-hoc method and hope for the best. In that case, what kind of algorithm can we expect MEMS producers to embed in their systems? Is such a thing possible at all?

Recently, I have been working on this issue. I tried to invent a method to be embedded in MEMS sensors such that it can be used by both ordinary and advanced users to detect zero-motion instants. Ordinary users only need an automatic decision result (regardless of its false-positive rate), whereas an advanced user would possibly require all the raw sensor data to generate his own decision. A practical algorithm should also be capable of helping the users in between these groups by providing only the amount of data required for a meaningful decision.

At first I tried to formulate the zero-motion detection problem in an H-infinity setting. For a robust detection algorithm, we have to assume that nature always tries to deceive us as much as possible. Therefore, I hoped I could come up with a meaningful two-player zero-sum game definition and then robustly estimate the worst-case motion. However, as with my previous PhD topic, I totally failed. (I had studied the application of H-infinity theory to navigation systems for 2 years before switching to my final PhD topic of redundant sensors and finally being able to graduate.)

Meanwhile, I realized one important fact about motion: an acceptable motion definition certainly requires a scale to be associated with the data. Both a spike and an almost constant value in the sensor outputs may hint at a motion. A spike cannot be detected if the data is analyzed at coarse scales (averaged signals), whereas a small, almost constant output level will be buried in the additive sensor white noise if the analysis is performed at a very fine scale (raw signal level).

As a result, I abandoned the H-infinity concept and turned my attention to multi-resolution signal techniques. In the end, I came up with reasonable (non-orthogonal) signal bases, each of which represents the change at a different scale within a given data window. Together with these bases, I also needed to derive some kind of threshold for the decision algorithm. I did not have to think about this part at all, because I already knew that the Allan variance coefficients are nothing but the Haar wavelet variances of the stationary signal. Therefore, the previously computed Allan variance coefficients contain all the threshold values I needed. Combining all these pieces, I finalized my zero-motion detector for low-cost MEMS units. Given a data window, my detector first computes the signal coefficients for each basis function and then compares these coefficients with the threshold values to determine whether a motion exists or not.
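A heavily simplified sketch of the same idea (my simplification, not the exact basis set described above): it computes Haar-style coefficients, i.e. normalized differences of adjacent block averages, at several scales, and flags motion whenever a coefficient exceeds a multiple of the stationary (Allan) deviation for that cluster length. The thresholds below assume a purely white-noise sensor, which is of course an idealization:

```python
import numpy as np

def haar_coeffs(x, scale):
    """Haar-style coefficients at one scale: normalized differences of
    adjacent block averages, each block `scale` samples long."""
    n = (len(x) // (2 * scale)) * 2 * scale
    means = x[:n].reshape(-1, scale).mean(axis=1)
    return (means[1::2] - means[0::2]) / np.sqrt(2.0)

def is_motion(window, sigma_allan, k=6.0):
    """Flag motion if, at any scale, some coefficient exceeds k times the
    Allan deviation of the stationary sensor at that cluster length."""
    return any(np.max(np.abs(haar_coeffs(window, s))) > k * sig
               for s, sig in sigma_allan.items())

# Thresholds for an assumed white-noise-only sensor: the Allan deviation
# at cluster length L is sigma / sqrt(L).
rng = np.random.default_rng(1)
sigma = 0.05
thresholds = {L: sigma / np.sqrt(L) for L in (1, 4, 16, 64)}

still = sigma * rng.standard_normal(512)       # stationary window
moving = still.copy()
moving[200:] += 0.5                            # sustained level shift = motion

print(is_motion(still, thresholds), is_motion(moving, thresholds))
```

With real sensor data you would plug in the measured Allan deviation per cluster length instead of the white-noise formula; the spike case is caught by the fine scales and the sustained shift by the coarse ones.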

In the following video, you can see the result of my zero-motion detector. I wrote a simple application to broadcast the inertial sensor data of my wannabe-dead HTC phone. I also coded simple Python scripts to process these data in real time with my zero-motion detector on the PC side. Finally, I output the result of the detector using a simple 2-color GUI: a red rectangle means a zero-motion instant and a green rectangle represents motion.

As you can see from the video, my motion detection algorithm is at least capable of reasonably detecting motion/no-motion instants. Although this video does not contain an example, my algorithm unavoidably suffers from false positives too. However, as the video shows, it can still be considered sufficient for simple human motion tracking applications.

The most important feature of my algorithm is not its zero-motion detection capability. Its power comes from the fact that it relies on multi-resolution signal analysis. Therefore, it can serve every user group equally well. If such an algorithm were embedded in an inertial sensor, the user could easily determine the amount of decision data to be delivered from the sensor. For instance, an average user can request the sensor to send only the 5 biggest coefficients, whereas an advanced user can request all the coefficients to reconstruct the original signal. In the above video, the decision is based on only the biggest coefficient, which is all an ordinary user would probably care about.

With this final invention of mine, I am one step closer to my ultimate smart front-end design for MEMS inertial sensors. I know one day I will be able to convince some MEMS producers to listen to my genuine ideas.

“…because of the singularity problem of Euler angles, quaternion representation is used in this study…”

It looks like quite a legitimate statement, doesn't it? It is indeed true that there is a singularity problem with Euler angles. However, in reality, this statement is nothing but a self-confession that the authors do not know the topic they intend to write a paper on.

Let’s see why this is the case.

The orientation between two coordinate systems is always uniquely represented by an orthogonal matrix. We call it the direction cosine matrix (DCM) in the navigation literature. Mathematicians, however, prefer to call such a matrix a member of SO(3) (the special orthogonal group; "special" denotes that we consider only orthogonal matrices with a determinant of +1).

The key fact about the DCM is that it is a unique representation: there is one and only one DCM between 2 coordinate frames.

On the other hand, the DCM is an over-parametrized representation. Therefore, in navigation algorithms we may prefer to use other representations with fewer elements, such as Euler angles, rotation vectors and quaternions.

Euler angles and rotation vectors consist of only 3 elements. However, they are not unique. In other words, between any two coordinate systems we can define at least 2 different sets of Euler angles and/or rotation vectors. Furthermore, there are certain orientations which can be represented by infinitely many different Euler angles/rotation vectors (e.g., the zero-degree rotation vector, or a 90-degree pitch for the RPY sequence).

In other words, neither Euler angles nor rotation vectors are unique: the transformation between 2 coordinate systems can be represented by more than one set of Euler angles/rotation vectors. This is what we mean by the singularity problem. Singularity does NOT mean that Euler angles/rotation vectors are undefined for certain orientations. We can always use them to represent any arbitrary orientation regardless of this so-called singularity problem.
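A quick numerical illustration of this non-uniqueness (my own example with arbitrary angles): for the aerospace yaw-pitch-roll sequence, the triples (ψ, θ, φ) and (ψ + π, π − θ, φ + π) produce one and the same DCM:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def dcm_zyx(yaw, pitch, roll):
    """DCM of the aerospace (yaw-pitch-roll) Euler sequence."""
    return Rz(yaw) @ Ry(pitch) @ Rx(roll)

# Two distinct Euler triples, one and the same orientation:
R1 = dcm_zyx(0.3, 0.5, 0.7)
R2 = dcm_zyx(0.3 + np.pi, np.pi - 0.5, 0.7 + np.pi)
print(np.allclose(R1, R2))   # True: the DCM is unique, the Euler angles are not
```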

Quaternions are nothing but a 4-element representation of the rotation vectors. The addition of this one extra element removes the singularity: a quaternion is defined for every orientation, and it is unique up to sign (strictly speaking, q and −q represent the same orientation, which is the only ambiguity left). Therefore, for practical purposes, given any 2 coordinate systems we can define essentially one quaternion between them.

As a result, in your next paper, if you want to state a reason for your choice of quaternions, you had better use the following sentence instead of the one above:

“…because of the singularity problem of the rotation vectors, quaternion representation is used in this study…”

At the end of the day, any experienced navigation system designer knows that the selection of the attitude mechanization is completely a matter of personal preference. I usually use different mechanizations in different projects just to keep my memory fresh. Therefore, it is best to avoid rookie statements like the one above completely.

Finally, if you decide to use quaternions, do not use 4 states to represent their errors. Small-angle attitude errors are always represented by 3 states; doing otherwise is another self-confession of ignorance. I am planning to elaborate on this topic in a later blog entry.

Selection of a proper ADC is by itself a serious topic. Essentially, ADCs can be categorized into 3 classes:

- Sequential ADCs
- Flash ADCs
- Sigma-Delta ADCs (Over Sampled ADCs)

(For a complete review of ADC technology, I suggest everyone read Analog Devices' ADC handbook.) Sequential and flash ADCs are fast but have low resolution. In order to sample a gyroscope with a resolution of 24 bits or more, a sigma-delta ADC (ΣΔ ADC) has to be used.

ΣΔ converters internally sample the analog input using a 1-bit ADC at a very high rate (that is why they are also called over-sampled ADCs). After this 1-bit sampling operation, the 1-bit samples are processed with a digital filter (and downsampled) to generate high-resolution samples of the analog input. The digital filter used in a ΣΔ ADC has a low-pass characteristic; therefore, ΣΔ ADCs cannot be used for high-bandwidth signals.

High-speed, high-resolution ΣΔ ADCs with multiple simultaneous channels can be very expensive (they can cost more than $200). Therefore, IMU designers who follow this traditional approach find themselves having to compromise between speed and resolution.

Having summarized the traditional approach, I can start attacking and blaming everyone who blindly follows this approach.

The main point these people fail to understand is that we never need the instantaneous acceleration and rotation rates to solve the INS equations. All we need is the integrals of these quantities (a.k.a. the angle and velocity increments). In that case, why don't we integrate the signal with an analog front-end and then sample only the integral value? Or, even better, why don't we use a front-end which integrates and samples the rate signal at the same time, so that we don't have to bother with integrating the signal (in either the digital or the analog domain) at all?

Here is a block diagram of such a circuit, which can be used to integrate and digitize analog signals. It is in fact nothing but the simplest form of a ΣΔ modulator. Despite its simplicity, it accomplishes exactly what we need.

The first summation junction is used to add the sensor outputs when more than one inertial sensor is used per axis (i.e., orthogonal redundant configurations). Redundancy is a life saver; I strongly recommend using it even if only one sensor per axis is installed. The feedback loop is a sigma-delta modulator. The 1-bit ADC is essentially a comparator, and the feedback loop stabilizes the integrator output around 0. Without such a feedback loop, the capacitor would not operate in its linear region. The 1-bit DAC generates either Vmax or 0 (we assume that the sensor outputs vary only in the range [0, Vmax]). The clock of the D flip-flop determines the time support of each rectangular pulse generated by the 1-bit DAC. When the output of the latch (D flip-flop) is high, the counter increments. Thus, the counter output is equal to the integral of the feedback signal, and therefore approximates the integral of the input. Thanks to the feedback loop, the maximum error of this integral is dt·Vmax, where dt is the period of the clock; hence, the faster the clock, the better the approximation. Regardless of the total integration time, the error is bounded by this value. (In rate-sampling systems, the integration error keeps increasing with time without bound. That is why we need a very high bandwidth to reduce the integration errors in rate-sampling structures.) Even with a moderate clock (e.g., 1 MHz), this error becomes completely negligible with respect to any other error source. Therefore, this integral sampling method can safely be assumed errorless.
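The bounded-error claim is easy to check in simulation. The sketch below is an idealized model of the loop above (no comparator offset, perfect 1-bit DAC, assumed 1 MHz clock and 5 V full scale): the counter-based estimate of the integral never deviates from the true integral by more than dt·Vmax:

```python
import numpy as np

def sd_integrate(u, dt, vmax):
    """Idealized first-order sigma-delta front end. The counter output,
    rescaled by vmax*dt, approximates the integral of u with an error
    bounded by dt*vmax, no matter how long the integration runs."""
    s, count = 0.0, 0
    for uk in u:
        bit = 1 if s > 0.0 else 0          # comparator + D flip-flop
        count += bit                       # counter accumulates feedback pulses
        s += dt * (uk - bit * vmax)        # analog integrator of the residual
    return count * vmax * dt

dt, vmax = 1e-6, 5.0                       # assumed 1 MHz clock, 5 V full scale
t = np.arange(0.0, 0.1, dt)
u = 2.5 + 1.5 * np.sin(2 * np.pi * 10 * t)   # a rate signal inside [0, vmax]

est = sd_integrate(u, dt, vmax)
ref = float(np.sum(u) * dt)                # discrete reference integral
print(abs(est - ref) <= dt * vmax)         # the error never exceeds one quantum
```

The bound holds because the feedback keeps the integrator state within one clock quantum; the residual error is exactly the final integrator state, regardless of how long the integration runs.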

The following figure shows a very primitive circuit which realizes the above block diagram. It assumes that the analog signal varies between 0 and Vmax. An analog designer would probably laugh at the simplicity of this circuit (no op-amp bias current compensation, no voltage isolation, etc.). But still, I suppose it will work as intended.

As a result, the approach described above has 2 distinct advantages which make it superior to any ADC-based alternative:

- The quantization errors on the angle and velocity increments computed with this circuit are negligible.
- A rate sensor must be sampled very fast in order to minimize the errors on the integrals. The above circuit computes the integrals in the analog domain, so you only need to read the counter output whenever you need the angle/velocity increment values. Therefore, it provides great flexibility in the timing requirements.

In my navigation projects, I have started to use the STM32F4 Discovery board as a basic prototyping board. It is STM's microcontroller evaluation board and costs only $19. Despite its cheap price, it has everything needed to immediately start interfacing with inertial sensors. It not only provides pin-outs for the microcontroller, but also has an on-board debugger/programmer (ST-Link/V2), so embedded code can be uploaded (and debugged) directly via the USB port without an additional JTAG module.

If you have had any previous interest in hobby electronics, you have probably heard of Arduino boards. An Arduino can also be considered a fast prototyping board, and due to its immense popularity among enthusiasts, there is a significant amount of sample code on the internet. However, compared with the STM32F4's capabilities, Arduino boards are simple toys. Therefore, I strongly suggest you not waste your time with them.

The biggest problem with the STM32 is that there are not enough code examples on the internet. Therefore, it sometimes becomes quite a burden to read pages of the reference manual to understand the function of certain registers, or to learn the efficient way of using a peripheral. There is some sample code for the STM32F1; however, it cannot be readily used on the STM32F4 due to the differences between the F1 and F4 architectures. (By the way, the STM32F4 is based on the ARM Cortex-M4, which is the newest pseudo-DSP family of ARM microcontrollers.)

Currently, STM is the only manufacturer producing ARM Cortex-M4 based microcontrollers, though NXP is also going to start selling its own Cortex-M4 family very soon. In fact, NXP has a bigger user community and sample code base than STM, and its application notes are more explanatory. Therefore, I think NXP may be a better choice for beginners. On the other hand, the STM32 has more peripherals than the LPC (NXP), which makes the STM32 more favorable, at least for me.

I have uploaded two STM32 sample projects for the I2C and SDIO peripherals. These peripherals are essential for IMU designers, as they are required to communicate with the sensors and then to record the sensor data to SD cards. You can download the sample code from the GitHub page (here).

The STM32's I2C interface is a little bit problematic. One has to write different functions depending on how many read/write transactions are going to be performed. In order to stretch the clock properly or to send a NACK/ACK at the correct time, you have to read and learn every function of all the registers. There is a considerable difference between reading a single byte and reading multiple bytes. In the sample code, I added plenty of comments so that you can see how a proper read/write sequence can be performed with the I2C peripheral, without using the DMA, for different numbers of bytes.

The STM32 uses the 6-wire SDIO interface to communicate with SD cards. SD cards themselves are another pain to learn and use. Because it is a proprietary technology, there is no sufficient reference on the internet. Therefore, when your embedded code does not function as intended, it becomes hard to tell whether it is the code that has bugs or it is you who has not understood the correct command/response sequence.

Most SD cards also support an SPI interface, but the SPI interface uses the same command and response protocol. Moreover, as SPI uses the same line for data and command transmission, it becomes almost impossible to recover from an error. Therefore, I suggest using SDIO rather than SPI for SD cards, regardless of the data rate requirements.

The example codes that I uploaded have functions to perform single/multiple block write operations using the DMA. Again, I included plenty of comments describing not only the code but also the overall SD communication protocol. Therefore, I believe it will be quite useful for anyone who wants to record inertial sensor data to SD cards for post-processing.

The most problematic point about programming SD cards is that they arbitrarily switch themselves into the so-called busy mode from time to time. During these busy periods, SD cards do not respond to anything other than certain commands (in SPI mode, as the data line is the same as the command line, they do not respond to anything at all). Therefore, without using the DMA, it is almost impossible to use SD cards in projects with strict timing requirements (such as periodic IMU sampling). The STM32's SDIO data-path logic takes care of these busy modes without the intervention of the microcontroller when the DMA is used. (However, as I indicated in the comments of the sample codes, one has to check the busy state manually for the command logic.) In single block write operations, the SD card puts itself into busy mode after every block. That is why you should prefer multi-block writes with an internal buffer structure if you want to use SD cards as fast as possible.
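To make the buffering idea concrete, here is a minimal, hardware-free sketch in Python (the 512-byte block size is standard, but the batch size and the `BlockBuffer` name are my own illustration, not part of the uploaded sample code):

```python
from collections import deque

BLOCK = 512   # SD card block size in bytes
BATCH = 8     # blocks flushed per multi-block write (assumed value)

class BlockBuffer:
    """Accumulate sensor samples into 512-byte blocks and flush several
    blocks per write, so the card enters its busy state once per batch
    instead of once per block."""

    def __init__(self):
        self.current = bytearray()  # partially filled block
        self.ready = deque()        # complete blocks waiting to be written
        self.writes = []            # log: number of blocks in each write

    def push(self, sample: bytes):
        self.current += sample
        # carve off complete 512-byte blocks
        while len(self.current) >= BLOCK:
            self.ready.append(bytes(self.current[:BLOCK]))
            del self.current[:BLOCK]
        if len(self.ready) >= BATCH:
            self.flush()

    def flush(self):
        if self.ready:
            # stand-in for a CMD25 multi-block write via the DMA
            self.writes.append(len(self.ready))
            self.ready.clear()
```

In firmware, the flush step would issue one multi-block write (CMD25) through the DMA, which is exactly why the card only goes busy once per batch.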

One problem with the STM32F4 Discovery board is that another integrated circuit is connected to the microcontroller pins needed for the SDIO. Therefore, before using the SDIO interface, you need to remove that component with a hot air gun. Otherwise it will pull the lines high and block all communication.

Here is an example. The following is the Allan variance of the IMU3000:

It looks quite normal, doesn’t it?

Well then. Let’s play with it a little bit by downsampling the GyroY data as follows:

- gdat5=GyroY(1:5:end);
- gdat10=GyroY(1:10:end);
- gdat15=GyroY(1:15:end);
- gdat20=GyroY(1:20:end);

Just simple downsampling. And now, let's compute the Allan variance of each of these downsampled signals. In the following figure, you can see the computed Allan variances for the downsampled versions of GyroY (together with the original one).
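If you do not have the IMU3000 logs at hand, the same effect can be reproduced with synthetic data. Here is a minimal sketch (the noise level, sinusoid amplitude and frequency are made up for illustration; they are not IMU3000 values):

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance for a cluster size of m samples."""
    n = len(x) // m                              # number of complete clusters
    y = x[:n * m].reshape(n, m).mean(axis=1)     # cluster averages
    return 0.5 * np.mean(np.diff(y) ** 2)

rng = np.random.default_rng(0)
t = np.arange(1_000_000)
# stand-in for GyroY: white noise plus a small additive sinusoidal error
gyro = rng.standard_normal(t.size) + 0.2 * np.sin(2 * np.pi * 0.1003 * t)

gdat10 = gyro[::10]   # same downsampling as gdat10 above

# For pure white noise AVAR(m) ~ sigma^2/m.  After downsampling, the
# sinusoid aliases to a much lower frequency, and the gdat10 curve
# bulges above the white-noise line at moderate cluster sizes.
m_list = [1, 4, 16, 64, 128, 256]
avar_orig = [allan_variance(gyro, m) for m in m_list]
avar_ds = [allan_variance(gdat10, m) for m in m_list]
```

With these (made-up) numbers the sinusoid is buried under the white noise in the original curve, but clearly visible as a bulge in the downsampled one, just as in the figure.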

What a striking result! The Allan variances of gdat10, gdat15 and gdat20 (the red, magenta and black curves) have a very strange additional bulge on their initial segments. Where does it come from? What happened?

Let me explain what you have just witnessed. When we downsample the data, we change the apparent frequency of the additive sinusoidal error (remember the aliasing effect of the downsampling operation on sine/cosine functions). The downsampling shifted the frequency of these sinusoidal components to the lower part of the signal spectrum, where the Allan variance coefficients are computed with a smaller frequency window (in other words, with a larger time window/cluster). The power of the white noise within that smaller window is obviously less. As the windows get smaller, the power of the white noise drops below the power of the sinusoidal components, and from that point on we can see the effect of the sinusoidal components in the Allan variance figures. Of course, this effect is also bounded from above: for very large cluster lengths (very narrow frequency windows) the sine function itself is filtered out.
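The frequency shift itself is easy to verify numerically. A small sketch (the normalized frequency and the downsampling factor are arbitrary choices for illustration):

```python
import numpy as np

f0, M = 0.1003, 10              # normalized frequency, downsampling factor
n = np.arange(20_000)
s = np.sin(2 * np.pi * f0 * n)
sd = s[::M]                     # downsampled sinusoid

def peak_freq(x):
    """Frequency (cycles/sample) of the largest non-DC FFT bin."""
    k = np.argmax(np.abs(np.fft.rfft(x))[1:]) + 1
    return k / len(x)

# f0 * M = 1.003 folds back to an apparent frequency of 0.003 cycles
# per (downsampled) sample, i.e. to the low end of the spectrum.
```

So a component sitting near the middle of the original spectrum can reappear close to DC after downsampling, which is exactly where the long-cluster Allan variance coefficients live.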

As you can see from the figure above, the IMU3000 has a serious additive sinusoidal noise problem. An inexperienced navigation engineer can easily miss this fact if he/she just trusts plain Allan variance figures (because that is what the ignorant usually advise them to do).

In fact, one should always use correlation-based analysis techniques together with the Allan variance figures in order not to miss such important error components. As an example, you can see the correlation of the GyroY output in the following figure.

The figure speaks for itself. We can immediately see that the correlation is a sinusoidal function.

To tell the truth, I first computed this correlation and only then observed that we have an additive sinusoidal component problem with this sensor. I then came up with this downsampling idea in order to be able to see it in the Allan variance figures. (Yes, it is my original invention. But feel free to use it whenever you want to annoy the willfully ignorant.)

Unfortunately, the autocorrelation computation for inertial sensors is not simple. If you just compute the autocorrelation with MATLAB's "xcorr" function, you won't see anything other than a triangular waveform. One should subtract the "approximate" (deterministic) part of the signal before computing the autocorrelation. (The modelling section of the toolkit should give you some idea about how to do this properly.) However, as this example suggests, the results are usually worth the additional effort spent on the correlation computation.
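As a rough illustration of the idea (not the toolkit's actual routine), here is a sketch that subtracts a fitted low-order trend before computing the autocorrelation of a synthetic gyro signal:

```python
import numpy as np

def autocorr(x, max_lag):
    """Biased sample autocorrelation estimate up to max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
t = np.arange(50_000)
# synthetic gyro output: deterministic drift + sinusoidal error + white noise
gyro = 2e-4 * t + 0.5 * np.sin(2 * np.pi * t / 500) + rng.standard_normal(t.size)

# Subtract a fitted low-order trend (the "approximate" signal) first;
# otherwise the slowly varying drift dominates the estimate and the
# correlation looks like a featureless triangular/decaying waveform.
trend = np.polyval(np.polyfit(t, gyro, 2), t)
r = autocorr(gyro - trend, 1000)
# r now oscillates with the 500-sample period of the sinusoidal error
```

The oscillating residual correlation is the signature of the additive sinusoidal component, which is exactly what the GyroY figure shows.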

During my search for a better microcontroller, I was glad to notice that microcontroller manufacturers have seriously started looking for an effective way to add inertial sensors to their standard interfaces.

It seems Atmel is currently leading in this field. They have already included a sensor library in their embedded framework. I quickly reviewed their code. In addition to standardizing the inertial sensor interface, they also try to introduce some high-level sensor calibration interfaces. It seems they expect the sensor manufacturers to develop their own code compatible with Atmel's high-level library; InvenSense has already done that. Although I think that having a calibration interface in an embedded library is quite unnecessary, I do find their efforts worthy of praise.

STM has also done some work in the inertial (pseudo-navigation) field. It seems a small group is trying to develop some basic navigation functions for ST. The Discovery board for their latest STM32F4 microcontroller has a 3-axis accelerometer on it, and their "hello world" application uses it to turn the LEDs on and off. I guess they are willing to promote their cheap (and also quite useless) inertial products via their famous STM32 family of microcontrollers. By the way, the new STM32F4 family has floating point support, which is great for any embedded navigation application. (They send free STM32F4 Discovery kits to Canada and the USA. If you are planning to deal with embedded programming any time soon, it can be a good opportunity to get a free development board from ST.)

There is no news on the NXP side. Perhaps someday they will also consider adding some inertial flavor to their microcontrollers.
