Over the last decade, we have seen a huge influx of wearable technologies, from wristwatches to dedicated wearable sensors, creating vast amounts of data on movement. When we talk about movement, we are essentially referring to motion, and we need ways both to describe it and, more importantly, to measure it.
Motion has many components, including position, orientation, velocity and acceleration, all of which are related to one another. For example, velocity describes how quickly position changes over time, and acceleration describes how quickly velocity changes. Each of these quantities can be measured as either a magnitude alone (a scalar) or a magnitude with direction (a vector). The mathematics itself is beyond the scope of this article; the aim here is simply to paint a picture of how these quantities relate to one another (I don't want to scare you all off).
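To make the relationship concrete, here is a minimal sketch using made-up numbers: position samples are simulated under a constant acceleration, and finite differences recover velocity and acceleration from them. The 100 Hz sampling rate and the 3 m/s² value are illustrative assumptions, not real sensor data.

```python
import numpy as np

# Hypothetical 1-D position samples (metres), simulated at 100 Hz under
# a constant 3 m/s^2 acceleration; the values here are made up.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
position = 0.5 * 3.0 * t**2

# Velocity is the rate of change of position, and acceleration is the
# rate of change of velocity; finite differences recover each in turn.
velocity = np.gradient(position, dt)
acceleration = np.gradient(velocity, dt)
```

Away from the edges of the recording, the recovered acceleration sits right at the simulated 3 m/s², which is exactly the chain of relationships described above.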
As technology has improved, so has our ability to measure motion. Recent years have given rise to the inertial sensor, or inertial measurement unit (IMU). An IMU consists of an accelerometer, a gyroscope and a magnetometer, each a Micro-Electro-Mechanical System (MEMS). Briefly: the accelerometer measures linear acceleration (including gravity), the gyroscope measures angular velocity (rate of turn), and the magnetometer measures the local magnetic field, acting much like a compass.
These individual components are put together to form an IMU. By combining the data from the sensors through a process called sensor fusion, we can estimate the orientation of the IMU (Figure 1). An orientation describes how something is positioned in space. So what does this all mean, and why is it relevant?
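As a taste of what sensor fusion involves, here is a sketch of a complementary filter, one of the simplest fusion schemes, applied to a single angle. This is not the algorithm Xsens MVN uses (commercial systems use far more sophisticated filters); the signals below are simulated, with a deliberately biased gyroscope and a noisy accelerometer.

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one angle estimate."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Integrate the gyro for short-term accuracy, then nudge the
        # result toward the accelerometer angle to cancel long-term drift.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        estimates.append(angle)
    return np.array(estimates)

# Made-up data: the sensor is held still at 0.5 rad; the gyro reports a
# small constant bias, the accelerometer a noisy but drift-free angle.
rng = np.random.default_rng(0)
n, dt = 1000, 0.01
gyro = np.full(n, 0.01)
accel = 0.5 + rng.normal(0.0, 0.05, n)
estimates = complementary_filter(gyro, accel, dt)
```

Integrating the gyro alone would drift steadily because of the bias, while the accelerometer alone would be noisy; blending the two gives an estimate that is both smooth and stable, which is the core idea behind orientation estimation in an IMU.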
Figure 1: Sensor orientation, a description of how it is positioned in space (image from wolfram.com)
Knowing the orientation and motion of a sensor gives us valuable information if the sensor represents a segment of the human body. This is precisely what we do: we place these sensors on a segment of the body, where each then represents the movement of the underlying bone (Figure 2). This alone provides information on the speed, acceleration and rate of turn of the segment. But we can take it a step further by using a collection of sensors in an inertial motion capture system.
Figure 2: A sensor placed on the arm which can then represent the motion of the underlying bone.
An excellent example of an inertial motion capture system is Xsens MVN. Xsens MVN uses 17 of these sensors placed over the body's main segments. The data is applied to an underlying anatomical model. First, the model is scaled based on the person's height and segment lengths (Figure 3).
Figure 3: Measuring a person's dimensions in order to scale an underlying anatomical model
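A sketch of what such scaling could look like: segment lengths are estimated as fixed fractions of body height. The ratios below are rough textbook anthropometric estimates, purely for illustration; they are not the ratios or the method Xsens MVN actually uses, and a real system would also incorporate the measured segment lengths.

```python
# Illustrative anthropometric ratios (approximate textbook values,
# not those used by any particular motion capture system).
SEGMENT_RATIOS = {
    "upper_arm": 0.186,
    "forearm": 0.146,
    "thigh": 0.245,
    "shank": 0.246,
}

def scale_model(height_m):
    """Return estimated segment lengths (metres) for a given body height."""
    return {seg: round(r * height_m, 3) for seg, r in SEGMENT_RATIOS.items()}

lengths = scale_model(1.80)
```

For a 1.80 m person this yields, for example, a thigh length of roughly 0.44 m, giving the anatomical model proportions that match the wearer before tracking begins.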
Following this scaling process, the person is placed in a known pose, such as an N-pose or T-pose (Figure 4), essentially aligning all of the segments in the model with how the person is positioned at that point in time. This is called sensor-to-segment calibration, and it is one of the most important steps in ensuring accurate motion tracking.
Figure 4: N-pose and T-pose calibration to align the model's segments with those of the person to be measured (image from Schepers et al. 2018)
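The idea behind sensor-to-segment calibration can be sketched with rotation matrices. During the known pose the segment orientation is assumed (identity for a perfect pose), so the sensor reading at that instant reveals the constant mounting offset between sensor and bone, which can then be removed from every later reading. This is a simplified illustration of the principle, not the actual Xsens MVN procedure; all function names and the scenario below are hypothetical.

```python
import numpy as np

def rot_z(angle):
    """Rotation matrix about the vertical axis (helper for the example)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def sensor_to_segment_offset(R_sensor, R_segment):
    """Constant sensor-to-segment offset, computed once during the
    calibration pose when the segment orientation is assumed known."""
    return R_segment.T @ R_sensor

def segment_orientation(R_sensor, R_offset):
    """Recover the segment orientation from a later sensor reading."""
    return R_sensor @ R_offset.T

# Made-up scenario: the sensor is strapped on rotated 0.3 rad relative
# to the bone. During calibration the segment is at identity, so the
# sensor reading is exactly that mounting offset.
R_mount = rot_z(0.3)
offset = sensor_to_segment_offset(R_mount, np.eye(3))

# Later the segment turns by 0.8 rad; the sensor reading includes both
# the segment motion and the mounting offset, which we now remove.
R_later = rot_z(0.8) @ R_mount
R_segment_now = segment_orientation(R_later, offset)
```

Because the sensor is rigidly attached, the offset stays constant after calibration, which is why holding an accurate pose at that one moment matters so much for everything tracked afterwards.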
This now enables full-body motion tracking of a human, from which all of the segments can be analysed. It supports a wide variety of applications, ranging from sport to clinical and workplace settings. Figure 5 shows an example of an athlete during a change of direction task, which enables coaches to measure and analyse the movement in three dimensions, rather than just from two-dimensional video footage.
Figure 5: Analysis of an athlete during a change of direction task
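One simple metric such 3-D data makes possible is the sharpness of a cut: the angle between the approach and exit directions of a body segment. The helper below is hypothetical (not part of any motion capture toolkit), and the trajectory is made up to show the idea.

```python
import numpy as np

def cutting_angle(positions, i, window=10):
    """Angle in degrees between the approach and exit directions around
    sample i, given an (n, 3) array of 3-D positions."""
    approach = positions[i] - positions[i - window]
    exit_dir = positions[i + window] - positions[i]
    cos_a = np.dot(approach, exit_dir) / (
        np.linalg.norm(approach) * np.linalg.norm(exit_dir))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Made-up trajectory: 20 samples running along x, then a sharp
# 90-degree cut into the y direction.
path = np.array([[float(k), 0.0, 0.0] for k in range(20)]
                + [[19.0, float(k), 0.0] for k in range(1, 21)])
angle = cutting_angle(path, 19)
```

With 2-D video such an angle can only be approximated from one camera view; with full 3-D segment positions it can be computed directly, whichever direction the athlete cuts.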
Being able to measure human motion is important for establishing the characteristics of movement, from which interventions can be developed. Moreover, once an intervention is in place, measurement can be repeated afterward to confirm that the desired improvement has taken effect.