Several months ago a coworker referred me to the University of Southern Mississippi [Marine Research Center](https://www.usm.edu/ocean-enterprise/marine-research-center.php). The center was looking for someone with a physics and data science background to help develop a magnetic sensing platform for an autonomous underwater vehicle.
The work began with processing data acquired from high-quality magnetic sensors, including magnetometers from [QUSPIN](https://quspin.com/), to detect magnetic objects of interest. This kind of magnetic sensing is a difficult problem due to a combination of factors:
1. Magnetic fields from compact sources fall off as the inverse cube of the distance (rather than the inverse square law we all know from light and other radiating fields). This makes the problem much more challenging, since the signals we are looking for are TINY compared to the noise in the data once we get any significant distance from the source (see the sketch after this list).
2. The Earth's magnetic field, while weak compared to many magnets we use in everyday life, dominates the measurement: because its source is so large, it falls off far more slowly than the field of a small target.
3. The targets of interest are passive ferromagnetic objects, meaning their fields arise from being magnetized by the Earth's field. These induced fields are very weak compared to the Earth's field itself.
4. The sensing platform is moving, so the vehicle's orientation, and with it the vector magnetometer readings, changes rapidly. Imbalances between the vector components, caused by sensor imperfections and by the magnetic properties of the vehicle itself, make this motion difficult to filter out.
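To get a feel for point 1, here is a minimal back-of-the-envelope sketch. It evaluates the on-axis field of a magnetic dipole, $B = \frac{\mu_0}{4\pi}\frac{2m}{r^3}$, for a hypothetical dipole moment (the moment, distances, and Earth-field value are illustrative assumptions, not measurements from the project):

```python
import numpy as np

# Rough illustration: the on-axis field of a magnetic dipole falls off as 1/r^3.
MU0_OVER_4PI = 1e-7          # T*m/A
EARTH_FIELD_NT = 50_000.0    # typical total field magnitude (~25,000-65,000 nT)

m = 1_000.0                  # assumed dipole moment in A*m^2 (hypothetical target)
for r in [5.0, 10.0, 20.0, 50.0]:                 # distance in meters
    b_nt = MU0_OVER_4PI * 2 * m / r**3 * 1e9      # anomaly converted to nT
    print(f"r = {r:4.0f} m  anomaly = {b_nt:9.3f} nT  "
          f"({b_nt / EARTH_FIELD_NT:.1e} of Earth's field)")
```

At 5 m the anomaly is on the order of a thousand nT, but by 50 m it has dropped to roughly a nanotesla, a few parts in 100,000 of the background field.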
Before we can even attempt to detect targets, we need to transform measurements from the vehicle's reference frame into the Earth's reference frame. This is a non-trivial problem: the vehicle is free to rotate about its vertical axis (yaw), it pitches and rolls with surface conditions, and underwater it can move and orient itself in virtually any direction. On top of the rapidly changing orientation, the Earth's magnetic field is itself complex and varies from location to location on the surface of the Earth.
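The frame conversion itself is just a rotation once the attitude is known. Here is a small sketch using SciPy; the attitude angles and magnetometer values are made up for illustration and are not data from the vehicle:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical example: rotate a body-frame magnetometer reading into the
# Earth (NED) frame given the vehicle's roll, pitch, and yaw from the AHRS.
roll, pitch, yaw = np.radians([5.0, -2.0, 130.0])    # assumed attitude

# Body-to-NED rotation from intrinsic z-y-x (yaw, pitch, roll) Euler angles.
body_to_ned = R.from_euler("ZYX", [yaw, pitch, roll])

mag_body = np.array([21_300.0, -3_800.0, 44_900.0])  # made-up reading in nT
mag_ned = body_to_ned.apply(mag_body)                # same vector, Earth frame
print("North, East, Down components (nT):", mag_ned)
```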
)")
While the Earth's magnetic field is complex at the global scale, many people assume it is constant at the local scale. This is not the case: local variations can be large enough to throw off the vehicle's orientation estimate. On a surface vehicle this can be mitigated by using GPS to correct for long-term drift in the compass / magnetic measurements, but underwater it becomes much more complicated without a detailed map of local magnetic anomalies. Designing a mission-planning method that corrects for these local variations is work for the near future; in short, it involves calibrating against surface data and then using that calibration to improve underwater navigation.
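As a rough sketch of what that surface calibration could look like (this is an assumption about the approach, not the project's actual procedure, and the function, sign conventions, and numbers are all illustrative): compare GPS course over ground to the levelled magnetic heading while the vehicle is at the surface, estimate the local offset (declination plus any anomaly), and reuse it underwater.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def mag_heading(mag_body, roll, pitch):
    """Magnetic heading (radians) from a body-frame reading, levelled by roll/pitch."""
    # Undo roll and pitch (known from the IMU) so the horizontal components are level.
    level = R.from_euler("YX", [pitch, roll]).apply(mag_body)
    # With an NED convention, heading is the angle of the horizontal field from north.
    return np.arctan2(-level[1], level[0])

# Hypothetical surface calibration: while GPS course over ground is trustworthy,
# average the difference between GPS course and magnetic heading to estimate the
# local offset, then reuse that offset underwater.
gps_course = np.radians([92.0, 91.5, 92.4])   # made-up surface fixes
mag_course = np.radians([88.1, 87.9, 88.6])   # headings from mag_heading() at the surface
local_offset = np.mean(gps_course - mag_course)

underwater = mag_heading([21_300.0, -3_800.0, 44_900.0], roll=0.05, pitch=-0.02)
print(f"corrected heading: {np.degrees(underwater + local_offset) % 360:.1f} deg")
```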
The sensor fusion process takes the best aspects of each sensor and combines them into the most accurate estimate of the vehicle's orientation and position. Gyroscopes measure short-term rotations very accurately but drift over time, so we use the magnetometer to correct that drift. Accelerometers capture quick movements and measure long-term orientation relative to gravity, while GPS provides long-term position and movement data. Magnetometers are immune to the errors that quick movements, especially in surface waves, introduce into accelerometers, so they provide a very accurate measurement of the vehicle's orientation, but they are subject to small errors from local magnetic anomalies and from magnetic interference from the vehicle itself. Work is underway to integrate Doppler velocity data from a DVL sensor to improve the vehicle's position estimate and minimize drift from the GPS reference during extended underwater missions.
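The actual fusion pipeline is platform-specific (more on that below), but a toy complementary filter captures the core idea of blending a fast, drifting sensor with a slow, stable one. The function, gain, and sample values here are illustrative assumptions, not the filter used on the vehicle:

```python
def complementary_update(angle, gyro_rate, angle_from_accel, dt, alpha=0.98):
    """One step of a simple complementary filter (illustrative only).

    The gyro integrates quick rotations accurately but drifts; the accelerometer
    (or magnetometer, for heading) is noisy short-term but stable long-term.
    Blending the two keeps the strengths of each.
    """
    predicted = angle + gyro_rate * dt            # fast but drifting estimate
    return alpha * predicted + (1 - alpha) * angle_from_accel

# Toy usage with made-up samples: pitch angle fused at 100 Hz.
pitch = 0.0
for gyro_rate, accel_pitch in [(0.02, 0.001), (0.018, 0.002), (0.021, 0.0)]:
    pitch = complementary_update(pitch, gyro_rate, accel_pitch, dt=0.01)
print(f"fused pitch estimate: {pitch:.5f} rad")
```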
The exact method used to fuse the data is highly platform-specific and needs to be tuned to the physical characteristics of the vehicle and its sensors. The diagram below shows the general process, but the implementation details will not be discussed here.
Once the vehicle knows its own position and orientation through sensor fusion, we can use that information to convert the locations of detected objects near the vehicle into the Earth's reference frame. This allows us to combine measurements from multiple passes over the same area, from multiple vehicles, or even from different days.
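That georeferencing step is a rotation plus a translation using the fused pose. A minimal sketch, with the pose and detection offset made up for illustration:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical georeferencing: given the fused vehicle pose (position in a local
# Earth-fixed NED frame plus attitude), place a detection made in the body frame
# into Earth coordinates so passes and vehicles can be combined.
vehicle_pos_ned = np.array([1250.0, -430.0, 12.0])          # meters, made-up fix
vehicle_attitude = R.from_euler("ZYX", np.radians([130.0, -2.0, 5.0]))

detection_body = np.array([8.0, -1.5, 6.0])                 # offset seen by the sensor
detection_ned = vehicle_pos_ned + vehicle_attitude.apply(detection_body)
print("target position (NED, m):", detection_ned)
```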