
Extended Kalman Filter Implementation for Mobile Robot Localization
I developed and implemented an Extended Kalman Filter (EKF) in ROS 2 to estimate the 2D position and orientation of a mobile robot. The filter fuses the robot's motion model with range and bearing measurements to landmarks, producing an accurate pose estimate in real time.
"Fundamentally, the role of the Kalman Filter is to “merge” information from multiple sources (our own model, various sensors), based on the confidence in each of these sources (expressed as a covariance matrix) and to give us its best guess of the state, as well as a measure of confidence in this guess.” --- Matei Ciocarlie
The central problem in mobile robot state estimation is that the dynamic model is not perfectly accurate. In this project, I assume the error between the model's predicted pose and the robot's true pose is zero-mean white noise with a known covariance.
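As a concrete illustration of this assumption, a unicycle-style motion model with additive Gaussian process noise could be sketched as below. The state layout [x, y, theta], the control inputs [v, omega], and the numeric covariance values are my own illustrative choices, not values taken from the project.

```python
import numpy as np

def motion_model(state, control, dt):
    """Sketch of a unicycle motion model: state = [x, y, theta], control = [v, omega]."""
    x, y, theta = state
    v, omega = control
    return np.array([
        x + v * np.cos(theta) * dt,   # new x position
        y + v * np.sin(theta) * dt,   # new y position
        theta + omega * dt,           # new heading
    ])

# The true pose is assumed to equal this prediction plus zero-mean white noise
# w ~ N(0, Q), where Q is the known process noise covariance.
Q = np.diag([0.01, 0.01, 0.005])  # illustrative values, not from the project
```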
The robot can localize itself with respect to a number of landmarks in the scene. Whenever a landmark is within range of the robot's sensors, the robot records its distance to the landmark as well as the bearing of the landmark relative to the robot's heading. At each time step, the robot publishes a range and bearing for every landmark it can currently see, along with its velocity commands. Due to sensor noise, these measurements are not perfectly accurate; I model this measurement noise as zero-mean white noise with a given covariance.
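For illustration, the range-bearing observation of a single landmark could be modeled as in the sketch below; the pose layout, the landmark representation, and the covariance values are assumptions for the example, not the project's actual code.

```python
import numpy as np

def measurement_model(state, landmark):
    """Predicted range and bearing from the robot pose [x, y, theta] to a landmark [lx, ly]."""
    x, y, theta = state
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    rng = np.hypot(dx, dy)                                   # distance to the landmark
    bearing = np.arctan2(dy, dx) - theta                     # angle relative to the robot's heading
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))   # wrap to [-pi, pi]
    return np.array([rng, bearing])

# Actual sensor readings are modeled as this prediction plus zero-mean white
# noise v ~ N(0, R), with R the given measurement noise covariance.
R = np.diag([0.1, 0.05])  # illustrative values, not from the project
```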
The EKF I coded uses a prediction-update cycle to refine its estimate. In the Prediction Step, it uses the robot's motion model to predict the next state from the control inputs, and propagates the state covariance matrix through the Jacobian of the motion model. In the Update Step, it corrects the predicted state with the sensor measurements: it computes the innovation (measurement residual), computes the Kalman gain, and updates the state vector and covariance matrix accordingly.
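A minimal sketch of the two steps, assuming the unicycle motion model and range-bearing measurement model shown above, might look like the following; the function names and the 3-dimensional state representation are illustrative choices, not the project's exact implementation.

```python
import numpy as np

def ekf_predict(mu, Sigma, control, dt, Q):
    """Prediction step: propagate the mean through the motion model and the
    covariance through the motion model's Jacobian F."""
    x, y, theta = mu
    v, omega = control
    mu_pred = np.array([x + v * np.cos(theta) * dt,
                        y + v * np.sin(theta) * dt,
                        theta + omega * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                  [0.0, 1.0,  v * np.cos(theta) * dt],
                  [0.0, 0.0,  1.0]])
    Sigma_pred = F @ Sigma @ F.T + Q
    return mu_pred, Sigma_pred

def ekf_update(mu, Sigma, z, landmark, R):
    """Update step: innovation, Kalman gain, and corrected state and covariance
    for one range-bearing measurement z = [range, bearing]."""
    x, y, theta = mu
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    q = dx**2 + dy**2
    rng = np.sqrt(q)
    z_pred = np.array([rng, np.arctan2(dy, dx) - theta])
    H = np.array([[-dx / rng, -dy / rng,  0.0],        # Jacobian of the measurement model
                  [ dy / q,   -dx / q,   -1.0]])
    nu = z - z_pred                                    # innovation (measurement residual)
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))   # wrap the bearing residual
    S = H @ Sigma @ H.T + R                            # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)                 # Kalman gain
    mu_new = mu + K @ nu
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

In a ROS 2 node, ekf_predict would typically run on each incoming velocity command, and ekf_update would run once per landmark visible at that time step.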
Have a look at the video of the algorithm running on a mobile robot!