Sensors, state, and uncertainty
Robots never know the world perfectly. This lesson frames sensing as evidence about hidden state and motivates estimators you will meet repeatedly (Kalman/Bayes filters at a conceptual level).
Figure: Two sensor families (body vs world)
Learning objectives
- Categorize proprioceptive vs exteroceptive sensors with examples.
- Explain bias, noise, and drift for IMUs.
- State why state estimation outputs a distribution (or best estimate + uncertainty) rather than a single truth.
Prerequisites
- Basic probability intuition (mean, variance).
- Frames lesson (where measurements live).
Step 1 — What is “state”?
The state might include:
- base pose in the world,
- joint angles,
- velocities,
- slowly changing biases.
Different tasks require different state vectors — do not estimate everything “just because.”
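As a concrete sketch of "pick the state you need," here is what a minimal state vector for a planar mobile base might look like. All names, the unicycle motion model, and the choice of entries are illustrative assumptions, not prescribed by this lesson:

```python
import numpy as np

# Hypothetical minimal state for a planar mobile base:
# [x, y, heading, forward speed, gyro bias]
X, Y, THETA, V, GYRO_BIAS = range(5)

def predict(state, omega_meas, dt):
    """Propagate the state with a unicycle motion model,
    subtracting the estimated gyro bias from the measured turn rate."""
    x, y, theta, v, b = state
    omega = omega_meas - b               # correct the raw gyro reading
    return np.array([
        x + v * np.cos(theta) * dt,      # position advances along heading
        y + v * np.sin(theta) * dt,
        theta + omega * dt,
        v,                               # constant-speed assumption
        b,                               # bias drifts slowly; constant here
    ])
```

Note that the slowly changing gyro bias is itself part of the state: the estimator can only correct for it if it is being estimated.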
Checkpoint: Why is joint angle from encoders often considered “more trusted” than absolute position from a single camera frame?
Step 2 — Proprioception
Examples:
- Encoders on joints (position, sometimes velocity).
- Tachometers / motor currents (indirectly related to velocity and torque).
- IMU (gyro + accelerometer, sometimes magnetometer).
IMU accelerometers measure specific force, not gravity alone — interpreting them requires care when the robot accelerates.
Exercise: List two failure modes for gyro integration over long horizons.
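To make one of those failure modes concrete, here is a minimal simulation of naively integrating a gyro on a stationary robot. The bias and noise magnitudes are assumed round numbers, not measurements from any particular IMU:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01            # 100 Hz gyro
steps = 60_000       # 10 minutes of samples
bias = 0.002         # rad/s constant bias (assumed value)
noise_std = 0.01     # rad/s white noise (assumed value)

heading = 0.0        # true turn rate is zero: the robot is stationary
for _ in range(steps):
    gyro = bias + rng.normal(0.0, noise_std)
    heading += gyro * dt   # naive integration

# Integrated white noise grows like sqrt(t); a constant bias
# integrates linearly in t and dominates over long horizons.
print(f"heading error after 10 min: {np.degrees(heading):.1f} deg")
```

Even with a small bias, the heading error after ten minutes is on the order of a full turn's worth of degrees, which is why gyro-only heading is never trusted over long horizons.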
Step 3 — Exteroception
Examples:
- Cameras (rich, high-dimensional, sensitive to lighting).
- Lidar / ToF (range maps; different failure modes).
- Contact sensors / force–torque at the wrist.
Each sensor has a measurement model z_t = h(x_t) + noise (often modeled as random noise around a predictable mean).
Checkpoint: Why might two lidars disagree on range to the same surface?
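The measurement model z_t = h(x_t) + noise can be sketched directly. Here is a hypothetical 1D range sensor facing a wall; the positions and noise level are assumed values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x, wall):
    """Measurement model: a 1D range sensor returns distance to the wall."""
    return wall - x

true_x = 2.0     # robot position (m), assumed
wall = 5.0       # wall position (m), assumed
sigma = 0.05     # sensor noise std dev (m), assumed

# 100 noisy readings: random noise around a predictable mean of 3.0 m.
z = h(true_x, wall) + rng.normal(0.0, sigma, size=100)
print(f"mean reading: {z.mean():.3f} m, sample std: {z.std():.3f} m")
```

The sample mean sits near h(x_t) and the sample spread near sigma, which is exactly what "random noise around a predictable mean" promises.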
Step 4 — Noise, bias, and calibration
- Noise: random variation around the mean (often modeled Gaussian for tractability).
- Bias: consistent offset (temperature-dependent in IMUs).
- Calibration: estimate parameters so raw readings map to physical units and frames.
Exercise: Give an example where ignoring time synchronization between camera and IMU breaks fusion.
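One hedged sketch of calibration (all numbers assumed): with the robot held still and one accelerometer axis aligned with gravity, the expected reading is known, so a constant bias becomes identifiable by averaging against that reference:

```python
import numpy as np

rng = np.random.default_rng(2)

G = 9.81           # known gravity magnitude (m/s^2)
true_bias = 0.15   # unknown to the estimator (assumed value)

# 1000 stationary samples: gravity + bias + white noise.
samples = G + true_bias + rng.normal(0.0, 0.02, size=1000)

# The expected stationary reading is G + bias, so subtracting G
# from the sample mean isolates the bias.
est_bias = samples.mean() - G
print(f"estimated bias: {est_bias:.3f} m/s^2")
```

Averaging works here only because an external reference (known gravity) pins down what the mean reading should be; without such a reference, averaging cannot separate a bias from the true signal.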
Step 5 — From measurements to beliefs
If you know a model of motion (process) and sensors (measurement), Bayes’ rule updates a belief over the state x_t.
Figure: Bayes update (prior × measurement likelihood ∝ posterior)
Practically:
- Kalman filters (linear-Gaussian idealization) and extensions (EKF, UKF) are workhorses.
- Particle filters handle strong nonlinearities and multimodal beliefs, at greater computational cost.
You are not implementing them here — you are learning what problem they solve.
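To make the problem they solve tangible without asking you to master the machinery, here is a scalar Kalman filter estimating a constant distance to a wall. Every number (true distance, noise variances, prior) is an assumed value for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

true_d = 3.0          # true distance to the wall (m), assumed
R = 0.05 ** 2         # measurement noise variance (assumed sensor)
Q = 1e-6              # tiny process noise: distance is nearly constant

d_hat, P = 0.0, 10.0  # vague prior belief: mean 0, large variance

for _ in range(200):
    # Predict: the state is (almost) constant; uncertainty grows slightly.
    P += Q
    # Update: blend prediction and measurement by their uncertainties.
    z = true_d + rng.normal(0.0, 0.05)
    K = P / (P + R)            # Kalman gain: how much to trust z
    d_hat += K * (z - d_hat)   # move the estimate toward the measurement
    P *= (1.0 - K)             # posterior variance shrinks

print(f"estimate: {d_hat:.3f} m, posterior std: {P ** 0.5:.4f} m")
```

Notice that the output is not just a number but a number plus a variance, which is the point of Step 5: the belief, not a single truth.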
Check your understanding
- What is the difference between drift and noise?
- Why is “just average many IMU samples” insufficient to remove bias?
- Name one quantity cameras measure poorly at night without extra hardware.
Lab-style stretch goal (optional)
Plot simulated noisy range measurements to a wall and compare a simple moving average vs a 1D Kalman filter estimate of distance.