
Perception & closed-loop control

Sensors, state, and uncertainty

Proprioception, IMUs, lidar, cameras, noise models, and why state estimation reasons over belief distributions.

~55 min read + exercises


Robots never know the world perfectly. This lesson frames sensing as evidence about hidden state and motivates estimators you will meet repeatedly (Kalman/Bayes filters at a conceptual level).

Figure: Two sensor families. Proprioceptive (body, "what is the body doing?"): joint encoders, tachometers, IMU (gyro + accel). Exteroceptive (world, "what is around the body?"): cameras (RGB / depth), lidar / ToF, force–torque. Fusion combines them.
Proprioceptive sensors describe the robot's own state; exteroceptive sensors describe its surroundings. Fusion blends them — and that's where state estimation lives.

Learning objectives

  • Categorize proprioceptive vs exteroceptive sensors with examples.
  • Explain bias, noise, and drift for IMUs.
  • State why state estimation outputs a distribution (or best estimate + uncertainty) rather than a single truth.

Prerequisites

  • Basic probability intuition (mean, variance).
  • Frames lesson (where measurements live).

Step 1 — What is “state”?

The state x_t might include:

  • base pose in the world,
  • joint angles,
  • velocities,
  • slowly changing biases.

Different tasks require different state vectors — do not estimate everything “just because.”
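
To make that concrete, here is a minimal sketch (the fields and numbers are illustrative, not from any particular stack) of two task-specific state vectors:

```python
import numpy as np

# State for flat-ground wheeled navigation: planar pose plus forward speed.
# [x, y, yaw, v] -- four numbers cover many 2D tasks.
nav_state = np.array([1.2, -0.4, 0.31, 0.8])

# State for a 7-DoF arm tracking task: joint angles and joint velocities.
# Estimating base pose here would add cost without helping the task.
arm_state = np.concatenate([np.zeros(7), np.zeros(7)])  # [q, dq]

print(nav_state.shape, arm_state.shape)  # (4,) (14,)
```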

Checkpoint: Why is joint angle from encoders often considered “more trusted” than absolute position from a single camera frame?


Step 2 — Proprioception

Examples:

  • Encoders on joints (position, sometimes velocity).
  • Tachometers / motor currents (indirectly related to torque).
  • IMU (gyro + accelerometer, sometimes magnetometer).

IMU accelerometers measure specific force, not gravity alone — interpreting them requires care when the robot accelerates.
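
The same care applies to gyros: integrating an imperfect rate signal accumulates error. A minimal simulation sketch, with invented bias and noise levels:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 60.0                       # 100 Hz for one minute
n = int(T / dt)

true_rate = 0.0                          # robot is actually not rotating
bias = 0.01                              # rad/s constant gyro bias (assumed)
noise_std = 0.05                         # rad/s white noise per sample

gyro = true_rate + bias + noise_std * rng.standard_normal(n)
angle = np.cumsum(gyro) * dt             # naive integration of the rate

# Noise largely averages out inside the integral, but bias does not:
# after 60 s the estimate is off by roughly bias * T = 0.6 rad.
print(f"integrated angle after {T:.0f}s: {angle[-1]:.3f} rad "
      f"(expected drift ~{bias * T:.2f} rad)")
```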

Exercise: List two failure modes for gyro integration over long horizons.


Step 3 — Exteroception

Examples:

  • Cameras (rich, high-dimensional, sensitive to lighting).
  • Lidar / ToF (range maps; different failure modes).
  • Contact sensors / force–torque at the wrist.

Each sensor has a measurement model z_t = h(x_t) + v_t, where h predicts the ideal reading from the state and v_t is random noise around that predictable mean.
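
As a concrete (and deliberately simple, assumed) instance: a 1D range sensor facing a wall, where the state is the distance itself, so h is the identity:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x):
    """Ideal measurement: a range sensor facing a wall reads the distance itself."""
    return x

def measure(x_true, noise_std=0.03):
    """One noisy reading z = h(x) + v, with v ~ N(0, noise_std^2)."""
    return h(x_true) + noise_std * rng.standard_normal()

x_true = 2.0                              # true distance to the wall (m)
z = np.array([measure(x_true) for _ in range(5)])
print(z)                                  # readings scatter around 2.0 m
```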

Checkpoint: Why might two lidars disagree on range to the same surface?


Step 4 — Noise, bias, and calibration

  • Noise: random variation around the mean (often modeled as Gaussian for tractability).
  • Bias: a consistent offset (temperature-dependent in IMUs); the sketch after this list contrasts it with noise.
  • Calibration: estimate parameters so raw readings map to physical units and frames.
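
A minimal sketch, with invented numbers, of why averaging tames noise but not bias:

```python
import numpy as np

rng = np.random.default_rng(2)

truth = 9.81                              # true acceleration magnitude (m/s^2)
bias = 0.05                               # constant accelerometer bias (assumed)
noise_std = 0.2                           # per-sample noise

samples = truth + bias + noise_std * rng.standard_normal(10_000)

# Averaging drives the noise term toward zero (std shrinks as 1/sqrt(N))
# but leaves the bias untouched: the mean converges to truth + bias.
print(f"mean of 10k samples: {samples.mean():.4f}  (truth {truth}, bias {bias})")
```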

Exercise: Give an example where ignoring time synchronization between camera and IMU breaks fusion.


Step 5 — From measurements to beliefs

If you know a model of motion (process) and sensors (measurement), Bayes’ rule updates a belief over x_t.

Figure: Bayes update. Prior p(x) × measurement likelihood p(z|x) = posterior p(x|z), plotted over the state x.
A new sensor reading shrinks uncertainty when its noise model is correct. The posterior is sharper and centered between the prior and the measurement, weighted by their confidences.
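
For 1D Gaussian beliefs this update has a closed form: the posterior mean is a precision-weighted blend of prior and measurement, and the posterior variance is always smaller than the prior's. A hedged sketch with made-up numbers:

```python
def gaussian_update(mu_prior, var_prior, z, var_meas):
    """Fuse a Gaussian prior N(mu_prior, var_prior) with a measurement
    z whose noise is N(0, var_meas). Returns the posterior mean/variance."""
    k = var_prior / (var_prior + var_meas)      # weight on the measurement
    mu_post = mu_prior + k * (z - mu_prior)     # pulled toward z
    var_post = (1.0 - k) * var_prior            # always below the prior variance
    return mu_post, var_post

# Prior: wall is about 2.0 m away, but we are unsure (std 0.5 m).
# Measurement: 2.3 m with std 0.1 m, so we trust it more than the prior.
mu, var = gaussian_update(2.0, 0.5**2, 2.3, 0.1**2)
print(mu, var)  # mean near 2.29; variance below both the prior's and the measurement's
```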

Practically:

  • Kalman filters (linear-Gaussian idealization) and extensions (EKF, UKF) are workhorses.
  • Particle filters handle nasty nonlinearities at greater computational cost.

You are not implementing them here — you are learning what problem they solve.


Check your understanding

  1. What is the difference between drift and noise?
  2. Why is “just average many IMU samples” insufficient to remove bias?
  3. Name one quantity cameras measure poorly at night without extra hardware.

Lab-style stretch goal (optional)

Plot simulated noisy range measurements to a wall and compare a simple moving average vs a 1D Kalman filter estimate of distance.
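
One possible starting point (the sensor parameters and the stationary-wall setup are invented for the exercise, not prescribed):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

true_dist = 2.0                           # wall is 2 m away and not moving
meas_var = 0.05**2
z = true_dist + np.sqrt(meas_var) * rng.standard_normal(200)

# Baseline: moving average over a fixed window.
window = 10
ma = np.convolve(z, np.ones(window) / window, mode="valid")

# 1D Kalman filter for a static state (no process noise, identity model).
mu, var = 0.0, 1.0                        # deliberately poor prior
kf = []
for zi in z:
    k = var / (var + meas_var)            # Kalman gain
    mu = mu + k * (zi - mu)               # pull estimate toward the reading
    var = (1.0 - k) * var                 # uncertainty shrinks each step
    kf.append(mu)

plt.plot(z, ".", alpha=0.3, label="measurements")
plt.plot(range(window - 1, len(z)), ma, label=f"moving average ({window})")
plt.plot(kf, label="1D Kalman estimate")
plt.axhline(true_dist, color="k", ls="--", label="truth")
plt.xlabel("sample"); plt.ylabel("distance (m)"); plt.legend()
plt.show()
```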