[HN Gopher] Understanding the Kalman filter with a simple radar ...
___________________________________________________________________
Understanding the Kalman filter with a simple radar example
Author : alex_be
Score : 400 points
Date : 2026-04-08 17:11 UTC (22 hours ago)
HTML web link (kalmanfilter.net)
TEXT w3m dump (kalmanfilter.net)
| alex_be wrote:
| Author here.
|
| I recently updated the homepage of my Kalman Filter tutorial with
| a new example based on a simple radar tracking problem. The goal
| was to make the Kalman Filter understandable to anyone with basic
| knowledge of statistics and linear algebra, without requiring
| advanced mathematics.
|
| The example starts with a radar measuring the distance to a
| moving object and gradually builds intuition around noisy
| measurements, prediction using a motion model, and how the Kalman
| Filter combines both. I also tried to keep the math minimal while
| still showing where the equations come from.
|
| I would really appreciate feedback on clarity. Which parts are
| intuitive? Which parts are confusing? Is the math level
| appropriate?
|
| If you have used Kalman Filters in practice, I would also be
| interested to hear whether this explanation aligns with your
| intuition.
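The predict/update loop the tutorial builds can be sketched in a few lines of Python (a toy constant-velocity example, not code from the tutorial; F, H, Q, R and all numeric values are illustrative choices):

```python
import numpy as np

dt = 1.0                        # seconds between radar measurements
F = np.array([[1.0, dt],        # state transition: constant-velocity model
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # we measure range only, not range rate
Q = np.diag([0.1, 0.1])         # process noise (illustrative placeholder)
R = np.array([[25.0]])          # measurement noise variance (5 m std dev)

x = np.array([[0.0], [0.0]])    # initial state guess: [range, range rate]
P = np.diag([1000.0, 1000.0])   # large initial uncertainty

def kalman_step(x, P, z):
    # Predict: propagate the state and its uncertainty through the model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_range = 1000.0
for _ in range(20):
    true_range += 30.0 * dt                  # target recedes at 30 m/s
    z = true_range + rng.normal(0.0, 5.0)    # noisy radar range reading
    x, P = kalman_step(x, P, z)
# x[0, 0] tracks the true range; x[1, 0] converges toward 30 m/s
```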
| magicalhippo wrote:
| I just glossed through for now so might have missed it, but it
| seemed you pulled the process noise matrix Q out of a hat. I
                 | guess it's explained properly in the book, but it would be
                 | nice to have some justification for why the entries are what
                 | they are.
| alex_be wrote:
                 | To keep the example focused and reasonably short, I treated
                 | the Q matrix as given and concentrated on building intuition
                 | around prediction and update. But you're right that this can
                 | feel like it appears out of nowhere.
|
| The derivation of the Q matrix is a separate topic and
| requires additional assumptions about the motion model and
| noise characteristics, which would have made the example
| significantly longer. I cover this topic in detail in the
| book.
|
| I'll consider adding a brief explanation or reference to make
| that step clearer. Thanks for pointing this out.
| magicalhippo wrote:
| Yeah I understand. I do think a brief explanation would
                 | help a lot though. As it sits, it's not even entirely clear
                 | whether the presented matrix is general or highly specific. I
                 | can easily see someone just using that as their Q matrix
                 | because that's what the Q matrix is, says so right there.
| renjimen wrote:
| You lead with "Moreover, it is an optimal algorithm that
| minimizes state estimation uncertainty." By the end of the
| tutorial I understood what this meant, but "optimal algorithm"
| is a vague term I am unfamiliar with (despite using Kalman
| Filters in my work). It might help to expand on the term
| briefly before diving into the math, since IIUC it's the key
| characteristic of the method.
| alex_be wrote:
| That's a good point. "Optimal" in this context means that,
| under the standard assumptions (linear system, Gaussian
| noise, correct model), the Kalman Filter minimizes the
| estimation error covariance. In other words, it provides the
| minimum-variance estimate among all linear unbiased
| estimators.
|
| You're right that the term can feel vague without that
| context. I'll consider adding a short clarification earlier
| in the introduction to make this clearer before diving into
| the math. Thanks for the suggestion.
| seanhunter wrote:
| Firstly I think the clarity in general is good. The one piece I
| think you could do with explaining early on is which pieces of
| what you are describing are the model of the system and which
| pieces are the Kalman filter. I was following along as you
| built the markov model of the state matrix etc and then you
| called those equations the Kalman filter, but I didn't think we
| had built a Kalman filter yet.
|
| Your early explanation of the filter (as a method for
| estimating the state of a system under uncertainty) was great
| but (unless I missed it) when you introduced the equations I
| wasn't clear that was the filter. I hope that makes sense.
| alex_be wrote:
| You're pointing out a real conceptual issue: where the system
| model ends and where the Kalman filter begins.
|
| In Kalman filter theory there are two different components:
|
| - The system model
|
| - The Kalman filter (the algorithm)
|
| The state transition and measurement equations belong to the
| system model. They describe the physics of the system and can
| vary from one application to another.
|
| The Kalman filter is the algorithm that uses this model to
| estimate the current state and predict the future state.
|
| I'll consider making that distinction more explicit when
| introducing the equations. Thanks for pointing this out.
| KellyCriterion wrote:
| You could do a line extension of your product, like "Kalman
| Filter in Financial Markets" and sell additional copies :)
| alex_be wrote:
| That's an interesting idea. The Kalman filter is definitely
| used in finance, often together with time-series models like
| ARMA. I've been thinking about writing something, although
| it's a bit outside my usual engineering focus.
|
| The challenge would be to keep it intuitive and accessible
| without oversimplifying. Still, it could be an interesting
| direction to explore.
| RickHull wrote:
| I recently (~6 mo ago) made it a goal to understand and
| implement a useful Kalman filter, but I realized that they are
| very tightly coupled to their domain and application. I got
| about half as far as I wanted, and took a pause. I expect your
| work here will get me to the finish line, so I am psyched!
| Thank you!
| alex_be wrote:
| Thanks for your feedback! Actually the KF concept is generic,
| but as mentioned above: "The state transition and measurement
| equations belong to the system model. They describe the
| physics of the system and can vary from one application to
| another."
|
| So it is right to say that the implementation of the KF is
| tightly coupled to the system. Getting that part right is
| usually the hardest step.
| alpinisme wrote:
| Tangent but I love the accessibility menu you have. Made it
| super easy to tweak the page to be more readable for me
| alex_be wrote:
| It's a free accessibility widget by Sienna. I tweaked the CSS
| to adapt it to the https://kalmanfilter.net/ style. You can
| find it here: https://accessibility-widget.pages.dev/
| ediamondscience wrote:
    | I read and enjoyed your book a few months ago when a friend
    | recommended it to me. I've been interested in control theory for
| a few years, but I'm still definitely a beginner when it comes
| to designing good control systems and have never done it
| professionally.
|
| I've been in the process of writing a tutorial on how PID
| filters work for a much younger audience. As a result, I've
| been looking back at the original tutorials that made stuff
| click for me. I had several engineers try to explain PID
| control to me over the course of about a year, but I don't
| think I really got it until I ended up watching Terry Davis
| (yeah, the TempleOS guy) show off how to use PID control in
| SimStructure using a hovering rocket as an example.
|
| The way he built the concept up was to take each component and
| build on the control system until he had something that worked.
| He started off with a simple proportional controller that ended
| up having a steady state error with the rocket hovering beneath
| the target height. Once he had that and pointed out the steady
    | state error, he implemented the integral term and showed how it
| resulted in overshoot. Once that was working, he implemented
| the derivative control to back the overshoot off until he had
| something that settled pretty quickly.
|
| I'm not sure how you could do something similar for a Kalman
| Filter, but I did find it genuinely constructive to see the
| thought process behind adding each component of the equation.
| alex_be wrote:
| Yeah. Building things step by step often makes complex topics
| much easier to understand.
| smokel wrote:
| This seems to be an ad for a fairly expensive book on a topic
| that is described in detail in many (free) resources.
|
| See for example: https://rlabbe.github.io/Kalman-and-Bayesian-
| Filters-in-Pyth...
|
| Is there something in this particular resource that makes it
| worth buying?
| cwood-sdf wrote:
| i haven't seen much from other kalman filter resources, but i
| can say that this book is incredibly detailed and i would
| highly recommend it
|
| if you dont want to buy the book, most of the linear kalman
| filter stuff is available for free:
| https://kalmanfilter.net/kalman-filter-tutorial.html
| alex_be wrote:
| That's a fair question. My goal with the site was to make as
| much material available for free as possible, and the core
| linear Kalman filter content is indeed freely accessible.
|
| The book goes further into topics like tuning, practical design
| considerations, common pitfalls, and additional examples. But
| there are definitely many good free resources out there,
| including the one you linked.
| cameldrv wrote:
| Huge +1 for Roger Labbe's book/jupyter notebooks. They really
| helped me grok Kalman filters but also the more general problem
| and the various approaches that approximate the general problem
| from different directions.
| bmitc wrote:
    | There are not many _good_ resources on Kalman filters. In fact,
    | I have found a single one that I'd consider good. This is
    | someone who has spent a lot of time to newly understand Kalman
    | filters.
| memming wrote:
| Link to that good one?
| bmitc wrote:
| It was a typo. I meant to say I haven't found a good one
| yet.
| the__alchemist wrote:
| That link is a classic!
| joshu wrote:
| i liked how https://www.bzarg.com/p/how-a-kalman-filter-works-in-
| picture... uses color visualization to explain
| alex_be wrote:
| That's a good article. I also like the visual approach there.
| My goal here was a bit different. I walk through a concrete
| radar example step by step, and use multiple examples
| throughout the tutorial to build intuition and highlight common
| pitfalls.
| palata wrote:
| I really loved this one: https://www.bzarg.com/p/how-a-kalman-
| filter-works-in-picture...
| ActorNightly wrote:
| I feel like people overcomplicate even the "simple"
| explanations like the OPs and this one.
|
        | Basically, a Kalman filter is part of a larger class of
        | "estimators", which take the input data and run additional
        | processing on top of it to figure out the true value being
        | measured.
|
        | A very basic example, the low pass filter, is also an
        | "estimator" - it rejects high frequency noise and gives you
        | essentially a moving average. But it is a static filter that
        | assumes your process noise lives above a certain frequency,
        | and that anything below it is actual change in the measured
        | variable.
|
        | You can make the estimator better. Say you have some idea of
        | how the process variable should behave. For a very simple
        | case, say you are measuring temperature, and you have a
        | current measurement, and you know that change in temperature
        | is related to current being put through a winding. You can
        | capture that relationship in a model of the process, which
        | runs alongside the measurement of the actual temperature. Now
        | you have the noisy temperature reading, the predicted reading
        | (which acts like a mean), and you can compute the covariance
        | of the noise, which you can then use to tune the parameter of
        | the low pass filter. So if your noise changes in frequency
        | for some reason, the filter will adjust and take care of it.
|
        | The Kalman filter is an enhanced version of the above, with
        | the added feature of capturing correlation between process
        | variables and using the measurement to update variables that
        | are not directly measured. For example, if position and
        | velocity are correlated, a refined measurement of position
        | from GPS will also correct the velocity estimate, even if you
        | are not measuring velocity directly (since you are computing
        | velocity based on an internal model).
|
        | The reason it can be kind of confusing is that it operates in
        | linear matrix form by design, so it works with other tools
        | that let you do further analysis. With that restriction to
        | linear algebra, you have to assume a Gaussian noise profile
        | and estimate process dependence as a covariance measure.
|
        | But the Kalman filter isn't the be-all and end-all for noise
        | rejection. You can do estimation in nonlinear ways too. For
| example, I designed an automated braking system for an aircraft
| that tracks a certain brake force command, by commanding a
| servo to basically press on a brake pedal. Instead of a Kalman
| filter, I basically ran tests on the system and got a 4d map of
| (position, pressure, servo_velocity)-> new_pressure, which then
| I inverted to get the required velocity for target new
| pressure. So the process estimation was basically commanding
| the servo to move at a certain speed, getting the pressure,
| then using position, existing pressure, and pressure error to
| compute a new velocity, and so on.
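The correlation point above is visible directly in the gain computation. A toy sketch (all numbers made up): with zero cross-covariance, a position-only measurement cannot move the velocity estimate, while a nonzero off-diagonal term in P makes it do so:

```python
import numpy as np

H = np.array([[1.0, 0.0]])   # measure position only
R = np.array([[4.0]])        # measurement noise variance

# No cross-covariance: the position residual cannot move velocity.
P_uncorr = np.array([[9.0, 0.0],
                     [0.0, 9.0]])
K1 = P_uncorr @ H.T @ np.linalg.inv(H @ P_uncorr @ H.T + R)
# K1[1, 0] == 0: the velocity gain is exactly zero

# With cross-covariance (as produced by predicting through a
# constant-velocity model), the same residual also corrects velocity.
P_corr = np.array([[9.0, 6.0],
                   [6.0, 9.0]])
K2 = P_corr @ H.T @ np.linalg.inv(H @ P_corr @ H.T + R)
# K2[1, 0] > 0: a position measurement now updates velocity too
```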
| quibono wrote:
| How does braking work in an aircraft?
| kortilla wrote:
| When it lands. Auto brakes apply to the wheels to target a
| specific deceleration target. You don't want to brake too
| hard and cause undue wear and you don't want to under brake
| and miss your taxiway or go off the runway.
| quibono wrote:
| Gosh I should have thought of auto-braking. For some
| reason I kept thinking this was some fancy drone-braking
| system and couldn't figure out how you'd brake in the
| air... I never even considered the on-the-ground case.
| Thanks.
| RickHull wrote:
| Very interesting perspective. I will be reviewing in depth.
| Much appreciated.
| alex_be wrote:
| Interesting. It sounds like you ended up with a data-driven
| estimator. Did you have a chance to compare the data-driven
| and model-based approaches?
| lelandbatey wrote:
| Kalman filters are very cool, but when applying them you've got
| to know that they're not magic. I struggled to apply Kalman
| Filters for a toy project about ten years ago, because the thing
| I didn't internalize is that Kalman filters excel at offsetting
| low-quality data by sampling at a higher rate. You can
| "retroactively" apply a Kalman filter to a dataset and see some
| improvement, but you'll only get amazing results if you sample
| your very-noisy data at a much higher rate than if you were
| sampling at a "good enough" rate. The higher your sample rate,
| the better your results will be. In that way, a Kalman filter is
| something you want to design around, not a "fix all" for data you
| already have.
| alex_be wrote:
| I agree that Kalman filters are not magic and that having a
| reasonable model is essential for good performance.
|
| Higher sampling rates can help in some cases, especially when
| tracking fast dynamics or reducing measurement noise through
| repeated updates. However, the main strength of the Kalman
| filter is combining a model with noisy measurements, not
| necessarily relying on high sampling rates.
|
| In practice, Kalman filters can work well even with relatively
| low-rate measurements, as long as the model captures the system
| dynamics reasonably well.
|
| I also agree that it's often something you design into the
| system rather than applying as a post-processing step.
| moffkalast wrote:
| Yeah, I try to err on the side of not using them unless the
| accuracy obtained through more robust methods is just a no-go,
| because there are so many ways they can suddenly and
| irrecoverably fail if some sensor randomly produces something
| weird that wasn't accounted for. Which happens all the time in
| practice.
| alex_be wrote:
        | It is always a good idea to include outlier treatment in the
        | KF algorithm to filter out weird measurements.
| ActorNightly wrote:
        | That's just a consequence of sample rate as a whole. The
        | entire linear control space is intricately tied to the
        | frequency domain, so you have to sample at a rate at least
        | twice your highest frequency of interest for accurate
        | capture, per the Nyquist theorem.
|
| All of that stuff is used in industry because a lot of
| regulation (for things like aircraft) basically requires your
| control laws to be linear so that you can prove stability.
|
| In reality, when you get into non linear control, you can do a
| lot more stuff. I did a research project in college where we
| had an autonomous underwater glider that could only get gps
| lock when it surfaced, and had to rely on shitty MEMS imu
| control under water. I actually proposed doing a neural network
| for control, but it got shot down because "neural nets are
| black boxes" lol.
| pjbk wrote:
| True. I have often encountered motion controllers where the
| implementer failed to realize that calculating derived
| variables like acceleration from position and velocity using
| a direct derivative formula will violate the Nyquist
| condition, and therefore yields underperforming controllers
| or totally noisy signal inputs to them. You either need to
| adjust your sample or control loop rates, or run an
| appropriate estimator. Depending on the problem it can be
| something sophisticated like an LQR/KF, or even in some cases
| a simple alpha-beta-gamma filter (poor version of a
| predictor-corrector process) can be adequate.
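The alpha-beta filter mentioned here is easy to sketch: it is a fixed-gain predictor-corrector, effectively a Kalman filter whose gains never adapt (the gains below are arbitrary illustrative values):

```python
def alpha_beta_filter(measurements, dt, alpha=0.5, beta=0.1):
    """Fixed-gain predictor-corrector: predict forward with the
    current velocity, then correct position and velocity by fixed
    fractions of the residual. Returns (position, velocity) pairs."""
    x, v = measurements[0], 0.0   # initialize from the first sample
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt       # predict one step ahead
        r = z - x_pred            # residual vs. the new measurement
        x = x_pred + alpha * r    # correct position
        v = v + (beta / dt) * r   # correct velocity
        estimates.append((x, v))
    return estimates
```

Note that the velocity estimate is built from residuals rather than by differencing raw samples, which sidesteps the naive-derivative noise amplification described in the comment above.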
| roger_ wrote:
| Here's my (hopefully) intuitive guide:
|
| 1. understand weighted least squares and how you can update an
| initial estimate (prior mean and variance) with a new measurement
| and its uncertainty (i.e. inverse variance weighted least
| squares)
|
| 2. this works because the true mean hasn't changed between
| measurements. What if it did?
|
| 3. KF uses a model of how the mean changes to predict what it
| should be _now_ based on the past, including an inflation factor
| on the uncertainty since predictions aren 't perfect
|
| 4. after the prediction, it becomes the same problem as (1)
| except you use the predicted values as the initial estimate
|
| There are some details about the measurement matrix (when your
| measurement is a linear combination of the true value -- the
| state) and the Kalman gain, but these all come from the least
| squares formulation.
|
| Least squares is the key and you can prove it's optimal under
| certain assumptions (e.g. Bayesian MMSE).
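Steps 1 and 4 of this guide fit in a few lines; in the scalar case the Kalman gain is exactly the inverse-variance weight (a sketch under those assumptions, not tied to any particular library):

```python
def fuse(mean_prior, var_prior, z, var_meas):
    """Inverse-variance weighted combination of a prior estimate
    and a new measurement. This is the scalar Kalman update:
    K = var_prior / (var_prior + var_meas)."""
    K = var_prior / (var_prior + var_meas)
    mean_post = mean_prior + K * (z - mean_prior)
    var_post = (1.0 - K) * var_prior   # always below both input variances
    return mean_post, var_post

# Prior: 10 with variance 4; measurement: 12 with variance 4
m, v = fuse(10.0, 4.0, 12.0, 4.0)
# Equal variances -> simple average: m == 11.0, v == 2.0
```

Step 3's prediction then just pushes the mean through the motion model and inflates the variance (adds Q) before the next `fuse` call.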
| anamax wrote:
| There's also a .com - https://thekalmanfilter.com/kalman-filter-
| explained-simply/
| raluk wrote:
  | After spending a few weeks trying to understand the Kalman
  | filter, I figured out that I needed to understand all of the
  | following:
|
| 1. Model of system
|
| 2. Internal state
|
| 3. How is optimal estimation defined
|
| 4. Covariance (statistics)
|
  | The Kalman filter is the optimal estimation of the internal
  | state and covariance of a system based on the measurements so
  | far.
  |
  | The Kalman process/filter is the mathematical solution to this
  | problem as the system evolves based on inputs and observable
  | measurements. It turns out that an internal state consisting of
  | both the estimated value and its covariance is all that is
  | needed to fully describe the system in such a model.
|
  | It is important to understand that choosing a different model
  | for the optimum, the uncertainty, or the system, compared to
  | what Rudolf Kalman presented, just gives a different
  | mathematical solution to this problem. Examples of different
  | optimal solutions for different estimation models are the
  | nonlinear Kalman filters and the Wiener filter.
|
| ---
|
  | I think the book on this topic by Alex Becker is great and
  | possibly the best introduction to the topic. It has a lot of
  | examples and builds the required intuition really well. All I
  | was missing was a little more emphasis on mathematical rigor
  | and a chapter on the LQG regulator, but you can find both of
  | these in the original paper by Rudolf Kalman.
| alex_be wrote:
| Thanks for your feedback. I am thinking of writing a second
| volume with more advanced and less introductory topics, but I
| haven't decided yet. It is a serious commitment and it will
    | take years to complete. If I go ahead with it, I will consider
    | a chapter on LQG.
|
| Small clarification: nonlinear Kalman filters are suboptimal.
| EKF relies on linear approximations, and UKF uses heuristic
| approximations.
| anilakar wrote:
| The missile knows...
| alex_be wrote:
| Classic :)
| eps wrote:
  | When learning the Kalman filter, it clicks into place much
  | faster when there are two or more inputs with different noise
  | profiles. That's why it exists, and that was its original use
  | case.
|
| Yet virtually all tutorials stick to single-input examples, which
| is really an edge case. This site is no exception.
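A scalar two-sensor version is easy to sketch (illustrative variances; a static state with no process noise, to keep it short). Each sensor gets its own R, and the filter automatically down-weights the noisier one:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0                     # constant quantity both sensors observe
var_a, var_b = 1.0, 25.0          # sensor A is far less noisy than sensor B

x, P = 0.0, 1e6                   # diffuse prior on the state
for _ in range(50):
    for var in (var_a, var_b):
        z = truth + rng.normal(0.0, var ** 0.5)   # noisy reading
        K = P / (P + var)         # scalar Kalman gain
        x = x + K * (z - x)       # larger var -> smaller correction
        P = (1.0 - K) * P
# x converges near `truth`; sensor B's readings get a smaller gain
# automatically because of their larger variance
```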
| imtringued wrote:
| Kalman filters have always been about state estimation. What
| you consider an exception is the default in the vast majority
| of state estimation scenarios.
|
| Before I got into control theory, I've read a lot of HN posts
| about kalman filters being the "sensor fusion" algorithm, which
| is the wrong mental model. You can do sensor fusion with state
| estimation, but you can't do state estimation with sensor
| fusion.
| alex_be wrote:
| I have a chapter in my book that introduces sensor fusion as a
| concept. If you want to dive deeper into the sensor fusion
| topic, I would recommend Bar-Shalom's or Blackman's book.
| balloob wrote:
  | Kalman filters are great! For people interested in one used in
  | practice: Sendspin uses a Kalman filter to keep speakers in
  | sync, and it even works in browsers on phones on 5G etc.
|
| Open the Sendspin live demo in your browser:
| https://www.sendspin-audio.com/#live-demo
|
| Some more info on Kalman implementation here
| https://github.com/Sendspin/time-filter/blob/main/docs%2Fthe...
| pmarreck wrote:
| Could the Kalman Filter idea be applied to human witnesses to an
| event, where you model the person as a faulty sensor?
| alex_be wrote:
  | The Kalman filter is about combining uncertain measurements,
  | and human observations could be viewed as noisy sensors. On the
| other hand, the standard KF assumes unbiased sensors with
| Gaussian noise, and I don't know if those assumptions hold for
| human witnesses.
___________________________________________________________________
(page generated 2026-04-09 16:01 UTC)