Implementation of Non-Gaussian transition models #1043

Open
wants to merge 70 commits into base: main

Commits (70)
2e23cc3
feat: initial commit
zhenyuen May 11, 2024
f11b204
feat: remove support for non-isotropic driving noise
zhenyuen May 11, 2024
8156222
feat: add Gamma NVM mixture driving process
zhenyuen May 11, 2024
3a7974c
feat: implement marginal particle filter
zhenyuen May 11, 2024
8a9159c
feat: add mpf
zhenyuen May 17, 2024
cbc2efc
feat: add tutorial
zhenyuen May 21, 2024
e52ae58
Merge pull request #1 from zhenyuen/levy-state-space-model
zhenyuen May 21, 2024
656d6da
Add option in plot_groundtruths to set colour based on passed in kwar…
jswright-dstl May 21, 2024
ab95baf
Add axis labels for 1d plotting in Plotterly
jswright-dstl May 21, 2024
d607102
Add type checking for model_list in CombinedGaussianTransitionModel
jswright-dstl May 21, 2024
4b65b5e
Add test for type checking model_list in CombinedGaussianTransitionModel
jswright-dstl May 21, 2024
afac779
Update docs link GitHub workflow
sdhiscocks May 3, 2024
2817d7d
Use walrus operator to omit cases where no tracks exist in associations
jswright-dstl May 22, 2024
da94d65
Add Joseph method of covariance calculation to Kalman Updaters
csherman-dstl May 28, 2024
e94c82f
Add joseph covariance to tests
csherman-dstl May 28, 2024
4b40fbd
Adjust duplicated return
csherman-dstl May 28, 2024
08c9bc1
Add tests that fail on current CartesianToElevationBearingRangeRate i…
gawebb-dstl May 23, 2024
025dd76
Fixed inverse function for CartesianToElevationBearingRangeRate.
gawebb-dstl May 23, 2024
f064292
Update stonesoup/models/measurement/nonlinear.py
gawebb-dstl May 31, 2024
5e71626
Fix links to source code from docs "Edit on GitHub" links
sdhiscocks May 30, 2024
932776a
Tried getting the id from a groundtruth property. It doesn't work wit…
gawebb-dstl May 23, 2024
eb8114e
Changed the way a platform creates a ground truth path and the tests …
gawebb-dstl May 23, 2024
37fd57d
Tidied up code and added documentation
gawebb-dstl May 23, 2024
22fc888
Removed doctests in Platform.ground_truth_path as they didn't render …
gawebb-dstl May 31, 2024
f1c63e0
Fix incorrect generator kwargs with grid moveables
sdhiscocks Jun 3, 2024
39a0c48
feat: fix centering and add pdf (kinda)
zhenyuen May 26, 2024
7d38bb0
fix: particle filter covariance
zhenyuen Jun 1, 2024
5ca076b
feat: add new Tensor and CovarianceMatrices type. Still under develop…
zhenyuen Jun 4, 2024
1458e45
feat: add initial framework on computing pdf from fft
zhenyuen Jun 5, 2024
c5e4a73
feat: add Gaussian and AS driver
zhenyuen Jun 13, 2024
c16a55d
feat: add alpha-stable and Gaussian driver, with support for constant…
zhenyuen Jun 16, 2024
1fa11b9
feat: remove fast levy models implementation from this branch
zhenyuen Jun 16, 2024
3a6cc34
feat: initial commit
zhenyuen May 11, 2024
486e2ed
feat: remove support for non-isotropic driving noise
zhenyuen May 11, 2024
1428a5a
feat: add Gamma NVM mixture driving process
zhenyuen May 11, 2024
843ff78
feat: implement marginal particle filter
zhenyuen May 11, 2024
1eb52db
feat: add mpf
zhenyuen May 17, 2024
77d0935
feat: add tutorial
zhenyuen May 21, 2024
1c83e8a
feat: fix centering and add pdf (kinda)
zhenyuen May 26, 2024
84499c4
fix: particle filter covariance
zhenyuen Jun 1, 2024
4c47043
feat: add new Tensor and CovarianceMatrices type. Still under develop…
zhenyuen Jun 4, 2024
e8e784f
feat: add initial framework on computing pdf from fft
zhenyuen Jun 5, 2024
5081547
feat: add Gaussian and AS driver
zhenyuen Jun 13, 2024
5a7ac44
feat: add alpha-stable and Gaussian driver, with support for constant…
zhenyuen Jun 16, 2024
323f757
feat: remove fast levy models implementation from this branch
zhenyuen Jun 16, 2024
04d9c97
feat: merge branch 'main' into levy-models-slow
zhenyuen Jun 16, 2024
14be421
feat: complete merge
zhenyuen Jun 16, 2024
b266da0
fix: update Jupyter example and changed LevyOrnstein-Uhlenbeck to Lev…
zhenyuen Jun 16, 2024
92d9670
feat: add conditional pdf and log pdf functions.
zhenyuen Jun 24, 2024
79b2994
fix: replace model_id in NVM functions
zhenyuen Jul 4, 2024
10e35d3
fix: merge imports
zhenyuen Jul 4, 2024
de58f3b
fix: replace typehints '|' with Union for backwards compatability.
zhenyuen Sep 8, 2024
6e04b44
feat: override Levy Langvin model with close form expressions for fas…
zhenyuen Sep 8, 2024
bfd64fe
refactor: models
zhenyuen Nov 11, 2024
072d407
refactor: marginalised particle filter updater and predictor
zhenyuen Nov 11, 2024
30669eb
feat: convert tutorial notebook from ipynb to py
zhenyuen Nov 11, 2024
561fc09
patch: cleanup models
zhenyuen Nov 11, 2024
33e5b9b
patch: cleanup types
zhenyuen Nov 11, 2024
66824bc
patch: cleanup predictor
zhenyuen Nov 11, 2024
9fe1847
patch: cleanup updater
zhenyuen Nov 11, 2024
c15f69a
patch: cleanup array types
zhenyuen Nov 11, 2024
67a6dcb
patch: more cleanup
zhenyuen Nov 11, 2024
bef498c
feat: update docstrings
zhenyuen Nov 11, 2024
4d6fa3d
fix: remove checkpoint
zhenyuen Nov 11, 2024
1bc1af7
patch: add docstring to tutorial
zhenyuen Nov 11, 2024
0a09ac9
fix: type support for python 3.8
zhenyuen Nov 11, 2024
cc3e5b2
fix: syntax
zhenyuen Nov 11, 2024
f98c93b
fix: revert unintended changes
zhenyuen Nov 16, 2024
41a184e
fix: cleanup test
zhenyuen Nov 16, 2024
dff4b12
fix: revert unintended formatting
zhenyuen Nov 16, 2024
@@ -0,0 +1,300 @@
#!/usr/bin/env python
# coding: utf-8
"""
=============================================================
11 - Tracking linear Levy transition models with the MPF
=============================================================
In line with the tutorial examples of the Kalman and particle
filters in Stone Soup, a simplified single-target tracking
example without clutter is provided here to demonstrate the
use of linear transition models driven by non-Gaussian Levy
noise, as well as how to perform inference tasks on this class
of models using the Marginalized Particle Filter (MPF).
"""

# Consider the scenario where the target evolves according to the Langevin model, driven by a normal sigma-mean mixture with the mixing distribution being the $\alpha$-stable distribution.

# In[1]:
Comment on lines +15 to +17
@nperree-dstl (Contributor) commented on Jan 3, 2025
Each section of text in the tutorial (i.e. not code) needs to begin with # %%. To format mathematical symbols you need to wrap in ":math:maths_goes_here". You can also remove all the # In[] lines from the tutorial file.

Suggested change
# Consider the scenario where the target evolves according to the Langevin model, driven by a normal sigma-mean mixture with the mixing distribution being the $\alpha$-stable distribution.
# In[1]:
# %%
# Consider the scenario where the target evolves according to the Langevin model, driven by a normal sigma-mean mixture with the mixing distribution being the :math:`\alpha`-stable distribution.



import numpy as np
from datetime import datetime, timedelta


# The state of the target can be represented as 2D Cartesian coordinates, $\left[x, \dot x, y, \dot y\right]^{\top}$, modelling both its position and velocity. A simple truth path is created with a sampling rate of 1 Hz.

# In[2]:
Comment on lines +24 to +26
@nperree-dstl (Contributor) commented on Jan 3, 2025
See previous comment (check my editing of the maths here though)

Suggested change
# The state of the target can be represented as 2D Cartesian coordinates, $\left[x, \dot x, y, \dot y\right]^{\top}$, modelling both its position and velocity. A simple truth path is created with a sampling rate of 1 Hz.
# In[2]:
# %%
# The state of the target can be represented as 2D Cartesian coordinates, :math`[x, \dot{x}, y, \dot{y}]^{\top}`, modelling both its position and velocity. A simple truth path is created with a sampling rate of 1 Hz.

@zhenyuen (Author) commented on Feb 10, 2025
> you can also render the docs locally by running `python -m sphinx -W docs/source docs/build`.

Thanks for that, will try! That being said, running with -W fails, as there appear to be warnings from other sources that are treated as errors. For example,

generating gallery...
generating gallery for auto_tutorials... [  9%] 01_KalmanFilterTutorial.py
Warning, treated as error:

../tutorials/01_KalmanFilterTutorial.py unexpectedly failed to execute correctly:

    Traceback (most recent call last):
      File "/Users/chongzhenyuen/Documents/iib-project/stonesoup/docs/tutorials/01_KalmanFilterTutorial.py", line 192, in <module>
        plotter.plot_ground_truths(truth, [0, 2])
      File "/Users/chongzhenyuen/Documents/iib-project/stonesoup/stonesoup/plotter.py", line 2495, in plot_ground_truths
        self.fig.add_trace(go.Scatter(truth_kwargs))
      File "/Users/chongzhenyuen/miniforge3/envs/iib-project/lib/python3.10/site-packages/plotly/graph_objs/_scatter.py", line 3192, in __init__
        self._process_kwargs(**dict(arg, **kwargs))
      File "/Users/chongzhenyuen/miniforge3/envs/iib-project/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4322, in _process_kwargs
        raise err
    ValueError: Invalid property specified for object of type plotly.graph_objs.Scatter: 'legendrank'

    Did you mean "legendgroup"?



from stonesoup.types.groundtruth import GroundTruthPath, GroundTruthState
from stonesoup.models.base_driver import NoiseCase
from stonesoup.models.driver import AlphaStableNSMDriver
from stonesoup.models.transition.levy_linear import LevyLangevin, CombinedLinearLevyTransitionModel

# And the clock starts
start_time = datetime.now().replace(microsecond=0)


# The `LevyLangevin` class creates a one-dimensional Langevin model, driven by the $\alpha$-stable NSM mixture process defined in the `AlphaStableNSMDriver` class.
#
# \begin{equation}
# d \dot{x}(t)=-\theta \dot{x}(t) d t+d W(t), \quad \theta>0
# \end{equation}
#
# where $\theta$ is the damping factor and $W(t)$ is the non-Gaussian driving process.
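#
# As a side note, integrating the deterministic part of this SDE over a fixed time step $\Delta t$ gives the familiar Ornstein-Uhlenbeck style transition matrix acting on $[x, \dot{x}]^{\top}$ (the notation $F(\Delta t)$ here is purely illustrative):
#
# \begin{equation}
# F(\Delta t)=\begin{bmatrix} 1 & \frac{1}{\theta}\left(1-e^{-\theta \Delta t}\right) \\ 0 & e^{-\theta \Delta t} \end{bmatrix}
# \end{equation}
#
# so only the driving noise term $W(t)$ distinguishes this model from its Gaussian-driven counterpart.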
#
# The noise samples $\mathbf{w}_n$ are drawn from the $\alpha$-stable distribution parameterized by the $\alpha$-stable law, $S_{\alpha}(\sigma, \beta, \mu)$.
#
# The input parameters to `AlphaStableNSMDriver` class are the stability index $\alpha$, expected jumps per unit time $c$, conditional Gaussian mean $\mu_W$ & variance $\sigma_W^2$, and the type of residuals used for the truncated shot-noise representation, specified by `noise_case`.
#
# Without diving into technical details, the scaling factor $\sigma$, skewness parameter $\beta$ and location $\mu$ of the $\alpha$-stable law are functions of the conditional Gaussian parameters $\mu_W, \sigma_W^2$. In general, set $\mu_W=0$ for a symmetric target distribution ($\beta=0$), or $\mu_W \neq 0$ to model biased trajectories. In addition, the size of the resulting trajectories (and jumps) can be adjusted by varying $\sigma_W^2$.
#
# The available noise cases are:
#
# 1. `NoiseCase.TRUNCATED`: no residuals; the least expensive, but the drawn noise samples deviate furthest from the target distribution.
# 2. `NoiseCase.GAUSSIAN_APPROX`: the most expensive, but the drawn noise samples are closest to the target distribution.
# 3. `PartialNoiseCase.GAUSSIAN_APPROX`: a compromise between cases (1) and (2).
#
#
# For interested readers, refer to [1, 2] for more details.
#
# Here, we initialise an $\alpha$-stable driver with the parameters `mu_W=0, sigma_W2=4, alpha=1.4, noise_case=NoiseCase.GAUSSIAN_APPROX, c=10`.
#
# Then, the driver instance is injected into the Langevin model for each coordinate axis (i.e., x and y) during initialisation with the parameter `damping_coeff=0.15`.
#
# Note that we overwrite the default `mu_W` parameter in the $\alpha$-stable driver for the x-coordinate axis to bias our trajectories towards the left. This can be done by passing an additional argument `mu_W = -0.02` when injecting the driver into the Langevin model.
#
# Finally, the `CombinedLinearLevyTransitionModel` class takes a set of 1-D models and combines them into a linear transition model of arbitrary dimension, $D$ (in this case, $D=2$).
#
#
#
#
# <!-- and in the $\alpha$-stable law is a function of the conditional Gaussian mean $\mu_W$
#
# where $\beta=\begin{cases} 1, \quad \mu_W \neq 0 \\ 0, \quad \text{otherwise} \end{cases}$ with $\beta=0$ being the a symmetric stable distribution.
#
# $\sigma=\frac{\mathbb{E}|w|^\alpha \Gamma(2-\alpha) \cos(\pi \alpha / 2))}{1- \alpha}$ represent the scale parameter and $\beta=$ controlling the skewness of the stable distribution. -->
#

# In[3]:


seed = 1  # Random seed for reproducibility

# Driving process parameters
mu_W = 0
sigma_W2 = 4
alpha = 1.4
c = 10
noise_case = NoiseCase.GAUSSIAN_APPROX


# Model parameters
theta = 0.15

driver_x = AlphaStableNSMDriver(mu_W=mu_W, sigma_W2=sigma_W2, seed=seed, c=c, alpha=alpha, noise_case=noise_case)
driver_y = driver_x # Same driving process in both dimensions and sharing the same latents (jumps)
langevin_x = LevyLangevin(driver=driver_x, damping_coeff=theta, mu_W=-0.02)
langevin_y = LevyLangevin(driver=driver_y, damping_coeff=theta)
transition_model = CombinedLinearLevyTransitionModel([langevin_x, langevin_y])
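

# As a quick sanity check (the exact output format depends on the model implementation), the conditional Gaussian parameters of the combined transition model can be printed directly; they should reflect the per-axis settings chosen above, including the biased `mu_W` on the x-axis.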


# In[4]:


print(transition_model.mu_W)
print(transition_model.sigma_W2)


# The ground truth is initialised from (0,0).

# In[5]:


timesteps = [start_time]
truth = GroundTruthPath([GroundTruthState([0, 1, 0, 1], timestamp=timesteps[0])])

num_steps = 40
for k in range(1, num_steps + 1):
    timesteps.append(start_time+timedelta(seconds=k))  # add next timestep to list of timesteps
    truth.append(GroundTruthState(
        transition_model.function(truth[k-1], noise=True, time_interval=timedelta(seconds=1)),
        timestamp=timesteps[k]))


# The simulated ground truth path can be plotted using the in-built plotting classes in Stone Soup.
#
# In addition to the ground truth, Stone Soup plotting tools allow measurements and predicted tracks (see later) to be plotted and synced together consistently.
#
# An animated plotter that uses Plotly graph objects can be accessed via the `AnimatedPlotterly` class from Stone Soup.
#
# Note that the animated plotter requires a list of timesteps as an input, and that `tail_length`
# is set to 0.3. This means that each data point will be on display for 30% of the total
# simulation time. The mapping argument is [0, 2] because those are the x and
# y position indices from our state vector.
#
# If a static plotter is preferred, the `Plotterly` class can be used instead.
#
#

# In[6]:


# from stonesoup.plotter import AnimatedPlotterly
# plotter = AnimatedPlotterly(timesteps, tail_length=0.3, width=600, height=600)

from stonesoup.plotter import Plotterly
plotter = Plotterly(autosize=False, width=600, height=600)
plotter.plot_ground_truths(truth, [0, 2])
plotter.fig


# ## Simulate measurements

# Assume a 'linear' sensor which detects the
# position, but not velocity, of a target, such that
# $\mathbf{z}_k = H_k \mathbf{x}_k + \boldsymbol{\nu}_k$,
# $\boldsymbol{\nu}_k \sim \mathcal{N}(0,R)$, with
#
# \begin{align}H_k &= \begin{bmatrix}
# 1 & 0 & 0 & 0\\
# 0 & 0 & 1 & 0\\
# \end{bmatrix}\\
# R &= \begin{bmatrix}
# \omega & 0\\
# 0 & \omega\\
# \end{bmatrix}\end{align}
#
# where the measurement noise variance $\omega$ is set to 16 here, matching the measurement model defined below.

# In[7]:


from stonesoup.types.detection import Detection
from stonesoup.models.measurement.linear import LinearGaussian


# The linear Gaussian measurement model is set up by indicating the number of dimensions in the
# state vector and the dimensions that are measured (so specifying $H_k$) and the noise
# covariance matrix $R$.

# In[8]:


measurement_model = LinearGaussian(
    ndim_state=4,  # Number of state dimensions (position and velocity in 2D)
    mapping=(0, 2),  # Mapping measurement vector index to state index
    noise_covar=np.array([[16, 0],  # Covariance matrix for Gaussian PDF
                          [0, 16]])
    )


# The measurements can now be generated and plotted accordingly.

# In[9]:


measurements = []
for state in truth:
    measurement = measurement_model.function(state, noise=True)
    measurements.append(Detection(measurement,
                                  timestamp=state.timestamp,
                                  measurement_model=measurement_model))


# In[10]:


plotter.plot_measurements(measurements, [0, 2])
plotter.fig


# ## Marginalised Particle Filtering

# The `MarginalisedParticlePredictor` and `MarginalisedParticleUpdater` classes correspond to the predict and update steps
# respectively.
# The predictor requires a `TransitionModel` instance, and the updater a `MeasurementModel` instance.
# To avoid degenerate samples, the `SystematicResampler` is used which is passed to the updater.
# More resamplers that are included in Stone Soup are covered in the
# [Resampler Tutorial](https://stonesoup.readthedocs.io/en/latest/auto_tutorials/sampling/ResamplingTutorial.html#sphx-glr-auto-tutorials-sampling-resamplingtutorial-py).

# In[11]:


from stonesoup.predictor.particle import MarginalisedParticlePredictor
from stonesoup.resampler.particle import SystematicResampler
from stonesoup.updater.particle import MarginalisedParticleUpdater

predictor = MarginalisedParticlePredictor(transition_model=transition_model)
resampler = SystematicResampler()
updater = MarginalisedParticleUpdater(measurement_model, resampler)


# To start we create a prior estimate. This is a `MarginalisedParticleState` which describes the state as a distribution of particles.
#
# The particle mean priors are sampled from a Gaussian distribution centred on the initial state $[0, 1, 0, 1]^{\top}$ with identity covariance.
#
# The covariance prior for each particle is initialised as a scalar multiple of the identity matrix.

# In[12]:


from scipy.stats import multivariate_normal
from stonesoup.types.numeric import Probability # Similar to a float type
from stonesoup.types.state import MarginalisedParticleState
from stonesoup.types.array import StateVectors

number_particles = 100

# Sample from the prior Gaussian distribution
states = multivariate_normal.rvs(np.array([0, 1, 0, 1]),
                                 np.diag([1., 1., 1., 1.]),
                                 size=number_particles)
covars = np.stack([np.eye(4) * 100 for i in range(number_particles)], axis=2)  # (M, M, N)

# Create prior particle state.
prior = MarginalisedParticleState(
    state_vector=StateVectors(states.T),
    covariance=covars,
    weight=np.array([Probability(1/number_particles)]*number_particles),
    timestamp=start_time-timedelta(seconds=1))


# We now run the predict and update steps, propagating the collection of particles and resampling at each step.

# In[13]:


from stonesoup.types.hypothesis import SingleHypothesis
from stonesoup.types.track import Track

track = Track()
for measurement in measurements:
    prediction = predictor.predict(prior, timestamp=measurement.timestamp)
    hypothesis = SingleHypothesis(prediction, measurement)
    post = updater.update(hypothesis)
    track.append(post)
    prior = track[-1]


# Plot the resulting track with the sample points at each iteration. 'plot_history' can also be
# set to True if desired.

# In[14]:


plotter.plot_tracks(track, [0, 2], particle=True, uncertainty=True)
plotter.fig


# ## References
# [1] Lemke, Tatjana, and Simon J. Godsill, 'Inference for models with asymmetric α-stable noise processes', in Siem Jan Koopman, and Neil Shephard (eds), Unobserved Components and Time Series Econometrics (Oxford, 2015; online edn, Oxford Academic, 21 Jan. 2016)
#
# [2] S. Godsill, M. Riabiz, and I. Kontoyiannis, "The Lévy state space model," in 2019 53rd Asilomar Conference on Signals, Systems, and Computers, 2019, pp. 487–494.
#

# In[ ]:




3 changes: 2 additions & 1 deletion stonesoup/models/__init__.py
@@ -1,3 +1,4 @@
 from .base import Model
+from .base_driver import Driver
 
-__all__ = ['Model']
+__all__ = ['Model', 'Driver']
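As a quick illustrative sketch (assuming the base_driver module added elsewhere in this PR is present on the branch), the effect of this change is that the new base class becomes importable from the top-level models package:

# Hypothetical usage on this branch; Driver resolves to stonesoup.models.base_driver.Driver
from stonesoup.models import Model, Driver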