Communications in Information and Systems

Volume 21 (2021)

Number 4

Deep filtering

Pages: 651 – 667

DOI: https://dx.doi.org/10.4310/CIS.2021.v21.n4.a6

Authors

Le Yi Wang (Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan, U.S.A.)

George Yin (Department of Mathematics, University of Connecticut, Storrs, Connecticut, U.S.A.)

Qing Zhang (Department of Mathematics, University of Georgia, Athens, Georgia, U.S.A.)

Abstract

This paper develops a deep learning method for linear and nonlinear filtering. The idea is to start with a nominal dynamic model and generate Monte Carlo sample paths, which are then used to train a deep neural network with a least squares error as the loss function. The resulting network weights are then applied to Monte Carlo samples from the actual dynamic model. The deep filter obtained in this way compares favorably with the traditional Kalman filter in linear cases and the extended Kalman filter in nonlinear cases. Moreover, a switching model with jumps is studied to demonstrate the adaptiveness and power of the deep filtering method. A main advantage of deep filtering is its robustness when the nominal model and the actual model differ. Another advantage is that real data can be used directly to train the deep neural network, so no model calibration is needed.
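The following is a minimal sketch of the training-and-transfer idea summarized above, not the authors' implementation: a small feedforward network is trained on Monte Carlo paths from an assumed nominal scalar linear state-space model using a least squares loss, and the trained weights are then evaluated on paths from a perturbed "actual" model. The model coefficients, network size, and observation-window length are illustrative assumptions.

```python
# Hedged sketch of the deep filtering procedure described in the abstract.
# Assumptions (not from the paper): scalar model x_{k+1} = a x_k + w_k,
# y_k = c x_k + v_k, a 10-step observation window, and a 1-hidden-layer net.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def simulate(a, c, q, r, n_paths, T):
    """Generate sample paths of the state x and observation y."""
    x = np.zeros((n_paths, T))
    for k in range(1, T):
        x[:, k] = a * x[:, k - 1] + np.sqrt(q) * rng.standard_normal(n_paths)
    y = c * x + np.sqrt(r) * rng.standard_normal((n_paths, T))
    return x, y

def windows(x, y, L):
    """Network input: the last L observations up to time k; target: the state x_k."""
    X = np.stack([y[:, k - L + 1:k + 1] for k in range(L - 1, y.shape[1])],
                 axis=1).reshape(-1, L)
    Z = x[:, L - 1:].reshape(-1, 1)
    return (torch.tensor(X, dtype=torch.float32),
            torch.tensor(Z, dtype=torch.float32))

L = 10
# Nominal model: used only to generate training data.
x_nom, y_nom = simulate(a=0.8, c=1.0, q=0.04, r=0.09, n_paths=200, T=100)
Xtr, Ztr = windows(x_nom, y_nom, L)

net = nn.Sequential(nn.Linear(L, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # least squares training loss

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(Xtr), Ztr)
    loss.backward()
    opt.step()

# "Actual" model with a perturbed drift coefficient: reuse the trained weights.
x_act, y_act = simulate(a=0.75, c=1.0, q=0.04, r=0.09, n_paths=50, T=100)
Xte, Zte = windows(x_act, y_act, L)
with torch.no_grad():
    mse = loss_fn(net(Xte), Zte).item()
print(f"mean-squared filtering error on the actual model: {mse:.4f}")
```

In this sketch the robustness claim corresponds to the gap between training on the nominal coefficient a = 0.8 and testing on the perturbed coefficient a = 0.75; with real data, the same training loop could be run directly on observed paths, bypassing model calibration.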

Keywords

deep neural network, filtering, regime switching model


The authors’ research was supported in part by the Army Research Office under grant W911NF-19-1-0176.

Received 3 April 2020

Published 4 June 2021