
Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Linzner, Dominik ; Schmidt, Michael ; Koeppl, Heinz
eds.: Wallach, H. ; Larochelle, H. ; Beygelzimer, A. ; d'Alché-Buc, F. ; Fox, E. ; Garnett, R. (2025)
Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019). Vancouver, Canada (08.12.2019 - 14.12.2019)
doi: 10.26083/tuprints-00028995
Conference or Workshop Item, Secondary publication, Publisher's Version

Full text: NeurIPS-2019-scalable-structure-learning-of-continuous-time-bayesian-networks-from-incomplete-data-Paper.pdf (1 MB)
License: CC BY 4.0 International - Creative Commons, Attribution

Supplement: gradient_ctbn_neurips_supplementary.pdf (1 MB)
License: CC BY 4.0 International - Creative Commons, Attribution
Item Type: Conference or Workshop Item
Type of entry: Secondary publication
Title: Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data
Language: English
Date: 15 January 2025
Place of Publication: Darmstadt
Year of primary publication: 2019
Place of primary publication: San Diego, CA
Publisher: NeurIPS
Book Title: Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Collation: 11 pages
Event Title: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
Event Location: Vancouver, Canada
Event Dates: 08.12.2019 - 14.12.2019
DOI: 10.26083/tuprints-00028995
Origin: Secondary publication service
Abstract:

Continuous-time Bayesian networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed form. If data are incomplete, however, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem especially for structure learning, where the estimation has to be repeated for each element of a super-exponentially growing set of possible structures. To circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via gradient-based optimization of the mixture weights. We combine this approach with a new variational method that allows for a closed-form calculation of this mixture marginal likelihood. We demonstrate the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
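The mixture-of-generators idea sketched in the abstract can be illustrated on a toy problem. The following is a hypothetical sketch, not the paper's actual algorithm or code: it considers a single two-state continuous-time Markov chain, two candidate rate matrices (standing in for generators induced by different structures), and optimizes softmax mixture weights by plain gradient ascent on the trajectory log-likelihood computed from sufficient statistics (dwell times and transition counts). All matrices and statistics are invented for the example.

```python
import numpy as np

# Hypothetical illustration (not the paper's algorithm): structure learning
# as gradient-based optimization of mixture weights over candidate generators.

# Two candidate generators (rate matrices) for a 2-state process, each
# standing in for a different hypothetical structure.
Q_candidates = np.array([
    [[-1.0, 1.0], [2.0, -2.0]],   # "structure A"
    [[-5.0, 5.0], [0.5, -0.5]],   # "structure B"
])

# Invented sufficient statistics of an observed trajectory:
# T[s] = total dwell time in state s, M[s, s'] = transition counts.
# These are consistent with structure A (exit rates ~1 and ~2).
T = np.array([100.0, 50.0])
M = np.array([[0.0, 100.0], [100.0, 0.0]])

def mixture_log_likelihood(theta):
    """CTMC log-likelihood under the weighted mixture of generators."""
    w = np.exp(theta) / np.exp(theta).sum()          # softmax weights
    Q = np.tensordot(w, Q_candidates, axes=1)        # mixed generator
    rates = np.where(np.eye(2, dtype=bool), 1.0, Q)  # mask the diagonal
    ll = (M * np.log(rates)).sum() - (T * (-np.diag(Q))).sum()
    return ll, w

def numerical_grad(theta, eps=1e-6):
    """Central-difference gradient of the log-likelihood in theta."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (mixture_log_likelihood(theta + e)[0]
                - mixture_log_likelihood(theta - e)[0]) / (2 * eps)
    return g

theta = np.zeros(2)                # start from uniform mixture weights
for _ in range(200):               # plain gradient ascent
    theta += 0.01 * numerical_grad(theta)

ll, w = mixture_log_likelihood(theta)
print("mixture weights:", w)       # weight concentrates on structure A
```

In the paper the mixture is over CTBN generators and the likelihood is replaced by a variational closed-form marginal likelihood; this sketch only shows the outer loop, i.e. how scoring a super-exponential set of structures is replaced by one continuous optimization over mixture weights.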

Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-289952
Additional Information:

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)

Classification DDC: 500 Science and mathematics > 570 Life sciences, biology
600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics
Divisions: 18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems
18 Department of Electrical Engineering and Information Technology > Self-Organizing Systems Lab
Date Deposited: 15 Jan 2025 09:30
Last Modified: 15 Jan 2025 09:30
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/28995