Enhancing Bayesian model updating in structural health monitoring via learnable mappings (2405.13648v2)

Published 22 May 2024 in cs.CE

Abstract: In the context of structural health monitoring (SHM), the selection and extraction of damage-sensitive features from raw sensor recordings represent a critical step towards solving the inverse problem underlying the identification of structural health conditions. This work introduces a novel approach that employs deep neural networks to enhance stochastic SHM methods. A learnable feature extractor and a feature-oriented surrogate model are synergistically exploited to evaluate a likelihood function within a Markov chain Monte Carlo sampling algorithm. The feature extractor undergoes pairwise supervised training to map sensor recordings onto a low-dimensional metric space, which encapsulates the sensitivity to structural health parameters. The surrogate model maps structural health parameters to their feature representation. The procedure enables the updating of beliefs about structural health parameters, eliminating the need for computationally expensive numerical models. A preliminary offline phase involves the generation of a labeled dataset to train both the feature extractor and the surrogate model. Within a simulation-based SHM framework, training vibration responses are efficiently generated using a multi-fidelity surrogate modeling strategy to approximate sensor recordings under varying damage and operational conditions. The multi-fidelity surrogate exploits model order reduction and artificial neural networks to speed up the data generation phase while ensuring the damage-sensitivity of the approximated signals. The proposed strategy is assessed through three synthetic case studies, demonstrating high accuracy in the estimated parameters and strong computational efficiency.
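The core online step described above is the evaluation of a likelihood inside a Markov chain Monte Carlo sampler, where the trained feature extractor maps observed sensor recordings into a low-dimensional metric space and the feature-oriented surrogate maps candidate structural health parameters into that same space. The sketch below illustrates this idea under loose assumptions: the `feature_extractor` and `feature_surrogate` callables are toy analytic stand-ins for the paper's trained neural networks, the Gaussian discrepancy in feature space is an assumed noise model, and the random-walk Metropolis sampler is a generic choice, not necessarily the specific algorithm used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-ins for the two trained mappings described in the abstract ---
# In the paper these are neural networks; here they are simple analytic
# placeholders so the sketch runs end to end. `theta` collects the structural
# health parameters (e.g. damage location/magnitude), `u` a raw sensor recording.

def feature_extractor(u):
    """Map a raw vibration recording to a low-dimensional feature vector."""
    # Toy choice: two summary statistics of the signal.
    return np.array([u.mean(), u.std()])

def feature_surrogate(theta):
    """Map structural health parameters directly to the feature space."""
    # Toy choice: a smooth nonlinear map standing in for the trained surrogate.
    return np.array([np.sin(theta[0]), 0.5 + 0.3 * theta[0] ** 2])

# --- Synthetic "observed" recording generated for a known parameter value ---
theta_true = np.array([0.8])
u_obs = rng.normal(loc=np.sin(theta_true[0]),
                   scale=0.5 + 0.3 * theta_true[0] ** 2, size=2048)

sigma_feat = 0.05  # assumed feature-space noise level (modeling choice)

def log_likelihood(theta):
    """Gaussian discrepancy between observed and predicted features."""
    resid = feature_extractor(u_obs) - feature_surrogate(theta)
    return -0.5 * np.sum((resid / sigma_feat) ** 2)

def log_prior(theta):
    """Uniform prior on [0, 2] for the single health parameter."""
    return 0.0 if 0.0 <= theta[0] <= 2.0 else -np.inf

# --- Random-walk Metropolis sampler over the health parameter ---
n_steps, step = 5000, 0.1
theta = np.array([1.0])
log_post = log_prior(theta) + log_likelihood(theta)
samples = []
for _ in range(n_steps):
    prop = theta + step * rng.normal(size=theta.shape)
    lp = log_prior(prop)
    lp = lp if np.isinf(lp) else lp + log_likelihood(prop)
    if np.log(rng.uniform()) < lp - log_post:
        theta, log_post = prop, lp
    samples.append(theta.copy())

samples = np.asarray(samples)
print("posterior mean:", samples[1000:].mean(axis=0))  # discard burn-in
```

Because the surrogate replaces the full-order numerical model inside `log_likelihood`, each MCMC step costs only a few network (here, function) evaluations, which is what makes the Bayesian updating of the health parameters computationally tractable.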
