
Calibrated Adaptive Teacher for Domain Adaptive Intelligent Fault Diagnosis (2312.02826v2)

Published 5 Dec 2023 in cs.LG, cs.AI, eess.SP, and stat.ML

Abstract: Intelligent Fault Diagnosis (IFD) based on deep learning has proven to be an effective and flexible solution, attracting extensive research. Deep neural networks can learn rich representations from vast amounts of representative labeled data for various applications. In IFD, they achieve high classification performance from signals in an end-to-end manner, without requiring extensive domain knowledge. However, deep learning models usually only perform well on the data distribution they have been trained on. When applied to a different distribution, they may experience performance drops. This is also observed in IFD, where assets are often operated in working conditions different from those in which labeled data have been collected. Unsupervised domain adaptation (UDA) deals with the scenario where labeled data are available in a source domain, and only unlabeled data are available in a target domain, where domains may correspond to operating conditions. Recent methods rely on training with confident pseudo-labels for target samples. However, the confidence-based selection of pseudo-labels is hindered by poorly calibrated confidence estimates in the target domain, primarily due to over-confident predictions, which limits the quality of pseudo-labels and leads to error accumulation. In this paper, we propose a novel UDA method called Calibrated Adaptive Teacher (CAT), where we propose to calibrate the predictions of the teacher network throughout the self-training process, leveraging post-hoc calibration techniques. We evaluate CAT on domain-adaptive IFD and perform extensive experiments on the Paderborn benchmark for bearing fault diagnosis under varying operating conditions. Our proposed method achieves state-of-the-art performance on most transfer tasks.

Summary

  • The paper introduces the CAT method, using calibrated pseudo-labeling to address domain shift in intelligent fault diagnosis.
  • The approach integrates both time-domain and frequency-domain inputs, achieving state-of-the-art performance on the Paderborn University benchmark.
  • Comparative analysis shows that techniques like temperature scaling can effectively reduce calibration errors and improve diagnostic accuracy.

In the field of intelligent fault diagnosis (IFD) for industrial machinery, deep learning models have shown remarkable promise. These models are adept at parsing complex signal data and categorizing various machine states, all without requiring extensive domain expertise. However, the utility of these models is often compromised when they encounter data that differs from the data on which they were trained. This issue, known as domain shift, can lead to a significant decline in model performance.

Recent approaches have tackled this problem through unsupervised domain adaptation (UDA), which allows a model to adapt from a labeled source domain to an unlabeled target domain. Traditional methods focus on aligning the feature distributions of the two domains to reduce domain discrepancy. Another common strategy is pseudo-labeling, where the model generates labels for the target-domain data; these pseudo-labels are then used to further train the model, ideally bridging the gap between source and target. However, pseudo-labels are typically selected by thresholding confidence scores that are poorly calibrated in the target domain: over-confident predictions allow incorrect labels to pass the selection threshold, and these errors accumulate during self-training.
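Confidence-based pseudo-label selection can be sketched as follows. This is an illustrative NumPy snippet, not the paper's code; the function name and the 0.9 threshold are arbitrary choices for the example:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep target samples whose maximum softmax probability exceeds
    the threshold; return their indices and hard pseudo-labels."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold
    return np.where(mask)[0], labels[mask]

# Toy example: three target samples, two fault classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.08, 0.92]])
idx, pseudo = select_pseudo_labels(probs)
print(idx, pseudo)  # -> [0 2] [0 1]
```

If the network is over-confident, incorrect predictions can still clear the threshold, which is exactly the failure mode that motivates calibrating the confidence scores first.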

In this context, a novel method known as Calibrated Adaptive Teacher (CAT) has been introduced, designed to address the calibration challenge within the UDA framework. CAT refines the process of pseudo-labeling by calibrating the confidence scores of a teacher network's predictions on the target samples. Throughout the self-training process, CAT applies well-known post-hoc calibration techniques, such as temperature scaling, to ensure that the confidence scores are more representative of the true likelihood of predictions being correct.
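As a concrete illustration of temperature scaling, here is a minimal, self-contained sketch, not the authors' implementation; a grid search stands in for the gradient-based fitting usually used, and the example logits are hypothetical:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    """Negative log-likelihood of the labels at temperature T."""
    p = softmax(logits, T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Grid-search the single scalar T minimizing NLL on held-out
    labeled data (usually fit by gradient descent; a grid keeps
    this sketch dependency-free)."""
    return float(min(grid, key=lambda T: nll(T, logits, labels)))

# Hypothetical over-confident classifier: 3 of 4 predictions correct,
# yet every prediction is made with ~0.98 confidence.
logits = np.array([[4.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 0.0]])
labels = np.array([0, 1, 0, 1])
T = fit_temperature(logits, labels)  # T > 1 softens the predictions
```

Dividing the teacher's logits by the fitted temperature before thresholding yields confidence scores that better reflect the true probability of correctness, which is the property pseudo-label selection relies on.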

By incorporating both time-domain and frequency-domain inputs, CAT demonstrates state-of-the-art performance on a wide array of transfer tasks on the Paderborn University benchmark dataset for fault diagnosis of rolling bearings. In addition to temperature scaling, three other calibration techniques were explored: vector scaling, matrix scaling, and calibrated predictions with covariate shift (CPCS). Interestingly, for tasks on this dataset, temperature scaling and CPCS emerged as the most effective strategies, despite the fact that only CPCS accounts for the domain shift directly. This result suggests that when domain features are sufficiently aligned, even simpler calibration techniques can successfully adapt to the target domain.
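The calibration error such techniques aim to reduce is typically quantified with the expected calibration error (ECE). The following is a minimal sketch of the standard binned estimator, written for illustration rather than taken from the paper:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence and average the gap between
    per-bin accuracy and mean confidence, weighted by bin size."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Over-confident and wrong: 90% confidence, 0% accuracy -> ECE = 0.9.
ece = expected_calibration_error(np.array([[0.9, 0.1], [0.9, 0.1]]),
                                 np.array([1, 1]))
```

A well-calibrated model drives this gap toward zero: among predictions made with, say, 80% confidence, about 80% should be correct.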

The four principal contributions of this research are the introduction of the CAT method itself, an extensive evaluation on the Paderborn University dataset, demonstrated improvements in diagnosis accuracy and reductions in calibration error in the target domain, and a comparative analysis of different post-hoc calibration techniques. This work therefore marks a significant advance in unsupervised domain adaptation for IFD, addressing the key issue of model calibration in the target domain and enhancing the accuracy and reliability of fault diagnosis systems.
