
Explainable AI for Engineering Design: A Unified Approach of Systems Engineering and Component-Based Deep Learning Demonstrated by Energy-Efficient Building Design (2108.13836v7)

Published 30 Aug 2021 in cs.LG, cs.SE, cs.SY, and eess.SY

Abstract: Data-driven models created by machine learning gain in importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artefacts with better performance and sustainability. However, limited generalization and the black-box nature of these models lead to limited explainability and reusability. To overcome this situation, we propose a component-based approach to create partial component models by ML. This component-based approach aligns deep learning with systems engineering (SE). The key contribution of the component-based method is that activations at interfaces between the components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates information for engineering explainability. The approach adapts the model structure to engineering methods of systems engineering and to domain knowledge. We examine the performance of the approach in the field of energy-efficient building design: First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data. Especially for representative designs different in structure, we observe a much higher accuracy (R2 = 0.94) compared to conventional monolithic methods (R2 = 0.71). Second, we illustrate explainability by exemplarily demonstrating how sensitivity information from SE and rules from low-depth decision trees serve engineering. Third, we evaluate explainability by qualitative and quantitative methods, demonstrating the matching of preliminary knowledge and data-driven derived strategies, and show correctness of activations at component interfaces compared to white-box simulation results (envelope components: R2 = 0.92..0.99; zones: R2 = 0.78..0.93).
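The sketch below illustrates, in broad strokes, the architectural idea described in the abstract: small component models (here, PyTorch modules) whose scalar outputs are meant to be interpretable engineering quantities, composed hierarchically so that the whole component system forms one DNN while its interface activations remain checkable against white-box simulation. All class names, input dimensions, and physical interpretations (e.g., heat flows, zone energy demand) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class EnvelopeComponent(nn.Module):
    """Hypothetical envelope model: maps design parameters (e.g., area,
    U-value, orientation) to one interpretable quantity such as an
    annual heat flow through that envelope element."""

    def __init__(self, n_inputs: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single interpretable interface output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ZoneComponent(nn.Module):
    """Hypothetical zone model: aggregates the interface outputs of its
    envelope components plus zone-level parameters into an energy demand."""

    def __init__(self, n_envelopes: int, n_zone_params: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_envelopes + n_zone_params, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, envelope_flows: torch.Tensor, zone_params: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([envelope_flows, zone_params], dim=-1))


class BuildingModel(nn.Module):
    """Hierarchical composition: trained end to end as one DNN, but the
    activations at the envelope/zone interface are exposed so they can be
    compared with engineering quantities from white-box simulation."""

    def __init__(self, envelope_input_dims, n_zone_params: int):
        super().__init__()
        self.envelopes = nn.ModuleList(
            [EnvelopeComponent(d) for d in envelope_input_dims]
        )
        self.zone = ZoneComponent(len(envelope_input_dims), n_zone_params)

    def forward(self, envelope_inputs, zone_params):
        flows = torch.cat(
            [m(x) for m, x in zip(self.envelopes, envelope_inputs)], dim=-1
        )
        demand = self.zone(flows, zone_params)
        return demand, flows  # interface activations returned for explainability


# Toy usage with assumed dimensions (batch of 8 designs, 3 envelope elements):
model = BuildingModel(envelope_input_dims=[3, 3, 4], n_zone_params=2)
env_inputs = [torch.randn(8, 3), torch.randn(8, 3), torch.randn(8, 4)]
zone_params = torch.randn(8, 2)
demand, interface_flows = model(env_inputs, zone_params)
print(demand.shape, interface_flows.shape)  # torch.Size([8, 1]) torch.Size([8, 3])
```

Because each interface carries a named engineering quantity, the design choice is that explainability is built into the structure rather than added post hoc, which is what allows the paper's comparison of interface activations against simulation results.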

Citations (8)

Summary

We haven't generated a summary for this paper yet.


Open Problems

We haven't generated a list of open problems mentioned in this paper yet.


Continue Learning

We haven't generated follow-up questions for this paper yet.



Tweets

This paper has been mentioned in 1 tweet and received 1 like.
