Physical Data Embedding for Memory Efficient AI (2407.14504v3)

Published 19 Jul 2024 in cs.LG

Abstract: Deep neural networks (DNNs) have achieved exceptional performance across various fields by learning complex, nonlinear mappings from large-scale datasets. However, they suffer from high memory requirements, high computational costs, and limited interpretability. This paper introduces an approach in which master equations of physics are converted into multilayered networks that are trained via backpropagation. The resulting general-purpose model effectively encodes data in the properties of the underlying physical system. In contrast to existing methods, wherein a trained neural network serves as a computationally efficient surrogate for solving physical equations, our approach directly treats physics equations as trainable models. We demonstrate this physical embedding concept with the Nonlinear Schrödinger Equation (NLSE), which acts as a trainable architecture for learning complex patterns, including nonlinear mappings and memory effects, from data. On time series data, the network embeds the data representation in orders of magnitude fewer parameters than conventional neural networks. Notably, the trained "Nonlinear Schrödinger Network" is interpretable, with all parameters having physical meanings; this interpretability offers insight into the underlying dynamics of the system that produced the data. The proposed method of replacing traditional DNN feature-learning architectures with physical equations is also extended to the Gross-Pitaevskii Equation, demonstrating that the framework applies broadly to other master equations of physics. Among our results, an ablation study quantifies the relative importance of physical terms such as dispersion, nonlinearity, and potential energy for classification accuracy. We also outline the limitations of this approach as they relate to generalizability.
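The abstract describes layers derived from the NLSE whose trainable parameters are the equation's physical coefficients. The paper's own discretization is not shown here, but a common way to realize such a layer is the split-step Fourier method, in which dispersion is applied in the frequency domain and the nonlinearity and potential in the time domain. The sketch below is a minimal PyTorch illustration of that idea; all names, initial values, layer counts, and the classification readout are assumptions for illustration, not the authors' implementation. It makes the dispersion (beta2), nonlinearity (gamma), and potential V(t) terms from the ablation study explicit as trainable parameters.

```python
# Minimal sketch of an "NLSE-as-trainable-model" layer (split-step Fourier),
# assuming an NLSE of the form i dpsi/dz = -(beta2/2) d^2psi/dt^2 + V psi + gamma |psi|^2 psi.
# Sign conventions vary; since beta2 and gamma are trainable (and may go
# negative), the choice below is arbitrary. Hypothetical code, not the paper's.
import torch
import torch.nn as nn


class NLSELayer(nn.Module):
    """One split-step propagation segment with trainable physical coefficients."""

    def __init__(self, n_samples: int, dz: float = 0.1):
        super().__init__()
        self.dz = dz
        self.beta2 = nn.Parameter(torch.tensor(0.1))   # dispersion coefficient (assumed init)
        self.gamma = nn.Parameter(torch.tensor(0.1))   # Kerr-type nonlinearity coefficient
        self.potential = nn.Parameter(torch.zeros(n_samples))  # potential energy V(t)
        # Angular-frequency grid for the Fourier-domain dispersion step.
        omega = 2 * torch.pi * torch.fft.fftfreq(n_samples)
        self.register_buffer("omega", omega)

    def forward(self, psi: torch.Tensor) -> torch.Tensor:
        # Linear step: dispersion applied as a phase in the frequency domain.
        psi = torch.fft.ifft(
            torch.fft.fft(psi) * torch.exp(0.5j * self.beta2 * self.omega**2 * self.dz)
        )
        # Nonlinear step: intensity-dependent phase plus the potential term.
        psi = psi * torch.exp(1j * (self.gamma * psi.abs() ** 2 + self.potential) * self.dz)
        return psi


class NLSENetwork(nn.Module):
    """Stack of NLSE layers followed by a linear readout for classification."""

    def __init__(self, n_samples: int, n_layers: int = 4, n_classes: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(NLSELayer(n_samples) for _ in range(n_layers))
        self.readout = nn.Linear(n_samples, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        psi = x.to(torch.complex64)          # treat the input time series as a complex field
        for layer in self.layers:
            psi = layer(psi)
        return self.readout(psi.abs() ** 2)  # classify on the output intensity


# Usage: propagate a batch of time series through the physical layers.
model = NLSENetwork(n_samples=128)
logits = model(torch.randn(8, 128))  # shape (batch, n_classes)
```

Under this reading, every trainable scalar maps to a term of the master equation, so an ablation like the one reported in the paper amounts to freezing beta2, gamma, or V at zero and re-measuring classification accuracy.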
