
Improving unsupervised anomaly localization by applying multi-scale memories to autoencoders (2012.11113v1)

Published 21 Dec 2020 in cs.CV

Abstract: Autoencoders and their variants have been widely applied to anomaly detection. Prior work on memory-augmented deep autoencoders proposed memorizing normality to detect anomalies, but it neglects the feature discrepancy between different resolution scales. We therefore introduce multi-scale memories that record scale-specific features, together with a multi-scale attention fuser between the encoding and decoding modules of the autoencoder, yielding a model we call MMAE. During unsupervised learning, MMAE updates the memory slots at each resolution scale as prototype features. For anomaly detection, we remove anomalies by replacing the original encoded image features at each scale with the most relevant prototype features, and fuse these features before feeding them to the decoding module to reconstruct the image. Experimental results on various datasets demonstrate that MMAE successfully removes anomalies at different scales and performs favorably against similar reconstruction-based methods.
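
To make the memory-addressing step concrete, below is a minimal PyTorch sketch of one scale-specific memory, based only on the abstract's description. The class name, slot count, and softmax-based cosine addressing are illustrative assumptions, not the authors' implementation; the paper's multi-scale attention fuser is only indicated by a comment.

```python
# Hypothetical sketch of scale-specific memory addressing (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    """One scale-specific memory: replaces encoded features with a
    convex combination of learned prototype (normal) features."""
    def __init__(self, num_slots: int, feat_dim: int):
        super().__init__()
        # Each row is a prototype feature, learned from normal data.
        self.slots = nn.Parameter(torch.randn(num_slots, feat_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (B, C, H, W) feature map from one encoder scale.
        b, c, h, w = z.shape
        q = z.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, C) queries
        # Cosine-similarity addressing over the memory slots (an assumption;
        # the abstract only says "most relevant prototype features").
        attn = F.softmax(
            F.normalize(q, dim=1) @ F.normalize(self.slots, dim=1).t(), dim=1
        )                                          # (B*H*W, M)
        z_hat = attn @ self.slots                  # retrieved prototypes
        return z_hat.reshape(b, h, w, c).permute(0, 3, 1, 2)

# Usage: one memory per resolution scale. The retrieved features would then
# be combined by the paper's multi-scale attention fuser and decoded; the
# channel dims and slot count below are placeholders.
mems = nn.ModuleList([MemoryModule(100, dim) for dim in (64, 128, 256)])
feats = [torch.randn(2, 64, 32, 32),
         torch.randn(2, 128, 16, 16),
         torch.randn(2, 256, 8, 8)]
recalled = [m(f) for m, f in zip(mems, feats)]  # anomaly-suppressed features
```

Because each pixel-wise feature is rebuilt purely from prototypes of normality, anomalous regions cannot be faithfully reconstructed, which is what makes the reconstruction error usable for localization.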

Citations (5)
