
SepVAE: a contrastive VAE to separate pathological patterns from healthy ones (2307.06206v2)

Published 12 Jul 2023 in cs.CV and stat.ML

Abstract: Contrastive Analysis VAEs (CA-VAEs) are a family of Variational Auto-Encoders (VAEs) that aim to separate the factors of variation common to a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from the factors that exist only in the target dataset. To do so, these methods split the latent space into a set of salient features (i.e., specific to the target dataset) and a set of common features (i.e., present in both datasets). Current models fail both to effectively prevent information from being shared between the two latent spaces and to capture all salient factors of variation. To address this, we introduce two crucial regularization losses: a disentangling term between the common and salient representations, and a classification term between background and target samples in the salient space. We show better performance than previous CA-VAE methods on three medical applications and a natural-image dataset (CelebA). Code and datasets are available on GitHub: https://github.com/neurospin-projects/2023_rlouiset_sepvae.
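To make the objective concrete, here is a minimal sketch of a SepVAE-style training loss, not the authors' implementation (see their GitHub repository for that). It assumes a shared encoder producing Gaussian posteriors for a common code c and a salient code s, a shared decoder, a small classifier on the salient code to separate background from target samples, and a simple cross-covariance penalty standing in for the paper's disentangling term between the two representations. All class and parameter names (SepVAESketch, lam_clf, lam_dis, the dimensions) are hypothetical choices for illustration.

```python
# Minimal sketch of a SepVAE-style objective (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SepVAESketch(nn.Module):
    def __init__(self, x_dim=784, c_dim=16, s_dim=8, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.c_mu, self.c_logvar = nn.Linear(h_dim, c_dim), nn.Linear(h_dim, c_dim)
        self.s_mu, self.s_logvar = nn.Linear(h_dim, s_dim), nn.Linear(h_dim, s_dim)
        self.dec = nn.Sequential(nn.Linear(c_dim + s_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        self.clf = nn.Linear(s_dim, 1)  # background (0) vs. target (1) from salient code only

    @staticmethod
    def reparam(mu, logvar):
        # Reparameterization trick: sample z = mu + sigma * eps.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x, y, beta=1.0, lam_clf=1.0, lam_dis=1.0):
        h = self.enc(x)
        c_mu, c_lv = self.c_mu(h), self.c_logvar(h)
        s_mu, s_lv = self.s_mu(h), self.s_logvar(h)
        c, s = self.reparam(c_mu, c_lv), self.reparam(s_mu, s_lv)
        # Background samples should carry no salient information: zero out s when y == 0.
        s = s * y.view(-1, 1)
        x_hat = self.dec(torch.cat([c, s], dim=1))
        rec = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
        kl = lambda mu, lv: (-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(1)).mean()
        # Classification term: background vs. target must be separable in the salient space.
        clf = F.binary_cross_entropy_with_logits(self.clf(s_mu).squeeze(1), y)
        # Disentangling term (stand-in): penalize cross-covariance between common and salient codes.
        c_cent, s_cent = c - c.mean(0), s - s.mean(0)
        dis = (c_cent.t() @ s_cent / x.size(0)).pow(2).sum()
        return rec + beta * (kl(c_mu, c_lv) + kl(s_mu, s_lv)) + lam_clf * clf + lam_dis * dis

# Usage: x is a batch of flattened inputs, y is 0 for background (healthy) and 1 for target (patient).
model = SepVAESketch()
x, y = torch.randn(32, 784), torch.randint(0, 2, (32,)).float()
loss = model(x, y)
loss.backward()
```

The cross-covariance penalty is only a convenient surrogate here; the paper's actual disentangling term between common and salient representations differs, and the weighting hyperparameters would need tuning per application.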
