
Abstract

Nutrition information is crucial to precision nutrition and the food industry. The current food composition compilation paradigm relies on laborious, experience-dependent methods that struggle to keep pace with the dynamic consumer market, resulting in delayed and incomplete nutrition data. In addition, earlier machine learning methods either overlook the information in food ingredient statements or ignore the features of food images. To address this, we propose a novel vision-language model, UMDFood-VL, that uses front-of-package labeling and product images to accurately estimate food composition profiles. To support model training, we established UMDFood-90k, the most comprehensive multimodal food database to date, containing 89,533 samples, each labeled with an image, a text-based ingredient description, and 11 nutrient annotations. UMDFood-VL achieves a macro-AUCROC of up to 0.921 for fat content estimation, significantly higher than existing baseline methods and sufficient for the practical requirements of food composition compilation. Moreover, for up to 82.2% of selected products, the discrepancy between chemical analysis results and model estimates is less than 10%. This performance suggests the approach can generalize to other food- and nutrition-related data compilation tasks and catalyze the evolution of generative AI-based technologies in other food applications that require personalization.
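The abstract does not detail UMDFood-VL's internals, but the setup it describes (fusing product-image features with ingredient-statement text to predict 11 nutrient annotations, evaluated by macro-AUCROC) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `NutrientVLM` class name, the 512-dimensional CLIP-style embeddings, the choice of 5 content bins per nutrient, and the dummy labels are hypothetical, not the paper's actual architecture or data.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class NutrientVLM(nn.Module):
    """Hypothetical fusion model: image + ingredient-text embeddings -> per-nutrient bins."""
    def __init__(self, img_dim=512, txt_dim=512, hidden=256, n_bins=5, n_nutrients=11):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),  # concatenate modalities, then project
            nn.ReLU(),
            nn.Dropout(0.1),
        )
        # One classification head per nutrient (11 in UMDFood-90k), each over content bins.
        self.heads = nn.ModuleList([nn.Linear(hidden, n_bins) for _ in range(n_nutrients)])

    def forward(self, img_emb, txt_emb):
        h = self.fuse(torch.cat([img_emb, txt_emb], dim=-1))
        return torch.stack([head(h) for head in self.heads], dim=1)  # (batch, 11, n_bins)

# Toy batch: stand-ins for precomputed embeddings (a CLIP-like encoder is assumed).
model = NutrientVLM()
img_emb = torch.randn(8, 512)   # product-image embeddings
txt_emb = torch.randn(8, 512)   # ingredient-statement embeddings
probs = model(img_emb, txt_emb).softmax(dim=-1)   # (8, 11, 5)

# Macro-AUCROC for one nutrient head (e.g., fat), the metric the abstract reports.
fat_probs = probs[:, 0].detach().numpy()          # (8, 5) softmax over the 5 bins
fat_true = np.array([0, 1, 2, 3, 4, 0, 1, 2])     # dummy ground-truth content bins
macro_auc = roc_auc_score(fat_true, fat_probs, multi_class="ovr", average="macro")
print(f"macro-AUCROC (fat): {macro_auc:.3f}")
```

Note that framing nutrient estimation as classification over discretized content levels is what makes macro-AUCROC a meaningful metric here; a pure regression formulation would instead be scored with error-based measures, such as the under-10% deviation rate the abstract also cites.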
