Benchmark Leaderboard: Food Object Detection & Food Weight Estimation
This leaderboard evaluates submissions for the Food Portion Benchmark dataset using mAP@50 for bounding boxes and Mean Absolute Error (MAE) for food weight predictions.
Submissions are expected in CSV format with the columns: image_name, class_id, xmin, ymin, xmax, ymax, weight, conf.
The ground truth CSV (kept private) has the columns: image_name, class_id, xmin, ymin, xmax, ymax, weight.
Sample Submission Template
Download a sample CSV file to see the required format: Download Sample Submission CSV
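A minimal submission file with the required columns can be produced like this (the image names, class IDs, boxes, weights, and confidences below are purely illustrative, not real predictions):

```python
import csv

# Column order required by the leaderboard
COLUMNS = ["image_name", "class_id", "xmin", "ymin", "xmax", "ymax", "weight", "conf"]

# Illustrative rows only -- all values are made up for this example
rows = [
    {"image_name": "img_0001.jpg", "class_id": 4, "xmin": 35, "ymin": 60,
     "xmax": 410, "ymax": 392, "weight": 182.5, "conf": 0.91},
    {"image_name": "img_0001.jpg", "class_id": 17, "xmin": 420, "ymin": 55,
     "xmax": 610, "ymax": 300, "weight": 95.0, "conf": 0.78},
]

with open("submission.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

One row per detected food item; images with multiple detections simply repeat the image_name.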
Leaderboard
Evaluation Metrics
mAP@50 (Mean Average Precision at IoU 0.50):
This metric evaluates how well the predicted bounding boxes match the ground truth. In mAP@50, a prediction is considered a true positive if the Intersection over Union (IoU) between the predicted box and the ground truth box is at least 0.50. The final score is averaged across all classes and images, yielding a single value between 0 and 1, where a higher value indicates better localization performance.
Weight MAE (Mean Absolute Error):
This metric calculates the average absolute difference (in grams) between the predicted food weight and the actual weight provided in the ground truth. A lower MAE signifies more accurate weight predictions.
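The IoU test behind mAP@50 can be sketched as follows, with boxes in the same (xmin, ymin, xmax, ymax) format as the submission columns. This is a simplified illustration of the matching criterion, not the official pycocotools evaluation:

```python
def iou(box_a, box_b):
    """Intersection over Union for two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction counts as a true positive at mAP@50 only if IoU >= 0.50
pred = (10, 10, 110, 110)
gt = (20, 20, 120, 120)
print(iou(pred, gt))  # 8100 / 11900, roughly 0.68 -> true positive
```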
Benchmark Dataset
The Food Portion Benchmark dataset is a comprehensive dataset for evaluating object detection and food weight estimation models. Here are some key details:
Dataset Composition:
It contains 14,083 RGB images of food items spanning 133 distinct classes. For each food item, the dataset includes manually annotated bounding boxes and precise weight measurements.
Portion Sizes:
Each food item is represented with annotations for three different portion sizes (big, average, small), reflecting the real-world variation in food serving sizes.
Annotations:
The ground truth annotations include the food item's image name, class, bounding box coordinates in YOLO format, and weight in grams.
Reference and Access:
You can explore and download the dataset on Hugging Face at the following link:
Food Portion Benchmark Dataset
Additional Notes
Submission Requirements:
Prediction CSV files must contain: image_name, class_id, xmin, ymin, xmax, ymax, weight, conf
Evaluation Process:
- mAP@50 is computed via the COCO evaluation API (the pycocotools library), which compares the predicted bounding boxes (along with their confidence scores) to the ground truth annotations.
- Weight MAE is computed using sklearn's mean_absolute_error function.
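For reference, the MAE computation is a one-liner; the sketch below is a dependency-free equivalent of sklearn's mean_absolute_error, with made-up weights for illustration:

```python
def weight_mae(y_true, y_pred):
    """Mean absolute error in grams, equivalent to sklearn's mean_absolute_error."""
    assert len(y_true) == len(y_pred) and y_true, "need paired, non-empty sequences"
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

true_weights = [150.0, 80.0, 210.0]   # illustrative ground-truth grams
pred_weights = [140.0, 95.0, 200.0]   # illustrative predictions
print(weight_mae(true_weights, pred_weights))  # (10 + 15 + 10) / 3, about 11.67
```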
Contact Information:
For any questions regarding the dataset or evaluation methodology, please refer to the dataset documentation on Hugging Face or contact our support team.
Submit your prediction CSV file and model metadata.
Citation
If you use the Food Portion Benchmark dataset in your research, please cite our work as follows:
@misc{foodportionbenchmark2025,
title={Paper Title},
author={Authors},
year={2025},
note={Under Review}
}