Last month, from July 7 to 11, Wageningen University & Research hosted the Fishery Monitoring Hackathon 2025, a five-day event co-organised by the EVERYFISH and OptiFish projects. Both projects, funded by the European Union, aim to advance automation and data-driven approaches for sustainable fisheries management.
This collaborative hackathon gathered researchers, developers, and fisheries experts to address practical challenges in fisheries monitoring through machine-vision technologies. The event was designed to foster innovation, encourage collaboration, and provide a hands-on environment for participants to develop and share solutions.

Tackling Real-World Challenges in Fisheries Monitoring
The hackathon centred on four main themes, each targeting a specific aspect of machine-vision applications in fisheries:
- Fish Detection: Identifying and segmenting fish in images using object detection and classification techniques.
- Weight Estimation: Estimating fish weight based on image data and regression models.
- Length Estimation: Measuring fish length through image analysis and geometric relationships between key points on the fish, such as the eye, mouth, tail, and fin.
- Tracking: Developing algorithms to track individual fish across video frames for accurate counting.
To support these computationally demanding tasks, the Research IT Solutions team provided access to Wageningen’s Anunna High-Performance Computing resources, including GPU clusters and dedicated technical support, enabling participants to efficiently train their machine-learning models.
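To give a flavour of the geometry behind the length theme, here is a minimal sketch of estimating length from two detected keypoints. All names and values are illustrative assumptions, and it presumes a fixed-distance camera setup with a known pixel scale:

```python
import math

# Hypothetical keypoints (pixel coordinates) detected on a single fish,
# e.g. by a keypoint-detection head; names and values are illustrative only.
keypoints = {"eye": (120.0, 80.0), "tail": (420.0, 100.0)}

# Assumed calibration: pixels per centimetre at the fish's distance from
# the camera (a fixed-distance setup makes this a constant).
PIXELS_PER_CM = 10.0

def length_from_keypoints(kps, px_per_cm):
    """Estimate body length as the eye-to-tail Euclidean distance, rescaled to cm."""
    (x1, y1), (x2, y2) = kps["eye"], kps["tail"]
    return math.hypot(x2 - x1, y2 - y1) / px_per_cm

print(round(length_from_keypoints(keypoints, PIXELS_PER_CM), 1))  # 30.1
```

Real pipelines must also handle missing keypoints and varying camera distance, which is exactly what made this theme challenging.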
Team Projects and Outcomes
Fish Detection and Segmentation
The first team investigated how an AI model trained on one dataset (the “source domain”) performs when applied to a different, unseen dataset (the “target domain”). They explored the concept of “fine-tuning,” briefly retraining the model with a small number of new images to improve adaptability.
Using YOLOv9 as their base model, this team found that fine-tuning enhanced detection accuracy, but that its success depended on the similarity between the source and target datasets. They also introduced labelling innovations that enabled the AI to detect partially obscured fish more accurately.
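Detection accuracy in experiments like this is conventionally scored with intersection-over-union (IoU) between predicted and ground-truth boxes. This is a standard measure, not necessarily the team's exact evaluation protocol; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction typically counts as correct when IoU with a ground-truth
# box exceeds a threshold (0.5 is a common default).
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```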

Fish Tracking in Video Footage
The second team focused on the challenge of tracking multiple fish simultaneously through video. Overlapping fish, occlusions, and identity switches all make accurate tracking difficult.
They compared two methods for object detection: YOLO (trained specifically for fish) and Grounding DINO (a general-purpose foundation detection model). For tracking, they implemented SAM2MOT (Segment Anything 2 Multi-Object Tracking), a recent framework that leverages segmentation masks to improve multi-object tracking accuracy. They also adopted the advanced HOTA evaluation metric to pinpoint where and how each tracking approach succeeds or fails.
The team found that Grounding DINO worked well in simple scenarios but struggled with occlusions, while the specialised YOLO model provided more consistent results. Combining YOLO with SAM2MOT showed promise for robust fish tracking in complex environments.
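The association step at the heart of tracking-by-detection can be sketched with a deliberately simple greedy centroid matcher. This is only an illustration of the concept, not SAM2MOT (which relies on segmentation masks and far richer cues); identity switches occur precisely where matching this naive fails:

```python
import math

def assign_tracks(prev_centroids, new_centroids, max_dist=50.0):
    """Greedily match previous track centroids to new detections by distance.

    prev_centroids: {track_id: (x, y)}; new_centroids: list of (x, y).
    Unmatched detections would spawn fresh track IDs (omitted here).
    """
    pairs = sorted(
        (math.dist(p, n), tid, j)
        for tid, p in prev_centroids.items()
        for j, n in enumerate(new_centroids)
    )
    used_t, used_d, matches = set(), set(), {}
    for dist, tid, j in pairs:
        if dist > max_dist or tid in used_t or j in used_d:
            continue
        matches[tid] = j  # track tid continues as detection j
        used_t.add(tid)
        used_d.add(j)
    return matches

prev = {1: (100.0, 50.0), 2: (200.0, 80.0)}
new = [(205.0, 78.0), (104.0, 53.0)]
print(assign_tracks(prev, new))  # {1: 1, 2: 0}
```

When two fish overlap, their centroids converge and a matcher like this can swap their identities, which is why mask-level methods such as SAM2MOT are attractive.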

Weight Estimation from Images
The third team tackled estimating fish weight – a critical but challenging task, as weight must be inferred indirectly from image features like size and shape.
Using datasets including the public FDWE and a subset of the newly collected Hirtshals dataset, they trained models using two approaches: ResNet-based regression augmented with scale information, and a YOLOv5-based model with a regression head that predicts weight.

Their experiments highlighted that incorporating physical properties such as area, shape, and scale improved model generalisation across different datasets. The team emphasised the importance of dataset diversity and calibration for practical deployment.
Length Estimation Using AI
The fourth team worked on estimating fish length, addressing challenges such as varying camera distances and fish occlusion.
Using four datasets, including the variable-distance Cukurova dataset and three fixed-distance sets, they tested several methods:
- Eye-to-fish ratio calculations using YOLO detections.
- Mask-based regression models predicting length directly from segmentation masks.
- Depth estimation techniques to introduce 3D context.

The CNN-based mask regression model performed best overall. On datasets with fixed camera distances, the team achieved high accuracy (e.g., MAE of 0.69 cm and R² of 95.2% on the Autofish dataset). However, occlusions remained difficult, and the team suggested further exploration of shape completion and mask autoencoders to improve results.
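The geometric intuition behind measuring a fish from its segmentation mask can be illustrated with a hand-rolled principal-axis measurement. The team's best model was a learned CNN regression, so this sketch only conveys the underlying idea; the mask below is a made-up blob:

```python
import math

def mask_length_px(pixels):
    """Approximate length as the extent of a binary mask along its principal
    axis (a 2x2 PCA computed by hand)."""
    n = len(pixels)
    mx = sum(x for x, _ in pixels) / n
    my = sum(y for _, y in pixels) / n
    sxx = sum((x - mx) ** 2 for x, _ in pixels) / n
    syy = sum((y - my) ** 2 for _, y in pixels) / n
    sxy = sum((x - mx) * (y - my) for x, y in pixels) / n
    # Orientation of the dominant eigenvector of the covariance matrix.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    ux, uy = math.cos(theta), math.sin(theta)
    proj = [(x - mx) * ux + (y - my) * uy for x, y in pixels]
    return max(proj) - min(proj)

# Illustrative mask: a thin horizontal blob of foreground pixel coordinates.
mask = [(x, y) for x in range(100, 200) for y in range(50, 60)]
print(round(mask_length_px(mask)))  # 99
```

An occluded fish breaks this measurement immediately, since the visible mask is shorter than the body, which motivates the shape-completion and mask-autoencoder ideas the team proposed.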
Final Presentations and Awards
On the last day, each team presented its solution, which was evaluated on innovation, feasibility, and potential impact. The weight estimation team received the Best Presentation award, while the length estimation team won Best Innovation. All participants were recognised for their contributions and received a small token of appreciation.

Balancing Work with Community Engagement
Alongside the technical challenges, the hackathon emphasised community and well-being. Participants took part in outdoor activities such as volleyball and football matches and enjoyed social gatherings like a barbecue dinner. These moments reinforced the importance of collaboration and team spirit in sustaining innovation over time.

Looking Forward
The Fishery Monitoring Hackathon demonstrated the value of collaboration across projects and disciplines to advance sustainable fisheries management. By bringing together diverse expertise and leveraging technology, the event reinforced the potential of machine vision to provide practical tools for the fisheries sector.
Wageningen University & Research thanks all participants, mentors, and partners whose contributions made this hackathon possible.

Text and photographs courtesy of Wageningen University & Research.
The article was edited by reframe.food.