Autonomous temporal pseudo-labeling for fish detection
Veiga, R.J.M.; Ochoa, I.E.; Belackova, A.; Bentes, L.; Silva, J.P.; Semião, J.; Rodrigues, J.M.F. (2022). Autonomous temporal pseudo-labeling for fish detection. Applied Sciences-Basel 12(12): 5910. https://dx.doi.org/10.3390/app12125910
In: Applied Sciences-Basel. MDPI: Basel. e-ISSN 2076-3417
Keywords
Pisces [WoRMS]; Marine/Coastal
Author keywords
environmental monitoring; marine fishes; object detection; fish detection; pseudo-labeling; underwater video; deep learning
Authors
- Veiga, R.J.M.
- Ochoa, I.E.
- Belackova, A.
- Bentes, L.
- Silva, J.P.
- Semião, J.
- Rodrigues, J.M.F.
Abstract
The first major step in training an object detection model on new classes is the gathering of meaningful and properly annotated data. This recurring task determines the length of any project and, more importantly, the quality of the resulting models. The obstacle is amplified when the data available for the new classes are scarce or incompatible, as in the case of fish detection in the open sea. This issue was tackled with a mixed, reversed approach: a network is initialized with a noisy dataset of the same superclass as our target (fish), although from different scenarios and conditions (Australian marine fauna), while the target footage for the application (Portuguese marine fauna; Atlantic Ocean) is gathered without annotations. Using the temporal information of the detected objects and augmentation techniques during later training, it was possible to generate highly accurate labels from the targeted footage. Furthermore, the data selection method retained samples of each unique situation while filtering out repetitive data, which would otherwise bias the training process. The obtained results validate the proposed method of automating the labeling process by resorting directly to the final application as the source of training data. The presented method achieved a mean average precision of 93.11% on our own data and 73.61% on unseen data, increases of 24.65% and 25.53% over the noisy-dataset baseline, respectively.
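The core idea of the abstract — keeping only detections that persist across neighbouring video frames as pseudo-labels — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the two-frame lookahead window, and the IoU and persistence thresholds are all assumptions chosen for clarity.

```python
# Hypothetical sketch of temporal pseudo-label filtering: a detection is
# kept as a pseudo-label only if a well-overlapping box also appears in
# nearby frames, using temporal persistence as a proxy for reliability.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def temporal_pseudo_labels(frames, iou_thr=0.5, min_persistence=2):
    """frames: list of per-frame detection lists [(box, score), ...].
    A detection becomes a pseudo-label if a box overlapping it with
    IoU >= iou_thr appears in at least `min_persistence` of the
    neighbouring frames (two back, two ahead)."""
    labels = []
    for t, dets in enumerate(frames):
        kept = []
        for box, score in dets:
            support = 0
            for dt in (-2, -1, 1, 2):  # temporal neighbourhood
                u = t + dt
                if 0 <= u < len(frames):
                    if any(iou(box, other) >= iou_thr
                           for other, _ in frames[u]):
                        support += 1
            if support >= min_persistence:
                kept.append((box, score))
        labels.append(kept)
    return labels
```

A detection that flickers into a single frame (a likely false positive) gathers no temporal support and is dropped, while a fish tracked across consecutive frames survives into the pseudo-labeled training set. A real pipeline would also need the duplicate-frame filtering the abstract mentions, which this sketch omits.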