• AI & Data Science
October 21, 2020

An ODF Update

As part of Combine’s continued involvement in Ocean Data Factory Sweden, our team has collaborated with our partners to apply machine learning techniques to the compelling use case of underwater species detection in the Kosterhavet National Park on Sweden’s west coast.

Use case 2: Automatic species identification in the Kosterhavet National Park.

Figure 1: Kosterhavet National Park Map

Kosterhavet National Park is one of Sweden’s most important and unique marine environments: it is both the country’s first marine national park and its most species-rich. The area obtained official protected status in 2009, and monitoring changes in its marine environment has since become a top national priority. Understanding this complex ecosystem and its development in light of a warming planet and increased human activity is crucial to ensuring its survival for generations to come.

Initial problem formulation:

Researchers studying changes in the marine ecosystem at Kosterhavet National Park over the last 30 years face many challenges, including:

· Storing and accessing observation data in a centralised and standardised way
· Identifying species when most of the captured footage contains little to no fauna/flora
· Analysing past footage with poor visibility conditions and/or low camera resolution

In the past, this footage has been examined and annotated manually by experts in marine biology. Given the advances in data science, this is no longer an efficient use of their time, and we believe that machine learning techniques can help automate the parts of the process that slow it down the most.

Initial Research Question

Could we set up the data infrastructure for a highly performant object detection model to help us detect an important habitat-building marine species (Lophelia pertusa) in footage in real time?

Figure 2: An example of Lophelia pertusa

Data collection

Figure 3: ROV used for monitoring seafloor

For the last 30 years, the scientists at the Tjärnö Marine Laboratory have used Remotely Operated Vehicles (ROVs) and underwater cameras to monitor the area around Kosterhavet National Park. The movies and images available to us today not only show an otherwise unseen part of the national park, but can also take us 30 years back in time. In our research, we want to study how climate change and human activity influence the fauna in this area, and what positive effects the area’s protected status as a national park has had on seafloor habitats.

Data Annotation Workflow

Citizen science platforms allow researchers to educate others on topical issues such as marine biodiversity and environmental management, whilst also benefiting from crowd-sourced data annotation. In this project, we used the Zooniverse platform, which allowed us to upload the captured footage directly and present these as part of online workflows for citizen scientists.
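For readers curious about the mechanics, here is a minimal sketch of how clips can be uploaded programmatically using the open-source Panoptes Python client that powers Zooniverse. The credentials, file paths, and batch name below are placeholders; the project slug is taken from the Zooniverse project URL in the references.

```python
# Minimal sketch of programmatic subject upload with the Panoptes
# Python client (pip install panoptes-client). Credentials, file
# paths, and the batch name are placeholders.
from panoptes_client import Panoptes, Project, Subject, SubjectSet

Panoptes.connect(username="your_username", password="your_password")

# Look up the Zooniverse project and create a subject set for new clips
project = Project.find(slug="victorav/the-koster-seafloor-observatory")
subject_set = SubjectSet()
subject_set.links.project = project
subject_set.display_name = "ROV clips, example batch"
subject_set.save()

# Each 10-second clip becomes one subject with searchable metadata
for clip_path in ["clips/clip_0001.mp4", "clips/clip_0002.mp4"]:
    subject = Subject()
    subject.links.project = project
    subject.add_location(clip_path)
    subject.metadata["filename"] = clip_path
    subject.save()
    subject_set.add(subject)
```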

The data annotation process consists of two separate workflows on Zooniverse. The first presents a 10-second video clip and, with the help of an extensive tutorial (shown in Figure 4), guides annotators to select the species they see in the clip. Annotators may also record the time at which a species first appears in the clip.

Figure 4: Screenshot from workflow 1 of Koster Seafloor Observatory on Zooniverse

Using the information from the first workflow, we can significantly reduce the amount of footage (by excluding all clips in which no species were identified) and extract relevant frames for each species based on an aggregation of the citizen scientists’ input.
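As an illustration of this aggregation step, the sketch below pools the labels that several annotators gave each clip and keeps only clips with a clear consensus. The function name, the minimum of three votes, and the 80% agreement threshold are illustrative assumptions, not the project’s actual settings.

```python
# Illustrative consensus rule for citizen-science clip classifications.
# The vote minimum and agreement threshold are example values only.
from collections import Counter, defaultdict

def aggregate_clips(classifications, min_votes=3, agreement=0.8):
    """classifications: iterable of (clip_id, species_label) pairs,
    where species_label is None if the annotator saw no fauna/flora.
    Returns {clip_id: species} for clips passing the consensus check."""
    votes = defaultdict(list)
    for clip_id, species in classifications:
        votes[clip_id].append(species)

    consensus = {}
    for clip_id, labels in votes.items():
        if len(labels) < min_votes:
            continue  # too few annotations to trust this clip yet
        top_label, count = Counter(labels).most_common(1)[0]
        if top_label is not None and count / len(labels) >= agreement:
            consensus[clip_id] = top_label
    return consensus
```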

Figure 5: Screenshot from workflow 2 of Koster Seafloor Observatory on Zooniverse

The second workflow then prompts annotators to draw bounding boxes around the species they identified earlier (see Figure 5). These annotations are then aggregated and used to train the object detection model.
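A common way to aggregate boxes from several annotators is to group those that overlap strongly and average each group. The sketch below shows that idea; the greedy grouping rule and the 0.5 IoU threshold are illustrative, not the project’s exact aggregation logic.

```python
# Illustrative aggregation of annotator bounding boxes: group boxes
# whose IoU exceeds a threshold, then average each group's corners.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_boxes(boxes, iou_thresh=0.5):
    """Greedily cluster overlapping boxes and average each cluster."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if iou(box, cluster[0]) >= iou_thresh:
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return [tuple(sum(b[i] for b in cl) / len(cl) for i in range(4))
            for cl in clusters]
```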

Data Infrastructure

Setting up the infrastructure for storing and retrieving footage and metadata is a crucial part of ensuring the project’s longevity and scalability. For this purpose, we used a high-performance, project-specific Linux server hosted by Chalmers University of Technology in Gothenburg. We also created an SQLite database to keep track of information about the movies and the classifications provided by both citizen scientists and machine learning algorithms. The database follows the Darwin Core (DwC) standard to maximise the sharing, use and reuse of open-access biodiversity data.
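As a minimal illustration, the snippet below creates a small SQLite table whose columns follow standard DwC terms. The table layout, identifiers, and example values are ours for illustration; the project’s actual, more extensive schema lives in the system data-flow repository listed below.

```python
# Minimal sketch of a Darwin Core-aligned occurrence table in SQLite.
# Column names follow standard DwC terms; all example values are
# illustrative, and the real project schema is more extensive.
import sqlite3

conn = sqlite3.connect("koster_observatory.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS occurrences (
        occurrenceID      TEXT PRIMARY KEY,   -- dwc:occurrenceID
        scientificName    TEXT NOT NULL,      -- dwc:scientificName
        eventDate         TEXT,               -- dwc:eventDate (ISO 8601)
        decimalLatitude   REAL,               -- dwc:decimalLatitude
        decimalLongitude  REAL,               -- dwc:decimalLongitude
        basisOfRecord     TEXT,               -- e.g. 'MachineObservation'
        identifiedBy      TEXT                -- citizen scientists or model
    )
""")
conn.execute(
    "INSERT OR IGNORE INTO occurrences VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("mov42-frame1031-det1", "Lophelia pertusa", "2019-06-14",
     58.88, 11.04, "MachineObservation", "YOLOv3 model"),
)
conn.commit()
```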

Automatic object detection model

The machine learning model chosen for this task is generally referred to as a single-shot detector: it looks at the entire image once and predicts all objects in it simultaneously, as opposed to region-based models, which first identify regions of interest and then detect features within those regions as a second step. This makes detection faster in most cases and allows such models to run in near real time. For our object detection model, we used the YOLOv3 architecture by Redmon and Farhadi (2018) [5]. The third iteration of YOLO essentially uses the same building blocks as its predecessors, with Darknet-53 as the feature extractor, but adds useful enhancements such as detection at three different scales, which improves performance on smaller objects. We based our model on an open-source implementation so that our results can be easily replicated and kept up to date with future YOLO releases.

Figure 6: YOLOv3 model architecture
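To make the inference step concrete, here is a hedged sketch of running a trained YOLOv3 network with OpenCV’s DNN module, which reads the standard Darknet cfg/weights format. The file names, the 416x416 input size, and both thresholds are typical defaults rather than the project’s exact configuration.

```python
# Sketch of YOLOv3 inference via OpenCV's DNN module. File names and
# thresholds are typical defaults, not the project's exact settings.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-koster.cfg", "yolov3-koster.weights")
out_layers = net.getUnconnectedOutLayersNames()  # the three detection scales

frame = cv2.imread("frame.jpg")
h, w = frame.shape[:2]

# Resize to the network input and scale pixel values to [0, 1]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
net.setInput(blob)
outputs = net.forward(out_layers)  # one output array per scale

boxes, confidences = [], []
for output in outputs:
    for det in output:                   # det = [cx, cy, bw, bh, obj, class...]
        confidence = float(det[4] * det[5:].max())
        if confidence > 0.3:             # illustrative confidence threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)

# Suppress overlapping detections of the same coral colony (NMS)
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.3, 0.45)
final_boxes = [boxes[i] for i in np.array(keep).flatten()]
```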

See the model in action (video)

Visualising our overall workflow

Figure 7: Flowchart showing the entire Koster Seafloor Observatory as an end-to-end process

Tools in development and education

In line with our goals at ODF to make our project outputs accessible and maximally flexible, we have created a web app that allows non-technical audiences to upload their own footage of Lophelia pertusa corals and obtain predictions directly from the model. To build intuition for how the model works, users can also tweak key parameters that affect its output, such as the confidence threshold for showing a detection and the overlap (IoU) threshold used to suppress duplicate bounding boxes.

Figure 8: Screenshot from the web app (to be released soon)
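As a hypothetical illustration of those two user-facing parameters, the sketch below exposes them as sliders in a Streamlit-style interface; the framework choice, defaults, and hard-coded detections are our assumptions, used only to make the sliders’ effect visible.

```python
# Hypothetical Streamlit-style sketch of the two user-facing knobs.
# The hard-coded detections stand in for real model output.
import streamlit as st

st.title("Lophelia pertusa detector (demo)")

# Minimum confidence for a detection to be displayed at all
conf_thresh = st.slider("Confidence threshold", 0.0, 1.0, 0.5, 0.05)
# How much two boxes may overlap before the weaker one is suppressed
iou_thresh = st.slider("Overlap (IoU) threshold", 0.0, 1.0, 0.45, 0.05)

# Stand-in detections: (x1, y1, x2, y2, confidence)
detections = [(120, 80, 260, 210, 0.91), (400, 300, 480, 370, 0.34)]
visible = [d for d in detections if d[4] >= conf_thresh]
st.write(f"{len(visible)} of {len(detections)} detections shown "
         f"(overlap threshold {iou_thresh:.2f} would be passed to NMS).")
```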

Code repositories

We open-source all our code in line with ODF objectives. Here are the repositories associated with the Koster Seafloor Observatory:

· GitHub repository for the machine learning model
· GitHub repository for the system data flow

Key Takeaways

· Our object detection model was able to analyse 150 hours’ worth of footage in just 30 hours (five times faster than real time).
· Our open-source database is now a valuable resource containing well-documented observation data for future research on marine environment management in this region and beyond.
· With the model successfully validated on Lophelia pertusa, there is room to expand to other important habitat-building species, and to keep adding footage as more protected areas join the monitoring programme.

The challenges ahead

No machine learning model is ever perfect, and as such, we still face many challenges that will be addressed in future iterations, including:

· Dealing with low-confidence or low-consensus predictions
· Improving underwater image quality for older footage
· Expanding the model to include more key species
· Improving model confidence by feeding the model’s predictions back into future training data

References

[1] https://www.sverigesnationalparker.se/park/kosterhavets-nationalpark/besoksinformation/hitta-hit/

[2] https://oceana.org/marine-life/corals-and-other-invertebrates/lophelia-coral

[3] https://www.zooniverse.org/projects/victorav/the-koster-seafloor-observatory/about/research

[4] https://www.zooniverse.org/projects/victorav/the-koster-seafloor-observatory/

[5] Redmon, J. and Farhadi, A. (2018). YOLOv3: An Incremental Improvement. https://pjreddie.com/media/files/papers/YOLOv3.pdf

 

For more information on ODF and the work we do, visit the ODF Sweden website.