Mark Clements, Editor-in-Chief, Poultry: I'm Mark Clements, Editor-in-Chief, Poultry, with WATT Global Media.
Earlier this year, I visited the agricultural trade show SPACE and was fortunate to speak to Léane Gernigon, data scientist with award-winning digital technology company Adventiel, whom I'm delighted to welcome to this edition of the Future of Poultry Podcast.
For those of you not familiar with Brittany-based Adventiel, the company specializes in digital technology in the agriculture and food supply chain.
Now, when I first met Léane earlier this year, we talked about an Adventiel initiative called EARWISE, which stands for Equipment and Animal Recognition With Intelligent Sound Evaluation.
EARWISE listens to the sounds coming from a broiler or layer flock, or any other group of farmed animals, and interprets what it hears, alerting producers to any issues that it may detect. This allows flock managers to better understand animal health and welfare, a flock’s activity and also house security.
Léane, lovely to speak to you again. Welcome to the podcast, and thanks for agreeing to talk to us about EARWISE.
Gernigon: Hello Mark and hello to everybody listening to us today.
Clements: Lovely to have you here, Léane. Now, perhaps you could tell us a little bit about yourself, firstly, and what's your interest in animal production?
Gernigon: Yes, of course. So I hold a degree in agricultural engineering, and currently, as you said, I work as a data scientist at a French digital service company called Adventiel.
Our company specializes in developing tailor-made solutions for the agricultural and agrifood sectors. Our expertise ranges from project scoping to the development of applications and web platforms, not forgetting, of course, the creation of AI models, which is my work. We are committed to delivering comprehensive solutions that meet the specific needs of our clients.
To talk a bit about animal production, we have seen for several years now that animal welfare has emerged as a significant concern within both the scientific community and broader society. In response to growing consumer demands for ethical practices in livestock farming, there is a heightened need to develop effective and innovative methods for monitoring and ensuring the wellbeing of animals.
Clements: Could you tell us how EARWISE came about, and explain what your involvement in it is?
Gernigon: While video surveillance has traditionally been the primary approach for assessing animal conditions, the potential of audio data in contributing to this endeavor has remained largely untapped.
The environment in which animals live is rich in vocalizations, each of which can provide important information about animal wellbeing and emotional state.
So, since 2017 at Adventiel, we have believed that real-time detection and analysis of a variety of vocalizations, such as cries, grunts or other health-related sounds, promises to provide accurate and timely indicators of animal welfare in various sectors of animal production. Since then, we have undertaken extensive work in sound monitoring, and our expertise in this area has grown through the development of a comprehensive solution called EARWISE.
The EARWISE project really took off in 2019 thanks to an internal innovation competition at Adventiel. This competition gives employees the opportunity to present their ideas, and the winning project receives funds to move from concept to realization. So this is how the idea for a sound monitoring tool evolved into a concrete solution.
Clements: Okay, so if I understood correctly, there are various modules to EARWISE. Could you explain those to us?
Gernigon: Yes, there are two different modules in EARWISE, each addressing a distinct aspect of sound-based welfare monitoring. The first module empowers anyone to annotate preselected sounds captured by an algorithm we have built at Adventiel. The second module uses an audio event recognition model that we built with the annotated data from the first module, and it makes real-time predictions. Event data are time-stamped and saved, and audio recordings are retained only for unrecognized or explicitly requested events. By employing edge computing and strategic data retention, the model strikes an optimal balance between effective event detection and safeguarding data privacy.
So this unique combination of technical and audio data analysis facilitates swift problem detection and proactive intervention. This module can also incorporate an alert system that is triggered when certain events, such as coughing, become too frequent, so you can act quickly.
Additionally, there is a third module, a data visualization module, which allows users to see the probability of each prediction as well as technical sound characteristics, such as frequency or amplitude. All of that information can be visualized in a user-friendly interface, and this visualization particularly helps to identify temporal anomalies.
In the future, we will add a fourth module, where farmers can share their data with veterinarians or partners to establish a diagnosis in complex situations. Data holds significant value, especially in cases with a substantial historical context related to health or equipment; we can think of partner veterinary laboratories that could use this data to assess treatment effectiveness or to adjust protocols based on the evolution of sound indicators. And, of course, since we offer tailored solutions, each of these modules is adapted to meet the specific needs of each client.
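For technically minded listeners, here is a minimal sketch in Python of how the kind of frequency-based alert Léane describes might work: time-stamped event predictions are counted over a sliding window, and an alert fires when a given event, such as coughing, becomes too frequent. The class name, event format and thresholds are illustrative assumptions, not details of Adventiel's implementation.

```python
# Sketch only: a sliding-window counter over time-stamped event predictions.
from collections import deque
from datetime import datetime, timedelta

class FrequencyAlert:
    def __init__(self, event_label, max_events, window_minutes):
        self.event_label = event_label          # e.g. "cough" (assumed label)
        self.max_events = max_events            # events tolerated per window
        self.window = timedelta(minutes=window_minutes)
        self.timestamps = deque()               # recent matching events

    def observe(self, label, timestamp):
        """Record one prediction; return True if an alert should be raised."""
        if label != self.event_label:
            return False
        self.timestamps.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_events

# Example: alert if more than 20 coughs are detected within 10 minutes.
alert = FrequencyAlert("cough", max_events=20, window_minutes=10)
if alert.observe("cough", datetime.now()):
    print("Alert: coughing frequency exceeded threshold")
```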
Clements: When we spoke at SPACE back in September, you mentioned that data needs to be annotated and that this can be a hugely time-consuming process, but that EARWISE uses something you call clustering to reduce the annotation time. Could you tell us about that?
Gernigon: So to build a model, we need to customize it with the client's specific labels. Those labels are small pieces of sound where we say: this segment is a cough, this one is the feeding system. But unlike speech recognition tools such as Alexa or Google Home, which were trained on large databases built from movie subtitles, there is no such annotated data for animal vocalizations or machine sounds, so we needed to find a way to produce those annotations. This gap between animal vocalization and speech recognition posed a challenge, as creating labeled data manually is really time consuming. To address this, at Adventiel we developed a tool that accelerates the annotation process by clustering similar sounds, allowing domain experts to label whole clusters efficiently: if the first sounds of a cluster all share the same label, we can assume that the other sounds in that cluster, even if we haven't listened to them, have the same label, and when differences arise within a cluster, we can split it into smaller groups until they become homogeneous. This approach minimizes annotation time and helps create a robust audio library for training an accurate model across various farm environments.
However, not all events occur frequently. For instance, a cough may be rare, requiring hours of recordings to identify specific events across batches. Also, vocalizations can change as an animal ages, and all of this contributes to increasing the annotation time needed to capture the full range of vocalizations. Furthermore, to generalize the model's predictions across different farms, we need data from diverse locations. And since a poultry batch lasts about three months, annotating multiple batches across farms would demand thousands of hours of listening, which is not feasible.
When we began working on sounds at Adventiel, there was strong interest in this topic, but clients really hesitated at the annotation phase because of the time required. So at Adventiel, we recognized that the quality and quantity of annotated data were essential, and this is why we created our tool to streamline the process, allowing us to generate high-quality data efficiently and to build an accurate model without extensive manual effort.
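For readers curious how clustering can cut annotation time, here is a minimal sketch assuming each sound segment has already been reduced to a fixed-length feature vector (for example, summary statistics of its spectrum). The expert names a cluster after listening to a few of its segments, and the label is propagated to the rest. The function names and the use of k-means are assumptions made for illustration, not Adventiel's actual pipeline.

```python
# Sketch only: clustering-assisted annotation on precomputed feature vectors.
import numpy as np
from sklearn.cluster import KMeans

def cluster_segments(features, n_clusters):
    """Group sound segments (rows of `features`) into n_clusters clusters."""
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return model.fit_predict(features)

def propagate_labels(cluster_ids, expert_labels):
    """Assign each segment the label the expert gave to its cluster."""
    return [expert_labels.get(c, "unlabeled") for c in cluster_ids]

# Example with random stand-in features: 200 segments, 32 features each.
features = np.random.rand(200, 32)
cluster_ids = cluster_segments(features, n_clusters=5)
# The expert listens to a handful of segments per cluster and names them;
# heterogeneous clusters would be re-clustered into smaller groups.
expert_labels = {0: "cough", 1: "feeding_system", 2: "vocalization"}
labels = propagate_labels(cluster_ids, expert_labels)
```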
Clements: EARWISE is subject to ongoing testing in the field, and while I understand that there may be confidentiality around this, could you tell us what you're learning now that the system is being deployed on farm?
Gernigon: So I think you may know that farms are generally very poorly connected to the network, which is why we had to be able to deploy solutions with a local model rather than in the cloud. Therefore, it was essential to build compact models and an architecture simple enough to make real-time predictions without delays of several hours. We managed to overcome these issues, and today we are testing our first models under real conditions. Our clients are really looking forward to seeing all the information they will be able to extract.
Clements: In practical terms, how might a farmer use EARWISE, and what would the benefits be of using it? And what would be the specific conditions that EARWISE would be listening out for?
Gernigon: A farmer can use EARWISE to identify when the animals begin to show signs of illness. For instance, the first cough may occur at night when the farmer is not present in the building. If EARWISE can raise an alert two or three days before the moment the farmer would notice the first signs of illness, the farmer can take action more swiftly, and this allows the farmer to reduce the impact on his animals. The presence of a tool like EARWISE also helps reduce the farmer's mental burden. EARWISE can also be used to monitor indicators of animal wellbeing or equipment failures, such as the shutdown of a fan. One of the projects due to be launched next year is called Acoust’Chick. It's a project led by ITAVI and IDELE, two major French institutes specializing in livestock research, with ITAVI focusing more specifically on poultry. The aim of this project is to identify welfare indicators using acoustic data by exploring how sound patterns can reveal information about the welfare of farm animals.
Clements: I wanted to finish by asking whether, as EARWISE develops and the software learns, will it be able to identify ever more conditions with ever more precision? And what can we expect from EARWISE in the future, or indeed from Adventiel?
Gernigon: As I mentioned earlier, EARWISE is a tailored solution that we adapt to each of our clients based on their data and annotations. The quality and quantity of annotated data greatly impact the model’s results. Two projects with a similar issue but different data quality may lead to very different model performance. Therefore, at this time, we do not plan to sell a pre-trained EARWISE solution, because our clients always retain ownership of their data and annotations, and also because we cannot assess the quality of their annotations.
However, for the same client, it is possible to improve a model through what's called continuous learning. The model’s training can be enhanced with newly annotated data, and it is also possible to use the model’s outputs to select samples for re-annotation. We also want to add a new feature that will enable us to perform unsupervised monitoring by automatically detecting changes in the sound ambience without presuming what is happening, so without giving a label to the event. This self-learning system will allow us to automatically trigger alerts, which can then be analyzed by the farmer. I should also tell you that we are not working directly with farmers. Instead, we collaborate with intermediary businesses, such as cooperatives, pharmaceutical suppliers or veterinarians. These partners bring their industry expertise and insights, allowing us to adapt EARWISE to the specific needs of the sector.
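As an illustration of the kind of unsupervised ambience monitoring Léane describes, here is a minimal sketch that flags deviations from a rolling baseline of sound features without assigning any label to the event. The feature extraction step, class name and threshold are assumptions made for the example, not part of the EARWISE system itself.

```python
# Sketch only: flag changes in sound ambience against a rolling baseline.
import numpy as np
from collections import deque

class AmbienceMonitor:
    def __init__(self, baseline_size=100, threshold=3.0):
        self.baseline = deque(maxlen=baseline_size)  # recent feature vectors
        self.threshold = threshold                   # z-score style cutoff

    def update(self, features):
        """Return True if `features` deviates strongly from the baseline."""
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(features)
            return False                 # still building the baseline
        mean = np.mean(self.baseline, axis=0)
        std = np.std(self.baseline, axis=0) + 1e-8
        deviation = np.abs(features - mean) / std
        changed = float(np.mean(deviation)) > self.threshold
        self.baseline.append(features)
        return changed

# Example: feed in one feature vector per audio window (random stand-ins here).
monitor = AmbienceMonitor()
for window_features in np.random.rand(200, 32):
    if monitor.update(window_features):
        print("Ambience change detected; flag for the farmer to review")
```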
Clements: Léane, it's been fascinating to talk with you once again today. It would certainly seem that acoustic monitoring can open the door to more precise flock management, and that Adventiel’s customizable approach will allow producers to monitor for as much or as little as they wish.
Audience, please do look out for future editions of our podcast, the Future of Poultry. Léane, thank you again, it's been a pleasure to talk to you. Until next time, goodbye.