Infothek

01/29/2024 | Intelligent Embedded Systems

New contribution to the "Workshop on Artificial Intelligence for Sustainability (AI4S), ECAI 2023"

A new paper titled "Active Bird2Vec: Towards End-To-End Bird Sound Monitoring with Transformers" by Lukas Rauch, Raphael Schwinger, Moritz Wirth, Bernhard Sick, Sven Tomforde, and Christoph Scholz was presented at the Workshop on Artificial Intelligence for Sustainability (AI4S) at ECAI 2023. The abstract reads:

We propose a shift towards end-to-end learning in bird sound monitoring by combining self-supervised learning (SSL) and deep active learning (DAL). Leveraging transformer models, we aim to bypass traditional spectrogram conversions, enabling direct raw audio processing. Active Bird2Vec is set to generate high-quality bird sound representations through SSL, potentially accelerating the assessment of environmental changes and decision-making processes for wind farms. Additionally, we seek to utilize the wide variety of bird vocalizations through DAL, reducing the reliance on extensively labeled datasets by human experts. We plan to curate a comprehensive set of tasks through Hugging Face Datasets, enhancing future comparability and reproducibility of bioacoustic research. A comparative analysis between various transformer models will be conducted to evaluate their proficiency in bird sound recognition tasks. We aim to accelerate the progression of avian bioacoustic research and contribute to more effective conservation strategies.
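
As a rough illustration of the raw-audio idea described in the abstract (not the paper's own code), the following minimal Python sketch feeds waveforms straight into a pretrained transformer, with no spectrogram step, and mean-pools the hidden states into one embedding per recording. The checkpoint facebook/wav2vec2-base and the synthetic clips are illustrative assumptions chosen for brevity.

import torch
from transformers import AutoFeatureExtractor, AutoModel

MODEL_NAME = "facebook/wav2vec2-base"   # any raw-waveform transformer checkpoint

extractor = AutoFeatureExtractor.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

# Stand-in for real bird recordings: two 5-second clips at 16 kHz.
sampling_rate = 16_000
recordings = [torch.randn(5 * sampling_rate).numpy() for _ in range(2)]

# The feature extractor only normalizes and pads the raw waveform; no
# spectrogram conversion happens before the transformer sees the signal.
inputs = extractor(recordings, sampling_rate=sampling_rate,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (batch, frames, hidden_dim)
embeddings = hidden.mean(dim=1)                  # one vector per recording
print(embeddings.shape)                          # e.g. torch.Size([2, 768])

In the same spirit, such recording-level embeddings could serve as the representation on which a deep active learning loop selects the most informative clips for expert labeling.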
