
29.01.2024 | Intelligent Embedded Systems

New paper at the "Workshop on Artificial Intelligence for Sustainability (AI4S), ECAI 2023"

A new paper titled "Active Bird2Vec: Towards End-To-End Bird Sound Monitoring with Transformers" by Lukas Rauch, Raphael Schwinger, Moritz Wirth, Bernhard Sick, Sven Tomforde, and Christoph Scholz was presented at the "Workshop on Artificial Intelligence for Sustainability (AI4S), ECAI 2023". The abstract reads as follows:

We propose a shift towards end-to-end learning in bird sound monitoring by combining self-supervised (SSL) and deep active learning (DAL). Leveraging transformer models, we aim to bypass traditional spectrogram conversions, enabling direct raw audio processing. ACTIVE BIRD2VEC is set to generate high-quality bird sound representations through SSL, potentially accelerating the assessment of environmental changes and decision-making processes for wind farms. Additionally, we seek to utilize the wide variety of bird vocalizations through DAL, reducing the reliance on extensively labeled datasets by human experts. We plan to curate a comprehensive set of tasks through Huggingface Datasets, enhancing future comparability and reproducibility of bioacoustic research. A comparative analysis between various transformer models will be conducted to evaluate their proficiency in bird sound recognition tasks. We aim to accelerate the progression of avian bioacoustic research and contribute to more effective conservation strategies.
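To make the core idea from the abstract concrete, here is a minimal sketch (not the authors' code) of the kind of pipeline it describes: raw audio in, transformer embeddings out, with data served via Hugging Face Datasets and no spectrogram conversion step. The dataset name is a placeholder, and the wav2vec 2.0 checkpoint stands in for whatever self-supervised audio transformer the study ultimately uses.

```python
# Sketch: raw-waveform bird-sound embeddings with an SSL audio transformer.
# Dataset name below is hypothetical; any Hub dataset with an "audio" column works.
import torch
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor, AutoModel

MODEL_ID = "facebook/wav2vec2-base"  # example SSL transformer that consumes raw audio

extractor = AutoFeatureExtractor.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

ds = load_dataset("some-org/bird-sounds", split="train")  # placeholder dataset name
ds = ds.cast_column("audio", Audio(sampling_rate=extractor.sampling_rate))

waveform = ds[0]["audio"]["array"]  # raw waveform, no spectrogram computed
inputs = extractor(waveform, sampling_rate=extractor.sampling_rate, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, frames, hidden_dim)

# Mean-pool over time to obtain a fixed-size clip embedding for downstream
# recognition tasks or for ranking clips in a deep active learning loop.
embedding = hidden.mean(dim=1)
```

In such a setup, the clip embeddings could serve both as features for bird sound recognition and as the basis for selecting informative, unlabeled recordings for expert annotation, which is the role deep active learning plays in the abstract.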
