<?xml version='1.0' encoding='UTF-8'?>
<collection id="2024.jlcl">
<volume id="1" ingest-date="2025-03-07" type="proceedings">
<meta>
<booktitle>Journal for Language Technology and Computational Linguistics</booktitle>
<editor><first>Christian</first><last>Wartena</last></editor>
<publisher>German Society for Computational Linguistics and Language Technology</publisher>
<address>Germany</address>
<month>March</month>
<year>2024</year>
<venue>jlcl</venue>
</meta>
<frontmatter>
<url hash="36513aeb">2024.jlcl-1.0</url>
<bibkey>jlcl-2024-1</bibkey>
</frontmatter>
<paper id="1">
<title>Speaker Attribution in <fixed-case>G</fixed-case>erman Parliamentary Debates with <fixed-case>QL</fixed-case>o<fixed-case>RA</fixed-case>-adapted Large Language Models</title>
<author><first>Tobias</first><last>Bornheim</last></author>
<author><first>Niklas</first><last>Grieger</last></author>
<author><first>Patrick Gustav</first><last>Blaneck</last></author>
<author><first>Stephan</first><last>Bialonski</last></author>
<pages>1–13</pages>
<abstract>The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies but also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the large language model family Llama 2 to automate speaker attribution in German parliamentary debates from 2017-2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and observe our approach to achieve competitive performance in the GermEval 2023 Shared Task On Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for computational analysis of political discourse and the development of semantic role labeling systems.</abstract>
<url hash="3be8d027">2024.jlcl-1.1</url>
<doi>10.21248/jlcl.37.2024.244</doi>
<bibkey>bornheim-etal-2024-speaker</bibkey>
</paper>
<paper id="2">
<title>Where are Emotions in Text? A Human-based and Computational Investigation of Emotion Recognition and Generation</title>
<author><first>Enrica</first><last>Troiano</last></author>
<pages>15–26</pages>
<abstract>Natural language processing (NLP) boasts a vibrant tradition of emotion studies, unified by the aim of developing systems that generate and recognize emotions in language. The computational approximation of these two capabilities, however, still faces fundamental challenges, as there is no consensus on how emotions should be processed, particularly in text: application-driven works often lose sight of foundational theories that describe how humans communicate what they feel, resulting in conflicting premises about the type of data best suited for modeling and whether this modeling should focus on textual meaning or style. My thesis fills in these theoretical gaps that hinder the creation of emotion-aware systems, demonstrating that a trans-disciplinary approach to emotions, which accounts for their extra-linguistic characteristics, has the potential to improve their computational processing. I investigate the human ability to detect emotions in written productions, and explore the linguistic dimensions that contribute to the emergence of emotions through text. In doing so, I clarify the possibilities and limits of automatic emotion classifiers and generators, also providing insights into where systems should model affective information.</abstract>
<url hash="551be94b">2024.jlcl-1.2</url>
<doi>10.21248/jlcl.37.2024.253</doi>
<bibkey>troiano-2024-emotions</bibkey>
</paper>
</volume>
</collection>