Ingestion: JLCL #4795

Draft: wants to merge 2 commits into base: master
86 changes: 86 additions & 0 deletions data/xml/2022.jlcl.xml
@@ -0,0 +1,86 @@
<?xml version='1.0' encoding='UTF-8'?>
<collection id="2022.jlcl">
<volume id="1" ingest-date="2025-03-07" type="proceedings">

Just a note that we should change this to type="journal" (for all of the volumes here); currently we have to do this manually, because the ingestion script has no support for that.
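The manual fix described in the comment above can be scripted rather than edited by hand. A minimal sketch, assuming the ingested files live under `data/xml/` as in this PR; it deliberately uses a plain regex substitution instead of re-serializing through an XML parser so the rest of the file's formatting stays byte-for-byte untouched. The glob pattern and in-place rewrite are assumptions for illustration, not part of the ingestion tooling:

```python
# Hypothetical helper: flip type="proceedings" to type="journal" on every
# <volume> element of the newly ingested JLCL files. Regex-based on purpose,
# so no other whitespace or attribute ordering in the file is disturbed.
import re
from pathlib import Path


def set_journal_type(path: Path) -> int:
    """Rewrite `path` in place; return how many volumes were changed."""
    text = path.read_text(encoding="utf-8")
    new_text, n = re.subn(
        r'(<volume\b[^>]*\btype=")proceedings(")',
        r"\1journal\2",
        text,
    )
    if n:
        path.write_text(new_text, encoding="utf-8")
    return n


if __name__ == "__main__":
    # Assumed location of the files added by this PR.
    for xml_file in Path("data/xml").glob("20*.jlcl.xml"):
        print(xml_file, set_journal_type(xml_file), "volume(s) updated")
```

Running it over this branch would touch the three `<volume>` openings in `2022.jlcl.xml` and `2024.jlcl.xml` and leave everything else unchanged.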

<meta>
<booktitle>Journal for Language Technology and Computational Linguistics</booktitle>
<editor><first>Christian</first><last>Wartena</last></editor>
<publisher>German Society for Computational Linguistics and Language Technology</publisher>
<address>Germany</address>
<month>December</month>
<year>2022</year>
<venue>jlcl</venue>
</meta>
<frontmatter>
<url hash="714bad27">2022.jlcl-1.0</url>
<bibkey>jlcl-2022-1</bibkey>
</frontmatter>
<paper id="1">
<title>Optimizing the Training of Models for Automated Post-Correction of Arbitrary <fixed-case>OCR</fixed-case>-ed Historical Texts</title>
<author><first>Tobias</first><last>Englmeier</last></author>
<author><first>Florian</first><last>Fink</last></author>
<author><first>Uwe</first><last>Springmann</last></author>
<author><first>Klaus U.</first><last>Schulz</last></author>
<pages>1–27</pages>
<url hash="0ddb1655">2022.jlcl-1.1</url>
<doi>10.21248/jlcl.35.2022.232</doi>
<bibkey>englmeier-etal-2022-optimizing</bibkey>
</paper>
</volume>
<volume id="2" ingest-date="2025-03-07" type="proceedings">
<meta>
<booktitle>Journal for Language Technology and Computational Linguistics</booktitle>
<editor><first>Ines</first><last>Rehbein</last></editor>
<editor><first>Gabriella</first><last>Lapesa</last></editor>
<editor><first>Goran</first><last>Glavaš</last></editor>
<editor><first>Simone Paolo</first><last>Ponzetto</last></editor>
<publisher>German Society for Computational Linguistics and Language Technology</publisher>
<address>Germany</address>
<month>July</month>
<year>2022</year>
<venue>jlcl</venue>
</meta>
<frontmatter>
<url hash="e272dfc7">2022.jlcl-2.0</url>
<bibkey>jlcl-2022-2</bibkey>
</frontmatter>
<paper id="1">
<title>Small Data Problems in Political Research: A Critical Replication Study</title>
<author><first>Hugo</first><last>de Vos</last></author>
<author><first>Suzan</first><last>Verberne</last></author>
<pages>1–14</pages>
<url hash="72bfae25">2022.jlcl-2.1</url>
<doi>10.21248/jlcl.35.2022.226</doi>
<bibkey>de-vos-verberne-2022-small</bibkey>
</paper>
<paper id="2">
<title>Frame Detection in <fixed-case>G</fixed-case>erman Political Discourses: How Far Can We Go Without Large-Scale Manual Corpus Annotation?</title>
<author><first>Qi</first><last>Yu</last></author>
<author><first>Anselm</first><last>Fliethmann</last></author>
<pages>15–31</pages>
<url hash="5c1f9917">2022.jlcl-2.2</url>
<doi>10.21248/jlcl.35.2022.227</doi>
<bibkey>yu-fliethmann-2022-frame</bibkey>
</paper>
<paper id="3">
<title>Share and Shout: Proto-Slogans in Online Political Communities</title>
<author><first>Irene</first><last>Russo</last></author>
<author><first>Gloria</first><last>Comandini</last></author>
<author><first>Tommaso</first><last>Caselli</last></author>
<author><first>Viviana</first><last>Patti</last></author>
<pages>33–49</pages>
<url hash="5e1745dc">2022.jlcl-2.3</url>
<doi>10.21248/jlcl.35.2022.228</doi>
<bibkey>russo-etal-2022-share</bibkey>
</paper>
<paper id="4">
<title><fixed-case>UNSC</fixed-case>-<fixed-case>NE</fixed-case>: A Named Entity Extension to the <fixed-case>UN</fixed-case> Security Council Debates Corpus</title>
<author><first>Luis</first><last>Glaser</last></author>
<author><first>Ronny</first><last>Patz</last></author>
<author><first>Manfred</first><last>Stede</last></author>
<pages>51–67</pages>
<url hash="e354fd39">2022.jlcl-2.4</url>
<doi>10.21248/jlcl.35.2022.229</doi>
<bibkey>glaser-etal-2022-unsc</bibkey>
</paper>
</volume>
</collection>
39 changes: 39 additions & 0 deletions data/xml/2024.jlcl.xml
@@ -0,0 +1,39 @@
<?xml version='1.0' encoding='UTF-8'?>
<collection id="2024.jlcl">
<volume id="1" ingest-date="2025-03-07" type="proceedings">
<meta>
<booktitle>Journal for Language Technology and Computational Linguistics</booktitle>
<editor><first>Christian</first><last>Wartena</last></editor>
<publisher>German Society for Computational Linguistics and Language Technology</publisher>
<address>Germany</address>
<month>March</month>
<year>2024</year>
<venue>jlcl</venue>
</meta>
<frontmatter>
<url hash="36513aeb">2024.jlcl-1.0</url>
<bibkey>jlcl-2024-1</bibkey>
</frontmatter>
<paper id="1">
<title>Speaker Attribution in <fixed-case>G</fixed-case>erman Parliamentary Debates with <fixed-case>QL</fixed-case>o<fixed-case>RA</fixed-case>-adapted Large Language Models</title>
<author><first>Tobias</first><last>Bornheim</last></author>
<author><first>Niklas</first><last>Grieger</last></author>
<author><first>Patrick Gustav</first><last>Blaneck</last></author>
<author><first>Stephan</first><last>Bialonski</last></author>
<pages>1–13</pages>
<abstract>The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies but also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the large language model family Llama 2 to automate speaker attribution in German parliamentary debates from 2017-2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and observe our approach to achieve competitive performance in the GermEval 2023 Shared Task On Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for computational analysis of political discourse and the development of semantic role labeling systems.</abstract>
<url hash="3be8d027">2024.jlcl-1.1</url>
<doi>10.21248/jlcl.37.2024.244</doi>
<bibkey>bornheim-etal-2024-speaker</bibkey>
</paper>
<paper id="2">
<title>Where are Emotions in Text? A Human-based and Computational Investigation of Emotion Recognition and Generation</title>
<author><first>Enrica</first><last>Troiano</last></author>
<pages>15–26</pages>
<abstract>Natural language processing (NLP) boasts a vibrant tradition of emotion studies, unified by the aim of developing systems that generate and recognize emotions in language. The computational approximation of these two capabilities, however, still faces fundamental challenges, as there is no consensus on how emotions should be processed, particularly in text: application-driven works often lose sight of foundational theories that describe how humans communicate what they feel, resulting in conflicting premises about the type of data best suited for modeling and whether this modeling should focus on textual meaning or style. My thesis fills in these theoretical gaps that hinder the creation of emotion-aware systems, demonstrating that a trans-disciplinary approach to emotions, which accounts for their extra-linguistic characteristics, has the potential to improve their computational processing. I investigate the human ability to detect emotions in written productions, and explore the linguistic dimensions that contribute to the emergence of emotions through text. In doing so, I clarify the possibilities and limits of automatic emotion classifiers and generators, also providing insights into where systems should model affective information.</abstract>
<url hash="551be94b">2024.jlcl-1.2</url>
<doi>10.21248/jlcl.37.2024.253</doi>
<bibkey>troiano-2024-emotions</bibkey>
</paper>
</volume>
</collection>