Bilingual Auto-Categorization Comparison of two LSTM Text Classifiers
Mittuniversitetet, Faculty of Science, Technology and Media, Department of Information Systems and Technology (affiliation of all four authors; one author's ORCID iD: 0000-0002-1797-1095).
2019 (English). In: 2019 8th International Congress on Advanced Applied Informatics (IIAI-AAI), 2019. Conference paper, published paper (Other academic).
Abstract [en]

Multilingual problems such as auto-categorization are not easy tasks. One option is to train a separate model for each language; another is to build the model in one base language and automatically translate texts from the other languages into that base language. Each language is biased toward its own grammar and syntax, which causes problems when content is expressed in other languages. Translating from a natural language into a non-verbal representation could potentially have a positive impact on the categorization results; such a representation could, for example, be pure information in the form of knowledge-graph relations extracted from the text. In this article a comparison is conducted between the Chinese and Swedish languages. Two categorization models are developed and validated on each dataset. The purpose is to make an auto-categorization model that works for any language. One model is built upon an LSTM and optimized for Swedish, and the other is an improved Bidirectional-LSTM Convolution model optimized for Chinese. The improved algorithm is trained on both languages and compared with the LSTM algorithm. The Bidirectional-LSTM algorithm performs approximately 20 percentage points better than the LSTM algorithm, which is a significant difference.
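The abstract contrasts a unidirectional LSTM classifier with a bidirectional variant. As a rough, illustrative sketch (not the authors' implementation — weights here are random and untrained, and the function names are hypothetical), a bidirectional LSTM runs the same recurrence forward and backward over the token embeddings, with separate weights per direction, and concatenates the two hidden-state sequences so each position carries context from both sides:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate block order: input, forget, cell candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))      # output gate
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

def run_lstm(xs, H, rng):
    """Run an LSTM over a (T, D) sequence; fresh random weights per call,
    so forward and backward directions get independent parameters."""
    D = xs.shape[1]
    W = rng.standard_normal((4 * H, D)) * 0.1
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        outs.append(h)
    return np.stack(outs)               # (T, H)

rng = np.random.default_rng(0)
T, D, H = 5, 8, 16                      # toy sequence length, embedding dim, hidden dim
xs = rng.standard_normal((T, D))        # stand-in for token embeddings
fwd = run_lstm(xs, H, rng)              # left-to-right pass
bwd = run_lstm(xs[::-1], H, rng)[::-1]  # right-to-left pass, re-aligned
bi = np.concatenate([fwd, bwd], axis=1) # (T, 2H) bidirectional features
```

In the paper's setting these concatenated features would feed a convolution layer and then a category classifier; the sketch only shows the shapes and direction handling involved.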

Place, publisher, year, edition, pages
2019.
HSV category
Identifiers
URN: urn:nbn:se:miun:diva-37261
DOI: 10.1109/IIAI-AAI.2019.00127
ISBN: 978-1-7281-2627-2 (digital)
OAI: oai:DiVA.org:miun-37261
DiVA id: diva2:1352551
Conference
8th International Congress on Advanced Applied Informatics, Toyama, Japan, July 7-11 (Main Event) & 12 (Forum), 2019
Projects
SMART (Smart systems and services for an efficient and innovative society)
Available from: 2019-09-19 Created: 2019-09-19 Last updated: 2020-02-21 Bibliographically checked

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text


Authors
Lindén, Johannes; Wang, Xutao; Forsström, Stefan; Zhang, Tingting
