Bilingual Auto-Categorization Comparison of two LSTM Text Classifiers
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. ORCID iD: 0000-0002-1797-1095
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
2019 (English). In: 2019 8th International Congress on Advanced Applied Informatics (IIAI-AAI), 2019. Conference paper, Published paper (Refereed)
Abstract [en]

Multilingual problems such as auto-categorization are not easy tasks. One option is to train a separate model for each language; another is to build the model in one base language and automatically translate texts from other languages into that base language. Every language is biased towards its own grammar and syntax, which poses problems when its texts are expressed in another language. Translating from a natural language into a non-verbal language could potentially have a positive impact on the categorization results. Such a non-verbal language could, for example, be pure information in the form of knowledge-graph relations extracted from the text. In this article a comparison is conducted between the Chinese and Swedish languages. Two categorization models are developed and validated on each dataset. The purpose is to create an auto-categorization model that works for any language. One model is built upon an LSTM and optimized for Swedish; the other is an improved Bidirectional-LSTM Convolution model optimized for Chinese. The improved algorithm is trained on both languages and compared with the LSTM algorithm. The Bidirectional-LSTM algorithm performs approximately 20 percentage points better than the LSTM algorithm, which is significant.
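To illustrate the bidirectional idea the abstract refers to, the following is a minimal numpy sketch of a Bi-LSTM feature extractor for classification: a forward and a backward LSTM pass over an embedded token sequence, with their final hidden states concatenated and fed to a softmax head. This is an illustration only, not the paper's model (the paper's improved architecture also includes a convolution component, omitted here), and all dimensions and variable names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the input/forget/candidate/output gates."""
    H = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])           # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:])       # output gate
    c = f * c + i * g
    return o * np.tanh(c), c

def run_lstm(seq, W, U, b, H):
    """Run an LSTM over a (T, D) sequence, return the final hidden state."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

def bilstm_features(seq, params_fwd, params_bwd, H):
    """Concatenate the final hidden states of a forward and a backward pass."""
    h_f = run_lstm(seq, *params_fwd, H)
    h_b = run_lstm(seq[::-1], *params_bwd, H)
    return np.concatenate([h_f, h_b])

# Toy dimensions: embedding size D, hidden size H, K categories, T tokens.
D, H, K, T = 8, 4, 3, 5
make = lambda: (rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H))
params_fwd, params_bwd = make(), make()
W_out = rng.normal(size=(K, 2 * H))

seq = rng.normal(size=(T, D))            # stand-in for embedded tokens
feats = bilstm_features(seq, params_fwd, params_bwd, H)
logits = W_out @ feats
probs = np.exp(logits - logits.max())    # numerically stable softmax
probs /= probs.sum()
print(probs.shape)
```

Because the backward pass sees the sequence end-to-start, the concatenated features summarize both left and right context of the text, which is the usual motivation for preferring a Bi-LSTM over a unidirectional LSTM in text classification.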

Place, publisher, year, edition, pages
2019.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:miun:diva-37261
DOI: 10.1109/IIAI-AAI.2019.00127
Scopus ID: 2-s2.0-85080902973
ISBN: 978-1-7281-2627-2 (electronic)
OAI: oai:DiVA.org:miun-37261
DiVA, id: diva2:1352551
Conference
8th International Congress on Advanced Applied Informatics, Toyama, Japan, July 7-11 (Main Event) & 12 (Forum), 2019
Projects
SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle; "Smart systems and services for an efficient and innovative society")
Available from: 2019-09-19 Created: 2019-09-19 Last updated: 2020-03-19 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Lindén, Johannes; Forsström, Stefan; Zhang, Tingting

Search in DiVA

By author/editor
Lindén, Johannes; Wang, Xutao; Forsström, Stefan; Zhang, Tingting
By organisation
Department of Information Systems and Technology
Computer Sciences
