Comparing LSTM and FOFE-based Architectures for Named Entity Recognition

Research output: Conference contribution › Conference paper, not in proceeding/not published by a publisher › Peer reviewed



LSTM architectures (Hochreiter and Schmidhuber, 1997) have become the standard for named entity recognition (NER) in text (Lample et al., 2016; Chiu and Nichols, 2016). Nonetheless, Zhang et al. (2015) recently proposed an approach based on fixed-size ordinally forgetting encoding (FOFE) to translate variable-length contexts into fixed-length features. This encoding method can be used with feed-forward neural networks and, despite its simplicity, reaches accuracy rates matching those of LSTMs in NER tasks (Xu et al., 2017). However, the figures reported in NER articles are difficult to compare precisely, as the experiments often use external resources such as gazetteers and corpora. In this paper, we describe an experimental setup in which we reimplemented the two core algorithms to level the differences in initial conditions. This allowed us to measure more precisely the accuracy of both architectures and to report what we believe are unbiased results on English and Swedish datasets.
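The FOFE scheme referenced above encodes a sequence by recursively discounting earlier tokens: z_t = α · z_{t−1} + e_t, where e_t is the one-hot vector of the t-th token and α ∈ (0, 1) is a forgetting factor. A minimal sketch (the function name and example values are illustrative, not from the paper):

```python
def fofe_encode(token_ids, vocab_size, alpha=0.5):
    """Fixed-size ordinally forgetting encoding (Zhang et al., 2015).

    Encodes a variable-length sequence of token ids into a single
    vector of size vocab_size via z_t = alpha * z_{t-1} + e_t,
    where e_t is the one-hot vector of the t-th token.
    """
    z = [0.0] * vocab_size
    for tok in token_ids:
        # Discount everything seen so far, then add the current one-hot.
        z = [alpha * v for v in z]
        z[tok] += 1.0
    return z


# The sequence [0, 1, 0] over a 3-word vocabulary, with alpha = 0.5:
# after token 0 -> [1.0, 0.0, 0.0]
# after token 1 -> [0.5, 1.0, 0.0]
# after token 0 -> [1.25, 0.5, 0.0]
print(fofe_encode([0, 1, 0], 3, alpha=0.5))
```

The resulting fixed-length vector can then be fed to an ordinary feed-forward network, which is what makes the architecture so simple compared with an LSTM.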
Status: Published - 7 Nov 2018
Event: Seventh Swedish Language Technology Conference: Third National Swe-Clarin Workshop: Making ends meet - Stockholm, Sweden
Duration: 7 Nov 2018 - 9 Nov 2018
Conference number: 7


Conference: Seventh Swedish Language Technology Conference
Abbreviated title: SLTC 2018

Subject classification (UKÄ)

  • Language Technology (Computational Linguistics)


