Constructing large proposition databases

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding

BibTeX

@inproceedings{8455e9a280064053952b50b1b702d760,
title = "Constructing large proposition databases",
abstract = "With the advent of massive online encyclopedic corpora such as Wikipedia, it has become possible to apply a systematic analysis to a wide range of documents covering a significant part of human knowledge. Using semantic parsers, it has become possible to extract such knowledge in the form of propositions (predicate-argument structures) and build large proposition databases from these documents. This paper describes the creation of multilingual proposition databases using generic semantic dependency parsing. Using Wikipedia, we extracted, processed, clustered, and evaluated a large number of propositions. We built an architecture to provide a complete pipeline dealing with the input of text, extraction of knowledge, storage, and presentation of the resulting propositions.",
keywords = "Knowledge Discovery/Representation, Information Extraction, Information Retrieval, Semantics",
author = "Peter Exner and Pierre Nugues",
year = "2012",
language = "English",
pages = "3836--3839",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
publisher = "European Language Resources Association (ELRA)",
}