Oana Inel
Title · Cited by · Year
CrowdTruth: Machine-human computation framework for harnessing disagreement in gathering annotated data
O Inel, K Khamkham, T Cristea, A Dumitrache, A Rutjes, J van der Ploeg, ...
International semantic web conference, 486-504, 2014
Cited by 81 · 2014
Measuring crowd truth: Disagreement metrics combined with worker behavior filters
G Soberón, L Aroyo, C Welty, O Inel, H Lin, M Overmeen
CrowdSem 2013 Workshop 2, 2013
Cited by 33 · 2013
DIVE into the event-based browsing of linked historical media
V De Boer, J Oomen, O Inel, L Aroyo, E Van Staveren, W Helmich, ...
Journal of Web Semantics 35, 152-158, 2015
Cited by 32 · 2015
A survey of crowdsourcing in medical image analysis
S Ørting, A Doyle, A van Hilten, M Hirth, O Inel, CR Madan, P Mavridis, ...
arXiv preprint arXiv:1902.09159, 2019
Cited by 26 · 2019
Domain-independent quality measures for crowd truth disagreement
O Inel, L Aroyo, C Welty, RJ Sips
Detection, Representation, and Exploitation of Events in the Semantic Web, 2, 2013
Cited by 22* · 2013
Time-aware multi-viewpoint summarization of multilingual social text streams
Z Ren, O Inel, L Aroyo, M De Rijke
Proceedings of the 25th ACM International on Conference on Information and …, 2016
Cited by 21 · 2016
Empirical methodology for crowdsourcing ground truth
A Dumitrache, O Inel, B Timmermans, C Ortiz, RJ Sips, L Aroyo, C Welty
Semantic Web, 1-19, 2021
Cited by 14 · 2021
Harnessing Diversity in Crowds and Machines for Better NER Performance
O Inel, L Aroyo
Extended Semantic Web Conference, 289-304, 2017
Cited by 13 · 2017
Studying topical relevance with evidence-based crowdsourcing
O Inel, G Haralabopoulos, D Li, C Van Gysel, Z Szlávik, E Simperl, ...
Proceedings of the 27th ACM International Conference on Information and …, 2018
Cited by 11 · 2018
CrowdTruth 2.0: Quality metrics for crowdsourcing with disagreement
A Dumitrache, O Inel, L Aroyo, B Timmermans, C Welty
arXiv preprint arXiv:1808.06080, 2018
Cited by 11 · 2018
Enriching media collections for event-based exploration
V De Boer, L Melgar, O Inel, CM Ortiz, L Aroyo, J Oomen
Research Conference on Metadata and Semantics Research, 189-201, 2017
Cited by 9 · 2017
Temporal Information Annotation: Crowd vs. Experts
T Caselli, R Sprugnoli, O Inel
Language Resources and Evaluation (LREC 2016), 3502-3509, 2016
Cited by 8 · 2016
Crowdsourcing Salient Information from News and Tweets
O Inel, T Caselli, L Aroyo
Language Resources and Evaluation (LREC 2016), 3959-3966, 2016
Cited by 6 · 2016
Crowdsourcing StoryLines: Harnessing the crowd for causal relation annotation
T Caselli, O Inel
Proceedings of the Workshop Events and Stories in the News 2018, 44-54, 2018
Cited by 5 · 2018
From tools to “Recipes”: Building a media suite within the Dutch digital humanities infrastructure CLARIAH
C Martinez-Ortiz, R Ordelman, M Koolen, JJ Noordegraaf, L Melgar, ...
Cited by 5 · 2017
Crowd watson: Crowdsourced text annotations
H Lin, O Inel, G Soberón, L Aroyo, C Welty, M Overmeen, RJ Sips
Technical report, VU University Amsterdam, 2013
Cited by 5 · 2013
Validation methodology for expert-annotated datasets: Event annotation case study
O Inel, L Aroyo
2nd Conference on Language, Data and Knowledge (LDK 2019), 2019
Cited by 4 · 2019
Someone really wanted that song but it was not me! Evaluating Which Information to Disclose in Explanations for Group Recommendations
S Najafian, O Inel, N Tintarev
Proceedings of the 25th International Conference on Intelligent User …, 2020
Cited by 3 · 2020
A Study of Narrative Creation by Means of Crowds and Niches.
O Inel, S Sauer, L Aroyo, A Bozzon, M Venanzi
HCOMP (WIP&Demo), 2018
Cited by 2 · 2018
Resource interoperability for sustainable benchmarking: The case of events
C Van Son, O Inel, R Morante, L Aroyo, P Vossen
Proceedings of the Eleventh International Conference on Language Resources …, 2018
Cited by 2 · 2018