Samsung Research achieved first place on the Microsoft MAchine Reading COmprehension (MS MARCO) competition leaderboard.
ConZNet, developed by the Language Understanding Lab at Samsung Research AI Center, claimed first place on the leaderboard of the ongoing Machine Reading Comprehension (MS MARCO) competition held by Microsoft, scoring 41.68 (ROUGE-L) and 37.52 (BLEU-1). ROUGE-L and BLEU-1 are metrics that compare AI-generated answers with a set of human-produced reference answers.
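For intuition, the two metrics can be sketched in a few lines: ROUGE-L is based on the longest common subsequence (LCS) between a candidate answer and a reference, while BLEU-1 is a unigram precision with a brevity penalty. This is a minimal illustrative sketch (simple F1 formulation, single reference), not the exact scoring script used by the MS MARCO evaluation.

```python
from collections import Counter
import math

def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    # LCS-based precision/recall combined into an F1 score
    # (the official metric uses a recall-weighted F-measure).
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

def bleu_1(candidate, reference):
    # Clipped unigram precision times a brevity penalty.
    c, r = candidate.split(), reference.split()
    overlap = sum((Counter(c) & Counter(r)).values())
    precision = overlap / len(c)
    bp = 1.0 if len(c) >= len(r) else math.exp(1 - len(r) / len(c))
    return bp * precision
```

For example, a candidate that exactly matches the reference scores 1.0 on both metrics, while a partial answer such as "the cat" against the reference "the cat sat" scores 0.8 on this ROUGE-L F1.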
MS MARCO is a large-scale data set created by Microsoft to help AI systems understand natural language text and answer questions. The data set consists of anonymized queries from around 1 million Bing users, together with relevant web documents for each query. The corresponding answers to the queries were human-generated.
Previously, in other challenges, models mostly focused on extracting keywords from a given paragraph as answers. However, such keywords may not be directly useful as answers in virtual assistants such as Samsung Bixby. The MS MARCO challenge instead requires AI models to generate human-like answers, which feel more natural to users than simple keywords.
Samsung Research's QA (question answering) researchers from the Language Understanding Lab also achieved first place in TriviaQA, a large-scale reading-comprehension and QA competition held by the University of Washington. The QA researchers are continuously working to improve QA technologies and build a state-of-the-art QA system that can be applied to real-world applications.
MS MARCO Leaderboard on June 14
TriviaQA Leaderboard on June 7