Dataset columns and value types:

| Column | Type |
|---|---|
| text | string (21 distinct values) |
| inputs | dict |
| prediction | null |
| prediction_agent | null |
| annotation | string (2 distinct values) |
| annotation_agent | string (1 distinct value) |
| vectors | null |
| multi_label | bool (1 class) |
| explanation | null |
| id | string (21 distinct values) |
| metadata | null |
| status | string (1 distinct value) |
| metrics | dict |
| label | class label (2 classes) |
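A minimal sketch of how this dataset could be loaded and iterated with the Hugging Face `datasets` library is shown below; the repository id is a hypothetical placeholder, since the actual Hub id is not shown on this page.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "user/new-dataset-abstracts" is a hypothetical repository id; replace it
# with the dataset's real id before running.
from datasets import load_dataset

ds = load_dataset("user/new-dataset-abstracts", split="train")

for record in ds:
    # `text` holds the raw abstract, `inputs` bundles abstract/title/url,
    # and `annotation`/`label` mark whether the paper introduces a new dataset.
    print(record["inputs"]["title"], "->", record["annotation"])
```

The preview rows below show, for each record, the `text` field, the `inputs` dict, and the remaining fields on a compact line.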
Seeking health-related advice on the internet has become a common practice in
the digital era. Determining the trustworthiness of medical claims found online
and finding appropriate evidence for this information is increasingly
challenging. Fact-checking has emerged as an approach to assess the veracity of
factual claims using evidence from credible knowledge sources. To help advance
the automation of this task, in this paper, we introduce a novel dataset of 750
health-related claims, labeled for veracity by medical experts and backed with
evidence from appropriate clinical studies. We provide an analysis of the
dataset, highlighting its characteristics and challenges. The dataset can be
used for Machine Learning tasks related to automated fact-checking such as
evidence retrieval, veracity prediction, and explanation generation. For this
purpose, we provide baseline models based on different approaches, examine
their performance, and discuss the findings.
inputs:
{
"abstract": "Seeking health-related advice on the internet has become a common practice in\nthe digital era. Determining the trustworthiness of medical claims found online\nand finding appropriate evidence for this information is increasingly\nchallenging. Fact-checking has emerged as an approach to assess the veracity of\nfactual claims using evidence from credible knowledge sources. To help advance\nthe automation of this task, in this paper, we introduce a novel dataset of 750\nhealth-related claims, labeled for veracity by medical experts and backed with\nevidence from appropriate clinical studies. We provide an analysis of the\ndataset, highlighting its characteristics and challenges. The dataset can be\nused for Machine Learning tasks related to automated fact-checking such as\nevidence retrieval, veracity prediction, and explanation generation. For this\npurpose, we provide baseline models based on different approaches, examine\ntheir performance, and discuss the findings.",
"title": "HealthFC: A Dataset of Health Claims for Evidence-Based Medical Fact-Checking",
"url": "http://arxiv.org/abs/2309.08503v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 309ac8b2-3681-43f5-b36c-348a03e320ab | metadata: null | status: Validated | metrics: {"text_length": 1081} | label: new_dataset (class 0)
The development of semi-supervised learning techniques is essential to
enhance the generalization capacities of machine learning algorithms. Indeed,
raw image data are abundant while labels are scarce, therefore it is crucial to
leverage unlabeled inputs to build better models. The availability of large
databases have been key for the development of learning algorithms with high
level performance.
Despite the major role of machine learning in Earth Observation to derive
products such as land cover maps, datasets in the field are still limited,
either because of modest surface coverage, lack of variety of scenes or
restricted classes to identify. We introduce a novel large-scale dataset for
semi-supervised semantic segmentation in Earth Observation, the MiniFrance
suite. MiniFrance has several unprecedented properties: it is large-scale,
containing over 2000 very high resolution aerial images, accounting for more
than 200 billions samples (pixels); it is varied, covering 16 conurbations in
France, with various climates, different landscapes, and urban as well as
countryside scenes; and it is challenging, considering land use classes with
high-level semantics. Nevertheless, the most distinctive quality of MiniFrance
is being the only dataset in the field especially designed for semi-supervised
learning: it contains labeled and unlabeled images in its training partition,
which reproduces a life-like scenario. Along with this dataset, we present
tools for data representativeness analysis in terms of appearance similarity
and a thorough study of MiniFrance data, demonstrating that it is suitable for
learning and generalizes well in a semi-supervised setting. Finally, we present
semi-supervised deep architectures based on multi-task learning and the first
experiments on MiniFrance.
inputs:
{
"abstract": "The development of semi-supervised learning techniques is essential to\nenhance the generalization capacities of machine learning algorithms. Indeed,\nraw image data are abundant while labels are scarce, therefore it is crucial to\nleverage unlabeled inputs to build better models. The availability of large\ndatabases have been key for the development of learning algorithms with high\nlevel performance.\n Despite the major role of machine learning in Earth Observation to derive\nproducts such as land cover maps, datasets in the field are still limited,\neither because of modest surface coverage, lack of variety of scenes or\nrestricted classes to identify. We introduce a novel large-scale dataset for\nsemi-supervised semantic segmentation in Earth Observation, the MiniFrance\nsuite. MiniFrance has several unprecedented properties: it is large-scale,\ncontaining over 2000 very high resolution aerial images, accounting for more\nthan 200 billions samples (pixels); it is varied, covering 16 conurbations in\nFrance, with various climates, different landscapes, and urban as well as\ncountryside scenes; and it is challenging, considering land use classes with\nhigh-level semantics. Nevertheless, the most distinctive quality of MiniFrance\nis being the only dataset in the field especially designed for semi-supervised\nlearning: it contains labeled and unlabeled images in its training partition,\nwhich reproduces a life-like scenario. Along with this dataset, we present\ntools for data representativeness analysis in terms of appearance similarity\nand a thorough study of MiniFrance data, demonstrating that it is suitable for\nlearning and generalizes well in a semi-supervised setting. Finally, we present\nsemi-supervised deep architectures based on multi-task learning and the first\nexperiments on MiniFrance.",
"title": "Semi-Supervised Semantic Segmentation in Earth Observation: The MiniFrance Suite, Dataset Analysis and Multi-task Network Study",
"url": "http://arxiv.org/abs/2010.07830v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 27070983-724c-4a1f-b90c-e1d8738d2816 | metadata: null | status: Validated | metrics: {"text_length": 1970} | label: new_dataset (class 0)
We introduce the well-established social scientific concept of social
solidarity and its contestation, anti-solidarity, as a new problem setting to
supervised machine learning in NLP to assess how European solidarity discourses
changed before and after the COVID-19 outbreak was declared a global pandemic.
To this end, we annotate 2.3k English and German tweets for (anti-)solidarity
expressions, utilizing multiple human annotators and two annotation approaches
(experts vs.\ crowds). We use these annotations to train a BERT model with
multiple data augmentation strategies. Our augmented BERT model that combines
both expert and crowd annotations outperforms the baseline BERT classifier
trained with expert annotations only by over 25 points, from 58\% macro-F1 to
almost 85\%. We use this high-quality model to automatically label over 270k
tweets between September 2019 and December 2020. We then assess the
automatically labeled data for how statements related to European
(anti-)solidarity discourses developed over time and in relation to one
another, before and during the COVID-19 crisis. Our results show that
solidarity became increasingly salient and contested during the crisis. While
the number of solidarity tweets remained on a higher level and dominated the
discourse in the scrutinized time frame, anti-solidarity tweets initially
spiked, then decreased to (almost) pre-COVID-19 values before rising to a
stable higher level until the end of 2020.
inputs:
{
"abstract": "We introduce the well-established social scientific concept of social\nsolidarity and its contestation, anti-solidarity, as a new problem setting to\nsupervised machine learning in NLP to assess how European solidarity discourses\nchanged before and after the COVID-19 outbreak was declared a global pandemic.\nTo this end, we annotate 2.3k English and German tweets for (anti-)solidarity\nexpressions, utilizing multiple human annotators and two annotation approaches\n(experts vs.\\ crowds). We use these annotations to train a BERT model with\nmultiple data augmentation strategies. Our augmented BERT model that combines\nboth expert and crowd annotations outperforms the baseline BERT classifier\ntrained with expert annotations only by over 25 points, from 58\\% macro-F1 to\nalmost 85\\%. We use this high-quality model to automatically label over 270k\ntweets between September 2019 and December 2020. We then assess the\nautomatically labeled data for how statements related to European\n(anti-)solidarity discourses developed over time and in relation to one\nanother, before and during the COVID-19 crisis. Our results show that\nsolidarity became increasingly salient and contested during the crisis. While\nthe number of solidarity tweets remained on a higher level and dominated the\ndiscourse in the scrutinized time frame, anti-solidarity tweets initially\nspiked, then decreased to (almost) pre-COVID-19 values before rising to a\nstable higher level until the end of 2020.",
"title": "Changes in European Solidarity Before and During COVID-19: Evidence from a Large Crowd- and Expert-Annotated Twitter Dataset",
"url": "http://arxiv.org/abs/2108.01042v1"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 076834be-f521-481a-b1ca-7946cc3f3e62 | metadata: null | status: Validated | metrics: {"text_length": 1627} | label: no_new_dataset (class 1)
Entity linking (EL) is the task of linking a textual mention to its
corresponding entry in a knowledge base, and is critical for many
knowledge-intensive NLP applications. When applied to tables in scientific
papers, EL is a step toward large-scale scientific knowledge bases that could
enable advanced scientific question answering and analytics. We present the
first dataset for EL in scientific tables. EL for scientific tables is
especially challenging because scientific knowledge bases can be very
incomplete, and disambiguating table mentions typically requires understanding
the papers's tet in addition to the table. Our dataset, S2abEL, focuses on EL
in machine learning results tables and includes hand-labeled cell types,
attributed sources, and entity links from the PaperswithCode taxonomy for 8,429
cells from 732 tables. We introduce a neural baseline method designed for EL on
scientific tables containing many out-of-knowledge-base mentions, and show that
it significantly outperforms a state-of-the-art generic table EL method. The
best baselines fall below human performance, and our analysis highlights
avenues for improvement.
inputs:
{
"abstract": "Entity linking (EL) is the task of linking a textual mention to its\ncorresponding entry in a knowledge base, and is critical for many\nknowledge-intensive NLP applications. When applied to tables in scientific\npapers, EL is a step toward large-scale scientific knowledge bases that could\nenable advanced scientific question answering and analytics. We present the\nfirst dataset for EL in scientific tables. EL for scientific tables is\nespecially challenging because scientific knowledge bases can be very\nincomplete, and disambiguating table mentions typically requires understanding\nthe papers's tet in addition to the table. Our dataset, S2abEL, focuses on EL\nin machine learning results tables and includes hand-labeled cell types,\nattributed sources, and entity links from the PaperswithCode taxonomy for 8,429\ncells from 732 tables. We introduce a neural baseline method designed for EL on\nscientific tables containing many out-of-knowledge-base mentions, and show that\nit significantly outperforms a state-of-the-art generic table EL method. The\nbest baselines fall below human performance, and our analysis highlights\navenues for improvement.",
"title": "S2abEL: A Dataset for Entity Linking from Scientific Tables",
"url": "http://arxiv.org/abs/2305.00366v1"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 091d8728-b3a4-4de9-a5e1-a2419e5729c4 | metadata: null | status: Validated | metrics: {"text_length": 1242} | label: no_new_dataset (class 1)
A riddle is a question or statement with double or veiled meanings, followed
by an unexpected answer. Solving riddle is a challenging task for both machine
and human, testing the capability of understanding figurative, creative natural
language and reasoning with commonsense knowledge. We introduce BiRdQA, a
bilingual multiple-choice question answering dataset with 6614 English riddles
and 8751 Chinese riddles. For each riddle-answer pair, we provide four
distractors with additional information from Wikipedia. The distractors are
automatically generated at scale with minimal bias. Existing monolingual and
multilingual QA models fail to perform well on our dataset, indicating that
there is a long way to go before machine can beat human on solving tricky
riddles. The dataset has been released to the community.
inputs:
{
"abstract": "A riddle is a question or statement with double or veiled meanings, followed\nby an unexpected answer. Solving riddle is a challenging task for both machine\nand human, testing the capability of understanding figurative, creative natural\nlanguage and reasoning with commonsense knowledge. We introduce BiRdQA, a\nbilingual multiple-choice question answering dataset with 6614 English riddles\nand 8751 Chinese riddles. For each riddle-answer pair, we provide four\ndistractors with additional information from Wikipedia. The distractors are\nautomatically generated at scale with minimal bias. Existing monolingual and\nmultilingual QA models fail to perform well on our dataset, indicating that\nthere is a long way to go before machine can beat human on solving tricky\nriddles. The dataset has been released to the community.",
"title": "BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles",
"url": "http://arxiv.org/abs/2109.11087v2"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 039a5a67-2e67-40ac-8b05-939db7e0d062 | metadata: null | status: Validated | metrics: {"text_length": 922} | label: new_dataset (class 0)
As our ability to sense increases, we are experiencing a transition from
data-poor problems, in which the central issue is a lack of relevant data, to
data-rich problems, in which the central issue is to identify a few relevant
features in a sea of observations. Motivated by applications in
gravitational-wave astrophysics, we study the problem of predicting the
presence of transient noise artifacts in a gravitational wave detector from a
rich collection of measurements from the detector and its environment. We argue
that feature learning--in which relevant features are optimized from data--is
critical to achieving high accuracy. We introduce models that reduce the error
rate by over 60% compared to the previous state of the art, which used fixed,
hand-crafted features. Feature learning is useful not only because it improves
performance on prediction tasks; the results provide valuable information about
patterns associated with phenomena of interest that would otherwise be
undiscoverable. In our application, features found to be associated with
transient noise provide diagnostic information about its origin and suggest
mitigation strategies. Learning in high-dimensional settings is challenging.
Through experiments with a variety of architectures, we identify two key
factors in successful models: sparsity, for selecting relevant variables within
the high-dimensional observations; and depth, which confers flexibility for
handling complex interactions and robustness with respect to temporal
variations. We illustrate their significance through systematic experiments on
real detector data. Our results provide experimental corroboration of common
assumptions in the machine-learning community and have direct applicability to
improving our ability to sense gravitational waves, as well as to many other
problem settings with similarly high-dimensional, noisy, or partly irrelevant
data.
inputs:
{
"abstract": "As our ability to sense increases, we are experiencing a transition from\ndata-poor problems, in which the central issue is a lack of relevant data, to\ndata-rich problems, in which the central issue is to identify a few relevant\nfeatures in a sea of observations. Motivated by applications in\ngravitational-wave astrophysics, we study the problem of predicting the\npresence of transient noise artifacts in a gravitational wave detector from a\nrich collection of measurements from the detector and its environment. We argue\nthat feature learning--in which relevant features are optimized from data--is\ncritical to achieving high accuracy. We introduce models that reduce the error\nrate by over 60% compared to the previous state of the art, which used fixed,\nhand-crafted features. Feature learning is useful not only because it improves\nperformance on prediction tasks; the results provide valuable information about\npatterns associated with phenomena of interest that would otherwise be\nundiscoverable. In our application, features found to be associated with\ntransient noise provide diagnostic information about its origin and suggest\nmitigation strategies. Learning in high-dimensional settings is challenging.\nThrough experiments with a variety of architectures, we identify two key\nfactors in successful models: sparsity, for selecting relevant variables within\nthe high-dimensional observations; and depth, which confers flexibility for\nhandling complex interactions and robustness with respect to temporal\nvariations. We illustrate their significance through systematic experiments on\nreal detector data. Our results provide experimental corroboration of common\nassumptions in the machine-learning community and have direct applicability to\nimproving our ability to sense gravitational waves, as well as to many other\nproblem settings with similarly high-dimensional, noisy, or partly irrelevant\ndata.",
"title": "Architectural Optimization and Feature Learning for High-Dimensional Time Series Datasets",
"url": "http://arxiv.org/abs/2202.13486v2"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 354aa25b-0f94-4d9e-b412-a830e2809237 | metadata: null | status: Validated | metrics: {"text_length": 2031} | label: no_new_dataset (class 1)
To better interact with users, a social robot should understand the users'
behavior, infer the intention, and respond appropriately. Machine learning is
one way of implementing robot intelligence. It provides the ability to
automatically learn and improve from experience instead of explicitly telling
the robot what to do. Social skills can also be learned through watching
human-human interaction videos. However, human-human interaction datasets are
relatively scarce to learn interactions that occur in various situations.
Moreover, we aim to use service robots in the elderly-care domain; however,
there has been no interaction dataset collected for this domain. For this
reason, we introduce a human-human interaction dataset for teaching non-verbal
social behaviors to robots. It is the only interaction dataset that elderly
people have participated in as performers. We recruited 100 elderly people and
two college students to perform 10 interactions in an indoor environment. The
entire dataset has 5,000 interaction samples, each of which contains depth
maps, body indexes and 3D skeletal data that are captured with three Microsoft
Kinect v2 cameras. In addition, we provide the joint angles of a humanoid NAO
robot which are converted from the human behavior that robots need to learn.
The dataset and useful python scripts are available for download at
https://github.com/ai4r/AIR-Act2Act. It can be used to not only teach social
skills to robots but also benchmark action recognition algorithms.
inputs:
{
"abstract": "To better interact with users, a social robot should understand the users'\nbehavior, infer the intention, and respond appropriately. Machine learning is\none way of implementing robot intelligence. It provides the ability to\nautomatically learn and improve from experience instead of explicitly telling\nthe robot what to do. Social skills can also be learned through watching\nhuman-human interaction videos. However, human-human interaction datasets are\nrelatively scarce to learn interactions that occur in various situations.\nMoreover, we aim to use service robots in the elderly-care domain; however,\nthere has been no interaction dataset collected for this domain. For this\nreason, we introduce a human-human interaction dataset for teaching non-verbal\nsocial behaviors to robots. It is the only interaction dataset that elderly\npeople have participated in as performers. We recruited 100 elderly people and\ntwo college students to perform 10 interactions in an indoor environment. The\nentire dataset has 5,000 interaction samples, each of which contains depth\nmaps, body indexes and 3D skeletal data that are captured with three Microsoft\nKinect v2 cameras. In addition, we provide the joint angles of a humanoid NAO\nrobot which are converted from the human behavior that robots need to learn.\nThe dataset and useful python scripts are available for download at\nhttps://github.com/ai4r/AIR-Act2Act. It can be used to not only teach social\nskills to robots but also benchmark action recognition algorithms.",
"title": "AIR-Act2Act: Human-human interaction dataset for teaching non-verbal social behaviors to robots",
"url": "http://arxiv.org/abs/2009.02041v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 0aa0a75f-ff3b-4620-9932-64ad6aea89e4 | metadata: null | status: Validated | metrics: {"text_length": 1639} | label: new_dataset (class 0)
Movie-making has become one of the most costly and risky endeavors in the
entertainment industry. Continuous change in the preference of the audience
makes it harder to predict what kind of movie will be financially successful at
the box office. So, it is no wonder that cautious, intelligent stakeholders and
large production houses will always want to know the probable revenue that will
be generated by a movie before making an investment. Researchers have been
working on finding an optimal strategy to help investors in making the right
decisions. But the lack of a large, up-to-date dataset makes their work harder.
In this work, we introduce an up-to-date, richer, and larger dataset that we
have prepared by scraping IMDb for researchers and data analysts to work with.
The compiled dataset contains the summery data of 7.5 million titles and detail
information of more than 200K movies. Additionally, we perform different
statistical analysis approaches on our dataset to find out how a movie's
revenue is affected by different pre-released attributes such as budget,
runtime, release month, content rating, genre etc. In our analysis, we have
found that having a star cast/director has a positive impact on generated
revenue. We introduce a novel approach for calculating the star power of a
movie. Based on our analysis we select a set of attributes as features and
train different machine learning algorithms to predict a movie's expected
revenue. Based on generated revenue, we classified the movies in 10 categories
and achieved a one-class-away accuracy rate of almost 60% (bingo accuracy of
30%). All the generated datasets and analysis codes are available online. We
also made the source codes of our scraper bots public, so that researchers
interested in extending this work can easily modify these bots as they need and
prepare their own up-to-date datasets.
inputs:
{
"abstract": "Movie-making has become one of the most costly and risky endeavors in the\nentertainment industry. Continuous change in the preference of the audience\nmakes it harder to predict what kind of movie will be financially successful at\nthe box office. So, it is no wonder that cautious, intelligent stakeholders and\nlarge production houses will always want to know the probable revenue that will\nbe generated by a movie before making an investment. Researchers have been\nworking on finding an optimal strategy to help investors in making the right\ndecisions. But the lack of a large, up-to-date dataset makes their work harder.\nIn this work, we introduce an up-to-date, richer, and larger dataset that we\nhave prepared by scraping IMDb for researchers and data analysts to work with.\nThe compiled dataset contains the summery data of 7.5 million titles and detail\ninformation of more than 200K movies. Additionally, we perform different\nstatistical analysis approaches on our dataset to find out how a movie's\nrevenue is affected by different pre-released attributes such as budget,\nruntime, release month, content rating, genre etc. In our analysis, we have\nfound that having a star cast/director has a positive impact on generated\nrevenue. We introduce a novel approach for calculating the star power of a\nmovie. Based on our analysis we select a set of attributes as features and\ntrain different machine learning algorithms to predict a movie's expected\nrevenue. Based on generated revenue, we classified the movies in 10 categories\nand achieved a one-class-away accuracy rate of almost 60% (bingo accuracy of\n30%). All the generated datasets and analysis codes are available online. We\nalso made the source codes of our scraper bots public, so that researchers\ninterested in extending this work can easily modify these bots as they need and\nprepare their own up-to-date datasets.",
"title": "Presenting a Larger Up-to-date Movie Dataset and Investigating the Effects of Pre-released Attributes on Gross Revenue",
"url": "http://arxiv.org/abs/2110.07039v2"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 14366fed-8b51-40fd-9cc3-c8ff049dd855 | metadata: null | status: Validated | metrics: {"text_length": 2030} | label: new_dataset (class 0)
We introduce WikiLingua, a large-scale, multilingual dataset for the
evaluation of crosslingual abstractive summarization systems. We extract
article and summary pairs in 18 languages from WikiHow, a high quality,
collaborative resource of how-to guides on a diverse set of topics written by
human authors. We create gold-standard article-summary alignments across
languages by aligning the images that are used to describe each how-to step in
an article. As a set of baselines for further studies, we evaluate the
performance of existing cross-lingual abstractive summarization methods on our
dataset. We further propose a method for direct crosslingual summarization
(i.e., without requiring translation at inference time) by leveraging synthetic
data and Neural Machine Translation as a pre-training step. Our method
significantly outperforms the baseline approaches, while being more cost
efficient during inference.
inputs:
{
"abstract": "We introduce WikiLingua, a large-scale, multilingual dataset for the\nevaluation of crosslingual abstractive summarization systems. We extract\narticle and summary pairs in 18 languages from WikiHow, a high quality,\ncollaborative resource of how-to guides on a diverse set of topics written by\nhuman authors. We create gold-standard article-summary alignments across\nlanguages by aligning the images that are used to describe each how-to step in\nan article. As a set of baselines for further studies, we evaluate the\nperformance of existing cross-lingual abstractive summarization methods on our\ndataset. We further propose a method for direct crosslingual summarization\n(i.e., without requiring translation at inference time) by leveraging synthetic\ndata and Neural Machine Translation as a pre-training step. Our method\nsignificantly outperforms the baseline approaches, while being more cost\nefficient during inference.",
"title": "WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
"url": "http://arxiv.org/abs/2010.03093v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 2b06bf83-4565-40c7-bd94-55d327c90489 | metadata: null | status: Validated | metrics: {"text_length": 1034} | label: new_dataset (class 0)
With increasingly more data and computation involved in their training,
machine learning models constitute valuable intellectual property. This has
spurred interest in model stealing, which is made more practical by advances in
learning with partial, little, or no supervision. Existing defenses focus on
inserting unique watermarks in a model's decision surface, but this is
insufficient: the watermarks are not sampled from the training distribution and
thus are not always preserved during model stealing. In this paper, we make the
key observation that knowledge contained in the stolen model's training set is
what is common to all stolen copies. The adversary's goal, irrespective of the
attack employed, is always to extract this knowledge or its by-products. This
gives the original model's owner a strong advantage over the adversary: model
owners have access to the original training data. We thus introduce $dataset$
$inference$, the process of identifying whether a suspected model copy has
private knowledge from the original model's dataset, as a defense against model
stealing. We develop an approach for dataset inference that combines
statistical testing with the ability to estimate the distance of multiple data
points to the decision boundary. Our experiments on CIFAR10, SVHN, CIFAR100 and
ImageNet show that model owners can claim with confidence greater than 99% that
their model (or dataset as a matter of fact) was stolen, despite only exposing
50 of the stolen model's training points. Dataset inference defends against
state-of-the-art attacks even when the adversary is adaptive. Unlike prior
work, it does not require retraining or overfitting the defended model.
inputs:
{
"abstract": "With increasingly more data and computation involved in their training,\nmachine learning models constitute valuable intellectual property. This has\nspurred interest in model stealing, which is made more practical by advances in\nlearning with partial, little, or no supervision. Existing defenses focus on\ninserting unique watermarks in a model's decision surface, but this is\ninsufficient: the watermarks are not sampled from the training distribution and\nthus are not always preserved during model stealing. In this paper, we make the\nkey observation that knowledge contained in the stolen model's training set is\nwhat is common to all stolen copies. The adversary's goal, irrespective of the\nattack employed, is always to extract this knowledge or its by-products. This\ngives the original model's owner a strong advantage over the adversary: model\nowners have access to the original training data. We thus introduce $dataset$\n$inference$, the process of identifying whether a suspected model copy has\nprivate knowledge from the original model's dataset, as a defense against model\nstealing. We develop an approach for dataset inference that combines\nstatistical testing with the ability to estimate the distance of multiple data\npoints to the decision boundary. Our experiments on CIFAR10, SVHN, CIFAR100 and\nImageNet show that model owners can claim with confidence greater than 99% that\ntheir model (or dataset as a matter of fact) was stolen, despite only exposing\n50 of the stolen model's training points. Dataset inference defends against\nstate-of-the-art attacks even when the adversary is adaptive. Unlike prior\nwork, it does not require retraining or overfitting the defended model.",
"title": "Dataset Inference: Ownership Resolution in Machine Learning",
"url": "http://arxiv.org/abs/2104.10706v1"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 34e24349-1528-48dd-a7cc-b25e36470f4d | metadata: null | status: Validated | metrics: {"text_length": 1786} | label: no_new_dataset (class 1)
Continuity of care is crucial to ensuring positive health outcomes for
patients discharged from an inpatient hospital setting, and improved
information sharing can help. To share information, caregivers write discharge
notes containing action items to share with patients and their future
caregivers, but these action items are easily lost due to the lengthiness of
the documents. In this work, we describe our creation of a dataset of clinical
action items annotated over MIMIC-III, the largest publicly available dataset
of real clinical notes. This dataset, which we call CLIP, is annotated by
physicians and covers 718 documents representing 100K sentences. We describe
the task of extracting the action items from these documents as multi-aspect
extractive summarization, with each aspect representing a type of action to be
taken. We evaluate several machine learning models on this task, and show that
the best models exploit in-domain language model pre-training on 59K
unannotated documents, and incorporate context from neighboring sentences. We
also propose an approach to pre-training data selection that allows us to
explore the trade-off between size and domain-specificity of pre-training
datasets for this task.
inputs:
{
"abstract": "Continuity of care is crucial to ensuring positive health outcomes for\npatients discharged from an inpatient hospital setting, and improved\ninformation sharing can help. To share information, caregivers write discharge\nnotes containing action items to share with patients and their future\ncaregivers, but these action items are easily lost due to the lengthiness of\nthe documents. In this work, we describe our creation of a dataset of clinical\naction items annotated over MIMIC-III, the largest publicly available dataset\nof real clinical notes. This dataset, which we call CLIP, is annotated by\nphysicians and covers 718 documents representing 100K sentences. We describe\nthe task of extracting the action items from these documents as multi-aspect\nextractive summarization, with each aspect representing a type of action to be\ntaken. We evaluate several machine learning models on this task, and show that\nthe best models exploit in-domain language model pre-training on 59K\nunannotated documents, and incorporate context from neighboring sentences. We\nalso propose an approach to pre-training data selection that allows us to\nexplore the trade-off between size and domain-specificity of pre-training\ndatasets for this task.",
"title": "CLIP: A Dataset for Extracting Action Items for Physicians from Hospital Discharge Notes",
"url": "http://arxiv.org/abs/2106.02524v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 00207d9e-f241-43fd-81d6-65b657045f7d | metadata: null | status: Validated | metrics: {"text_length": 1350} | label: new_dataset (class 0)
Machine learning models deployed in healthcare systems face data drawn from
continually evolving environments. However, researchers proposing such models
typically evaluate them in a time-agnostic manner, with train and test splits
sampling patients throughout the entire study period. We introduce the
Evaluation on Medical Datasets Over Time (EMDOT) framework and Python package,
which evaluates the performance of a model class over time. Across five medical
datasets and a variety of models, we compare two training strategies: (1) using
all historical data, and (2) using a window of the most recent data. We note
changes in performance over time, and identify possible explanations for these
shocks.
inputs:
{
"abstract": "Machine learning models deployed in healthcare systems face data drawn from\ncontinually evolving environments. However, researchers proposing such models\ntypically evaluate them in a time-agnostic manner, with train and test splits\nsampling patients throughout the entire study period. We introduce the\nEvaluation on Medical Datasets Over Time (EMDOT) framework and Python package,\nwhich evaluates the performance of a model class over time. Across five medical\ndatasets and a variety of models, we compare two training strategies: (1) using\nall historical data, and (2) using a window of the most recent data. We note\nchanges in performance over time, and identify possible explanations for these\nshocks.",
"title": "Model Evaluation in Medical Datasets Over Time",
"url": "http://arxiv.org/abs/2211.07165v1"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 14c4fda3-5210-42ac-b939-bbe5e881a6bc | metadata: null | status: Validated | metrics: {"text_length": 786} | label: no_new_dataset (class 1)
Lecture slide presentations, a sequence of pages that contain text and
figures accompanied by speech, are constructed and presented carefully in order
to optimally transfer knowledge to students. Previous studies in multimedia and
psychology attribute the effectiveness of lecture presentations to their
multimodal nature. As a step toward developing AI to aid in student learning as
intelligent teacher assistants, we introduce the Multimodal Lecture
Presentations dataset as a large-scale benchmark testing the capabilities of
machine learning models in multimodal understanding of educational content. Our
dataset contains aligned slides and spoken language, for 180+ hours of video
and 9000+ slides, with 10 lecturers from various subjects (e.g., computer
science, dentistry, biology). We introduce two research tasks which are
designed as stepping stones towards AI agents that can explain (automatically
captioning a lecture presentation) and illustrate (synthesizing visual figures
to accompany spoken explanations) educational content. We provide manual
annotations to help implement these two research tasks and evaluate
state-of-the-art models on them. Comparing baselines and human student
performances, we find that current models struggle in (1) weak crossmodal
alignment between slides and spoken text, (2) learning novel visual mediums,
(3) technical language, and (4) long-range sequences. Towards addressing this
issue, we also introduce PolyViLT, a multimodal transformer trained with a
multi-instance learning loss that is more effective than current approaches. We
conclude by shedding light on the challenges and opportunities in multimodal
understanding of educational presentations.
inputs:
{
"abstract": "Lecture slide presentations, a sequence of pages that contain text and\nfigures accompanied by speech, are constructed and presented carefully in order\nto optimally transfer knowledge to students. Previous studies in multimedia and\npsychology attribute the effectiveness of lecture presentations to their\nmultimodal nature. As a step toward developing AI to aid in student learning as\nintelligent teacher assistants, we introduce the Multimodal Lecture\nPresentations dataset as a large-scale benchmark testing the capabilities of\nmachine learning models in multimodal understanding of educational content. Our\ndataset contains aligned slides and spoken language, for 180+ hours of video\nand 9000+ slides, with 10 lecturers from various subjects (e.g., computer\nscience, dentistry, biology). We introduce two research tasks which are\ndesigned as stepping stones towards AI agents that can explain (automatically\ncaptioning a lecture presentation) and illustrate (synthesizing visual figures\nto accompany spoken explanations) educational content. We provide manual\nannotations to help implement these two research tasks and evaluate\nstate-of-the-art models on them. Comparing baselines and human student\nperformances, we find that current models struggle in (1) weak crossmodal\nalignment between slides and spoken text, (2) learning novel visual mediums,\n(3) technical language, and (4) long-range sequences. Towards addressing this\nissue, we also introduce PolyViLT, a multimodal transformer trained with a\nmulti-instance learning loss that is more effective than current approaches. We\nconclude by shedding light on the challenges and opportunities in multimodal\nunderstanding of educational presentations.",
"title": "Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides",
"url": "http://arxiv.org/abs/2208.08080v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 1d4f2924-deb2-4c82-9a01-95e659100428 | metadata: null | status: Validated | metrics: {"text_length": 1831} | label: new_dataset (class 0)
Imperfections in data annotation, known as label noise, are detrimental to
the training of machine learning models and have an often-overlooked
confounding effect on the assessment of model performance. Nevertheless,
employing experts to remove label noise by fully re-annotating large datasets
is infeasible in resource-constrained settings, such as healthcare. This work
advocates for a data-driven approach to prioritising samples for re-annotation
- which we term "active label cleaning". We propose to rank instances according
to estimated label correctness and labelling difficulty of each sample, and
introduce a simulation framework to evaluate relabelling efficacy. Our
experiments on natural images and on a new medical imaging benchmark show that
cleaning noisy labels mitigates their negative impact on model training,
evaluation, and selection. Crucially, the proposed active label cleaning
enables correcting labels up to 4 times more effectively than typical random
selection in realistic conditions, making better use of experts' valuable time
for improving dataset quality.
inputs:
{
"abstract": "Imperfections in data annotation, known as label noise, are detrimental to\nthe training of machine learning models and have an often-overlooked\nconfounding effect on the assessment of model performance. Nevertheless,\nemploying experts to remove label noise by fully re-annotating large datasets\nis infeasible in resource-constrained settings, such as healthcare. This work\nadvocates for a data-driven approach to prioritising samples for re-annotation\n- which we term \"active label cleaning\". We propose to rank instances according\nto estimated label correctness and labelling difficulty of each sample, and\nintroduce a simulation framework to evaluate relabelling efficacy. Our\nexperiments on natural images and on a new medical imaging benchmark show that\ncleaning noisy labels mitigates their negative impact on model training,\nevaluation, and selection. Crucially, the proposed active label cleaning\nenables correcting labels up to 4 times more effectively than typical random\nselection in realistic conditions, making better use of experts' valuable time\nfor improving dataset quality.",
"title": "Active label cleaning for improved dataset quality under resource constraints",
"url": "http://arxiv.org/abs/2109.00574v2"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 2664980c-6dd3-4e96-9645-ed72da54a84b | metadata: null | status: Validated | metrics: {"text_length": 1202} | label: no_new_dataset (class 1)
We introduce the first large-scale dataset, MNISQ, for both the Quantum and
the Classical Machine Learning community during the Noisy Intermediate-Scale
Quantum era. MNISQ consists of 4,950,000 data points organized in 9
subdatasets. Building our dataset from the quantum encoding of classical
information (e.g., MNIST dataset), we deliver a dataset in a dual form: in
quantum form, as circuits, and in classical form, as quantum circuit
descriptions (quantum programming language, QASM). In fact, also the Machine
Learning research related to quantum computers undertakes a dual challenge:
enhancing machine learning exploiting the power of quantum computers, while
also leveraging state-of-the-art classical machine learning methodologies to
help the advancement of quantum computing. Therefore, we perform circuit
classification on our dataset, tackling the task with both quantum and
classical models. In the quantum endeavor, we test our circuit dataset with
Quantum Kernel methods, and we show excellent results up to $97\%$ accuracy. In
the classical world, the underlying quantum mechanical structures within the
quantum circuit data are not trivial. Nevertheless, we test our dataset on
three classical models: Structured State Space sequence model (S4), Transformer
and LSTM. In particular, the S4 model applied on the tokenized QASM sequences
reaches an impressive $77\%$ accuracy. These findings illustrate that quantum
circuit-related datasets are likely to be quantum advantageous, but also that
state-of-the-art machine learning methodologies can competently classify and
recognize quantum circuits. We finally entrust the quantum and classical
machine learning community the fundamental challenge to build more
quantum-classical datasets like ours and to build future benchmarks from our
experiments. The dataset is accessible on GitHub and its circuits are easily
run in qulacs or qiskit.
inputs:
{
"abstract": "We introduce the first large-scale dataset, MNISQ, for both the Quantum and\nthe Classical Machine Learning community during the Noisy Intermediate-Scale\nQuantum era. MNISQ consists of 4,950,000 data points organized in 9\nsubdatasets. Building our dataset from the quantum encoding of classical\ninformation (e.g., MNIST dataset), we deliver a dataset in a dual form: in\nquantum form, as circuits, and in classical form, as quantum circuit\ndescriptions (quantum programming language, QASM). In fact, also the Machine\nLearning research related to quantum computers undertakes a dual challenge:\nenhancing machine learning exploiting the power of quantum computers, while\nalso leveraging state-of-the-art classical machine learning methodologies to\nhelp the advancement of quantum computing. Therefore, we perform circuit\nclassification on our dataset, tackling the task with both quantum and\nclassical models. In the quantum endeavor, we test our circuit dataset with\nQuantum Kernel methods, and we show excellent results up to $97\\%$ accuracy. In\nthe classical world, the underlying quantum mechanical structures within the\nquantum circuit data are not trivial. Nevertheless, we test our dataset on\nthree classical models: Structured State Space sequence model (S4), Transformer\nand LSTM. In particular, the S4 model applied on the tokenized QASM sequences\nreaches an impressive $77\\%$ accuracy. These findings illustrate that quantum\ncircuit-related datasets are likely to be quantum advantageous, but also that\nstate-of-the-art machine learning methodologies can competently classify and\nrecognize quantum circuits. We finally entrust the quantum and classical\nmachine learning community the fundamental challenge to build more\nquantum-classical datasets like ours and to build future benchmarks from our\nexperiments. The dataset is accessible on GitHub and its circuits are easily\nrun in qulacs or qiskit.",
"title": "MNISQ: A Large-Scale Quantum Circuit Dataset for Machine Learning on/for Quantum Computers in the NISQ era",
"url": "http://arxiv.org/abs/2306.16627v1"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 0b9322bb-fb4b-4408-979c-1f2ac365da9e | metadata: null | status: Validated | metrics: {"text_length": 2046} | label: new_dataset (class 0)
We introduce RaidaR, a rich annotated image dataset of rainy street scenes,
to support autonomous driving research. The new dataset contains the largest
number of rainy images (58,542) to date, 5,000 of which provide semantic
segmentations and 3,658 provide object instance segmentations. The RaidaR
images cover a wide range of realistic rain-induced artifacts, including fog,
droplets, and road reflections, which can effectively augment existing street
scene datasets to improve data-driven machine perception during rainy weather.
To facilitate efficient annotation of a large volume of images, we develop a
semi-automatic scheme combining manual segmentation and an automated processing
akin to cross validation, resulting in 10-20 fold reduction on annotation time.
We demonstrate the utility of our new dataset by showing how data augmentation
with RaidaR can elevate the accuracy of existing segmentation algorithms. We
also present a novel unpaired image-to-image translation algorithm for
adding/removing rain artifacts, which directly benefits from RaidaR.
inputs:
{
"abstract": "We introduce RaidaR, a rich annotated image dataset of rainy street scenes,\nto support autonomous driving research. The new dataset contains the largest\nnumber of rainy images (58,542) to date, 5,000 of which provide semantic\nsegmentations and 3,658 provide object instance segmentations. The RaidaR\nimages cover a wide range of realistic rain-induced artifacts, including fog,\ndroplets, and road reflections, which can effectively augment existing street\nscene datasets to improve data-driven machine perception during rainy weather.\nTo facilitate efficient annotation of a large volume of images, we develop a\nsemi-automatic scheme combining manual segmentation and an automated processing\nakin to cross validation, resulting in 10-20 fold reduction on annotation time.\nWe demonstrate the utility of our new dataset by showing how data augmentation\nwith RaidaR can elevate the accuracy of existing segmentation algorithms. We\nalso present a novel unpaired image-to-image translation algorithm for\nadding/removing rain artifacts, which directly benefits from RaidaR.",
"title": "RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes",
"url": "http://arxiv.org/abs/2104.04606v3"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 2811a11c-72ae-43e1-bf62-d086501ece10 | metadata: null | status: Validated | metrics: {"text_length": 1163} | label: new_dataset (class 0)
The availability of different pre-trained semantic models enabled the quick
development of machine learning components for downstream applications. Despite
the availability of abundant text data for low resource languages, only a few
semantic models are publicly available. Publicly available pre-trained models
are usually built as a multilingual version of semantic models that can not fit
well for each language due to context variations. In this work, we introduce
different semantic models for Amharic. After we experiment with the existing
pre-trained semantic models, we trained and fine-tuned nine new different
models using a monolingual text corpus. The models are build using word2Vec
embeddings, distributional thesaurus (DT), contextual embeddings, and DT
embeddings obtained via network embedding algorithms. Moreover, we employ these
models for different NLP tasks and investigate their impact. We find that newly
trained models perform better than pre-trained multilingual models.
Furthermore, models based on contextual embeddings from RoBERTA perform better
than the word2Vec models.
inputs:
{
"abstract": "The availability of different pre-trained semantic models enabled the quick\ndevelopment of machine learning components for downstream applications. Despite\nthe availability of abundant text data for low resource languages, only a few\nsemantic models are publicly available. Publicly available pre-trained models\nare usually built as a multilingual version of semantic models that can not fit\nwell for each language due to context variations. In this work, we introduce\ndifferent semantic models for Amharic. After we experiment with the existing\npre-trained semantic models, we trained and fine-tuned nine new different\nmodels using a monolingual text corpus. The models are build using word2Vec\nembeddings, distributional thesaurus (DT), contextual embeddings, and DT\nembeddings obtained via network embedding algorithms. Moreover, we employ these\nmodels for different NLP tasks and investigate their impact. We find that newly\ntrained models perform better than pre-trained multilingual models.\nFurthermore, models based on contextual embeddings from RoBERTA perform better\nthan the word2Vec models.",
"title": "Introducing various Semantic Models for Amharic: Experimentation and Evaluation with multiple Tasks and Datasets",
"url": "http://arxiv.org/abs/2011.01154v2"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 23f200e3-8943-4963-b563-044769105c27 | metadata: null | status: Validated | metrics: {"text_length": 1248} | label: no_new_dataset (class 1)
Many recent neural models have shown remarkable empirical results in Machine
Reading Comprehension, but evidence suggests sometimes the models take
advantage of dataset biases to predict and fail to generalize on out-of-sample
data. While many other approaches have been proposed to address this issue from
the computation perspective such as new architectures or training procedures,
we believe a method that allows researchers to discover biases, and adjust the
data or the models in an earlier stage will be beneficial. Thus, we introduce
MRCLens, a toolkit that detects whether biases exist before users train the
full model. For the convenience of introducing the toolkit, we also provide a
categorization of common biases in MRC.
inputs:
{
"abstract": "Many recent neural models have shown remarkable empirical results in Machine\nReading Comprehension, but evidence suggests sometimes the models take\nadvantage of dataset biases to predict and fail to generalize on out-of-sample\ndata. While many other approaches have been proposed to address this issue from\nthe computation perspective such as new architectures or training procedures,\nwe believe a method that allows researchers to discover biases, and adjust the\ndata or the models in an earlier stage will be beneficial. Thus, we introduce\nMRCLens, a toolkit that detects whether biases exist before users train the\nfull model. For the convenience of introducing the toolkit, we also provide a\ncategorization of common biases in MRC.",
"title": "MRCLens: an MRC Dataset Bias Detection Toolkit",
"url": "http://arxiv.org/abs/2207.08943v1"
}
prediction: null | prediction_agent: null | annotation: no_new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 298e2a99-0e5b-49a9-935b-ddb37e83be36 | metadata: null | status: Validated | metrics: {"text_length": 816} | label: no_new_dataset (class 1)
Subseasonal forecasting of the weather two to six weeks in advance is
critical for resource allocation and climate adaptation but poses many
challenges for the forecasting community. At this forecast horizon,
physics-based dynamical models have limited skill, and the targets for
prediction depend in a complex manner on both local weather and global climate
variables. Recently, machine learning methods have shown promise in advancing
the state of the art but only at the cost of complex data curation, integrating
expert knowledge with aggregation across multiple relevant data sources, file
formats, and temporal and spatial resolutions. To streamline this process and
accelerate future development, we introduce SubseasonalClimateUSA, a curated
dataset for training and benchmarking subseasonal forecasting models in the
United States. We use this dataset to benchmark a diverse suite of subseasonal
models, including operational dynamical models, classical meteorological
baselines, and ten state-of-the-art machine learning and deep learning-based
methods from the literature. Overall, our benchmarks suggest simple and
effective ways to extend the accuracy of current operational models.
SubseasonalClimateUSA is regularly updated and accessible via the
https://github.com/microsoft/subseasonal_data/ Python package.
inputs:
{
"abstract": "Subseasonal forecasting of the weather two to six weeks in advance is\ncritical for resource allocation and climate adaptation but poses many\nchallenges for the forecasting community. At this forecast horizon,\nphysics-based dynamical models have limited skill, and the targets for\nprediction depend in a complex manner on both local weather and global climate\nvariables. Recently, machine learning methods have shown promise in advancing\nthe state of the art but only at the cost of complex data curation, integrating\nexpert knowledge with aggregation across multiple relevant data sources, file\nformats, and temporal and spatial resolutions. To streamline this process and\naccelerate future development, we introduce SubseasonalClimateUSA, a curated\ndataset for training and benchmarking subseasonal forecasting models in the\nUnited States. We use this dataset to benchmark a diverse suite of subseasonal\nmodels, including operational dynamical models, classical meteorological\nbaselines, and ten state-of-the-art machine learning and deep learning-based\nmethods from the literature. Overall, our benchmarks suggest simple and\neffective ways to extend the accuracy of current operational models.\nSubseasonalClimateUSA is regularly updated and accessible via the\nhttps://github.com/microsoft/subseasonal_data/ Python package.",
"title": "SubseasonalClimateUSA: A Dataset for Subseasonal Forecasting and Benchmarking",
"url": "http://arxiv.org/abs/2109.10399v3"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 0bc818ba-e944-48fa-b660-12e49dde2661 | metadata: null | status: Validated | metrics: {"text_length": 1436} | label: new_dataset (class 0)
Understanding how events are semantically related to each other is the
essence of reading comprehension. Recent event-centric reading comprehension
datasets focus mostly on event arguments or temporal relations. While these
tasks partially evaluate machines' ability of narrative understanding,
human-like reading comprehension requires the capability to process event-based
information beyond arguments and temporal reasoning. For example, to understand
causality between events, we need to infer motivation or purpose; to establish
event hierarchy, we need to understand the composition of events. To facilitate
these tasks, we introduce ESTER, a comprehensive machine reading comprehension
(MRC) dataset for Event Semantic Relation Reasoning. The dataset leverages
natural language queries to reason about the five most common event semantic
relations, provides more than 6K questions and captures 10.1K event relation
pairs. Experimental results show that the current SOTA systems achieve 22.1%,
63.3%, and 83.5% for token-based exact-match, F1, and event-based HIT@1 scores,
which are all significantly below human performances (36.0%, 79.6%, 100%
respectively), highlighting our dataset as a challenging benchmark.
inputs:
{
"abstract": "Understanding how events are semantically related to each other is the\nessence of reading comprehension. Recent event-centric reading comprehension\ndatasets focus mostly on event arguments or temporal relations. While these\ntasks partially evaluate machines' ability of narrative understanding,\nhuman-like reading comprehension requires the capability to process event-based\ninformation beyond arguments and temporal reasoning. For example, to understand\ncausality between events, we need to infer motivation or purpose; to establish\nevent hierarchy, we need to understand the composition of events. To facilitate\nthese tasks, we introduce ESTER, a comprehensive machine reading comprehension\n(MRC) dataset for Event Semantic Relation Reasoning. The dataset leverages\nnatural language queries to reason about the five most common event semantic\nrelations, provides more than 6K questions and captures 10.1K event relation\npairs. Experimental results show that the current SOTA systems achieve 22.1%,\n63.3%, and 83.5% for token-based exact-match, F1, and event-based HIT@1 scores,\nwhich are all significantly below human performances (36.0%, 79.6%, 100%\nrespectively), highlighting our dataset as a challenging benchmark.",
"title": "ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning",
"url": "http://arxiv.org/abs/2104.08350v2"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 1db2c1e2-fa8e-442e-a345-cec8be294fd5 | metadata: null | status: Validated | metrics: {"text_length": 1339} | label: new_dataset (class 0)
Machine reading comprehension (MRC) is a crucial task in natural language
processing and has achieved remarkable advancements. However, most of the
neural MRC models are still far from robust and fail to generalize well in
real-world applications. In order to comprehensively verify the robustness and
generalization of MRC models, we introduce a real-world Chinese dataset --
DuReader_robust. It is designed to evaluate the MRC models from three aspects:
over-sensitivity, over-stability and generalization. Comparing to previous
work, the instances in DuReader_robust are natural texts, rather than the
altered unnatural texts. It presents the challenges when applying MRC models to
real-world applications. The experimental results show that MRC models do not
perform well on the challenge test set. Moreover, we analyze the behavior of
existing models on the challenge test set, which may provide suggestions for
future model development. The dataset and codes are publicly available at
https://github.com/baidu/DuReader.
inputs:
{
"abstract": "Machine reading comprehension (MRC) is a crucial task in natural language\nprocessing and has achieved remarkable advancements. However, most of the\nneural MRC models are still far from robust and fail to generalize well in\nreal-world applications. In order to comprehensively verify the robustness and\ngeneralization of MRC models, we introduce a real-world Chinese dataset --\nDuReader_robust. It is designed to evaluate the MRC models from three aspects:\nover-sensitivity, over-stability and generalization. Comparing to previous\nwork, the instances in DuReader_robust are natural texts, rather than the\naltered unnatural texts. It presents the challenges when applying MRC models to\nreal-world applications. The experimental results show that MRC models do not\nperform well on the challenge test set. Moreover, we analyze the behavior of\nexisting models on the challenge test set, which may provide suggestions for\nfuture model development. The dataset and codes are publicly available at\nhttps://github.com/baidu/DuReader.",
"title": "DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications",
"url": "http://arxiv.org/abs/2004.11142v2"
}
prediction: null | prediction_agent: null | annotation: new_dataset | annotation_agent: admin | vectors: null | multi_label: false | explanation: null
id: 1328354b-0919-4070-949f-efbc0212e99f | metadata: null | status: Validated | metrics: {"text_length": 1203} | label: new_dataset (class 0)
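As a closing usage note, the binary annotation makes it straightforward to split or filter the records; the snippet below is a small sketch that reuses the `ds` object from the loading example above.

```python
# Sketch: count the abstracts annotated as introducing a new dataset,
# reusing `ds` from the loading example above.
new_dataset_rows = ds.filter(lambda r: r["annotation"] == "new_dataset")
print(f"{len(new_dataset_rows)} of {len(ds)} abstracts are labelled new_dataset")
```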
Downloads last month: 62