Dataset Card for CSQA-SPARQLtoText
Dataset Summary
The CSQA corpus (Complex Sequential Question-Answering, see https://amritasaha1812.github.io/CSQA/) is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields to make it easier to run specific tasks, especially SPARQL-to-text conversion.
The original data has been post-processed as follows:
- Verbalization templates were applied to the answers, and their entities were verbalized (replaced by their labels in Wikidata)
- Questions were parsed with the CARTON algorithm to produce a sequence of actions in a specific grammar
- Sequences of actions were mapped to SPARQL queries, and their entities were verbalized (replaced by their labels in Wikidata)
Supported tasks
- Knowledge-based question-answering
- Text-to-SPARQL conversion
Knowledge-based question-answering
Below is an example of dialogue:
- Q1: Which occupation is the profession of Edmond Yernaux ?
- A1: politician
- Q2: Which collectable has that occupation as its principal topic ?
- A2: Notitia Parliamentaria, An History of the Counties, etc.
SPARQL queries and natural language questions
```sparql
SELECT DISTINCT ?x WHERE
{ ?x rdf:type ontology:occupation . resource:Edmond_Yernaux property:occupation ?x }
```
is equivalent to:
Which occupation is the profession of Edmond Yernaux ?
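Such a (question, query) pair maps directly to a seq2seq training example. Below is a minimal sketch, assuming turn dictionaries shaped like the JSON fields of this corpus (`utterance`, `sparql_query`); the helper name `to_training_pair` is hypothetical:

```python
# Hypothetical helper: build a (source, target) pair for text-to-SPARQL
# or SPARQL-to-text training from one USER turn of the corpus.
def to_training_pair(turn, direction="text2sparql"):
    """Map one USER turn to a seq2seq training example."""
    question = turn["utterance"]
    query = turn["sparql_query"]
    if direction == "text2sparql":
        return {"source": question, "target": query}
    return {"source": query, "target": question}  # sparql2text direction
```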
Languages
- English
Dataset Structure
The corpus follows the global architecture from the original version of CSQA (https://amritasaha1812.github.io/CSQA/).
There is one directory for each of the train, dev, and test sets.
Dialogues are stored in separate directories, 100 dialogues per directory.
Finally, each dialogue is stored in a JSON file as a list of turns.
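The layout above can be walked with the standard library alone. This is only a sketch: the split directory names and the `QA_*.json` file pattern are assumptions about the on-disk naming, not guaranteed by this card:

```python
# Sketch of iterating the corpus layout: one directory per split,
# sub-directories of dialogues, each dialogue a JSON list of turns.
import json
from pathlib import Path

def iter_dialogues(root, split="train"):
    """Yield (dialogue_id, turns) for every dialogue JSON file in a split."""
    for path in sorted(Path(root, split).glob("*/QA_*.json")):
        with open(path, encoding="utf-8") as f:
            yield path.stem, json.load(f)  # each file holds a list of turns
```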
Types of questions
Comparison of question types across related datasets (the placement of check marks in sparse rows is reconstructed from the garbled original and may differ in detail from the source table):

| | | SimpleQuestions | ParaQA | LC-QuAD 2.0 | CSQA | WebNLQ-QA |
|---|---|---|---|---|---|---|
| Number of triplets in query | 1 | ✓ | ✓ | ✓ | ✓ | ✓ |
| | 2 | | ✓ | ✓ | ✓ | ✓ |
| | More | | | ✓ | ✓ | ✓ |
| Logical connector between triplets | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Disjunction | | | | ✓ | ✓ |
| | Exclusion | | | | ✓ | ✓ |
| Topology of the query graph | Direct | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Sibling | | ✓ | ✓ | ✓ | ✓ |
| | Chain | | ✓ | ✓ | ✓ | ✓ |
| | Mixed | | | | ✓ | ✓ |
| | Other | | ✓ | ✓ | ✓ | ✓ |
| Variable typing in the query | None | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Target variable | | ✓ | ✓ | ✓ | ✓ |
| | Internal variable | | ✓ | ✓ | ✓ | ✓ |
| Comparison clauses | None | ✓ | ✓ | ✓ | ✓ | ✓ |
| | String | | | ✓ | | ✓ |
| | Number | | | ✓ | ✓ | ✓ |
| | Date | | | ✓ | | ✓ |
| Superlative clauses | No | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Yes | | | | ✓ | |
| Answer type | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Entity (closed) | | | | ✓ | ✓ |
| | Number | | | ✓ | ✓ | ✓ |
| | Boolean | | ✓ | ✓ | ✓ | ✓ |
| Answer cardinality | 0 (unanswerable) | | | | ✓ | ✓ |
| | 1 | ✓ | ✓ | ✓ | ✓ | ✓ |
| | More | | ✓ | ✓ | ✓ | ✓ |
| Number of target variables | 0 (→ ASK verb) | | ✓ | ✓ | ✓ | ✓ |
| | 1 | ✓ | ✓ | ✓ | ✓ | ✓ |
| | 2 | | | ✓ | | ✓ |
| Dialogue context | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Coreference | | | | ✓ | ✓ |
| | Ellipsis | | | | ✓ | ✓ |
| Meaning | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Non-sense | | | | | ✓ |
Data splits
Text verbalization is only available for a subset of the test set, referred to as the challenge set. Other samples only contain dialogues in the form of follow-up SPARQL queries.
| | Train | Validation | Test |
|---|---|---|---|
| Questions | 1.5M | 167K | 260K |
| Dialogues | 152K | 17K | 28K |
| NL questions per query | 1 | | |
| Characters per query | 163 (± 100) | | |
| Tokens per question | 10 (± 4) | | |
JSON fields
Each turn of a dialogue contains the following fields:
Original fields
- `ques_type_id`: ID corresponding to the question utterance
- `description`: description of the type of question
- `relations`: IDs of the predicates used in the utterance
- `entities_in_utterance`: IDs of the entities used in the question
- `speaker`: the nature of the speaker: `SYSTEM` or `USER`
- `utterance`: the utterance: either the question, clarification, or response
- `active_set`: a regular expression which identifies the entity set of the answer list
- `all_entities`: list of ALL entities which constitute the answer to the question
- `question-type`: type of question (broad types used for evaluation, as given in the original authors' paper)
- `type_list`: list containing the entity IDs of all entity parents used in the question
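Using the `speaker` and `utterance` fields, USER questions can be paired with the SYSTEM answers that follow them, as in the dialogue example above. A minimal sketch with a hypothetical helper name:

```python
# Sketch: pair each USER turn with the SYSTEM turn that answers it,
# relying only on the "speaker" and "utterance" fields described above.
def qa_pairs(turns):
    """Return (question, answer) tuples from an alternating turn list."""
    pairs = []
    for turn, nxt in zip(turns, turns[1:]):
        if turn.get("speaker") == "USER" and nxt.get("speaker") == "SYSTEM":
            pairs.append((turn["utterance"], nxt["utterance"]))
    return pairs
```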
New fields
- `is_spurious`: introduced by CARTON
- `is_incomplete`: whether the question is self-sufficient (complete) or relies on information given in the previous turns (incomplete)
- `parsed_active_set`:
- `gold_actions`: sequence of ACTIONs as returned by CARTON
- `sparql_query`: SPARQL query
Verbalized fields
Fields with `verbalized` in their name are verbalized versions of other fields, i.e., IDs were replaced by the actual words/labels.
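The verbalization step amounts to a lookup from Wikidata IDs to labels. The sketch below illustrates the idea; the `LABELS` mapping and its entries are invented for illustration, not taken from the corpus:

```python
# Illustrative only: an ID-to-label mapping such as this would normally be
# extracted from Wikidata. These example labels are made up.
LABELS = {"Q0000001": "example entity", "P0000001": "example predicate"}

def verbalize(ids, labels=LABELS):
    """Replace each Wikidata ID by its label, keeping unknown IDs as-is."""
    return [labels.get(i, i) for i in ids]
```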
Format of the SPARQL queries
- Clauses are in random order.
- Variable names are random letters, and the letters change from one turn to another.
- Delimiters are surrounded by spaces.
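Because variable letters change from one turn to the next, it can help to canonicalize them before comparing queries across turns. A minimal sketch (the helper name and renaming scheme are assumptions, not part of the corpus):

```python
import re

def canonicalize_vars(query):
    """Rename SPARQL variables to ?v0, ?v1, ... in order of first appearance."""
    mapping = {}
    def repl(match):
        name = match.group(0)
        if name not in mapping:
            mapping[name] = "?v%d" % len(mapping)
        return mapping[name]
    # A SPARQL variable is '?' followed by an identifier.
    return re.sub(r"\?[A-Za-z_]\w*", repl, query)
```

With this normalization, two turns that differ only in their random variable letters map to the same string.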
Additional Information
Licensing Information
Content from the original dataset: CC BY-SA 4.0
New content: CC BY-SA 4.0
Citation Information
This version of the corpus (with SPARQL queries)
```bibtex
@inproceedings{lecorve2022sparql2text,
  title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications},
  author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.},
  booktitle={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)},
  year={2022}
}
```
Original corpus (CSQA)
```bibtex
@InProceedings{saha2018complex,
  title = {Complex {Sequential} {Question} {Answering}: {Towards} {Learning} to {Converse} {Over} {Linked} {Question} {Answer} {Pairs} with a {Knowledge} {Graph}},
  volume = {32},
  issn = {2374-3468},
  url = {https://ojs.aaai.org/index.php/AAAI/article/view/11332},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  author = {Saha, Amrita and Pahuja, Vardaan and Khapra, Mitesh and Sankaranarayanan, Karthik and Chandar, Sarath},
  month = apr,
  year = {2018}
}
```
CARTON
```bibtex
@InProceedings{plepi2021context,
  author="Plepi, Joan and Kacupaj, Endri and Singh, Kuldeep and Thakkar, Harsh and Lehmann, Jens",
  editor="Verborgh, Ruben and Hose, Katja and Paulheim, Heiko and Champin, Pierre-Antoine and Maleshkova, Maria and Corcho, Oscar and Ristoski, Petar and Alam, Mehwish",
  title="Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs",
  booktitle="Proceedings of The Semantic Web",
  year="2021",
  publisher="Springer International Publishing",
  pages="356--371",
  isbn="978-3-030-77385-4"
}
```