Reasoning Over Paragraph Effects in Situations


Abstract

A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we present ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations. We target expository language describing causes and effects (e.g., "animal pollinators increase efficiency of fertilization in flowers"), as they have clear implications for new situations. A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. We collect background passages from science textbooks and Wikipedia that contain such phenomena, and ask crowd workers to author situations, questions, and answers, resulting in a 14,322 question dataset. We analyze the challenges of this task and evaluate the performance of state-of-the-art reading comprehension models. The best model performs only slightly better than randomly guessing an answer of the correct type, at 61.6% F1, well below the human performance of 89.0%.

1 Introduction

Comprehending a passage of text requires being able to understand the implications of the passage on other text that is read. For example, after reading a background passage about how animal pollinators increase the efficiency of fertilization in flowers, a human can easily deduce that given two types of flowers, one that attracts animal pollinators and one that does not, the former is likely to have a higher efficiency in fertilization (Figure 1). This kind of reasoning, however, is still challenging for state-of-the-art reading comprehension models. Recent work in reading comprehension has seen impressive results, with models reaching human performance on well-established datasets (Wang et al., 2017; Chen et al., 2016), but so far has mostly focused on extracting local predicate-argument structure, without the need to apply what was read to outside context.

Background: Scientists think that the earliest flowers attracted insects and other animals, which spread pollen from flower to flower. This greatly increased the efficiency of fertilization over wind-spread pollen, which might or might not actually land on another flower. To take better advantage of this animal labor, plants evolved traits such as brightly colored petals to attract pollinators. In exchange for pollination, flowers gave the pollinators nectar.

Situation: Last week, John visited the national park near his city. He saw many flowers. His guide explained him that there are two categories of flowers, category A and category B. Category A flowers spread pollen via wind, and category B flowers spread pollen via animals.

Question: Would category B flower have more or less efficient fertilization than category A flower? Answer: more
Question: Would category A flower have more or less efficient fertilization than category B flower? Answer: less
Question: Which category of flowers would be more likely to have brightly colored petals? Answer: Category B
Question: Which category of flowers would be less likely to have brightly colored petals? Answer: Category A

Figure 1: Example questions in ROPES.

We introduce ROPES, a reading comprehension challenge that focuses on understanding causes and effects in an expository paragraph, requiring systems to apply this understanding to novel situations. If a new situation describes an occurrence of the cause, then the system should be able to reason over the effects if it has properly understood the background passage.

We constructed ROPES by first collecting background passages from science textbooks and Wikipedia articles that describe causal relationships. We showed these paragraphs to crowd workers and asked them to write situations that involve the relationships found in the background passage, and questions that connect the situation and the background using the causal relationships. The answers are spans from either the situation or the question. The dataset consists of 14,102 questions from various domains, mostly in science and economics.
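To make the task format concrete, a single ROPES-style example can be written out as below, using the flower example from Figure 1. This is only an illustrative sketch: the field names (background, situation, question, answer) are chosen here for readability and are not claimed to match the released file format.

```python
# One ROPES-style example as a plain Python dict.
# Field names are illustrative; the released dataset may use a different schema.
example = {
    "background": (
        "Scientists think that the earliest flowers attracted insects and other "
        "animals, which spread pollen from flower to flower. This greatly increased "
        "the efficiency of fertilization over wind-spread pollen ..."
    ),
    "situation": (
        "Last week, John visited the national park near his city. He saw many flowers. "
        "His guide explained him that there are two categories of flowers, category A "
        "and category B. Category A flowers spread pollen via wind, and category B "
        "flowers spread pollen via animals."
    ),
    "question": (
        "Would category B flower have more or less efficient fertilization "
        "than category A flower?"
    ),
    # The answer is a span copied from either the situation or the question.
    "answer": "more",
}

# Sanity check: the answer must be recoverable from the situation or the question.
assert example["answer"] in example["situation"] or example["answer"] in example["question"]
```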

In analyzing the data, we find (1) that there are a variety of cause / effect relationship types described; (2) that there is a wide range of difficulties in matching the descriptions of these phenomena between the background, situation, and question; and (3) that there are several distinct kinds of reasoning over causes and effects that appear.

To establish baseline performance on this dataset, we use a reading comprehension model based on BERT, reaching 51.9% F1. Most questions are designed to have two sensible answer choices (e.g., "more" vs. "less"), so this performance is little better than randomly picking one of the choices. Expert humans achieved an average of 89.0% F1 on a random sample.

2 Related Work

Reading comprehension: There are many reading comprehension datasets (Richardson et al., 2013; Rajpurkar et al., 2016; Kwiatkowski et al., 2019; Dua et al., 2019), the majority of which principally require understanding local predicate-argument structure in a passage of text. The success of recent models suggests that machines are becoming capable of this level of understanding. ROPES challenges reading comprehension models to handle more difficult phenomena: understanding the implications of a passage of text. ROPES is also particularly related to datasets focusing on "multi-hop reasoning" (Yang et al., 2018; Khashabi et al., 2018), as by construction answering questions in ROPES requires connecting information from multiple parts of a given passage.

The most closely related datasets to ROPES are ShARC (Saeidi et al., 2018), OpenBookQA (Mihaylov et al., 2018), and QuaRel (Tafjord et al., 2019). ShARC shares the same goal of understanding causes and effects (in terms of specified rules), but frames it as a dialogue where the system has to also generate questions to gain complete information. OpenBookQA, similar to ROPES, requires reading scientific facts, but it is focused on a retrieval problem where a system must find the right fact for a question (and some additional common sense fact), whereas ROPES targets reading a given, complex passage of text, with no retrieval involved. QuaRel is also focused on reasoning about situational effects in a question-answering setting, but the "causes" are all pre-specified, not read from a background passage, so the setting is limited.

Recognizing textual entailment: The application of causes and effects to new situations has a strong connection to notions of entailment; ROPES tries to get systems to understand what is entailed by an expository paragraph. The setup is fundamentally different, however: instead of giving systems pairs of sentences to classify as entailed or not, as in the traditional formulation (Dagan et al., 2006; Bowman et al., 2015, inter alia), we give systems questions whose answers require understanding the entailment.

3 Data Collection

Background passages: We automatically scraped passages from science textbooks and Wikipedia that contained causal connectives, e.g., "causes" and "leads to," and keywords that signal qualitative relations, e.g., "increases" and "decreases." We then manually filtered out the passages that do not have at least one such relation. The passages were from a wide variety of domains, with most coming from natural sciences and economics. In total, we collected over 1,000 background passages.
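The cue-based filter can be sketched as follows. The connective and keyword lists below contain only the examples quoted above, and the function name and overall pipeline are assumptions for illustration, not the exact code used to build ROPES; matched passages would still be checked by hand, as described above.

```python
import re

# Cue phrases taken from the examples in the text; the real lists are assumed to be larger.
CAUSAL_CONNECTIVES = ["causes", "leads to"]
QUALITATIVE_KEYWORDS = ["increases", "decreases"]
CUES = [re.compile(r"\b" + re.escape(cue) + r"\b", re.IGNORECASE)
        for cue in CAUSAL_CONNECTIVES + QUALITATIVE_KEYWORDS]

def is_candidate_passage(passage: str) -> bool:
    """Keep a passage if it contains at least one causal or qualitative cue."""
    return any(pattern.search(passage) for pattern in CUES)

# Example usage on a toy list of passages.
passages = [
    "Animal pollination greatly increases the efficiency of fertilization.",
    "The park was founded in 1910 and covers twelve square kilometers.",
]
candidates = [p for p in passages if is_candidate_passage(p)]
print(candidates)  # only the first passage survives the keyword filter
```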

Crowdsourcing questions: We used Amazon Mechanical Turk (AMT) to generate the situations, questions, and answers. The AMT workers were given background passages and asked to write situations that involved the relation(s) in the background passage. The AMT workers then authored questions about the situation that required both the background and the situation to answer. In each human intelligence task (HIT), AMT workers were given 5 background passages to select from and were asked to create a total of 10 questions. To mitigate the potential for easy lexical shortcuts in the dataset, the workers were encouraged via instructions to write questions in minimal pairs, where a very small change in the question results in a different answer. Two examples of these pairs are given in Figure 1: switching "more" to "less" results in the opposite flower being the correct answer to the question.

4 Dataset Analysis

We qualitatively and quantitatively analyze the phenomena that occur in ROPES. Table 1 shows the key statistics of the dataset. We randomly sample 100 questions and analyze the type of relation in the background, the grounding in the situation, and the reasoning required to answer the question.

Background passages: We manually annotate whether the relation in the background passage being asked about is causal (a clear cause and effect in the background), qualitative (e.g., as X increases, Y decreases), or both. Table 2 shows the breakdown of the kinds of relations in the dataset.

Table 1: Key statistics of ROPES.
Table 2: Types of relations in the background passages. C refers to causal relations and Q refers to qualitative relations.
C: "Scientists think that the earliest flowers attracted insects and other animals, which spread pollen from flower to flower. This greatly increased the efficiency of fertilization over wind-spread pollen ..."
Q (4%): "As decibel levels get higher, sound waves have greater intensity and sounds are louder."
C&Q (26%): "Predators can be keystone species. These are species that can have a large effect on the balance of organisms in an ecosystem. For example, if all of the wolves are removed from a population, then the population of deer or rabbits may increase ..."

Grounding: To successfully apply the relation in the background to a situation, the system needs to be able to ground the relation to parts of the situation. To do this, the model has to either find an explicit mention of the cause/effect from the background and associate it with some property, use a common sense fact, or overcome a large lexical gap to connect them. Table 3 shows examples and the breakdown of these three phenomena.

Question reasoning: Table 4 shows the breakdown and examples of the main types of questions by the types of reasoning required to answer them. In an effect comparison, two entities are each associated with an occurrence or absence of the cause described in the background, and the question asks to compare the effects on the two entities. Similarly, in a cause comparison, two entities are each associated with an occurrence or absence of the effect described in the background, and the question compares the causes of the occurrence or absence. In an effect prediction, the question asks to directly predict the effect of an occurrence of the cause on an entity in the situation. Finally, in a cause prediction, the question asks to predict the cause of an occurrence of the effect on an entity in the situation. The majority of the examples are effect or cause comparison questions; these are challenging, as they require the model to ground two occurrences of causes or effects.

Table 3: Types of grounding found in ROPES.
Table 4: Example questions and answers from ROPES, showing the relevant parts of the associated passage and the reasoning required to answer the question. In the last example, the situation grounds the desired outcome and asks which of two cases would achieve the desired outcome.

Background: ... This carbon dioxide is then absorbed by the oceans, which lowers the pH of the water...
Situation: The biologists found out that the Indian Ocean had a lower water pH than it did a decade ago, and it became acidic. The water in the Arctic ocean still had a neutral to basic pH.
Question: Which ocean has a lower content of carbon dioxide in its waters? Answer: Arctic

Cause prediction (1%)
Background: ... Conversely, if we want to convert a substance from a gas to a liquid or from a liquid to a solid, we remove energy from the system and decrease the temperature. ...
Situation: ... she grabbed and empty ice tray and filled it. As she walked over to the freezer ... When she checked the tray later that day the ice was ready.
Question: Did the freezer add or remove energy from the water? Answer: remove

Other (8%)
Background: ... Charging an object by touching it with another charged object is called charging by conduction. ... induction allows a change in charge without actually touching the charged and uncharged objects to each other.
Situation: ... In case A he used conduction, and in case B he used induction. In both cases he used same two objects. Finally, John tried to charge his phone remotely. He called this test as case C.
Question: Which experiment would be less appropriate for case C, case A or case B? Answer: case A

5 Baselines

We use the BERT question answering model proposed by Devlin et al. (2019) as our baseline and concatenate the background and situation to form the passage, following the SQuAD setup for BERT. To estimate the presence of annotation artifacts in our dataset (and as a potentially interesting future task where background reading is done up front), we also run the baseline without the background passage. We find that both of these models are able to select the correct answer type, but essentially perform the same as randomly selecting from answer options of the correct type. Table 5 presents the results for the baselines, which are significantly lower than human performance.
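The concatenation setup can be sketched as follows with the HuggingFace transformers question-answering interface. This is only an illustration of how background and situation are fed to a span-extraction model: the SQuAD-tuned checkpoint named below is a stand-in and is not the paper's baseline, which was trained on ROPES itself, so its output is not expected to reproduce the reported scores.

```python
from transformers import pipeline

# Any extractive-QA checkpoint works here; this SQuAD-tuned BERT model is a stand-in
# for the actual baseline.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

background = (
    "Scientists think that the earliest flowers attracted insects and other animals, "
    "which spread pollen from flower to flower. This greatly increased the efficiency "
    "of fertilization over wind-spread pollen. To take better advantage of this animal "
    "labor, plants evolved traits such as brightly colored petals to attract pollinators."
)
situation = (
    "Category A flowers spread pollen via wind, and category B flowers spread pollen "
    "via animals."
)
question = "Which category of flowers would be more likely to have brightly colored petals?"

# Following the SQuAD-style setup: background and situation are concatenated into a
# single passage, and the model predicts an answer span from that passage.
result = qa(question=question, context=background + " " + situation)
print(result["answer"], result["score"])
```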

Table 5: Performance of baselines and human performance on the dev and test set.

6 Conclusion

We present ROPES, a new reading comprehension benchmark containing 14,102 questions, which aims to test the ability of systems to apply knowledge from reading text in a new setting. We hope that ROPES will aid efforts in tying language and reasoning together for more comprehensive understanding of text.

We used life science and physical science concepts from www.ck12.org, and biology, chemistry, physics, earth science, anatomy and physiology textbooks from openstax.org

Human performance is estimated by expert human annotation on 400 random questions evaluated with the same evaluation metrics as the baseline.
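The F1 scores reported above are span-overlap scores. The sketch below shows a SQuAD-style bag-of-tokens F1, which is assumed here to be the flavor of metric used (the text above does not spell it out), and it omits the answer normalization steps a full evaluator would include.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Bag-of-tokens F1 between a predicted span and a gold span (SQuAD-style)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# For binary-choice questions such as "more" vs. "less", a model that always picks an
# answer of the correct type but guesses between the two averages roughly 0.5 F1.
print(token_f1("more", "more"))                       # 1.0
print(token_f1("less", "more"))                       # 0.0
print(token_f1("category B", "category B flowers"))   # partial credit (0.8)
```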