
Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering

Authors

  • Ben Bogin
  • Sanjay Subramanian
  • Matt Gardner
  • Jonathan Berant

Transactions of the Association for Computational Linguistics, 2021

Abstract

Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples. In this work, we propose a model that computes a representation and denotation for all question spans in a bottom-up, compositional manner using a CKY-style parser. Our model induces latent trees, driven by end-to-end (the answer) supervision only. We show that this inductive bias towards tree structures dramatically improves systematic generalization to out-of-distribution examples, compared to strong baselines on an arithmetic expressions benchmark as well as on CLOSURE, a dataset that focuses on systematic generalization for grounded question answering. On this challenging dataset, our model reaches an accuracy of 96.1%, significantly higher than prior models that almost perfectly solve the task on a random, in-distribution split.

1 Introduction

Humans can effortlessly interpret new natural language utterances, as long as they are composed of previously-observed primitives and structure (Fodor and Pylyshyn, 1988) . Neural networks, on the other hand, do not exhibit this systematicity: while they generalize well to examples sampled from the same distribution as the training set, they have been shown to struggle in generalizing to out-of-distribution (OOD) examples that contain new compositions in both grounded question answering (Bahdanau et al., 2019a,b) and semantic parsing (Finegan-Dollak et al., 2018; Keysers et al., 2020) . For example, consider the question in Fig. 1 . This question requires querying the size of objects, comparing colors, identifying spatial relations and computing intersections between sets of objects. Neural networks tend to succeed whenever these concepts are combined in ways that were seen during training time. However, they commonly fail whenever these concepts are combined in novel ways at test time.

Figure 1: An example from CLOSURE illustrating how our model learns a latent structure over the input, where a representation and denotation is computed for every span (for denotation we show the set of objects with probability > 0.5). For brevity, some phrases were merged to a single node of the tree. For each phrase, we show the split point and module with the highest probability, although all possible split points and module outputs are softly computed. SKIP(L) and SKIP(R) refer to taking the denotation of the left or right sub-span, respectively.

A possible reason for this phenomenon is the expressivity of modern architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017), where rich representations that depend on the entire input are computed. The fact that token representations are contextualized by the entire utterance potentially lets the model avoid step-by-step reasoning, "collapse" multiple reasoning steps, and rely on shortcuts (Jiang and Bansal, 2019; Subramanian et al., 2020). Such failures are revealed when evaluating models for systematic generalization on OOD examples. This stands in contrast to pre-neural log-linear models, where hierarchical representations were explicitly constructed over the input (Zettlemoyer and Collins, 2005; Liang et al., 2013).

In this work, we propose a model for visual question answering (QA) that, analogous to these classical pre-neural models, computes for every span in the input question a representation and a denotation, that is, the set of objects in the image that the span refers to (see Fig. 1). Denotations for long spans are recursively computed from shorter spans using a bottom-up CKY-style parser without access to the entire input, leading to an inductive bias that encourages compositional computation. Because training is done from the final answer only, the model must effectively learn to induce latent trees that describe the compositional structure of the problem. We hypothesize that this explicit grounding of the meaning of sub-spans through hierarchical computation should result in better generalization to new compositions.

We evaluate our approach in two setups: (a) a synthetic arithmetic expressions dataset, and (b) CLOSURE (Bahdanau et al., 2019b), a visual QA dataset that focuses on systematic generalization. On a random train/test split of the data (i.i.d. split), both our model and prior baselines obtain near perfect performance. However, on splits that require systematic generalization to new compositions (compositional split) our model dramatically improves performance: for the arithmetic expressions problem, a vanilla Transformer fails to generalize and obtains 2.9% accuracy, while our model, Grounded Latent Trees (GLT), gets 98.4%. On CLOSURE, our model's accuracy is 92.8%, about 20 absolute points higher than strong baselines and even 15 points higher than models that use gold structures at training time or depend on domain knowledge.

To conclude, we propose a model with an inherent inductive bias for compositional computation, which leads to large gains in systematic generalization and induces latent structures that are useful for understanding its inner workings. Our work suggests that despite the undeniable success of general-purpose architectures built on top of contextualized representations, restricting information flow inside the network can greatly benefit compositional generalization.¹

2 Compositional Generalization

Natural language is mostly compositional; humans can understand and produce a potentially infinite number of novel combinations from a closed set of known components (Chomsky, 1957; Montague, 1970) . For example, a person would know what a "winged giraffe" is even if she's never seen one, assuming she knows the meaning of "winged" and "giraffe". This ability, which we term compositional generalization, is fundamental for building robust models that effectively learn from limited data (Lake et al., 2018).

Neural networks have been shown to generalize well in many language understanding tasks (Devlin et al., 2019; Raffel et al., 2019), when using i.i.d. splits.

However, when models are evaluated on splits that require compositional generalization, a significant drop in performance is observed. For example, in SCAN (Lake and Baroni, 2018) and gSCAN (Ruis et al., 2020), synthetically generated commands are mapped into a sequence of actions; when tested on unseen command combinations, models perform poorly. A similar case was shown in text-to-SQL parsing (Finegan-Dollak et al., 2018), where splitting the training examples by the template of the target SQL query resulted in a dramatic drop in performance. SQOOP (Bahdanau et al., 2019a) shows the same phenomenon on a synthetic visual QA task, which tests for generalization over unseen combinations of object properties and relations. This has also led to methods that construct compositional splits automatically (Keysers et al., 2020).

In this work, we focus on answering complex grounded questions over images. The CLEVR benchmark (Johnson et al., 2017a) contains pairs of synthetic images and questions that require multi-step reasoning, e.g., "Are there any large cyan spheres made of the same material as the large green sphere?". While this task is mostly solved, with an accuracy of 97%-99% (Perez et al., 2018; Hudson and Manning, 2018) , recent work (Bahdanau et al., 2019b) introduced CLOSURE: a new set of questions with identical vocabulary but different structure than CLEVR, asked on the same set of images. They evaluated generalization of different model families and showed that all fail on a large fraction of the examples.

The most common approach for grounded QA is based on end-to-end differentiable models such as FiLM (Perez et al., 2018), MAC (Hudson and Manning, 2018), LXMERT (Tan and Bansal, 2019), and UNITER (Chen et al., 2019). These high-capacity models do not explicitly decompose the problem into smaller sub-tasks, and are thus prone to fail on compositional generalization. A different approach (Yi et al., 2018; Mao et al., 2019) is to parse the image into a symbolic or distributed knowledge graph with objects, attributes (color, size, etc.), and relations, and then parse the question into an executable logical form, which is deterministically executed. Last, Neural Module Networks (NMNs; Andreas et al. 2016) parse the question into an executable program as well, but execution is learned: each program module is a neural network designed to perform an atomic task, and modules are composed to perform complex reasoning. The latter two model families construct compositional programs and have been shown to generalize better on compositional splits (Bahdanau et al., 2019a,b) compared to fully differentiable models. However, programs are not explicitly tied to spans in the input question, and search over the space of possible programs is not differentiable, leading to difficulties in training.

In this work, we learn a latent structure for the question and tie each question span to an executable module in a differentiable manner. Our model balances the distributed and the symbolic approaches: we learn from downstream supervision only and output an inferred tree of the question, describing how the answer was computed. We base our model on work on latent tree parsers (Le and Zuidema, 2015; Liu et al., 2018; Maillard et al., 2019; Drozdov et al., 2019 ) that produce representations for all spans, and compute a soft weighting over all possible trees. We extend these parsers to answer grounded questions, grounding sub-trees in image objects. Closest to our work is Gupta and Lewis (2018) , where denotations are computed for each span. However, they do not compute compositional representations for the spans, limiting the expressivity of their model. Additionally, they work with a knowledge graph rather than images.

3 Model

In this section, we give a high-level overview of our proposed Grounded Latent Trees (GLT) model ( §3.1), explain our grounded CKY-based parser ( §3.2), and describe the architecture details ( §3.3, §3.4) and training procedure ( §3.5).

3.1 High-Level Overview

Problem setup  Our task is visual QA, where given a question $q = (q_0, \ldots, q_{n-1})$ and an image $I$, we aim to output an answer $a \in \mathcal{A}$ from a fixed set of natural language phrases. We train a model from a training set $\{(q_i, I_i, a_i)\}_{i=1}^{N}$. We assume we can extract from the image up to $n_{obj}$ feature vectors of objects, and represent them as a matrix $V \in \mathbb{R}^{n_{obj} \times h_{dim}}$ (details on object detection and representation are in §3.4).

Our goal is to compute for every question span $q_{ij} = (q_i, \ldots, q_{j-1})$ a representation $h_{ij} \in \mathbb{R}^{h_{dim}}$ and a denotation $d_{ij} \in [0,1]^{n_{obj}}$, which we interpret as the probability that the question span refers to each object.

Algorithm 1

Require: question q, image I, word embedding matrix E, visual representations matrix V
1: H: tensor holding representations h_ij, for all i, j s.t. i < j
2: D: tensor holding denotations d_ij, for all i, j s.t. i < j
3: for i = 1 ... n do
4:     h_i = E_{q_i}, d_i = f_ground(E_{q_i}, V)   (see §3.4)
5: for l = 1 ... n do
6:     compute h_ij, d_ij for all entries s.t. j − i = l
7: p(a | q, I) = softmax(W [h_{0n}; d_{0n} V])
8: return arg max_a p(a | q, I)

We compute $h_{ij}$ and $d_{ij}$ in a bottom-up fashion, using CKY (Cocke, 1969; Kasami, 1965; Younger, 1967). Algorithm 1 provides a high-level description of the procedure. We compute representations and denotations for length-1 spans (we write $h_i = h_{i(i+1)}$ and $d_i = d_{i(i+1)}$ for brevity) by setting the representation $h_i = E_{q_i}$ to be the corresponding word representation in an embedding matrix $E$, and grounding each word in the image objects: $d_i = f_{ground}(E_{q_i}, V)$ (lines 3-4; the $f_{ground}$ function is described in §3.4).

Then, we recursively compute representations and denotations of larger spans (lines 5-6). Last, we pass the representation of the entire question ($h_{0n}$), together with the weighted sum of the visual representations ($d_{0n}V$), through a softmax layer to produce a final answer distribution (line 7), using a learned classification matrix $W \in \mathbb{R}^{|\mathcal{A}| \times 2h_{dim}}$.
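To make the control flow of Algorithm 1 concrete, here is a minimal PyTorch sketch of the chart computation. The helpers `ground`, `compose_h`, and `compose_d` are hypothetical stand-ins for $f_{ground}$ and the composition functions of §3.2-3.3 (which, as described below, softly consider all split points and modules); this is an illustrative sketch under those assumptions, not the paper's implementation.

```python
import torch

def answer_distribution(question_emb, V, ground, compose_h, compose_d, W):
    """Sketch of Algorithm 1: fill a chart of span representations h[(i, j)]
    and denotations d[(i, j)] bottom-up, then classify the answer.

    question_emb: (n, h_dim) word embeddings E_{q_i}
    V:            (n_obj, h_dim) visual object representations
    ground, compose_h, compose_d: stand-ins for f_ground and the composition
                  functions of Sections 3.2-3.3 (assumed, not the paper's code)
    W:            (|A|, 2 * h_dim) answer classification matrix
    """
    n, h_dim = question_emb.shape
    h = {}  # h[(i, j)] -> (h_dim,) representation of span q_ij
    d = {}  # d[(i, j)] -> (n_obj,) denotation of span q_ij

    # Length-1 spans (lines 3-4): word embedding plus grounding in the objects.
    for i in range(n):
        h[(i, i + 1)] = question_emb[i]
        d[(i, i + 1)] = ground(question_emb[i], V)

    # Longer spans (lines 5-6), computed in order of increasing length.
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            # All split points k are handled softly inside compose_h / compose_d.
            subspans = [(h[(i, k)], h[(k, j)], d[(i, k)], d[(k, j)])
                        for k in range(i + 1, j)]
            h[(i, j)] = compose_h(subspans)
            d[(i, j)] = compose_d(subspans, h[(i, j)])

    # Line 7: answer distribution from the full-question representation
    # and the expected visual representation d_{0n} V.
    features = torch.cat([h[(0, n)], d[(0, n)] @ V], dim=-1)
    return torch.softmax(W @ features, dim=-1)
```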

Computing $h_{ij}, d_{ij}$ for all spans requires overcoming some challenges. Each span representation $h_{ij}$ should be a function of two sub-spans $h_{ik}, h_{kj}$. We use the term sub-spans to refer to all adjacent pairs of spans that cover $q_{ij}$, formally $\{(q_{ik}, q_{kj})\}_{k=i+1}^{j-1}$. However, we have no supervision for the "correct" split point $k$. Our model (§3.2) considers all possible split points and learns to induce a latent tree structure from the final answer only. We show that this leads to a compositional structure and denotations that can be inspected at test time, providing an interpretable layer.

In §3.3 we describe the form of the composition functions, which compute both span representations and denotations from two sub-spans. These functions must be expressive enough to accommodate a wide range of interactions between sub-spans, but not create reasoning shortcuts that might hinder compositional generalization.

3.2 Grounded Chart Parsing

We now describe how to recursively compute $h_{ij}, d_{ij}$ from previously computed representations and denotations. In standard CKY parsing, each constituent over a span $q_{ij}$ is constructed by combining two sub-spans $q_{ik}, q_{kj}$ that meet at a split point $k$. Similarly, we define a representation $h^k_{ij}$ that is conditioned on the split point and constructed from previously computed representations of the two sub-spans:

$$h^k_{ij} = f_h(h_{ik}, h_{kj}), \quad (1)$$

where $f_h(\cdot)$ is a composition function (§3.3).

Since we want the loss to be differentiable with respect to its input, we do not pick a particular value of $k$, but instead use a continuous relaxation. Specifically, we compute the probability $p_H(k \mid i, j)$ that $k$ is the split point for the span $q_{ij}$, given the tensor $H$ of all computed representations of shorter spans. We then define the representation $h_{ij}$ of the span to be the expected representation over all possible split points:

$$h_{ij} = \sum_k p_H(k \mid i, j) \cdot h^k_{ij} = \mathbb{E}_{p_H(k \mid \cdot)}\big[h^k_{ij}\big]. \quad (2)$$

The split point distribution is defined as $p_H(k \mid i, j) \propto \exp(s^\top h^k_{ij})$, where $s \in \mathbb{R}^{h_{dim}}$ is a parameter vector that determines which split points are likely. Figure 2 illustrates the computation of $h_{ij}$.
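The following is a small PyTorch sketch of Eqs. 1-2 and the split-point distribution, assuming a dictionary-based chart of previously computed spans; the composition function `f_h` is treated as a black box and the names are illustrative.

```python
import torch

def span_representation(h_chart, i, j, f_h, s):
    """Expected span representation h_ij over all split points (Eqs. 1-2).

    h_chart: dict mapping (i, j) -> (h_dim,) representations of shorter spans
    f_h:     composition function of Section 3.3 (treated as a black box here)
    s:       (h_dim,) parameter vector scoring split points
    """
    # h^k_ij for every split point k (Eq. 1).
    h_k = torch.stack([f_h(h_chart[(i, k)], h_chart[(k, j)])
                       for k in range(i + 1, j)])       # (j-i-1, h_dim)
    # p_H(k | i, j) proportional to exp(s^T h^k_ij).
    p_split = torch.softmax(h_k @ s, dim=0)             # (j-i-1,)
    # Expected representation over split points (Eq. 2); p_split is also
    # returned because it is reused for the expected denotations below.
    return p_split @ h_k, p_split
```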

Figure 2: Illustration of how h_ij is computed. First, we consider all possible split points and compose pairs of sub-spans using f_h. Then, a weight is computed for all representations, and the output is their weighted sum.

Next, we turn to computing the denotation $d_{ij}$ of each span.

Figure 3: Illustration of how d_ij is computed. We compute the denotations of all modules, and a weight for each one of the modules. The span denotation is then the weighted sum of the module outputs.

Conceptually, computing $d_{ij}$ can be analogous to $h_{ij}$; that is, a function $f_d$ will compute $d^k_{ij}$ for every possible split point $k$, and we will define $d_{ij} = \mathbb{E}_{p_H(k \mid \cdot)}[d^k_{ij}]$. However, the function $f_d$ (see §3.3) interacts with the visual representations of all objects and is thus computationally costly. Therefore, we propose a less expressive but more efficient approach, where $f_d(\cdot)$ is applied only once for each span $q_{ij}$.

Specifically, we compute the expected denotations of the left and right sub-spans of $q_{ij}$:

$$\bar{d}^{\,L}_{ij} = \mathbb{E}_{p_H(k \mid \cdot)}\big[d_{ik}\big] = \sum_k p_H(k \mid i, j)\, d_{ik}, \quad (3)$$
$$\bar{d}^{\,R}_{ij} = \mathbb{E}_{p_H(k \mid \cdot)}\big[d_{kj}\big] = \sum_k p_H(k \mid i, j)\, d_{kj}. \quad (4)$$

If $p_H(k \mid \cdot)$ puts most probability mass on a single split point $k'$, then the expected denotations will be similar to picking that particular split point. Now we can compute $d_{ij}$ given the expected sub-span denotations and representations with a single application of $f_d(\cdot)$:

$$d_{ij} = f_d\big(\bar{d}^{\,L}_{ij}, \bar{d}^{\,R}_{ij}, h_{ij}\big), \quad (5)$$

which is substantially more efficient than the alternative $\mathbb{E}_{p_H(k \mid \cdot)}\big[f_d(d_{ik}, d_{kj}, h_{ij})\big]$: in our implementation $f_d$ is applied $O(n^2)$ times, versus $O(n^3)$ with the alternative solution. This is important for making training tractable in practice.
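Continuing the chart sketch above, the expected sub-span denotations and the single application of $f_d$ (Eqs. 3-5) could look as follows; again, `f_d` is a stand-in and the chart layout is an assumption.

```python
import torch

def span_denotation(d_chart, p_split, h_ij, i, j, f_d):
    """Expected sub-span denotations and one f_d application (Eqs. 3-5).

    d_chart: dict mapping (i, k) -> (n_obj,) denotations of shorter spans
    p_split: (j-i-1,) split-point distribution p_H(k | i, j) from Eq. 2
    f_d:     denotation composition function of Section 3.3 (black box here)
    """
    ks = list(range(i + 1, j))
    d_left = torch.stack([d_chart[(i, k)] for k in ks])   # (j-i-1, n_obj)
    d_right = torch.stack([d_chart[(k, j)] for k in ks])  # (j-i-1, n_obj)
    # Expected denotations of the left and right sub-spans (Eqs. 3-4).
    d_bar_left = p_split @ d_left                          # (n_obj,)
    d_bar_right = p_split @ d_right                        # (n_obj,)
    # One application of f_d per span instead of one per split point (Eq. 5).
    return f_d(d_bar_left, d_bar_right, h_ij)
```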

3.3 Composition Functions

We now describe the exact form of the composition functions f h and f d .

Composing Representations

We first describe the function $f_h(h_{ik}, h_{kj})$, used to compose the representations of two sub-spans (Eq. 1). The goal of this function is to compose the "meanings" of two adjacent spans, without access to the rest of the question or to the denotations of the sub-spans, for example, composing the representations of "same" and "size" into a representation for "same size". At a high level, composition is based on a generic attention mechanism: we use attention to form a convex combination of the representations of the two sub-spans (Eqs. 6-7), and apply a non-linear transformation with a residual connection (Eq. 8).

$$\alpha^k_{ij} = \mathrm{softmax}\big(\big[a_L^\top h_{ik},\; a_R^\top h_{kj}\big]\big) \in \mathbb{R}^2 \quad (6)$$
$$\hat{h}^k_{ij} = \alpha^{(1)} W_L h_{ik} + \alpha^{(2)} W_R h_{kj} \in \mathbb{R}^{h_{dim}} \quad (7)$$
$$f_h(h_{ik}, h_{kj}) = \mathrm{FF}_{rep}\big(\hat{h}^k_{ij}\big) + \hat{h}^k_{ij} \in \mathbb{R}^{h_{dim}} \quad (8)$$

where $a_L, a_R \in \mathbb{R}^{h_{dim}}$, $W_L, W_R \in \mathbb{R}^{h_{dim} \times h_{dim}}$, and $\mathrm{FF}_{rep}$ is a feed-forward layer.
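A possible PyTorch rendering of Eqs. 6-8 is sketched below. The exact depth and activation of $\mathrm{FF}_{rep}$ are not specified in the excerpt, so a single linear layer with a ReLU is assumed here.

```python
import torch
import torch.nn as nn

class ComposeRepresentations(nn.Module):
    """Sketch of f_h (Eqs. 6-8): an attention-weighted sum of the two sub-span
    representations, followed by a feed-forward layer and a residual connection."""

    def __init__(self, h_dim):
        super().__init__()
        self.a_left = nn.Parameter(torch.randn(h_dim))
        self.a_right = nn.Parameter(torch.randn(h_dim))
        self.W_left = nn.Linear(h_dim, h_dim, bias=False)
        self.W_right = nn.Linear(h_dim, h_dim, bias=False)
        # FF_rep: architecture assumed (one linear layer + ReLU).
        self.ff_rep = nn.Sequential(nn.Linear(h_dim, h_dim), nn.ReLU())

    def forward(self, h_left, h_right):
        # Eq. 6: one attention logit per sub-span.
        alpha = torch.softmax(
            torch.stack([h_left @ self.a_left, h_right @ self.a_right]), dim=0)
        # Eq. 7: convex combination of the projected sub-span representations.
        h_hat = alpha[0] * self.W_left(h_left) + alpha[1] * self.W_right(h_right)
        # Eq. 8: feed-forward transformation plus residual connection.
        return self.ff_rep(h_hat) + h_hat
```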

Composing denotations

Next, we describe the function $f_d(\bar{d}^{\,L}_{ij}, \bar{d}^{\,R}_{ij}, h_{ij})$, used to compute the span denotation $d_{ij}$ (Eq. 5). Importantly, this function has access only to words in the span $q_{ij}$ and not to the entire input utterance. We would like $f_d(\cdot)$ to support both simple compositions that depend only on the denotations of sub-spans, as well as more complex functions that take into account the visual representations of different objects (spatial relations, colors, etc.). We define four modules in $f_d(\cdot)$ for computing denotations and let the model learn when to use each module (we show in §4 that two modules suffice, but four improve interpretability). The modules are SKIP, INTERSECTION, UNION, and a general-purpose VISUAL function, where only VISUAL uses the visual representations $V$. As illustrated in Fig. 3, each module $m$ outputs a denotation vector $d^m_{ij} \in [0,1]^{n_{obj}}$, and the denotation $d_{ij}$ is a weighted average of the four modules:

$$d_{ij} = \sum_m \beta^{(m)}_{ij}\, d^m_{ij}, \quad \text{where } \beta_{ij} = \mathrm{softmax}\big(W_{mod}^\top h_{ij}\big) \in \mathbb{R}^4, \quad (10)$$

and $W_{mod} \in \mathbb{R}^{h_{dim} \times 4}$. Next, we define the four modules (see Fig. 4).

SKIP  In many cases, only one of the left or right sub-spans has a meaningful denotation: for example, for the sub-spans "there is a" and "red cube", we should only keep the denotation of the right sub-span. To that end, the SKIP module weighs the two denotations and sums them:

Figure 4: The different modules used with their inputs and expected output.

$$\big(\gamma^L, \gamma^R\big) = \mathrm{softmax}\big(W_{sk}^\top h_{ij}\big) \quad (11)$$
$$d^{skip}_{ij} = \gamma^L\, \bar{d}^{\,L}_{ij} + \gamma^R\, \bar{d}^{\,R}_{ij} \quad (12)$$

where $W_{sk} \in \mathbb{R}^{h_{dim} \times 2}$.

INTERSECTION and UNION

We define two simple modules that only use the denotations $\bar{d}^{\,L}_{ij}$ and $\bar{d}^{\,R}_{ij}$. The first module corresponds to the intersection of two sets, and the second to their union:

$$d^{\cap}_{ij} = \min\big(\bar{d}^{\,L}_{ij}, \bar{d}^{\,R}_{ij}\big) \quad (13)$$
$$d^{\cup}_{ij} = \max\big(\bar{d}^{\,L}_{ij}, \bar{d}^{\,R}_{ij}\big) \quad (14)$$

where $\min(\cdot)$ and $\max(\cdot)$ are computed element-wise, per object. We show in §4.2 that while these two modules are helpful for interpretability, their effect on performance is relatively small, and they can be omitted for simplicity.
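A compact sketch of the module mixture (Eq. 10) together with the SKIP, INTERSECTION, and UNION modules (Eqs. 11-14) is given below; the VISUAL module is passed in as a callable and sketched separately after its description. Module and parameter names follow the text, but the implementation details are assumptions.

```python
import torch
import torch.nn as nn

class DenotationComposer(nn.Module):
    """Sketch of f_d: the SKIP, INTERSECTION and UNION modules (Eqs. 11-14)
    mixed by the module weights of Eq. 10. The VISUAL module is a callable
    (see the separate sketch below); all names are illustrative."""

    def __init__(self, h_dim, visual_module):
        super().__init__()
        self.W_mod = nn.Linear(h_dim, 4, bias=False)  # module weights (Eq. 10)
        self.W_sk = nn.Linear(h_dim, 2, bias=False)   # SKIP gate (Eq. 11)
        self.visual_module = visual_module

    def forward(self, d_left, d_right, h_ij, V):
        # SKIP (Eqs. 11-12): gate between the left and right denotations.
        gamma = torch.softmax(self.W_sk(h_ij), dim=-1)
        d_skip = gamma[0] * d_left + gamma[1] * d_right
        # INTERSECTION and UNION (Eqs. 13-14): element-wise min / max per object.
        d_inter = torch.minimum(d_left, d_right)
        d_union = torch.maximum(d_left, d_right)
        # VISUAL: general-purpose module that also uses the object features V.
        d_visual = self.visual_module(d_left, d_right, h_ij, V)
        # Eq. 10: weighted average of the four module outputs.
        beta = torch.softmax(self.W_mod(h_ij), dim=-1)                 # (4,)
        return beta @ torch.stack([d_skip, d_inter, d_union, d_visual])
```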

VISUAL  This module is responsible for compositions that involve visual computation, such as computing spatial relations ("left of the red sphere") and comparing attributes of objects ("has the same size as the red sphere"). Unlike the other modules, in addition to the sub-span denotations it also uses the visual representations of the objects, $V \in \mathbb{R}^{n_{obj} \times h_{dim}}$. For example, for the sub-spans "left of" and "the red object", we expect the function to ignore $\bar{d}^{\,L}_{ij}$ (since the denotation of "left of" is irrelevant), and to return a denotation with high probability for objects that are left of objects with high probability in $\bar{d}^{\,R}_{ij}$.

To determine whether an object with index $o$ should have high probability in the output, we need to consider its relation to all other objects. A simple scoring function might be $(h_{ij} + v_{o_1})^\top (h_{ij} + v_{o_2})$, which would capture the relation between all pairs of objects conditioned on the span representation. However, this computation is quadratic in $n_{obj}$. Instead, we propose a linear alternative that again leverages the expected denotations of sub-spans. Specifically, we compute the expected visual representation of the right sub-span and process it with a feed-forward layer:

$$\bar{v}^{\,R}_{ij} = \bar{d}^{\,R}_{ij} V \in \mathbb{R}^{h_{dim}} \quad (15)$$
$$q_R = \mathrm{FF}_{R}\big(\bar{v}^{\,R}_{ij}\big) \in \mathbb{R}^{h_{dim}} \quad (16)$$

We use the right sub-span because the syntax in CLEVR is mostly right-branching, but a symmetric term can be computed if needed. Then, we generate a representation $q(o)$ for every object that is conditioned on the span representation $h_{ij}$, the object probability under the sub-span denotations, and its visual representation. The final object probability is based on the interaction of $q(o)$ and $q_R$:

$$q(o) = \mathrm{FF}_{vis}\big(W_h h_{ij} + v_o + \bar{d}^{\,L}_{ij}(o)\, s_1 + \bar{d}^{\,R}_{ij}(o)\, s_2\big)$$
$$d^{vis}_{ij}(o) = \sigma\big(q(o)^\top q_R\big)$$

where $W_h \in \mathbb{R}^{h_{dim} \times h_{dim}}$, $s_1, s_2 \in \mathbb{R}^{h_{dim}}$ are learned embeddings, and $\mathrm{FF}_{vis}$ is a feed-forward layer of size $h_{dim} \times h_{dim}$ with a non-linear activation. This is the most expressive module we propose.
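A possible PyTorch sketch of the VISUAL module (Eqs. 15-16 and the $q(o)$ scoring) is shown below, with the batch dimension omitted; the feed-forward architectures and the name $\mathrm{FF}_R$ for the layer of Eq. 16 are assumptions.

```python
import torch
import torch.nn as nn

class VisualModule(nn.Module):
    """Sketch of the VISUAL module: score every object against the expected
    visual representation of the right sub-span."""

    def __init__(self, h_dim):
        super().__init__()
        self.ff_right = nn.Sequential(nn.Linear(h_dim, h_dim), nn.ReLU())  # FF_R (assumed)
        self.ff_vis = nn.Sequential(nn.Linear(h_dim, h_dim), nn.ReLU())    # FF_vis
        self.W_h = nn.Linear(h_dim, h_dim, bias=False)
        self.s1 = nn.Parameter(torch.randn(h_dim))
        self.s2 = nn.Parameter(torch.randn(h_dim))

    def forward(self, d_left, d_right, h_ij, V):
        """d_left, d_right: (n_obj,) expected sub-span denotations
        h_ij: (h_dim,) span representation;  V: (n_obj, h_dim) object features"""
        # Eqs. 15-16: expected visual representation of the right sub-span,
        # passed through a feed-forward layer.
        q_right = self.ff_right(d_right @ V)                        # (h_dim,)
        # Per-object representation q(o), conditioned on the span representation,
        # the object's features, and its probability under both sub-spans.
        q_obj = self.ff_vis(self.W_h(h_ij).unsqueeze(0)             # (1, h_dim)
                            + V                                      # (n_obj, h_dim)
                            + d_left.unsqueeze(1) * self.s1
                            + d_right.unsqueeze(1) * self.s2)        # (n_obj, h_dim)
        # Final object probabilities from the interaction of q(o) and q_R.
        return torch.sigmoid(q_obj @ q_right)                        # (n_obj,)
```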

Relation to CCG Our approach is related to classical linguistic formalisms, such as CCG (Steedman, 1996) , that tightly couple syntax and semantics. Under this view, one can consider the representations h and denotations d as analogous to syntax and semantics, and our composition functions and modules perform syntactic and semantic neural composition.

3.4 Grounding

In lines 3-4 of Algorithm 1, we initialize the representations and denotations of length-1 spans. The representation $h_i$ is initialized as the corresponding word embedding $E_{q_i}$, and the denotation is computed with a grounding function. A simple implementation of $f_{ground}$ would be $\sigma(V h_i)$, based on the dot product between the word representation and the visual representations of all objects. However, in the case of a co-referring pronoun ("it"), we want to ground the pronoun to the denotation of a previous span. We now describe how we address this case.

Coreference  Sentences such as "there is a red sphere; what is its material?" are harder to answer with a CKY parser, since the denotation of "its" depends on the denotation of a distant span. We propose a simple heuristic for this issue that addresses the case where the referenced object is the denotation of a previous sentence; this solution could potentially be expanded in future work to a wider array of coreference phenomena. In every example that comprises two sentences:³ (a) we compute the denotation $d^{first}$ for the entire first sentence as described above (standard CKY); (b) we ground each word in the second sentence as proposed above: $\bar{d}^{\,second}_i = \sigma(V h^{second}_i)$; (c) for each word in the second sentence, we predict whether it co-refers with $d^{first}$ using a learned gate $(r_1, r_2) = \mathrm{softmax}\big(\mathrm{FF}_{coref}(h^{second}_i)\big)$, where $\mathrm{FF}_{coref} \in \mathbb{R}^{h_{dim} \times 2}$; and (d) we define $d^{second}_i = r_1 \cdot d^{first} + r_2 \cdot \bar{d}^{\,second}_i$.
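The coreference gate could be sketched as follows; shapes and names are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class CoreferenceGate(nn.Module):
    """Sketch of the coreference heuristic of Section 3.4: for each word of the
    second sentence, gate between the denotation of the whole first sentence
    and the word's own grounding."""

    def __init__(self, h_dim):
        super().__init__()
        self.ff_coref = nn.Linear(h_dim, 2)

    def forward(self, h_second, V, d_first):
        """h_second: (m, h_dim) embeddings of the second sentence's words
        V:        (n_obj, h_dim) visual object representations
        d_first:  (n_obj,) denotation of the entire first sentence"""
        # (b) ground each word of the second sentence directly in the objects.
        d_word = torch.sigmoid(h_second @ V.T)               # (m, n_obj)
        # (c) predict per word whether it co-refers with the first sentence.
        r = torch.softmax(self.ff_coref(h_second), dim=-1)   # (m, 2)
        # (d) mix the first-sentence denotation with the word's own grounding.
        return r[:, :1] * d_first.unsqueeze(0) + r[:, 1:] * d_word
```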

Visual representation  Next, we describe how we compute the visual embedding matrix $V$. Two common approaches to obtain visual features are (1) computing a feature map for the entire image and letting the model learn to attend to the correct feature position (Hudson and Manning, 2018; Perez et al., 2018); and (2) predicting the locations of objects in the image and extracting features just for these objects (Anderson et al., 2018; Tan and Bansal, 2019; Chen et al., 2019). We use the latter approach, since it simplifies learning over discrete sets and has better memory efficiency: the model only attends to a small set of objects rather than the entire image feature map.

Specifically, we run CLEVR images through a RESNET101 model (He et al., 2016), pre-trained on ImageNet (Russakovsky et al., 2015). This model outputs a feature map $V^{all}$ of size $W \times H \times D$, where $D = 512$ and $W = H = 28$. We then use an object detector, Faster R-CNN (Ren et al., 2015), which predicts the location $bb^{pred} \in \mathbb{R}^4$ of all objects in the image, in the format of bounding boxes (horizontal and vertical positions, width, and height). We use these predicted locations to compute $V^{pred}$, containing only the features in $V^{all}$ that are predicted to contain an object according to $bb^{pred}$. Since Faster R-CNN was trained on real images, we adapt it to CLEVR images by training it to predict bounding boxes of 5,000 objects from CLEVR images (and 1,000 images used for validation), using gold scene data. The bounding boxes and features are extracted and fixed as a preprocessing step. Finally, to compute $V$, in a similar fashion to LXMERT and UNITER we augment the object representations in $V^{pred}$ with their position embeddings, and pass them through a single Transformer self-attention layer to add context about other objects:

$$V = \mathrm{TransformerLayer}\big(V^{pred} W_{feat} + bb^{pred} W_{pos}\big),$$

where $W_{feat} \in \mathbb{R}^{D \times h_{dim}}$ and $W_{pos} \in \mathbb{R}^{4 \times h_{dim}}$.
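A minimal sketch of this object encoder is shown below; the number of attention heads and the internal feed-forward size of the Transformer layer are assumptions, and the RoI features and bounding boxes are assumed to be precomputed by the detector.

```python
import torch
import torch.nn as nn

class ObjectEncoder(nn.Module):
    """Sketch of the visual representation of Section 3.4: project per-object
    RoI features and bounding boxes, then contextualize the objects with a
    single Transformer self-attention layer."""

    def __init__(self, feat_dim=512, h_dim=400, n_heads=8):  # n_heads assumed
        super().__init__()
        self.W_feat = nn.Linear(feat_dim, h_dim, bias=False)
        self.W_pos = nn.Linear(4, h_dim, bias=False)  # bounding box: x, y, w, h
        self.layer = nn.TransformerEncoderLayer(d_model=h_dim, nhead=n_heads,
                                                batch_first=True)

    def forward(self, v_pred, bb_pred):
        """v_pred:  (n_obj, feat_dim) RoI features from the detector
        bb_pred: (n_obj, 4) predicted bounding boxes"""
        obj = self.W_feat(v_pred) + self.W_pos(bb_pred)     # (n_obj, h_dim)
        return self.layer(obj.unsqueeze(0)).squeeze(0)      # contextualized V
```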

Complexity  Similar to CKY, we go over all $O(n^2)$ spans in a sentence, and for each span compute $h^k_{ij}$ for each of the possible $O(n)$ split points (there is no grammar constant, since the grammar effectively has a single rule). To compute the denotations $d_{ij}$ for all $O(n^2)$ spans, we perform a linear computation over all $n_{obj}$ objects. Thus, the algorithm runs in time $O(n^3 + n^2 n_{obj})$, with similar memory consumption. This is higher than end-to-end models that do not compute explicit span representations.

3.5 Training

The model is fully differentiable, and we train with maximum likelihood, maximizing the log probability $\log p(a^* \mid q, I)$ of the correct answer $a^*$ (see Algorithm 1).

4 Experiments

4.1 Arithmetic Expressions

Numerical reasoning with neural models has been studied in prior work (Trask et al., 2018; Geva et al., 2020). However, models are often evaluated on expressions that are similar to the ones they were trained on, where only the numbers change. To test for generalization, we create a simple dataset and evaluate on two splits that require learning the correct operator precedence and outputs. In the first split, sequences of operators that appear at test time do not appear at training time. In the second split, the test set contains longer sequences compared to the training set.

We define an arithmetic expression as a sequence containing $n$ numbers with $n-1$ arithmetic operators between each pair of adjacent numbers. The answer $a$ is the result of evaluating the expression.

Evaluation setups  The sampled operators are addition and multiplication, and we keep only expressions such that $a \in \{0, 1, \ldots, 100\}$, so that we can train with a multi-class objective. During training, we randomly pick the length $n$ to be up to $n_{train}$, and at test time we use a fixed length $n_{test}$. We evaluate on three setups. In the easy split, we choose $n_{train} = n_{test} = 8$, and the sequence of operators is randomly drawn from a uniform distribution for both training and test examples. In this setup, we only check that the exact same expression is not shared between the training and test sets. In the compositional split, we randomly pick 3 positions, and for each one randomly assign exactly one operator that will appear at training time. In the test set, the operators in all three positions are flipped, so that they now contain the unseen operator (see Fig. 5). The same lengths are used as in the easy split. Finally, in the length split, we train with $n_{train} = 8$ and test with $n_{test} = 10$. Examples for all setups are generated on-the-fly for 3 million steps with a batch size of 100.
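A minimal sketch of such an expression generator under the operator-position split is given below; single-digit operands and the resampling strategy are assumptions, since the excerpt does not specify them.

```python
import random

OPS = ("+", "*")

def make_split(max_len=8, n_flipped=3, seed=0):
    """Assign one training-time operator to a few random positions; at test
    time those positions get the other operator (compositional split)."""
    rng = random.Random(seed)
    positions = rng.sample(range(max_len - 1), n_flipped)
    train_ops = {p: rng.choice(OPS) for p in positions}
    test_ops = {p: "*" if op == "+" else "+" for p, op in train_ops.items()}
    return train_ops, test_ops

def sample_expression(fixed_ops, length, rng):
    """Sample numbers and operators; positions in fixed_ops use the fixed
    operator. Resample until the answer lies in the vocabulary {0, ..., 100}."""
    while True:
        nums = [rng.randint(0, 9) for _ in range(length)]       # operands assumed single-digit
        ops = [fixed_ops.get(p, rng.choice(OPS)) for p in range(length - 1)]
        expr = "".join(f"{n}{o}" for n, o in zip(nums, ops)) + str(nums[-1])
        answer = eval(expr)  # standard operator precedence
        if 0 <= answer <= 100:
            return expr, answer

train_ops, test_ops = make_split()
rng = random.Random(1)
print(sample_expression(train_ops, length=8, rng=rng))  # training distribution
print(sample_expression(test_ops, length=8, rng=rng))   # flipped-operator test
```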

Figure 5: Arithmetic expressions: unlike the easy setup, we evaluate models on expressions with operations ordered in ways unobserved at training time. Flipped operator positions are in red.

             Easy split      Op. split      Len. split
Transformer  100.0 ± 0.0     2.9 ± 1.1      10.4 ± 2.4
GLT          99.9 ± 0.2      98.4 ± 0.7     94.9 ± 1.1

Table 1: Arithmetic expressions results for the easy split, operation-position split, and length split.


Models

We compare GLT to a standard Transformer, where the input is the expression, and the output is predicted using a classification layer over the [CLS] token. All models are trained with cross-entropy loss given the correct answer.

For both models, we use an in-distribution validation set for hyper-parameter tuning. For the Transformer, we use 15 layers with a hidden size of 200 and a feed-forward layer size of 300. For our model, we use $h_{dim} = 400$. Since in this setup we do not have an image or any grounded input, we only compute $h_{ij}$ for all spans, and define $p(a \mid q) = \mathrm{softmax}(W h_{0n})$.

GLT layers are almost entirely recurrent, that is, the same parameters are used to compute representations for spans of all lengths. The only exception is the layer-normalization parameters, which are not shared across layers. Thus, at test time, when processing an expression longer than those observed at training time, we use the layer-normalization parameters (a total of $2 \cdot h_{dim}$ parameters per layer) from the longest span seen at training time.⁴

Results  Results are reported in Table 1. We see that both models almost completely solve the in-distribution setup, but on the out-of-distribution splits the Transformer performs poorly, while GLT shows only a small drop in accuracy.

4.2 CLEVR and CLOSURE

We evaluate performance on grounded complex questions using CLEVR (Johnson et al., 2017a), consisting of 100,000 synthetic images with multiple objects of different shapes, colors, materials, and sizes. 864,968 questions were synthetically created using 80 different templates, including simple questions ("what is the size of red cube?") and questions requiring multi-step reasoning (see Figure 1). The split in this dataset is i.i.d: templates used for training are the same as those in the validation and test sets.

Table 2: Test results for all models on CLEVR and CLOSURE. “Train Programs” stands for models trained with gold program, “Test Programs” for oracle models evaluated using gold programs, and “Deterministic Execution” for models that depend on domain-knowledge for execution (execution is not learned).


To test compositional generalization after training on CLEVR, we use the recent CLOSURE dataset (Bahdanau et al., 2019b) , which includes seven new question templates, with a total of 25,200 questions, asked on the CLEVR validation set images. The new templates are created by taking referring expressions of various types from CLEVR and combining them in novel ways.

A problem found in CLOSURE is that sentences from the template embed_mat_spa are ambiguous. For example, in the question "Is there a sphere on the left side of the cyan object that is the same size as purple cube?", the phrase "that is the same size as purple cube" can modify either "the sphere" or "the cyan object", but the answer in CLOSURE is always computed from the latter. Therefore, we deterministically compute both possible answers and keep two sets of question-answer pairs for this template over the entire dataset. We evaluate models⁵ on this template by taking the maximum score over these two sets (so that models must be consistent and choose a single interpretation of the template to get a perfect score).
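A small sketch of this scoring scheme, with hypothetical inputs:

```python
def template_accuracy(predictions, answers_a, answers_b):
    """Score the ambiguous embed_mat_spa template: compute accuracy against both
    deterministic answer sets and keep the maximum, so a model is rewarded only
    for committing to a single consistent interpretation.

    predictions, answers_a, answers_b: lists of answer strings, aligned by index.
    """
    acc = lambda gold: sum(p == g for p, g in zip(predictions, gold)) / len(gold)
    return max(acc(answers_a), acc(answers_b))

# Hypothetical usage with three questions of this template:
print(template_accuracy(["yes", "no", "yes"],
                        ["yes", "yes", "yes"],   # interpretation A
                        ["no", "no", "no"]))     # interpretation B
```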

Baselines  We evaluate against the baselines presented in Bahdanau et al. (2019b). The most comparable baselines are MAC (Hudson and Manning, 2018) and FiLM (Perez et al., 2018), which are differentiable and do not use any program annotations. We also compare to NMNs, which require at least a few hundred program examples for training. We show results for PG+EE (Johnson et al., 2017b) and an improved version, PG-Vector-NMN (Bahdanau et al., 2019b). Last, we compare to NS-VQA (Yi et al., 2018), which, in addition to parsing the question, also parses the scene into a knowledge graph; it requires domain knowledge and additional data, since the scene parser is trained on gold scene data from CLEVR (object color, shape, location, etc.).

Setup  Baseline results are taken from previous papers (Bahdanau et al., 2019b; Hudson and Manning, 2018; Yi et al., 2018; Johnson et al., 2017b), except for MAC and FiLM on CLOSURE, which we re-executed due to the aforementioned evaluation change. For GLT, we use CLEVR's validation set for hyper-parameter tuning, and run 4 experiments to compute the mean and variance on the CLOSURE test set. We train for 40 epochs and perform early stopping on CLEVR's validation set. We use $h_{dim} = 400$.

Because of our model's high run-time and memory demands (see §3.4), we found that running on CLEVR and CLOSURE, where question length goes up to 42 tokens, is difficult. Thus, we delete function words that typically have empty denotations and can be safely skipped,⁶ reducing the maximum length to 25.

CLEVR and CLOSURE  In this experiment we compare results on the i.i.d and compositional splits. Results are in Table 2. We see that GLT performs well on CLEVR and gets the highest score on CLOSURE, improving by almost 20 points over comparable models. GLT is competitive even with models that use gold programs at training time or rely on domain knowledge for execution.

Removing intersection and union  As described in §3.3, we defined two modules specifically for CLEVR (INTERSECTION and UNION). We remove these modules to evaluate performance without them, and see that the model suffers only a small loss in accuracy and generalization: accuracy on CLEVR (validation set) is 98.0 ± 0.3, and accuracy on CLOSURE (test set) is 90.1 ± 7.1. Removing these modules leads to more cases where the VISUAL function is used, effectively performing intersection and union as well. While the drop in performance and generalization is mild, this model is harder to interpret, since the VISUAL function performs multiple functions.

Few-Shot

We test GLT in a few-shot (FS) setup, where we add a few out-of-distribution examples. Specifically, we use 36 questions for each CLOSURE template, for a total of 252 examples. Similar to Bahdanau et al. (2019b), we take a model that was trained on CLEVR and fine-tune it by oversampling CLOSURE examples (300 times) and adding them to the original training set.

To make results comparable to Bahdanau et al. (2019b), we perform model selection based on the CLOSURE validation set, and evaluate on the test set. As we see in Table 3 , GLT gets the best accuracy. If we perform model selection based on CLEVR alone (the preferred way to evaluate in the OOD setup, Teney et al. 2020) , accuracy on CLOSURE is 94.2 ± 2.1, which is still highest.

Table 3: Test results in the few-shot setup and for CLEVR-Humans.

CLEVR-Humans

To test the performance of GLT on real natural language, we test on CLEVR-Humans (Johnson et al., 2017b), which consists of 32,164 questions based on images from CLEVR. These questions, asked and answered by humans, contain new words and reasoning steps that were not seen in CLEVR. We take a model that was trained on CLEVR and fine-tune it on the CLEVR-Humans training set, similar to prior work. We use GloVe (Pennington et al., 2014) for the embeddings of words unseen in CLEVR. We show results in Table 3. We see that GLT gets better results than models that use programs, showing its flexibility to learn new concepts and phrasings, but lower results compared to more flexible models like MAC and FiLM (see error analysis below).

4.3 Error Analysis

We sampled 25 questions with wrong predictions on CLEVR, CLOSURE, and CLEVR-Humans to analyze model errors. On CLEVR, most errors (84%) are due to problems in the visual processing of the images, such as grounding the word "rubber" to a metal object, problems in bounding box prediction, or questions that require subtle spatial-relation reasoning, e.g., identifying whether an object is to the left of another object of a different size when the two are at an almost identical x-position. The remaining errors (16%) are due to failed comparisons of numbers or attributes ("does the red object have the same material as the cube").

On CLOSURE, 60% of the errors were similar to those seen on CLEVR, e.g., problematic visual processing or failed comparisons. We found that in 4% of cases the execution of the VISUAL module was wrong, e.g., it collapsed two reasoning steps (both intersection and finding objects of the same shape) but did not output the correct denotation. The remaining errors (36%) are in the predicted latent tree, where the model was uncertain about the split point and softly predicted more than one tree, resulting in wrong answer predictions. In some cases (16%) this was due to question ambiguity (see §4.2), and in other cases the cause was unclear (e.g., for the phrase "same color as the cube", the model gave similar probability to the split after "same" and after "color", leading to a wrong denotation for that span).

On CLEVR-Humans, we see that the model successfully learned certain new "concepts" such as colors ("gold"), superlatives ("smallest", "largest"), relations ("the reflecting object"), positions ("back left"), and negation (see Fig. 6). It also answered correctly questions with a different style than CLEVR ("Are there more blocks or balls?", "... cube being covered by ..."). However, the model fails on other new concepts, such as the "all" quantifier, arithmetic computations ("how many more... are there than...?"), and others ("What is the most common shape?").

Figure 6: An example from CLEVR-Humans. The model learned to negate (“not”) using the VISUAL module (negation is not part of CLEVR).

4.4 Interpretability

A key advantage of latent trees is interpretability: one can analyze the computation structure for a given question. Next, we analyze when model outputs are interpretable, and discuss how interpretability is affected by the limitations of GLT and how it relates to its generalization abilities. Additional output examples can be seen in Appendix A.

The model predicts a denotation for each span, which assigns a probability to every object in the image. Thus, for every question span that should correspond to a set of objects, the output is interpretable, as can be seen in Fig. 1. Having interpretable tree structures also helps analyze ambiguous questions, such as the ones found in CLEVR and CLOSURE.

Figure 7: An example from CLEVR-Humans. This question requires reasoning steps that are not explicitly mentioned in the input. This results in a correct answer but non-interpretable output.
Figure 9: An example from CLEVR-Humans.

However, span denotations are not always distributions over objects; sometimes they should be a number or an attribute. For example, in comparison questions ("is the number of cubes higher than the number of spheres?"), a fully interpretable model would have a numerical denotation for each group of objects. GLT solves such questions by grounding the objects correctly and leaving the counting and arithmetic comparison to the answer function (line 7 in Algorithm 1). However, this comes at a cost to interpretability (see Fig. 10). In the numerical comparison example, it is easy to inspect the grounding of objects, but hard to tell what the count for each group is, which is likely to affect generalization as well. A future research direction is to learn richer denotation structures.

Figure 10: An example from CLEVR.

Another case where interpretability is suboptimal is counting. Due to the expressivity of the answer function, the denotation in counting questions does not necessarily contain only the objects to be counted. For example, for a question such as "how many cubes are there", the most interpretable model would have exactly the cubes in the denotation of the entire question. However, GLT often outputs non-interpretable probabilities for the objects. In such cases, the outputs are interpretable for sub-spans of the question ("cubes are there"), as seen in Fig. 6. This issue could be addressed by pre-training or by injecting dedicated count modules, as shown by Subramanian et al. (2020).

Finally, the hardest case is when the required reasoning steps are not explicitly mentioned in the question. For example, the question "what is the most common shape?" requires counting the different shapes in the image and then taking the shape with the maximum count. While our model answers this question correctly (see Fig. 7), it does so by "falling back" to the flexible answer function, rather than by explicitly performing the required computation. In future work, we will explore combining the compositional generalization abilities of our model, which grounds intermediate answers to spans, with the advantages of NMNs, which support more flexible reasoning.

5 Conclusion

We propose a model for grounded question answering that strongly relies on compositional computation. We show that our model leads to large gains in a systematic generalization setup and provides an interpretable structure that can be inspected by humans and sheds light on the model's inner workings. Our work suggests that generalizing to unseen language structures can benefit from a strong inductive bias in the network architecture. By limiting our model to compose non-contextualized representations in a recursive bottom-up manner, we outperform state-of-the-art models on a challenging compositional generalization task. Our model also obtains high performance on real natural language questions in the CLEVR-Humans dataset.

In future work, we plan to investigate the structures revealed by our model in other grounded question answering setups, and to allow the model more freedom to incorporate non-compositional signals, which go hand in hand with compositional computation in natural language.

1 Our code and data can be found at https://github.com/benbogin/glt-grounded-latent-trees-qa.

2 We also use Dropout and Layer-Norm (Ba et al., 2016) throughout the paper, omitted for simplicity.

3 In CLEVR, we split sentences based on semi-colons.

4 Removing layer normalization leads to an improved accuracy of 99% on the arithmetic expressions length split, but training on CLEVR becomes too slow.

5 We update the scores on CLOSURE for MAC, FiLM, and GLT due to this change in evaluation. The scores for the rest of the models were not affected.

6 The removed tokens are punctuation, 'the', 'there', 'is', 'a', 'as', 'it', 'its', 'of', 'are', 'other', 'on', and 'that'.