* gets around the need for annotated logical forms (required by traditional semantic parsers) while scaling up the number of logical predicates
* semantic parser must combine predicates into coherent logical form
* the parser defines a few simple composition rules that over-generate, and uses model features to simulate soft rules and categories
* use POS tag features and features on denotations of predicted logical forms
* database is queried using logical language: lambda-dependency-based compositional semantics
* given utterance, semantic parser constructs a distribution over possible derivations, each derivation specifying application of a set of rules that culminates in the logical form at root of tree
* derivations are constructed recursively by mapping natural language phrases to knowledge base predicates and combining them with a small set of composition rules
* to produce a manageable set of predicates per utterance, construct a lexicon that maps natural language phrases to logical predicates by aligning a large text corpus to Freebase; also generate logical predicates compatible with neighboring predicates using a bridging operation (see the sketch below)
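A minimal sketch of the lexicon-plus-bridging idea above, with made-up phrase, predicate, and type names (the paper's actual lexicon is induced from a web-scale corpus aligned to Freebase):

```python
# Toy sketch: propose candidate KB predicates for an utterance.
# Lexicon lookups handle phrases we have seen aligned to predicates;
# "bridging" proposes any predicate type-compatible with a neighboring entity,
# even when no phrase explicitly triggers it. All names are illustrative.

LEXICON = {
    "born in": ["PlaceOfBirth"],
    "obama": ["BarackObama"],
}
ENTITY_TYPES = {"BarackObama": "Person"}
PREDICATES_BY_ARG_TYPE = {
    "Person": ["PlaceOfBirth", "DateOfBirth", "Profession"],
}


def candidate_predicates(phrases):
    candidates, entities = set(), []
    for phrase in phrases:
        for pred in LEXICON.get(phrase, []):
            candidates.add(pred)
            if pred in ENTITY_TYPES:          # predicate denotes an entity
                entities.append(pred)
    for entity in entities:                   # bridging step
        candidates.update(PREDICATES_BY_ARG_TYPE.get(ENTITY_TYPES[entity], []))
    return candidates


print(candidate_predicates(["where was", "obama", "born in"]))
```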
* logical approaches rely on techniques from proof theory and model-theoretic semantics; primarily concerned with inference, ambiguity, vagueness, and compositional interpretation
* statistical approaches derive their tools from algorithms and optimization and tend to focus on word meanings, vector space models, and other broad notions of semantic content
* principle of compositionality: the meaning of a complex syntactic phrase is a function of the meanings of its constituent phrases (a toy example appears below)
* heart of grammar is its lexicon
* the grammar can generate a number of derivations exponential in the length of the sentence; dynamic programming can mitigate this problem for parsing
* learning via denotations in general results in increased computational complexity
* learning from denotations offers advantage of being able to define features on denotations
* semantic representations can also be distributed representations (rather than logical forms)
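As a toy illustration of the compositionality principle noted above (grammar and denotations invented for this sketch, not taken from the paper), the denotation of each phrase is computed from the denotations of its parts:

```python
# Toy compositional interpretation: the meaning (denotation) of a phrase is a
# function of the meanings of its constituent phrases. Lexicon is made up.

DENOTATIONS = {
    "two": 2,
    "three": 3,
    "plus": lambda a, b: a + b,
    "times": lambda a, b: a * b,
}


def interpret(tree):
    """tree is either a word (leaf) or a (left, op, right) triple."""
    if isinstance(tree, str):
        return DENOTATIONS[tree]
    left, op, right = tree
    return DENOTATIONS[op](interpret(left), interpret(right))


# "two plus three, times two", with the structure made explicit:
print(interpret((("two", "plus", "three"), "times", "two")))  # -> 10
```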
* Supervised semantic parsers
* First must map questions into logical forms, and this requires data with manually labeled semantic forms
* all we really care about is the resulting denotation for a given input, so we are free to choose how we represent logical forms
* introduce new semantic representation: dependency-based compositional semantics
* represent logical forms as DCS trees where nodes represent predicates (State, Country, Genus, ...) and edges represent relations
* such a form allows for transparency between syntax and semantics and hence a streamlined framework for program induction
* the denotation of the whole tree is computed at the root node
* trees mirror syntactic dependency structure, facilitating parsing, and also enable efficient computation of the denotations defined on a given tree
* to handle divergence between syntactic and semantic scope in some more complicated expressions, mark nodes low in tree with *mark* relation (E, Q, or C) and then invoke it higher up with *execute* relation to create desired semantic scope
* discriminative semantic parsing model placing a log-linear distribution over the set of permissible DCS trees given an utterance
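A sketch of that log-linear form (notation assumed here rather than copied from the paper: x is the utterance, z ranges over the permissible DCS trees Z(x), phi is the feature map, and theta the learned weights):

```latex
p_\theta(z \mid x) \;=\;
  \frac{\exp\!\big(\theta^\top \phi(x, z)\big)}
       {\sum_{z' \in \mathcal{Z}(x)} \exp\!\big(\theta^\top \phi(x, z')\big)}
```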
Seeks to tackle database completion using MacCartney's natural logic. The approach does not require explicit alignment between premise and query, and it allows imprecise inferences at an associated cost learned from data. It casts the transformation from query to supporting premise as a unified search problem, where each step may carry a cost reflecting confidence in that step. The system allows unstructured text as input to the database, without a need to specify a schema or domain for the text.
Represents MacCartney's inference model as a finite state machine that can be collapsed into three states. Acquisition of new premises for the knowledge base is cast as search over this FSA, with nodes representing candidate facts and edges representing mutations of these facts with associated costs. The confidence of a path is then computed using the cost vector and an associated feature vector (representing the distance between the endpoints of two transitions). The model was tested on the FraCaS entailment corpus as well as a corpus of OpenIE extractions.
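A rough sketch of the path-scoring idea, assuming for illustration that each mutation (edge) contributes a feature vector, the learned cost vector prices those features, and confidence decays with total path cost (feature names and weights are invented):

```python
import math

# Toy path scoring for a natural-logic search: each edge mutates the current
# fact and carries features; the cost vector THETA prices those features, and
# confidence decreases with the summed cost of the path.
THETA = {"synonym_swap": 0.1, "hypernym_step": 0.5, "antonym_swap": 5.0}


def path_confidence(edge_features):
    total_cost = sum(THETA[name] * value
                     for edge in edge_features
                     for name, value in edge.items())
    return math.exp(-total_cost)  # map cost into (0, 1]


# A path with one synonym substitution and one step up a hypernym hierarchy.
print(path_confidence([{"synonym_swap": 1.0}, {"hypernym_step": 1.0}]))
```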
## Future Work
The search process does not have full access to the parse tree, so it struggles with issues of alignment.
Paper proposes an abstract generic task to frame the textual entailment problem.
Generated a dataset of text snippets from the general news domain, annotated by humans with entailment judgments. Annotators generated hypotheses for the texts by converting questions and text phrases drawn from various application settings, including QA, information extraction, reading comprehension, machine translation, and paraphrase acquisition. Sixteen submissions were made to the challenge, encompassing a wide variety of entailment inference systems. Basic kinds of features for the submitted systems included stemming, lemmatization, POS tagging, and some form of statistical weighting. Other approaches made use of higher-level lexical relationships via WordNet or evaluated the distance between the syntactic structures of hypothesis and premise.
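A minimal sketch of the sort of shallow lexical-overlap baseline many of these systems built on (threshold, tokenization, and the example pair are placeholders; real systems added stemming, POS tags, WordNet relations, and statistical weighting):

```python
# Toy RTE baseline: predict entailment when enough hypothesis words are
# covered by the text. Threshold and example are purely illustrative.

def word_overlap_entails(text, hypothesis, threshold=0.75):
    text_words = set(text.lower().split())
    hyp_words = set(hypothesis.lower().split())
    coverage = len(hyp_words & text_words) / max(len(hyp_words), 1)
    return coverage >= threshold


print(word_overlap_entails(
    "Oracle had fought to keep the forms from being released",
    "Oracle released a confidential document"))  # 2 of 5 words overlap -> False
```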
## Future Work
The organizers wished to improve the challenge by dealing with multi-valued annotation, relaxing assumptions about background knowledge, providing entailment subtasks, and covering a wider variety of inference scopes.
First published: 2014/06/06. Abstract: Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models---plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs)---can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.
The paper wished to see whether distributed word representations could be used in a machine learning setting to achieve good performance in the task of natural language inference (also called recognizing textual entailment).
Paper investigates the use of two neural architectures for identifying the entailment and contradiction logical relationships. In particular, the paper uses tree-structured recursive neural networks (TreeRNNs) and tree-structured recursive neural tensor networks (TreeRNTNs), relying on the notion of compositionality to encode natural language word order and semantic meaning. The models are first tested on a reduced artificial dataset organized around a small boolean world model, where the models are tasked with learning propositional relationships. Afterwards, these models are tested on a more complex artificial dataset where simple propositions are combined into more complex formulas. Both models achieved solid performance on these datasets, although on the latter dataset the RNTN seemed to struggle when tested on longer expressions. The models were also tested on an artificial dataset where they were tasked with learning how to correctly interpret quantifiers and negation in the context of natural logic. Finally, the models were tested on a freely available textual entailment dataset called SICK (supplemented with data from the Denotation Graph project). The models achieved reasonably good performance on the SICK challenge, showing that they have the potential to accurately learn distributed representations from noisy real-world data.
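A sketch of the two composition functions being compared, in numpy (dimension sizes and initialization are placeholders; the full models also include learned word embeddings, a comparison layer, and a softmax classifier):

```python
import numpy as np

d = 16                      # toy embedding dimension
rng = np.random.default_rng(0)

# Plain TreeRNN composition: h = tanh(W [left; right] + b)
W = rng.normal(scale=0.1, size=(d, 2 * d))
b = np.zeros(d)

# TreeRNTN adds a bilinear tensor term over the concatenated children.
T = rng.normal(scale=0.1, size=(d, 2 * d, 2 * d))


def compose_treernn(left, right):
    child = np.concatenate([left, right])
    return np.tanh(W @ child + b)


def compose_treerntn(left, right):
    child = np.concatenate([left, right])
    bilinear = np.einsum("i,kij,j->k", child, T, child)
    return np.tanh(W @ child + b + bilinear)


left, right = rng.normal(size=d), rng.normal(size=d)
print(compose_treernn(left, right).shape, compose_treerntn(left, right).shape)
```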
## Future Work
The neural models proposed seem to show particular promise for natural-language logical semantics tasks. The authors firmly believe that, given enough data, the proposed architectures have the potential to perform even better on this task, which makes acquisition of a more comprehensive and diverse dataset a natural next step for this modeling approach. Further, even the more powerful RNTN shows rapidly declining performance on longer expressions, which leaves open the question of whether stronger models or learning techniques could improve performance on expressions of considerable size. In addition, there is still the question of how these architectures actually encode the natural logic they are being asked to learn.
* Efficiently find fraction of referring expressions for scenes that are used; estimate associated likelihoods
* Learn probability distribution over set of logical expressions that select a target set of objects in a world state
* Model as globally normalized log-linear model using features of logical form *z*
* Distinction for plural entities in generated logical forms
* globally normalized log-linear model, conditioned on state S and set of target objects G (see the sketch after this list)
* Three kinds of features: logical expression structure features, situated features, and a complexity feature
* learning two models: one for a global logical form for the world-state model, and one learning a series of classifiers for each
* learn codebooks and associated sparse codes
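A rough sketch of how the globally normalized model over candidate logical forms might look, with stand-ins for the three feature groups listed above (all feature definitions, weights, and candidates here are invented for illustration):

```python
import math

# Toy globally normalized log-linear model: score each candidate logical form
# z for a world state S and target object set G, then normalize over the
# candidate set. The three feature groups stand in for the structure,
# situated, and complexity features; S is unused in this simplified sketch.

def features(z, S, G):
    return {
        "structure:num_predicates": float(len(z["predicates"])),
        "situated:selects_targets": 1.0 if z["denotation"] == G else 0.0,
        "complexity": float(len(z["predicates"])) ** 2,
    }


def distribution(candidates, S, G, theta):
    scores = [math.exp(sum(theta.get(name, 0.0) * value
                           for name, value in features(z, S, G).items()))
              for z in candidates]
    total = sum(scores)
    return [score / total for score in scores]


theta = {"structure:num_predicates": -0.1,
         "situated:selects_targets": 2.0,
         "complexity": -0.05}
candidates = [
    {"predicates": ["red"], "denotation": {1, 2}},
    {"predicates": ["red", "cube"], "denotation": {1}},
]
print(distribution(candidates, S=None, G={1}, theta=theta))
```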
First published: 2012/06/27. Abstract: As robots become more ubiquitous and capable, it becomes ever more important to enable untrained users to easily interact with them. Recently, this has led to study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to perception and actuation in the physical world. In this paper, we present an approach for joint learning of language and perception models for grounded attribute induction. Our perception model includes attribute classifiers, for example to detect object color and shape, and the language model is based on a probabilistic categorial grammar that enables the construction of rich, compositional meaning representations. The approach is evaluated on the task of interpreting sentences that describe sets of objects in a physical workspace. We demonstrate accurate task performance and effective latent-variable concept induction in physical grounded scenes.
* Task of extracting representations of language tied to physical world
* New grounded concepts are learned from a set of scenes containing only sentences, images, and indications of which objects are being referred to
* System includes:
* *Semantic parsing model*
* Defines distribution over logical meaning representations for each given sentence
* Set of visual attribute classifiers for each possible object in scene
* Joint model learns a mapping from logical constants in the logical form to the set of visual attribute classifiers
* Extracted depth and RGB values from images as features (shape and color attributes)
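A minimal sketch of the perception side: one binary classifier per attribute word over simple per-object color/shape features (scikit-learn logistic regression used here purely as an illustration; the data is random, and the paper's actual features and classifiers may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy attribute classifiers: one binary classifier per attribute word
# ("red", "cube", ...) over per-object features such as mean RGB and a simple
# shape statistic from depth. Training data here is random and illustrative.
rng = np.random.default_rng(0)

# Hypothetical per-object features: [mean_R, mean_G, mean_B, height_from_depth]
X = rng.random((60, 4))
y_red = (X[:, 0] > 0.6).astype(int)   # pretend "red" depends on the R channel

classifiers = {"red": LogisticRegression().fit(X, y_red)}

# Grounding a logical constant like `red` then amounts to applying its
# classifier to each object's features and keeping the positive objects.
scene_objects = rng.random((5, 4))
print(classifiers["red"].predict(scene_objects))
```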