Technical Review

Knowledge Graphs

Last modified: 2018-12-11


Copyright © 2018-present, Victoria A. Stuart




[Table of Contents]

KNOWLEDGE GRAPHS


Why Graphs?

The introductory chapter in Probabilistic Graphical Models (Daphne Koller and Nir Friedman: 2009) succinctly states some key points, which I paraphrase here.

[Figure: Koller2009PGM-f1.1.png]


  • [Probabilistic] Graphical models use a graph-based representation as the basis for compactly encoding a complex distribution over a high-dimensional space. There is a dual perspective that one can use to interpret the structure of this graph. … From one perspective, the graph is a compact representation of a set of independencies that hold in the distribution; these properties take the form $\small \mathbf{X}$ is independent of $\small \mathbf{Y}$ given $\small \mathbf{Z}$, denoted $\small (\mathbf{X}\perp \mathbf{Y}\ \vert\ \mathbf{Z})$, for some subsets of variables $\small \mathbf{X,Y,Z}$. … The other perspective is that the graph defines a skeleton for compactly representing a high-dimensional distribution: rather than encode the probability of every possible assignment to all of the variables in our domain, we can partition the distribution into smaller factors, each over a much smaller space of possibilities. We can then define the overall joint distribution as a product of these factors.

  • This framework has many advantages. First, it often allows the distribution to be written down tractably, even in cases where the explicit representation of the joint distribution is astronomically large. Importantly, the type of representation provided by this framework is transparent, in that a human expert can understand and evaluate its semantics and properties. This property is important for constructing models that provide an accurate reflection of our understanding of a domain. Models that are opaque can easily give rise to unexplained, and even undesirable, answers.

    Second, as we show, the same structure often also allows the distribution to be used effectively for inference – answering queries using the distribution as our model of the world. In particular, we provide algorithms for computing the posterior probability of some variables given evidence on others. …

    Third, this framework facilitates the effective construction of these models, whether by a human expert or automatically, by learning from data a model that provides a good approximation to our past experience. …

    These three components – representation, inference, and learning – are critical components in constructing an intelligent system.
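The factorization idea above can be made concrete with a toy sketch (not from the book): a distribution over three binary variables factorized along a hypothetical chain A → B → C, with invented probability tables.

```python
# Toy sketch: a joint distribution over three binary variables factorized
# along the chain A -> B -> C, so that P(a, b, c) = P(a) * P(b|a) * P(c|b).
# The probability tables are invented for illustration.

P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # (b, a)
P_C_given_B = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # (c, b)

def joint(a, b, c):
    """Joint probability recovered as a product of the three local factors."""
    return P_A[a] * P_B_given_A[(b, a)] * P_C_given_B[(c, b)]

# The factors hold 2 + 4 + 4 entries instead of the 2**3 entries of an
# explicit joint table; the saving grows exponentially with more variables.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 10))  # 1.0 -- a valid distribution
```

The compactness pays off at scale: a chain of 50 binary variables needs roughly 200 factor entries rather than 2^50 joint entries.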


Knowledge Graph Construction and Completion

Knowledge graphs (relational property graphs) model information in the form of entities (nodes/vertices) and the relationships between them (edges). For a formal definition see the “Problem Definition” subsection in Discriminative Predicate Path Mining for Fact Checking in Knowledge Graphs, and for a general overview see Towards a Definition of Knowledge Graphs.
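As a toy illustration of this modeling style, a knowledge graph can be held as a set of (head, relation, tail) triples; the biomedical entity and relation names below are invented, not drawn from any cited dataset.

```python
# Toy sketch of a knowledge graph as (head, relation, tail) triples,
# with a helper for edge lookup. Entity/relation names are illustrative.

triples = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("headache", "symptom_of", "migraine"),
}

def outgoing(entity):
    """All (relation, tail) edges leaving an entity node."""
    return {(r, t) for (h, r, t) in triples if h == entity}

print(sorted(outgoing("aspirin")))
# [('interacts_with', 'warfarin'), ('treats', 'headache')]
```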

Knowledge graphs (KG) provide semantically structured information that is interpretable by computers – an important property for building more intelligent machines, as well as an important step toward transforming text-based knowledge stores and search engines into semantically-aware question answering services [A Review of Relational Machine Learning for Knowledge Graphs (Sep 2015)].

Knowledge graphs are applicable to a broad spectrum of problems ranging from biochemistry to recommender systems; for example, question answering, structured search, exploratory search, and digital assistants. Examples of the use of KG in the biomedical domain include:

  • BioGraph: Unsupervised Biomedical Knowledge Discovery via Automated Hypothesis Generation (2011) [code];

    [Figure: BioGraph-f1.png]
    [Figure: BioGraph-f2.png]
    [Figure: BioGraph-f4.png]


  • KnowLife: A Versatile Approach for Constructing a Large Knowledge Graph for Biomedical Sciences (May 2015);

    Background. Biomedical knowledge bases (KB’s) have become important assets in life sciences. Prior work on KB construction has three major limitations. First, most biomedical KBs are manually built and curated, and cannot keep up with the rate at which new findings are published. Second, for automatic information extraction (IE), the text genre of choice has been scientific publications, neglecting sources like health portals and online communities. Third, most prior work on IE has focused on the molecular level or chemogenomics only, like protein-protein interactions or gene-drug relationships, or solely address highly specific topics such as drug effects.

    Results. We address these three limitations by a versatile and scalable approach to automatic KB construction. Using a small number of seed facts for distant supervision of pattern-based extraction, we harvest a huge number of facts in an automated manner without requiring any explicit training.
    We extend previous techniques for pattern-based IE with confidence statistics, and we combine this recall-oriented stage with logical reasoning for consistency constraint checking to achieve high precision. To our knowledge, this is the first method that uses consistency checking for biomedical relations. Our approach can be easily extended to incorporate additional relations and constraints.
    We ran extensive experiments not only for scientific publications, but also for encyclopedic health portals and online communities, creating different KB’s based on different configurations. We assess the size and quality of each KB, in terms of number of facts and precision. The best configured KB, KnowLife, contains more than 500,000 facts at a precision of 93% for 13 relations covering genes, organs, diseases, symptoms, treatments, as well as environmental and lifestyle risk factors.

    Conclusion. KnowLife is a large knowledge base for health and life sciences, automatically constructed from different Web sources. As a unique feature, KnowLife is harvested from different text genres such as scientific publications, health portals, and online communities. Thus, it has the potential to serve as one-stop portal for a wide range of relations and use cases. To showcase the breadth and usefulness, we make the KnowLife KB accessible through the KnowLife – One-Stop Health Portal.

    [Figure: KnowLife-f1-edited.png]
    [Figure: KnowLife-f2.png]


  • [Hetionet] Systematic Integration of Biomedical Knowledge Prioritizes Drugs for Repurposing; and,

    [Figure: Hetionet-f1.png]
    [Figure: Hetionet-f4.png]


  • Semantic Health Knowledge Graph: Semantic Integration of Heterogeneous Medical Knowledge and Services.

    [Figure: 2858423.png]


A noteworthy feature of knowledge graphs is the excellent performance of traversal-type queries across entities of diverse types. Such queries can be challenging to realize in relational databases because of the cost of computing joins across different tables. Cypher queries in Neo4j, for example, are easier to write and understand than the complex SQL queries required in relational database management systems (RDBMS), especially those involving multiple join statements (see pp. 22-23 in The Definitive Guide to Graph Databases for the RDBMS Developer).
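The traversal advantage can be sketched in a few lines: a multi-hop query that would need chained joins in an RDBMS reduces to a short breadth-first walk over an adjacency list. This is a toy illustration, not how Neo4j is implemented internally.

```python
# Toy sketch: multi-hop reachability as a breadth-first walk over an
# adjacency list, the graph-store analogue of a chain of SQL joins.
from collections import deque

adjacency = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}

def reachable_within(start, hops):
    """Entities reachable from `start` in at most `hops` edge traversals."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # do not expand past the hop limit
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

print(sorted(reachable_within("a", 2)))  # ['b', 'c', 'd']
```

Each additional hop is one more ring of the frontier here, whereas in SQL it would be one more self-join.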

Graphs represent a recent and exciting extension of machine learning; for a good review, see The Knowledge Graph as the Default Data Model for Learning on Heterogeneous Knowledge. Although a typical knowledge graph may contain millions of entities and billions of relational facts, it is usually far from complete. Determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners.

Traditional fact checking by experts and analysts cannot keep pace with the volume of newly created information. It is therefore important and necessary to enhance our ability to computationally determine whether statements of fact are true or false. This problem may be viewed as a link prediction (knowledge graph completion) task. Knowledge graph completion (KGC) aims at predicting relations between entities under supervision of the existing knowledge graph, which is an important supplement to relation extraction from plain texts. To address those challenges, a number of knowledge graph completion methods have been developed using link prediction and low-dimensional graph embeddings.

Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction (Aug 2018) [data/code] introduced a joint multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. They created SciERC, a dataset that included annotations for all three tasks, and developed a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduced cascading errors between tasks and leveraged cross-sentence relations through coreference links. Their multi-task model was better at predicting span boundaries and outperformed previous state of the art scientific IE systems on entity and relation extraction, without using any hand-engineered features or pipeline processing.

[Figure: arxiv1808.09602-f2.png]
[Figure: arxiv1808.09602-f1+f3+f4.png]


  • SciIE was able to automatically organize the extracted information from a large collection of scientific articles into a knowledge graph. SciIE supported the construction of a scientific knowledge graph, which they used to analyze information in scientific literature. Their analysis showed the importance of coreference links in making a dense, useful graph. Experiments showed that their multi-task model outperformed previous models in scientific information extraction without using any domain-specific features.

Knowledge Graph Construction and Completion: Additional Reading

  • Augmenting Compositional Models for Knowledge Base Completion Using Gradient Representations (Nov 2018)

    “Neural models of Knowledge Base data have typically employed compositional representations of graph objects: entity and relation embeddings are systematically combined to evaluate the truth of a candidate Knowledge Base entry. Using a model inspired by Harmonic Grammar, we propose to tokenize triplet embeddings by subjecting them to a process of optimization with respect to learned well-formedness conditions on Knowledge Base triplets. The resulting model, known as Gradient Graphs, leads to sizable improvements when implemented as a companion to compositional models. Also, we show that the “supracompositional” triplet token embeddings it produces have interpretable properties that prove helpful in performing inference on the resulting triplet representations.”

    [Figure: arxiv1811.01062-f1.png]
    [Figure: arxiv1811.01062-t1.png]


Statistical Relational Learning

A Review of Relational Machine Learning for Knowledge Graphs (Sep 2015) – by Maximilian Nickel, Kevin Murphy, Volker Tresp and Evgeniy Gabrilovich – provides an excellent introduction to machine learning and knowledge graphs, with a focus on statistical relational learning. The authors demonstrated how statistical relational learning can be used in conjunction with machine reading and information extraction methods to automatically build KG.

[Figure: arxiv1503.00759-f1+f2.png]
[Figure: arxiv1503.00759-f4.png. See Section IV (Latent Feature Models), pp. 6-10 for details: essentially, the relationships may be modeled/represented as tensors, amenable to matrix factorization methods.]


  • “[K]nowledge graphs have found important applications in question answering, structured search, exploratory search, and digital assistants. We provided a review of state of the art statistical relational learning methods applied to very large knowledge graphs. We also demonstrated how statistical relational learning can be used in conjunction with machine reading and information extraction methods to automatically build such knowledge repositories. As a result, we showed how to create a truly massive, machine-interpretable ‘semantic memory’ of facts, which is already empowering numerous practical applications. …”

In statistical relational learning the representation of an object can contain its relationships to other objects. The data is in the form of a graph, consisting of nodes (entities) and labelled edges (relationships between entities). The main goals of statistical relational learning include prediction of missing edges, prediction of properties of nodes, and clustering nodes based on their connectivity patterns. These tasks arise in many settings such as analysis of social networks and biological pathways.
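The latent feature models surveyed in the review score a triple through learned entity and relation representations. As a toy sketch of the bilinear (RESCAL-style) variant, with invented rather than trained values:

```python
# Toy sketch of a bilinear (RESCAL-style) latent feature model: each
# entity is a vector, each relation a matrix, and a candidate triple is
# scored as h^T W_r t. Values below are illustrative, not trained.

def bilinear_score(h, W, t):
    """Compute h^T W t for plain-Python vectors and a matrix of rows."""
    Wt = [sum(W[i][j] * t[j] for j in range(len(t))) for i in range(len(W))]
    return sum(h[i] * Wt[i] for i in range(len(h)))

h = [1.0, 0.0]        # latent features of the head entity
t = [0.0, 1.0]        # latent features of the tail entity
W_r = [[0.2, 0.9],    # interaction weights for one relation
       [0.1, 0.3]]

print(bilinear_score(h, W_r, t))  # 0.9
```

Training such a model amounts to factorizing the (entities × entities × relations) adjacency tensor so that observed triples score higher than unobserved ones.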

  • Statistical relational learning (sometimes called Relational Machine Learning, RML) is a subdiscipline of machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure. Typically, the knowledge representation formalisms developed in statistical relational learning use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming.

    The field of statistical relational learning is not strictly limited to learning aspects; it is equally concerned with reasoning (specifically, probabilistic inference) and knowledge representation. Therefore, alternative terms that reflect the main foci of the field include statistical relational learning and reasoning (emphasizing the importance of reasoning) and first-order probabilistic languages (emphasizing the key properties of the languages with which models are represented).

    • Statistical relational learning should not be confused with Semantic Role Labeling (sometimes also called shallow semantic parsing); unfortunately, both are abbreviated as SRL. Semantic role labeling is a process in NLP that assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence. It consists of the detection of the semantic arguments associated with the predicate or verb of a sentence and their classification into their specific roles (i.e., automatically finding the semantic roles of each argument of each predicate in a sentence).

Ontology reasoning is the general name given to reasoning over ontologies and knowledge bases: deriving facts that are not explicitly expressed in an ontology or knowledge base. Ontology reasoning describes problem settings where the rules for conducting reasoning are specified alongside the actual information. In 2017 Hohenecker and Lukasiewicz (University of Oxford) presented a novel approach to ontology reasoning that was based on deep learning rather than logic-based formal reasoning [Deep Learning for Ontology Reasoning (May 2017)]. They introduced a new model for statistical relational learning built upon deep recursive neural networks which easily competed with existing logic-based reasoners on the task of ontology reasoning on “RDFox” (an in-memory RDF triple store, i.e. graph, being developed as a commercial product).

In their 2018 follow-on work, Ontology Reasoning with Deep Neural Networks (Sep 2018), Hohenecker and Lukasiewicz devised a novel model that was closely coupled to symbolic reasoning methods and thus able to learn how to effectively perform basic ontology reasoning. Tests showed that their model learned to perform precise reasoning on a number of diverse inference tasks that required comprehensive deductive proficiencies; furthermore, the suggested model suffered much less from different obstacles that prohibit symbolic reasoning.

[Figure: arxiv1808.07980-f1.png]
[Figure: arxiv1808.07980-f2.png]
[Figure: arxiv1808.07980-f3.png]
[Figure: arxiv1808.07980-f4.png]



Knowledge Graph Embedding

In a radically different approach from the probabilistic methods employed in statistical relational learning, knowledge graph embedding (KGE) aims to represent entities and relations in a knowledge graph as points or vectors in a continuous vector space (Euclidean space) – simplifying manipulation, while preserving the inherent structure of the KG. For recent reviews see:

  • A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications (Feb 2018)

    [Figure: arxiv1709.07604-f1.png]


  • Knowledge Graph Embedding: A Survey of Approaches and Applications (Sep 2017; pdf).

    [Figure: Wang2017Knowledge-f1.png]
    [Figure: Wang2017Knowledge-f2+t1.png]
    [Figure: Wang2017Knowledge-f4.png]


  • An excellent follow-on article (different authors) to A Review of Relational Machine Learning for Knowledge Graphs (Sep 2015) is On Embeddings as Alternative Paradigm for Relational Learning (Jul 2018), which systematically compared knowledge graph embedding (KGE) and logic-based statistical relational learning methods on standard relational classification and clustering tasks.

    “The most important takeaway messages from this work are the following:

    • KGE seems to be suitable for curated data. KGEs seem to have difficulties in the case when only a small fraction of available information is necessary for the given prediction task. SRL methods do not have that issue as they cherry pick useful information during the learning phase. This observation explains why KGEs perform well on the knowledge base completion task where the complexity of reasoning is low, but the dataset is huge. This further indicates the target use cases for the respective approaches: KGEs target simple relational reasoning with huge amounts of data, while SRL targets complex reasoning tasks with not a lot of data.

    • KGE might be useful for relational clustering but needs further research. Although logic-based relational clustering algorithms achieve the overall best results, clustering instances in the embedding space does not lag far behind. They certainly show potential for this task; however, they raise many problems, such as the question of an appropriate distance/similarity measure.

    • Hyper-parameters matter a lot. A major disadvantage of KGEs is their sensitivity to hyper-parameters. Our experiments show there is no clear strategy for choosing the right dimension. Though this might be less of an issue for classification problems, it poses a major one for clustering, where there is no option to tune the parameters.

    • Besides the above outlined point, some obvious (dis)advantages are worth re-iterating here. A strong advantage of KGEs is their scalability, at the expense of their black-box nature and limited reasoning capabilities. SRL methods are a direct opposite – they can capture very complex reasoning, are interpretable but currently of a limited scalability.”

A typical KG is represented as a graph built from symbolic triples – variously stated as (source, relation, target), (subject, predicate, object), or (head, relation, tail). KGE methods attempt to represent those symbols (nodes and edges) with their corresponding source, relation, and target vectors – amenable to mathematical processing.

[Figure: rdbms_vs_graph.png (image source: slide 7)]
[Figure: arxiv1611.05425-f1.png]
[Figure: arxiv1710.05980-f2.png]


The underlying concept of KGE is that in a knowledge graph each entity can be regarded as a point in a continuous vector space while relations can be modelled as translation vectors [Expeditious Generation of Knowledge Graph Embeddings (Mar 2018)]. The generated vector representations can be used by machine learning algorithms to accomplish specific tasks.

Restated, KGE aims to represent instances and their relationships as vectors and/or matrices in Euclidean space (On Embeddings as Alternative Paradigm for Relational Learning). The hope is that the geometry of the embedding space will resemble the structure of the data by keeping instances that participate in the same relationships close to one another in Euclidean space. This in turn allows one to apply standard propositional (logic-based) machine learning tools and retain their scalability, while at the same time preserving certain properties of structured relational data.

  • Euclidean space is a mathematical construct that encompasses the line, the plane, and three-dimensional space as special cases. Its elements are called vectors. Vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied (“scaled”) by numbers, called scalars.

    Vectors can be understood in various ways: as arrows, as quantities with magnitude and direction, as displacements, or as points. Euclidean vectors are an example of a vector space. They represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces.

    Vectors are abstract mathematical objects with particular properties, which in some cases can be visualized as arrows. However, vectors in vector spaces do not necessarily have to be arrow-like objects. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space.

  • A recent Microsoft/Google paper, Link Prediction using Embedded Knowledge Graphs (Apr 2018), discussed the differences between the symbolic space and the vector space in KG completion; instead of sampling branching paths in the symbolic space of the original knowledge graph, they performed short sequences of interactive lookup operations in the vector space of an embedded knowledge graph.

KGE [reviewed in An Overview of Embedding Models of Entities and Relationships for Knowledge Base Completion (Mar 2017; updated Feb 2018)] has proven to be very effective for the task of knowledge graph completion, where the goal is to identify missing links in the existing knowledge graph [A Review of Relational Machine Learning for Knowledge Graphs (Sep 2015)]. Accordingly, KGE approaches are the current (2018) dominating methodology for knowledge graph link prediction [On Link Prediction in Knowledge Bases: Max-K Criterion and Prediction Protocols (2018)]. KGE – fundamentally based on distributed representations – has not only proved to be effective for KG link prediction, but has also helped to improve our understanding and engineering of knowledge representation. A strong advantage of KGE is their scalability (at the expense of their black-box nature and limited reasoning capabilities): KGE has proven to be scalable to very large knowledge graphs.
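Link prediction with KGE models is typically evaluated with a ranking protocol: score every candidate tail for a query (h, r, ?) and record the rank of the true tail. A minimal sketch, with invented scores standing in for a trained model:

```python
# Sketch of the ranking protocol for KG link prediction: sort all
# candidate tails by model score and report the rank of the true tail
# (rank 1 is best). The scores below are invented for illustration.

def tail_rank(scores, true_tail):
    """Rank of the true tail when candidates are sorted by descending score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(true_tail) + 1

scores = {"paris": 3.2, "berlin": 1.1, "rome": 2.5}  # score(h, r, tail)
rank = tail_rank(scores, "paris")
print(rank, 1.0 / rank)  # rank 1, reciprocal rank 1.0
```

Averaging the reciprocal ranks over many queries gives the mean reciprocal rank (MRR) commonly reported in the papers discussed here.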

A well known KGE method is TransE – by Antoine Bordes et al. at the Université de Technologie de Compiègne – CNRS, and Jason Weston and Oksana Yakhnenko at Google – which embedded entities and relations into the same space where the difference between the head and the tail was approximately the relation [Translating Embeddings for Modeling Multi-relational Data (NIPS 2013)].

TransE adopted the principle of geometric translation, formally as $\small h + r \approx t$. While this embedding permits very simple translation-based relational inference, it is too restrictive in dealing with $\small \text{1-to-N}$, $\small \text{N-to-1}$ and $\small \text{N-to-N}$ relations. Furthermore, the representation of these models is still not semantically interpretable, which may be a major flaw and harm the potential applications (as stated in KSR: A Semantic Representation of Knowledge Graph within a Novel Unsupervised Paradigm).
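The translation principle can be sketched in a few lines (toy, untrained vectors): a triple is plausible when the translated head $\small h + r$ lies close to the tail $\small t$.

```python
# TransE-style scoring sketch: a triple (h, r, t) is plausible when the
# distance ||h + r - t|| is small. Embeddings below are tiny made-up
# vectors, not trained ones.
import math

def transe_distance(h, r, t):
    """Euclidean distance between h + r and t."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

h = [1.0, 0.25]      # head entity embedding
r = [0.5, 0.25]      # relation embedding, modeled as a translation
t_good = [1.5, 0.5]  # tail satisfying h + r = t exactly
t_bad = [2.0, 0.0]

print(transe_distance(h, r, t_good))                                 # 0.0
print(transe_distance(h, r, t_good) < transe_distance(h, r, t_bad))  # True
```

The 1-to-N restriction noted above is visible here: if h + r must equal every valid tail, all those tails are forced onto the same point.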

In the traditional methods it is almost impossible to exactly extract the specific semantics from geometric points; for the example of TransE, a representation of $\small \text{Table}$ as $\small (0.82, 0.51, …)$ hardly tells us anything meaningful – such as a $\small \text{Table}$ being $\small \text{furniture}$, not being an $\small \text{animal}$, etc. Without semantics, the gap between knowledge and language remains, limiting the task of knowledge applications and natural language understanding (NLU).

One effective solution is to consider two separate embedding spaces for entities and relations. Entities are then mapped into the relation space using relation-specific projections, such as those in TransR [Learning Entity and Relation Embeddings for Knowledge Graph Completion (Jan 2018)]. This mapping strategy, however, causes critical drawbacks. First, when the number of relations is large, the full set of projection matrices is expensive to model. Second, treating each relation separately does not account for the latent structure in the relation space, leading to waste of resources. An example of such a latent structure is the correlation between relations “nationality” and “place-of-birth”, as the latter may imply knowledge about the former.
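The TransR mapping strategy can be sketched as follows (illustrative values only): entities are projected into the relation space by a relation-specific matrix before the translation check, which is exactly why one full matrix per relation becomes expensive.

```python
# TransR sketch: each relation r owns a projection matrix M_r that maps
# entities into the relation space; the score uses ||M_r h + r - M_r t||.
# All values below are illustrative, not trained.

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transr_distance(h, t, r_vec, M_r):
    hp, tp = matvec(M_r, h), matvec(M_r, t)
    return sum((hp[i] + r_vec[i] - tp[i]) ** 2 for i in range(len(r_vec))) ** 0.5

M_r = [[1.0, 0.0],   # one full d x d matrix per relation is what makes
       [0.0, 0.5]]   # TransR costly when the number of relations is large
h, t = [1.0, 2.0], [2.0, 4.0]
r_vec = [1.0, 1.0]

print(transr_distance(h, t, r_vec, M_r))  # 0.0
```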

While the raw representation of KG as (head, relation, tail) triples is adequate to store known knowledge, relating distant entities requires expensive graph traversal, possibly through multiple paths. Thus knowledge graph completion calls for learning of a new representation that supports scalable reasoning. The most successful approach thus far is through embedding entities and relations into a continuous vector space, which naturally lends itself to simple algebraic manipulations.

Knowledge Graph Embedding with Multiple Relation Projections (Jan 2018) proposed a new translation-based KGE method called TransF, which was inspired by TransR but did not suffer from those issues. Under TransF, projection matrices are members of a matrix space spanned by a fixed number of matrix bases. A relation-specific projection matrix is characterized by a relation-specific coordinate in that space. Put another way, the relation projection tensor is factorized into the product of a relation coordinate matrix and a basis tensor. Hence, TransF is much more efficient and robust than TransR.
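The basis-combination idea behind TransF can be sketched in a few lines (bases and coordinates invented for illustration): each relation stores only a short coordinate vector rather than a full projection matrix.

```python
# TransF sketch: a relation's projection matrix is a linear combination
# of shared basis matrices, M_r = sum_k alpha[k] * B[k], so each relation
# stores only len(B) coefficients instead of d*d entries. Values invented.

B = [
    [[1.0, 0.0], [0.0, 1.0]],  # basis matrix B_0 (identity)
    [[0.0, 1.0], [1.0, 0.0]],  # basis matrix B_1 (coordinate swap)
]

def relation_matrix(alpha):
    """Combine the shared bases with relation-specific coordinates alpha."""
    n = len(B[0])
    return [[sum(a * Bk[i][j] for a, Bk in zip(alpha, B)) for j in range(n)]
            for i in range(n)]

M_r = relation_matrix([0.5, 0.5])
print(M_r)  # [[0.5, 0.5], [0.5, 0.5]]
```

With, say, 1,000 relations, 10 bases, and d = 100, this stores 10·d² + 1,000·10 numbers instead of 1,000·d², and correlated relations can share bases.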

[Figure: arxiv1801.08641-f1.png]
[Figure: arxiv1801.08641-t2+t3.png]
[Figure: arxiv1801.08641-f2.png]


While KG structures used for representation learning and other purposes generally represent (subject, relation, object) relational triples, knowledge bases often contain a wide variety of data types beyond these direct links. Apart from relations to a fixed set of entities, KBs often include not only numerical attributes (e.g. ages, dates, financial figures) but also textual attributes (e.g. names, descriptions, and titles/designations) and images (profile photos, flags, posters, etc.). These different types of data can play a crucial role as extra pieces of evidence for knowledge base completion. For example, the textual descriptions and images might provide evidence for a person’s age, profession, and designation.

[Figure: arxiv1809.01341-f1.png]


Embedding Multimodal Relational Data for Knowledge Base Completion (Sep 2018) [code] proposed multimodal knowledge base embeddings (MKBE) that used different neural encoders for this variety of observed data, combining them with existing relational models to learn embeddings of the entities and multimodal data. Furthermore, those learned embeddings and different neural decoders were used to develop a novel multimodal imputation model to generate missing multimodal values (like text and images) from information in the knowledge base.

[Figure: arxiv1809.01341-f2.png]


Most previous research in KG completion has focused on the problem of inferring missing entities and missing relation types between entities. However, in addition to these, many KGs also suffer from missing entity types (i.e. the category labels for entities, such as “/music/artist”). Learning Entity Type Embeddings for Knowledge Graph Completion (Nov 2017; pdf) [code] addressed this issue, proposing a novel approach to entity type prediction.

[Figure: Moon-2017-entity_embeddings.png]


DOLORES: Deep Contextualized Knowledge Graph Embeddings (Nov 2018) introduced a new method, DOLORES, for learning knowledge graph embeddings that effectively captured contextual cues and dependencies among entities and relations. Short paths on knowledge graphs comprising chains of entities and relations can encode valuable information regarding their contextual usage. They operationalized this notion by representing knowledge graphs not as a collection of triples but as a collection of entity-relation chains, and learned embeddings for entities and relations using deep neural models that captured such contextual usage. Their model was based on bi-directional LSTMs, and learned deep representations of entities and relations from constructed entity-relation chains. They showed that these representations could very easily be incorporated into existing models, significantly advancing the state of the art on several knowledge graph prediction tasks like link prediction, triple classification, and missing relation type prediction (in some cases by at least 9.5%).
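Chain construction of this general kind can be approximated with short random walks over the triple store. The sketch below illustrates the idea only; the entities, relations, and walk procedure are invented, not the paper's exact pipeline.

```python
# Sketch: build alternating entity-relation chains via short random walks
# over a toy triple store, the kind of chain-shaped training data that
# contextual KG embedding models consume. Names are illustrative.
import random

kg_edges = {
    "marie_curie": [("born_in", "warsaw"), ("field", "physics")],
    "warsaw": [("capital_of", "poland")],
}

def walk(start, hops, rng):
    """Entity-relation chain from `start`, following up to `hops` edges."""
    chain, node = [start], start
    for _ in range(hops):
        if node not in kg_edges:
            break  # dead end: no outgoing edges
        rel, nxt = rng.choice(kg_edges[node])
        chain += [rel, nxt]
        node = nxt
    return chain

print(walk("marie_curie", 2, random.Random(0)))
```

Each chain, e.g. `["marie_curie", "born_in", "warsaw", "capital_of", "poland"]`, can then be fed to a sequence model the way sentences are fed to a language model.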

For a related paper by the same authors, contemporaneously released on arXiv, see MOHONE: Modeling Higher Order Network Effects in Knowledge Graphs via Network Infused Embeddings.

[Figure: arxiv1811.00147-f1.png]
[Figure: arxiv1811.00147-f2.png]
[Figure: arxiv1811.00147-t1+t2.png]
[Figure: arxiv1811.00147-f3+t3+t4.png]


Knowledge graph embedding aims at modeling entities and relations with low-dimensional vectors. Most previous methods require that all entities be seen during training, which is impractical for real-world knowledge graphs with new entities emerging on a daily basis. Recent efforts on this issue suggest training a neighborhood aggregator in conjunction with the conventional entity and relation embeddings, which may help embed new entities inductively via their existing neighbors. However, their neighborhood aggregators neglect the unordered and unequal natures of an entity’s neighbors. To this end, Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding (Nov 2018) summarized the desired properties that may lead to effective neighborhood aggregators, and introduced a novel aggregator, the Logic Attention Network (LAN), which addressed these properties by aggregating neighbors with both rules-based and network-based attention weights. By comparing with conventional aggregators on two knowledge graph completion tasks, they experimentally validated LAN’s superiority in terms of the desired properties.
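A rough numpy sketch of attention-weighted neighbor aggregation in the spirit of LAN follows; the linear blending of rule-based and network-based weights shown here is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def aggregate_neighbors(query, neighbor_embs, rule_weights, alpha=0.5):
    """Permutation-invariant neighbor aggregation with two attention
    sources: a network-based softmax over query/neighbor similarity,
    blended with precomputed rule-based (logic) weights. The blending
    scheme and parameter names are illustrative."""
    sims = neighbor_embs @ query                       # (n,) similarities
    net_att = np.exp(sims - sims.max())
    net_att /= net_att.sum()                           # softmax attention
    rule_att = np.asarray(rule_weights, dtype=float)
    rule_att /= rule_att.sum()                         # normalized logic weights
    att = alpha * rule_att + (1 - alpha) * net_att     # blended weights
    return att @ neighbor_embs                         # weighted sum

rng = np.random.default_rng(0)
q = rng.normal(size=8)
nbrs = rng.normal(size=(4, 8))
rw = [1.0, 2.0, 1.0, 0.5]
agg = aggregate_neighbors(q, nbrs, rule_weights=rw)
```

Because the weighted sum is permutation-invariant, reordering the neighbor set (together with its weights) leaves the aggregate unchanged, which matches the "unordered neighbors" property the paper emphasizes; the unequal rule weights capture the "unequal neighbors" property.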

arxiv1811.01399-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1811.01399-t3+t4+t5.png

[Image source. Click image to open in new window.]


arxiv1811.01399-t6.png

[Image source. Click image to open in new window.]


While the celebrated sequence-to-sequence learning (seq2seq) technique and its numerous variants achieve excellent performance on many tasks, many machine learning tasks have inputs that are naturally represented as graphs, and existing seq2seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence. To address this challenge, IBM Research’s Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks (Dec 2018) [code] introduced an end-to-end, graph-to-sequence neural encoder-decoder model that mapped an input graph to a sequence of vectors, and used an attention-based LSTM method to decode the target sequence from those vectors. Their method generated the node and graph embeddings using a graph-based neural network that incorporated edge directions in the node embeddings. They then introduced an attention mechanism that aligned node embeddings and the decoding sequence to better cope with large graphs. Their model achieved state of the art performance on bAbI, Shortest Path, and Natural Language Generation tasks, and significantly outperformed existing graph neural network, seq2seq, and Tree2Seq models. Using the proposed bi-directional node embedding aggregation strategy, the model rapidly converged to the optimal performance.
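The direction-aware aggregation idea can be sketched in a few lines of numpy; note that Graph2Seq's learned aggregators are replaced here by plain neighbor averaging, so this is only a structural illustration:

```python
import numpy as np

def bidirectional_aggregate(node_feats, edges, hops=2):
    """Propagate node features along forward and backward edge directions
    separately, then concatenate the two results: a crude sketch of the
    bi-directional, edge-direction-aware node embedding idea in Graph2Seq
    (the actual model uses learned aggregators, not plain averaging)."""
    n = node_feats.shape[0]
    fwd = np.zeros((n, n))
    for u, v in edges:
        fwd[v, u] = 1.0               # v receives from u along edge u -> v
    bwd = fwd.T                       # reversed edge directions

    def norm(a):                      # row-normalize, guarding empty rows
        d = a.sum(axis=1, keepdims=True)
        return np.divide(a, d, out=np.zeros_like(a), where=d > 0)

    hf, hb = node_feats, node_feats
    for _ in range(hops):
        hf = norm(fwd) @ hf           # average over forward neighbors
        hb = norm(bwd) @ hb           # average over backward neighbors
    return np.concatenate([hf, hb], axis=1)

feats = np.eye(4)                     # one-hot node features
emb = bidirectional_aggregate(feats, [(0, 1), (1, 2), (2, 3)])
```

Concatenating the forward and backward channels lets a node see both its ancestors and descendants in a directed graph, which is the intuition behind the paper's bi-directional aggregation strategy.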

arxiv1804.00823-f1.png

[Image source. Click image to open in new window.]


arxiv1804.00823-f2+t1+f3+t2+t3+f4.png

[Image source. Click image to open in new window.]


[Table of Contents]

Probing the Effectiveness of Embedding Models for Knowledge Graph Completion

Knowledge bases (graphs) contribute to many artificial intelligence tasks, yet they are often incomplete. To add missing facts to a given knowledge base, various embedding models have been proposed. Perhaps surprisingly, relatively simple models with limited expressiveness often performed remarkably well under most commonly used evaluation protocols. On Evaluating Embedding Models for Knowledge Base Completion (Oct 2018) explored whether recent embedding models work well for knowledge base completion tasks, and argued that the current evaluation protocols are better suited to question answering than to knowledge base completion. They showed that when an alternative evaluation protocol more suitable for knowledge base completion was used, the performance of all models was unsatisfactory, indicating the need for more research into embedding models and evaluation protocols for knowledge base completion.

arxiv1810.07180-f1.png

[Image source. Click image to open in new window.]


arxiv1810.07180-t2+t3.png

[Image source. Click image to open in new window.]


arxiv1810.07180-t4.png

[Image source. Click image to open in new window.]


Current Evaluation Protocols. “Most studies use the triple classification (TC) and the entity ranking (ER) protocols to assess model performance, where ER is arguably the most widely adopted protocol. We assume throughout that only true but no false triples are available (as is commonly the case), and that these triples are divided into training, validation, and test triples. The union of these three sets acts as a proxy of the entire KB, which is unknown due to incompleteness.”

Triple classification (TC). “The goal of TC is to test the model’s ability to discriminate between true and false triples [Socher et al., Reasoning With Neural Tensor Networks for Knowledge Base Completion (2013)]. Since only true triples are available in practice, pseudo-negative triples are generated by randomly replacing either the subject or the object of each test triple by another random entity that appears as a subject or object, respectively. Each resulting triple is then classified as positive or negative. In particular, triple $\small (i,k,j)$ is classified as positive if its score $\small s(i,k,j)$ exceeds a relation-specific decision threshold $\small \sigma_k$ (learned on validation data using the same procedure). Model performance is assessed by its accuracy, i.e., how many triples are classified correctly.”
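The TC protocol can be sketched as follows; the scores and labels are toys, and the threshold search is a simple grid over observed scores rather than the paper's exact procedure:

```python
import numpy as np

def fit_thresholds(scores, labels, relations):
    """Learn a relation-specific decision threshold sigma_k on validation
    data by scanning the observed scores (illustrative grid search)."""
    thresholds = {}
    for k in set(relations):
        idx = [i for i, r in enumerate(relations) if r == k]
        cand = [scores[i] - 1e-9 for i in idx] + [max(scores[i] for i in idx)]
        thresholds[k] = max(cand,
                            key=lambda t: sum((scores[i] > t) == labels[i]
                                              for i in idx))
    return thresholds

def tc_accuracy(scores, labels, relations, thresholds):
    """Classify triple as positive iff its score exceeds sigma_k, then
    report accuracy over the test triples."""
    preds = [s > thresholds[k] for s, k in zip(scores, relations)]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))

# toy validation set: two true and two pseudo-negative triples, one relation
val_scores, val_labels = [0.9, 0.8, 0.2, 0.1], [True, True, False, False]
rels = ["r1"] * 4
sigma = fit_thresholds(val_scores, val_labels, rels)
acc = tc_accuracy(val_scores, val_labels, rels, sigma)
```

On this toy data the scores separate cleanly, which is exactly the "easy classification" effect the Discussion paragraph below attributes to random pseudo-negative sampling.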

Socher2013reasoning-f1.png

[Image source. Click image to open in new window.]


Socher2013reasoning-f2.png

[Image source. Click image to open in new window.]


Socher2013reasoning-f5.png

[Image source. Click image to open in new window.]


Entity ranking (ER). “The goal of ER is to assess model performance in terms of ranking answers to certain questions. In particular, for each test triple $\small t = (i,k,j)$, two questions $\small q_s = (?,k,j)$ and $\small q_o = (i,k,?)$ are generated. For question $\small q_s$, all entities $\small i’ \in \mathcal{E}$ are ranked based on the score $\small s(i’,k,j)$. To avoid misleading results, entities $\small i’$ that correspond to observed triples in the dataset (i.e., $\small (i’,k,j)$ in train/validate) are discarded to obtain a filtered ranking. The same process is applied for question $\small q_o$. Model performance is evaluated based on the recorded positions of the test triples in the filtered ranking. The intuition is that models that rank test triples (which are known to be true) higher are expected to be superior. Usually, the micro-average of $\small \text{filtered Hits@K}$ (i.e., the proportion of test triples ranking in the top-$\small K$) and $\small \text{filtered MRR}$ (i.e., the mean reciprocal rank of the test triples) are reported. Figure 1a provides a pictorial view of ER for a single relation. Given the score matrix of a relation $\small k$, where $\small s_{ij}$ is the score of triple $\small (i,k,j)$, a single test triple is shown in green, all candidate triples considered during the evaluation are shown in blue, and all triples observed in the training, validation and testing sets (not considered during evaluation) are shown in grey.”
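A minimal implementation of the filtered ER metrics just described; the scoring function and toy triples below are hypothetical stand-ins for a trained embedding model:

```python
def er_metrics(score, test_triples, all_entities, known_triples, K=10):
    """Filtered entity ranking: for each test triple (i,k,j), rank all
    candidate subjects for (?,k,j) and all candidate objects for (i,k,?),
    discard other known-true triples (the filtered setting), and record
    the rank of the true entity. Returns (Hits@K, MRR)."""
    ranks = []
    for (i, k, j) in test_triples:
        true = (i, k, j)
        subj_q = [(e, k, j) for e in all_entities]
        obj_q = [(i, k, e) for e in all_entities]
        for cands in (subj_q, obj_q):
            # keep the test triple itself; drop other observed triples
            cands = [t for t in cands if t == true or t not in known_triples]
            cands.sort(key=score, reverse=True)
            ranks.append(cands.index(true) + 1)
    hits = sum(r <= K for r in ranks) / len(ranks)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return hits, mrr

# toy model: one strong correct answer, one strong competitor that gets filtered
known = {("a", "r", "b"), ("c", "r", "b")}        # all observed true triples
score = lambda t: {("a", "r", "b"): 1.0, ("c", "r", "b"): 0.9}.get(t, 0.1)
hits1, mrr = er_metrics(score, [("a", "r", "b")], ["a", "b", "c", "d"], known, K=1)
```

Note how filtering removes the competing true triple ("c", "r", "b") from the subject ranking, so the test triple reaches rank 1 on both sides.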

Discussion. “Regarding triple classification, Wang, Ye, and Gupta (Apr 2018) found that most models achieve an accuracy of at least 93%. This is due to the fact that negative triples with high score are rarely sampled as pseudo-negative triples because of the large number of entities from which the single replacement entity is picked for a given test triple. This means that most classification tasks are ‘easy.’ Consequently, the accuracy of triple classification overestimates model performance for KBC tasks. This protocol is less adopted in recent work. We argue that ER also overestimates model performance for KBC. In particular, the protocol is more appropriate to evaluate question answering tasks. Since ER generates questions from true test triples, it only asks questions that are known to have an answer. The question itself leaks this information from the test data into the evaluation. …”

Entity-pair ranking (PR). “To study the overestimation effect of current evaluation protocols, we used an evaluation protocol for KBC termed entity-pair ranking (PR). PR is simple: for each relation $\small k$, we ask question $\small (?,k,?)$. As before, we use the model to rank all answers, i.e., pairs of entities, and filter out training and validation data in the ranking so as to rank only triples not used during model training. In this way, any negative triples with a high score will appear at a top position, making it harder for true triples to rank high. Figure 1b shows the contrast between the number of negative triples considered for entity-pair ranking and those considered for ER. Again, test triples are shown in green, candidate triples are shown in blue, and triples observed during training and validation are shown in grey. The number of candidates is much higher than those considered for ER. However, when answering the question $\small (?,k,?)$ with all possible entity pairs, all test triples for relation $\small k$ will be ranked simultaneously. Let $\small |\mathcal{T}_k|$ be the number of test triples in $\small k$. ER needs to consider in total $\small 2 \cdot |\mathcal{T}_k| \cdot |\mathcal{E}|$ candidates for $\small k$, while PR needs to consider $\small |\mathcal{E}|^2$ candidates. Since all test triples in relation $\small k$ are considered at once, we do not rely on MRR for PR, but consider weighted $\small \text{MAP@K}$, i.e., weighted mean average precision in the top-$\small K$ filtered results, and weighted $\small \text{Hits@K}$, i.e., weighted percentage of test triples in the top-$\small K$ filtered results. …”
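PR differs from ER only in the candidate set, which grows to all $\small |\mathcal{E}|^2$ entity pairs. A sketch of filtered Hits@K under PR (toy scoring function; the paper additionally reports weighted MAP@K):

```python
from itertools import product

def pr_hits_at_k(score, relation, test_triples, entities, known_triples, K):
    """Entity-pair ranking for one relation k: rank every pair (i,j) as
    an answer to (?,k,?), filter out train/validation triples, and report
    the fraction of test triples appearing in the filtered top-K."""
    test = {t for t in test_triples if t[1] == relation}
    cands = [(i, relation, j) for i, j in product(entities, repeat=2)]
    cands = [t for t in cands if t in test or t not in known_triples]
    cands.sort(key=score, reverse=True)
    top = set(cands[:K])
    return len(test & top) / len(test)

ents = ["a", "b", "c"]
known = {("a", "r", "b")}                        # train/validation triples
test = [("b", "r", "c")]
score = lambda t: 0.9 if t == ("b", "r", "c") else 0.1
hits = pr_hits_at_k(score, "r", test, ents, known, K=1)
```

With $\small |\mathcal{E}| = 3$ there are 9 candidate pairs (8 after filtering) for a single test triple, versus $\small 2 \cdot |\mathcal{E}| = 6$ candidates under ER, and the gap widens quadratically with the entity count.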

Knowledge Graph Embedding:

Additional Reading

  • “Awesome” lists:

  • Constructing Graph Node Embeddings via Discrimination of Similarity Distributions (Oct 2018)

    “The problem of unsupervised learning node embeddings in graphs is one of the important directions in modern network science. In this work we propose a novel framework, which is aimed to find embeddings by discriminating distributions of similarities (DDoS) between nodes in the graph. The general idea is implemented by maximizing the Earth Mover’s Distance between distributions of decoded similarities of similar and dissimilar nodes. The resulting algorithm generates embeddings which give a state of the art performance in the problem of link prediction in real-world graphs.”

    A related line of work addresses the cost of computing the EMD at scale: “The Earth Mover’s Distance (EMD) is a state-of-the-art metric for comparing probability distributions. The high distinguishability offered by the EMD comes at a high cost in computational complexity. Therefore, linear-complexity approximation algorithms have been proposed to improve its scalability. However, these algorithms are either limited to vector spaces with only a few dimensions or require the probability distributions to populate the vector space sparsely. We propose novel approximation algorithms that overcome both of these limitations, yet still achieve linear time complexity. All our algorithms are data parallel, and therefore, we can take advantage of massively parallel computing engines, such as Graphics Processing Units (GPUs). The experiments on MNIST images show that the new algorithms can perform a billion distance computations in less than a minute using a single GPU. On the popular text-based 20 Newsgroups dataset, the new algorithms are four orders of magnitude faster than the state-of-the-art FastEMD library and match its search accuracy.

    “… We also propose data-parallel implementations that are suitable for GPU acceleration, and demonstrate a 30,000-fold performance improvement with respect to a multi-threaded implementation of WMD without sacrificing any accuracy.”

  • Embedding with Adaptive Similarities for Scalable Learning over Graphs (Dec 2018)

    “Node embedding is the task of extracting informative and descriptive features over the nodes of a graph. The importance of node embeddings for graph analytics, as well as learning tasks such as node classification, link prediction and community detection, has led to increased interest on the problem leading to a number of recent advances. Much like PCA in the feature domain, node embedding is an inherently *unsupervised* task; in lack of metadata used for validation, practical methods may require standardization and limiting the use of tunable hyperparameters. Finally, node embedding methods are faced with maintaining scalability in the face of large-scale real-world graphs of ever-increasing sizes.

    “In the present work, we propose an adaptive node embedding framework that adjusts the embedding process to a given underlying graph, in a fully unsupervised manner. To achieve this, we adopt the notion of a tunable node similarity matrix that assigns weights on paths of different length. The design of the multilength similarities ensures that the resulting embeddings also inherit interpretable spectral properties. The proposed model is carefully studied, interpreted, and numerically evaluated using stochastic block models. Moreover, an algorithmic scheme is proposed for training the model parameters efficiently and in an unsupervised manner. We perform extensive node classification, link prediction, and clustering experiments on many real world graphs from various domains, and compare with state-of-the-art scalable and unsupervised node embedding alternatives. The proposed method enjoys superior performance in many cases, while also yielding interpretable information on the underlying structure of the graph.”

  • Embedding Uncertain Knowledge Graphs (Nov 2018)

    “Embedding models for deterministic Knowledge Graphs (KG) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating the structured knowledge into machine learning. However, there are many KGs that model uncertain knowledge, which typically characterize the inherent uncertainty of relation facts with a confidence score, and embedding such uncertain knowledge represents an unresolved challenge. The capturing of uncertain knowledge will benefit many knowledge-driven applications such as question answering and semantic search by providing more natural characterization of the knowledge.

    “In this paper, we propose a novel uncertain KG embedding model UKGE, which aims to preserve both structural and uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks, i.e. confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge by achieving promising results on these tasks, and consistently outperforms baselines on these tasks.”

    arxiv1811.10667-t5+t6.png

    [Image source. Click image to open in new window.]
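An aside on the Earth Mover's Distance that appears in the DDoS item above: while EMD is expensive in general (hence the approximation work quoted there), in one dimension it has an exact closed form, the integral of the absolute difference between the two CDFs. A minimal numpy illustration:

```python
import numpy as np

def emd_1d(p, q, positions):
    """Exact Earth Mover's Distance between two histograms supported on
    the same sorted 1-D positions: integrate |CDF_p - CDF_q| over the
    support. This closed form exists only in one dimension; the papers
    quoted above tackle the far harder high-dimensional case."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    cdf_gap = np.cumsum(p - q)[:-1]          # CDF gap after each position
    widths = np.diff(positions)              # distance to the next position
    return float(np.sum(np.abs(cdf_gap) * widths))

# moving all mass one unit to the right costs exactly one unit of work
d = emd_1d([1, 0, 0], [0, 1, 0], positions=[0.0, 1.0, 2.0])
```

The "work = mass times distance" interpretation is what makes EMD attractive for comparing similarity distributions of graph nodes, as in the DDoS objective.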

[Table of Contents]

Knowledge Graph Edge Semantics

Here I collate and summarize/paraphrase “edge-related” discussion from elsewhere in this REVIEW.

  • A major problem in current graph neural network models such as graph attention networks (GAT) and graph convolutional networks (GCN) is that edge features – which contain important information about the graph – are not incorporated. Adaptive Edge Features Guided Graph Attention Networks (Sep 2018) proposed a graph learning model, edge features guided graph attention networks (EGAT). Guided by the edge features, the attention mechanism on pairs of graph nodes not only depended on node contents but also adjusted automatically with respect to the properties of the edge connecting those nodes. Moreover, the edge features were adjusted by the attention function and fed to the next layer, which meant that the edge features were adaptive across network layers.

  • Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks (Sep 2018) introduced a new method for better connecting the global evidence present in knowledge graphs, to form more complex graphs as compared to directed acyclic graphs (DAG). After obtaining representation vectors for question and entity mentions in passages, an additive attention model treated all entity mention representations and the question representation as the memory and the query, respectively. Footnote 2 was of interest: “The concurrent unpublished work (Cao et al., 2018) also investigates the usage of graph convolution networks on WikiHop. Our work proposes a different model architecture, and focus more on the exploration and comparison of multiple edge types for building the graph-structured passage representation.”

  • While representation learning on networks generates reliable results with regard to node embeddings, it is limited to homogeneous networks in which all nodes and edges are of the same type. Addressing this challenge, edge2vec: Learning Node Representation using Edge Semantics (Sep 2018) proposed a model that incorporated edge semantics to represent different edge types in heterogeneous networks. Their model generated an edge-type transition matrix as an extra criterion of a biased node random walk on networks, and a biased skip-gram model was then leveraged to learn node embeddings based on the random walks. During the process of generating the node random-walk corpus, the heterogeneity of the network was considered. By considering edge semantics, edge2vec significantly outperformed other state of the art models on all three evaluation tasks [note that in their tables, edge2vec is listed as heterogeneous node2vec].

  • mvn2vec: Preservation and Collaboration in Multi-View Network Embedding (Jan 2018) focused on characteristics that were specific and important in embedding multi-view networks (a multi-view network consists of multiple network views, where each view corresponds to a type of edge, and all views share the same set of nodes). With respect to edges, two comments are relevant, here:

    • [re: Collaboration:] In some datasets, edges between the same pair of nodes may be observed in different views due to shared latent reasons. For instance, if nodes in two views complement (interact with) each other in various social media contexts, embedding them jointly may yield better results than embedding them independently.

    • [re: Preservation:] On the other hand, it is possible for different network views to have different semantic meanings; it is also possible that a portion of nodes have completely disagreeing edges in different views since edges in different views are formed due to distinct latent reasons.

[Table of Contents]

Multi-View / Multi-Layer / Temporal Graph Embedding

Metabolic pathways are intuitively modeled as graph structures, which can represent relationships between elements (here, genetic and biochemical entities) in complex high dimensional datasets. Describing dynamic (temporal) processes, metabolic graphs should also represent various states – e.g. metabolic states (enzyme isoforms and metabolite concentrations), health status (healthy; diabetic; breast cancer; etc.) – perhaps through different graph representations. Likewise, metadata encoded in a graph representing scientific experiments could reflect changes in experimental variables over time, or in response to different experimental conditions (controls vs. experiments). Thus, depending on the state, we could have graphs that at different times differ in the data represented in the nodes and/or the connections between them (different paths or altered strengths of those connections).

  • Relevant to the embedding of temporal relations in graphs are the following:

  • Learning Sequence Encoders for Temporal Knowledge Graph Completion (Sep 2018) [code] [Summary]“Research on link prediction in knowledge graphs has mainly focused on static multi-relational data. In this work we consider temporal knowledge graphs where relations between entities may only hold for a time interval or a specific point in time. In line with previous work on static knowledge graphs, we propose to address this problem by learning latent entity and relation type representations. To incorporate temporal information, we utilize recurrent neural networks to learn time-aware representations of relation types which can be used in conjunction with existing latent factorization methods. The proposed approach is shown to be robust to common challenges in real-world KGs: the sparsity and heterogeneity of temporal expressions. Experiments show the benefits of our approach on four temporal KGs.”
    Source: Learning Sequence Encoders for Temporal Knowledge Graph Completion

  • dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning (Sep 2018) [Summary]Learning graph representations is a fundamental task aimed at capturing various properties of graphs in vector space. The most recent methods learn such representations for static networks. However, real world networks evolve over time and have varying dynamics. Capturing such evolution is key to predicting the properties of unseen networks. To understand how the network dynamics affect the prediction performance, we propose an embedding approach which learns the structure of evolution in dynamic graphs and can predict unseen links with higher precision. Our model, dyngraph2vec, learns the temporal transitions in the network using a deep architecture composed of dense and recurrent layers. We motivate the need of capturing dynamics for prediction on a toy data set created using stochastic block models. We then demonstrate the efficacy of dyngraph2vec over existing state-of-the-art methods on two real world data sets. We observe that learning dynamics can improve the quality of embedding and yield better performance in link prediction.
    Source: dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning

  • [In the NLP domain:] Word-Level Loss Extensions for Neural Temporal Relation Classification

  • Dynamic Graph Neural Networks (Oct 2018)

    “… graph neural networks, which extend the neural network models to graph data, have attracted increasing attention. Graph neural networks have been applied to advance many different graph related tasks such as reasoning dynamics of the physical system, graph classification, and node classification.

    Most of the existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic. For example, social networks are naturally evolving as new users joining and new relations being created. Current graph neural network models cannot utilize the dynamic information in dynamic graphs. However, the dynamic information has been proven to enhance the performance of many graph analytical tasks such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs.

    In this paper, we propose DGNN (Dynamic Graph Neural Network ) model, which can model the dynamic information as the graph evolving. In particular, the proposed framework can keep updating node information by capturing the sequential information of edges, the time intervals between edges and information propagation coherently.”

    arxiv1810.10627-f1.png

    [Image source. Click image to open in new window.]


    arxiv1810.10627-f2+f3.png

[Image source. Click image to open in new window.]


  • Temporal Graph Offset Reconstruction: Towards Temporally Robust Graph Representation Learning (Nov 2018) [code: not available, 2018-11-21]

    “Graphs are a commonly used construct for representing relationships between elements in complex high dimensional datasets. Many real-world phenomena are dynamic in nature, meaning that any graph used to represent them is inherently temporal. However, many of the machine learning models designed to capture knowledge about the structure of these graphs ignore this rich temporal information when creating representations of the graph, resulting in models that do not perform well when used to make predictions about the future state of the graph – especially when the delta between time stamps is not small. In this work, we explore a novel training procedure and an associated unsupervised model which creates graph representations optimised to predict the future state of the graph. We make use of graph convolutional neural networks to encode the graph into a latent representation, which we then use to train our temporal offset reconstruction method, inspired by auto-encoders, to predict a later time point – multiple time steps into the future. Using our method, we demonstrate superior performance for the task of future link prediction compared with non-temporal state-of-the-art baselines. We show our approach to be capable of outperforming non-temporal baselines by 38% on a real world dataset.”

    • “Whilst a lot of focus has recently been placed on finding methods for learning graph representations which are accurately able to make predictions about the current state of the graph, few works have investigated how these models perform for temporal graphs. This paper has explored the use of graph specific neural networks to learn vertex level representations which are accurately able to make predictions about future time points of evolving graphs. This is achieved by explicitly training the models to predict future time steps via the temporal offset reconstruction method.”

    • “For creating our temporally offset graph embeddings, we will explore the use of both non-probabilistic and variational encoder models. These are related to the convolutional graph auto-encoders of Kipf, 2016, however, we are exploring the creation of two models explicitly to reconstruct a future state of the graph, rather than just to capture the current graph. Below we detail the specifics of both the non-probabilistic (TO-GAE) and variational (TO-GVAE) models used for temporal offset reconstruction.”

arxiv1811.08366-t2+t3.png

[Image source. Click image to open in new window.]


  • DynamicGEM: A Library for Dynamic Graph Embedding Methods (Nov 2018)

    “DynamicGEM is an open-source Python library for learning node representations of dynamic graphs. It consists of state-of-the-art algorithms for defining embeddings of nodes whose connections evolve over time. The library also contains the evaluation framework for four downstream tasks on the network: graph reconstruction, static and temporal link prediction, node classification, and temporal visualization. We have implemented various metrics to evaluate the state-of-the-art methods, and examples of evolving networks from various domains. We have easy-to-use functions to call and evaluate the methods and have extensive usage documentation. Furthermore, DynamicGEM provides a template to add new algorithms with ease to facilitate further research on the topic.”
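Returning to the temporal offset reconstruction idea above (TO-GAE / TO-GVAE): its core objective, decoding edges from embeddings learned at time $\small t$ and scoring them against the adjacency matrix at $\small t+k$, can be sketched in numpy. The embeddings here are fixed inputs; the papers obtain them from a graph-convolutional encoder:

```python
import numpy as np

def offset_reconstruction_loss(Z, A_future):
    """Temporal-offset reconstruction in miniature: decode edge
    probabilities sigmoid(z_i . z_j) from node embeddings Z (learned at
    time t) and score them against the adjacency matrix at time t + k
    with binary cross-entropy. TO-GAE/TO-GVAE learn Z with a graph
    convolutional encoder; here Z is simply given."""
    probs = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))   # inner-product decoder
    eps = 1e-9
    bce = -(A_future * np.log(probs + eps)
            + (1 - A_future) * np.log(1 - probs + eps))
    return float(bce.mean())

A_future = np.ones((2, 2))                     # toy graph at time t + k
Z_good = np.array([[2., 0.], [2., 0.]])        # embeddings predicting both edges
Z_bad = np.array([[2., 0.], [-2., 0.]])        # embeddings predicting the wrong links
loss_good = offset_reconstruction_loss(Z_good, A_future)
loss_bad = offset_reconstruction_loss(Z_bad, A_future)
```

Training against $\small A_{t+k}$ instead of $\small A_t$ is the whole trick: the same auto-encoder machinery, but supervised by a later snapshot.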


Cellular growth and development provides an excellent example: a totipotent fertilized egg or a pluripotent stem cell can give rise to any other cell type – blood, liver, heart, brain, etc. – each arising from differential expression of the identical genome encoded within these cells throughout the lineage of those cell types.

This cellular differentiation arises largely due to selective epigenetic modification of the genome during growth and development.

It would be exceptionally useful to be able to access multiple “views” of those types of graphs, depending on the data, metadata and embeddings.

Recent work in knowledge graph embedding such as TransE, TransH and TransR has shown competitive and promising results in relational learning. Non-Parametric Estimation of Multiple Embeddings for Link Prediction on Dynamic Knowledge Graphs (2017) [local copy] proposed a novel extension [puTransE: Parallel Universe TransE] of the translational embedding model to solve three main problems of the current models:

  • translational models are highly sensitive to hyperparameters such as margin and learning rate;

  • the translation principle only allows one spot in vector space for each “golden” triplet; thus, congestion of entities and relations in vector space may reduce precision;

  • current models are not able to handle dynamic data, especially the introduction of new unseen entities/relations or removal of triplets.

The puTransE approach explicitly generated multiple embedding spaces via semantically and structurally aware triplet selection schemes, and non-parametrically estimated the energy score of a triplet. The intuition for the approach was that in every “parallel universe” embedding space, a constraint on triplets was imposed in terms of count and diversity such that each embedding space observed the original knowledge graph from a different view. The proposed puTransE approach was simple, robust and parallelizable. Experimental results showed that puTransE outperformed TransE and many other embedding methods for link prediction on knowledge graphs on both a public benchmark dataset and a real-world dynamic dataset.
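The "parallel universe" scoring can be sketched as follows; the TransE energy is standard, but the min-over-spaces aggregation and the dict-based encoding are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def transe_energy(h, r, t):
    """Standard TransE energy ||h + r - t||; lower means more plausible."""
    return float(np.linalg.norm(h + r - t))

def putranse_energy(spaces, head, rel, tail):
    """Score a triple across several 'parallel universe' embedding
    spaces, each an (entity_dict, relation_dict) pair trained on a
    different view of the KG, taking the best (minimum) energy among
    the spaces that embed all three elements. Triples with no covering
    space get infinite energy (i.e. no evidence)."""
    energies = [transe_energy(E[head], R[rel], E[tail])
                for E, R in spaces
                if head in E and tail in E and rel in R]
    return min(energies) if energies else float("inf")

# two toy universes; the first embeds the triple perfectly (h + r == t)
space1 = ({"a": np.array([0., 0.]), "b": np.array([1., 0.])},
          {"r": np.array([1., 0.])})
space2 = ({"a": np.array([0., 1.]), "b": np.array([1., 0.])},
          {"r": np.array([1., 0.])})
e = putranse_energy([space1, space2], "a", "r", "b")
```

Because each space only covers part of the KG, adding or removing triples touches only the affected universes, which is how the scheme accommodates dynamic data.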

Tay2017nonparametric-f1+f2+t2+t4.png

[Image source. Click image to open in new window.]


KSR: A Semantic Representation of Knowledge Graph within a Novel Unsupervised Paradigm (May 2018) proposed a semantic representation method for knowledge graphs (KSR) which imposed a two-level hierarchical generative process that globally extracted many aspects and then locally assigned a specific category in each aspect for every triple. Notably, they introduced knowledge features (i.e. views for clustering) to describe some knowledge semantic aspects, such as being a university or not (University), geographical position (Location), etc.

  • By gathering the information of clusters in each view/feature, the semantic representations were formed. The first-level process generated knowledge features (views) with different semantics such as University and Location, while the second-level process grouped the entities/relations/triples according to the corresponding semantic features (views). Finally, by summarizing the cluster identifications within each feature/view, KSR constructed the semantic representation of the knowledge elements. For example, for “Tsinghua University” a Yes category (cluster identification) is assigned to the University feature, while the Beijing category/cluster identification is assigned to the Location feature. By exploiting multi-view clustering, this knowledge was semantically organized as $\small \color{Brown}{\text{Tsinghua University = (University:Yes, Location:Beijing)}}$.

arxiv1608.07685-f1+f2.png

arxiv1608.07685-f3.png

arxiv1608.07685-t1+t2.png

[Image source. Click image to open in new window.]
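The feature:category encoding described above can be made concrete with a toy sketch; the dict representation and the matching-based similarity are illustrative, not the paper's formulation:

```python
def ksr_similarity(rep_a, rep_b):
    """KSR represents a knowledge element as feature -> category
    assignments (e.g. {"University": "Yes", "Location": "Beijing"});
    a natural similarity is the fraction of shared features whose
    categories agree. This metric is an illustrative assumption."""
    shared = set(rep_a) & set(rep_b)
    if not shared:
        return 0.0
    return sum(rep_a[f] == rep_b[f] for f in shared) / len(shared)

# toy semantic representations in the paper's spirit
tsinghua = {"University": "Yes", "Location": "Beijing"}
peking   = {"University": "Yes", "Location": "Beijing"}
louvre   = {"University": "No",  "Location": "Paris"}
```

Such discrete, per-feature representations are directly human-readable, which is the interpretability advantage KSR claims over dense embedding vectors.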


Multi-view Clustering with Graph Embedding for Connectome Analysis (2017), from the University of Illinois at Chicago, addressed multi-view clustering on graph instances with their model, Multi-view Clustering framework on graph instances with Graph Embedding (MCGE), which modeled multi-view graph data as tensors and applied tensor factorization to learn the multi-view graph embeddings, thereby capturing the local structure of graphs. They built an iterative framework that incorporated multi-view graph embedding into the multi-view clustering task on graph instances, performing multi-view clustering and multi-view graph embedding jointly: the multi-view clustering results were used to refine the multi-view graph embedding, and the updated multi-view graph embedding in turn improved the multi-view clustering.

MCGE.png

[Image source. Click image to open in new window.]


Extensive experiments on two brain network datasets (HIV and Bipolar) demonstrated the superior performance of the proposed MCGE approach in multi-view connectome analysis for clinical investigation and application. In a simple two-view example, given the fMRI and DTI brain networks of five subjects, MCGE aimed to learn a multi-view graph embedding for each of them and to cluster these subjects into different groups based on the obtained multi-view graph embeddings. In the HIV and bipolar brain networks (their Figs. 4 and 5, respectively), differential brain region clustering maps of normal controls and patients could be compared. Each color-coded node represented a brain region, and each edge indicated the correlation between two brain regions. Nodes of the normal brain network were well grouped into several clusters, while nodes in the HIV brain network were less coherent. Additionally, for the normal control the edges within each cluster were much more intense than the edges across different clusters.

  • This work was cited in earlier work by the same authors, Multi-view Graph Embedding with Hub Detection for Brain Network Analysis (Sep 2017). In that paper, they proposed incorporating the hub detection task into the multi-view graph embedding framework so that the two tasks could benefit each other, via an auto-weighted framework of Multi-view Graph Embedding with Hub Detection (MVGE-HD) for brain network analysis (again, HIV and bipolar brain networks). The MVGE-HD framework learned a unified graph embedding across all the views while reducing the potential influence of the hubs on blurring the boundaries between node clusters in the graph, thus leading to a clear and discriminative node clustering structure for the graph.

    arxiv1709.03659-f1+f2+f3.png

    [Image source. Click image to open in new window.]


  • Follow-on work (2018) by those authors, Multi-View Multi-Graph Embedding for Brain Network Clustering Analysis (Jun 2018), proposed Multi-view Multi-graph Embedding (M2E) by stacking multi-graphs into multiple partially-symmetric tensors and using tensor techniques to simultaneously leverage the dependencies and correlations among multi-view and multi-graph brain networks. Although there had been work on single-graph embedding and multi-view learning, there had been no embedding method available which enabled preserving multi-graph structures (i.e. taking multiple graphs as input) on multiple views. The goal of M2E is to find low-dimensional representations from multi-view multi-graph data which reveal patterns and structures among the brain networks. Experiments (again) on real HIV and bipolar disorder brain network datasets demonstrated the superior performance of M2E on clustering brain networks by leveraging the multi-view multi-graph interactions.

    arxiv-1806.07703.png

    [Image source. Click image to open in new window.]


    arxiv1806.07703-f2.png

    [Image source. Click image to open in new window.]


    • Critique.

      While these authors survey clustering accuracy of a number of baselines against M2E (“We compare the proposed M2E with eight other methods for multi-view clustering on brain networks.”), in a rather suspect omission they do not include comparisons of this current work to their earlier MCGE or MVGE-HD work (cited in this paper as Ma et al. 2017a and Ma et al. 2017b, respectively) among those models.

For many real-world systems, multiple types of relations are naturally represented by multiple networks. However, existing network embedding methods mainly focus on single-network embedding and neglect the information shared among different networks. For example, in social networks relationships between people may include friendships, money transfers, colleagues, etc. One simple solution for multi-network embedding is to summarize multiple networks into a single network and apply a single-network embedding method to the integrated network. Deep Feature Learning of Multi-Network Topology for Node Classification (Sep 2018) proposed a novel multiple-network embedding method based on a semi-supervised autoencoder, named DeepMNE, which captured the complex topological structures of multi-networks while taking the correlation among the networks into account.

The DeepMNE algorithm contained two main parts: capturing the topological information of each network, and learning multi-network-based features. After obtaining the integrated representations of the multi-networks, they trained a machine learning model on the outputs of DeepMNE to classify the nodes. Experimental results on two real-world datasets (yeast and human gene networks) demonstrated the superior performance of the method over four state of the art algorithms on tasks such as accurate prediction of gene function.

arxiv-1809.02394.png

[Image source. Click image to open in new window.]


Existing approaches to learning node representations usually study networks with a single type of proximity between nodes, which defines a single view of a network. However, in reality multiple types of proximities between nodes usually exist, yielding networks with multiple views. An Attention-based Collaboration Framework for Multi-View Network Representation Learning (Sep 2017) [code] studied learning node representations for networks with multiple views, which aimed to infer robust node representations across different views. They proposed a multi-view representation learning approach (MVE), which promoted the collaboration of different views, letting them vote for robust representations. During the voting process, an attention mechanism was introduced, which enabled each node to focus on the most informative views. Experimental results on real-world networks showed that the proposed approach outperformed state of the art approaches for network representation learning (node classification; link prediction) with a single or multiple views.
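The voting idea can be sketched in a few lines: each node holds softmax attention weights over its per-view embeddings, and its robust representation is the attention-weighted combination. Both the weights and the embeddings are random here; in MVE they are learned:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
n_nodes, n_views, d = 4, 3, 5
view_emb = rng.normal(size=(n_views, n_nodes, d))   # one embedding per view
scores = rng.normal(size=(n_nodes, n_views))        # attention scores (learned
attn = softmax(scores)                              # in MVE; random here)

# Each node's robust embedding is its attention-weighted "vote" over views,
# letting each node focus on its most informative views.
robust = np.einsum("nv,vnd->nd", attn, view_emb)
print(robust.shape)                                 # (4, 5)
```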

arxiv-1709.06636c.png

[Image source. Click image to open in new window.]


mvn2vec: Preservation and Collaboration in Multi-View Network Embedding (Jan 2018) focused on characteristics that were specific and important in embedding multi-view networks. They identified two characteristics, “preservation” and “collaboration”, and explored the feasibility of achieving better embedding quality by simultaneously modeling the two. As shown in their Fig. 1a, a multi-view network consists of multiple network views, where each view corresponds to a type of edge, and all views share the same set of nodes.

arxiv-1801.06597.png

[Image source. Click image to open in new window.]


  • This paper and An Attention-based Collaboration Framework for Multi-View Network Representation Learning (Sep 2017) are conceptually similar (compare Fig. 1
    Sources: arXiv:1709.06636 and arXiv:1801.06597
    in each paper). Dr. Jiawei Han, University of Illinois at Urbana-Champaign, is also a coauthor on both papers.

  • Re: “Collaboration ”: in some datasets, edges between the same pair of nodes may be observed in different views due to shared latent reasons. For instance, nodes in two views may complement (interact with) each other in various social media contexts; embedding them jointly may then yield better results than embedding them independently.

  • Re: “Preservation ”: on the other hand, it is possible for different network views to have different semantic meanings; it is also possible that a portion of nodes have completely disagreeing edges in different views since edges in different views are formed due to distinct latent reasons. For example, professional relationships may not align well with friendships. If we embed the profession and the friendship views into the same embedding space, the embedding fails to preserve the unique information carried by the different network views. The authors refer to the need for preserving unique information carried by different views as “preservation.”

  • It is also possible for preservation and collaboration to co-exist in the same multi-view network.
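A hedged sketch of how these two characteristics might be reflected in an embedding layout (purely illustrative; mvn2vec's actual parameterization differs): a shared component captures collaboration across views, while view-specific components preserve each view's unique information:

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, n_views, d = 3, 2, 4

# Illustrative decomposition: a shared part models "collaboration" across
# views; view-specific parts model "preservation" of each view's semantics.
shared = rng.normal(size=(n_nodes, d))
view_specific = rng.normal(size=(n_views, n_nodes, d))

def node_embedding(node, view):
    """Final embedding: shared information concatenated with the view's own."""
    return np.concatenate([shared[node], view_specific[view, node]])

print(node_embedding(0, 1).shape)   # (8,)
```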

Real-world social networks and digital platforms are comprised of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications each node is typically linked to only a small subset of other nodes; hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most of the current network embedding methods focus primarily on single view networks.

Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regularized Multi-View Agreement (Nov 2018) introduced a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapted and extended an approach to single view network embedding (SVNE) using graph factorization clustering (GFC) to the multi-view setting, using an objective function that maximized the agreement between views based on both the local and global structure of the underlying multi-view graph. Experiments with several benchmark real-world single view networks showed that GFC-based SVNE yielded network embeddings that were competitive with or superior to those produced by state of the art single view network embedding methods when the embeddings were used for labeling unlabeled nodes in the networks. Experiments with several multi-view networks showed that MVNE substantially outperformed the single view methods on the integrated view, as well as the state of the art multi-view methods. Even when the goal was to predict labels of nodes within a single target view, MVNE outperformed its single-view counterpart, suggesting that MVNE was able to extract information useful for labeling nodes in the target view from all of the views.

arxiv1811.02616-t1.png

[Image source. Click image to open in new window.]


arxiv1811.02616-t2+t3.png

[Image source. Click image to open in new window.]


In a multi-view graph embedding, each node is assigned a vector that incorporates data from all views of the graph. Simple methods to create multi-view embeddings include combining multiple views of the graph into one graph using an $\small \text{AND/OR}$ aggregation of the edge sets and embedding the resulting single graph, or embedding each view independently and concatenating the different embeddings obtained for each node. More sophisticated algorithms have been developed based on matrix factorization, tensor factorization, and spectral embedding. Many of these algorithms focus on clustering multi-view graphs, a specific application thereof. High clustering accuracy indicates a good embedding, since relative similarity between nodes should be correctly reflected in the embedding. Ideally, graph embeddings should preserve the distances between the nodes in their respective node embeddings. [Discussion in this paragraph was drawn from the Introduction in Multi-View Graph Embedding Using Randomized Shortest Paths, and the references cited therein.]
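The two simple baselines above can be written out directly; the per-view "embeddings" in the concatenation baseline are just adjacency rows here, purely for illustration:

```python
import numpy as np

# Two views of the same 4-node graph as boolean adjacency matrices.
view1 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,0],[0,0,0,0]], dtype=bool)
view2 = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=bool)

# OR-aggregation: keep an edge if it appears in any view;
# AND-aggregation: keep it only if it appears in every view.
or_graph  = view1 | view2
and_graph = view1 & view2

# Concatenation baseline: embed each view independently (here, trivially by
# using adjacency rows as "embeddings") and concatenate per node.
concat_emb = np.concatenate([view1.astype(float), view2.astype(float)], axis=1)
print(or_graph.sum(), and_graph.sum(), concat_emb.shape)   # 8 2 (4, 8)
```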

Multi-View Graph Embedding Using Randomized Shortest Paths (Aug 2018) [datasets, experimental results] proposed a generalized distance on multi-view graphs called the Common Randomized Shortest Path Dissimilarity (C-RSP), based on the randomized shortest path (RSP) dissimilarity measure on single-view graphs. This algorithm generated a dissimilarity measure between nodes by minimizing the expected cost of a random walk between any two nodes across all views of a multi-view graph, thereby encoding both the local and global structure of the graph. This led to more accurate graph embeddings, resulting in better visualization and higher clustering accuracy.

arxiv1808.06560-f3.png

[Image source. Click image to open in new window.]


arxiv1808.06560-t1.png

[Image source. Click image to open in new window.]


Unsupervised Multi-view Nonlinear Graph Embedding (2018) addressed “multi-view” graph embedding: given a graph with node features, they aimed to learn a network embedding (for each node, from its network features) and a content embedding (from its content features) simultaneously, in an unsupervised manner. Generally, network structure and node features are considered as two different “views” for a node in the graph; in this work the authors allowed the network and content “views” to reinforce one another, to obtain better graph embeddings. They proposed a simple, effective unsupervised Multi-viEw nonlineaR Graph Embedding (MERGE) model. Inspired by structural deep network embedding, MERGE encoded the nonlinearity of the network/content by taking the network/content features as input and applying a deep autoencoder to learn a nonlinear network/content embedding for each node. MERGE also preserved second-order proximity by extending DeepWalk to the multi-view setting, enabling one node’s embedding to interpret its “neighbor” node’s content embedding and enforcing cross-instance-cross-view consistency. MERGE consistently outperformed state of the art baselines over five public datasets including PubMed (their Table 2), with $\small \Theta(\vert V \vert)$ complexity and one-third the training data.
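A minimal single-hidden-layer autoencoder forward pass, illustrating the nonlinear encoding step MERGE applies to network/content features (the training loop, DeepWalk coupling, and cross-view consistency terms are omitted; all weights are random for the sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
n, f, d = 10, 16, 4                      # nodes, input features, embedding dim
X = rng.normal(size=(n, f))              # network (or content) feature rows

W_enc = rng.normal(size=(f, d)) * 0.1
W_dec = rng.normal(size=(d, f)) * 0.1

# One forward pass of an autoencoder: the hidden code Z plays the role of
# the nonlinear network/content embedding learned per node.
Z = np.tanh(X @ W_enc)                   # nonlinear embedding
X_hat = Z @ W_dec                        # reconstruction
loss = float(np.mean((X - X_hat) ** 2))  # objective minimised during training
print(Z.shape)                           # (10, 4)
```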

MERGE-1.png

[Image source. Click image to open in new window.]


MERGE-2.png

[Image source. Note that Table is truncated, here. Click image to open in new window.]


End-to-End Multi-View Networks for Text Classification (Apr 2017) [non-author code here, here and here;  discussion here and here: the OP later deleted their question but the comments are apropos] from the National Research Council Canada proposed a “multi-view network” for text classification. Their MVN model was outperformed by Richard Socher/Salesforce’s BCN+Char+CoVe algorithm (see Table 4 in Learned in Translation: Contextualized Word Vectors).

arxiv-704.05907.png

[Image source. Click image to open in new window.]


Hypergraph Neural Networks (Sep 2018) presented a hypergraph neural network (HGNN) framework for data representation learning, which could encode correlated, high-order data in a hypergraph structure. Confronting the challenge of representation learning for complex data, they proposed incorporating those data in a hypergraph, which is more flexible for modeling complex data. A hyperedge convolution operation handled the data correlation during representation learning, permitting traditional hypergraph learning procedures to be carried out efficiently via hyperedge convolutions. HGNN was thus able to learn the hidden layer representation (the high-order data structure).

While graphs are effective in modeling complex relationships, in many scenarios a single graph is rarely sufficient to succinctly represent all interactions, and hence multi-layered graphs have become popular. Though this leads to richer representations, extending solutions from the single-graph case is not straightforward. Consequently, there is a strong need for novel solutions to solve classical problems, such as node classification, in the multi-layered case. Attention Models with Random Features for Multi-layered Graph Embeddings (Oct 2018) considered the problem of semi-supervised learning with multi-layered graphs. Though deep network embeddings, e.g. DeepWalk, are widely adopted for community discovery, these authors argue that feature learning with random node attributes – using graph neural network – can be more effective. They proposed the use of attention models for effective feature learning, and developed two novel architectures, GrAMME-SG and GrAMME-Fusion, that exploited the inter-layer dependencies for building multi-layered graph embeddings. Empirical studies on several benchmark datasets demonstrated significant performance improvements in comparison to state of the art network embedding strategies. The results also showed that using simple random features was an effective choice, even in cases where explicit node attributes were not available.
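The random-feature observation can be sketched with a single graph-convolution-style propagation step: random node attributes become structure-aware once propagated over a normalized adjacency matrix (a simple stand-in for GrAMME's attention-based layers; graph, features, and weights are all random here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, f = 5, 8
A = rng.random((n, n)) < 0.4
A = (A | A.T) & ~np.eye(n, dtype=bool)      # random undirected graph

# GrAMME-style observation: even random node attributes can be effective,
# because propagation mixes them according to the graph structure.
X = rng.normal(size=(n, f))                 # random node features

A_hat = A.astype(float) + np.eye(n)         # add self-loops
deg = A_hat.sum(axis=1)
A_norm = A_hat / deg[:, None]               # simple row-normalisation

H = np.tanh(A_norm @ X @ rng.normal(size=(f, 4)))  # one propagation layer
print(H.shape)                              # (5, 4)
```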

arxiv-1810.01405a.png

[Image source. Click image to open in new window.]


arxiv-1810.01405b.png

[Image source. Click image to open in new window.]


In very interesting work, Dual Convolutional Neural Network for Graph of Graphs Link Prediction (Oct 2018) by Harada et al. proposed the use of graphs of graphs (GoG) for feature extraction from graphs. A GoG consists of an external graph and internal graphs, where each node in the external graph has an internal graph structure. They proposed a dual CNN that (i) extracted node representations by combining the external and internal graph structures in an end-to-end manner, and (ii) efficiently learned low-dimensional representations of the GoG nodes. Experiments on link prediction tasks using several chemical network datasets demonstrated the effectiveness of the proposed method.
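A minimal container for a graph of graphs (illustrative only; the entities, atom counts, and bond fragments are invented, and the paper's chemical datasets are far richer): the external graph links entities, and each external node carries its own internal graph:

```python
# Toy graph-of-graphs (GoG): the external graph links entities (e.g.
# interacting chemicals); each external node carries an internal graph
# (e.g. a molecule's atom-bond structure).
external_edges = {("aspirin", "ibuprofen"), ("aspirin", "caffeine")}

internal_graphs = {
    "aspirin":   {"atoms": 21, "bonds": {(0, 1), (1, 2)}},   # toy fragments
    "ibuprofen": {"atoms": 33, "bonds": {(0, 1)}},
    "caffeine":  {"atoms": 24, "bonds": {(0, 1), (1, 2), (2, 3)}},
}

def external_neighbors(node):
    """Neighbors of a node in the (undirected) external graph."""
    return {b for a, b in external_edges if a == node} | \
           {a for a, b in external_edges if b == node}

print(sorted(external_neighbors("aspirin")))   # ['caffeine', 'ibuprofen']
```

Link prediction on the external graph then amounts to scoring candidate external edges using representations derived from both levels of structure.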

arxiv-1810.02080.png

[Image source. Click image to open in new window.]


A Recurrent Graph Neural Network for Multi-Relational Data (Nov 2018) [code] introduced a graph recurrent neural network (GRNN) for scalable semi-supervised learning from multi-relational data. Key aspects of the novel GRNN architecture were the use of multi-relational graphs, the dynamic adaptation to the different relations via learnable weights, and the consideration of graph-based regularizers to promote smoothness and alleviate over-parametrization. The goal was to design a powerful learning architecture able to (i) discover complex and highly non-linear data associations, (ii) combine (and select) multiple types of relations, and (iii) scale gracefully with respect to the size of the graph. Numerical tests with real data sets corroborated the design goals and illustrated the performance gains relative to competing alternatives.

arxiv1811.02061-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1811.02061-t1+t2.png

[Image source. Click image to open in new window.]


Community detection is a challenging, yet crucial, problem while mining large-scale graph structured data. Most existing approaches solve this problem by mapping nodes into a vector space and performing unsupervised learning with the resulting embeddings. In cases where multiple types of connectivity patterns exist for the set of nodes, commonly modeled as multilayer graphs, new strategies are required to model the interlayer dependencies in order to perform effective inferencing. Improved Community Detection using Deep Embeddings from Multilayer Graphs (Sep 2018) focused on learning embeddings for each node of a multilayer graph through neural modeling techniques, such that the complex dependencies could be concisely encoded into low-dimensional representations. Referred to as multilayer graph embeddings, these representations could be utilized for discovering community structure in a scalable fashion, even with a large number of layers. Furthermore, in order to ensure that the semantics that persisted over a longer range in the network were well modeled, they proposed to refine the multilayer embeddings via a proxy clustering loss and a graph modularity measure. Using real-world datasets, they demonstrated that this algorithm generated scalable and robust representations, and outperformed existing multilayer community detection approaches.

arxiv1811.12156-f1+f2+t1+t2.png

[Image source. Click image to open in new window.]


Modeling a sequence of interactions between users and items (e.g., products, posts, or courses) is crucial in domains such as e-commerce, social networking, and education to predict future interactions. Representation learning presents an attractive solution to model the dynamic evolution of user and item properties, where each user/item can be embedded in a Euclidean space and its evolution can be modeled by dynamic changes in embedding. However, existing embedding methods either generate static embeddings, treat users and items independently, or are not scalable. Learning Dynamic Embeddings from Temporal Interactions (Dec 2018) by Jure Leskovec and colleagues presented JODIE, a coupled recurrent model to jointly learn the dynamic embeddings of users and items from a sequence of user-item interactions. JODIE has three components. First, the update component updates the user and item embeddings from each interaction using their previous embeddings, via two mutually-recursive recurrent neural networks. Second, a novel projection component is trained to forecast the embedding of a user at any future time. Finally, the prediction component directly predicts the embedding of the item in a future interaction. For models that learn from a sequence of interactions, traditional batching of training data cannot be done due to complex user-user dependencies; the authors therefore presented a novel batching algorithm, t-Batch, which generated time-consistent batches of training data that could run in parallel, giving a massive speed-up. Experiments covered two prediction tasks (future interaction prediction, and state change prediction) on four real-world datasets. JODIE outperformed six state of the art algorithms in these tasks by up to 22.4%, and was highly scalable, up to 9.2x faster than comparable models. As an additional experiment, the authors illustrated that JODIE could predict student drop-out from courses five interactions in advance.
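A hedged numpy sketch of the update and projection components described above (all weights are random; the real JODIE learns them, and its update also conditions on interaction features and elapsed time, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
W_uu, W_ui = rng.normal(size=(d, d)), rng.normal(size=(d, d))
W_iu, W_ii = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_p = rng.normal(size=d)             # time-projection weights (learned in JODIE)

u = rng.normal(size=d)               # user embedding
i = rng.normal(size=d)               # item embedding

# Mutually-recursive update on an interaction: each embedding is updated
# from the other's previous state.
u_new = np.tanh(W_uu @ u + W_ui @ i)
i_new = np.tanh(W_ii @ i + W_iu @ u)

# Projection: forecast the user embedding dt into the future by scaling
# with a time-dependent vector, in the spirit of JODIE's projection component.
dt = 2.5
u_proj = (1 + w_p * dt) * u_new
print(u_proj.shape)                  # (4,)
```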

arxiv1812.02289-f1+f2+f3+t1+t2.png

[Image source. Click image to open in new window.]


arxiv1812.02289-t3+t4+t5+f4+f7.png

[Image source. Click image to open in new window.]


Multi-View / Multi-Layer / Temporal Graph Embedding:

Additional Reading

  • Graph Signal Processing
  • Hyperbolic Embeddings

  • Multi-Multi-View Learning: Multilingual and Multi-Representation Entity Typing (Oct 2018) [Summary] “Knowledge bases (KBs) are paramount in NLP. We employ multiview learning for increasing accuracy and coverage of entity type information in KBs. We rely on two metaviews: language and representation. For language, we consider high-resource and low-resource languages from Wikipedia. For representation, we consider representations based on the context distribution of the entity (i.e., on its embedding), on the entity’s name (i.e., on its surface form) and on its description in Wikipedia. The two metaviews language and representation can be freely combined: each pair of language and representation (e.g., German embedding, English description, Spanish name) is a distinct view. Our experiments on entity typing with fine-grained classes demonstrate the effectiveness of multiview learning.”

    [Click image to enlarge.]
    Source: Multi-Multi-View Learning: Multilingual and Multi-Representation Entity typing

  • Multigraph Networks for Discovering and Fusing Relationships in Molecules (Nov 2018) [Summary] “Spectral Graph Convolutional Networks (GCNs) are a generalization of convolutional networks to learning on graph-structured data. Applications of spectral GCNs have been successful, but limited to a few problems where the graph is fixed, such as shape correspondence and node classification. In this work, we address this limitation by revisiting a particular family of spectral graph networks, Chebyshev GCNs, showing its efficacy in solving graph classification tasks with a variable graph structure and size. Chebyshev GCNs restrict graphs to have at most one edge between any pair of nodes. To this end, we propose a novel multigraph network that learns from multi-relational graphs. We model learned edges with abstract meaning and experiment with different ways to fuse the representations extracted from annotated and learned edges, achieving competitive results on a variety of chemical classification benchmarks.”

  • Multi-layered Graph Embedding with Graph Convolution Networks (Nov 2018)

    “Recently, graph embedding emerges as an effective approach for graph analysis tasks such as node classification and link prediction. The goal of network embedding is to find low dimensional representation of graph nodes that preserves the graph structure. Since there might be signals on nodes as features, recent methods like Graph Convolutional Networks (GCNs) try to consider node signals besides the node relations. On the other hand, multi-layered graph analysis has been received much attention. However, the recent methods for node embedding have not been explored in these networks. In this paper, we study the problem of node embedding in multi-layered graphs and propose a deep method that embeds nodes using both relations (connections within and between layers of the graph) and nodes signals. We evaluate our method on node classification tasks. Experimental results demonstrate the superiority of the proposed method to other multi-layered and single-layered competitors and also proves the effect of using cross-layer edges.”

    arxiv1811.08800-t1+f1.png

    [Image source. Click image to open in new window.]
  • Embedding Models for Episodic Knowledge Graphs (Dec 2018), by Volker Tresp and colleagues, extended models for static knowledge graphs to temporal knowledge graphs – enabling them to store episodic data and to generalize to new facts (inductive learning). They generalized leading learning models for static knowledge graphs (i.e., Tucker, RESCAL, HolE, ComplEx, DistMult) to temporal knowledge graphs. They introduced a new tensor model with superior generalization performance, ConT. Performance was analyzed on two different datasets: the Global Database of Events, Language, and Tone (GDELT), and the database for the Integrated Conflict Early Warning System (ICEWS). They also argue that temporal knowledge graph embeddings might also be models for cognitive episodic memory (facts we remember and can recollect) and that a semantic memory (current facts we know) can be generated from episodic memory by a marginalization operation. They validated this episodic-to-semantic projection hypothesis with the ICEWS dataset.

    arxiv1807.00228-f1+f2.png

    [Image source. Click image to open in new window.]


    arxiv1807.00228-f3+t4+t5.png

    [Image source. Click image to open in new window.]


    arxiv1807.00228-t6.png

    [Image source. Click image to open in new window.]


    arxiv1807.00228-t7+descr.png

    [Image source. Click image to open in new window.]


  • Dynamic Network Embeddings: From Random Walks to Temporal Random Walks (2018)

    “Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9% across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.”

    nguyen2018dynamic-f1+f2+f4+t1+t3.png

    [Image source. Click image to open in new window.]

[Table of Contents]

Hypergraphs

Here I collate and summarize/paraphrase hypergraph-related discussion from elsewhere in this REVIEW.



Neural Segmental Hypergraphs for Overlapping Mention Recognition (Oct 2018) [code | supplementary material] proposed a novel segmental hypergraph representation to model overlapping entity mentions that are prevalent in many practical datasets. Their model was robust in handling both overlapping and non-overlapping mentions, and was thus able to capture features and interactions that could not be captured by previous models while maintaining a low time complexity for inference. They also presented a theoretical analysis to formally assess how their representation is better than alternative representations reported in the literature in terms of representational power. Coupled with neural networks for feature learning, their model achieved state of the art performance on three benchmark datasets [including GENIA] annotated with overlapping mentions.

arxiv-1810.01817a.png

[Image source. Click image to open in new window.]


arxiv-1810.01817b.png

[Image source. Click image to open in new window.]


arxiv-1810.01817c.png

[Image source. Click image to open in new window.]


In a similar approach to Neural Segmental Hypergraphs for Overlapping Mention Recognition (Oct 2018; above), Learning to Recognize Discontiguous Entities (Oct 2018) [project page/code] focused on the study of recognizing discontiguous entities, which can also overlap. They proposed a novel hypergraph representation to jointly encode discontiguous entities of unbounded length that may overlap with one another. Empirical results showed that their model achieved significantly better results when evaluated on standard data with many discontiguous entities.

arxiv1810.08579-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1810.08579-f3.png

[Image source. Click image to open in new window.]




Hypergraph Neural Networks (Sep 2018) presented a hypergraph neural network (HGNN) framework for data representation learning, which could encode correlated, high-order data in a hypergraph structure. Confronting the challenge of representation learning for complex data, they proposed incorporating those data in a hypergraph, which is more flexible for modeling complex data. A hyperedge convolution operation handled the data correlation during representation learning, permitting traditional hypergraph learning procedures to be carried out efficiently via hyperedge convolutions. The convolution in the spectral domain was conducted with the hypergraph Laplacian, and further approximated by truncated Chebyshev polynomials. HGNN was thus able to learn the hidden layer representation (the high-order data structure). HGNN, graph convolutional networks (GCN) and other methods were applied to citation network classification and visual object recognition tasks, demonstrating that the proposed HGNN method could outperform recent state of the art methods. HGNN was also superior to other methods when dealing with multi-modal data.
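The hypergraph Laplacian underlying the hyperedge convolution can be computed from a node-hyperedge incidence matrix. The sketch below uses the standard normalized formulation of Zhou et al. (2007), $\small \Delta = I - D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2}$, on a toy hypergraph (the specific hypergraph and unit weights are illustrative assumptions):

```python
import numpy as np

# Incidence matrix H of a toy hypergraph: 4 nodes, 2 hyperedges
# (hyperedge 0 = {0,1,2}, hyperedge 1 = {2,3}).
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
W = np.eye(2)                                 # hyperedge weights (unit here)

Dv = np.diag(H @ W.diagonal())                # node degrees
De = np.diag(H.sum(axis=0))                   # hyperedge degrees

Dv_inv_sqrt = np.diag(1 / np.sqrt(Dv.diagonal()))
De_inv = np.diag(1 / De.diagonal())

# Normalized hypergraph Laplacian (Zhou et al. 2007 formulation), the
# operator on which spectral hypergraph convolutions are built.
Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
Delta = np.eye(4) - Theta
print(np.allclose(Delta, Delta.T))            # True (symmetric)
```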

arxiv-1809.09401a.png

[Image source. Click image to open in new window.]


arxiv-1809.09401b.png

[Image source. Click image to open in new window.]


arxiv-1809.09401c.png

[Image source. Click image to open in new window.]




While GCNs inherently assume pairwise relationships in graph-structured data, in many real-world problems relationships extend beyond pairwise connections – hypergraphs naturally capture these complex relationships. HyperGCN: Hypergraph Convolutional Networks for Semi-Supervised Classification (Sep 2018) explored the use of GCNs for hypergraph-based semi-supervised learning (SSL). They proposed HyperGCN, an SSL method which used a layer-wise propagation rule for convolutional networks operating directly on hypergraphs (the first principled adaptation of GCNs to hypergraphs). HyperGCN was able to encode both the hypergraph structure and hypernode features in an effective manner.

“In conventional graph-based SSL problems, the loss function is defined as a weighted sum of the supervised loss over labeled data and a regulariser for the graph structure: $\small \mathcal{L} = \underbrace{\mathcal{L}_0}_\text{labeled data} + \underbrace{\lambda \mathcal{L}_{reg}}_\text{graph data}$. Here, $\small \mathcal{L}_0$ denotes the supervised loss w.r.t. the labeled part of the graph and $\small \mathcal{L}_{reg}$ is an explicit graph-based regulariser that smooths the label information over the graph (with $\small \lambda$ being the weighting factor). A popularly used graph-based regulariser is the graph Laplacian regulariser, which relies on the prior assumption that connected nodes in the graph are likely to share the same label (the cluster assumption; Chapelle, Weston, and Schölkopf 2003). This assumption might restrict modeling capacity, as graph edges need not encode node similarity, but could instead contain other information such as pointwise mutual semantic information (Zhuang and Ma 2018), or knowledge graph relationship information (Wang, Ye, and Gupta 2018).

“To avoid this restriction, recent works have encoded both the labeled and graph data using a convolutional neural network (Atwood and Towsley 2016; Kipf and Welling 2017). This allows the network to be trained on a supervised loss function $\small \mathcal{L} = \mathcal{L}_0$ for all graph nodes, thereby avoiding an explicit graph-based regulariser in the loss function. Specifically, graph convolutional networks (GCNs) (Kipf and Welling 2017) naturally integrate the graph structure (e.g., citation networks) and the feature attributes of nodes (e.g., bag-of-words features). GCNs have achieved state-of-the-art performance on benchmark graph-based semi-supervised node classification datasets. Even though GCNs are able to incorporate arbitrary relationships between nodes (and not just similarity), they are still limited by pairwise relationships in the graph.

“However, in many real-world problems, relationships go beyond pairwise associations. In many naturally occurring graphs, we observe complex relationships involving more than two nodes, e.g., co-authorship, co-citation, email communication, etc. Hypergraphs provide a flexible and natural modeling tool to model such complex relationships. A hypergraph is a generalisation of a simple graph in which an edge (a.k.a. hyperedge) can connect any number of nodes. While simple graph edges connect pairs of nodes (pairwise relationships), hyperedges can connect an arbitrary number of nodes (and thus capture complex relationships). For example, in a co-authorship network modeled as a hypergraph, each node represents a paper and each hyperedge represents an author and connects all papers coauthored by that author. Because of the obvious existence of such complex relationships in many real-world networks, the problem of learning with hypergraphs assumes significance (Zhou, Huang, and Schölkopf 2007; Hein et al. 2013; Zhang et al. 2017).

“In this work, we consider the problem of hypergraph-based semi-supervised classification. …”
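The loss in the first quoted paragraph is easy to make concrete. Below is a minimal numpy sketch of Laplacian-regularized graph SSL; the adjacency matrix, label values and $\lambda$ are toy choices for illustration only.

```python
import numpy as np

# Toy graph: 4 nodes on a path; nodes 0 and 3 are labeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L_graph = D - A                          # unnormalized graph Laplacian

f = np.array([0.9, 0.6, 0.4, 0.1])      # predicted label scores for all nodes
y = {0: 1.0, 3: 0.0}                    # known labels on the labeled subset
lam = 0.1                                # regularization weight lambda

# Supervised loss over labeled nodes (squared error, for simplicity)
L0 = sum((f[i] - yi) ** 2 for i, yi in y.items())

# Laplacian regularizer: f^T L f = sum over edges (f_i - f_j)^2,
# which smooths predictions over the graph (the "cluster assumption")
L_reg = f @ L_graph @ f
loss = L0 + lam * L_reg
```

GCN-style approaches drop the `L_reg` term entirely and instead bake the graph structure into the model itself, training only on `L0`.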

arxiv1809.02589-f1.png

[Image source. Click image to open in new window.]


arxiv1809.02589-f2.png

[Image source. Click image to open in new window.]


arxiv1809.02589-t3.png

[Image source. Click image to open in new window.]


arxiv1809.02589-t4+t5.png

[Image source. Click image to open in new window.]




Most network-based machine learning methods assume that the labels of two adjacent samples in the network are likely to be the same. However, a pairwise relationship between samples is an incomplete description: the information carried by a group of samples that shows a very similar pattern, and thus tends to share the same labels, is missed. A natural way to overcome this information loss is to represent the samples' feature dataset as a hypergraph. Un-normalized Hypergraph p-Laplacian Based Semi-Supervised Learning Methods (Nov 2018) applied such methods to the zoo dataset and a tiny version of the 20 Newsgroups dataset. Experimental results showed that the accuracy of these unnormalized hypergraph p-Laplacian based semi-supervised learning methods was significantly greater than that of the non-p-Laplacian baseline (the then state of the art hypergraph Laplacian based semi-supervised learning method for classification problems).

Zhou2006learning-f1.png

[Image source. [Original source.] Click image to open in new window.]


Diagnosing an inherited disease often requires identifying the pattern of inheritance in a patient’s family. In an interesting approach, Explainable Genetic Inheritance Pattern Prediction (Dec 2018) [code] represented family trees with genetic patterns of inheritance using hypergraphs and latent state space models to provide explainable inheritance pattern predictions. Their approach, GraphLSSM, allowed for exact causal inference over a patient’s possible genotypes given their relatives’ phenotypes. By design, inference could be examined at a low level to provide explainable predictions. Furthermore, they made use of human intuition by providing a method to assign hypothetical evidence to any inherited gene alleles. Their analysis supported the application of latent state space models to improve patient care in cases of rare inherited diseases, where access to genetic specialists is limited.

arxiv1812.00259-f1+f2.png

[Image source. Click image to open in new window.]


  • The code provided by these authors, Graphical Latent State Space Models (GraphLSSM), is a Python library that extends traditional generative time series models, such as hidden Markov models, linear dynamical systems and their extensions, to graphical models. With this framework, a user can do Bayesian inference over graphical structures (i.e., probabilistic graphical modeling). One use case is doing inference over a pedigree chart, where phenotypes (observations) are emitted based on each person’s genotype (latent state), and the genotypes of individuals are linked through their ancestor tree. [Visualization with Graphviz.]

    GraphLSSM-HMM.gif

    [Image source. Click image to open in new window.]
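As a concrete illustration of the kind of latent state space inference GraphLSSM generalizes, here is the standard HMM forward algorithm in numpy (the chain case, before its extension to pedigree graphs). All probabilities are made-up toy values, not from the paper; the latent states might stand for genotypes and the observations for phenotypes.

```python
import numpy as np

pi = np.array([0.6, 0.4])                # initial latent state distribution
T = np.array([[0.7, 0.3],                # transition probabilities P(s' | s)
              [0.2, 0.8]])
E = np.array([[0.9, 0.1],                # emission probabilities P(obs | state)
              [0.3, 0.7]])
obs = [0, 1, 0]                          # observed (phenotype) sequence

# Forward recursion: alpha_t(s) = P(obs_1..t, state_t = s)
alpha = pi * E[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ T) * E[:, o]

likelihood = alpha.sum()                 # P(observations)
posterior = alpha / likelihood           # P(final latent state | observations)
```

GraphLSSM replaces the chain with a directed graphical structure (a pedigree), so the same message-passing idea runs over an ancestor tree rather than a sequence.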


Hypergraphs:

Additional Reading

  • On a Hypergraph Probabilistic Graphical Model (Nov 2018)

    “We propose a directed acyclic hypergraph framework for a probabilistic graphical model that we call Bayesian hypergraphs. The space of directed acyclic hypergraphs is much larger than the space of chain graphs. Hence Bayesian hypergraphs can model much finer factorizations than Bayesian networks or LWF chain graphs and provide simpler and more computationally efficient procedures for factorizations and interventions. Bayesian hypergraphs also allow a modeler to represent causal patterns of interaction such as Noisy-OR graphically (without additional annotations). We introduce global, local and pairwise Markov properties of Bayesian hypergraphs and prove under which conditions they are equivalent. We define a projection operator, called shadow, that maps Bayesian hypergraphs to chain graphs, and show that the Markov properties of a Bayesian hypergraph are equivalent to those of its corresponding chain graph. We extend the causal interpretation of LWF chain graphs to Bayesian hypergraphs and provide corresponding formulas and a graphical criterion for intervention.”

    arxiv1811.08372-f6+f7+f8.png

    [Image source. Click image to open in new window.]

[Table of Contents]

Graph Signal Processing

This is a fascinating domain. For additional background, see my comprehensive Resources pages:



Recent work mentioned but not discussed on that page includes work by Zou and Lerman [Graph Generation via Scattering (Sep 2018)] at the University of Minnesota, which employed the graph wavelet approach of Wavelets on Graphs via Spectral Graph Theory (2011). While generative models like generative adversarial networks (GAN) and variational autoencoders (VAE) have recently been applied to graphs, they are difficult to train. This work proposed a graph generation model that used an adaptation of a scattering transform to graphs. The proposed model was composed of an encoder (a Gaussianized graph scattering transform) and a decoder (a simple fully connected network that is adapted to specific tasks, such as link prediction, signal generation on graphs and full graph and signal generation). Results demonstrated state of the art performance of the proposed system for both link prediction (Cora, Citeseer and PubMed citation data) and graph and signal generation.

arxiv-1809.10851.png

[Click image to open in new window.]


The application of CNN to structured signal classification (e.g. image classification) inspired the development of deep filter banks, referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. Graph Classification with Geometric Scattering (Oct 2018) further advanced those lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. They demonstrated the utility of features extracted with this designed deep filter bank in graph classification, and showed its competitive performance relative to other methods, including graph kernel methods and geometric deep learning ones, on both social and biochemistry data.

“In supervised graph classification problems one is given a training database of graph/label pairs $\small \{(G_i, y_i)\}_{i=1}^{N} \subset \mathcal{G} \times \mathcal{Y}$ sampled from a set of potential graphs $\small \mathcal{G}$ and potential labels $\small \mathcal{Y}$. The goal is to use the training data to learn a model $\small f : \mathcal{G} \rightarrow \mathcal{Y}$ that associates to any graph $\small G \in \mathcal{G}$ a label $\small y = f(G) \in \mathcal{Y}$. These types of databases arise in biochemistry, in which the graphs may be molecules and the labels some property of the molecule (e.g., its toxicity), as well as in various types of social network databases. Until recently, most approaches were kernel based methods, in which the model $\small f$ was selected from the reproducing kernel Hilbert space generated by a kernel that measures the similarity between two graphs [ … snip … ]

“In many of these algorithms, task based (i.e., dependent upon the labels $\small \mathcal{Y}$) graph filters are learned from the training data as part of the larger network architecture. These filters act on a characteristic signal $\small \mathbf{x}_G$ that is defined on the vertices of any graph $\small G$, e.g., $\small \mathbf{x}_G$ is the vector of degrees of each vertex (we remark there are also edge based algorithms, such as Gilmer et al. and references within, but these have largely been developed for and tested on databases not considered in Sec. 4).

“Here, we propose an alternative to these methods in the form of a Geometric Scattering Classifier (GSC) that leverages graph-dependent (but not label dependent) scattering transforms to map each graph $\small G$ to the scattering features extracted from $\small \mathbf{x}_G$. Furthermore, inspired by transfer learning approaches, we apply the scattering cascade as frozen network layers on $\small \mathbf{x}_G$, followed by several fully connected classification layers (see Fig. 2). We note that while the formulation in Sec. 3 is phrased for a single signal $\small \mathbf{x}_G$, it naturally extends to multiple signals by concatenating their scattering features.

  • “… our evaluation results on graph classification show the potential of the produced scattering features to serve as universal representations of graphs. Indeed, classification with these features with relatively simple classifier models reaches high accuracy results on most commonly used graph classification datasets, and outperforms both traditional and recent deep learning feed forward methods in terms of average classification accuracy over multiple datasets. …

    Finally, the geometric scattering features provide a new way for computing and considering global graph representations, independent of specific learning tasks. Therefore, they raise the possibility of embedding entire graphs in Euclidean space (albeit high dimensional) and computing meaningful distances between graphs with them, which can be used for both supervised and unsupervised learning, as well as exploratory analysis of graph-structured data.”
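A minimal sketch of the wavelet construction behind geometric scattering: dyadic-scale wavelets built from a lazy random walk matrix, with unnormalized moments of the wavelet coefficient magnitudes serving as (first-order) scattering features. The toy graph, choice of scales and moments are illustrative, not the paper's exact configuration.

```python
import numpy as np

# Toy graph and its lazy random walk matrix P = (I + A D^{-1}) / 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=0))
P = 0.5 * (np.eye(4) + A @ D_inv)          # column-stochastic

def wavelet(j):
    # Dyadic graph wavelet: Psi_j = P^(2^(j-1)) - P^(2^j)
    return (np.linalg.matrix_power(P, 2 ** (j - 1))
            - np.linalg.matrix_power(P, 2 ** j))

x = A.sum(axis=1)                          # characteristic signal x_G: node degrees
# First-order scattering features: q-th moments of |Psi_j x| over scales j
feats = np.array([np.sum(np.abs(wavelet(j) @ x) ** q)
                  for j in (1, 2, 3) for q in (1, 2)])
```

Because the wavelets and moments are fixed (graph-dependent but label-independent), `feats` can be computed once per graph and fed to any downstream classifier, which is exactly the "frozen layers plus fully connected classifier" design described above.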

arxiv-1810.03068a.png

[Image source. Click image to open in new window.]


arxiv-1810.03068b.png

[Image source. Click image to open in new window.]


Graph classification has recently received a lot of attention from various fields of machine learning e.g. kernel methods, sequential modeling or graph embedding. All these approaches offer promising results with different respective strengths and weaknesses. However, most of them rely on complex mathematics and require heavy computational power to achieve their best performance. A Simple Baseline Algorithm for Graph Classification (Oct 2018) proposed a simple and fast algorithm based on the spectral decomposition of the graph Laplacian to perform graph classification and get a first reference score for a dataset. This method obtained competitive results compared to state of the art algorithms.
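In the spirit of that baseline, a graph can be embedded via the spectrum of its normalized Laplacian and the resulting fixed-length vector fed to any off-the-shelf classifier. The sketch below is a hedged illustration: the number of eigenvalues kept and the zero-padding scheme are my own choices, and the paper's exact feature construction may differ.

```python
import numpy as np

def spectral_features(A, k=5):
    """Embed a graph as the k smallest eigenvalues of its normalized Laplacian,
    zero-padded so graphs of different sizes map to vectors of equal length."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    evals = np.sort(np.linalg.eigvalsh(L))[:k]    # ascending spectrum
    return np.pad(evals, (0, max(0, k - len(evals))))

# Two toy graphs: a triangle and a 4-node path; the feature vectors can be
# fed to any standard classifier (SVM, random forest, ...).
triangle = np.ones((3, 3)) - np.eye(3)
path4 = np.diag(np.ones(3), 1)
path4 = path4 + path4.T
f1, f2 = spectral_features(triangle), spectral_features(path4)
```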

arxiv1810.09155-f1.png

[Image source. Click image to open in new window.]


Datasets. We evaluated our model against some standard datasets from biology:

  • Mutag (MT),
  • Predictive Toxicology Challenge (PTC),
  • Enzymes (EZ),
  • Proteins Full (PF),
  • Dobson and Doig (DD) and
  • National Cancer Institute (NCI1).

All graphs represent chemical compounds. Nodes are molecular substructures (typically atoms) and edges represent connections between these substructures (chemical bonds or spatial proximity). In MT, the compounds are either mutagenic or not mutagenic, while in PTC they are either carcinogens or not. EZ contains tertiary structures of proteins from the 6 Enzyme Commission top-level classes. In DD, graphs represent secondary structures of proteins that are either enzymes or non-enzymes. PF is a subset of DD from which the largest graphs have been removed. In NCI1, compounds either have anti-cancer activity or not. Statistics about the graphs are presented in Table 1.

arxiv1810.09155-t1+t2.png

[Image source. Click image to open in new window.]


arxiv1810.09155-f2.png

[Image source. Click image to open in new window.]


Interpretability has emerged as a crucial aspect of machine learning, aimed at providing insights into the working of complex neural networks. However, existing solutions vary vastly based on the nature of the interpretability task, with each use case requiring substantial time and effort. MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis (Dec 2018) introduced MARGIN, a simple yet general approach to address a large set of interpretability tasks ranging from identifying prototypes to explaining image predictions. MARGIN exploited ideas rooted in graph signal analysis to determine influential nodes in a graph, which were defined as those nodes that maximally described a function defined on the graph. By carefully defining task-specific graphs and functions, MARGIN outperformed existing approaches in a number of disparate interpretability challenges.

arxiv1711.05407-f1+f3+f4.png

[Image source. Click image to open in new window.]


arxiv1711.05407-f5descr+implementation.png

[Image source. Click image to open in new window.]




Many networks exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. Higher-Order Organization of Complex Networks (Jul 2016) [project;  slides here and here], by Jure Leskovec and colleagues at Stanford University (Benson, Gleich and Leskovec), developed a generalized framework for clustering networks on the basis of higher-order connectivity patterns, at the level of small network subgraphs. That project focused on finding higher-order organization of complex networks at the level of small network subgraphs (motifs), by clustering the nodes of a graph based on motifs instead of edges via a motif-based spectral clustering method.

  • “Graphs are a pervasive tool for modeling and analyzing network data throughout the sciences. Benson et al. developed an algorithmic framework for studying how complex networks are organized by higher-order connectivity patterns (see the Perspective by Pržulj and Malod-Dognin). Motifs in transportation networks reveal hubs and geographical elements not readily achievable by other methods. A motif previously suggested as important for neuronal networks is part of a “rich club” of subnetworks.”

    Benson2016higher-order-f1.png

    [Image source. Click image to open in new window.]


    Benson2016higher-order-f1c.png

    [Image source;  (see also). Click image to open in new window.]


  • The algorithm illustrated in Fig. 1C, above, efficiently identifies a cluster of nodes $\small S$ as follows:

    • Step 1: Given a network and a motif $\small M$ of interest, form the motif adjacency matrix $\small W_M$ whose entries $\small (i,j)$ are the co-occurrence counts of nodes $\small i$ and $\small j$ in the motif $\small M$: $\small (W_M)_{ij} =$ number of instances of $\small M$ that contain nodes $\small i$ and $\small j$.

    • Step 2: Compute the spectral ordering $\small \sigma$ of the nodes from the normalized motif graph Laplacian matrix constructed via $\small W_M$.

      • The normalized motif Laplacian matrix is $\small L_M = D^{-1/2}(D - W_M)D^{-1/2}$, where $\small D$ is a diagonal matrix with the row-sums of $\small W_M$ on the diagonal, $\small D_{ii} = \sum_j (W_M)_{ij}$ , and $\small D^{-1/2}$ is the same matrix with the inverse square roots on the diagonal $\small D_{ii}^{-1/2} = 1 / \sqrt{\sum_j (W_M)_{ij}}$. The spectral ordering $\small \sigma$ is the by-value ordering of $\small D^{-1/2}z$, where $\small z$ is the eigenvector corresponding to the second smallest eigenvalue of $\small L_M$, i.e., $\small \sigma_i$ is the index of $\small D^{-1/2}z$ with the $\small i^{th}$ smallest value.

    • Step 3: Find the prefix set of $\small \sigma$ with the smallest motif conductance (the argument of the minimum), formally, $\small S := \operatorname{argmin}_r \phi_M(S_r)$, where $\small S_r = \{\sigma_1, \ldots, \sigma_r \}$.

      Benson2016higher-order-f1c-annotated.png

      [Image source;  (see also). Click image to open in new window.]
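The three steps above can be sketched directly in numpy for the triangle motif on a small undirected graph. The graph below (three triangles sharing nodes) is a made-up example; real motif counting on large networks uses more efficient enumeration than the dense matrix products shown here.

```python
import numpy as np

# Toy undirected graph containing triangles {0,1,2}, {2,3,4}, {4,5,6}
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 6)]
A = np.zeros((7, 7))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Step 1: triangle-motif adjacency matrix.
# (A @ A)[i, j] counts common neighbours of i and j, so (A @ A) * A counts,
# for each edge (i, j), the triangles containing it.
W_M = (A @ A) * A

# Step 2: spectral ordering from the normalized motif Laplacian
d = W_M.sum(axis=1)
D_is = np.diag(1.0 / np.sqrt(d))            # assumes every node is in some motif
L_M = np.eye(len(d)) - D_is @ W_M @ D_is
vals, vecs = np.linalg.eigh(L_M)
sigma = np.argsort(D_is @ vecs[:, 1])       # order by D^{-1/2} z, z = 2nd eigenvector

# Step 3: sweep over prefixes S_r = {sigma_1, ..., sigma_r} for the
# smallest motif conductance
def motif_conductance(S):
    mask = np.zeros(len(d), dtype=bool)
    mask[S] = True
    cut = W_M[mask][:, ~mask].sum()
    return cut / min(W_M[mask].sum(), W_M[~mask].sum())

best = min(motif_conductance(sigma[:r]) for r in range(1, len(sigma)))
```

On this toy graph the sweep recovers one of the end triangles as the best-conductance cluster, illustrating how the motif Laplacian groups nodes by shared triangle participation rather than by raw edges.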


  • [Perspective]  Network Analytics in the Age of Big Data (2016):

    “We live in a complex world of interconnected entities. In all areas of human endeavor, from biology to medicine, economics, and climate science, we are flooded with large-scale data sets. These data sets describe intricate real-world systems from different and complementary viewpoints, with entities being modeled as nodes and their connections as edges, comprising large networks. These networked data are a new and rich source of domain-specific information, but that information is currently largely hidden within the complicated wiring patterns. Deciphering these patterns is paramount, because computational analyses of large networks are often intractable, so that many questions we ask about the world cannot be answered exactly, even with unlimited computer power and time. Hence, the only hope is to answer these questions approximately (that is, heuristically) and prove how far the approximate answer is from the exact, unknown one, in the worst case. On page 163 of this issue, Benson et al. take an important step in that direction by providing a scalable heuristic framework for grouping entities based on their wiring patterns and using the discovered patterns for revealing the higher-order organizational principles of several real-world networked systems.”  [Source]

    Prulj2016network-f1.png

    [Image source. Click image to open in new window.]


  • Response.  “Hypergraph-Based Spectral Clustering of Higher-Order Network Structures” [Tom Michoel & Bruno Nachtergaele] (July 2016):

    • “The authors refer to our work T Michoel et al, Mol. Bio. Syst. 7, 2769 (2011)  [local copy] where we introduced an algorithm for clustering networks on the basis of 3-node network motifs, but appear to have missed our subsequent work where this algorithm was extended into a general spectral clustering algorithm for hypergraphs T Michoel and B Nachtergaele, Phys Rev E 86, 05611 (2012). As a special case, and similar to SNAP, this algorithm can be (and was) used to cluster signed, colored or weighted networks based on network motifs or subgraph patterns of arbitrary size and shape, including patterns of unequal size such as shortest paths. An implementation of the algorithm is available on GitHub.”

    Michoel et al. (2011):


    Michoel2011enrichment-f2.png

    [Image source. Click image to open in new window.]


    Michoel2011enrichment-f3.png

    [Image source. Click image to open in new window.]


    Michoel2011enrichment-f4.png

    [Image source. Click image to open in new window.]


    Michoel & Nachtergaele, 2012:


    arxiv1205.3630-f1.png

    [Image source. Click image to open in new window.]


    arxiv1205.3630-f3.png

    [Image source. Click image to open in new window.]


    arxiv1205.3630-f4.png

    [Image source. Click image to open in new window.]


    arxiv1205.3630-f5.png

    [Image source. Click image to open in new window.]




Graph Signal Processing:

Additional Reading

  • Multilayer Graph Signal Clustering (Nov 2018)

    “Multilayer graphs are commonly used to model relationships of different types between data points. In this paper, we propose a method for multilayer graph data clustering, which combines the different graph layers in the Riemann manifold of Semi-Positive Definite (SPD) graph Laplacian matrices. The resulting combination can be seen as a low-dimensional representation of the original data points. In addition, we consider that data can also carry signal values and not only graph information. We thus propose new clustering solution for such hybrid data by training a neural network such that the transformed data points are orthonormal, and their distance on the aggregated graph is minimized. Experiments on synthetic and real data show that our method leads to a significant improvement with respect to state-of-the-art clustering algorithms for graph data.”

    arxiv1811.00821-t1+t2+t3.png

    [Image source. Click image to open in new window.]


  • Multigraph Networks for Discovering and Fusing Relationships in Molecules (Nov 2018)

    “Spectral Graph Convolutional Networks (GCNs) are a generalization of convolutional networks to learning on graph-structured data. Applications of spectral GCNs have been successful, but limited to a few problems where the graph is fixed, such as shape correspondence and node classification. In this work, we address this limitation by revisiting a particular family of spectral graph networks, Chebyshev GCNs, showing its efficacy in solving graph classification tasks with a variable graph structure and size. Chebyshev GCNs restrict graphs to have at most one edge between any pair of nodes. To this end, we propose a novel multigraph network that learns from multi-relational graphs. We model learned edges with abstract meaning and experiment with different ways to fuse the representations extracted from annotated and learned edges, achieving competitive results on a variety of chemical classification benchmarks.”

    arxiv1811.09595-f1.png

    [Image source. Click image to open in new window.]


    arxiv1811.09595-f2.png

    [Image source. Click image to open in new window.]


    arxiv1811.09595-f3.png

    [Image source. Click image to open in new window.]


    arxiv1811.09595-t1.png

    [Image source. Click image to open in new window.]

  • Convolutional Neural Network Architectures for Signals Supported on Graphs (Dec 2018)

    “Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best performing GNN architecture.”

[Table of Contents]

Link Prediction

Link prediction, the discovery of relations such as (subject, relation, object) triples, is tremendously useful for knowledge graph {construction | embedding | completion}, fact checking, and knowledge discovery.

Embedding Entities and Relations for Learning and Inference in Knowledge Bases (Dec 2014; updated Aug 2015) [non-author code here and here] considered learning representations of entities and relations in KBs using a neural-embedding approach. They showed that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), could be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework (DistMult), they compared a variety of embedding models on the link prediction task, showing that a simple bilinear formulation could achieve [then] state of the art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). They also introduced a novel approach that utilized the learned relation embeddings to mine logical rules such as $\small \text{BornInCity}(a,b) \wedge \text{CityInCountry}(b,c) \Rightarrow \text{Nationality}(a,c)$. Embeddings learned from the bilinear objective were particularly good at capturing relational semantics; more interestingly, their embedding-based rule extraction approach outperformed a [then] state of the art confidence-based rule mining approach (Horn rules) that involved compositional reasoning.
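DistMult's bilinear-diagonal scoring function is compact enough to sketch directly; the embedding dimensions and random values below are toy data for illustration, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 8, 5, 3
E = rng.normal(size=(n_ent, dim))        # entity embeddings (vectors)
R = rng.normal(size=(n_rel, dim))        # relation embeddings (diagonal bilinear maps)

def distmult_score(h, r, t):
    # Bilinear score with a diagonal relation matrix:
    #   score(h, r, t) = e_h^T diag(w_r) e_t = sum_i e_h[i] * w_r[i] * e_t[i]
    return np.sum(E[h] * R[r] * E[t])

# Rank all candidate tails for a (head, relation, ?) link prediction query
scores = np.array([distmult_score(0, 1, t) for t in range(n_ent)])
ranking = np.argsort(-scores)
```

Note that the diagonal form makes the score symmetric in head and tail, so DistMult cannot distinguish a relation from its inverse; this is the limitation that motivates the complex-valued embeddings discussed next.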

arxiv1412.6575-t2+t3+t4.png

[Image source. Click image to open in new window.]


arxiv1412.6575-f3+f4.png

[Image source. Click image to open in new window.]


Knowledge graph completion deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) and predicting missing relationships (labeled edges). In statistical relational learning, the link prediction problem is key to automatically understanding the structure of large knowledge bases. Complex Embeddings for Simple Link Prediction (Jun 2016) [code] – by Théo Trouillon (Xerox Research Centre Europe) and colleagues – proposed to solve this problem through latent factorization, using complex-valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state of the art models such as Neural Tensor Networks and Holographic Embeddings, their approach – ComplEx – was arguably simpler, as it only used the Hermitian dot product (the complex counterpart of the standard dot product between real vectors). ComplEx was scalable to large datasets, as it remained linear in time and space, and consistently outperformed alternative approaches on standard link prediction benchmarks.
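The ComplEx scoring function itself is just the real part of a trilinear Hermitian product. A toy numpy sketch (dimensions and random initialization are illustrative, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 8, 5, 3
E = rng.normal(size=(n_ent, dim)) + 1j * rng.normal(size=(n_ent, dim))
R = rng.normal(size=(n_rel, dim)) + 1j * rng.normal(size=(n_rel, dim))

def complex_score(h, r, t):
    # ComplEx score: Re( sum_i w_r[i] * e_h[i] * conj(e_t[i]) )
    return float(np.real(np.sum(R[r] * E[h] * np.conj(E[t]))))

# A purely real relation embedding makes the relation symmetric in (h, t);
# a nonzero imaginary part allows antisymmetry -- this is what lets ComplEx
# model relations that DistMult (real-valued, hence symmetric) cannot.
w_real = np.real(R[0]).astype(complex)
s_ht = float(np.real(np.sum(w_real * E[0] * np.conj(E[1]))))
s_th = float(np.real(np.sum(w_real * E[1] * np.conj(E[0]))))
```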

arxiv1606.06357-t2+t4.png

[Image source. Click image to open in new window.]


State of the art (statistical relational learning / knowledge graph completion) embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. Trouillon et al. followed up their ComplEx paper (Jun 2016) with Knowledge Graph Completion via Complex Tensor Factorization (Feb 2017; updated Nov 2017) [code], in which they reconciled both expressiveness and complexity through the use of complex-valued embeddings, and explored the link between such complex-valued embeddings and unitary diagonalization. They corroborated their approach theoretically and showed that all real square matrices – thus all possible relation/adjacency matrices – are the real part of some unitarily diagonalizable matrix – opening the door to other applications of square matrices factorization. Their approach, based on complex embeddings, was arguably simple (as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors), whereas other methods resorted to increasingly complicated composition functions to increase their expressiveness. The proposed complex embeddings were scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.

“This extended version adds proofs of existence of the proposed model in both single and multi-relational settings, as well as proofs of the non-uniqueness of the complex embeddings for a given relation. Bounds on the rank of the proposed decomposition are also demonstrated and discussed. The learning algorithm is provided in more details, and more experiments are provided, especially regarding the training time of the models.”

Improving Knowledge Graph Embedding Using Simple Constraints (May 2018) [code] investigated the potential of using very simple constraints to improve KG embedding. They examined non-negativity constraints on entity representations and approximate entailment constraints on relation representations (hence ComplEx-NNE+AER, an extension of the original ComplEx).

arxiv1805.02408-t3+t4.png

[Image source. Click image to open in new window.]


Non-negativity constraints on entity representations in that work helped learn compact and interpretable representations for entities, whereas approximate entailment constraints on relation representations further encoded regularities of logical entailment between relations into their distributed representations. The constraints imposed prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. For each relation, it assumed a score matrix whose sign matrix was partially observed. The entries corresponding to factual and nonfactual triples had the sign $\small 1$ and $\small -1$, respectively.

Notes:

  1. Regarding non-negativity constraints: for technical reasons the variables of linear programs must always take non-negative values; for example, the linear inequalities x ≥ 0 and y ≥ 0 specify that you cannot produce a negative number of items.

  2. Regarding approximate entailment constraints: these authors mean an ordered pair of relations such that the former approximately entails the latter – e.g., BornInCountry and Nationality – stating that a person born in a country is very likely, but not necessarily, to have the nationality of that country.

Holographic embeddings are an approach to generating entity embeddings from a list of (head, tail, relation) triples like (‘Jeff’, ‘Amazon’, ‘employer’) and (‘Zuck’, ‘Palo Alto’, ‘location’). Embeddings can be used as lossy (but memory-efficient) inputs to other machine learning models, used directly in triple inference (i.e. knowledge base completion, or link prediction) by evaluating the likelihood of candidate triples, or to search for associated entities using $\small k$-nearest neighbors. For example, a search from the embedding representing the entity ‘University of California, Berkeley’ yields the associated entities ‘UC Irvine’, ‘Stanford University’, ‘USC’, ‘UCLA’, and ‘UCSD’ (see the figure in this GitHub repo).

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. Holographic Embeddings of Knowledge Graphs (Dec 2017) [author’s code] – by Maximilian Nickel, Lorenzo Rosasco and Tomaso Poggio at MIT – proposed holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method was related to holographic models of associative memory in that it employed circular correlation to create compositional representations. By using correlation as the compositional operator, HolE could capture rich interactions but simultaneously remained efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments they showed that holographic embeddings were able to outperform state of the art methods for link prediction in knowledge graphs and relational learning benchmark datasets.
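The compositional operator at the heart of HolE, circular correlation computed via the FFT, fits in a few lines. The embedding values below are random toy data, and the sigmoid scoring shape is a simplified reading of the paper's probabilistic scoring.

```python
import numpy as np

def circular_correlation(a, b):
    # [a (star) b]_k = sum_i a[i] * b[(k + i) mod d], in O(d log d) via the FFT:
    # the DFT of the circular correlation is conj(F(a)) * F(b)
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def hole_score(e_h, e_t, w_r):
    # HolE scores a triple by matching the relation embedding against the
    # circular correlation of the head and tail embeddings
    return 1.0 / (1.0 + np.exp(-w_r @ circular_correlation(e_h, e_t)))

rng = np.random.default_rng(0)
e_h, e_t, w_r = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
score = hole_score(e_h, e_t, w_r)
```

Unlike circular convolution, circular correlation is non-commutative, so HolE can represent asymmetric relations while keeping a single $\small d$-dimensional vector per entity and relation.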

arxiv-1510.04935.png

[Image source. Click image to open in new window.]


arxiv1510.04935-t2.png

[Image source. Click image to open in new window.]


  • What is a “holographic model?” Holographic Embeddings of Knowledge Graphs states “The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets.”

    arxiv-1510.04935b.png

    [Image source. Click image to open in new window.]


    The Wikipedia Cross-correlation page and this Quora post, What is the need for circular correlation and its advantages over a linear correlation? provide a basic description of the concept of circular correlation and its Fourier transformation, which is used in that paper, and discussed in the ComplEx / HolE  “comparison” paper.

  • These slides (starting at slide 7) describe holographic embeddings, Holographic Embeddings of Knowledge Graphs (2016; Maximilian Nickel, Lorenzo Rosasco and Tomaso Poggio)  [local copy].

  • Excellent non-author code, discussion and examples for HolE are found in this GitHub repository.
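
The circular correlation operator at the heart of HolE, together with its FFT computation discussed in the links above, can be sketched in a few lines of numpy. The vectors and dimension below are arbitrary, and hole_score is an illustrative reading of the paper’s sigmoid scoring function, not the authors’ code:

```python
import numpy as np

def circular_correlation(a, b):
    """HolE's compositional operator: [a * b]_k = sum_i a_i * b_((i + k) mod d),
    computed in O(d log d) via the identity  a * b = IFFT( conj(FFT(a)) . FFT(b) )."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(e_s, e_o, r):
    """Triple plausibility, sketched as sigmoid(r . (e_s * e_o))."""
    return 1.0 / (1.0 + np.exp(-np.dot(r, circular_correlation(e_s, e_o))))

rng = np.random.default_rng(0)
e_s, e_o, r = (rng.standard_normal(8) for _ in range(3))

# Sanity check: the FFT form matches the naive O(d^2) definition.
naive = np.array([sum(e_s[i] * e_o[(i + k) % 8] for i in range(8)) for k in range(8)])
assert np.allclose(circular_correlation(e_s, e_o), naive)
print(hole_score(e_s, e_o, r))  # a probability-like score in (0, 1)
```

The FFT route is what makes the operator cheap despite capturing all pairwise interactions between the two embeddings.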

Knowledge graph embeddings have received significant attention due to their excellent performance for tasks like link prediction and entity resolution. In Complex and Holographic Embeddings of Knowledge Graphs: A Comparison (Jul 2017) Théo Trouillon (Université Grenoble Alpes) and Maximilian Nickel (Facebook AI Research | MIT) provided a comparison of two state of the art knowledge graph embeddings: ComplEx and HolE. They briefly reviewed both models, discussing how their scoring functions were equivalent, and then analyzed the discrepancy of results reported in the original articles – showing experimentally that they are likely due to the use of different loss functions. They also discussed advantages and disadvantages of both models and under which conditions one would be preferable to the other.

Critique: although they do not label the models in their Table 1, from their discussion of the loss functions, “Margin” refers to Nickel et al.’s HolE model, whereas “Neg-LL” refers to Trouillon et al.’s ComplEx model.

arxiv1707.01475-t1-labeled.png

[Image source. Click image to open in new window.]


  • Though similar, the filtered MRR value (0.541) that they report for HolE on the FB15k dataset in this 2017 paper differs from the value (0.524) that they reported in their earlier 2016 and 2017 papers.

  • Likewise, the filtered MRR value (0.639) that they report for ComplEx on the FB15k dataset in this 2017 paper is similar but differs from the value (0.692) that they reported in their earlier 2016 and 2017 papers.

Analogical Inference for Multi-Relational Embeddings (Jul 2017) [code] imposed analogical properties on embeddings. Each relation was represented as a normal matrix, and those matrices were constrained to satisfy commutativity properties. ANALOGY represented a novel framework for optimizing the latent representations with respect to the “analogical” properties of the embedded entities and relations for knowledge base completion. Analogical inference posited that if subsets of entities and relations were analogous in systems A and B, then the unobserved triples in B could be inferred by mirroring their counterparts in A. The proposed approach obtained state of the art results on two popular benchmark datasets, outperforming a large number of strong baselines in most cases.
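
A quick numerical check of the two constraints just described, using 2D rotation matrices – which are normal and mutually commuting, the kind of structure ANALOGY exploits (the matrices here are illustrative, not learned):

```python
import numpy as np

def rot(theta):
    """2D rotation matrix: orthogonal, hence normal (W W^T = W^T W)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

A, B = rot(0.3), rot(1.1)
assert np.allclose(A @ A.T, A.T @ A)   # normality constraint
assert np.allclose(A @ B, B @ A)       # commutativity: 2D rotations commute
```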

Related to analogical reasoning, note also that Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases described work that jointly learned concept vectors from a textual knowledge base (Wikipedia) and a graphical knowledge base (Probase), demonstrating superior results on analogical reasoning and concept categorization.

arxiv-1705.02426d.png

[Image source. Click image to open in new window.]


arxiv1705.02426-t3.png

[Image source. Click image to open in new window.]


Knowledge Graph Embedding with Iterative Guidance from Soft Rules (Nov 2017) [code] presented a novel approach to knowledge graph embedding (KGE) combined with guidance from soft logic rules, called Rule-Guided Embedding (RUGE), that provided state of the art results in KG link prediction (compared e.g. to ComplEx: their Table 3). This work builds on previous work by these authors, which provides additional background: Jointly Embedding Knowledge Graphs and Logical Rules (which introduced KALE, an approach that learned entity and relation embeddings by jointly modeling knowledge and logic), and Knowledge Base Completion using Embeddings and Rules (which employed an integer linear programming approach plus logic rules that leveraged prior knowledge in the KG to greatly reduce the solution space during inference, i.e. link prediction).

arxiv-1711.11231.png

[Image source. Click image to open in new window.]


arxiv1711.11231-t3.png

[Image source. Click image to open in new window.]


PredPath, described in Discriminative Predicate Path Mining for Fact Checking in Knowledge Graphs (Shi and Weninger, Apr 2016) [code], presented a discriminative path-based method for fact checking in KGs that incorporated connectivity, type information, and predicate interactions. Given a statement in the form of a (subject, predicate, object) triple – for example, (Chicago, capitalOf, Illinois) – their approach mined discriminative paths that alternatively defined the generalized statement (U.S. city, predicate, U.S. state), and used the mined rules to evaluate the veracity of that statement.

arxiv1510.05911-f1.png

[Image source. Click image to open in new window.]


arxiv-1510.05911.png

[Image source. Click image to open in new window.]


arxiv1510.05911-t1.png

[Image source. Click image to open in new window.]


Follow-on work by the PredPath authors, ProjE: Embedding Projection for Knowledge Graph Completion (Shi and Weninger, Nov 2016) [code] presented a shared variable neural network model that filled in missing information in a KG by learning joint embeddings of the KG entities and edges (collectively calculating the scores of all candidate triples). ProjE had a parameter size smaller than 11 out of 15 existing methods while performing 37% better than the current best method on standard datasets. They also showed, via a fact checking task, that ProjE was capable of accurately determining the veracity of many declarative statements.

arxiv-1611.05425.png

[Image source. Click image to open in new window.]


arxiv1611.05425-t2+t3.png

[Image source. Click image to open in new window.]


arxiv1611.05425-t4.png

[Image source. Click image to open in new window.]


ProjE authors Shi and Weninger also proposed a new KGC task, Open-World Knowledge Graph Completion (Nov 2017) [code]. As a first attempt to solve this task they introduced an open-world KGC model called ConMask, which learned embeddings of an entity’s name and parts of its text-description to connect unseen entities to the KG. To mitigate the presence of noisy text descriptions, ConMask used relationship-dependent content masking to extract relevant snippets, and then trained a CNN to fuse the extracted snippets with entities in the KG. Experiments on large data sets showed that ConMask performed well in the open-world KGC task, even outperforming existing KGC models on the standard closed-world KGC task.

arxiv1711.03438-f1.png

[Image source. Click image to open in new window.]


arxiv1711.03438-f2.png

[Image source. Click image to open in new window.]


arxiv1711.03438-f3.png

[Image source. Click image to open in new window.]


arxiv1711.03438-t3+t4.png

[Image source. Click image to open in new window.]


Convolutional 2D Knowledge Graph Embeddings (Jul 2018) [code] introduced ConvE, a deeper, multi-layer convolutional network model for link prediction. ConvE was highly parameter efficient, yielding the same performance as DistMult and R-GCN  [R-GCN is discussed below] with 8x and 17x fewer parameters.

“Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k.”
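
The test-set leakage finding quoted above can be illustrated with a toy rule-based predictor (all triples below are hypothetical): detect relation pairs that behave as inverses of each other in the training set, then answer test queries by looking the reversed edge up.

```python
# Training triples containing reciprocal relation pairs (hypernym/hyponym).
train = [("dog", "hypernym", "canine"), ("canine", "hyponym", "dog"),
         ("cat", "hypernym", "feline"), ("feline", "hyponym", "cat"),
         ("mouse", "hypernym", "rodent")]

def detect_inverses(triples):
    """Map each relation r to relations r' for which (h, r, t) and (t, r', h) co-occur."""
    inv = {}
    for h, r, t in triples:
        for h2, r2, t2 in triples:
            if r2 != r and (h2, t2) == (t, h):
                inv.setdefault(r, set()).add(r2)
    return inv

def predict(h, r, triples, inv):
    """Answer the query (h, r, ?) by following a detected inverse relation backwards."""
    for r_inv in inv.get(r, ()):
        for h2, r2, t2 in triples:
            if r2 == r_inv and t2 == h:
                return h2
    return None

inv = detect_inverses(train)
# The "test" query (rodent, hyponym, ?) never appears in training, yet its inverse does:
print(predict("rodent", "hyponym", train, inv))  # mouse
```

When reciprocal edges leak from training into test, this trivial lookup scores as well as a learned model – which is exactly why WN18RR and FB15k-237 were created.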

arxiv-1707.01476.png

[Image source. Click image to open in new window.]


arxiv1707.01476-t3+t4.png

[Image source. Click image to open in new window.]


arxiv1707.01476-t5.png

[Image source. Click image to open in new window.]


ConvE was the inspiration – by different authors – for HypER (Hypernetwork Knowledge Graph Embeddings) (Aug 2018) [code]. HypER used a hypernetwork architecture to generate convolutional layer filters specific to each relation, and applied those filters to the subject entity embeddings.

arxiv-1609.09106.png

[Image source. Click image to open in new window.]


arxiv-1609.09106b.png

[Image source  (local copy). Click image to open in new window.]


Their model, HypER, simplified the entity and relation embedding interactions introduced by ConvE while outperforming all previous approaches to link prediction across all standard link prediction datasets. HypER used a hypernetwork to generate weights of convolutional filters for each relation. In this work, they generated relation-specific weights to process input entities, and also obtained multi-task knowledge sharing across different relations in the knowledge graph.

Whereas ConvE had a global common set of 2-dimensional convolutional filters (suggesting the presence of a 2D structure in word embeddings) that reshaped and concatenated subject entity and relation embeddings, HypER used a hypernetwork to generate 1-dimensional relation-specific filters to process the unadjusted subject entity embeddings, thereby simplifying the interaction between subject entity and relation embeddings. Thus, instead of extracting a limited number of both entity-related and relation-related features from the dimensions around the concatenation point, HypER covered all subject entity embedding dimensions by sliding the relation-dependent convolutional filters over the whole entity embedding.

  • A hypernetwork is an approach of having one network generate weights for another network, which can be used to provide weight-sharing across layers and to dynamically synthesize weights based on an input. [See also HyperNetworks (Google Brain: Dec 2016).]
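
A minimal numpy sketch of the HypER idea described above: a hypernetwork produces relation-specific 1D convolutional filters that slide over the whole entity embedding. All dimensions and the single-linear-layer hypernetwork are assumptions for illustration, not the paper’s architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
d_e, d_r, n_f, f_len = 10, 4, 3, 3  # entity dim, relation dim, #filters, filter length (assumed)

# Hypernetwork: one linear layer mapping a relation embedding to flat filter weights.
H = rng.standard_normal((d_r, n_f * f_len)) * 0.1

def relation_filters(r_emb):
    """Generate relation-specific 1D convolutional filters from a relation embedding."""
    return (r_emb @ H).reshape(n_f, f_len)

def convolve_entity(e_emb, filters):
    """Slide each relation-dependent filter over the whole (unreshaped) entity embedding."""
    out_len = d_e - f_len + 1
    return np.array([[f @ e_emb[i:i + f_len] for i in range(out_len)] for f in filters])

r = rng.standard_normal(d_r)
e = rng.standard_normal(d_e)
fmaps = convolve_entity(e, relation_filters(r))
print(fmaps.shape)  # (3, 8): n_f feature maps of length d_e - f_len + 1
```

Because the filters are functions of the relation embedding, knowledge is shared across relations through the hypernetwork’s weights rather than through a single global filter bank.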

arxiv-1808.07018.png

[Image source  (see also). Click image to open in new window.]


arxiv1808.07018-t4+t5+t6.png

[Image source. Click image to open in new window.]


arxiv1808.07018-t7+t8+t9+t10.png

[Image source. Click image to open in new window.]


Link Prediction using Embedded Knowledge Graphs (Nov 2016; updated Apr 2018) by Microsoft Research and Google Research addressed the task of knowledge base completion by performing a single, short sequence of interactive lookup operations on an embedded knowledge graph which had been trained through end-to-end backpropagation to be an optimized and compressed version of the initial knowledge base. Their proposed model, Embedded Knowledge Graph Network (EKGN), achieved state of the art results on popular knowledge base completion benchmarks.

arxiv1611.04642-f1.png

[Image source. Click image to open in new window.]


arxiv1611.04642-f2.png

[Image source. Click image to open in new window.]


arxiv1611.04642-f3.png

[Image source. Click image to open in new window.]


arxiv1611.04642-t1.png

[Image source. Click image to open in new window.]


arxiv1611.04642-t2.png

[Image source. Click image to open in new window.]


In August 2018, A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization introduced CapsE, a new model for knowledge graph completion. CapsE employed a capsule network to model (subject, relation, object) relationship triples, obtaining state of the art link prediction results on two benchmark datasets: WN18RR and FB15k-237. Comparing Table 2 in that paper to Table 4 in On Link Prediction in Knowledge Bases: Max-K Criterion and Prediction Protocols appears to confirm CapsE as a state of the art model – the first to apply capsule networks to knowledge graph completion and search personalization.

arxiv-1808.04122.png

[Image source. Click image to open in new window.]


arxiv1808.04122-t2.png

[Image source. Click image to open in new window.]


Predicting Semantic Relations using Global Graph Properties (Aug 2018) [code] combined global and local properties of semantic graphs through the framework of Max-Margin Markov Graph Models (M3GM), a novel extension of Exponential Random Graph Model (ERGM) that scales to large multi-relational graphs. They demonstrated how such global modeling improves performance on the local task of predicting semantic relations between synsets, yielding new state of the art results on the WN18RR dataset, a challenging version of WordNet link prediction in which “easy” reciprocal cases were removed. In addition, the M3GM model identified multirelational motifs that were characteristic of well-formed lexical semantic ontologies.

arxiv1808.08644-f1.png

[Image source. Click image to open in new window.]


arxiv1808.08644-t1+t2.png

[Image source. Click image to open in new window.]


One-Shot Relational Learning for Knowledge Graphs (Aug 2018) [code] by the University of California (Santa Barbara) and IBM Research observed that long-tail relations – relations with very few instances – are common in KGs, and that newly added relations often do not have many known triples for training. In this work, the authors aimed at predicting new facts under a challenging setting where only one training instance was available. They proposed a one-shot relational learning framework, which utilized the knowledge extracted by embedding models and learned a matching metric that considered both the learned embeddings and one-hop graph structures. Empirically, their model yielded considerable performance improvements over existing embedding models, and also eliminated the need to retrain the embedding models when dealing with newly added relations. These authors also prepared two new datasets for their work.

[paraphrased:] “Existing benchmarks for knowledge graph completion, such as YAGO3-10, are small subsets of real-world KGs. These datasets consider the same set of relations during training and testing and often include sufficient training triples for every relation. To construct datasets for one-shot learning, we go back to the original KGs and select those relations that do not have too many triples as one-shot task relations. We refer to the rest of the relations as background relations, since their triples provide important background knowledge for us to match entity pairs. Our first dataset is based on NELL, a system that continuously collects structured knowledge by reading the web. We take the latest dump and remove the inverse relations. We select the relations with fewer than 500 but more than 50 triples as one-shot tasks. To show that our model is able to operate on large-scale KGs, we follow a similar process to build another, larger dataset based on Wikidata. The dataset statistics are shown in Table 1. Note that the Wiki-One dataset is an order of magnitude larger than any other benchmark dataset in terms of the numbers of entities and triples.”

arxiv1808.09040-f1.png

[Image source. Click image to open in new window.]


arxiv1808.09040-f2.png

[Image source. Click image to open in new window.]


arxiv1808.09040-t2.png

[Image source. Click image to open in new window.]


Most existing knowledge graph completion methods either focus on the positional relationship between an entity pair and a single relation (1-hop path) in semantic space, or concentrate on the joint probability of random walks over multi-hop paths among entities; they do not fully consider the intrinsic relationships among all the links between entities. Observing that the single relation and the multi-hop paths between the same entity pair generally carry shared or similar semantic information, Hierarchical Attention Networks for Knowledge Base Completion via Joint Adversarial Training (Oct 2018) proposed a novel method for KB completion that captured the features shared by different data sources, utilizing hierarchical attention networks (HAN) and adversarial training (AT). The joint adversarial training used a gradient reversal layer (GRL) to reverse the backpropagation gradient, allowing the feature extractor to extract the features shared between different data sources. The HANs automatically encoded the inputs into low-dimensional vectors and exploited two partially parameter-shared components: one for feature source discrimination, and the other for determining missing relations. By adversarially training the entire model jointly, their method minimized the classification error of missing relations; the AT mechanism encouraged the model to extract features that were both discriminative for missing relation prediction and shareable between single relations and multi-hop paths.

arxiv1810.06033-fig1.png

[Image source. Click image to open in new window.]


arxiv1810.06033-fig2.png

[Image source. Click image to open in new window.]


arxiv1810.06033-table2.png

[Image source. Click image to open in new window.]


Hierarchical Attention Networks for Document Classification also employed a hierarchical attention network.




My Table 1, below, summarizes MRR scores for link prediction for selected, best-performing embedding models ca. mid-2018. Around that time, ComplEx was employed by Facebook AI Research as the state of the art for comparison of various models in Canonical Tensor Decomposition for Knowledge Base Completion  [code], which framed knowledge base completion as a 3rd-order binary tensor completion problem.

Table 1. Link prediction results: comparison of various embedding models (filtered MRR metric1)
General notes.
1. This list is current to approximately late August, 2018.
2. I did not exhaustively search the literature, only the papers mentioned in this subsection.
3. From those papers, I selected ("cherry picked") the data from some of the better-performing models discussed in this subsection, and from those again selected those models that are discussed herein. I also only collected/reported the MRR scores, as I wanted an uncluttered comparison of those models.
4. References cited may not be the primary source (refer to the references cited therein).
Table footnotes.
1 Filtered MRR: mean reciprocal rank (MRR) computed under the “filtered” protocol, in which all other triples known to be true are removed from the candidate list before ranking the test triple.
2 Datasets:

    WN18 is a subset of WordNet which consists of 18 relations and 40,943 entities ("generic facts"). Most of the 151,442 triples consist of hyponym and hypernym relations and, for such a reason, WN18 tends to follow a strictly hierarchical structure.

    WN18RR corrects flaws in WN18:

      "WN18RR reclaims WN18 as a dataset, which cannot easily be completed using a single rule, but requires modeling of the complete knowledge graph. WN18RR contains 93,003 triples with 40,943 entities and 11 relations. For future research, we recommend against using FB15k and WN18 and instead recommend FB15k-237, WN18RR, and YAGO3-10."

      "A popular relation prediction dataset for WordNet is the subset curated as WN18, containing 18 relations for about 41,000 synsets extracted from WordNet 3.0. It has been noted that this dataset suffers from considerable leakage: edges from reciprocal relations such as hypernym/hyponym appear in one direction in the training set and in the opposite direction in dev/test. This allows trivial rule-based baselines to achieve high performance. To alleviate this concern, Dettmers et al. (2018) released the WN18RR set, removing seven relations altogether. However, even this dataset retains four symmetric relation types: 'also see', 'derivationally related form', 'similar to', and 'verb group'. These symmetric relations can be exploited by defaulting to a simple rule-based predictor." [Source: Section 4.1 in Predicting Semantic Relations using Global Graph Properties; references therein.]
    FB15k is a subset of Freebase which contains about 14,951 entities with 1,345 different relations. A large fraction of content in this knowledge graph describes facts about movies, actors, awards, sports, and sport teams.

    FB15k-237  (see also), which corrects errors in FB15k, contains about 14,541 entities with 237 different relations. "This dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. The knowledge base triples are a subset of the FB15k set. The textual mentions are derived from 200 million sentences from the ClueWeb12 corpus coupled with FACC1 Freebase entity mention annotations."

    YAGO3-10 is a subset of YAGO3 which consists of entities which have a minimum of 10 relations each. It has 123,182 entities and 37 relations. Most of the triples deal with descriptive attributes of people, such as citizenship, gender, and profession.

    YAGO37 is extracted from the core facts of YAGO3, containing 37 relations and 123,189 entities. This dataset was created by the RUGE authors: "The FB15k data set consists of 1,345 relations and 14,951 entities among them. The training set contains 483,142 triples, the validation set 50,000 triples, and the test set 59,071 triples. 454 rules are created for FB15k. The YAGO37 data set consists of 37 relations and 123,189 entities among them. The training set contains 989,132 triples, the validation set 50,000 triples, and the test set 50,000 triples. 16 rules are created for YAGO37. All triples are unique and we made sure that all entities/relations appearing in the validation or test sets were occurring in the training set."
Dataset2
Model [ref] FB15k FB15k-237 WN18 WN18RR YAGO3-10 YAGO37
ANALOGY [1] 0.723 0.211 0.942 0.391 0.257
ANALOGY [7] 0.725 0.942
CapsE [8] 0.538 0.391
ComplEx [1] 0.716 0.206 0.942 0.390 0.266
ComplEx [10] 0.692 0.247 0.941 0.440 0.340
ComplEx [11] 0.69 0.240 0.941 0.444
ComplEx [5] 0.692 0.941
ComplEx [6] 0.639 0.941
ComplEx-NNE+AER [3] 0.803 0.943
ConvE [10] 0.657 0.325 0.943 0.430 0.440
ConvE [4] 0.745 0.942
ConvE [8] 0.316 0.460
ConvE [9] 0.301 0.342
ConvE [11] 0.745 0.301 0.942 0.342
DistMult [10] 0.654 0.241 0.822 0.430 0.340
DistMult [11] 0.35 0.241 0.83 0.425
DistMult [3] 0.644 0.365
DistMult [5] 0.654 0.822
HolE [6] 0.541 0.938
HolE [5] 0.524 0.938
HypER [2] 0.762 0.341 0.951 0.463
M3GM [12] 0.4983
ProjE [1] 0.588 0.249 0.820 0.367 0.470
R-GCN [10] 0.696 0.248 0.814
R-GCN+ [9] 0.696 0.249 0.819
RUGE [3] 0.768 0.431
TransE [1] 0.456 0.219 0.584 0.191 0.151
TransE [3] 0.400 0.303
TransE [5] 0.380 0.454
TransE [8] 0.294 0.226
TransE [9] 0.463 0.294 0.495 0.226
TransE [12] 0.4659
TransF [11] 0.564 0.286 0.856 0.505

From the table above we can see that there is considerable variation among the filtered MRR for link prediction by the various models on the specified datasets. Per the footnotes in that table – although there are fewer comparisons at present – we should probably assign greater performance to models that score well on the more challenging datasets (e.g., FB15k-237, WN18RR and YAGO3-10).
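
The filtered MRR metric reported throughout the table can be made concrete with a small sketch (the triples, entities, and scoring function below are all hypothetical):

```python
def filtered_mrr(test_triples, all_true, entities, score):
    """For each test triple (h, r, t), rank t among candidate tails after removing
    ("filtering") every other entity that also forms a known-true triple with (h, r)."""
    rr_sum = 0.0
    for h, r, t in test_triples:
        candidates = [e for e in entities if e == t or (h, r, e) not in all_true]
        ranked = sorted(candidates, key=lambda e: -score(h, r, e))
        rr_sum += 1.0 / (ranked.index(t) + 1)
    return rr_sum / len(test_triples)

# Toy scorer: the distractor "c" outscores the correct tail "b", but "c" is itself
# a known-true tail, so the filtered protocol removes it from the ranking.
truth = {("a", "likes", "b"), ("a", "likes", "c")}
scores = {("a", "likes", "b"): 0.9, ("a", "likes", "c"): 0.95, ("a", "likes", "d"): 0.5}
mrr = filtered_mrr([("a", "likes", "b")], truth, ["b", "c", "d"], lambda *k: scores[k])
print(mrr)  # 1.0: with "c" filtered out, "b" ranks first
```

Without filtering, the same query would score 0.5 (rank 2), which is why raw and filtered MRR values for the same model can differ substantially.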

Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult etc. to the current state of the art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure.

End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion (Nov 2018) proposed a novel end-to-end Structure-Aware Convolutional Network (SACN) that combined the benefits of GCN and ConvE. SACN consisted of an encoder, a weighted graph convolutional network (WGCN), and a decoder, a convolutional network called Conv-TransE. The WGCN utilized knowledge graph node structure, node attributes and relation types; it had learnable weights that collected adaptive amounts of information from neighboring graph nodes, resulting in more accurate embeddings of graph nodes. In addition, node attributes were added as nodes and were easily integrated into the WGCN. The decoder, Conv-TransE, extended ConvE to be translational between entities and relations while retaining ConvE’s state of the art performance. They demonstrated the effectiveness of the proposed SACN model on the standard FB15k-237 and WN18RR datasets, presenting about a 10% relative improvement over the state of the art ConvE in terms of HITS@1, HITS@3 and HITS@10.

arxiv1811.04441-f1.png

[Image source. Click image to open in new window.]


arxiv1811.04441-f2.png

[Image source. Click image to open in new window.]


arxiv1811.04441-t3.png

[Image source. Click image to open in new window.]


In an interesting approach, Finding Streams in Knowledge Graphs to Support Fact Checking (Aug 2017) [code] viewed a knowledge graph as a “flow network” and knowledge as a fluid, abstract commodity. They showed that computational fact checking of a (subject, predicate, object) triple then amounts to finding a “knowledge stream” that emanates from the subject node and flows toward the object node through paths connecting them. Evaluations of their models (KS: Knowledge Stream and KL-REL: Relational Knowledge Linker) revealed that this network-flow model was very effective in discerning true statements from false ones, outperforming existing algorithms on many test cases. Moreover, the model was expressive in its ability to automatically discover useful path patterns and relevant facts that may help human fact checkers corroborate or refute a claim.
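
The “knowledge stream” idea can be sketched as a max-flow computation over a toy graph (the mini-KG and unit capacities below are hypothetical; the paper derives its edge capacities and path costs from the underlying KG, which is not modeled here):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow: the total 'knowledge' that can stream from s to t."""
    res = {}                                   # residual capacities
    for (u, v), c in cap.items():
        res.setdefault(u, {})[v] = c
        res.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent, q = {s: None}, deque([s])      # BFS for an augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v, c in res.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t                        # recover the path, find its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][w] for u, w in path)
        for u, w in path:                      # push flow along the path
            res[u][w] -= b
            res[w][u] += b
        total += b

# Hypothetical mini-KG around the claim (Chicago, capitalOf, Illinois):
# two disjoint unit-capacity paths connect subject and object.
cap = {("Chicago", "Cook County"): 1, ("Cook County", "Illinois"): 1,
       ("Chicago", "Lake Michigan"): 1, ("Lake Michigan", "Illinois"): 1}
print(max_flow(cap, "Chicago", "Illinois"))  # 2
```

In this flow view, a larger total flow (more, and more reliable, disjoint paths) lends more support to the claim being checked.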

arxiv1708.07239-f5.png

[Image source. Click image to open in new window.]


arxiv1708.07239-t2+t3.png

[Image source. Click image to open in new window.]


arxiv1708.07239-t4.png

[Image source. Click image to open in new window.]


Predictive Network Representation Learning for Link Prediction (2017) [pdf] addressed structural link prediction, which infers missing links in a static network. Their proposed model, Predictive Network Representation Learning (PNRL), defined two learning objectives: observed structure preservation, and hidden link prediction. [Network representation learning models learn latent representations of nodes, which can embed rich structural information into the latent space; most of these models are learned in an unsupervised manner.] To integrate the two objectives in a unified model, they developed an effective sampling strategy that selects certain edges in a given network as assumed hidden links and regards the rest of the network structure as observed during training. By jointly optimizing the two objectives, the model not only enhanced the predictive ability of the node representations but also learned additional link prediction knowledge in the representation space.

Wang2017predictive-f1.png

[Image source. Click image to open in new window.]


Wang2017predictive-t2.png

[Image source. Click image to open in new window.]


The use of drug combinations, termed polypharmacy, is common for treating patients with complex diseases or co-existing conditions. However, a major consequence of polypharmacy is a much higher risk of adverse side effects for the patient. Polypharmacy side effects may emerge because of drug-drug interactions, in which activity of one drug may change favorably or unfavorably if taken with another drug. The knowledge of drug interactions is often limited because these complex relationships are rare, and are usually not observed in relatively small clinical testing. Discovering polypharmacy side effects thus remains an important challenge with significant implications for patient mortality and morbidity.

Modeling Polypharmacy Side effects with Graph Convolutional Networks (Jul 2018) [project  (code/datasets);  discussion] – by Jure Leskovec and colleagues at Stanford University – presented Decagon, an approach for modeling polypharmacy side effects. The approach constructed a multimodal graph of protein-protein interactions, drug-protein target interactions and the polypharmacy side effects, which were represented as drug-drug interactions, where each side effect was an edge of a different type. Decagon was developed specifically to handle multimodal graphs with a large number of edge types. Their approach developed a new graph convolutional neural network for multirelational link prediction in multimodal networks. Unlike approaches limited to predicting simple drug-drug interaction values, Decagon could predict the exact side effect, if any, through which a given drug combination manifests clinically.

Decagon accurately predicted polypharmacy side effects, outperforming baselines by up to 69%. They found that it automatically learned representations of side effects indicative of co-occurrence of polypharmacy in patients. Furthermore, Decagon modeled particularly well polypharmacy side effects that had a strong molecular basis, while on predominantly non-molecular side effects it achieved good performance because of effective sharing of model parameters across edge types. Decagon opens up opportunities to use large pharmacogenomic and patient population data to flag and prioritize polypharmacy side effects for follow-up analysis via formal pharmacological studies.

Decagon2018-f1.png

[Image source. Click image to open in new window.]


Decagon2018-GCN.png

[Image source. Click image to open in new window.]


Decagon2018-encoder.png

[Image source. Click image to open in new window.]


Decagon2018-f3.png

[Image source. Click image to open in new window.]


Decagon2018-f4.png

[Image source. Click image to open in new window.]


Current KG completion models require two-thirds of a triple to be provided (e.g., subject and relation) in order to predict the remaining element. DSKG: A Deep Sequential Model for Knowledge Graph Completion (Oct 2018) [code] proposed a new model which used a KG-specific multi-layer recurrent neural network (RNN) to model triples in a KG as sequences. DSKG outperformed several state of the art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets (FB15k, WN18) and a more difficult dataset (FB15k-237). Furthermore, their model was capable of predicting entire triples given only one entity.
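
The “given only one entity” capability can be illustrated by treating each triple as a sequence [subject, relation, object] and predicting the remaining elements step by step. The toy count-based sequence model below stands in for the paper’s RNN, and the triples are hypothetical:

```python
from collections import Counter, defaultdict

triples = [("paris", "capital_of", "france"), ("paris", "located_in", "europe"),
           ("berlin", "capital_of", "germany")]

# Count-based next-token model over triple "sequences" [head, relation, tail].
nxt = defaultdict(Counter)
for h, r, t in triples:
    nxt[(h,)][r] += 1        # head -> relation statistics
    nxt[(h, r)][t] += 1      # (head, relation) -> tail statistics

def generate(h):
    """Given only the head entity, predict a (relation, tail) continuation."""
    r = nxt[(h,)].most_common(1)[0][0]
    t = nxt[(h, r)].most_common(1)[0][0]
    return h, r, t

print(generate("berlin"))  # ('berlin', 'capital_of', 'germany')
```

A sequential model like DSKG does the same two-step decoding, but with learned embeddings and an RNN in place of raw counts.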

arxiv1810.12582-f1.png

[Image source. Click image to open in new window.]


arxiv1810.12582-t1.png

[Image source. Click image to open in new window.]


arxiv1810.12582-t2+t3.png

[Image source. Click image to open in new window.]


Many knowledge graph embedding methods operate on triples and are therefore implicitly limited by a very local view of the entire knowledge graph. MOHONE: Modeling Higher Order Network Effects in Knowledge Graphs via Network Infused Embeddings (Nov 2018) presented a new framework, MOHONE, to effectively model higher order network effects in knowledge graphs, thus enabling one to capture varying degrees of network connectivity (from the local to the global). Their framework was generic, explicitly modeled the network scale, and captured two different aspects of similarity in networks: (a) shared local neighborhood, and (b) structural role-based similarity. They first introduced methods that learned network representations of entities in the knowledge graph, capturing these varied aspects of similarity. They then proposed a fast, efficient method to incorporate the information captured by these network representations into existing knowledge graph embeddings. Their method consistently and significantly improved link prediction performance for several different knowledge graph embedding methods, including TransE, TransD, DistMult, and ComplEx (by at least 4 points, or 17% in some cases).

For a related paper by the same authors contemporaneously released on arXiv, see *DOLORES*: Deep Contextualized Knowledge Graph Embeddings.

arxiv1811.00198-f1.png

[Image source. Click image to open in new window.]


arxiv1811.00198-f2+f3.png

[Image source. Click image to open in new window.]


arxiv1811.00198-t2+t3.png

[Image source. Click image to open in new window.]


An interesting approach to KG construction involves the Biological Expression Language (BEL), which represents findings in the life sciences in a computable form. Biological statements in BEL are represented as (subject, predicate, object) triples, where the subject is a BEL term and the predicate is a biological relationship that connects the subject with the object (which can be a BEL term or a BEL statement). Hence, the knowledge captured by BEL statements can be represented as a graph. A BEL term represents the abundance of a biological entity, or a biological process such as a Gene Ontology entry or a disease. Each entity is described in an existing namespace (describing, for example, its source database, e.g. ChEBI), or a user-defined namespace. A fixed set of causal, correlative and other relationships link the entities: each statement can be associated with metadata that contextualizes it, for example qualifying it to be true only in specific tissues.

While BEL is an active research area with Tracks in the BioCreative VI Community Challenges (see also) and preceding BioCreative Tracks, the approach appears to have gained little traction outside that community. For example, only a handful of BEL-related papers in PubMed are not focused on the development of BEL itself. For more information on BEL, see:

  • Training and Evaluation Corpora for the Extraction of Causal Relationships Encoded in Biological Expression Language (BEL) (2016), which described the corpus prepared for the BioCreative V BEL track and also provides an excellent review of the Biological Expression Language:

    PMC4995071-f1.png

    [Image source. Click image to open in new window.]


  • Saved by the BEL: Ringing in a Common Language for the Life Sciences (2012):

    BEL2012-f2.png

    [Image source. Click image to open in new window.]


    BEL2012-f3.png

    [Image source. Click image to open in new window.]


  • BEL 2.0 Specification, home of the BEL Language Documentation v2.0;
  • OpenBEL;
  • PyBEL Python ecosystem;

  • Navigating the Disease Landscape: Knowledge Representations for Contextualizing Molecular Signatures (Apr 2018): the “Contextualization by Knowledge Graph Representations” subsection, pp. 8-9, summarizes some biomedical applications of BEL:

    Saqi2018-f1.png

    [Image source. Click image to open in new window.]


    Saqi2018-f2.png

    [Image source. Click image to open in new window.]


    Saqi2018-f3.png

    [Image source. Click image to open in new window.]


    Saqi2018-f4.png

    [Image source. Click image to open in new window.]


  • The Causal Biological Network database (Causal Biological Network Database: A Comprehensive Platform of Causal Biological Network Models Focused on the Pulmonary and Vascular Systems (2015)), “… a set of biological network models scripted in BEL that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, composed of over 120 manually curated and well annotated biological network models supported by over 80,000 unique pieces of evidence from the scientific literature.”

    PMC4401337-f2.png

    [Image source. Click image to open in new window.]


    PMC4401337-f3.png

    [Image source. Click image to open in new window.]


[Table of Contents]

Knowledge Discovery

KGs are ideally suited to knowledge discovery. Notable examples in the biomedical domain include the following papers.

In Exploiting Semantic Patterns Over Biomedical Knowledge Graphs for Predicting Treatment and Causative Relations (Jun 2018) [pdf], the authors first built a large knowledge graph of biomedical relations obtained from the National Library of Medicine (NLM)’s Unified Medical Language System Metathesaurus (UMLS). They then took a different approach to predicting potential relations between arbitrary pairs of biomedical entities. They refrained from NLP approaches that looked at individual sentences to extract a potential relation, instead exploiting semantic path patterns over this graph to build models for specific predicates. Instead of looking at what a particular sentence conveys, they modeled their prediction problem at a global level, outputting probability estimates of whether a pair of entities participated in a particular relation. A different binary classification model was trained for each predicate. While the approach was demonstrated using the “TREATS” and “CAUSES” predicates drawn from the UMLS Metathesaurus SemMedDB (Semantic Medline Database), their method also generalized to other predicates (such as “DISRUPTS” and “PREVENTS”), and could also complement other lexical and syntactic pattern-based distant supervision approaches for relation extraction.
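The path-pattern idea can be illustrated with a toy labeled graph: enumerate the relation-label sequences of all short paths between an entity pair and count them as features for a per-predicate classifier. The triples and relation names below are invented for illustration, not drawn from UMLS:

```python
from collections import Counter, defaultdict

# Toy labeled digraph of (head, relation, tail) triples (assumption: illustrative only).
triples = [
    ("aspirin", "ISA", "nsaid"),
    ("nsaid", "TREATS", "inflammation"),
    ("aspirin", "INTERACTS_WITH", "ibuprofen"),
    ("ibuprofen", "TREATS", "inflammation"),
]
adj = defaultdict(list)
for h, r, t in triples:
    adj[h].append((r, t))

def path_patterns(src, dst, max_len=3):
    """Count the relation-label sequence of every path src -> dst up to max_len hops;
    these counts become the feature vector for a binary predicate classifier."""
    patterns = Counter()
    stack = [(src, ())]
    while stack:
        node, rels = stack.pop()
        if node == dst and rels:
            patterns[rels] += 1
        if len(rels) < max_len:
            for r, nxt in adj[node]:
                stack.append((nxt, rels + (r,)))
    return patterns

feats = path_patterns("aspirin", "inflammation")
```

Such pattern counts operate at the graph level rather than the sentence level, matching the paper's global view of relation prediction.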

PMC6070294-excerpt.png

[Image source. Click image to open in new window.]


PMC6070294-f1+f2.png

[Image source. Click image to open in new window.]


PMC6070294-f3.png

[Image source. Click image to open in new window.]


PMC6070294-f4.png

[Image source. Click image to open in new window.]


MOLIERE: Automatic Biomedical Hypothesis Generation System (May 2017) [project  |  code] is a system that can identify connections within the biomedical literature. MOLIERE utilized a multi-modal/multi-relational network of biomedical objects (papers; keywords; genes; proteins; diseases; diagnoses) extracted from PubMed. MOLIERE found the shortest path between two query keywords in the KG, and extended this path to identify a significant set of related abstracts (which, due to the network construction process, share common topics). Topic modeling, performed on these documents using PLDA+, returned a set of plain text topics representing concepts that likely connect the queried keywords, supporting hypothesis generation (for example, on historical findings MOLIERE showed the implicit link between Venlafaxine and HTR1A, and the involvement of DDX3 in Wnt signaling).
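The shortest-path step at the core of MOLIERE can be sketched with plain BFS over a toy multi-modal network; the node names and edges below are invented, whereas the real system operates on a network built from PubMed:

```python
from collections import deque

# Toy multi-modal network mixing keyword, paper and gene nodes (assumption: illustrative).
edges = {
    "kw:venlafaxine": ["paper:p1"],
    "paper:p1": ["kw:venlafaxine", "gene:HTR1A", "paper:p2"],
    "gene:HTR1A": ["paper:p1"],
    "paper:p2": ["paper:p1", "kw:wnt_signaling"],
    "kw:wnt_signaling": ["paper:p2"],
}

def shortest_path(graph, src, dst):
    """Plain BFS shortest path; MOLIERE then expands such a path into a
    cloud of nearby abstracts before running topic modeling."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                q.append(nxt)
    return None

path = shortest_path(edges, "kw:venlafaxine", "gene:HTR1A")
```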

arxiv1702.06176-f3descr.png

[Image source. Click image to open in new window.]


arxiv1702.06176-excerpt.png

[Image source. Click image to open in new window.]


arxiv1702.06176-f4.png

[Image source. Click image to open in new window.]


  • A: abstract layer; K: keyword layer:

    “In order to create edges between A and K, we used a simple metric of term frequency-inverse document frequency (TF-IDF). UMLS provides not only a list of keywords, but all known synonyms for each keyword. For example, the keyword Color C0009393 has the American spelling, the British spelling, and the pluralization of both defined as synonyms. Therefore we used the raw text abstracts and titles (before running the SPECIALIST NLP tools) to calculate tf-idf. In order to quickly count all occurrences of UMLS keywords across all synonyms, we implemented a simple parser. This was especially important because many keywords in UMLS are actually multi-word phrases such as ‘Clustered Regularly Interspaced Short Palindromic Repeats’ (a.k.a. CRISPR) C3658200 . …“
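The synonym-aware counting described in the excerpt can be sketched as follows; the concept ids, synonym lists and abstracts are toy stand-ins, and a simple regex scan replaces the authors' parser:

```python
import math
import re

# Toy synonym dictionary keyed by UMLS-style concept ids (assumption: invented data).
synonyms = {
    "C0009393": ["color", "colour", "colors", "colours"],
    "C3658200": ["clustered regularly interspaced short palindromic repeats", "crispr"],
}
abstracts = [
    "CRISPR screens reveal color vision genes.",
    "The colour of the assay changed.",
    "No relevant keywords here.",
]

def keyword_tf_idf(docs, syns):
    """Count a concept as occurring when any of its synonyms matches,
    then weight the per-document counts by inverse document frequency."""
    texts = [d.lower() for d in docs]
    tfidf = {}
    for concept, words in syns.items():
        pattern = "|".join(re.escape(w) for w in words)
        counts = [len(re.findall(pattern, t)) for t in texts]
        df = sum(1 for c in counts if c > 0)
        idf = math.log(len(docs) / df) if df else 0.0
        tfidf[concept] = [c * idf for c in counts]
    return tfidf

weights = keyword_tf_idf(abstracts, synonyms)
```

Treating all synonyms (including spelling variants) as one concept is what lets a single edge weight connect an abstract to a UMLS keyword node.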

arxiv1702.06176-f5+f6.png

[Image source. Click image to open in new window.]


The task of entity relation extraction discovers new relation facts and enables broader applications of knowledge graphs. Attention-Aware Path-Based Relation Extraction for Medical Knowledge Graph (Jan 2018) proposed a novel path attention-based model (GRU + ONE) to discover new entity relation facts. Instead of finding texts for relation extraction, the proposed method extracted path-only information for entity pairs from the knowledge graph. For each pair of entities, multiple paths could be extracted, with some of them being more useful than others for relation extraction. They employed an attention mechanism to assign different weights to different paths, highlighting the paths useful for entity relation extraction. On a large medical knowledge graph, the proposed method significantly improved the accuracy of extracted relation facts compared with state of the art relation extraction methods.
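The attention step can be sketched in a few lines: given several encoded paths for one entity pair, softmax-normalized scores against a query vector weight the paths before pooling. The path encodings and query below are random stand-ins for, e.g., GRU outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
# Toy encodings of three paths between one entity pair (assumption: e.g. GRU final states).
paths = rng.normal(size=(3, d))
query = rng.normal(size=d)  # relation-specific query vector (assumption)

def attend(path_vecs, q):
    """Weight the paths by a softmax of their dot product with the query,
    then pool them into a single representation for relation classification."""
    scores = path_vecs @ q
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ path_vecs

w, pooled = attend(paths, query)
```

The learned weights let informative paths dominate the pooled representation while uninformative ones are down-weighted rather than discarded.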

wen2018attention-t1+t2+t2descr.png

[Image source. Click image to open in new window.]


wen2018attention-case_study.png

[Image source. Click image to open in new window.]


Distilled Wasserstein Learning for Word Embedding and Topic Modeling (Sep 2018) proposed a novel Wasserstein method with a distillation mechanism, yielding joint learning of word embeddings and topics. The proposed method – distilled Wasserstein learning (DWL) – was based on the fact that the Euclidean distance between word embeddings may be employed as the underlying distance in the Wasserstein topic model. The word distributions of topics, their optimal transports to the word distributions of documents, and the embeddings of words were learned in a unified framework. Evaluated on patient admission records, the proposed method embedded disease codes and procedures and learned the admission topics, obtaining superior performance on clinically-meaningful disease network construction, mortality prediction as a function of admission codes, and procedure recommendation.
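The role of the embedding-based ground cost can be sketched with entropy-regularized optimal transport (Sinkhorn iterations) between a toy topic word distribution and a toy document word distribution. The embeddings are random stand-ins, and Sinkhorn is a standard approximation rather than the paper's full DWL training procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
topic_words = rng.normal(size=(3, d))  # embeddings of a topic's words (assumption)
doc_words = rng.normal(size=(5, d))    # embeddings of a document's words (assumption)
p = np.full(3, 1 / 3)                  # topic word distribution
q = np.full(5, 1 / 5)                  # document word distribution

# Ground cost: Euclidean distance between word embeddings, as in the DWL setup.
C = np.linalg.norm(topic_words[:, None, :] - doc_words[None, :, :], axis=-1)

def sinkhorn(C, p, q, eps=0.2, iters=500):
    """Entropy-regularized optimal transport via Sinkhorn's alternating scaling."""
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

T = sinkhorn(C, p, q)
cost = float((T * C).sum())  # approximate Wasserstein distance
```

Because the cost matrix is built from word embeddings, optimizing the transport jointly shapes the embeddings and the topic assignments, which is the coupling DWL exploits.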

arXiv-1809.04705.png

[Image source. Click image to open in new window.]


arxiv1809.04705-t1+t2.png

[Image source. Click image to open in new window.]


arxiv1809.04705-f2+t3.png

[Image source. Click image to open in new window.]


arxiv1809.04705-f4.png

[Image source. Click image to open in new window.]


Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases (Aug 2018) [code/data (the URL for the code is broken)] described work that jointly learned concept vectors from a textual knowledge base (Wikipedia) and a graphical knowledge base (Probase), demonstrating superior results (their Table 1) on analogical reasoning and concept categorization. The authors employed the skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph, in an unsupervised approach to argument-type identification for neural semantic parsing.

Recall that:

  • the QA4IE question answering framework likewise processed input documents, along with a knowledge base (the Wikipedia Ontology), to produce high quality relation triples, and

  • ANALOGY also employed analogical inference (for link prediction).

arxiv1801.00388-f1.png

[Image source. Click image to open in new window.]


arxiv1801.00388-t1+t2.png

[Image source. Click image to open in new window.]


In Learning a Health Knowledge Graph from Electronic Medical Records (Jul 2017) [data], maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier, and a Bayesian network using noisy-OR gates. Logistic regression, widely used for binary classification, was chosen as an example of a well-established machine learning classifier with interpretable parameters which is frequently used for modeling binary outcomes. Naive Bayes was chosen as it provides a baseline of what can be inferred from simple pairwise co-occurrences. Noisy-OR was chosen as an example of a probabilistic model that jointly models diseases and symptoms; similar models have successfully been used in previous medical diagnosis applications. Parameters for all three models were learned using maximum likelihood estimation. A graph of disease-symptom relationships was elicited from the learned parameters, and the constructed knowledge graphs were evaluated and validated, with permission, against Google’s manually-constructed knowledge graph and against expert physician opinions. The noisy-OR model significantly outperformed the other tested models, producing a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation.

PMC5519723-f1.png

[Image source. Click image to open in new window.]


PMC5519723-f2.png

[Image source. Click image to open in new window.]


PMC5519723-t1.png

[Image source. Click image to open in new window.]


PMC5519723-t3.png

[Image source. Click image to open in new window.]


  • Careful reading of that paper and its Appendix, which describes the assumptions associated with each of those three models, clarifies the approach. Diseases (and their effects: symptoms) were separately modeled by logistic regression and naive Bayes; additionally, in naive Bayes, all symptoms were assumed to be conditionally independent of one another. However, the noisy-OR model jointly models all diseases and symptoms [wherein the central assumption of independence of effects (symptoms) is maintained]. Noisy-OR is a conditional probability distribution that describes the causal mechanisms by which parent nodes (e.g. diseases) affect the states of child nodes (e.g. symptoms). For example, given that patients tend to present with few diseases, the presence of one disease typically lowers the probability of others. In the logistic regression and naive Bayes models, however, since each disease is modeled separately (resulting in an assumption of independence between diseases), the presence of one disease does not (for example) rule out or lessen the probability of another disease (i.e. diagnosis).

  • Author David Sontag also evaluated noisy-OR maximum likelihood estimation in Clinical Tagging with Joint Probabilistic Models (Sep 2016). See also Sontag’s paper, Unsupervised Learning of Noisy-Or Bayesian Networks (Sep 2013).

    arxiv1309.6834-f1+f2.png

    [Image source. Click image to open in new window.]


  • Code for Learning a Health Knowledge Graph from Electronic Medical Records is not available.
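The noisy-OR gate itself is simple to state in code; the edge strengths and leak probability below are invented toy numbers:

```python
def noisy_or(parent_probs, active, leak=0.01):
    """P(symptom present | active parent diseases) under a noisy-OR gate:
    each active disease i independently fails to trigger the symptom with
    probability 1 - p_i, and a small leak term covers unmodeled causes."""
    failure = 1.0 - leak
    for p, on in zip(parent_probs, active):
        if on:
            failure *= 1.0 - p
    return 1.0 - failure

# Two diseases with edge strengths 0.8 and 0.5 toward one symptom (assumption: toy values).
p_sym = noisy_or([0.8, 0.5], [True, True])  # 1 - 0.99 * 0.2 * 0.5 = 0.901
```

Note the property discussed above: each additional active disease can only increase the symptom probability, and with no active diseases only the leak remains.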

The noisy-OR model was also employed in a 2009 paper [different authors], Biomedical Discovery Acceleration, with Applications to Craniofacial Development:

“… the explosion of new results in the scientific literature, particularly in molecular biomedicine, is both a blessing and a curse to the bench researcher. Even knowledgeable and experienced scientists can benefit from computational tools that help navigate this vast and rapidly evolving terrain. In this paper, we describe a novel computational approach to this challenge, a knowledge-based system that combines reading, reasoning, and reporting methods to facilitate analysis of experimental data.

“Reading methods extract information from external resources, either by parsing structured data or using biomedical language processing to extract information from unstructured data, and track knowledge provenance. Reasoning methods enrich the knowledge that results from reading by, for example, noting two genes that are annotated to the same ontology term or database entry. Reasoning is also used to combine all sources into a knowledge network that represents the integration of all sorts of relationships between a pair of genes, and to calculate a combined reliability score. Reporting methods combine the knowledge network with a congruent network constructed from experimental data and visualize the combined network in a tool that facilitates the knowledge-based analysis of that data.

“An implementation of this approach, called the Hanalyzer, is demonstrated on a large-scale gene expression array dataset relevant to craniofacial development. The use of the tool was critical in the creation of hypotheses regarding the roles of four genes never previously characterized as involved in craniofacial development; each of these hypotheses was validated by further experimental work.”

  • They also stated (paraphrased):

    “One of the most popular methods to combine individual reliabilities is to assume independence of experts (naive Bayes assumption) and compute the integrated likelihood P for each relationship using the Noisy-OR function … The Noisy-OR function has the useful property that the probability of a relationship is high with at least one reliable assertion yet increases with additional support. This property is especially relevant in biology, where it is often difficult to identify false negatives; a given assertion is strengthened by additional information but unlike the case for estimating the reliability of an expert on the whole, an individual assertion is not penalized for lack of additional evidence. Moreover, since the experts are assumed to be independent, experts can be removed from or added to the analysis without excessive re-computation.”

PMC2653649-f1.png

[Image source. Click image to open in new window.]


PMC2653649-f2.png

[Image source. Click image to open in new window.]


PMC2653649-f4.png

[Image source. Click image to open in new window.]


PMC2653649-f7.png

[Image source. Click image to open in new window.]


PMC2653649-f9.png

[Image source. Click image to open in new window.]


PMC2653649-f11.png

[Image source. Click image to open in new window.]


Their approach is implemented in a tool, “Hanalyzer” [project page/code here and here  |  video  |  Cytoscape plugin], which was used to reason over gene expression data and the biomedical data, creating hypotheses regarding the roles of genes never previously characterized as involved in craniofacial development.

Heterogeneous information networks (HINs), complex networks widely seen in real-world applications, often encapsulate higher-order interactions that crucially reflect the complex nature of nodes and edges in real-world data. Modeling higher-order interactions in a HIN facilitates the user-guided clustering problem by providing an informative collection of signals. At the same time, network motifs have been used extensively to reveal higher-order interactions and network semantics in homogeneous networks; it is thus natural to extend the use of motifs to HINs. Higher-Order Clustering in Heterogeneous Information Networks (Nov 2018) tackled the problem of user-guided clustering in HINs by using motifs, highlighting the benefits of comprehensively modeling higher-order interactions instead of decomposing the complex relationships into pairwise interactions. They proposed the MoCHIN model, applicable to arbitrary forms of HIN motifs, which is often necessary in HINs due to the rich and diverse semantics they encapsulate. To overcome the curse of dimensionality (the tensor size in their model grows exponentially with the number of nodes), they proposed an efficient inference algorithm for MoCHIN. MoCHIN surpassed all baselines in three evaluation tasks under different metrics; the advantage of the model when supervision was weak was also discussed in additional experiments.

arxiv1811.11320-f1+f3.png

[Image source. Click image to open in new window.]


arxiv1811.11320-t2+t3+f2.png

[Image source. Click image to open in new window.]


Network representation aims to represent the nodes in a network as continuous and compact vectors, and has attracted much attention due to its ability to capture complex structural relationships inside networks. However, existing network representation methods are commonly designed for homogeneous information networks, where all the nodes (entities) of a network are of the same type, e.g., papers in a citation network. Universal Network Representation for Heterogeneous Information Networks (Nov 2018) proposed a universal network representation approach (UNRA) that represented different types of nodes in heterogeneous information networks in a continuous and common vector space. UNRA was built on a mutually updated neural language module, which simultaneously captured the inter-relationships among homogeneous nodes and node-content correlations. Relationships between different types of nodes were also assembled and learned in a unified framework. UNRA achieved outstanding performance compared to six other state of the art algorithms in node representation, node classification, and network visualization. In node classification, UNRA achieved a 3% to 132% performance improvement in terms of accuracy.

arxiv1811.12157-f1.png

[Image source. Click image to open in new window.]


arxiv1811.12157-f2+t1+f3.png

[Image source. Click image to open in new window.]


arxiv1811.12157-t2+t3.png

[Image source. Click image to open in new window.]


Regarding natural language understanding and reasoning, a recent review/perspective paper (Relational Inductive Biases, Deep Learning, and Graph Networks (Jun 2018; updated Oct 2018) [code/demos  |  media  |  critique  |  discussion here and here] – jointly from DeepMind, Google Brain, MIT and the University of Edinburgh – described how graph networks provide a straightforward interface for manipulating structured knowledge and producing structured behaviors. The paper argued that their Graph Networks could effectively support two critical human-like capabilities: relational reasoning (i.e. drawing logical conclusions of how different objects and things relate to one another), and combinatorial generalization (i.e. constructing new inferences, predictions, and behaviors from known building blocks). Their Graph Networks framework defined a class of functions for relational reasoning over graph structured representations. They discussed how Graph Networks could support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. While “natural language understanding” is not discussed, per se, it is likely that NLU will be integral to the relational reasoning paradigm that is described in that paper.

  • Basically this is a “the whole is greater than the sum of the parts ” perspective paper which posits that graphs are needed for human-like, relational reasoning. They propose “graph networks” that can be built as needed from smaller subgraphs to address specific tasks. A structured representation captures a composition of known building blocks, with structured computations operating over the elements and their composition as a whole. *Relational reasoning* operates over structured representations, combining entities and relations and using rules for how they can be composed.

    The paper argues that Graph Networks can effectively support two critical human-like capabilities:

    • relational reasoning, i.e. drawing logical conclusions of how different objects and things relate to one another, and
    • combinatorial generalization, i.e. constructing new inferences, predictions, and behaviors from known building blocks.

    The main unit of computation in the Graph Network framework is the GN block – also referred to as “graph-to-graph” modules – which takes a graph as input, performs computations over the structure, and returns a graph as output… graph features (entities) are represented by (i) the graph’s nodes, (ii) relations by the edges, and (iii) system-level properties by global attributes. The Graph Network block takes a graph as an input, performs calculations from the edge, to the node, to the global attributes – generating a new graph as output.

    arxiv-1806.01261a.png

    [Image source. Click image to open in new window.]


    A graph is thus a three-tuple $\small G = (\mathbf{u}, \mathcal{V}, \mathcal{E})$ where $\small \mathbf{u}$ is a global attribute, $\small \mathcal{V}$ is a set of nodes (each represented by attributes), and $\small \mathcal{E}$ is a set of edges (also represented by attributes). As described (above), the framework is agnostic to specific representations and functional forms. However, GN may be used in deep learning architectures, allowing them to act as learnable graph-to-graph function approximators. The three design principles behind graph networks which support this are:

    • flexible representations,
    • configurable within-block structure, and
    • composable multi-block architectures.

    One common design is the encode-process-decode configuration in which an input graph is transformed into a latent representation by an encoder GN, a shared core block is applied multiple times to process the encoding, and then a decoder GN produces the final output. [Discussion source.]

    Graph networks are flexible in terms of the attribute representations we might want to choose – tensors, sets, graphs, etc. – which will most likely depend on the problem. Regarding the output, we can use the edges, nodes or global attributes, based on what we want to achieve. Having all these possibilities means that we have to spend more time on model architecture when using GNs, unlike the current defaults of “if it’s an image, use a CNN; if it’s a sequence, use an LSTM.” Regarding the question of how the input data is to be represented as a graph, there are two possible options: (i) the input contains the relational structure, or (ii) the relations must be inferred. [Discussion source.]
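One GN-block pass over the three-tuple $\small G = (\mathbf{u}, \mathcal{V}, \mathcal{E})$ can be sketched as follows, with simple tanh stand-ins for the block's learned per-edge, per-node and global update functions:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
# Toy graph G = (u, V, E): global attribute, node attributes, attributed directed edges.
u = rng.normal(size=d)
V = rng.normal(size=(3, d))
edge_index = [(0, 1), (1, 2), (2, 0)]   # (sender, receiver) pairs
E = rng.normal(size=(len(edge_index), d))

# Stand-in update functions; a real GN block would use learned networks (assumption).
phi_e = lambda e, vs, vr, g: np.tanh(e + vs + vr + g)        # per-edge update
phi_v = lambda agg_e, v, g: np.tanh(agg_e + v + g)           # per-node update
phi_u = lambda agg_e, agg_v, g: np.tanh(agg_e + agg_v + g)   # global update

def gn_block(u, V, E, edge_index):
    """One graph-to-graph pass: update edges, then nodes, then the global attribute."""
    E2 = np.stack([phi_e(E[k], V[s], V[r], u) for k, (s, r) in enumerate(edge_index)])
    V2 = np.empty_like(V)
    for i in range(len(V)):
        incoming = [E2[k] for k, (_, r) in enumerate(edge_index) if r == i]
        agg = np.sum(incoming, axis=0) if incoming else np.zeros(d)
        V2[i] = phi_v(agg, V[i], u)
    u2 = phi_u(E2.mean(axis=0), V2.mean(axis=0), u)
    return u2, V2, E2

u2, V2, E2 = gn_block(u, V, E, edge_index)
```

Because the output is again a graph of the same shape, blocks compose naturally, which is what makes configurations such as encode-process-decode possible.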

  • Reviews:

[Table of Contents]

Graph Structure

Here, I summarize work which leverages the representational and topological structure of knowledge graphs, which can be used to learn latent relationships or aid KG construction (e.g., through link prediction).

Network embedding methods can be divided into two categories (all discussed below – at times excerpted from Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment):

  • methods that solely rely on structure (e.g. vertex information), such as DeepWalk, LINE, node2vec, etc.; and,

  • methods that leverage both the structure of the network and the semantic information associated with its vertices, such as

    • TADW (Text-Associated DeepWalk),
    • CENE (Content-Enhanced Network Embedding),
    • Context-Aware Network Embedding (CANE), and
    • Word-Alignment-based Network Embedding (WANE).

Similar to the skip-gram concept [Mikolov et al. (2013)] originally introduced for learning word embeddings, DeepWalk [DeepWalk: Online Learning of Social Representations (Jun 2014)] generalized language modeling and unsupervised feature learning from sequences of words to graphs, using local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. In other words, DeepWalk learned latent representations of vertices in a network in a continuous vector space via truncated random walks.
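The walk-generation step can be sketched in a few lines of plain Python; the toy graph below is invented, and in practice the resulting "sentences" would be passed to a skip-gram word2vec implementation such as gensim's:

```python
import random

random.seed(0)
# Toy undirected graph as adjacency lists (assumption: illustrative only).
graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}

def random_walks(graph, num_walks=2, walk_len=5):
    """Generate truncated random walks; DeepWalk treats each walk as a
    'sentence' over node ids and trains skip-gram word2vec on the corpus."""
    walks = []
    for _ in range(num_walks):
        for start in graph:
            walk = [start]
            while len(walk) < walk_len:
                walk.append(random.choice(graph[walk[-1]]))
            walks.append([str(n) for n in walk])
    return walks

corpus = random_walks(graph)
# e.g. with gensim: Word2Vec(corpus, sg=1, vector_size=64, window=5)
```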

arxiv1403.6652-f1.png

[Image source. Click image to open in new window.]


[More on Zachary's Karate Club.]

arxiv1403.6652-f3.png

[Image source. Click image to open in new window.]


Likewise, LINE [LINE: Large-scale Information Network Embedding (Mar 2015)] [code] proposed a principled objective to explicitly capture first-order and second-order proximity information from the vertices of a network.

  • “This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. … In this paper, we propose a novel network embedding method called the LINE, which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference.

    “Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. …”
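The two proximities can be written down directly; the embeddings below are random toy values, with separate "context" vectors used for the second-order model as in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8
emb = rng.normal(size=(4, d)) * 0.1   # vertex embeddings u_i
ctx = rng.normal(size=(4, d)) * 0.1   # context embeddings u'_i (second-order model)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def first_order(i, j):
    """LINE's first-order proximity for an undirected edge: sigma(u_i . u_j)."""
    return float(sigmoid(emb[i] @ emb[j]))

def second_order(j, i):
    """Second-order conditional p(v_j | v_i): softmax over context vectors."""
    logits = ctx @ emb[i]
    e = np.exp(logits - logits.max())
    return float((e / e.sum())[j])

p1 = first_order(0, 1)
p2 = second_order(1, 0)
```

The first-order term models observed edges directly, while the second-order term makes vertices with similar neighborhoods (similar conditional distributions) embed close together.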

arxiv1503.03578-f1.png

[Image source. Click image to open in new window.]


arxiv1503.03578-t2.png

[Image source. Click image to open in new window.]


arxiv1503.03578-t3.png

[Image source. Click image to open in new window.]


arxiv1503.03578-t7+t8+f2.png

[Image source. Click image to open in new window.]


Subsequently, node2vec – by Aditya Grover and Jure Leskovec at Stanford University – introduced a biased random walk procedure to generate the neighborhood of a vertex, inferring node representations by maximizing the likelihood of preserving the local context information of vertices [node2vec: Scalable Feature Learning for Networks (Jul 2016);  [project]].

  • “Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks.

    “Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations.

    “We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.”
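The biased walk can be sketched via its unnormalized transition weights, which depend on the previous node t through the return parameter p and in-out parameter q (the toy graph and parameter values below are assumptions):

```python
# node2vec's second-order walk: the unnormalized probability of stepping from the
# current node to a neighbor x depends on its distance from the previous node t.
graph = {"t": {"v", "x1"}, "v": {"t", "x1", "x2"}, "x1": {"t", "v"}, "x2": {"v"}}

def transition_weights(prev, cur, p=0.25, q=2.0):
    """Weight 1/p for returning to prev (d=0), 1 for neighbors of prev (d=1),
    and 1/q for moving further away (d=2); normalizing gives the step distribution."""
    weights = {}
    for x in graph[cur]:
        if x == prev:
            weights[x] = 1.0 / p   # revisit the previous node
        elif x in graph[prev]:
            weights[x] = 1.0       # triangle edge: stay close
        else:
            weights[x] = 1.0 / q   # explore outward
    return weights

w = transition_weights("t", "v")
```

Low p biases the walk toward local, BFS-like neighborhoods; low q biases it toward DFS-like exploration, which is the flexibility the paper argues yields richer representations.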

arxiv1607.00653-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1607.00653-f3.png

[Image source. Click image to open in new window.]


arxiv1607.00653-t2+t4.png

[Image source. Click image to open in new window.]




DeepWalk

The DeepWalk approach has proven to be very popular: for example, recent applications of DeepWalk in the biomedical domain include the following works.

  • Graph structure was leveraged by Alshahrani et al. [Neuro-Symbolic Representation Learning on Biological Knowledge Graphs (Sep 2017)], who developed a method for feature learning on biological knowledge graphs. Their method combined symbolic methods (specifically, knowledge representation using symbolic logic and automated reasoning) with neural networks to generate vector representations (embeddings) of the nodes in those graphs, as well as the kinds of relations that existed to neighboring nodes. To learn those representations, they repeatedly performed random walks from each node in the KG, using the resulting walks as sentences within a corpus, and applied a word2vec skip-gram model (DeepWalk) to learn the embeddings for each node. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. These embeddings were applied to the prediction of edges in the KG, applicable to tasks such as the prediction of biological function, finding candidate genes of diseases, and identifying protein-protein interactions or drug-target relations.

    PMC5860058-f1.png

    [Image source. Click image to open in new window.]


    PMC5860058-t1.png

    [Image source. Click image to open in new window.]


  • An extension of Alshahrani’s work (Neuro-symbolic representation learning on biological knowledge graphs) was described by Agibetov and Samwald in Fast and Scalable Learning of Neuro-Symbolic Representations of Biomedical Knowledge (Apr 2018). They showed how to train fast (less than one minute, vs. hours previously) log-linear neural embeddings (representations) of the entities, that were used as inputs for ML classifiers enabling important tasks such as biological link prediction. Classifiers were trained by concatenating learned entity embeddings to represent entity relations, and training classifiers on the concatenated embeddings to discern true relations from automatically generated negative examples. Their simple embedding methodology greatly improved on classification error compared to previously published state of the art results, yielding increased F-measure and ROC AUC scores for the most difficult biological link prediction problem. Their embedding approach was also much more economical in terms of embedding dimensions (d=50 vs. d=512), and naturally encoded the directionality of the asymmetric biological relations, that could be controlled by the order with which the embeddings were concatenated.

    “As opposed to the approach taken by Alshahrani et al. we employ another neural embedding method which requires fewer parameters and is much faster to train. Specifically, we exploit the fact that the biological relations have well defined non-overlapping domain and ranges, and therefore the whole knowledge graph can be treated as an untyped directed graph, where there is no ambiguity in the semantics of any relation. To this end, we employ the neural embedding model from the StarSpace toolkit, which aims at learning entities, each of which is described by a set of discrete features (bag-of-features) coming from a fixed-length dictionary. The model is trained by assigning a $\small d$-dimensional vector to each of the discrete features in the set that we want to embed directly. Ultimately, the look-up matrix (the matrix of embeddings: latent vectors) is learned by minimizing the following loss function …”

    arxiv1804.11105-f1.png

    [Image source. Click image to open in new window.]


    In the following figure, the results in Table 2 (from Table 1 in Alshahrani et al., 2017) serve as the baseline for Tables 3 and 4:

    arxiv1804.11105-t2+t3+t4.png

    [Image source. Click image to open in new window.]


  • Notably, in the work described by Agibetov and Samwald [Fast and Scalable Learning of Neuro-Symbolic Representations of Biomedical Knowledge] in the preceding paragraph, there was no need for a sophisticated labeled DeepWalk, since all considered biological relations had clear non-overlapping domain and range separations (and therefore the whole knowledge graph could be treated as an untyped directed graph, with no ambiguity in the semantics of any relation). Instead of using DeepWalk, they trained faster and more economical log-linear neural embeddings using Jason Weston’s (Facebook AI Research) StarSpace: Embed All the Things!  [code]. StarSpace is a general-purpose neural embedding model that aims at learning entities, each of which is described by a set of discrete features (bag-of-features) coming from a fixed-length dictionary. StarSpace can solve a wide variety of problems including labeling tasks such as text classification, ranking tasks such as information retrieval/web search, collaborative filtering-based or content-based recommendation, embedding of multi-relational graphs, and learning word, sentence or document level embeddings. The classifiers used by Agibetov and Samwald were trained on concatenated embeddings of entities (nodes), which were obtained with StarSpace from the flattened graphs for each biomedical relation.

    arxiv1709.03856-t6+t7.png

    [Image source. Click image to open in new window.]


    arxiv1709.03856-t8.png

    [Image source. Click image to open in new window.]
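A minimal sketch of this concatenation-based link classification setup, with random stand-in vectors in place of the StarSpace-trained embeddings (the paper used d=50) and a simple logistic-regression classifier trained by gradient descent. All names, sizes and data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (paper used d=50)
n_nodes = 20
emb = rng.normal(size=(n_nodes, d))     # stand-ins for learned StarSpace embeddings

# True (head, tail) relation pairs and sampled negatives -- synthetic here.
pos = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]
neg = [(i, (i + 7) % n_nodes) for i in range(n_nodes)]

# Concatenation preserves direction: the pair (h, t) differs from (t, h).
X = np.array([np.concatenate([emb[h], emb[t]]) for h, t in pos + neg])
y = np.array([1.0] * len(pos) + [0.0] * len(neg))

# Minimal logistic-regression classifier trained by gradient descent.
w = np.zeros(2 * d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

train_acc = (((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
```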


  • DeepWalk was recently applied to the prediction of microRNA-disease associations, by calculating similarities within a miRNA-disease association network [Predicting MicroRNA-Disease Associations using Network Topological Similarity Based on DeepWalk (Oct 2017)]. This approach showed superior predictive performance for 22 complex diseases, with area under the ROC curve scores ranging from 0.805 to 0.937, using five-fold cross-validation. In addition, case studies on breast, lung and prostate cancer further justified the use of the method for the discovery of latent miRNA-disease pairs. DeepWalk has also been used to predict novel drug-target associations (Deep Mining Heterogeneous Networks of Biomedical Linked Data to Predict Novel Drug-Target Associations).

Li2017predicting-f1.png

[Image source. Click image to open in new window.]


Li2017predicting-t1.png

[Image source. Click image to open in new window.]


Li2017predicting-f4.png

[Image source. Click image to open in new window.]


Li2017predicting-t2+t3+t4.png

[Image source. Click image to open in new window.]




Deep Neural Networks for Learning Graph Representations (2016) (describing the DNGR model) provided an alternative deep learning approach to learning graph representations (especially word representations), via a random surfing model to directly capture graph structural information. Stacked denoising autoencoders (SDAE) were used to extract complex features and to model non-linearities, generating informative representations. The learned representations could be regarded as input features that could be fed into other tasks, such as unsupervised and supervised classification. Note Fig. 3 in their paper (image below), which is reminiscent of Fig. 2 in the struc2vec paper, and the comparison of each of those approaches in those figures to DeepWalk.

Cao2016deep-f1.png

[Image source. SDAE: stacked denoising autoencoders.  Click image to open in new window.]


Cao2016deep-f3.png

[Image source. Click image to open in new window.]


The wine dataset visualized in the image above is a dataset from the UCI Machine Learning Repository describing the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the 3 types of wines and comprises 178 instances. 
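DNGR's random surfing step can be sketched as an iterated, restart-weighted propagation of visit probabilities over the row-normalized transition matrix. The toy graph and parameter values below are assumptions, not the paper's data:

```python
import numpy as np

# Row-normalized transition matrix of a toy 4-node graph (assumed example).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

def random_surf(P, steps=4, alpha=0.98):
    """Accumulate visit probabilities: p_k = alpha * p_{k-1} P + (1 - alpha) p_0."""
    n = P.shape[0]
    p0 = np.eye(n)            # one restart distribution per starting node
    pk = p0.copy()
    M = np.zeros((n, n))
    for _ in range(steps):
        pk = alpha * (pk @ P) + (1 - alpha) * p0
        M += pk
    return M

M = random_surf(P)
# In DNGR, M is converted to a PPMI matrix and fed to stacked denoising
# autoencoders (SDAE) to produce the final node embeddings.
```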

struc2vec: Learning Node Representations from Structural Identity is yet another vector space-based model, which like DeepWalk and node2vec learned latent representations for the structural identity of nodes. While DeepWalk and node2vec failed to capture the notion of structural identity, struc2vec excelled on this task, even when the original network was subject to strong random noise (random edge removal). Fig. 3 in the struc2vec paper showed superior node representations for the toy Karate network (Zachary's Karate Club), compared to DeepWalk and node2vec.

arxiv1704.03165-f1.png

[Image source. Click image to open in new window.]


arxiv1704.03165-f2.png

[Image source. Click image to open in new window.]


arxiv1704.03165-f3+f4.png

[Image source. Click image to open in new window.]


arxiv1704.03165-f7.png

[Image source. Click image to open in new window.]


Another recent improvement over node2vec is graph2vec: Learning Distributed Representations of Graphs (Jul 2017) [code  |  non-author code  |  discussion  |  image], which learned data-driven distributed representations of arbitrarily sized graphs, rather than distributed representations of graph substructures. graph2vec’s embeddings were learned in an unsupervised manner and were task agnostic; hence, those representations could be used for any downstream task such as graph classification, clustering, and even seeding supervised representation learning approaches. Experiments on several benchmark and large real-world datasets showed that graph2vec achieved significant improvements in classification and clustering accuracies over substructure representation learning approaches. For example, compared to node2vec, graph2vec increased accuracy on five chemoinformatic and bioinformatic domains (see results in Subsection 5.1 and Table 2 in the graph2vec paper; the datasets used there are described/available here, here and here). Comparisons of graph2vec to struc2vec are not available at this time; however, the approach and performance of struc2vec over node2vec are clearly evident in Figs. 2, 3 and 7 in the struc2vec paper.

arxiv1707.05005-f1.png

[Image source. Click image to open in new window.]


arxiv1707.05005-t1+f2.png

[Image source. Click image to open in new window.]


arxiv1707.05005-t2.png

[Image source. Click image to open in new window.]
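graph2vec builds its "vocabulary" of rooted subgraphs via Weisfeiler-Lehman (WL) relabeling, then embeds whole graphs doc2vec-style over that vocabulary. A minimal sketch of one WL relabeling iteration, on an assumed toy labeled graph:

```python
def wl_relabel(adj, labels):
    """One WL iteration: each node's new label combines its own label with
    the sorted multiset of its neighbors' labels."""
    new_labels = {}
    for node in adj:
        neigh = sorted(labels[n] for n in adj[node])
        new_labels[node] = labels[node] + "(" + ",".join(neigh) + ")"
    return new_labels

# Toy molecular-style graph with node labels (assumed data).
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
labels = {0: "C", 1: "O", 2: "C", 3: "N"}

labels1 = wl_relabel(adj, labels)
# Nodes 0 and 2 share the label "C" initially, but get distinct labels after
# one iteration because their neighborhoods differ; in graph2vec, each such
# relabeled rooted subgraph becomes a token in the graph's "document".
```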


Syntree2Vec – An Algorithm to Augment Syntactic Hierarchy into Word Embeddings (Aug 2018) [code] aimed to incorporate syntactic knowledge – here, preserving word order during sampling – into word embeddings via a graph-based embedding algorithm inspired by node2vec. Using SyntaxNet for syntactic/dependency parsing, Syntree2Vec generated word embeddings by optimizing a graph-based objective function that maximized the probability of preserving a network neighborhood; a semi-guided second-order random walk procedure then sampled the network neighborhoods of nodes. In an example of their graph creation (their Figs. 1, 2), the authors parsed two similar sentences and concatenated the two graphs, which preserved the word order among the syntactic parse trees. Syntree2Vec performed similarly to word2vec and node2vec (perplexity scores), albeit with training times that scaled with the data size (unlike word2vec; Fig. 3) for the data sizes evaluated (Table 1). [Data were drawn from a 2008 Wikipedia data dump.]

    SyntaxNet.gif
    [Image source. Click image to open in new window.]

arxiv1808.05907-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1808.05907-f3+t1.png

[Image source. Click image to open in new window.]


The various graph structure algorithms discussed above generally ignore rich heterogeneous information associated with vertices (Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment). Models that learn semantic-aware network embeddings include the following.

  • Text-Associated DeepWalk (TADW) integrated textual features into network representations with matrix factorization by leveraging the equivalence between DeepWalk and matrix factorization (Network Representation Learning with Rich Text Information (2015) [code]).

    Yang2015network-f1.png

    [Image source. Click image to open in new window.]


    Yang2015network-t1+t2.png

    [Image source. Click image to open in new window.]


  • CENE (Content-Enhanced Network Embedding) used bidirectional recurrent neural networks to abstract the semantic information associated with vertices, further demonstrating the advantages of employing textual information (A General Framework for Content-enhanced Network Representation Learning).

    Text-associated DeepWalk (TADW) incorporated textual features into network embeddings through matrix factorization. “… This approach typically suffers from high computational cost and not scalable to large-scale networks. Besides, contents in TADW are simply incorporated as unordered text features instead of being explicitly modeled. Therefore, deeper semantics contained in the contents cannot be well captured. Present work. We present a general framework for learning Content-Enhanced Network Embedding (CENE) that is capable of jointly leveraging the network structure and the contents. We consider textual contents in this study, however, our approach can be flexibly scaled to other modalities of content.”

    arxiv1610.02906-f1.png

    [Image source. Click image to open in new window.]


    arxiv1610.02906-f2+f3.png

    [Image source. Click image to open in new window.]


  • To capture the interaction between sentences of vertex pairs, Context-Aware Network Embedding (CANE) employed a mutual attention mechanism to adaptively account for the textual information from neighboring vertices (CANE: Context-Aware Network Embedding for Relation Modeling (Oct 2016 | ACL2017) [code]).

    Tu2017CANE-f1.png

    [Image source. Click image to open in new window.]


    Tu2017CANE-f2.png

    [Image source. Click image to open in new window.]


    Tu2017CANE-f3+f4.png

    [Image source. Click image to open in new window.]


    Tu2017CANE-t2.png

    [Image source. Click image to open in new window.]


Despite showing improvement over structure-only models, those semantic-aware methods could not capture word-level alignment information – important for inferring the relationship between node pairs. For a wide range of applications, vertices in a network are typically accompanied by rich textual information such as user profiles, paper abstracts, etc. Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment (Aug 2018) focused on incorporating semantic features into network embeddings by matching important words between text sequences for all pairs of vertices. The authors introduced a word-by-word alignment framework, Word-Alignment-based Network Embedding (WANE), that measured the compatibility of embeddings between word pairs, and then adaptively accumulated those alignment features with an aggregation function. Evaluations on three real-world benchmarks for downstream tasks, including link prediction and multi-label vertex classification, demonstrated that their model outperformed state of the art network embedding methods by a large margin. [“Our code will be released to encourage future research.”]

arxiv1808.09633-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1808.09633-t1.png

[Image source. Click image to open in new window.]


arxiv1808.09633-f4+t4.png

[Image source. Click image to open in new window.]


arxiv1808.09633-f5.png

[Image source. Click image to open in new window.]


Graph algorithms are key tools in many fields of science and technology. Some of these algorithms depend on propagating information between distant nodes in a graph. Recently, a number of deep learning architectures have been proposed to learn on undirected graphs; however, most of these architectures aggregate information in the local neighborhood of a node, and may therefore be incapable of efficiently propagating long-range information. Learning Long-Range Information in Undirected Graphs with Wave Networks (Oct 2018) [code] addressed this problem with a proposed architecture (wave) that propagates information back and forth across an undirected graph in waves of nonlinear computation. The authors compared wave to graph convolution – an architecture based on local aggregation – and found that wave learned three different graph-based tasks with greater efficiency and accuracy: (1) labeling a path connecting two nodes in a graph; (2) solving a maze presented as an image; and (3) computing voltages in a circuit. Wave could extrapolate from small training examples to much larger testing examples. These results suggest that wave may be able to efficiently solve a wide range of problems that require long-range information propagation across undirected graphs.

arxiv1810.12153-f1.png

[Image source. Click image to open in new window.]


arxiv1810.12153-f2.png

[Image source. Click image to open in new window.]


Graph Structure:

Additional Reading

  • PANDA: AdaPtive Noisy Data Augmentation for Regularization of Undirected Graphical Models (Oct 2018)

    “We propose PANDA, an AdaPtive Noise Augmentation technique to regularize estimating and constructing undirected graphical models (UGMs). PANDA iteratively solves MLEs [maximum likelihood estimations] given noise augmented data in the regression-based framework until convergence to achieve the designed regularization effects. … Our empirical results suggest the inferences achieve nominal or near-nominal coverage and are far more efficient compared to some existing post-selection procedures. On the algorithm level, PANDA can be easily programmed in any standard software without resorting to complicated optimization techniques. We show the non-inferior performance of PANDA in constructing graphs of different types in simulation studies and also apply PANDA to the autism spectrum disorder data to construct a mixed-node graph.”

    arxiv1810.04851c.png

    [Image source. Click image to open in new window.]


    arxiv1810.04851d.png

    [Image source. Click image to open in new window.]


    “We apply PANDA to an autism spectrum disorder (ASD) data collected by the Dutch Association for Autism and the VU University Amsterdam. The dataset contains 28 variables of various types (10 continuous, 7 categorical, and 11 count variables) from 3521 participants; and is available in the R package mgm.

    “Figure 6 presents a visualization of the estimated UGM via PANDA. … Figure 6 suggests that node “GCdtA” (Good Characteristics due to Autism) is connected with multiple nodes of different domains including “NoSC” (Number of Social Contacts), “NoI” (Number of Interests) and “IQ” (Intelligence Quotient), indicating that the uniquely positive traits of people with autism connects with various aspects of their lives (social environment, well-being, diagnostic measurements). PANDA is also able to detect expected relationship among the nodes, such as the strong positive relationship between the present age of a participant (“Age”) and the age when the participant was diagnosed with autism (“Agd”).

    “In addition to the relationships among the variables, we can obtain some insights on the relative importance of those variables in the structure of the estimated graph. Figure 7 displays the standardized centrality measures (strength, closeness and betweenness) (Opsahl et al., 2010) for each node. The results suggest that some variables, such as “Good Characteristics due to Autism”, “Satisfaction: Work” and “No of Social Contacts” were identified as having relatively high centrality level while some other variables, such as “Openness about Diagnosis”, “Type of Work”, “Type of Housing” and “Gender”, had low values in the centrality measures, implying that those variables were not as important in the constitution of the network structure (Figure 6 also shows that these three nodes are not connected to the rest of the nodes).”

[Table of Contents]

Graph Convolutional Neural Networks

Graph neural networks (GNN) have revolutionized the field of graph representation learning through effectively learned node embeddings, achieving state of the art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs – a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph.

An important early paper that applied a neural network to a graph structure is Kipf and Welling’s Semi-Supervised Classification with Graph Convolutional Networks (Sep 2016; updated Feb 2017), which introduced graph convolutional neural networks (GCN).

“We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.”

arxiv1609.02907-f1.png

[Image source. Click image to open in new window.]


arxiv1609.02907-t2.png

[Image source. Click image to open in new window.]


arxiv1609.02907-t3.png

[Image source. Click image to open in new window.]
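The layer-wise propagation rule introduced in the paper, $\small H^{(l+1)} = \sigma(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)})$ with $\small \tilde{A} = A + I$, can be sketched directly. The toy graph, features and weights below are random placeholders:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W),
    where adding the identity (self-loops) is Kipf & Welling's
    renormalization trick."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy 4-node path graph, 3-dim node features, and a 3 -> 2 weight matrix.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
H1 = gcn_layer(A, H, W)   # new hidden representations, one row per node
```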


Thomas Kipf provides an excellent introduction to GCN in his blog post Graph Convolutional Networks  [local copy]. When Work Matters: Transforming Classical Network Structures to Graph CNN also provides a recent review of GCN, as well as transformations of the basic structures of ResNet, Inception and DenseNet into GCN.


Semi-supervised classification with GCNs: Latent space dynamics for 300 training iterations with a single label per class. Labeled nodes are highlighted.  "Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react. We can use the semi-supervised learning algorithm for GCNs introduced in Kipf & Welling (ICLR 2017). We simply label one node per class/community (highlighted nodes in the video below) and start training for a couple of iterations. As we don't anneal the learning rate in this example, things become quite "jiggly" once the model has converged to a good solution."
[Source]


GCN provide a scalable approach to semi-supervised learning on graph structured data based on an efficient variant of convolutional neural networks which operate directly on graphs. GCN have been used to explore graph structure; for example, Ying et al. (Pinterest) with William Hamilton and Jure Leskovec (Stanford University) employed GCN to investigate web-scale recommender systems [Graph Convolutional Neural Networks for Web-Scale Recommender Systems (Jun 2018)]. Notably in that work:

  • “Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy which relies on harder-and-harder training examples to improve robustness and convergence of the model. …

    “The core idea behind GCNs is to learn how to iteratively aggregate feature information from local graph neighborhoods using neural networks (Fig. 1). Here a single ‘convolution’ operation transforms and aggregates feature information from a node’s one-hop graph neighborhood, and by stacking multiple such convolutions information can be propagated across far reaches of a graph. Unlike purely content-based deep models (e.g., recurrent neural networks), GCNs leverage both content information as well as graph structure. GCN-based methods have set a new standard on countless recommender system benchmarks.”

arxiv1806.01973-f1.png

[Image source. Click image to open in new window.]


arxiv1806.01973-f3.png

[Image source. Click image to open in new window.]


arxiv1806.01973-f5.png

[Image source. Click image to open in new window.]


Like Graph Convolutional Neural Networks for Web-Scale Recommender Systems, above, Graph Convolutional Networks based Word Embeddings also employed GCN for learning node embeddings. They proposed WordGCN, a graph convolution based approach to learning word representations. WordGCN utilized multiple types of word relationships such as linear context, syntactic context, and hypernym, meronym and synonym relations in a principled manner, through a single model. Through extensive experiments on various intrinsic and extrinsic tasks, they evaluated the effectiveness of WordGCN and found that it outperformed existing word embedding approaches. They also found that WordGCN was able to capture multiple connotations of a word in the learned embeddings.

  • WordGCN is able to capture multiple senses of the same word. For instance, we find that for most embedding methods, closest word to “bank” are banking related terms like “citibank”, “investment” whereas WordGCN is also able to capture the other sense of “bank” i.e. “riverbank”. For the word “grave”, we find WordGCN captures its sense as a part of “graveyard” (topical similarity) and also captures its other meaning i.e. to be “serious” (functional similarity). This shows that WordGCN captures multiple, diverse connotations of a word as well as functional and topical similarities.”
  • “We make WordGCN’s source code available to encourage reproducible research.”  ← not provided!

arxiv1809.04283-f1.png

[Image source. Click image to open in new window.]


arxiv1809.04283-t1.png

[Image source. Click image to open in new window.]


arxiv1809.04283-t2+t3+t4+f2.png

[Image source. Click image to open in new window.]


DiffPool – again by Jure Leskovec and colleagues at Stanford University – is a differentiable graph pooling module that can generate hierarchical representations of graphs, and can be combined with various graph neural network architectures in an end-to-end fashion [Hierarchical Graph Representation Learning with Differentiable Pooling (Oct 2018)]. DiffPool learned a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then formed the coarsened input for the next GNN layer. Experimental results showed that combining existing GNN methods with DiffPool yielded an average improvement of 5-10% accuracy on graph classification benchmarks compared to existing pooling approaches, achieving a new state of the art on four out of five benchmark data sets.

arxiv1806.08804-f1.png

[Image source. Click image to open in new window.]


arxiv1806.08804-f2.png

[Image source. Click image to open in new window.]


arxiv1806.08804-t1.png

[Image source. [GraphSAGE discussion.] Click image to open in new window.]


Regarding Table 2, below:

“Differentiable Pooling on structure2vec (Discriminative Embeddings of Latent Variable Models for Structured Data; note that this is not struc2vec). *Diffpool can be applied to other GNN architectures besides GraphSAGE to capture hierarchical structure in the graph data. To further support answering Q1 [“How does diffpool compare to other pooling methods proposed for GNNs (e.g., using sort pooling or the set2set method)?”], we also applied diffpool on structure2vec (S2V). …”

arxiv1806.08804-t2.png

[Image source. Click image to open in new window.]
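The core DiffPool operation, a learned soft cluster assignment that coarsens both node features and the adjacency matrix, can be sketched as follows. The assignment logits here are random stand-ins for the output of the assignment GNN:

```python
import numpy as np

def diffpool(A, Z, S_logits):
    """DiffPool coarsening: softmax the assignment logits row-wise to get
    S (n nodes -> m clusters), then pool features and adjacency:
        X' = S^T Z,   A' = S^T A S."""
    e = np.exp(S_logits - S_logits.max(axis=1, keepdims=True))
    S = e / e.sum(axis=1, keepdims=True)
    return S.T @ Z, S.T @ A @ S

rng = np.random.default_rng(1)
n, m, d = 6, 2, 4                   # 6 nodes pooled into 2 clusters
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T      # symmetric adjacency, no self-loops
Z = rng.normal(size=(n, d))         # node embeddings from the embedding GNN
S_logits = rng.normal(size=(n, m))  # would come from the assignment GNN
X_pool, A_pool = diffpool(A, Z, S_logits)
# X_pool and A_pool form the coarsened input to the next GNN layer.
```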


Neural Graph Machines: Learning Neural Networks Using Graphs (Mar 2017) [published pdf  |  non-author code  |  critical review], from the University of Cambridge and Google Research, discussed and implemented various neural graphical models. That work generalised previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (feedforward neural networks, CNN and LSTM) and a wide range of graphs. The new objective allowed the neural networks to harness both labeled and unlabeled data by allowing the network to train using labeled data as in the supervised setting, and biasing the network to learn similar hidden representations for neighboring nodes on a graph (in the same vein as label propagation). The proposed joint training approach convincingly outperformed many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks.

arxiv1703.04818-f1.png

[Image source. Click image to open in new window.]


arxiv1703.04818-t5.png

[Image source. Click image to open in new window.]


Large-Scale Learnable Graph Convolutional Networks (Aug 2018) [code] proposed the Learnable Graph Convolutional Layer (LGCL), which automatically selected a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1D format, thereby enabling the use of regular convolutional operations on generic graphs. To enable model training on large-scale graphs, they proposed a subgraph training method to reduce the excessive memory and computational resource requirements suffered by prior methods on graph convolutions on large-scale graph data. Experiments on node classification tasks demonstrated consistently better performance on the Cora, Citeseer and Pubmed citation networks, and protein interaction network datasets. The proposed subgraph training strategy was also more efficient than prior approaches.

arxiv1808.03965-f1+f4.png

[Image source. Click image to open in new window.]


arxiv1808.03965-f2+f3.png

[Image source. Click image to open in new window.]


arxiv1808.03965-t2+t3+t4+t5.png

[Image source. Click image to open in new window.]
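The node selection at the heart of LGCL, ranking each feature channel over a node's neighbors and keeping the k largest values to form a fixed-size grid, can be sketched as follows (toy data; the subsequent regular 1-D convolution over each grid is omitted):

```python
import numpy as np

def lgcl_select(adj, X, k):
    """For each node, rank each feature channel over its neighbors and keep
    the k largest values, yielding a fixed (k+1) x d grid per node (own
    features first, zero-padded if a node has fewer than k neighbors),
    suitable for a regular 1-D convolution."""
    n, d = X.shape
    grids = np.zeros((n, k + 1, d))
    for i in range(n):
        grids[i, 0] = X[i]
        neigh = X[adj[i]] if adj[i] else np.zeros((0, d))
        top = -np.sort(-neigh, axis=0)       # sort each channel descending
        rows = min(k, top.shape[0])
        grids[i, 1:1 + rows] = top[:rows]    # remaining rows stay zero
    return grids

# Toy graph: 4 nodes, 2 feature channels (assumed data).
X = np.array([[1.0, 5.0], [3.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}
G = lgcl_select(adj, X, k=2)
```

Note that the k largest values are taken per channel independently, so a grid row need not correspond to any single neighbor.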




Uses of GCNs

Uses of GCNs include:

  • Modeling Relational Data with Graph Convolutional Networks (Oct 2017) [code (Thomas Kipf); code] – by Thomas Kipf, Ivan Titov, Max Welling and others at the University of Amsterdam – introduced relational graph convolutional networks (R-GCN) and applied them to two standard knowledge base completion tasks: link prediction (the recovery of missing facts, i.e. subject-predicate-object triples), and entity classification (the recovery of missing entity attributes). R-GCN are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of knowledge bases.

    arxiv1703.06103-f1+f2+f3.png

    [Image source. Click image to open in new window.]


    arxiv1703.06103-t2+t4+t5.png

    [Image source. Click image to open in new window.]


  • FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling (Jan 2018) [code] [different authors: IBM Research] extended GCN – which was originally designed to be learned with both training and test data present, and for which the recursive neighborhood expansion across layers posed time and memory challenges when training with large, dense graphs. To relax the requirement of simultaneous availability of test data, the newer model, FastGCN, interpreted graph convolutions as integral transformations of embedding functions under probability measures. That interpretation allowed the use of Monte Carlo approaches to consistently estimate the integrals, which in turn led to a batched training scheme. FastGCN was efficient for training and generalized well for inference. FastGCN was discussed in A Scalable Deep Learning Approach for Massive Graphs, which stated:

    • “A challenge for GCN is that for a network with multiple layers, the neighborhood is quickly expanded, involving many vectors to be summed together, for even just a single node. Such a computation is prohibitively costly for graphs with a large number of nodes, where the neighborhood expands rapidly. To resolve this challenge, the authors propose a novel approach (FastGCN), validated through mathematical proofs and experimental results, that suggests that it suffices to gather the information of only a handful of random entities in each neighborhood expansion. This substantial reduction in neighborhood size renders the same quality of prediction as state of the art deep neural networks but cuts training cost by orders of magnitude (e.g., 10x to 100x less computation and resource time), leading to appealing scalability.”

      arxiv1801.10247-f1.png

      [Image source. Click image to open in new window.]


      arxiv1801.10247-f3.png

      [Image source. Click image to open in new window.]


  • Disease Prediction using Graph Convolutional Networks: Application to Autism Spectrum Disorder and Alzheimer’s Disease (Jun 2018) used GCN to represent populations as a sparse graph whose nodes were associated with imaging-based feature vectors, and whose edge weights integrated phenotypic information.

    arxiv1806.01738-f1+f3.png

    [Image source. Click image to open in new window.]


    arxiv1806.01738-f4+f5.png

    [Image source. Click image to open in new window.]


  • Towards Gene Expression Convolutions using Gene Interaction Graphs (Jun 2018) – a rather poorly-written paper from Yoshua Bengio’s group which nonetheless shows high-level interest in this domain – applied GCN to the analysis of RNA-Seq gene expression data from The Cancer Genome Atlas (TCGA). They explored the use of GCN with dropout, and gene embeddings (to utilize the graph information), with perhaps the main finding reported being that while the GCN approach provided an advantage for particular tasks in a low data regime, it was very dependent on the quality of the graph used. [That conclusion, at least, echoes my long-standing emphasis on data quality.]

    arxiv1806.06975-f2.png

    [Image source. Click image to open in new window.]


    arxiv1806.06975-f3.png

    [Image source. Click image to open in new window.]
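Among the models listed above, the R-GCN propagation rule is perhaps the most compact to sketch: each relation type gets its own weight matrix, and incoming messages are normalized by per-relation neighbor counts. The graph, weights and sizes below are illustrative placeholders:

```python
import numpy as np

def rgcn_layer(rel_adj, H, W_rel, W_self):
    """One R-GCN step: each relation r has its own weight matrix W_r, and
    messages are normalized by the per-relation neighbor count c_{i,r}:
        h_i' = ReLU(W0 h_i + sum_r sum_{j in N_i^r} (1/c_{i,r}) W_r h_j)."""
    out = H @ W_self                         # self-connection term
    for r, A in enumerate(rel_adj):          # A[i, j] = 1 if node j relates to i under r
        c = A.sum(axis=1, keepdims=True)
        c[c == 0] = 1.0                      # avoid division by zero for isolated nodes
        out += (A / c) @ H @ W_rel[r]
    return np.maximum(out, 0.0)

rng = np.random.default_rng(2)
n, d = 5, 3
rel_adj = [(rng.random((n, n)) < 0.3).astype(float) for _ in range(2)]  # 2 relation types
H = rng.normal(size=(n, d))
W_rel = [rng.normal(size=(d, d)) for _ in range(2)]
W_self = rng.normal(size=(d, d))
H1 = rgcn_layer(rel_adj, H, W_rel, W_self)
```

In the full model the per-relation matrices are regularized (e.g., by basis decomposition) to keep the parameter count manageable as the number of relations grows.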


Named Entity Disambiguation using Deep Learning on Graphs (Oct 2018) [dataset/code] addressed named entity disambiguation (NED) by comparing entities in short sentences with Wikidata graphs. Creating a context vector from graphs through deep learning is a challenging approach that had not previously been applied to NED. Their main contribution was an experimental study of recent neural techniques, as well as a discussion of which graph features were most important for the disambiguation task. In addition, a new dataset, Wikidata-Disamb, was created to allow a clean and scalable evaluation of NED with Wikidata entries, and to be used as a reference in future research. Results showed that a Bi-LSTM encoding of the graph triplets performed best, improving upon the baseline models and scoring an $\small F_1$ value of 91.6% on the Wikidata-Disamb test set.

arxiv1810.09164-f1.png

[Image source. Click image to open in new window.]


arxiv1810.09164-f2+f3+f4.png

[Image source. Click image to open in new window.]


arxiv1810.09164-t3.png

[Image source. Click image to open in new window.]


In Graph Capsule Convolutional Neural Networks (Aug 2018) [code: “Our GCAPS-CNN code and data will be made available at Github.” – not available, 2018-11-02], the authors built on both GCN and Hinton’s capsules to address some basic weaknesses of GCN, proposing a Graph Capsule Network (GCAPS-CNN) model designed especially to solve the graph classification problem, which current graph CNN (GCNN) models find challenging. Through extensive experiments, they showed that their proposed graph capsule network significantly outperformed existing state of the art deep learning methods and graph kernels on graph classification benchmark datasets – for example, the bioinformatics datasets summarized in their Table 1. Their graph capsule network model captured more local structure information than traditional GCNN, and could provide a much richer representation of individual graph nodes (or of the whole graph), preserving more information in a local pool.

arxiv1805.08090-f1.png

[Image source. Click image to open in new window.]


arxiv1805.08090-t1.png

[Image source. Click image to open in new window.]


Interpretable Graph Convolutional Neural Networks for Inference on Noisy Knowledge Graphs (Dec 2018) provided a new graph convolutional neural networks (GCNNs) formulation for link prediction on graph data that addressed common challenges for biomedical knowledge graphs (KGs). They introduced a regularized attention mechanism to GCNNs that not only improved performance on clean datasets, but also favorably accommodated noise in KGs, a pervasive issue in real-world applications. Further, they explored new visualization methods for interpretable modelling and illustrated how the learned representation can be exploited to automate dataset denoising. Results were demonstrated on a synthetic dataset (FB15k-237) and a large biomedical knowledge graph derived from a combination of noisy and clean data sources. Using these improvements, they visualized a learned model representation of the disease cystic fibrosis and demonstrated how to interrogate a neural network to show the potential of PPARG (PPAR-γ: peroxisome proliferator-activated receptor gamma) as a candidate therapeutic target for rheumatoid arthritis.

arxiv1812.00279-t1+f1.png

[Image source. Click image to open in new window.]


arxiv1812.00279-f2.png

[Image source. Click image to open in new window.]


arxiv1812.00279-f3.png

[Image source. Click image to open in new window.]


arxiv1812.00279-fs1.png

[Image source. Click image to open in new window.]


While GCNs inherently assume pairwise relationships in graph-structured data, in many real-world problems relationships extend beyond pairwise connections – hypergraphs naturally capture these complex relationships. HyperGCN: Hypergraph Convolutional Networks for Semi-Supervised Classification (Sep 2018) explored the use of GCNs for hypergraph-based semi-supervised learning (SSL). They proposed HyperGCN, an SSL method which used a layer-wise propagation rule for CNNs operating directly on hypergraphs (the first principled adaptation of GCNs to hypergraphs). HyperGCN was able to encode both the hypergraph structure and hypernode features in an effective manner.
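
A minimal sketch of the underlying idea, using the simpler clique-expansion approximation (connecting all node pairs within a hyperedge) rather than HyperGCN’s actual mediator-based propagation rule; the features and weights are toy values:

```python
import numpy as np

# Incidence matrix H: 5 hypernodes x 2 hyperedges.
# Hyperedge 0 = {0,1,2}, hyperedge 1 = {2,3,4}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)

# Clique expansion: connect every pair of nodes sharing a hyperedge.
A = (H @ H.T > 0).astype(float)
np.fill_diagonal(A, 0)

# One GCN-style propagation step: A_norm = D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(5)
d = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

X = np.eye(5)                      # one-hot hypernode features
W = np.full((5, 3), 0.1)           # toy weight matrix
Z = np.maximum(A_norm @ X @ W, 0)  # ReLU(A_norm X W)
print(Z.shape)  # (5, 3)
```

Node 2 sits in both hyperedges, so after one propagation step its representation mixes information from both groups – the behaviour a principled hypergraph convolution must preserve.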

Graph Convolutional Neural Networks:

Additional Reading

  • Multitask Learning on Graph Neural Networks – Learning Multiple Graph Centrality Measures with a Unified Network (Sep 2018)

    “… we show that GNN are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings $\small \in \mathbb{R}^d$ and decode them into multiple outputs by connecting MLPs at the end of the pipeline. We demonstrate the multitask learning capability of the model in the relevant relational problem of estimating network centrality measures, i.e. is vertex $\small v_1$ more central than vertex $\small v_2$ given centrality $\small c$? We then show that a GNN can be trained to develop a lingua franca of vertex embeddings from which all relevant information about any of the trained centrality measures can be decoded. The proposed model achieves 89% accuracy on a test dataset of random instances with up to 128 vertices and is shown to generalise to larger problem sizes. The model is also shown to obtain reasonable accuracy on a dataset of real world instances with up to 4k vertices, vastly surpassing the sizes of the largest instances with which the model was trained (n=128). Finally, we believe that our contributions attest to the potential of GNN in symbolic domains in general and in relational learning in particular.”

  • Deep Graph Convolutional Encoders for Structured Data to Text Generation (Oct 2018) [code]

    • “Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods. These approaches linearise the input graph to be fed to a recurrent neural network. In this paper, we propose an alternative encoder based on graph convolutional networks that directly exploits the input structure. We report results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.”

    • “We compared LSTM sequential encoders with a structured data encoder based on GCNs on the task of structured data to text generation. On two different tasks, WebNLG and SR11Deep, we show that explicitly encoding structural information with GCNs is beneficial with respect to sequential encoding.”

    arxiv1810.09995-f1.png

    [Image source. Click image to open in new window.]


  • Accurate, Efficient and Scalable Graph Embedding (Oct 2018)

    “The Graph Convolutional Network (GCN) model and its variants are powerful graph embedding tools for facilitating classification and clustering on graphs. However, a major challenge is to reduce the complexity of layered GCNs and make them parallelizable and scalable on very large graphs – state of the art techniques are unable to achieve scalability without losing accuracy and efficiency. In this paper, we propose novel parallelization techniques for graph sampling-based GCNs that achieve superior scalable performance on very large graphs without compromising accuracy. … We demonstrate that our parallel graph embedding outperforms state of the art methods in scalability (with respect to number of processors, graph size and GCN model size), efficiency and accuracy on several large datasets. On a 40-core Xeon platform, our parallel training achieves 64× speedup in the sampling step and 25× speedup in the feature propagation step, compared to the serial implementation, resulting in a net speedup of 21×. Our scalable algorithm enables deeper GCN, as demonstrated by 1306× speedup on a 3-layer GCN compared to Tensorflow implementation of state of the art.”

    arxiv1810.11899-f1.png

    [Image source. Click image to open in new window.]


  • [Oct 2018 blog post] Knowledge Graph Convolutional Networks: Machine Learning over Reasoned Knowledge  (discussion) – relevant to Relational Inductive Biases, Deep Learning, and Graph Networks.

[Table of Contents]

Graph Attention Networks

Graph-structured data arise naturally in many different application domains. By representing data as graphs we can capture entities (nodes) and their relationships (edges). Useful insights can be derived from graph-structured data, demonstrated by an ever-growing body of work focused on graph mining. However, in the real-world, graphs can be both large (with many complex patterns) and noisy, which poses problems for effective graph mining. An effective way to deal with this issue is to incorporate “attention” into graph mining solutions. An attention mechanism allows a method to focus on task-relevant parts of the graph, helping it to make better decisions. Attention Models in Graphs: A Survey (Jul 2018) provides a recent focused survey of the literature on the emerging field of graph attention models.

arxiv1807.07984-f3.png

[Image source. Click image to open in new window.]


Unlike graph convolutional networks (GCNs), which use a fixed or learnable polynomial of a graph Laplacian or an adjacency matrix to aggregate (filter) node information, graph attention networks (GATs) aggregate node information by using an attention mechanism on graph neighborhoods (Adaptive Edge Features Guided Graph Attention Networks). The essential difference between GAT and GCN is stark: in GCN the weights for aggregating (filtering) neighbor nodes are defined by the graph structure, independent of node contents; in contrast, weights in GAT are a function of node contents, thanks to the attention mechanism. Results on graph node classification show that the adaptiveness of GAT makes it more effective at fusing information from node features and graph topological structures.
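
The contrast can be made concrete with a toy sketch (random features and weights, single GAT head): the GCN aggregation weights come entirely from the normalized adjacency matrix, while the GAT coefficients are a softmax over content-dependent attention scores, masked to the neighborhood:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 5))              # node features
W = rng.normal(scale=0.3, size=(5, 8))   # shared linear projection

# GCN: aggregation weights fixed by graph structure (symmetric normalisation),
# entirely independent of node contents.
A_hat = A + np.eye(4)
d = A_hat.sum(1)
gcn_out = (A_hat / np.sqrt(np.outer(d, d))) @ X @ W

# GAT (single head): aggregation weights computed from node contents.
Wh = X @ W
a = rng.normal(scale=0.3, size=(16,))    # attention vector over [Wh_i || Wh_j]
scores = np.array([[a @ np.concatenate([Wh[i], Wh[j]]) for j in range(4)]
                   for i in range(4)])
scores = np.where(scores > 0, scores, 0.2 * scores)      # LeakyReLU
scores = np.where(A_hat > 0, scores, -np.inf)            # mask non-edges
alpha = np.exp(scores) / np.exp(scores).sum(1, keepdims=True)  # row softmax
gat_out = alpha @ Wh
```

Changing `X` changes the GAT coefficients `alpha` but leaves the GCN weights untouched – exactly the adaptiveness described above.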

One major problem in current graph neural network models such as GAT and GCN is that edge features, which contain important information about graphs, are not incorporated (Adaptive Edge Features Guided Graph Attention Networks). In GAT, graph topological information is injected into the model by forcing the attention coefficient between two nodes to zero if they are not connected. Therefore, the edge information used in GAT is only an indication of whether an edge exists, i.e. connectivity. However, graph edges often carry rich information such as strengths and types: instead of being a binary indicator, edge features could be continuous (e.g. strengths) or multi-dimensional. Properly addressing this problem is likely to benefit many graph learning problems. Another problem with GAT and GCN is that each GAT or GCN layer filters node features based on the original input adjacency matrix. The original adjacency matrix is likely to be noisy and not optimal, which restricts the effectiveness of the filtering operation.

Adaptive Edge Features Guided Graph Attention Networks (Sep 2018) proposed a graph learning model, edge features guided graph attention networks (EGAT). Guided by the edge features, the attention mechanism on pairs of graph nodes not only depended on node contents but also adjusted automatically with respect to the properties of the edge connecting those nodes. Moreover, the edge features were adjusted by the attention function and fed to the next layer, which meant that the edge features were adaptive across network layers. Experimental results on three citation networks (including PubMed) and a biological network (a chromatin interaction network: Bio-1kb-Hic) showed that EGAT outperformed the current state of the art methods.
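
A rough sketch of edge-feature-guided attention in this spirit (not the authors’ exact formulation; the dimensions, weights, and scoring function are hypothetical): the attention score for an edge depends on both endpoint contents and a multi-dimensional edge feature vector:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p = 4, 6, 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(n, d))                      # node features
E = rng.uniform(size=(n, n, p)) * A[..., None]   # p-dim edge features

W = rng.normal(scale=0.3, size=(d, d))
a = rng.normal(scale=0.3, size=(2 * d + p,))     # scores [Wh_i || Wh_j || e_ij]

Wh = X @ W
scores = np.full((n, n), -np.inf)
for i in range(n):
    for j in range(n):
        if A[i, j]:                              # attend only along edges
            z = np.concatenate([Wh[i], Wh[j], E[i, j]])
            s = a @ z
            scores[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
alpha = np.exp(scores)
alpha = alpha / alpha.sum(1, keepdims=True)      # row softmax over neighbours
H_next = alpha @ Wh                              # edge-aware aggregation
```

Because the edge features `E[i, j]` enter the score, two neighbours with identical contents can still receive different attention weights – the adaptivity EGAT adds over plain GAT.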

arxiv1809.02709-f1.png

[Image source. Click image to open in new window.]


arxiv1809.02709-t1+t2.png

[Image source. Datasets. Click image to open in new window.]


Graph Attention Networks (Feb 2018) [code;  non-author code here and here] are neural network architectures that operate on graph structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions (GCN, above) or their approximations. The GAT models described in this paper (by Yoshua Bengio and colleagues) achieved or matched state of the art results across four established transductive and inductive graph benchmarks: the Cora, CiteSeer and PubMed citation network datasets, as well as a protein-protein interaction dataset (in which test graphs remained unseen during training). Note Figure 2 in that paper, showing a t-SNE plot of the computed feature representations of a pretrained GAT model’s first hidden layer on the Cora dataset. GAT appears to offer better results than GCN (above) and GraphSAGE (below).


arxiv1710.10903-f1.png

[Image source. Click image to open in new window.]


arxiv1710.10903-t1.png

[Image source. Datasets. Click image to open in new window.]


arxiv1710.10903-t2+t3.png

[Image source. Click image to open in new window.]


arxiv1710.10903-f2.png

[Image source. Click image to open in new window.]


Discussed in Graph Attention Networks, the graph attentional layers in GAT directly address several issues affecting prior approaches to modeling graph structured data with neural networks:

  • Computational efficiency: self-attentional layer operations can be parallelized across all edges, and the computation of output features can be parallelized across all nodes. The time complexity of a single GAT attention head computing $\small F'$ features may be expressed as $\small O(|V|FF' + |E|F')$, where $\small F$ is the number of input features, and $\small |V|$ and $\small |E|$ are the numbers of nodes and edges in the graph, respectively. This complexity is on par with the baseline methods such as graph convolutional networks (GCN).

  • As opposed to GCN, GAT stacks graph attentional layers in which nodes are able to attend over their neighborhood features, which enables (implicitly) specifying different weights for different nodes in a neighborhood, enabling a leap in model capacity without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. Furthermore, analyzing the learned attentional weights may lead to benefits in interpretability, as was the case in the machine translation domain.

  • The attention mechanism is applied in a shared manner to all edges in the graph, and therefore it does not depend on upfront access to the global graph structure or (features of) all of its nodes (a limitation of many prior techniques). This has several desirable implications: the graph is not required to be undirected, and it makes the GAT technique directly applicable to inductive learning – including tasks where the model is evaluated on graphs that are completely unseen during training.

  • Unlike previous methods which sampled a fixed-size neighborhood of each node to keep its computational footprint consistent (therefore not allowing access to the entirety of the neighborhood while performing inference), GAT works with the entirety of the neighborhood (at the expense of a variable computational footprint, which is still on-par with methods like the GCN) and does not assume any ordering within it.

  • The authors were also able to produce a version of the GAT layer that leverages sparse matrix operations, reducing the storage complexity to linear in the number of nodes and edges and enabling the execution of GAT models on larger graph datasets.

Deeply Learning Molecular Structure-Property Relationships using Graph Attention Neural Network (Oct 2018) applied GAT to the study of molecular structure-property relationships, which are key to molecular engineering for materials and drug discovery. The authors showed that GAT could greatly improve performance of the deep learning for chemistry, distinguishing atoms in different environments and extracting important structural features determining target properties (such as molecular polarity, solubility, and energy). Interestingly, it identified two distinct parts of molecules; as a result, it could accurately predict molecular properties. Moreover, the resultant latent space was well-organized such that molecules with similar properties were closely located, which is critical for successful molecular engineering.

arxiv1805.10988-f1.png

[Image source. Click image to open in new window.]


arxiv1805.10988-f2.png

[Image source. Click image to open in new window.]


arxiv1805.10988-f3.png

[Image source. Click image to open in new window.]


Aside: returning to the graph convolutional networks domain, Geometric Deep Learning Autonomously Learns Chemical Features That Outperform Those Engineered by Domain Experts (2018) [discussion] explored the performance of deep learning methods in the context of drug discovery, comparing machine learned features against the domain expert engineered features that are mainstream in the pharmaceutical industry. [See also Geometric deep learning.]

geometric_deep_learning.png

[Image source. Click image to open in new window.]


Hop2018geometric-f1.png

[Image source. Click image to open in new window.]


In an innovative approach, Attention-based Graph Neural Network for Semi-supervised Learning (Mar 2018) [non-author code] proposed a novel graph neural network (AGNN) that removed all the intermediate fully-connected layers, and replaced the propagation layers with attention mechanisms that respected the structure of the graph. The attention mechanism allowed them to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. Using this approach, the authors attained state of the art results on classification of a PubMed citation network (shown in Table 2 in that paper). Notably, as shown in Fig. 2, AGNN correctly classified three target nodes in the test set that were misclassified by a graph convolutional network (GCN), and shows how the local attention network weighted the contribution of its local neighborhood.

arxiv1803.03735-t1+t3+t4.png

[Image source. Click image to open in new window.]


arxiv1803.03735-f2.png

[Image source. Click image to open in new window.]


There are many hyperparameters (non-trained parameters, such as random walk length) in graph embedding methods which have to be manually tuned for every graph. Watch Your Step: Learning Node Embeddings via Graph Attention (Sep 2018) [code] by the University of Southern California and Google AI replaced random walk hyperparameters with trainable parameters that were automatically learned via backpropagation. In particular, they learned a novel attention model on the power series of the transition matrix, which guided the random walk to optimize an upstream objective. Unlike previous attention models, their method applied the attention parameters exclusively to the data (e.g. to the random walk), rather than using them in the model for inference. They experimented on link prediction tasks, aiming to produce embeddings that best preserved the graph structure and generalized to unseen information. They improved the state of the art on a comprehensive suite of real-world datasets including social, collaboration, and biological networks: adding attention to random walks reduced the error by 20% to 45% on the datasets evaluated. Further, the learned attention parameters differed for every graph, and the automatically found values agreed with the optimal choice of hyperparameters when existing methods were manually tuned.
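
The core object – an attention distribution over powers of the random-walk transition matrix – can be sketched in a few lines (toy graph; the logits `q` are fixed here for illustration, whereas the paper learns them by backpropagation):

```python
import numpy as np

# Toy 4-node graph; T is the random-walk transition matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
T = A / A.sum(1, keepdims=True)

# Logits q over walk horizons; softmax gives attention on T^k.
q = np.array([2.0, 1.0, 0.5, 0.1])
att = np.exp(q) / np.exp(q).sum()

# Expected co-visit matrix: attention-weighted power series of T.
# Learning q replaces hand-tuning the walk length hyperparameter.
E_covisit = sum(w * np.linalg.matrix_power(T, k + 1)
                for k, w in enumerate(att))
```

Because each `T^k` is row-stochastic and the attention weights sum to one, `E_covisit` remains row-stochastic; shifting mass in `q` toward larger `k` emphasises longer walks, which is what tuning walk length does manually.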

arxiv1710.09599-f1+t1+f2.png

[Image source. Click image to open in new window.]


arxiv1710.09599-f3.png

[Image source. Click image to open in new window.]


arxiv1710.09599-f7.3.png

[Image source. Click image to open in new window.]


Graph convolutional networks (GCN) have been able to achieve state of the art results in the task of node classification; however, since GCN relies on localized first-order approximations of spectral graph convolutions, it is unable to capture higher-order interactions between nodes in the graph. Higher-order Graph Convolutional Networks (Sep 2018) proposed a motif-based graph attention model called Motif Convolutional Networks (MCN), which generalized past approaches by using weighted multi-hop motif adjacency matrices to capture higher-order neighborhoods. A novel attention mechanism allowed each individual node to select the most relevant neighborhood to apply its filter. Experiments showed that their proposed method achieved state of the art results on the semi-supervised node classification task.
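
The simplest instance of a motif adjacency matrix is the triangle motif, where an edge’s weight counts the triangles it participates in; a toy sketch (the multi-hop extension shown is a simplification of the paper’s weighted formulation):

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Triangle-motif adjacency: W_tri[i,j] = number of triangles through edge
# (i,j), computed as (A @ A) masked to existing edges.
W_tri = (A @ A) * A

# A crude multi-hop extension: 2-hop connectivity in the motif graph.
W_2hop = np.linalg.matrix_power((W_tri > 0).astype(float), 2)
```

Here nodes 0, 1, 2 form the only triangle, so edge (0, 1) gets motif weight 1 while the bridge edge (2, 3) gets weight 0 – motif weighting distinguishes densely clustered neighborhoods from incidental connections.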

arxiv-1809.07697a.png

[Image source. Click image to open in new window.]


arxiv-1809.07697b.png

[Image source. Click image to open in new window.]


arxiv1809.07697-t2.png

[Image source. Click image to open in new window.]


arxiv-1809.07697c.png

[Image source. Click image to open in new window.]


Graph Attention Networks:

Additional Reading

  • Improving Robustness of Attention Models on Graphs (Nov 2018)

    “Machine learning models that can exploit the inherent structure in data have gained prominence. In particular, there is a surge in deep learning solutions for graph-structured data, due to its wide-spread applicability in several fields. Graph attention networks (GAT), a recent addition to the broad class of feature learning models in graphs, utilizes the attention mechanism to efficiently learn continuous vector representations for semi-supervised learning problems. In this paper, we perform a detailed analysis of GAT models, and present interesting insights into their behavior. In particular, we show that the models are vulnerable to adversaries (rogue nodes) and hence propose novel regularization strategies to improve the robustness of GAT models. Using benchmark datasets, we demonstrate performance improvements on semi-supervised learning, using the proposed robust variant of GAT.”

    arxiv1811.00181-t1+t2.png

    [Image source. Click image to open in new window.]


[Table of Contents]

Question Answering Over Knowledge Graphs

As with textual sources, knowledge graphs (which provide well structured relational information between entities, and allow inference over indirect facts) may be utilized for question answering. Recent neural network-based approaches to knowledge graph based QA include:

  • attention-based, sequence-to-sequence models; e.g., A Joint Model for Question Answering and Question Generation (Jun 2017), which used a sequence-to-sequence framework that encoded the document and generated a question given an answer, and an answer given a question;

    arxiv1706.01450-t2.png

    [Image source. Click image to open in new window.]


  • Simple Question Answering by Attentive Convolutional Neural Network (Oct 2016), which focused on answering single-relation factoid questions over a knowledge base (graph: Freebase). Each question could be answered from a single fact, expressed as a subject-predicate-object triple, via a two-step pipeline: entity linking and fact selection. In fact selection, the subject entity in a fact candidate was matched with the entity mention in the question by a character-level convolutional neural network (char-CNN), while the predicate in that fact was matched with the question by a word-level CNN (word-CNN);

    arxiv1606.03391-f1.png

    [Image source. Click image to open in new window.]


    arxiv1606.03391-f2.png

    [Image source. Click image to open in new window.]


  • Neural Network-Based Question Answering Over Knowledge Graphs on Word and Character Level (2017), which trained a neural network for answering simple questions in an end-to-end manner, leaving all decisions to the model. It learned to rank subject-predicate pairs to enable the retrieval of relevant facts, given a question. The network contained a nested word/character-level question encoder that allowed it to handle out-of-vocabulary and rare-word problems, while still being able to exploit word-level semantics;

    Lukovnikov2017neural-f1+f2+f3+f4+f5+f6.png

    [Image source. Click image to open in new window.]


    Lukovnikov2017neural-t4.png

    [Image source. Click image to open in new window.]


  • Variational Reasoning for Question Answering with Knowledge Graph (Nov 2017), which provided a unified deep learning architecture and an end-to-end variational learning algorithm that can handle noise in questions, and learn multi-hop reasoning simultaneously;

    arxiv1709.04071-f1.png

    [Image source. Click image to open in new window.]


    arxiv1709.04071-f2+t1+t2+f5.png

    [Image source. Click image to open in new window.]


  • Similarly, Richard Socher and colleagues at SalesForce employed a multi-hop reasoning approach to QA over incomplete knowledge graphs [Multi-Hop Knowledge Graph Reasoning with Reward Shaping (Sep 2018)]. A detailed analysis of their reinforcement learning approach (as opposed to knowledge graph embedding-based models) indicated that access to a more accurate environment representation (reward shaping) and a more thorough exploration of the search space (action dropout) were important to performance.

    arxiv1808.10568-f1+t1.png

    [Image source. Click image to open in new window.]


    arxiv1808.10568-f3.png

    [Image source. Click image to open in new window.]


    arxiv1808.10568-t2.png

    [Image source. Click image to open in new window.]


  • EARL: Joint Entity and Relation Linking for Question Answering Over Knowledge Graphs (Jun 2018) [GitHub], which performed entity linking and relation linking as a joint task:

    “Many question answering systems over knowledge graphs rely on entity and relation linking components in order to connect the natural language input to the underlying knowledge graph. Traditionally, entity linking and relation linking have been performed either as dependent sequential tasks or as independent parallel tasks. In this paper, we propose a framework called EARL, which performs entity linking and relation linking as a joint task. EARL implements two different solution strategies for which we provide a comparative analysis in this paper: The first strategy is a formalisation of the joint entity and relation linking tasks as an instance of the Generalised Travelling Salesman Problem (GTSP). In order to be computationally feasible, we employ approximate GTSP solvers. The second strategy uses machine learning in order to exploit the connection density between nodes in the knowledge graph. It relies on three base features and re-ranking steps in order to predict entities and relations. We compare the strategies and evaluate them on a dataset with 5000 questions. Both strategies significantly outperform the current state-of-the-art approaches for entity and relation linking.”

    arxiv1801.03825-f1+f2+f3+f4+f5+t2.png

    [Image source. Click image to open in new window.]


    arxiv1801.03825-t5+t6.png

    [Image source. Click image to open in new window.]


  • Andrew McCallum et al. (Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning (Nov 2017) [code]) proposed the MINERVA algorithm, which addressed the difficult and practical task of answering questions where only one entity of the relation was known (versus finding paths connecting a given pair of entities). Since random walks are impractical in a setting with combinatorially many destinations from a start node, they presented a neural reinforcement learning approach that learned how to navigate the graph conditioned on the input query to find predictive paths. Empirically, this approach significantly outperformed prior methods on several datasets. The model was robust, and could learn long chains of reasoning (~93% accuracy on path lengths of 8-10: note their Fig. 2); moreover it did not require pretraining or initial supervision.

    arxiv1711.05851-f1+f2+t1+t2+t3+t4+t5+t6.png

    [Image source. Click image to open in new window.]


    arxiv1711.05851-f5.png

    [Image source. Click image to open in new window.]


    arxiv1711.05851-t8.png

    [Image source. Click image to open in new window.]


  • An implementation of a tumor-based knowledge graph is presented in Semantically Linking In silico Cancer Models (Dec 2014), which introduced a graph-based model for semantically linking computational cancer models via domain graphs to better understand and explore combinations of models spanning multiple biological scales.

    “Multiscale models are commonplace in cancer modeling, where individual models acting on different biological scales are combined within a single, cohesive modeling framework. However, model composition gives rise to challenges in understanding interfaces and interactions between them. Based on specific domain expertise, typically these computational models are developed by separate research groups using different methodologies, programming languages, and parameters. This paper introduces a graph-based model for semantically linking computational cancer models via domain graphs that can help us better understand and explore combinations of models spanning multiple biological scales. We take the data model encoded by TumorML, an XML-based markup language for storing cancer models in online repositories, and transpose its model description elements into a graph-based representation. By taking such an approach, we can link domain models, such as controlled vocabularies, taxonomic schemes, and ontologies, with cancer model descriptions to better understand and explore relationships between models. The union of these graphs creates a connected property graph that links cancer models by categorizations, by computational compatibility, and by semantic interoperability, yielding a framework in which opportunities for exploration and discovery of combinations of models become possible.”

    • Critique.

      While I find their implementation to be very awkwardly conceptualized and described, it nonetheless provides examples of Cypher-based question answer queries that are relevant to this REVIEW (note that I implemented my KG in Neo4j, which uses the Cypher graph query language).

    Johnson2014semantically-f1+t1+f2+f4+f5+listing6.png

    [Image source. Click image to open in new window.]


An interesting graph-based approach to question answering is the use of subgraphs. Work from the Allen Institute for AI, Answering Complex Questions Using Open Information Extraction (Apr 2017), used a recently proposed support graph optimization framework for QA to develop a new inference model for Open IE. Their model significantly outperformed a state of the art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge. Their system, TupleINF, used integer linear programming (ILP) to solve QA over complex structures that required multi-fact reasoning. They viewed the QA task as searching for a graph (support graph search) that best connects the terms in the question (qterms) with an answer choice via the knowledge (see Fig. 1 in their paper for a simple illustrative example). The qterms, answer choices, and tuple fields formed the set of possible vertices, $\small \mathcal{V}$, of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices formed the set of possible edges, $\small \mathcal{E}$. The support graph, $\small G(V, E)$, was a subgraph of $\small \mathcal{G}(\mathcal{V}, \mathcal{E})$, where $\small V$ and $\small E$ denoted “active” nodes and edges, respectively. Similar to TableILP (below), they scored the support graph based on the weight of the active nodes and edges.
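
A toy illustration of support-graph scoring (a brute-force stand-in for the actual ILP solver; the entities, tuples, and weights are all invented):

```python
# Edge weights linking question terms (qterms) to tuple fields, and tuple
# fields to answer choices; an ILP would select the highest-weight active
# subgraph, which this simple scorer approximates per answer choice.
qterm_tuple = {("moon", "t1"): 0.9, ("orbit", "t1"): 0.7, ("moon", "t2"): 0.2}
tuple_answer = {("t1", "satellite"): 0.8, ("t2", "planet"): 0.6}

def support_score(answer):
    """Sum the weights of all qterm->tuple->answer paths reaching `answer`."""
    score = 0.0
    for (q, t), w1 in qterm_tuple.items():
        w2 = tuple_answer.get((t, answer), 0.0)
        if w2:                       # both edges of the path are active
            score += w1 + w2
    return score

answers = {a for _, a in tuple_answer}
best = max(answers, key=support_score)
print(best)  # satellite
```

Tuple `t1` is reached by two strong qterm edges, so "satellite" wins; the real ILP additionally enforces global constraints (e.g. limits on active tuples) that this greedy scorer omits.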

arxiv1604.06076-f1.png

[Image source. Click image to open in new window.]


arxiv1704.05572-descr.png

[Image source. Click image to open in new window.]


arxiv1704.05572-f1+t1.png

[Image source. Click image to open in new window.]


The approach of using subgraphs and Integer Linear Programming (ILP) in QA was also described on slides 15-25 in Reasoning-driven Question Answering by the University of Pennsylvania Ph.D. candidate Daniel Khashabi, at a Stanford NLP seminar (June 2018). The work summarized in that presentation is described in the UPenn/Allen Institute for Artificial Intelligence paper Question Answering as Global Reasoning over Semantic Abstractions (2018) [code], which introduced their SemanticILP system. They claimed SemanticILP to be the first system that reasons over a wide range of semantic abstractions of the text, which are derived using off-the-shelf, general-purpose, pretrained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers. Representing multiple abstractions as a family of graphs, they translated QA into a search for an optimal subgraph that satisfied certain global and local properties. SemanticILP outperformed baseline models BiDAF, an information retrieval solver, and TupleINF on QA tasks on 4th and 8th grade science exams and a biology exam (Tables 4 and 5 in their paper). Thus on two domains with very different characteristics, the graph-based SemanticILP outperformed traditional and neural network models.

Khashabi2018question-f1+descr.png

[Image source. Click image to open in new window.]


Khashabi2018question-f2.png

[Image source. Click image to open in new window.]


Khashabi2018question-t4.png

[Image source. Click image to open in new window.]


Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. Question Answering by Reasoning Across Documents with Graph Convolutional Networks (Aug 2018) by Cao et al. introduced a method which integrates and reasons over information spread within and across multiple documents. They framed this task as an inference problem on a graph, with mentions of entities as nodes and edges encoding relations between different mentions. Graph convolutional networks (GCN) were applied to these graphs and trained to perform multi-step reasoning. Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Each step of the algorithm (also referred to as a hop) updated all node representations in parallel; in particular, a node was updated as a function of messages from its direct neighbours. At the end of the first step, every node was aware of every other node it connected directly to. Taking this idea recursively, each further step of the algorithm allowed a node to indirectly interact with nodes already known to its neighbours; after $\small L$ layers of R-GCN, information has been propagated through paths connecting up to $\small L$ nodes. In this work, words in the supporting documents and in the query were encoded using a model of contextualized word representation (ELMo). The ELMo encodings were used to produce a query representation, as well as a set of node representations. Their Entity-GCN method achieved state of the art results on the WikiHop dataset.
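
The hop-by-hop propagation described above can be sketched in plain Python (toy graph and features, not the authors' R-GCN, which also uses relation-specific transformations and gating):

```python
# One "hop" of message passing: each node is updated from messages sent by
# its direct neighbours; L hops propagate information along paths of length L.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}          # toy mention graph
feats = {"a": [1.0, 0.0], "b": [0.0, 0.0], "c": [0.0, 1.0]}

def hop(feats):
    new = {}
    for node, nbrs in graph.items():
        msgs = [feats[n] for n in nbrs] + [feats[node]]    # include a self-loop
        new[node] = [sum(xs) / len(msgs) for xs in zip(*msgs)]
    return new

state = feats
for _ in range(2):       # after 2 hops, 'a' has received information from 'c'
    state = hop(state)
```

After one hop, node `a` knows only about `b`; after the second hop, `c`'s feature has reached `a` through `b`, which is the multi-hop behavior the model exploits.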

arxiv1808.09920-fig1+2.png

[Image source. Click image to open in new window.]


As stated in their paper, Question Answering by Reasoning Across Documents with Graph Convolutional Networks,

    "The methods reported in Welbl et al. (2017) -- 'Constructing Datasets for Multi-hop Reading Comprehension Across Documents' -- approach the task by merely concatenating all documents into a single long text and training a standard reading comprehension model, namely, BiDAF and FastQA. Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in the document whereas edges encode simple relations between them (e.g., cross-document and within-document coreferences). We assume that the chains of reasoning can be captured by propagating information, first extracted by a deep contextualized word representation such as ELMo, along edges in this graph using a graph convolutional network (GCN). Our *Entity-GCN* model substantially outperforms previous systems. Our ablation studies confirm that we benefit both from using contextualized word embeddings and from modeling relations between entities."

    arxiv1808.09920-t1.png
    [Image source. Click image to open in new window.]

Related to the work described in Question Answering by Reasoning Across Documents with Graph Convolutional Networks, Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks (Sep 2018) also employed a multi-hop reading comprehension approach on the WikiHop dataset. Previous work approximated global evidence with local coreference information, encoding coreference chains with directed acyclic graph (DAG)-styled GRU layers within a Gated-Attention Reader; however, coreference is limited in the information it provides for rich inference. In this work, the authors introduced a new method for better connecting global evidence, which formed more complex graphs than DAGs. To perform evidence integration on their graphs, they investigated two recent graph neural networks: graph convolutional networks (GCN), and graph recurrent networks (GRN). After obtaining the representation vectors for the question and the entity mentions in passages, an additive attention model (Bahdanau et al., 2015) was adopted, treating all entity mention representations and the question representation as the memory and the query, respectively. Word embeddings were initialized from the 300-dimensional pretrained GloVe word embeddings.
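
A minimal sketch of additive (Bahdanau-style) attention in this memory/query setting (all weight values below are hypothetical toy numbers; a real model learns them):

```python
import math

# Additive attention: entity-mention vectors are the memory, the question
# vector is the query. score(m, q) = v . tanh(W_m m + W_q q), then softmax.
mentions = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # toy mention representations
question = [1.0, 0.0]                             # toy question representation

w_m = [[0.5, 0.0], [0.0, 0.5]]    # projects a mention (hypothetical weights)
w_q = [[0.5, 0.0], [0.0, 0.5]]    # projects the query
v = [1.0, 1.0]                    # scoring vector

def matvec(w, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def score(m, q):
    hidden = [math.tanh(a + b) for a, b in zip(matvec(w_m, m), matvec(w_q, q))]
    return sum(vi * hi for vi, hi in zip(v, hidden))

scores = [score(m, question) for m in mentions]
exps = [math.exp(s) for s in scores]
attn = [e / sum(exps) for e in exps]    # attention distribution over mentions
```

The resulting distribution weights each entity mention by its learned relevance to the question, which is how the answer candidates are scored.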

arxiv1809.02040-f1.png

[Image source. Click image to open in new window.]


arxiv1809.02040-f2+f3.png

[Image source. Click image to open in new window.]


arxiv1809.02040-f4.png

[Image source. Click image to open in new window.]


arxiv1809.02040-f5+t1.png

[Image source. Click image to open in new window.]


The latter paper, Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks, included two footnotes of interest (2; 7) regarding the concurrent Cao et al. work, which stated:

  • “The concurrent unpublished work (Cao et al., 2018) also investigates the usage of graph convolution networks on WikiHop. Our work proposes a different model architecture, and focus more on the exploration and comparison of multiple edge types for building the graph-structured passage representation.”

  • “At submission time we observe a recent short arXiv paper (Cao et al., 2018), available on August 28th, showing an accuracy of 67.6 using ELMo (Peters et al., 2018), which is the only result better than ours. ELMo has achieved dramatic performance gains of 3+ points over a broad range of tasks. Our main contribution is studying an evidence integration approach, which is orthogonal to the contribution of ELMo on large-scale training. For more fair comparison with existing work, we did not adopt ELMo, but we will conduct experiments with ELMo as well.”

Exploiting Explicit Paths for Multi-hop Reading Comprehension (Nov 2018) focused on the task of multi-hop reading comprehension where a system was required to reason over a chain of multiple facts, distributed across multiple passages, to answer a question. Inspired by graph-based reasoning, they presented a path-based reasoning approach for textual reading comprehension, which operated by generating potential paths across multiple passages, extracting implicit relations along this path, and composing them to encode each path. The proposed model achieved a 2.3% gain on the WikiHop Dev set as compared to previous state of the art, and was also able to explain its reasoning through explicit paths of sentences.

arxiv1811.01127-f1+f2.png

[Image source. Click image to open in new window.]


arxiv1811.01127-f3.png

[Image source. Click image to open in new window.]


arxiv1811.01127-t1+t2.png

[Image source. Click image to open in new window.]


arxiv1811.01127-t3.png

[Image source. Click image to open in new window.]


Lastly, in a variation of graph convolutional networks (GCN),  Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text (Sep 2018) [code] by Ruslan Salakhutdinov and colleagues at Carnegie Mellon University looked at the combination of a knowledge base/graph and entity-linked text – appropriate when an incomplete KB is available with a large text corpus. They built on recent advances in graph representation learning, proposing their novel GRAFT-Net model for extracting answers from a question-specific subgraph containing text and KB entities and relations.

To answer a question posed in natural language, GRAFT-Net considered a heterogeneous graph constructed from text (sentences retrieved from Wikipedia) and KB facts (constructed from entities around question seeds in FreeBase), leveraging the rich relational structure between the two information sources. Embeddings were propagated in the graph for a fixed number of layers, and the final node representations were used to classify answers. The model included an attention mechanism, computed using the question and relation embeddings. They showed that GRAFT-Net was competitive with the state of the art when tested using either KB or text alone, vastly outperforming existing methods in the combined setting.

arxiv1809.00782-f1.png

[Image source. Click image to open in new window.]


arxiv1809.00782-f2.png

[Image source. Click image to open in new window.]


arxiv1809.00782-f3.png

[Image source. Click image to open in new window.]


arxiv1809.00782-t3+t4+f4.png

[Image source. Click image to open in new window.]


arxiv1809.00782-t1+t2.png

[Image source. Click image to open in new window.]


Question Answering Over Knowledge Graphs:

Additional Reading

  • Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs (Nov 2018) [code]

    “In this paper, we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs. We experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs, our logical form of choice. Our proposed model generally outperforms the other models on two QA datasets over the DBpedia knowledge graph, evaluated in different settings. In addition, we show that transfer learning from the larger of those QA datasets to the smaller dataset yields substantial improvements, effectively offsetting the general lack of training data.”

  • Graph based Question Answering System (Dec 2018)

    “In today’s digital age in the dawning era of big data analytics it is not the information but the linking of information through entities and actions which defines the discourse. Any textual data either available on the Internet off off-line (like newspaper data, Wikipedia dump, etc) is basically connect information which cannot be treated isolated for its wholesome semantics. There is a need for an automated retrieval process with proper information extraction to structure the data for relevant and fast text analytics. The first big challenge is the conversion of unstructured textual data to structured data. Unlike other databases, graph databases handle relationships and connections elegantly. Our project aims at developing a graph-based information extraction and retrieval system.”

[Table of Contents]

Graph Representation Learning

Representation learning, variously mentioned above, is a class of machine learning techniques that autonomously learn to derive latent features from raw data. Representation learning of knowledge graphs (graph representation learning) aims to project both entities and relations into a continuous low-dimensional space [Representation Learning of Knowledge Graphs with Entity Descriptions (2016)].
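
One canonical example of such a projection is TransE-style translational scoring, sketched here with hypothetical toy embeddings (not from any cited paper): a triple $\small (h, r, t)$ is plausible when $\small h + r \approx t$ in the embedding space.

```python
import math

# TransE-style scoring: a triple (h, r, t) is plausible when h + r is close
# to t, i.e. the distance ||h + r - t|| is small. Embeddings are toy values.
entity = {"paris": [0.9, 0.1], "france": [1.0, 1.0], "berlin": [0.1, 0.9]}
relation = {"capital_of": [0.1, 0.9]}

def transe_score(h, r, t):
    """Euclidean distance between (h + r) and t; lower is more plausible."""
    return math.sqrt(sum((hi + ri - ti) ** 2
                         for hi, ri, ti in zip(entity[h], relation[r], entity[t])))

good = transe_score("paris", "capital_of", "france")   # small distance
bad = transe_score("berlin", "capital_of", "france")   # larger distance
```

Training such a model pushes true triples toward small distances and corrupted triples toward large ones; the learned low-dimensional vectors are the entity and relation representations.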

Exploring the Semantic Content of Unsupervised Graph Embeddings: an Empirical Study (Jun 2018) reviews graph embedding techniques, demonstrating that topological features are approximated by the embedding space, allowing key insight into how graph embeddings create good representations.

Representation Learning on Graphs: Methods and Applications (presented by coauthor Jure Leskovec as a keynote speech at the 4th IEEE BigData Conference, Boston, MA, December 2017; slides) provides a superb overview of machine learning on graphs and graph representation learning. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. [GraphSAGE is discussed on pp. 12-13.]

Hamilton2017representation-f1.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f2.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f3.png

[Image source. Click image to open in new window.]


Hamilton2017representation-t1.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f4.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f5.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f6.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f7.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f8.png

[Image source. Click image to open in new window.]


Hamilton2017representation-f9.png

[Image source. Click image to open in new window.]


Inductive Representation Learning on Large Graphs (Jun 2017, updated Sep 2018) (also by Jure Leskovec, Stanford University) [project | code | applied code] introduced GraphSAGE, a general, inductive framework which leveraged node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, they learned a function that generated embeddings by sampling and aggregating features from a node’s local neighborhood. Relevant to the biomedical domain, the authors classified the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and showed that their algorithm generalized to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
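
The sample-and-aggregate step can be sketched as follows (toy graph and features; the mean aggregator is one of GraphSAGE's variants, and the learned weight matrices applied after aggregation are omitted):

```python
import random

# GraphSAGE-style embedding step: sample a fixed-size neighbourhood, aggregate
# its features (here: element-wise mean), and concatenate with the node's own
# features. No per-node embedding table is learned, so the same function
# applies to nodes never seen during training.
graph = {"a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"]}
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.0, 1.0], "d": [0.0, 1.0]}

def sage_step(node, num_samples=2, rng=random.Random(0)):
    nbrs = rng.sample(graph[node], min(num_samples, len(graph[node])))
    mean = [sum(feats[n][i] for n in nbrs) / len(nbrs)
            for i in range(len(feats[node]))]
    return feats[node] + mean            # concatenation: self ++ aggregated

emb = sage_step("a")
```

Because the embedding is computed from sampled neighbourhood features rather than looked up, the per-node cost is bounded by the sample size, which is what makes the method scale to large graphs.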

arxiv1706.02216-f1.png

[Image source. Click image to open in new window.]


arxiv1706.02216-t1.png

[Image source. Click image to open in new window.]


Work by authors (Representation Learning on Graphs with Jumping Knowledge Networks (Jun 2018)) at MIT (with National Institute of Informatics, Tokyo coauthors) explored a “jumping knowledge networks” architecture that for each node flexibly leveraged different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks they demonstrated state of the art performance. Furthermore, combining the jumping knowledge framework with models like Graph Convolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT) consistently improved the performance of those models.
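
The layer-combining idea can be sketched in a few lines (toy per-layer vectors; the paper also proposes an LSTM-attention combiner not shown here):

```python
# Jumping-knowledge aggregation: instead of keeping only the final layer's
# output, each node combines its representations from every layer, letting it
# draw on whichever neighborhood range is most informative. Concatenation and
# element-wise max are two of the proposed combiners.
layer_reprs = [[0.2, 0.9],    # node's representation after layer 1 (1-hop view)
               [0.6, 0.1],    # after layer 2 (2-hop view)
               [0.4, 0.5]]    # after layer 3 (3-hop view)

jk_concat = [x for layer in layer_reprs for x in layer]              # concat
jk_max = [max(layer[i] for layer in layer_reprs) for i in range(2)]  # max-pool
```

Either combined vector then replaces the last-layer output as the node's final representation, which is why the scheme composes with GCN, GraphSAGE, or GAT backbones.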

arxiv1806.03536-f1.png

[Image source. Click image to open in new window.]


arxiv1806.03536-f2+f3.png

[Image source. Click image to open in new window.]


arxiv1806.03536-f4.png

[Image source. Click image to open in new window.]


arxiv1806.03536-f5.png

[Image source. Click image to open in new window.]


arxiv1806.03536-t1+t2.png

[Image source. Click image to open in new window.]


Follow-on work by those MIT authors (How Powerful are Graph Neural Networks? (Oct 2018)) – with Stanford University coauthors including Jure Leskovec – characterized the discriminative power of popular GNN variants such as graph convolutional networks (GCN) and GraphSAGE, showing that they cannot learn to distinguish certain simple graph structures. They then developed a simple architecture that was provably the most expressive among the class of GNNs, being as powerful as the Weisfeiler-Lehman graph isomorphism test, and demonstrated that their model (GIN: Graph Isomorphism Network) achieved state of the art performance. Their results confirmed that the most powerful GNN (their GIN model) had high representational power, as it almost perfectly fit the training data – whereas the less powerful GNN variants often severely underfit the training data.
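
The GIN node update, with a stand-in identity function where the learned MLP would go (toy one-dimensional features):

```python
# GIN node update: h_v' = MLP((1 + eps) * h_v + sum of neighbour features).
# Sum aggregation (unlike mean or max) preserves multiset information about
# the neighbourhood, which is what makes GIN as discriminative as the
# Weisfeiler-Lehman test.
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
feats = {"a": [1.0], "b": [2.0], "c": [2.0]}
eps = 0.1

def mlp(x):                 # stand-in for the learned MLP (identity here)
    return x

def gin_update(node):
    agg = [sum(feats[n][i] for n in graph[node])    # SUM over neighbours
           for i in range(len(feats[node]))]
    return mlp([(1 + eps) * s + a for s, a in zip(feats[node], agg)])

h_a = gin_update("a")
```

With mean aggregation, a node with two neighbours of feature 2.0 and a node with three such neighbours would look identical; the sum distinguishes them, which is the paper's central point.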

arxiv-1810.00826.png

[Image source. Click image to open in new window.]


arxiv1810.00826-t1.png

[Image source. Click image to open in new window.]


Bioinformatics datasets (shown in Table 1 in the figure, above).

  • MUTAG is a dataset of 188 mutagenic aromatic and heteroaromatic nitro compounds with 7 discrete labels.
  • PROTEINS is a dataset where nodes are secondary structure elements (SSEs) and there is an edge between two nodes if they are neighbors in the amino-acid sequence or in 3D space. It has 3 discrete labels, representing helix, sheet or turn.
  • PTC is a dataset of 344 chemical compounds that reports the carcinogenicity for male and female rats and it has 19 discrete labels.
  • NCI1 is a dataset made publicly available by the National Cancer Institute (NCI) and is a subset of balanced datasets of chemical compounds screened for ability to suppress or inhibit the growth of a panel of human tumor cell lines, having 37 discrete labels.

Relevant to How Powerful are Graph Neural Networks?, above, A Simple Yet Effective Baseline for Non-Attribute Graph Classification (Nov 2018) sought to better understand the increasingly complex methods necessary for the application of graph kernels or graph neural networks to graph classification and representation learning on graphs. As a first step, the authors developed a simple yet meaningful graph representation, which they called LDP, and explored its effectiveness in graph classification. Interestingly, this simple representation achieved performance similar to state of the art graph kernels and graph neural networks for non-attributed graph classification. Its performance on classifying attributed graphs was slightly weaker, as it did not incorporate attributes; however, given its simplicity and efficiency, it still serves as an effective baseline for attributed graph classification. They also provided a simple connection with graph neural networks. Caveats included that these observations held only for the task of graph classification, while existing methods are often designed for a broader scope including node embedding and link prediction, and that the results are likely biased by the limited number of benchmark datasets available. Nevertheless, the authors argued that the good performance of their simple baseline calls for the development of new, more comprehensive benchmark datasets to better evaluate and analyze different graph learning methods, and that given its computational efficiency, their graph summary is a good candidate baseline method for future graph classification (or even other graph learning) studies.
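
A sketch of the per-node LDP (Local Degree Profile) features on a toy graph; the full method then histograms each of these five values over all nodes to form the graph-level representation:

```python
import statistics

# Local Degree Profile: each node is described by its own degree together
# with simple statistics (min, max, mean, std) of its neighbours' degrees.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
degree = {n: len(nbrs) for n, nbrs in graph.items()}

def ldp(node):
    nbr_degrees = [degree[n] for n in graph[node]]
    return (degree[node],
            min(nbr_degrees),
            max(nbr_degrees),
            statistics.mean(nbr_degrees),
            statistics.pstdev(nbr_degrees))

features = {n: ldp(n) for n in graph}
```

Despite using nothing but degree statistics, histograms of these features fed to a standard classifier were competitive with far heavier graph kernels and GNNs on non-attributed benchmarks.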

Relevant to How Powerful are Graph Neural Networks? (cited as [XHLJ18] in this paper):

  • “The theoretical understanding of the properties and limitations of GNNs is somewhat limited, although there exists very recent work starting to address this issue [XHLJ18].”

  • “Interestingly a recent paper [XHLJ18] explored the expressiveness of different pooling strategies and concludes that sum pooling is more powerful than mean and max pooling.”

  • “Interestingly, in a recent paper [XHLJ18] that explores the discriminative power of graph neural networks showed that GNNs are at most as powerful as the WL test in distinguishing graph structures. By choosing right aggregation function, they developed the graph isomorphism network (GIN), that can achieve the same discriminative power as the WL test. However, in experiments it is observed that GIN outperforms the WL kernel on social network datasets. One explanation the authors provide is that WL-tests are one-hot encodings and thus cannot capture the ‘similarity’ between subtrees (while they can capture that whether they are ‘the same’ or not). In contrast, a GNN satisfying certain criteria (see Theorem 3 of [XHLJ18]) generalizes the WL test by learning to embed the subtrees to a feature space continuously. This enables GNNs to not only discriminate different structures, but also learn to map similar graph structures to similar embeddings and capture dependencies between graph structures.”

Otherwise relevant:

  • “As we can see, LDP is still quite good on MUTAG, PTC, and PROTEIN datasets that do not have rich label information. For graphs with rich labels such as ENZYME, DD, and NCI1 (see details in Appendix), LDP and its variant is much worse than the other kernels that can better utilize label information. This also suggests that MUTAG, PTC, and PROTEIN are not sufficient to evaluate different methods.

  • “Most graph kernels aim to capture graph topology and graph similarity in the hope of improving classification accuracy. Our experiment suggests that this is not yet well-reflected on current benchmark datasets for non-attribute graphs. This calls for the construction of better and more comprehensive benchmark datasets for understanding and analyzing graph representations. On the other hand, we emphasize that in scenarios where both graph nodes and edges have certain associated features, proper handling labels can significantly improve the classification accuracy, as is shown clearly for the NCI1 dataset. Graph kernels have been rather effective in incorporating node/edge attributes. It will be an interesting question to see how to incorporate attributes in a more effective yet still simple manner in our graph representation. Also, although there are large scale chemical graphs datasets available, a benchmark dataset that contains many large graphs is still missing. We plan to create such benchmark dataset for future use. In general, while not addressed in this paper, we note that understanding the power and limitation of various graph representations, as well as the types of datasets with respect to which they are most effective, are crucial, yet challenging and remain largely open.”

Graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. GraphGAN [GraphGAN: Graph Representation Learning with Generative Adversarial Nets (Nov 2017)] unified these two classes of methods in an adversarial approach in which the generative model and discriminative model played a game-theoretical minimax game for node classification and link prediction and recommendation. For a given vertex (node), the generative model tried to fit its underlying true connectivity distribution over all other vertices and produced “fake” samples to fool the discriminative model, while the discriminative model tried to detect whether the sampled vertex was from ground truth or generated by the generative model. With the competition between these two models, both of them could alternately and iteratively boost their performance. It has been demonstrated (GraphGAN) that graph representation learning with generative adversarial nets achieves substantial gains over state of the art baselines in a variety of applications including link prediction, node classification, and recommendation.

arxiv1711.08267-f1.png

[Image source. Click image to open in new window.]


arxiv1711.08267-f2.png

[Image source. Click image to open in new window.]


arxiv1711.08267-t1+t2.png

[Image source. Click image to open in new window.]


In an adversarial approach similar to GraphGAN (different authors), KBGAN: Adversarial Learning For Knowledge Graph Embeddings (Nov 2017; updated Apr 2018) [code] introduced KBGAN, an adversarial learning framework to improve the performance of a wide range of existing knowledge graph embedding models. Because KGs typically contain only positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts, and contribute little to the training. Inspired by generative adversarial networks (GANs), these authors used one knowledge graph embedding model as a negative sample generator to assist the training of the desired model, which acted as the discriminator in the GAN. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as building blocks. They adversarially trained two translation-based models, TransE and TransD, each with assistance from one of two probability-based models, DistMult and ComplEx. The performance of KBGAN was evaluated on the link prediction task using three knowledge base completion datasets – FB15k-237, WN18 and WN18RR – showing that adversarial training substantially improved the performance of the target embedding models under various settings.
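
The conventional corruption-based negative sampling that KBGAN improves upon can be sketched as follows (toy entity list; KBGAN's contribution is replacing the uniform choice with a learned generator that proposes harder negatives):

```python
import random

# Conventional negative sampling for KG embedding training: corrupt a positive
# triple by replacing its head or tail with a uniformly sampled entity. Most
# such negatives are trivially implausible, so they contribute little signal.
entities = ["paris", "france", "berlin", "germany"]
positive = ("paris", "capital_of", "france")

def corrupt(triple, rng=random.Random(0)):
    h, r, t = triple
    while True:
        e = rng.choice(entities)                       # uniform replacement
        cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if cand != triple:                             # keep only true negatives
            return cand

negative = corrupt(positive)
```

In KBGAN, the generator model instead scores candidate corruptions and samples those the discriminator is most likely to mistake for true facts.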

arxiv1711.04071-f1.png

[Image source. Click image to open in new window.]


arxiv1711.04071-t4.png

[Image source. Click image to open in new window.]


Disease Related Knowledge Summarization Based on Deep Graph Search (Apr 2017) [code] presented a graph-based attention model (GRAM), which addressed two challenges of deep learning applied to healthcare-related knowledge graphs: data insufficiency (insufficient sample size), and interpretation (representations learned by deep learning methods should align with medical knowledge). GRAM supplemented electronic health records with the hierarchical information inherent in medical ontologies (represented as a directed acyclic graph). The ontological KG provided a multi-resolution view of medical concepts, upon which GRAM analyzed health records via recurrent neural networks to learn accurate and interpretable representations of medical concepts. GRAM chose a weighted average of the ancestors of a medical concept, and trained the entire process with a predictive model in an end-to-end fashion.

arxiv1611.07012-f1.png

[Image source. Click image to open in new window.]


arxiv1611.07012-f3.png

[Image source. Click image to open in new window.]


A new approach to learning graph embeddings that relies on structural measures of node similarity for the generation of training data was presented in Learning Graph Embeddings from WordNet-based Similarity Measures (Aug 2018) [code | datasets]. Their path2vec model learned node embeddings able to approximate a given measure, such as the shortest path distance (a node similarity based on network structure) or any other graph measure. In contrast to existing approaches, path2vec did not rely on random walks or autoencoder architectures (link prediction), and provided a way to supply additional information about the target measure, not only the graph structure itself. Thus, in applications one could replace path-based measures with these node embeddings, gaining a significant speedup when computing distances and nearest neighbors.

path2vec could be trained on arbitrary graph measures and was not restricted to the shortest path or to only tree-structured graphs. Evaluations of the proposed model on semantic similarity and word sense disambiguation tasks (using WordNet as the source of gold similarities) showed that path2vec yielded state of the art results. Notably, the model was computationally efficient, orders of magnitude faster than the direct computation of graph distances: in the WordNet semantic similarity task the learned embeddings trained 3 or 4 orders of magnitude faster than other embedding approaches, depending on the similarity measures used.
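
The fitting idea can be sketched as plain gradient descent on toy target similarities (all values hypothetical; the actual model trains in batches with regularization): node vectors are adjusted so that their dot products approximate a precomputed graph measure.

```python
# path2vec's training idea in miniature: fit node vectors so that their dot
# product approximates a precomputed pairwise graph similarity. One step of
# gradient descent on the squared error for each training pair.
target = {("a", "b"): 0.9, ("a", "c"): 0.1}     # hypothetical similarities
emb = {"a": [0.5, 0.5], "b": [0.5, 0.5], "c": [0.5, -0.5]}
lr = 0.1

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def loss():
    return sum((dot(emb[u], emb[v]) - s) ** 2 for (u, v), s in target.items())

before = loss()
for (u, v), s in target.items():                 # one descent step per pair
    err = dot(emb[u], emb[v]) - s
    grad_u = [2 * err * x for x in emb[v]]
    grad_v = [2 * err * x for x in emb[u]]
    emb[u] = [x - lr * g for x, g in zip(emb[u], grad_u)]
    emb[v] = [x - lr * g for x, g in zip(emb[v], grad_v)]
after = loss()
```

Once trained, a dot product between two stored vectors replaces an on-the-fly graph-distance computation, which is the source of the reported orders-of-magnitude speedup.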

arxiv1808.05611-t1+t2+t3+f4+t5.png

[Image source. Click image to open in new window.]


Ruslan Salakhutdinov and colleagues proposed a probabilistic graphical model that leveraged a topic model to design a word sense disambiguation (WSD) system that scaled linearly with the number of words in the context [Knowledge-based Word Sense Disambiguation using Topic Models (Jan 2018)]. Their logistic normal topic model – a variant of Latent Dirichlet Allocation in which the topic proportions for a document were replaced by WordNet synsets (sets of synonyms) – incorporated semantic information about synsets as its priors. They used a non-uniform prior for the synset distribution over words to model the frequency of words within a synset, and they modeled the relationships between synsets by using a logistic-normal prior for drawing the synset proportions of the document. This made their model similar to the correlated topic model, with the difference that their priors were fixed, not learned. In particular, the values of those priors were determined using the knowledge from WordNet. They illustrated the benefit of using the whole document as the context for disambiguation. The proposed model, WSD-TM, outperformed state of the art knowledge-based WSD systems.

  • Excerpted from the paper:

    Consider an excerpt from the SensEval 2 dataset shown in Figure 3:

      "Medical scientists are starting to uncover a handful of genes which if damaged unleash the chaotic growth of cells that characterizes cancer. Scientists say the discovery of these genes in recent months is painting a new and startling picture of how cancer develops. An emerging understanding of the genes is expected to produce an array of new strategies for future cancer treatment and prevention. That is for the future. Already scientists are developing tests based on the newly identified genes that for the first time can predict whether an otherwise healthy individual is likely to get cancer. "It's a super-exciting set of discoveries," says Bert Vogelstein, a Johns Hopkins University researcher."

    Highlighted [bolded] words clearly indicate that the domain of the document is biology. While most of these words are monosemous, let’s consider disambiguating the word “cell”, which is highly polysemous, having 7 possible [WordNet] senses as shown in Figure 5.

    WordNet Search: “cell”:

        S: (n) cell (any small compartment) “the cells of a honeycomb”
        S: (n) cell ((biology) the basic structural and functional unit of all organisms; they may exist as independent units of life (as in monads) or may form colonies or tissues as in higher plants and animals)
        S: (n) cell, electric cell (a device that delivers an electric current as the result of a chemical reaction)
        S: (n) cell, cadre (a small unit serving as part of or as the nucleus of a larger political movement)
        S: (n) cellular telephone, cellular phone, cellphone, cell, mobile phone (a hand-held mobile radiotelephone for use in an area divided into small sections, each with its own short-range transmitter/receiver)
        S: (n) cell, cubicle (small room in which a monk or nun lives)
        S: (n) cell, jail cell, prison cell (a room where a prisoner is kept)

    As shown in Table 3, the correct sense of “cell” (“cell#2”) has the highest similarity with senses of three monosemous words “scientist”, “researcher” and “protein”. The word “cell” occurs 21 times in the document, and several times the other words in the sentence are not adequate to disambiguate it. Since our method uses the whole document as the context, words such as “scientists”, “researchers” and “protein” help in disambiguating “cell”, which is not possible otherwise.

arxiv-1801.01900b.png

[Image source. Click image to open in new window.]


arxiv-1801.01900c.png

[Image source. Click image to open in new window.]


arxiv1801.01900-f4.png

[Image source.  See Latent Dirichlet Allocation for a description of the LDA method.  (Click image to open in new window.)]


arxiv1801.01900-t1.png

[Image source. Click image to open in new window.]


arxiv1801.01900-t2+t3+f5.png

[Image source. Click image to open in new window.]


While representation learning on networks generates reliable results with regard to node embeddings, it is limited to homogeneous networks in which all nodes and edges are of the same type. Addressing this challenge, edge2vec: Learning Node Representation using Edge Semantics (Sep 2018) proposed a model that incorporated edge semantics to represent different edge-types in heterogeneous networks. An edge type transition matrix was optimized from an Expectation-Maximization framework as an extra criterion of a biased node random walk on networks, and a biased skip-gram model was then leveraged to learn node embeddings based on the random walks. The novelty of their work was the generation of the edge type transition matrix so that during the process to generate the node random walk corpus, the heterogeneity of the network was considered. edge2vec was validated and evaluated using three medical domain problems on an ensemble of complex medical networks (>10 node and edge types): medical entity classification, compound-gene binding prediction, and medical information search costs. By considering edge semantics, edge2vec significantly outperformed other state of the art models on all three tasks [note that in their tables, edge2vec is listed as heterogeneous node2vec (H-node2vec)].
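
The edge-type-biased walk can be sketched as follows (toy medical-flavored graph and transition weights, all hypothetical): the next step is weighted both by the graph structure and by a matrix giving the probability of moving from the previously traversed edge type to each candidate edge type.

```python
import random

# edge2vec-style biased walk sketch: candidate next steps are weighted by an
# edge-type transition matrix, so the walk respects edge semantics in a
# heterogeneous network. The resulting walks feed a skip-gram model.
graph = {"gene1": [("binds", "compound1"), ("regulates", "gene2")],
         "compound1": [("binds", "gene1")],
         "gene2": [("regulates", "gene1")]}
# weight for moving from the edge type just traversed to the candidate type
type_trans = {("binds", "binds"): 0.2, ("binds", "regulates"): 0.8,
              ("regulates", "binds"): 0.8, ("regulates", "regulates"): 0.2,
              (None, "binds"): 0.5, (None, "regulates"): 0.5}

def walk(start, length, rng=random.Random(0)):
    node, prev_type, path = start, None, [start]
    for _ in range(length):
        edges = graph[node]
        weights = [type_trans[(prev_type, etype)] for etype, _ in edges]
        etype, node = rng.choices(edges, weights=weights, k=1)[0]
        prev_type, path = etype, path + [node]
    return path

path = walk("gene1", 3)
```

In edge2vec the transition matrix is not hand-set as here but optimized within an Expectation-Maximization framework before the node embeddings are learned.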

arXiv-1809.02269.png

[Image source. Click image to open in new window.]


arxiv1809.02269-f4.png

[Image source. Click image to open in new window.]


arxiv1809.02269-t5.png

[Image source. Click image to open in new window.]


Deep Graphs (Jun 2018), which applied recurrent neural networks (RNNs) to graphs and networks, relies on the notion that many graph algorithms can be expressed as iterative vertex updates, where the features of each vertex (or edge) are computed as a function of the neighboring vertex and edge features. Training and deployment are therefore both efficient, since the cost is $\small \Theta(\vert E \vert + \vert V \vert)$, where $\small \vert E \vert$ and $\small \vert V \vert$ are the numbers of edges and vertices, respectively. The vertex features can be used to perform multiple tasks simultaneously, e.g. in a multitask learning setting, and the proposed algorithm is able to learn the functions that generate these features and the functions that perform these tasks, jointly. Furthermore, it is able to learn representations for graph vertices and edges that can subsequently be used by other algorithms to perform new tasks, as in transfer learning. Finally, apart from the graph structure, this algorithm is also able to use attributes of the vertices and edges of the graph, rendering it even more powerful in cases where such attributes are available.

The authors claimed that Deep Graphs was able to outperform all competing methods on vertex classification tasks by a significant margin (an AUC of 0.98, versus 0.67 for the best competing method). Furthermore, Deep Graphs could learn to compute PageRank and HITS scores by applying a vertex update equation only 6 times, as opposed to the hundreds or thousands of iterations which the original algorithms require. Fig. 1 in that paper shows the node, edge and embedded features that may be computed/inferred.
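
As an illustration of the "iterative vertex update" view that Deep Graphs builds on, classic PageRank can be written as a fixed, hand-coded vertex update: each vertex's score is recomputed from its in-neighbors' scores. The toy graph below is invented; Deep Graphs learns the update function rather than hard-coding it.

```python
# Toy directed graph; the damped PageRank update is applied synchronously.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
nodes = sorted({n for e in edges for n in e})
out_deg = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
d = 0.85  # damping factor

pr = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(50):  # the original algorithm needs many iterations to converge
    pr = {
        n: (1 - d) / len(nodes)
        + d * sum(pr[s] / out_deg[s] for s, t in edges if t == n)
        for n in nodes
    }

print({n: round(v, 3) for n, v in pr.items()})
```

Deep Graphs' claim is that a learned update function can reach the same fixed point in far fewer iterations (6, per the paper) than this hand-written one.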

arxiv1806.01235-t1+f1+f2.png

[Image source. Click image to open in new window.]


arxiv1806.01235-f3.png

[Image source. Click image to open in new window.]


arxiv1806.01235-t2+t3+t4.png

[Image source. Click image to open in new window.]


The use of constraint graphs for finding optimal Bayesian networks (graphs) is an intuitive method for incorporating rich prior knowledge into structure learning tasks (Finding the Optimal Bayesian Network Given a Constraint Graph (Jul 2017) [code]). The accompanying Jupyter notebook by these authors provides toy examples, including a three-layer network with:

  • health symptoms (low energy; bloating; loss of appetite; vomiting; abdominal cramps) on the bottom layer;
  • diseases (ovarian cancer; lactose intolerance; pregnancy) on the middle layer; and,
  • genetic tests on the top layer

for three different genetic mutations (BRCA1, BRCA2, and LCT). The edges in this graph were constrained such that symptoms were explained by diseases, and diseases could be partially explained by genetic mutations. There were no edges from diseases to genetic conditions, and no edges from genetic conditions to symptoms. The authors defined a constraint graph comprising three nodes: “symptoms”, “diseases”, and “genetic mutations”. There was a directed edge from genetic mutations to diseases, and a directed edge from diseases to symptoms, specifying that genetic mutations could be parents to diseases, and diseases to symptoms. Priors were simply programmed as Python expressions from the constraint graph, and a resultant learned Bayesian network was returned. Relevant to this REVIEW, this approach could enable very interesting explorations of the KG, for example the discovery of latent/learned embeddings.
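
The effect of such a constraint graph is simply to restrict which parent sets are considered during structure search. A minimal plain-Python sketch (not the authors' code; the group and variable names mirror their toy example, but the helper below is ours):

```python
from itertools import combinations

# Constraint graph over variable groups: an edge ("mutations", "diseases")
# means variables in "mutations" may be parents of variables in "diseases",
# and no other parent relationships are allowed.
constraint_edges = {("mutations", "diseases"), ("diseases", "symptoms")}
groups = {
    "mutations": ["BRCA1", "BRCA2", "LCT"],
    "diseases": ["ovarian_cancer", "lactose_intolerance", "pregnancy"],
    "symptoms": ["low_energy", "bloating", "vomiting"],
}

def allowed_parent_sets(var_group, max_parents=2):
    """Candidate parent sets for any variable in var_group under the constraints."""
    candidates = [v for (src, dst) in constraint_edges if dst == var_group
                  for v in groups[src]]
    sets = [()]
    for k in range(1, max_parents + 1):
        sets.extend(combinations(candidates, k))
    return sets

# Diseases may only be explained by mutations:
print(allowed_parent_sets("diseases")[:4])  # → [(), ('BRCA1',), ('BRCA2',), ('LCT',)]
```

Pruning the search space this way is what makes exact ("optimal") structure learning tractable on layered domains like this one.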

Schreiber2017finding-f1+f2.png

[Image source. Click image to open in new window.]


Schreiber2017finding-f3+f4.png

[Image source. Click image to open in new window.]


Schreiber2017finding-t1+t2.png

[Image source. Click image to open in new window.]


While RNNs are a cornerstone in learning latent representations from long text sequences, a purely convolutional and deconvolutional autoencoding framework may be employed instead, as described in Deconvolutional Paragraph Representation Learning (Sep 2017) [code here and here]. That paper addressed the issue that the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. Compared to RNNs, their convolutional-deconvolutional autoencoder framework (CNN-DCNN) was better at reconstructing and correcting long paragraphs (note Table 1 in their paper, showing paragraphs reconstructed by LSTM and CNN, as well as the vastly superior BLEU/ROUGE scores in Table 2). There is also additional NLP-related LSTM vs. CNN discussion in this Hacker News thread.

arxiv1708.04729-f1.png

[Image source. Click image to open in new window.]


arxiv1708.04729-t1.png

[Image source. Click image to open in new window.]


arxiv1708.04729-t2+f2+desc.png

[Image source. Click image to open in new window.]


arxiv1708.04729-t4+f5.png

[Image source. Click image to open in new window.]


Deep Graph Infomax (Sep 2018), by Yoshua Bengio and colleagues, presented Deep Graph Infomax (DGI), a general approach to learning node representations within graph-structured data in an unsupervised manner. DGI relied on maximizing mutual information between patch representations and corresponding high-level summaries of graphs – both derived using established graph convolutional network (GCN) architectures. Through the learned patch representations, they were able to obtain node embeddings representative of the global structural properties of the graph. Summarized subgraphs centered around nodes of interest could be reused for downstream node-wise learning tasks. In contrast to most prior approaches to graph representation learning, DGI did not rely on random walks, and was readily applicable to both transductive and inductive learning. This enabled competitive performance across a variety of both transductive and inductive node classification tasks, at times even outperforming relevant supervised architectures.
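
The DGI objective reduces to a binary discrimination problem between (patch, summary) pairs drawn from the real graph and pairs built from a corrupted graph. The following is a toy numerical sketch of that objective only – hand-picked vectors and an identity weight matrix stand in for the learned GCN encoder and discriminator parameters:

```python
import math

def mean_readout(patches):
    """Summary vector s: a mean readout over patch representations."""
    d = len(patches[0])
    return [sum(p[j] for p in patches) / len(patches) for j in range(d)]

def discriminator(h, s, W):
    """Simple bilinear discriminator: sigma(h^T W s)."""
    z = sum(h[i] * W[i][j] * s[j] for i in range(len(h)) for j in range(len(s)))
    return 1.0 / (1.0 + math.exp(-z))

def dgi_loss(real, corrupted, W):
    """BCE-style loss: real patches scored high, corrupted patches scored low."""
    s = mean_readout(real)
    pos = sum(math.log(discriminator(h, s, W)) for h in real)
    neg = sum(math.log(1 - discriminator(h, s, W)) for h in corrupted)
    return -(pos + neg) / (len(real) + len(corrupted))

real = [[1.0, 0.5], [0.8, 0.4]]          # patch reps from the real graph
corrupted = [[-1.0, -0.5], [-0.7, -0.2]]  # patch reps from a corrupted graph
W = [[1.0, 0.0], [0.0, 1.0]]
print(round(dgi_loss(real, corrupted, W), 4))
```

In the paper the corruption is typically a row-shuffle of the node feature matrix, and the encoder and `W` are trained to minimize this loss.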

arxiv1809.10341-f1.png

[Image source. Click image to open in new window.]


arxiv-1809.10341a.png

[Image source. Click image to open in new window.]


arxiv1809.10341-t2.png

[Image source. Click image to open in new window.]


arxiv1809.10341-f3.png

[Image source. Click image to open in new window.]


Representing a graph as a vector is a challenging task; ideally, the representation should be easily computable and conducive to efficient comparisons among graphs, tailored to the particular data and analytical task at hand. SGR: Self-Supervised Spectral Graph Representation Learning (Nov 2018) developed SGR, the first method for learning graph representations in a self-supervised manner. Grounded in spectral graph analysis, SGR facilitated self-supervised representation learning across a variety of application domains, and performed competitively with state of the art methods without re-training.

arxiv1811.06237-t2.png

[Image source. Click image to open in new window.]


In Towards the Latent Transcriptome (Oct 2018), Yoshua Bengio and colleagues employed representation learning to learn continuous embeddings for k-mers from raw RNA-seq data, in a reference-free fashion. Their model captured information about both DNA sequence similarity and DNA sequence abundance in the embedding latent space. They confirmed the quality of those vectors by comparing them to known gene sub-structures, finding that the latent space recovered exon information from the raw RNA-seq data from acute myeloid leukemia patients. Furthermore, they showed that this latent space allowed the detection of genomic abnormalities such as translocations, as well as patient-specific mutations, making this representation space useful for visualization as well as analysis.

arxiv-1810.03442a.png

[Image source. Click image to open in new window.]


arxiv-1810.03442b.png

[Image source. Click image to open in new window.]


arxiv-1810.03442c.png

[Image source. Click image to open in new window.]


arxiv-1810.03442d.png

[Image source. Click image to open in new window.]


Graph Representation Learning:

Additional Reading

  • TNE: A Latent Model for Representation Learning on Networks (Oct 2018) [project/code]

    “Network representation learning (NRL) methods aim to map each vertex into a low dimensional space by preserving the local and global structure of a given network, and in recent years they have received a significant attention thanks to their success in several challenging problems. Although various approaches have been proposed to compute node embeddings, many successful methods benefit from random walks in order to transform a given network into a collection of sequences of nodes and then they target to learn the representation of nodes by predicting the context of each vertex within the sequence. In this paper, we introduce a general framework to enhance the embeddings of nodes acquired by means of the random walk-based approaches. Similar to the notion of topical word embeddings in NLP, the proposed method assigns each vertex to a topic with the favor of various statistical models and community detection methods, and then generates the enhanced community representations. We evaluate our method on two downstream tasks: node classification and link prediction. The experimental results demonstrate that the incorporation of vertex and topic embeddings outperform widely-known baseline NRL methods.”

    arxiv1810.06917-f1.png

    [Image source. Click image to open in new window.]


    arxiv1810.06917-f2.png

    [Image source. Click image to open in new window.]


    arxiv1810.06917-t2+f4.png

    [Image source. Click image to open in new window.]


  • Node Representation Learning for Directed Graphs (Oct 2018) proposed a novel approach for learning node representations in directed graphs, which maintained separate views or embedding spaces for the two distinct node roles induced by the directionality of the edges. In order to achieve this, they proposed a novel alternating random walk strategy to generate training samples from the directed graph while preserving the role information. These samples were then trained using Skip-Gram with Negative Sampling (SGNS) with nodes retaining their source/target semantics. They conducted extensive experimental evaluation to showcase their effectiveness on several real-world datasets on link prediction, multi-label classification and graph reconstruction tasks, showing that the embeddings were robust, generalizable and well performing across multiple kinds of tasks and networks – consistently outperforming all random-walk based neural embedding methods for link prediction and graph reconstruction tasks. In addition to providing a theoretical interpretation of their method they also showed that they were considerably more robust than other directed graph approaches.

    • “We propose a novel scalable approach for learning Node Embeddings Respecting Directionality (NERD) for directed and (un)weighted graphs. NERD aims at learning representations that maximize the likelihood of preserving node neighborhoods. But unlike the previous methods, it identifies the existence of two different types of node neighborhoods; one in its source role and the other in its target role. We propose a novel alternating random walk strategy to sample such node neighborhoods while preserving their respective role information.”

    • “We presented a novel approach, NERD, for embedding directed graphs while preserving the role semantics of the nodes. We propose an alternating random walk strategy to sample node neighborhoods from a directed graph. The runtime and space complexities of NERD are both linear in the input size, which makes it suitable for large scale directed graphs. In addition to providing advantages of using two embedding representations of nodes in a directed graph, we revisit the evaluation strategies that have been used in the previous works while evaluating directed networks. To this extent, we chart out a clear evaluation strategy for link prediction and graph reconstruction tasks. We show that the embeddings from NERD are indeed robust, generalizable and well performing across multiple types of tasks and networks. …”

    arxiv1810.09176-f1.png

    [Image source. Click image to open in new window.]


    arxiv1810.09176-alt_walk_desc.png

    [Image source. Click image to open in new window.]


    arxiv1810.09176-t1.png

    [Image source. Click image to open in new window.]


    arxiv1810.09176-t2+t3.png

    [Image source. Click image to open in new window.]


    arxiv1810.09176-t4.png

    [Image source. Click image to open in new window.]


  • Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation (Oct 2018) [code]

    It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although highly intuitive, there is a lack of theory and systematic approaches quantitatively characterizing what representations deep neural networks learn. In this paper the authors moved toward a theory and a better understanding of those representations. Specifically, they studied a simpler problem: “How similar are the representations learned by two networks with identical architecture but trained from different initializations?” – developing a rigorous theory based on the neuron activation subspace match model. The theory gave a complete characterization of the structure of neuron activation subspace matches, where the core concepts were “maximum match” and “simple match,” which respectively describe the overall and the finest similarity between sets of neurons in two networks. They also proposed efficient algorithms to find the maximum match and simple matches. Surprisingly, experimental results on their algorithms suggested that representations learned by the same convolutional layers of networks trained from different initializations were not as similar as prevalently expected, at least in terms of subspace match.

  • Representation Learning by Reconstructing Neighborhoods (Nov 2018)

    “Since its introduction, unsupervised representation learning has attracted a lot of attention from the research community, as it is demonstrated to be highly effective and easy-to-apply in tasks such as dimension reduction, clustering, visualization, information retrieval, and semi-supervised learning. In this work, we propose a novel unsupervised representation learning framework called neighbor-encoder, in which domain knowledge can be easily incorporated into the learning process without modifying the general encoder-decoder architecture of the classic autoencoder. In contrast to autoencoder, which reconstructs the input data itself, neighbor-encoder reconstructs the input data’s neighbors. As the proposed representation learning problem is essentially a neighbor reconstruction problem, domain knowledge can be easily incorporated in the form of an appropriate definition of similarity between objects. Based on that observation, our framework can leverage any off-the-shelf similarity search algorithms or side information to find the neighbor of an input object. Applications of other algorithms (e.g., association rule mining) in our framework are also possible, given that the appropriate definition of neighbor can vary in different contexts. We have demonstrated the effectiveness of our framework in many diverse domains, including images, text, and time series, and for various data mining tasks including classification, clustering, and visualization. Experimental results show that neighbor-encoder not only outperforms autoencoder in most of the scenarios we consider, but also achieves the state-of-the-art performance on text document clustering.”

    arxiv1811.01557-f1.png

    [Image source. Click image to open in new window.]


    arxiv1811.01557-f2+f3.png

    [Image source. Click image to open in new window.]


[Table of Contents]

Hyperbolic Embeddings

The slides in Hyperbolic Geometry, Möbius Transformations, and Geometric Optimization  [local copy] provide a good overview of hyperbolic spaces, etc. Interestingly, compare slides 5 and 6 (an Escher woodcut viewed from the outside and the “inside”, respectively), which is reminiscent of an internal view (structure) of a highly interconnected knowledge graph.

Eppstein2003hyperbolic-slide5+slide6.png

[Image source. Click image to open in new window.]




A hyperbolic space has the property that power-law degree distributions, strong clustering and hierarchical community structure emerge naturally when random graphs are embedded in hyperbolic space. It is therefore logical to exploit the structure of hyperbolic space for useful embeddings of complex networks. Neural Embeddings of Graphs in Hyperbolic Space (May 2017) [code] proposed learning neural embeddings of graphs in hyperbolic space, providing experimental evidence that embedding graphs in their natural geometry significantly improved performance on downstream tasks for several real-world public datasets.

arxiv1705.10359-f1.png

[Image source. Click image to open in new window.]


arxiv1705.10359-f2.png

[Image source. Click image to open in new window.]


arxiv1705.10359-f3.png

[Image source. Click image to open in new window.]


arxiv1705.10359-f4+t1.png

[Image source. Click image to open in new window.]


Embeddings of tree-like graphs in hyperbolic space were recently shown to surpass their Euclidean counterparts in performance by a large margin. Inspired by those results, Skip-gram Word Embeddings in Hyperbolic Space (Aug 2018) [code] presented an algorithm (Minkowski) for learning word embeddings in hyperbolic space from free text. Minkowski was based on code from fastText. Each word in the vocabulary was represented by a point on the hyperboloid model in Minkowski space. The embeddings were then optimized by negative sampling to minimize the hyperbolic distance of co-occurring words. An objective function based on the hyperbolic distance was derived and included in the skip-gram architecture from word2vec. The hyperbolic word embeddings were then evaluated on word similarity and analogy benchmarks. The results demonstrated the potential of hyperbolic word embeddings, particularly in low dimensions, although without clear superiority over their Euclidean counterparts.

  • In mathematical physics, Minkowski space (or Minkowski spacetime) combines three-dimensional Euclidean space and time into a four-dimensional manifold, where the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded.

  • “Note that the resulting vectors can not be treated as vectors in Euclidean space. In particular, you can not form document vectors by averaging, and the similarity of two vectors needs to be measured using the hyperbolic distance. Furthermore, these vectors can not be used in downstream tasks that are not specifically designed to work with the hyperboloid model of hyperbolic space.” [GitHub]

  • “To understand the hyperboloid model of hyperbolic space and how to compute distances between points, please see Gradient Descent in Hyperbolic Space.”
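
Concretely, distances on the hyperboloid model are computed with the Minkowski bilinear form: a point $\small x$ on the hyperboloid satisfies $\small \langle x, x \rangle_M = -1$, and $\small d(x, y) = \operatorname{arcosh}(-\langle x, y \rangle_M)$. A minimal self-contained sketch (the helper names are ours, not from the Minkowski codebase):

```python
import math

def minkowski_dot(x, y):
    """Minkowski bilinear form with signature (-, +, ..., +)."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def lift(v):
    """Lift a Euclidean point v onto the hyperboloid: x0 = sqrt(1 + ||v||^2)."""
    return [math.sqrt(1.0 + sum(c * c for c in v))] + list(v)

def hyperboloid_distance(x, y):
    # clamp the argument to 1.0 to guard against floating-point round-off
    return math.acosh(max(1.0, -minkowski_dot(x, y)))

p, q = lift([0.0, 0.0]), lift([1.0, 0.0])
print(round(hyperboloid_distance(p, q), 4))  # → 0.8814
```

This is the distance that the negative-sampling objective minimizes for co-occurring word pairs.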

arxiv1809.01498-f1.png

[Image source. Click image to open in new window.]


arxiv1809.01498-t1+t2+f3.png

[Image source. Click image to open in new window.]


Maximilian Nickel and Douwe Kiela at Facebook AI Research recently introduced a new approach to learning hierarchical representations of symbolic data by embedding them into hyperbolic space (more precisely, into an $\small n$-dimensional Poincaré ball), allowing them to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. Described in Poincaré Embeddings for Learning Hierarchical Representations (May 2017) [code; non-author code; project], Poincaré embeddings significantly outperformed Euclidean embeddings on data with latent hierarchies in terms of representation capacity and generalization ability.

Experiments showed that Poincaré embeddings provided important advantages over Euclidean embeddings on hierarchical data:

  1. Poincaré embeddings enabled very parsimonious representations that enabled the learning of high-quality embeddings of large-scale taxonomies.
  2. Excellent link prediction results indicated that hyperbolic geometry could introduce an important structural bias for the embedding of complex symbolic data.
  3. State of the art results for predicting lexical entailment suggested that the hierarchy in the embedding space corresponded well to the underlying semantics of the data.
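
The metric underlying these results is the Poincaré-ball distance, $\small d(u, v) = \operatorname{arcosh}\left(1 + 2\,\frac{\lVert u - v \rVert^2}{(1 - \lVert u \rVert^2)(1 - \lVert v \rVert^2)}\right)$, which is simple to compute directly; a short self-contained sketch:

```python
import math

def poincare_distance(u, v):
    """Distance between two points inside the unit (Poincaré) ball."""
    sq = lambda w: sum(c * c for c in w)
    diff = [a - b for a, b in zip(u, v)]
    return math.acosh(1 + 2 * sq(diff) / ((1 - sq(u)) * (1 - sq(v))))

# Distances blow up near the boundary of the ball, which is what lets trees
# embed with low distortion: leaves placed near the rim are far apart even
# when they look close in Euclidean terms.
print(round(poincare_distance([0.0, 0.0], [0.5, 0.0]), 4))  # → 1.0986 (= ln 3)
print(round(poincare_distance([0.9, 0.0], [0.0, 0.9]), 4))  # near the rim: much larger
```

Nickel and Kiela optimize this distance with Riemannian SGD, which is what makes the hierarchy emerge radially (general concepts near the origin, specific ones near the rim).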

arxiv1705.08039-f1.png

[Image source. Click image to open in new window.]


arxiv1705.08039-t1.png

[Image source. Click image to open in new window.]


arxiv-1705.08039.png

[Image source. Click image to open in new window.]


Hyperbolic embeddings seek to embed structured, discrete objects such as knowledge graphs into a continuous representation that can be used with modern machine learning methods. Hyperbolic embeddings can preserve graph distances and complex relationships in very few dimensions, particularly for hierarchical graphs. Many graphs, such as complex networks including the Internet and social networks, are known to have hyperbolic structure and thus befit hyperbolic embeddings [see Representation Tradeoffs for Hyperbolic Embeddings (Apr 2018) and references therein]. For both NLP and graph-based tasks, embeddings have traditionally been learned in high-dimensional Euclidean spaces. However, recent work has shown that the appropriate isometric space for embedding complex networks is not flat Euclidean space but negatively curved, hyperbolic space.

Representation Tradeoffs for Hyperbolic Embeddings (Apr 2018) [project provides an excellent summary of hyperbolic embeddings, as well as entity embeddings with 64-bit precision] found that – given a tree – they could give a combinatorial construction that embedded the tree in hyperbolic space with arbitrarily low distortion, without using optimization. On WordNet, their combinatorial embedding obtained a mean-average-precision of 0.989 with only two dimensions [Source], while Nickel et al.’s recent construction (Poincaré Embeddings for Learning Hierarchical Representations) obtained 0.87 using 200 dimensions [Source].

arxiv1804.03329-t2_vs_arxiv1705.08039-t1.png

[Image sources: top (Table 2); bottom (Table 1). Click image to open in new window.]


arxiv1804.03329-f1-edited.png

[Image source. Click image to open in new window.]


arxiv1804.03329-f2.png

[Image source. Click image to open in new window.]


HyperE-arxiv-1804.03329.gif

[Image source. Click image to open in new window.]


arxiv1804.03329-t2.png

[Image source. Click image to open in new window.]


arxiv1804.03329-t3+t4.png

[Image source. Click image to open in new window.]


Based on information at Semantic Scholar and the figures in both papers, HyBed: Hyperbolic Neural Graph Embedding (2018) [OpenReview] is a follow-on article by the authors of Neural Embeddings of Graphs in Hyperbolic Space. The HyBed paper again posits that the appropriate isometric space for embedding complex networks is not flat Euclidean space but a negatively curved hyperbolic space. They proposed learning neural embeddings of graphs in hyperbolic space, providing experimental evidence that hyperbolic embeddings significantly outperformed Euclidean embeddings on vertex classification tasks for several real-world public datasets.

Chamberlain2018HyBed-t1+t2.png

[Image source. Click image to open in new window.]


Many well-established recommender systems are based on representation learning in Euclidean space. In these models, matching functions such as the Euclidean distance or inner product are typically used for computing similarity scores between user and item embeddings. In an extension of representation learning from the Euclidean to the hyperbolic domain, Hyperbolic Recommender Systems (Sep 2018) investigated the notion of learning user and item representations in hyperbolic space, arguing that hyperbolic space is more suitable for learning user-item embeddings in the recommendation domain. Unlike Euclidean spaces, hyperbolic spaces are intrinsically equipped to handle hierarchical structure, owing to the property that distances increase exponentially away from the origin. Their proposed system (HyperBPR) outperformed its Euclidean counterparts, achieving state of the art performance on multiple benchmark datasets.

arxiv1809.01703-t1+f1.png

[Image source. Click image to open in new window.]


arxiv1809.01703-f2+t2.png

[Image source. Click image to open in new window.]


arxiv1809.01703-f3.png

[Image source. Click image to open in new window.]


arxiv1809.01703-f4.png

[Image source. Click image to open in new window.]


Biomedical association studies are increasingly done using clinical concepts, and in particular diagnostic codes from clinical data repositories, as phenotypes. Clinical concepts can be represented in a meaningful vector space using word embedding models. These embeddings allow for comparison between clinical concepts, or for straightforward input to machine learning models. Using traditional approaches, good representations require high dimensionality, making downstream tasks such as visualization more difficult. Learning Contextual Hierarchical Structure of Medical Concepts with Poincaré Embeddings to Clarify Phenotypes (Nov 2018) [code here and here] applied Poincaré embeddings in a 2-dimensional hyperbolic space to a large-scale administrative claims database, showing performance comparable to 100-dimensional embeddings in a Euclidean space. They then examined disease relationships under different disease contexts to better understand potential phenotypes.

arxiv1811.01294-t1+t3.png

[Image source. Click image to open in new window.]


arxiv1811.01294-f1.png

[Image source. Click image to open in new window.]


arxiv1811.01294-f2.png

[Image source. Click image to open in new window.]


arxiv1811.01294-f3+t4.png

[Image source. Click image to open in new window.]


Poincaré GloVe: Hyperbolic Word Embeddings (Oct 2018)

  • “Words are not created equal. In fact, they form an aristocratic graph with a latent hierarchical structure that the next generation of unsupervised learned word embeddings should reveal. In this paper, driven by the notion of delta-hyperbolicity or tree-likeliness of a space, we propose to embed words in a Cartesian product of hyperbolic spaces which we theoretically connect with the Gaussian word embeddings and their Fisher distance. We adapt the well-known GloVe algorithm to learn unsupervised word embeddings in this type of Riemannian manifolds. We explain how concepts from the Euclidean space such as parallel transport (used to solve analogy tasks) generalize to this new type of geometry. Moreover, we show that our embeddings exhibit hierarchical and hypernymy detection capabilities. We back up our findings with extensive experiments in which we outperform strong and popular baselines on the tasks of similarity, analogy and hypernymy detection.”

arxiv1810.06546-f1+descr.png

[Image source. Click image to open in new window.]


arxiv1810.06546-f2+descr.png

[Image source. Click image to open in new window.]


arxiv1810.06546-t2+t3.png

[Image source. Click image to open in new window.]


arxiv1810.06546-t4+t5.png

[Image source. Click image to open in new window.]


arxiv1810.06546-f3.png

[Image source. Click image to open in new window.]


Hyperbolic Embeddings:

Additional Reading

  • Gradient descent in hyperbolic space (Aug 2018)

    “Gradient descent generalises naturally to Riemannian manifolds, and to hyperbolic $\small n$-space, in particular. Namely, having calculated the gradient at the point on the manifold representing the model parameters, the updated point is obtained by travelling along the geodesic passing in the direction of the gradient. Some recent works employing optimisation in hyperbolic space have not attempted this procedure, however, employing instead various approximations to avoid a calculation that was considered to be too complicated. In this tutorial, we demonstrate that in the hyperboloid model of hyperbolic space, the necessary calculations to perform gradient descent are in fact straight-forward. The advantages of the approach are then both illustrated and quantified for the optimisation problem of computing the Fréchet mean (i.e. barycentre) of points in hyperbolic space.”
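
Following that tutorial's setup, a single gradient step on the hyperboloid amounts to projecting the ambient gradient onto the tangent space at the current point and then applying the exponential map, $\small \exp_x(v) = \cosh(\lVert v \rVert)\,x + \sinh(\lVert v \rVert)\,v / \lVert v \rVert$ (with the Minkowski norm). A minimal sketch (function names are ours, not from the tutorial):

```python
import math

def mdot(x, y):
    """Minkowski bilinear form, signature (-, +, ..., +)."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def project_tangent(x, g):
    """Project an ambient gradient g onto the tangent space at x
    (tangent vectors v satisfy <x, v>_M = 0, using <x, x>_M = -1)."""
    c = mdot(x, g)
    return [gi + c * xi for gi, xi in zip(g, x)]

def exp_map(x, v):
    """Move from x along the geodesic in direction v."""
    n = math.sqrt(max(mdot(v, v), 0.0))
    if n == 0.0:
        return x
    return [math.cosh(n) * xi + math.sinh(n) * vi / n for xi, vi in zip(x, v)]

x = [1.0, 0.0, 0.0]  # the "origin" of the hyperboloid
x2 = exp_map(x, project_tangent(x, [0.0, 0.3, 0.0]))
print([round(c, 4) for c in x2])  # → [1.0453, 0.3045, 0.0]
```

The key point of the tutorial is that these two operations are all that is needed: no approximation or retraction is required, and the updated point lands exactly on the manifold.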

[Table of Contents]

Graph Generation Through Generative Adversarial Networks

Related to graph representation learning (above) is the building of graphs through generative adversarial networks (GANs) or sequence models (e.g., RNN) [discussion]. Generative models for real-world graphs have important applications in many domains, including modeling physical and social interactions, discovering new chemical and molecular structures, and constructing knowledge graphs.

DeepMind’s Learning Deep Generative Models of Graphs (Mar 2018) introduced the DeepGMG approach to learning generative models over arbitrary graphs, which could capture both their structure and attributes. Their approach was capable of generating arbitrary graphs by sequentially adding nodes and edges, opening new directions for moving away from the restrictions of vector- and sequence-like knowledge representations, toward more expressive and flexible relational data structures. There are two major distinctions between the proposed graph model and typical sequence models like LSTMs: model architecture (this model is graph-structured with structured memory, while the LSTM stores information in flat vectors), and graph generating grammar (this model uses a generic graph-generating decision sequence, while the LSTM uses a linearization of the graphs, which typically encodes knowledge specific to a domain). For example, for molecules SMILES strings provide a way to linearize a molecule’s chemical structure (“graph”) into a string, addressing practical issues like the ordering of nodes. [The Simplified Molecular-Input Line-Entry System (SMILES) is a specification in the form of a line notation for describing the structure of chemical species using short ASCII strings.]
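
The generic add-node/add-edge decision sequence can be sketched as follows. The decision probabilities here are fixed constants purely for illustration; in DeepGMG each decision is parameterized by a graph neural network that reads the partially built graph:

```python
import random

def generate_graph(rng, p_add_node=0.7, p_add_edge=0.4, max_nodes=8):
    """Sequential generation: repeatedly decide (1) whether to add a node,
    then (2) whether to add edges from the new node to earlier nodes."""
    nodes, edges = [], set()
    while len(nodes) < max_nodes and (not nodes or rng.random() < p_add_node):
        v = len(nodes)
        nodes.append(v)
        # after adding a node, repeatedly decide whether to connect it
        while len(nodes) > 1 and rng.random() < p_add_edge:
            u = rng.choice(nodes[:-1])  # pick an endpoint among earlier nodes
            edges.add((u, v))
    return nodes, edges

nodes, edges = generate_graph(random.Random(0))
print(len(nodes), sorted(edges))
```

Training the model amounts to maximizing the likelihood of the decision sequences that reconstruct the training graphs, which is what lets the same machinery generate structure and (in the full model) attributes.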

NetGAN: Generating Graphs via Random Walks (Jun 2018) [code] posed the problem of graph generation as learning the distribution of biased random walks over the input graph. The proposed model was based on a stochastic neural network that generated discrete output samples and was trained using the Wasserstein GAN objective. NetGAN was able to produce graphs that exhibited well-known network patterns without explicitly specifying them in the model definition. At the same time, NetGAN exhibited strong generalization properties, highlighted by its competitive link prediction performance despite not being trained specifically for this task.

arxiv1803.00816-f1.png

[Image source. Click image to open in new window.]


arxiv1803.00816-f2+t2+t3.png

[Image source. Click image to open in new window.]


NetGAN (above) focused on learning an implicit generative model from a single large graph (e.g. a large social network), whereas GraphRNN (below) learned from many small graphs, such as molecules. Since NetGAN had a single training graph, it formulated the problem as learning a distribution of random walks over the graph, which also solved scaling challenges. [Source: NetGAN author Aleksandar Bojchevski on reddit.]

Modeling and generating graphs is fundamental for studying networks in biology and other sciences; however, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models (Jun 2018) [slides; code] described a deep autoregressive model that addressed those challenges and approximated any distribution of graphs with minimal assumptions about their structure. GraphRNN learned to generate graphs by training on a representative set of graphs, decomposing the graph generation process into an iterative sequence of node and edge formations conditioned on the preceding graph structure.
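
The decomposition itself – a BFS node ordering plus, for each node, a binary vector recording its connections to previously placed nodes – is easy to reproduce. A minimal sketch (the RNN that GraphRNN trains to emit this sequence autoregressively is omitted):

```python
from collections import deque

def bfs_order(adj, start=0):
    """Breadth-first node ordering; GraphRNN uses BFS orderings to bound
    how far back in the sequence an edge can reach."""
    order, seen, q = [], {start}, deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in sorted(adj[u]):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def to_sequence(adj):
    """S_i = connections of the i-th node to nodes placed before it."""
    order = bfs_order(adj)
    return [[1 if order[j] in adj[order[i]] else 0 for j in range(i)]
            for i in range(len(order))]

# 4-cycle 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(to_sequence(adj))  # → [[], [1], [1, 0], [0, 1, 1]]
```

Generation then runs this in reverse: a graph-level RNN emits one such vector per step, and the vectors are reassembled into an adjacency matrix.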

arxiv1802.08773-f1.png

[Image source. Click image to open in new window.]


arxiv1802.08773-f2+t1.png

[Image source. Click image to open in new window.]


arxiv1802.08773-t2.png

[Image source. Click image to open in new window.]


arxiv1802.08773-f7+descr.png

[Image source. Click image to open in new window.]


[Table of Contents]

Subgraphs: Higher-Order Organization of Complex Networks

Analogous to the clustering (dimensionality reduction) approaches applied to textual sources, content within knowledge graphs may also be classified and clustered. While lower-order connectivity patterns can be captured at the level of individual nodes and edges, many networks also exhibit rich higher-order organization. Higher-Order Organization of Complex Networks (Jul 2016) by Jure Leskovec and colleagues at Stanford University (Benson, Gleich and Leskovec) developed a generalized framework for clustering networks on the basis of higher-order connectivity patterns, at the level of small network subgraphs (motifs): the nodes of a graph are clustered based on motifs instead of edges, via a motif-based spectral clustering method. The approach is well explained and illustrated on their project page, for example using a bifan motif to find clusters in networks such as C. elegans (explained in more detail in Fig. 2 of the paper), or coherent feedforward loop motifs in the S. cerevisiae transcriptional regulation network.
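For the triangle motif on an undirected graph, the motif adjacency matrix this kind of clustering operates on – counting, for each edge, the motif instances that contain it – reduces to a one-line matrix expression (an illustrative sketch with my own naming, not the authors' code):

```python
import numpy as np

def triangle_motif_adjacency(A):
    """Motif adjacency for the triangle motif: entry (i, j) counts the
    triangles containing edge (i, j).  (A @ A) counts common neighbours
    of i and j; the elementwise product with A keeps only existing edges."""
    A = np.asarray(A)
    return (A @ A) * A

# 4-node graph: nodes 0-1-2 form a triangle; edge (2, 3) lies in no
# triangle, so motif-based clustering would separate node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
W = triangle_motif_adjacency(A)
```

Spectral clustering then proceeds on the Laplacian of `W` rather than of `A`, so edges that participate in no motif carry no weight.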

Benson2016higher-order-f1.png

[Image source. Click image to open in new window.]


Benson2016higher-order-f2.png

[Image source. Click image to open in new window.]


Benson2016higher-order-f3.png

[Image source. Click image to open in new window.]


Like the clustering of content within knowledge graphs (above), portions of graphs may also be classified, as described below for MotifNet, a graph CNN-based approach. Broadly speaking, there are two classes of graph CNN formulations:

  • spatial approaches, which operate on local subgraph regions by constructing a local system of weights on the graph; and

  • spectral approaches, which exploit the analogy between the eigenfunctions of the graph Laplacian and the classical Fourier transform, defining a convolution-like operation in the spectral domain.

However, one of the key drawbacks of spectral CNNs is the explicit assumption of an undirected graph, leading to a symmetric Laplacian matrix with orthogonal eigendecomposition. MotifNet: A Motif-Based Graph Convolutional Network for Directed Graphs (Feb 2018) introduced MotifNet, a graph CNN for directed graphs that used convolution-like anisotropic graph filters that were based on local subgraph structures (motifs). An attention mechanism allowed MotifNet to generalize some standard graph CNN models without significantly increasing the model complexity. The approach was tested on Bojchevski and Günnemann’s (2017) directed CORA citation network, achieving superior classification accuracy compared to ChebNet.

  • See also Higher-Order Organization of Complex Networks (Jul 2016) by Jure Leskovec and colleagues at Stanford University, discussed above, which clustered the nodes of a graph based on small network subgraphs (motifs) instead of edges, via a motif-based spectral clustering method.

  • ChebNet
    Source
    refers to the Chebyshev polynomial-based spectral graph CNN of Defferrard et al. (2016)  [see also], discussed by Thomas Kipf and Max Welling in their 2017 paper, Semi-Supervised Classification with Graph Convolutional Networks [discussed here].

    arxiv1802.01572-f1.png

    [Image source. Click image to open in new window.]


    arxiv1802.01572-f2.png

    [Image source. Click image to open in new window.]


    arxiv1802.01572-f3.png

    [Image source. Click image to open in new window.]


    arxiv1802.01572-f4.png

    [Image source. Click image to open in new window.]


    arxiv1802.01572-f5+f6.png

    [Image source. Click image to open in new window.]
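As background, the Chebyshev polynomial filtering that ChebNet-style spectral graph CNNs build on can be sketched in a few lines of NumPy (illustrative only; in a real model the `theta` coefficients are learned per feature channel, and the maximum eigenvalue is usually approximated rather than computed exactly):

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """y = sum_k theta_k T_k(L_scaled) x via the Chebyshev recurrence
    T_k = 2 L_scaled T_{k-1} - T_{k-2}, with the Laplacian spectrum
    rescaled to [-1, 1]."""
    lmax = np.linalg.eigvalsh(L).max()
    Ls = 2.0 * L / lmax - np.eye(L.shape[0])   # rescale spectrum to [-1, 1]
    T_prev, T_curr = x, Ls @ x                 # T_0 x and T_1 x
    y = theta[0] * T_prev
    if len(theta) > 1:
        y = y + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2.0 * (Ls @ T_curr) - T_prev
        y = y + theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return y

# Path-graph Laplacian; a 0th-order filter (theta = [1]) is the identity.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
x = np.array([1.0, 2.0, 3.0])
y0 = chebyshev_filter(L, x, [1.0])
```

Because each application of `Ls` only mixes neighbouring nodes, an order-K filter is exactly K-hop localized, which is what makes this spectral construction tractable.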


Note that the 2018 MotifNet paper does not cite earlier work (by different authors) – Motif-based Convolutional Neural Network on Graphs (Nov 2017; updated Feb 2018) – perhaps due to this statement in that 2017 paper: “We exclude spectral methods as they have been shown inferior to GCN.”

Motif-based Convolutional Neural Network on Graphs (Nov 2017; updated 2018) [code] introduced a generalization of CNNs to graphs with irregular linkage structures, especially heterogeneous graphs with typed nodes and schemas. They proposed a novel spatial convolution operation to model the key properties of local connectivity and translation invariance, using high-order connection patterns or motifs (Motif-CNN). Like the subsequent MotifNet paper, Motif-CNN employed an attention model to combine the features extracted from multiple patterns, thus effectively capturing high-order structural and feature information. Experiments on semi-supervised node classification on real-world social networks and multiple representative heterogeneous graph datasets indicated significant gains (6-21%) over existing graph CNNs and other state of the art techniques.

arxiv1711.05697-f1.png

[Image source. Click image to open in new window.]


arxiv1711.05697-f2.png

[Image source. Click image to open in new window.]


arxiv1711.05697-t5.png

[Image source. Click image to open in new window.]


For more background on graphs and convolutions, spectral approaches and the graph Laplacian, etc. see my Graph signal processing page.

Higher-order connectivity patterns such as small induced subgraphs, called graphlets (network motifs), are vital for understanding the components (modules/functional units) governing the configuration and behavior of complex networks. Existing work in higher-order clustering has focused on simple homogeneous graphs with a single node/edge type; however, heterogeneous graphs consisting of nodes and edges of different types are ubiquitous in the real world. Higher-order Spectral Clustering for Heterogeneous Graphs (Oct 2018) introduced the notion of the typed graphlet, which explicitly captures the rich (typed) connectivity patterns in heterogeneous networks. Using typed graphlets as a basis, the authors developed a general, principled framework for higher-order clustering in heterogeneous networks. Experiments demonstrated the effectiveness of the framework quantitatively for three important applications: (i) clustering, (ii) link prediction, and (iii) graph compression. In particular, the approach achieved a mean improvement of 43x over all methods and graphs for clustering, and 18.7% and 20.8% improvements for link prediction and graph compression, respectively.
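The basic typed-graphlet statistic can be illustrated with a brute-force typed-triangle census (a toy sketch under my own naming; the paper's framework covers general graphlets and uses far more efficient counting):

```python
from itertools import combinations

def typed_triangle_census(adj, node_type):
    """Count triangles grouped by the sorted multiset of their node
    types -- a toy 'typed graphlet' statistic for heterogeneous graphs.
    Expects a symmetric adjacency dict {node: [neighbours]}."""
    counts = {}
    for u, v, w in combinations(sorted(adj), 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            key = tuple(sorted((node_type[u], node_type[v], node_type[w])))
            counts[key] = counts.get(key, 0) + 1
    return counts

# One gene-gene-disease triangle in a tiny heterogeneous graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
node_type = {0: "gene", 1: "gene", 2: "disease"}
census = typed_triangle_census(adj, node_type)
```

Weighting edges by how many typed graphlets of a chosen type they appear in yields a typed motif adjacency matrix, on which spectral clustering can then be run as in the homogeneous case.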

arxiv-1810.02959.png

[Image source. Click image to open in new window.]


arxiv1810.02959-t2+t3+t4.png

[Image source. Click image to open in new window.]


arxiv1810.02959-t5+t6+t7.png

[Image source. Click image to open in new window.]


Subgraphs: Higher-Order Organization of Complex Networks:

Additional Reading

  • Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks (Oct 2018) [code]

    “In recent years, graph neural networks (GNN) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. … The following work investigates GNN from a theoretical point of view and relates them to the 1-dimensional Weisfeiler-Leman graph isomorphism heuristic (1-WL). We show that GNN have the same expressiveness as the 1-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNN, so-called k-dimensional GNN (k-GNN), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecule graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression.”

    arxiv-1810.02244.png

    [Image source. Click image to open in new window.]


  • Deep Graph Infomax

  • Graph refinement, or the task of obtaining subgraphs of interest from over-complete graphs, can have many varied applications. In Graph Refinement Based Tree Extraction Using Mean-Field Networks and Graph Neural Networks (Nov 2018) Selvan et al. [Thomas Kipf | Max Welling] extracted tree structures from image data by first deriving a graph-based representation of the volumetric data and then posing tree extraction as a graph refinement task. They presented two methods to perform graph refinement: (1) mean-field approximation (MFA), to approximate the posterior density over the subgraphs from which the optimal subgraph of interest can be estimated …; and (2) they presented a supervised learning approach using graph neural networks (GNNs), which can be seen as generalisations of mean field networks. Subgraphs were obtained by jointly training a GNN based encoder-decoder pair, wherein the encoder learned useful edge embeddings from which the edge probabilities were predicted using a simple decoder. They discussed connections between the two classes of methods, and compared them for the task of extracting airways from 3D, low-dose, chest CT data.

    arxiv1811.08674-f1.png

    [Image source. Click image to open in new window.]


    arxiv1811.08674-f2.png

    [Image source. Click image to open in new window.]
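The mean-field approximation step used in such graph refinement can be illustrated on a generic pairwise binary model, iteratively updating each edge's inclusion probability from its unary potential and its neighbours' current beliefs (a generic sketch with made-up potentials, not the authors' exact formulation):

```python
import math

def mean_field_refine(unary, pairwise, iters=50):
    """Mean-field updates for binary inclusion variables:
    q_i <- sigmoid(unary_i + sum_j pairwise_ij * q_j)."""
    n = len(unary)
    q = [0.5] * n
    for _ in range(iters):
        for i in range(n):
            s = unary[i] + sum(pairwise[i][j] * q[j] for j in range(n) if j != i)
            q[i] = 1.0 / (1.0 + math.exp(-s))
    return q

# Two independent edges: strong positive vs. strong negative evidence.
q = mean_field_refine([2.0, -2.0], [[0.0, 0.0], [0.0, 0.0]])
```

Thresholding the converged `q` yields the refined subgraph; the paper's GNN variant can be seen as learning these update functions instead of fixing them by hand.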

[Table of Contents]

Text Grounding: Mapping Text to Knowledge Graphs; External Knowledge Lookup

In NLP, the ability to map text to an entity of a knowledge graph finds applications in tasks such as question answering and information retrieval, or any task that involves some form of mapping a definition to a term (Mapping Text to Knowledge Graph Entities using Multi-Sense LSTMs, and references therein). Further, matching text to graphs can be invaluable in providing solutions to domain-specific challenges, for example medical concept normalisation (disambiguation) and the identification of adverse drug reactions.
Augmenting KGs with external resources such as textual knowledge stores and ontologies is another obvious approach to providing explanatory material for KGs. This is particularly important when machine learning approaches are employed in the biomedical and clinical sciences, where high precision is imperative for supporting, not distracting, practitioners (What Do We Need to Build Explainable AI Systems for the Medical Domain?), and where it is crucial to underpin machine output with reasons that are human-verifiable.

  • Paraphrased from What Do We Need to Build Explainable AI Systems for the Medical Domain? (Dec 2017):

    “The only way forward seems to be the integration of both knowledge-based and neural approaches to combine the interpretability of the former with the high efficiency of the latter. To this end, there have been attempts to retrofit neural embeddings with information from knowledge bases as well as to project embedding dimensions onto interpretable low-dimensional sub-spaces. More promising, in our opinion, is the use of hybrid distributional models that combine sparse graph-based representations with dense vector representations and link them to lexical resources and knowledge bases. Here a hybrid human-in-the-loop approach can be beneficial, where not only the machine learning models for knowledge extraction are supported and improved over time, the final entity graph becomes larger, cleaner, more precise and thus more usable for domain experts. Contrary to classical automatic machine learning, human-in-the-loop approaches do not operate on predefined training or test sets, but assume that human expert input regarding system improvement is supplied iteratively.”

Regarding the “hybrid human-in-the-loop approach” mentioned in What Do We Need to Build Explainable AI Systems for the Medical Domain?, the Never-Ending Language Learner [NELL | project] occasionally interacts with human trainers, mostly for negative feedback identifying NELL’s incorrect beliefs (see Section 7 and Figs. 6 & 7
Source
in that paper), though this human-machine interaction has been decreasing as NELL gets more accurate.

Mapping Text to Knowledge Graph Entities using Multi-Sense LSTMs (Aug 2018) [code to be released at SIPHS  (not available, 2018-11-08)] addressed the problem of mapping natural language text to knowledge base entities. This paper detailed a model for efficiently mapping unrestricted text, at the level of phrases and sentences, to the entities of a knowledge base (KB) – a task also referred to as text grounding or normalisation. The model aimed at characterising short, focused texts such as definitions or tweets; for example, given a biomedical KB, a tweet of the form “Can’t sleep, too tired to think straight” would be mapped to the entity “Insomnia.” Text-to-entity mapping was treated as a transformation from a textual vector space to a KB vector space created from a graph and populated by vectors representing entities: a phrase or sentence was composed into a point in a multi-dimensional entity space obtained from a knowledge graph. The compositional model was an LSTM equipped with a dynamic disambiguation mechanism on the input word embeddings (a Multi-Sense LSTM, or MS-LSTM), addressing polysemy issues. The KB space was prepared by collecting random walks from a graph enhanced with textual features, which acted as a set of semantic bridges between text and KB entities. These ideas were demonstrated on large-scale text-to-entity mapping and entity classification tasks, with state of the art results.
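The final lookup step of such a pipeline – mapping a composed text vector to its nearest KB entity – can be sketched as follows (toy vectors and names; the MS-LSTM composition model that produces the text vector is omitted):

```python
import numpy as np

def ground_text(text_vec, entity_vecs, entity_names):
    """Return the KB entity whose embedding is closest (by cosine
    similarity) to the composed text vector."""
    E = np.asarray(entity_vecs, dtype=float)
    t = np.asarray(text_vec, dtype=float)
    sims = (E @ t) / (np.linalg.norm(E, axis=1) * np.linalg.norm(t))
    return entity_names[int(np.argmax(sims))]

# Toy 2-D entity space standing in for the KB vector space.
name = ground_text([0.9, 0.1],
                   [[1.0, 0.0], [0.0, 1.0]],
                   ["Insomnia", "Fatigue"])
```

The quality of the grounding thus hinges on how well the composition model places text vectors into the same space as the random-walk-derived entity vectors.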

arxiv1808.07724-f1+f2+f3+f4.png

[Image source. Click image to open in new window.]


arxiv1808.07724-t3+f5.png

[Image source. Click image to open in new window.]


arxiv1808.07724-t4+t5.png

[Image source. Click image to open in new window.]


Augmenting End-to-End Dialog Systems with Commonsense Knowledge (Feb 2018) likewise integrated a large commonsense knowledge base (ConceptNet, in one example) into end-to-end conversational models. Although the paper describes it rather confusingly, the basic approach was as follows.

  • Concepts (keywords) were derived from each message (input sentence) using simple n-gram matching, where every n-gram in the message was considered a potential concept. If the n-gram was a key in a dictionary of concepts derived from ConceptNet, the corresponding value – a (concept, relation, concept) assertion triple – was added to the assertions dataset as external knowledge.

  • The authors employed a Tri-LSTM encoder: LSTMs were used to encode the message, response, and commonsense assertions, with the LSTM weights for the message and response tied. A component of the Tri-LSTM encoder (Fig. 3) served as the memory module, encoding all commonsense assertions. Their experiments suggested that the knowledge-augmented models were superior to their knowledge-free counterparts.

    arxiv1709.05453-f1.png

    [Image source. Click image to open in new window.]


    Critique: this was a clever, but complicated, approach. It is mentioned here for its use of knowledge graphs as an external memory source in LSTM-based approaches. The approach was prone to noise due to uncertainties inherent in ConceptNet 5, where many of the assertions were considered “false,” “nonsense” or “vague” by human evaluators:

    False: 34/535 (6.4%); Nonsense: 50/535 (9.4%); Vague: 15/535 (2.8%); Don’t Know: 19/535 (3.6%); Sometimes: 117/535 (21.9%); True: 300/535 (56.1%); Total: 535.

    Source: Table 6.2 (p. 173) in Speer and Havasi (2012) “ConceptNet 5: A Large Semantic Network for Relational Knowledge” [Chapter 6 in Theory and Applications of Natural Language Processing  (local copy)].

    Also, this work is conceptually identical to Yoshua Bengio’s earlier and technically superior work (A Neural Knowledge Language Model  [OpenReview]), which is not cited. In that work, Bengio and colleagues proposed a neural knowledge language model which combined the symbolic knowledge provided by a knowledge graph with the expressive power of RNN language models. Like the “assertions” in Augmenting End-to-End Dialog Systems with Commonsense Knowledge, facts in Bengio’s work were represented as (subject, relationship, object) triples, a global vocabulary of frequent terms was maintained, and string matching was employed to align words in the topic description with their corresponding facts in the topic knowledge. The input, consisting of a word and a fact, went into an LSTM; the LSTM output, together with the knowledge context, generated a fact key, which was used to retrieve the fact embedding from the topic knowledge memory.
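The n-gram concept-matching step described above can be sketched as follows (a toy concept dictionary stands in for the one derived from ConceptNet):

```python
def match_concepts(sentence, concept_dict, max_n=3):
    """Collect assertion triples for every n-gram of the message that is
    a key in the concept dictionary -- the simple retrieval step used to
    pull external knowledge into the dialog model."""
    tokens = [t.strip(".,!?") for t in sentence.lower().split()]
    assertions = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            if gram in concept_dict:
                assertions.extend(concept_dict[gram])
    return assertions

# Toy dictionary: one concept with one assertion triple.
concept_dict = {"sleep": [("sleep", "RelatedTo", "insomnia")]}
found = match_concepts("Can't sleep, too tired to think straight", concept_dict)
```

The retrieved triples are what the Tri-LSTM's memory component then encodes alongside the message and candidate response.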

Microsoft Research Asia also recently used ConceptNet as an external knowledge base, as described in Improving Question Answering by Commonsense-Based Pre-Training (Oct 2018). They provided a simple and effective method that leveraged an external commonsense knowledge base (ConceptNet) to answer questions requiring commonsense knowledge. They pretrained direct and indirect relational functions between concepts, showing that those pretrained functions could easily be added to existing neural networks. Each concept was represented based on its internal information (i.e. the words it contained) and external context (i.e. its neighbors in the knowledge graph), as indicated in their Fig. 6. To verify the effectiveness of their approach, they used two multiple-choice question answering tasks that required commonsense reasoning: given a question and, optionally, a supporting passage, both tasks were to predict the correct answer from a set of candidate answers. Results showed that incorporating the commonsense-based functions improved the state of the art on both tasks. Further analysis showed that their system discovered and leveraged useful evidence from an external commonsense knowledge base – evidence missing from existing neural network models – helping the model derive the correct answer.

1809.03568a.png

[Image source. Click image to open in new window.]

arxiv1809.03568-f5.png

[Image source. Click image to open in new window.]


arxiv1809.03568-f6.png

[Image source. Click image to open in new window.]


Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world; however, existing datasets are typically dominated by questions that can be solved by context matching alone, and thus fail to test this capability. Contemporaneously with Improving Question Answering by Commonsense-Based Pre-Training (above), Microsoft Research Asia also published Knowledge Based Machine Reading Comprehension (Sep 2018), which addressed knowledge-based MRC and built a new dataset consisting of 40,047 question-answer pairs. The annotation of this dataset was designed so that successfully answering the questions required both understanding of, and the knowledge involved in, a document. They implemented a framework consisting of a question answering model and a question generation model, both of which took as input the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase, ProBase, Reverb, or NELL. Results showed that incorporating side information from an external KB improved the accuracy of the baseline question answering system (BiDAF), while error analysis showed the advantages and limitations of their approach.

arxiv1809.04267-f1+f2+f3+f4+t3.png

[Image source. Click image to open in new window.]


In an approach similar to Augmenting End-to-End Dialog Systems with Commonsense Knowledge, A Knowledge Enhanced Generative Conversational Service Agent (Sep 2017) described a system consisting of a knowledge extractor, a knowledge encoder, and a knowledge-enhanced seq2seq model comprising a query encoder and a decoder for response generation. The knowledge extractor was designed to obtain dialog-related knowledge from the web given a dialog history including the latest query: keywords extracted from the dialog history by a keyword extraction component (RAKE) were used to retrieve external related knowledge from the web (the top-K most relevant results from, and ranked by, a search engine). A convolutional neural network (CNN) based knowledge encoder then extracted features from the obtained knowledge. In the knowledge-enhanced seq2seq model, the encoder adopted an LSTM to project the dialog history and the user input into a real-valued vector, and the decoder generated a response based on the dialog history vector and the knowledge features.
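The RAKE keyword-extraction step of that pipeline can be sketched as follows (a minimal stopword list for illustration; the real algorithm uses a full stoplist and richer punctuation handling):

```python
import re

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "for", "on"}

def rake_keywords(text):
    """Minimal RAKE-style extraction: split the text into candidate
    phrases at stopwords, then score each phrase by the summed
    degree/frequency ratio of its words."""
    words = re.findall(r"[a-z']+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    freq, degree = {}, {}
    for phrase in phrases:
        for w in phrase:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(phrase)  # degree includes the word itself
    scores = {" ".join(p): sum(degree[w] / freq[w] for w in p) for p in phrases}
    return sorted(scores, key=scores.get, reverse=True)

keywords = rake_keywords("Deep learning models for natural language processing")
```

The top-scoring phrases are then issued as a web query, and the retrieved snippets are what the CNN knowledge encoder consumes.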

Long2017knowledge-f2.png

[Image source. Click image to open in new window.]


Long2017knowledge-t2+t5.png

[Image source. Click image to open in new window.]


Critique: this is a moderately clever idea, albeit prone to noise (acknowledged in their Conclusions
Source
), plus other deficiencies.

Most textual entailment models focus on lexical gaps between the premise text and the hypothesis, but rarely on knowledge gaps. Bridging Knowledge Gaps in Neural Entailment via Symbolic Models focused on filling these knowledge gaps in the Science Entailment task, by leveraging an external structured knowledge base (KB) of science facts. Their architecture (NSnet
Source
) combined standard neural entailment models with a knowledge lookup module. To facilitate this lookup, they proposed a fact-level decomposition of the hypothesis, verifying the resulting sub-facts against both the textual premise and the structured KB. NSnet learned to aggregate predictions from these heterogeneous data formats. On the SciTail dataset, NSnet outperformed
Source
a simpler combination of the two predictions by 3% and the base entailment model by 5%.

In web search, entity-seeking queries often trigger a special question answering (QA) system. For example, the QA system may use a parser to reinterpret the question as a structured query, execute that query on a knowledge graph (KG), and return entity responses. However, QA systems based on precise parsing tend to be brittle: minor variations in the question may dramatically change the output. Moreover, KG coverage is patchy, and a large corpus may hold supporting information – but in an unstructured, unreliable form.

Addressing those issues, Neural Architecture for Question Answering Using a Knowledge Graph and Web Corpus (Sep 2018) proposed a more robust approach that degraded gracefully along a “structure ramp.” Their system, AQQUCN, accepted a broad spectrum of queries ranging from well-formed questions to short keyword sequences. To address query ambiguity, AQQUCN aggregated signals from the KG and the corpora to directly rank KG entities, rather than committing to one semantic interpretation of the query; the ideal interpretation was modeled as an unobservable (latent) variable. Interpretations and candidate entity responses were scored as pairs, by combining signals from multiple convolutional networks operating collectively on the query, KG and corpus. On four public query workloads (over 8,000 queries in different query formats) the authors saw 5-16% absolute improvement in mean average precision (MAP) over the entity ranking performance of recent systems. Their system was also competitive at entity set retrieval, almost doubling $\small F_1$ scores for challenging short queries.
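The MAP metric used for that comparison is straightforward to compute over ranked entity lists; a minimal sketch:

```python
def average_precision(ranked, relevant):
    """Average precision for one query: precision@k averaged over the
    ranks k at which a relevant entity appears."""
    hits, total = 0, 0.0
    for k, entity in enumerate(ranked, start=1):
        if entity in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over (ranked_list, relevant_set) pairs -- the metric used to
    compare entity-ranking QA systems such as AQQUCN."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Relevant entities at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6.
ap = average_precision(["a", "b", "c"], {"a", "c"})
```

Because MAP rewards placing relevant entities near the top of the ranking, it suits systems like AQQUCN that rank entities directly instead of returning a single parsed answer.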

arxiv1706.00973-f3+f5+t2+t4+t8.png

[Image source. Click image to open in new window.]


  • AQQUCN, by researchers at IIT Bombay, extends previous work, Aqqu [More Accurate Question Answering on Freebase (2015)] by another research group (at the University of Freiburg). Unlike Aqqu and other systems, the end goal of AQQUCN was not to “compile” the input into a structured query to execute on the KG (because broken/missing input syntax can make this attempt fail in brittle ways), but rather to directly create a ranking over entities using a KG and a corpus (using interpretations as latent variables).

    • AQQUCN is (apparently) a portmanteau of Aqqu + CN (convolutional networks).
    • “Our code with relevant data will be made available” (CSAW, 2018).


[Text Grounding: Mapping Text to Knowledge Graphs; External Knowledge Lookup:]

Additional Reading

  • Improving Natural Language Inference Using External Knowledge in the Science Questions Domain

  • Combining neural and knowledge-based approaches to Named Entity Recognition in Polish (Nov 2018)

    “Named entity recognition (NER) is one of the tasks in natural language processing that can greatly benefit from the use of external knowledge sources. We propose a named entity recognition framework composed of knowledge-based feature extractors and a deep learning model including contextual word embeddings, long short-term memory (LSTM) layers and conditional random fields (CRF) inference layer. We use an entity linking module to integrate our system with Wikipedia. The combination of effective neural architecture and external resources allows us to obtain state-of-the-art results on recognition of Polish proper names. We evaluate our model on data from PolEval 2018 NER challenge on which it outperforms other methods, reducing the error rate by 22.4% compared to the winning solution. Our work shows that combining neural NER model and entity linking model with a knowledge base is more effective in recognizing named entities than using NER model alone.”

  • HCqa: Hybrid and Complex Question Answering on Textual Corpus and Knowledge Graph (Nov 2018) [code]

    “Question Answering (QA) systems provide easy access to the vast amount of knowledge without having to know the underlying complex structure of the knowledge. The research community has provided ad hoc solutions to the key QA tasks, including named entity recognition and disambiguation, relation extraction and query building. Furthermore, some have integrated and composed these components to implement many tasks automatically and efficiently. However, in general, the existing solutions are limited to simple and short questions and still do not address complex questions composed of several sub-questions. Exploiting the answer to complex questions is further challenged if it requires integrating knowledge from unstructured data sources, i.e., textual corpus, as well as structured data sources, i.e., knowledge graphs. In this paper, an approach (HCqa) is introduced for dealing with complex questions requiring federating knowledge from a hybrid of heterogeneous data sources (structured and unstructured). We contribute in developing (i) a decomposition mechanism which extracts sub-questions from potentially long and complex input questions, (ii) a novel comprehensive schema, first of its kind, for extracting and annotating relations, and (iii) an approach for executing and aggregating the answers of sub-questions. The evaluation of HCqa showed a superior accuracy in the fundamental tasks, such as relation extraction, as well as the federation task.”

    arxiv1811.10986-f4.png

    [Image source. Click image to open in new window.]