Last modified: 2020-06-28 11:32:45 -0700 (PST)


In this draft post I collate and summarize some material and my thoughts connecting various domains:

  • holographic universe model;
  • quantum entanglement;
  • relational theory;
  • knowledge graphs; and,
  • dimensionality.

I have a layperson's background and interest in quantum physics (accordingly, some of my thoughts below may be of questionable validity), a professional interest in informatics and knowledge graphs, and a formal background and ongoing interest in biochemistry, metabolism, molecular genetics/genomics, etc.

Let’s have some fun!! 

About a week ago (mid-Jan 2019) I watched a very interesting PBS NOVA program, Einstein’s Quantum Riddle (a U.S. VPN endpoint may be needed to watch it online at that site), which discussed quantum entanglement. Among the topics presented was the following comment.

  • “In a radical theory known as the holographic universe, space and time are created by entangled quantum particles on a sphere that’s infinitely far away. … The most puzzling element of entanglement, that, you know, somehow, two points in space can communicate, becomes less of a problem, because space itself has disappeared. In the end we just have this quantum mechanical world, there is no space anymore. And so, in some sense, the paradoxes of entanglement, the E.P.R. Paradox, disappears into thin air.”

I find quantum mechanics (string theory, multiverse theories, etc.) really fascinating, including the discussion here of the holographic universe theory (links below).

red-pill-blue-pill.jpg

[Meme. Image source. Click image to open in new window.]



Information Entropy

Entropy, if considered as information (see information entropy), is measured in bits. The total quantity of bits is related to the total degrees of freedom of matter/energy. For a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume, suggesting that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level, and that the fundamental particle is a bit (1 or 0) of information.
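For concreteness, the Bekenstein bound mentioned above has a simple closed form: for a system of total energy $\small E$ enclosed within a sphere of radius $\small R$, the entropy satisfies

    $\small S \leq \frac{2 \pi k_B R E}{\hbar c}$,

and dividing by $\small k_B \ln 2$ gives the maximum number of bits that region can hold.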

The physical universe is widely seen to be composed of “matter” and “energy”. In an article entitled Information in the Holographic Universe (Scientific American, Aug 2003), Jacob Bekenstein speculatively summarized a current trend started by John Archibald Wheeler, which suggests scientists may “regard the physical world as made of information, with energy and matter as incidentals”. … Bekenstein’s topical overview “A Tale of Two Entropies” [← Victoria: I can’t find this source; does it exist?] describes potentially profound implications of Wheeler’s trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude E. Shannon introduced today’s most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful: the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, relies on it.

… Shannon’s efforts to find a way to quantify the information contained in, for example, a telegraph message, led him unexpectedly to a formula with the same form as Boltzmann’s. In his 2003 Scientific American article Bekenstein summarizes that “Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement” of matter and energy. The only salient difference between the thermodynamic entropy of physics and Shannon’s entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless “bits” of information.
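Setting the two formulas side by side makes that correspondence concrete. Shannon entropy, in bits, for a source with outcome probabilities $\small p_i$ is

    $\small H = -\sum_i p_i \log_2 p_i$,

while the Gibbs-Boltzmann entropy of a system with microstate probabilities $\small p_i$ is

    $\small S = -k_B \sum_i p_i \ln p_i$.

The forms are identical up to the base of the logarithm and the constant $\small k_B$ – exactly the difference in units of measure noted above.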

The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information “inscribed” on the surface of its boundary.

[Source]


Associated Fundamental Constants

CODATA-fundamental_physical_constants_2014.jpg

[CODATA: Recommended Values of the Fundamental Physical Constants (2014) [see also]. Image source. Click image to open in new window.]


Q. Are fundamental constants (like the speed of light, $\small c$) (i) integral to the holographic boundary, and (ii) consequently manifest in the bulk dimension? It’s intriguing that some fundamental constants are associated with the holographic boundary through the Planck length, which is based on the speed of light, the Planck constant, and the gravitational constant.

Bekenstein’s 2003 Scientific American article provides a good introduction to the entropy of a black hole (by analogy extended to the holographic boundary), which sets a limit on the density of entropy or information in various circumstances: “… a hole with a horizon spanning A Planck areas has A⁄4 units of entropy. (The Planck area, approximately $\small 10^{-66}\ \text{cm}^2$, is the fundamental quantum unit of area determined by the strength of gravity, the speed of light and the size of quanta.)”

    "The holographic bound defines how much information can be contained in a specified region of space. It can be derived by considering a roughly spherical distribution of matter that is contained within a surface of area $\small A$. The matter is induced to collapse to form a black hole ($\small a$). The black hole's area must be smaller than $\small A$, so its entropy must be less than $\small \text{A⁄4}$. Because entropy cannot decrease, one infers that the original distribution of matter also must carry less than $\small \text{A⁄4}$ units of entropy or information. This result -- that the maximum information content of a region of space is fixed by its area -- defies the commonsense expectation that the capacity of a region should depend on its volume. The universal entropy bound defines how much information can be carried by a mass $\small m$ of diameter $\small d$. ..."

bekenstein2003information-black_hole_entropy.png

[Image source. Click image to open in new window.]


A subsequent Scientific American reprint of Bekenstein’s article, Information in the Holographic Universe (Apr 2007), gives an estimate of the amount of information that may be embedded on a holographic boundary.

    "... black hole entropy is precisely one quarter of the event horizon's area measured in Planck areas. (The Planck length, about 10-33 centimeter, is the fundamental length scale related to gravity and quantum mechanics. The Planck area is its square.) Even in thermodynamic terms, this is a vast quantity of entropy. The entropy of a black hole one centimeter in diameter would be about 1066 bits, roughly equal to the thermodynamic entropy of a cube of water 10 billion kilometers on a side."

bekenstein2003information-entropy_limit.png

[Image source. Click image to open in new window.]


In 1993 the Dutch theoretical physicist/Nobel Laureate Gerard ‘t Hooft put forward a bold proposal [Dimensional Reduction in Quantum Gravity (Oct 1993)] known as the Holographic Principle, which consisted of two basic assertions:

  • Assertion 1. All information contained in some region of space can be represented as a hologram, on the boundary of that region.
  • Assertion 2. The boundary of the region of space in question should contain at most one degree of freedom per Planck area.

't Hooft's paper also specifies the maximum entropy of the boundary of a black hole,

    $\small S_{max} = \frac{1}{4}A$.

That relationship in turn derives from earlier work in 1973 in which Jacob Bekenstein suggested that a black hole could have a well-defined entropy proportional to the area of its event horizon. … a black hole swallows an enormous amount of information. The Bekenstein-Hawking formula describes the entropy that might be assigned to this information:

    $\small S = \frac{c^3 A}{4 \hbar G}$,

where $\small A$ is the area of the event horizon, $\small c$ is the speed of light in a vacuum, $\small \hbar$ is the reduced Planck constant, and $\small G$ is Newton’s gravitational constant. [Source: The Holographic Universe (arXiv, Feb 2016), and references therein.]
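As a sanity check on the magnitudes quoted earlier, here is a minimal Python sketch (mine; it uses only scipy.constants) that evaluates the Bekenstein-Hawking formula for a black hole with a one-centimeter event horizon, for comparison with the $\small \sim 10^{66}$ bits figure Bekenstein quotes above:

```python
import math
from scipy.constants import c, G, hbar, k as k_B

r = 0.5e-2                               # horizon radius: 1 cm diameter, in meters
A = 4 * math.pi * r**2                   # event horizon area, m^2

# Bekenstein-Hawking entropy, S = k_B c^3 A / (4 hbar G)
# (k_B included to express S in conventional J/K; the dimensionless form divides it out)
S = k_B * c**3 * A / (4 * hbar * G)      # ~4.1e42 J/K

bits = S / (k_B * math.log(2))           # convert entropy to bits
print(f"S = {S:.2e} J/K  ->  {bits:.2e} bits")   # ~4e65 bits
```

The result, roughly $\small 4 \times 10^{65}$ bits, agrees to order of magnitude with the “about $\small 10^{66}$ bits” quoted above.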

For an extended background discussion, see The Holographic Principle for General Backgrounds (Nov 1999). A Planck area is the area of a square whose sides have a length equal to the Planck length, a basic unit of length usually denoted $\small L_p$. The Planck length is a fundamental unit of length, constructed out of the basic constants $\small G$ (Newton’s constant for the strength of gravitational interactions), $\small \hbar$ (the reduced Planck constant), and $\small c$ (the speed of light):

    $\small L_p = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-33}\ \text{centimeters}$

Aside.  Note that Planck units are a set of units of measurement defined exclusively in terms of five universal physical constants, in such a manner that those five constants take on the numerical value of 1 when expressed in those units. The five universal constants that Planck units, by definition, normalize to 1 (each listed with its associated fundamental theory or constant) are the following; a quick numerical check appears after the list.

  • speed of light in a vacuum, $\small c$ [associated with special relativity],
  • gravitational constant, $\small G$ [associated with general relativity],
  • reduced Planck constant, $\small \hbar$ [associated with quantum mechanics],
  • Coulomb constant, $\small k_e = \frac{1}{4 \pi \epsilon_0}$ [$\small \epsilon_0$ associated with electromagnetism], and
  • Boltzmann constant, $\small k_B$ [associated with the notion of temperature/energy, in statistical mechanics and thermodynamics].
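All five of those constants are available in scipy.constants, so the Planck scales cited above can be recomputed directly. A minimal sketch (variable names are mine):

```python
import math
from scipy.constants import c, G, hbar, epsilon_0, k as k_B

# Planck length and area, built only from c, G, and hbar
l_P = math.sqrt(hbar * G / c**3)      # ~1.616e-35 m  =  ~1.6e-33 cm
A_P = l_P**2                          # ~2.61e-70 m^2 =  ~2.6e-66 cm^2 (cf. Bekenstein's figure)

# The remaining two constants that Planck units normalize to 1
k_e = 1 / (4 * math.pi * epsilon_0)   # Coulomb constant, ~8.988e9 N m^2 / C^2
print(f"l_P = {l_P:.4e} m, A_P = {A_P:.3e} m^2, k_e = {k_e:.4e}, k_B = {k_B:.4e} J/K")
```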

Bulk Universe

Following the terminology that is generally used, I refer to our perceived (4D space-time, “temporal-spatial”) universe as the “bulk” universe, which (in the holographic universe model) is bounded by the “holographic boundary” (a holographic sphere).

Several approaches to quantum gravity – most of all, string theory – now see entanglement as crucial. String theory applies the holographic principle not just to black holes but also to the universe at large, providing a recipe for how to create space – or at least some of it. For instance, a two-dimensional space could be threaded by fields that, when structured in the right way, generate an additional dimension of space. The original two-dimensional space would serve as the boundary of a more expansive realm, known as the bulk space. And entanglement is what knits the bulk space into a contiguous whole.

… Whereas these string theory ideas work only for specific geometries and reconstruct only a single dimension of space, some researchers have sought to explain how all of space can emerge from scratch. For instance, ChunJun Cao, Spyridon Michalakis and Sean M. Carroll, all at the California Institute of Technology, begin with a minimalist quantum description of a system, formulated with no direct reference to spacetime or even to matter. If it has the right pattern of correlations, the system can be cleaved into component parts that can be identified as different regions of spacetime. In this model, the degree of entanglement defines a notion of spatial distance.

[Source]


A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo’s Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors’ Suggestion “for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields.”

… The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive.

Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. … This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in emergence of spacetime was not clear until the new paper by Ooguri and collaborators.

Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called “spooky action at a distance.” The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory.

[Source, discusses Locality of Gravitational Systems from Entanglement of Conformal Field Theories]


Entangling Spacetime

The notion that spacetime has bits or is “made up” of anything is a departure from the traditional picture according to general relativity. According to the new view, spacetime, rather than being fundamental, might “emerge” via the interactions of such bits. … The key to this organization may be the strange phenomenon known as quantum entanglement – a weird kind of correlation that can exist between particles, wherein actions performed on one particle can affect the other even when a great distance separates them. “Lately one absolutely fascinating proposal is that the fabric of spacetime is knitted together by the quantum entanglement of whatever the underlying ‘atoms’ of spacetime are,” Balasubramanian says. “That’s amazing if true.”

… To understand how entanglement might give rise to spacetime, physicists first must better understand how entanglement works. … Once the dynamics of entanglement are clearer, scientists hope to comprehend how spacetime emerges, …

… One of the most successful embodiments of the holographic principle is a discovery known as the AdS/CFT correspondence, found by Maldacena in 1997 within the framework of string theory. String theory, itself an attempt at a theory of quantum gravity, replaces all the fundamental particles of nature with tiny vibrating strings. In the AdS/CFT correspondence Maldacena showed that one can completely describe a black hole purely by describing what happens on its surface. In other words, the physics of the inside – the 3-D “bulk” – corresponds perfectly to the physics of the outside – the 2-D “boundary.”

… For the past 20 years scientists have found that the AdS/CFT correspondence works – a 2-D theory can describe a 3-D situation – but they do not fully understand why. … Quantum information theory may be able to help because it turns out that a familiar concept from this field, quantum error-correcting codes [note also How Space and Time Could Be a Quantum Error-Correcting Code], could be at work in the AdS/CFT correspondence. … “There’s an underlying mathematical structure that seems to be common to the error-correcting codes and AdS/CFT,”

[Source]


… associate professor of physics Matthew Headrick works on one of the most cutting-edge theories in theoretical physics – the holographic principle. It holds that the universe is a three-dimensional image projected off a two-dimensional surface, much like a hologram emerges from a sheet of photographic film. “In my view, the discovery of holographic entanglement and its generalizations has been one of the most exciting developments in theoretical physics in this century so far,” Headrick said.

… It’s long been thought that the universe at its most fundamental level is made up of subatomic particles like electrons or quarks. But now physicists believe those particles are made up of something even smaller – information. … In most cases, when you drop an object into a jar – we’ll use a jelly bean – it will fall inside and take up space. Put in another jelly bean, the amount of unfilled space shrinks and the volume of the jelly beans increases. It doesn’t work this way with qubits. Qubits won’t fall into the jar but instead spread out on a surface. Add a qubit, it will adhere to the side of the jar. Add another qubit, it will do the same. Increasing the number of qubits doesn’t increase the volume. Instead, it increases the surface area the qubits take up. More and more qubits spreading out across a flat surface – this is how you get the two-dimensional plane described by the holographic principle.

… The building blocks of relativity are also units of information. Now though, they’re called bits. And bits behave in a way that’s much more familiar to us. They exist in three dimensions. So how do you get a hologram? Let’s go back to that two-dimensional surface covered with entangled qubits. Since the value of a qubit changes depending on the value of its entangled pair, there’s a degree of indeterminacy built into the system. If you haven’t yet measured the first qubit, you can’t be sure about the second. The amount of uncertainty in any given system is called its entropy. As qubits become entangled and disentangled, the level of entropy rises and falls. You wind up with fields of entropy in a constantly changing state. The holographic principle holds that our three-dimensional world is a representation or projection of all this activity taking place on a two-dimensional surface full of qubits.

[Source]


Emergent Properties

If all of this sounds fantastical, it touches on the phenomenon of emergent properties: properties that arise from the collaborative functioning of a system but do not belong to any one part of that system. We have seen that here in discussions of how spacetime may emerge from quantum entanglement: the work of Ooguri and collaborators (Locality of Gravitational Systems from Entanglement of Conformal Field Theories) shows that this quantum entanglement generates the extra dimensions of the gravitational theory. My own intuition – albeit based on a highly limited understanding; I haven’t yet thought much about the “indeterminacy” discussion in the preceding paragraph, nor reconciled that idea with the following supposition – is that time may similarly emerge from the information entropy content on the holographic boundary (more information resulting in a perceived passage of time in the bulk universe).

In my native domain (biochemistry/genetics/genomics), cellular growth and development provides an excellent example. We are complex multicellular organisms composed of ~400 distinct cell types. This diversity of cell types allows us to see, think, hear, fight infections, etc. – yet all of this information is encoded in the same genome [a molecule: deoxyribonucleic acid (DNA)]. The fundamental differences between these cell types are manifestations of the parts of the genome that are used – for instance, neurons use different genes than muscle cells – resulting in cell-specific metabolism, signal transduction and function.

A totipotent [image] fertilized egg or a pluripotent stem cell can give rise to any other cell type – blood, liver, heart, brain, etc. – each arising from differential expression of the identical genome encoded within these cells throughout their lineage. This cellular differentiation largely arises due to selective epigenetic modification of the genome during growth and development. [For simplicity, I am ignoring here other mechanisms, such as physicochemical properties, that may also play a role, i.e. “environmental” effects due to the spatiotemporal presence of other cells, chemicals, etc. that may shape cellular growth and developmental processes.]

Through the collective behavior of tissue-specific cell types, tissues (brain; heart; lung; skin; liver; …) are capable of functions that exceed the capabilities of single cells. Most astoundingly, it appears that the association of neurons in our brains gives rise to consciousness – an emergent property – capable of understanding itself, its relationship to the cosmos (time and space), etc. However, even that viewpoint is debatable; e.g., There Is No Such Thing as Conscious Thought (Scientific American, Dec 2018). If you are interested in this debate, then be sure to also read this fascinating, wide-ranging discussion on the nature of intelligence: The Genius Neuroscientist Who Might Hold the Key to True AI.


Gedankenerfahrung: a thought experiment that considers some hypothesis, theory, or principle for the purpose of thinking through its consequences. Given the structure of the experiment, it may not be possible to perform it, and even if it could be performed, there need not be an intention to perform it. The common goal of a thought experiment is to explore the potential consequences of the principle in question.  [How to pronounce Erfahrung]

So, I’ve been thinking about all of that, and here are my thoughts (right or wrong). 

Basically, in the holographic universe model all information since the origin of the universe is embedded on the 2D surface of a sphere, with new information added to that (ever-expanding) sphere. Barring wormholes, and possibly black holes or other esoteric entities/processes, that information cannot be destroyed. Therefore, that holographic boundary can only grow, and that growth creates the experience of time in the observed, i.e. bulk, universe. [For simplicity, I’ll regard the holographic boundary as unobserved, and the bulk universe as our perceived/observed universe.] Consequently, the particular state of the holographic boundary automatically determines the unidirectional flow of time in the bulk universe. The information in that hologram can also manifest spatially, as described above.

Now, regarding “spooky action at a distance,” i.e. quantum entanglement: first, some background. An entangled system is defined to be one whose quantum state cannot be factored as a product of states of its local constituents; that is to say, the constituents are not individual particles but an inseparable whole. In entanglement, one constituent cannot be fully described without considering the other(s). Note that the state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum necessarily has more than one term. Quantum systems can become entangled through various types of interactions; for some of the ways in which entanglement may be achieved for experimental purposes, see here. Entanglement is broken when the entangled particles decohere through interaction with the environment, for example, when a measurement is made.

Systems which contain no entanglement are said to be separable. In quantum computing, a qubit or quantum bit is the basic unit of quantum information – the quantum version of the classical binary bit physically realized with a two-state device. A qubit is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include: the spin of the electron in which the two levels can be taken as spin up and spin down; or the polarization of a single photon in which the two states can be taken to be the vertical polarization and the horizontal polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of both states/levels simultaneously, a property which is fundamental to quantum mechanics and quantum computing.

A pure qubit state is a coherent superposition of the basis states. This means that a single qubit can be described by a linear combination of $\small \vert 0\rangle$ and $\small \vert 1\rangle$:

    $\small \vert \psi \rangle = \alpha \vert 0\rangle + \beta \vert 1\rangle$

where $\small \alpha$ and $\small \beta$ are probability amplitudes and can in general both be complex numbers. When we measure this qubit in the standard basis, the probability of outcome $\small \vert 0\rangle$ with value “0” is $\small \vert \alpha \vert^{2}$ and the probability of outcome $\small \vert 1\rangle$ with value “1” is $\small \vert \beta \vert^{2}$. Because the absolute squares of the amplitudes equate to probabilities, it follows that $\small \alpha$ and $\small \beta$ must be constrained by the equation

    $\small \vert \alpha \vert^{2} + \vert \beta \vert^{2} = 1$.

Note that a qubit in this superposition state does not have a value in between “0” and “1”; rather, when measured, the qubit has a probability $\small \vert \alpha \vert^{2}$ of the value “0” and a probability $\small \vert \beta \vert^{2}$ of the value “1”. In other words, superposition means that there is no way, even in principle, to tell which of the two possible states forming the superposition state actually pertains.

An important distinguishing feature between qubits and classical bits is that multiple qubits can exhibit quantum entanglement. When a qubit is measured, the superposition state collapses (i.e., coherence is lost).
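To ground this formalism, here is a small numpy sketch (mine, purely illustrative): it samples measurements of a single-qubit superposition, then builds a Bell state and shows, via the Schmidt rank, why an entangled state cannot be factored into independent qubits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single qubit |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
probs = np.array([abs(alpha)**2, abs(beta)**2])        # [0.5, 0.5]
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print("P(0) ~", np.mean(outcomes == 0))                # ~0.5

# Bell state |Phi+> = (|00> + |11>) / sqrt(2), in basis |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# A product state (a|0>+b|1>) (x) (c|0>+d|1>) has amplitudes [ac, ad, bc, bd],
# so its 2x2 amplitude matrix has rank 1. The Bell state's matrix has rank 2:
rank = np.linalg.matrix_rank(bell.reshape(2, 2))
print("Schmidt rank:", rank)                           # 2 -> entangled, not separable
```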

OK, so if we generate two quantum-entangled qubits and move them apart, they remain entangled until one of them is observed. The “spooky action at a distance” arises when those particles are separated farther than light can travel in the time that it takes to observe/measure one of those qubits (according to quantum physics, the measurement instantaneously collapses the shared quantum state). That is, the quantum superposition decoheres instantly, before the news that one of the particles has been observed could traverse our bulk universe at the speed of light – a fundamental constant that defines the maximum speed at which anything can travel in our universe, independent of the movement of the observer (this is special relativity). That paradox is resolved in the holographic universe model.

I’m going to make the presumption (remember, this is gedankenerfahrung) that when new information arises on the holographic universe/boundary, it is quantum entangled, and remains entangled in our bulk universe until it is observed. Regardless of the distance in our bulk universe, the behavior of those observed particles (bits; collapsed qubits) is determined by their holographic embedding. Hence, the appearance that they are separated by a faster-than-light distance is a false (holographic/spatial) dichotomy.

Q: Is the holographic universe also possibly preserving the history of the bulk universe? If information, when generated on the holographic boundary, is/remains entangled until observed (I have no idea if this is the case), then this could be a consequence. In one of the references below I recall reading that the holographically embedded information can manifest in more than one way in our bulk universe (a crude analogy – mine – like the endless words and sentences we can create from our alphabet), and that when we make an observation and collapse those superpositions, it is a spatially localized effect. If true, that allows the preservation of entangled informational states in the hologram: entanglements themselves are information, which must be preserved – right?

Anyway ….

Quantum Knowledge Graph

… that has me thinking about knowledge graphs (relational property graphs), how to embed that information, and how to selectively view that information – a huge challenge for large, complex graphs. Here are two preliminary figures (my previously unpublished work from my gene expression studies) from about a decade ago, when I was at NIEHS (in this group in N.C.).

stuart-yeast_subinteractome-2.png

[Stuart, V.A. (unpublished work). Image source. Click image to open in new window.]


stuart-yeast_subinteractome-1.png

[Stuart, V.A. (unpublished work). Image source. Click image to open in new window.]


As you can imagine, highly relational knowledge graphs quickly become very complex, structurally, making it difficult to visualize those data – the so-called “hairball” problem:

hairball.png

[Image source. Click image to open in new window.]


That issue has vexed me for more than a decade – it is alluded to, for example, in my published work here and here (not shown in those papers are the enormously complex visualizations of those data). Hence, I am always searching for and thinking about solutions to that issue: hypergraphs; multi-view graphs; etc.

One idea I have – and this is again gedankenerfahrung – is inspired by the holographic universe / quantum entanglement paradigm. It goes like this: if the information in the holographic universe is entangled (or not: while that is the inspiration for my thought experiment, it really doesn’t matter here) and collapses (here) when we observe our bulk universe, then that is a means of preserving information while also selectively choosing what we wish to view.

If, when creating a knowledge graph, we can entangle the various relations (as per a normal graph; perhaps through the use of labels, values, flags or keys), then we can have all of that information embedded, entangled, and set (predetermined by us) as a “dark” (entangled, invisible) knowledge graph. Making an observation (e.g., we want to look at glycolysis, or the Wnt cellular signaling network, or some other signal transduction network) would detangle (collapse, decohere) the matching entangled nodes and edges, collapsing them to a visible state (graph) – sketched in code below the following figure.

For instance, say we have isozymes of an enzyme that differentially (depending on their isoform) cause a particular disease, affect a particular protein or DNA binding interaction, alter enzyme stoichiometry or enzymatic activity, or alter a restriction endonuclease or protein binding site … coarsely represented here as alphabet case:

quantum_kg+key.png

[Click image to open in new window.]
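Here is a toy rendering of that “dark knowledge graph” idea as a Python/networkx sketch: every relation is stored but latent, tagged with pathway labels, and an “observation” (querying a label) collapses just the matching subgraph to a visible state. The node, edge, and pathway names are hypothetical placeholders:

```python
import networkx as nx

# Full ("dark") graph: every relation is present but latent, keyed by pathway labels
G = nx.MultiDiGraph()
G.add_edge("glucose", "glucose-6-P", key="hexokinase", pathways={"glycolysis"})
G.add_edge("glucose-6-P", "fructose-6-P", key="GPI", pathways={"glycolysis"})
G.add_edge("beta-catenin", "TCF/LEF", key="binds", pathways={"Wnt signaling"})

def observe(graph, pathway):
    """'Collapse' the dark graph: return only the edges entangled with this observation."""
    visible = nx.MultiDiGraph()
    for u, v, k, data in graph.edges(keys=True, data=True):
        if pathway in data["pathways"]:
            visible.add_edge(u, v, key=k, **data)
    return visible

sub = observe(G, "glycolysis")
print(list(sub.edges(keys=True)))   # only the glycolysis relations are rendered visible
```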



Relational Theory

This topic (relational theory | relationalism [note also]), which has come up a few times in my recent searches, seems relevant to this discussion as a whole. For example, the Scientific American article What Is Spacetime? (Jun 2018) includes this passage:

    "Although the organizing principles of these theories vary, all strive to uphold some version of the so-called relationalism of 17th- and 18th-century German philosopher Gottfried Leibniz. In broad terms, relationalism holds that space arises from a certain pattern of correlations among objects. In this view, space is a jigsaw puzzle. You start with a big pile of pieces, see how they connect and place them accordingly. If two pieces have similar properties, such as color, they are likely to be nearby; if they differ strongly, you tentatively put them far apart. Physicists commonly express these relations as a network with a certain pattern of connectivity. The relations are dictated by quantum theory or other principles, and the spatial arrangement follows."

Relationalism is any theoretical position that gives importance to the relational nature of things. For relationalism, things exist and function only as relational entities. Relationalism may be contrasted with relationism, which tends to emphasize relations per se. In discussions about space and time, the name relationalism (or relationism) refers to Leibniz’s relationist notion of space and time as against Newton’s substantivalist views. According to Newton’s substantivalism, space and time are entities in their own right, existing independently of things. Leibniz’s relationism, on the other hand, describes space and time as systems of relations that exist between objects.

In physics and philosophy, a relational theory (or relationism) is a framework to understand reality or a physical system in such a way that the positions and other properties of objects are only meaningful relative to other objects. In a relational spacetime theory, space does not exist unless there are objects in it; nor does time exist without events. The relational view proposes that space is contained in objects and that an object represents within itself relationships to other objects. Space can be defined through the relations among the objects that it contains considering their variations through time. The alternative spatial theory is an absolute theory in which the space exists independently of any objects that can be immersed in it.

This Wikipedia passage caught my attention for several reasons (the general discussion here, and mention of Robert Rosen):

    "A recent synthesis of relational theory, called R-theory, continuing the work of the mathematical biologist Robert Rosen (who developed 'Relational Biology' and 'Relational Complexity' as theories of life takes a position between the above views. Rosen's theory differed from other relational views in defining fundamental relations in nature (as opposed to merely epistemic relations we might discuss) as information transfers between natural systems and their organization (as expressed in models).

    "R-theory extends the idea of organizational models to nature generally. As interpreted by R-theory, such 'modeling relations' describe reality in terms of information relations (encoding and decoding) between measurable existence (expressed as material states and established by efficient behavior) and implicate organization or identity (expressed as formal potential and established by final exemplar), thus capturing all four of Aristotle's causalities within nature (Aristotle defined final cause as immanent from outside of nature).

    "Applied to space-time physics, it claims that space-time is real but established only in relation to existing events, as a formal cause or model for the location of events relative to each other; and in reverse a system of space-time events establishes a template for space-time. R-theory is thus a form of model-dependent realism. It claims to more closely follow the views of Mach, Leibniz, Wheeler and Bohm, suggesting that natural law itself is system-dependent."

Aside.  I encountered the work of Robert Rosen last week (mid 2019) in an “unrelated” – but as it turns out, “related” – search for how to represent related content in knowledge graphs in ways that transcend simple binary/ternary/… relations – akin to modeling metabolic pathways (glycolysis; TCA cycle; etc.) as hypernodes on a hypergraph. In another coincidence, Dr. Rosen was a Professor of Biophysics (1975-1994) at Dalhousie University, where I completed a B.Sc. with Honours in Biochemistry (1979-1983).

While I caution against overextending ideas from different domains into grand unification schemes, the purpose of this post is to collate and connect (in a thought-experiment approach) recent areas of interest – subject to further reflection and revision.


Dimensionality

square-cube-tesseract-2d-3d-4d-composite.png

[Representations of 0, 1, 2, 3 and 4 dimensions.  Image source. Click image to open in new window.]



tesseract_animated.gif8-cell_tesseract.gif

LEFT: An animated projection of a rotating hypercube.  RIGHT: A 3D projection of an 8-cell performing a simple rotation about a plane which bisects the figure from front-left to back-right and top to bottom.  [More animations here.]  [Image source. Click images to open in new window.]


unfolded_tesseract.gif

[The tesseract can be unfolded into eight cubes in 3D space, just as the cube can be unfolded into six squares in 2D space. An unfolding of a polytope is called a net. There are 261 distinct nets of the tesseract.  Image source. Click image to open in new window.]


Related to this discussion, I am also very interested in the dimensionality of data and its relevance (e.g.) to knowledge graphs. By way of introduction, imagine an infinitely thin sheet of paper: it is two dimensional (with two sides). Two points on one side of that surface will be $\small x$ (arbitrary) units apart. If you introduce a small deflection in that surface between those points, the distance between those two points (measured in two dimensions, along the surface) will increase. However, if you introduce a third dimension, the points could be represented (e.g.) on the surface of a sphere, where the direct connection between those points through the third dimension (through the volume determined by that topology) is shorter than the distance between those points along the surface (imagine any two points on a sphere). [For additional information see my Geometry page.]

My point is that the distance between two points can be reduced if we alter the topology and change the dimensionality. I like to think about the topology and dimensionality of natural language and knowledge graphs (basically, the representation of knowledge/information). When we think of vector space models (VSM) in natural language processing, in a manner we envision those embeddings in higher-dimensional space (see, e.g., my definitions of real coordinate space, and my Complex Numbers, π, Fourier Transform, Eigenvectors blog post, which essentially explains the relationship between VSM dimensions and polar coordinates).
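Before moving on to word vectors: the “shortcut through a higher dimension” claim above is easy to check numerically. For any two distinct points on a unit sphere, the straight-line (chord) distance through the 3D interior is strictly shorter than the great-circle (geodesic) distance along the 2D surface. A quick numpy check (mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two random points on the unit sphere
u, v = rng.normal(size=(2, 3))
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)

chord = np.linalg.norm(u - v)              # through the interior (3D shortcut)
arc = np.arccos(np.clip(u @ v, -1, 1))     # along the surface (2D geodesic)
print(chord < arc)                         # True for any distinct pair of points
```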

word_vector_dimensions.png

[Image source. Click image to open in new window.]
In the figure above we are imagining that each dimension captures a clearly defined meaning. For example, if you imagine that the first dimension represents the meaning or concept of "animal", then the remaining dimensions represent each word's values for the other features (domesticated; pet; fluffy; ...).  (Adapted from Introduction to Word Vectors.)
In word2vec a distributed representation of a word is used. Take a vector with 1000 dimensions, where each word is represented by a distribution of weights across those elements. Instead of a one-to-one mapping between an element in the vector and a word, the representation of a word is spread across all of the elements in the vector, and each element in the vector contributes to the definition of many words. If the dimensions in a hypothetical word vector are labeled (there are no such pre-assigned labels in the algorithm, of course), it might look a bit like this [Source]. Such a vector comes to represent in some abstract way the "meaning" of a word. [Excerpted from The amazing power of word vectors  (local copy).]
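That “distributed representation” point can be made concrete with a few toy vectors: similarity is carried by the whole pattern of weights across dimensions, not by any single labeled dimension. A minimal sketch (the words, vectors, and implied “dimension labels” are invented for illustration):

```python
import numpy as np

# Toy 4-d "word vectors" (real word2vec vectors have ~100-1000 unlabeled dimensions)
vecs = {
    "dog":    np.array([0.9, 0.8, 0.7, 0.1]),
    "cat":    np.array([0.9, 0.9, 0.8, 0.1]),
    "tensor": np.array([0.0, 0.1, 0.0, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: direction of the whole weight pattern, not any one element."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vecs["dog"], vecs["cat"]))     # high: similar weight patterns
print(cosine(vecs["dog"], vecs["tensor"]))  # low: dissimilar patterns
```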


The following figure indicates a vectorized, multidimensional embedding on a knowledge graph node.

leskovec2017graph-slide9.png

[Image source. Click image to open in new window.]


That embedding represents a signal on a node, which is the foundational concept in graph signal processing – as discussed here and illustrated here.

Tensor-based representations conveniently allow the embedding of multiple signals on each vertex (node). A tensor is simply a mathematical object analogous to, but more general than, a vector, represented by an array of components that are functions of the coordinates of a space. Tensors are particularly useful for representing and processing multi-view / multi-layer / temporal graph embeddings – a particular interest of mine.

3d_tensor.jpg

[Representation of a 3-dimensional tensor.  Image source. Click image to open in new window.]


tresp2017learning-slide8.png

[Image source: Machine learning: generalization via tensor decomposition.  $\small \text{(s,p,o)}$: $\small \text{(subject, predicate, object)}$ triple.  That image is in turn taken from Fig. 4 [Source] in A Review of Relational Machine Learning for Knowledge Graphs: see Section IV (Latent Feature Models), pp. 6-10 for details.  Essentially, the relationships may be modeled/represented as tensors, amenable to factorization methods.  Click image to open in new window.]
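As a minimal illustration of the $\small \text{(s,p,o)}$-as-tensor idea: stack one adjacency matrix per predicate into a 3-way binary array, which is exactly the object that latent feature methods such as RESCAL (discussed in that review) factorize. The entities and predicates below are invented:

```python
import numpy as np

entities = ["glucose", "hexokinase", "glycolysis"]
predicates = ["substrate_of", "part_of"]

# X[i, j, k] = 1  iff  the triple (entities[i], predicates[j], entities[k]) holds
X = np.zeros((len(entities), len(predicates), len(entities)))
X[0, 0, 1] = 1   # (glucose, substrate_of, hexokinase)
X[1, 1, 2] = 1   # (hexokinase, part_of, glycolysis)

# Each predicate slice X[:, j, :] is an ordinary adjacency matrix,
# amenable to the factorization methods discussed in the review above.
print(X[:, 0, :])
```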


The ability to encode multidimensional feature representations with each node (and edge) is fundamentally important to graph representation learning, statistical relational learning, graph convolutional networks, graph signal processing and other aspects of graph processing, as well as to word embeddings such as word2vec and recent pretrained, multilayer language models.

When I started this blog post, I was thinking (abstractly) about possible relationships between my so-called quantum knowledge graphs and the holographic model of the universe. Again, I am wandering off into questionable gedankenerfahrung “terroir” here, but if this line of thinking is sound, it could be worthy of further consideration. When I skimmed Bekenstein’s 2003 Scientific American article, a sidebar (p. 64) suggested that universes of different dimensions (3D; 4D; 5D; …) are rendered completely equivalent by the holographic principle. Two thoughts immediately struck me.

  • A knowledge graph constructed in our bulk universe must be faithfully represented in the 2D holosphere. By analogy, any higher-dimensional knowledge graph that we construct must similarly be representable in two dimensions.

  • I was reminded generally of my summary here of hyperbolic space, and particularly of Poincaré embeddings.

    HyperE-arxiv-1804.03329.gif

    [Image source. Click image to open in new window.]


    arxiv-1705.08039.png

    [Image source. Click image to open in new window.]


    arxiv1811.01294-f1.png

    [Image source. Click image to open in new window.]


Representation Tradeoffs for Hyperbolic Embeddings (Apr 2018) [project – provides an excellent summary of hyperbolic embeddings, as well as entity embeddings with 64-bit precision] found that, given a tree, they could give a combinatorial construction that embeds the tree in hyperbolic space with arbitrarily low distortion, without using optimization. On WordNet, their combinatorial embedding obtained a mean average precision of 0.989 with only two dimensions [Source], while Nickel et al.’s recent construction (Poincaré Embeddings for Learning Hierarchical Representations) obtained 0.87 using 200 dimensions [Source].

arxiv1804.03329-t2_vs_arxiv1705.08039-t1.png

[Image sources: top (Table 2); bottom (Table 1). Click image to open in new window.]
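For reference, the distance function underlying those Poincaré embeddings has a simple closed form on the open unit ball, and it is easy to implement. A sketch (mine; inputs must have norm < 1):

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between points u, v inside the unit ball."""
    uu, vv = u @ u, v @ v
    duv = np.sum((u - v) ** 2)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv)))

# Distances blow up near the boundary of the ball -- this is what gives
# hyperbolic space its "room" to embed trees with low distortion in few dimensions.
root = np.array([0.0, 0.0])
leaf = np.array([0.95, 0.0])
print(poincare_distance(root, leaf))   # ~3.66, vs a Euclidean distance of 0.95
```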


Food for thought!! 

$\small \color{Magenta}{\text{[ … tbc …]}}$

Aside [topology].

  • This is neat: if you take a band (like a long strip of paper), it has two sides. Rotate one end 180° and tape those ends together. That strip now has one side – a Möbius strip:

    Escher-Mobius_strip-animated_ants.gif

    [Image source. Click image to open in new window.]



SOURCES AND ADDITIONAL READING

Holographic Universe

Primary Literature


  • Hyperspherical Embedding of Graphs and Networks in Communicability Spaces (Oct 2014)  [image 1  |  image 2]

      "Let $\small G$ be a simple connected graph with $\small n$ nodes and let $\small f_{α k} (A)$ be a communicability function of the adjacency matrix $\small A$ , which is expressible by the following Taylor series expansion: $\small \begin{align} \sum_{k=0}^{\infty} α_k A^k\ ènd{align}$.  We prove here that if $\small f_{α k} (A)$ is positive semidefinite then the function $\small èta_{p,q} = (f_{α k} (A)_{pp} + f_{α k} (A)_{qq} -- f_{α k} (A)_{pq})^{\frac{1}{2}}$ is a Euclidean distance between the corresponding nodes of the graph. Then, we prove that if $\small f_{α k} (A)$ is positive definite, the communicability distance induces an embedding of the graph into a hyperdimensional sphere (hypersphere) such as the distances between the nodes are given by $\small èta_{p,q}$. ..."

      "... We have proved here the existence of hyper spheres of radius $\small R$ in which the nodes of a network can be embedded, such that the distances between pairs of nodes are given by the communicability distance. ..."

  • Hyperspherical Prototype Networks (Jan 2019)  [OpenReview (earlier draft)]  [Summary]  “This paper introduces hyperspherical prototype networks, which unify regression and classification by prototypes on hyperspherical output spaces. Rather than defining prototypes as the mean output vector over training examples per class, we propose hyperspheres as output spaces to define class prototypes a priori with large margin separation. By doing so, we do not require any prototype updating, we can handle any training size, and the output dimensionality is no longer constrained to the number of classes. Furthermore, hyperspherical prototype networks generalize to regression, by optimizing outputs as an interpolation between two prototypes on the hypersphere. Since both tasks are now defined by the same loss function, they can be jointly optimized for multi-task problems. Experimental evaluation shows the benefits of hyperspherical prototype networks for classification, regression, and their combination.”

    arxiv1901.10514-f1+f2+f3+t1+t2+t3.png

    [Image source. Click image to open in new window.]


    arxiv1901.10514-f6+t4.png

    [Image source. Click image to open in new window.]


  • Hyperspherical Variational Auto-Encoders (Thomas Kipf and colleagues: 2018)  [code here (TensorFlow) and here (PyTorch);  authors’ blog post  |  image]

    svae.gif

    [From left to right, plots of learned latent space representations of the Auto-Encoder, the N-VAE, the N-VAE with a scaled down KL divergence, and the S-VAE.  Image source. Click image to open in new window.]



Other Materials

Media

Wikipedia


Consciousness/Intelligence


Knowledge Graphs

The following articles are not “holographic” in the sense described above (physics/cosmology), but nonetheless address the relationships among entities in knowledge graphs as hyperbolic embeddings.

Wikipedia

  • Bloch sphere

    • “In quantum mechanics, the Bloch sphere is a geometrical representation of the pure state space of a two-level quantum mechanical system (qubit) … For historical reasons, in optics the Bloch sphere is also known as the Poincaré sphere and specifically represents different types of polarizations.”

      Bloch_sphere.png

      [Image source. Click image to open in new window.]
  • Poincaré sphere

Miscellaneous

Topology (Dimensionality)


You made it this far – a treat for you! 

dt190303.jpg

[Image source. Click image to open in new window.]