Research Article | Open Access

Michael Miller, Daniel Tward, Alain Trouvé, "Molecular Computational Anatomy: Unifying the Particle to Tissue Continuum via Measure Representations of the Brain", *BME Frontiers*, vol. 2022, Article ID 9868673, 16 pages, 2022. https://doi.org/10.34133/2022/9868673

# Molecular Computational Anatomy: Unifying the Particle to Tissue Continuum via Measure Representations of the Brain

#### Abstract

*Objective*. The objective of this research is to unify the molecular representations of spatial transcriptomics and cellular scale histology with the tissue scales of computational anatomy for brain mapping. *Impact Statement*. We present a unified representation theory for brain mapping based on geometric varifold measures of the microscale deterministic structure and function with the statistical ensembles of the spatially aggregated tissue scales. *Introduction*. Mapping across coordinate systems in computational anatomy allows us to understand structural and functional properties of the brain at the millimeter scale. New measurement technologies in digital pathology and spatial transcriptomics allow us to measure the brain molecule by molecule and cell by cell based on protein and transcriptomic functional identity. We currently have no mathematical representations for integrating consistently the tissue limits with the molecular particle descriptions. The formalism derived here demonstrates the methodology for transitioning consistently from the molecular scale of quantized particles—using mathematical structures as first introduced by Dirac as the class of generalized functions—to the tissue scales with methods originally introduced by Euler for fluids. *Methods*. We introduce two mathematical methods based on notions of generalized functions and statistical mechanics. We use geometric varifolds, a product measure on space and function, to represent functional states at the micro-scales—electrophysiology, molecular histology—integrated with a Boltzmann-like program to pass from deterministic particle descriptions to empirical probabilities on the functional states at the tissue scales. *Results*. Our space-function varifold representation provides a recipe for traversing from molecular to tissue scales in terms of a cascade of linear space scaling composed with nonlinear functional feature mapping. 
Following the cascade implies every scale is a geometric measure so that a universal family of measure norms can be introduced which quantifies the geodesic connection between brains in the orbit independent of the probing technology, whether it be RNA identities, Tau or amyloid histology, spike trains, or dense MR imagery. *Conclusions*. We demonstrate a unified brain mapping theory for molecular and tissue scales based on geometric measure representations. We call the consistent aggregation of tissue scales from particle and cellular scales, molecular computational anatomy.

#### 1. Introduction

One of the striking aspects of the study of the brain in modern neurobiology is the fact that the distributions of discrete structures that make up physical tissue, from neural cells to synapses to genes and molecules, exist across nearly ten orders of magnitude in spatial scale. This paper focuses on the challenge of building multiscale representations that simultaneously connect the quantized nanoscales of modern molecular biology and digital pathology for characterizing neural circuit architecture in the brain with the classical continuum representations at the anatomical gross and mesoscales.

We have been highly motivated by the Cell Census Network project (BICCN [1]) which highlights the interplay between the nano- and micron scales of single-cell measures of RNA via spatial transcriptomics [2–4] coupled to the tissue scales of mouse atlases. The recent review on bridging scales from cells to physiology [5] motivates the mathematical framework presented herein. The selection of spatial transcriptomics as Nature's 2020 Method of the Year highlights the importance of such approaches for understanding the dense metric structure of the brain built up from dense imaging measurements at the cellular scales. Specifically, in our own work on digital pathology for the study of Alzheimer’s disease called the BIOCARD study [6], we are examining pathological Tau in the medial temporal lobe (MTL) at both the microhistological and macroscopic atlas scales, from 10 to 100 *μ*m [7, 8], extended to the magnetic resonance millimeter scales for examining entire circuits in the MTL. In the mouse cell census project, we are examining single-cell spatial transcriptomics using modern RNA sequencing in dense tissue at the micron scale and its representations in the Allen atlas coordinates [9].

Most noteworthy for any representation is that at the finest microscales, nothing is smooth; the distributions of cells and molecules are better described as random quantum counting processes in space [10]. In contrast, information associated to atlasing methods at the gross anatomical tissue and organ scales of computational anatomy extends smoothly [11–16]. Cross-sectionally, and even cross-species, gross anatomical labelling is largely repeatable, implying that information transfers and changes smoothly from one coordinate system to another. This is built into the representation theory of diffeomorphisms and soft matter tissue models for which advection and transport hold [17–23], principles upon which continuum mechanics and computational anatomy are based. Also of note is the fact that the brain organizes information on geometric objects, submanifolds of the brain such as the foliation of the cortex and the associated coordinates of the cortical columns. Our representations must both represent the quantum to ensemble scales and encode the global macroscopic organization of the brain.

The focus of this paper is to build a coherent representation theory across scales. For this, we view the micron to millimeter scales via the same representation theory of mathematical *geometric measures*, building the finest micron scales from discrete units termed varifold measures which represent the space and the function of molecules, synapses, and cells. The measure representation aggregates from fine to coarse scale, forming tissue. This measure representation allows us to understand subsets of tissue that contain discretely positioned and placed functional objects at the finest quantized scales, and simultaneously to pass smoothly with aggregation to the classical continuum scales at which stable functional and anatomical representations exist. Since the study of the function of the brain on its geometric submanifolds—the gyri, sulci, subnuclei, and laminae of the cortex—is so important, we extend our general framework to exploit varifold measures [24] arising in the modern discipline of geometric measure theory. Varifolds are defined as a cross-product of a measure on space with a measure on molecular function. Geometric measures are a class of generalized functions which have the basic measure property of additivity on disjoint unions of the experimental probe space and encode the complex physiological functions with the geometric properties of the submanifolds to which they are associated. To compare brains, we use diffeomorphisms as the comparator tool, with their action formulated as a 3D varifold action we call “copy and paste”: basic particle quantities that are conserved biologically are combined with greater multiplicity rather than geometrically distorted, as would be the case for measure transport.

The functional features are represented via generalized Dirac delta functions at the finest microstructure scales. The functional feature is abstracted into a function space rich enough to accommodate the molecular machinery as represented by RNA or Tau particles, as well as electrophysiology associated to spiking neurons, or at the tissue scales of medical imaging dense contrasts of magnetic resonance images (MRIs). We pass to the classical function continuum via introduction of a scale space that extends the descriptions of cortical microcircuits to the meso- and anatomical scales. This passage from the quantized features to the stochastic laws is in fact akin to the Boltzmann program transferring the view from the Newtonian particles to the stable distributions describing them. For this, we introduce a scale space of kernel density transformations which allows us to retrieve the empirical averages represented by the determinism of the stochastic law consistent with our views of the macroscopic tissue scales.

The representation provides a recipe for scale traversal in terms of a cascade of linear space scaling composed with nonlinear functional feature mapping. Following the cascade implies every scale is a varifold measure so that a universal family of varifold norms can be introduced which simultaneously measure the disparity between brains in the orbit independent of the probing technology, yielding one of the many types of data: RNA identities, Tau or amyloid histology, spike trains, or dense MR imagery.

Our multiscale brain measure model implies the existence of a sequence of measures across scales. We call this scale space of pairs—the measure representation of the brain and the associated probing measurement technologies—Brainspace. To formulate a consistent measurement and comparison technology on Brainspace, we construct a natural metric upon it allowing us to study its geometry and connectedness. The metric between brains is constructed via a Hamiltonian which defines the geodesic connections throughout scale space, providing for the first time a hierarchical representation that unifies the microscopic to millimeter representations of the brain and makes Brainspace into a metric space. Examples of representation and comparison are given for Alzheimer’s histology integrated to magnetic resonance imaging scales and for spatial transcriptomics. We call the consistent formalism presented here for aggregation of tissue scales from particle and cellular scales Molecular Computational Anatomy.

#### 2. Results

##### 2.1. Measure Model of Brain Structures

To build a coherent theory, we view the micron to anatomical scales via the same representation theory building upon discrete units termed particles or atoms. As they aggregate, they form tissues. This is depicted in Figure 1, in which (a) shows mouse imaging of CUX1 labelling of the inner layers of mouse cortex (white) and CTIP2 labelling of the outer layers (green) at 2.5 micron in-plane resolution. Notice the clearly resolved discrete cells, which aggregate into the layers of tissue that are the global macroscale features: layers 2, 3, and 4, which stain more prolifically in white, and the outer layers 5 and 6, which stain more prolifically in green.

Our representation exists simultaneously at both the microscopic and tissue millimeter scales. A key aspect of anatomy is that at the microscale, information is encoded as a massive collection of pairs $(x_i, f_i)$, where $x_i \in \mathbb{R}^d$ describes the position of a “particle” and $f_i$ is a functional state in a given set $\mathcal{F}$ attached to it. In our applications, the $f_i$ are proteins representing RNA signatures or Tau tangles, and for single-cell neurophysiology they represent the dynamics of neural spiking. At the microscale, essentially everything is deterministic, with every particle attached to its own functional state among the possible functional states in $\mathcal{F}$. But zooming out to the tissue level, say the millimeter scale, the brain appears through the statistical distribution of its constituents, with two key quantities: the local density of particles $\rho(x)$ and the conditional probability distribution $p_x(df)$ of the functional features at any location $x$. At position $x$, we no longer have a deterministic functional state but a distribution on functional states, which we represent analogously to the Boltzmann probability.

The integration of both descriptions into a common mathematical framework can be done quite naturally in the setting of mathematical measures, constructs able to represent both the discrete and continuous worlds as well as the natural levels of approximation between them.

At the finest scale, we associate to particles the elementary “Dirac” $\delta_{x,f}$ on $\mathbb{R}^d \times \mathcal{F}$, which applies to infinitesimal volumes in space and function so that it evaluates as
$$\delta_{x,f}(A \times B) = \begin{cases} 1 & \text{if } x \in A \text{ and } f \in B, \\ 0 & \text{otherwise.} \end{cases}$$
Indeed, the set of finite positive measures on $\mathbb{R}^d \times \mathcal{F}$ contains discrete measures written as
$$\mu = \sum_{i=1}^{n} w_i\, \delta_{x_i, f_i},$$
where $w_i > 0$ is a positive weight, that can encode the collection $(x_i, f_i)_{i=1}^n$ at microscale.
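
The discrete measure above can be sketched directly in code. The following is a minimal illustration (toy data; the helper `measure_eval` and its arguments are our own names, not from the paper) of evaluating $\mu(A \times B)$ for a weighted sum of Diracs, exhibiting the additivity on disjoint feature sets that defines a measure:

```python
import numpy as np

# A discrete varifold-style measure mu = sum_i w_i * delta_{(x_i, f_i)}:
# each particle carries a spatial position x_i (2D here), a feature f_i,
# and a weight w_i. Toy data; real particles would be detected cells/molecules.
rng = np.random.default_rng(0)
n = 5
x = rng.uniform(0.0, 1.0, size=(n, 2))   # positions in [0,1]^2
f = np.array([0, 1, 1, 0, 2])            # functional states (e.g., cell types)
w = np.ones(n)                           # unit weight per detected particle

def measure_eval(x, f, w, space_box, feature_set):
    """Evaluate mu(A x B): total weight of particles with x_i in box A
    and f_i in feature set B (the defining property of delta_{x,f})."""
    lo, hi = space_box
    in_space = np.all((x >= lo) & (x <= hi), axis=1)
    in_feature = np.isin(f, list(feature_set))
    return float(w[in_space & in_feature].sum())

total = measure_eval(x, f, w, (np.zeros(2), np.ones(2)), {0, 1, 2})
print(total)  # total mass = sum of weights = 5.0
```

Evaluating on the disjoint feature events $\{0\}, \{1\}, \{2\}$ and summing recovers the total mass, the additivity property the text invokes.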

As in Boltzmann modeling, we describe the features statistically at a fixed spatial scale, transferring our attention to their stochastic laws modeled as conditional probabilities $p_x(\cdot)$ on $\mathcal{F}$ with integral 1. For this, we factor the measures into the marginal space measure $\rho$ on $\mathbb{R}^d$, with $\rho(A) = \mu(A \times \mathcal{F})$, and the field of probability distributions on $\mathcal{F}$ conditioned on position. Taking subsets of space and of features as events gives the factorization with field of conditional probabilities:
$$\mu(dx, df) = \rho(dx)\, p_x(df).$$

Dense tissue is modeled as having marginal $\rho$ continuous with respect to the Lebesgue measure on $\mathbb{R}^d$:
$$\mu(dx, df) = \rho(x)\, dx\, p_x(df).$$

A fundamental link between the molecular and continuum tissues can be addressed through the law of large numbers, since if $(x_i, f_i)_{i \geq 1}$ is an independent and identically distributed sample drawn from the law $\mu / M$, where $M = \mu(\mathbb{R}^d \times \mathcal{F})$ is the total mass of such a $\mu$, then we have almost surely the weak convergence
$$\frac{M}{n} \sum_{i=1}^{n} \delta_{x_i, f_i} \;\xrightarrow[n \to \infty]{}\; \mu.$$
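
The law of large numbers link can be checked numerically. A minimal sketch, assuming a toy one-dimensional $\mu$ equal to $M$ times a standard Gaussian law: integrating the test function $h(x) = x^2$ against the renormalized empirical measure converges to $\int h \, d\mu = M$ as the sample size grows.

```python
import numpy as np

# Weak convergence of the renormalized empirical measure (law of large numbers):
# if (x_i) ~ iid mu/M, then (M/n) * sum_i h(x_i) -> integral of h d(mu).
# Toy mu = M * N(0,1) on the line; with h(x) = x^2 the limit is M * E[x^2] = 3.0.
rng = np.random.default_rng(1)
M = 3.0                       # total mass of mu
for n in (100, 100_000):
    xs = rng.normal(0.0, 1.0, size=n)
    est = (M / n) * np.sum(xs**2)
    print(n, est)             # approaches 3.0 as n grows
```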

Passing from the tissue scales to the molecular-cellular scales of Figure 1(a) behooves us to introduce a scale space so that the empirical averages which govern it are repeatable. Figure 1(b) depicts our multiscale model of a brain as a sequence of measures:
$$\mu = (\mu_\ell)_{\ell = 0, 1, \dots}.$$

Our idealization of Brainspace as a sequence of measures is depicted in Figure 1, descending from the coarse tissue scale (top of (b)) to the finest particle representation (bottom of (b)), with color representing function and radius representing space scale. Throughout, the scale index is denoted $\ell$, shorthand for the range of scales $\ell = 0, 1, \dots, L$.

##### 2.2. Nonlinear Transformation Model for Crossing Scales

The brain, being a multiscale collection of measures, requires us to be able to transform from one scale to another. We do this by associating to each scale transformation a transition kernel acting on the measure at that scale. The transition kernels carry resolution scales, or reciprocally, bandwidths, analogous to Planck’s scale.

We introduce the abstract representation of our system as a collection of descriptive elements made from spatial and functional features. We transform our mathematical measure $\mu$ on $\mathbb{R}^d \times \mathcal{F}$, generating new measures $\mu'$ on $\mathbb{R}^{d'} \times \mathcal{F}'$, by defining correspondences via transition kernels $k((x, f), \cdot)$, with the kernel acting on the measures transforming as
$$\mu'(dy, dg) = \int_{\mathbb{R}^d \times \mathcal{F}} k\big((x, f), (dy, dg)\big)\, \mu(dx, df).$$

This implies the particles transform as $\delta_{x_i, f_i} \mapsto k((x_i, f_i), \cdot)$.

Figure 1(c) shows the cascade of operations: the first transforms linearly in space on $\mathbb{R}^d$ according to a resampling kernel $\pi(x, \cdot)$, and the second transforms nonlinearly on the features in $\mathcal{F}$, smoothing the conditional distribution on features at the given scale.

Smooth space resampling projects the particles to the continuum using a smooth resampling process defined by $\pi(x_i, \cdot)$, the fraction particle $i$ transfers to each location, giving
$$\mu'(dy, df) = \sum_i w_i\, \pi(x_i, dy)\, \delta_{f_i}(df).$$

Notice that the resampling fractions sum to one, $\int_{\mathbb{R}^d} \pi(x_i, dy) = 1$, so the total mass is conserved. Feature reduction uses maps $\phi : \mathcal{F} \to \mathcal{F}'$ from machine learning, transforming the features as $f \mapsto \phi(f)$.

For computing, we resample to the computational lattices, interpolating from the continuum to the lattice centers $y_j$; defining $\pi(x_i, y_j)$, the fraction particle $i$ assigns to center $y_j$, gives the transition kernel and transformed measure:
$$\mu'(dy, df) = \sum_j \delta_{y_j}(dy) \sum_i w_i\, \pi(x_i, y_j)\, \delta_{f_i}(df).$$
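
The lattice resampling step can be sketched concretely. A minimal example, assuming a Gaussian choice of $\pi$ normalized over lattice sites (the function `resample_to_lattice` and the `scale` parameter are our own illustrative names): particle weights are reapportioned to lattice centers while the total mass is preserved.

```python
import numpy as np

# Space resampling to a computational lattice: pi(x_i, y_j) is the fraction of
# particle i reapportioned to lattice site y_j, normalized so each particle's
# fractions sum to 1 (mass preservation). Gaussian width `scale` is a choice.
def resample_to_lattice(x, w, lattice, scale):
    d2 = (x[:, None] - lattice[None, :])**2          # squared distances, (n, m)
    pi = np.exp(-d2 / (2.0 * scale**2))
    pi /= pi.sum(axis=1, keepdims=True)              # sum_j pi(x_i, y_j) = 1
    return pi.T @ w                                  # lattice mass sum_i w_i pi(x_i, y_j)

x = np.array([0.1, 0.45, 0.9])                       # particle positions (1D toy)
w = np.array([1.0, 2.0, 1.0])                        # particle weights
lattice = np.linspace(0.0, 1.0, 11)                  # lattice centers y_j
wl = resample_to_lattice(x, w, lattice, scale=0.05)
print(wl.sum())                                      # total mass preserved: 4.0
```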

##### 2.3. Dynamical Systems Model via Varifold Action of Multiscale Diffeomorphisms

We want to measure and cluster brains by building a metric space structure. We do this by following the original program of D’Arcy Thompson of building bijective correspondences. In this setting, this must be done at every scale, with each scale having different numbers of particles and resolutions. We build correspondences between sample brains via dense connections of the discrete particles to the continuum at all scales, using the diffeomorphism group and diffeomorphic transport. For this, define the group of $k$-times continuously differentiable diffeomorphisms $\varphi$ with group operation function composition $\varphi \circ \varphi'$. For any particle brain $\mu = \sum_i w_i \delta_{x_i, f_i}$, the diffeomorphisms act
$$\varphi \cdot \mu = \sum_i w_i \left| d_{x_i}\varphi \right| \delta_{\varphi(x_i), f_i}.$$

The tissue has classical density $\rho(x)$ with feature field $p_x$ indexed over space, with action
$$\varphi \cdot \mu(dx, df) = \rho \circ \varphi^{-1}(x)\, dx\, p_{\varphi^{-1}(x)}(df).$$

Space scales are represented with the group product $\varphi = (\varphi_\ell)_\ell$, acting component-wise with action
$$\varphi \cdot \mu = (\varphi_\ell \cdot \mu_\ell)_\ell.$$

We call the Jacobian term $|d_x \varphi|$ in the action the “copy and paste” varifold action. It enables the crucial property that when a tissue is extended to a larger area, the total number of its basic constituents increases accordingly, with the total integral not conserved, in contrast to classic measure or probability transport.

Dynamics occurs by generating the diffeomorphisms as flows $t \mapsto \varphi_{\ell, t}$, with the dynamics controlled by vector fields $v_\ell$ via the ordinary differential equation at each scale satisfying
$$\dot{\varphi}_{\ell, t} = v_{\ell, t} \circ \varphi_{\ell, t}, \qquad \varphi_{\ell, 0} = \mathrm{id}.$$

The controls are coupled by successive refinements, $v_\ell = v_{\ell - 1} + u_\ell$, with $v_0 = u_0$.

To control the smoothness of the maps, we force the vector fields to be elements of reproducing kernel Hilbert spaces (RKHS’s) $V_\ell$, with norms $\|\cdot\|_{V_\ell}$ and multiscale space $V = \prod_\ell V_\ell$. Each RKHS is taken to have a diagonal kernel $K_\ell(x, y) = g_\ell(x, y)\,\mathrm{id}$, with $g_\ell$ the Green’s function and $\mathrm{id}$ the identity matrix; see [25] for nondiagonal kernels. Geodesic mapping flows under a control process along paths of minimum energy respecting the boundary conditions. Figure 2 shows the multiscale control hierarchy.
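
The flow ODE above can be sketched numerically. A minimal single-scale illustration, assuming a hypothetical stationary rotation field in place of the optimally controlled $v_t$ (the names `flow_points` and `v` are ours): forward Euler integration of $\dot{\varphi}_t = v(\varphi_t)$ applied to particle positions.

```python
import numpy as np

# Euler integration of the flow ODE  d/dt phi_t(x) = v(phi_t(x)), phi_0 = id,
# applied to particle positions. The velocity field here is a toy stationary
# rotation; in LDDMM v_t would come from the optimal control at each scale.
def flow_points(x0, v, n_steps=200, T=1.0):
    dt = T / n_steps
    x = x0.copy()
    for _ in range(n_steps):
        x = x + dt * v(x)          # forward Euler step of the ODE
    return x

def v(x):                          # rigid rotation field v(x) = A x, A skew
    return x @ np.array([[0.0, -1.0], [1.0, 0.0]]).T

x0 = np.array([[1.0, 0.0]])
x1 = flow_points(x0, v)
print(x1)                          # ~ (cos 1, sin 1): rotation by 1 radian
```

The integrated point lands near $(\cos 1, \sin 1)$, the exact flow of this field, illustrating that the diffeomorphism is generated by integrating the velocity in time.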

The multiscale dynamical controls are written $u = (u_\ell)_\ell$, with the measures the observers of the flow states obeying the dynamics equation $\dot{\varphi}_{\ell, t} = v_{\ell, t} \circ \varphi_{\ell, t}$, $v_\ell = \sum_{\ell' \leq \ell} u_{\ell'}$.

Dynamics translates into a navigation in the orbit of brains and provides a metric distance between brains. Paths of minimum energy connecting the identity to any fixed boundary condition (BC) $\varphi_1$, where $\varphi_1 \cdot \mu$ is accessible, define the distance, extending LDDMM [26] to a hierarchy of diffeomorphisms, and are geodesics for an associated Riemannian metric [25].

The metric from $\mu$ to $\nu$ in the orbit accessible from $\mu$ via diffeomorphisms is the length of the shortest geodesic path with BCs $\varphi_0 = \mathrm{id}$ and $\varphi_1 \cdot \mu = \nu$. This extension to multiscale LDDMM, equation (34), is given in Section 4.3, where we discuss the smoothness required for the geodesics to define a metric and specify the optimal control problem in the state equation (36).

##### 2.4. Geodesic Brain Mapping via the Varifold Measure Norm

The BC matching brains is defined using measure norms, with zero norm difference meaning the brains are equal and small norm difference meaning the brains are similar; for particle brains, $\mu = \sum_i w_i \delta_{x_i, f_i}$; for tissue, $\mu(dx, df) = \rho(x)\,dx\,p_x(df)$. Geodesic mapping controls the flow to minimize the energy while simultaneously minimizing the norm distance to the target. Every brain has a variable number of particles with no correspondence between particles; varifold measure norms accommodate these variabilities. The varifold norm is constructed by modeling the particles as elements of the dual space $W^*$ of an RKHS $W$, associated with the isometry given by the kernel $K_W$ defining the inner product. We introduce the dual bracket notation $\langle \mu, h \rangle = \int h\, d\mu$ for $h \in W$ and *μ* a measure. Then, we have
$$\|\mu\|_{W^*}^2 = \int\!\!\int K_W\big((x, f), (x', f')\big)\, \mu(dx, df)\, \mu(dx', df').$$

The norm-square for particle and tissue measures reduces to
$$\|\mu\|_{W^*}^2 = \sum_{i, j} w_i w_j\, K_W\big((x_i, f_i), (x_j, f_j)\big)$$
for particles, and for tissue
$$\|\mu\|_{W^*}^2 = \int\!\!\int K_W\big((x, f), (x', f')\big)\, \rho(x)\, \rho(x')\, p_x(df)\, p_{x'}(df')\, dx\, dx'.$$

The hierarchical norms across the scales become $\|\mu\|^2 = \sum_\ell \|\mu_\ell\|_{W_\ell^*}^2$.
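
A direct computation of the particle norm-square can be sketched as a double sum over Gram matrices. This is a minimal sketch assuming a separable Gaussian kernel with hypothetical widths `s` (space) and `t` (feature); no particle correspondence between the two measures is needed, which is the point of the varifold norm.

```python
import numpy as np

# Varifold norm-square of the difference of two particle measures, using a
# separable kernel K((x,f),(x',f')) = exp(-|x-x'|^2/2s^2) * exp(-|f-f'|^2/2t^2).
# |mu - nu|^2 = <mu,mu> - 2<mu,nu> + <nu,nu>, each a weighted double sum.
def varifold_norm2(xa, fa, wa, xb, fb, wb, s=0.1, t=1.0):
    def gram(x1, f1, x2, f2):
        dx2 = ((x1[:, None] - x2[None, :])**2).sum(-1)
        df2 = (f1[:, None] - f2[None, :])**2
        return np.exp(-dx2 / (2*s*s)) * np.exp(-df2 / (2*t*t))
    aa = wa @ gram(xa, fa, xa, fa) @ wa
    bb = wb @ gram(xb, fb, xb, fb) @ wb
    ab = wa @ gram(xa, fa, xb, fb) @ wb
    return aa - 2.0*ab + bb

x = np.array([[0.0, 0.0], [0.5, 0.5]]); f = np.array([1.0, 2.0]); w = np.ones(2)
print(varifold_norm2(x, f, w, x, f, w))   # identical measures: 0.0
```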

The optimal control is square-integrable in the $V_\ell$-norms for particles and tissue, satisfying for $t \in [0, 1]$ the finite-energy condition $\int_0^1 \sum_\ell \|u_{\ell, t}\|_{V_\ell}^2\, dt < \infty$.

The control flows the measures, with the state processes $t \mapsto \varphi_{\ell, t} \cdot \mu_\ell$ and endpoint $\varphi_{\ell, 1} \cdot \mu_\ell$.

The endpoint matching cost is modeled as continuously differentiable in the states.

Hamiltonian control of particles reparameterizes (16) in the momentum “costates” $p_\ell$, with the momentum at each scale a function of the same-scale dynamics (the $t$ index suppressed). The optimal control averages the Green’s functions (13) across the scales, with $\nabla_1 g_\ell$ denoting the gradient on the first variable.

Control of tissue reparameterizes (16) in the density $\rho$; if $\mu$ is absolutely continuous, it remains so for $t \in [0, 1]$.

The endpoint momentum for particles and dense tissue is given by the variation of the norm-square match, determined by the kernel smoothing of the difference between the mapped template and target measures, which defines the endpoint momentum.

The RKHS kernel defined in Section 4.2, equation (29), is a separable Gaussian in space and function. The optimal control at any scale “averages” all the particle/tissue data across scales. Section 4.4 establishes the smoothness for the Hamiltonian equations. Section 4.5 establishes the variation and smoothness for the norm gradients.

We emphasize that the varifold action gives the continuum problem unifying with image-based LDDMM [26] such as studied by the MRI community. Taking $\mu(dx, df) = dx\, \delta_{I(x)}(df)$ with $I$ the image, the action becomes
$$\varphi \cdot \mu(dx, df) = dx\, \delta_{I \circ \varphi^{-1}(x)}(df),$$
the classical image action $I \mapsto I \circ \varphi^{-1}$.

##### 2.5. MRI and Digital Pathology for Tau Histology in Alzheimer’s

###### 2.5.1. Bayes Segmentation of MRI

Figure 3 shows the multiscale data from the clinical BIOCARD study [6] of Alzheimer’s disease within the medial temporal lobe [7, 8, 27]. Figure 3(a) shows the clinical magnetic resonance imaging (MRI), with the high-field 200 *μ*m MRI scale shown in (b) depicting the medial temporal lobe, including the collateral sulcus and the lateral bank of the entorhinal cortex. Bayes classifiers for brain parcellation perform feature reduction as a key step for segmentation at the tissue scales [28]. Feature reduction maps the distribution on gray levels to probabilities on tissue types, defined by integration over the decision regions $D_t$:
$$p_x(t) = \int_{D_t} p_x(df), \qquad t \in \{\text{gray, white, CSF}\}.$$

Figure 3(b) depicts a Bayes classifier for gray, white, and cerebrospinal fluid compartments generated from the temporal lobe high-field MRI section corresponding to the Mai-Paxinos section (panel 3, top row).
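
The gray-level feature reduction can be sketched as follows. This is a minimal toy illustration (the function `tissue_probs` and the threshold values are hypothetical, not the study's trained classifier): decision regions are gray-level intervals, and the tissue-type probability is the mass of the local gray-level distribution falling in each region.

```python
import numpy as np

# Feature reduction for Bayes segmentation: map a local distribution on gray
# levels to probabilities of tissue types by integrating it over the decision
# regions D_t of a classifier. Thresholds here are illustrative only.
def tissue_probs(gray_samples, thresholds=(85.0, 170.0)):
    """Decision regions for CSF / gray / white as gray-level intervals;
    returns P(tissue t) = fraction of local samples falling in D_t."""
    g = np.asarray(gray_samples, dtype=float)
    csf = np.mean(g < thresholds[0])
    gray = np.mean((g >= thresholds[0]) & (g < thresholds[1]))
    white = np.mean(g >= thresholds[1])
    return np.array([csf, gray, white])

p = tissue_probs([10, 40, 120, 130, 200, 240])
print(p)          # [1/3, 1/3, 1/3]; probabilities sum to 1
```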

###### 2.5.2. Gaussian Scale-Space Resampling of Tau Histology

For histology at the molecular scales, the measure $\mu = \sum_i \delta_{x_i} \otimes \delta_{s_i}$ encodes the detected tau and amyloid particles at fine-scale positions $x_i$, with the function the particle size $s_i$. Figure 3(c) shows the detected tau particles as red dots at 4 *μ*m. We use computational lattices to interpolate between the scales, reapportioning to the lattice centers $y_j$ via Gaussian resampling $\pi(x_i, y_j)$. Feature reduction to the tissue scales maps to the first two moments (Figure 3(d)), the mean and variance of the particle size:
$$\bar{s}(y_j) = \frac{\sum_i \pi(x_i, y_j)\, s_i}{\sum_i \pi(x_i, y_j)}, \qquad \sigma^2(y_j) = \frac{\sum_i \pi(x_i, y_j)\, s_i^2}{\sum_i \pi(x_i, y_j)} - \bar{s}(y_j)^2.$$

The millimeter tissue scale depicts the global folding property of the tissue. The color codes the mean tau particle area as a function of position at the tissue scales with deep red denoting 80 *μ*m^{2} maximum tau area for the detected particles.
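
The moment feature reduction above can be sketched in a few lines. A minimal 1D toy (the helper `size_moments` and the data values are ours): Gaussian weights reapportion the particle sizes to each lattice site, and the per-site weighted mean and variance form the tissue-scale features.

```python
import numpy as np

# First two moments of particle size under Gaussian lattice resampling: at each
# lattice center y_j, the mean and variance of sizes s_i weighted by the
# normalized transition fractions pi(x_i, y_j). Toy 1D positions and sizes.
def size_moments(x, s, lattice, scale):
    pi = np.exp(-(x[:, None] - lattice[None, :])**2 / (2*scale**2))
    pi /= pi.sum(axis=0, keepdims=True)       # normalize per lattice site
    mean = pi.T @ s
    var = pi.T @ (s**2) - mean**2
    return mean, var

x = np.array([0.0, 0.1, 0.9, 1.0])            # detected particle positions
s = np.array([10.0, 20.0, 60.0, 80.0])        # particle areas (e.g., um^2)
lattice = np.array([0.05, 0.95])              # coarse tissue-scale lattice
mean, var = size_moments(x, s, lattice, scale=0.1)
print(mean)   # ~[15, 70]: left sites average small tau, right sites large
```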

##### 2.6. Cellular Neurophysiology: Neural Network Temporal Models

Single-unit neurophysiology uses temporal models of spiking neurons, with a “neural network” taking each neuron $n$ modeled as a counting measure in time with the spike times $t_j^n$ the features:
$$\mu_n = \sum_j \delta_{t_j^n}.$$

The Poisson model with intensity $\lambda(t)$ [10] has probabilities
$$P\big(N[0, T] = k\big) = e^{-\int_0^T \lambda(t)\, dt}\, \frac{\big(\int_0^T \lambda(t)\, dt\big)^k}{k!}.$$

Post-stimulus time (PST) [29] and interval histograms are used to examine the instantaneous discharge rates and the interspike interval statistics [30]. The interval histogram abandons the requirement of maintaining the absolute phase of the signal for measuring temporal periodicity and phase locking. Synchrony in the PST is measured by binning the spike times into intervals $[t_i, t_i + \Delta)$ and taking Fourier transforms of the binned counts.

The zero frequency computes the integrated rate; each phase-locked feature is a complex number $F_k = |F_k| e^{i\theta_k}$.
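
The PST-and-Fourier pipeline can be sketched with synthetic spikes. This is a minimal illustration with hypothetical data: spike times phase-locked to a 10 Hz stimulus are binned into a PST histogram, and the Fourier transform shows a strong complex component at the stimulus frequency.

```python
import numpy as np

# Post-stimulus time (PST) histogram and its Fourier transform: bin spike times
# across stimulus repetitions, then read off synchrony at the stimulus frequency.
# Synthetic spikes, phase-locked to a 10 Hz stimulus with 2 ms jitter.
rng = np.random.default_rng(2)
period, n_trials = 0.1, 200                    # 10 Hz stimulus, 200 repetitions
spikes = np.concatenate([k*period + 0.02 + 0.002*rng.standard_normal(n_trials)
                         for k in range(10)])  # one jittered spike per cycle

bins = np.arange(0.0, 1.0 + 1e-9, 0.005)       # 5 ms PST bins over 1 s
pst, _ = np.histogram(spikes, bins=bins)
spectrum = np.fft.rfft(pst)                    # complex features |F_k| e^{i theta_k}
freqs = np.fft.rfftfreq(len(pst), d=0.005)
k10 = np.argmin(np.abs(freqs - 10.0))
print(freqs[k10], np.abs(spectrum[k10]))       # strong component at 10 Hz
```

The zero-frequency bin of `spectrum` equals the total spike count (the integrated rate), matching the statement in the text.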

##### 2.7. Scale-Space Resampling of RNA to Cell and Tissue Scales

Methods in spatial transcriptomics which have emerged for localizing and identifying cell types via marker genes and across different cellular resolutions [4, 31–35] present the opportunity of localizing in spatial coordinates the transcriptionally distinct cell types. Depicted in Figure 4 are the molecular measurements at the micron scales with MERFISH [34] at three different scales.

The molecular measures represent the RNA locations $x_i$ with sparse RNA features $g_i$, the gene identity. Crossing to cells resamples the RNA to the cell centers, partitioning the RNA into the closest subsets as defined by the distance of particle $x_i$ to cell center $c_j$, accumulating the mixtures of RNA within the closest cell. The cell-scale feature is the conditional probability over the 17 cell types within each cell.

For this example, the conditional probabilities on the RNA feature vector were found using principal components followed by Gaussian mixture modeling, following [34].

Resampling to the tissue lattice uses Gaussian resampling, with the new feature vector the probability of the cell at any position being one of 10 tissue types. The probability of each tissue type is calculated using 10-means clustering on the cell-type probabilities. The distance for 10-means clustering is computed using the Fisher-Rao metric [36] between the feature laws, partitioning the cell-type feature space into 10 regions that give the probability features.
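
The RNA-to-cell crossing can be sketched concretely. A minimal toy (the helper `rna_to_cells`, two gene species, and two cells are our own illustrative setup, not the 17-type MERFISH pipeline): each RNA read is assigned to its nearest cell center, and per-cell counts are normalized into a conditional probability over species.

```python
import numpy as np

# Crossing from RNA particles to cells: each RNA read is assigned to the
# nearest cell center; per-cell species counts are normalized into a
# conditional probability over species (a stand-in for the cell-type feature).
def rna_to_cells(rna_xy, rna_gene, centers, n_genes):
    d2 = ((rna_xy[:, None, :] - centers[None, :, :])**2).sum(-1)
    nearest = d2.argmin(axis=1)                  # partition by closest cell
    counts = np.zeros((len(centers), n_genes))
    np.add.at(counts, (nearest, rna_gene), 1.0)  # accumulate species per cell
    return counts / counts.sum(axis=1, keepdims=True)

rna_xy = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
rna_gene = np.array([0, 0, 1, 1, 0])             # species index per read
centers = np.array([[0.0, 0.0], [1.0, 1.0]])     # segmented cell centers
p = rna_to_cells(rna_xy, rna_gene, centers, n_genes=2)
print(p)   # cell 0: all species 0; cell 1: 1/3 species 0, 2/3 species 1
```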

Figures 4(f)–4(h) show the RNA depicted as colored markers corresponding to the different gene species (scale bars 1 and 10 microns). Figure 4(g) shows the feature space of 17 cell types, each associated to the maximal probability in the PCA projection from a classifier based on the mixtures of RNA at each cell location. Figure 4(h) shows the 10 tissue features of the 10-means procedure. At both scales, the probabilities are concentrated on single classes via indicator functions computed on the conditional probabilities.

##### 2.8. Geodesic Mapping for Spatial Transcriptomics, Histology, and MRI

###### 2.8.1. Navigation between Sections of Cell Identity in Spatial Transcriptomics

Figures 5(a) and 5(b) show sections from [4] depicting neuronal cell types via colors, including excitatory cells eL2/3 (yellow), eL4 (orange), and eL5 (red) and inhibitory cells SST (green) and VIP (light blue), each classified via high-dimensional gene expression feature vectors via spatial transcriptomics.

The measure on cell types crosses to the atlas tissue scales using the resampling kernel $\pi$ of equation (27a), with feature reduction computing expectations of the moments of the cell-type probabilities.

Figures 5(b) and 5(d) show the tissue-scale features associated to the cell type and the entropy. They show the results of transforming the neuronal cells, depicting the cell-type feature (b) and the entropy feature (d). The entropy is a measure of dispersion across the cell types, given by the expectation of the negative log probability function,
$$H(x) = -\sum_t p_x(t) \log p_x(t),$$
with zero entropy meaning the feature distribution at a spatial location has all of its mass on one cell type. Geodesic mapping enforces vector field smoothness via differential operators specifying the norms in the RKHS.
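
The entropy feature is a one-liner. A minimal sketch (the small `eps` regularizer is our own numerical convenience for the $0 \log 0$ convention): zero for a distribution concentrated on one cell type, maximal ($\log K$) for a uniform spread over $K$ types.

```python
import numpy as np

# Entropy of the cell-type distribution at a spatial location: zero when all
# mass sits on one type, maximal (log K) for a uniform spread across K types.
def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())   # eps handles 0*log(0) = 0

print(entropy([1.0, 0.0, 0.0]))                  # ~0: a single cell type
print(entropy([0.25, 0.25, 0.25, 0.25]))         # log 4 ~ 1.386: max dispersion
```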

###### 2.8.2. Navigation between Sections of Histology

Figures 6(a)–6(h) show navigation between the cortical folds of the 4 *μ*m histology. Shown in (a) is a section with the machine learning detection of the Tau particles. Figures 6(b)–6(d) and 6(f)–6(h) depict the template, mapped template, and target, showing the mathematical measure representation of the perirhinal cortex constructed from the positions and sizes at the 4 *μ*m scale (b–d) and the reconstruction using Gaussian resampling onto the tissue scale (f–h). The color codes the mean tau particle area as a function of position at the tissue scales, with deep red denoting the 80 *μ*m^{2} maximum. The gradients in tau tangle area between the superficial and deep layers are apparent, with deep red as high as 80 *μ*m^{2} for the innermost tissue fold. Panel (e) shows the vector field encoding of the geodesic transformation between the sections, depicted by the bottom-row transformation of the grids. The narrowing of the banks of the perirhinal cortex is exhibited at the tissue scale for motions on the order of 1000 *μ*m (brightness on scale bar).

Figure 6(i) shows the collateral sulcus fold at the boundary of the transentorhinal cortex region, transforming based on the normed distances between sections with deformation motions on the order of 1000 *μ*m in size. Shown is the micron scale depicting the transformation of the gyrus, with the color representing the entropy of the particle identity distribution.

###### 2.8.3. Mapping Digital Pathology from Histology to MRI Scales

All of the examples thus far have created the multiscale data using the resampling kernels from the finest scales. As illustrated in our earlier figures, much of the data is inherently multiscale, with the measurement technologies generating the coarse-scale representations. Shown in Figure 7 is data illustrating our Alzheimer’s study of postmortem MR images that are collected simultaneously with amyloid and tau pathology sections. The MR images have a resolution of approximately 100 *μ*m, while the pathology images have a resolution of approximately 1 *μ*m. For computational purposes, the MRI template and target images were downsampled to 759 and 693 particles, respectively, with the tau tangles downsampled to 1038 and 1028 particles, respectively. We treated every pixel in the MR image as a coarse-scale particle with the image intensity as its feature value (equation (21)) and every detected tau tangle as a fine-scale particle with a constant feature value, and we performed varifold matching to align to the neighboring sections. The endpoint condition sums the norms representing the two scales. For each scale norm, we use a varifold kernel given by products of Gaussian distributions with the varifold measure norm of equations (29) and (30) at each scale. For the MRI scale, the weights are identical, with the function component given by the MRI image value; for the tau particles, there is no function component, making the function kernel of equation (15) identically 1 for all function values in the varifold norm.

Figures 7(a)–7(f) show the imaging data for both sections. Figures 7(g)–7(i) show the transformed template image at the fine scale. The high-resolution mapping carries the kernels across all the scales as indicated by the geodesic equation for the control (18a). Notice the global motions of the high resolution of the fine particles.

#### 3. Discussion

Computational anatomy was originally formulated as a mathematical orbit model for representing medical images at the tissue scales. The orbit model generalizes linear algebra to the group action on images by the diffeomorphism group. The orbit inherits a metric structure from the group of diffeomorphisms. The formulation relies on principles of continuity of medical images as classical functions, generalizing optical flow and advection of material to diffeomorphic flow of material, the material represented by the contrast seen in the medical imaging modality, such as BOLD MRI contrast for gray matter or fiber orientation for diffusion tensor imaging. Unifying this representation with images built at the particle and molecular biological scale has required us to move away from classical functions to the more modern 20th century theory of nonclassical generalized functions. Mathematical measures are the proper representation as they generally reflect the property that probes from molecular biology associated to disjoint sets are additive, the basic starting point of measure theory. Changing the model from a focus on groups acting on functions to groups acting on measures allows for a unified representation that has both a metric structure at the finest scales and a unification with the tissue imaging scales.

The brain measure formulation carries with it implicitly the notion of scale space, i.e., the existence of a sequence of pairs across scales, the measure representation of the brain, and the associated scale space reproducing kernel Hilbert space of functions which correspond to the probing measurement technologies. As such, part of the prescription of the theory is a method for crossing scales and carrying information from one scale to the other. Important to this approach is that at every scale we generate a new measure; therefore, the recipe of introducing “measure norms” built from RKHS’s for measuring brain disparity is universal across the hierarchy allowing us to work simultaneously with common data structures and a common formalism. Interestingly, the measure norms do not require identical particle numbers across brains in brain space at the molecular scales.

The key modeling element of brain function is that the conditional feature probability is manipulated from the quantized features to the stochastic laws. These are the analogues of the Boltzmann distributions generalized to the complex feature spaces representing function. As they correspond to arbitrary feature spaces not necessarily Newtonian particles, we represent them simply as empirical distributions on the feature space, with the empirical measure constructed from the collapse of the fine scale to the resampled coarse scale. To model rescaling through scale space explicitly, the two kernel transformations are used allowing us to retrieve the empirical averages represented by the determinism of the stochastic law consistent with our views of the macro tissue scales. This solves the dilemma that for the quantized atomic and microscales, cell occurrence will never repeat; i.e., there is zero probability of finding a particular cell at a particular location and conditioned on finding it once it will never be found again in the exact same location in another preparation. The properties that are stable are the probability laws with associated statistics that may transfer across organisms and species.

Importantly, our introduction of the additional term in the action enables the crucial property that when a tissue is extended to a larger area, the total number of its basic constituents should increase accordingly rather than be conserved. This is not traditional measure transport, which is mass preserving, a property that is not desirable for biological samples. Rather, we have defined a new action on measures that is reminiscent of the action on $d$-dimensional varifolds [37, 38]. We call this property “copy and paste,” the notion being that the brain is built on basic structuring elements that are conserved in its design.

We believe that many different diffeomorphism methods can be used at multiple scales. The proper coupling of the vector fields would have to be derived to determine how the different scales mix, as we have done here for the multiscale LDDMM formulation based on the total kinetic energy. Successive refinement in the small-deformation setting has also been introduced in many areas associated with multigrid methods and basis expansions. The notion of building multiscale representations in the large-deformation LDDMM setting was originally explored by Risser et al. [39], in which the kernel is represented as a sum of kernels, and by Sommer et al. [40], in which the kernel is represented via vector bundles. In their multiscale setting, there is a post-optimization decomposition of the velocity field into its different components, each of which can then be integrated. In that setting, the basic Euler-Lagrange equation, termed EPDIFF, remains that of LDDMM [41]. In the setting proposed here, we separate the scales before optimization via the hierarchy of layered diffeomorphisms and use a multiscale representation of the brain hierarchy itself, directly associated with the diffeomorphism at each scale. This gives the fundamental setting of the product group of diffeomorphisms, with the Euler-Lagrange equation corresponding to the sequence of layered diffeomorphisms of multiscale LDDMM [25].
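The sum-of-kernels construction of Risser et al. [39] can be sketched for landmark-parameterized velocity fields. In the sketch below, the Gaussian kernels, bandwidths, and helper names are illustrative assumptions:

```python
import numpy as np

def gauss(x, y, sigma):
    # Scalar Gaussian kernel matrix between point sets x (n, d) and y (m, d)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def velocity_sum_of_kernels(x, p, sigmas):
    """Velocity v(x_i) = sum_s K_{sigma_s}(x_i, x_j) p_j: a single velocity
    field smoothed by a sum of Gaussian kernels, one bandwidth per scale.
    Illustrative sketch of the sum-of-kernels idea, not the paper's code."""
    return sum(gauss(x, x, s) @ p for s in sigmas)
```

In contrast, the layered formulation of this paper keeps one velocity field per scale before optimization, rather than decomposing a single summed field afterward.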

In terms of the efficiency of the multiscale representation, Figure 6 clearly shows the power of the multiscale diffeomorphism in transferring microscopic and macroscopic scale properties of the brain, as well as the power of crossing scales, transferring information consistently from particles to the continuum. What is striking in Figure 6 is that the deformations of the particles at the micron scale result in consistent motions of the cortical surface as a smooth global manifold. Also, the functional feature, transferred consistently from the particles via the composition of transformations of equation (8), shows the clear pattern of functional tau particle size being layered, a property hardly noticeable at the molecular scale but clearly associated with the tissue scale.

The aggregation across scales from particle to tissue scales on lattices provides the essential link to inference on graphs. It is natural for the aggregated features on lattices, with associated conditional probability laws, to become the nodes in Markov random field modeling for spatial inference (see examples in spatial transcriptomics and tissue segmentation [42]). Building neighborhood relations as conditional probabilities between lattice sites, from which global probability laws are constructed via the Hammersley-Clifford theorem, links us to Grenander’s metric pattern theory formalisms, with the atoms and conditional laws at any scale playing the role of the generators.
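A minimal illustration of local conditional laws assembling into a global Gibbs law, here with a simple Potts-model Gibbs sweep on a 2D lattice; the Potts energy, `beta`, and all names are hypothetical stand-ins, not the paper's construction:

```python
import numpy as np

def gibbs_sweep(labels, beta, n_states, rng):
    """One Gibbs sweep of a Potts MRF on a 2D lattice: each site's
    conditional law depends only on its 4-neighbours (local cliques), and
    Hammersley-Clifford guarantees a consistent global Gibbs law.
    Illustrative sketch under assumed Potts energy."""
    H, W = labels.shape
    for i in range(H):
        for j in range(W):
            nbrs = [labels[i2, j2]
                    for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= i2 < H and 0 <= j2 < W]
            # energy favours agreement with neighbouring labels
            logp = np.array([beta * sum(n == s for n in nbrs)
                             for s in range(n_states)])
            p = np.exp(logp - logp.max())
            p /= p.sum()
            labels[i, j] = rng.choice(n_states, p=p)
    return labels
```

Here the aggregated lattice labels play the role of Grenander's generators, and the neighborhood conditionals are the bonds.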

#### 4. Materials and Methods

##### 4.1. Experimental and Technical Design

The objective of this research is to unify the molecular representations of spatial transcriptomics and cellular scale histology with the tissue scales of computational anatomy for brain mapping. To accomplish this, we designed a mathematical framework for representing data at multiple scales using geometric measures as generalized functions and mapping data using geodesic flows of multiscale diffeomorphisms. We illustrate the method using several examples from human MRI and digital pathology, as well as mouse spatial transcriptomics.

##### 4.2. Gaussian Kernel Varifold Norm

Our varifold norm construction models the brain measures as elements of a Hilbert space $W^*$, dual to an RKHS $W$ with kernel $K_W$. Using the dual bracket notation $\langle \mu, h \rangle = \int h \, d\mu$ for $h \in W$ and $\mu$ a measure, the norm becomes integration against the kernel of equation (15b), written $\|\mu\|_{W^*}^2 = \iint K_W((x,f),(x',f')) \, d\mu(x,f) \, d\mu(x',f')$; the multiscale norm is given by summing the norm-squares across scales, $\sum_\ell \|\mu^\ell\|_{W_\ell^*}^2$.

To ensure the brain measures are elements of $W^*$, dual to the RKHS $W$, the kernel is chosen so that $W$ embeds densely and continuously in the space of bounded continuous functions, so that the signed measure spaces are continuously embedded in $W^*$. The Gaussian kernel satisfies this condition; we take the kernel as non-normalized, separable Gaussians with Euclidean distance:
$$K_W\big((x,f),(x',f')\big) = \exp\Big(-\frac{\|x-x'\|^2}{2\sigma_x^2}\Big)\exp\Big(-\frac{\|f-f'\|^2}{2\sigma_f^2}\Big).$$

Measures $\mu = \sum_i w_i \delta_{(x_i,f_i)}$ have norm-square
$$\|\mu\|_{W^*}^2 = \sum_{i,j} w_i w_j \exp\Big(-\frac{\|x_i-x_j\|^2}{2\sigma_x^2}\Big)\exp\Big(-\frac{\|f_i-f_j\|^2}{2\sigma_f^2}\Big).$$

For data with position information but no features (e.g., tau tangle locations), each $f_i$ is constant and the feature exponential terms are all 1.
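A minimal sketch of the discrete varifold norm-square under a separable non-normalized Gaussian kernel; the bandwidths `sx`, `sf` and helper names are illustrative. Note that the two measures need not share particle counts:

```python
import numpy as np

def varifold_inner(x1, f1, w1, x2, f2, w2, sx, sf):
    """<mu1, mu2> for discrete measures mu = sum_i w_i delta_(x_i, f_i)
    under a separable Gaussian kernel on positions x and features f.
    Illustrative sketch with assumed bandwidths sx, sf."""
    dx2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    df2 = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)
    K = np.exp(-dx2 / (2 * sx ** 2)) * np.exp(-df2 / (2 * sf ** 2))
    return w1 @ K @ w2

def varifold_norm_sq(x1, f1, w1, x2, f2, w2, sx, sf):
    # ||mu1 - mu2||^2 = <mu1,mu1> - 2<mu1,mu2> + <mu2,mu2>
    return (varifold_inner(x1, f1, w1, x1, f1, w1, sx, sf)
            - 2 * varifold_inner(x1, f1, w1, x2, f2, w2, sx, sf)
            + varifold_inner(x2, f2, w2, x2, f2, w2, sx, sf))
```

Because the disparity is computed through the kernel, a two-particle measure can be compared directly with a three-particle one, echoing the point that particle numbers need not match across brains.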

##### 4.3. The Riemannian Distance Metric on the Multiscale Group

The diffeomorphism group acts on the hierarchy component-wise (equation (11c)) with the multiscale group product,
its elements satisfying the component-wise law of composition. The group supporting derivatives of the diffeomorphisms is built from a space of continuously differentiable vector fields vanishing at infinity, together with their partial derivatives, intersected with diffeomorphisms having one derivative:

Dynamics occurs via the group action, generated as a dynamical system in which the multiscale control flows the hierarchy, satisfying (12a). The control lies in the product of spaces, each an RKHS with norm-square selected to control the smoothness of the vector fields. The hierarchy of spaces is organized as a sequence of continuous embeddings, where the outermost space is an additional layer containing the others, defined as a space of continuously differentiable vector fields vanishing at infinity together with all their partial derivatives.

The hierarchy is connected via successive refinements, expressed via a continuous linear operator. The control process has finite square integral, with total energy
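Discretizing the total kinetic energy as a sum over scales and time steps of RKHS norm-squares, for kernel-parameterized vector fields, might look like the following; the Gaussian RKHS choice, array shapes, and names are assumptions:

```python
import numpy as np

def multiscale_energy(momenta, points, sigmas, dt):
    """Discretized total energy sum_l 1/2 int ||v_l(t)||^2 dt for
    kernel-parameterized vector fields v_l = K_{sigma_l} p_l, using the
    RKHS identity ||v||^2 = p^T K p. Illustrative sketch only."""
    E = 0.0
    for p_traj, q_traj, s in zip(momenta, points, sigmas):  # scales l
        for p, q in zip(p_traj, q_traj):                    # time steps
            d2 = ((q[:, None, :] - q[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / (2 * s ** 2))
            E += 0.5 * dt * np.sum(p * (K @ p))
    return E
```

Each scale contributes its own quadratic cost, which is how the kernels at different bandwidths enter the coupling of the layered flows.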

Optimal curves minimizing the integrated energy between any two fixed boundary conditions (BC), the second accessible from the first by a path of finite energy, extend the LDDMM setting [26] to a hierarchy of diffeomorphisms and describe a geodesic for an associated Riemannian metric of multiscale LDDMM [25]:

Existence of solutions for minimizers of (34), when the infimum is finite, can be established under suitable smoothness conditions.

##### 4.4. Geodesic Multiscale LDDMM via Hamiltonian Control

The shape of Brainspace is given by its geodesics; we use Hamiltonian control to generate them.

The Hamiltonian method reduces the parameterization of the vector field to the dynamics of the particles that encode the flow of states (17a). We write the dynamics explicitly as a linear function of the control, and define the flow of the measures indexed by the dynamical state:

The control problem satisfying (16), reparameterized in the states, becomes

Hamiltonian control for particles and tissues introduces the costates via the Hamiltonian

Under the stated assumption, the Pontryagin maximum principle [22] gives the optimal control for all scales, satisfying

*Statement 1* (Geodesics of particles). Assume that with and . If is a solution of the optimal control problem (36), then there exists a time-dependent costate for all satisfying
The optimal control satisfying for any and is given by (18a).

*Geodesics of tissue*. Assume that with . If solves the optimal control problem (36), then the time-dependent costate for all satisfies

The optimal controls satisfying for any and are given by (19a).

See Appendix A for proof of differential equations.
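For a single scale with a Gaussian kernel, the state/costate dynamics reduce to classical landmark geodesic shooting under the Hamiltonian $H(q,p) = \frac{1}{2}\sum_{i,j} (p_i \cdot p_j) K(q_i, q_j)$. A minimal Euler-integration sketch; step sizes and names are assumptions, not the paper's multiscale scheme:

```python
import numpy as np

def shoot(q0, p0, sigma, steps=100, dt=0.01):
    """Single-scale landmark geodesic shooting: state dq_i/dt = sum_j K_ij p_j
    and costate dp_i/dt = sum_j (p_i . p_j) K_ij (q_i - q_j) / sigma^2 for
    Gaussian K_ij = exp(-|q_i - q_j|^2 / (2 sigma^2)). Illustrative sketch."""
    q, p = q0.copy(), p0.copy()
    for _ in range(steps):
        diff = q[:, None, :] - q[None, :, :]
        K = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
        pp = p @ p.T                                  # pairwise p_i . p_j
        dq = K @ p                                    # state equation
        dp = ((pp * K)[:, :, None] * diff).sum(1) / sigma ** 2  # costate
        q, p = q + dt * dq, p + dt * dp
    return q, p
```

In the multiscale setting of the statements above, one such coupled system runs per layer, with the layers linked through the hierarchy of flows.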

*Statement 2* (Integral equations for Hamiltonian momentum of particles). Assuming is in , the geodesic costate for particles flowing from satisfies

As proven in Appendix B, the particle integral equations of (40) and (18b) satisfy (39a). The second set of dense tissue integral equations (19b) satisfies (39b) by a similar argument.

##### 4.5. Gradients of the Endpoint Varifold Matching Norm

The gradients of the matching endpoints require us to compute the variation