
Graph Rules for Recurrent Neural Network Dynamics

Carina Curto
Katherine Morrison

Communicated by Notices Associate Editor Emilie Purvine


1. Introduction

Neurons in the brain are constantly flickering with activity, which can be spontaneous or in response to stimuli [LBH09]. Because of positive feedback loops and the potential for runaway excitation, real neural networks often possess an abundance of inhibition that serves to shape and stabilize the dynamics [YMSL05, KAY14]. The excitatory neurons in such networks exhibit intricate patterns of connectivity, whose structure controls the allowed patterns of activity. A central question in neuroscience is thus: how does network connectivity shape dynamics?

For a given model, this question becomes a mathematical challenge. The goal is to develop a theory that directly relates properties of a nonlinear dynamical system to its underlying graph. Such a theory can provide insights and hypotheses about how network connectivity constrains activity in real brains. It also opens up new possibilities for modeling neural phenomena in a mathematically tractable way.

Here we describe a class of inhibition-dominated neural networks corresponding to directed graphs, and introduce some of the theory that has been developed to study them. The heart of the theory is a set of parameter-independent graph rules that enables us to directly predict features of the dynamics from combinatorial properties of the graph. Specifically, graph rules allow us to constrain, and in some cases fully determine, the collection of stable and unstable fixed points of a network based solely on graph structure.

Stable fixed points are themselves static attractors of the network, and have long been used as a model of stored memory patterns Hop82. In contrast, unstable fixed points have been shown to play an important role in shaping dynamic (nonstatic) attractors, such as limit cycles PMMC22. By understanding the fixed points of simple networks, and how they relate to the underlying architecture, we can gain valuable insight into the high-dimensional nonlinear dynamics of neurons in the brain.

For more complex architectures, built from smaller component subgraphs, we present a series of gluing rules that allow us to determine all fixed points of the network by gluing together those of the components. These gluing rules are reminiscent of sheaf-theoretic constructions, with fixed points playing the role of sections over subnetworks.

First, we review some basics of recurrent neural networks and a bit of historical context.

Basic network setup

A recurrent neural network is a directed graph $G$ together with a prescription for the dynamics on the vertices, which represent neurons (see Figure 1A). To each vertex $i$ we associate a function $x_i(t)$ that tracks the activity level of neuron $i$ as it evolves in time. To each ordered pair of vertices $(i,j)$ we assign a weight, $W_{ij}$, governing the strength of the influence of neuron $j$ on neuron $i$. In principle, there can be a nonzero weight between any two nodes, with the graph providing constraints on the allowed values $W_{ij}$, depending on the specifics of the model.

Figure 1.

(A) Recurrent network setup. (B) A Ramón y Cajal drawing of real cortical neurons.


The dynamics often take the form of a system of ODEs, called a firing rate model [DA01]:

$$\tau_i \frac{dx_i}{dt} = -x_i + \varphi\Big(\sum_{j=1}^n W_{ij}x_j + b_i(t)\Big), \tag{1}$$

for $i = 1, \ldots, n$. The various terms in the equation are illustrated in Figure 1, and can be thought of as follows:

$x_i = x_i(t)$ is the firing rate of a single neuron $i$ (or the average activity of a subpopulation of neurons);

$\tau_i$ is the “leak” timescale, governing how quickly a neuron’s activity exponentially decays to zero in the absence of external or recurrent input;

$W$ is a real-valued $n \times n$ matrix of synaptic interaction strengths, with $W_{ij}$ representing the strength of the connection from neuron $j$ to neuron $i$;

$b_i = b_i(t)$ is a real-valued external input to neuron $i$ that may or may not vary with time;

$y_i(t) = \sum_{j=1}^n W_{ij}x_j(t) + b_i(t)$ is the total input to neuron $i$ as a function of time; and

$\varphi : \mathbb{R} \to \mathbb{R}$ is a nonlinear, but typically monotone increasing, function.

Of particular importance for this article is the family of threshold-linear networks (TLNs). In this case, the nonlinearity is chosen to be the popular threshold-linear (or ReLU) function,

$$\varphi(y) = [y]_+ = \max\{y, 0\}.$$
TLNs are common firing rate models that have been used in computational neuroscience for decades [SY12, TSSM97, HSM00, BF22]. The use of threshold-linear units in neural modeling dates back at least to 1958 [HR58]. In the last 20 years, TLNs have also been shown to be surprisingly tractable mathematically [HSS03, CDI13, CM16, MDIC16, CGM19, PLACM22], though much of the theory remains underdeveloped. We are especially interested in competitive or inhibition-dominated TLNs, where the matrix $W$ is nonpositive so the effective interaction between any pair of neurons is inhibitory. In this case, the activity remains bounded despite the lack of saturation in the nonlinearity [MDIC16]. These networks produce complex nonlinear dynamics and can possess a remarkable variety of attractors [MDIC16, PLACM22, PMMC22].
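To make equation (1) concrete with the ReLU nonlinearity, here is a minimal simulation sketch using forward Euler integration. The function name, step size, and example weights are our own illustrative choices (the example matrix happens to be a 3-cycle CTLN with $\varepsilon = 0.25$, $\delta = 0.5$, in the sense of Section 3), not code from the paper.

```python
import numpy as np

def simulate_tln(W, b, x0, T=50.0, dt=0.01):
    """Integrate dx/dt = -x + [W x + b]_+ with forward Euler.

    W : (n, n) weight matrix, W[i, j] = strength of the connection j -> i
    b : (n,) external input
    x0: (n,) initial firing rates
    """
    steps = int(T / dt)
    xs = np.zeros((steps + 1, len(x0)))
    xs[0] = x0
    for t in range(steps):
        y = W @ xs[t] + b                                  # total input y_i
        xs[t + 1] = xs[t] + dt * (-xs[t] + np.maximum(y, 0.0))
    return xs

# Example: an inhibition-dominated network on 3 neurons (3-cycle CTLN weights)
W = np.array([[ 0.0 , -1.5 , -0.75],
              [-0.75,  0.0 , -1.5 ],
              [-1.5 , -0.75,  0.0 ]])
b = np.ones(3)
traj = simulate_tln(W, b, x0=np.array([0.2, 0.1, 0.0]))
print(traj[-5:])   # late-time activity cycles through the three neurons
```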

Firing rate models of the form (1) are examples of recurrent networks because the matrix $W$ allows for all pairwise interactions, and there is no constraint that the architecture (i.e., the underlying graph $G$) be feedforward. Unlike deep neural networks, which can be thought of as classifiers implementing a clustering function, recurrent networks are primarily thought of as dynamical systems. And the main purpose of these networks is to model the dynamics of neural activity in the brain. The central question is thus:

Question 1.

Given a firing rate model defined by (1) with network parameters $(W, b)$ and underlying graph $G$, what are the emergent network dynamics? What can we say about the dynamics from knowledge of $G$ alone?

We are particularly interested in understanding the attractors of such a network, including both stable fixed points and dynamic attractors such as limit cycles. The attractors are important because they comprise the set of possible asymptotic behaviors of the network in response to different inputs or initial conditions (see Figure 2).

Note that Question 1 is posed for a fixed connectivity matrix $W$, but of course $W$ can change over time (e.g., as a result of learning or training of the network). Here we restrict ourselves to considering constant $W$ matrices; this allows us to focus on understanding network dynamics on a fast timescale, assuming slowly varying synaptic weights. Understanding the dynamics associated to changing $W$ is an important topic, currently beyond the scope of this work.

Historical interlude: memories as attractors

Attractor neural networks became popular in the 1980s as models of associative memory encoding and retrieval. The best-known example from that era is the Hopfield model [Hop82], originally conceived as a variant on the Ising model from statistical mechanics. In the Hopfield model, the neurons can be in one of two states, $s_i \in \{\pm 1\}$, and the activity evolves according to the discrete time update rule:

$$s_i(t+1) = \mathrm{sgn}\Big(\sum_{j} J_{ij}\,s_j(t)\Big).$$

Hopfield’s famous 1982 result is that the dynamics are guaranteed to converge to a stable fixed point, provided the interaction matrix $J$ is symmetric: that is, $J_{ij} = J_{ji}$ for every $i, j$. Specifically, he showed that the “energy” function,

$$E = -\frac{1}{2}\sum_{i,j} J_{ij}\,s_i s_j,$$

decreases along trajectories of the dynamics, and thus acts as a Lyapunov function [Hop82]. The stable fixed points are local minima of the energy landscape (Figure 2A). A stronger, more general convergence result for competitive neural networks was shown in [CG83].
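As a quick numerical illustration of the convergence argument, the sketch below runs asynchronous updates for a randomly generated symmetric interaction matrix (an assumption made purely for the example) and checks that the energy never increases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.normal(size=(n, n))
J = (A + A.T) / 2            # symmetric interactions, as Hopfield requires
np.fill_diagonal(J, 0.0)

def energy(J, s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)      # random initial state in {-1, +1}^n
E = energy(J, s)
for _ in range(500):                 # asynchronous updates, one neuron at a time
    i = rng.integers(n)
    s[i] = 1 if J[i] @ s >= 0 else -1
    E_new = energy(J, s)
    assert E_new <= E + 1e-12        # the energy acts as a Lyapunov function
    E = E_new
print("final energy:", E)
```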

Figure 2.

Attractor neural networks. (A) For symmetric Hopfield networks and symmetric inhibitory TLNs, trajectories are guaranteed to converge to stable fixed point attractors. Sample trajectories are shown, with the basin of attraction for the blue stable fixed point outlined in blue. (B) For asymmetric TLNs, dynamic attractors can coexist with (static) stable fixed point attractors.


These fixed points are the only attractors of the network, and they represent the set of memories encoded in the network. Hopfield networks perform a kind of pattern completion: given an initial condition $s(0)$, the activity evolves until it converges to one of multiple stored patterns in the network. If, for example, the individual neurons store black and white pixel values, this process could input a corrupted image and recover the original image, provided it has previously been stored as a stable fixed point in the network by appropriately selecting the weights of the matrix $J$. The novelty at the time was the nonlinear phenomenon of multistability: namely, that the network could encode many such stable equilibria and thus maintain an entire catalogue of stored memory patterns. The key to Hopfield’s convergence result was the requirement that $J$ be a symmetric interaction matrix. Although this was known to be an unrealistic assumption for real (biological) neural networks, it was considered a tolerable price to pay for guaranteed convergence. One did not want an associative memory network that wandered the state space indefinitely without ever recalling a definite pattern.

Twenty years later, Hahnloser, Seung, and others followed up and proved a similar convergence result in the case of symmetric threshold-linear networks [HSS03]. More results on the collections of stable fixed points that can be simultaneously encoded in a symmetric TLN can be found in [CDI13, CM16], including some unexpected connections to Cayley–Menger determinants and classical distance geometry.

In all of this work, stable fixed points have served as the model for encoded memories. Indeed, these are the only types of attractors that arise for symmetric Hopfield networks or symmetric TLNs. Whether or not guaranteed convergence to stable fixed points is desirable, however, is a matter of perspective. For a network whose job it is to perform pattern completion or classification for static images (or codewords), as in the classical Hopfield model, this is exactly what one wants. But it is also important to consider memories that are temporal in nature, such as sequences and other dynamic patterns of activity. Sequential activity, as observed in central pattern generator circuits (CPGs) and spontaneous activity in hippocampus and cortex, is more naturally modeled by dynamic attractors such as limit cycles. This requires shifting attention to the asymmetric case, in order to be able to encode attractors that are not stable fixed points (Figure 2B).

Beyond stable fixed points

When the symmetry assumption is removed, TLNs can support a rich variety of dynamic attractors such as limit cycles, quasiperiodic attractors, and even strange (chaotic) attractors. Indeed, this richness can already be observed in a special class of TLNs called combinatorial threshold-linear networks (CTLNs), introduced in Section 3. These networks are defined from directed graphs, and the dynamics are almost entirely determined by the graph structure. A striking feature of CTLNs is that the dynamics are shaped not only by the stable fixed points, but also the unstable fixed points. In particular, we have observed a direct correspondence between certain types of unstable fixed points and dynamic attractors (see Figure 3). This is reviewed in Section 4.

Figure 3.

Stable and unstable fixed points. (A) Stable fixed points are attractors of the network. (B-C) Unstable fixed points are not themselves attractors, but certain unstable fixed points seem to correspond to dynamic attractors (B), while others function solely as tipping points between multiple attractors (C).


Despite exhibiting complex, high-dimensional, nonlinear dynamics, recent work has shown that TLNs—and especially CTLNs—are surprisingly tractable mathematically. Motivated by the relationship between fixed points and attractors, a great deal of progress has been made on the problem of relating fixed point structure to network architecture. In the case of CTLNs, this has resulted in a series of graph rules: theorems that allow us to rule in and rule out potential fixed points based purely on the structure of the underlying graph [CGM19, PLACM22]. In Section 5, we give a novel exposition of graph rules, and introduce several elementary graph rules from which the others can be derived.

Inhibition-dominated TLNs and CTLNs also display a remarkable degree of modularity. Namely, attractors associated to smaller networks can be embedded in larger ones with minimal distortion PMMC22. This is likely a consequence of the high levels of background inhibition: it serves to stabilize and preserve local properties of the dynamics. These networks also exhibit a kind of compositionality, wherein fixed points and attractors of subnetworks can be effectively “glued” together into fixed points and attractors of a larger network. These local-to-global relationships are given by a series of theorems we call gluing rules, given in Section 6.

2. TLNs and Hyperplane Arrangements

For firing rate models with threshold-nonlinearity $\varphi(y) = [y]_+$, the network equations (1) become

$$\frac{dx_i}{dt} = -x_i + \Big[\sum_{j=1}^n W_{ij}x_j + b_i\Big]_+, \tag{3}$$

for $i = 1, \ldots, n$. We also assume $W_{ii} = 0$ for each $i$. Note that the leak timescales have been set to $\tau_i = 1$ for all $i$. We thus measure time in units of this timescale.

For constant matrix $W$ and input vector $b$, the equations

$$y_i(x) = \sum_{j=1}^n W_{ij}x_j + b_i = 0, \quad i = 1, \ldots, n,$$

define a hyperplane arrangement $\mathcal{H} = \{H_1, \ldots, H_n\}$ in $\mathbb{R}^n$. The $i$-th hyperplane $H_i$ is defined by $y_i(x) = 0$, with normal vector given by the $i$-th row of $W$, population activity vector $x = (x_1, \ldots, x_n)$, and affine shift $b_i$. If $W_{ij} \neq 0$, then $H_i$ intersects the $j$-th coordinate axis at the point $x_j = -b_i/W_{ij}$. Since $W_{ii} = 0$, $H_i$ is parallel to the $i$-th coordinate axis.

The hyperplanes partition the positive orthant $\mathbb{R}^n_{\geq 0}$ into chambers. Within the interior of each chamber, each point is on the plus or minus side of each hyperplane $H_i$. The equations (3) thus reduce to a linear system of ODEs, with either $dx_i/dt = -x_i + y_i(x)$ or $dx_i/dt = -x_i$ for each $i$. In particular, TLNs are piecewise-linear dynamical systems with a different linear system governing the dynamics in each chamber.

Figure 4.

TLNs as a patchwork of linear systems. (A) The connectivity matrix $W$, input $b$, and differential equations for a small TLN. (B) The state space is divided into chambers (regions), each having dynamics governed by a different linear system. The chambers are defined by the hyperplanes $H_i$, with $H_i$ given by $y_i(x) = 0$ (gray lines).

Figure 5.

A network on $n = 3$ neurons, its hyperplane arrangement, and limit cycle. (A) A TLN whose connectivity matrix $W$ is dictated by a $3$-cycle graph, together with the TLN equations. (B) The TLN from A produces firing rate activity in a periodic sequence. (C) (Left) The hyperplane arrangement defined by the equations $y_i(x) = 0$, with a trajectory initialized near the fixed point shown in black. (Right) A close-up of the trajectory, spiraling out from the unstable fixed point and falling into a limit cycle. Different colors correspond to different chambers of the hyperplane arrangement through which the trajectory passes.


A fixed point of a TLN (3) is a point $x^* \in \mathbb{R}^n_{\geq 0}$ that satisfies $dx_i/dt = 0$ for each $i \in [n] = \{1, \ldots, n\}$. In particular, we must have

$$x_i^* = [y_i^*]_+ \quad \text{for each } i, \tag{4}$$

where $y_i^* = \sum_j W_{ij}x_j^* + b_i$ is evaluated at the fixed point. We typically assume a nondegeneracy condition on $(W, b)$ [CGM19], which guarantees that each linear system is nondegenerate and has a single fixed point. This fixed point may or may not lie within the chamber where its corresponding linear system applies. The fixed points of the TLN are precisely the fixed points of the linear systems that lie within their respective chambers.

Figure 4 illustrates the hyperplanes and chambers for such a TLN. Each chamber has its own linear system of ODEs, and the fixed point of each linear system is shown in matching color. Note that only one chamber contains the fixed point of its own linear system (in red). This fixed point is thus the only fixed point of the TLN.

Figure 5 shows an example of a TLN on $n = 3$ neurons. The matrix $W$ is constructed from a $3$-cycle graph, with a constant external input $b_i = \theta$ for each $i$. The dynamics fall into a limit cycle where the neurons fire in a repeating sequence that follows the arrows of the graph. This time, the TLN equations define a hyperplane arrangement in $\mathbb{R}^3$, again with each hyperplane $H_i$ defined by $y_i(x) = 0$ (Figure 5C). An initial condition near the unstable fixed point in the all-$+$ chamber (where $y_i > 0$ for each $i$) spirals out and converges to a limit cycle that passes through four distinct chambers. Note that the threshold nonlinearity is critical for the model to produce nonlinear behavior such as limit cycles; without it, the system would be linear. It is, nonetheless, nontrivial to prove that the limit cycle shown in Figure 5 exists. A recent proof was given for a special family of TLNs constructed from any $k$-cycle graph [BCRR21].

The set of all fixed points

A central object that is useful for understanding the dynamics of TLNs is the collection of all fixed points of the network, both stable and unstable. The support of a fixed point $x^*$ is the subset of active neurons,

$$\mathrm{supp}(x^*) = \{i \in [n] \mid x_i^* > 0\}.$$

Our nondegeneracy condition (that is generically satisfied) guarantees we can have at most one fixed point per chamber of the hyperplane arrangement $\mathcal{H}$, and thus at most one fixed point per support. We can thus label all the fixed points of a given network by their supports:

$$\mathrm{FP}(W, b) = \{\sigma \subseteq [n] \mid \sigma = \mathrm{supp}(x^*) \text{ for some fixed point } x^* \text{ of the TLN}\}.$$

For each support $\sigma \in \mathrm{FP}(W, b)$, the fixed point itself is easily recovered. Outside the support, $x_k^* = 0$ for all $k \notin \sigma$. Within the support, $x_\sigma^*$ is given by:

$$x_\sigma^* = (I - W_\sigma)^{-1} b_\sigma.$$

Here $b_\sigma$ and $x_\sigma^*$ are the column vectors obtained by restricting $b$ and $x^*$ to the indices in $\sigma$, and $W_\sigma$ is the induced principal submatrix obtained by restricting rows and columns of $W$ to $\sigma$.

From (4), we see that a fixed point with support $\sigma$ must satisfy the “on-neuron” conditions, $y_i^* > 0$ for all $i \in \sigma$, as well as the “off-neuron” conditions, $y_k^* \leq 0$ for all $k \notin \sigma$, to ensure that $x_i^* > 0$ for each $i \in \sigma$ and $x_k^* = 0$ for each $k \notin \sigma$. Equivalently, these conditions guarantee that the fixed point of the associated linear system lies inside its corresponding chamber. Note that for such a fixed point, the values $x_i^*$ for $i \in \sigma$ depend only on the restricted subnetwork $(W_\sigma, b_\sigma)$. Therefore, the on-neuron conditions for $\sigma$ in the full network are satisfied if and only if they hold in the restricted network. Since the off-neuron conditions are trivially satisfied in the restricted network, it follows that $\sigma \in \mathrm{FP}(W_\sigma, b_\sigma)$ is a necessary condition for $\sigma \in \mathrm{FP}(W, b)$. It is not, however, sufficient, as the off-neuron conditions may fail in the larger network.

Conveniently, the off-neuron conditions are independent and can be checked one neuron at a time. Thus,

$$\sigma \in \mathrm{FP}(W, b) \iff \sigma \in \mathrm{FP}(W_{\sigma \cup \{k\}}, b_{\sigma \cup \{k\}}) \text{ for all } k \notin \sigma.$$

When $\sigma \in \mathrm{FP}(W_\sigma, b_\sigma)$ satisfies all the off-neuron conditions, so that $\sigma \in \mathrm{FP}(W, b)$, we say that $\sigma$ survives to the larger network; otherwise, we say $\sigma$ dies.

The fixed point corresponding to $\sigma \in \mathrm{FP}(W, b)$ is stable if and only if all eigenvalues of $-I + W_\sigma$ have negative real part. For competitive (or inhibition-dominated) TLNs, all fixed points, whether stable or unstable, have a stable manifold. This is because competitive TLNs have $W_{ij} \leq 0$ for all $i, j$. Applying the Perron–Frobenius theorem to the nonnegative matrix $I - W_\sigma$, we see that the largest magnitude eigenvalue of $-I + W_\sigma$ is guaranteed to be real and negative. The corresponding eigenvector provides an attracting direction into the fixed point. Combining this observation with the nondegeneracy condition reveals that the unstable fixed points are all hyperbolic (i.e., saddle points).
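The on- and off-neuron conditions translate directly into a brute-force computation of $\mathrm{FP}(W, b)$ for small networks. The sketch below is our own illustration (exponential in $n$, so only practical for small $n$), not the authors' algorithm.

```python
import numpy as np
from itertools import combinations

def fixed_point_supports(W, b, tol=1e-9):
    """Brute-force FP(W, b): for each candidate support sigma, solve
    x_sigma = (I - W_sigma)^{-1} b_sigma and check the on-neuron
    (x_i > 0 for i in sigma) and off-neuron (y_k <= 0 for k not in sigma)
    conditions."""
    n = len(b)
    supports = []
    for size in range(1, n + 1):
        for sigma in combinations(range(n), size):
            idx = list(sigma)
            x_sigma = np.linalg.solve(np.eye(size) - W[np.ix_(idx, idx)], b[idx])
            if np.any(x_sigma <= tol):
                continue                       # on-neuron conditions fail
            x = np.zeros(n)
            x[idx] = x_sigma
            y = W @ x + b
            if all(y[k] <= tol for k in range(n) if k not in sigma):
                supports.append(set(sigma))    # off-neuron conditions hold
    return supports

def is_stable(W, sigma):
    """Stability test: all eigenvalues of -I + W_sigma have negative real part."""
    idx = list(sigma)
    J = -np.eye(len(idx)) + W[np.ix_(idx, idx)]
    return bool(np.all(np.linalg.eigvals(J).real < 0))
```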

3. Combinatorial Threshold-Linear Networks

Combinatorial threshold-linear networks (CTLNs) are a special case of competitive (or inhibition-dominated) TLNs, with the same threshold nonlinearity, that were first introduced in [MDIC16]. What makes CTLNs special is that we restrict to having only two values for the connection strengths $W_{ij}$, for $i \neq j$. These are obtained as follows from a directed graph $G$, where $j \to i$ indicates that there is an edge from $j$ to $i$ and $j \not\to i$ indicates that there is no such edge:

$$W_{ij} = \begin{cases} \;\;0 & \text{if } i = j, \\ -1 + \varepsilon & \text{if } j \to i \text{ in } G, \\ -1 - \delta & \text{if } j \not\to i \text{ in } G. \end{cases}$$

Additionally, CTLNs typically have a constant external input $b_i = \theta > 0$ for all $i$ in order to ensure the dynamics are internally generated rather than inherited from a changing or spatially heterogeneous input.

A CTLN is thus completely specified by the choice of a graph $G$, together with three real parameters: $\varepsilon$, $\delta$, and $\theta$. We additionally require that $\delta > 0$, $\theta > 0$, and $0 < \varepsilon < \frac{\delta}{\delta + 1}$. When these conditions are met, we say the parameters are within the legal range. Note that the upper bound on $\varepsilon$ implies $\varepsilon < 1$, and so the $W$ matrix is always effectively inhibitory. For fixed parameters, only the graph $G$ varies between networks. The network in Figure 5 is a CTLN with the standard parameters $\varepsilon = 0.25$, $\delta = 0.5$, and $\theta = 1$.
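For experimentation, the CTLN weight matrix is easy to build from a directed graph; the helper below (our own naming, with the standard parameters as defaults) can be combined with the simulation and fixed-point sketches above.

```python
import numpy as np

def ctln_matrix(adj, eps=0.25, delta=0.5, theta=1.0):
    """Build (W, b) for a CTLN on a directed graph.

    adj[i][j] = True means there is an edge j -> i in G.
    W[i, j] = -1 + eps if j -> i, and -1 - delta otherwise (0 on the diagonal).
    """
    assert 0 < eps < delta / (delta + 1), "parameters outside the legal range"
    A = np.array(adj, dtype=bool)
    W = np.where(A, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    b = theta * np.ones(len(A))
    return W, b

# Example: the 3-cycle 1 -> 2 -> 3 -> 1 (0-indexed nodes 0, 1, 2)
adj = [[False, False, True ],   # edge 3 -> 1
       [True , False, False],   # edge 1 -> 2
       [False, True , False]]   # edge 2 -> 3
W, b = ctln_matrix(adj)
```

Feeding this $(W, b)$ to the earlier fixed_point_supports sketch should return only the full support, consistent with the single (unstable) fixed point described below for the graph of Figure 5.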

We interpret a CTLN as modeling a network of excitatory neurons, whose net interactions are effectively inhibitory due to a strong global inhibition (Figure 6). When $j \not\to i$, we say $j$ strongly inhibits $i$; when $j \to i$, we say $j$ weakly inhibits $i$. The weak inhibition is thought of as the sum of an excitatory synaptic connection and the background inhibition. Note that because $1 - \varepsilon < 1 < 1 + \delta$, when $j \not\to i$, neuron $j$ inhibits $i$ more than $i$ inhibits itself via its leak term; when $j \to i$, neuron $j$ inhibits $i$ less than $i$ inhibits itself. These differences in inhibition strength cause the activity to follow the arrows of the graph.

Figure 6.

CTLNs. A neural network with excitatory pyramidal neurons (triangles) and a background network of inhibitory interneurons (gray circles) that produces a global inhibition. The corresponding graph (right) retains only the excitatory neurons and their connections.


The set of fixed point supports of a CTLN with graph $G$ is denoted as:

$$\mathrm{FP}(G, \varepsilon, \delta) = \mathrm{FP}(W, b),$$

where $W$ and $b$ are specified by a CTLN with graph $G$ and parameters $\varepsilon$ and $\delta$. Note that $\mathrm{FP}(G, \varepsilon, \delta)$ is independent of $\theta$, provided $\theta$ is constant across neurons as in a CTLN. It is also frequently independent of $\varepsilon$ and $\delta$. For this reason we often refer to it as $\mathrm{FP}(G)$, especially when a fixed choice of $\varepsilon$ and $\delta$ is understood.

The legal range condition, $\varepsilon < \frac{\delta}{\delta+1}$, is motivated by a theorem in [MDIC16]. It ensures that single directed edges are not allowed to support stable fixed points. This allows us to prove the following theorem connecting a certain graph structure to the absence of stable fixed points. Note that a graph is oriented if for any pair of nodes, $i \to j$ implies $j \not\to i$ (i.e., there are no bidirectional edges). A sink is a node with no outgoing edges.

Theorem 3.1 (MDIC16, Theorem 2.4).

Let $G$ be an oriented graph with no sinks. Then for any parameters in the legal range, the associated CTLN has no stable fixed points. Moreover, the activity is bounded.

The graph in Figure 5A is an oriented graph with no sinks. It has a single fixed point, $\mathrm{FP}(G) = \{123\}$, irrespective of the parameters (note that we use $123$ as shorthand for the set $\{1,2,3\}$). This fixed point is unstable and the dynamics converge to a limit cycle (Figure 5C).

Even when there are no stable fixed points, the dynamics of a CTLN are always bounded [MDIC16]. In the limit as $t \to \infty$, we can bound the total population activity as a function of the parameters $\theta$, $\varepsilon$, and $\delta$:

$$\frac{\theta}{1+\delta} \;\leq\; \sum_{i=1}^n x_i(t) \;\leq\; \frac{\theta}{1-\varepsilon}. \tag{6}$$

In simulations, we observe a rapid convergence to this regime.

Figure 7.

Dynamics of a CTLN network on $n$ neurons. The graph $G$ is a directed Erdős–Rényi random graph with fixed edge probability and no self loops. The CTLN parameters are fixed, and the initial conditions for each neuron, $x_i(0)$, are randomly and independently chosen from a uniform distribution. (A-D) Four solutions from the same deterministic network, differing only in the choice of initial conditions. In each panel, the top plot shows the firing rate as a function of time for each neuron in grayscale. The middle plot shows the summed total population activity, $\sum_i x_i(t)$, which quickly becomes trapped between the horizontal gray lines, the bounds in equation (6). The bottom plot shows individual rate curves for all neurons, in different colors. (A) The network appears chaotic, with some recurring patterns of activity. (B) The solution initially appears to be chaotic, like the one in A, but eventually converges to a stable fixed point supported on a clique. (C) The solution converges to a limit cycle. (D) The solution converges to a different limit cycle. Note that one can observe brief “echoes” of this limit cycle in the transient activity of panel B.


Figure 7 depicts four solutions for the same CTLN on $n$ neurons. The graph was generated as a directed Erdős–Rényi random graph; note that it is not an oriented graph. Since the network is deterministic, the only difference between simulations is the initial conditions. While panel A appears to show chaotic activity, the solutions in panels B, C, and D all settle into a fixed point or a limit cycle within the allotted time frame. The long transient of panel B is especially striking: partway through the simulation, the activity appears as though it will fall into the same limit cycle from panel D, but then escapes into another period of chaotic-looking dynamics before abruptly converging to a stable fixed point. In all cases, the total population activity rapidly converges to lie within the bounds given in (6), depicted in gray.

Fun examples

Despite their simplicity, CTLNs display a rich variety of nonlinear dynamics. Even very small networks can exhibit interesting attractors with unexpected properties. Theorem 3.1 tells us that one way to guarantee that a network will produce dynamic—as opposed to static—attractors is to choose $G$ to be an oriented graph with no sinks. The following examples are of this type.

Figure 8.

Gaudi attractor. A CTLN for a cyclically symmetric tournament on $n = 5$ nodes produces two distinct attractors, depending on initial conditions. We call the top one the Gaudi attractor because the undulating curves are reminiscent of work by the architect from Barcelona.


The Gaudi attractor. Figure 8 shows two solutions to a CTLN for a cyclically symmetric tournament graph (see Footnote 1) on $n = 5$ nodes. For some initial conditions, the solutions converge to a somewhat boring limit cycle, with the firing rates $x_1, \ldots, x_5$ all peaking in the expected cyclic sequence (bottom middle). For a different set of initial conditions, however, the solution converges to the beautiful and unusual attractor displayed at the top.

1

A tournament is a directed graph in which every pair of nodes has exactly one (directed) edge between them.

Symmetry and synchrony. Because the pattern of weights in a CTLN is completely determined by the graph $G$, any symmetry of the graph necessarily translates to a symmetry of the differential equations, and hence of the vector field. It follows that the automorphism group of $G$ also acts on the set of all attractors, which must respect the symmetry. For example, in the cyclically symmetric tournament of Figure 8, both the Gaudi attractor and the “boring” limit cycle below it are invariant under the cyclic permutation of the nodes: the solution is preserved up to a time translation.

Another way for symmetry to manifest itself in an attractor is via synchrony. The network in Figure 9A depicts a CTLN whose graph has a nontrivial automorphism group, cyclically permuting the nodes 2, 3, and 4. In the corresponding attractor, the neurons 2, 3, 4 perfectly synchronize as the solution settles into the limit cycle. Notice, however, what happens for the network in Figure 9B. In this case, the limit cycle looks very similar to the one in A, with the same synchrony among neurons 2, 3, and 4. However, this graph is missing one of the edges, and so it has no nontrivial automorphisms. We refer to this phenomenon as surprise symmetry.

Figure 9.

Symmetry and synchrony. (A) A graph with a nontrivial automorphism group has an attractor where nodes 2, 3, and 4 fire synchronously. (B) The symmetry is broken due to the dropped edge. Nevertheless, the attractor still respects the symmetry, with nodes 2, 3, and 4 firing synchronously. Note that both attractors are very similar limit cycles, but the one in B has a longer period. (Standard parameters: $\varepsilon = 0.25$, $\delta = 0.5$, $\theta = 1$.)


On the flip side, a network with graph symmetry may have multiple attractors that are exchanged by the group action, but do not individually respect the symmetry. This is the more familiar scenario of spontaneous symmetry breaking.

Emergent sequences. One of the most reliable properties of CTLNs is the tendency of neurons to fire in sequence. Although we have seen examples of synchrony, the global inhibition promotes competitive dynamics wherein only one or a few neurons reach their peak firing rates at the same time. The sequences may be intuitive, as in the networks of Figures 8 and 9, following obvious cycles in the graph. However, even for small networks the emergent sequences may be difficult to predict.

Figure 10.

Emergent sequences can be difficult to predict. (A) (Left) The graph of a CTLN that is a tournament on seven nodes. (Right) The same graph, but with the cycle corresponding to the sequential activity highlighted in black. (B) A solution to the CTLN that converges to a limit cycle. This appears to be the only attractor of the network for the standard parameters.


The network in Figure 10A has $n = 7$ neurons, and the graph is a tournament with no nontrivial automorphisms. The corresponding CTLN appears to have a single, global attractor, shown in Figure 10B. The neurons in this limit cycle fire in a repeating sequence, 634517, with 5 being the lowest-firing node. This sequence is highlighted in black in the graph, and corresponds to a cycle in the graph. However, it is only one of many cycles in the graph. Why do the dynamics select this sequence and not the others? And why does neuron 2 drop out, while all others persist? This is particularly puzzling given that node 2 has in-degree three, while nodes 3 and 5 have in-degree two.

Figure 11.

An example CTLN and its attractors. (A) The graph of a CTLN. Its three fixed point supports are the same irrespective of the parameters $\varepsilon, \delta$. (B) Solutions to the CTLN in A using the standard parameters $\varepsilon = 0.25$, $\delta = 0.5$, and $\theta = 1$. (Top) The initial condition was chosen as a small perturbation of an unstable fixed point. The activity quickly converges to a limit cycle where the high-firing neurons are the ones in the fixed point support. (Bottom) A different initial condition yields a solution that converges to the static attractor corresponding to the stable fixed point. (C) The three fixed points are depicted in a three-dimensional projection of the four-dimensional state space. Perturbations of the third fixed point produce solutions that either converge to the limit cycle or to the stable fixed point from B.


Indeed, local properties of a network, such as the in- and out-degrees of individual nodes, are insufficient for predicting the participation and ordering of neurons in emergent sequences. Nevertheless, the sequence is fully determined by the structure of $G$. We just have a limited understanding of how. Recent progress in understanding sequential attractors has relied on special network architectures that are cyclic like the ones in Figure 9 [PLACM22]. Interestingly, although the graph in Figure 10 does not have such an architecture, the induced subgraph generated by the high-firing nodes 1, 3, 4, 6, and 7 is isomorphic to the graph in Figure 8. This graph, as well as the two graphs in Figure 9, have corresponding networks that are in some sense irreducible in their dynamics. These are examples of graphs that we refer to as core motifs [PMMC22].

4. Minimal Fixed Points, Core Motifs, and Attractors

Stable fixed points of a network are of obvious interest because they correspond to static attractors [HSS03, CDI13]. One of the most striking features of CTLNs, however, is the strong connection between unstable fixed points and dynamic attractors [PMMC22, PLACM22].

Question 2.

For a given CTLN, can we predict the dynamic attractors of the network from its unstable fixed points? Can the unstable fixed points be determined from the structure of the underlying graph $G$?

Throughout this section, $G$ is a directed graph on $n$ nodes. Subsets $\sigma \subseteq [n]$ are often used to denote both the collection of vertices indexed by $\sigma$ and the induced subgraph $G|_\sigma$. The corresponding network is assumed to be a CTLN with fixed parameters $\varepsilon$, $\delta$, and $\theta$.

Figure 11 provides an example to illustrate the relationship between unstable fixed points and dynamic attractors. Any CTLN with the graph in panel A has three fixed points. The collection of fixed point supports $\mathrm{FP}(G)$ can be thought of as a partially ordered set, ordered by inclusion. In our example, the two smaller supports are minimal fixed point supports, because they are minimal under inclusion. It turns out that the corresponding fixed points each have an associated attractor (Figure 11B). The one supported on a sink in the graph yields a stable fixed point, while the unstable fixed point whose induced subgraph is a $3$-cycle yields a limit cycle attractor whose high-firing neurons are the three cycle neurons. Figure 11C depicts all three fixed points in the state space. Here we can see that the third one acts as a “tipping point” on the boundary of two basins of attraction. Initial conditions near this fixed point can yield solutions that converge either to the stable fixed point or to the limit cycle.

Not all minimal fixed points have corresponding attractors. In [PMMC22] we saw that the key property of such a $\sigma \in \mathrm{FP}(G)$ is that it be minimal not only in $\mathrm{FP}(G)$ but also in $\mathrm{FP}(G|_\sigma)$, corresponding to the induced subnetwork restricted to the nodes in $\sigma$. In other words, $\sigma$ is the only fixed point support in $\mathrm{FP}(G|_\sigma)$. This motivates the definition of core motifs.

Figure 12.

Small core motifs. For each of these graphs, $\mathrm{FP}(G) = \{[n]\}$, where $n$ is the number of nodes. Attractors are shown for CTLNs with the standard parameters $\varepsilon = 0.25$, $\delta = 0.5$, and $\theta = 1$.

Definition 4.1.

Let $G$ be the graph of a CTLN on $n$ nodes. An induced subgraph $G|_\sigma$ is a core motif of the network if $\mathrm{FP}(G|_\sigma) = \{\sigma\}$.

When the graph $G$ is understood, we sometimes refer to $\sigma$ itself as a core motif if $G|_\sigma$ is one. The associated fixed point is called a core fixed point. Core motifs can be thought of as “irreducible” networks because they have a single fixed point, which has full support. Since the activity is bounded and must converge to an attractor, the attractor can be said to correspond to this fixed point. A larger network that contains $G|_\sigma$ as an induced subgraph may or may not have $\sigma \in \mathrm{FP}(G)$. When the core fixed point does survive, we refer to the embedded $G|_\sigma$ as a surviving core motif, and we expect the associated attractor to survive. In Figure 11, the surviving core motifs are the sink and the $3$-cycle, and they precisely predict the attractors of the network.
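Combining the earlier sketches, the core motif condition can be tested numerically for a fixed choice of parameters; this brute-force check is our own illustration, and it relies on the hypothetical ctln_matrix and fixed_point_supports helpers defined above.

```python
def is_core_motif(adj, eps=0.25, delta=0.5):
    """Check FP(G) == {[n]}, i.e., the only fixed point support is the full set."""
    W, b = ctln_matrix(adj, eps, delta)
    return fixed_point_supports(W, b) == [set(range(len(adj)))]
```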

The simplest core motifs are cliques. When these survive inside a network $G$, the corresponding attractor is always a stable fixed point supported on all nodes of the clique. In fact, we conjectured that any stable fixed point for a CTLN must correspond to a maximal clique of $G$, and specifically to a target-free clique [CGM19].

Up to size $4$, all core motifs are parameter-independent. For size $5$, most core motifs are parameter-independent. Figure 12 shows the complete list of small core motifs, together with some associated attractors. The cliques all correspond to stable fixed points, the simplest type of attractor. The $3$-cycle yields the limit cycle attractor in Figure 5, which may be distorted when embedded in a larger network (see Figure 11B). The other core motifs, whose fixed points are unstable, have dynamic attractors. Note that the $4$-cycu graph has a symmetry, and the rate curves for the two symmetric neurons are synchronous in the attractor. This synchrony is also evident in the $4$-ufd attractor, despite the fact that this graph does not have the symmetry. Perhaps the most interesting attractor, however, is the one for the fusion $3$-cycle graph. Here the $3$-cycle attractor, which does not survive the embedding into the larger graph, appears to “fuse” with the stable fixed point associated to a clique (which also does not survive). The resulting attractor can be thought of as binding together a pair of smaller attractors.

We have performed extensive tests on whether or not core motifs predict attractors in small networks. Specifically, we decomposed all 9608 directed graphs on $n = 5$ nodes into core motif components, and used this to predict the attractors. We found that 1053 of the graphs have surviving core motifs that are not cliques; these graphs were thus expected to support dynamic attractors. The remaining 8555 graphs contain only cliques as surviving core motifs, and were thus expected to have only stable fixed point attractors. Overall, we found that core motifs correctly predicted the set of attractors in 9586 of the 9608 graphs. Of the 22 graphs with mistakes, 19 graphs have a core motif with no corresponding attractor, and 3 graphs have no core motifs for the chosen parameters (see Footnote 2).

2

Classification of CTLNs on n=5 nodes available at https://github.com/ccurto/n5-graphs-package.

5. Graph Rules

We have seen that CTLNs exhibit a rich variety of nonlinear dynamics, and that the attractors are closely related to the fixed points. This opens up a strategy for linking attractors to the underlying network architecture via the fixed point supports $\mathrm{FP}(G)$. Our main tools for doing this are graph rules.

Throughout this section, we will use Greek letters to denote subsets $\sigma \subseteq [n]$ corresponding to fixed point supports (or potential supports), while Latin letters denote individual nodes/neurons. As before, $G|_\sigma$ denotes the induced subgraph obtained from $G$ by restricting to $\sigma$ and keeping only edges between vertices of $\sigma$. The fixed point supports are:

$$\mathrm{FP}(G) = \mathrm{FP}(G, \varepsilon, \delta) = \{\sigma \subseteq [n] \mid \sigma = \mathrm{supp}(x^*) \text{ for some fixed point } x^* \text{ of the CTLN}\}.$$

The main question addressed by graph rules is:

Question 3.

What can we say about $\mathrm{FP}(G)$ from knowledge of $G$ alone?

For example, consider the graphs in Figure 13. Can we determine from the graph alone which subgraphs will support fixed points? Moreover, can we determine which of those subgraphs are core motifs that will give rise to attractors of the network? We saw in Section 4 (Figure 12) that cycles and cliques are among the small core motifs; can cycles and cliques produce core motifs of any size? Can we identify other graph structures that are relevant for either ruling in or ruling out certain subgraphs as fixed point supports? The rest of Section 5 focuses on addressing these questions.

Figure 13.

Graphs for which $\mathrm{FP}(G)$ is completely determined by graph rules.


Note that implicit in the above questions is the idea that graph rules are parameter-independent: that is, they directly relate the structure of $G$ to $\mathrm{FP}(G, \varepsilon, \delta)$ via results that are valid for all choices of $\varepsilon$, $\delta$, and $\theta$ (provided they lie within the legal range). In order to obtain the most powerful results, we also require that our CTLNs be nondegenerate. As has already been noted, nondegeneracy is generically satisfied for TLNs [CGM19]. For CTLNs, it is satisfied irrespective of $\theta$ and for almost all legal range choices of $\varepsilon$ and $\delta$ (i.e., up to a set of measure zero in the two-dimensional parameter space for $\varepsilon$ and $\delta$).

5.1. Examples of graph rules

We’ve already seen some graph rules. For example, Theorem 3.1 told us that if is an oriented graph with no sinks, the associated CTLN has no stable fixed points. Such CTLNs are thus guaranteed to only exhibit dynamic attractors. Here we present a set of eight simple graph rules, all proven in CGM19, that are easy to understand and give a flavor of the kinds of theorems we have found.

We will use the following graph theoretic terminology. A source is a node with no incoming edges, while a sink is a node with no outgoing edges. Note that a node can be a source or sink in an induced subgraph $G|_\sigma$, while not being one in $G$. An independent set is a collection of nodes with no edges between them, while a clique is a set of nodes that is all-to-all bidirectionally connected. A cycle is a graph (or an induced subgraph) where each node has exactly one incoming and one outgoing edge, and they are all connected in a single directed cycle. A directed acyclic graph (DAG) is a graph with a topological ordering of the vertices, so that $i \to j$ only if $i < j$; such a graph does not contain any directed cycles. Finally, a target of a graph $G|_\sigma$ is a node $k$ such that $i \to k$ for all $i \in \sigma \setminus \{k\}$. Note that a target may be inside or outside $G|_\sigma$.

Examples of graph rules:

Rule 1 (independent sets).

If $\sigma$ is an independent set, then $\sigma \in \mathrm{FP}(G)$ if and only if each $i \in \sigma$ is a sink in $G$.

Rule 2 (cliques).

If $\sigma$ is a clique, then $\sigma \in \mathrm{FP}(G)$ if and only if there is no node $k$ of $G$, $k \notin \sigma$, such that $i \to k$ for all $i \in \sigma$. In other words, $\sigma \in \mathrm{FP}(G)$ if and only if $\sigma$ is a target-free clique. If $\sigma \in \mathrm{FP}(G)$, the corresponding fixed point is stable.

Rule 3 (cycles).

If $\sigma$ is a cycle, then $\sigma \in \mathrm{FP}(G)$ if and only if there is no node $k$ of $G$, $k \notin \sigma$, such that $k$ receives two or more edges from $\sigma$. If $\sigma \in \mathrm{FP}(G)$, the corresponding fixed point is unstable.

Rule 4 (sources).

(i) If $\sigma$ contains a source $j$ of $G|_\sigma$, with $j \to k$ for some $k \in [n]$, then $\sigma \notin \mathrm{FP}(G)$. (ii) Suppose $j \notin \sigma$, but $j$ is a source in $G$. Then $\sigma \in \mathrm{FP}(G|_{\sigma \cup \{j\}})$ if and only if $\sigma \in \mathrm{FP}(G|_\sigma)$.

Rule 5 (targets).

(i) If $\sigma$ has target $k$, with $k \in \sigma$ and $k \not\to j$ for some $j \in \sigma$ ($j \neq k$), then $\sigma \notin \mathrm{FP}(G|_\sigma)$ and thus $\sigma \notin \mathrm{FP}(G)$. (ii) If $\sigma$ has target $k \notin \sigma$, then $\sigma \notin \mathrm{FP}(G|_{\sigma \cup \{k\}})$ and thus $\sigma \notin \mathrm{FP}(G)$.

Rule 6 (sinks).

If $G$ has a sink $s$, then $\{s\} \in \mathrm{FP}(G)$.

Rule 7 (DAGs).

If $G$ is a directed acyclic graph with sinks $s_1, \ldots, s_m$, then $\mathrm{FP}(G) = \{\sigma \subseteq \{s_1, \ldots, s_m\} \mid \sigma \neq \emptyset\}$, the set of all unions of sinks.

Rule 8 (parity).

For any $G$, $|\mathrm{FP}(G)|$ is odd.

In many cases, particularly for small graphs, our graph rules are complete enough that they can be used to fully work out $\mathrm{FP}(G)$. In such cases, $\mathrm{FP}(G)$ is guaranteed to be parameter-independent (since the graph rules do not depend on $\varepsilon$ and $\delta$). As an example, consider the graph in Figure 13A; we will show that $\mathrm{FP}(G)$ is completely determined by graph rules. Going through the possible subsets of different sizes, we find that the only singleton supports are the sinks of the graph. Using Rules 1, 2, and 4, we see that the only supports of size two are a clique and an independent set. A crucial ingredient for determining the fixed point supports of the next sizes is the sinks rule, which pins down the only supports of these sizes. Finally, notice that the total number of fixed points found so far is odd. Using Rule 8 (parity), we can thus conclude that there is no fixed point of full support, that is, with $\sigma = [n]$. This completely determines $\mathrm{FP}(G)$; moreover, the result is parameter-independent because it was determined purely from graph rules.

We leave it as an exercise to use graph rules to work out $\mathrm{FP}(G)$ for the graphs in Figures 13B and 13C. For the graph in C, it is necessary to appeal to a more general rule for uniform in-degree subgraphs, which we review next.

Rules 1–7, and many more, all emerge as corollaries of more general rules. In the next few subsections, we will introduce the uniform in-degree rule, graphical domination, and simply-embedded subgraphs. These results form part of a collection of elementary graph rules, from which all other known graph rules can be derived. A complete list of elementary graph rules can be found in [CM23].

5.2. Uniform in-degree rule

It turns out that Rules 1, 2, and 3 (for independent sets, cliques, and cycles) are all corollaries of a single rule for graphs of uniform in-degree.

Definition 5.1.

We say that $G|_\sigma$ has uniform in-degree $d$ if every node $i \in \sigma$ has $d$ incoming edges from within $G|_\sigma$.

Note that an independent set has uniform in-degree $d = 0$, a cycle has uniform in-degree $d = 1$, and an $n$-clique has uniform in-degree $d = n - 1$. But, in general, uniform in-degree graphs need not be symmetric. For example, Figure 13A contains an induced subgraph of uniform in-degree that is not symmetric.

Theorem 5.2 (CGM19).

Let $G|_\sigma$ be an induced subgraph of $G$ with uniform in-degree $d$. For each $k \notin \sigma$, let $d_k$ denote the number of edges $i \to k$ with $i \in \sigma$. Then $\sigma \in \mathrm{FP}(G|_\sigma)$, and

$$\sigma \in \mathrm{FP}(G|_{\sigma \cup \{k\}}) \iff d_k \leq d.$$

In particular, $\sigma \in \mathrm{FP}(G)$ if and only if there does not exist $k \notin \sigma$ such that $d_k > d$.

Uniform in-degree fixed points are also uniform in value. If $G|_\sigma$ has uniform in-degree $d$, then the fixed point supported on $\sigma$ has the same value for all entries in $\sigma$ [CGM19, Lemma 18]:

$$x_i^* = \frac{\theta}{1 + d(1-\varepsilon) + (|\sigma| - 1 - d)(1+\delta)} \quad \text{for all } i \in \sigma.$$

Interestingly, $x_i^* = x_j^*$ for all $i, j \in \sigma$, even for uniform in-degree graphs that are not symmetric.
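Theorem 5.2 is straightforward to check computationally. The helpers below are our own construction (using the adjacency convention adj[i][j] = True for an edge j -> i) and test the survival condition for a uniform in-degree support.

```python
def uniform_in_degree(adj, sigma):
    """Return d if G|sigma has uniform in-degree d, else None."""
    degs = {sum(adj[i][j] for j in sigma if j != i) for i in sigma}
    return degs.pop() if len(degs) == 1 else None

def survives_uniform(adj, sigma):
    """Theorem 5.2: a uniform in-degree support sigma is in FP(G) iff
    no outside node k receives more than d edges from sigma."""
    d = uniform_in_degree(adj, sigma)
    if d is None:
        raise ValueError("G|sigma is not uniform in-degree")
    outside = [k for k in range(len(adj)) if k not in sigma]
    return all(sum(adj[k][i] for i in sigma) <= d for k in outside)
```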

5.3. Graphical domination

More generally, fixed points can have very different values across neurons. However, there is some level of “graphical balance” that is required of $G|_\sigma$ for any fixed point support $\sigma$. For example, suppose $\sigma$ contains a pair of neurons $j, k$ with the property that all neurons of $\sigma$ sending edges to $j$ also send edges to $k$, and $j \to k$ but $k \not\to j$. Then $\sigma$ cannot be a fixed point support: $k$ is receiving a strict superset of the inputs to $j$, and this imbalance rules out their ability to coexist in the same fixed point support. This motivates the following definition.

Definition 5.3.

We say that $k$ graphically dominates $j$ with respect to $\sigma$ in $G$ if the following three conditions all hold:

(1)

For each $i \in \sigma \setminus \{j, k\}$, if $i \to j$ then $i \to k$.

(2)

If $j \in \sigma$, then $j \to k$.

(3)

If $k \in \sigma$, then $k \not\to j$.

We refer to this as “inside-in” domination if $j, k \in \sigma$ (see Figure 14A). In this case, we must have $j \to k$ and $k \not\to j$. The remaining cases are shown in Figure 14B-D.

Figure 14.

Graphical domination: four cases. In all cases, $k$ graphically dominates $j$ with respect to $\sigma$. In particular, the set of vertices of $\sigma$ sending edges to $k$ (red ovals) always contains the set of vertices sending edges to $j$ (blue ovals).


What graph rules does domination give us? Intuitively, when inside-in domination is present, the “graphical balance” necessary to support a fixed point is violated, and so $\sigma \notin \mathrm{FP}(G)$. When $k \notin \sigma$ outside-in dominates $j \in \sigma$, again there is an imbalance, and this time it guarantees that neuron $k$ turns on, since it receives all the inputs that were sufficient to turn on neuron $j$. Thus, there cannot be a fixed point with support $\sigma$, since node $k$ will violate the off-neuron conditions. We can draw similar conclusions in the other cases of graphical domination as well, as Theorem 5.4 shows. This theorem was originally proven in [CGM19], but a more elementary proof of this result is given in [CM23].

Theorem 5.4 (CGM19).

Suppose $k$ graphically dominates $j$ with respect to $\sigma$ in $G$. Then the following all hold:

(1)

(inside-in) If $j, k \in \sigma$, then $\sigma \notin \mathrm{FP}(G|_\sigma)$ and thus $\sigma \notin \mathrm{FP}(G)$.

(2)

(outside-in) If $k \notin \sigma$, $j \in \sigma$, then $\sigma \notin \mathrm{FP}(G|_{\sigma \cup \{k\}})$ and thus $\sigma \notin \mathrm{FP}(G)$.

(3)

(inside-out) If $k \in \sigma$, $j \notin \sigma$, then $\sigma \in \mathrm{FP}(G|_\sigma) \Rightarrow \sigma \in \mathrm{FP}(G|_{\sigma \cup \{j\}})$.

(4)

(outside-out) If $j, k \notin \sigma$, then $\sigma \in \mathrm{FP}(G|_{\sigma \cup \{k\}}) \Rightarrow \sigma \in \mathrm{FP}(G|_{\sigma \cup \{j\}})$.

To see how this theorem can be used to prove simpler graph rules, consider a graph where $\sigma$ contains a source $j$ of $G|_\sigma$ that has an edge $j \to k$ for some $k \in [n]$. Since $j$ is a source, it has no incoming edges from within $\sigma$. If $k \in \sigma$, then $k$ inside-in dominates $j$, and so $\sigma \notin \mathrm{FP}(G)$. If $k \notin \sigma$, then $k$ outside-in dominates $j$, and again $\sigma \notin \mathrm{FP}(G)$. Rule 4(i) immediately follows. We leave it as an exercise to prove Rules 4(ii), 5(i), 5(ii), and 7.
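Definition 5.3 is also easy to check mechanically; the sketch below (our own helper, same adjacency convention as before) can be paired with Theorem 5.4 to rule supports in or out.

```python
def dominates(adj, k, j, sigma):
    """Does k graphically dominate j with respect to sigma? (Definition 5.3)"""
    sigma = set(sigma)
    # (1) every i in sigma \ {j, k} sending an edge to j also sends one to k
    if any(adj[j][i] and not adj[k][i] for i in sigma - {j, k}):
        return False
    # (2) if j is in sigma, then j -> k
    if j in sigma and not adj[k][j]:
        return False
    # (3) if k is in sigma, then k does not send an edge to j
    if k in sigma and adj[j][k]:
        return False
    return True
```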

5.4. Simply-embedded subgraphs and covers

Finally, we introduce the concept of simply-embedded subgraphs.

Definition 5.5 (simply-embedded).

We say that a subgraph $G|_\tau$ is simply-embedded in $G$ if for each $k \notin \tau$, either

(i)

$k \to i$ for all $i \in \tau$, or

(ii)

$k \not\to i$ for all $i \in \tau$.

In other words, while $G|_\tau$ can have any internal structure, the rest of the network treats all nodes in $\tau$ equally (see Figure 15A). By abuse of notation, we sometimes say that the corresponding subset of vertices $\tau$ is simply-embedded in $G$.
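The simply-embedded condition is a purely local check on the outside nodes; here is a short test of Definition 5.5 (our own helper, same adjacency convention as before).

```python
def is_simply_embedded(adj, tau):
    """Definition 5.5: every node outside tau sends edges either to all
    of tau or to none of tau."""
    tau = set(tau)
    for k in range(len(adj)):
        if k in tau:
            continue
        edges_to_tau = {adj[i][k] for i in tau}   # does k -> i, for each i in tau?
        if len(edges_to_tau) > 1:                 # mixed treatment of tau
            return False
    return True
```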

Figure 15.

Simply-embedded subgraphs.


We have the following key lemma (see Figure 15B):

Lemma 5.6.

Let $G|_\tau$ be simply-embedded in $G$. Then for any $\sigma \in \mathrm{FP}(G)$,

$$\sigma \cap \tau \in \mathrm{FP}(G|_\tau) \cup \{\emptyset\}.$$

What happens if we consider more than one simply-embedded subgraph? It is not difficult to see that intersections of simply-embedded subgraphs are also simply-embedded. However, the union of two simply-embedded subgraphs is only guaranteed to be simply-embedded if the intersection is nonempty. If we have two or more simply-embedded subgraphs, $G|_{\tau_1}$ and $G|_{\tau_2}$, we know that for any $\sigma \in \mathrm{FP}(G)$, $\sigma$ must restrict to a fixed point support $\sigma_1 = \sigma \cap \tau_1$ and $\sigma_2 = \sigma \cap \tau_2$ (or the empty set) in each of those subgraphs. But when can we glue together such a $\sigma_1$ and $\sigma_2$ to produce a larger fixed point support in $G$?

Lemma 5.7 precisely answers this question.

Lemma 5.7 (pairwise gluing).

Suppose $G|_{\tau_1}$ and $G|_{\tau_2}$ are simply-embedded in $G$, and consider $\sigma_1 \in \mathrm{FP}(G|_{\tau_1})$ and $\sigma_2 \in \mathrm{FP}(G|_{\tau_2})$ that satisfy $\sigma_1 \cap \tau_2 = \sigma_2 \cap \tau_1$ (so that $\sigma_1, \sigma_2$ agree on the overlap $\tau_1 \cap \tau_2$). Then

$$\sigma_1 \cup \sigma_2 \in \mathrm{FP}(G|_{\tau_1 \cup \tau_2})$$

if and only if one of the following holds:

(i)

$\sigma_1 \in \mathrm{FP}(G|_{\tau_1 \cup \tau_2})$ and $\sigma_2 \in \mathrm{FP}(G|_{\tau_1 \cup \tau_2})$, or

(ii)

$\sigma_1 \notin \mathrm{FP}(G|_{\tau_1 \cup \tau_2})$ and $\sigma_2 \notin \mathrm{FP}(G|_{\tau_1 \cup \tau_2})$, or

(iii)

$\sigma_1 \cap \sigma_2 \neq \emptyset$.

6. Gluing Rules

So far we have seen a variety of graph rules and the elementary graph rules from which they are derived. These rules allow us to rule in and rule out potential fixed points in $\mathrm{FP}(G)$ from purely graph-theoretic considerations. In this section, we consider networks whose graph $G$ is composed of smaller induced subgraphs, $G|_{\tau_i}$, for $i = 1, \ldots, N$. What is the relationship between $\mathrm{FP}(G)$ and the fixed points of the components, $\mathrm{FP}(G|_{\tau_i})$?

It turns out we can obtain nice results if the induced subgraphs $G|_{\tau_i}$ are all simply-embedded in $G$. In this case, we say that $G$ has a simply-embedded cover.

Definition 6.1 (simply-embedded covers).

We say that $\{\tau_1, \ldots, \tau_N\}$ is a simply-embedded cover of $G$ if each $G|_{\tau_i}$ is simply-embedded in $G$, and for every vertex $j \in [n]$, there exists an $i$ such that $j \in \tau_i$. In other words, the $\tau_i$’s are a vertex cover of $G$. If the $\tau_i$’s are all disjoint, we say that $\{\tau_1, \ldots, \tau_N\}$ is a simply-embedded partition of $G$.

In the case that $G$ has a simply-embedded cover, Lemma 5.6 tells us that all “global” fixed point supports in $\mathrm{FP}(G)$ must be unions of “local” fixed point supports in the $\mathrm{FP}(G|_{\tau_i})$, since every $\sigma \in \mathrm{FP}(G)$ restricts to $\sigma \cap \tau_i \in \mathrm{FP}(G|_{\tau_i}) \cup \{\emptyset\}$. But what about the other direction?

Question 4.

When does a collection of local fixed point supports $\sigma_i$, with each nonempty $\sigma_i \in \mathrm{FP}(G|_{\tau_i})$, glue together to form a global fixed point support $\sigma = \bigcup_i \sigma_i \in \mathrm{FP}(G)$?

To answer this question, we develop some notions inspired by sheaf theory. For a graph $G$ on $n$ nodes, with a simply-embedded cover $\{\tau_1, \ldots, \tau_N\}$, we define the gluing complex as:

$$\mathcal{F}(G) = \Big\{\, \sigma = \bigcup_{i=1}^N \sigma_i \;\Big|\; \sigma_i \in \mathrm{FP}(G|_{\tau_i}) \cup \{\emptyset\} \text{ and } \sigma \cap \tau_i = \sigma_i \text{ for each } i \,\Big\}.$$

In other words, $\mathcal{F}(G)$ consists of all $\sigma$ that can be obtained by gluing together local fixed point supports $\sigma_i \in \mathrm{FP}(G|_{\tau_i})$. Note that in order to guarantee that $\sigma \cap \tau_i \in \mathrm{FP}(G|_{\tau_i}) \cup \{\emptyset\}$ for each $i$, it is necessary that the $\sigma_i$’s agree on overlaps (hence the last requirement). This means that $\sigma \in \mathcal{F}(G)$ is equivalent to:

$$\sigma \cap \tau_i \in \mathrm{FP}(G|_{\tau_i}) \cup \{\emptyset\} \quad \text{for each } i \in [N].$$

It will also be useful to consider the case where $\sigma \cap \tau_i$ is not allowed to be empty for any $i$. This is the subset of $\mathcal{F}(G)$ consisting of those $\sigma$ with $\sigma \cap \tau_i \in \mathrm{FP}(G|_{\tau_i})$ for every $i \in [N]$.

Translating Lemma 5.6 into the new notation yields the following:

Lemma 6.2.

A CTLN with graph $G$ and simply-embedded cover $\{\tau_1, \ldots, \tau_N\}$ satisfies

$$\mathrm{FP}(G) \subseteq \mathcal{F}(G).$$

The central question addressed by gluing rules (Question 4) thus translates to: What elements of $\mathcal{F}(G)$ are actually in $\mathrm{FP}(G)$?

Our strategy to address this question will be to identify architectures where we can iterate the pairwise gluing rule, Lemma 5.7. Iteration is possible in a simply-embedded cover provided the unions at each step, $\tau_1 \cup \cdots \cup \tau_i$, are themselves simply-embedded (this may depend on the order). Fortunately, this is the case for several types of natural constructions, including disjoint unions and clique unions, which we consider next. It also holds for connected unions, which are introduced in [CM23]. Finally, we will examine the case of cyclic unions, where pairwise gluing rules cannot be iterated, but for which we find an equally clean characterization of $\mathrm{FP}(G)$.

6.1. Disjoint, clique, and cyclic unions

The following graph constructions all arise from simply-embedded partitions.

Definition 6.3.

Consider a graph $G$ with induced subgraphs $G|_{\tau_1}, \ldots, G|_{\tau_N}$ corresponding to a vertex partition $[n] = \tau_1 \cup \cdots \cup \tau_N$. Then

$G$ is a disjoint union if there are no edges between $\tau_i$ and $\tau_j$ for $i \neq j$. (See Figure 16A.)

$G$ is a clique union if it contains all possible edges between $\tau_i$ and $\tau_j$ for $i \neq j$. (See Figure 16B.)

$G$ is a cyclic union if it contains all possible edges from $\tau_i$ to $\tau_{i+1}$, for $i = 1, \ldots, N-1$, as well as all possible edges from $\tau_N$ to $\tau_1$, but no other edges between distinct components $\tau_i$, $\tau_j$. (See Figure 16C.)

Note that in each of these cases, $\{\tau_1, \ldots, \tau_N\}$ is a simply-embedded partition of $G$.
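These constructions are easy to realize on adjacency matrices. The helpers below are our own sketch (same convention, adj[i][j] = True for an edge j -> i); the resulting graphs can be passed to the earlier CTLN helpers.

```python
import numpy as np

def _block_diagonal(components):
    """Place the component adjacency matrices on the diagonal of one big matrix."""
    sizes = [len(c) for c in components]
    A = np.zeros((sum(sizes), sum(sizes)), dtype=bool)
    offset = 0
    for c, s in zip(components, sizes):
        A[offset:offset + s, offset:offset + s] = np.array(c, dtype=bool)
        offset += s
    return A, sizes

def clique_union(components):
    """All edges, in both directions, between every pair of distinct components."""
    A, sizes = _block_diagonal(components)
    comp_of = np.repeat(np.arange(len(sizes)), sizes)
    return A | (comp_of[:, None] != comp_of[None, :])

def cyclic_union(components):
    """All edges from component i to component i+1 (cyclically), and no other
    edges between distinct components."""
    A, sizes = _block_diagonal(components)
    offsets = np.cumsum([0] + sizes)
    N = len(sizes)
    for i in range(N):
        j = (i + 1) % N                                    # tau_i -> tau_{i+1}
        A[offsets[j]:offsets[j] + sizes[j],
          offsets[i]:offsets[i] + sizes[i]] = True
    return A
```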

Figure 16.

Disjoint unions, clique unions, and cyclic unions. In each architecture, the $\tau_i$ form a simply-embedded partition of $G$. Thick edges between components indicate that there are edges between every pair of nodes in the components.


Since the simply-embedded subgraphs in a partition are all disjoint, Lemma 5.7(i-ii) applies. Consequently, fixed point supports $\sigma_1$ and $\sigma_2$ will glue together if and only if either $\sigma_1$ and $\sigma_2$ both survive to yield fixed points of the larger network, or neither survives. For both disjoint unions and clique unions, it is easy to see that all larger unions of the form $\tau_1 \cup \cdots \cup \tau_i$ are themselves simply-embedded. We can thus iteratively use the pairwise gluing Lemma 5.7. For disjoint unions, Lemma 5.7(i) applies, yielding our first gluing theorem. Recall that the local supports $\sigma_i$ are allowed to be empty, so long as the full union is nonempty.

Theorem 6.4 (CGM19, Theorem 11).

If $G$ is a disjoint union of subgraphs $G|_{\tau_1}, \ldots, G|_{\tau_N}$, with $[n] = \tau_1 \cup \cdots \cup \tau_N$, then

$$\mathrm{FP}(G) = \Big\{\, \bigcup_{i=1}^N \sigma_i \;\Big|\; \sigma_i \in \mathrm{FP}(G|_{\tau_i}) \cup \{\emptyset\} \,\Big\} \setminus \{\emptyset\}.$$

On the other hand, for clique unions, we must apply Lemma 5.7(ii), which shows that only gluings involving a nonempty $\sigma_i$ from each component are allowed. Hence $\mathrm{FP}(G) = \{\bigcup_i \sigma_i \mid \sigma_i \in \mathrm{FP}(G|_{\tau_i}) \text{ for each } i\}$. Interestingly, the same result holds for cyclic unions, but the proof is different because the simply-embedded structure does not get preserved under unions, and hence Lemma 5.7 cannot be iterated. These results are combined in the next theorem.

Theorem 6.5 (CGM19, Theorems 12 and 13).

If $G$ is a clique union or a cyclic union of subgraphs $G|_{\tau_1}, \ldots, G|_{\tau_N}$, with $[n] = \tau_1 \cup \cdots \cup \tau_N$, then

$$\mathrm{FP}(G) = \Big\{\, \bigcup_{i=1}^N \sigma_i \;\Big|\; \sigma_i \in \mathrm{FP}(G|_{\tau_i}) \text{ for each } i \,\Big\}.$$

We end this section by revisiting core motifs. Recall that core motifs of CTLNs are subgraphs $G|_\sigma$ that support a unique fixed point, which has full support: $\mathrm{FP}(G|_\sigma) = \{\sigma\}$.

For small CTLNs, we have seen that core motifs are predictive of a network’s attractors PMMC22.

What can gluing rules tell us about core motifs? It turns out that we can precisely characterize these motifs for clique and cyclic unions.

Corollary 6.6.

Let $G$ be a clique union or a cyclic union of components $G|_{\tau_1}, \ldots, G|_{\tau_N}$. Then

$$\mathrm{FP}(G) = \{[n]\} \iff \mathrm{FP}(G|_{\tau_i}) = \{\tau_i\} \text{ for each } i \in [N].$$

In particular, $G$ is a core motif if and only if every component $G|_{\tau_i}$ is a core motif.

6.2. Modeling with cyclic unions

The power of graph rules is that they enable us to reason mathematically about the graph of a CTLN and make surprisingly accurate predictions about the dynamics. This is particularly true for cyclic unions, where the dynamics consistently appear to traverse the components in cyclic order. Consequently, these architectures are useful for modeling a variety of phenomena that involve sequential attractors. This includes the storage and retrieval of sequential memories, as well as CPGs responsible for rhythmic activity, such as locomotion YMSL05.

Recall that the attractors of a network tend to correspond to core motifs in $G$. Using Corollary 6.6, we can easily engineer cyclic unions that have multiple sequential attractors. For example, consider the cyclic union in Figure 17A, whose core motifs are all cycles of length 5 that contain exactly one node per component. For a standard choice of parameters, the CTLN yields a limit cycle (Figure 17B) corresponding to one such core motif, with sequential firing of a node from each component. By symmetry, there must be an equivalent limit cycle for every choice of five nodes, one from each layer, and thus the network is guaranteed to have one such limit cycle for each of these choices. Note that this network architecture, increased to seven layers, could serve as a mechanism for storing phone numbers in working memory (one node per layer for each possible digit).
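Using the cyclic_union and ctln_matrix sketches above, such a layered network can be assembled directly. The layer size and layer count below are illustrative assumptions (ten nodes per layer, one per digit, and seven layers for a seven-digit number), not specifications from the figure.

```python
# Each layer is an independent set of 10 nodes (no edges within a layer);
# the cyclic union adds all feedforward edges from each layer to the next.
layer = [[False] * 10 for _ in range(10)]
G_phone = cyclic_union([layer] * 7)
W, b = ctln_matrix(G_phone)
# Simulating (W, b) from a suitable initial condition should produce a limit
# cycle that activates one node per layer in sequence.
```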

Figure 17.

The phone number network. (A) A cyclic union with multiple neurons per layer (component), and all feedforward connections from one layer to the next. (B) A limit cycle for the corresponding CTLN.


As another application of cyclic unions, consider the graph in Figure 18A which produces the quadruped gait ‘bound’ (similar to gallop), where we have associated each of the four colored nodes with a leg of the animal. Notice that the clique between pairs of legs ensures that those nodes co-fire, and the cyclic union structure guarantees that the activity flows forward cyclically. A similar network was created for the ‘trot’ gait, with appropriate pairs of legs joined by cliques.

Figure 18.

A Central Pattern Generator circuit for quadruped motion. (A) (Left) A cyclic union architecture on six nodes that produces the ‘bound’ gait. (Right) The limit cycle corresponding to the bound gait. (B) The graph on eight nodes is formed from merging together architectures for the individual gaits, ‘bound’ and ‘trot’. Note that the positions of the two hind legs (LH, RH) are flipped for ease of drawing the graph.


Figure 18B shows a network in which both the ‘bound’ and ‘trot’ gaits can coexist, with the network selecting one pattern (limit cycle) over the other based solely on initial conditions. This network was produced by essentially overlaying the two architectures that would produce the desired gaits, identifying the two graphs along the nodes corresponding to each leg. Notice that within this larger network, the induced subgraphs for each gait are no longer perfect cyclic unions (since they include additional edges between pairs of legs), and are no longer core motifs. And yet the combined network still produces limit cycles that are qualitatively similar to those of the isolated cyclic unions for each gait. It is an open question when this type of merging procedure for cyclic unions (or other types of subnetworks) will preserve the original limit cycles within the larger network.

7. Conclusions

Recurrent network models such as TLNs have historically played an important role in theoretical neuroscience; they give mathematical grounding to key ideas about neural dynamics and connectivity, and provide concrete examples of networks that encode multiple attractors. These attractors represent the possible responses, e.g., stored memory patterns, of the network.

In the case of CTLNs, we have been able to prove a variety of results, such as graph rules, about the fixed point supports $\mathrm{FP}(G)$, yielding valuable insights into the attractor dynamics. Many of these results can be extended beyond CTLNs to more general families of TLNs, and potentially to other threshold nonlinearities. The reason lies in the combinatorial geometry of the hyperplane arrangements. In addition to the arrangements discussed in Section 2, there are closely related hyperplane arrangements given by the nullclines of TLNs, defined by $dx_i/dt = 0$ for each $i$. It is easy to see that fixed points correspond to intersections of nullclines, and thus the elements of $\mathrm{FP}(W, b)$ are completely determined by the combinatorial geometry of the nullcline arrangement. Intuitively, the combinatorial geometry of such an arrangement is preserved under small perturbations of $W$ and $b$. This allows us to extend CTLN results and study how $\mathrm{FP}(W, b)$ changes as we vary the TLN parameters $W$ and $b$. These ideas, including connections to oriented matroids, are further developed in [CM23] and references therein. [CM23] also lays out a number of open questions on graph rules, gluing rules, core motifs, and the relationship between fixed points and attractors.

Acknowledgments

We would like to thank Zelong Li, Nicole Sanderson, and Juliana Londono Alvarez for a careful reading of the manuscript. We also thank Caitlyn Parmelee, Caitlin Lienkaemper, Safaan Sadiq, Anda Degeratu, Vladimir Itskov, Christopher Langdon, Jesse Geneson, Daniela Egas Santander, Stefania Ebli, Alice Patania, Joshua Paik, Samantha Moore, Devon Olds, and Joaquin Castañeda for many useful discussions.

References

[BCRR21]
Andrea Bel, Romina Cobiaga, Walter Reartes, and Horacio G. Rotstein, Periodic solutions in threshold-linear networks and their entrainment, SIAM J. Appl. Dyn. Syst. 20 (2021), no. 3, 1177–1208, DOI 10.1137/20M1337831. MR4279921
[BF22]
T. Biswas and J. E. Fitzgerald, Geometric framework to predict structure from function in neural networks, Phys. Rev. Research, 4 (2022).
[CG83]
Michael A. Cohen and Stephen Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Systems Man Cybernet. 13 (1983), no. 5, 815–826, DOI 10.1016/S0166-4115(08)60913-9. MR730500
[CDI13]
Carina Curto, Anda Degeratu, and Vladimir Itskov, Encoding binary neural codes in networks of threshold-linear neurons, Neural Comput. 25 (2013), no. 11, 2858–2903, DOI 10.1162/NECO_a_00504. MR3136636
[CGM19]
Carina Curto, Jesse Geneson, and Katherine Morrison, Fixed points of competitive threshold-linear networks, Neural Comput. 31 (2019), no. 1, 94–155, DOI 10.1162/neco_a_01151. MR3898981
[CM16]
Carina Curto and Katherine Morrison, Pattern completion in symmetric threshold-linear networks, Neural Comput. 28 (2016), no. 12, 2825–2852, DOI 10.1162/neco_a_00869. MR3866422
[CM23]
C. Curto and K. Morrison, Graph rules for recurrent neural network dynamics: extended version, Available at https://arxiv.org/abs/2301.12638, 2023.
[DA01]
Peter Dayan and L. F. Abbott, Theoretical neuroscience: Computational and mathematical modeling of neural systems, Computational Neuroscience, MIT Press, Cambridge, MA, 2001. MR1985615
[HSM00]
R. H. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung, Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit, Nature, 405 (2000), 947–951.
[HSS03]
R. H. Hahnloser, H. S. Seung, and J. J. Slotine, Permitted and forbidden sets in symmetric threshold-linear networks, Neural Comput., 15 (2003), no. 3, 621–638.
[HR58]
H. K. Hartline and F. Ratliff, Spatial summation of inhibitory influence in the eye of Limulus and the mutual interaction of receptor units, J. Gen. Physiol., 41 (1958), 1049–1066.
[Hop82]
J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Nat. Acad. Sci. U.S.A. 79 (1982), no. 8, 2554–2558, DOI 10.1073/pnas.79.8.2554. MR652033
[KAY14]
M. M. Karnani, M. Agetsuma, and R. Yuste, A blanket of inhibition: functional inferences from dense inhibitory connectivity, Curr Opin Neurobiol, 26 (2014), 96–102.
[LBH09]
A. Luczak, P. Barthó, and K. D. Harris, Spontaneous events outline the realm of possible sensory responses in neocortical populations, Neuron, 62 (2009), no. 3, 413–425.
[MDIC16]
K. Morrison, A. Degeratu, V. Itskov, and C. Curto, Diversity of emergent dynamics in competitive threshold-linear networks, Preprint, arXiv:1605.04463, 2022.
[PLACM22]
Caitlyn Parmelee, Juliana Londono Alvarez, Carina Curto, and Katherine Morrison, Sequential attractors in combinatorial threshold-linear networks, SIAM J. Appl. Dyn. Syst. 21 (2022), no. 2, 1597–1630, DOI 10.1137/21M1445120. MR4444287
[PMMC22]
C. Parmelee, S. Moore, K. Morrison, and C. Curto, Core motifs predict dynamic attractors in combinatorial threshold-linear networks, PLOS ONE, 2022.
[SY12]
H. S. Seung and R. Yuste, Principles of Neural Science, McGraw-Hill Education/Medical, 5th edition, 2012.
[TSSM97]
M. Tsodyks, W. Skaggs, T. Sejnowski, and B. McNaughton, Paradoxical effects of external modulation of inhibitory interneurons, Journal of Neuroscience, 17 (1997), no. 11, 4382–4388.
[YMSL05]
R. Yuste, J. N. MacLean, J. Smith, and A. Lansner, The cortex as a central pattern generator, Nat. Rev. Neurosci., 6 (2005), 477–483.

Credits

All figures, including the opener, are courtesy of the authors.

Figures 6, 11, and 12 previously appeared in PMMC22. CC-By 2.0.

Photo of Carina Curto is courtesy of Carina Curto.

Photo of Katherine Morrison is courtesy of Woody Myers.