MONOIDAL WIDTH

Abstract. We introduce monoidal width as a measure of complexity for morphisms in monoidal categories. Inspired by well-known structural width measures for graphs, like tree width and rank width, monoidal width is based on a notion of syntactic decomposition: a monoidal decomposition of a morphism is an expression in the language of monoidal categories, where operations are monoidal products and compositions, that specifies this morphism. Monoidal width penalises the composition operation along "big" objects, while it encourages the use of monoidal products. We show that, by choosing the correct categorical algebra for decomposing graphs, we can capture tree width and rank width. For matrices, monoidal width is related to the rank. These examples suggest monoidal width as a good measure for structural complexity of processes modelled as morphisms in monoidal categories.


Introduction
In recent years, a current of research has emerged with a focus on the interaction of structure (especially algebraic, using category theory and related subjects) and power, that is, algorithmic and combinatorial insights stemming from graph theory, game theory and related subjects. Recent works include [ADW17, AS21, MS22].
The algebra of monoidal categories is a fruitful source of structure: it can be seen as a general process algebra of concurrent processes, featuring a sequential (;) as well as a parallel (⊗) composition. Serving as a process algebra in this sense, it has been used to describe artefacts of a computational nature as arrows of appropriate monoidal categories. Examples include Petri nets [FS18], quantum circuits [CK17, DKPvdW20], signal flow graphs [FS18, BSZ21], electrical circuits [CK22, BS21], digital circuits [GJL17], stochastic processes [Fri20, CJ19] and games [GHWZ18].
Given that the algebra of monoidal categories has proved its utility as a language for describing computational artefacts in various application areas, a natural question is to examine its relationship with power: can monoidal structure help us to design efficient algorithms? To begin to answer this question, let us consider a mainstay of computer science: divide-and-conquer algorithms. Such algorithms rely on the internal geometry of the global artefact under consideration to ensure the ability to divide, that is, to decompose it consistently into simpler components, inductively compute partial solutions on the components, and then recombine these local results to obtain a global solution.
Let us now return to systems described as arrows of monoidal categories. In applications, the parallel (⊗) composition typically means placing systems side by side with no explicit interconnections. On the other hand, the sequential (;) composition along an object typically means communication, resource sharing or synchronisation, the complexity of which is determined by the object along which the composition is performed. Based on examples in the literature, our basic motivating intuition is: an algorithmic problem on an artefact that is a '⊗' lends itself to a divide-and-conquer approach more easily than one that is a ';'. Moreover, the "size" of the object along which the ';' occurs matters; typically, the "larger" the object, the more work is needed in order to recombine results in any kind of divide-and-conquer approach. An example is the compositional reachability checking in Petri nets of Rathke et al. [RSS14]: calculating the sequential composition is exponential in the size of the boundary. Another recent example is the work of Master [Mas22] on a compositional approach to calculating shortest paths.
On the other hand, (monoidal) category theory equates different descriptions of systems. Consider what is known as the middle-four interchange, illustrated in Figure 1. Although monoidal category theory asserts that (f ⊗ f′) ; (g ⊗ g′) = (f ; g) ⊗ (f′ ; g′), considering the two sides of the equation as decomposition blueprints for a divide-and-conquer approach, the right-hand side is clearly preferable since it maximises parallelism by minimising the size of the boundary along which composition occurs. This, roughly speaking, is the idea of width: expressions in the language of monoidal categories are assigned a natural number that measures "how good" they are as decomposition blueprints. The monoidal width of an arrow is then the width of its most efficient decomposition. In concrete examples, arrows with low width lend themselves to efficient divide-and-conquer approaches, following a width-optimal expression as a decomposition blueprint.
The study of efficient decompositions of combinatorial artefacts is well established, especially in graph theory. A number of graph widths (by which we refer to related concepts like tree width, path width, branch width, cut width, rank width or twin width) have become known in computer science because of their relationship with algorithmic properties. All of them share a similar basic idea: in each case, a specific notion of legal decomposition is priced according to the most expensive operation involved, and the price of the cheapest decomposition is the width.
Perhaps the most famous of these is tree width, a measure of complexity for graphs that was independently defined by several authors [BB73, Hal76, RS86]. Every nonempty graph has a tree width, which is a natural number. Intuitively, a tree decomposition is a recipe for decomposing a graph into smaller subgraphs that form a tree shape. These subgraphs, when some of their vertices are identified, need to compose into the original graph, as shown in Figure 2.

Courcelle's theorem, which states that every property expressible in the monadic second order logic of graphs can be verified in linear time on graphs with bounded tree width, is probably the best known among several results that establish links with algorithms [Bod92, BK08, Cou90], thus illustrating its importance for computer science. Another important measure is rank width [OS06], a relatively recent development that has attracted significant attention in the graph theory community. A rank decomposition is a recipe for decomposing a graph into its single-vertex subgraphs by cutting along edges. The cost of a cut is the rank of the adjacency matrix that represents it, as illustrated in Figure 3. An intuition for rank width is that it is a kind of "Kolmogorov complexity" for graphs, with higher rank widths indicating that the connectivity data of the graph cannot be easily compressed. For example, while the family of cliques has unbounded tree width, their connectivity is rather simple: in fact, all cliques have rank width 1.
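The claim about cliques can be verified mechanically: for any bipartition of the vertices of a clique, the matrix recording adjacency across the cut is all-ones, hence has rank 1 over GF(2), the field in which rank width computes its ranks. The helper below is an illustrative sketch, not taken from the paper:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix, with each row encoded as an int bitmask."""
    rank = 0
    for i in range(len(rows)):
        row = rows[i]
        if row == 0:
            continue
        rank += 1
        pivot = row & -row                  # lowest set bit as pivot column
        for j in range(i + 1, len(rows)):
            if rows[j] & pivot:
                rows[j] ^= row              # clear the pivot column below
    return rank

# Cut matrices of the clique K_n: for a bipartition into k and n - k vertices,
# every pair across the cut is adjacent, so the matrix is all-ones with rank 1.
n = 6
for k in range(1, n):
    cut = [(1 << (n - k)) - 1 for _ in range(k)]
    assert gf2_rank(cut) == 1
```

By contrast, a cut matrix close to the identity has full rank, reflecting connectivity data that cannot be compressed.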
Contribution. Building on our conference paper [DLS22], our goals are twofold. Firstly, to introduce the concept of monoidal width and begin to develop techniques for reasoning about it.
Before describing concrete, technical contributions, let us take a bird's eye view. It is natural for the seasoned researcher to be sceptical of a new abstract framework that seeks to generalise known results. The best abstract approaches (i) simplify existing known arguments, (ii) clean up the research landscape by connecting existing notions, or (iii) introduce techniques that allow one to prove new theorems. This paper does not (yet) bring strong arguments in favour of monoidal width if one uses these three points as yardsticks. Our high-level, conceptual contribution is, instead, the fact that the algebra of monoidal categories (already used in several contexts in theoretical computer science) is a multi-purpose algebra for specifying decompositions of graph-like structures important for computer scientists. There are several ways of making this work, and making these monoidal algebras of "open graphs" explicit as monoidal categories is itself a valuable endeavour. Indeed, identifying a monoidal category automatically yields a particular notion of decomposition: the instance of monoidal width in the monoidal category of interest. This point of view therefore demystifies the ad hoc notions of decomposition that accompany each notion of width that we consider in this paper. Moreover, having an explicit algebra is also useful because it suggests a data structure, the expression in the language of monoidal categories, as a way of describing decompositions.
The results in this paper can be seen as a "sanity check" of these general claims, but also as taking the first technical steps towards points (i)-(iii) of the previous paragraph. To this end, we examine monoidal width in the presence of common structure, such as a coherent comultiplication on objects, and in a foundational setting such as the monoidal category of matrices. Secondly, we connect this approach with previous work by examining graph widths through the prism of monoidal width. The two widths we focus on are tree width and rank width. We show that both can be seen as instances of monoidal width. The interesting part of this endeavour is identifying the monoidal category, and thus the relevant "decomposition algebra" of interest.
Unlike the situation with graph widths, it does not make sense to talk about monoidal width per se, since it depends on the choice of underlying monoidal category and thus on a particular "decomposition algebra". The decomposition algebras that underlie tree and rank decompositions reflect their intuitive understanding. For tree width, this is a cospan category whose morphisms represent graphs with vertex interfaces, while for rank width it is a category whose morphisms represent graphs with edge interfaces, with adjacency matrices playing the role of tracking connectivity information within a graph. We show that the monoidal width of a morphism in these two categories is bounded, respectively, by the branch width (Theorem 3.34) and the rank width (Theorem 5.26) of the corresponding graph. In the first instance, this is enough to establish the connection between monoidal width and tree width, given that tree width and branch width are known to be closely related. A small technical innovation is the definition of intermediate inductive notions of branch (Definition 3.14) and rank (Definition 5.7) decompositions, equivalent to the original definitions via "global" combinatorial notions of graph decomposition. The inductive presentations are closer in spirit to the inductive definition of monoidal decomposition, and allow us to give direct proofs of the main correspondences.
String diagrams. String diagrams [JS91] are a convenient syntax for monoidal categories, where a morphism f : X → Y is depicted as a box with input and output wires. Morphisms in monoidal categories can be composed sequentially, using the composition of the category, and in parallel, using the monoidal structure. These two kinds of composition are reflected in the string diagrammatic syntax: the sequential composition f ; g is depicted by connecting the output wire of f with the input wire of g; the parallel composition f ⊗ f′ is depicted by drawing f on top of f′.
The advantage of this syntax is that all coherence equations for monoidal categories are trivially true when written with string diagrams. An example is the middle-four interchange law (f ⊗ f′) ; (g ⊗ g′) = (f ; g) ⊗ (f′ ; g′). These two expressions have a single representation in terms of string diagrams, as shown in Figure 1. The coherence theorem for monoidal categories [Mac78] ensures that string diagrams are a sound and complete syntax for morphisms in monoidal categories.
Related work. This paper contains the results of [DLS21] and [DLS22] with detailed proofs. We generalise the results of [DLS21] to undirected hypergraphs and provide a syntactic presentation of the subcategory of the monoidal category of cospans of hypergraphs on discrete objects. Previous syntactic approaches to graph widths are the work of Pudlák, Rödl and Savický [PRS88] and the work of Bauderon and Courcelle [BC87]. Their works consider different notions of graph decompositions, which lead to different notions of graph complexity. In particular, in [BC87], the cost of a decomposition is measured by counting shared names, which is closely related to penalising sequential composition as in monoidal width. Nevertheless, these approaches are specific to particular, concrete notions of graphs, whereas our work concerns the more general algebraic framework of monoidal categories.
Abstract approaches to width have received some attention recently, with a number of diverse contributions. Blume et al. [BBFK11], similarly to our work, use (the category of) cospans of graphs as a formal setting to study graph decompositions: indeed, a major insight of loc. cit. is that tree decompositions are tree-shaped diagrams in the cospan category, and that the original graph is reconstructed as a colimit of such a diagram. Our approach is more general, however, emphasising the relevance of the algebra of monoidal categories, of which cospan categories are just one family of examples.
The literature on comonads for game semantics characterises tree and path decompositions of relational structures (and graphs in particular) as coalgebras of certain comonads [ADW17, AS21, MS22, AM21, CD21]. Bumpus and Kocsis [BK21, Bum21] and, later, Bumpus, Kocsis and Master [BKM23] also generalise tree width to the categorical setting, although their approach is conceptually and technically removed from ours. Their work takes a combinatorial perspective on decompositions, following the classical graph theory literature. Given a shape of decomposition, called the spine in [BK21], a decomposition is defined globally as a functor out of that shape. This generalises the characterisation of tree width based on Halin's S-functions [Hal76]. In contrast, monoidal width is algebraic in flavour, following Bauderon and Courcelle's insights on tree decompositions [BC87]. Monoidal decompositions are syntax trees defined inductively and rely on the decomposition algebra given by monoidal categories.
Synopsis. The definition of monoidal width is introduced in Section 2, together with a worked-out example. In Section 3 we recover tree width by instantiating monoidal width in a suitable category of cospans of hypergraphs, which we recall in Section 3.3 and for which we provide a syntax in Section 3.4. Similarly, in Section 5 we recover rank width by instantiating monoidal width in a prop of graphs with boundaries where the connectivity information is stored in adjacency matrices, which we recall in Section 5.3. This motivates us to study monoidal width for matrices over the natural numbers in Section 4.

Monoidal width
We introduce monoidal width, a notion of complexity for morphisms in monoidal categories that relies on explicit syntactic decompositions based on the algebra of monoidal categories. We then proceed with a simple, yet useful, example of efficient monoidal decompositions in Section 2.1.
A monoidal decomposition of a morphism f is a binary tree where internal nodes are labelled with the operations of composition (;) or monoidal product (⊗), and leaves are labelled with "atomic" morphisms. A decomposition, when evaluated in the obvious sense, results in f. We do not assume that the set of atomic morphisms A is minimal: they are merely morphisms that do not necessarily need to be further decomposed. We assume that A contains enough atoms to have a decomposition for every morphism. In most cases, we will take A to contain all the morphisms.

Definition 2.1 (Monoidal decomposition). Let C be a monoidal category and A be a subset of its morphisms to which we refer as atomic. The set D_f of monoidal decompositions of f : A → B in C is defined inductively:
• (f) ∈ D_f if f ∈ A;
• (d_1 ⊗ d_2) ∈ D_f if d_1 ∈ D_{f_1}, d_2 ∈ D_{f_2} and f = f_1 ⊗ f_2;
• (d_1 ;_X d_2) ∈ D_f if d_1 ∈ D_g for some g : A → X, d_2 ∈ D_h for some h : X → B, and f = g ; h.

In general, a morphism can be decomposed in different ways, and decompositions that maximise parallelism are deemed more efficient. The monoidal width of a morphism is the cost of its cheapest monoidal decomposition.
Formally, each operation and atom in a decomposition is assigned a weight that determines the cost of the decomposition. This is captured by the concept of a weight function.

Definition 2.2. Let C be a monoidal category and let A be its atomic morphisms. A weight function for (C, A) is a function w : A ∪ {⊗} ∪ Obj(C) → N such that w(X ⊗ Y) = w(X) + w(Y) and w(⊗) = 0.
A prop is a strict symmetric monoidal category whose objects are natural numbers and whose monoidal product on objects is addition. If C is a prop, then, typically, we let w(1) := 1. The idea behind giving a weight to an object X ∈ C is that w(X) is the cost paid for composing along X.
Definition 2.3 (Monoidal width). Let w be a weight function for (C, A). Let f be in C and d ∈ D_f. The width of d is defined inductively as follows:
• wd((f)) := w(f);
• wd((d_1 ⊗ d_2)) := max{wd(d_1), w(⊗), wd(d_2)};
• wd((d_1 ;_X d_2)) := max{wd(d_1), w(X), wd(d_2)}.
The monoidal width of f is mwd(f) := min_{d ∈ D_f} wd(d).
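For concreteness, the inductive width computation can be sketched in code. The encoding of decompositions and the atom weights below are illustrative assumptions (a prop with w(n) = n), not the paper's formalism:

```python
# Decompositions as nested tuples: ("atom", name, arity, coarity),
# ("tensor", d1, d2), or ("compose", d1, d2, n) for composition along n wires.

def width(d):
    """wd((f)) = w(f); wd(d1 ⊗ d2) = max of the two sides;
    wd(d1 ;_n d2) = max of the two sides and the n wires being cut."""
    tag = d[0]
    if tag == "atom":
        _, _, arity, coarity = d
        return max(arity, coarity, 1)       # illustrative atom weight
    if tag == "tensor":
        return max(width(d[1]), width(d[2]))
    _, d1, d2, n = d
    return max(width(d1), width(d2), n)

# The two sides of the middle-four interchange as decomposition blueprints:
f  = ("atom", "f",  1, 1)
f2 = ("atom", "f'", 1, 1)
g  = ("atom", "g",  1, 1)
g2 = ("atom", "g'", 1, 1)
lhs = ("compose", ("tensor", f, f2), ("tensor", g, g2), 2)      # (f ⊗ f') ; (g ⊗ g')
rhs = ("tensor", ("compose", f, g, 1), ("compose", f2, g2, 1))  # (f ; g) ⊗ (f' ; g')
assert width(lhs) == 2 and width(rhs) == 1
```

The right-hand blueprint is cheaper, matching the discussion of Figure 1: it cuts along single wires only.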
Example 2.4. Let f : 1 → 2 and g : 2 → 1 be morphisms in a prop such that mwd(f) = mwd(g) = 2. The figure below represents a monoidal decomposition of f ; (f ⊗ f) ; (g ⊗ g) ; g. Indeed, taking advantage of the string diagrammatic syntax, decompositions can be illustrated by enhancing string diagrams with additional annotations that indicate the order of decomposition. Throughout this paper, we use thick yellow dividing lines for this purpose.
Given that the width of a decomposition is the cost of its most expensive operation or atom, the above decomposition has width 2, as compositions occur along at most 2 wires.
Example 2.5. With the data of Example 2.4, define a family of morphisms h_n : 1 → 1 inductively as h_0 := f ;_2 g and h_{n+1} := f ;_2 (h_n ⊗ h_n) ;_2 g. Each h_n has a decomposition whose root node is the composition along the middle wires of the flattened diagram, and thus of width 2^{n+1}. However, following the schematic diagram above, we have that mwd(h_n) ≤ 2 for any n.

2.1. Monoidal width of copy. Although monoidal width is a very simple notion, reasoning about it in concrete examples can be daunting because of the combinatorial explosion in the number of possible decompositions of any morphism. For this reason, it is useful to examine some commonly occurring structures that one encounters "in the wild" and examine their decompositions. One such situation is when the objects are equipped with a coherent comultiplication structure.
Definition 2.6. Let C be a symmetric monoidal category, with symmetries σ_{X,Y} : X ⊗ Y → Y ⊗ X. We say that C has coherent copying if there is a class of objects ∆_C ⊆ Obj(C), closed under ⊗, such that every X ∈ ∆_C is equipped with a copy morphism Δ_X : X → X ⊗ X satisfying Δ_{X⊗Y} = (Δ_X ⊗ Δ_Y) ; (1_X ⊗ σ_{X,Y} ⊗ 1_Y). An example is any cartesian prop, where the copy morphisms are the universal ones given by the cartesian structure. For props with coherent copying, we assume that copy morphisms, symmetries and identities are atoms, Δ_X, σ_{X,Y}, 1_X ∈ A, and that their weight is given by w(Δ_X) := 2 · w(X), w(σ_{X,Y}) := w(X) + w(Y) and w(1_X) := w(X).
Example 2.7. Let C be a prop with coherent copying and suppose that 1 ∈ ∆_C. This implies that every n ∈ ∆_C, and there are copy morphisms Δ_n : n → 2n for all n. Let γ_{n,m} := (Δ_n ⊗ 1_m) ; (1_n ⊗ σ_{n,m}) : n + m → n + m + n. We can decompose γ_{n,m} in terms of γ_{n−1,m+1} (in the dashed box), Δ_1 and σ_{1,1}, cutting along at most n + 1 + m wires. This allows us to decompose Δ_n = γ_{n,0} cutting along only n + 1 wires. In particular, this means that mwd(Δ_n) ≤ n + 1.
The following lemma generalises the above example; it is used in the proofs of some results in later sections, namely Proposition 3.30 and Proposition 4.6.
Lemma 2.8. Let C be a symmetric monoidal category with coherent copying. Suppose that A contains Δ_X for all X ∈ ∆_C, and σ_{X,Y} and 1_X for all X, Y ∈ Obj(C). Let d be a monoidal decomposition of a morphism f. Then there is a monoidal decomposition C_I(d) of the morphism obtained from f by copying the objects in I, of bounded width.

Proof. Proceed by induction on the number n of objects being copied. If n = 0, then we are done because we keep the decomposition d and define C_I(d) := d. Suppose that the statement is true for any decomposition in which n objects are copied. Let γ_X(f) be the morphism in the above dashed box. By the induction hypothesis, there is a monoidal decomposition C_X(d) of γ_X(f) of bounded width: wd(C_X(d)) ≤ max{wd(d), w(Y) + w(X_{n+1} ⊗ Z) + (n + 1) · max_{i=1,...,n} w(X_i)}. We can use this decomposition to define a monoidal decomposition C_{X⊗X_{n+1}}(d). Note that the only cut that matters is the longest vertical one, the composition node along Y ⊗ X ⊗ X_{n+1} ⊗ Z ⊗ X_{n+1}, because all the other cuts are cheaper. The cost of this cut is w(Y ⊗ X ⊗ X_{n+1} ⊗ Z ⊗ X_{n+1}). With this observation and applying the induction hypothesis, we can compute the width of the decomposition C_{X⊗X_{n+1}}(d).

A monoidal algebra for tree width
Our first case study is tree width of undirected hypergraphs.We show that monoidal width in a suitable monoidal category of hypergraphs is within constant factors of tree width.We rely on branch width, a measure equivalent to tree width, to relate the latter with monoidal width.
After recalling tree and branch width and the bounds between them in Section 3.1, we define the intermediate notion of inductive branch decomposition in Section 3.2 and show its equivalence to that of branch decomposition. Separating this intermediate step allows a clearer presentation of the correspondence between branch decompositions and monoidal decompositions. Section 3.3 recalls the categorical algebra of cospans of hypergraphs and Section 3.4 introduces a syntactic presentation of them. Finally, Section 3.5 contains the main result of the present section, which relates inductive branch decompositions, and thus tree decompositions, with monoidal decompositions.
Classically, tree and branch widths have been defined for finite undirected multihypergraphs, which we simply call hypergraphs. These have undirected edges that connect sets of vertices, and they may have parallel edges.

Definition 3.1. A (multi)hypergraph G = (V, E) is given by a finite set of vertices V, a finite set of edges E and an adjacency function ends : E → ℘(V), where ℘(V) indicates the set of subsets of V. A subhypergraph of G is a hypergraph G′ = (V′, E′) such that V′ ⊆ V, E′ ⊆ E and ends′(e) = ends(e) for all e ∈ E′.

Definition 3.2. Given two hypergraphs G = (V, E) and H = (W, F), a hypergraph homomorphism α : G → H is given by a pair of functions α_V : V → W and α_E : E → F such that, for all edges e ∈ E, ends_H(α_E(e)) = α_V(ends_G(e)).
Hypergraphs and hypergraph homomorphisms form a category UHGraph, where composition and identities are given by component-wise composition and identities.
Note that the category UHGraph is not the functor category [{• → •}, kl(℘)]: their objects coincide but the morphisms are different.
Definition 3.3. The hyperedge size of a hypergraph G is defined as γ(G) := max_{e ∈ edges(G)} |ends(e)|. A graph is a hypergraph with hyperedge size 2.

Definition 3.4. A neighbour of a vertex v is a vertex w, distinct from v, for which there is an edge e such that v, w ∈ ends(e). A path in a hypergraph is a sequence of vertices (v_1, ..., v_n) such that, for every i = 1, ..., n − 1, v_i and v_{i+1} are neighbours. A cycle in a hypergraph is a path where the first vertex v_1 coincides with the last vertex v_n. A hypergraph is connected if there is a path between every two vertices. A tree is a connected acyclic hypergraph. A tree is subcubic if every vertex has at most three neighbours.
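Definitions 3.1-3.4 transcribe directly into code. The encoding below (edges as a dict from edge names to endpoint sets) is an illustrative assumption, not part of the paper:

```python
class Hypergraph:
    """A multihypergraph (V, E): a vertex set plus ends mapping each edge to its endpoints."""

    def __init__(self, vertices, ends):
        self.vertices = set(vertices)
        self.ends = {e: frozenset(vs) for e, vs in ends.items()}

    def hyperedge_size(self):
        """gamma(G) = max over edges of |ends(e)|; a graph has hyperedge size 2."""
        return max((len(vs) for vs in self.ends.values()), default=0)

    def neighbours(self, v):
        """Vertices w distinct from v that share an edge with v."""
        return {w for vs in self.ends.values() if v in vs for w in vs} - {v}

    def is_connected(self):
        """Search from an arbitrary vertex and check that every vertex is reached."""
        if not self.vertices:
            return True
        seen, todo = set(), [next(iter(self.vertices))]
        while todo:
            v = todo.pop()
            if v not in seen:
                seen.add(v)
                todo.extend(self.neighbours(v))
        return seen == self.vertices

    def is_subcubic(self):
        """Every vertex has at most three neighbours."""
        return all(len(self.neighbours(v)) <= 3 for v in self.vertices)

# A triangle with an extra 3-ary hyperedge covering all of its vertices.
G = Hypergraph({1, 2, 3}, {"a": {1, 2}, "b": {2, 3}, "c": {1, 3}, "h": {1, 2, 3}})
assert G.hyperedge_size() == 3 and G.is_connected() and G.is_subcubic()
```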
Definition 3.5. The set of binary trees with labels in a set Λ is defined inductively: a binary tree is either a leaf (λ) with label λ ∈ Λ, or a label λ ∈ Λ together with two binary trees T_1 and T_2 with labels in Λ, written (T_1 -λ- T_2).
3.1. Background: tree width and branch width. Intuitively, tree width measures "how far" a hypergraph G is from being a tree: a hypergraph is a tree iff it has tree width 1. Hypergraphs with tree width larger than 1 are not trees; for example, the family of cliques has unbounded tree width.
The definition relies on the concept of a tree decomposition. For Robertson and Seymour [RS86], a decomposition is itself a tree Y, each vertex of which is associated with a subhypergraph of G. Then G can be reconstructed from Y by identifying some vertices.

Definition 3.6 (Tree decomposition). A tree decomposition of a hypergraph G = (V, E) is a pair (Y, t), where Y is a tree and t : vertices(Y) → ℘(V) is a function such that:
(1) Every vertex is in one of the components: ⋃_{i ∈ vertices(Y)} t(i) = V.
(2) The endpoints of every edge appear together in some component: ∀e ∈ E ∃i ∈ vertices(Y), ends(e) ⊆ t(i).
(3) The components are glued in a tree shape: ∀i, j, k ∈ vertices(Y), if j lies on the path from i to k, then t(i) ∩ t(k) ⊆ t(j).
The width of a tree decomposition is the maximum number of vertices of the component subhypergraphs: wd(Y, t) := max_{i ∈ vertices(Y)} |t(i)|.
Example 3.7. Consider the hypergraph G and its tree decomposition (Y, t) below. Its width is 3, as its biggest component has three vertices.
Definition 3.8 (Tree width). The tree width of G is given by the min-max formula twd(G) := min_{(Y,t)} wd(Y, t).
Note that Robertson and Seymour subtract 1 from twd(G) so that trees have tree width 1.To minimise bureaucratic overhead, we ignore this convention.
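The conditions of a tree decomposition and its width translate directly into a checker. The encoding below is an illustrative assumption (the tree Y as a node set plus an edge list, the labelling t as a dict), and it follows the convention above of not subtracting 1:

```python
def tree_path(nodes, edges, i, k):
    """The unique path between vertices i and k of the tree (nodes, edges)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    parent, todo = {i: None}, [i]
    while todo:                      # search outwards from i
        v = todo.pop()
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                todo.append(w)
    path, v = [], k
    while v is not None:             # walk back from k to i
        path.append(v)
        v = parent[v]
    return path

def tree_decomposition_width(vertices, ends, Y_nodes, Y_edges, t):
    """Check conditions (1)-(3) of a tree decomposition and return max_i |t(i)|."""
    # (1) every vertex of G occurs in some component
    assert set().union(*t.values()) == set(vertices)
    # (2) the endpoints of every edge occur together in some component
    assert all(any(vs <= t[i] for i in Y_nodes) for vs in ends.values())
    # (3) components are glued in a tree shape
    for i in Y_nodes:
        for k in Y_nodes:
            for j in tree_path(Y_nodes, Y_edges, i, k):
                assert t[i] & t[k] <= t[j]
    return max(len(t[i]) for i in Y_nodes)

# The path graph 1-2-3-4 with components {1,2} - {2,3} - {3,4}: width 2
# (recall that we do not subtract 1, so trees get width 2 here).
ends = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}
t = {"x": {1, 2}, "y": {2, 3}, "z": {3, 4}}
assert tree_decomposition_width({1, 2, 3, 4}, ends, t.keys(), [("x", "y"), ("y", "z")], t) == 2
```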
We use branch width [RS91] as a technical stepping stone to relate monoidal width and tree width. Before presenting its definition, it is important to note that branch width and tree width are equivalent, i.e. they are within a constant factor of each other.

Theorem 3.9 [RS91, Theorem 5.1]. Branch width is equivalent to tree width: for every hypergraph G = (V, E), branch width and tree width bound each other up to a constant multiplicative factor.

Branch width relies on branch decompositions, which, intuitively, record in a tree a way of iteratively partitioning the edges of a hypergraph.
Each edge e in the tree Y determines a splitting of the hypergraph. More precisely, it determines a 2-partition of the leaves of Y, which, through b, determines a 2-partition {A_e, B_e} of the edges of G. This corresponds to a splitting of the hypergraph G into two subhypergraphs G_1 and G_2. Intuitively, the order of an edge e is the number of vertices that are glued together when joining G_1 and G_2 to get G. Given the partition {A_e, B_e} of the edges of G, we say that a vertex v of G separates A_e and B_e whenever there are an edge in A_e and an edge in B_e that are both adjacent to v.
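The number of separating vertices, and hence the order of an edge of the decomposition tree, can be computed directly from the 2-partition. Below, edges are encoded by a dict from edge names to endpoint sets (an illustrative choice):

```python
def order(ends, A, B):
    """Number of vertices separating edge sets A and B: a vertex separates
    the 2-partition {A, B} when it is adjacent to an edge on each side."""
    touched = lambda es: set().union(*(ends[e] for e in es)) if es else set()
    return len(touched(A) & touched(B))

ends = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}      # the path graph 1-2-3-4
assert order(ends, {"a"}, {"b", "c"}) == 1          # only vertex 2 separates
assert order(ends, {"a", "b"}, {"c"}) == 1          # only vertex 3 separates
```

In a branch decomposition, the width is the maximum such order over the edges of the tree.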
Definition 3.10 (Branch decomposition). A branch decomposition of a hypergraph G = (V, E) is a pair (Y, b) of a subcubic tree Y and a bijection b between the leaves of Y and the edges of G.

Definition 3.11 (Branch width). The order of an edge e of Y is the number of vertices of G that separate A_e and B_e. The width of a branch decomposition (Y, b) is the maximum order of the edges of Y, and the branch width of G is given by the min-max formula bwd(G) := min_{(Y,b)} wd(Y, b).

Example 3.12. If we start reading the decomposition from an edge in the tree Y, we can extend the labelling to internal vertices by labelling them with the glueing of the labels of their children.
In this example, there is only one vertex separating the first two subgraphs of the decomposition. This means that the corresponding edge in the decomposition tree has order 1.

3.2. Hypergraphs with sources and inductive branch decompositions.
We introduce a definition of decomposition that is intermediate between a branch decomposition and a monoidal decomposition. It adds to branch decompositions the algebraic flavour of monoidal decompositions by using an inductive data type, that of binary trees, to encode a decomposition. Our approach closely follows Bauderon and Courcelle's hypergraphs with sources [BC87] and the corresponding inductive definition of tree decompositions [Cou92]. Courcelle's result [Cou92, Theorem 2.2] is technically involved, as it translates a combinatorial description of a decomposition into a syntactic one. Our results in this and the next sections are similarly technically involved.
We recall the definition of hypergraphs with sources and introduce inductive branch decompositions of them. Intuitively, the sources of a hypergraph are marked vertices that are allowed to be "glued" together with the sources of another hypergraph. Thus, the equivalence between branch decompositions and inductive branch decompositions formalises the intuition that a branch decomposition encodes a way of dividing a hypergraph into smaller subhypergraphs by "cutting" along some vertices.

Definition 3.13 [BC87]. A hypergraph with sources is a pair Γ = (G, X) where G = (V, E) is a hypergraph and X ⊆ V is a subset of its vertices, called the sources (Figure 4). Given two hypergraphs with sources Γ = (G, X) and Γ′ = (G′, X′), we say that Γ′ is a subhypergraph of Γ whenever G′ is a subhypergraph of G.
Note that the sources of a subhypergraph Γ′ of Γ need not appear as sources of Γ, nor vice versa. In fact, if Γ is obtained by identifying all the sources of Γ_1 with some of the sources of Γ_2, the sources of Γ and Γ_1 will be disjoint.
Figure 4. Sources are marked vertices in the graph and are thought of as an interface that can be glued with that of another graph.
An inductive branch decomposition is a binary tree whose vertices carry subhypergraphs Γ′ of the ambient hypergraph Γ. The set of all such binary trees is defined by T_Γ ::= () | (T_Γ -Γ′- T_Γ), where Γ′ ranges over the non-empty subhypergraphs of Γ. An inductive branch decomposition has to satisfy additional conditions that ensure that "glueing" Γ_1 and Γ_2 together yields Γ.

Definition 3.14. Let Γ = ((V, E), X) be a hypergraph with sources. An inductive branch decomposition of Γ is T ∈ T_Γ where either:
• Γ is discrete (i.e. it has no edges) and T = ();
• Γ has one edge and T = (()-Γ-()). We will use the shorthand T = (Γ) in this case;
• Γ has more than one edge and T = (T_1 -Γ- T_2), where T_i is an inductive branch decomposition of a subhypergraph Γ_i = ((V_i, E_i), X_i) of Γ such that:
- The edges are partitioned between the two subhypergraphs: E_1 ∪ E_2 = E, E_1 ∩ E_2 = ∅ and V_1 ∪ V_2 = V;
- The sources are those vertices shared with the original sources as well as those shared with the other subhypergraph: X_i = (X ∪ (V_1 ∩ V_2)) ∩ V_i.
Note that ends(E_i) ⊆ V_i and that not all subtrees of a decomposition T are themselves decompositions: only those T′ that contain all the nodes of T that are below the root of T′. We call these full subtrees and indicate with λ(T′) the subhypergraph of Γ that T′ is a decomposition of.

Definition 3.15. Let T = (T_1 -Γ- T_2) be an inductive branch decomposition of Γ = (G, X), with T_1 and T_2 possibly both empty. Define the width of T inductively: wd(()) := 0 and wd(T) := max{wd(T_1), wd(T_2), |sources(Γ)|}. Expanding this expression, we obtain wd(T) = max_{T′ full subtree of T} |sources(λ(T′))|. The inductive branch width of Γ is defined by the min-max formula ibwd(Γ) := min_T wd(T).
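The width recursion of inductive branch decompositions can be run directly. The sketch below uses an illustrative encoding that stores only the number of sources of each node's label, rather than the subhypergraph itself:

```python
# An inductive branch decomposition as nested tuples: () for the empty tree,
# or (T1, n_sources, T2) where n_sources = |sources(Γ')| of the node's label.

def wd(T):
    """wd(()) = 0; wd((T1 -Γ- T2)) = max(wd(T1), wd(T2), |sources(Γ)|)."""
    if T == ():
        return 0
    T1, n_sources, T2 = T
    return max(wd(T1), n_sources, wd(T2))

leaf = ((), 2, ())          # a one-edge subhypergraph with 2 sources
T = (leaf, 1, leaf)         # glueing two such subhypergraphs along one shared vertex
assert wd(T) == 2
```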
We show that this definition is equivalent to the original one by exhibiting a mapping from branch decompositions to inductive branch decompositions that preserves the width, and vice versa. Showing that these mappings preserve the width is a bit involved because the order of the edges in a decomposition is defined "globally", while, for an inductive decomposition, the width is defined inductively. Thus, we first need to show that we can compute the inductive width globally.
Lemma 3.16. Let Γ = (G, X) be a hypergraph with sources and T be an inductive branch decomposition of Γ. Let T_0 be a full subtree of T and let T′ range over the full subtrees of T whose intersection with T_0 is empty. Then,

sources(λ(T_0)) = vertices(λ(T_0)) ∩ (X ∪ ⋃_{T′} vertices(λ(T′))). (3.1)

Proof. Proceed by induction on the decomposition tree T. If it is empty, T = (), then its subtree is also empty, T_0 = (), and we are done.
If T = (T_1 -Γ- T_2), then either T_0 is a full subtree of T_1, or it is a full subtree of T_2, or it coincides with T. If T_0 coincides with T, then their boundaries coincide and the statement is satisfied because sources(λ(T_0)) = X = V ∩ X. Now suppose that T_0 is a full subtree of T_1. Then, by applying the induction hypothesis, Equation (3.1), and using the fact that λ(T_0) ⊆ λ(T_1), we compute the sources of T_0.

Lemma 3.17. Let Γ = (G, X) be a hypergraph with sources and G = (V, E) be its underlying hypergraph. Let T be an inductive branch decomposition of Γ. Then, there is a branch decomposition I†(T) of G such that wd(I†(T)) ≤ wd(T).
Proof. A binary tree is, in particular, a subcubic tree. We can thus define Y to be the unlabelled tree underlying T. The label of a leaf l of T is a subhypergraph of Γ with one edge e_l, so there is a bijection b : leaves(Y) → E defined by b(l) := e_l.

3.3. Cospans of hypergraphs. The composition of two cospans X → E ← Y and Y → F ← Z, with middle legs g : Y → E and h : Y → F, is given by the pushout of g and h. Intuitively, the pushout of g and h "glues" E and F along the images of g and h (see Example 3.23). The monoidal product is given by component-wise coproducts.
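The glueing performed by the pushout can be made concrete: given two hypergraphs with disjoint vertex and edge names and a list of interface pairs (g(y), h(y)), identify the paired vertices with a union-find structure. This is an illustrative sketch, not the paper's construction:

```python
def glue(G1, G2, interface):
    """Glue hypergraphs G1, G2 = (vertices, ends), assumed disjoint, by
    identifying the pairs of vertices listed in `interface`."""
    parent = {}

    def find(v):                            # union-find with path compression
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in interface:                  # identify g(y) ~ h(y)
        parent[find(a)] = find(b)
    (V1, E1), (V2, E2) = G1, G2
    vertices = {find(v) for v in V1 | V2}
    ends = {e: frozenset(find(v) for v in vs) for e, vs in {**E1, **E2}.items()}
    return vertices, ends

# Glueing two single-edge graphs along one shared interface vertex
# yields a path with 3 vertices and 2 edges.
V, E = glue(({1, 2}, {"a": {1, 2}}), ({3, 4}, {"b": {3, 4}}), [(2, 3)])
assert len(V) == 3 and len(E) == 2
```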
We can construct the category of cospans of hypergraphs, Cospan(UHGraph), because the category of hypergraphs UHGraph has all finite colimits.

Proposition 3.21. The category UHGraph has all finite colimits and they are computed pointwise.
Proof. Let D : J → UHGraph be a diagram in UHGraph. Every object i in J determines a hypergraph G_i = (V_i, E_i, ends_i); write U_V, U_E : UHGraph → Set for the functors giving the vertices and the edges of a hypergraph. The category Set has all colimits, thus there are E_0 := colim(D ; U_E) and V_0 := colim(D ; U_V). Let c_i : V_i → V_0 and d_i : E_i → E_0 be the inclusions given by the colimits. Then, for any i, j ∈ Obj(J), the corresponding diagrams commute. By definition of hypergraph morphism, f_E ; ends_j = ends_i ; ℘(f_V), and, by functoriality of ℘, ℘(f_V) ; ℘(c_j) = ℘(c_i). This shows that ℘(V_0) is a cocone over D ; U_E with morphisms given by ends_i ; ℘(c_i). Then, there is a unique morphism ends : E_0 → ℘(V_0) that commutes with the cocone morphisms. This shows that the pairs (c_i, d_i) are hypergraph morphisms and, with the hypergraph defined by G_0 := (V_0, E_0, ends), form a cocone over D in UHGraph. Let H = (V_H, E_H, ends_H) be another cocone over D with morphisms (a_i, b_i) : G_i → H. We show that G_0 is the initial cocone by constructing a morphism (h_V, h_E) : G_0 → H and showing that it is the unique one commuting with the inclusions. By applying the functors U_E and U_V to the diagram above, we obtain the corresponding diagrams in Set, where h_V : V_0 → V_H and h_E : E_0 → E_H are the unique morphisms from the colimit cones.
℘(V H ) is a cocone over D ; U E in (at least) two ways: with morphisms d i ; ends ; ℘(h V ) and with morphisms b i ; ends H . By initiality of E 0 , there is a unique morphism E 0 → ℘(V H ) and it must coincide with both h E ; ends H and ends ; ℘(h V ).
This proves that (h V , h E ) is a hypergraph morphism. It is, moreover, unique because any other morphism with this property would have the same components. In fact, let (h ′ V , h ′ E ) be another such morphism. Then, its components must commute with the respective cocones in Set, by functoriality of U E and U V . By construction, V 0 and E 0 are the colimits of D ; U V and D ; U E , so there are unique morphisms to any other cocone over the same diagrams. This means that h ′ V = h V and h ′ E = h E , which shows the uniqueness of (h V , h E ).
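Composition in the cospan category above glues hypergraphs by a pushout of their vertex and edge sets. The following sketch computes a pushout of finite sets in Python via union-find; the function names and the tagged-pair encoding of the disjoint union are ours, not the paper's.

```python
# Pushout of two functions g : Y -> E and h : Y -> F in Set, computed as
# the disjoint union E + F quotiented by g(y) ~ h(y) for every y in Y.

def pushout(Y, E, F, g, h):
    # Elements of the disjoint union are tagged pairs.
    elems = [("E", e) for e in E] + [("F", f) for f in F]
    parent = {x: x for x in elems}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for y in Y:  # identify g(y) with h(y)
        parent[find(("E", g[y]))] = find(("F", h[y]))

    # The apex: one equivalence class per representative.
    classes = {}
    for x in elems:
        classes.setdefault(find(x), []).append(x)
    return list(classes.values())

# Glue two one-edge hypergraphs' vertex sets along a shared boundary vertex.
Y = ["y"]
E = ["u", "v"]        # vertices of the first hypergraph
F = ["v'", "w"]       # vertices of the second hypergraph
g = {"y": "v"}        # the boundary point lands on v
h = {"y": "v'"}       # ... and on v'
apex = pushout(Y, E, F, g, h)
print(len(apex))      # 3 vertices: u, {v ~ v'}, w
```

The identified pair {v, v′} becomes a single vertex of the apex, matching the intuition that cospan composition glues hypergraphs along their common boundary.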
3.4. String diagrams for cospans of hypergraphs. We introduce a syntax for the monoidal category Cospan(UHGraph) * , which we will use for proving some of the results in this section. We will show that the syntax for Cospan(UHGraph) * is given by the syntax of Cospan(Set) together with an extra "hyperedge" generator e n : n → 0 for every n ∈ N. This result is inspired by the analogous one for cospans of directed graphs [RSW05].
It is well-known that the category Cospan(Set) of finite sets and cospans of functions between them has a convenient syntax given by the walking special Frobenius monoid [Lac04].
Proposition 3.24 [Lac04]. The skeleton of the monoidal category Cospan(Set) is isomorphic to the prop sFrob, whose generators and axioms are in Figure 5.
In order to obtain cospans of hypergraphs from cospans of sets, we need to add generators that behave like hyperedges: they have n inputs and these inputs can be permuted without any effect.

Definition 3.25. Define UHedge to be the prop generated by a "hyperedge" generator e n : n → 0 for every n ∈ N such that permuting its inputs does not have any effect.

The syntax for cospans of graphs is defined as a coproduct of props. The category of cospans of finite sets embeds into the category of cospans of undirected hypergraphs, and in particular Cospan(Set) → Cospan(UHGraph) * . By Proposition 3.24, there is a functor sFrob → Cospan(Set), which gives us a functor S 1 : sFrob → Cospan(UHGraph) * .
For the functor S 2 , we need to define it on the generators of UHedge and show that it preserves the equations. We define S 2 (e n ) to be the cospan of graphs n → (n, {e}) ← ∅ given by 1 n : n → n and ¡ n : ∅ → n. With this assignment, we can freely extend S 2 to a monoidal functor UHedge → Cospan(UHGraph) * . In fact, it preserves the equations of UHedge because permuting the order of the endpoints of an undirected hyperedge has no effect by definition.
In order to instantiate monoidal width in Cospan(UHGraph) * , we need to define an appropriate weight function.
Definition 3.29. Let A be all morphisms of Cospan(UHGraph) * . Define the weight function as follows. For an object X, w(X) := |X|. For a morphism g ∈ A, w(g) := |V |, where V is the set of vertices of the apex of g, i.e. g = X → G ← Y and G = (V, E).

3.5. Tree width as monoidal width.
Here we show that monoidal width in the monoidal category Cospan(UHGraph) * , with the weight function given in Definition 3.29, is equivalent to tree width. We do this by bounding monoidal width from above by branch width plus 1 and from below by half of branch width (Theorem 3.34). We prove these bounds by defining maps from inductive branch decompositions to monoidal decompositions that preserve the width (Proposition 3.30), and vice versa (Proposition 3.33).
The idea behind the mapping from inductive branch decompositions to monoidal decompositions is to take a one-edge hypergraph for each leaf of the inductive branch decomposition and compose them following the structure of the decomposition tree. The 3-clique has a branch decomposition as shown on the left; the corresponding monoidal decomposition is shown on the right.

Proof. Let G = (V, E) and proceed by induction on the decomposition tree T . If the tree T = (Γ) is composed of only a leaf, then the label Γ of this leaf must have only one hyperedge with γ(G) endpoints and wd(T ) := |X|. We define the corresponding monoidal decomposition to also consist of only a leaf, B † (T ) := (g), and obtain the desired bound wd(B † (T )) = max{|X|, γ(G)} = max{wd(T ), γ(G)}.
If T = (T 1 -Γ-T 2 ), then, by definition of branch decomposition, T is composed of two subtrees T 1 and T 2 that give branch decompositions of Γ 1 = (G 1 , X 1 ) and Γ 2 = (G 2 , X 2 ). There are three conditions imposed by the definition on these subgraphs. Let g i : X i → ∅ be the cospan given by ι : X i → V i and corresponding to Γ i . Then, we can decompose g in terms of identities, the structure of Cospan(UHGraph) * , and its subgraphs g 1 and g 2 . By induction hypothesis, there are monoidal decompositions B † (T i ) of g i whose width is bounded: wd(B † (T i )) ≤ max{wd(T i ) + 1, γ(G i )}. By Lemma 2.8, there is a monoidal decomposition C(B † (T 1 )) of the morphism in the above dashed box of bounded width. Using this decomposition, we can define the monoidal decomposition given by the cuts in the figure above.
We can bound its width by applying Lemma 2.8, the induction hypothesis and the relevant definitions of width (Definition 3.11 and Definition 3.29).
The mapping B follows the same idea as the mapping B † but requires extra care: we need to keep track of which vertices are going to be identified in the final cospan. The function ϕ stores this information, thus it cannot identify two vertices that are not already in the boundary of the hypergraph. The proof of Proposition 3.33 proceeds by induction on the monoidal decomposition and constructs the corresponding branch decomposition. The inductive step relies on ϕ to identify which subgraphs of Γ correspond to the two subtrees in the monoidal decomposition, and, consequently, to define the corresponding branch decomposition.
Remark 3.31. Let f : A → C and g : B → C be two functions. The union of the images of f and g is the image of the coproduct map [f, g] : A + B → C, i.e. im(f ) ∪ im(g) = im([f, g]). The intersection of the images of f and g is the image of the pullback map ⟨f ∧ g⟩. We have that im(⟨f ; ϕ ∧ g ; ϕ⟩) ⊇ im(⟨f ∧ g⟩ ; ϕ). Then, im(⟨f ; ϕ ∧ g ; ϕ⟩) = im(⟨f ∧ g⟩ ; ϕ) because their difference is empty.

If the decomposition is just a leaf d = (h) and H has exactly one edge, F = {e}, then the corresponding branch decomposition is just a leaf as well, B(d) := (Γ), and we can compute its width. If the decomposition is just a leaf d = (h) and H has more than one edge, |F | > 1, then we can let B(d) be any inductive branch decomposition of Γ. Its width is not greater than the number of vertices in Γ, thus we can bound it. We can give the expressions of these morphisms and obtain the following diagram, where ι i : W i → W are the functions induced by the pushout and we define ϕ i := ι i ; ϕ.
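The first identity of Remark 3.31 can be checked concretely on finite sets. The following Python sketch (our encoding: the disjoint union A + B as tagged pairs, the copairing [f, g] as a case split) verifies im(f ) ∪ im(g) = im([f, g]) on a small example.

```python
# Checking Remark 3.31 on finite sets: the union of the images of
# f : A -> C and g : B -> C is the image of the copairing [f, g] : A + B -> C.

def image(fn, dom):
    return {fn(x) for x in dom}

A, B = [0, 1, 2], ["a", "b"]
f = lambda x: x % 2                     # f : A -> C with image {0, 1}
g = lambda x: 2 if x == "a" else 1      # g : B -> C with image {1, 2}

# The disjoint union A + B, encoded with tags, and the copairing [f, g].
AplusB = [("A", a) for a in A] + [("B", b) for b in B]
copair = lambda t: f(t[1]) if t[0] == "A" else g(t[1])

assert image(f, A) | image(g, B) == image(copair, AplusB)
print(image(copair, AplusB))  # {0, 1, 2}
```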
We show that ϕ 1 satisfies the glueing property in order to apply the induction hypothesis to ϕ 1 and H 1 . Similarly, we can show that ϕ 2 satisfies the same property. Then, we can apply the induction hypothesis to get an inductive branch decomposition B(d 1 ) of Γ 1 = ((im(ϕ 1 ), F 1 ), im(∂ 1 A ; ϕ 1 ) ∪ im(∂ 1 ; ϕ 1 )) and an inductive branch decomposition B(d 2 ) of Γ 2 , defined analogously. We check that we can define an inductive branch decomposition of Γ from B(d 1 ) and B(d 2 ).
Let ι i : W i → W be the inclusions induced by the monoidal product. Define ϕ i := ι i ; ϕ. We show that ϕ 1 satisfies the glueing property; similarly, we can show that ϕ 2 satisfies the same property. Then, we can apply the induction hypothesis to get inductive branch decompositions B(d 1 ) and B(d 2 ). We check that we can define an inductive branch decomposition of Γ from B(d 1 ) and B(d 2 ).
• F = F 1 ⊔ F 2 because the monoidal product is given by the coproduct in Set.
The computation applies Remark 3.31, a property of the coproduct, the induction hypothesis and Definition 3.29.

Monoidal width in matrices
We have just seen that instantiating monoidal width in a monoidal category of graphs yields a measure that is equivalent to tree width. Now, we turn our attention to rank width, which is more linear-algebraic in flavour as it relies on treating the connectivity of graphs by means of adjacency matrices. Thus, the monoidal category of matrices is a natural example to study first. We relate monoidal width in the category of matrices over the natural numbers, which we introduce in Section 4.1, to their rank (Section 4.2).
The rank of a matrix is the maximum number of its linearly independent rows (or, equivalently, columns). Conveniently, it can be characterised in terms of minimal factorisations. In order to instantiate monoidal width in Bialg, we need to define an appropriate weight function: the natural choice for a prop is to assign weight n to compositions along the object n.
Definition 4.5. The atoms for Bialg are its generators (Figure 6) together with the symmetry and identity on 1. The weight function w : A ∪ {⊗} ∪ Obj(Bialg) → N has w(n) := n, for any n ∈ N, and w(g) := max{m, n}, for g : n → m ∈ A.
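The factorisation characterisation of rank mentioned above can be made concrete: any m × n matrix A of rank r factors as A = M N with M of shape m × r and N of shape r × n. A sketch in Python via Gaussian elimination follows; the function name is ours, and we work over the rationals rather than the paper's natural-number semiring.

```python
# Rank of a matrix, together with a rank factorisation A = M N
# (M keeps the pivot columns of A, N keeps the nonzero rows of the
# reduced row-echelon form).  Exact arithmetic over the rationals.
from fractions import Fraction

def rank_factorisation(A):
    """Return (r, M, N) with A = M N, M of shape (m, r), N of shape (r, n)."""
    R = [[Fraction(x) for x in row] for row in A]
    m, n = len(R), len(R[0])
    pivots, row = [], 0
    for col in range(n):
        piv = next((i for i in range(row, m) if R[i][col] != 0), None)
        if piv is None:
            continue
        R[row], R[piv] = R[piv], R[row]
        R[row] = [x / R[row][col] for x in R[row]]   # normalise pivot to 1
        for i in range(m):                           # eliminate the column
            if i != row and R[i][col] != 0:
                R[i] = [a - R[i][col] * b for a, b in zip(R[i], R[row])]
        pivots.append(col)
        row += 1
    r = row
    N = R[:r]                                                 # echelon rows
    M = [[Fraction(A[i][c]) for c in pivots] for i in range(m)]  # pivot cols
    return r, M, N

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
r, M, N = rank_factorisation(A)
print(r)  # 2
# Check the factorisation A = M N.
prod = [[sum(M[i][k] * N[k][j] for k in range(r)) for j in range(3)]
        for i in range(3)]
assert prod == [[Fraction(x) for x in row] for row in A]
```

This is exactly the shape of factorisation f = g ; h through the middle object r that the monoidal-width bounds below exploit.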

4.2. Monoidal width of matrices.
We show that the monoidal width of a morphism in the category of matrices Bialg, with the weight function in Definition 4.5, is, up to 1, the maximum rank of its blocks. The overall strategy to prove this result is to first relate monoidal width directly with the rank (Proposition 4.8) and then to improve this bound by prioritising ⊗-nodes in a decomposition (Proposition 4.10). Combining these two results leads to Theorem 4.13. The shape of an optimal decomposition is given in Figure 7: a matrix where each A j = M j ; N j is a rank factorisation as in Lemma 4.1. The characterisation of the rank of a matrix in Lemma 4.1 hints at some relationship between the monoidal width of a matrix and its rank. In fact, we have Proposition 4.8,

Figure 7. Generic shape of an optimal decomposition in Bialg.
which bounds the monoidal width of a matrix by its rank. In order to prove this result, we first need to bound the monoidal width of a matrix by its domain and codomain, which is done in Proposition 4.6.
Proposition 4.6. Let P be a cartesian and cocartesian prop and let f : n → m be a morphism of P. Then, mwd(f ) ≤ min{m, n} + 1.

Proof. We proceed by induction on k = max{m, n}. There are three base cases.
• If n = 0, then f = ¡ m because 0 is initial by hypothesis, and we can compute its width: mwd(f ) = mwd(¡ 1 ⊗ · · · ⊗ ¡ 1 ) ≤ w(¡ 1 ) ≤ 1 ≤ 0 + 1.
• If m = 0, then f = ! n because 0 is terminal by hypothesis, and we can compute its width analogously.
For the induction steps, suppose that the statement is true for any f ′ : n ′ → m ′ with max{m ′ , n ′ } < k = max{m, n} and min{m ′ , n ′ } ≥ 1. There are three possibilities.
(1) If 0 < n < m = k, then f can be decomposed as shown below because n+1 is uniform and morphisms are copiable because P is cartesian by hypothesis.

For the second morphism, we apply the induction hypothesis because h
(2) If 0 < m < n = k, we can apply Item 1 to P op with the same assumptions on the set of atoms because P op is also cartesian and cocartesian. We obtain that mwd(f ) ≤ m + 1.
(3) If 0 < m = n = k, then f can be decomposed as in Item 1 and, instead of applying the induction hypothesis to bound mwd(h 1 ) and mwd(h 2 ), one applies Item 2.

We can apply the former result to Bialg and obtain Proposition 4.8 because the width of 1 × 1 matrices, which are numbers, is at most 2. This follows from the reasoning in Example 2.5, as we can write every natural number k : 1 → 1 as the following composition.

Proof. We prove the second inequality. Let d be a monoidal decomposition of f . By hypothesis, f is non-⊗-decomposable. Then, there are two options.
(1) If the decomposition is just a leaf, d = (f ), then f must be an atom. We can check the inequality for all the atoms.

We prove the first inequality. By Lemma 4.1, there are g : n → r and h : r → m such that f = g ; h with r = rk(Matf ). Then, r ≤ m, n by definition of rank. By Lemma 4.7, we can apply Proposition 4.6 to obtain that mwd(g) ≤ min{n, r} + 1 = r + 1 and mwd(h) ≤ min{m, r} + 1 = r + 1. Then, mwd(f ) ≤ max{mwd(g), r, mwd(h)} ≤ r + 1.
The bounds given by Proposition 4.8 can be improved when we have a ⊗-decomposition of a matrix, i.e. when we can write f = f 1 ⊗ · · · ⊗ f k , to obtain Proposition 4.10. The latter relies on Lemma 4.9, which shows that discarding inputs or outputs cannot increase the monoidal width of a morphism in Bialg.

Lemma 4.9.
If the decomposition starts with a tensor node, then we can use this decomposition to define a decomposition of each factor. By induction hypothesis, there are monoidal decompositions of the factors.

Proof. By hypothesis, d ′ is a monoidal decomposition of f . Then, there are g and h such that f 1 ⊗ f 2 = f = g ; h. By Proposition 4.8, there are monoidal decompositions d i of f i with wd(d i ) ≤ r i + 1, where r i := rk(Matf i ). By properties of the rank, r 1 + r 2 = rk(Matf ) and, by Lemma 4.1, rk(Matf ) ≤ k.
There are two cases: either both ranks are non-zero, or at least one is zero. If both r i > 0, then r 1 + r 2 ≥ max{r 1 , r 2 } + 1. If some r i = 0, then f i factors through the monoidal unit 0, and we may assume that it is f 1 . Then, we can express f 2 in terms of g and h.
Proof. By Proposition 4.10, there is a decomposition of f of the given form, where we can choose d i to be a minimal decomposition of f i . Then, the width of this decomposition bounds mwd(f ). Moreover, if the f i are not ⊗-decomposable, Proposition 4.8 also gives a lower bound on their monoidal width: rk(Mat(f i )) ≤ mwd(f i ); and we obtain that max i=1,...,k rk(Mat(f i )) ≤ mwd(f ).
The results so far show a way to construct efficient decompositions given a ⊗-decomposition of the matrix. However, we do not know whether ⊗-decompositions are unique. Proposition 4.12 shows that every morphism in Bialg has a unique ⊗-decomposition.

Proposition 4.12. Let C be a monoidal category whose monoidal unit 0 is both initial and terminal, and whose objects form a unique factorisation monoid. Let f be a morphism in C. Then f has a unique ⊗-decomposition.
Proof. Suppose f = f 1 ⊗ · · · ⊗ f m = g 1 ⊗ · · · ⊗ g n with the f i and the g j : Z j → W j non-⊗-decomposable. Suppose m ≤ n and proceed by induction on m. If m = 0, then f = 1 0 and g i = 1 0 for every i = 1, . . ., n because 0 is initial and terminal. Suppose m > 0. Then, we can rewrite f in terms of the g i :

By hypothesis,
Our main result in this section follows from Corollary 4.11 and Proposition 4.12, which can be applied to Bialg because 0 is both terminal and initial, and the objects, forming a free monoid, are a unique factorisation monoid.

Theorem 4.13. Let f = f 1 ⊗ · · · ⊗ f k be a morphism in Bialg with its unique ⊗-decomposition given by Proposition 4.12, and let r i = rk(Mat(f i )). Then max{r 1 , . . ., r k } ≤ mwd(f ) ≤ max{r 1 , . . ., r k } + 1.
Note that the identity matrix has monoidal width 1 and twice the identity matrix has monoidal width 2; these examples attain, respectively, the lower and upper bounds for the monoidal width of a matrix.

A monoidal algebra for rank width
After having studied monoidal width in the monoidal category of matrices, we are ready to introduce the second monoidal category of "open graphs", which relies on matrices to encode the connectivity of graphs. In this setting, we capture rank width: we show that instantiating monoidal width in this monoidal category of graphs yields a measure equivalent to rank width.
After recalling rank width in Section 5.1, we define the intermediate notion of inductive rank decomposition in Section 5.2, and show its equivalence to that of rank decomposition. As for branch decompositions, adding this intermediate step allows a clearer presentation of the correspondence between rank decompositions and monoidal decompositions. Section 5.3 recalls the categorical algebra of graphs with boundaries [CS15, DLHS21]. Finally, Section 5.4 contains the main result of the present section, which relates inductive rank decompositions, and thus rank decompositions, with monoidal decompositions.
Rank decompositions were originally defined for undirected graphs [OS06]. This motivates us to consider graphs rather than hypergraphs as in Section 3. As mentioned in Definition 3.3, a finite undirected graph is a finite undirected hypergraph with hyperedge size 2. More explicitly:

Definition 5.1. A graph G = (V, E) is given by a finite set of vertices V , a finite set of edges E and an adjacency function ends : E → ℘ ≤2 (V ), where ℘ ≤2 (V ) indicates the set of subsets of V with at most two elements. The same information recorded in the function ends can be encoded in an equivalence class of matrices, an adjacency matrix [G]: the sum of the entries (i, j) and (j, i) of this matrix records the number of edges between vertex i and vertex j; two adjacency matrices are equivalent when they encode the same graph.
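The passage from the adjacency function ends to an adjacency matrix can be sketched in code. The convention below (recording each edge above the diagonal, so that the sum of entries (i, j) and (j, i) counts the edges between i and j) is one representative of the equivalence class; the helper name is ours.

```python
# Building one representative adjacency matrix from the adjacency
# function ends : E -> subsets of V of size at most 2.

def adjacency_matrix(num_vertices, ends):
    """ends: a list of frozensets of size <= 2, one per edge."""
    G = [[0] * num_vertices for _ in range(num_vertices)]
    for e in ends:
        vs = sorted(e)
        if len(vs) == 1:          # a self-loop
            G[vs[0]][vs[0]] += 1
        else:                     # record above the diagonal
            i, j = vs
            G[i][j] += 1
    return G

# The 3-clique on vertices 0, 1, 2.
edges = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})]
G = adjacency_matrix(3, edges)
print(G)  # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```

Any other matrix with the same entry-wise sums G[i][j] + G[j][i] encodes the same graph.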
5.1. Background: rank width. Intuitively, rank width measures the amount of information needed to construct a graph by adding edges to a discrete graph. Constructing a clique requires little information: we add an edge between any two vertices. This is reflected in the fact that cliques have rank width 1.
Rank width relies on rank decompositions. In analogy with branch decompositions, a rank decomposition records in a tree a way of iteratively partitioning the vertices of a graph.

Definition 5.2 [OS06]. A rank decomposition (Y, r) of a graph G is given by a subcubic tree Y together with a bijection r : leaves(Y ) → vertices(G).
Each edge b in the tree Y determines a splitting of the graph: removing it determines a 2-partition of the leaves of Y , which, through r, determines a 2-partition {A b , B b } of the vertices of G. This corresponds to a splitting of the graph G into two subgraphs G 1 and G 2 . Intuitively, the order of an edge b is the amount of information required to recover G by joining G 1 and G 2 . Given the partition {A b , B b } of the vertices of G, we can record the edges in G between A b and B b in a matrix X b : if v i ∈ A b and v j ∈ B b , the entry (i, j) of the matrix X b is the number of edges between v i and v j . The order of b is the rank of X b ; note that the order of the two sets in the partition does not matter, as the rank is invariant under transposition. The width of a rank decomposition is the maximum order over the edges of the tree, and the rank width of a graph is the width of its cheapest decomposition: rwd(G) := min (Y,r) wd(Y, r).
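The order of a cut can be computed directly from its matrix X b . The sketch below builds X b from an edge list and a 2-partition and takes its rank over the rationals (one possible choice of base field; the helper names are ours). On the 3-clique, every cut has order 1, matching the remark above that cliques have rank width 1.

```python
# Order of a cut for rank width: build the matrix X_b of edge counts
# across a 2-partition {A_b, B_b} of the vertices and take its rank.
from fractions import Fraction

def rank(rows):
    R = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][col] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        for i in range(len(R)):
            if i != r and R[i][col] != 0:
                factor = R[i][col] / R[r][col]
                R[i] = [a - factor * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

def cut_rank(edges, A_b, B_b):
    A_b, B_b = sorted(A_b), sorted(B_b)
    X = [[sum(1 for e in edges if e == {u, v}) for v in B_b] for u in A_b]
    return rank(X) if X and X[0] else 0

# The 3-clique: every cut has order 1.
edges = [{0, 1}, {1, 2}, {0, 2}]
print(cut_rank(edges, {0}, {1, 2}))   # 1
print(cut_rank(edges, {0, 1}, {2}))   # 1
```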

5.2. Graphs with dangling edges and inductive definitions. We introduce graphs with dangling edges and inductive rank decompositions of them. These decompositions are an intermediate notion between rank decompositions and monoidal decompositions.
Similarly to the definition of inductive branch decomposition (Section 3.2), they add to rank decompositions the algebraic flavour of monoidal decompositions by using the inductive data type of binary trees to encode a decomposition. Intuitively, a graph with dangling edges is a graph equipped with some extra edges that connect some vertices in the graph to some boundary ports. This allows us to combine graphs with dangling edges by connecting some of their dangling edges. Thus, the equivalence between rank decompositions and inductive rank decompositions formalises the intuition that a rank decomposition encodes a way of dividing a graph into smaller subgraphs by "cutting" along some edges.
Definition 5.5. A graph with dangling edges Γ = ([G] , B) is given by an adjacency matrix G ∈ Mat N (k, k) that records the connectivity of the graph and a matrix B ∈ Mat N (k, n) that records the "dangling edges" connected to n boundary ports. We will sometimes write G ∈ adjacency(Γ) and B = sources(Γ).
Example 5.6. Two graphs with the same ports, as illustrated below, can be "glued" together.

A rank decomposition is, intuitively, a recipe for decomposing a graph into its single-vertex subgraphs by cutting along its edges. The cost of each cut is given by the rank of the adjacency matrix that represents it.
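A small sketch of the glueing of Example 5.6 follows. It is written under the assumption, ours rather than the paper's, that joining dangling edges pairs them off at each shared port, so that the number of new edges between vertex i of one graph and vertex j of the other is the (i, j) entry of B 1 B 2 ⊤; the function name and matrix encoding are also ours.

```python
# Glueing two graphs with dangling edges ([G1], B1) and ([G2], B2)
# that share n boundary ports.  ASSUMPTION: dangling edges meeting at
# the same port are joined, giving B1 * B2^T cross edges.

def glue(G1, B1, G2, B2):
    k1, k2 = len(G1), len(G2)
    n = len(B1[0]) if B1 else 0
    # Cross-edge counts between the two vertex sets.
    cross = [[sum(B1[i][p] * B2[j][p] for p in range(n)) for j in range(k2)]
             for i in range(k1)]
    # Block adjacency matrix of the glued graph.
    top = [G1[i] + cross[i] for i in range(k1)]
    bottom = [[cross[i][j] for i in range(k1)] + G2[j] for j in range(k2)]
    return top + bottom

# One vertex on each side, each with one dangling edge on a single port.
G1, B1 = [[0]], [[1]]
G2, B2 = [[0]], [[1]]
print(glue(G1, B1, G2, B2))  # [[0, 1], [1, 0]] -- a single edge
```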
Decompositions are elements of a tree data type, with nodes carrying subgraphs Γ ′ of the ambient graph Γ. In the following, Γ ′ ranges over the non-empty subgraphs of Γ. Given T ∈ T Γ , the label function λ takes a decomposition and returns the graph with dangling edges at the root: λ(T 1 -Γ-T 2 ) := Γ and λ((Γ)) := Γ.
The conditions in the definition of inductive rank decomposition ensure that, by glueing Γ 1 and Γ 2 together, we get Γ back. We will sometimes write Γ i = λ(T i ), G i = adjacency(Γ i ) and B i = sources(Γ i ). We can always assume that the rows of G and B are ordered like the leaves of T so that we can actually split B horizontally to get A 1 and A 2 .
Remark 5.8. The perspective on rank width and branch width given by their inductive definitions emphasises an operational difference between them: a branch decomposition gives a recipe to construct a graph from its one-edge subgraphs by identifying some of their vertices; on the other hand, a rank decomposition gives a recipe to construct a graph from its one-vertex components by connecting some of their "dangling" edges.

Definition 5.9. Let T = (T 1 -Γ-T 2 ) be an inductive rank decomposition of Γ = ([G] , B), with T i possibly both empty. Define the width of T inductively: if T is empty, wd(()) := 0; otherwise, wd(T ) := max{wd(T 1 ), wd(T 2 ), rk(B)}. Expanding this expression, we obtain wd(T ) = max T ′ rk(sources(λ(T ′ ))), where T ′ ranges over the full subtrees of T . The inductive rank width of Γ is defined by the min-max formula irwd(Γ) := min T wd(T ).
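The recursive width of Definition 5.9 is easy to compute once decomposition trees are encoded. A minimal sketch, with an encoding of ours (a node is a triple of left subtree, boundary rank, right subtree, and `None` is the empty decomposition):

```python
# Width of an inductive rank decomposition, following Definition 5.9:
# wd(()) = 0 and wd(T1 -Γ- T2) = max{wd(T1), wd(T2), rk(B)}.
# A tree is None (empty) or a triple (left, rk_B, right).

def wd(T):
    if T is None:                  # the empty decomposition
        return 0
    left, rk_B, right = T          # rk_B is the rank of this node's boundary
    return max(wd(left), wd(right), rk_B)

# A leaf is a node with two empty subtrees.
leaf = lambda r: (None, r, None)

# Root boundary of rank 1, one child subtree with leaves of ranks 2 and 1.
T = ((leaf(2), 1, leaf(1)), 1, leaf(1))
print(wd(T))  # 2
```

The result is the maximum boundary rank over all full subtrees, matching the expanded formula above.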
We show that the inductive rank width of Γ = ([G] , B) is the same as the rank width of G, up to the rank of the boundary matrix B.
Before proving the upper bound for inductive rank width, we need a technical lemma that relates the width of a graph with that of its subgraphs and allows us to compute it "globally".

Proof. Proceed by induction on the decomposition tree T . If it is just a leaf, T = (Γ), then Γ has at most one vertex, and Γ ′ = ∅ or Γ ′ = Γ. In both cases, the desired equality is true. If T = (T 1 -Γ-T 2 ), then, by the definition of inductive rank decomposition, λ(T ) splits accordingly; the rank is invariant under permuting the order of columns.

The above result allows us to relate the width of rank decompositions, which is computed "globally", to the width of inductive rank decompositions, which is computed "locally", with the following bound.

Proof. Proceed by induction on the number of edges of the decomposition tree Y to construct an inductive decomposition tree T in which every non-trivial full subtree T ′ has a corresponding edge b ′ in the tree Y . Suppose Y has no edges; then either G = ∅ or G has one vertex. In either case, we define an inductive rank decomposition with just a leaf labelled with Γ, I(Y, r) := (Γ). We compute its width by definition: wd(I(Y, r)) := rk(B) ≤ wd(Y, r) + rk(B).
If the decomposition tree has at least an edge, then it is composed of two subcubic trees: Y = Y 1 -b-Y 2 . Let V i be the set of vertices associated to Y i and G i := G[V i ] be the subgraph of G induced by the set of vertices V i . By induction hypothesis, there are inductive rank decompositions T i of Γ i = ([G i ] , B i ) in which every full subtree T ′ has an associated edge b ′ . Associate the edge b to both T 1 and T 2 so that every subtree of T has an associated edge in Y . We can use these decompositions to define an inductive rank decomposition T = (T 1 -Γ-T 2 ) of Γ. Let T ′ be a full subtree of T corresponding to Γ ′ = ([G ′ ] , B ′ ). By Lemma 5.10, we can compute the rank of its boundary matrix. Combining Proposition 5.11 and Proposition 5.12 we obtain:

Proposition 5.13. Inductive rank width is equivalent to rank width.

5.3. A prop of graphs.
Here we recall the algebra of graphs with boundaries and its diagrammatic syntax [DLHS21]. Graphs with boundaries are graphs together with some extra "dangling" edges that connect the graph to the left and right boundaries. They compose by connecting edges that share a common boundary. All the information about connectivity is handled with matrices.
Remark 5.14. The categorical algebra of graphs with boundaries is a natural choice for capturing rank width because it emphasises the operation of splitting a graph into parts that share some edges. This contrasts with the algebra of cospans of graphs (Section 3.3), in which graphs are split into subgraphs that share some vertices. The difference in the operation that is emphasised by these two algebras reflects the difference between rank width and tree or branch width pointed out in Remark 5.8.

Definition 5.15 [DLHS21]. A graph with boundaries g : n → m is given by an adjacency matrix [G] on its k vertices; matrices L and R that record the connectivity of the vertices with the left and right boundary; a matrix P ∈ Mat N (m, n) that records the passing wires from the left boundary to the right one; and a matrix F ∈ Mat N (m, m) that records the wires from the right boundary to itself. Graphs with boundaries are taken up to an equivalence making the order of the vertices immaterial. Let g, g ′ : n → m be graphs on k vertices, with g = ([G] , L, R, P, [F ]) and g ′ = ([G ′ ] , L ′ , R ′ , P, [F ]). The graphs g and g ′ are considered equal iff there is a permutation matrix σ ∈ Mat N (k, k) such that g ′ = ([σGσ ⊤ ] , σL, σR, P, [F ]).
Graphs with boundaries can be composed sequentially and in parallel [DLHS21], forming a symmetric monoidal category MGraph.
The prop Grph provides a convenient syntax for graphs with boundaries. It is obtained by adding a cup and a vertex generator to the prop of matrices Bialg (Figure 6). These equations mean, in particular, that the cup transposes matrices (Figure 8, left) and that we can express the equivalence relation of adjacency matrices as in Definition 5.1. The prop Grph is more expressive than graphs with dangling edges (Definition 5.5): its morphisms can have edges between the boundaries as well. In fact, graphs with dangling edges can be seen as morphisms n → 0 in Grph, where ! : n → 0 and ¡ : 0 → k are the unique maps to and from the terminal and initial object 0. We can now formalise the intuition of glueing graphs with dangling edges as explained in Example 5.6. The two graphs there correspond to g 1 and g 2 below left and middle. Their glueing is obtained by precomposing their monoidal product with a cup, i.e. ∪ 2 ; (g 1 ⊗ g 2 ), as shown below right.
Definition 5.19. Let the set of atomic morphisms A be the set of all the morphisms of Grph. The weight function w : A ∪ {⊗} ∪ Obj(Grph) → N is defined, on objects n, as w(n) := n; and, on morphisms g ∈ A, as w(g) := k, where k is the number of vertices of g.
Note that the monoidal width of g is bounded by the number k of its vertices; thus we could take as atoms all the morphisms with at most one vertex and the results would not change.
5.4. Rank width as monoidal width. We show that monoidal width in the prop Grph, with the weight function given in Definition 5.19, is equivalent to rank width. We do this by bounding monoidal width from above by twice rank width and from below by half of rank width (Theorem 5.26). We prove these bounds by defining maps from inductive rank decompositions to monoidal decompositions that preserve the width (Proposition 5.23), and vice versa (Proposition 5.25).
The upper bound (Proposition 5.23) is established by associating to each inductive rank decomposition a suitable monoidal decomposition. This mapping is defined inductively, given the inductive nature of both these structures. Given an inductive rank decomposition of a graph Γ, we can construct a decomposition of its corresponding morphism g as shown by the first equality in Figure 9. However, this decomposition is not optimal as it cuts along too many wires.

Proof. Proceed by induction on the decomposition tree T . If the tree T is just a leaf with label Γ, then we define the corresponding tree to be just a leaf with label Γ ′ : T ′ := (Γ ′ ). Clearly, T and T ′ have the same underlying tree structure. By Remark 5.20 and the fact that M has full rank, we can relate their widths: wd(T ′ ) ≤ wd(T ). If, moreover, M ′ has full rank, the inequality becomes an equality and wd(T ′ ) = wd(T ).
If T = (T 1 -Γ-T 2 ), then the adjacency and boundary matrices of Γ can be expressed in terms of those of its subgraphs Γ i . The boundary matrices D i of the subgraphs Γ i can also be expressed as a composition with a full-rank matrix. By induction hypothesis, there are inductive rank decompositions T ′ i with the same underlying tree structure as T 1 and T 2 , respectively. Moreover, their width is bounded, wd(T ′ i ) ≤ wd(T i ), and if, additionally, M ′ has full rank, wd(T ′ i ) = wd(T i ). Then, we can use these decompositions to define an inductive rank decomposition T ′ := (T ′ 1 -Γ ′ -T ′ 2 ) of Γ ′ because its adjacency and boundary matrices can be expressed in terms of those of Γ ′ i as in the definition of inductive rank decomposition.

With the above ingredients, we can show that rank width bounds monoidal width from above.
Proposition 5.23. Let Γ = ([G] , B) be a graph with dangling edges and g : n → 0 be the morphism in Grph corresponding to Γ. Let T be an inductive rank decomposition of Γ. Then, there is a monoidal decomposition R † (T ) of g such that wd(R † (T )) ≤ 2 • wd(T ).
Proof. Proceed by induction on the decomposition tree T . If it is empty, then G must also be empty, R † (T ) = () and we are done. If the decomposition tree consists of just one leaf with label Γ, then Γ must have one vertex, we can define R † (T ) := (g) to also be just a leaf, and bound its width: wd(T ) := rk(G) = wd(R † (T )).
By induction, we have inductive rank decompositions T ′ i of Γ ′ i such that wd(T ′ i ) ≤ wd(T i ). We defined Γ ′ i so that T ′ := (T ′ 1 -Γ ′ -T ′ 2 ) would be an inductive rank decomposition of Γ ′ . We can bound its width as desired. In order to obtain the subgraphs of the desired shape, we need to add some extra connections to the boundaries. This can be done thanks to Lemma 5.22, by taking M = 1. We are finally able to prove the lower bound for monoidal width.

Proof. Proceed by induction on the decomposition tree d. If it is just a leaf with label g, then its width is defined to be the number k of vertices of g, wd(d) := k. Pick any inductive rank decomposition of Γ and define R(d) := T . Surely, wd(T ) ≤ k := wd(d).

If d = (d 1 -; j -d 2 ), then g is the composition of two morphisms: g = g 1 ; g 2 , with g i = ([G i ] , L i , R i , P i , [F i ]). Given the partition of the vertices determined by g 1 and g 2 , we can decompose g in another way. Then, we have that F = F 2 + P 2 • F 1 • P ⊤ 2 . This corresponds to the following diagrammatic rewriting using the equations of Grph. In order to build an inductive rank decomposition of Γ, we need rank decompositions of Γ i = ([G i ] , B i ). We obtain these in three steps. Firstly, we apply induction to obtain inductive rank decompositions R(d i ) of Γ i = ([G i ] , (L i | R i )) such that wd(R(d i )) ≤ 2 • max{wd(d i ), rk(L i ), rk(R i )}. Secondly, we apply Lemma 5.24 to obtain an inductive rank decomposition T ′ 2 such that wd(T ′ 2 ) ≤ wd(R(d 2 )). Lastly, we can apply Lemma 5.22, with M = 1, to get inductive rank decompositions T i of Γ i such that wd(T 1 ) ≤ wd(R(d 1 )) and wd(T 2 ) ≤ wd(T ′ 2 ) ≤ wd(R(d 2 )). If k 1 , k 2 > 0, then we define R(d) := (T 1 -Γ-T 2 ), which is an inductive rank decomposition of Γ because the Γ i satisfy the two conditions in Definition 5.7. If k 1 = 0, then Γ = Γ 2 and we can define R(d) := T 2 . Similarly, if k 2 = 0, then Γ = Γ 1 and we can define R(d) := T 1 . In any case, we can compute the width of R(d) (if k i = 0 then T i = () and wd(T i ) = 0) using the inductive hypothesis, Lemma 5.24, Lemma 5.22, and the facts that rk(L) ≥ rk(L 1 ), rk(R) ≥ rk(R 2 ) and j ≥ rk(R 1 ), rk(L 2 ) because R 1 : j → k 1 and L 2 : j → k 2 .

Conclusion and future work
We defined monoidal width for measuring the complexity of morphisms in monoidal categories.
The concrete examples that we aimed to capture are tree width and rank width. In fact, we have shown that, by choosing suitable categorical algebras, monoidal width is equivalent to these widths. We have also related monoidal width to the rank of matrices over the natural numbers.
Our future goal is to leverage the generality of monoidal categories to study other examples outside the graph theory literature. In the same way that Courcelle's theorem gives fixed-parameter tractability of a class of problems on graphs with parameter tree width or rank width, we aim to obtain fixed-parameter tractability of a class of problems on morphisms of monoidal categories with parameter monoidal width. This result would rely on Feferman-Vaught-Mostowski-type theorems specific to the operations of a particular monoidal category C or a particular class of monoidal categories, which would ensure that the problems at hand respect the compositional structure of these categories.

Conjecture. Computing a compositional problem on the set of morphisms C k (X, Y ) with k-bounded monoidal width with a compositional algorithm is linear in w. Explicitly, computing the solution on f ∈ C k (X, Y ) takes O(c(k) • w(f )), for some function c : N → N that grows faster than exponentially.

Figure 2. A tree decomposition cuts the graph along its vertices.
Let (Y, b) be a branch decomposition of a hypergraph G and let e be an edge of Y. The order of e is the number of vertices that separate A_e and B_e: ord(e) := |ends(A_e) ∩ ends(B_e)|.

Definition 3.11 (Branch width). Given a branch decomposition (Y, b) of a hypergraph G = (V, E), define its width as wd(Y, b) := max_{e ∈ edges(Y)} ord(e). The branch width of G is given by the min-max formula: bwd(G) := min_{(Y,b)} wd(Y, b).
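As a sanity check of Definition 3.11, here is a short sketch (hypothetical encoding, not from the paper): a branch decomposition given as a binary tree over the edge set of G, with the order of each split computed from the vertices shared by both sides.

```python
def ends(edges):
    """Vertices touched by a set of edges (edges as tuples of vertices)."""
    return {v for e in edges for v in e}

def subtree_orders(tree, all_edges, acc):
    """tree: ('leaf', edge) or ('node', left, right). Records, for each
    proper subtree, the order of the split it induces on the edge set."""
    if tree[0] == 'leaf':
        here = {tree[1]}
    else:
        here = (subtree_orders(tree[1], all_edges, acc)
                | subtree_orders(tree[2], all_edges, acc))
    rest = all_edges - here
    if rest:
        acc.append(len(ends(here) & ends(rest)))
    return here

# triangle on vertices 1,2,3: every split of its 3 edges has order 2
edges = {(1, 2), (2, 3), (1, 3)}
acc = []
subtree_orders(('node', ('node', ('leaf', (1, 2)), ('leaf', (2, 3))),
                ('leaf', (1, 3))), edges, acc)
assert max(acc) == 2        # the triangle has branch width 2
```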
leaves(T) → edges(G) such that b(l) := e_l. Then (Y, b) is a branch decomposition of G and we can define I†(T) := (Y, b). By construction, e ∈ edges(Y) if and only if e ∈ edges(T). Let {v, w} = ends(e) with v the parent of w in T, and let T_w be the full subtree of T with root w. Let {E_v, E_w} be the (non-trivial) partition of E induced by e. Then, for the edge sets, E_w = edges(λ(T_w)) and E_v = ⋃_{T′ ≹ T_w} edges(λ(T′)), and, for the vertex sets, ends(E_w) ⊆ vertices(λ(T_w)) and ends(E_v) ⊆ ⋃_{T′ ≹ T_w} vertices(λ(T′)). Using these inclusions and applying Lemma 3.16, we can bound ord(e).

Lemma 3.18. Let Γ be a hypergraph with sources X and G = (V, E) be its underlying hypergraph. Let (Y, b) be a branch decomposition of G. Then, there is a branch decomposition I(Y, b) of Γ such that wd(I(Y, b)) ≤ wd(Y, b) + |X|.

Proof. Proceed by induction on |edges(Y)|. If Y has no edges, then either G has no edges and (Y, b) = (), or G has only one edge e_l and (Y, b) = (e_l). In either case, define I(Y, b) := (Γ), with wd(I(Y, b)) := |X| ≤ wd(Y, b) + |X|. If Y has at least one edge e, then Y = Y₁ -e- Y₂ with each Yᵢ a subcubic tree. Let Eᵢ = b(leaves(Yᵢ)) be the set of edges of G indicated by the leaves of Yᵢ. Then the tree I(Y, b) := (T₁ -Γ- T₂) is an inductive branch decomposition of Γ and, by applying Lemma 3.16, wd(I(Y, b)) ≤ max_{e ∈ edges(Y)} ord(e) + |X| := wd(Y, b) + |X|.

Combining Lemma 3.17 and Lemma 3.18 we obtain:

Proposition 3.19. Inductive branch width is equivalent to branch width.

3.3. Cospans of hypergraphs. We work with the category UHGraph of undirected hypergraphs and their homomorphisms (Definition 3.1). The monoidal category Cospan(UHGraph) of cospans is a standard choice for an algebra of "open" hypergraphs: hypergraphs are composed by glueing vertices [RSW05, GH97, Fon15]. We do not need the full expressivity of Cospan(UHGraph) and restrict to Cospan(UHGraph)*, where the objects are sets, seen as discrete hypergraphs.

Definition 3.20. A cospan in a category C is a pair of morphisms in C that share the same codomain, called the head: f : X → E and g : Y → E. When C has finite colimits, cospans form a symmetric monoidal category Cospan(C) whose objects are the objects of C and whose morphisms are cospans in C. More precisely, a morphism X → Y in Cospan(C) is an equivalence class of cospans X →f E ←g Y, up to isomorphism of the head of the cospan. The composition of two cospans is computed by pushout over their shared foot.
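Composition by glueing can be sketched concretely in FinSet, a simplification of Cospan(UHGraph) (hypothetical encoding: functions as dictionaries): the pushout of two cospans along their common foot is a union-find quotient on the disjoint union of the two heads.

```python
def compose(f, g, E, h, k, F, X, Y, Z):
    """Compose cospans X -f-> E <-g- Y and Y -h-> F <-k- Z by glueing
    g(y) ~ h(y) for each y in Y (a pushout in FinSet)."""
    parent = {('E', e): ('E', e) for e in E}
    parent.update({('F', x): ('F', x) for x in F})
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    for y in Y:
        parent[find(('E', g[y]))] = find(('F', h[y]))
    new_f = {x: find(('E', f[x])) for x in X}
    new_k = {z: find(('F', k[z])) for z in Z}
    classes = {find(v) for v in list(parent)}
    return new_f, new_k, classes

# glue two "edges" a-b and c-d along b ~ c: the new head has 3 points
X, Y, Z = [0], [0], [0]
E, F = ['a', 'b'], ['c', 'd']
f, g = {0: 'a'}, {0: 'b'}      # X -> E <- Y
h, k = {0: 'c'}, {0: 'd'}      # Y -> F <- Z
nf, nk, classes = compose(f, g, E, h, k, F, X, Y, Z)
assert len(classes) == 3 and nf[0] != nk[0]
```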

Figure 5. Generators and axioms of a special Frobenius monoid.
Definition 3.26. Define the prop FGraph as a coproduct: FGraph := sFrob + UHedge. We will show that every morphism g : n → m in FGraph corresponds to a morphism in Cospan(UHGraph)*.

Example 3.27. The string diagram below corresponds to a hypergraph with two left sources, one right source and two hyperedges. The number of endpoints of each hyperedge is given by the arity of the corresponding generator in the string diagram. Two hyperedges are adjacent to the same vertex when they are connected by the Frobenius structure in the string diagram, and a hyperedge is adjacent to a source when it is connected to an input or output in the string diagram.

Proposition 3.28. There is a symmetric monoidal functor S : FGraph → Cospan(UHGraph)*.

Proof. By definition, FGraph := sFrob + UHedge is a coproduct. Therefore, it suffices to define two symmetric monoidal functors S₁ : sFrob → Cospan(UHGraph)* and S₂ : UHedge → Cospan(UHGraph)* and set S := [S₁, S₂].
Then there are D(d) ∈ D_{f_D} and Z(d) ∈ D_{f_Z} such that wd(D(d)) ≤ wd(d) and wd(Z(d)) ≤ wd(d).

Proof. We show the inequality for f_D by induction on the decomposition d; the inequality for f_Z follows from the fact that Bialg coincides with its opposite category. If the decomposition has only one node, d = (f), then f is an atom and we can check these cases by hand in the table below. The first column shows the possibilities for f, while the second and third columns show the decompositions of f_D for k = 1 and k = 2. If d starts with a composition node, d = (d₁ -;- d₂), then f = f₁ ; f₂, with dᵢ a monoidal decomposition of fᵢ. By the induction hypothesis, there is a monoidal decomposition D(d₂) of f₂ ; (1_{m−k} ⊗ k) such that wd(D(d₂)) ≤ wd(d₂). We use these decompositions to define a monoidal decomposition D(d) := (D(d₁) -⊗- D(d₂)) of f_D.

Proposition 4.10. Let f : n → m in Bialg and d be a monoidal decomposition of f. Then, we can express f_m in terms of g_m, ..., g_n.

Definition 5.3 (Order of an edge). Let (Y, r) be a rank decomposition of a graph G and let b be an edge of Y. The order of b is the rank of the matrix associated to it: ord(b) := rk(X_b).

Definition 5.4 (Rank width). Given a rank decomposition (Y, r) of a graph G, define its width as wd(Y, r) := max_{b ∈ edges(Y)} ord(b). The rank width of G is given by the min-max formula: rwd(G) := min_{(Y,r)} wd(Y, r).
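A small sketch of Definitions 5.3 and 5.4 (assuming, as is standard for rank width, that ranks are taken over GF(2); the helper below is a hypothetical implementation via Gaussian elimination mod 2): the order of a cut of the vertex set is the rank of the corresponding off-diagonal block of the adjacency matrix.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination mod 2."""
    M = np.array(M) % 2
    r = 0
    for c in range(M.shape[1]):
        pivots = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not pivots:
            continue
        M[[r, pivots[0]]] = M[[pivots[0], r]]      # swap pivot row up
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2           # clear the column
        r += 1
    return r

# path graph 1-2-3-4: the cut {1,2} | {3,4} has cut matrix [[0,0],[1,0]],
# so its order is 1 (the path has rank width 1)
assert rank_gf2([[0, 0], [1, 0]]) == 1
```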

Definition 5.7. Let Γ = ([G], B) be a graph with dangling edges, where G ∈ Mat_N(k, k) and B ∈ Mat_N(k, n). An inductive rank decomposition of Γ is T ∈ T_Γ where either: Γ is empty and T = (); or Γ has one vertex and T = (Γ); or T = (T₁ -Γ- T₂) and Tᵢ ∈ T_{Γᵢ} are inductive rank decompositions of subgraphs Γᵢ = ([Gᵢ], Bᵢ) of Γ such that:
• the vertices are partitioned in two, [G] = [( G₁ C ; 0 G₂ )]; and
• the dangling edges are those to the original boundary and to the other subgraph, B₁ = (A₁ | C) and B₂ = (A₂ | C⊤), where B is split row-wise as B = (A₁ ; A₂).
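The width of an inductive rank decomposition can be sketched as follows (hypothetical encoding; numpy computes real ranks, whereas the paper works over the naturals, so this is only illustrative): each subtree contributes the rank of the matrix of connections from its vertices to the original boundary and to the rest of the graph.

```python
import numpy as np

def width(tree, G, B, n):
    """tree: a vertex index or a pair (left, right); G: n×n adjacency
    matrix; B: n×b boundary matrix. Returns the max, over all subtrees,
    of the rank of the connections from the subtree's vertices to the
    boundary and to the remaining vertices."""
    best = 0
    def visit(t):
        nonlocal best
        S = {t} if isinstance(t, int) else visit(t[0]) | visit(t[1])
        rows = sorted(S)
        rest = sorted(set(range(n)) - S)
        block = np.hstack([B[rows, :], G[np.ix_(rows, rest)]])
        if block.size:
            best = max(best, np.linalg.matrix_rank(block))
        return S
    visit(tree)
    return best

# path graph 0-1-2-3 with empty original boundary (b = 0 columns)
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
B = np.zeros((4, 0), dtype=int)
assert width(((0, 1), (2, 3)), G, B, 4) == 1
```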
Proposition 5.11. Let Γ = ([G], B) be a graph with dangling edges and (Y, r) be a rank decomposition of G. Then, there is an inductive rank decomposition I(Y, r) of Γ such that wd(I(Y, r)) ≤ wd(Y, r) + rk(B).
where A′, C_L and C_R are defined as in the statement of Lemma 5.10. The matrix A′ contains some of the rows of B, so its rank is bounded by the rank of B, and we obtain rk(B′) ≤ rk(B) + rk(C_L⊤ | C_R). The matrix (C_L⊤ | C_R) records the edges between the vertices in G′ and the vertices in the rest of G, which, by definition, are the edges that determine ord(b′). This means that the rank of this matrix is the order of the edge b′: rk(C_L⊤ | C_R) = ord(b′). With these observations, we can compute the width of T:

max_{T′≤T} rk(A′ | C_L⊤ | C_R) ≤ max_{T′≤T} rk(C_L⊤ | C_R) + rk(B) = max_{b ∈ edges(Y)} ord(b) + rk(B) := wd(Y, r) + rk(B).

Proposition 5.12. Let T be an inductive rank decomposition of Γ = ([G], B) with G ∈ Mat_N(k, k) and B ∈ Mat_N(k, n). Then, there is a rank decomposition I†(T) of G such that wd(I†(T)) ≤ wd(T).

Proof. A binary tree is, in particular, a subcubic tree, so the rank decomposition corresponding to an inductive rank decomposition T can be defined on its underlying unlabelled tree Y. The corresponding bijection r : leaves(Y) → vertices(G) between the leaves of Y and the vertices of G is given by the labels of the leaves of T: the label of a leaf l of T is a subgraph of Γ with a single vertex v_l, and these subgraphs compose to give Γ. The leaves of T, which are the leaves of Y, are therefore in bijection with the vertices of G: r(l) := v_l. Then, (Y, r) is a rank decomposition of G and we can define I†(T) := (Y, r).

By construction, the edges of Y are the same as the edges of T, so we can compute the order of the edges in Y from the labellings of the nodes of T. Consider an edge b in Y and its endpoints in T: let {v, v_b} = ends(b) with v the parent of v_b in T. The order of b is related to the rank of the boundary of the subtree T_b of T with root v_b. Let λ(T_b) = Γ_b = ([G_b], B_b) be the subgraph of Γ identified by T_b. We can express the adjacency and boundary matrices of Γ in terms of those of Γ_b: by Lemma 5.10, the boundary rank of Γ_b can be computed as rk(B_b) = rk(A′ | C_L⊤ | C_R). By definition, the order of the edge b is ord(b) := rk(C_L⊤ | C_R), and we can bound it by the boundary rank of Γ_b: rk(B_b) ≥ ord(b). These observations allow us to bound the width of the rank decomposition (Y, r) corresponding to T:

wd(Y, r) := max_{b ∈ edges(Y)} ord(b) ≤ max_{b ∈ edges(Y)} rk(B_b) ≤ max_{T′≤T} rk(sources(λ(T′))) := wd(T).

Figure 9. First step of a monoidal decomposition given by an inductive rank decomposition.
the functors U E : UHGraph → Set and U V : UHGraph → Set associate the edges, resp.vertices, component to hypergraphs and hypergraph homomorphisms: for