Sbornik: Mathematics
Sbornik: Mathematics, 2023, Volume 214, Issue 10, Pages 1458–1499
DOI: https://doi.org/10.4213/sm9683e
(Mi sm9683)
 

Efficient computations with counting functions on free groups and free monoids

A. L. Talambutsa (a), T. Hartnick (b)

(a) Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia
(b) Institut für Algebra und Geometrie, Karlsruher Institut für Technologie, Karlsruhe, Germany
Abstract: We present efficient algorithms to decide whether two given counting functions on nonabelian free groups or monoids are at bounded distance from each other and to decide whether two given counting quasimorphisms on nonabelian free groups are cohomologous. We work in the multi-tape Turing machine model with nonconstant-time arithmetic operations. In the case of integer coefficients we construct an algorithm of linear time complexity (assuming that the rank is at least $3$ in the monoid case). In the case of rational coefficients we prove that the time complexity is $O(N\log N)$, where $N$ denotes the size of the input, that is, it is the same as in addition of rational numbers (implemented using the Harvey-van der Hoeven algorithm for integer multiplication). These algorithms are based on our previous work which characterizes bounded counting functions.
Bibliography: 20 titles.
Keywords: free monoid, free group, quasimorphism, counting function, bounded cohomology.
Funding agency Grant number
Ministry of Science and Higher Education of the Russian Federation 075-15-2022-265
The work of A. L. Talambutsa was performed at the Steklov International Mathematical Center and supported by the Ministry of Science and Higher Education of the Russian Federation (agreement no. 075-15-2022-265).
Received: 28.10.2021 and 17.07.2023
Russian version:
Matematicheskii Sbornik, 2023, Volume 214, Number 10, Pages 116–162
DOI: https://doi.org/10.4213/sm9683
Document Type: Article
MSC: 20F10, 20J06
Language: English
Original paper language: Russian

§ 1. Introduction

1.1. From combinatorics of words to quasimorphisms

The study of words over a finite alphabet $S$ is a central topic in many areas of mathematics and computer science, including algebra, combinatorics, dynamical systems, decision problems and many others. In particular, the combinatorics of such words has been studied intensively during the last 50 years in algebra and computer science; see, for example, [18] and [16] for two very different perspectives. Of fundamental importance for the combinatorics of words is the subword relation: a word $v = s_1\ldots s_l$ with $s_1, \dots, s_l \in S$ is called a subword of a word $w = r_1 \ldots r_m$, where $r_1, \dots, r_m \in S$, provided there is some $j \in \{0, \dots, m-l\}$ such that $s_i = r_{j+i}$ for all $i \in \{1, \dots, l\}$; we then call $\{j+1, \dots, j+l\}$ an occurrence of $v$ in $w$. Many famous problems concerning the combinatorics of words are related to this subword relation. For example, the question whether a word of length $N$ can be reconstructed from the set of its subwords of length up to $f(N)$ was solved in [16] and, independently, by Levenshtein in [15], and the corresponding length was shown to be $f(N)=\lceil (N+1)/2 \rceil$. If one replaces the set of subwords by the multiset of subwords, then the currently best known upper bound is $f(N)\leqslant\lfloor\frac{16}7\sqrt{N}\rfloor+5$, due to Krasikov and Roditty [14].

The present article is concerned with a quantitative refinement of the subword relation: given two words $v$ and $w$ over $S$, we denote by $\rho_v(w)$ the number of (possibly overlapping) occurrences of $v$ in $w$; thus, by definition we have $\rho_v(w) > 0$ if and only if $v$ is a subword of $w$. The collection $S^*$ of all words over $S$ is a free monoid, and we can consider $\rho_v$ as a function $\rho_v\colon S^* \to \mathbb N_0$, called the $v$-counting function. If $v$ happens to be a single letter, then this counting function is actually a monoid homomorphism; in general, it is only a quasimorphism in the following sense:

$$ \begin{equation*} \sup_{w_1, w_2 \in S^*}| \rho_v(w_1w_2)-\rho_v(w_1)-\rho_v(w_2)| < \infty. \end{equation*} \notag $$
To summarize, counting subwords is a natural source of quasimorphisms on free monoids.
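The counting function $\rho_v$ and the boundedness of its quasimorphism defect can be illustrated by a short Python sketch (our illustration, not part of the paper; words are modelled as Python strings over $S$):

```python
def rho(v, w):
    """Number of start positions j with w[j:j+len(v)] == v, that is,
    the number of (possibly overlapping) occurrences of v in w."""
    return sum(1 for j in range(len(w) - len(v) + 1) if w[j:j + len(v)] == v)

def defect(v, w1, w2):
    """|rho_v(w1 w2) - rho_v(w1) - rho_v(w2)|.  Only occurrences straddling
    the junction of w1 and w2 contribute, so for nonempty v this quantity
    is at most len(v) - 1, which witnesses the quasimorphism property."""
    return abs(rho(v, w1 + w2) - rho(v, w1) - rho(v, w2))
```

For example, `rho("aa", "aaaa")` returns `3` (overlaps count), and `defect("aa", "aa", "aa")` returns `1`, within the bound `len("aa") - 1`.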

1.2. Computations in bounded cohomology of free groups

While not much is known about general quasimorphisms on monoids, there is a well-developed theory of quasimorphisms on groups, since these are closely related to bounded cohomology [5] and stable commutator length [2]. Note that if $F_n$ is a free group of rank $n \geqslant 2$ with basis $S = \{a_1, \dots, a_n\}$, then we can identify elements of $F_n$ with reduced words over the extended alphabet $S^{\pm} = \{a_1, \dots, a_n, a_1^{-1}, \dots, a_n^{-1}\}$. This allows us to define, for every $v \in F_n$, a corresponding $v$-counting function $\rho_v\colon F_n \to \mathbb N_0$. It was pointed out by Brooks [1] that the symmetrizations

$$ \begin{equation*} \varphi_v\colon F_n \to \mathbb Z, \qquad w \mapsto \rho_v(w)-\rho_v(w^{-1}) \end{equation*} \notag $$
of these counting functions are quasimorphisms on the free group; symmetrization is needed to deal with the effects of cancellations in free groups. In what follows we refer to $\varphi_v$ as the $v$-counting quasimorphism. Finite linear combinations of these quasimorphisms are known as counting quasimorphisms or sometimes quasimorphisms of finite type. Similarly, finite linear combinations of $v$-counting functions are simply known as counting functions.
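A hedged Python sketch of Brooks' construction (our illustration, not the authors' implementation): letters of $S^{\pm}$ are encoded as nonzero integers, with $-i$ representing $a_i^{-1}$, and elements of $F_n$ as freely reduced integer lists.

```python
def reduce_word(w):
    """Freely reduce a word (a list of nonzero ints; -i stands for a_i^{-1})."""
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()          # cancel adjacent a_i a_i^{-1}
        else:
            out.append(x)
    return out

def inverse(w):
    """Inverse of a reduced word: reverse it and invert each letter."""
    return [-x for x in reversed(w)]

def rho(v, w):
    """Occurrences of the reduced word v inside the reduced word w."""
    return sum(1 for j in range(len(w) - len(v) + 1) if w[j:j + len(v)] == v)

def brooks(v, w):
    """Brooks' v-counting quasimorphism: phi_v(w) = rho_v(w) - rho_v(w^{-1})."""
    w = reduce_word(w)
    return rho(v, w) - rho(v, inverse(w))
```

With $v = a_1$ and $w = a_1 a_1 a_2^{-1} a_1$ one gets `brooks([1], [1, 1, -2, 1]) == 3`, and the symmetrization forces $\varphi_v(w^{-1}) = -\varphi_v(w)$.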

Every quasimorphism $\varphi\colon F_n \to \mathbb R$ gives rise to a bounded $2$-cocycle $d\varphi$ on $F_n$ given by $d\varphi(x,y) := \varphi(y) - \varphi(xy) + \varphi(x)$ and thus defines a class in the second bounded cohomology $H^2_b(F_n; \mathbb R)$ (see [5]). We say that two quasimorphisms are cohomologous if they define the same bounded cohomology class. In particular, this is the case if they are at a bounded distance from each other with respect to the $\ell^\infty$-norm.

It follows from our previous results in [9] that the question whether two given counting quasimorphisms (say, with coefficients in $\mathbb{Z}$ or $\mathbb{Q}$) are cohomologous is decidable; in fact, one can extract from [9] an explicit algorithm which decides this question. This algorithm is actually sufficient for many applications; see, for instance, the work of Hase [10] for an application concerning the $\mathrm{Out}(F_n)$-action on $H^2_b(F_n; \mathbb R)$. However, while the algorithm sketched in [9] is effective, it is certainly not efficient. By contrast, the purpose of this article is to provide an efficient algorithm and analyze its complexity. Our main result reads as follows.

Theorem 1.1. Let $n \geqslant 2$ and $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$. Then there exists an algorithm which, given two counting quasimorphisms on $F_n$ with coefficients in $\mathfrak N$, decides whether they are cohomologous. Moreover, this algorithm can be implemented in such a way that its runtime is at most $O(N)$ if $\mathfrak N = \mathbb{Z}$ and at most $O(N \log N)$ if $\mathfrak N = \mathbb{Q}$, where $N$ denotes the size of the input data and the implied constants depend on $n$.

We will provide a more precise formulation of Theorem 1.1 in Corollary 5.9 below after clarifying the data structures used to encode counting quasimorphisms. In particular, we specify precisely what we mean by the ‘size’ of the input data. As far as our model of computation is concerned, we will be working in the multi-tape Turing machine model throughout. There are a couple of variants of Theorem 1.1 worth mentioning.

The fact that the runtime is linear in the case of $\mathbb{Z}$-coefficients is connected with the fact that addition of integers can be implemented in linear time. By contrast, it is currently not known whether addition of large rational numbers can be implemented in linear time. In view of the recent work of Harvey and van der Hoeven [11], $T(N) := N \log N$ is currently the best known time complexity for addition of rational numbers of size at most $N$, provided that rational numbers are encoded as (possibly unreduced) mixed fractions (see the discussion in § 6), and this is how this function enters the proof of the theorem. The reader is invited to check that our results also hold for more general coefficient groups $\mathfrak N \subset \mathbb R$ instead of $\mathbb{Z}$ or $\mathbb{Q}$, as long as addition, subtraction and comparison in $\mathfrak N$ can be carried out efficiently by a multi-tape Turing machine.
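To illustrate why rational addition reduces to integer multiplication, here is a sketch of addition of unreduced fractions (our simplification: plain rather than mixed fractions; the paper's precise encoding is discussed in § 6):

```python
def add_unreduced(p, q):
    """Add fractions p = (a, b) and q = (c, d) WITHOUT reducing:
    a/b + c/d = (a*d + c*b) / (b*d).  Skipping gcd-reduction means the cost
    is dominated by three integer multiplications, i.e. O(N log N) bit
    operations via Harvey-van der Hoeven multiplication (Python's built-in
    int multiplication uses a different algorithm, but the identity is the same)."""
    (a, b), (c, d) = p, q
    return (a * d + c * b, b * d)
```

Note that `add_unreduced((1, 2), (1, 2))` yields the unreduced pair `(4, 4)`: avoiding gcd computations is exactly what keeps the cost at the level of multiplication.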

1.3. Towards applications

Theorem 1.1 can be seen as a starting point for efficient computations in the bounded cohomology of the free group or, more precisely, for efficient computations in its subspace $H^2_{b, \mathrm{fin}}(F_n, \mathbb{Q}) \subset H^2_{b}(F_n, \mathbb{R})$ generated by counting quasimorphisms with rational coefficients. This subspace is actually quite remarkable: as pointed out by Grigorchuk [6], it is dense in $H^2_{b}(F_n, \mathbb{R})$ for a suitable topology (namely, the topology of pointwise convergence of homogeneous representatives; see [7]), and by a result of Schweitzer and the first author it is moreover invariant under the natural action of $\mathrm{Out}(F_n)$ and therefore independent of the basis chosen [7]. Nevertheless, we have to admit that there are many explicit examples of classes in $H^2_{b}(F_n, \mathbb{R})$ which are of infinite type (that is, are not representable by counting quasimorphisms), and it is still a major open problem how to decide whether two such quasimorphisms are cohomologous.

This latter question is actually relevant beyond the theory of free groups. Namely, if $G$ is an arbitrary group and $F \subset G$ is a finitely-generated free subgroup, then every bounded cohomology class in $G$ restricts to a bounded cohomology class of $F$. It turns out that for a large class of groups with weak negative curvature properties (namely, acylindrically hyperbolic groups in the sense of [17]), every second bounded cohomology class is uniquely determined by its restrictions to (hyperbolically embedded) free subgroups [8]. This provides a strong motivation to look for algorithms which decide whether two given quasimorphisms on a free group are cohomologous. Theorem 1.1 solves this problem for quasimorphisms of finite type, whereas the case of quasimorphisms of infinite type remains open for the moment.

1.4. The organization of the article

The goal of this article is to establish Theorem 1.1 and all of its variants mentioned thereafter. It turns out that the case of counting functions over free monoids is notationally simpler to handle than the case of counting quasimorphisms over free groups, while relying on essentially the same ideas. In the body of the text we thus first discuss algorithms concerned with counting functions over free monoids. An informal discussion is presented in § 2, and after formalizing the relevant concepts in § 3, the monoid version of Theorem 1.1 (that is, Corollary 3.3) is established in § 4 by constructing and analyzing the desired algorithm. Then in § 5 we explain how this algorithm has to be modified in the group case; this requires a fair amount of additional notation, but very few additional ideas. In § 6 we discuss in detail the implementations of the arithmetic operations for our coefficient groups, since these crucially influence the runtime of our algorithm.

Notation

In what follows we denote by $\mathbb N = \{1, 2, \dots\}$ the set of natural numbers (starting from $1$) and by $\mathbb N_0 = \{0, 1, 2, \dots\}$ its union with $\{0\}$. The letters $\mathbb{Z}$ and $\mathbb{Q}$ denote the ring of integers and the field of rational numbers, respectively. For real-valued functions $f,g \colon \mathbb N \to [0, \infty)$ we are going to use the standard Bachmann-Landau notation

$$ \begin{equation*} g(x)=O(f(x)) \quad\Longleftrightarrow \quad \limsup_{x \to \infty} \frac{g(x)}{f(x)} < \infty. \end{equation*} \notag $$
By a slight abuse of notation we use the same notation also for functions that are only defined on a cofinite subset of $\mathbb N$.

Acknowledgements

The first author is grateful to the Mathematics Department of Technion for supporting his visits. The first author also would like to thank P. S. Kiyashko for reading the text carefully and M. N. Vyalyi for valuable discussions concerning rational arithmetic algorithms. The second author is grateful to the Simons Foundation for supporting his visit to the Steklov International Mathematical Center.

§ 2. Counting functions on monoids

Throughout this section let $n \geqslant 2$ be a fixed integer. We discuss the equivalence problem for counting functions over the free monoid $M_n$ with coefficients in either $\mathbb{Z}$ or $\mathbb{Q}$ and describe an algorithm for its solution in an informal way.

2.1. Representing counting functions

Let $\mathrm{A} = \{\mathrm{a_1}, \dots, \mathrm{a_n}\}$ be a finite set of cardinality $n \geqslant 2$. We identify elements of the free monoid $M_n$ with finite words over $\mathrm{A}$ (including the empty word $\varepsilon$) and, given $w \in M_n$, denote by $|w|$ the word length of $w$, that is, the number of letters of the word $w$. We also fix a coefficient group $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$.

Given two words $v = s_1\dots s_l$ and $w = r_1 \dots r_m$ over $\mathrm{A}$, where $m \geqslant l \geqslant 0$, we denote by $\rho_v(w)$ the number of (possibly overlapping) occurrences of $v$ in $w$, that is,

$$ \begin{equation*} \rho_v(w)=\bigl|\bigl\{j \in \{0, \dots, m-l\} \mid s_i=r_{j+i} \text{ for all }i \in \{1, \dots, l\}\bigr\}\bigr|. \end{equation*} \notag $$
If $m< l$, then we set $\rho_v(w) := 0$ by convention. The function $\rho_v\colon M_n \to \mathbb N_0$ is then called the $v$-counting function on $M_n$. Using this definition we have $\rho_\varepsilon(w) = |w|+1$.

By a counting function with coefficients in $\mathfrak N$ we mean a function of the form

$$ \begin{equation*} f=\sum_{i=1}^N x_i \rho_{w_i}\colon M_n \to \mathfrak N, \end{equation*} \notag $$
where $N \in \mathbb N_0$, $w_1, \dots, w_N \in M_n$ are words and $x_1, \dots, x_N \in \mathfrak N$ are coefficients.

We say that two counting functions $f_1$ and $f_2$ are equivalent, which is denoted by $f_1 \sim f_2$, if

$$ \begin{equation*} \|f_1 -f_2\|_\infty < \infty. \end{equation*} \notag $$
In this section we are going to use two different encodings of counting functions over $M_n$ with coefficients in $\mathfrak N$, namely, word-number lists and weighted trees. While weighted trees are useful for visualization purposes, we mostly describe our algorithms in terms of word-number lists, since these are more closely related to the data structures that we use in the actual implementations of our algorithms below.

2.1.1. Word-number lists

By a word-number list we mean a list of the form $L = ((w_1, x_1), \dots, (w_N, x_N))$, where $N \in \mathbb N_0$ and $w_j \in M_n$ and $x_j \in \mathfrak N$ for all $j \in \{1, \dots, N\}$. The associated counting function is

$$ \begin{equation} \rho_L=\sum_{i=1}^N x_i \rho_{w_i}. \end{equation} \tag{2.1} $$
Thus, every word-number list encodes a counting function, and by definition every counting function is encoded by a word-number list. However, this word-number list is not unique, since it can occur, for example, that some words in the list repeat.
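For illustration only (our sketch, using strings for words and Python's `fractions` module for coefficients in $\mathbb Q$), here is how the counting function encoded by a word-number list is evaluated, together with two different lists encoding the same function:

```python
from fractions import Fraction

def rho(v, w):
    """Occurrences (possibly overlapping) of v in w, both given as strings."""
    return sum(1 for j in range(len(w) - len(v) + 1) if w[j:j + len(v)] == v)

def rho_list(L, w):
    """Value at w of the counting function encoded by the word-number list L,
    a list of (word, coefficient) pairs as in (2.1)."""
    return sum(x * rho(v, w) for v, x in L)

# Two different word-number lists encoding the same counting function
# (the repeated word "a" in L1 contributes 1/2 + 1/2 = 1 in total):
L1 = [("a", Fraction(1, 2)), ("a", Fraction(1, 2)), ("ab", -1)]
L2 = [("a", 1), ("ab", -1)]
```

Here `rho_list(L1, w) == rho_list(L2, w)` for every word `w`, illustrating the non-uniqueness of the encoding.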

2.1.2. Weighted trees

Following [9] we visualize every word-number list by a finite weighted tree as follows. Denote by $T_n$ the right-Cayley tree of $M_n$ with respect to the free generating set $\mathrm{A}$. Thus, by definition the vertex set of $T_n$ is $V(T_n) = M_n$, and two vertices $x$ and $y$ are joined by an edge if and only if there exists $i \in \{1, \dots, n\}$ such that $x = y\mathrm{a}_i$ or $y = x\mathrm{a}_i$. We consider $T_n$ as a rooted tree with root given by the empty word $\varepsilon$. Given a word-number list $L = ((w_1, x_1), \dots, (w_N, x_N))$, we denote by $T_L$ the subtree of $T_n$ given by the convex hull of the vertex set

$$ \begin{equation*} V(T_L):=\{w \in M_n \mid \exists\, j \in \{1, \dots, N\}\colon w=w_j \} \cup \{\varepsilon\}. \end{equation*} \notag $$
Then we define a weight function $\alpha_L\colon V(T_L) \to \mathfrak N$ by
$$ \begin{equation*} \alpha_L(w)=\sum_{w_j=w} x_j. \end{equation*} \notag $$
For example, the weighted tree $(T_L, \alpha_L)$ associated with the list
$$ \begin{equation*} L=\bigl((\varepsilon, -1), (\mathrm a_2, 6), (\mathrm a_3, -1), (\mathrm a_1^2, 4), (\mathrm {a}_1 \mathrm a_2, 4), (\mathrm a_1 \mathrm a_3, 4), (\mathrm a_3 \mathrm a_1, 1), (\mathrm a_3\mathrm a_2, 1), (\mathrm a_3^2, 1)\bigr), \end{equation*} \notag $$
is shown in Figure 1.

Different word-number lists can give rise to the same tree, but all of these lists correspond to the same counting function since

$$ \begin{equation*} \rho_L=\sum_{w \in V(T_L)} \alpha_L(w) \rho_w. \end{equation*} \notag $$
In what follows we prefer to work with word-number lists rather than weighted trees, but we use weighted trees occasionally to visualize some of our algorithms.

2.2. The equivalence problem

2.2.1. Equivalent counting functions and equivalent word-number lists

Recall that counting functions $f_1$ and $f_2$ are called equivalent (which is denoted by $f_1 \sim f_2$) if $f_1-f_2$ is a bounded function. For every $w \in M_n$ we have the obvious equivalences (see [9])

$$ \begin{equation} \rho_w \sim \sum_{i=1}^n \rho_{\mathrm a_iw} \quad\text{and}\quad \rho_w \sim \sum_{i= 1}^n \rho_{w \mathrm a_i}. \end{equation} \tag{2.2} $$
We refer to the two kinds of basic equivalences in (2.2) as left-extension equivalences and right-extension equivalences, respectively. It was established in [9] that these basic equivalences span the space of all linear relations between equivalence classes of counting functions as $w$ ranges over all elements of $M_n$, but we do not need this fact here. Using left- and right-extension equivalences we can transform counting functions into equivalent counting functions. We extend our notion of equivalence to word-number lists in the obvious way.

We also apply the same definitions to finite weighted trees.

2.2.2. The formulation of the main problem

The main algorithmic problem concerning counting functions on free monoids that we want to solve in this article is as follows.

Problem 2.2 (equivalence problem). Given two word-number lists $L_1$ and $L_2$, decide whether $L_1 \sim L_2$.

Note that, given word-number lists $L_1$ and $L_2$, it is easy to construct a word-number list $L_3$ such that $\rho_{L_3} = \rho_{L_1} - \rho_{L_2}$. We may thus assume in Problem 2.2 that $L_2 = ()$ is the empty word-number list. To solve Problem 2.2 we need the notion of a minimal word-number list.

Definition 2.3. Let $L = ((w_1, x_1), \dots, (w_N, x_N))$ be a word-number list with ${N \in \mathbb N_0}$.

(i) If $N \geqslant 1$, then the maximum depth of $L$ is defined by

$$ \begin{equation*} \operatorname{depth}(L):=\max \{|w_j| \mid j=1, \dots, N\}; \end{equation*} \notag $$
if $N=0$, then we set $\operatorname{depth}(L) := -1$. We say that $L$ is of constant depth if $|w_j| = \operatorname{depth}(L)$ for all $j = 1, \dots, N$. (Using this definition, the empty list is of constant depth $-1$.)

(ii) $L$ is called minimal if

$$ \begin{equation*} \operatorname{depth}(L)=\min \{\operatorname{depth}(L') \mid L' \sim L \}. \end{equation*} \notag $$

We also employ similar terminology for weighted trees.

Remark 2.4. The following useful fact is immediate from the definition. If $L$ is a word-number list of maximum depth $\ell$ and $L'$ is a word-number list of maximum depth $<\ell$, then the concatenation $L \cup L'$ is minimal if and only if $L$ is minimal. In this sense the minimality of a word-number list depends only on the ‘bottom level’.

Remark 2.5. By definition every word-number list $L$ is equivalent to some minimal word-number list $L'$, and we observe that

$$ \begin{equation*} L \sim () \quad\Longleftrightarrow\quad \operatorname{depth}(L')=-1. \end{equation*} \notag $$

In order to solve Problem 2.2 it thus suffices to solve the following problem.

Problem 2.6 (minimality problem). Given a word-number list $L$, find a minimal word-number list $L'$ which is equivalent to $L$.

For the rest of this section we thus focus on Problem 2.6.

Remark 2.7. We say that a word-number list $L = ((w_1, x_1), \dots, (w_N, x_N))$ is normalized if all words $w_i$ are distinct, ordered by length and ordered lexicographically within groups of words of the same length, and if $x_j \neq 0$ for all $j \in \{1, \dots, N\}$. Clearly, every word-number list is equivalent to a normalized word-number list (indeed, normalization does not even change the associated counting function), and so we will mainly investigate Problem 2.6 for normalized lists. We will see below that (for the encoding chosen) there is an efficient algorithm to normalize any given word-number list.
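One possible normalization routine, sketched in Python under the assumption that words are strings and that 'ordered by length, then lexicographically' means shortlex order (the paper's own linear-time normalization uses the data structures of § 4, not a generic sort):

```python
from collections import defaultdict

def normalize(L):
    """Merge duplicate words, drop zero coefficients, and sort the remaining
    (word, coefficient) pairs in shortlex order: by length, then
    lexicographically within each length."""
    acc = defaultdict(int)
    for w, x in L:
        acc[w] += x                      # merge repeated words
    return [(w, x)
            for w, x in sorted(acc.items(), key=lambda p: (len(p[0]), p[0]))
            if x != 0]                   # discard vanishing coefficients
```

For example, `normalize([("ab", 2), ("a", 1), ("ab", -2)])` returns `[("a", 1)]`: the two occurrences of `"ab"` cancel and are dropped.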

2.3. Related brotherhoods and minimality

In order to discuss our algorithms we introduce the following terminology. We recall that $T_n$ denotes the right-Cayley tree of $M_n$ with respect to $\mathrm{A}$; we identify elements of $M_n$ with vertices of $T_n$.

2.3.1. Brotherhoods

Let $w \in M_n = V(T_n)$. The depth of $w$ is defined as its distance from the root. Vertices on the geodesic between $w$ and the root (including $w$ and the root) are called ancestors of $w$.

From now on assume that $w$ is not the root. Then $w$ admits a unique ancestor $v=\operatorname{Fa}(w)$ at distance $1$, which is called its father. The vertices with the same father as $w$ are called its brothers, and their set, the brotherhood of $w$, is denoted by $\{\operatorname{Fa}(w) \ast \}$. Thus, by definition each brotherhood contains exactly $n$ elements.

The depth of a brotherhood is defined as the depth of any of its $n$ elements. We say that two elements $u$ and $v$ of $M_n$ are related, which is denoted by $u \smile v$, if they differ at most by their first letter. In this case we also say that the brotherhoods $\{\operatorname{Fa}(u)\ast \}$ and $\{\operatorname{Fa}(v)\ast \}$ are related.

If $w = s_1 \dots s_\ell \in M_n$ is of length $\ell \geqslant 2$, where $s_1, \dots, s_\ell \in \mathrm{A}$, then the (possibly empty) subword $s_2 \dots s_{\ell-1}$ is called the stem of $w$. If $B$ is a brotherhood, then all elements of $B$ have the same stem $\operatorname{stem}(B)$, and two brotherhoods $B_1$ and $B_2$ of depth $\geqslant 2$ are related if and only if $\operatorname{stem}(B_1) = \operatorname{stem}(B_2)$. Given $u \in M_n$, there are precisely $n$ (pairwise related) brotherhoods $B_1, \dots, B_n$ with stem $u$ and, up to reordering, these are given by $B_i = \{\mathrm{a}_iu\ast\}$.

2.3.2. Weighted brotherhoods

If $B$ is some brotherhood, then a normalized word-number list $\mathcal B = ((w_1, x_1), \dots, (w_m, x_m))$ is called a weighted brotherhood of type $B$ if all words $w_1, \dots, w_m$ belong to $B$ and $m \leqslant n$. Thus, we allow explicitly a weighted brotherhood to contain fewer than $n$ pairs and even to be empty; if $\mathcal B$ is a nonempty weighted brotherhood, then its type is uniquely determined and denoted by $\operatorname{type}(\mathcal B)$. By definition, every weighted brotherhood is a list of constant depth.

We say that two nonempty weighted brotherhoods $\mathcal B$ and $\mathcal B'$ are related if their types are related; by convention we declare the empty weighted brotherhood to be related to every weighted brotherhood. A weighted brotherhood $\mathcal B = ((w_1, x_1), \dots, (w_m, x_m))$ is called constant if it is either empty or we have $m = n$ and $x_1 = \dots = x_m$. Otherwise it is called nonconstant.

If $L = ((w_1, x_1), \dots, (w_N, x_N))$ is a normalized word-number list and $w \in M_n$, then we denote by $L_{\{w\ast\}}$ the normalized sublist of $L$ consisting of all pairs $(w_j, x_j)$ from $L$ such that $w_j \in \{w\ast \}$. By definition this is a (possibly empty) weighted brotherhood, and we refer to it as the weighted sub-brotherhood of $L$ of type $\{w\ast \}$.

2.3.3. Unbalanced word-number lists and minimality

Definition 2.8. A normalized word-number list $L$ of maximum depth $\ell$ is called unbalanced if there exist two related weighted sub-brotherhoods $\mathcal B_1$ and $\mathcal B_2$ of $L$ of depth $\ell$ such that $\mathcal B_1$ is nonconstant and $\mathcal B_2$ is empty; otherwise it is called balanced.

The following result was established in [9], Theorem 4.2 (where the language of weighted trees was used).

Theorem 2.9. Every unbalanced normalized word-number list is minimal.

Note that one can see immediately from the associated weighted tree whether a given normalized word-number list is unbalanced by looking only at its bottom level. For example, the weighted tree over $M_3$ in Figure 2 represents an unbalanced list, since the nonconstant weighted brotherhood labelled by $(4,2,1)$ is related to the empty weighted sub-brotherhood whose father is the vertex labelled by $-6$.

2.4. Basic moves

We now single out two basic moves which allow one to transform a given (normalized) word-number list into an equivalent one; these correspond to the two types of basic equivalences from (2.2).

2.4.1. Pruning

Let $L$ be a normalized word-number list of maximum depth $\ell \geqslant 1$, and let $B = \{w \ast \}$ for some $w \in M_n$ of length $|w| = \ell-1$. Then we say that $L_B$ is prunable if it is constant and nonempty. This means that $L_B$ is of the form $L_B = ((w\mathrm{a}_1, x_1), \dots, (w\mathrm{a}_n, x_n))$ with $x_1 = \dots = x_n =: x$.

Definition 2.10. If $L_B$ is prunable, then the word-number list $\operatorname{Pr}_{w\ast}(L)$ obtained from $L$ by deleting $L_B$, appending the pair $(w,x)$ and normalizing the resulting list is said to be obtained from $L$ by pruning at $B$.

It is immediate from the right-extension equivalence that pruning transforms $L$ into an equivalent word-number list $\operatorname{Pr}_{w\ast}(L)$. We say that a normalized word-number list is pruned if it does not contain any prunable sub-brotherhood. Since pruning strictly reduces the number of entries in a word-number list, every word-number list can be transformed into an equivalent pruned word-number list by a finite number of pruning moves, as illustrated by the example in Figure 3.

In this example the pruned word-number list is already unbalanced and therefore minimal by Theorem 2.9, that is, we have reached a minimal equivalent word-number list using only pruning moves. However, in general we also need to apply moves related to left-extension equivalences.
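A single pruning move can be sketched in Python as follows (our illustration; `normalize` repeats the shortlex normalization of Remark 2.7, and the input list is assumed to be normalized, so that words are distinct):

```python
from collections import defaultdict

def normalize(L):
    """Merge duplicates, drop zeros, sort shortlex (as in Remark 2.7)."""
    acc = defaultdict(int)
    for w, x in L:
        acc[w] += x
    return [(w, x)
            for w, x in sorted(acc.items(), key=lambda p: (len(p[0]), p[0]))
            if x != 0]

def prune_once(L, alphabet):
    """Perform one pruning move on a normalized word-number list L, if possible.

    Looks for a father at depth l-1 whose bottom-level brotherhood is complete
    (all n children present) and constant, replaces its n pairs (w a_i, x) by
    the single pair (w, x), and renormalizes; returns None if nothing is prunable.
    """
    depth = max((len(w) for w, _ in L), default=-1)
    if depth < 1:
        return None
    families = defaultdict(list)
    for w, x in L:
        if len(w) == depth:
            families[w[:-1]].append(x)   # group bottom-level pairs by father
    for father, coeffs in families.items():
        if len(coeffs) == len(alphabet) and len(set(coeffs)) == 1:
            rest = [(w, x) for w, x in L
                    if not (len(w) == depth and w[:-1] == father)]
            return normalize(rest + [(father, coeffs[0])])
    return None
```

For example, over the alphabet `"ab"` the constant brotherhood in `[("aa", 2), ("ab", 2)]` is pruned to `[("a", 2)]`, whereas `[("aa", 2), ("ab", 3)]` admits no pruning move.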

2.4.2. Transfer

Now we consider the case when $L$ is a pruned normalized balanced word-number list of maximum depth $\ell \geqslant 2$. Given a stem $u \in M_n$, we consider the $n$ related brotherhoods $ \{\mathrm{a}_1 \mathrm{u}\ast\}, \dots, \{\mathrm{a}_n \mathrm{u}\ast\}$ of depth $\ell$. Since $L$ has maximum depth $\ell$, there exists some $u \in M_n$ with $|u| = \ell -2$ such that $L_{\{\ast u \ast\}} := L_{\{\mathrm{a}_1 \mathrm{u}\ast\}} \cup \dots \cup L_{\{\mathrm{a}_n \mathrm{u}\ast\}}$ is nonempty, and we fix such an element $u$.

Now we form an $n \times n$ matrix $T(L, u)$ over $\mathfrak N$ whose entry $t_{ij}$ at the position $(i,j)$ is given by $t_{ij} := x_{ij}$ if there is a pair of the form $(\mathrm{a}_i\mathrm{u} \mathrm{a}_j, x_{ij})$ in $L_{\{\mathrm{a}_i \mathrm{u}\ast\}}$ and by $t_{ij} := 0$ otherwise. This matrix is called the transfer matrix for the pair $(L, u)$. Note that $T(L,u)$ is nonzero by our choice of $u$. By definition we have

$$ \begin{equation*} \rho_{L_{\ast u \ast}}=\sum_{i=1}^n \sum_{j=1}^n t_{ij} \rho_{\mathrm a_i\mathrm u \mathrm a_j}. \end{equation*} \notag $$
If we fix some $i' \in \{1, \dots, n\}$, then we can use the left-extension equivalence (2.2) to rewrite this as
$$ \begin{equation*} \begin{aligned} \, \rho_{L_{\ast u \ast}} &= \sum_{i=1}^n \sum_{j=1}^n (t_{ij}-t_{i'j}) \rho_{\mathrm a_i\mathrm u \mathrm a_j}+\sum_{i=1}^n \sum_{j=1}^n t_{i'j} \rho_{\mathrm a_i\mathrm u \mathrm a_j} \\ &= \sum_{i\neq i'} \sum_{j=1}^n (t_{ij}-t_{i'j}) \rho_{\mathrm a_i\mathrm u \mathrm a_j}+\sum_{j=1}^n t_{i'j} \sum_{i=1}^n \rho_{\mathrm a_i\mathrm u \mathrm a_j} \\ &\sim \sum_{i\neq i'} \sum_{j=1}^n (t_{ij}-t_{i'j}) \rho_{\mathrm a_i\mathrm u \mathrm a_j}+\sum_{j=1}^n t_{i'j} \rho_{\mathrm u \mathrm a_j}. \end{aligned} \end{equation*} \notag $$

Definition 2.11. If $i' \in \{1,\dots, n\}$, then the list $\operatorname{Tr}_{i', u}(L)$ which is obtained from $L$ by deleting $L_{\ast u \ast}$, appending the pair $(\mathrm{a}_i\mathrm{u} \mathrm{a}_j, t_{ij} - t_{i'j})$ for every $i \in \{1, \dots, n\} \setminus\{i'\}$ and $j \in \{1, \dots, n\}$, appending the pair $(\mathrm{u} \mathrm{a}_j, t_{i'j})$ for every $j \in \{1, \dots, n\}$ and normalizing the resulting list is said to be obtained from $L$ by transfer of the brotherhood $L_{\{\mathrm{a}_{i'}u\ast \}}$.

By the previous computation, $\operatorname{Tr}_{i', u}(L)$ is equivalent to $L$. Figure 4 shows the effect of a transfer move (applied to the weighted brotherhood labelled by $(1,2,3)$) on the associated tree.
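The transfer matrix and the transfer move of Definition 2.11 can be sketched as follows (illustrative Python with words as strings; `normalize` is the shortlex normalization of Remark 2.7, and coefficients may be ints or `Fraction`s):

```python
from collections import defaultdict

def normalize(L):
    """Merge duplicates, drop zeros, sort shortlex (as in Remark 2.7)."""
    acc = defaultdict(int)
    for w, x in L:
        acc[w] += x
    return [(w, x)
            for w, x in sorted(acc.items(), key=lambda p: (len(p[0]), p[0]))
            if x != 0]

def transfer_matrix(L, u, alphabet):
    """T(L, u): the entry t_ij is the coefficient of a_i u a_j in L (0 if absent)."""
    coeff = dict(L)
    return [[coeff.get(a + u + b, 0) for b in alphabet] for a in alphabet]

def transfer(L, i_prime, u, alphabet):
    """Tr_{i',u}(L): delete the pairs (a_i u a_j, t_ij), append
    (a_i u a_j, t_ij - t_{i'j}) for i != i' and (u a_j, t_{i'j}), renormalize."""
    t = transfer_matrix(L, u, alphabet)
    n = len(alphabet)
    rest = [(w, x) for w, x in L
            if not (len(w) == len(u) + 2 and w[1:-1] == u)]
    new = [(alphabet[i] + u + alphabet[j], t[i][j] - t[i_prime][j])
           for i in range(n) if i != i_prime for j in range(n)]
    new += [(u + alphabet[j], t[i_prime][j]) for j in range(n)]
    return normalize(rest + new)
```

For instance, over `"ab"` with $u = \varepsilon$ the list `[("aa", 1), ("ab", 2), ("ba", 3), ("bb", 4)]` has transfer matrix `[[1, 2], [3, 4]]`, and transferring the first brotherhood produces `[("a", 1), ("b", 2), ("ba", 2), ("bb", 2)]`.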

Definition 2.12. A matrix $M = (m_{ij}) \in \mathfrak N^{n \times n}$ is called a column-row sum of a column vector $c=(c_1,\dots,c_n)^\top \in \mathfrak N^n$ and a row vector $r=(r_1,\dots,r_n)\in (\mathfrak N^{n})^*$, which is denoted by $M=c \boxplus r$, if $m_{ij} = c_i+r_j$ for any $i,j\in \{1,2,\dots,n\}$.

Note that if $M = c \boxplus r$ is a column-row sum, then for any two of its rows, say $m_i$ and $m_k$, the difference $m_i-m_k$ satisfies

$$ \begin{equation*} m_i-m_k=(c_i-c_k, \dots, c_i-c_k), \end{equation*} \notag $$
hence is a constant vector. Conversely, if $M$ is a matrix with rows $m_1, \dots, m_n$ such that for all $i,k \in \{1, \dots, n\}$ the difference $m_i-m_k$ is a constant vector with entries equal to a constant $c_{ik}$, then $M$ is a column-row sum such that
$$ \begin{equation*} M=(c_{1k}, \dots, c_{nk})\boxplus m_k. \end{equation*} \notag $$
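The characterization above translates directly into a test (our sketch): take the first row of $M$ as the row vector $r$, set $c_i = m_{i1} - m_{11}$ (so $c_1 = 0$), and check the defining identity entrywise. The decomposition is not unique, since a constant can be shifted between $c$ and $r$.

```python
def column_row_decomposition(M):
    """If M is a column-row sum, return (c, r) with m_ij = c_i + r_j,
    taking r to be the first row of M (hence c[0] == 0); otherwise return None."""
    n = len(M)
    r = list(M[0])
    c = [M[i][0] - M[0][0] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if M[i][j] != c[i] + r[j]:
                return None
    return c, r
```

For example, `[[1, 2], [3, 4]]` decomposes as `([0, 2], [1, 2])`, whereas `[[1, 2], [3, 5]]` is not a column-row sum.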

Proposition 2.13. If the transfer matrix $T(L,u) \in \mathfrak N^{n \times n}$ is not a column-row sum, then $L$ is minimal.

Proof. Let $L' := \operatorname{Tr}_{i', u}(L)$; then $L'_{\{\mathrm a_{i'} \mathrm u\ast\}}$ is empty. Now there are two cases. If all weighted brotherhoods $L'_{\{\mathrm a_i \mathrm u\ast\}}$ with $i \in \{1, \dots, n\}$ are constant, then for every $i \in \{1, \dots, n\}$ the difference $c_i := t_{ij}-t_{i'j}$ is independent of $j$, so the matrix is a column-row sum. If at least one of the weighted brotherhoods $L'_{\{\mathrm a_i \mathrm u\ast\}}$ is nonconstant (and, in particular, nonempty), then $L'$ has depth $\ell$ and is unbalanced, hence minimal by Theorem 2.9. Since $L$ and $L'$ are equivalent and of the same depth $\ell$, we deduce that $L$ is also minimal. The proposition is proved.

2.4.3. Transfer and prune

We keep the previous notation; in particular, $L$ is a pruned normalized balanced word-number list of maximum depth $\ell \geqslant 2$ and $u \in M_n$ is chosen so that $|u| = \ell -2$ and $L_{\{\ast u \ast\}}$ is nonempty.

If the transfer matrix $T(L, u)$ is not a column-row sum, then we already know from Proposition 2.13 that $L$ is minimal. Thus we consider the case when $T(L, u)$ is a column-row sum and fix $i' \in \{1, \dots, n\}$. Then we look at the list $L' := \operatorname{Tr}_{i', u}(L)$ obtained from $L$ by transferring the weighted brotherhood $L_{\{\mathrm{a}_{i'} \mathrm{u}\ast\}}$.

Since $T(L, u)$ is a column-row sum, all weighted sub-brotherhoods $L'_{\{\mathrm{a}_{i} \mathrm{u}\ast\}}$ for $i \in \{1, \dots, n\} \setminus\{i'\}$ are constant (by the proof of Proposition 2.13), hence we can prune them to obtain a new equivalent list, which we denote by $\operatorname{TrP}_{i', u}(L)$ (for ‘transfer and prune’).

Our next goal is to provide an explicit formula for coefficients in $\operatorname{TrP}_{i', u}(L)$. By assumption $T(L, u)$ can be written as a column-row sum $T(L, u) = c \boxplus r$, and we want to compute such a decomposition explicitly. To do this it is convenient to fix some $j' \in \{1, \dots, n\}$. Then, since $T(L, u)$ is a column-row sum, we have

$$ \begin{equation} {t}_{ij}-{t}_{i'j}={t}_{ij'}-{t}_{i'j'} \quad \text{for all } i,j \in \{1, \dots, n\}. \end{equation} \tag{2.3} $$
Now, given $i,j \in \{1, \dots, n\}$, we set
$$ \begin{equation} y_i:=t_{ij'}-t_{i'j'} \quad\text{and}\quad z_j:=t_{i'j}, \end{equation} \tag{2.4} $$
so that
$$ \begin{equation*} T(L, u)=(y_1, \dots, y_n)^\top \boxplus (z_1,\dots, z_n). \end{equation*} \notag $$

We have thus found the required decomposition of $T(L, u)$. Note that $y_i$ is independent of the choice of $j'$ by (2.3). Then we obtain

$$ \begin{equation*} \begin{aligned} \, \rho_{L_{\ast u \ast}} &= \sum_{i=1}^n \sum_{j=1}^n t_{ij} \rho_{\mathrm a_i \mathrm u \mathrm a_j} = \sum_{i=1}^n \sum_{j=1}^n (t_{ij}-t_{i'j}) \rho_{\mathrm a_{i} \mathrm u \mathrm a_j}+ \sum_{i=1}^n \sum_{j=1}^n t_{i'j} \rho_{\mathrm a_{i} \mathrm u \mathrm a_j} \\ &= \sum_{i=1}^n (t_{ij'}-t_{i'j'}) \sum_{j=1}^n \rho_{\mathrm a_{i} \mathrm u \mathrm a_j}+\sum_{j=1}^n t_{i'j} \sum_{i=1}^n \rho_{\mathrm a_{i} \mathrm u \mathrm a_j} \sim \sum_{i=1}^n y_i \rho_{\mathrm a_i \mathrm u}+\sum_{j=1}^n z_j \rho_{\mathrm u \mathrm a_j}, \end{aligned} \end{equation*} \notag $$
and one can check that rewriting this way corresponds precisely to applying transfer followed by pruning the resulting constant brotherhoods. This shows that the following result holds.

Proposition 2.14. The list $\operatorname{TrP}_{i', u}(L)$ is obtained from $L$ by removing $L_{\{\ast u \ast\}}$, appending the elements $(\mathrm{a}_i \mathrm{u}, y_i)$ and $(\mathrm{u} \mathrm{a}_j, z_j)$ for $i,j \in \{1, \dots, n\}$ and normalizing the resulting list.
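Proposition 2.14 can be illustrated by the following Python sketch, which performs the move $\operatorname{TrP}_{i',u}$ on a word-number list with integer coefficients. This is an illustration only: words are modelled as strings over single-character letters, normalization is done with a dictionary, the function name is ours, and we assume that the transfer matrix $T(L,u)$ is indeed a column-row sum.

```python
def transfer_and_prune(L, u, alphabet, i_prime=0):
    """TrP_{i',u} on a word-number list L of pairs (word, coefficient):
    remove all pairs (a_i u a_j, t_ij), append (a_i u, y_i) and
    (u a_j, z_j) with y_i = t_{i j'} - t_{i' j'} and z_j = t_{i' j}
    as in (2.4), then normalize.  Assumes T(L, u) is a column-row sum."""
    n = len(alphabet)
    t = [[0] * n for _ in range(n)]  # transfer matrix; 0 for absent words
    rest = []
    for w, x in L:
        if len(w) == len(u) + 2 and w[1:-1] == u:
            t[alphabet.index(w[0])][alphabet.index(w[-1])] += x
        else:
            rest.append((w, x))
    j_prime = 0  # any fixed column gives the same y_i, by (2.3)
    new = [(a + u, t[i][j_prime] - t[i_prime][j_prime])
           for i, a in enumerate(alphabet)]
    new += [(u + a, t[i_prime][j]) for j, a in enumerate(alphabet)]
    merged = {}  # normalize: merge equal words, drop zero coefficients
    for w, x in rest + new:
        merged[w] = merged.get(w, 0) + x
    return [(w, x) for w, x in merged.items() if x != 0]
```

For instance, with $u=\varepsilon$ the list $((\mathrm{aa},1),(\mathrm{ab},2),(\mathrm{ba},4),(\mathrm{bb},5))$ has the column-row sum transfer matrix $\begin{pmatrix}1&2\\4&5\end{pmatrix}$ and is replaced by a list of depth $1$.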

2.4.4. The case of maximum depth $\leqslant 1$

Using the above moves we can reduce any given word-number list to a minimal word-number list or a normalized word-number list of maximum depth $\leqslant 1$. Thus, from now on we assume that $L$ is a normalized word-number list of maximum depth $\leqslant 1$; we are going to construct a minimal list $L'$ equivalent to $L$.

If $L$ is empty, then $L' := L$ is minimal by definition. Now assume that $L$ has depth $0$; we claim that then $L' := L$ is also minimal. Indeed, since $L$ is assumed to be normalized, we have $\rho_L = x\rho_\varepsilon$ for some $x \neq 0$. Since $\rho_\varepsilon(w) = |w|$ for all $w \in M_n$, the function $\rho_L$ is unbounded, hence it is not equivalent to the function $0$, and thus $L$ is not equivalent to the empty list, which is the unique list of depth $< 0$.

Finally, consider the case when $L$ has maximum depth $1$; then

$$ \begin{equation} \rho_L=x \rho_{\varepsilon}+\sum_{i=1}^n x_i \rho_{\mathrm a_i}. \end{equation} \tag{2.5} $$
If $L$ is prunable, that is, $x_1 =\dots = x_n$, then applying a pruning move to the unique brotherhood of depth $1$ and normalizing yields either the empty list $L'$ (which is minimal by definition) or a list $L'$ of maximum depth $0$, which is minimal as seen above. We claim that, on the other hand, if a normalized word-number list $L$ of maximum depth $1$ is not prunable, then it is already minimal. Otherwise, for $\rho_L$ as in (2.5) the function $\sum_{i=1}^n x_i \rho_{\mathrm{a}_i}$ would be equivalent to $y\rho_{\varepsilon}$ for some coefficient $y$, and therefore the function $\sum_{i=1}^{n} (x_i-y) \rho_{\mathrm a_i}$ would be bounded. Since at least one of the coefficients $x_i - y$ is nonzero, this is impossible, as can be seen by evaluating at words of the form $\mathrm a_j^N$ for large $N$.

2.5. An informal description of the algorithm

Now assume that we are given a word-number list $L$. Then we can proceed as follows to find a minimal word-number list in the equivalence class of $L$.

Remark 2.15. Unfortunately, this algorithm, if implemented naively, is not efficient in all cases. Generally speaking, there are two things we want to avoid when applying a transfer-and-prune move. We do not want to create too many new entries in our list, and at the same time we do not want to transfer brotherhoods with large coefficients. We can easily avoid one of the two problems (by transferring the brotherhood with the minimal number of nonzero entries, or by transferring the brotherhood with the ‘smallest’ coefficients, respectively), but in general not both of them. In order to optimize the runtime of our algorithm we apply a mixed strategy.

Using this strategy we can ensure that the algorithm operates efficiently in all cases.

§ 3. Formalization of the problem

We now turn to the problem of formalizing the algorithms described in the previous section. Throughout this article we use the multi-tape Turing machine model (see [12], § 8.4.1) as our computational model. We emphasize the fact that our algorithms are designed to work with coefficients that can be large, and therefore we do not assume that arithmetic operations with them are performed in constant time.

3.1. Encoding the coefficients

In order to formalize our algorithm we need to discuss how our data are stored inside a multi-tape Turing machine. In particular, we have to choose an encoding for our coefficients. The following result is established in § 6. Here the function $T$ can be chosen as $T(N) := N \log N$ or as any other function satisfying the assumptions of Convention 6.3.

Theorem 3.1. For $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$ there exist alphabets $\Sigma_{\mathfrak N}$, encoding subsets $ \operatorname{Num}_{\mathfrak N} \subset \Sigma_{\mathfrak N}^*$, surjective encoding maps

$$ \begin{equation*} \operatorname{Num}_{\mathfrak N} \to \mathfrak{N}, \qquad \mathrm x \mapsto \langle \mathrm x \rangle, \end{equation*} \notag $$
and maps $\oplus, \ominus\colon \operatorname{Num}_{\mathfrak N} \times \operatorname{Num}_{\mathfrak N} \to \operatorname{Num}_{\mathfrak N}$ and $\|\cdot\|\colon \operatorname{Num}_{\mathfrak N} \to \mathbb N_0$ with the following properties:

In order to obtain the estimates required in Theorem 3.1 in the case $\mathfrak N = \mathbb{Q}$ we have to choose an encoding that is not injective (namely an encoding by possibly nonreduced mixed fractions). We do not know whether similar bounds can be achieved using an injective encoding. Given $\mathrm{x}_1, \mathrm{x}_2 \in \operatorname{Num}_{\mathfrak N}$, we write $\mathrm{x}_1 \equiv \mathrm{x}_2$ if $\langle \mathrm{x}_1 \rangle = \langle \mathrm{x}_2 \rangle$.

From now on we assume that our coefficients and arithmetic operations between them are encoded as in Theorem 3.1. Given $\mathrm x \in \operatorname{Num}_{\mathfrak N}$, we refer to $\|\mathrm x\|$ as the size of $\mathrm x$, since it is proportional to the amount of memory necessary to store $\mathrm{x}$.

3.2. Encoding counting functions

Throughout what follows we fix an integer $n \geqslant 2$. We want to encode counting functions over the free monoid $M_n$ with coefficients in either $\mathbb{Z}$ or $\mathbb{Q}$. In our informal discussion in § 2 we described counting functions by word-number lists, that is, lists of the form $L = ((w_1, x_1), \dots, (w_N, x_N))$, where $w_1, \dots, w_N$ are elements of $M_n$ and $x_1, \dots, x_N$ are elements of the coefficient group $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$. In our formal discussion we use the standard data structure of a doubly linked list (see [4], § 10.2), which in the multi-tape model can be presented as a long word written on a separate tape, with list elements $(w_i,x_i)$ separated by commas; however, we still have to specify our encodings of the elements of $M_n$ and $\mathfrak N$, respectively. Since in most algorithms we deal with at most $n+1$ lists, this constant can serve as an estimate for the number of tapes one needs.

3.2.1. Word-coefficient pairs

To encode elements of $M_n$ we choose a set $\mathrm{A} := \{\mathrm{a}_1, \dots, \mathrm{a}_n\}$ of cardinality $n$ and identify $M_n$ with the set $\mathrm{A}^*$ of words over $\mathrm{A}$. This gives an encoding of $M_n$ over the alphabet $\mathrm{A}$ and, given $w \in M_n$, we denote by $|w|$ the word length of $w$ with respect to the alphabet $\mathrm{A}$ and call it the size of $w$. In reality, the amount of memory needed to encode $w$ is proportional to the canonical binary size $b(w)=\log_2(n) \cdot |w|$. However, we are primarily interested in the case when the rank $n$ of our monoid is small compared to the size of the list and/or the size of the coefficients, so that we treat $n$ as a constant throughout and thus consider $|w|$ to be proportional to the memory used by $w$.

For the coefficients we use the encoding $\operatorname{Num}_{\mathfrak N} \to \mathfrak N$ discussed in the previous section and, in more detail, in § 6. Given $\mathrm{x} \in \operatorname{Num}_{\mathfrak N}$, we use the size $\|\mathrm{x}\|$ as defined in § 6 as a measure of the memory needed to store $\mathrm{x}$.

By a word-coefficient pair (or simply a pair) we always mean a pair of the form $(w, \mathrm{x}) \in \mathrm{A}^* \times \operatorname{Num}_{\mathfrak N}$. Here the first component $w$ is interpreted as an element of the monoid $M_n \cong \mathrm{A}^*$ and the second component $\mathrm{x}$ encodes a coefficient $x = \langle \mathrm{x} \rangle$. Given a pair $(w, \mathrm{x} )$, we define its total size by

$$ \begin{equation*} |(w,\mathrm x )|_{\mathrm{tot}}:=|w|+\|\mathrm x \|. \end{equation*} \notag $$
By the above discussion, this quantity is proportional to the memory needed to store such a pair.

3.2.2. Encoded lists

A finite list $\mathcal L = ((w_1, \mathrm{x}_1), \dots, (w_N, \mathrm{x}_N))$ of pairs is referred to as an encoded list, and the word-number list

$$ \begin{equation*} \langle \mathcal L \rangle:=((w_1, \langle \mathrm x_1\rangle), \dots, (w_N, \langle \mathrm x_N\rangle)) \end{equation*} \notag $$
is called its interpretation. Note that, as our encoding of coefficients is not injective, different encoded lists can share the same interpretation. We say that encoded lists are normalized, minimal, pruned, equivalent and so on if their interpretations have the corresponding property.

Given an encoded list $\mathcal L = ((w_1, \mathrm{x}_1), \dots, (w_N, \mathrm{x}_N))$, we set

$$ \begin{equation*} |\mathcal L|:=\sum_{i=1}^N |w_i|, \qquad \|\mathcal L\|:= \sum_{i=1}^N \|\mathrm x_i\| \quad\text{and}\quad |\mathcal L|_{\mathrm{tot}}:=|\mathcal L|+\|\mathcal L\| \end{equation*} \notag $$
and refer to these as the word size, the coefficient size and the total size of $\mathcal L$, respectively. Up to a constant (depending on $n$) the total size of $\mathcal L$ is the amount of memory used to store this list in our encoding.

Given an encoded list $\mathcal L$, the associated counting function is defined by ${\rho_{\mathcal L} \,{:=}\, \rho_{\langle \mathcal L\rangle}}$. For example, for $\mathfrak N = \mathbb{Z}$ the encoded list $\mathcal L := (\mathrm{(a_1, +11), (a_2a_1, -100), (a_2,-101)})$ represents the counting function $3\rho_{a_1}-4\rho_{a_2 a_1}-5\rho_{a_2}$. Our choice of a doubly linked list as the underlying data structure (rather than, for example, a structure similar to the weighted trees used for our visualizations above) is motivated by the fact that we do not want to allocate redundant memory for many zero coefficients. We will see the efficiency of this data structure in the analysis of our main algorithm.
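For illustration, here is a small Python sketch (not part of the formal model; function names are ours) evaluating the counting function of a word-number list, where $\rho_v(w)$ is the number of occurrences of $v$ as a subword of $w$ and $\rho_\varepsilon(w)=|w|$, as in the text; letters are modelled as single characters, so the example list above becomes $((\mathrm a,3),(\mathrm{ba},-4),(\mathrm b,-5))$ with decoded coefficients.

```python
def count_occurrences(v, w):
    """Number of occurrences of v as a (contiguous) subword of w."""
    if not v:
        return len(w)  # convention rho_eps(w) = |w|, as in the text
    return sum(w[i:i + len(v)] == v for i in range(len(w) - len(v) + 1))

def rho(L, w):
    """Evaluate the counting function of a word-number list L at w."""
    return sum(x * count_occurrences(v, w) for v, x in L)

# The counting function 3*rho_a - 4*rho_ba - 5*rho_b evaluated at "aba":
L = [("a", 3), ("ba", -4), ("b", -5)]
assert rho(L, "aba") == 3 * 2 - 4 * 1 - 5 * 1  # = -3
```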

3.3. The statement of the main result

Having fixed encoded lists as our encoding of counting functions, we can now formulate the main results of the present article, at least in the case of monoids.

Theorem 3.2. For every $n\geqslant 2$ there exists an algorithm FindMinimalList that takes as input an encoded list $\mathcal L$ over $M_n$ and outputs a minimal encoded list $\mathcal M$ equivalent to $\mathcal L$. Moreover, if $n \geqslant 3$, then this algorithm can be implemented in such a way that its time complexity is as follows.

In view of Remark 2.5 we have the following immediate consequence.

Corollary 3.3. For every $n \geqslant 2$ and $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$ there exist algorithms to decide whether two counting functions over $M_n$ with coefficients in $\mathfrak N$ (given as encoded lists) are equivalent. Moreover, for $n \geqslant 3$ these algorithms can be implemented in such a way that their respective time complexities are as described in parts (a) and (b) of Theorem 3.2.

The raison d’être for the appearance of the nonlinear function $T$ in Theorem 3.2, (b) (and, consequently, in Corollary 3.3) is the nonlinear complexity of arithmetic operations over $\mathbb Q$; that is, the best currently known implementation of addition in $\mathbb Q$ has time complexity $T(n)$. If addition of rational numbers could be implemented more efficiently, then this complexity bound could be improved; see § 6 for a more detailed discussion. While our algorithm works for all $n \geqslant 2$, our runtime estimate requires the more restrictive condition $n \geqslant 3$. This condition is used only once, in Lemma 4.8, to show that at the main processing step the sizes of the coefficients under consideration decrease at least by a factor of $d_n$, where $d_n>1$ for $n \geqslant 3$; this is crucial for the runtime estimates.

3.4. Implementing basic moves

Three of the main subalgorithms in the informal algorithm from § 2.5 are pruning, the computation of transfer matrices and the ‘transfer-and-prune’ move. Now we discuss how these algorithms can be implemented on the level of encoded lists.

As concerns encoded lists, we use terminology similar to word-number lists. In particular, if $\mathcal L = ((w_1, \mathrm{x}_1), \dots, (w_N, \mathrm{x_N}))$ is a normalized encoded list and $w \in M_n$, then we define the weighted sub-brotherhood of $\mathcal L$ of type $\{w\ast \}$ as the normalized encoded list $\mathcal L_{\{w\ast\}}$ consisting of all pairs $(w_j, \mathrm{x}_j)$ in $\mathcal L$ such that $w_j \in \{w\ast\}$. Similarly, given a word $u \in M_n$, we denote by $\mathcal L_{\{\ast u \ast\}}$ the normalized encoded list given as the concatenation of the encoded lists $\mathcal L_{\{\mathrm{a}_1 \mathrm{u}\ast\}},\dots,\mathcal L_{\{\mathrm{a}_n \mathrm{u}\ast\}}$.

3.4.1. Pruning

Let $\mathcal L$ be a normalized encoded list with interpretation $L = \langle \mathcal L \rangle$ of maximum depth $\ell \geqslant 1$. If $w \in M_n$ is of length $\ell -1$ and $L_{\{w\ast\}}$ is prunable, then we would like to find an encoded list that represents $\operatorname{Pr}_{w\ast}(L)$. To do this we remove the sublist $\mathcal L_{\{w\ast\}}$ from $\mathcal L$ and append a pair of the form $(w,\mathrm{x})$, where $\mathrm{x} \equiv \mathrm{x}_1 \equiv \dots \equiv \mathrm{x}_n$.

Note that in applying such a pruning map to $\mathcal L$ we have the freedom of choosing $\mathrm{x}$. We can always choose $\mathrm{x} :=\mathrm{x}_1$, but in order to obtain an efficient algorithm it is better to choose $\mathrm{x}$ to be one of the $\mathrm{x}_j$ that have the smallest size $\|\mathrm{x}_j\|$.
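A Python sketch of this choice (an illustration only: coefficient encodings are modelled as strings, for instance hypothetical nonreduced-fraction encodings, the size $\|\mathrm x\|$ as the string length, and we assume the brotherhood is prunable, so that all encodings represent the same value):

```python
def prune(brotherhood):
    """Pruning step sketch.  `brotherhood` is the encoded sublist
    ((w a_1, x_1), ..., (w a_n, x_n)) whose coefficient encodings all
    represent the same value; keep a representative of smallest size,
    here modelled as the shortest encoding string."""
    stem = brotherhood[0][0][:-1]                 # drop the last letter
    x = min((enc for _, enc in brotherhood), key=len)
    return (stem, x)

# Three (hypothetical) nonreduced encodings of the value 1/2:
b = [("aba", "10/20"), ("abb", "1/2"), ("abc", "100/200")]
assert prune(b) == ("ab", "1/2")
```

Choosing the shortest representative rather than simply the first one is what keeps the coefficient sizes under control in the runtime analysis below.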

3.4.2. The computation of transfer matrices

Let $\mathcal L$ be a normalized encoded list with interpretation $L = \langle \mathcal L \rangle$ of maximum depth $\ell$, and let $u \in M_n$ satisfy $|u| = \ell -2$. We are interested in computing the transfer matrix $T := T(L, u) = (t_{ij})$. We say that a matrix $\mathrm{T} = (\mathrm{t}_{ij}) \in \operatorname{Num}_{\mathfrak N}^{n \times n}$ represents $T$ if $t_{ij} = \langle \mathrm{t}_{ij} \rangle$. In this case we also call $\mathrm{T}$ an encoded transfer matrix and write $\mathrm{T}(\mathcal L, u)$ for $\mathrm{T}$.

To compute such a transfer matrix we compute $\mathcal L_{\{\ast u \ast\}}$ first. Then we read through the words in this list, and whenever we find a word starting with $\mathrm{a}_i$ and ending with $\mathrm{a}_j$, we read out the corresponding coefficient $\mathrm{x}_{ij}$ and set $\mathrm{t}_{ij} := \mathrm{x}_{ij}$. All the other entries of $\mathrm{T}$ are set to be $\varepsilon$. We will see that this can be carried out in linear time with respect to the total size of $\mathcal L_{\{\ast u \ast\}}$.
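This single pass can be sketched in Python as follows (an illustration only: `None` plays the role of the empty encoding $\varepsilon$, and the function name is ours):

```python
def transfer_matrix(sublist, u, alphabet):
    """Compute the encoded transfer matrix T(L, u) from the sublist
    L_{*u*}: whenever a word a_i u a_j occurs with coefficient x, set
    t_ij = x; all other entries stay empty (None stands for epsilon).
    One linear pass over the sublist."""
    n = len(alphabet)
    T = [[None] * n for _ in range(n)]
    for w, x in sublist:
        i = alphabet.index(w[0])    # first letter a_i
        j = alphabet.index(w[-1])   # last letter a_j
        T[i][j] = x
    return T

# u = "c", alphabet {a, b}: words a c a, a c b, b c b.
sub = [("aca", 7), ("acb", -2), ("bcb", 5)]
assert transfer_matrix(sub, "c", ["a", "b"]) == [[7, -2], [None, 5]]
```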

3.4.3. Transfer and prune

Let $\mathcal L$ be a normalized encoded list with interpretation $L = \langle \mathcal L \rangle$ of maximum depth $\ell \geqslant 2$. We assume that there is $u\in M_n$ with $|u| = \ell - 2$ such that the transfer matrix $T(L, u)$ is a column-row sum. Then we want to find an encoded list $\mathcal L'$ which represents the word-number list $\operatorname{TrP}_{i', u}(L)$ for some $i' \in \{1, \dots, n\}$.

To do this, first we form the encoded transfer matrix $\mathrm{T}(\mathcal L, u)=(\mathrm{t}_{ij})$. Then for every $i \in \{1, \dots, n\}$ we choose $\mathrm{y}_i \in \operatorname{Num}_{\mathfrak N}$ such that $\mathrm{y}_i \equiv \mathrm{t}_{ij}\ominus \mathrm{t}_{i'j}$ for some $j \in \{1, \dots, n\}$. Then for every $j \in \{1, \dots, n\}$ we choose elements $\mathrm{z}_j \in \operatorname{Num}_{\mathfrak N}$ such that $\mathrm{z}_j \equiv \mathrm{t}_{i'j}$. One possible choice is $\mathrm{z}_j := \mathrm{t}_{i'j}$ and $\mathrm{y}_{i} := \mathrm{t}_{ij'}\ominus\mathrm{t}_{i'j'}$ for some fixed $j' \in \{1, \dots, n\}$, but in general this choice may not be efficient. In any case, we can then modify $\mathcal L$ by removing $\mathcal L_{\{\ast u \ast\}}$ and appending the elements $(\mathrm{a}_i \mathrm{u}, \mathrm{y}_i)$ and $(\mathrm{u} \mathrm{a}_j, \mathrm{z}_j)$ for $i,j \in \{1, \dots, n\}$ (where we can omit those with coefficients representing $0$). Then by Proposition 2.14 the resulting list represents $\operatorname{TrP}_{i', u}(L)$.

§ 4. Description of the algorithm in the monoid case

Now we formalize the algorithm sketched in § 2.5. Then we analyze its complexity and establish Theorem 3.2. Our algorithm will work with encoded lists of word-coefficient pairs. For brevity we refer to a word-coefficient pair simply as a pair and to an encoded list simply as a list. We are going to deal with the integer and rational cases simultaneously. We set $T(N) := N$ if the coefficient group is $\mathfrak N = \mathbb{Z}$ and $T(N) := N \log N$ if the coefficient group is $\mathfrak N = \mathbb{Q}$, so that addition, subtraction and comparison of coefficients $\mathrm{x}_1, \mathrm{x}_2 \in \operatorname{Num}_{\mathfrak N}$ can be carried out in time $T(\|\mathrm{x}_1\| + \|\mathrm{x}_2\|)$ by Theorem 3.1, and $T$ satisfies properties (T1)–(T3) from Convention 6.3.

Remark 4.1. In running our main algorithm we often encounter algorithms of the following form. We are given an encoded list $\mathcal L$, which we split into finitely many nonempty sublists $\mathcal L_1, \dots, \mathcal L_k$ by a procedure of linear time complexity $O(|\mathcal L|_{\mathrm{tot}})$. Then we run the same procedure Proc over each list $\mathcal L_i$.

Fortunately, we will always be in the situation where the time complexity of the procedure is either linear or of the form $O(T(N))$, where $N$ is the size of the input. In this specific situation it follows from Lemma 6.4 that the time complexity of the whole algorithm is also of the form $O(|\mathcal L|_{\mathrm{tot}})$ (in the linear case) or of the form $O(T(|\mathcal L|_{\mathrm{tot}}))$, respectively.

4.1. Normalizing lists and detaching brotherhoods

We start the description of our algorithm by discussing some auxiliary procedures. First we consider a procedure to transform a given list into a normalized one.

Proof. Recall from [4], § 8.3, that the well-known RadixSort algorithm sorts any list in linear time, provided that the size of the alphabet is constant. (It is easy to see that this algorithm can be realized on a multi-tape Turing machine with $n+1$ tapes, where $n$ is the number of symbols in the alphabet of the words to be sorted.) In order to normalize the input list $\mathcal L$ we apply RadixSort, using the respective words of list pairs as sorting keys. The result is a new list $\mathcal{L}'$ whose entries are ordered so that shorter words $w_i$ come first and words of the same length are ordered lexicographically. Clearly, such reordering does not change the total size of the list. Then we go through the sorted list $\mathcal{L}'$, and if we find several consecutive pairs with the same word $w$ and coefficients $\mathrm x_1, \dots, \mathrm x_m$, then we replace these entries by the single pair $(w, \mathrm x_1 \oplus \dots \oplus \mathrm x_{m})$ unless $\mathrm x_1\oplus \dots \oplus \mathrm x_m \equiv 0$, in which case we simply delete them. The result is a normalized list $\mathcal N$; the runtime of RadixSort is linear, hence (ii) follows from Lemma 6.4 (the totalizing lemma) as applied to the estimates for addition and comparison of numbers from Theorem 3.1.

The inequality $|\mathcal N|_{\mathrm{tot}}\leqslant |\mathcal L|_{\mathrm{tot}}$ follows from the facts that in the course of the algorithm adjacent number-word pairs of the form $(w,\mathrm x)$ and $(w,\mathrm y)$ are replaced by the single pair $(w,\mathrm x \oplus \mathrm y)$ and that we have

$$ \begin{equation*} |(w,\mathrm x \oplus \mathrm y)|_{\mathrm{tot}}=|w|+\|\mathrm x \oplus \mathrm y\|< 2|w|+\|\mathrm x\|+\|\mathrm y\|=|(w,\mathrm x)|_{\mathrm{tot}}+|(w,\mathrm y)|_{\mathrm{tot}}. \end{equation*} \notag $$
The lemma is proved.
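The two phases of the proof (sorting shortlex by word, then merging equal words) can be sketched in Python as follows; this is an illustration only, in which Python's comparison sort stands in for the linear-time RadixSort and integer addition for $\oplus$:

```python
from itertools import groupby

def normalize(L):
    """Normalize a word-number list: sort pairs shortlex by word
    (shorter words first, same-length words lexicographically), then
    merge consecutive pairs with equal words and drop zero sums."""
    L = sorted(L, key=lambda p: (len(p[0]), p[0]))
    out = []
    for w, group in groupby(L, key=lambda p: p[0]):
        s = sum(x for _, x in group)
        if s != 0:
            out.append((w, s))
    return out

L = [("ba", 2), ("a", 1), ("ba", -2), ("b", 4), ("aa", 3)]
assert normalize(L) == [("a", 1), ("b", 4), ("aa", 3)]
```

Note how the two entries for $\mathrm{ba}$ cancel and are eliminated, as in the proof above.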

One advantage of normalized lists is that it is easy to find sub-brotherhoods.

Proof. Just move the first pair $(\mathrm{a}_{i_1} \dots \mathrm{a}_{i_l}, \mathrm{x})$ of $\mathcal N$ to a separate list $\mathcal B$; then read the word $w'$ of the next pair and move this pair to $\mathcal B$ if $w' = \mathrm{a}_{i_1} \dots \mathrm{a}_{i_{l-1}}\mathrm{a}$ for some $\mathrm{a} \in \mathrm{A}$. Continue until you find a pair whose word is not of this form. Since we run through the list $\mathcal N$ only until we have found all entries of the nonempty list $\mathcal{B}$, inspecting at most the first $l+1$ letters of one further word, the runtime is $O(|\mathcal B|)$.
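A Python sketch of this scan (an illustration only: the list sits in memory, so slicing replaces tape movement; in the Turing-machine model only the entries of $\mathcal B$ plus one look-ahead word are read):

```python
from itertools import takewhile

def detach_brotherhood(N):
    """N is a nonempty normalized (shortlex-sorted) encoded list; split
    off the maximal leading block of pairs whose words agree with the
    first word in all letters but the last."""
    stem = N[0][0][:-1]  # a_{i_1} ... a_{i_{l-1}}
    B = list(takewhile(lambda p: p[0][:-1] == stem, N))
    return B, N[len(B):]

N = [("aa", 1), ("ab", 2), ("ba", 3)]
B, rest = detach_brotherhood(N)
assert B == [("aa", 1), ("ab", 2)] and rest == [("ba", 3)]
```

Because the sorted order places all members of a sub-brotherhood consecutively, the detached block is exactly $\mathcal N_{\{w\ast\}}$ for the stem $w$ of the first word.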

4.2. The procedure PruneList

Now we describe a procedure which takes a normalized list $\mathcal N$ of constant depth $\ell \geqslant 1$ and prunes all constant brotherhoods of $\mathcal N$. The result is stored in two separate normalized lists: the remaining pairs of depth $\ell$ are stored in a list $\mathcal N'$ and the newly produced pairs of depth $\ell-1$ are stored in a list $\mathcal L$. The list $\mathcal L$ is of much smaller size than $\mathcal N$, whereas the list $\mathcal N'$, which is handled below using a series of transfer-and-prune moves, can be of size comparable to $\mathcal N$ (and even equal to $\mathcal N$ if the latter is pruned from the outset).

Such a procedure can explicitly be described as follows.

Procedure PruneList.

Input: a normalized list $\mathcal N$ of constant depth $\ell \geqslant 1$.

Output: a normalized list $\mathcal N'$ of constant depth $\ell$ and a normalized list $\mathcal L$ of constant depth $\ell-1$ such that $\mathcal N$ is equivalent to the concatenation $\mathcal N' \cup \mathcal L$.

Proof of Lemma 4.4. It is clear that (ii) and (iii) hold since $\mathcal N' \cup \mathcal L$ is obtained from $\mathcal N$ by pruning all prunable sub-brotherhoods of depth $\ell$. (Note that $\mathcal N'$ is normalized as a sublist of $\mathcal N$ and $\mathcal L$ is normalized by construction.) The list $\mathcal N'$ is obtained from $\mathcal N$ by deleting several sublists of the form
$$ \begin{equation*} \mathcal C=((w\mathrm a_1,\mathrm x_1),(w\mathrm a_2, \mathrm x_2),\dots,(w\mathrm a_n,\mathrm x_n)), \end{equation*} \notag $$
and each time we delete such a sublist, we add the corresponding pair $(w, \mathrm{x}_{i_0})$ to $\mathcal L$. Since $|{w}| < |w\mathrm{a}_i|$ and $\|\mathrm{x}_{i_0}\| \leqslant \|\mathrm{x}_i\|$ for all $i \in \{1, \dots, n\}$, we have $|\{(w,\mathrm{x}_{i_0})\}|_{\mathrm{tot}}<\frac{1}{n}|\mathcal C|_{\mathrm{tot}}$, and thus we obtain (iv).

It remains to estimate the runtime of the procedure. To detach each brotherhood $\mathcal B$ we spend time $O(|\mathcal B|)$ according to Lemma 4.3. The comparison of $\mathrm x_i$ and $\mathrm x_{i-1}$ at step 2 takes time $O(T(\|\mathrm x_i\|+\|\mathrm x_{i-1}\|))$ by Theorem 3.1, and we have to apply this $n-1$ times (for $i=2,\dots,n$). Since $n$ is considered as a constant, we need time $O(|\mathcal B|)+O(T(\|\mathcal B\|)) = O(T(|\mathcal B|_{\mathrm{tot}}))$ to deal with the brotherhood $\mathcal B$. It thus follows from Lemma 6.4 that the total time complexity is $O(T(|\mathcal N|_{\mathrm{tot}}))$.

4.3. The procedure TransferAndPrune

We now turn to the main step of our algorithm, at which we want to either show that a given pruned word-number list $L$ is minimal or otherwise apply a transfer-and-prune move. We consider one family of related brotherhoods at a time, hence our input is a single collection of related nonconstant weighted brotherhoods $(\mathcal B_1, \dots, \mathcal B_n)$. We may assume that the concatenation $\mathcal B = \mathcal B_1 \cup \dots \cup \mathcal B_n$ is normalized. This means that if ${u}$ denotes the common stem of the underlying brotherhoods, then elements of $\mathcal B_i$ are of the form $(\mathrm{a}_i {u} \mathrm{a}_j, \mathrm{x}_{ij})$, where $\mathrm{x}_{ij} \not \equiv 0$. We recall that the encoded transfer matrix $\mathrm{T} = \mathrm{T}(\mathcal B, u)$ is given by $\mathrm{T} = (\mathrm{t}_{ij})$, where $\mathrm t_{ij} := \mathrm{x}_{ij}$ if $\mathcal B_i$ contains a pair with word $\mathrm{a}_iu\mathrm{a}_j$ and $\mathrm{t}_{ij} := \varepsilon$ otherwise. In particular, the total size of its entries is $\|\mathrm{T}\| = \|\mathcal B\|$, and we refer to entries $\mathrm{t}_{ij}$ of $\mathrm{T}$ such that $\mathrm{t}_{ij} \neq \varepsilon$ as nontrivial entries.

Definition 4.5. We say that the encoded transfer matrix $\mathrm{T}$ is sparse if either $n \geqslant 4$ and $\mathrm{T}$ has fewer than $3n$ nontrivial entries or $n=3$ and $\mathrm{T}$ has fewer than $2n$ nontrivial entries.

The goal of this subsection is to describe and analyze a procedure TransferAndPrune with the following properties:

We compute $\mathcal L$ from $\mathcal B$ by applying a single transfer-and-prune move if possible. The precise implementation of this move depends strongly on whether or not the transfer matrix is sparse. In the nonsparse case a naive implementation of the transfer-and-prune move works fine, but in the sparse case we need to take extra care in order not to create too many new word-coefficient pairs. Our procedure looks explicitly as follows.

Procedure TransferAndPrune.

Input: a family of nonconstant weighted brotherhoods $(\mathcal B_1, \dots, \mathcal B_n)$ of depth $\ell \geqslant 2$ such that $\mathcal B := \mathcal B_1 \cup \dots \cup \mathcal B_n$ is normalized.

Output: a Boolean variable minimal and a list $\mathcal L$.

First we check that this procedure is correct.

Proposition 4.6. The procedure TransferAndPrune has properties (i)–(iv) from Definition 4.5.

Proof. If the procedure stops during the execution of step 3, then the encoded transfer matrix is not a column-row sum, and so $\mathcal B$ is minimal by Proposition 2.13, hence the output is correct. Assume now that the transfer matrix is a column-row sum; then we want to perform a transfer-and-prune move. To do this we have to choose some $i' \in \{1, \dots, n\}$ and then choose elements $\mathrm{y}_i, \mathrm{z}_j \in \operatorname{Num}_{\mathfrak N}$ such that
$$ \begin{equation} \mathrm z_j \equiv \mathrm t_{i'j} \quad\text{and}\quad \mathrm y_{i} \equiv \mathrm t_{ij'}\ominus\mathrm t_{i'j'} \quad \text{for some } j'\in \{1, \dots, n\}. \end{equation} \tag{4.1} $$
Then the list $\mathcal L$ must consist of the pairs $(\mathrm{a}_i \mathrm{u}, \mathrm{y}_i)$ and $(\mathrm{u} \mathrm{a}_j, \mathrm{z}_j)$ such that $\mathrm{y}_i \not \equiv 0 \not \equiv \mathrm{z}_j$. We distinguish two cases.

Case 1. If the matrix $\mathrm{T}$ is nonsparse, then at step 5 we perform transfer for $i' := i_0$. Then the coefficients in our list $\mathcal L$ are $\mathrm{z}_j := \mathrm{t}_{i_0j}$ and $\mathrm y_i := \mathrm{t}_{ij_0}\ominus\mathrm{t}_{i_0j_0}$, where $j_0$ was chosen at step 5, (b). (The reason for this specific choice becomes clear from Lemma 4.8.) Then (4.1) holds by definition, hence the algorithm performs correctly.

Case 2. If $\mathrm{T}$ is sparse, then at step 6 we perform transfer with $i':= i_1$. For every $i$ we choose $\mathrm{y}_i := \mathrm{t}_{ij_1}$, where $j_1$ was chosen at step 6, (b), (i). Since $j_1 \in Z$, we have $\mathrm{t}_{i_1j_1} \equiv 0$, and so $\mathrm{y}_i = \mathrm{t}_{ij_1} \equiv \mathrm{t}_{ij_1} \ominus \mathrm{t}_{i_1j_1}$ satisfies (4.1).

For reasons of efficiency, the coefficients $\mathrm{z}_j$ are chosen differently depending on whether or not $i_0 = i_1$. If $i_0 = i_1$, then we choose $\mathrm{z}_j := \mathrm{t}_{i_1j}$, which is obviously correct. If $i_0 \neq i_1$ then we choose $\mathrm{z}_j := \mathrm{t}_{i_0j} \ominus \mathrm{t}_{i_0j_0}$, where $j_0$ is chosen as at step 6, (d), (i). Since $j_0 \in Z$, we have $\mathrm{t}_{i_1j_0} \equiv 0$, and since $\mathrm{T}$ is a column-row sum we have

$$ \begin{equation*} \mathrm t_{i_1j_0}\ominus \mathrm t_{i_0j_0} \equiv \mathrm t_{i_1j}\ominus \mathrm t_{i_0j} \quad\Longrightarrow\quad \mathrm z_j=\mathrm t_{i_0j} \ominus \mathrm t_{i_0j_0} \equiv \mathrm t_{i_1j} \ominus \mathrm t_{i_1j_0} \equiv \mathrm t_{i_1j}. \end{equation*} \notag $$
Hence (4.1) is also satisfied in this case and the algorithm is correct.

Concerning the run time of the procedure TransferAndPrune, we observe the following.

Proposition 4.7. The procedure TransferAndPrune terminates in time $O(T(|\mathcal B|_{\mathrm{tot}}))$.

Proof. We recall that the rank $n$ is treated as a constant throughout our estimates, hence all implied constants are allowed to depend on $n$.

Step 1 can be performed in linear time $O(|\mathcal B|_{\mathrm{tot}})$. The stem can be computed from any pair $(w, \mathrm{x})$ in time $O(|w|) \leqslant O(|\mathcal B|_{\mathrm{tot}})$. The transfer matrix can also be computed in time $O(|\mathcal B|_{\mathrm{tot}})$ by running through the pairs and copying the required data.

Step 2 can be performed by consecutively summing the sizes of the coefficients in $\mathcal B_{i}$ and updating the minimum, so it takes time at most $O(\|\mathcal B_i\|)$ for each row, hence at most $O(\|\mathcal B\|)$ altogether. (Note that here we add integers, rather than rational numbers, even in the case when $\mathfrak N=\mathbb Q$.)

At step 3 we need time at most $O(T(\|\mathrm{t}_{ij}\|+\|\mathrm{t}_{i_0j}\|+\|\mathrm{t}_{i,j-1}\|+\|\mathrm{t}_{i_0,j-1}\|))$ to decide whether $\mathrm{t}_{ij} \ominus \mathrm{t}_{i_0 j}\equiv \mathrm{t}_{i,j-1} \ominus \mathrm{t}_{i_0,j-1}$ by Theorem 3.1. Then all comparisons at the $i$th iteration of step 3 take time at most $O(T(2(\|\mathcal B_i\|+\|\mathcal B_{i_0}\|)))$. By our choice of $i_0$ and property (T3) of the function $T$ (see Convention 6.3) we have $O(T(2(\|\mathcal B_i\|+\|\mathcal B_{i_0}\|))) = O(T(\|\mathcal B_i\|))$, and so by Lemma 6.4 step 3 takes time at most $O(T(|\mathcal B|_{\mathrm{tot}}))$. Step 4 can be carried out in time $O(1)$.

In the nonsparse case we execute step 5. Here part (a) takes time at most $O(|\mathcal B_{i_0}|_{\mathrm{tot}})$, part (b) takes time at most $O(\|\mathcal B\|)$ similarly to step 2, and part (c) takes time at most $O(T(|\mathcal B|_{\mathrm{tot}}))$, since each subtraction $\mathrm t_{i j_0}\ominus \mathrm t_{i_0 j_0}$ takes time at most $O(T(\|\mathcal B_i\|+\|\mathcal B_{i_0}\|))$. We can thus apply Lemma 6.4 as at step 3 to obtain an estimate $O(T(\|\mathcal B\|))$ for the processing of coefficients and $O(|\mathcal B|)$ for the processing of words, obtaining at most $O(T(|\mathcal B|_{\mathrm{tot}}))$ in total. In the sparse case we execute step 6, whose time complexity can be analyzed similarly to step 5. Step 7 has time complexity $O(1)$ again. Thus every step of the algorithm takes time at most $O(T(|\mathcal B|_{\mathrm{tot}}))$, hence this bound also serves as a time estimate for the whole procedure. The proposition is proved.

The crucial point about the procedure TransferAndPrune is that, unless the input is already minimal, the size of its output is at most a fixed factor $<1$ times the size of its input.

Lemma 4.8. Assume that TransferAndPrune returns minimal${} = \mathrm{false}$ when applied to $(\mathcal B_1, \dots, \mathcal B_n)$. If $n \geqslant 3$, then the sizes of the output list $\mathcal L$ and the input list $\mathcal B :=\mathcal B_1 \cup \dots \cup \mathcal B_n$ are related by the inequality

$$ \begin{equation*} |\mathcal L|_{\mathrm{tot}} \leqslant \frac{8}{9} |\mathcal B|_{\mathrm{tot}}. \end{equation*} \notag $$

Proof. Since $|\mathcal L|_{\mathrm{tot}} = |\mathcal L| + ||\mathcal L||$ and $|\mathcal B|_{\mathrm{tot}} = |\mathcal B| + ||\mathcal B||$, it suffices to estimate word lengths and coefficient sizes separately, that is, to show that
$$ \begin{equation*} |\mathcal L| \leqslant \frac{8}{9} |\mathcal B|, \qquad \|\mathcal L\| \leqslant \frac{8}{9} \|\mathcal B\|. \end{equation*} \notag $$
To do this we use the fact that if $m_1, \dots, m_n$ are rational numbers and $m_{i_0}$ is the smallest of them, then
$$ \begin{equation} m_{i_0} \leqslant \frac{1}{n} \sum_{i=1}^n m_i. \end{equation} \tag{4.2} $$
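As an illustrative sketch (ours, not part of the paper), inequality (4.2) can be checked directly in exact rational arithmetic:

```python
from fractions import Fraction

def min_le_mean(ms):
    # Inequality (4.2): the smallest of the rationals m_1, ..., m_n
    # never exceeds their arithmetic mean.
    return min(ms) <= sum(ms, Fraction(0)) / len(ms)
```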
Let $\mathrm{T}$ be the coefficient matrix constructed at step 1. We distinguish two cases.

Case 1: the matrix $\mathrm{T}$ is nonsparse. Assume first that $n \geqslant 4$, so that our input contains at least $3n$ words of length $\ell$ and therefore $|\mathcal B| \geqslant 3n\ell$. The maximum number of words in our output is $n + (n-1) = 2n-1$ (since at step 5, (a), we create at most $n$ words and at step 5, (c), we create at most $(n-1)$ words), and each of them has length $\ell-1$, hence

$$ \begin{equation*} |\mathcal L| \leqslant (2n-1)(\ell-1) < (2n-1)\ell \leqslant \frac{2n-1}{3n} |\mathcal B| \leqslant \frac {2}{3} |\mathcal B| \leqslant \frac {8}{9}{|\mathcal B|}. \end{equation*} \notag $$
For $n=3$ the same bound follows by a similar argument from the inequalities
$$ \begin{equation*} {|\mathcal L|} \leqslant \frac{(2n-1)(\ell-1)}{2n\ell} |\mathcal B| \leqslant \frac{2n-1}{2n} |\mathcal B|=\frac{5}{6} |\mathcal B| \leqslant \frac {8}{9}{|\mathcal B|}. \end{equation*} \notag $$
On the other hand $||\mathcal B||$ is precisely the total size of the entries in the matrix $\mathrm{T}$, that is,
$$ \begin{equation} \|\mathcal B\|=\sum_{i=1}^n\sum_{j=1}^n \|\mathrm t_{ij}\|, \end{equation} \tag{4.3} $$
whereas in view of step 5, (a), and step 5, (c), the coefficient size of the output is
$$ \begin{equation*} \|\mathcal L\|=\sum_{j=1}^n \|\mathrm t_{i_0j}\|+\sum_{i\neq i_0} \|\mathrm t_{ij_0}\ominus \mathrm t_{i_0j_0}\|. \end{equation*} \notag $$
In view of our choice of $i_0$ at step 2 we can use (4.2) to deduce the estimate
$$ \begin{equation} \sum_{j=1}^n \|\mathrm t_{i_0j}\| \leqslant \frac{1}{n} \sum_{i=1}^n \biggl(\sum_{j=1}^n \|\mathrm t_{ij}\|\biggr)=\frac{\|\mathcal B\|}{n}. \end{equation} \tag{4.4} $$
Similarly, in view of our choice of $j_0$ at step 5, (b), we have the estimate
$$ \begin{equation*} (n-2) \|\mathrm t_{i_0j_0}\|+\sum_{i=1}^n \|\mathrm t_{ij_0}\| \leqslant \frac{1}{n} \sum_{j=1}^n \biggl((n-2) \|\mathrm t_{i_0j}\|+\sum_{i=1}^n \|\mathrm t_{ij}\| \biggr). \end{equation*} \notag $$

Combining these two estimates and using Lemma 6.4 we obtain

$$ \begin{equation*} \begin{aligned} \, \|\mathcal L\| &\leqslant \frac{\|\mathcal B\|}{n}+\sum_{i\neq i_0} (\|\mathrm t_{ij_0}\|+ \|\mathrm t_{i_0j_0}\|) =\frac{\|\mathcal B\|}{n}+(n-2) \|\mathrm t_{i_0j_0}\|+\sum_{i=1}^n\|\mathrm t_{ij_0}\| \\ &\leqslant \frac{\|\mathcal B\|}{n}+\frac{1}{n} \sum_{j=1}^n \biggl((n-2) \|\mathrm t_{i_0j}\|+\sum_{i=1}^n \|\mathrm t_{ij}\| \biggr) \\ &= \frac{\|\mathcal B\|}{n}+\frac{n-2}{n} \sum_{j=1}^n \|\mathrm t_{i_0j}\|+\frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \|\mathrm t_{ij}\| \\ &\leqslant \frac{\|\mathcal B\|}{n}+\frac{n-2}{n} \frac{\|\mathcal B\|}{n}+\frac{1}{n} \|\mathcal B\| =\frac{3n-2}{n^2} \|\mathcal B\|. \end{aligned} \end{equation*} \notag $$
Hence for $n \geqslant 3$ we obtain
$$ \begin{equation*} \|\mathcal L\| \leqslant \frac{7}{9} \|\mathcal B\| \leqslant \frac{8}{9} \|\mathcal B\|, \end{equation*} \notag $$
which finishes case 1.

Case 2: the matrix $\mathrm{T}$ is sparse. Let $\lambda$ be the minimum number of nonzero entries in a row of $\mathrm{T}$, that is, the size of the set $S$ constructed at step 6, (a). Since the weighted brotherhoods $\mathcal B_1, \dots, \mathcal B_n$ are nonconstant, we have $\lambda \geqslant 1$, and since $\mathrm{T}$ is sparse we have either $\lambda \leqslant 2$ and $n \geqslant 4$ or $\lambda \leqslant 1$ and $n = 3$. In either case we have $|Z| \geqslant 2$ and $\lambda \in \{1,2\}$. If $\lambda = 1$, then $S$ is actually a singleton, say, $S = \{j_s\}$. In this case we define the sets

$$ \begin{equation*} A:=\{i \in \{1, \dots, n\}\setminus\{i_1 \} \mid \mathrm t_{ij_s} \equiv \mathrm t_{i_1j_s}\} \end{equation*} \notag $$
and
$$ \begin{equation*} A^c:=\{i \in \{1, \dots, n\}\setminus\{i_1 \}\mid \mathrm t_{ij_s} \not \equiv \mathrm t_{i_1j_s}\}. \end{equation*} \notag $$
Then we set $r := |A|$, so that $|A^c| = n-r-1$.

First assume that $\lambda = 2$. Then $\mathcal B$ contains at least $2n$ words, and hence $|\mathcal B| \geqslant 2n\ell$. At step 6, (b), (ii), we create at most $(n-1)$ words of length $(\ell-1)$, and at step 6, (c) or (d), we create at most $|S| = 2$ words. Thus, for $\lambda = 2$ and $n \geqslant 3$ we obtain

$$ \begin{equation*} |\mathcal L| \leqslant (n+1)(\ell-1) < (n+1)\ell \leqslant \frac{n+1}{2n} |\mathcal B| \leqslant \frac{4}{6}|\mathcal B| \leqslant \frac{8}{9}|\mathcal B|. \end{equation*} \notag $$
Now consider the case $\lambda = 1$. Since the matrix $\mathrm{T}$ is a column-row sum, the $i$th row of $\mathrm{T}$ becomes constant after subtracting $\mathrm{t}_{i_1j_s}$ from $\mathrm{t}_{ij_s}$. Thus, if $i \in A$, then $\mathcal B_i$ contains only one entry, whereas if $i \in A^c$, then $\mathcal B_i$ contains at least $(n-1)$ entries. Hence we obtain
$$ \begin{equation*} |\mathcal B| \geqslant (1+|A|+|A^c| (n-1)) \ell \geqslant (1+r+2(n-r-1)) \ell \geqslant (2n-r-1) \ell. \end{equation*} \notag $$

At step 6, (b), (ii), we create $(n-r-1)$ words and at step 6, (c) or (d), we create at most one word, all of which are of length $\leqslant \ell-1$, hence

$$ \begin{equation*} |\mathcal L| \leqslant ((n-r-1)+1)(\ell-1) \leqslant (n-r) \ell \leqslant \frac{n-r}{2n-r-1} |\mathcal B|. \end{equation*} \notag $$
If $r = 0$, then $ {(n-r)}/{(2n-r-1)} = n/(2n-1) \leqslant 3/5$, and if $r \geqslant 1$, then $2n-r-1 \leqslant 2(n-r)$, hence ${(n-r)}/{(2n-r-1)} \leqslant 1/2$. Thus, in any case
$$ \begin{equation*} |\mathcal L| \leqslant \frac 8 9 |\mathcal B|. \end{equation*} \notag $$

We now turn to coefficient sizes. First consider the coefficients created at step 6, (b), (ii). For each $i\in \{1,2,\dots,n\}\setminus \{i_1\}$ two cases are possible. Either no pair is created (if the $i$th and $i_1$th rows encode equal vectors) or there are at least two nontrivial entries $\mathrm{t}_{ij_2}$ and $\mathrm{t}_{ij_3}$ in the $i$th row such that $\{j_2, j_3\} \subset Z$ (since $|Z| \geqslant 2$), and the smallest of these coefficients is copied to $\mathcal N$. Either way the coefficients created at the $i$th iteration of step 6, (b), (ii), are of total size at most $||\mathcal B_i||/2$ and, consequently, the total size of all coefficients created in step 6, (b), (ii), is at most $||\mathcal B||/2$.

Now consider the coefficients created at parts (c) and (d) of step 6. If $i_0 \neq i_1$, then their total size is given by

$$ \begin{equation*} \begin{aligned} \, &\sum_{j \in S}\|\mathrm t_{i_0j} \ominus \mathrm t_{i_0j_0}\| \leqslant \sum_{j \in S}(\|\mathrm t_{i_0j}\|+\|\mathrm t_{i_0j_0}\|)\leqslant |S|\,\|\mathrm t_{i_0j_0}\|+\sum_{j \in S}\|\mathrm t_{i_0j}\| \\ &\qquad= |S| \min_{j \in \{1, \dots, n\}} \|\mathrm t_{i_0j}\|+\sum_{j \in S}\|\mathrm t_{i_0j}\| \leqslant \ |Z| \min_{j \in \{1, \dots, n\}} \|\mathrm t_{i_0j}\|+\sum_{j \in S}\|\mathrm t_{i_0j}\| \\ &\qquad \leqslant \sum_{j \in Z}\|\mathrm t_{i_0j}\|+\sum_{j \in S}\|\mathrm t_{i_0j}\| \leqslant \sum_{j=1}^n \|\mathrm t_{i_0j}\| \leqslant \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \|\mathrm t_{ij}\|= \frac{1}{n} \|\mathcal B\|. \end{aligned} \end{equation*} \notag $$
Here in the second line we have used that $|S| \leqslant 2 \leqslant |Z|$ and in the third line we have used (4.2). If $i_0 = i_1$, then the choice for $i_0$ we made at step 2 and the estimate (4.4) immediately show that the total size of the coefficients created at step 6, (c), is bounded by $\frac{1}{n} \|\mathcal B\|$. Either way, we see that for all $n\geqslant 3$,
$$ \begin{equation*} \|\mathcal L\| \leqslant \frac{\|\mathcal B\|}{2}+\frac{\|\mathcal B\|}{n}=\frac{n+2}{2n} \|\mathcal B\| \leqslant \frac{5}{6} \|\mathcal B\| \leqslant \frac{8}{9} \|\mathcal B\|. \end{equation*} \notag $$
This finishes case 2. Lemma 4.8 is proved.

4.4. The main processing step

In this section we describe the algorithm MainProcessingStep and then prove Lemma 4.9, which is used afterwards to prove our main result, Theorem 3.2.

We can describe such an algorithm explicitly as follows:

Procedure MainProcessingStep.

Input: a normalized list $\mathcal N$ of constant depth $\ell \geqslant 2$.

Output: a Boolean variable minimal and a normalized list of pairs $\mathcal L$.

Proof of Lemma 4.9. The correctness of the procedure follows from the facts that the procedure PruneList prunes correctly all constant brotherhoods of maximum depth in $\mathcal N$ and that the procedure TransferAndPrune applies correctly a transfer-and-prune move if possible and otherwise sets the Boolean variable minimal to $\mathrm{true}$.

By Lemma 4.4, step 2 takes time $O(T(|\mathcal N|_{\mathrm{tot}}))$ to turn the initial list $\mathcal N$ into an equivalent list $\mathcal L \cup \mathcal N'$ such that

$$ \begin{equation*} 0 \leqslant |\mathcal L|_{\mathrm{tot}} \leqslant \frac{1}{n}(|\mathcal N|_{\mathrm{tot}}-|\mathcal N'|_{\mathrm{tot}}). \end{equation*} \notag $$
At step 3 the list $\mathcal N'$ is split into lists $\mathcal A_1, \mathcal A_2, \dots, \mathcal A_n$ whose union is equal to $\mathcal N'$, and this takes time at most $O(|\mathcal N'|_{\mathrm{tot}})$.

Now consider the iterations of step 4. After each iteration the size of the list $\mathcal A_1\cup \mathcal A_2\cup \dots\cup \mathcal A_n$ decreases by the integer $|\mathcal B|_{\mathrm{tot}}$, where $\mathcal B$ is the union $\mathcal B_1\cup \dots \cup \mathcal B_k$ of the brotherhoods produced in part (a). Each such step takes time $O(|\mathcal B|_{\mathrm{tot}})$ by Lemma 4.3, and its output has size at most $\frac89 |\mathcal B|_{\mathrm{tot}}$ if minimal${} = \mathrm{false}$ after its execution. Using Lemma 6.4 we see that the time complexity of step 4 is at most

$$ \begin{equation*} O(T(|\mathcal A_1 \cup \dots \cup \mathcal A_n|_{\mathrm{tot}})) \leqslant O(T(|\mathcal N'|_{\mathrm{tot}}))\leqslant O(T(|\mathcal N|_{\mathrm{tot}})). \end{equation*} \notag $$

If $n \geqslant 3$ and minimal${} = \mathrm{false}$ at the end of step 4, then the original list $\mathcal L$ is extended to a list $\mathcal L'$, the total size of which is bounded above by

$$ \begin{equation*} |\mathcal L|_{\mathrm{tot}}+\frac{8}{9} |\mathcal N'|_{\mathrm{tot}} \leqslant \frac{1}{n} ( |\mathcal N|_{\mathrm{tot}}-|\mathcal N'|_{\mathrm{tot}} )+\frac 8 9 |\mathcal N'|_{\mathrm{tot}} \leqslant \max\biggl\{\frac 1 n, \frac 8 9\biggr\} |\mathcal N|_{\mathrm{tot}} \leqslant \frac 8 9 |\mathcal N|_{\mathrm{tot}}. \end{equation*} \notag $$
In any case the list $\mathcal L'$ obtained by appending $\mathcal A_1, \dots, \mathcal A_n$ to the list obtained after step 4 satisfies $|\mathcal L'|_{\mathrm{tot}} \leqslant |\mathcal N|_{\mathrm tot}$ (even if $n=2$ and/or minimal${} = \mathrm{true}$), so that the final step can also be carried out in time at most $O(T(|\mathcal N|_{\mathrm{tot}}))$ and does not increase the size of the output. In the case when $n \geqslant 3$ and minimal${} = \mathrm{false}$ the size of the output is thus at most $\frac{8}{9}$ times the size of the input. The lemma is proved.
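As a rough illustration (our own sketch, not from the paper), the $\frac89$-contraction guarantees that repeated applications of MainProcessingStep perform a geometrically decreasing amount of work, so, in a simplified cost model where each call costs its current input size, the total cost is dominated by the first application:

```python
def total_work(initial_size, shrink=8 / 9):
    # Hypothetical cost model: each call costs its current input size and
    # shrinks that size by the factor 8/9, as in Lemma 4.8.
    size, total = float(initial_size), 0.0
    while size >= 1.0:
        total += size
        size *= shrink
    return total
```

Since $\sum_{k\geqslant 0}(8/9)^k = 9$, the total never exceeds $9$ times the initial size.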

4.5. The final algorithm

Using all procedures described above we are now finally ready to describe an algorithm FindMinimalList which, given an arbitrary encoded list, finds an equivalent minimal list. This algorithm will be used to prove Theorem 3.2.

Algorithm FindMinimalList.

Input: an encoded list $\mathcal L$.

Output: a normalized encoded list $\mathcal M$ which is equivalent to $\mathcal L$.

Proof of Theorem 3.2. We just have to combine all previous lemmas.

First we show that the algorithm FindMinimalList gives a correct result. If $\mathcal N_0, \dots, \mathcal N_d$ denote the lists created at step 4, then the list $\mathcal N_d \cup \mathcal{N}_{d-1} \cup \dots \cup \mathcal N_1 \cup \mathcal N_0 = \mathcal N$ is equivalent to $\mathcal L$. Now, at step 6 one of two cases occurs: either the iteration never breaks and we obtain lists $\mathcal M_d, \dots, \mathcal M_1$ (in which case we set $t := 1$), or the algorithm produces lists $\mathcal M_d, \dots, \mathcal M_t$ for some $t \geqslant 2$ and breaks at the next iteration. In either case one shows by descending induction on $i$ that for all $i \in \{d, d-1, \dots, t\}$ the list $\mathcal M_i$ is a list of constant depth $\ell=i$ which is equivalent to the union $\mathcal N_d \cup \dots \cup \mathcal N_i$. Indeed, for $i := d$ there is nothing to show, and if the claim holds for $i$ and the algorithm does not break at the next step, then the correctness of MainProcessingStep ensures that $\mathcal M_{i-1}$ is of constant depth $i-1$ and satisfies

$$ \begin{equation*} \mathcal M_{i-1} \sim \mathcal N' \cup \mathcal N_{i-1} \sim \mathcal N_d \cup \dots \cup \mathcal N_i \cup \mathcal N_{i-1}, \end{equation*} \notag $$
which completes the induction.

Now, at step 7 two cases are possible.

Case I. If during the execution of step 6 a break occurs after computing $\mathcal M_d, \dots, \mathcal M_t$ for some $t \geqslant 2$, then the correctness of MainProcessingStep ensures that $\mathcal M_t$ is minimal and, in fact, unbalanced. This implies that $\mathcal M := \mathcal M_t \cup \mathcal N_{t-1} \cup \dots \cup \mathcal N_0$ is also unbalanced, hence minimal. Moreover, by the previous induction argument we have

$$ \begin{equation*} \mathcal M \sim \mathcal M_t \cup \mathcal N_{t-1} \cup \dots \cup \mathcal N_0 \sim \mathcal N_d \cup \dots \cup \mathcal N_{t} \cup \mathcal N_{t-1} \cup \dots \cup \mathcal N_0 \sim \mathcal L. \end{equation*} \notag $$
Thus $\mathcal M$ is a minimal list, which is equivalent to $\mathcal L$ in this case.

Case II. If all iterations of step 6 were carried out without break, then we obtain a list $\mathcal M_1$ of constant depth $1$ which is equivalent to $\mathcal N_d \cup \dots \cup \mathcal N_1$ by the above induction argument, and therefore satisfies $\mathcal M_1 \cup \mathcal N_0 \sim \mathcal L$. If the list $\mathcal N'$ produced at step 7, (a), is nonempty, then $\mathcal M_1$ is a nonconstant brotherhood of depth $1$, and using an argument similar to § 2.4.4 we deduce that $\mathcal M:=\mathcal M_1\cup \mathcal N_0$ is minimal, and thus is a minimal list equivalent to $\mathcal L$. On the other hand, if $\mathcal N'$ is empty, then $\mathcal M_1$ is equivalent to $\mathcal K$, and so $\mathcal L \sim \mathcal M_1 \cup \mathcal N_0 \sim \mathcal K \cup \mathcal N_0$, that is, $\mathcal L$ is equivalent to $\mathcal M:={}$NormalizeList$(\mathcal K \cup \mathcal N_0)$. Moreover, $\mathcal M$ is a normalized list of maximum depth $\leqslant 0$, hence minimal. This finishes the proof of correctness in case II.

It remains to estimate the time complexity of the algorithm; we proceed similarly to the complexity analysis in Lemma 4.9. It suffices to show that each of the seven steps of the algorithm is executed in time at most $O(T(|\mathcal L|_{\mathrm{tot}}))$. This is obvious for step 2. By Lemma 4.2, step 1 takes time $O(T(|\mathcal L|_{\mathrm{tot}}))$ and produces a list $\mathcal N$ with $|\mathcal N|_{\mathrm{tot}}\leqslant|\mathcal L|_{\mathrm{tot}}$. The latter implies, in particular, that step 3 can be performed in time $O(|\mathcal N|_{\mathrm{tot}})\leqslant O(|\mathcal L|_{\mathrm{tot}})$. Similarly, the split at step 4 takes time $O(|\mathcal N|_{\mathrm{tot}})\leqslant O(|\mathcal L|_{\mathrm{tot}})$ since it can be done by a single run over the list $\mathcal N$. The complexity of step 5 is also linear.

Now we consider the iterations of step 6. Assume that these iterations produce lists $\mathcal M_d, \dots, \mathcal M_t$ before an iteration breaks. We claim that

$$ \begin{equation*} |\mathcal M_{i}|_{\mathrm{tot}} \leqslant \frac89|\mathcal M_{i+1}|_{\mathrm{tot}}+|\mathcal N_i|_{\mathrm{tot}} \quad \text{for all } t \leqslant i \leqslant d-1. \end{equation*} \notag $$
Indeed, this inequality holds for trivial reasons if $\mathcal M_{i+1}$ is empty, and otherwise it follows from Lemma 4.9, (iv). If we denote the empty list by $\mathcal M_{d+1}$, then it also holds for $i = d$ since $|\mathcal M_d|_{\mathrm{tot}}=|\mathcal N_d|_{\mathrm{tot}}$. Then using descending induction on $i$ we find that
$$ \begin{equation*} |\mathcal M_{i}|_{\mathrm{tot}} \leqslant \sum_{k=i}^d\biggl(\frac89\biggr)^{k-i}|\mathcal N_k|_{\mathrm{tot}}, \qquad i \in \{t, \dots, d\}. \end{equation*} \notag $$
Summing the left- and right-hand sides over $i=t, \dots, d$ and interchanging the order of summation, we obtain
$$ \begin{equation*} \sum_{i=t}^{d}|\mathcal M_{i}|_{\mathrm{tot}} \leqslant \sum_{i=t}^{d} \sum_{k=i}^d\biggl(\frac89\biggr)^{k-i}|\mathcal N_k|_{\mathrm{tot}} \leqslant \sum_{k=t}^d \biggl( \sum_{j=0}^\infty \biggl(\frac89\biggr)^{j}\biggr)|\mathcal N_k|_{\mathrm{tot}}= 9\sum_{k=t}^d|\mathcal N_{k}|_{\mathrm{tot}}. \end{equation*} \notag $$
Thus, if $\mathcal M_d, \dots, \mathcal M_t$ are produced before an iteration breaks, then
$$ \begin{equation} \sum_{i=t}^{d}|\mathcal M_i|_{\mathrm{tot}} \leqslant 9\sum_{i=t}^d|\mathcal N_{i}|_{\mathrm{tot}}\leqslant 9 |\mathcal N|_{\mathrm{tot}}. \end{equation} \tag{4.6} $$
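As a quick numerical sanity check (our own, not part of the paper) of this geometric bound, one can evaluate the double sum $\sum_i \sum_{k\geqslant i}(8/9)^{k-i}s_k$ for arbitrary nonnegative sizes $s_k$ and compare it with $9\sum_k s_k$:

```python
def double_sum(sizes, r=8 / 9):
    # sum over i of sum over k >= i of r^(k-i) * sizes[k],
    # as in the derivation of inequality (4.6)
    d = len(sizes)
    return sum(r ** (k - i) * sizes[k]
               for i in range(d) for k in range(i, d))
```

Each $s_k$ is weighted by at most $\sum_{j\geqslant 0}(8/9)^j = 9$, so the result never exceeds $9\sum_k s_k$.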

Given $i \in \{t, \dots, d\}$, the $(d-i+1)$th iteration of step 6, (a), takes time $O(T(|\mathcal M_i|_{\mathrm{tot}}))$ by Lemma 4.9, (v). It thus follows from Lemma 6.4 that all iterations of step 6, (a), taken together take time at most $O(T(|\mathcal M_t|_{\mathrm{tot}} + \dots + |\mathcal M_d|_{\mathrm{tot}}))$, and thus time at most $O(T(|\mathcal N|_{\mathrm{tot}}))$ by inequality (4.6) and the fact that the function $T$ has property (T3) from Convention 6.3.

Similarly, for every $i \in \{t, \dots, d\}$ the call of the procedure NormalizeList at the $(d-i+1)$th iteration of step 6, (b), takes time at most $O(T(\frac89|\mathcal M_{i}|_{\mathrm{tot}}+|\mathcal N_{i-1}|_{\mathrm{tot}}))$. Summing over $i$ and using Lemma 6.4 (the totalling lemma) we obtain that all these calls take total time at most $O(T(\frac89|\mathcal M_t|_{\mathrm{tot}} + \dots + \frac89|\mathcal M_d|_{\mathrm{tot}} +|\mathcal N|_{\mathrm{tot}}))$, which is $O(T(|\mathcal N|_{\mathrm{tot}}))$ again by inequality (4.6) and property (T3). We have thus established that the whole of step 6 can be carried out in time $O(T(|\mathcal N|_{\mathrm{tot}}))$.

Now assume that $\mathcal M_d, \dots, \mathcal M_t$ were produced at step 6 before a break. If $t \geqslant 2$, that is, a break occurred, then at step 7 the algorithm just returns a minimal list $\mathcal M := \mathcal M_t \cup \mathcal N_{t-1} \cup \dots \cup \mathcal N_0$ of size at most

$$ \begin{equation*} |\mathcal M|_{\mathrm{tot}}= |\mathcal M_t|_{\mathrm{tot}}+\sum_{i=0}^{t-1} |\mathcal N_i|_{\mathrm{tot}}\leqslant 9\sum_{i=t}^{d}|\mathcal N_i|_{\mathrm{tot}}+9 \sum_{i=0}^{t-1}|\mathcal N_i|_{\mathrm{tot}}= 9 |\mathcal N|_{\mathrm{tot}}. \end{equation*} \notag $$
If $t=1$, that is, no break occurs, then step 7 operates initially with the list $\mathcal M_1$, which has size at most $m :=9 (|\mathcal N|_{\mathrm{tot}}-|\mathcal N_0|_{\mathrm{tot}})$ according to (4.6). Then pruning at step 7, (a), takes time $O(T(|\mathcal M_1|_{\mathrm{tot}})) = O(T(m)) = O(T(|\mathcal N|_{\mathrm{tot}}))$ to produce a list $\mathcal N' \cup \mathcal K$ of size at most $m$ by Lemma 4.4. The potentially necessary normalization at step 7, (b), also takes time $O(T(m)) = O(T(|\mathcal N|_{\mathrm{tot}}))$, and so the whole of step 7 can be accomplished in time $O(T(|\mathcal N|_{\mathrm{tot}}))$ and produces a list $\mathcal M$ of total size at most $9 |\mathcal N|_{\mathrm{tot}}$.

Theorem 3.2 is proved.

§ 5. Group case

In this section we explain how the techniques of the previous sections must be modified to deal with counting functions on a free group rather than on a free monoid. We will see that modifications are necessary only in a few places, although the notation is more involved.

5.1. Encoding counting functions on free groups

Again, in this section we consider a finite alphabet $\mathrm{A} = \{\mathrm{a}_1, \dots, \mathrm{a}_n\}$ of size $n \geqslant 2$. We define the extended alphabet $\mathrm{A}^\pm := \{\mathrm{a}_1, \dots, \mathrm{a}_n, \mathrm{a}^{-1}_1, \dots, \mathrm{a}^{-1}_n\}$, where $\mathrm{a}^{-1}_1, \dots, \mathrm{a}^{-1}_n$ are further symbols chosen so that $|\mathrm{A}^\pm| = 2n$. We then identify $F_n$ with the subset ${\mathrm{A}^{\pm \ast} \subset (\mathrm{A}^\pm)^*}$ consisting of the words that are reduced, that is, contain no subwords of the form $\mathrm{a}_j\mathrm{a}^{-1}_j$ or $\mathrm{a}^{-1}_j\mathrm{a}_j$ for $j \in \{1, \dots, n\}$. Given a word $w \in \mathrm{A}^{\pm \ast}$, we denote by $w_{\mathrm{in}} \in \mathrm{A}^\pm$ and $w_{\mathrm{fin}} \in \mathrm{A}^\pm$ its initial and final letters, respectively.
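To make the identification concrete, here is a minimal sketch (our own; the paper does not prescribe an implementation) in which the letters of $\mathrm A^\pm$ are encoded as nonzero integers, with $\mathrm a_j \mapsto j$ and $\mathrm a_j^{-1}\mapsto -j$:

```python
def is_reduced(word):
    # A word over A^± is reduced iff no letter is followed by its inverse.
    return all(x != -y for x, y in zip(word, word[1:]))

def free_reduce(word):
    # Cancel adjacent inverse pairs with a stack; the result is the
    # reduced word representing the same element of F_n.
    out = []
    for x in word:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return tuple(out)
```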

From now on let $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$. By a word-coefficient pair (or pair for short) we mean a pair $(w, \mathrm{x})$, where $w \in \mathrm{A}^{\pm \ast}$ and $\mathrm x \in \operatorname{Num}_{\mathfrak N}$, and by an encoded list (or list for short) we mean a doubly linked list of word-coefficient pairs. Any such list represents a counting function on $F_n$. For example, if $\mathcal L = ((w_1, \mathrm x_1), \dots, (w_N, \mathrm x_N))$, then the associated counting function is

$$ \begin{equation*} \rho_{\mathcal L}=\sum_{i=1}^N \langle x_i \rangle \rho_{w_i}. \end{equation*} \notag $$
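For illustration, a naive evaluation of such a counting function could look as follows (our own sketch; words are tuples of nonzero integers as above, coefficients exact rationals):

```python
from fractions import Fraction

def rho(w, v):
    # Number of occurrences of w as a subword (factor) of v.
    m = len(w)
    return sum(1 for i in range(len(v) - m + 1) if v[i:i + m] == w)

def rho_of_list(pairs, v):
    # Counting function of an encoded list L = ((w_1, x_1), ..., (w_N, x_N)).
    return sum((x * rho(w, v) for w, x in pairs), Fraction(0))
```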
As in the monoid case, we say that a list $\mathcal L = ((w_1, \mathrm{x}_1), \dots, (w_N, \mathrm{x}_N))$ has the maximum depth
$$ \begin{equation*} \ell:=\sup\{|w_j| \mid \mathrm x_j \not \equiv 0\}, \end{equation*} \notag $$
and we say that $\mathcal L$ is minimal if it is not equivalent to a list of smaller maximum depth. We also say that $\mathcal L$ is normalized if all words $w_j$ are distinct, ordered by length and ordered lexicographically within groups of words of the same length (with respect to some fixed total order on $\mathrm{A}^\pm$), and if $\mathrm{x}_j \not \equiv 0$ for all $j \in \{1, \dots, N\}$.

Remark 5.1. As in the monoid case, there is a procedure NormalizeList which replaces a given list $\mathcal L$ by an equivalent normalized list $\mathcal N$ of total size $|\mathcal N|_{\mathrm{tot}} \leqslant |\mathcal L|_{\mathrm{tot}}$ in time at most $O(T(|\mathcal L|_{\mathrm{tot}}))$.

Similarly to the monoid case, we have two kinds of basic equivalences between counting functions on $F_n$ (see [9]), but these are now given by the slightly different formulae

$$ \begin{equation} \rho_w \sim \sum_{\mathrm a \in \mathrm A^\pm \setminus \{w_{\mathrm{in}}^{-1}\}} \rho_{\mathrm a w} \quad\text{and}\quad \rho_w \sim \sum_{\mathrm a \in \mathrm A^\pm \setminus \{w_{\mathrm{fin}}^{-1}\}} \rho_{w\mathrm a}. \end{equation} \tag{5.1} $$
To take this into account, we need to modify our pruning and transfer moves slightly.
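The first equivalence in (5.1) holds because, on a reduced word $v$, the two sides differ exactly by the indicator of the event that $v$ begins with $w$, which is a bounded quantity. A small check of this (our own sketch, with letters encoded as nonzero integers as before):

```python
def rho(w, v):
    # Number of occurrences of w as a subword of v.
    m = len(w)
    return sum(1 for i in range(len(v) - m + 1) if v[i:i + m] == w)

def left_defect(w, v, n):
    # rho_w(v) minus the sum of rho_{aw}(v) over a in A^± \ {w_in^{-1}};
    # for reduced v this difference is 1 if v starts with w, else 0.
    letters = [j for j in range(-n, n + 1) if j != 0 and j != -w[0]]
    return rho(w, v) - sum(rho((a,) + w, v) for a in letters)
```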

In particular, we are interested in counting functions $f$ which are antisymmetric (that is, $f(g^{-1}) = -f(g)$), since these represent classes in the bounded cohomology of $F_n$ (see [5]). We also call the corresponding lists antisymmetric.

As before, two lists are called equivalent if the corresponding counting functions are equivalent, that is, are at a bounded distance from each other, and this equivalence is denoted by $\sim$. We also say that two antisymmetric counting functions $f_1$ and $f_2$ and, by extension, any two lists representing them, are cohomologous, which is denoted by $f_1 \equiv f_2$, if $f_1-f_2$ is at a bounded distance from a homomorphism and therefore $f_1$ and $f_2$ define the same class in the bounded cohomology. We want to solve the following two problems algorithmically.

Problem 5.2 (equivalence problem). Given two lists $\mathcal L_1$ and $\mathcal L_2$, decide whether ${\mathcal L_1 \sim \mathcal L_2}$.

Problem 5.3 (cohomological problem). Given two antisymmetric lists $\mathcal L_1$ and $\mathcal L_2$, decide whether $\mathcal L_1 \equiv \mathcal L_2$.

As in the monoid case, both problems can be reduced to the following problem (note that a list equivalent to an antisymmetric list is cohomologous to the empty list if and only if it is equivalent to a list of maximum depth ${\leqslant 1}$).

Problem 5.4 (minimality problem). Given a list $\mathcal L$, find a minimal list $\mathcal L'$ equivalent to $\mathcal L$.

5.2. Brotherhoods and basic moves

Let $T_n$ denote the right-Cayley tree of $F_n$ with respect to the extended generating set $\mathrm{A}^{\pm}$, considered as an oriented rooted tree with root $\varepsilon$ and edges oriented away from $\varepsilon$. We identify elements of $F_n$ with vertices of $T_n$. Within $T_n$, we can talk about fathers, brothers, brotherhoods and related brotherhoods as in the monoid case, and we use the same notation. Note, however, that $|\{\varepsilon \ast\}| = 2n$ whereas $|\{w\ast\}| = 2n-1$ for all $w \neq \varepsilon$, that is, the unique brotherhood of depth $1$ is different in size from all other brotherhoods. Given a normalized list $\mathcal L$ and a brotherhood $B$, we define the weighted brotherhood $\mathcal L_B$ exactly as in the monoid case. Then we can define a procedure DetachBrotherhood with the same properties and runtime as in Lemma 4.3 in the group case too.

5.2.1. Pruning

Let $\mathcal L$ be a normalized list of maximum depth $\ell \geqslant 1$. As in the monoid case, if $B = \{w\ast\}$ is a brotherhood and the weighted brotherhood $\mathcal L_B = ((w\mathrm{a}_1, \mathrm{x}_1), \dots, (w\mathrm{a}_n^{-1}, \mathrm{x}_{2n}))$ is nonempty and constant, with $\mathrm{x}_1 \equiv \dots \equiv \mathrm{x}_{2n} \equiv \mathrm{x}$ for some $\mathrm{x} \in \operatorname{Num}_{\mathfrak N}$, then we can remove $\mathcal L_B$ from $\mathcal L$, append $(w, \mathrm{x})$ and normalize the resulting list. This is called a pruning move, and a normalized list is called pruned if it does not allow for any pruning moves.

We can now extend the procedure PruneList from the monoid case to the group case. The new procedure carries out exactly the same steps as in the monoid case and has the same properties as those listed in Lemma 4.4, except that the estimate in part (iv) becomes

$$ \begin{equation*} |\mathcal L|_{\mathrm{tot}} \leqslant \frac{1}{h(n)}(|\mathcal N|_{\mathrm{tot}}-|\mathcal N'|_{\mathrm{tot}}), \end{equation*} \notag $$
where $h(n)$ is the size of a nonempty constant brotherhood of depth $\ell$, that is, $h(n) = 2n-1$ if $\ell \geqslant 2$ and $h(n) = 2n$ if $\ell = 1$.

5.2.2. Generic transfer-and-prune moves

As in the monoid case, we also have the notion of a transfer move in the group case. In fact, we have two slightly different transfer moves, depending on whether the brotherhood, which is transferred, has depth ${\geqslant 3}$ (the generic case) or depth $2$ (the special case). In this subsection we discuss exclusively the generic case, leaving the special case to the next subsection. Thus let $u \in F_n \setminus \{\varepsilon\}$, and let $\mathcal L$ be a normalized list of depth $\ell \geqslant 3$, where $|u| = \ell - 2 \geqslant 1$. We set

$$ \begin{equation*} \mathrm A^{\pm}_{\mathrm{in}}:=\mathrm A^{\pm} \setminus \{u_{\mathrm{in}}^{-1}\} \quad\text{and}\quad \mathrm A^{\pm}_{\mathrm{fin}} :=\mathrm A^{\pm} \setminus \{u_{\mathrm{fin}}^{-1}\}. \end{equation*} \notag $$
As in the monoid case, we write $\mathcal L _{\{\ast u \ast\}}$ for the concatenation of the weighted brotherhoods $\mathcal L_{\{\mathrm{a} \mathrm{u} \ast\}}$, where $\mathrm{a} \in \mathrm A^{\pm}_{\mathrm{in}}$. Then entries of $\mathcal L _{\{\ast u \ast\}}$ are of the form
$$ \begin{equation} (\mathrm {a} \mathrm u \mathrm a', \mathrm x_{\mathrm a\mathrm a'}), \qquad\text{where}\quad \mathrm a \in \mathrm A^{\pm}_{\mathrm{in}}\quad \text{and}\quad \mathrm a' \in \mathrm A^{\pm}_{\mathrm{fin}}. \end{equation} \tag{5.2} $$

We can thus define the encoded transfer matrix $\mathrm T(\mathcal L, u)$ as the matrix of size $(2n-1) \times (2n -1)$ whose rows and columns are indexed by $\mathrm A^{\pm}_{\mathrm{in}}$ and $\mathrm A^{\pm}_{\mathrm{fin}}$, respectively, and whose entry $\mathrm t_{\mathrm{a} \mathrm{a'}}$ with index $(\mathrm{a}, \mathrm{a'}) \in \mathrm A^{\pm}_{\mathrm{in}} \times \mathrm A^{\pm}_{\mathrm{fin}}$ is equal to $\mathrm{x}_{\mathrm{a}\mathrm{a'}}$ if $\mathcal L _{\{\ast u \ast\}}$ contains an entry of the form (5.2) and $\mathrm t_{\mathrm{a} \mathrm{a'}} := \varepsilon$ otherwise. If we fix $\mathrm{b} \in \mathrm A^{\pm}_{\mathrm{in}}$, then we have

$$ \begin{equation*} \begin{aligned} \, \rho_{\mathcal L_{\{\ast u \ast\}}} &= \sum_{\mathrm a \in \mathrm A^\pm_{\mathrm{in}}} \sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}} \mathrm t_{\mathrm a \mathrm a'}\rho_{\mathrm a u \mathrm a'} \\ &= \sum_{\mathrm a \in \mathrm A^\pm_{\mathrm{in}}} \sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}} (\mathrm t_{\mathrm a \mathrm a'}-\mathrm t_{\mathrm b \mathrm a'})\rho_{\mathrm a u \mathrm a'}+\sum_{\mathrm a \in \mathrm A^\pm_{\mathrm{in}}} \sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}}\mathrm t_{\mathrm b \mathrm a'} \rho_{\mathrm a u \mathrm a'} \\ &\sim \sum_{\mathrm a \in \mathrm A^\pm_{\mathrm{in}} \setminus\{\mathrm b\}} \sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}} (\mathrm t_{\mathrm a \mathrm a'}-\mathrm t_{\mathrm b \mathrm a'})\rho_{\mathrm a u \mathrm a'} +\sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}}\mathrm t_{\mathrm b \mathrm a'} \rho_{u \mathrm a'}. \end{aligned} \end{equation*} \notag $$
The list $\mathcal L$ is thus equivalent to any list $\mathcal L'$ obtained from $\mathcal L$ by deleting $\mathcal L_{\{\ast u \ast\}}$, appending a pair of the form $(\mathrm{a}\mathrm{u} \mathrm{a}', \mathrm{y}_{\mathrm{a}\mathrm{a'}})$, where $\mathrm{y}_{\mathrm{a}\mathrm{a'}} \equiv \mathrm{t}_{\mathrm a \mathrm a'}\ominus\mathrm{t}_{\mathrm b \mathrm a'}$, for any $\mathrm a \in \mathrm{A}^\pm_{\mathrm{in}} \setminus\{\mathrm b\}$ and $\mathrm{a}' \in \mathrm{A}^\pm_{\mathrm{fin}}$ such that $\mathrm{t}_{\mathrm a \mathrm a'}\not \equiv \mathrm{t}_{\mathrm b \mathrm a'}$, appending a pair of the form $(\mathrm{u} \mathrm{a}', \mathrm{z}_{\mathrm{a'}})$, where $\mathrm{z}_{\mathrm{a'}} \equiv \mathrm{t}_{\mathrm b \mathrm a'}$, for any $\mathrm{a'} \in \mathrm{A}^\pm_{\mathrm{fin}}$ with $\mathrm{t}_{\mathrm b \mathrm a'} \neq \varepsilon$, and normalizing the resulting list. We say that any such list $\mathcal L'$ is obtained from $\mathcal L$ by a transfer move with stem $u$ and special letter $\mathrm b$.

According to Theorem 4.2 in [9], Theorem 2.9 holds mutatis mutandis in the group case too, that is, a list of maximum depth $\ell$ is minimal if it admits two related weighted brotherhoods, one of which is empty and the other nonconstant. If $\mathcal L'$ is obtained from $\mathcal L$ by a transfer move with stem $u$ and special letter $\mathrm b$, then $\mathcal L'_{\{\mathrm{b}u\ast\}}$ is empty, and so $\mathcal L'$ is minimal unless all weighted brotherhoods of the form $\mathcal L'_{\{\mathrm{a}u\ast\}}$, where $\mathrm{a} \in \mathrm{A}^\pm_{\mathrm{in}} \setminus\{\mathrm b\}$, are constant. As in the monoid case, this implies that $\mathcal L'$ is minimal unless $\mathrm{T}(\mathcal L, u)$ is a column-row sum.

Now assume that $\mathrm{T}(\mathcal L, u) = (\mathrm t_{\mathrm{a}\mathrm{a'}})$ is a column-row sum. This means that for every $\mathrm{a} \in \mathrm{A}^\pm_{\mathrm{in}}$ the number $\langle \mathrm{t}_{\mathrm{a}\mathrm{a'}} \ominus \mathrm{t}_{\mathrm{ba'}} \rangle$ is independent of $\mathrm{a'} \in \mathrm{A}^\pm_{\mathrm{fin}}$. Consequently, if we choose elements $\mathrm y_{\mathrm{a}}, \mathrm z_{\mathrm{a'}} \in \operatorname{Num}_{\mathfrak N}$ such that

$$ \begin{equation} \mathrm y_{\mathrm a} \equiv \mathrm t_{\mathrm a\mathrm a'} \ominus \mathrm t_{\mathrm b\mathrm a'} \quad\text{and}\quad \mathrm z_{\mathrm a'} \equiv \mathrm t_{\mathrm b \mathrm a'}, \qquad \mathrm a \in \mathrm A^\pm_{\mathrm{in}}, \quad \mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}, \end{equation} \tag{5.3} $$
then from the above formula for $\rho_{\mathcal L_{\ast u \ast}}$ we obtain
$$ \begin{equation*} \begin{aligned} \, \rho_{\mathcal L_{\ast u \ast}} &\sim \sum_{\mathrm a \in \mathrm A^\pm_{\mathrm{in}} \setminus\{\mathrm b\}} \sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}} (\mathrm t_{\mathrm a \mathrm a'}-\mathrm t_{\mathrm b \mathrm a'})\rho_{\mathrm a u \mathrm a'} +\sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}}\mathrm t_{\mathrm b \mathrm a'} \rho_{u \mathrm a'} \\ &=\sum_{\mathrm a \in \mathrm A^\pm_{\mathrm{in}} \setminus\{\mathrm b\}} \mathrm y_{\mathrm a}\rho_{\mathrm a u} +\sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{fin}}}\mathrm z_{\mathrm a'} \rho_{u \mathrm a'}. \end{aligned} \end{equation*} \notag $$
Thus, if $\mathrm{T}(\mathcal L, u)$ is a column-row sum and $\mathcal L'$ is obtained from $\mathcal L$ by deleting $\mathcal L_{\ast u \ast}$, appending pairs $(\mathrm{a}u, \mathrm{y_a})$ and $(u\mathrm{a'}, \mathrm{z_{a'}})$ subject to (5.3) and normalizing, then $\mathcal L'$ is equivalent to $\mathcal L$. We say that $\mathcal L'$ is obtained from $\mathcal L$ by a transfer-and-prune move with stem $u$ and special letter $\mathrm b$. Using this move we can now generalize the results of § 4.3.
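To make the column-row sum condition concrete, here is a small sketch in Python; the function name and the representation of the transfer matrix by the integer interpretations $\langle \mathrm t_{\mathrm a\mathrm a'}\rangle$ of its entries are ours.

```python
# A transfer matrix T is a column-row sum precisely when, for every row a,
# the difference <t_{aa'}> - <t_{ba'}> does not depend on the column a'.
# Coefficients are modelled by their integer interpretations.

def is_column_row_sum(T, rows, cols, b):
    """T maps pairs (a, a') to integers; b is the special row."""
    for a in rows:
        diffs = {T[(a, c)] - T[(b, c)] for c in cols}
        if len(diffs) > 1:  # the difference depends on the column a'
            return False
    return True
```

In line with (5.3), a matrix whose entries have the form $\langle\mathrm t_{\mathrm a\mathrm a'}\rangle = \langle\mathrm y_{\mathrm a}\rangle + \langle\mathrm z_{\mathrm a'}\rangle$ passes this test.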

Lemma 5.5. There exists a procedure TransferAndPrune with the following properties:

There are two main differences from the results in § 4.3: first, we allow $n=2$, whereas in the monoid case the condition $n \geqslant 3$ was needed to ensure the estimate in (iv); second, we have to assume here that $\ell \geqslant 3$. We will deal with the case $\ell = 2$ (which corresponds to an empty stem) separately.

Proof of Lemma 5.5. We follow the algorithm of the same name described in § 4.3 as closely as possible; in particular, we perform seven steps which correspond one-to-one to the seven steps in the monoid case.

At the first step we compute the stem and the transfer matrix just as in the monoid case. The transfer matrix has size $m \times m$, where $m := (2n-1)$, and since $n \geqslant 2$ we have $m \geqslant 3$. The transfer matrix is now indexed by $\mathrm A^{\pm}_{\mathrm{in}} \times \mathrm A^{\pm}_{\mathrm{fin}}$ rather than by $\{1, \dots, m\} \times \{1, \dots, m\}$, but apart from this change in indexing we can carry out steps 2 and 3 as in the monoid case.

At step 4 the sparseness condition has to be chosen depending on the size $m$ of the matrix rather than on $n$. Thus we set Sparse to be true if and only if $m\geqslant 4$ (that is, $n \geqslant 3$) and the transfer matrix $\mathrm{T}$ has fewer than $3m = 6n-3$ nontrivial entries, or if $m=3$ (that is, $n=2$) and $\mathrm{T}$ has fewer than $2m = 4n-2 = 6$ nontrivial entries.
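The case distinction at this step can be written as a small predicate; this is only a sketch, and the bookkeeping of the surrounding procedure is omitted.

```python
def sparse(m, nontrivial):
    """Sparseness test for an m x m transfer matrix, m = 2n - 1 >= 3,
    with `nontrivial` nontrivial entries."""
    if m >= 4:                     # n >= 3
        return nontrivial < 3 * m  # fewer than 3m = 6n - 3 entries
    return nontrivial < 2 * m      # m = 3 (n = 2): fewer than 2m = 6 entries
```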

The remaining steps (steps 5–7) are then carried out precisely as in the monoid case, except for the difference in indexing. For example, at step 5, (a), we append the pairs of the form $(u\mathrm{a'}, \mathrm{t}_{\mathrm{b}\mathrm{a'}})$ for $\mathrm{a'}\in A^{\pm}_{\mathrm{fin}}$, where $\mathrm{b} \in \mathrm A^{\pm}_{\mathrm{in}}$ is chosen so that

$$ \begin{equation*} \sum_{\mathrm a' \in A^{\pm}_{\mathrm{fin}}} \|\mathrm t_{\mathrm b\mathrm a'}\|= \min_{\mathrm a\in \mathrm A^{\pm}_{\mathrm{in}}} \sum_{\mathrm a' \in A^{\pm}_{\mathrm{fin}}} \|\mathrm t_{\mathrm a\mathrm a'}\|, \end{equation*} \notag $$
and similarly for the other steps. It is clear that this change in indexing does not affect the runtime or the size of the output. The latter always satisfies (iv), since for all $n\geqslant 2$ we have $m\geqslant 3$, and therefore the proof of Lemma 4.8 applies to the $m \times m$ matrix $\mathrm{T}$. The lemma is proved.

5.2.3. Special transfer-and-prune moves

The transfer-and-prune move discussed in the previous section works for all $n \geqslant 2$ under the condition that the brotherhood that is transferred is of depth at least $3$. For brotherhoods of depth $\ell = 2$ there also exists a transfer-and-prune move but it is more complicated to describe. Nevertheless, we can establish the following lemma.

Lemma 5.6. The statement of Lemma 5.5 remains true also for $\ell = 2$, except that (iv) has to be replaced by

In fact, this estimate is far from optimal, but it is easily established and sufficient for our purposes. For the proof of the lemma let $\mathcal L$ be a normalized list of constant depth $2$. We fix a letter $\mathrm{b} \in \mathrm{A}^\pm$ (which we do not even bother to choose optimally) and set $\mathrm{A}^\pm_{\mathrm{in}} := \mathrm{A}^\pm \setminus \{\mathrm{b}^{-1}\}$. We want to define a move which clears out the sub-brotherhood of type $\{\mathrm{b}\ast\}$ in $\mathcal L$. First we define a transfer matrix $\mathrm{T} = (\mathrm{t}_{\mathrm{a}\mathrm{a'}})$ over $\mathrm{A}^\pm \times \mathrm{A}^\pm$ by setting $\mathrm{t}_{\mathrm{a}\mathrm{a'}}$ to be the coefficient of $\mathrm{aa'}$ in $\mathcal L$ if such a word occurs in $\mathcal L$ and by setting $\mathrm{t}_{\mathrm{a}\mathrm{a'}} := \varepsilon$ otherwise. (In particular, we then have $\mathrm{t}_{\mathrm{a}\mathrm{a}^{-1}} = \varepsilon$ for all $\mathrm{a} \in \mathrm{A}^\pm$.) Using the fact that for $\mathrm a' \in \mathrm{A}^\pm_{\mathrm{in}}$ we have

$$ \begin{equation*} \langle \mathrm t_{\mathrm b\mathrm a'}\rangle \rho_{\mathrm b\mathrm a'} \sim \langle\mathrm t_{\mathrm b\mathrm a'}\rangle\rho_{\mathrm a'}-\sum_{\mathrm a \neq (\mathrm a')^{-1}} \langle\mathrm t_{\mathrm b\mathrm a'}\rangle\rho_{\mathrm a\mathrm a'}, \end{equation*} \notag $$
we obtain
$$ \begin{equation*} \begin{aligned} \, \rho_{\mathcal L} &= \sum_{\mathrm a \in \mathrm A^\pm} \sum_{\mathrm a' \neq \mathrm a^{-1}} \langle\mathrm t_{\mathrm a\mathrm a'}\rangle\rho_{\mathrm a\mathrm a'} \\ &\sim \sum_{a' \in \mathrm A^\pm_{\mathrm{in}}} \langle\mathrm t_{\mathrm b\mathrm a'}\rangle\rho_{\mathrm a'}+\sum_{a \in \mathrm A^\pm\setminus\{b\}} \langle\mathrm t_{\mathrm a\mathrm b^{-1}}\rangle\rho_{\mathrm a\mathrm b^{-1}}+\sum_{\mathrm a \in \mathrm A^\pm \setminus\{\mathrm b\}}\sum_{\mathrm a'\in \mathrm A^\pm_{\mathrm{in}}\setminus\{\mathrm a^{-1}\}} \langle\mathrm t_{\mathrm a\mathrm a'}\ominus \mathrm t_{\mathrm b\mathrm a'}\rangle\rho_{\mathrm a\mathrm a'}. \end{aligned} \end{equation*} \notag $$
We deduce that $\mathcal L$ is minimal unless
$$ \begin{equation} \mathrm t_{\mathrm a\mathrm b^{-1}} \equiv \mathrm t_{\mathrm a\mathrm a'}\ominus \mathrm t_{\mathrm b\mathrm a'} \qquad \text{for all }\quad \mathrm a \in \mathrm A^\pm \setminus\{\mathrm b\}\quad \text{and}\quad \mathrm a' \in \mathrm A_{\mathrm{in}}^\pm \setminus\{\mathrm a^{-1}\}. \end{equation} \tag{5.4} $$
On the other hand, if (5.4) holds, then we can apply pruning moves to obtain
$$ \begin{equation*} \begin{aligned} \, \rho_{\mathcal L} &\sim \sum_{\mathrm a' \in \mathrm A^\pm_{\mathrm{in}}} \langle \mathrm t_{\mathrm b\mathrm a'}\rangle \rho_{\mathrm a'}+\sum_{\mathrm a \in \mathrm A^\pm\setminus\{\mathrm b\}} \langle \mathrm t_{\mathrm a\mathrm b^{-1}}\rangle \rho_{\mathrm a} \\ &= \langle \mathrm t_{\mathrm b\mathrm b}\rangle \rho_{\mathrm b}+\langle \mathrm t_{\mathrm b^{-1}\mathrm b^{-1}}\rangle \rho_{\mathrm b^{-1}}+\sum_{\mathrm a \in \mathrm A^{\pm} \setminus\{\mathrm b, \mathrm b^{-1}\}} \langle \mathrm t_{\mathrm b\mathrm a} \oplus \mathrm t_{\mathrm a\mathrm b^{-1}}\rangle\rho_{\mathrm a}. \end{aligned} \end{equation*} \notag $$
This shows that the following procedure works correctly.

Procedure SpecialTransferAndPrune. Input: a family of nonconstant related brotherhoods $(\mathcal B_1, \dots, \mathcal B_n)$ of depth $\ell = 2$ such that $\mathcal B := \mathcal B_1 \cup \dots \cup \mathcal B_n$ is normalized.

Output: a Boolean variable minimal and a list $\mathcal L$.

It is easy to see that the runtime of this procedure is $O(T(|\mathcal B|_{\mathrm{tot}}))$. As for (iv$'$), in the nonminimal case we have $\|\mathcal L\| \leqslant \|\mathrm{T}\| = \|\mathcal B\|$, since each nontrivial coefficient of $\mathrm{T}$ is copied at most once to $\mathcal L$ and, clearly, $|\mathcal L| \leqslant |\mathrm{A}^\pm| = 2n$. Hence (iv$'$) holds.

5.3. The algorithm in the group case

5.3.1. The main processing step

At this point we have extended the procedures NormalizeList, DetachBrotherhood, PruneList and TransferAndPrune from the monoid case to the group case. The latter procedure works only for lists of depth $\ell \geqslant 3$, but the case $\ell = 2$ can be dealt with by the additional procedure SpecialTransferAndPrune from § 5.2.3. Using these procedures we can now define a procedure MainProcessingStep almost literally as in the monoid case (see § 4.4), except that at step 4, (b), for $\ell = 2$ the procedure TransferAndPrune is replaced by the procedure SpecialTransferAndPrune. Then the same analysis as in the monoid case shows the following.

Lemma 5.7. For every nonabelian free group $F_n$, where $n\geqslant 2$, there exists an algorithm MainProcessingStep with the following properties:

Here the difference in (iv) (from Lemma 4.9) comes from the use of the procedure SpecialTransferAndPrune.

5.3.2. The final algorithm

We are now ready to establish the main result of the present article; as before, we set $T(N) := N$ if $\mathfrak N = \mathbb{Z}$ and $T(N) := N \log N$ if $\mathfrak N = \mathbb{Q}$.

Theorem 5.8. For every $n\geqslant 2$ and $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$ there exists an algorithm FindMinimalList that takes as input an encoded list $\mathcal L$ over $F_n$ with coefficients in $\mathfrak N$, terminates within time $O(T(|\mathcal L|_{\mathrm{tot}}))$ and gives as output a minimal encoded list $\mathcal M$ equivalent to $\mathcal L$.

This implies the following more precise version of Theorem 1.1 from § 1.

Corollary 5.9. For every $n \geqslant 2$ and $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$ there exist algorithms of time complexity $O(T(N))$ (where $N$ denotes the size of the input) to decide whether two counting functions (respectively, counting quasimorphisms) over $F_n$ with coefficients in $\mathfrak N$ (encoded as encoded lists) are equivalent (respectively, cohomologous).

The proof of Theorem 5.8 is analogous to the proof of Theorem 3.2 in the monoid case. In fact, if we use the group versions of the procedures NormalizeList, PruneList and MainProcessingStep instead of the monoid versions, then we can define the algorithm FindMinimalList literally as in the monoid case. The proof of the correctness of this algorithm is then as in the monoid case.

As for the runtime analysis of the algorithm in the group case, there is only one difference from the monoid case, which is caused by the difference between Lemma 4.9 and Lemma 5.7. Namely, at step 6 we apply the procedure MainProcessingStep to generate lists $\mathcal M_d, \dots, \mathcal M_t$. Here either $t = 1$ or $t \geqslant 2$ and a break happens before $\mathcal M_{t-1}$ is computed. As long as $i \in \{t,\dots, d\}$ satisfies $i \geqslant 2$, we have

$$ \begin{equation*} |\mathcal M_{i}|_{\mathrm{tot}} \leqslant \frac89|\mathcal M_{i+1}|_{\mathrm{tot}}+|\mathcal N_i|_{\mathrm{tot}}, \end{equation*} \notag $$
just as in the monoid case, but in view of the difference between Lemma 4.9 and Lemma 5.7, in the special case we only obtain the weaker estimate
$$ \begin{equation*} |\mathcal M_1|_{\mathrm{tot}} \leqslant 2n |\mathcal M_{2}|_{\mathrm{tot}}+|\mathcal N_1|_{\mathrm{tot}}. \end{equation*} \notag $$
Of course, this difference only occurs if $t=1$, that is, if no break occurs at step 6. Assume that this is the case. Then, as in the monoid case, we have the estimates
$$ \begin{equation*} |\mathcal M_2|_{\mathrm{tot}} \leqslant \sum_{i=2}^{d}|\mathcal M_i|_{\mathrm{tot}} \leqslant 9\sum_{i=2}^d|\mathcal N_{i}|_{\mathrm{tot}}, \end{equation*} \notag $$
and therefore
$$ \begin{equation*} \begin{aligned} \, \sum_{i=1}^d |\mathcal M_i|_{\mathrm{tot}} &\leqslant |\mathcal M_1|_{\mathrm{tot}}+9\sum_{i=2}^d|\mathcal N_{i}|_{\mathrm{tot}} \leqslant 2n |\mathcal M_{2}|_{\mathrm{tot}}+9\sum_{i=1}^d|\mathcal N_{i}|_{\mathrm{tot}} \\ &\leqslant (18n+9) \sum_{i=1}^d|\mathcal N_{i}|_{\mathrm{tot}}, \end{aligned} \end{equation*} \notag $$
that is, for all $i \in \{1,\dots, d\}$ we obtain
$$ \begin{equation*} |\mathcal M_i|_{\mathrm{tot}} \leqslant \sum_{j=1}^d |\mathcal M_j|_{\mathrm{tot}} \leqslant (18n+9) (|\mathcal N|_{\mathrm{tot}}-|\mathcal N_0|_{\mathrm{tot}}). \end{equation*} \notag $$

This inequality can now be used to replace (4.6) for the remaining runtime analysis. The rest of the proof of Theorem 5.8 is identical to the proof of Theorem 3.2, except for the slightly worse constant $(18n+9)$ instead of $9$. Of course, for this argument to be valid it is important that $n$ is fixed, so that $18n+9$ can be regarded as a constant.

§ 6. Encoding arithmetic operations

In this section we discuss the encodings for arithmetic operations which are used in the main body of this text. For each of the two cases $\mathfrak N \in \{\mathbb{Z}, \mathbb{Q}\}$ we choose an alphabet $\Sigma_{\mathfrak N}$, an encoding subset $ \operatorname{Num}_{\mathfrak N} \subset \Sigma_{\mathfrak N}^*$ of the set $\Sigma_{\mathfrak N}^*$ of words over $\Sigma_{\mathfrak N}$ and a surjective map

$$ \begin{equation*} \operatorname{Num}_{\mathfrak N} \to \mathfrak{N}, \qquad \mathrm x \mapsto \langle \mathrm x \rangle. \end{equation*} \notag $$
Given $\mathrm{x}, \mathrm{y} \in \operatorname{Num}_{\mathfrak N}$, we write $\mathrm{x} \equiv \mathrm{y}$ if $\langle \mathrm{x} \rangle = \langle \mathrm{y} \rangle$.

6.1. Encoding integer arithmetic

To encode the semiring $\mathbb N_0$ of nonnegative integers we choose an auxiliary alphabet $\Sigma_{\mathbb N_0}:=\{0,1\}$. Then we define $\operatorname{Num}_{\mathbb N_0}$ as the union of the singleton $\{0\}$ and the set of all finite words over $\Sigma_{\mathbb N_0}$ that start with $1$. We obtain a bijective encoding

$$ \begin{equation*} \operatorname{Num}_{\mathbb N_0} \to \mathbb N_0, \qquad \mathrm x \mapsto \langle \mathrm x \rangle, \end{equation*} \notag $$
by interpreting each word in $\operatorname{Num}_{\mathbb N_0}$ as a binary expansion of a natural number, so that, for example, $111$ represents $7$. To encode the ring $\mathbb{Z}$ of integers we use the alphabet $\Sigma_{\mathbb Z}:=\{0,1,+,-\}$ and set
$$ \begin{equation*} \operatorname{Num}_{\mathbb Z}:=\{\varepsilon\} \cup \{+\mathrm x \mid \mathrm x \in \operatorname{Num}_{\mathbb N_0}\} \cup \{-\mathrm x \mid \mathrm x \in \operatorname{Num}_{\mathbb N_0}\}, \end{equation*} \notag $$
where $\varepsilon$ denotes the empty word. Then we define an encoding
$$ \begin{equation*} \operatorname{Num}_{\mathbb Z} \to \mathbb Z, \qquad \mathrm x \mapsto \langle \mathrm x \rangle, \end{equation*} \notag $$
as follows. The empty word is interpreted as $0$, and for nonempty words we interpret the first letter as the sign and the rest of the word as the binary expansion of the absolute value; for example, $-111$ represents $-7$. This encoding is almost injective except that $\varepsilon \equiv +0 \equiv -0$. This nonuniqueness will sometimes be convenient for us. If $\mathrm{x} \in \operatorname{Num}_{\mathbb{Z}} \setminus \{\varepsilon\}$, then the amount of memory used to store $\mathrm{x}$, that is, its word length in the alphabet $\Sigma_\mathbb{Z}$, is given by
$$ \begin{equation*} |\mathrm x|_{\Sigma_\mathbb Z}:=\lceil\log_2(|\langle \mathrm x \rangle|+1)\rceil+1, \end{equation*} \notag $$
while $|\varepsilon|_{\Sigma_\mathbb{Z}} = 0$. Given $\mathrm{x}\in \operatorname{Num}_\mathbb{Z}$, we also write $\|\mathrm{x}\| := |\mathrm{x}|_{\Sigma_\mathbb{Z}}$ and refer to $\|\mathrm{x}\|$ as the size of $\mathrm{x}$. The following is a standard elementary exercise in the theory of computing.
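This encoding of $\operatorname{Num}_\mathbb{Z}$ and the size formula can be sketched as follows; the function names are ours, and `encode_int` returns just one of the words representing a given integer.

```python
import math

def decode_int(x: str) -> int:
    """Interpretation <x> of a word in Num_Z: the empty word is 0; otherwise
    the first letter is the sign and the rest the binary expansion of |<x>|."""
    if x == "":
        return 0
    return (-1 if x[0] == "-" else 1) * int(x[1:], 2)

def encode_int(v: int) -> str:
    """One of the words representing v (with an explicit sign)."""
    return ("-" if v < 0 else "+") + format(abs(v), "b")

# The word length in Sigma_Z agrees with ceil(log2(|<x>| + 1)) + 1:
for v in [1, 2, 3, 7, 8, 100]:
    assert len(encode_int(v)) == math.ceil(math.log2(abs(v) + 1)) + 1
```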

Lemma 6.1. Let $\mathrm{x}_1,\mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$. Then there exist elements $\mathrm{x}_3 =: \mathrm{x}_1 \oplus \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$ and $\mathrm{x}_4 =: \mathrm{x}_1 \ominus \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$ of sizes $\|\mathrm{x}_3\|, \|\mathrm{x}_4\|\leqslant \max\{\|\mathrm{x}_1\|,\|\mathrm{x}_2\|\}$ which can be computed in time $O(\max\{\|\mathrm{x}_1\|,\|\mathrm{x}_2\|\}+1)$ and satisfy $\langle \mathrm{x}_3 \rangle = \langle \mathrm{x}_1 \rangle + \langle \mathrm{x}_2 \rangle$ and $\langle \mathrm{x}_4 \rangle = \langle \mathrm{x}_1 \rangle - \langle \mathrm{x}_2 \rangle$.

Remark 6.2. Apart from adding and subtracting integers, we also need to be able to decide whether two words in $\operatorname{Num}_\mathbb{Z}$ represent the same integer. It turns out that equality can be checked in linear time. Indeed, for words of length $\geqslant 3$ one just has to compare words over $\Sigma_\mathbb{Z}$. For shorter words the only relation one has to take into account is the fact that $\mathrm{+0} \equiv \mathrm{-0} \equiv \varepsilon$, but this can also be checked in bounded time.
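A linear-time comparison along the lines of the remark can be sketched as follows; only words that actually lie in $\operatorname{Num}_\mathbb{Z}$ are considered, so the representations of zero are the only collisions.

```python
def equal_int(x: str, y: str) -> bool:
    """Decide whether two words of Num_Z represent the same integer:
    literal comparison, except that "", "+0" and "-0" all represent zero."""
    zero = {"", "+0", "-0"}
    if x in zero or y in zero:
        return x in zero and y in zero
    return x == y
```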

While addition, subtraction and comparison of large integers can be performed in linear time (compared to the size of the input), this is no longer the case for multiplication. In fact, the construction of efficient multiplication algorithms for large integers is an important problem in the theory of computation. In practice, multiplication is usually realized by the Toom-Cook algorithm (see [20] and [3]), which for medium-sized inputs is one of the fastest algorithms known. Given $\mathrm{x}_1, \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$, it computes the expression $\mathrm{x}_1 \odot \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$ representing $\langle \mathrm{x}_1\rangle \cdot \langle \mathrm{x}_2 \rangle$ in time $O(T_1(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|))$, where $T_1(N)=N^{\theta}$ with $\theta=\log 5/\log 3\approx 1.465$.

However, for theoretical purposes (or very large inputs) there are multiplication algorithms that are even faster: Schönhage and Strassen presented an algorithm which, given $\mathrm{x}_1, \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$, computes an expression $\mathrm{x}_1 \odot \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Z}$ representing $\langle \mathrm{x}_1\rangle \cdot \langle \mathrm{x}_2 \rangle$ in time $O(T_2(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|))$, where $T_2(N) := N \log(N) \log\log(N)$ and conjectured that the optimal time complexity of such an algorithm must be $O(T_3(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|))$, where $T_3(N) := N \log(N)$. An algorithm with this time complexity was provided very recently by Harvey and van der Hoeven (see [11]), but it is still unknown whether its time complexity is optimal. Moreover, no superlinear lower bound is currently known.

It is easy to check that all the above functions $T_1$, $T_2$ and $T_3$ satisfy conditions (T1)–(T3); they also satisfy (T4) as shown by Toom-Cook, Schönhage-Strassen and Harvey-van der Hoeven, respectively.

6.2. A complexity convexity lemma

In § 6.1 we have seen that the time complexity of integer multiplication is governed by a function $T$ which is both superadditive and asymptotically at least linear. These properties of $T$ have the following crucial consequence.

Lemma 6.4. Let $g$ be a real-valued function, and let $T$ be a real-valued function satisfying properties (T1) and (T2) from Convention 6.3. If $g(x) = O(T(x))$, then

$$ \begin{equation*} g(x_1)+\dots+g(x_k)=O(T(x_1+\dots+x_k)), \end{equation*} \notag $$
where the implied constants are independent of $k$.

More explicitly, the conclusion of the lemma says that

$$ \begin{equation*} \sup \biggl\{\frac{g(x_1)+\dots+g(x_k)}{T(x_1+\dots+x_k)} \biggm| k \in \mathbb N,\; x_1, \dots, x_k \in \mathbb N\biggr\}< \infty. \end{equation*} \notag $$

Proof of Lemma 6.4. Let $x := x_1 + \dots + x_k$. Since $x_j \in \mathbb N$, we have $x_j \geqslant 1$ for all $j=1, \dots, k$, and so $k \leqslant x$. Since $g(x) = O(T(x))$, there exist constants $C_0, C_1, C_2 \in \mathbb N$ such that
$$ \begin{equation*} g(x) \leqslant \begin{cases} C_1 T(x) & \text{if }x > C_0, \\ C_2 & \text{if }x \leqslant C_0. \end{cases} \end{equation*} \notag $$
Up to reordering the $x_j$ we may assume that $x_1, \dots, x_r > C_0$ and $x_{r+1}, \dots, x_k \leqslant C_0$. Then using the superadditivity of $T$ we obtain
$$ \begin{equation*} \begin{aligned} \, &g(x_1)+\dots+g(x_k) \leqslant C_1T(x_1)+\dots+C_1T(x_r)+C_2+\dots+C_2 \\ &\qquad= C_1(T(x_1)+\dots+T(x_r))+C_2(k-r) \leqslant C_1 T(x_1+\dots+x_r)+C_2 k \\ &\qquad\leqslant C_1T(x)+C_2 x. \end{aligned} \end{equation*} \notag $$
Since $T$ is asymptotically at least linear, the lemma follows.
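For instance, for $T_3(N) = N\log N$ the superadditivity used above can be checked numerically; this is only a quick sanity check, not part of the proof.

```python
import math
import random

def T(N: float) -> float:
    """The bound T_3(N) = N log N, with T(0) = 0."""
    return N * math.log(N) if N > 0 else 0.0

# Superadditivity: T(x_1) + ... + T(x_k) <= T(x_1 + ... + x_k).
random.seed(0)
for _ in range(1000):
    xs = [random.randint(1, 50) for _ in range(random.randint(1, 20))]
    assert sum(T(x) for x in xs) <= T(sum(xs)) + 1e-9
```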

The relevance of this property for our runtime analysis was explained in Remark 4.1.

6.3. Encoding arithmetic of rational numbers

Now we encode the field $\mathbb{Q}$ of rational numbers. We will store such numbers as mixed fractions, and it is crucial for many of our algorithms to allow unreduced mixed fractions to appear in computations. In particular, this is necessary in order to implement addition of rational numbers efficiently (since reducing fractions takes even more time than multiplication). Working with unreduced mixed fractions has a side effect: infinitely many different words represent the same rational number, but this does not cause any problems.

To define our encoding we set $\Sigma_{\mathbb Q} := \{0,1,+,-,/\}$ and define

$$ \begin{equation*} \operatorname{Num}_{\mathbb Q}:=\{\varepsilon\} \cup \{\mathrm s \mathrm K/\mathrm m/\mathrm n\mid \mathrm s \in \{+,-\},\; \mathrm K,\mathrm m,\mathrm n \in \operatorname{Num}_{\mathbb N_0}, \;\langle \mathrm m \rangle < \langle \mathrm n \rangle \}. \end{equation*} \notag $$
Then we define an encoding
$$ \begin{equation*} \operatorname{Num}_{\mathbb Q} \to \mathbb Q, \qquad \mathrm x \mapsto \langle \mathrm x \rangle, \end{equation*} \notag $$
as follows. The empty word is interpreted as $0$, and if $\mathrm K,\mathrm m,\mathrm n \in \operatorname{Num}_{\mathbb N_0}$, then the sign $\mathrm s$ applies to the whole mixed fraction, so that
$$ \begin{equation*} \langle \mathrm s \mathrm K/\mathrm m/\mathrm n \rangle:=\pm\biggl(\langle \mathrm K \rangle+\frac{\langle \mathrm m\rangle}{\langle \mathrm n \rangle}\biggr), \end{equation*} \notag $$
where the sign on the right-hand side is determined by $\mathrm s$.
Since we do not assume $\mathrm{m}$ and $\mathrm{n}$ to be relatively prime, the map $\mathrm{x} \mapsto \langle \mathrm{x} \rangle$ is not injective. For example, both $-11/10/101$ and $-11/100/1010$ represent $-3\frac{2}{5} = -3\frac{4}{10}$. If $ \mathrm{s} \mathrm{K}/\mathrm{m}/\mathrm{n} \in \operatorname{Num}_{\mathbb{Q}} \setminus \{\varepsilon\}$, then the amount of memory needed to store $\mathrm{s}\mathrm{K}/\mathrm{m}/\mathrm{n}$ is
$$ \begin{equation*} | \mathrm s \mathrm{K}/\mathrm{m}/\mathrm{n}|_{\Sigma_\mathbb Q}=\lceil\log_2(|\langle \mathrm{K} \rangle|+1)\rceil+\lceil\log_2(|\langle \mathrm{m} \rangle|+1)\rceil+\lceil\log_2(|\langle \mathrm{n} \rangle|+1)\rceil+3, \end{equation*} \notag $$
while $|\varepsilon|_{\Sigma_\mathbb{Q}} = 0$. However, it is computationally convenient to work with a slightly different notion of size: We define the size of an element of $\operatorname{Num}_{\mathbb{Q}}$ by $\|\varepsilon\| := 0$ and
$$ \begin{equation} \| \mathrm s \mathrm{K}/\mathrm{m}/\mathrm{n}\|:=\lceil\log_2(|\langle \mathrm{K} \rangle|+1)\rceil+2\lceil\log_2(|\langle \mathrm{n} \rangle|+1)\rceil+3. \end{equation} \tag{6.1} $$
Then, since $0\leqslant m < n$, we have
$$ \begin{equation} |\mathrm x|_{\Sigma_\mathbb Q}\leqslant\|\mathrm x\|<2\,|\mathrm x|_{\Sigma_\mathbb Q}. \end{equation} \tag{6.2} $$
Hence we do not lose anything by working with the size $\|\cdot\|$. The main advantage of our notion of size is that it behaves better with respect to arithmetic operations, as the following lemma shows.
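This encoding of $\operatorname{Num}_\mathbb{Q}$ and the size (6.1) can be sketched as follows; the function names are ours, and the sign applies to the whole mixed fraction, as in the example above.

```python
import math
from fractions import Fraction

def decode_rat(x: str) -> Fraction:
    """Interpretation of a word sK/m/n from Num_Q as +-(<K> + <m>/<n>)."""
    if x == "":
        return Fraction(0)
    sign = -1 if x[0] == "-" else 1
    K, m, n = (int(w, 2) for w in x[1:].split("/"))
    return sign * (K + Fraction(m, n))  # Fraction keeps the exact value

def size_rat(x: str) -> int:
    """The size (6.1): the denominator length is counted twice,
    instead of numerator length + denominator length."""
    if x == "":
        return 0
    K, _, n = (int(w, 2) for w in x[1:].split("/"))
    return math.ceil(math.log2(K + 1)) + 2 * math.ceil(math.log2(n + 1)) + 3
```

Both words from the example decode to the same rational number, even though one of them is unreduced.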

Lemma 6.5. Let $T$ be a function as in Convention 6.3. Then for all $\mathrm{x}_1,\mathrm{x}_2 \in \operatorname{Num}_\mathbb{Q}$ there exist elements $\mathrm{x}_3 =: \mathrm{x}_1 \oplus \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Q}$ and $\mathrm{x}_4 =: \mathrm{x}_1 \ominus \mathrm{x}_2 \in \operatorname{Num}_\mathbb{Q}$ of size $\|\mathrm{x}_3\|, \|\mathrm{x}_4\| \leqslant \|\mathrm{x}_1\| + \|\mathrm{x}_2\|$ which can be computed in time $O(T(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|))$ and satisfy $\langle \mathrm{x}_3 \rangle = \langle \mathrm{x}_1 \rangle + \langle \mathrm{x}_2 \rangle$ and $\langle \mathrm{x}_4 \rangle = \langle \mathrm{x}_1 \rangle - \langle \mathrm{x}_2 \rangle$.

Proof. The case of subtraction follows immediately from the case of addition, so we focus on the latter. The cases when $\mathrm{x}_1$ or $\mathrm{x}_2$ is empty are obvious.

Thus assume that $\mathrm{x}_1 = \mathrm{s_1K_1/m_1/n_1}$ and $\mathrm{x}_2 = \mathrm{s_2K_2/m_2/n_2}$, and let $K_1$, $m_1$, $n_1$, $K_2$, $m_2$ and $n_2$ denote the respective interpretations of $\mathrm{K}_1$, $\mathrm{m}_1$, $\mathrm{n}_1$, $\mathrm{K}_2$, $\mathrm{m}_2$ and $\mathrm{n}_2$.

First consider the case when both signs $\mathrm{s_1}$ and $\mathrm{s_2}$ are positive. The sum of the fractional parts of $\mathrm{x}_1$ and $\mathrm{x}_2$ is

$$ \begin{equation*} \frac{m_1}{n_1}+\frac{m_2}{n_2}=\frac{p}{q},\qquad \text{where}\quad p:=m_1n_2+m_2n_1\quad\text{and}\quad q:= n_1n_2. \end{equation*} \notag $$
Thus we define
$$ \begin{equation*} \mathrm p:=(\mathrm{m}_1 \otimes \mathrm{n}_2) \oplus (\mathrm{m}_2 \otimes \mathrm{n}_1) \quad\text{and}\quad \mathrm{q}:=\mathrm{n}_1 \otimes \mathrm{n}_2. \end{equation*} \notag $$
It is clear from these formulae that both $\mathrm{p}$ and $\mathrm{q}$ have size $O(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|)$, and in view of properties (T1) and (T3) of $T$ we can compute them in time $O(T(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|))$ using three multiplication routines and one addition routine.

Once $\mathrm{p}$ and $\mathrm{q}$ have been computed, we can compute the integer part of the fraction $p/q$ in linear time $O(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|)$. Indeed, since $0\leqslant m_1<n_1$ and $0\leqslant m_2<n_2$, we have $m_1n_2<n_1n_2$ and $m_2n_1<n_1n_2$, hence $p<2q$. Then $K'=\lfloor p/q \rfloor$ can be equal to $0$ or $1$, so to find $K'$ we need only to check the condition $p<q$. If it is true, then $K'=0$, otherwise $K'=1$, and this can be checked in time $O(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|)$. If $K' = 1$, then we can also compute $\mathrm{p}'=\mathrm{p}\ominus \mathrm{q}$ in linear time. Note that in this case $p/q=K'\frac{p'}{q}$, where $p' := \langle \mathrm{p}' \rangle$.

Finally, set $\mathrm{K} := \mathrm{K_1} \oplus \mathrm{K}_2$ if $K' = 0$ and $\mathrm{K} := \mathrm{K}_1\oplus \mathrm{K}_2 \oplus (\mathrm{+1})$ if $K' = 1$. Again, this can be computed in linear time. If $K' = 0$ then we set $\mathrm{x}_3 := \mathrm{+K}/\mathrm{p}/{\mathrm{q}}$. If ${K' = 1}$, then we set $\mathrm{x}_3 := \mathrm{+K}/\mathrm{p'}/{\mathrm{q}}$. In either case $\mathrm{x}_3$ represents $ \langle \mathrm{x}_1 \rangle + \langle \mathrm{x}_2 \rangle$ and has been computed in time $O(T(\|\mathrm{x}_1\| + \|\mathrm{x}_2\|))$.

It remains to prove the inequality $\|\mathrm{x}_3\|\leqslant \|\mathrm{x}_1\|+\|\mathrm{x}_2\|$. For any $n_1,n_2\geqslant 1$ we have

$$ \begin{equation*} \lceil\log_2(n_1n_2+1)\rceil \leqslant \log_2((n_1+1)(n_2+1))+1 \leqslant \log_2(n_1+1)+\log_2(n_2+1)+1. \end{equation*} \notag $$
Hence the denominator $n_1n_2$ can be stored as a word of length at most
$$ \begin{equation} \lceil\log_2(n_1+1)\rceil+\lceil\log_2(n_2+1)\rceil+1. \end{equation} \tag{6.3} $$
Since the numerator of the fractional part (which is $p$ or $p'$) is strictly smaller than the denominator $n_1n_2$, it can be stored as a word of at most the same length.

Similarly, for any $K_1,K_2 \geqslant 1$ we have

$$ \begin{equation*} \lceil\log_2(K_1+K_2+2)\rceil \leqslant \log_2((K_1+1)(K_2+1))+1 \leqslant \log_2(K_1+1)+\log_2(K_2+1)+1. \end{equation*} \notag $$
When $K_1 = 0$ or $K_2 = 0$, the corresponding inequality is also true and is easily verified directly. This ensures that $\mathrm{K}$ can be stored using $\lceil\log_2(K_1+1)\rceil+\lceil\log_2(K_2+1)\rceil+1$ symbols. Adding to this one symbol for the sign, two symbols for the separators and two words of length (6.3), which we use to store the fractional part, we see that $\mathrm{x}_3$ contains at most $N$ symbols altogether, where
$$ \begin{equation*} \begin{aligned} \, N&=\lceil\log_2(K_1+1)\rceil+\lceil\log_2(K_2+1)\rceil +2(\lceil\log_2(n_1+1)\rceil+\lceil\log_2(n_2+1)\rceil)+6 \\ &= \|\mathrm x_1\|+\|\mathrm x_2\|. \end{aligned} \end{equation*} \notag $$
This finishes the proof in the case $\mathrm{s}_1=\mathrm{s}_2=+$.

If both signs $s_1$ and $s_2$ are negative, then we make the same computations, but set instead $\mathrm{x}_3 := \mathrm{-K}/\mathrm{p}/{\mathrm{q}}$ (or $\mathrm{x}_3 := \mathrm{-K}/\mathrm{p'}/{\mathrm{q}}$ if $K'=1$). Clearly, this does not influence the result.

If the signs $s_1$ and $s_2$ are different, then the computations are similar, except that some sums are replaced by differences and we need a few more comparisons to produce the integer part and the sign. The time complexity is still $O(T(\|\mathrm{x}_1\| + \|\mathrm{x}_2\|))$, and the size estimates are exactly the same, since the denominator of the fractional part is equal to $n_1n_2$ again and the absolute value of the integer part does not exceed $|K_1|+|K_2|+1$.

Lemma 6.5 is proved.
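The same-sign case of the proof can be sketched as follows. This is a sketch over integer tuples $(s, K, m, n)$ rather than over binary words; in particular, following the proof, no gcd reduction is ever performed.

```python
from fractions import Fraction

def add_same_sign(x1, x2):
    """Add mixed fractions (s, K1, m1, n1) and (s, K2, m2, n2) with equal
    signs s in {+1, -1}, following the steps of the proof of Lemma 6.5."""
    s, K1, m1, n1 = x1
    s2, K2, m2, n2 = x2
    assert s == s2 and 0 <= m1 < n1 and 0 <= m2 < n2
    p = m1 * n2 + m2 * n1        # numerator of the sum of fractional parts
    q = n1 * n2                  # denominator, deliberately left unreduced
    carry = 0 if p < q else 1    # p < 2q, so the integer carry K' is 0 or 1
    if carry:
        p -= q                   # p' = p - q
    return (s, K1 + K2 + carry, p, q)

def value(x):
    """Exact rational value of a tuple, for checking correctness."""
    s, K, m, n = x
    return s * (K + Fraction(m, n))
```

For example, adding $1\frac34$ and $2\frac34$ yields the unreduced mixed fraction $4\frac{8}{16}$.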

Concerning the comparison of rational numbers we have the following statement.

Lemma 6.6. Let $\mathrm{x}_1,\mathrm{x}_2 \in \operatorname{Num}_\mathbb{Q}$. Then we can decide whether $\mathrm{x}_1 \equiv \mathrm{x}_2$ or not in time $O(T(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|))$.

Proof. Using the same notation as in the proof of Lemma 6.5, the lemma follows from the straightforward check of one of the equalities $(K_1n_1+m_1)n_2=(K_2n_2+m_2)n_1$ or $(K_1n_1+m_1)n_2=-(K_2n_2+m_2)n_1$, depending on the signs $\mathrm s_1$ and $\mathrm s_2$; this requires only a bounded number of multiplications, additions and comparisons of integers of size $O(\|\mathrm{x}_1\|+\|\mathrm{x}_2\|)$. The lemma is proved.
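In tuple form the comparison amounts to one cross multiplication; as before, this is a sketch over tuples $(s, K, m, n)$ of integers, and no reduction of fractions is needed.

```python
def equal_rat(x1, x2):
    """Decide <x1> == <x2> for mixed fractions (s, K, m, n), s in {+1, -1},
    by comparing the signed cross products s * (K*n + m) * n'."""
    s1, K1, m1, n1 = x1
    s2, K2, m2, n2 = x2
    return s1 * (K1 * n1 + m1) * n2 == s2 * (K2 * n2 + m2) * n1
```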

Bibliography

1. R. Brooks, “Some remarks on bounded cohomology”, Riemann surfaces and related topics: Proceedings of the 1978 Stony Brook conference (State Univ. New York, Stony Brook, NY 1978), Ann. of Math. Stud., 97, Princeton Univ. Press, Princeton, NJ, 1981, 53–63
2. D. Calegari, scl, MSJ Mem., 20, Math. Soc. Japan, Tokyo, 2009, xii+209 pp.
3. S. Cook, On the minimum computation time of functions, Ph.D. thesis, Harvard Univ., Cambridge, MA, 1966
4. Th. H. Cormen, Ch. E. Leiserson, R. L. Rivest and C. Stein, Introduction to algorithms, 2nd ed., MIT Press, Cambridge, MA; McGraw-Hill Book Co., Boston, MA, 2001, xxii+1180 pp.
5. R. Frigerio, Bounded cohomology of discrete groups, Math. Surveys Monogr., 227, Amer. Math. Soc., Providence, RI, 2017, xvi+193 pp.
6. R. I. Grigorchuk, “Some results on bounded cohomology”, Combinatorial and geometric group theory (Edinburgh 1993), London Math. Soc. Lecture Notes Ser., 204, Cambridge Univ. Press, Cambridge, 1995, 111–163
7. T. Hartnick and P. Schweitzer, “On quasioutomorphism groups of free groups and their transitivity properties”, J. Algebra, 450 (2016), 242–281
8. T. Hartnick and A. Sisto, “Bounded cohomology and virtually free hyperbolically embedded subgroups”, Groups Geom. Dyn., 13:2 (2019), 677–694
9. T. Hartnick and A. Talambutsa, “Relations between counting functions on free groups and free monoids”, Groups Geom. Dyn., 12:4 (2018), 1485–1521
10. A. Hase, Dynamics of $\operatorname{Out}(F_n)$ on the second bounded cohomology of $F_n$, arXiv: 1805.00366
11. D. Harvey and J. van der Hoeven, “Integer multiplication in time $O(n\log n)$”, Ann. of Math. (2), 193:2 (2021), 563–617
12. J. E. Hopcroft, R. Motwani and J. D. Ullman, Introduction to automata theory, languages, and computation, 3rd ed., Pearson Education, Inc., Boston, MA, 2006, xvii+535 pp.
13. P. Kiyashko, Bases for counting functions on free monoids and groups, arXiv: 2306.15520
14. I. Krasikov and Y. Roditty, “On a reconstruction problem for sequences”, J. Combin. Theory Ser. A, 77:2 (1997), 344–348
15. V. I. Levenshtein, “Efficient reconstruction of sequences from their subsequences and supersequences”, J. Combin. Theory Ser. A, 93:2 (2001), 310–332
16. M. Lothaire, Combinatorics on words, Cambridge Math. Lib., 2nd ed., Cambridge Univ. Press, Cambridge, 1997, xviii+238 pp.
17. D. Osin, “Acylindrically hyperbolic groups”, Trans. Amer. Math. Soc., 368:2 (2016), 851–888
18. M. V. Sapir, Combinatorial algebra: syntax and semantics, with contributions by V. S. Guba and M. V. Volkov, Springer Monogr. Math., Springer, Cham, 2014, xvi+355 pp.
19. A. Schönhage and V. Strassen, “Schnelle Multiplikation großer Zahlen”, Computing (Arch. Elektron. Rechnen), 7 (1971), 281–292
20. A. L. Toom, “The complexity of a scheme of functional elements realizing the multiplication of integers”, Soviet Math. Dokl., 4 (1963), 714–716

Citation: A. L. Talambutsa, T. Hartnick, “Efficient computations with counting functions on free groups and free monoids”, Mat. Sb., 214:10 (2023), 116–162; Sb. Math., 214:10 (2023), 1458–1499