Izvestiya: Mathematics
Izvestiya: Mathematics, 2024, Volume 88, Issue 2, Pages 236–269
DOI: https://doi.org/10.4213/im9480e
(Mi im9480)
 

Algorithmic complexity for theories of commutative Kleene algebras

S. L. Kuznetsov

Steklov Mathematical Institute of Russian Academy of Sciences, Moscow
Abstract: Kleene algebras are structures with addition, multiplication and constants $0$ and $1$, which form an idempotent semiring, and the Kleene iteration operation. In the particular case of $*$-continuous Kleene algebras, Kleene iteration is defined, in an infinitary way, as the supremum of powers of an element. We obtain results on algorithmic complexity for Horn theories (semantic entailment from finite sets of hypotheses) of commutative $*$-continuous Kleene algebras. Namely, $\Pi_1^1$-completeness for the Horn theory and $\Pi^0_2$-completeness for its fragment, where iteration cannot be used in hypotheses, are proved. These results are commutative counterparts of the corresponding theorems of D. Kozen (2002) for the general (non-commutative) case. Several accompanying results are also obtained.
Keywords: Kleene algebras, algorithmic complexity, infinitary logic.
This work was supported by the Russian Science Foundation under grant no. 21-11-00318, https://rscf.ru/en/project/21-11-00318/.
Received: 05.04.2023
Revised: 19.07.2023
Document Type: Article
UDC: 510.6
Language: English
Original paper language: Russian

§ 1. Introduction

1.1. Kleene algebras

The notion of Kleene algebra goes back to Kleene [1]. The signature of Kleene algebras consists of two binary operations, addition and multiplication, the unary operation of iteration (“Kleene star”), and constants $0$ and $1$. Addition in Kleene algebras (see the definition below) is idempotent and induces a partial order: $a \leqslant b$ means $a+b=b$.

Definition 1. A Kleene algebra is an algebraic structure $(\mathcal{A}; +, {\cdot}\,, {}^*, 0, 1)$ such that:

In what follows, we shall sometimes call Kleene algebras just “algebras”.

A natural example of a Kleene algebra is the algebra of formal languages over an alphabet $\Sigma$. The domain of this algebra is the powerset of the set of words over $\Sigma$, that is, $\mathcal{A}=\mathcal{P}(\Sigma^*)$. Multiplication is defined as elementwise concatenation of words

$$ \begin{equation*} A\cdot B=\{ uv \mid u \in A,\, v \in B \}, \end{equation*} \notag $$
addition is the union of languages, and the Kleene star is defined by
$$ \begin{equation*} A^*=\bigcup_{n \geqslant 0} A^n=\{ u_1 \cdots u_n \mid n \geqslant 0, u_1,\dots, u_n \in A\}. \end{equation*} \notag $$
Constants $0$ and $1$ correspond to the empty language, $\varnothing$, and the singleton of the empty word, $\{\varepsilon\}$, respectively.
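As a sanity check of these definitions, here is a minimal Python sketch (our illustration, not part of the paper). Since the true Kleene star is an infinite union, the function `star` truncates it at a word-length bound `max_len`; the names are ours:

```python
# Sketch of the Kleene algebra of formal languages (illustration only;
# the true Kleene star is infinite, so we truncate it at a word-length bound).

def concat(A, B):
    """Multiplication: elementwise concatenation A . B = { uv | u in A, v in B }."""
    return {u + v for u in A for v in B}

def union(A, B):
    """Addition in the algebra of languages is set union (idempotent)."""
    return A | B

def star(A, max_len):
    """Truncated Kleene star: all concatenations of words of A, up to max_len letters."""
    result = {""}          # A^0 = {empty word}, i.e. the constant 1
    frontier = {""}
    while frontier:
        frontier = {w for w in concat(frontier, A) if len(w) <= max_len} - result
        result |= frontier
    return result

ZERO = set()               # the constant 0: the empty language
ONE = {""}                 # the constant 1: the singleton of the empty word

print(sorted(star({"ab"}, 6)))   # ['', 'ab', 'abab', 'ababab']
```

Note that `star(ZERO, n)` returns `ONE`, matching $\varnothing^* = \{\varepsilon\}$ in the algebra of languages.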

Another natural example of a Kleene algebra is the algebra of binary relations over a set $W$, whose domain is $\mathcal{A}=\mathcal{P}(W \times W)$. In this algebra, multiplication is composition of relations, sum is union, and Kleene star is reflexive-transitive closure; $0=\varnothing$, $1=\{ (x,x) \mid x \in W \}$. Elements of $W$ are considered as states of a system, and binary relations model non-deterministic actions (operations) on this set of states, for example, transitions in a finite automaton. This understanding is close to Kleene’s original motivation [1].
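These relational operations can likewise be sketched in a few lines of Python (our illustration, assuming a finite carrier `W`; the names `compose` and `rt_closure` are ours):

```python
# Sketch of the Kleene algebra of binary relations on a finite set W (illustration only).

def compose(R, S):
    """Multiplication: relational composition R ; S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def rt_closure(R, W):
    """Kleene star: reflexive-transitive closure of R on the carrier W."""
    closure = {(x, x) for x in W}      # the constant 1: the identity relation
    frontier = closure
    while True:
        new = compose(frontier, R) - closure
        if not new:
            return closure
        closure |= new
        frontier = new

W = {0, 1, 2, 3}
R = {(0, 1), (1, 2)}                   # a non-deterministic "action" on the states W
print(sorted(rt_closure(R, W)))
# [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2), (3, 3)]
```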

Both the algebra of languages and the algebra of relations belong to a special class of Kleene algebras known as $*$-continuous Kleene algebras.

Definition 2. A Kleene algebra is called $*$-continuous if, for any $a,b,c \in \mathcal{A}$,

$$ \begin{equation*} b \cdot a^* \cdot c=\sup\nolimits_{\leqslant} \{ b \cdot a^n \cdot c \mid n \geqslant 0 \} \end{equation*} \notag $$
(where $a^0=1$, $a^{n+1}=a \cdot a^n$).

The new condition on iteration is stronger than the corresponding condition from Definition 1. Moreover, it is strictly stronger: there exist Kleene algebras which are not $*$-continuous. The latter can be proved indirectly, using the difference in complexity of the logical theories connected with these two classes of algebras (see below), or by the compactness theorem. An explicit example of a non-$*$-continuous Kleene algebra was constructed by Kozen (see § 3 in [2]).

The present work is devoted to the commutative case.

Definition 3. A Kleene algebra is called commutative if its multiplication operation is commutative: $a \cdot b=b \cdot a$ for all $a, b \in \mathcal{A}$.

Pratt [3] motivates considering commutative Kleene algebras in the following way. If one understands elements of a Kleene algebra as actions, then the commutative case corresponds to parallel composition of actions (as opposed to sequential composition for the non-commutative case).

A commutative variant for algebras of formal languages appears in Red’ko [4], where this variant is called the algebra of commutative events. Usual (non-commutative) formal languages may be considered as subsets of the free monoid generated by the alphabet $\Sigma$. Analogously, commutative “languages” (events, in the terminology of Glushkov and Red’ko) are subsets of the free commutative monoid generated by the set $\Sigma$. The analog of a word in the commutative case is a finite multiset of elements of $\Sigma$. If the alphabet is finite: $\Sigma=\{a_1, \dots, a_n \}$, commutative “words” can be equivalently represented as Parikh vectors [5]: $w \mapsto (|w|_{a_1}, \dots, |w|_{a_n}) \in \mathbb{N}^n$, where $|w|_{a_i}$ is the multiplicity (number of copies) of letter $a_i$ in the multiset $w$.
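The Parikh-vector representation is easy to compute; the following Python snippet (our illustration) maps a word to its vector of letter multiplicities over a fixed finite alphabet:

```python
# Parikh vector of a word over a finite alphabet (illustration of the commutative case).

def parikh(word, alphabet):
    """Map a word to its Parikh vector: the tuple of letter multiplicities.
    Two words denote the same commutative "word" (multiset) iff their vectors agree."""
    return tuple(word.count(a) for a in alphabet)

SIGMA = ("a", "b", "c")
print(parikh("abacba", SIGMA))   # (3, 2, 1)

# Commutativity: the order of letters is irrelevant, only multiplicities matter.
assert parikh("abba", SIGMA) == parikh("baab", SIGMA)
```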

1.2. Theories of classes of Kleene algebras

For a given class $\mathcal{K}$ of algebraic structures, for example, the class of commutative $*$-continuous Kleene algebras, one defines logical theories of this class. These theories consist of all formulae which are generally valid on the class $\mathcal{K}$. Such theories, for one and the same class, may differ depending on the expressive capacity of the formula language in use. Let us recall the basic definitions.

Definition 4. We fix a countable set $\mathrm{Var}=\{ x_1, x_2, \dots \}$ of individual variables. The set of terms in the language of Kleene algebras, denoted by $\mathrm{Tm}$, is the inclusion-minimal set satisfying the following conditions:

In what follows, we shall omit parentheses in the notation of terms, with the convention that iteration binds most tightly, multiplication binds tighter than addition, and both addition and multiplication associate to the left.

Definition 5. An atomic (in other terms, equational) formula is an expression of the form $A=B$, where $A, B \in \mathrm{Tm}$.

Note that an “inequational” formula of the form $A \leqslant B$ is understood as an alternative notation for $A+B=B$. Thus, it is a particular case of an equational formula.
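As a quick check of this convention in the language model (our illustration): there the induced order $a \leqslant b$, that is, $a+b=b$, amounts exactly to set inclusion:

```python
# In the algebra of languages, a <= b (defined as a + b = b) is set inclusion.
A = {"a", "ab"}
B = {"a", "ab", "b"}
assert (A | B == B) == A.issubset(B)   # a + b = b  iff  A is a subset of B
print(A | B == B)                      # True: A <= B holds in this model
```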

Definition 6. Let $\mathcal{H}=\{ A_1=B_1, \dots, A_n=B_n \}$ be a finite set of atomic formulae and let $C= D$ be one more atomic formula. An expression of the form $\mathcal{H} \Rightarrow C=D$ is called a Horn formula. Atomic formulae from the set $\mathcal{H}$ are called hypotheses, and $C=D$ is the conclusion of the given Horn formula.

An atomic formula may be considered as a particular case of a Horn formula (with $\mathcal{H}= \varnothing$).

One may also consider a much more expressive language of first-order formulae built from atomic ones in a standard way using Boolean operations and quantifiers over individual variables. In particular, a Horn formula is written in the first-order language in the following way: $(A_1=B_1 \mathop{\&} \cdots \mathop{\&} A_n=B_n) \Rightarrow C=D$. However, as we shall see below, already in the language of Horn formulae the theories of classes of algebras which we are interested in achieve the maximal possible levels of algorithmic complexity. Therefore, it is meaningless, from the point of view of complexity, to consider more expressive languages.

Having fixed a Kleene algebra with domain $\mathcal{A}$ and a valuation function for variables $\alpha \colon \mathrm{Var} \to \mathcal{A}$, one can define valuation of terms (that is, extend the valuation function to $\overline{\alpha} \colon \mathrm{Tm} \to \mathcal{A}$) in a standard way. Furthermore, a standard definition gives the notion of truth of atomic and Horn formulae. A formula (atomic or Horn) is generally valid on a given class of algebras $\mathcal{K}$, if it is true under any valuation on any algebra from the given class.

Definition 7. The equational theory of a class of algebras $\mathcal{K}$ is the set of all atomic formulae which are generally valid on the class $\mathcal{K}$. The Horn theory is the set of all Horn formulae with the same property.

Weaker Horn theories, with restrictions on the sets of hypotheses, are also considered.

1.3. Complexity of theories in the non-commutative and in the commutative case

The work of Kozen [6] is devoted to the algorithmic complexity of theories of Kleene algebras in the non-commutative case: $\mathcal{K}$ is either the class of all Kleene algebras or the class of all $*$-continuous Kleene algebras. The complexity results of interest to us (both Kozen's own and previously known ones), as presented in that work, can be gathered into the following table:

| Theory… | …of all Kleene algebras | …of all $*$-continuous Kleene algebras |
|---|---|---|
| equational | decidable, pspace-complete | decidable, pspace-complete |
| Horn with hypotheses of the form $A \cdot B=B \cdot A$ | open question$^1$ | undecidable, $\Pi^0_1$-complete |
| Horn with iteration-free hypotheses | undecidable, $\Sigma^0_1$-complete | undecidable, $\Pi^0_2$-complete |
| Horn (unrestricted) | undecidable, $\Sigma^0_1$-complete | undecidable, $\Pi^1_1$-complete |

In this paper, we present complexity results for the commutative case. It turns out that they are similar to the non-commutative ones:

| Theory… | …of all commutative Kleene algebras | …of all commutative $*$-continuous Kleene algebras |
|---|---|---|
| equational | decidable | decidable |
| Horn with iteration-free hypotheses | undecidable, $\Sigma^0_1$-complete | undecidable, $\Pi^0_2$-complete |
| Horn (unrestricted) | undecidable, $\Sigma^0_1$-complete | undecidable, $\Pi^1_1$-complete |

The interesting complexity results here are those for Horn theories in the $*$-continuous case, that is, $\Pi^0_2$- and $\Pi^1_1$-completeness. They are established in § 7 and § 8, respectively. The other results easily reduce to previously known ones (see Propositions 3 and 4 below) and are included in order to make this article self-contained. Upper bounds (see § 2) are also easily obtained from known results for the non-commutative case.

The result of $\Pi^0_2$-completeness for Horn theories with iteration-free hypotheses was presented at the Logica 2021 conference (Hejnice, Czech Republic, September 27–October 1, 2021) and published in its proceedings [8]. The $\Pi_1^1$-completeness result for the unrestricted Horn theory is new and has not been published before. The ideas of the proof generally go back to Kozen's constructions for the non-commutative case [6]. A significant difference, however, is the use of counter (Minsky) machines instead of Turing machines, and of syntactic methods instead of semantic ones. Moreover, unlike Kozen's work, we prove that there exist concrete fixed sets of hypotheses $\mathcal{H}$ for which the $\Pi^0_2$-hardness and $\Pi^1_1$-hardness results hold (Theorems 7 and 11 below). This strengthening of the complexity results is achieved via a universal computable function. Such strengthened lower bounds also apply to the non-commutative situation (Corollaries 1 and 4 below).

§ 2. Upper complexity bounds

2.1. Embedding commutative theories into non-commutative ones

Let us show that commutative Horn theories for $*$-continuous Kleene algebras are actually a particular case of non-commutative ones. Due to this observation, upper complexity bounds transfer to the commutative case from the non-commutative one.

Let $V \subset \mathrm{Var}$ be a finite set of variables. Define the following set of hypotheses:

$$ \begin{equation*} \mathcal{C}_V=\{ x \cdot y=y \cdot x \mid x,y \in V \}. \end{equation*} \notag $$

Proposition 1. Let $V$ be a finite set of variables which includes all variables occurring in atomic formulae from the set $\mathcal{H} \cup \{ C= D \}$. The Horn formula $\mathcal{H} \Rightarrow C=D$ is generally valid on the class of all commutative $*$-continuous Kleene algebras if and only if the Horn formula $\mathcal{H} \cup \mathcal{C}_V \Rightarrow C=D$ is generally valid on the class of all $*$-continuous Kleene algebras.

Proof. The “if” direction is nearly obvious. Let the formula $\mathcal{H} \cup \mathcal{C}_V \Rightarrow C=D$ be generally valid on the class of all $*$-continuous Kleene algebras. Take the Horn formula $\mathcal{H} \Rightarrow C=D$ and consider an arbitrary valuation on a commutative $*$-continuous Kleene algebra which makes formulae from $\mathcal{H}$ true. Since the algebra is commutative, formulae from $\mathcal{C}_V$ will also be true, and the general validity of $\mathcal{H} \cup\, \mathcal{C}_V \Rightarrow C=D$ yields the fact that $C=D$ is also true. Therefore, the Horn formula $\mathcal{H} \Rightarrow C=D$ is generally valid on the class of commutative $*$-continuous Kleene algebras.

In order to prove the “only if” statement, suppose the contrary. Let the Horn formula $\mathcal{H} \cup \mathcal{C}_V \Rightarrow C=D$ be falsified under some interpretation $\alpha$ on a $*$-continuous Kleene algebra $\mathcal{A}$. By $\mathrm{Tm}_V$ we denote the set of terms which use only variables from the set $V$. Define a subalgebra $\mathcal{A}|_V$ of $\mathcal{A}$ whose domain is $\{ \overline{\alpha}(B) \mid B \in \mathrm{Tm}_V \}$. It is easy to see that this set is closed under the Kleene algebra operations. Moreover, the value $\overline{\alpha}(B)$ of any term $B \in \mathrm{Tm}_V$ belongs, by definition, to $\mathcal{A}|_V$ (and other terms, which include variables outside $V$, do not matter for us).

When passing from an algebra to its subalgebra, the truth of atomic formulae does not change. Therefore, under the same valuation $\alpha$, the formulae from $\mathcal{H}$ are true and the formula $C=D$ is false in $\mathcal{A}|_V$, as they are in $\mathcal{A}$. It remains to prove that $\mathcal{A}|_V$ is commutative: this will provide the desired counterexample showing that the Horn formula $\mathcal{H} \Rightarrow C=D$ is not generally valid on the class of commutative $*$-continuous Kleene algebras.

Let $a=\overline{\alpha}(A)$ and $b=\overline{\alpha}(B)$, where $A, B \in \mathrm{Tm}_V$, be two arbitrary elements of $\mathcal{A}|_V$. Let us prove that $a \cdot b=b \cdot a$ by induction on the total complexity of the terms $A$ and $B$. If $A=x$ and $B=y$ are variables, then by assumption the formula $x \cdot y=y \cdot x$ from the set $\mathcal{C}_V$ is true on the algebra $\mathcal{A}|_V$ under the valuation $\alpha$. Therefore, $a \cdot b=\overline{\alpha}(x \cdot y)= \overline{\alpha}(y \cdot x)= b \cdot a$. The designated elements $0$ and $1$ also commute with all elements of the algebra.

Now let $A$ be a complex term; the case of complex $B$ is considered symmetrically. Consider possible cases. If $A=A_1 \cdot A_2$, then by the induction hypothesis $a_1 \cdot b=b \cdot a_1$ and $a_2 \cdot b=b \cdot a_2$ (where $a_i=\overline{\alpha}(A_i)$). By associativity, $a \cdot b=a_1 \cdot a_2 \cdot b=a_1 \cdot b \cdot a_2=b \cdot a_1 \cdot a_2= b \cdot a$, and this is what we need. If $A=A_1+A_2$, we use distributivity: $a \cdot b= (a_1+ a_2) \cdot b=a_1 \cdot b+ a_2 \cdot b=b \cdot a_1+b \cdot a_2=b \cdot (a_1+a_2)=b \cdot a$. Finally, if $A=A_1^*$, then by the induction hypothesis we have $a_1 \cdot b=b \cdot a_1$. Hence, $a^n_1 \cdot b=b \cdot a_1^n$ for any natural $n$ (induction on $n$). By the $*$-continuity condition, we have $a \cdot b=a_1^* \cdot b=\sup_{\leqslant} \{ a_1^n \cdot b \mid n \geqslant 0 \}= \sup_{\leqslant} \{ b \cdot a_1^n \mid n \geqslant 0 \}=b \cdot a_1^*=b \cdot a$.

This proves the commutativity of $\mathcal{A}|_V$.

2.2. Upper complexity bound

Proposition 1 allows us to reduce Horn theories of commutative $*$-continuous Kleene algebras to Horn theories of all (including non-commutative) $*$-continuous Kleene algebras. Namely, each Horn formula $\mathcal{H} \Rightarrow C=D$ is replaced by $\mathcal{H} \cup \mathcal{C}_V \Rightarrow C= D$, where $V$ is the set of variables occurring in atomic formulae from the set $\mathcal{H} \cup \{ C=D \}$. Notice that the formulae from $\mathcal{C}_V$ do not include iteration. Thus, we get the following upper bounds.

Proposition 2. The Horn theory of commutative $*$-continuous Kleene algebras belongs to the $\Pi_1^1$ complexity class. The fragment of this theory with iteration-free hypotheses belongs to the $\Pi_2^0$ complexity class.

Later on we shall prove that these bounds are exact.

In the case of equational theory (or, equivalently, the theory with hypotheses of the form $A \cdot B=B \cdot A$, which are generally valid in the commutative case), we have the following result.

Proposition 3. The equational theory of commutative $*$-continuous Kleene algebras is algorithmically decidable and coincides with the equational theory of all commutative Kleene algebras.

Proof. The situation here is a bit more complicated. The equational theory of commutative $*$-continuous Kleene algebras reduces to the non-commutative Horn theory with sets of hypotheses of the form $\mathcal{C}_V$. Since all such hypotheses are of the form $A \cdot B=B \cdot A$, we get a $\Pi^0_1$ upper bound (see [6], Theorem 4.1(i)).

On the other hand, Red’ko [4] constructed a finite complete set of identities (equational laws) for a concrete family of $*$-continuous commutative Kleene algebras, namely, the algebra of commutative formal languages (in Red’ko’s terminology, commutative events, see above). It turns out that all identities listed by Red’ko are actually generally valid on all commutative Kleene algebras, including non-$*$-continuous ones. Therefore, any equation (atomic formula) which is true on all $*$-continuous commutative Kleene algebras is also true on all commutative Kleene algebras. The converse also holds, because narrowing the class of algebras can only extend the theory. Thus, the equational theory of commutative $*$-continuous Kleene algebras coincides with the equational theory of all commutative Kleene algebras.

Moreover, this theory is enumerable, thanks to Red’ko’s finite axiomatisation. Being at the same time co-enumerable (that is, belonging to the $\Pi_1^0$ class), this theory is decidable by Post’s theorem.

§ 3. Commutative infinitary action logic with exponential

Let us introduce an auxiliary calculus which axiomatises an extension of the equational theory of commutative $*$-continuous Kleene algebras. Namely, let us add the division operation (linear implication, $\multimap$) and the exponential modality, denoted by $!$. Adding division yields the equational theory of commutative $*$-continuous residuated Kleene algebras, that is, the commutative variant of infinitary action logic (see [9], [10]). Adding the exponential modality, in the spirit of Girard’s linear logic [11], yields the commutative variant of infinitary action logic with exponential [12].

The purpose of introducing such an extension is as follows. Using division, one can internalise inequations2 in the original language of Kleene algebras, writing them down as terms: $A \multimap B$ for $A \leqslant B$. Furthermore, the exponential modality allows us to formulate a modalised version of deduction theorem, thanks to which Horn hypotheses are placed inside the formula: $A \leqslant B \Rightarrow C \leqslant D$ is rewritten as ${!}(A \multimap B) \cdot C \leqslant D$. Thus, entailment (derivability) from hypotheses is replaced by “pure” derivability of formulae, but in an extended language. Finally, for the latter, a convenient sequent (Gentzen-style) calculus was developed, and cut elimination was proved [12]. This facilitates the derivation analysis.
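As a concrete instance of this internalisation (our own illustrative example, following the scheme just stated):

```latex
% Our illustrative instance of the internalisation scheme described above.
% The Horn formula
\[
  \{\, x \cdot y \leqslant z \,\} \;\Rightarrow\; x \cdot x \cdot y \leqslant x \cdot z
\]
% becomes the single inequation
\[
  {!}(x \cdot y \multimap z) \cdot (x \cdot x \cdot y) \leqslant x \cdot z,
\]
% or, in sequent notation, ${!}(x \cdot y \multimap z),\ x,\ x,\ y \to x \cdot z$.
```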

Terms of the extended language, also called linear formulae, are built from variables and constants $0$ and $1$ using binary operations ${\cdot}\,$, $+$, $\multimap$ and unary operations $^*$ and ${!}$. (Usually additive conjunction (intersection) $\wedge$ is also added to the language. We do not need it. For union (additive disjunction) we keep the notation $+$, instead of $\vee$ or $\oplus$.) Iteration is traditionally written in the postfix form: $A^*$, while for the exponential modality the prefix one is used: ${!}A$. Finite multisets of linear formulae are denoted by Greek letters (including the empty one); Latin letters are used for individual linear formulae. Recall that in a multiset the multiplicity (number of occurrences) of an element is important, while the order is not. Sequents are expressions of the form $\Gamma \to B$. A sequent of the form $A_1, \dots, A_n \to B$ corresponds to the atomic formula $A_1 \cdot \ldots \cdot A_n \leqslant B$; if the left-hand side is empty ($n=0$), the corresponding formula is $1 \leqslant B$.

The calculus in question is denoted by $\boldsymbol{!}\mathbf{CommACT}_\omega$; its axioms and inference rules are as follows.

The rules ${!}\mathrm{W}$ and ${!}\mathrm{C}$ (jointly) are called structural rules; when considered separately, they are called the weakening and contraction rules, respectively.

Let us discuss the rule ${}^*\mathrm{L}$ in more detail. By $A^n$ in this rule (and further on) we denote the sequence of $n$ copies of the formula $A$ (in particular, $A^0$ is the empty sequence). The ${}^* \mathrm{L}$ rule, having infinitely many premises, is an omega-rule. In the presence of an omega-rule, derivations may be infinite, but they must obey the well-foundedness condition: any chain of sequents going upwards in the proof tree from the goal sequent must reach an axiom in a finite number of steps.

Each derivation is associated with an ordinal called its rank.

Definition 8. If a derivation consists of just an axiom, then its rank is equal to $1$. If the derivation is more complex, one takes its final rule and computes the rank as the supremum (in fact, for all rules except ${}^*\mathrm{L}$ it is actually the maximum) of ranks of its premises’ derivations, plus $1$.

The well-foundedness condition guarantees that any derivation has a correctly defined rank. It follows from [12] that each derivable sequent has a derivation whose rank is less than the Church–Kleene ordinal $\omega_1^{\mathrm{CK}}$.

Also notice that in the formulation of the $\mathrm{Id}$ axiom we have required the left-hand side (which is the same as the right-hand side) to be a variable. For complex formulae $A$, the sequents $A \to A$ are also derivable (but, for the convenience of derivation analysis below, we do not declare them to be axioms).

The calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$ constructed above is very closely related to the calculus $\boldsymbol{!}\mathbf{ACT}_\omega$ from [12]. The difference is in commutativity, the absence of additive conjunction $\wedge$, and the usage of only one exponential modality instead of a family of subexponential ones. Thus, several properties of $\boldsymbol{!}\mathbf{ACT}_\omega$ transfer to the new system $\boldsymbol{!}\mathbf{CommACT}_\omega$ with essentially the same proofs. These properties include cut elimination (see Theorem 4.2 in [12]) and the $\Pi_1^1$ upper complexity bound on the derivability problem (see Theorem 5.2 in [12]).

Theorem 1. Any sequent which is derivable in $\boldsymbol{!}\mathbf{CommACT}_\omega$ is derivable without using the $\mathrm{Cut}$ rule.

Theorem 2. The complexity of the derivability problem in $\boldsymbol{!}\mathbf{CommACT}_\omega$ belongs to the $\Pi_1^1$ class.

In what follows we shall prove (Corollary 2) that this bound is tight: the derivability problem in $\boldsymbol{!}\mathbf{CommACT}_\omega$ is $\Pi_1^1$-complete, that is, it has the same complexity as $\boldsymbol{!}\mathbf{ACT}_\omega$ [12].

In fact, one may include a modality which allows permutation (and no other structural rules) into the family of subexponential modalities of the $\boldsymbol{!}\mathbf{ACT}_\omega$ calculus. Using this modality, it is possible to construct a conservative embedding of the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$ into $\boldsymbol{!}\mathbf{ACT}_\omega$ (under this embedding, the exponential modality maps into another subexponential, which allows structural rules of permutation, weakening, and contraction). Such an embedding, for a multiplexing modality instead of the exponential one, is presented in [13]. Using such an embedding, one can obtain the aforementioned results as direct corollaries (rather than analogs) of the corresponding theorems from [12].

Finally, we shall need an embedding of the Horn theory of commutative $*$-continuous Kleene algebras into the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$. In order to construct such an embedding, let us define, given a finite set of atomic formulae $\mathcal{H}=\{ A_1 \leqslant B_1, \dots, A_n \leqslant B_n \}$, the following multiset of linear formulae:

$$ \begin{equation*} {!}\mathcal{H}={!} (A_1 \multimap B_1), \dots, {!} (A_n \multimap B_n). \end{equation*} \notag $$
(If $\mathcal{H}$ includes a formula $A=B$, which is not an inequation, this formula can be equivalently replaced by two formulae: $A \leqslant B$ and $B \leqslant A$.)

Theorem 3. A Horn formula $\mathcal{H} \Rightarrow C_1 \cdot \ldots \cdot C_n \leqslant D$ is generally valid on the class of all commutative $*$-continuous Kleene algebras if and only if the sequent ${!}\mathcal{H}, C_1, \dots, C_n \to D$ is derivable in the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$.

(If the conclusion of the given Horn formula is not an inequation, its general validity can be reduced to the derivability, in $\boldsymbol{!}\mathbf{CommACT}_\omega$, of two sequents: ${!}\mathcal{H}, C \to D$ and ${!}\mathcal{H}, D \to C$. We shall not need this for the complexity lower bounds presented below.)

The proof of Theorem 3 repeats that of Lemma 5.3 and Theorem 5.7 from [12], up to commutativity. Moreover, in Lemma 5.3 of [12] one should omit assertion 2, concerning a restricted (“relevant”) modality ${!}^c$, and go from assertion 1 directly to assertion 3.

§ 4. $\Sigma_1^0$-completeness of the Horn theory of all commutative Kleene algebras

This work is mostly dedicated to complexity of theories of the class of commutative $*$-continuous Kleene algebras. As noticed in the previous section, the $*$-continuity condition translates, on the syntactic level, to an omega-rule. This makes derivations infinite, which yields interesting levels of algorithmic complexity (up to $\Pi_1^1$). In this section, however, we shall handle the much easier case of the class of arbitrary commutative Kleene algebras. The definition of Kleene algebra (in the general, not $*$-continuous case) is formulated in the language of first-order logic by a finite set of axioms. Therefore, not only the Horn theory, but even the first-order theory of the class of all commutative Kleene algebras is enumerable, that is, it belongs to the $\Sigma^0_1$ complexity class.

This complexity bound can be shown to be exact. Moreover, $\Sigma^0_1$-completeness is achieved already for Horn theories which do not include iteration (neither in hypotheses, nor in conclusions).

Proposition 4. The general validity problem for Horn formulae without iteration on the class of all commutative Kleene algebras is $\Sigma_1^0$-complete. Therefore, so is the full Horn theory of the class of all commutative Kleene algebras.

Proof. As noticed above, the non-trivial bound here is the lower one. Let us show that this lower bound immediately follows from the result of Lincoln, Mitchell, Scedrov, and Shankar (see Corollary 3.8 in [14]) on undecidability of intuitionistic propositional linear logic, denoted by $\mathbf{ILL}$. In our notation, $\mathbf{ILL}$, or, more precisely, its fragment without additive conjunction, is obtained from $\boldsymbol{!}\mathbf{CommACT}_\omega$ by removing the rules for Kleene iteration, that is, ${}^*\mathrm{L}$ and ${}^*\mathrm{R}$ (after that, all derivations become finite). It follows from the construction of Lincoln, Mitchell, Scedrov, and Shankar (see § 3 in [14]) that the derivability problem in $\mathbf{ILL}$ for sequents of the form ${!}\mathcal{H}, C_1, \dots, C_n \to D$ (where individual formulae are built using only $\cdot$ and $+$) is $\Sigma^0_1$-complete.

Now let us prove that the derivability of this sequent in $\mathbf{ILL}$ is equivalent to the general validity of the Horn formula $\mathcal{H} \Rightarrow C_1 \cdot \ldots \cdot C_n \leqslant D$ on the class of all commutative Kleene algebras. Indeed, if the Horn formula in question is generally valid on the class of all commutative Kleene algebras, then it is of course generally valid on the class of commutative $*$-continuous Kleene algebras. By Theorem 3, the sequent ${!}\mathcal{H}, C_1, \dots, C_n \to D$ is derivable in $\boldsymbol{!}\mathbf{CommACT}_\omega$. It remains to notice that cut elimination yields conservativity: if the goal sequent does not include Kleene iteration, then iteration cannot occur in its (cut-free) derivation either. Therefore, the sequent in question is derivable in $\mathbf{ILL}$.

For the opposite direction, suppose that the sequent ${!}\mathcal{H}, C_1, \dots, C_n \to D$ is derivable in the calculus $\mathbf{ILL}$. Repeating the argument from the proof of Theorem 3, namely, induction on the derivation in $\mathbf{ILL}$, we show that the Horn formula $\mathcal{H} \Rightarrow C_1 \cdot \ldots \cdot C_n \leqslant D$ is generally valid on the class of all commutative Kleene algebras, not only $*$-continuous. Indeed, since the derivation in $\mathbf{ILL}$ does not include Kleene iteration, the argument would work for a non-$*$-continuous Kleene algebra as well.

Thus, we have proved $\Sigma_1^0$-hardness of the general validity problem for iteration-free Horn formulae on the class of all commutative Kleene algebras.

§ 5. Disbalancing of derivations in $\boldsymbol{!}\mathbf{CommACT}_\omega$

In what follows (§ 6–§ 8), we shall use the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$ to encode various properties of computations on counter machines (Minsky machines, § 6.1). In such encodings, the transition from a computation on a counter machine to the derivation of the corresponding sequent in $\boldsymbol{!}\mathbf{CommACT}_\omega$ is performed rather easily. The converse transformation, from a derivation to a computation, is not that straightforward: an arbitrary derivation, even a cut-free one, does not always exactly correspond to a computation on the counter machine.

In order to simplify the derivation-to-computation transition, we shall consider not arbitrary derivations, but derivations of a specific form, which we call disbalanced. Disbalancing transformations, which translate arbitrary derivations into disbalanced form, were introduced in [15]. The disbalancing technique is a simplified version of focusing of derivations [16] and aims to make the structure of a cut-free derivation more regular and more convenient for analysis.

In this section, we shall require all derivations to be cut-free: by Theorem 1, any derivable sequent has a derivation without cut. Moreover, we consider only derivations which do not include the rules ${\multimap} \mathrm{R}$ and ${!}\mathrm{R}$. This circumstance makes disbalancing much easier than focusing of derivations which may include arbitrary inference rules of the calculus. On the other hand, the disbalancing technique presented below is applied to potentially infinite derivations, since the usage of the omega-rule ${}^{*}\mathrm{L}$ is not forbidden. This makes the technique essentially different from disbalancing [15] or focusing [16] of finite derivations.

The absence of the rules ${\multimap}\mathrm{R}$ and ${!}\mathrm{R}$ in a cut-free derivation can be guaranteed by a simple syntactic property of the goal sequent. Namely, it should not include formulae of the form $A \multimap B$ or of the form ${!}A$ in positive polarity. (The polarity of an occurrence of a formula is defined in the standard way: the polarity of a formula as a subformula of itself is always positive; the polarity flips each time the occurrence is put into the left-hand side of a linear implication ($\multimap$) or into the left-hand side of a sequent; otherwise it is preserved.) All sequents which will be used for encoding counter machines will obey this condition. Therefore, they will be derivable without using the rules $\mathrm{Cut}$, ${\multimap}\mathrm{R}$, and ${!}\mathrm{R}$.

We shall call right rules those rules which introduce operations into the right-hand side of the sequent (the names of these rules contain the letter $\mathrm{R}$). Left rules are rules which introduce operations into the left-hand side of the sequent (names with the letter $\mathrm{L}$) and also the structural rules ${!}\mathrm{W}$ and ${!}\mathrm{C}$. The axiom $0\mathrm{L}$ is regarded as a left rule, and the axiom $1\mathrm{R}$ as a right rule.

Definition 9. A derivation (in the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$) that does not include the rules $\mathrm{Cut}$, ${\multimap}\mathrm{R}$, and ${!}\mathrm{R}$ is called a disbalanced one if it satisfies the following conditions.

The second condition explains the term “disbalanced”: the main track of the derivation, which uses the left rules (including those for the exponential), always turns right at applications of the ${\multimap}\mathrm{L}$ rule. The derivations of its left premises, in contrast, are always short and relatively trivial. Thus, the derivation tree is far from a balanced tree, in which all paths from the root to a leaf have comparable length.

Theorem 4. If a sequent has a derivation in the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$ which does not use rules $\mathrm{Cut}$, ${\multimap}\mathrm{R}$, and ${!}\mathrm{R}$, then this sequent has a disbalanced derivation.

Proof. The transition from an arbitrary derivation (without the forbidden rules) to a disbalanced one is achieved via a series of transformations aimed at satisfying conditions 1–3. The derivations in question, however, may be infinite, so the argument should be carefully laid out as a transfinite induction.

Let us first achieve condition 1. If the derivation does not satisfy condition 1, then it has a left rule application, which is immediately followed by a right rule application (right rule below left rule). In this case, the rule applications can be interchanged. For example, this is how it is done for ${\multimap} \mathrm{L}$ and ${\cdot}\mathrm{R}$:

transforms into

Another example involves the rules ${}^*\mathrm{L}$ and ${}^*\mathrm{R}$:

transforms into an application of the ${}^* \mathrm{L}$ rule:
Each of the infinite number of its premises, in turn, is derived by the rule ${}^*\mathrm{R}$:

Other pairs of a left rule and a right one are considered in a similar way.4

Let us prove that the derivation can be transformed, by applying such transformations, into a form which satisfies condition 1. Let us first prove an auxiliary statement (a particular case): if the lowermost rule in a derivation of a sequent is a right rule, and the derivations of its premises satisfy condition 1, then the goal sequent also has a derivation which satisfies condition 1. (In other words, if condition 1 could have been violated only at the very end of the derivation, then this violation can be fixed.)

Let us call a premise of the lowermost rule application a “bad” one (that is, violating condition 1) if the lowermost rule in the derivation of this premise is a left one. We proceed by transfinite induction on the lexicographical order on pairs $(\beta, k)$, where $\beta$ is the maximal rank of the derivation of a “bad” premise and $k$ is the number of “bad” premises with derivations of rank5 $\beta$.

Let us apply the appropriate transformation, as described above, to one of the “bad” premises with a derivation of rank $\beta$. This transformation interchanges the right rule with the left one. In the process, the right rule can get “duplicated”, possibly an infinite number of times. In each of the new applications of the right rule, the rank of the derivation of the “bad” premise to which the transformation was applied is decreased at least by 1. Let us call this premise the active one. Other premises (including “bad” ones), together with the ranks of their derivations, remain intact. If the right rule had other “bad” premises of rank $\beta$, besides the active one, then the parameter $\beta$ does not change and the parameter $k$ is decreased by 1. Otherwise, if all other “bad” premises have rank less than $\beta$, the parameter $\beta$ is decreased. (For these purposes, it does not matter whether the active premise remains “bad” or not.) In both cases, we may apply the induction hypothesis. Now all premises of the left rule, which became the lowermost one after the transformations, have derivations which satisfy condition 1. Appending an application of a left rule at the bottom does not violate condition 1. The auxiliary statement is proved.

Now let us prove that any derivation (without the forbidden rules) can be transformed into a derivation which satisfies condition 1. We proceed by transfinite induction on the rank of the derivation and consider different cases depending on the lowermost rule. By the induction hypothesis, the premises of this rule already have derivations satisfying condition 1. If the lowermost rule is a left one or (trivial case) an axiom, then adding this rule does not violate condition 1. If the lowermost rule is a right one, we apply the auxiliary statement proved above.

Now let us achieve condition 2, keeping condition 1 valid. Here, we need to get rid of “incorrect” applications of the ${\multimap}\mathrm{L}$ rule, whose left premise is derived using a left rule. Again, let us first prove an auxiliary statement: if the lowermost rule in a derivation is ${\multimap}\mathrm{L}$ and the derivations of its premises satisfy conditions 1 and 2, then the goal sequent also has a derivation which satisfies conditions 1 and 2. (Here, again, condition 2 could possibly be violated only at the very bottom of the derivation, and we wish to eliminate this violation.)

We proceed by transfinite induction on the rank of the derivation of the left premise of the lowermost rule ${\multimap}\mathrm{L}$. Consider three cases.

1st case: this left premise is derived by a right rule. By condition 1, all rules in its derivation are right ones, so the derivation of the goal sequent already satisfies condition 2, and condition 1 as well (adding a left rule cannot violate condition 1).

2nd case: the left premise of the lowermost rule ${\multimap}\mathrm{L}$ is derived by another application of the ${\multimap}\mathrm{L}$ rule:

Let us transform the derivation using the following disbalancing transformation:
The derivation of $\Phi \to E$ does not contain left rules, since originally all applications of ${\multimap}\mathrm{L}$, except the last one, were “correct”, that is, condition 2 was not violated. The derivation rank of the sequent $\Pi, F \to A$ is less than that of the sequent $\Pi, \Phi, E \multimap F \to A$. Hence, we may apply the induction hypothesis: the sequent $\Gamma, \Pi, F, A\multimap B \to C$ has a derivation which satisfies condition 2. Combining this derivation with the derivation of $\Phi \to E$ and the final application of the ${\multimap}\mathrm{L}$ rule, we obtain the desired derivation of the goal sequent $\Gamma, \Pi, \Phi, E \multimap F, A\multimap B \to C$.

3rd case: the left premise of the active application of ${\multimap}\mathrm{L}$ is derived using another left rule (logical or structural). All such rules have the same form: they operate only inside the left-hand side of the sequent, deriving a sequent of the form $\Pi \to A$ from sequent(s) of the form $\widetilde{\Pi} \to A$.

In the case of exactly one premise,

we transform the derivation as follows:
Again, the rank of the left premise is decreased, and we may apply the induction hypothesis. For the cases where the given rule has zero (the $0\mathrm{L}$ rule), two (the ${+}\mathrm{L}$ rule), or infinitely many (the ${}^*\mathrm{L}$ rule) premises, the argument is similar.

Now let us prove that any derivation (without the forbidden rules) which satisfies condition 1 can be transformed into a derivation which satisfies conditions 1 and 2. We proceed by transfinite induction on the derivation rank. Consider the lowermost inference rule. The axiom case is trivial. In the case of a right rule, by condition 1 the whole derivation consists only of right rules, which makes condition 2 hold automatically. If the lowermost rule is a left one, but not ${\multimap}\mathrm{L}$, then by the induction hypothesis each of its premises has a derivation which satisfies conditions 1 and 2. Adding the lowermost rule does not violate these conditions. Finally, if the lowermost rule is ${\multimap}\mathrm{L}$, we apply the induction hypothesis together with the auxiliary statement.

Finally, let us achieve condition 3. Assume that there is a derivation without the forbidden rules which satisfies conditions 1 and 2. Let us introduce an additional parameter for the transfinite induction, the ordinal complexity measure. This measure was defined by Palka [9]. The measure $\mu(\Pi\to A)$ of a sequent $\Pi \to A$ is the ordinal $\dots+\omega^k \cdot c_k+\dots+\omega^2 \cdot c_2+\omega \cdot c_1+c_0$, where $c_i$ is the number of subformulae of complexity $i$ in the sequent $\Pi \to A$. (The complexity of a formula is the total number of occurrences of connectives and of the constants $0$ and $1$ in it.) We proceed by induction on the lexicographical order on pairs $(\beta,\mu)$, where $\beta$ is the derivation rank and $\mu$ is the measure of the goal sequent.

Consider the lowermost rule in the given derivation (the case of axiom, including $0\mathrm{L}$, is trivial).

1st case: the lowermost rule is a right one. By condition 1, the whole derivation consists of only right rules. The left-hand side of the sequent consists only of variables and does not contain complex formulae or constants. Condition 3 holds automatically.

2nd case: the lowermost rule is one of ${\cdot}\mathrm{L}$, ${+}\mathrm{L}$, ${}^* \mathrm{L}$, and $1\mathrm{L}$. In this case condition 3 also holds. Indeed, the goal sequent obviously does not violate this condition, and sequents in derivations of premises satisfy it by the induction hypothesis (these derivations have smaller ranks).

3rd case: the lowermost rule is ${\multimap}\mathrm{L}$ or one of the rules operating on ${!}$: ${!}\mathrm{L}$, ${!}\mathrm{C}$, ${!}\mathrm{W}$. If the left-hand side of the goal sequent includes the constant $0$, we replace the whole derivation with an application of the $0\mathrm{L}$ axiom. If this sequent does not have formulae of the form $B \cdot C$, $B+C$, $B^*$, or $1$, then the derivation satisfies condition 3, and the proof is complete.

Otherwise all such formulae also occur in the premise of the lowermost rule ${!}\mathrm{L}$, ${!}\mathrm{C}$, or ${!}\mathrm{W}$, or in one of the premises of the lowermost rule ${\multimap}\mathrm{L}$. Notice that by condition 2 it should necessarily be the second (right) premise of ${\multimap}\mathrm{L}$.

Since the derivation of this premise satisfies conditions 1–3, this derivation ends with an application of the left rule introducing the given formula. For example, this is the form of the ending of the derivation for rules ${\multimap}\mathrm{L}$ and ${+}\mathrm{L}$:

For ${!}\mathrm{C}$ and ${+}\mathrm{L}$, we have

Let us apply, to each of these premises, our lowermost rule, for example:

Notice that the derivation rank for these sequents is not greater than that of the original goal sequent. At the same time, the ordinal measure $\mu$ is strictly smaller. (In particular, this holds for the ${}^* \mathrm{L}$ rule. Formulae $A^n$ contribute less to $\mu$ than $A^*$, since the latter is more complex and adds $1$ to the coefficient of a greater power of $\omega$.) Thus, we may apply the induction hypothesis: either the rank is strictly decreased, or the rank remained the same, but the second parameter $\mu$ is decreased.

By the induction hypothesis, the sequents obtained in the aforementioned way have derivations which satisfy conditions 1–3. Now let us apply the rule which introduces the formula of the necessary form, $B \cdot C$, $B+C$, $B^*$, or $1$. This yields the original goal sequent. (In our examples this is the ${+}\mathrm{L}$ rule.) Since it is a left rule other than ${\multimap}\mathrm{L}$, conditions 1 and 2 remain valid for the augmented derivation. Condition 3 is also satisfied: inside the derivations of the premises of the lowermost rule it holds by the induction hypothesis, and the goal sequent is derived using the required rule.

Theorem 4 is proved.
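The ordinal measure $\mu$ used in the last part of the proof can be made concrete in a short Python sketch (the dictionary representation and the function names `measure` and `mu_less` are our own illustration, not part of the paper): below $\omega^\omega$, a measure is determined by its coefficients $c_i$, and two measures are compared lexicographically starting from the highest power of $\omega$.

```python
from collections import Counter

def measure(subformulas, complexity):
    """Ordinal measure mu of a sequent, given the list of its subformula
    occurrences and a complexity function (number of connectives and
    constants).  Represented as a map: i -> c_i, the coefficient of omega^i."""
    mu = Counter()
    for f in subformulas:
        mu[complexity(f)] += 1
    return mu

def mu_less(mu1, mu2):
    """Compare two measures below omega^omega: look at the coefficients
    of the highest powers of omega first (lexicographic comparison)."""
    top = max(set(mu1) | set(mu2), default=0)
    for i in range(top, -1, -1):
        if mu1[i] != mu2[i]:          # Counter returns 0 for missing keys
            return mu1[i] < mu2[i]
    return False                      # the measures are equal
```

In particular, replacing one occurrence of complexity $k$ by any finite number of occurrences of smaller complexity strictly decreases $\mu$, which matches the remark in the proof about $A^n$ contributing less than $A^*$.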

§ 6. Encoding counter machines

6.1. Counter machines

Unlike the non-commutative case [6], in the commutative one there is no natural way to encode ordered sequences of symbols (words), which is necessary for representing Turing machine configurations. For this reason, instead of Turing machines or other computational models which work with words, we use counter machines, or Minsky machines [17], which operate with natural numbers.

The idea of using counter machines is not new. In [14], such machines were used to prove undecidability of commutative linear logic (with the exponential modality). In [18], counter machines were used to prove undecidability of commutative action logic. Let us notice that the addition operation (also called disjunction or union) is essential for encoding counter machines, while the encoding of Turing machines [6] actually requires only multiplication. The same happens for linear logic [14], where the question of whether the multiplicative-exponential fragment (that is, the one without additive disjunction and conjunction) is decidable remains open [19].

Recall the definition of counter machines. We consider only deterministic ones.

Definition 10. Fix a finite set of counter identifiers $\mathcal{R}$ (for example, $\mathcal{R}=\{ \mathsf{a}, \mathsf{b}, \mathsf{c} \}$ for a machine with three counters). A deterministic counter machine $\mathtt{M}$ with the set of counters $\mathcal{R}$ consists of this set $\mathcal{R}$, a finite set of states $\mathcal{Q}$ in which the initial state $q_I \in \mathcal{Q}$ and the set of final states $\mathcal{Q}_F \subset \mathcal{Q}$ are designated, and a finite set of instructions $\mathcal{I}$ (that is, $\mathtt{M}=(\mathcal{R}, \mathcal{Q}, q_I, \mathcal{Q}_F, \mathcal{I})$). Each instruction has one of the following two forms, with the corresponding description of how it operates (here, $r \in \mathcal{R}$ is a counter and $p,q,q_0 \in \mathcal{Q}$ are states):

inc$(p,r,q)$: being in state $p$, increase counter $r$ by 1 and change the state to $q$;
jzdec$(p,r,q_0,q)$: being in state $p$, if the value of $r$ equals zero, change the state to $q_0$; otherwise, decrease $r$ by 1 and change the state to $q$.

For each $p \in \mathcal{Q}-\mathcal{Q}_F$, the set $\mathcal{I}$ contains exactly one instruction with $p$ as its first argument; for $p \in \mathcal{Q}_F$ there is no such instruction. (This is the determinacy condition for $\mathtt{M}$.)

Definition 11. A configuration of a counter machine $\mathtt{M}$ is a pair $\mathbf{c}=(p,v)$, where $p \in \mathcal{Q}$ is a state and $v \colon \mathcal{R} \to \mathbb{N}$ is a function which assigns values to counters (since $\mathcal{R}$ is finite, $v$ is given by a finite table). Configuration $\mathbf{c}_2$ is the immediate successor of configuration $\mathbf{c}_1$ if $\mathbf{c}_2$ is obtained from $\mathbf{c}_1$ by application of an instruction of the machine $\mathtt{M}$. Due to determinacy, the immediate successor exists and is unique for any configuration with $p \notin \mathcal{Q}_F$. A configuration of the form $(q_F, v)$, where $q_F \in \mathcal{Q}_F$, does not have an immediate successor. Machine $\mathtt{M}$ reaches configuration $\mathbf{c}_2$ from configuration $\mathbf{c}_1$ if there exists a sequence of configurations from $\mathbf{c}_1$ to $\mathbf{c}_2$, in which each next configuration is the immediate successor of the previous one. (In particular, the sequence may contain only one element, that is, the machine reaches any configuration from itself.)
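Definitions 10 and 11 can be illustrated by a small Python sketch of a deterministic counter machine (the tuple encoding of instructions and the names `step` and `reaches` are our own, introduced only for illustration; the bounded search in `reaches` is a practical concession, since reachability is undecidable in general).

```python
# instrs maps each non-final state p either to ('inc', r, q) or to
# ('jzdec', r, q0, q); determinacy holds because instrs is a function of p.
# A configuration is a pair (state, dict assigning a value to every counter).

def step(instrs, config):
    """Immediate successor of a configuration; None for final states."""
    p, v = config
    if p not in instrs:              # p is a final state
        return None
    ins = instrs[p]
    if ins[0] == 'inc':
        _, r, q = ins
        return q, {**v, r: v[r] + 1}
    _, r, q0, q = ins                # jzdec
    if v[r] == 0:
        return q0, dict(v)
    return q, {**v, r: v[r] - 1}

def reaches(instrs, c1, c2, max_steps=10_000):
    """Does the machine reach c2 from c1?  (Cut off after max_steps.)"""
    c = c1
    for _ in range(max_steps + 1):
        if c == c2:                  # a configuration reaches itself
            return True
        c = step(instrs, c)
        if c is None:
            return False
    return False
```

For instance, a machine with the instructions jzdec$(p, \mathsf{a}, q_F, q)$ and inc$(q, \mathsf{b}, p)$ moves the contents of counter $\mathsf{a}$ into $\mathsf{b}$ and halts in $q_F$.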

We shall use the following notation for configurations of a counter machine:

$$ \begin{equation*} (q, (\mathsf{a}_1 \mapsto k_1, \dots, \mathsf{a}_n \mapsto k_n)). \end{equation*} \notag $$
Moreover, if the value of a counter equals zero, it can be omitted in this notation. For example, the configuration of a machine with $\mathcal{R}=\{ \mathsf{a}, \mathsf{b}, \mathsf{c}, \mathsf{d} \}$, where the machine is in state $q$, and the values of its counters $\mathsf{a}$, $\mathsf{b}$, $\mathsf{c}$, $\mathsf{d}$ equal $6$, $0$, $12$, $0$, respectively, may be denoted by $(q, (\mathsf{a} \mapsto 6, \mathsf{c} \mapsto 12))$. In particular, the notation $(q,())$ denotes the configuration where the machine is in state $q$ and the values of all counters equal zero.

Let us fix one of the counters, $\mathsf{a} \in \mathcal{R}$, which will be used for input-output.

Similarly, one defines computation of a function with several arguments. In order to give this definition, one fixes several counters $\mathsf{a}_1, \dots, \mathsf{a}_n$, which will be used for input data. The resulting value is put into the counter $\mathsf{a}=\mathsf{a}_1$.

Machines with three counters ($\mathcal{R}=\{ \mathsf{a}, \mathsf{b}, \mathsf{c} \}$) are sufficient to compute any computable (for example, Turing computable) partial function $f \colon \mathbb{N} \dashrightarrow \mathbb{N}$:

Theorem 5 (M. L. Minsky, 1961; R. Schroeppel, 1972). A partial function $f \colon \mathbb{N} \dashrightarrow \mathbb{N}$ is computable if and only if it is computed by a machine $\mathtt{M}$ with three counters and a one-element set of final states $\mathcal{Q}_F=\{q_F\}$.

The idea of this theorem, that is, the reduction of Turing machines to counter ones, is due to Minsky [17]. Moreover, machines with two counters are also capable of computing all Turing computable functions, but this requires a specific encoding of input and output values. Therefore, it is more convenient to use machines with three counters. A detailed proof of Theorem 5 is given in Schroeppel’s memo [20].

Theorem 5 can be easily generalized to the case of several arguments: any computable function $f \colon \mathbb{N}^n \dashrightarrow \mathbb{N}$ is computed by a machine with $n+2$ counters. The interesting case for us is $n=2$:

Proposition 5. A partial function $h \colon \mathbb{N} \times \mathbb{N} \dashrightarrow \mathbb{N}$ is computable if and only if it is computed by a machine with four counters (two of which are used for input data) and a one-element set of final states $\mathcal{Q}_F=\{q_F\}$.

Let us fix the following coding of pairs of natural numbers (we shall need it in what follows):

$$ \begin{equation*} [x,y]=\dfrac{(x+y) \cdot (x+y+1)}{2}+x. \end{equation*} \notag $$
It is easy to see that the coding is a computable (in both directions) one-to-one correspondence between $\mathbb{N} \times \mathbb{N}$ and $\mathbb{N}$.
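This claim is easy to check directly; the following Python sketch (our own illustration, with hypothetical function names) computes the code and its inverse, recovering the “diagonal” $z = x + y$ as the largest $z$ whose triangular number $T(z) = z(z+1)/2$ does not exceed the code:

```python
def pair(x, y):
    """Cantor pairing code [x, y] = (x + y)(x + y + 1)/2 + x."""
    return (x + y) * (x + y + 1) // 2 + x

def unpair(n):
    """Inverse of pair: find z with T(z) <= n < T(z + 1),
    then x = n - T(z) and y = z - x."""
    z = 0
    while (z + 1) * (z + 2) // 2 <= n:
        z += 1
    x = n - z * (z + 1) // 2
    return x, z - x
```

Both directions are clearly computable, and `unpair(pair(x, y)) == (x, y)` for all natural $x$, $y$, so the coding is indeed a one-to-one correspondence between $\mathbb{N} \times \mathbb{N}$ and $\mathbb{N}$.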

Proof. The idea of the proof is similar to that used in Lemma 2 in [18]. Let $\mathcal{R}=\{ \mathsf{a}, \mathsf{b}, \mathsf{c}, \mathsf{d} \}$, where the first two counters are used for input data. Given two input values $x$ and $y$, located in counters $\mathsf{a}$ and $\mathsf{b}$, one computes the code of their pair $[x,y]=(x+y) \cdot (x+y+1) / 2+x$. This is done as follows. Let us decrease (jzdec) the value of $\mathsf{a}$, until it reaches zero, at the same time increasing (inc) the values of $\mathsf{b}$ and $\mathsf{d}$. Now $\mathsf{b}$’s value is $z=x+y$, the value of $\mathsf{d}$ is $x$, other counters hold zero. By Theorem 5 we compute the value $z \cdot (z+1) / 2$ on three counters $\mathsf{a}$, $\mathsf{b}$, $\mathsf{c}$, keeping $\mathsf{d}$ intact. The result is stored in $\mathsf{a}$. Now we add $x$ to this value, decreasing $\mathsf{d}$ and increasing $\mathsf{a}$. Now everything is ready for computing the function $h' \colon [x,y] \mapsto h(x,y)$. (If $h(x,y)$ is undefined, then so is $h'([x,y])$. Recall that the pair coding used is a one-to-one correspondence between $\mathbb{N}$ and $\mathbb{N} \times \mathbb{N}$.) Indeed, counter $\mathsf{a}$ holds $[x,y]$, others hold zero. The function $h'$ is computed using three counters $\mathsf{a}$, $\mathsf{b}$, $\mathsf{c}$ by Theorem 5.

An important particular case of a two-argument function is the universal function. In order to define it, let us fix a natural encoding of three-counter machines by natural numbers. The code of a machine $\mathtt{M}$ will be denoted by $\ulcorner\mathtt{M}\urcorner$. The universal function $U \colon \mathbb{N} \times \mathbb{N} \dashrightarrow \mathbb{N}$ is defined as follows:

$$ \begin{equation*} \begin{aligned} \, & U(\ulcorner\mathtt{M}\urcorner, x)=f_{{\mathtt{M}}}(x) && \text{if }f_{\mathtt{M}}(x)\text{ is defined}; \\ & U(\ulcorner\mathtt{M}\urcorner, x)\text{ is undefined} && \text{if }f_{\mathtt{M}}(x)\text{ is undefined}; \\ & U(w,x)\text{ is undefined} && \text{if } w\text{ is not a correct code} \\ & && \text{of a three-counter machine}. \end{aligned} \end{equation*} \notag $$
This function is computable, and by Proposition 5 it is computed by a fixed machine $\mathtt{U}$ with four counters and $\mathcal{Q}_F=\{ q_F \}$. Fixing a concrete machine $\mathtt{U}$ will allow us to obtain complexity results for fragments of Horn theories with a fixed set of hypotheses $\mathcal{H}$.

Let us recall the algorithmic problems connected to the universal function $U$, which are complete in the corresponding complexity classes. The halting problem (for given $w$ and $x$, determine whether $U(w,x)$ is defined) is $\Sigma^0_1$-complete. The dual non-halting problem (determine whether $U(w,x)$ is undefined) is $\Pi^0_1$-complete. A harder problem is the totality problem: given $w$, determine whether $U(w,x)$ is defined for all $x$. This is a well-known $\Pi_2^0$-complete problem (see, for example, § B.II.3 in [21]). In what follows (see § 8), we shall also need the $\Pi^1_1$-complete problem of checking noetherianity of a computably defined graph on the set $\mathbb{N}$.
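For orientation, the quantifier prefixes behind these completeness levels can be written over the decidable predicate $\mathrm{Halt}(w,x,t)$, “the machine $\mathtt{U}$ halts on input $(w,x)$ within $t$ steps” (a standard presentation, given here only as an illustration):

```latex
\begin{aligned}
\text{halting:}     &\quad \exists t\, \mathrm{Halt}(w,x,t)             && (\Sigma^0_1),\\
\text{non-halting:} &\quad \forall t\, \neg\mathrm{Halt}(w,x,t)         && (\Pi^0_1),\\
\text{totality:}    &\quad \forall x\, \exists t\, \mathrm{Halt}(w,x,t) && (\Pi^0_2).
\end{aligned}
```

The extra number quantifier $\forall x$ is what lifts totality one level higher in the arithmetical hierarchy; noetherianity, in contrast, requires a quantifier over infinite sequences and thus lands in the analytical class $\Pi^1_1$.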

6.2. Encoding four-counter machines

Now let us describe the encoding of a machine $\mathtt{M}$ with four counters in the language of commutative Kleene algebras. Let $\mathcal{R}=\{ \mathsf{a}, \mathsf{b}, \mathsf{c}, \mathsf{d} \}$, $\mathcal{Q}$ be the set of states of the machine $\mathtt{M}$, and let us suppose that the set of variables includes all elements of $\mathcal{Q} \cup \mathcal{R}$ and additionally the following four elements: $z_\mathsf{a}$, $z_\mathsf{b}$, $z_\mathsf{c}$ and $z_\mathsf{d}$. The set of atomic formulae $\mathcal{H}_{\mathtt{M}}$ is constructed as follows:

Notice that the set $\mathcal{H}_{\mathtt{M}}$ does not contain iteration. The multiset ${!}\mathcal{H}_{\mathtt{M}}$, respectively, includes linear formulae of the following forms: ${!}(p \multimap (q \cdot r))$, ${!}((p \cdot r) \multimap q)$, and ${!}(p \multimap (q_0+ z_r))$.

Let us also introduce the following auxiliary term:

$$ \begin{equation*} Z=(z_\mathsf{a} \cdot \mathsf{b}^* \cdot \mathsf{c}^* \cdot \mathsf{d}^*) + (z_\mathsf{b} \cdot \mathsf{a}^* \cdot \mathsf{c}^* \cdot \mathsf{d}^*) + (z_\mathsf{c} \cdot \mathsf{a}^* \cdot \mathsf{b}^* \cdot \mathsf{d}^*) + (z_\mathsf{d} \cdot \mathsf{a}^* \cdot \mathsf{b}^* \cdot \mathsf{c}^*). \end{equation*} \notag $$

For convenience, we introduce the notation $A^{{\bullet} n}=A \cdot \ldots \cdot A$ ($n$ times), where $A$ is an arbitrary linear formula. This notation is somewhat unusual because the notation $A^n$ was already used for the sequence of $n$ copies of the given linear formula $A$. The notation $A^{{\bullet} 0}$ denotes the constant $1$, but we shall omit it when it is used in multiplication: $B \cdot A^{{\bullet} 0}$ means just $B$.

Let us formulate and prove the principal result on encoding of four-counter machine execution.

Theorem 6. The machine $\mathtt{M}$ reaches configuration $\mathbf{c}_2=(q_2, (\mathsf{a} \mapsto a_2, \mathsf{b} \mapsto b_2, \mathsf{c} \mapsto c_2, \mathsf{d} \mapsto d_2))$ from configuration $\mathbf{c}_1=(q_1, (\mathsf{a} \mapsto a_1, \mathsf{b} \mapsto b_1, \mathsf{c} \mapsto c_1, \mathsf{d} \mapsto d_1))$ if and only if the sequent

$$ \begin{equation*} {!}\mathcal{H}_\mathtt{M}, q_1, \mathsf{a}^{a_1}, \mathsf{b}^{b_1}, \mathsf{c}^{c_1}, \mathsf{d}^{d_1} \to q_2 \cdot \mathsf{a}^{{\bullet}a_2} \cdot \mathsf{b}^{{\bullet}b_2} \cdot \mathsf{c}^{{\bullet}c_2} \cdot \mathsf{d}^{{\bullet}d_2}+Z \end{equation*} \notag $$
is derivable in the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$ (that is, due to Theorem 3, when the Horn formula $\mathcal{H}_\mathtt{M} \Rightarrow q_1 \cdot \mathsf{a}^{{\bullet}a_1} \cdot \mathsf{b}^{{\bullet}b_1} \cdot \mathsf{c}^{{\bullet}c_1} \cdot \mathsf{d}^{{\bullet}d_1} \leqslant q_2 \cdot \mathsf{a}^{{\bullet}a_2} \cdot \mathsf{b}^{{\bullet}b_2} \cdot \mathsf{c}^{{\bullet}c_2} \cdot \mathsf{d}^{{\bullet}d_2}+Z$ is generally valid on the class of all commutative $*$-continuous Kleene algebras).

Our encoding is similar to that used in [14]. In the presence of Kleene iteration, however, we do not need to perform “garbage collection” when using the jzdec instruction and the variable $z_{r}$ ($r \in \mathcal{R}$). This variable stands for a “pseudo-state”, which is used to check that the value of $r$ is zero. The other counters may hold arbitrary values (which corresponds to iteration in the formula $Z$), and we do not need to take care of making them zero.

6.3. From computation to derivation

Let us start with the easier “only if” direction, from a computation on the counter machine to derivability of the sequent. First, let us notice that the sequent

$$ \begin{equation*} q, \mathsf{a}^a, \mathsf{b}^b, \mathsf{c}^c, \mathsf{d}^d \to q \cdot \mathsf{a}^{{\bullet} a} \cdot \mathsf{b}^{{\bullet} b} \cdot \mathsf{c}^{{\bullet} c} \cdot \mathsf{d}^{{\bullet} d} \end{equation*} \notag $$
is derivable (using the ${\cdot}\mathrm{R}$ rule) for any configuration $\mathbf{c}=(q, (\mathsf{a} \mapsto a$, $\mathsf{b} \mapsto b$, $\mathsf{c} \mapsto c$, $\mathsf{d} \mapsto d))$.

We proceed by induction on the length of the sequence of immediate successors from $\mathbf{c}_1$ to $\mathbf{c}_2$. If configurations $\mathbf{c}_1$ and $\mathbf{c}_2$ coincide (induction base), we have the following derivation. (Here and in what follows, a double horizontal line means several applications of a rule.)

The induction step: let the sequence from $\mathbf{c}_1$ to $\mathbf{c}_3$ start from the step from $\mathbf{c}_1$ to its immediate successor $\mathbf{c}_2$, and then continue with a shorter sequence from $\mathbf{c}_2$ to $\mathbf{c}_3$:

The left premise of the cut rule (from $\mathbf{c}_1$ to $\mathbf{c}_2$) will be established below. The right one is derived from the induction hypothesis (from $\mathbf{c}_2$ to $\mathbf{c}_3$) as follows:

Thus, it is sufficient to derive the given sequent in the case where $\mathbf{c}_2$ is the immediate successor of $\mathbf{c}_1$. There are three cases to consider, depending on what instruction was applied. In all cases we shall suppose, without loss of generality, that the “active” counter $r$ of this instruction is $\mathsf{a}$. Thus, $b_2=b_1$, $c_2=c_1$, $d_2=d_1$; we denote these values by $b$, $c$, $d$, respectively.

1st case: $\mathbf{c}_2$ is obtained from $\mathbf{c}_1$ by applying the instruction inc$(p, \mathsf{a}, q)$. In this case, $q_1=p$, $q_2=q$, $a_2=a_1+1$. The multiset ${!}\mathcal{H}_\mathtt{M}$ includes the linear formula ${!}(p \multimap (q \cdot \mathsf{a}))$. The desired sequent is derived as follows:

2nd case: $\mathbf{c}_2$ is obtained from $\mathbf{c}_1$ by applying the instruction jzdec$(p, \mathsf{a}, q_0, q)$, provided that $a_1 \ne 0$, that is, $q_1=p$, $q_2=q$, $a_1= a_2+1$. The multiset ${!}\mathcal{H}_\mathtt{M}$ contains the linear formula ${!}((p \cdot \mathsf{a}) \multimap q)$, and the desired sequent is derived as follows:

3rd case (most interesting): $\mathbf{c}_2$ is obtained from $\mathbf{c}_1$ by applying the instruction jzdec$(p, \mathsf{a}, q_0, q)$, and $a_1=0$. We have $q_1=p$, $q_2=q_0$ and $a_2=a_1=0$. Let us first derive an auxiliary sequent:

Now let us recall that the multiset ${!}\mathcal{H}_\mathtt{M}$ contains the linear formula ${!}(p \multimap (q_0+ z_\mathsf{a}))$. This allows us to construct the following derivation of the desired sequent:

6.4. From derivation to computation

Before proving the “if” direction in Theorem 6, let us establish the following auxiliary result.

Lemma 1. Let ${!}\mathcal{H}'$ be a submultiset of ${!}\mathcal{H}_\mathtt{M}$ and let $\Pi$ be a multiset of linear formulae of the form $A$, where ${!}A$ belongs to ${!}\mathcal{H}_\mathtt{M}$. Let the sequent

$$ \begin{equation*} {!}\mathcal{H}', \Pi, z_\mathsf{a}, \mathsf{a}^a, \mathsf{b}^b, \mathsf{c}^c, \mathsf{d}^d \to C+Z, \end{equation*} \notag $$
where $C$ is of the form $q_2 \cdot \mathsf{a}^{{\bullet} a_2} \cdot \mathsf{b}^{{\bullet} b_2} \cdot \mathsf{c}^{{\bullet} c_2} \cdot \mathsf{d}^{{\bullet} d_2}$, be derivable. Then $a=0$. Similarly for $\mathsf{b}$, $\mathsf{c}$, $\mathsf{d}$.

Proof. By Theorem 4, this sequent (since it does not include occurrences of ${\multimap}$ and ${!}$ in positive polarity) has a disbalanced derivation. We proceed by induction on the size of this derivation.7 If the lowermost rule in this derivation is ${!}\mathrm{W}$, ${!}\mathrm{C}$, or ${!}\mathrm{L}$, we immediately refer to the induction hypothesis. Indeed, the premise of such a rule has the same form, but a shorter derivation (a derivation of a smaller rank). The rules ${\cdot}\mathrm{L}$ and ${+} \mathrm{L}$ cannot be applied, because the left-hand side of the given sequent does not include (on the top level, not as subformulae) formulae of the form $E \cdot F$ and $E+F$.

The only remaining left rule is ${\multimap}\mathrm{L}$, which introduces a linear formula from the multiset $\Pi$. Recall that formulae from $\Pi$ have one of the following forms: $p \multimap (q \cdot r)$, $(p \cdot r) \multimap q$, and $p \multimap (q_0+z_r)$. In each of these cases, the left premise of ${\multimap}\mathrm{L}$ has the form $\Phi \to p$ or $\Phi \to p \cdot r$, where $\Phi$ is a submultiset of ${!}\mathcal{H}'$, $\Pi$, $z_\mathsf{a}$, $\mathsf{a}^a$, $\mathsf{b}^b$, $\mathsf{c}^c$, $\mathsf{d}^d$. By condition 2 of the definition of a disbalanced derivation, this left premise is derived using only right rules. Hence, $\Phi$ cannot contain complex formulae from ${!}\mathcal{H}'$ and $\Pi$. But then $\Phi$ does not contain the variable $p$, and therefore the sequent $\Phi \to p$ or $\Phi \to p \cdot r$ cannot be derivable (since its derivation would have to include the axiom $p \to p$). A contradiction.

The remaining case is where the sequent ${!}\mathcal{H}', \Pi, z_\mathsf{a}, \mathsf{a}^a, \mathsf{b}^b, \mathsf{c}^c, \mathsf{d}^d \to C+Z$ is derived using only right rules. In this case, ${!}\mathcal{H}'$ and $\Pi$ are empty. The lowermost rule applications, due to the format of the formula $C+Z$, should be applications of ${+}\mathrm{R}$. Moreover, these applications should choose, out of the sum $C+Z$, the summand $z_\mathsf{a} \cdot \mathsf{b}^* \cdot \mathsf{c}^* \cdot \mathsf{d}^*$. Otherwise, one could not obtain the axiom $z_\mathsf{a} \to z_\mathsf{a}$. Thus, the following sequent is derivable:

$$ \begin{equation*} z_\mathsf{a}, \mathsf{a}^a, \mathsf{b}^b, \mathsf{c}^c, \mathsf{d}^d \to z_\mathsf{a} \cdot \mathsf{b}^* \cdot \mathsf{c}^* \cdot \mathsf{d}^*. \end{equation*} \notag $$
The right-hand side of this sequent does not include the variable $\mathsf{a}$. Therefore, there should also be no occurrence of $\mathsf{a}$ on the left (otherwise, we could not obtain the axiom $\mathsf{a} \to \mathsf{a}$). This means that $a=0$, which is what we wanted.

The cases of $\mathsf{b}$, $\mathsf{c}$, $\mathsf{d}$ are considered similarly.

Now we are ready to prove the “if” statement in Theorem 6. Recall that we are given derivability of the following sequent:

$$ \begin{equation*} {!}\mathcal{H}_\mathtt{M}, q_1, \mathsf{a}^{a_1}, \mathsf{b}^{b_1}, \mathsf{c}^{c_1}, \mathsf{d}^{d_1} \to q_2 \cdot \mathsf{a}^{{\bullet} a_2} \cdot \mathsf{b}^{{\bullet} b_2} \cdot \mathsf{c}^{{\bullet} c_2} \cdot \mathsf{d}^{{\bullet} d_2}+Z, \end{equation*} \notag $$
and need to prove that the machine $\mathtt{M}$ reaches configuration $\mathbf{c}_2=(q_2, (\mathsf{a} \mapsto a_2, \mathsf{b} \mapsto b_2, \mathsf{c} \mapsto c_2, \mathsf{d} \mapsto d_2))$ from configuration $\mathbf{c}_1=(q_1, (\mathsf{a} \mapsto a_1, \mathsf{b} \mapsto b_1, \mathsf{c} \mapsto c_1, \mathsf{d} \mapsto d_1))$.

In order to facilitate the induction argument, as in Lemma 1, let us weaken the condition a bit. Suppose that a sequent

$$ \begin{equation*} {!}\mathcal{H}', \Pi, q_1, \mathsf{a}^{a_1}, \mathsf{b}^{b_1}, \mathsf{c}^{c_1}, \mathsf{d}^{d_1} \to q_2 \cdot \mathsf{a}^{{\bullet} a_2} \cdot \mathsf{b}^{{\bullet} b_2} \cdot \mathsf{c}^{{\bullet} c_2} \cdot \mathsf{d}^{{\bullet} d_2}+Z \end{equation*} \notag $$
is derivable, where ${!}\mathcal{H}'$ is a submultiset of ${!}\mathcal{H}_\mathtt{M}$ and formulae from $\Pi$ are obtained from (some) formulae from ${!}\mathcal{H}_\mathtt{M}$ by removing the ${!}$ sign.

We proceed by induction on the size of a disbalanced derivation (the derivation is finite) of the given sequent. Consider the lowermost rule of this derivation. As in the proof of Lemma 1, the cases of ${!}\mathrm{W}$, ${!}\mathrm{C}$, and ${!}\mathrm{L}$ are immediately reduced to the induction hypothesis, and cases of ${\cdot}\mathrm{L}$ and ${+}\mathrm{L}$ are impossible.

The most interesting rule is ${\multimap}\mathrm{L}$, whose application introduces a formula $A$ from $\Pi$. There are three cases to consider depending on the form of formula $A$. Without loss of generality, in all cases we suppose that $r=\mathsf{a}$.

1st case: $A=p \multimap (q \cdot \mathsf{a})$. The left premise of the ${\multimap} \mathrm{L}$ rule has the form $\Phi \to p$ and should be an axiom. Indeed, left rules are not allowed in its derivation, and right rules are impossible, since its right-hand side is a single variable. Therefore, $\Phi=p=q_1$, since $q_1$ is the only variable in the left-hand side (of the original sequent) which belongs to the set $\mathcal{Q}$.

Moreover, due to disbalance (condition 3), the right premise of ${\multimap} \mathrm{L}$ is derived by applying the rule ${\cdot}\mathrm{L}$, introducing the formula $q \cdot \mathsf{a}$. Thus, we get the following fragment of the derivation (here, $\Pi=\Pi', q_1 \multimap (q \cdot \mathsf{a})$):

Since the formula ${!}A={!}(q_1 \multimap (q \cdot \mathsf{a}))$ belongs to the multiset ${!}\mathcal{H}_\mathtt{M}$, the machine $\mathtt{M}$ contains the instruction inc$(q_1, \mathsf{a}, q)$. Therefore, configuration $\mathbf{c}=(q, (\mathsf{a} \mapsto a_1+ 1, \mathsf{b} \mapsto b_1, \mathsf{c} \mapsto c_1, \mathsf{d} \mapsto d_1))$ is the immediate successor of configuration $\mathbf{c}_1$. Moreover, by the induction hypothesis, derivability of the sequent

$$ \begin{equation*} {!}\mathcal{H}', \Pi, q, \mathsf{a}^{a_1+1}, \mathsf{b}^{b_1}, \mathsf{c}^{c_1}, \mathsf{d}^{d_1} \to q_2 \cdot \mathsf{a}^{{\bullet} a_2} \cdot \mathsf{b}^{{\bullet} b_2} \cdot \mathsf{c}^{{\bullet} c_2} \cdot \mathsf{d}^{{\bullet} d_2}+Z \end{equation*} \notag $$
entails the fact that the machine $\mathtt{M}$ reaches configuration $\mathbf{c}_2$ from configuration $\mathbf{c}$. Hence, $\mathtt{M}$ reaches $\mathbf{c}_2$ from $\mathbf{c}_1$.

2nd case: $A=(p \cdot \mathsf{a}) \multimap q$. Here, the left premise is of the form $\Phi \to p \cdot \mathsf{a}$ and should be derived using only right rules. The only possible case is as follows: $\Phi=p,\mathsf{a}$, that is, the left premise is of the form $p,\mathsf{a} \to p \cdot \mathsf{a}$. In particular, $p=q_1$, because other elements of $\mathcal{Q}$ do not occur in the left-hand side of the original sequent. Moreover, $a_1>0$, as otherwise the left-hand side would not include $\mathsf{a}$.

Thus, we get the following application of the ${\multimap}\mathrm{L}$ rule:

This rule corresponds to applying the instruction jzdec$(q_1, \mathsf{a}, q_0, q)$ (here, $q_0$ is some state which is not used, since $a_1>0$). Configuration $\mathbf{c}=(q, (\mathsf{a} \mapsto a_1- 1, \mathsf{b} \mapsto b_1, \mathsf{c} \mapsto c_1, \mathsf{d} \mapsto d_1))$ is the immediate successor of configuration $\mathbf{c}_1$. From $\mathbf{c}$, in turn, the machine $\mathtt{M}$ reaches configuration $\mathbf{c}_2$ (the induction hypothesis).

3rd case: $A=p \multimap (q_0+z_\mathsf{a})$. Here, the left premise of ${\multimap}\mathrm{L}$ is of the form $p \to p$; moreover, $p=q_1$. By condition 3 of disbalance, the right premise is derived using the ${+}\mathrm{L}$ rule:

Here, we have used a shortened notation $C_2=q_2 \cdot \mathsf{a}^{{\bullet} a_2} \cdot \mathsf{b}^{{\bullet} b_2} \cdot \mathsf{c}^{{\bullet} c_2} \cdot \mathsf{d}^{{\bullet} d_2}$.

Let us apply Lemma 1: derivability of ${!}\mathcal{H}', \Pi', z_\mathsf{a}, \mathsf{a}^{a_1}, \mathsf{b}^{b_1}, \mathsf{c}^{c_1}, \mathsf{d}^{d_1} \to C_2+ Z$ yields $a_1=0$. Hence, configuration $\mathbf{c}_1=(q_1, (\mathsf{a} \mapsto 0, \mathsf{b} \mapsto b_1, \mathsf{c} \mapsto c_1, \mathsf{d} \mapsto d_1))$, under the instruction jzdec$(q_1, \mathsf{a}, q_0, q)$, changes into its immediate successor $\mathbf{c}=(q_0, (\mathsf{a} \mapsto 0, \mathsf{b} \mapsto b_1, \mathsf{c} \mapsto c_1, \mathsf{d} \mapsto d_1))$. On the other hand, by the induction hypothesis, from derivability of ${!}\mathcal{H}', \Pi', q_0, \mathsf{b}^{b_1}, \mathsf{c}^{c_1}, \mathsf{d}^{d_1} \to C_2+Z$ we conclude that the machine $\mathtt{M}$ reaches $\mathbf{c}_2$ from $\mathbf{c}$, and therefore also from $\mathbf{c}_1$.

This completes the proof of Theorem 6.

§ 7. $\Pi_2^0$-completeness for the case of iteration-free hypotheses

Let us apply our encoding (Theorem 6) to the universal machine $\mathtt{U}$ (see § 6.1). Suppose that the first argument is put in counter $\mathsf{a}$ and the second one is put in counter $\mathsf{b}$.

Proof. The equivalence of statements 2 and 3 follows directly from Theorem 3. Let us prove the equivalence of statements 1 and 2.

Let us first fix a value of $x$. The statement that $U(w,x)$ is defined and equals zero is equivalent (by definition of computing a function on a counter machine) to saying that the machine $\mathtt{U}$ reaches $(q_F, ())$ from configuration $(q_I, (\mathsf{a} \mapsto w, \mathsf{b} \mapsto x))$. By Theorem 6, this is equivalent to derivability of the sequent

$$ \begin{equation*} {!}\mathcal{H}_{\mathtt{U}}, q_I, \mathsf{a}^w, \mathsf{b}^x \to q_F+Z \end{equation*} \notag $$
in the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$.

If this is true for all values of $x$, then by the omega-rule ${}^* \mathrm{L}$ we get derivability of the goal sequent ${!}\mathcal{H}_{\mathtt{U}}, q_I, \mathsf{a}^w, \mathsf{b}^* \to q_F+Z$. In the opposite direction, if this sequent is derivable, then, for an arbitrary value of $x$, the sequent ${!}\mathcal{H}_{\mathtt{U}}, q_I, \mathsf{a}^w, \mathsf{b}^x \to q_F+Z$ is derivable using the cut rule:

Thus, $U(w,x)$ is defined and equals zero.

Theorem 7. The problem of general validity for Horn formulae with fixed iteration-free set $\mathcal{H}=\mathcal{H}_{\mathtt{U}}$, on the class of all commutative $*$-continuous Kleene algebras, is $\Pi_2^0$-complete. The same holds for the fragment of the Horn theory of commutative $*$-continuous Kleene algebras with iteration-free hypotheses.

Proof. The upper complexity bounds follow from Proposition 2, as the sets of hypotheses in question are iteration-free.

In order to prove the lower bound, let us use $\Pi^0_2$-hardness of the totality problem. Usually, the totality problem is formulated as follows: for a given $w$, determine whether it is true that $U(w,x)$ is defined for all $x$. However, the following problem is also $\Pi^0_2$-hard: for a given $w$, determine whether $U(w,x)$ is defined and equals zero for all $x$. The reduction of the former problem to the latter is performed via a function which translates the code $w$ of a machine into $w'$, which encodes a machine with “garbage collection” added. Namely, the counter $\mathsf{a}$ should be reduced to zero after the machine reaches the final state $q_F$. Evidently, this function is computable, and the following equivalence holds: $U(w,x)$ is defined for all $x$ if and only if $U(w',x)$ is defined and equals zero for all $x$.
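The “garbage collection” step can be sketched concretely. The instruction tuple format, the fresh state name, and the helper names below are our illustrative assumptions, not notation from the paper:

```python
def add_garbage_collection(instructions, q_F, q_done):
    """Append a loop that empties counter 'a' after the machine halts
    in q_F, then stops in the new final state q_done.  Semantics of
    jzdec(q, u, q0, q1): in state q, if counter u is zero, go to q0;
    otherwise decrement u and go to q1."""
    return instructions + [('jzdec', q_F, 'a', q_done, q_F)]

def run_gc(a):
    """Simulate only the appended instruction, starting in q_F with
    counter a; it loops in q_F until the counter is empty."""
    state = 'q_F'
    while state == 'q_F':
        if a == 0:
            state = 'q_done'
        else:
            a -= 1
    return state, a
```

For instance, `run_gc(5)` ends in `q_done` with the counter at zero, which is exactly the “defined and equals zero” behaviour required of $U(w',x)$.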

By Lemma 2, for a given number $w$ the value $U(w,x)$ is defined and equals zero for all $x$ if and only if the Horn formula $\mathcal{H}_\mathtt{U} \Rightarrow q_I \cdot \mathsf{a}^{{\bullet} w} \cdot \mathsf{b}^* \leqslant q_F+ Z$ is generally valid on the class of all commutative $*$-continuous Kleene algebras.

As a corollary, we also get a new complexity result for the non-commutative case.

Corollary 1. There exists a concrete finite set of iteration-free atomic formulae $\widetilde{\mathcal{H}}_0$ such that the general validity problem for Horn formulae with this set $\widetilde{\mathcal{H}}_0$ as the set of hypotheses on the class of all (not necessarily commutative) $*$-continuous algebras is $\Pi^0_2$-complete.

Proof. Let us take $\widetilde{\mathcal{H}}_0=\mathcal{H} \cup \mathcal{C}_V$, where $\mathcal{H}$ is given by the previous theorem and the set $\mathcal{C}_V$ is constructed as described in § 2 for the set of variables $V=\mathcal{Q} \cup \{ \mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d}, z_{\mathsf{a}}, z_{\mathsf{b}}, z_{\mathsf{c}}, z_{\mathsf{d}} \}$; $\mathcal{Q}$ is the set of states of $\mathtt{U}$. By Proposition 1, general validity of the Horn formula $\mathcal{H} \Rightarrow t \cdot \mathsf{a}^{{\bullet} n} \leqslant r+Z$ on the class of all commutative $*$-continuous Kleene algebras is equivalent to that of the Horn formula $\widetilde{\mathcal{H}}_0 \Rightarrow t \cdot \mathsf{a}^{{\bullet} n} \leqslant r+ Z$ on the class of all $*$-continuous Kleene algebras.

This corollary is a variant of Kozen’s result (see Theorem 4.1(ii) in [6]). The difference is that here we use a fixed set of hypotheses instead of a varying one. On the other hand, Kozen’s result is in a sense stronger, as in Kozen’s constructions hypotheses are not only $*$-free, but also do not include addition. It is also possible to obtain a strengthening of Kozen’s result proper, that is, $\Pi_2^0$-completeness for a fixed set of hypotheses without iteration and addition, in the non-commutative case. This can be achieved by applying Kozen’s construction with Turing machines, instead of counter ones, to the universal Turing machine (encoding Turing machines does not require addition). A detailed presentation of this argument is beyond the scope of the present paper.

§ 8. $\Pi^1_1$-completeness in the general case

8.1. Noetherianity for a recursively defined graph

For a proof of $\Pi^1_1$-hardness of the (unrestricted) Horn theory of the class of commutative $*$-continuous Kleene algebras, we shall use, following Kozen’s ideas [6], the noetherianity problem for a recursively defined binary relation, in other words, a directed graph without parallel edges. (In what follows, “binary relation” and “graph” will be used as synonyms.) The set of vertices of such a graph is the set of natural numbers $\mathbb{N}$.

Let us recall the basic definitions.

Definition 14. A binary relation $R \subseteq \mathbb{N} \times \mathbb{N}$ is called well-founded (Artinian) if there is no infinite descending chain $(m_0, m_1,\dots)$ such that $(m_{i+1}, m_i) \in R$. A relation $R$ is called Noetherian if there is no infinite ascending chain $(m_0, m_1,\dots)$, where $(m_i, m_{i+1}) \in R$ (that is, the inverse relation $R^{-1}$ is Artinian).

Notice that only irreflexive relations can possibly satisfy this definition. In particular, if $R$ is a partial order, it should be a strict one.
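As a sanity check on Definition 14: for a finite relation, an infinite ascending chain must revisit a vertex, so noetherianity amounts to acyclicity of the directed graph. A minimal sketch (all names are ours; the relations in the paper live on all of $\mathbb{N}$):

```python
def is_noetherian_finite(edges, vertices):
    """Decide noetherianity of a finite relation, given as a list of
    pairs (a, b): it holds iff the directed graph has no cycle
    (a cycle yields an infinite ascending chain, and conversely)."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {v: WHITE for v in vertices}

    def dfs(v):
        color[v] = GRAY
        for (a, b) in edges:
            if a == v:
                if color[b] == GRAY:      # back edge: a cycle
                    return False
                if color[b] == WHITE and not dfs(b):
                    return False
        color[v] = BLACK
        return True

    return all(dfs(v) for v in vertices if color[v] == WHITE)
```

In particular, a reflexive pair $(v, v)$ is a cycle of length one, matching the remark above that only irreflexive relations qualify.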

We shall also need a “local” version of noetherianity, for a given element.

Definition 15. For a given relation $R \subseteq \mathbb{N} \times \mathbb{N}$, the set $\mathsf{WF}_R$ is defined as follows: $n \in \mathsf{WF}_R$ if there is no infinite ascending chain $(n, m_1, m_2, \dots)$ starting from $n$ (that is, with $(n, m_1) \in R$ and $(m_i, m_{i+1}) \in R$ for all $i$).

We shall be interested in recursively defined graphs, that is, in graphs for which there exists an algorithm which checks whether a given pair $(n,m)$ belongs to the corresponding relation. The classical definition of a recursive relation is as follows.

Definition 16. A binary relation $R \subseteq \mathbb{N} \times \mathbb{N}$ is called recursive if its characteristic function

$$ \begin{equation*} G(n,m)=\begin{cases} 1 &\text{if }(n,m) \in R, \\ 0 &\text{if }(n,m) \notin R \end{cases} \end{equation*} \notag $$
is computable.

Recursive binary relations are defined by total computable functions $G \colon \mathbb{N} \times \mathbb{N} \to \{0,1\}$. In what follows, however, we shall use the universal computable function $U$, which is essentially partial. (In particular, it cannot be extended to a total computable function.) For this reason, it will be convenient for us to extend the definition of a recursive binary relation to the case of a partial function $G \colon \mathbb{N} \times \mathbb{N} \dashrightarrow \{ 0,1 \}$. Such a function defines a “fuzzy”, or three-valued, graph on the set of vertices $\mathbb{N}$.

Definition 17. Let $G \colon \mathbb{N} \times \mathbb{N} \dashrightarrow \{0,1\}$ be a computable partial function. Let us say that $G$ defines a recursive three-valued graph (denoted by the same letter) on the set $\mathbb{N}$, and let us use the following terminology for its edges.

Let us denote the sets of definitely present, definitely absent, and undefined edges by $G^{+}$, $G^{-}$, and $G^{?}$, respectively.

Accordingly, let us modify the definition of the $\mathsf{WF}$ set (local noetherianity).

Definition 18. Let $G$ be a recursive three-valued graph. Then $n \in \mathsf{WF}_G$ if there is neither an infinite chain $(n=m_0, m_1, \dots)$ such that $(m_i, m_{i+1}) \in G^{+}$ for all $i$, nor a chain $(n=m_0, m_1, \dots, m_s, k)$ such that $(m_i, m_{i+1}) \in G^{+}$ for $i<s$ and $(m_s, k) \in G^{?}$. In other words, any infinite sequence of numbers, starting from $n$, has a prefix of the following form: $(n,m_1) \in G^+$, …, $(m_{i-1}, m_i) \in G^+$, $(m_i, m_{i+1}) \in G^-$ (for some $i \geqslant 0$). This sequence goes along edges from $G^+$ and “breaks down” on an edge from $G^-$. (The “breakdown” could happen immediately, at the first step.)
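Operationally, the reformulation above can be sketched as follows; here we model a three-valued graph by a Python function returning 1 for $G^{+}$, 0 for $G^{-}$, and None for $G^{?}$ (a finite stand-in for genuine undefinedness, which in reality means non-termination; all names are ours):

```python
def breaks_down(G, seq):
    """Check whether the sequence (n, m1, m2, ...) has a prefix that
    walks along edges from G+ and then "breaks down" on an edge
    from G-, as in the reformulation of Definition 18."""
    for i in range(len(seq) - 1):
        e = G(seq[i], seq[i + 1])
        if e == 0:        # edge in G-: the sequence breaks down here
            return True
        if e is None:     # edge in G?: no breakdown is witnessed
            return False
        # e == 1: edge in G+, keep walking
    return False          # the whole (finite) prefix stayed inside G+

# Toy graph: the only definitely present edges go from n to n + 1.
successor = lambda n, m: 1 if m == n + 1 else 0
```

Membership $n \in \mathsf{WF}_G$ requires every infinite sequence from $n$ to break down in this sense; the sketch only tests one (finite) candidate chain.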

Now we are ready to formulate the baseline $\Pi_1^1$-complete problem, which we shall reduce to the Horn theory of commutative $*$-continuous Kleene algebras.

Theorem 8. There exists a recursive three-valued graph $G_0$ such that the set $\mathsf{WF}_{G_0}$ (for the given fixed $G_0$) is $\Pi_1^1$-complete.

Recall that the $\Pi_1^1$ complexity class is defined as follows. A set $A \subseteq \mathbb{N}$ belongs to this class if there exists an arithmetical formula $\varphi(n, X)$ with free parameters $n$ ranging over natural numbers and $X$ ranging over sets of numbers such that

$$ \begin{equation*} n \in A\quad\Longleftrightarrow\quad \mathbb{N} \vDash \forall X \, \varphi(n,X). \end{equation*} \notag $$
The set $\mathsf{WF}_{G_0}$ allows such a description. For the outer quantifier “$\forall\, X$” one takes the quantifier over all infinite chains starting from number $n$ (such a chain can be encoded as the set of pairs $(i,m_i)$). The desired property, namely, that there exists a natural $i$ such that $(m_i, m_{i+1}) \in G^{-}_0$ and $(m_j, m_{j+1}) \in G^{+}_0$ for all $j<i$, is already arithmetical, due to computability of $G_0$. Thus, the upper complexity bound for $\mathsf{WF}_{G_0}$ is obtained, and we are now interested in the lower bound, that is, in $\Pi_1^1$-hardness.

For the lower bound, we shall need the following characterisation theorem for $\Pi_1^1$-sets, which follows from the results of Kleene [22] and Spector [23] (the formulation is given according to Odifreddi’s book, see [24], Theorem IV.2.20).

Theorem 9 (S. C. Kleene, C. Spector, 1955). A set $A \subseteq \mathbb{N}$ belongs to the $\Pi_1^1$ complexity class if and only if there exists a recursive sequence $\{ \prec_x \mid x \in \mathbb{N} \}$ of recursive linear orders such that

$$ \begin{equation*} x \in A\quad\Longleftrightarrow\quad \prec_x\textit{ is well-founded}. \end{equation*} \notag $$

Let us clarify the term “recursive sequence of recursive linear orders”. Informally, this means that a computable function $f$, for any given $x \in \mathbb{N}$, returns the code of an algorithm computing a function $h_x \colon \mathbb{N} \times \mathbb{N} \to \{ 0,1 \}$. Furthermore, it is (externally) known that $h_x$ is the characteristic function of a linear order. (In particular, each $h_x$ is total.) A more precise definition is given via the universal function:

$$ \begin{equation*} h_x(n,m)=U(f(x), [n,m]). \end{equation*} \notag $$
(Notice that in order to use the previously introduced universal function with two arguments, we have “packed” two arguments $n$ and $m$ into their pair code $[n,m]$.) Upon that, we impose an external requirement on the function $f$: for each $x \in \mathbb{N}$, the above function $h_x$ should be total and should be the characteristic function of a linear order.
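The paper only assumes some fixed computable bijection between $\mathbb{N} \times \mathbb{N}$ and $\mathbb{N}$; for illustration, the Cantor pairing is one standard choice (the names `pair`/`unpair` are ours):

```python
import math

def pair(n, m):
    """Cantor pairing [n, m]: a computable bijection N x N -> N."""
    s = n + m
    return s * (s + 1) // 2 + m

def unpair(k):
    """Inverse of pair: recover (n, m) from the code k = [n, m]."""
    s = (math.isqrt(8 * k + 1) - 1) // 2   # s = n + m
    m = k - s * (s + 1) // 2
    return s - m, m
```

Under this choice, $h_x(n,m)=U(f(x),\mathtt{pair}(n,m))$; any other computable bijection would do equally well.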

Thus, Theorem 9 (more precisely, its more interesting “only if” direction) can be reformulated as follows.

Let a set $A \subseteq \mathbb{N}$ belong to the $\Pi_1^1$ complexity class. Then there exists a total computable function $f \colon \mathbb{N} \to \mathbb{N}$ such that:

For further convenience, let us replace well-foundedness (artinianity) of the relation $\prec_x$ by noetherianity of the inverse relation $\succ_{\! x}$. Let us also artificially add a maximal element to each linear order. The numerical value of this maximal element will be zero. This is achieved by the following modification of the functions $h_x$:

$$ \begin{equation*} h'_x(n,m) = \begin{cases} h_x(m-1,n-1) &\text{if }m,n>0, \\ 1 &\text{if }n=0,\ \ m>0, \\ 0 &\text{if }m=0. \end{cases} \end{equation*} \notag $$
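This modification is easy to trace on a toy example; below, the usual strict order on $\mathbb{N}$ plays the role of $\prec_x$ (an assumption purely for illustration; all names are ours):

```python
def h_prime(h):
    """Build h' from the characteristic function h of a strict linear
    order: arguments are shifted by one and swapped (passing to the
    inverse relation), and 0 is adjoined as the new maximal element."""
    def hp(n, m):
        if m == 0:
            return 0              # nothing lies above 0
        if n == 0:                # here m > 0
            return 1              # 0 lies above every other element
        return h(m - 1, n - 1)
    return hp

# Toy instance of h_x: the usual strict order, h(n, m) = 1 iff n < m.
hp = h_prime(lambda n, m: 1 if n < m else 0)
```

Then `hp(n, m) = 1` exactly when $n \succ' m$; for example, `hp(0, m) = 1` for every $m > 0$, so $0$ is indeed maximal.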

Now we have $x \in A$ if and only if the relation $\succ'_{\! x}$ with characteristic function $h'_x$ is Noetherian. Moreover, this holds if and only if there is no infinite chain $n_0 \succ'_{\! x} n_1 \succ'_{\! x} n_2 \succ'_{\! x} \cdots$ in this order, starting with zero ($n_0=0$). In other words, this means that $0 \in \mathsf{WF}_{\succ'_{\!x}}$.

Now we are ready to define the desired recursive three-valued graph $G_0$. Namely, it is defined by the following two-argument function $G_0$, where both arguments are considered as codes of pairs of numbers (recall that the encoding of pairs is a computable bijection between $\mathbb{N}$ and $\mathbb{N} \times \mathbb{N}$):

$$ \begin{equation*} G_0([w, n], [w', m]) = \begin{cases} U(w,[m-1,n-1]) &\text{if }w'=w\text{ and }m,n>0, \\ 1 &\text{if }w'=w\text{ and }n=0,\ m>0, \\ 0 &\text{if }m=0\text{ or }w' \ne w. \end{cases} \end{equation*} \notag $$
Notice that this function, unlike previously defined ones, is not total.

Let us prove that the set $\mathsf{WF}_{G_0}$ is $\Pi_1^1$-hard by constructing a reduction of an arbitrary set $A$ from the $\Pi_1^1$ class to $\mathsf{WF}_{G_0}$. Theorem 9, in the above reformulation, yields a function $f$ corresponding to the given set $A$. Since $f$ is computable, there exists $p \in \mathbb{N}$ (here, $p$ depends only on $A$) such that, for any $x \in \mathbb{N}$,

$$ \begin{equation*} f(x)=U(p,x). \end{equation*} \notag $$
As the desired reducing function, let us take the function $g$ defined by
$$ \begin{equation*} g(x)=[U(p,x),0]. \end{equation*} \notag $$
This function is total and computable, since so is the original function $f$. We claim that
$$ \begin{equation*} g(x) \in \mathsf{WF}_{G_0} \quad\Longleftrightarrow\quad x \in A. \end{equation*} \notag $$

Indeed, for a fixed $w=U(p,x)$, the graph $G_0$ includes an isolated component $G_0^{(w)}$ with the set of vertices $\{ [w,n] \mid n \in \mathbb{N} \}$. Edges which “look outside” this component are definitely absent in the graph: $G_0([w,n], [w',m])=0$ for $w' \ne w$. The component $G_0^{(w)}$ itself, since $G_0([w,n],[w,m])=h'_x(n,m)$, is isomorphic to the graph $(\mathbb{N}, \succ'_{\! x})$. (In particular, if $w$ is of the form $U(p,x)$, where $p$ was constructed from a set $A$ using Theorem 9, then there are no undefined edges inside $G_0^{(w)}$.)

Thus, $g(x)=[U(p,x),0]=[w,0] \in \mathsf{WF}_{G_0}$ if and only if $0 \in \mathsf{WF}_{\succ'_{\! x}}$, that is, if and only if the order $\succ'_{\! x}$ is Noetherian (since $0$ is the maximal element). In turn, this is equivalent to $x \in A$. This completes the proof of Theorem 8.

8.2. Encoding of membership in $\mathsf{WF}_{G_0}$

Now in order to prove the necessary $\Pi_1^1$-hardness results it is sufficient to construct a reduction of the set $\mathsf{WF}_{G_0}$ to derivability of a specially constructed sequent in $\boldsymbol{!}\mathbf{CommACT}_\omega$.

For convenience, let us suppose, following Kozen [6], that a 4-counter machine $\mathtt{G}_0$ defines the graph $G_0$ in the following way. The machine $\mathtt{G}_0$ has one initial state $q_I=s$ and two final states: $\mathcal{Q}_F=\{ r, t \}$. The machine $\mathtt{G}_0$ starts from configuration $(s, (\mathsf{a} \mapsto n, \mathsf{b} \mapsto m))$ and works as follows:

Constructing such $\mathtt{G}_0$ is easy: if the function $G_0$ is computable, then so is the function $G'_0$, defined by the equality $G'_0(n,m)=(m+1) \cdot G_0(n,m)$ (if $G_0(n,m)$ is undefined, then so is $G'_0(n,m)$). By Proposition 5, this function is computed by a four-counter machine with one final state $q_F$. The desired machine $\mathtt{G}_0$ is obtained by adding two new states $r,t$ and a check for zero (if the result is non-zero, this yields exactly $\mathsf{a} \mapsto m$): jzdec$(q_F, \mathsf{a}, r, t)$.
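The final zero test can be traced numerically; a small sketch (names ours; `G0_value` stands for an already-computed value $G_0(n,m)$):

```python
def final_step(G0_value, m):
    """After the machine leaves v = (m + 1) * G0(n, m) in counter a
    and enters q_F, the instruction jzdec(q_F, a, r, t) branches:
    v == 0 -> final state r (edge definitely absent);
    v > 0  -> decrement a and go to t (then counter a holds m)."""
    v = (m + 1) * G0_value
    if v == 0:
        return ('r', 0)
    return ('t', v - 1)
```

So the machine ends in $(r, ())$ when the edge is definitely absent, and in $(t, (\mathsf{a} \mapsto m))$ when it is definitely present, matching the two final states $r$ and $t$.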

Let

$$ \begin{equation*} \mathcal{H}=\mathcal{H}_{\mathtt{G}_0} \cup \{ t \leqslant s \cdot \mathsf{b}^* \}. \end{equation*} \notag $$

Proof. As in the proof of Lemma 2, the interesting part here is the equivalence of the first and the second statements. The second and the third ones are equivalent by Theorem 3.

In the rest of the proof, we shall write $\mathsf{WF}$ instead of $\mathsf{WF}_{G_0}$. Let us start with the implication $1 \Rightarrow 2$.

First, let us notice that one may perform transfinite induction on the set $\mathsf{WF}$ along the three-valued graph $G_0$. This is done in the following way. Let $\psi(x)$ be a statement with a natural parameter. If

$$ \begin{equation*} \forall\, n \in \mathsf{WF} \ \bigl((\forall\, m \in \mathbb{N} \ ((n,m) \in G_0^{+} \Rightarrow \psi(m))) \Rightarrow \psi(n) \bigr), \end{equation*} \notag $$
then $\forall\, n \in \mathsf{WF} \;\; \psi(n)$. In other words, when proving $\psi(n)$ for $n \in \mathsf{WF}$, one may use, as the induction hypothesis, the validity of $\psi(m)$ for all $m$, to which there is an edge from $n$ that is definitely present in $G_0$.

Indeed, if this principle were invalid, there would be an $n \in \mathsf{WF}$ for which $\psi(n)$ is false. But then there is also $m_1$ such that $(n,m_1) \in G_0^{+}$ (and therefore $m_1 \in \mathsf{WF}$) and $\psi(m_1)$ is false. Furthermore, there exist such $m_2$, $m_3$, …, which gives an infinite chain. This violates the definition of $\mathsf{WF}$.

Let $n \in \mathsf{WF}$. Let us prove the statement $\psi(n) =$ “the sequent ${!}\mathcal{H}, t, \mathsf{a}^n \to r+Z$ is derivable” using the above-described induction principle along $G_0$ on $\mathsf{WF}$. For any natural $m$, we have either $(n,m) \in G_0^{-}$ (the edge from $n$ to $m$ is obviously absent), or $(n,m) \in G_0^{+}$ (the edge is obviously present), and in the second case, we may use the statement $\psi(m)$ as the given induction hypothesis. In particular, by definition of $\mathsf{WF}$ it is impossible that $(n,m) \in G_0^{?}$. (Also, $(n,n)$ is always in $G_0^{-}$, because otherwise we get an infinite chain $n,n,n,\ldots$ This observation avoids the “vicious circle” situation, where $\psi(n)$ is supposed to be derived from itself.)

The multiset ${!}\mathcal{H}$ contains the linear formula $t \multimap (s \cdot \mathsf{b}^*)$. Let us start deriving the goal sequent ${!}\mathcal{H}, t, \mathsf{a}^n \to r+Z$ as follows:

It remains to derive sequents ${!}\mathcal{H}, s, \mathsf{a}^n, \mathsf{b}^m \to r+Z$ for each $m$ ($n$ is fixed). As noticed above, there are two possible cases:

1st case: $(n,m) \in G_0^{-}$ (the edge is definitely absent). In this case, the counter machine $\mathtt{G}_0$ reaches configuration $(r, ())$ from configuration $(s, (\mathsf{a} \mapsto n, \mathsf{b} \mapsto m))$. By Theorem 6, the sequent ${!}\mathcal{H}_{\mathtt{G}_0}, s, \mathsf{a}^n, \mathsf{b}^m \to r+Z$ is derivable. Now the desired sequent ${!}\mathcal{H}, s, \mathsf{a}^n, \mathsf{b}^m \to r+Z$ can be derived by the rule ${!}\mathrm{W}$ (weakening), by adding the missing formula ${!}(t \multimap (s \cdot \mathsf{b}^*))$ to ${!}\mathcal{H}_{\mathtt{G}_0}$.

2nd case: $(n,m) \in G_0^{+}$, and the induction hypothesis holds for $m$. The latter means that the sequent

$$ \begin{equation*} {!}\mathcal{H}, t, \mathsf{a}^m \to r+Z \end{equation*} \notag $$
is derivable. Also, by Theorem 6 the sequent
$$ \begin{equation*} {!}\mathcal{H}_{\mathtt{G}_0}, s, \mathsf{a}^n, \mathsf{b}^m \to t \cdot \mathsf{a}^{{\bullet} m}+Z, \end{equation*} \notag $$
is derivable, because the machine $\mathtt{G}_0$ reaches configuration $(t, (\mathsf{a} \mapsto m))$ from configuration $(s, (\mathsf{a} \mapsto n, \mathsf{b} \mapsto m))$. Now the desired sequent can be derived as follows:

Now let us prove the implication $2 \Rightarrow 1$. We first need an analog of Lemma 1.

Lemma 3. Let ${!}\mathcal{H}'$ be a submultiset of ${!}\mathcal{H}$ and let $\Pi$ be a multiset of linear formulae obtained from (some) formulae from ${!}\mathcal{H}$ by removing the ${!}$ sign. Let the sequent

$$ \begin{equation*} {!}\mathcal{H}', \Pi, z_{\mathsf{a}}, \mathsf{a}^a, \mathsf{b}^b, \mathsf{c}^c, \mathsf{d}^d \to C+Z, \end{equation*} \notag $$
where $C$ has the form $q_2 \cdot \mathsf{a}^{{\bullet} a_2} \cdot \mathsf{b}^{{\bullet} b_2} \cdot \mathsf{c}^{{\bullet} c_2} \cdot \mathsf{d}^{{\bullet} d_2}$, be derivable. Then $a=0$. Similarly for $\mathsf{b}$, $\mathsf{c}$, $\mathsf{d}$.

Proof. The argument repeats the proof of Lemma 1, up to a small change connected to the possible presence of the formula ${!}(t \multimap (s \cdot \mathsf{b}^*))$ in ${!}\mathcal{H}'$ or of the linear formula $t \multimap (s \cdot \mathsf{b}^*)$ in $\Pi$. This could possibly allow the usage of the ${}^* \mathrm{L}$ rule, which could make the derivation infinite. Instead of usual induction, we shall now use transfinite induction on the rank of a disbalanced (Theorem 4) derivation. The only additional case, compared with Lemma 1, is an application of the ${\multimap}\mathrm{L}$ rule which introduces the formula $t \multimap (s \cdot \mathsf{b}^*)$. The left premise of this rule (by disbalance) is of the form $t \to t$, so $t$ should occur in the left-hand side of the original sequent. However, this is not the case. Other cases are considered in the same way as in the proof of Lemma 1.

Let us return to the proof of the implication $2 \Rightarrow 1$ in Theorem 10. We are going to prove, by transfinite induction on the rank of a disbalanced (Theorem 4) derivation, the following two statements simultaneously. Here, as in Lemma 3, ${!}\mathcal{H}'$ is a submultiset of ${!}\mathcal{H}$ and $\Pi$ is a multiset of linear formulae obtained from (some) formulae from ${!}\mathcal{H}$ by removing the ${!}$ sign.

Due to disbalance of the derivations of the given sequents, their lowermost rules are left ones, except in the second statement for $q=r$. Indeed, otherwise ${!}\mathcal{H}'$ and $\Pi$ would have to be empty, and $t$ or $q$ on the left would have no matching occurrence on the right.

As before, the cases of the rules ${!}\mathrm{L}$, ${!}\mathrm{W}$, ${!}\mathrm{C}$, for both statements, are handled by immediate reference to the induction hypothesis. (Their premises have the same form as the conclusion and at the same time a smaller rank.) Since the principal connective of every complex formula in $\Pi$ is $\multimap$, the only remaining case is the rule ${\multimap}\mathrm{L}$. Let this rule introduce a formula $A$; then ${!}A$ belongs to ${!}\mathcal{H}$.

Let us start with statement 1. We show that the following cases are impossible: $A=p \multimap (q \cdot u)$, $A=(p \cdot u) \multimap q$, and $A=p \multimap (q_0+z_{u})$, where $u \in \{ \mathsf{a}, \mathsf{b}, \mathsf{c}, \mathsf{d} \}$. Indeed, $t$ is one of the final states of the machine $\mathtt{G}_0$. Hence, the machine $\mathtt{G}_0$ does not have an instruction with $t$ as the first argument. Therefore, $p \ne t$. In view of disbalance, the left premise of the rule ${\multimap} \mathrm{L}$ should be of the form $p \to p$ or $p, u \to p \cdot u$. However, the left-hand side of the conclusion does not contain the variable $p$. A contradiction.

The remaining case is $A=t \multimap (s \cdot \mathsf{b}^*)$. By disbalance (condition 2), the left premise is $t \to t$. Furthermore (condition 3), the lowermost rules in the derivation of the right premise are ${}^*\mathrm{L}$ and ${\cdot}\mathrm{L}$:

Each premise of the omega-rule ${}^*\mathrm{L}$ has a smaller rank than the goal sequent. Let us apply statement 2 of the induction hypothesis to each of these premises. (Notice that configuration $\mathbf{c}_0=(s, (\mathsf{a} \mapsto n, \mathsf{b} \mapsto m))$ is reachable from itself.) Now for any $m$ we have either $(n,m) \in G_0^{-}$, or $(n,m) \in G_0^{+}$ and $m \in \mathsf{WF}$. This yields $n \in \mathsf{WF}$.

Now let us consider statement 2. We are given that the following sequent is derivable:

$$ \begin{equation*} {!}\mathcal{H}', \Pi, q, \mathsf{a}^a, \mathsf{b}^b, \mathsf{c}^c, \mathsf{d}^d \to r+Z, \end{equation*} \notag $$
and that the machine $\mathtt{G}_0$ reaches configuration $\mathbf{c}=(q, (\mathsf{a} \mapsto a, \mathsf{b} \mapsto b, \mathsf{c} \mapsto c, \mathsf{d} \mapsto d))$ from the initial configuration of the form $\mathbf{c}_0=(s, (\mathsf{a} \mapsto n, \mathsf{b} \mapsto m))$. There are three cases to consider.

1st case: $q \notin \{ r,t \}$, that is, $q$ is not a final state. Consider the lowermost rule in the derivation. This rule should be a left one (right rules cannot be applied, because there is no $q$ in the right-hand side). The interesting case is the ${\multimap}\mathrm{L}$ rule; the rules operating with ${!}$ are handled trivially, as before. Moreover, the ${\multimap}\mathrm{L}$ rule cannot introduce the formula $t \multimap (s \cdot \mathsf{b}^*)$, since there is no $t$ in the left-hand side of the goal sequent. Hence, it introduces one of the formulae from ${!}\mathcal{H}_{\mathtt{G}_0}$ (with the ${!}$ sign removed). By the same argument as in Theorem 6 (see § 6.4), we show that the goal sequent was derived from a sequent of the form

$$ \begin{equation*} {!}\mathcal{H}', \Pi', q', \mathsf{a}^{a'}, \mathsf{b}^{b'}, \mathsf{c}^{c'}, \mathsf{d}^{d'} \to r+Z, \end{equation*} \notag $$
where configuration $\mathbf{c}'=(q', (\mathsf{a} \mapsto a', \mathsf{b} \mapsto b', \mathsf{c} \mapsto c', \mathsf{d} \mapsto d'))$ is the immediate successor of configuration $\mathbf{c}$. Now we apply the transfinite induction hypothesis: the rank of the derivation of the new sequent is smaller than that of the original one, and the new configuration $\mathbf{c}'$ is reachable from $\mathbf{c}_0$ (via $\mathbf{c}$). Thus, when started from configuration $\mathbf{c}'$, and therefore also when started from $\mathbf{c}$, the machine $\mathtt{G}_0$ reaches one of its final configurations: either $(r,())$ or $(t,(\mathsf{a} \mapsto m))$. Furthermore, in the latter case we have $m \in \mathsf{WF}$.

2nd case: $q=r$. Since the machine $\mathtt{G}_0$ has reached configuration $\mathbf{c}= (r, (\mathsf{a} \mapsto a, \mathsf{b} \mapsto b, \mathsf{c} \mapsto c, \mathsf{d} \mapsto d))$ from the initial configuration $\mathbf{c}_0$, we have $a=b=c=d=0$. Hence, the sequent in question is of the form ${!}\mathcal{H}', \Pi, r \to r+Z$. The machine $\mathtt{G}_0$, starting from configuration $\mathbf{c}=(r,())$, reaches $(r,())$, that is, the same configuration, which is what we wanted.

3rd case: $q=t$. Again, since $\mathtt{G}_0$ has reached configuration $\mathbf{c}=(t, (\mathsf{a} \mapsto a$, $\mathsf{b} \mapsto b$, $\mathsf{c} \mapsto c$, $\mathsf{d} \mapsto d))$ from $\mathbf{c}_0$, we have $b=c=d=0$ and $a=m$. The sequent in question is of the form

$$ \begin{equation*} {!}\mathcal{H}', \Pi, t, \mathsf{a}^m \to r+Z. \end{equation*} \notag $$
By statement 1 (proved above), we have $m \in \mathsf{WF}$. At the same time, $\mathtt{G}_0$ reaches configuration $\mathbf{c}=(t, (\mathsf{a} \mapsto m))$ from itself.
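The configurations $\mathbf{c}=(q, (\mathsf{a} \mapsto a, \dots))$ handled in the three cases above are ordinary counter-machine configurations: a state paired with a counter valuation. As a hedged, generic illustration only (the concrete instruction set of the machine $\mathtt{G}_0$ is fixed earlier in the paper and is not reproduced here; the example program below is invented), a Minsky-style one-step successor function can be sketched as follows:

```python
from collections import Counter

def step(program, config):
    """One-step successor of a configuration (state, counters).
    Instructions: ('inc', ctr, next) and ('dec', ctr, next_nonzero, next_zero).
    A state absent from the program is final and has no successor."""
    state, counters = config
    instr = program.get(state)
    if instr is None:
        return None
    if instr[0] == 'inc':
        _, ctr, nxt = instr
        new = Counter(counters); new[ctr] += 1
        return (nxt, new)
    _, ctr, nxt_nonzero, nxt_zero = instr
    if counters[ctr] > 0:           # decrement branch
        new = Counter(counters); new[ctr] -= 1
        return (nxt_nonzero, new)
    return (nxt_zero, counters)     # zero-test branch

def run(program, config, fuel=1000):
    """Iterate step() until a final state is reached (or fuel runs out)."""
    while fuel and program.get(config[0]) is not None:
        config = step(program, config)
        fuel -= 1
    return config

# Hypothetical example program (not G_0): move counter 'a' into 'b',
# halting in final state 'r'.
# prog = {'s': ('dec', 'a', 's1', 'r'), 's1': ('inc', 'b', 's')}
```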

The implication $2 \Rightarrow 1$ in Theorem 10 is obtained by applying statement 1 to $\mathcal{H}'=\mathcal{H}$ and an empty $\Pi$.

8.3. $\Pi_1^1$-completeness results

Let us now apply the construction explained above to prove $\Pi^1_1$-completeness results.

Theorem 11. There exists a concrete finite set of atomic formulae $\mathcal{H}$ such that the general validity problem for Horn formulae with the given set $\mathcal{H}$ as the set of hypotheses on the class of commutative $*$-continuous Kleene algebras is $\Pi^1_1$-complete. Therefore, the whole Horn theory of the class of all commutative $*$-continuous Kleene algebras is $\Pi^1_1$-complete.

Proof. As the desired set, we take the set $\mathcal{H}$ from the previous section. By Theorem 10, general validity of the Horn formula $\mathcal{H} \Rightarrow t \,{\cdot}\, \mathsf{a}^{{\bullet} n} \leqslant r\,{+}\,Z$ is equivalent to $n \in \mathsf{WF}_{G_0}$. The problem of checking membership in the set $\mathsf{WF}_{G_0}$, for an appropriate choice of the three-valued graph $G_0$, is $\Pi_1^1$-complete by Theorem 8. This establishes $\Pi_1^1$-hardness of the general validity problem for Horn formulae with the given $\mathcal{H}$. The upper bound is due to Proposition 2.

Corollary 2. The derivability problem for the calculus $\boldsymbol{!}\mathbf{CommACT}_\omega$ is $\Pi_1^1$-complete.

This corollary extends the result of [12] to the commutative case. Notice, however, that [12] considered subexponential modalities, and for $\Pi_1^1$-hardness it was sufficient to have a modality with the contraction rule (in its non-local form), while permutation and weakening were not required. Similarly, in the commutative case it is possible to prove $\Pi_1^1$-completeness for a variant of $\boldsymbol{!}\mathbf{CommACT}_\omega$ with a “relevant” modality ${!}$, for which there are rules ${!}\mathrm{L}$, ${!}\mathrm{R}$, ${!}\mathrm{C}$, but not ${!}\mathrm{W}$. In order to achieve this, it is sufficient to add, together with each formula ${!}A$, the formula ${!}(({!}A) \multimap 1)$ to the multiset ${!}\mathcal{H}$. The rules ${\multimap}\mathrm{L}$ and ${1}\mathrm{L}$ will now simulate the rule ${!}\mathrm{W}$.
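Schematically, this simulation of ${!}\mathrm{W}$ can be sketched as follows (our own illustration, with $\Gamma \to C$ an arbitrary sequent; $({!}A) \multimap 1$ is obtained from ${!}(({!}A) \multimap 1)$ by ${!}\mathrm{L}$, and ${!}A \to {!}A$ is an instance of the derivable identity sequent):

```latex
\[
  \dfrac{{!}A \to {!}A \qquad
         \dfrac{\Gamma \to C}{\Gamma,\, 1 \to C}\; 1\mathrm{L}}
        {\Gamma,\ {!}A,\ ({!}A) \multimap 1 \to C}\; {\multimap}\mathrm{L}
\]
```

Thus an unwanted occurrence of ${!}A$ in the antecedent is consumed without appealing to the weakening rule.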

Corollary 3. The first-order (elementary) theory of the class of commutative $*$-continuous Kleene algebras is $\Pi_1^1$-complete.

The lower bound here follows from the fact that the Horn theory is a conservative fragment of the first-order one. The upper bound follows from the possibility to axiomatise by a calculus with an omega-rule. For such calculi, there is a general $\Pi^1_1$ upper bound result (see [12]; the exact form of the calculus does not matter).

Corollary 4. There exists a concrete finite set of atomic formulae $\widetilde{\mathcal{H}}$ such that the general validity problem for Horn formulae with the given set $\widetilde{\mathcal{H}}$ as the set of hypotheses on the class of all (not necessarily commutative) $*$-continuous Kleene algebras is $\Pi_1^1$-complete.

The proof is similar to that of Corollary 1.

The last corollary strengthens the corresponding result of Kozen (Theorem 5.1 in [6]), which states $\Pi_1^1$-completeness only for a varying set of hypotheses.


Bibliography

1. S. C. Kleene, “Representation of events in nerve nets and finite automata”, Automata studies, Ann. of Math. Stud., 34, Princeton Univ. Press, Princeton, NJ, 1956, 3–41
2. D. Kozen, “On Kleene algebras and closed semirings”, Mathematical foundations of computer science (Banská Bystrica 1990), Lecture Notes in Comput. Sci., 452, Springer-Verlag, Berlin, 1990, 26–47
3. V. Pratt, “Action logic and pure induction”, Logics in AI (Amsterdam 1990), Lecture Notes in Comput. Sci., 478, Lecture Notes in Artificial Intelligence, Springer-Verlag, Berlin, 1991, 97–120
4. V. N. Red'ko, “On the algebra of commutative events”, Ukrain. Mat. Zh., 16:2 (1964), 185–195 (Russian)
5. R. J. Parikh, “On context-free languages”, J. Assoc. Comput. Mach., 13:4 (1966), 570–581
6. D. Kozen, “On the complexity of reasoning in Kleene algebra”, Inform. and Comput., 179:2 (2002), 152–162
7. S. L. Kuznetsov, “On the complexity of reasoning in Kleene algebra with commutativity conditions”, Theoretical aspects of computing – ICTAC 2023 (Lima 2023), Lecture Notes in Comput. Sci., 14446, Springer, Cham, 2023, 83–99
8. S. L. Kuznetsov, “Reasoning in commutative Kleene algebras from $*$-free hypotheses”, The Logica yearbook 2021, College Publications, London, 2022, 99–113
9. E. Palka, “An infinitary sequent system for the equational theory of $*$-continuous action lattices”, Fund. Inform., 78:2 (2007), 295–309
10. W. Buszkowski and E. Palka, “Infinitary action logic: complexity, models and grammars”, Studia Logica, 89:1 (2008), 1–18
11. J.-Y. Girard, “Linear logic”, Theoret. Comput. Sci., 50:1 (1987), 1–101
12. S. L. Kuznetsov and S. O. Speranski, “Infinitary action logic with exponentiation”, Ann. Pure Appl. Logic, 173:2 (2022), 103057
13. S. L. Kuznetsov and S. O. Speranski, “Infinitary action logic with multiplexing”, Studia Logica, 111:2 (2023), 251–280
14. P. Lincoln, J. Mitchell, A. Scedrov, and N. Shankar, “Decision problems for propositional linear logic”, Ann. Pure Appl. Logic, 56:1-3 (1992), 239–311
15. S. L. Kuznetsov, “Kleene star, subexponentials without contraction, and infinite computations”, Sib. Èlektron. Mat. Izv., 18:2 (2021), 905–922
16. V. Danos, J.-B. Joinet, and H. Schellinx, “The structure of exponentials: uncovering the dynamics of linear logic proofs”, Computational logic and proof theory (Brno 1993), Lecture Notes in Comput. Sci., 713, Springer-Verlag, Berlin, 1993, 159–171
17. M. L. Minsky, “Recursive unsolvability of Post's problem of “tag” and other topics in theory of Turing machines”, Ann. of Math. (2), 74:3 (1961), 437–455
18. S. L. Kuznetsov, “Commutative action logic”, J. Logic Comput., 33:6 (2023), 1437–1462
19. L. Straßburger, “On the decision problem for MELL”, Theoret. Comput. Sci., 768 (2019), 91–98
20. R. Schroeppel, A two counter machine cannot calculate $2^N$, Report no. AIM-257, MIT, Cambridge, MA, 1972, https://dspace.mit.edu/handle/1721.1/6202
21. E. Börger, Computability, complexity, logic, Stud. Logic Found. Math., 128, North-Holland Publishing Co., Amsterdam, 1989
22. S. C. Kleene, “Arithmetical predicates and function quantifiers”, Trans. Amer. Math. Soc., 79 (1955), 312–340
23. C. Spector, “Recursive well-orderings”, J. Symb. Log., 20:2 (1955), 151–163
24. P. Odifreddi, Classical recursion theory. The theory of functions and sets of natural numbers, Stud. Logic Found. Math., 125, North-Holland Publishing Co., Amsterdam, 1989

Citation: S. L. Kuznetsov, “Algorithmic complexity for theories of commutative Kleene algebras”, Izv. Math., 88:2 (2024), 236–269