Open Access Article

Compression with wildcards: All exact or all minimal hitting sets

  • Marcel Wild
Published/Copyright: September 15, 2023

Abstract

Our objective is the compressed enumeration (based on wildcards) of all minimal hitting sets of general hypergraphs. To the author’s best knowledge, the only previous attempt towards compression, due to Toda, is based on binary decision diagrams and markedly different from our techniques. Traditional one-by-one enumeration schemes cannot compete when the number of minimal hitting sets is large and the degree of compression is high. Our method works particularly well in these two cases: either compressing all minimum-cardinality hitting sets, or compressing all exact hitting sets.

MSC 2010: 05C65; 05A15; 05B35; 05C85

1 Introduction

Let W be a finite set (as are all sets in this article) and P(W) its powerset. Given a hypergraph (= set-system) H ⊆ P(W), an (H-)hitting set is a set X ⊆ W such that X ∩ H ≠ ∅ for all hyperedges H ∈ H. Let HS(H) be the set of all hitting sets, and MHS(H) the subset of all (inclusion-)minimal hitting sets, henceforth called MHSes. The famous minimal hitting set problem is as follows: given H ⊆ P(W), is it possible to enumerate MHS(H) in polynomial total time,[1] i.e., polynomial in w ≔ |W|, h ≔ |H|, and mhs ≔ |MHS(H)|? We refer to [1,2] for the history and the state of the art concerning this problem. Suffice it to say that polynomial total time is possible for various fixed parameters, e.g., if the size of hyperedges is bounded from above, and that Hagen in his thesis [2] managed to show that three popular algorithms provably do not run in polynomial total time. There is no point in further reviewing “old-school” (i.e., one-by-one) enumeration because this article is dedicated to the compression of MHS(H), and there is little to compare that with. As to “little,” the only such work the author is aware of is due to Toda [3]. In Section 11, it will be compared with our approach.

The latter can be described in picturesque ways as follows. For fixed H , identify the MHSes with diamonds and the ordinary hitting sets (i.e., the members of HS ( H ) \ MHS ( H ) ) with worthless pebbles, which, however, may be hard to distinguish from diamonds. Some friendly sponsor provides R many nonempty boxes that are filled with both kinds of stones. All diamonds are distributed among the boxes but usually not all pebbles (which is just as well). Our main quest is to retrieve all diamonds (and only them) as efficiently as possible. A box is good if it contains at least one diamond, and bad otherwise. A box 100% filled with diamonds is very-good. As will be seen, depending on the structure of H , very-good boxes can be both numerous and heavy! Furthermore, the number of diamonds in a very-good box is found at once, and the diamonds themselves are arranged in a pleasant, compressed manner.

To obtain a first impression of the quality of boxes, the Monte-Carlo method picks (say) 20 stones at random from each box ρ and determines the number α(ρ) of diamonds among them. If 0 < α(ρ) < 20, then ρ is merely-good, i.e., good but not very-good. However, if α(ρ) = 0, then ρ is only likely-bad in the sense that picking 20 random pebbles does not imply the absence of diamonds in that box, but it makes it rather likely.[2] Similarly, if α(ρ) = 20, then ρ is likely-very-good. If a likely-very-good box contains thousands of stones, then classifying the stones one-by-one is time-consuming. We will provide three criteria for very-goodness that settle the issue faster. Efficient criteria for badness are harder to come by, but an elegant sufficient condition exists. As to merely-good boxes ρ, there are two approaches, each with benefits and drawbacks. The first is to classify the stones one-by-one. The second uses subtle machinery but has the benefit that the diamonds in ρ get repackaged into brand-new very-good boxes.

Here comes the section break-up, phrased in more mathematical terms. The preliminaries in Section 2 concern Boolean functions and three kinds of wildcards: the e -, the n -, and the g -wildcard. All of them generalize the don’t-care symbol familiar from describing partial models of Boolean functions. Strings of mentioned symbols yield various kinds of “rows,” e.g., 012e-rows or 01g-rows. If the type is clear, we simply speak of rows. Furthermore, we adopt the vertical layout (VL) technique used in data mining. It substitutes set operations that involve many small sets by set operations with few large sets. Section 3 discloses the sponsor of boxes as the transversal e -algorithm of [4]. And Section 4 discloses the mathematical nature of the R boxes as 01g-rows. We then establish (Theorem 1) the pleasant fact that all minimum-cardinality MHSes occur in very-good rows, which, moreover, can be pinpointed at once among the R rows in total.

Unfortunately, in the remainder, we face deeper waters in our quest to find all MHSes. For starters, Section 5 elaborates the first aforementioned approach towards merely-good 01g-rows ρ by offering four algorithms to classify one-by-one the bitstrings (= stones) in ρ. Algorithm 1 relies on the diamonds (= MHSes) retrieved so far, whereas Algorithm 2 only relies on the knowledge of H. By definition, X ⊆ W is an MC-set if each x ∈ X has a private hyperedge H ∈ H, i.e., one that cuts X sharply in the sense that H ∩ X = {x}. The set-system MC(H) of all MC-sets is dual to HS(H) in that the former is a set-ideal and the latter a set-filter. It holds (proven in [5], recast in Theorem 2) that MC(H) ∩ HS(H) = MHS(H). Algorithm 3 exploits this, and Algorithm 4 additionally uses facts that are fully justified only in Section 9.

Section 6 elaborates the second aforementioned approach towards merely-good rows. It relies on Sections 7 and 8 that deliver two criteria for very-goodness, each of which is sufficient and necessary. The first is based on inclusion–exclusion, and the second on matroid theory (Rado’s theorem).

As to Section 9, the subsets of W that are not MC, yet all of whose proper subsets are MC, are nice to know. They are collected in the set-system MinNotMC(H). For instance, it allows us to calculate the cardinality |MHS(H)| without knowing MHS(H). Section 10 calculates MinNotMC(H). It exploits the fact that minimal set-coverings are cryptomorphic to minimal hitting sets and can hence be handled with the transversal e-algorithm.

Section 11 features numerical experiments with Mathematica. In a nutshell, our compression with wildcards (specifically the number and thickness of very-good rows) is the more impressive the fewer and the larger the hyperedges are. Although some ideas of previous sections have not yet been implemented in Mathematica, in 11.6, we attempt a preliminary comparison of our methods with the two winners [3] and [5] of a competition carried out in [1].

Section 12 at first seems to abandon minimal hitting sets and turn to exact hitting sets (EHSes). Are they that different? By definition, Y ⊆ W is an EHS for H if |Y ∩ H| = 1 for all H ∈ H. Under the mild assumption that ∪H = W, each EHS must be an MHS, yet the converse fails severely in that some hypergraphs have plenty of MHSes but no EHSes. Nevertheless, our previously used g-wildcards can sometimes compress the set-system EHS(H) of all EHSes. As to “sometimes,” any fixed hypergraph H ⊆ P(W) induces a natural, apparently novel equivalence relation ∼ on W. It turns out that compressing EHS(H) is possible iff ∼ is nontrivial. Furthermore, Knuth’s popular Dancing Links algorithm appears in Section 12, and in Theorem 4, we enumerate the perfect matchings of any graph without K3,3-minor in polynomial total time.

2 Preliminaries on Boolean functions, partial models, wildcards, and VL

After Boolean functions (Section 2.1), we turn to e-wildcards (2.2–2.3), followed by n-wildcards and g-wildcards (2.4). In 2.5, we sieve the minimal members of any set-system S ⊆ P(W), and 2.6 introduces VL.

Throughout the article, for any integer w ≥ 1, we put [w] ≔ {1, 2, …, w}. For convenience, usually W = [w]. If the powerset is concerned, we write P[w] instead of P([w]). Furthermore, we use the shorthand “iff” for “if and only if,” and write ⊂ (as opposed to ⊆) for proper inclusion.

2.1 We freely identify bitstrings of length w (also called 01-rows) with subsets of [w] in the usual way; thus, X = {2, 4, 5} (viewed, say, as a subset of [6]) matches x = (0, 1, 0, 1, 1, 0). Depending on circumstances, one or the other view is preferable. We now extend 01-rows to 012-rows such as

r = (0, 2, 2, 1, 0, 2).

The following type of notation that refers to the positions of the various symbols will be used throughout:

(1) zeros(r) ≔ {1, 5}, ones(r) ≔ {4}, twos(r) ≔ {2, 3, 6}.

While 01-rows encode sets, 012-rows encode set-systems because “2” is viewed[3] as a don’t-care symbol that can be freely replaced by 0 or 1. Thus, r = (0, 2, 2, 1, 0, 2) encodes, and in fact will be identified[4] with, the set-system:

r = {{4}, {4,2}, {4,3}, {4,6}, {4,2,3}, {4,2,6}, {4,3,6}, {4,2,3,6}},

which, with obvious shorthand notation (that will only be applied to sets of 1-digit numbers), can also be rendered (since elements of sets can be listed in arbitrary order) as:

{4, 42, 43, 46, 423, 426, 436, 4236} or as {4, 24, 34, 46, 234, 246, 346, 2346}.

As to a general 012-row r, if it is viewed as a set-system, this set-system is {ones(r) ∪ S : S ⊆ twos(r)}. While zeros(r) does not come up here, the 0’s are as important as the 1’s in the sequel (ponder what would become of r = (0, 2, 2, 1, 0, 2) without the 0’s).
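To make the 012-row semantics concrete, here is a small Python sketch (our illustration, not the article’s Mathematica code; the name `expand_012_row` is ours) that expands a 012-row into the set-system {ones(r) ∪ S : S ⊆ twos(r)}:

```python
from itertools import chain, combinations

def expand_012_row(row):
    """Expand a 012-row (list of '0'/'1'/'2' symbols, positions 1-based)
    into its set-system {ones(row) | S : S subset of twos(row)}."""
    ones = {i + 1 for i, c in enumerate(row) if c == '1'}
    twos = [i + 1 for i, c in enumerate(row) if c == '2']
    subsets = chain.from_iterable(combinations(twos, k) for k in range(len(twos) + 1))
    return {frozenset(ones | set(S)) for S in subsets}

# r = (0,2,2,1,0,2) from the text: 2^3 = 8 member sets, each containing 4.
r = expand_012_row(['0', '2', '2', '1', '0', '2'])
```

Note that position 1 carries a 0, so no member of r ever contains the element 1.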

2.1.1 That leads us to {0, 1} viewed as a Boolean algebra[5] and to Boolean functions f : {0,1}^n → {0,1} whose basic features are assumed to be familiar to the reader, so that we only need to fix notation here. Any bitstring x ∈ {0,1}^n with f(x) = 1 is a model of f. Apart from other means, Boolean functions can be defined by Boolean formulas. Thus, by writing f(x) ≔ x1 ∨ x2 ∨ x3, we define[6] a Boolean function f : {0,1}^3 → {0,1} that, e.g., satisfies f((0,1,1)) = 0 ∨ 1 ∨ 1 = 1. It is clear that only (0,0,0) fails to be a model, and so the modelset is

Mod(f) = (2,2,2) \ {(0,0,0)} = (1,2,2) ∪ (2,1,2) ∪ (2,2,1).

The union on the righthand side is not disjoint since, e.g., (1,0,1) ∈ (1,2,2) ∩ (2,2,1). Fortunately, this can be cured as follows (here and henceforth ⊎ signifies disjoint union):

Mod(x1 ∨ x2 ∨ x3) = (1,2,2) ⊎ (0,1,2) ⊎ (0,0,1).

This idea is long known, and its visualization has been coined Abraham-flag in [7]. Thus, a general n × n Abraham-flag has 1’s on the main diagonal, 0’s below it, and 2’s above it. The row-cardinalities sum up to 2^(n−1) + 2^(n−2) + ⋯ + 1, which equals 2^n − 1, as is to be expected. In connection with Boolean functions, 012-rows usually describe partial models. For instance, (1,2,2) is a partial model of x1 ∨ x2 ∨ x3 in the sense that replacing the 2’s by 0 or 1 in any way results in a model of x1 ∨ x2 ∨ x3.

2.2 In addition to the don’t-care symbol “2,” we will use three further wildcards. For starters, instead[7] of using an s × s Abraham-flag to spell out Mod(x1 ∨ ⋯ ∨ xs), we can, better still, simply define

(e, e, …, e) ≔ Mod(x1 ∨ ⋯ ∨ xs).

Roughly speaking, s symbols e (not necessarily adjacent) demand bitstrings to have “at least one 1 in that area.” Combining such e -wildcards (distinguished by subscripts) gives rise to 012e-rows like

(2) r = (e1, 0, 2, e1, e2, 1, 0, e2, 2, 2),

which, by definition, consists of those subsets S ⊆ [10] that satisfy

  • 2, 7 ∉ S (because zeros(r) = {2, 7}),

  • 6 ∈ S (because ones(r) = {6}),

  • {1, 4} ∩ S ≠ ∅ (because of e1, e1),

  • {5, 8} ∩ S ≠ ∅ (because of e2, e2).

The fact that twos(r) = {3, 9, 10} reflects that 3, 9, and 10 do not occur in any of the conditions. By e-bubble, we mean the position-set of any given e-wildcard. Thus, the e2-bubble of the e2-wildcard in (2) is {5, 8}. It is easy to see that

|r| = 2^3 (2^2 − 1)(2^2 − 1),

and that 2^2 − 1 generalizes to 2^s − 1 for e-bubbles of size s.

Alternatively (but clumsier), r in (2) could be defined[8] as follows:

r = Mod(x̄2 ∧ x̄7 ∧ x6 ∧ (x1 ∨ x4) ∧ (x5 ∨ x8)). (2′)
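The membership conditions of row (2) are easy to test mechanically. The following Python sketch (ours, for illustration; `in_row` is our own name) checks membership in row (2) and confirms the cardinality formula |r| = 2^3 · (2^2 − 1) · (2^2 − 1) = 72 by brute force over all subsets of [10]:

```python
from itertools import chain, combinations

# Row (2) of the text: zeros {2,7}, ones {6}, e-bubbles {1,4} and {5,8}, twos {3,9,10}.
ZEROS, ONES = {2, 7}, {6}
BUBBLES = [{1, 4}, {5, 8}]

def in_row(X):
    """X belongs to the 012e-row (2) iff it avoids the zeros, contains the ones,
    and hits every e-bubble in at least one element."""
    return not (X & ZEROS) and ONES <= X and all(X & b for b in BUBBLES)

subsets = chain.from_iterable(combinations(range(1, 11), k) for k in range(11))
members = [set(X) for X in subsets if in_row(set(X))]
# len(members) equals 2^3 * 3 * 3 = 72
```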

2.2.1 Observe that the intersection ρ ∩ ρ′ of a 012e-row ρ with a 012-row ρ′ is either empty (when 0’s and 1’s clash) or is again a 012e-row, which arises in obvious ways:

ρ = (e1, e1, e1, e2, e2, e2, e3, e3, e3), ρ′ = (2, 2, 0, 0, 2, 2, 1, 2, 0), ρ ∩ ρ′ = (e1, e1, 0, 0, e2, e2, 1, 2, 0).

2.2.2 The set of all minimal [9] members contained in a 012e-row will play a crucial role. One checks that the set-system Min ( r ) of all minimal members of the set-system r in (2) equals

(3) Min(r) = {615, 618, 645, 648}.

Generally, if the 012e-row r has t ≥ 1 many e-wildcards of cardinalities ε1, …, εt, then[10] each X ∈ Min(r) is of type X = ones(r) ∪ T, where T cuts each e-bubble in exactly one element. Thus, |T| = t. If we define the degree of r as:

(4) deg(r) ≔ |ones(r)| + t,

then

(5) Min(r) = {X ∈ r : |X| = deg(r)} and |Min(r)| = ε1 ε2 ⋯ εt.

For general set-systems S , it will be more demanding (2.5) to sieve Min ( S ) from S . Nevertheless, (5) will keep coming back even in that context.
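Formula (5) can be verified on row (2) by brute force. The sketch below (our Python illustration, not part of the article) constructs Min(r) directly, as ones(r) plus one element per e-bubble, and checks that it coincides with the members of r of cardinality deg(r) = 3:

```python
from itertools import chain, combinations, product

# Row (2): zeros {2,7}, ones {6}, e-bubbles {1,4} and {5,8}.
ZEROS, ONES = {2, 7}, {6}
BUBBLES = [{1, 4}, {5, 8}]

def in_row(X):
    return not (X & ZEROS) and ONES <= X and all(X & b for b in BUBBLES)

# Direct construction per (5): ones(r) plus exactly one element per e-bubble.
min_members = {frozenset(ONES | set(p)) for p in product(*BUBBLES)}

deg = len(ONES) + len(BUBBLES)   # degree (4): |ones(r)| + t

# Sieve the members of cardinality deg(r) from the full row.
subsets = chain.from_iterable(combinations(range(1, 11), k) for k in range(11))
by_sieve = {frozenset(X) for X in subsets if in_row(set(X)) and len(X) == deg}
```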

2.3 Let us introduce higher-level Abraham-flags, i.e., constituted by certain 012e-rows as opposed to the 012-rows in 2.1. Consider

(6) r ≔ (e1, e1, e2, e2, e3, e1, e2, e2, e3).

Soon, we need to be able to, e.g., sieve those bitstrings (x1, …, x9) from r that have at least one 1 among x1, …, x5. In other words, we need to “impose” (e, e, e, e, e) upon r, i.e., putting r′ ≔ (e, e, e, e, e, 2, 2, 2, 2), the intersection r ∩ r′ of two 012e-rows must be rewritten in a handy format. The answer is (e1, e1, e2, e2, e3, e1, e2, e2, e3) ∩ (e, e, e, e, e, 2, 2, 2, 2) = r1 ⊎ r2 ⊎ r3, where

(7) r1 ≔ (e1, e1, e2, e2, e3, 2, e2, e2, e3), r2 ≔ (0, 0, e2, e2, e3, 1, 2, 2, e3), r3 ≔ (0, 0, 0, 0, 1, 1, e2, e2, 2).

The first part of the righthand side is a novel 3 × 3 Abraham-flag in the sense that the boldface main diagonal entries are either 1 (as in 2.1) or full e -wildcards. Likewise, the entries below the main diagonal are again 0’s. What happens above the main diagonal, and how all of this affects the last four columns in (7), has been discussed[11] in [4] (see also Section 3.1).

2.4 Dually to e -wildcards, we will encounter n -wildcards that demand “at least one 0 here.” Thus, for instance,

(n, n, n, n) ≔ Mod(x̄1 ∨ x̄2 ∨ x̄3 ∨ x̄4) = (0,2,2,2) ⊎ (1,0,2,2) ⊎ (1,1,0,2) ⊎ (1,1,1,0).

Mutatis mutandis as in 2.2, we define n -bubbles and 012 n -rows.

2.4.1 Apart from e -wildcards and n -wildcards,[12] a third type of wildcard takes care of the requirement “exactly one 1 here.” Namely, by definition:

(g, g, …, g) ≔ {(1,0,…,0), (0,1,…,0), …, (0,0,…,1)}.

One trivial application of these g -wildcards (and coupled g-bubbles) is the compression of MHS ( H ) for hypergraphs with disjoint hyperedges. Thus, if H 1 = { 123 , 45 , 6789 } , then MHS ( H 1 ) = ( g 1 , g 1 , g 1 , g 2 , g 2 , g 3 , g 3 , g 3 , g 3 ) . Slightly more subtle and important later, one checks that r = ( e 1 , 0 , 2 , e 1 , e 2 , 1 , 0 , e 2 , 2 , 2 ) from (2) has Min ( r ) = ( g 1 , 0 , 0 , g 1 , g 2 , 1 , 0 , g 2 , 0 , 0 ) . Expressions like this are called 01g-rows.
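The disjoint-hyperedge example H1 = {123, 45, 6789} can be checked mechanically: expanding the 01g-row (g1,g1,g1,g2,g2,g3,g3,g3,g3) means picking exactly one element per g-bubble, and every set so obtained is indeed a minimal hitting set. A small Python sketch (ours; `is_mhs` is our own helper, not the article’s):

```python
from itertools import product

# H1 = {123, 45, 6789}: pairwise disjoint hyperedges.
H1 = [{1, 2, 3}, {4, 5}, {6, 7, 8, 9}]

# Expanding (g1,g1,g1,g2,g2,g3,g3,g3,g3): exactly one 1 per g-bubble.
mhs = [set(p) for p in product(*H1)]

def is_mhs(X, H):
    """Minimal hitting set: X hits every edge, and no proper subset still does."""
    hits = lambda Y: all(Y & E for E in H)
    return hits(X) and not any(hits(X - {a}) for a in X)
```

Since the bubbles are disjoint, dropping any element of such an X leaves its private hyperedge unhit, which is why all 3 · 2 · 4 = 24 expansions are minimal.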

2.5 Let S ⊆ P[w] be any set-system. The problem to obtain[13] the set-system Min(S) of all minimal members of S occurs frequently in discrete mathematics. The naive way to proceed is to decide for each X ∈ S whether there is another Y ∈ S with Y ⊂ X. Clearly, X belongs to Min(S) iff no such Y exists. Since deciding whether or not Y ⊂ X costs O(w), the overall cost is O(|S|^2 w).

To the author’s best knowledge (and surprise), the following refinement has not appeared in the literature before. Start by grouping the members of S according to their cardinalities m1 < m2 < ⋯ < ms (often m_{i+1} = m_i + 1). This induces the decomposition S = S[1] ⊎ S[2] ⊎ ⋯ ⊎ S[s] and costs O(|S| w). It suffices to show how to calculate Min[i] ≔ S[i] ∩ Min(S) for all 1 ≤ i ≤ s.

Clearly, Min[1] = S[1] since minimum cardinality implies minimality. Set S[i]′ ≔ S[i] for 2 ≤ i ≤ s. Throughout the remainder, we will have Min[i] ⊆ S[i]′ ⊆ S[i], and the set-systems S[i]′ keep shrinking until they reach S[i]′ = Min[i]. To begin with, pick any X ∈ Min[1] and remove all[14] Y ∈ S[i]′ (i ≥ 2) from S[i]′ whenever X ⊂ Y. This costs O(|S| w). Doing the same for all members X ∈ Min[1] costs O(|S| w |Min[1]|) = O(|S| w · min), where min ≔ |Min(S)|. It is clear that afterward, S[2]′ = Min[2]. Next, for each X ∈ Min[2] and all Y ∈ S[i]′ (i ≥ 3), remove Y from S[i]′ whenever X ⊂ Y. Clearly, afterward, S[3]′ = Min[3]. And so it goes on until we obtain S[s]′ = Min[s]. The overall cost is O(|S| w · min · s) = O(|S| w^2 · min), since s ≤ w.
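The level-by-level sieve just described can be sketched in a few lines of Python (our compact rendering of the idea, not the article’s implementation). A set X ∈ S is minimal iff it contains no already-confirmed minimal set of smaller cardinality, so processing cardinality levels bottom-up suffices:

```python
def minimal_members(S):
    """Min(S): group by cardinality and process levels bottom-up; a set is
    minimal in S iff it contains no confirmed minimal set of smaller size."""
    levels = {}
    for X in map(frozenset, S):
        levels.setdefault(len(X), set()).add(X)
    minimal = []
    for m in sorted(levels):
        keep = [X for X in levels[m] if not any(M < X for M in minimal)]
        minimal.extend(keep)
    return set(minimal)
```

Sets of equal cardinality never contain each other properly, so each level only needs to be tested against previously confirmed minimals.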

2.6 The operations ∨ and ∧ on {0, 1} extend componentwise to operations on {0,1}^m (and they match union/intersection of sets in P[m]). Adopting Mathematica terminology, we call the extended operations BitOr and BitAnd. For instance, referring to the columns of the 8 × 6 matrix A with rows Z1 to Z8 (Table 1), it holds that BitAnd(col2, col6) = (0, 0, 1, 1, 0, 0, 0, 1)^T (where the T means “transposed”).

Table 1

Illustration of VL

col 1 col 2 col 3 col 4 col 5 col 6
Z 1 = 1 1 1 0 0 0
Z 2 = 1 0 0 0 1 0
Z 3 = 1 1 0 0 0 1
Z 4 = 0 1 0 0 1 1
Z 5 = 1 0 1 1 0 0
Z 6 = 0 0 1 1 1 0
Z 7 = 0 0 1 1 0 1
Z 8 = 0 1 0 1 0 1

2.6.1 What is this good for? The fact that BitAnd(col2, col6) has a component 1 exactly in the 3rd, 4th, and 8th positions tells us that among the sets Z1, …, Z8, the ones that contain the set {2, 6} are exactly Z3, Z4, and Z8. This is, e.g., relevant for speeding up the method of 2.5.

2.6.2 Here comes another application. Consider the set-system

(8) G ≔ {{1,2,3}, {1,5}, {1,2,6}, {2,5,6}, {1,3,4}, {3,4,5}, {3,4,6}, {2,4,6}}.

The straightforward (= “horizontal”) way to see whether X = {1, 2, 5} is a G-transversal checks whether any intersection X ∩ Y (Y ∈ G) is empty. In contrast, VL demands[15] to build the 8 × 6 matrix A(G) whose i-th row is the characteristic bitstring of the i-th set Yi listed in (8). It happens that A(G) is rendered in Table 1. A moment’s reflection confirms the following. The fact that BitOr(col1, col2, col5) = (1, 1, 1, 1, 1, 1, 0, 1)^T does not equal (1, 1, 1, 1, 1, 1, 1, 1)^T is tantamount to X not being a G-hitting set (X ∩ Y7 = ∅). Although the formal complexities of the horizontal and vertical way coincide, in practice, VL is the faster the more (small) sets G contains. Simply put, computer hardware prefers doing few operations with long bitstrings to doing many operations with short bitstrings.
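In a language with machine integers, VL columns are naturally stored as bitmasks. The following Python sketch (our illustration; names like `is_hitting_vl` are ours) encodes the columns of Table 1 as integers, so that one BitOr per element of X decides hitting-set-hood:

```python
# The set-system G of (8); row i of Table 1 is the characteristic bitstring of Y_i.
G = [{1,2,3}, {1,5}, {1,2,6}, {2,5,6}, {1,3,4}, {3,4,5}, {3,4,6}, {2,4,6}]

# VL: store column j as an integer bitmask (bit i set iff j lies in Y_{i+1}).
col = {j: 0 for j in range(1, 7)}
for i, Y in enumerate(G):
    for j in Y:
        col[j] |= 1 << i

FULL = (1 << len(G)) - 1   # the all-ones vector (1,...,1)^T

def is_hitting_vl(X):
    """X hits every member of G iff the BitOr of X's columns is all-ones."""
    acc = 0
    for j in X:
        acc |= col[j]
    return acc == FULL
```

Here BitAnd(col2, col6) becomes `col[2] & col[6]`, whose set bits are exactly the rows Z3, Z4, Z8 containing {2, 6}.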

3 Review of the transversal e -algorithm

We survey the transversal e -algorithm (3.1) and adapt it to count or generate hitting sets of fixed cardinality (3.2). In 3.3, we indicate how the transversal e -algorithm dualizes to the noncover n -algorithm.

3.1 Consider the task to enumerate the set HS(H2) of all hitting sets of the hypergraph H2 whose five hyperedges Hi ⊆ [6] are

(9) H1 = {1, 2, 5}, H2 = {3, 4}, H3 = {4, 5, 6}, H4 = {1, 3, 5}, and H5 = {2, 6}.

One idea is to first compute the hitting sets of the hypergraph {H1}, then the ones of {H1, H2}, and so forth until we obtain the hitting sets of {H1, …, H5} = H2. Calculating HS({H1}) is easy in view of 2.2. It consists of all bitstrings (= subsets of [6]) that have at least one 1 on the positions 1, 2, 5, and so HS({H1}) = (e, e, 2, 2, e, 2). Likewise, HS({H1, H2}) = (e1, e1, e2, e2, e1, 2) ≕ r.

Now, it gets trickier because H3 intersects H1 and H2, i.e., the e3-wildcard supposed to model H3 interferes with existing e-wildcards. In 2.3, we indicated how this is to be handled. Recall that the row in (6), which suffered the same predicament as r mentioned earlier, had to be replaced by three candidate sons in (7). The essence of the transversal e-algorithm is to keep on picking the topmost row r of a “to do” stack of 012e-rows and to impose some e-wildcard upon r, which in turn can trigger up[16] to t candidate sons. Each candidate son ri must be feasible in the sense that ri ∩ HS(H) ≠ ∅, for otherwise further processing of ri cannot possibly yield any hitting sets. The feasible candidate sons are put on top of the last-in-first-out (LIFO) stack, and the others are discarded. Fortunately, deciding feasibility is easy:

(10) r is feasible iff (∀H ∈ H)(H ⊈ zeros(r)).

The effect of discarding infeasible candidate sons is that in each set of candidate sons, at least one will be feasible. This in turn is the reason that the e-algorithm runs in polynomial total time, in fact in O(R h^2 w^2) time. For the fine details of this transversal e-algorithm,[17] the reader is referred to [4]. To summarize, for any given hypergraph H ⊆ P[w], the transversal e-algorithm renders HS(H) as a disjoint union of R many 012e-rows; thus,

(11) HS(H) = ρ̄1 ⊎ ρ̄2 ⊎ ⋯ ⊎ ρ̄R.

3.1.1 Applied to H2, the transversal e-algorithm yields HS(H2) = ρ̄1 ⊎ ⋯ ⊎ ρ̄4, where the ρ̄i’s are defined in Table 2.

Table 2

Representation of HS ( H 2 ) as disjoint union of 012 e -rows

ρ̄1 = e e 1 0 0 1
ρ̄2 = 2 e1 e2 e2 1 e1
ρ̄3 = 0 1 1 1 0 2
ρ̄4 = 1 e 2 1 0 e

In view of 2.2, we conclude that

|HS(H2)| = |ρ̄1| + ⋯ + |ρ̄4| = (2^2 − 1) + 2(2^2 − 1)(2^2 − 1) + 2 + 2(2^2 − 1) = 29.
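The count 29 is small enough to confirm by exhaustive search. The following Python sketch (ours, independent of the e-algorithm) enumerates all subsets of [6] and keeps the hitting sets of H2:

```python
from itertools import chain, combinations

# The five hyperedges (9) of the hypergraph H2.
H2 = [{1, 2, 5}, {3, 4}, {4, 5, 6}, {1, 3, 5}, {2, 6}]

subsets = chain.from_iterable(combinations(range(1, 7), k) for k in range(7))
hs = [set(X) for X in subsets if all(set(X) & E for E in H2)]
# len(hs) equals 29; the minimum cardinality achieved is 3.
```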

3.2 Let μ ≔ μ(H) be the minimum cardinality achieved by any hitting set of the hypergraph H. In our scenario, μ usually becomes known[18] only after (11) has been obtained. For all c ∈ {μ, μ + 1, …, w}, we put

(12) HS(H, c) ≔ {X ∈ HS(H) : |X| = c}.

Of particular interest is the set-system MCHS(H) of all minimum-cardinality hitting sets, i.e.,

(13) MCHS(H) ≔ HS(H, μ) ⊆ MHS(H).

3.2.1 In some circumstances (e.g., in [10]), it is irrelevant whether the H-hitting sets are minimal; just their cardinality matters. Let us hence calculate |HS(H, c)| for any fixed c ≥ μ. Viewing (11) for any such c, let Ī be the set of indices i ≤ R such that the 012e-row ρ̄i has degree ≤ c (that is because ρ̄i ∩ HS(H, c) = ∅ if i ∉ Ī). Putting S(ρ̄i) ≔ {X ∈ ρ̄i : |X| = c}, we obtain |HS(H, c)| by summing up the numbers |S(ρ̄i)| (i ∈ Ī). It is easy to calculate the numbers |S(ρ̄i)| with inclusion–exclusion; for a faster way, see [4, Thm. 1].
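One concrete way to compute |S(ρ̄i)| is via a generating polynomial, which is equivalent to the inclusion–exclusion just mentioned (this is our own sketch, not the faster method of [4, Thm. 1]): the members of size c of a row with n1 ones, n2 twos, and e-bubbles of sizes ε1, …, εt are counted by the coefficient of x^(c − n1) in (1+x)^n2 · ∏i ((1+x)^εi − 1).

```python
from math import comb

def count_of_size(n_ones, n_twos, bubble_sizes, c):
    """Number of members of cardinality c in a 012e-row: coefficient of
    x^(c - n_ones) in (1+x)^n_twos * prod_i ((1+x)^eps_i - 1)."""
    def mul(p, q):                       # naive polynomial multiplication
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] += a * b
        return r
    poly = [comb(n_twos, k) for k in range(n_twos + 1)]   # (1+x)^n_twos
    for eps in bubble_sizes:
        factor = [comb(eps, k) for k in range(eps + 1)]
        factor[0] -= 1                                    # (1+x)^eps - 1
        poly = mul(poly, factor)
    idx = c - n_ones
    return poly[idx] if 0 <= idx < len(poly) else 0

# Row (2): one 1, three 2's, two e-bubbles of size 2 each.
```

For row (2), the size-3 members are exactly Min(r), and summing over all c recovers |r| = 72.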

3.2.2 Suppose the set HS(H, c) itself needs to be calculated. By the aforementioned, each fixed set-family HS(H, c) is the disjoint union of all sets S(ρ̄i) (i ∈ Ī). But sieving S(ρ̄i) from ρ̄i is more cumbersome than calculating |S(ρ̄i)|. Leaving ways of compression to the future, we only note that if S(ρ̄i) has α elements, then by [4, Thm. 2], it can be enumerated one-by-one in polynomial total time O(α w^2).

3.2.3 If HS(H, c) is of interest cardinality-wise (or the members themselves) for all values μ ≤ c ≤ w, then upon running the transversal e-algorithm, each c gets processed as discussed in 3.2.1 (or 3.2.2). However, if only values c ≤ d for some bound d are relevant, then it pays to adjust the transversal e-algorithm as follows. In addition to (10), the arising candidate sons should also satisfy deg(ρ̄) ≤ d. That is because deg(ρ̄) > d implies that all successor rows ρ̄i of ρ̄ will have deg(ρ̄i) ≥ deg(ρ̄) > d and so cannot contain any members of HS(H, c). The problem is that, in contrast to the remarks after (10), it can now happen that some rows lose all their candidate sons. Nevertheless, performance in practice may be good.

3.3 The family HS(H) of all H-hitting sets is a set-filter F in the sense that (X ∈ F and X ⊆ Y) ⇒ Y ∈ F. Now, let S ⊆ P[w] be a set-system. Call Z ∈ P[w] an S-noncover if Z ⊉ Y for all Y ∈ S. Then the family NC(S) of all S-noncovers is a set-ideal J in the sense that (X ∈ J and Y ⊆ X) ⇒ Y ∈ J. Consider any set-filter F ⊆ P[w]. The minimal members of F are called its generators, and they determine F uniquely. Likewise, for any set-ideal[19] J ⊆ P[w], the maximal members of J are called its facets, and they determine J uniquely. Furthermore, let F and J be complementary set-systems in the sense that F ⊎ J = P[w]. It then holds that F is a set-filter iff J is a set-ideal.

Given H ⊆ P[w], the transversal e-algorithm renders the set-filter HS(H) in the convenient format (11). Since set-filter and set-ideal are dual concepts, and so are e-wildcards and n-wildcards, it comes as no surprise that some noncover n-algorithm (see, e.g., [6]), fed with S, renders the set-ideal NC(S) as a disjoint union of R′ many 012n-rows:

(11′) NC(S) = σ̄1 ⊎ σ̄2 ⊎ ⋯ ⊎ σ̄R′.

4 From minimum-cardinality toward inclusion-minimal

We argue why all 012e-rows ρ̄i in (11) should get “shaved” and become certain 01g-rows ρi ⊆ ρ̄i. Thus, (11) will improve to (17). It pleasantly turns out that MCHS(H) in (13) is the union of some such rows ρi. In 4.2, we comment on situations where moreover MCHS(H) = MHS(H), and in 4.4, we resume the Monte-Carlo method of Section 1 in order to obtain an estimate for |MHS(H)|.

4.1 Let H ⊆ P[w] be a hypergraph. In the remainder of this article, we assume that the transversal e-algorithm has rendered HS(H) as a disjoint union of R many 012e-rows as in (11). Different from [4], where these rows were coined “final,” here their availability is not the end but only the beginning. That is why we henceforth call them

semifinal 012e-rows.

Suppose X ⊆ [w] is any minimal H-hitting set. Then, X is contained in some semifinal 012e-row ρ̄i because of (11). Being minimal within HS(H), a fortiori X is minimal within the smaller set-system ρ̄i ⊆ HS(H), i.e., X ∈ Min(ρ̄i). In view of (5), it follows that for all 1 ≤ i ≤ R:

(14) ρ̄i ∩ MHS(H) ⊆ Min(ρ̄i) = {X ∈ ρ̄i : |X| = deg(ρ̄i)}.

In particular, consider Y ∈ MCHS(H) ⊆ MHS(H) (see (13)). As before, Y ∈ Min(ρ̄j) for some j ≤ R. But all sets in Min(ρ̄j) have the same cardinality as Y, and so are themselves in MCHS(H). Hence, ⊆ in (14) becomes =. To summarize:

Theorem 1

Assume that HS(H) is represented as a disjoint union of 012e-rows ρ̄i as in (11). Then, MCHS(H) is the disjoint union of those sets Min(ρ̄i) that have deg(ρ̄i) = μ.

To illustrate, consider HS(H2) = ρ̄1 ⊎ ⋯ ⊎ ρ̄4 in Table 2. One checks that all these rows happen to have degree 3, and so μ = 3. It follows from Theorem 1 and the fact (see 2.4.1) that sets of type Min(ρ̄i) can conveniently be rendered by single 01g-rows ρi that

(15) MCHS(H2) = ρ1 ⊎ ρ2 ⊎ ρ3 ⊎ ρ4,

where the ρ i ’s are defined as follows:

(16) Min(ρ̄1) = {136, 236} = (g, g, 1, 0, 0, 1) ≕ ρ1, Min(ρ̄2) = {235, 245, 356, 456} = (0, g1, g2, g2, 1, g1) ≕ ρ2, Min(ρ̄3) = {234} = (0, 1, 1, 1, 0, 0) ≕ ρ3, Min(ρ̄4) = {124, 146} = (1, g, 0, 1, 0, g) ≕ ρ4.
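Theorem 1 can be replayed on this example in a few lines of Python (our illustration; `shave` and `is_mhs` are our own names). Shaving each row of Table 2 (all of degree μ = 3) yields nine sets, and each is indeed a minimal hitting set of H2:

```python
from itertools import product

H2 = [{1, 2, 5}, {3, 4}, {4, 5, 6}, {1, 3, 5}, {2, 6}]

# The four semifinal rows of Table 2 as (ones, e-bubbles); 2's and 0's both drop out.
rows = [
    ({3, 6},    [{1, 2}]),           # rho1-bar = (e,e,1,0,0,1)
    ({5},       [{2, 6}, {3, 4}]),   # rho2-bar = (2,e1,e2,e2,1,e1)
    ({2, 3, 4}, []),                 # rho3-bar = (0,1,1,1,0,2)
    ({1, 4},    [{2, 6}]),           # rho4-bar = (1,e,2,1,0,e)
]

def shave(ones, bubbles):
    """Min(rho-bar) as a 01g-row: ones plus exactly one element per bubble."""
    return [frozenset(ones) | set(p) for p in product(*bubbles)]

mchs = [X for ones, bubbles in rows for X in shave(ones, bubbles)]

def is_mhs(X, H):
    hits = lambda Y: all(Y & E for E in H)
    return hits(X) and not any(hits(X - {a}) for a in X)
```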

4.2 As opposed to (15), where incidentally MCHS(H2) = MHS(H2), for general hypergraphs H, only a few semifinal 012e-rows ρ̄i will have degree μ! If only MCHS(H) is sought, then all rows ρ̄i with deg(ρ̄i) > μ are superfluous. Yet to avoid them, one cannot proceed as in 3.2.3 because usually, d ≔ μ is not known in advance. However, guessing and working with some slightly larger d > μ will still beat computing all R rows. (If one happens to guess a d < μ, then the proposed method will not deliver any semifinal 012e-rows. But it will improve the next guess, and with binary search, one can even pin down μ.)

4.2.1 In the following set-up, μ is known[20] in advance; it even happens that MHS(H) = MCHS(H). Namely, if H is the family of all cocircuits [8, p. 653] of a matroid, then MHS(H) is the set of all matroid bases, and μ is easy to come by. In arXiv:2002.09707 (submitted), this has been implemented for the scenario where the cocircuits are the minimal cutsets of a graph G, in which case MHS(H) = MCHS(H) is the set of all spanning trees of G.

4.2.2 Suppose that μ is known, be it by binary search or by theoretical reasoning as in 4.2.1. Then, one still sits with the problem (mentioned in 3.2.3) that some top-rows of the LIFO-stack may lose all their candidate sons. That this cannot happen in 4.2.1 is one of the (numerically well-supported) conjectures raised in arXiv:2002.09707. In another vein, if all H ∈ H have |H| = 2, so that H is the edge-set of a graph, then instead of MHSes, one rather speaks of minimal vertex-covers. In this scenario, μ remains hard to compute, but at least “losing all candidate sons” can be avoided (work in progress).

4.3 Generalizing Table 2 and (16), each semifinal 012e-row ρ̄ ≔ ρ̄i appearing in (11) yields the

semifinal 01g-row (or simply: semifinal row)

ρ ≔ Min(ρ̄), where all 2’s of ρ̄ have been replaced by 0’s and each e-wildcard of length εj has been replaced by a g-wildcard of the same length γj ≔ εj. Hence, akin to (4) and (5), the semifinal 01g-row ρ has t many g-wildcards, and it holds that the γ1 γ2 ⋯ γt members of ρ all have cardinality |ones(ρ)| + t. It follows from (11) that:

(17) MHS(H) ⊆ ρ1 ⊎ ρ2 ⊎ ⋯ ⊎ ρR ≕ SF(H).

Accordingly, we have

(17′) mhs ≔ |MHS(H)| ≤ sf ≔ |SF(H)|.

The following terminology will be handy as well. A semifinal 01g-row ρ is bad if MHS(H) ∩ ρ = ∅, and good otherwise. Additionally, call ρ very-good if ρ ⊆ MHS(H), and call ρ merely-good if it is good but not very-good. Each X ∈ ρ \ MHS(H) is called a dud. For 012-rows, good does not imply very-good.

4.4 Recall from Section 1 that a simple attempt to settle “good or bad?” is the Monte-Carlo way. That is, pick X ∈ ρ uniformly at random, and check (in whatever way) whether or not X ∈ MHS(H). If yes, then ρ is good. If no, test some more X. The more often the answer persists to be no, the likelier it is that ρ is bad. Specifically, the density d ≔ |MHS(H) ∩ ρ| / |ρ| can be estimated to any desired precision as follows. Given ε, δ > 0, standard statistics yields a value d′ such that (with error-probability < δ), it holds that d ∈ [(1 − ε)d′, (1 + ε)d′]. Since |ρ| = γ1 ⋯ γt is known, d′ also yields an estimate for |ρ ∩ MHS(H)|, and hence, in view of (17), for |MHS(H)|.
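Sampling uniformly from a 01g-row is particularly easy: one independently picks one element per g-bubble. The following Python sketch (ours; the names and the membership oracle `is_mhs` are assumptions for illustration) estimates the density on the toy row (1, g1, g1, g1, g2, g2), whose true density is 2/6 = 1/3:

```python
import random

def sample_member(ones, gbubbles, rng):
    """Uniform member of a 01g-row: independently pick one element per g-bubble."""
    return frozenset(ones) | {rng.choice(sorted(b)) for b in gbubbles}

def estimate_density(ones, gbubbles, is_mhs, n, rng):
    """Fraction of n random members that the oracle classifies as MHSes."""
    hits = sum(is_mhs(sample_member(ones, gbubbles, rng)) for _ in range(n))
    return hits / n

# Toy row (1,g1,g1,g1,g2,g2): exactly two of its six members are MHSes.
rng = random.Random(0)
MHS = {frozenset({1, 3, 6}), frozenset({1, 4, 6})}
d = estimate_density({1}, [{2, 3, 4}, {5, 6}], lambda X: X in MHS, 3000, rng)
```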

5 Four algorithms to sieve the MHSes one-by-one from the semifinal 01g-rows

Let ρ be a fixed semifinal 01g-row. In this section, we present four methods (Algorithm 1 to Algorithm 4) to classify all X ∈ ρ one-by-one, i.e., to decide whether X is an MHS or a dud. Algorithm 1 relies on 2.5 and 2.6.1, whereas Algorithm 2 uses the kind of VL in 2.6.2. Algorithms 3 and 4 rely on set-systems PotKi(ρ) and MC(H), some of whose features will be postponed to Section 9.

5.1 Referring to 2.5, let m1 < m2 < ⋯ < ms be the numbers that occur as cardinalities of H-hitting sets. Then,[21] m1 = μ and ms = w. Putting S ≔ SF(H), we have Min(S) = MHS(H), and following 2.5, we obtain MHS(H) in time O(sf · w · mhs · s). This method, call it Algorithm 1, compares favorably to future methods if we only care for MHSes of rather small cardinality, say ≤ μ + 2. Generally, for upper-bounded MHS-size, Algorithm 1 costs O(sf · w · mhs).

5.2 Let us view the hyperedges of H as bitstrings and take them as the rows of an h × w matrix A. Fix a semifinal 01g-row ρ and put k ≔ deg(ρ). For each fixed Y ∈ ρ (hence a hitting set), it holds that Y ∈ MHS(H) iff no set X ≔ Y \ {a} (a ∈ Y) is a hitting set. Whether or not VL based on A (see 2.6) is used, the formal cost to classify X is O(kh). Hence, classifying Y costs O(k^2 h). Furthermore, finding ρ ∩ MHS(H) costs O(|ρ| k^2 h), and finding MHS(H) with this method (call it Algorithm 2) costs O(sf · w^2 · h), since |Y| = k gives way to |Y| ≤ w.
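The essence of Algorithm 2, with the columns of A stored as integer bitmasks as in 2.6, can be sketched as follows (our Python illustration on the hypergraph H2 of (9); `is_mhs_vl` is our own name):

```python
H2 = [{1, 2, 5}, {3, 4}, {4, 5, 6}, {1, 3, 5}, {2, 6}]
h, w = len(H2), 6

# Column j of the h x w incidence matrix A as an h-bit mask.
col = {j: 0 for j in range(1, w + 1)}
for i, E in enumerate(H2):
    for j in E:
        col[j] |= 1 << i
FULL = (1 << h) - 1

def is_mhs_vl(Y):
    """Y is an MHS iff the BitOr of Y's columns is all-ones, while dropping
    any single element a of Y leaves some hyperedge unhit."""
    acc = 0
    for j in Y:
        acc |= col[j]
    if acc != FULL:                      # Y is not even a hitting set
        return False
    def hits_without(a):                 # test the candidate subset Y \ {a}
        m = 0
        for j in Y:
            if j != a:
                m |= col[j]
        return m == FULL
    return not any(hits_without(a) for a in Y)
```

Each of the k subset tests performs O(k) BitOrs on h-bit columns, matching the O(k^2 h) count stated above.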

5.2.1 As seen earlier, it costs O(k²h) to decide whether any k-element hitting set of a hypergraph with h hyperedges is minimal. While this bound is obvious, let us indicate a surprising improvement of it. To fix ideas, suppose that k = 5 and that the minimality of an H-hitting set Y = {1, 2, 3, 4, 5} (where |H| = h) needs to be decided. In 2.6.2, the VL way to handle Y demands to calculate

col₁₂₃₄ ≔ col₁ ∨ col₂ ∨ col₃ ∨ col₄, col₁₂₃₅ ≔ col₁ ∨ col₂ ∨ col₃ ∨ col₅, col₁₂₄₅ ≔ col₁ ∨ col₂ ∨ col₄ ∨ col₅, col₁₃₄₅ ≔ col₁ ∨ col₃ ∨ col₄ ∨ col₅, and col₂₃₄₅ ≔ col₂ ∨ col₃ ∨ col₄ ∨ col₅.

This requires 5 · 3 = 15 basic BitOr operations, but one can improve that to 11:

col₁₂ ≔ col₁ ∨ col₂, col₁₂₃ ≔ col₁₂ ∨ col₃, col₁₂₄ ≔ col₁₂ ∨ col₄, col₁₂₃₄ ≔ col₁₂₃ ∨ col₄, col₁₂₃₅ ≔ col₁₂₃ ∨ col₅, col₁₂₄₅ ≔ col₁₂₄ ∨ col₅, col₄₅ ≔ col₄ ∨ col₅, col₃₄₅ ≔ col₃ ∨ col₄₅, col₂₄₅ ≔ col₂ ∨ col₄₅, col₁₃₄₅ ≔ col₁ ∨ col₃₄₅, col₂₃₄₅ ≔ col₃ ∨ col₂₄₅.

Driving this idea further,[22] one can improve O(k²h) to O(k^{4/3} h).

5.3 Given a semifinal 01g-row ρ, suppose it was possible (more on that in 9.3) to obtain a set-system PotKi(ρ) ⊆ P[w] such that any given X ∈ ρ is a dud iff it gets killed by some Z ∈ PotKi(ρ) in the sense that Z ⊆ X. So suppose the toy row ρ̃ ≔ (1, g₁, g₁, g₁, g₂, g₂) has PotKi(ρ̃) = {15, 126}. Since 15 kills 125, 135, and 145, and 126 kills 126, we have four duds, and hence, ρ̃ ∩ MHS(H) = {136, 146}. The availability of PotKi(ρ) greatly facilitates the calculation of:

Duds(ρ) ≔ {X ∈ ρ : X is a dud} (= ρ \ MHS(H)).

Namely, embarking onto VL (which makes the more sense the larger |ρ|), we view the members of ρ as bitstrings and take them as the rows of a |ρ| × w matrix A. Starting with Duds(ρ) ≔ ∅, we process PotKi(ρ) one by one and update Duds(ρ) accordingly as follows (Algorithm 3). Say Z = {2, 4, 7} ∈ PotKi(ρ). If colᵢ is the i-th column of A, we calculate col ≔ col₂ ∧ col₄ ∧ col₇. Then, ones(col) is the set of row-numbers whose corresponding rows of A get killed by Z. Thus, we update Duds(ρ) ≔ Duds(ρ) ∪ ones(col).
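A sketch of Algorithm 3 with the columns of A realized as integer bitmasks over row numbers (data: the toy row ρ̃ and PotKi(ρ̃) from above; the function names are ours):

```python
def duds_via_killers(members, potki):
    """Algorithm 3 sketch: the members of rho are the rows of a
    |rho| x w 0/1 matrix; one AND of bit-columns per killer Z marks
    every X containing Z as a dud."""
    cols = {}                            # vertex -> column bitmask
    for r, X in enumerate(members):
        for v in X:
            cols[v] = cols.get(v, 0) | (1 << r)
    duds = 0
    for Z in potki:
        col = ~0                         # all-ones mask
        for v in Z:
            col &= cols.get(v, 0)
        duds |= col                      # ones(col) = rows killed by Z
    return {members[r] for r in range(len(members)) if duds >> r & 1}

# Toy row rho~ = (1, g1, g1, g1, g2, g2) from 5.3 with PotKi = {15, 126}.
members = [frozenset(s) for s in
           ({1,2,5}, {1,2,6}, {1,3,5}, {1,3,6}, {1,4,5}, {1,4,6})]
duds = duds_via_killers(members, [{1, 5}, {1, 2, 6}])
```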

5.3.1 Picked from the author’s random experiments, here comes a more demanding semifinal 01g-row ρ. It is defined by zeros(ρ) = ∅, ones(ρ) = {4, 5, 6}, has g-bubbles {1, 8}, {2, 10}, {3, 11}, {7, 9, 12}, and has

PotKi(ρ) = {Z₁, Z₂, Z₃, Z₄} ≔ {{2,4,6}, {2,5,6}, {1,5,10}, {5,8,10}}.

Here, Z₁ kills (exactly) the 12 sets of type {4,5,6} ∪ {a, 2, c, d}; Z₂ kills the sets of type {4,5,6} ∪ {a, 2, c, d}, i.e., the same as before; Z₃ kills the six sets of type {4,5,6} ∪ {1, 10, c, d}; and Z₄ the six sets of type {4,5,6} ∪ {8, 10, c, d}. Since the killed sets happen to be either identical or disjoint, it follows from 12 + 6 + 6 = |ρ| that ρ gets killed entirely. It is an example of a “sophisticated-bad” row, the exact definition following in 9.1. A sophisticated version of Algorithm 3 itself follows in 9.5.

5.4 Fix some hypergraph H ⊆ P[w]. Following [5], we say[23] S ⊆ [w] is an MC-set (or: is MC) iff each b ∈ S has a private hyperedge H ∈ H in the sense that H ∩ S = {b}. Each other H′ ∈ H either cuts out b sharply as well, or has |H′ ∩ S| ≥ 2, or has H′ ∩ S = ∅. It is evident that a subset of an MC-set is again an MC-set. Hence, the family

(18) MC(H) ≔ {S ⊆ [w] : S is MC} is a set-ideal.

Note that MC-sets need not be hitting sets. To witness, take H₃ ≔ {{1,3}, {2,4}, {3,4}}. One checks that {1, 2} is an MC-set yet not an H₃-hitting set. Using other terminology, the following was proven in [5].
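The MC property is easy to test elementwise; a small sketch (the helper `is_mc` is ours) that confirms the H₃ example:

```python
def is_mc(S, hyperedges):
    """S is an MC-set iff each b in S has a private hyperedge H
    with H ∩ S = {b} (i.e., H cuts b sharply from S)."""
    return all(any((H & S) == {b} for H in hyperedges) for b in S)

H3 = [frozenset(h) for h in ({1, 3}, {2, 4}, {3, 4})]
# {1,2} is MC ({1,3} is private for 1, {2,4} private for 2),
# yet {1,2} misses the hyperedge {3,4}, so it is no hitting set.
```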

Theorem 2

For any hypergraph H, it holds that HS(H) ∩ MC(H) = MHS(H).

Proof

Take any X ∈ MHS(H), and fix b ∈ X. There are b-hyperedges H, i.e., with b ∈ H, since otherwise X \ {b} would remain a hitting set, in contradiction to X being minimal. Suppose none of the b-hyperedges were to cut b sharply from X. Then, (X \ {b}) ∩ H ≠ ∅ for all b-hyperedges H, and of course, (X \ {b}) ∩ H′ ≠ ∅ for all other hyperedges H′. This contradicts X ∈ MHS(H), and hence shows that X ∈ MC(H). From MHS(H) ⊆ HS(H) follows MHS(H) ⊆ HS(H) ∩ MC(H).

Conversely, pick Y ∈ HS(H) ∩ MC(H). Since by assumption Y is a hitting set, it suffices to show that Y \ {b} is no hitting set for all b ∈ Y. In view of Y ∈ MC(H), some H₀ ∈ H cuts b sharply from Y; hence, H₀ ∩ (Y \ {b}) = ∅; hence, Y \ {b} is no hitting set.□

As another consequence of Theorem 2, we find that for each semifinal 01g-row ρ from (17), we have

(19) ρ ∩ MC(H) = ρ ∩ HS(H) ∩ MC(H) = ρ ∩ MHS(H).

This suggests an elegant method for classifying any X ∈ ρ. Namely, initialize a testset T to T ≔ ∅. Process all H ∈ H, and update T ≔ T ∪ (X ∩ H) (programmer’s speak) whenever |X ∩ H| = 1. As soon as T = X occurs, we know that X ∈ MC(H), and so X ∈ MHS(H) by (19). If T = X never occurs, then X ∉ MHS(H). Since classifying X that way costs O(hw), we have a method, call it Algorithm 4, that calculates ρ ∩ MHS(H) in time O(|ρ| h w).
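A sketch of Algorithm 4's testset idea (set-based, without VL; the function name is ours):

```python
def classify_by_mc(X, hyperedges):
    """Algorithm 4 sketch: grow the testset T of elements that some
    hyperedge cuts out sharply from X.  For X taken from a semifinal
    row (hence a hitting set), T = X means X is an MHS by (19)."""
    T = set()
    for H in hyperedges:
        cut = X & H
        if len(cut) == 1:
            T |= cut
        if T == X:
            return True          # X in MC(H), hence X in MHS(H)
    return False                 # T = X never occurred: X is a dud

H2 = [frozenset(h) for h in ({1,2,5}, {3,4}, {4,5,6}, {1,3,5}, {2,6})]
```

For instance, the hitting set {1,3,6} of H₂ is classified as an MHS, while the (non-minimal) hitting set {1,3,5,6} is not.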

5.4.1 Let us indicate how VL may further speed up Algorithm 4. For starters, the w · h many sets

S(a, H) ≔ {Z ∈ ρ : a ∈ H ∩ Z, |H ∩ Z| ≥ 2}  (a ∈ [w], H ∈ H)

need to be calculated. To do so, initialize all of them as S(a, H) ≔ ∅. Next, for each fixed Z ∈ ρ, do the following. Using VL, determine all K ∈ H with |K ∩ Z| ≥ 2. For any such K, add[24] Z to all sets S(a, K) (a ∈ K ∩ Z). For a ∈ [w] and Z ∈ ρ with a ∈ Z, call Z an a-dud if there is no hyperedge that sharply cuts a from Z (and so Z ∉ MHS(H)). It is easy to see that

S(a) ≔ ⋂ {S(a, H) : a ∈ H ∈ H}

is the set of all a-duds and that VL speeds up the calculation of S(a) the more the bigger |ρ|. Consequently,

Duds ≔ ⋃ {S(a) : a ∈ [w]}

is the set of all duds contained in ρ. Put another way, ρ ∩ MHS(H) = ρ \ Duds.

6 Replacement of merely-good rows by very-good rows

In Section 5, we presented four algorithms to unravel the MHSes contained in a fixed semifinal 01g-row ρ. Any such MHS, viewed as a bitstring x ∈ {0,1}ʷ, is[25] a very-good row on its own, and so one could say that each semifinal row ρ is either bad or can be represented as a disjoint union of very-good rows. But it would be nice to use fewer than |ρ ∩ MHS(H)| very-good rows to exhaust ρ.

Suppose we possess (more on that later) criteria that allow us to quickly classify each semifinal 01g-row as bad, merely-good, or very-good. The bad ones are thrown away, and the very-good ones are in optimal shape, but what about the merely-good rows ρ? Are we not back to square one, needing to scan the members of ρ one by one? Not so. We start with a toy example in 6.1 and follow up with theory in 6.2.

6.1 Consider the hypergraph H₄ ⊆ P[6] with hyperedges:

(20) H 1 = { 1 , 5 , 6 } , H 2 = { 3 , 4 , 5 } , H 3 = { 2 , 3 } , and H 4 = { 1 , 4 , 6 } .

Feeding the transversal e-algorithm with H₄ yields (among others) the semifinal 01g-row r in Table 3. It is good since it, e.g., contains the minimal H₄-hitting set {1, 2, 5}. Yet r is not very-good since {1, 3, 5} ∈ r \ MHS(H₄) is a dud (viewing that {1, 3} ∈ HS(H₄)).

Table 3

Replacement of a merely-good row by very-good rows

1 2 3 4 5 6
r = g 1 g 2 g 2 g 1 1 g 1 Merely-good
r 1 = 1 g 2 g 2 0 1 0 Merely-good
r 2 = 0 g 2 g 2 1 1 0 Very-good
r 3 = 0 g 2 g 2 0 1 1 Merely-good
r 4 = 1 1 0 0 1 0 Very-good
r 5 = 0 1 0 0 1 1 Very-good
ρ 1 = g 1 1 0 g 1 1 g 1 Very-good
ρ 2 = g 1 0 1 g 1 1 g 1 Merely-good
ρ 3 = 0 0 1 1 1 0 Very-good

We strive to replace r by disjoint rows that are very-good and jointly contain the same minimal hitting sets as r. It is natural to start by picking any g-wildcard of r, say (g₁, g₁, g₁), and to expand r accordingly as r = r₁ ⊎ r₂ ⊎ r₃ (Table 3). We call r₁, r₂, and r₃ the sons of r. One checks that {2,4,5}, {3,4,5} ∈ MHS(H₄), and so r₂ is very-good. As to r₁, it is merely-good. Specifically, by expanding the second g-wildcard (g₂, g₂), one obtains r₁ = (1,0,1,0,1,0) ⊎ (1,1,0,0,1,0), where the first son is bad since {1,3} ∈ HS(H₄), and the second (call it r₄) is very-good. Also r₃ is merely-good; it decomposes as r₃ = (0,0,1,0,1,1) ⊎ (0,1,0,0,1,1), where the first son is bad ({3,6} ∈ HS(H₄)) and the second (call it r₅) is very-good. To summarize, we managed to replace the semifinal merely-good row r by the final very-good rows r₂, r₄, and r₅.

Alternatively, one can start by expanding ( g 2 , g 2 ) . This yields the rows ρ 1 and ρ 2 in Table 3. One checks that ρ 1 is very-good, but ρ 2 is not. Specifically, when expanding ( g 1 , g 1 , g 1 ) in ρ 2 , two of the three arising 01-rows are bad. The third one (labeled ρ 3 ) is very-good. To summarize, r can even be replaced by two very-good rows, i.e., ρ 1 and ρ 3 .

6.2 The aforementioned example suggests the following method to replace a semifinal good row r by final very-good rows that jointly contain the same MHSes as r. There is nothing to do if r is already very-good. By induction, assume that an “mg-stack” is filled with disjoint merely-good 012g-rows that jointly contain exactly those MHSes of r which are not in a very-good row of some “vg-stack.” (Initially, r is the only member of the mg-stack.) Remove the top row r′ from the mg-stack. Expanding any g-wildcard of r′ yields candidate[26] sons r₁′, r₂′, …, akin to 6.1. The very-good candidate sons are put on the vg-stack. The bad ones are thrown away, and the merely-good ones are put on top of the mg-stack. It is clear that the new mg-stack maintains the induction hypothesis. When the mg-stack is empty, the MHSes of r constitute the disjoint union of the very-good rows in the vg-stack.

Rather than “expanding any” g -wildcard of r , Section 9.5 will show that some extra gadget allows us to create candidate sons in alternative and more efficient ways. As to “the bad ones are thrown away,” criteria for badness follow in (31) and (37).

7 Deciding very-goodness using inclusion–exclusion

The larger our semifinal rows ρ in (17), the more desirable it is to have efficient criteria for very-goodness and badness. In particular, in Section 6, we reduced the handling of merely-good rows, to a large extent, to the existence of such tests. In this and the next section, we offer two very-goodness tests. The one in Section 7 relies on inclusion–exclusion.

7.1 Consider a fixed semifinal 01g-row ρ triggered by H ⊆ P[w]. We say that Z ⊆ [w] is a potential ρ-spoiler if Z = Y \ {a} for some Y ∈ ρ and a ∈ Y. In Table 4, the set-system of all potential ρ-spoilers of some semifinal ρ is represented as a disjoint union d₁ ⊎ ⋯ ⊎ d₅ of 01g-rows. Its cardinality is 24 + ⋯ + 6 = 74. Generally, the following holds:

(21) With γ₁, …, γₜ being the lengths of the g-wildcards of ρ, the number of potential ρ-spoilers of the semifinal row ρ is Pot ≔ (γ₂γ₃⋯γₜ) + (γ₁γ₃⋯γₜ) + ⋯ + (γ₁⋯γₜ₋₁) + |ones(ρ)| · γ₁⋯γₜ.

Table 4

Counting ρ -spoilers by applying inclusion–exclusion

1 2 3 4 5 6 7 8 9 10 11 12 Cardinality
ρ = g 1 g 1 0 g 2 g 2 g 2 g 3 g 3 g 3 g 3 1 1 630
d 1 = g 1 g 1 0 g 2 g 2 g 2 g 3 g 3 g 3 g 3 0 1 24
d 2 = g 1 g 1 0 g 2 g 2 g 2 g 3 g 3 g 3 g 3 1 0 24
d 3 = 0 0 0 g 2 g 2 g 2 g 3 g 3 g 3 g 3 1 1 12
d 4 = g 1 g 1 0 0 0 0 g 3 g 3 g 3 g 3 1 1 8
d 5 = g 1 g 1 0 g 2 g 2 g 2 0 0 0 0 1 1 6
δ 1 = g 1 g 1 0 1 0 0 0 0 g 3 g 3 0 1 4
δ 2 = g 1 g 1 0 1 0 0 0 0 g 3 g 3 1 0 4
δ 3 = 0 0 0 1 0 0 0 0 g 3 g 3 1 1 2
δ 4 = g 1 g 1 0 0 0 0 0 0 g 3 g 3 1 1 4
δ 5 = g 1 g 1 0 1 0 0 0 0 0 0 1 1 2
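Formula (21) and the count 74 can be double-checked by brute force for the row ρ of Table 4 (bubbles {1,2}, {4,5,6}, {7,8,9,10} and ones(ρ) = {11,12}); the helper names are ours:

```python
from itertools import product
from math import prod

def pot(gammas, n_ones):
    """Formula (21): Pot = sum over bubbles of the product of the
    OTHER gamma's, plus |ones(rho)| * gamma_1 ... gamma_t."""
    P = prod(gammas)
    return sum(P // g for g in gammas) + n_ones * P

# Enumerate rho and collect all distinct sets Y \ {a}.
bubbles = [(1, 2), (4, 5, 6), (7, 8, 9, 10)]
members = [frozenset(pick) | {11, 12} for pick in product(*bubbles)]
spoilers = {Y - {a} for Y in members for a in Y}
```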

For a semifinal ρ, we define a ρ-spoiler as a potential ρ-spoiler that happens to be an H-hitting set. If Sp = Sp(ρ, H) is the number of ρ-spoilers, then a moment’s reflection confirms:

(22) The semifinal row ρ is very-good iff Sp = 0.

If say Hᵢ, Hⱼ, and Hₗ are hyperedges of H, then we define N(i, j, ℓ) as the number of potential ρ-spoilers Z with Z ∩ Hᵢ = Z ∩ Hⱼ = Z ∩ Hₗ = ∅. Since a potential spoiler is a spoiler iff it cuts all hyperedges of H, we can compute Sp with inclusion–exclusion as:

(23) Sp = Pot − N(1) − N(2) − ⋯ − N(h) + N(1,2) + ⋯ + (−1)^h N(1,2,…,h).

Calculating 2^h terms N(⋯) may seem inefficient, but the larger ρ and w, and the smaller h, the more inclusion–exclusion will prevail over the “naive” way in 5.2, which spends O(hk²) time per k-element member X ∈ ρ.
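A brute-force rendering of (23) (the helper name is ours). For illustration, we feed it all 3-subsets of [6] as “potential spoilers” against H₂ from (9), so that Sp counts the nine size-3 hitting sets:

```python
from itertools import combinations

def spoiler_count(potential_spoilers, hyperedges):
    """Formula (23): Sp via inclusion-exclusion.  The term for the
    index set {i1,...,it} counts the potential spoilers Z avoiding
    H_i1 ∪ ... ∪ H_it and enters with sign (-1)^t."""
    h = len(hyperedges)
    Sp = 0
    for t in range(h + 1):
        for idx in combinations(range(h), t):
            U = set().union(*(hyperedges[i] for i in idx))
            Sp += (-1) ** t * sum(1 for Z in potential_spoilers
                                  if not (Z & U))
    return Sp

H2 = [{1,2,5}, {3,4}, {4,5,6}, {1,3,5}, {2,6}]
PS = [set(c) for c in combinations(range(1, 7), 3)]
```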

7.2 Furthermore, based on the first three Bonferroni[27] inequalities, these implications often alleviate full-blown inclusion–exclusion:

  1. Pot − N(1) − ⋯ − N(h) > 0 ⟹ Sp > 0 (not very-good),

  2. Pot − N(1) − ⋯ − N(h) + N(1,2) + ⋯ + N(h−1, h) = 0 ⟹ Sp = 0 (very-good),

  3. Pot − N(1) − ⋯ − N(h) + N(1,2) + ⋯ + N(h−1, h) − N(1,2,3) − ⋯ − N(h−2, h−1, h) > 0 ⟹ Sp > 0.

7.3 Full-blown inclusion–exclusion can also be avoided by other means. Recall that N(i₁, …, iₜ) is the number of potential spoilers Z with Z ∩ H_{i1} = ⋯ = Z ∩ H_{it} = ∅. But this is equivalent to Z ∩ (H_{i1} ∪ ⋯ ∪ H_{it}) = ∅. If the hyperedges are all very large (say of cardinality > w/3), then it is likely that U ≔ H_{i1} ∪ ⋯ ∪ H_{it} = [w] even for small index sets {i₁, …, iₜ} ⊆ [h]. But then, N(i₁, …, iₜ) = 0. (More generally, “= 0” happens iff U contains a g-bubble or cuts ones(ρ).)

This ties in with the following more general endeavor (work in progress, arXiv:1309.6927v3). In every inclusion–exclusion problem, the family of relevant index sets {i₁, …, iₜ}, i.e., the ones that satisfy N(i₁, …, iₜ) ≠ 0, constitutes a set-ideal N ⊆ P[h]. If this so-called nerve N is small and can be obtained in clever ways (i.e., not by scanning P[h]), then inclusion–exclusion speeds up considerably.

7.4 According to (22), it follows from Sp > 0 that ρ is not very-good. But ρ stays merely-good (as opposed to bad) unless Sp sky-rockets. To make this more precise, let us generally order the sizes of the g-wildcards occurring in ρ as γ₁ ≤ ⋯ ≤ γₜ. Then, each ρ-spoiler Z can prevent at most γₜ many X ∈ ρ from being in MHS(H). Since |ρ| = (γ₁⋯γₜ₋₁) · γₜ, we conclude:

(24) If γ₁ ≤ ⋯ ≤ γₜ₋₁ ≤ γₜ and Sp(ρ) < γ₁γ₂⋯γₜ₋₁, then ρ is good.

Although the bound γ₁⋯γₜ₋₁ is sharp, in practice,[28] it is likely that the row ρ remains merely-good for much higher values of Sp(ρ).

8 Deciding very-goodness using Rado’s theorem

Our second method to decide the very-goodness of a semifinal 01g-row ρⱼ is based on certain “critical” pairs (ρᵢ, ρⱼ). Matroids [9] will play a crucial role. Let us jump in medias res with Rado’s theorem [9, p. 702]:

(25) Consider any matroid M on a set E and any family {Qᵢ : i ∈ I} of subsets of E. Then, this family has a hitting set which is M-independent iff |J| ≤ rank(⋃{Qⱼ : j ∈ J}) for all J ⊆ I.

8.1 Apart from inviting matroids, here comes the second ingredient:

(26) The semifinal row ρⱼ in (17) is not very-good iff there is a semifinal row ρᵢ ≠ ρⱼ such that X ⊆ Y for some X ∈ ρᵢ and Y ∈ ρⱼ.

Proof of (26)

Assume that such X and Y with X ⊆ Y exist. Since X = Y is impossible (ρᵢ ∩ ρⱼ = ∅), we have X ⊊ Y. Since Y properly contains an H-hitting set, we conclude Y ∉ MHS(H), and so ρⱼ is not very-good. Conversely, suppose that ρⱼ is not very-good. Picking any dud Y ∈ ρⱼ \ MHS(H), there is X ∈ MHS(H) with X ⊊ Y. This X belongs to a unique semifinal row ρᵢ by (17). We have ρᵢ ≠ ρⱼ since deg(ρᵢ) = |X| < |Y| = deg(ρⱼ).□

In view of (26), we call (X, Y) a spoiling pair for ρⱼ (not to be confused with the “spoilers” in Section 7) if

(Y ∈ ρⱼ) and (X ∈ ρᵢ for some i ≠ j) and X ⊊ Y.

When (X, Y) is a spoiling pair for ρⱼ, necessarily there is some i such that (ρᵢ, ρⱼ) is a critical pair in the sense that deg(ρᵢ) < deg(ρⱼ) and ones(ρᵢ) ∩ zeros(ρⱼ) = ∅. This speeds up the search for spoiling pairs (X, Y) for likely-very-good rows ρⱼ.

8.2 To illustrate, consider a hypothetical hypergraph that has triggered the two semifinal rows ρ₁ and ρ₂ in Table 5. In fact, (ρ₂, ρ₁) is a critical pair since deg(ρ₂) = 4 < 5 = deg(ρ₁) and ones(ρ₂) ∩ zeros(ρ₁) = {5} ∩ {1,2,3} = ∅. In order to efficiently decide the existence of a spoiling pair (X, Y) for ρ₁ (with X ∈ ρᵢ = ρ₂), note that any such (X, Y) has X ∩ zeros(ρ₁) = ∅, and so X ∈ ρ₂′ (Table 5). But why does ρ₂′ also differ from ρ₂ on the rightmost part? Because the g₁g₁ in ρ₂ was forced to become 01. Now, the 1 in ρ₂′ triggers a 1 at the same location in ρ₁, which transforms g₃g₃g₃ in ρ₁ to 100, i.e., replaces ρ₁ by ρ₁′. Dropping the common 0’s of ρ₁′, ρ₂′, one obtains two 1g-rows ρ₁″, ρ₂″ with the same index set, in our case E ≔ {4, 5, 6, 7, 8, 9, 10, 11, 12}.

Table 5

Deciding the existence of a spoiling pair with a theorem of Rado

1 2 3 4 5 6 7 8 9 10 11 12 13 14
ρ 1 = 0 0 0 1 1 g 1 g 1 g 1 g 2 g 2 g 2 g 3 g 3 g 3
ρ 2 = g 1 g 2 g 3 g 2 1 g 2 g 2 g 3 g 3 g 3 g 2 g 1 g 2 g 2
ρ₁′ = 0 0 0 1 1 g 1 g 1 g 1 g 2 g 2 g 2 1 0 0
ρ₂′ = 0 0 0 g 2 1 g 2 g 2 g 3 g 3 g 3 g 2 1 0 0
ρ₁″ = 1 1 g 1 g 1 g 1 g 2 g 2 g 2 1
ρ₂″ = g 2 1 g 2 g 2 g 3 g 3 g 3 g 2 1

That is when the matroid takes over. Namely, the partition E = {4} ⊎ {5} ⊎ {6,7,8} ⊎ {9,10,11} ⊎ {12} determined by the 1’s and g-wildcards of ρ₁″ defines a so-called partition matroid M = M(E), where, by definition, X ⊆ E is M-independent iff X cuts each part of the partition in at most one element. In contrast, the analogous partition induced by ρ₂″ is not used for a second matroid but rather yields the set-system {Qᵢ : i ∈ I} in (25). In our case, I = {1, 2, 3, 4} and Q₁ = {4,6,7,11}, Q₂ = {5}, Q₃ = {8,9,10}, Q₄ = {12}. Consequently, if X is an M-independent transversal of {Qᵢ : i ∈ I}, then X extends to a spoiling pair (X, Y) of ρ₁ (and each spoiling pair arises this way). The existence of such spoiling pairs is handled by the rank condition in statement (25). Take say J = {2, 3, 4}. Then,

|J| = 3 ≤ 4 = rank(Q₂ ∪ Q₃ ∪ Q₄) = rank({5, 8, 9, 10, 12}).

One sees that, generally, the cardinality of I in (25) equals deg(ρ₂), which, even for large hypergraphs H, often is a modest number (and so all J ⊆ I can be evaluated painlessly).
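For a partition matroid, the rank of a set is simply the number of parts it meets, so the condition in (25) is a few lines of code (a sketch on the data of 8.2; the helper names are ours):

```python
from itertools import combinations

def rank(S, parts):
    """Rank in the partition matroid M(E): number of parts S meets."""
    return sum(1 for p in parts if p & S)

def rado_condition(parts, Q):
    """Rado's criterion (25): an M-independent transversal of
    {Q_i : i in I} exists iff |J| <= rank of the union of the Q_j
    (j in J) for every nonempty J."""
    return all(
        len(J) <= rank(set().union(*(Q[j] for j in J)), parts)
        for t in range(1, len(Q) + 1)
        for J in combinations(range(len(Q)), t)
    )

# Data of 8.2: partition of E read off rho1'' and the Q_i off rho2''.
parts = [{4}, {5}, {6, 7, 8}, {9, 10, 11}, {12}]
Q = [{4, 6, 7, 11}, {5}, {8, 9, 10}, {12}]
```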

9 The benefits of having MC ( H ) and MinNotMC ( H )

Section 9 is the longest section; it boosts our understanding and our exploitation of the set-ideal MC(H) introduced in 5.4.

Subsection 9.1 is about the facets of MC(H) and about easy-bad rows. The set-system MinNotMC(H) (consisting of the generators of the complementary set-filter of MC(H)) is introduced in 9.2. Our third criterion (after Sections 7 and 8) for very-goodness appears in 9.3. Here, PotKi(ρ) from 5.3 gets trimmed to Ki(ρ). After 9.4 (yet another VL elaboration), in 9.5, we enhance Algorithm 3 from 5.3. Specifically, by virtue of Ki(ρ), the one-by-one output of ρ ∩ MHS(H) gives way to a compressed representation of ρ ∩ MHS(H).

The remainder of Section 9 relies on the dual companion of the transversal e-algorithm, i.e., the noncover n-algorithm, which we glimpsed in 3.3. In 9.6, the latter represents MC(H) as a disjoint union (33) of 012n-rows. In a sense, (33) dualizes (11). The dualization continues in 9.7 in that 01g-rows get accompanied by 01ḡ-rows. Furthermore, MHS(H) is represented as a disjoint union of set-systems ρᵢ ∩ σⱼ, where the ρᵢ’s are 01g-rows and the σⱼ’s are 01ḡ-rows. Using inclusion–exclusion, |ρᵢ ∩ σⱼ| can be calculated quickly (9.8). This enables us to calculate |MHS(H)| without knowing MHS(H). Merely deciding whether or not ρᵢ ∩ σⱼ = ∅ works faster still, and it, e.g., leads to the badness criterion (37).

9.1 Consider any X ∈ HS(H) ∩ MC(H), and suppose X were not a facet of MC(H). Then, there would be a facet Y with X ⊊ Y, and so Y ∈ HS(H) ∩ MC(H). But in view of Theorem 2, this yields the contradiction of two comparable members of MHS(H). We conclude that

(27) At most the facets of MC(H) can be minimal H-hitting sets.

In 5.4, we found that with respect to H₃, the set {1, 2} is MC but no hitting set. One checks that {1, 2} is a facet of MC(H₃). This shows that the necessary condition from (27) of being a facet of MC(H) is not sufficient for being an MHS. Because MC(H) ⊆ P[w] is a set-ideal by (18), we can consider the complementary set-filter P[w] \ MC(H) (see 3.3). This yields a neat sufficient condition for badness:

(28) If the semifinal 01g-row ρ is such that ones(ρ) is not MC, then ρ is bad.

To prove it, note that all X ∈ ρ are supersets of ones(ρ), and so ones(ρ) ∉ MC(H) implies X ∉ MC(H).

A semifinal 01g-row satisfying (28) will be[29] called easy-bad. A bad row that is not easy-bad is sophisticated-bad; an example was given in 5.3.1.

9.2 By definition, the set-system

(29) MinNotMC ( H ) ( of cardinality mnMC )

consists of the generators of the set-filter in 9.1. To spell it out, MinNotMC(H) consists of those subsets of [w] that are not MC but all of whose proper subsets are MC. While MinNotMC(H) is beneficial, it is also expensive to compute. Before we turn to the benefits, here comes a toy example.

9.2.1 It turns out (see Sec. 10) that MinNotMC ( H 2 ) is the set-system G in (8). Here, H 2 is from (9). To summarize,

(30) H₂ = {125, 34, 456, 135, 26} has MinNotMC(H₂) = {123, 15, 126, 256, 134, 345, 346, 246}.

For instance, Z ≔ {2, 5, 6} is not MC since no H₂-hyperedge cuts 6 sharply: Z ∩ {4,5,6} = {5,6} and Z ∩ {2,6} = {2,6}. However, let us verify that all 2-subsets Z′ ⊊ Z (and hence all proper subsets) are MC. For instance, take Z′ = {5, 6}. While still Z′ ∩ {4,5,6} = {5,6}, now Z′ ∩ {2,6} works, i.e., equals {6}. Since also Z′ ∩ {1,2,5} = {5}, the set Z′ is MC. Similarly, one checks that the other 2-subsets of Z, i.e., {2,6} and {2,5}, are MC-sets.
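Since w = 6 is tiny, (30) can be verified by scanning all of P[6] (such a scan is exactly what Section 10 seeks to avoid for larger w; the helper names are ours):

```python
from itertools import combinations

def is_mc(S, hyperedges):
    """S is MC iff every b in S has a private hyperedge H, H ∩ S = {b}."""
    return all(any((H & S) == {b} for H in hyperedges) for b in S)

def min_not_mc(w, hyperedges):
    """MinNotMC(H): subsets of [w] that are not MC although all their
    proper subsets are (testing the maximal proper subsets suffices,
    since MC(H) is a set-ideal)."""
    found = set()
    for t in range(w + 1):
        for S in map(set, combinations(range(1, w + 1), t)):
            if not is_mc(S, hyperedges) and \
               all(is_mc(S - {b}, hyperedges) for b in S):
                found.add(frozenset(S))
    return found

H2 = [frozenset(h) for h in ({1,2,5}, {3,4}, {4,5,6}, {1,3,5}, {2,6})]
result = min_not_mc(6, H2)
```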

9.3 We are now fit to return to the set-systems PotKi(ρ) in 5.3. It follows from (19) that X ∈ ρ is a dud iff X is no MC-set, i.e., iff X contains some member of MinNotMC(H). In other words, setting

PotKi(ρ) ≔ MinNotMC(H)

fulfills the requirement of 5.3 for whatever semifinal 01g-row ρ. Trouble is, the set PotKi(ρ) may be bigger than it needs to be. Put another way, many members of PotKi(ρ) are just potential killers, i.e., dangerous for other rows, but not harming any X ∈ ρ. Thus, if Z ∈ MinNotMC(H) is such that Z ⊄ X for all X ∈ ρ, we are led to say Z is ρ-harmless. Putting

Harmless(ρ) ≔ {Z ∈ MinNotMC(H) : Z is ρ-harmless}

and

Ki(ρ) ≔ MinNotMC(H) \ Harmless(ρ),

we hence obtain a third very-goodness criterion:

(31) A semifinal 01g-row ρ is very-good iff Ki(ρ) = ∅.

Recall that H₂ triggered the R = 4 semifinal 01g-rows ρ₁, …, ρ₄ in (16), all of which happened to be of the same degree and hence very-good. In accordance with (31), one verifies that indeed Ki(ρ₁) = ⋯ = Ki(ρ₄) = ∅.

Recalling the definition of “easy-bad” in 9.1 and putting Ki[0] ≔ {Z ∈ Ki(ρ) : Z ⊆ ones(ρ)}, we further claim:

(32) Ki[0] ≠ ∅ iff ρ is easy-bad.

Proof of (32)

If Z ∈ Ki[0], then Z (being a killer) is not MC; hence, the superset ones(ρ) ⊇ Z is not MC; hence, ρ is easy-bad. If, conversely, ρ is easy-bad, then ones(ρ) (being not MC) contains some Z ∈ MinNotMC(H). Obviously, Z ∈ Ki[0].□

9.4 As crisp as (31) may look, viewing that MinNotMC(H) is hard to find (Section 10), the criteria for very-goodness derived in Sections 7 and 8 remain attractive. The good news is that once MinNotMC(H) has been conquered, VL will yield Ki(ρᵢ) simultaneously for all semifinal 01g-rows ρᵢ (1 ≤ i ≤ R). Namely,[30] we start by initializing certain auxiliary sets Ha(i) ≔ ∅ for all 1 ≤ i ≤ R. For each fixed Z ∈ MinNotMC(H), we will calculate the set I(Z) of all i ≤ R that have Z ∈ Harmless(ρᵢ) and, accordingly, update Ha(i) ≔ Ha(i) ∪ {Z} for all i ∈ I(Z). Hence, once all Z ∈ MinNotMC(H) have been processed, all Ha(i) will have the correct content Ha(i) = Harmless(ρᵢ) (and so Ki(ρᵢ) = MinNotMC(H) \ Ha(i) is obtained).

Calculating I(Z) for fixed Z works as follows. Say ρ₁ = (0, 1, 0, g₁, g₁, g₂, g₂, g₂, g₃, g₃). It will trigger the first three 01∗-rows r₁, r₂, and r₃ of the matrix A that underlies the VL application to come. Turning all existing 1’s to 0’s, setting all existing 0’s to ∗ (more on that in a moment), and filling exactly one gᵢ-wildcard with 1’s and the others with 0’s yields

r₁ = (∗, 0, ∗, 1, 1, 0, 0, 0, 0, 0), r₂ = (∗, 0, ∗, 0, 0, 1, 1, 1, 0, 0), r₃ = (∗, 0, ∗, 0, 0, 0, 0, 0, 1, 1).

In order to remember the number i = 1 of the semifinal 01g-row ρᵢ triggering r₁, r₂, and r₃, we set nsf(1) = nsf(2) = nsf(3) ≔ 1. Say ρ₂ has two g-bubbles. Then, it triggers analogous 01∗-rows r₄, r₅ (written below r₁ to r₃), and we record nsf(4) = nsf(5) ≔ 2. And so it goes on with ρ₃ up to ρ_R.

Having calculated A (say it has dimensions 41 × 10), we can begin to process all Z ∈ MinNotMC(H). If say Z₁ = {5, 6, 7}, calculate the column col ≔ col₅ + col₆ + col₇. Then, col = (1, 2, 0, …), where the fact that the second component is 2 testifies that Z₁ cannot be contained in any member of the semifinal 01g-row with number nsf(2) = 1 (since Z₁ cuts one g-bubble of that row in ≥ 2 elements). As another example, suppose that Z₂ is such that the corresponding length-41 column col″ has 20 components equal to 1, 20 equal to 0, and the 13th component is ∗. How does this translate to plain language? It means that Z₂ is harmless only for the semifinal 01g-row ρⱼ (j ≔ nsf(13)) because Z₂ ∩ zeros(ρⱼ) ≠ ∅, and so Z₂ cannot be contained in any member of ρⱼ. (For all other semifinal 01g-rows, Z₂ is a killer since it does not clash with their 0’s and cuts all their g-bubbles in at most one element.) For general Z ∈ MinNotMC(H) with coupled column col, let J be the position-set of the components ≥ 2 or ∗ that occur in col. By the above, it is clear that I(Z) = {nsf(j) : j ∈ J} (it does not matter that nsf(j) = nsf(j′) for j ≠ j′ is possible).

9.5 Here comes an enhanced version of Algorithm 3 from 5.3, in that one-by-one enumeration of the MHSes in a semifinal 01g-row ρ gives way to “processing” all killers Z ∈ Ki(ρ). Akin to 6.2, this leads to an mg-stack and a vg-stack such that, in the end, ρ ∩ MHS(H) is the disjoint union of all vg-rows in the vg-stack. Specifically, let Z be the killer that is pending to be “imposed” on the top row ρ of the mg-stack. If say ρ is as in Table 6 and Z ≔ [6], then imposing Z upon ρ yields the candidate sons ρ₁, …, ρ₄ of ρ. They are such that ρ₁ ⊎ ⋯ ⊎ ρ₄ is the set of all X ∈ ρ with Z ⊄ X.

Table 6

Use of a 01-Abraham-Flag to impose the killer Z ≔ {1, 2, …, 6} upon ρ

1 2 3 4 5 6 7 8 9 10 11 12 13
ρ = g 1 g 2 g 3 g 4 1 1 g 1 g 1 g 2 g 3 g 3 g 4 0
ρ 1 = 0 g 2 g 3 g 4 1 1 g 1 g 1 g 2 g 3 g 3 g 4 0
ρ 2 = 1 0 g 3 g 4 1 1 0 0 1 g 3 g 3 g 4 0
ρ 3 = 1 1 0 g 4 1 1 0 0 0 g 3 g 3 g 4 0
ρ 4 = 1 1 1 0 1 1 0 0 0 0 0 1 0

Note that if Z intersects some g-bubble of ρ in ≥ 2 elements, then it already holds that Z ⊄ X for all X ∈ ρ.
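The Abraham-flag split of Table 6 can be sketched as follows (rows are modeled as ones(ρ) plus a list of g-bubbles, zeros left implicit; it is assumed, as in the text, that Z avoids zeros(ρ) and cuts each bubble in at most one element; the function names are ours):

```python
from itertools import product

def members(ones, bubbles):
    """All X of a 01g-row: the ones plus exactly one element per bubble."""
    return {frozenset(ones) | set(pick) for pick in product(*bubbles)}

def impose(ones, bubbles, Z):
    """Abraham-flag split (Table 6): son k fixes the Z-element of the
    first k-1 touched bubbles to 1 and forbids it in the k-th one."""
    touched = [i for i, B in enumerate(bubbles) if set(B) & Z]
    sons = []
    for k, i in enumerate(touched):
        o, bs = set(ones), []
        for j, B in enumerate(bubbles):
            cut = set(B) & Z
            if j in touched[:k]:
                o |= cut                     # bubble already forced to 1
            elif j == i:
                rest = set(B) - cut          # flag: the Z-element becomes 0
                if len(rest) == 1:
                    o |= rest                # bubble collapses to a single 1
                else:
                    bs.append(tuple(sorted(rest)))
            else:
                bs.append(B)
        sons.append((o, bs))
    return sons

# Row of Table 6: ones(rho) = {5,6}, bubbles g1..g4; killer Z = [6].
bubbles = [(1, 7, 8), (2, 9), (3, 10, 11), (4, 12)]
ones, Z = {5, 6}, {1, 2, 3, 4, 5, 6}
sons = impose(ones, bubbles, Z)
```

The four sons reproduce ρ₁, …, ρ₄ of Table 6 and jointly hold exactly the X ∈ ρ with Z ⊄ X.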

9.6 Recall from 3.3 that the noncover n-algorithm yields (for each set-system S) the family NC(S) of all S-noncovers as a disjoint union of 012n-rows σ̄ⱼ. If, in particular, S ≔ MinNotMC(H), then NC(S) = MC(H). Therefore, (11) specializes to:

(33) MC(H) = σ̄₁ ⊎ ⋯ ⊎ σ̄_R̃.

For instance, recall that applying the transversal e-algorithm to H₂ yielded HS(H₂) = ρ̄₁ ⊎ ⋯ ⊎ ρ̄₄ (Table 2). If we dually apply the noncover n-algorithm to MinNotMC(H₂) from (30), we obtain MC(H₂) = σ̄₁ ⊎ ⋯ ⊎ σ̄₆ (Table 7).

Table 7

Representation of MC ( H 2 ) as disjoint union of 012n-rows

1 2 3 4 5 6 Row-maximal sets
σ̄₁ = 0 2 n n n 0 Max(σ̄₁) = {234, 235, 245} = (0, 1, ḡ, ḡ, ḡ, 0) ≕ σ₁
σ̄₂ = 0 n 1 0 n 1 Max(σ̄₂) = {236, 356} = (0, ḡ, 1, 0, ḡ, 1) ≕ σ₂
σ̄₃ = 0 0 0 2 2 1 Max(σ̄₃) = {456} = (0, 0, 0, 1, 1, 1) ≕ σ₃
σ̄₄ = 0 1 0 0 0 1 Max(σ̄₄) = {26} = (0, 1, 0, 0, 0, 1) ≕ σ₄
σ̄₅ = 1 0 n n 0 2 Max(σ̄₅) = {136, 146} = (1, 0, ḡ, ḡ, 0, 1) ≕ σ₅
σ̄₆ = 1 1 0 2 0 0 Max(σ̄₆) = {124} = (1, 1, 0, 1, 0, 0) ≕ σ₆

9.7 Let us keep on dualizing. To begin with, for each 012n-row σ̄ in (33), one obtains Max(σ̄) by turning all 2’s to 1’s and all n-wildcards to ḡ-wildcards, where, by definition, (ḡ, ḡ, …, ḡ) means “exactly one 0 here.” For instance, σ̄₁ in Table 7 becomes σ₁. Generally, each σ̄ⱼ from (33) induces such a 01ḡ-row σⱼ. Akin to (17), we claim that:

(34) MHS(H) ⊆ σ₁ ⊎ ⋯ ⊎ σ_R̃.

Proof of (34)

From (33) and Theorem 2 follows MHS(H) ⊆ σ̄₁ ⊎ ⋯ ⊎ σ̄_R̃. Hence, each X ∈ MHS(H) is in a unique row σ̄ⱼ. We claim that X ∈ Max(σ̄ⱼ) = σⱼ. Indeed, since X is a maximal member of MC(H) by (27), it is a fortiori maximal within σ̄ⱼ ⊆ MC(H).□

In view of (34), we can carry over the concepts good, bad, very-good, and so on to 01ḡ-rows. For instance, as forced by (17) and (34), the nine MHSes of H₂ appear both in (16) and in Table 7, yet R = 4 ≠ 6 = R̃. However, whereas all ρᵢ were very-good, σ₄ is bad; its only member {2, 6} is MC but no MHS.

9.7.1 Recall from Section 1 that our main quest is to retrieve the diamonds from the boxes (= semifinal 01g-rows) as efficiently as possible. As is evident from (34), one could also retrieve the diamonds from dual boxes (= semifinal 01ḡ-rows). In fact, this is attempted in [5], yet in a one-by-one fashion based directly on MC(H). Let us explore to what extent the two approaches can be beneficially combined. For technical reasons (see the footnote in 9.8.1), the original 012e-rows and 012n-rows will have a comeback. For starters, unless all involved rows are 01-rows, we have the proper inclusions ρᵢ ⊊ ρ̄ᵢ and σⱼ ⊊ σ̄ⱼ. Nevertheless, this takes place:

(35) For all ρ̄ᵢ, ρᵢ in (11) and (17), and all σ̄ⱼ, σⱼ in (33) and (34), we have ρ̄ᵢ ∩ σ̄ⱼ = ρᵢ ∩ σⱼ.

Proof of (35)

It suffices to show ρ̄ᵢ ∩ σ̄ⱼ ⊆ ρᵢ ∩ σⱼ. As we long know, ρ̄ᵢ ∩ MHS(H) ⊆ ρᵢ. Similarly, as shown earlier, σ̄ⱼ ∩ MHS(H) ⊆ σⱼ. Together with Theorem 2, it follows that ρ̄ᵢ ∩ σ̄ⱼ = (ρ̄ᵢ ∩ HS(H)) ∩ (σ̄ⱼ ∩ MC(H)) = (ρ̄ᵢ ∩ MHS(H)) ∩ (σ̄ⱼ ∩ MHS(H)) ⊆ ρᵢ ∩ σⱼ.□

From Theorem 2, (11), (33), the distributivity of ∩ over ⊎, and (35) follows

(36) MHS(H) = (ρ̄₁ ⊎ ⋯ ⊎ ρ̄_R) ∩ (σ̄₁ ⊎ ⋯ ⊎ σ̄_R̃) = ⊎_{i,j} (ρ̄ᵢ ∩ σ̄ⱼ) = ⊎_{i,j} (ρᵢ ∩ σⱼ).

9.7.2 To illustrate (36), taking ρ̄₂ from Table 2 and σ̄₁ from Table 7, it holds that ρ̄₂ ∩ σ̄₁ = {235, 245}. Generally speaking, intersecting 012e-rows with 012n-rows (or 01g-rows with 01ḡ-rows) is no easier than intersecting two 012e-rows (see 2.3). As one way out, one can ponder expanding either the 012e-row or the 012n-row as a disjoint union of 012-rows. For instance, σ̄₁ expands as shown in Table 8.

Table 8

Expansion of a 012n-row into 012-rows

1 2 3 4 5 6
σ 1 ¯ = 0 2 n n n 0
σ 11 ¯ = 0 2 0 2 2 0
σ 12 ¯ = 0 2 1 0 2 0
σ 13 ¯ = 0 2 1 1 0 0

It follows that ρ̄₂ ∩ σ̄₁ = (ρ̄₂ ∩ σ̄₁₁) ⊎ (ρ̄₂ ∩ σ̄₁₂) ⊎ (ρ̄₂ ∩ σ̄₁₃). Each term on the right, and generally each intersection of a 012e-row with a 012-row, is either empty or again a 012e-row (2.2.1). In our particular case, ρ̄₂ ∩ σ̄₁₁ = (0, 1, 0, 1, 1, 0), ρ̄₂ ∩ σ̄₁₂ = (0, 1, 1, 0, 1, 0), and ρ̄₂ ∩ σ̄₁₃ = ∅.

Let us argue that in the present scenario, such intersections are always either empty or 01-rows. So suppose ρ̄ is from (11) and σ̄ from (33) got expanded into 012-rows σ̄′. Since each MHS X contained in σ̄ is maximal within σ̄, it is also maximal within the 012-row σ̄′ in which it happens to lie. Any two MHSes being incomparable, there cannot be another MHS in σ̄′. Because ρ̄ ∩ σ̄′, if nonempty, is a 012e-row that by (36) consists entirely of MHSes, this 012e-row is actually a 01-row that matches X.

The bottom line is this. Formula (36) cannot be exploited to compress MHS(H). At most, it can be used for one-by-one enumeration, but this cannot compete with the enhanced Algorithm 3, which (a) offers compression and (b) does not require the construction of 012n-rows.

9.8 Instead, the true calling of (36) is to find the cardinality |MHS(H)|! Namely:

(37) For each ρ̄_i in (11), we can obtain (preferably few) 012n-rows τ̄_1, …, τ̄_m such that ρ̄_i ∩ MHS(H) ⊆ (ρ̄_i ∩ τ̄_1) ∪ ⋯ ∪ (ρ̄_i ∩ τ̄_m) and τ̄_1, …, τ̄_m ⊆ MC(H).

In view of (36), Statement (37) is plausible. A full proof, which also touches on K_i(ρ̄_i) and on computational issues, will be given in Section 11.5. Accepting (37), we first note that from ρ̄_i ∩ τ̄_j ⊆ HS(H) ∩ MC(H) = MHS(H) it follows that "⊆" in (37) in fact is "=". Hence, |ρ̄_i ∩ MHS(H)| = |ρ̄_i ∩ τ̄_1| + ⋯ + |ρ̄_i ∩ τ̄_m|. Because |MHS(H)| is the sum of R terms |ρ̄_i ∩ MHS(H)|, calculating |MHS(H)| boils down to calculating |ρ̄ ∩ σ̄| for an arbitrary 012e-row ρ̄ and 012n-row σ̄ (this problem occurs in other circumstances as well). Let us apply inclusion–exclusion to do so.

9.8.1 To fix ideas, take ρ̄ ≔ (e_1, e_1, e_2, e_2, e_2, e_2) and σ̄ ≔ (n_1, n_2, n_1, n_2, n_3, n_3). (The presence of entries 0, 1, and 2 would only cause trivial changes in the sequel.) Let N(e_1), N(e_2), and N(e_1 e_2) be the numbers of bitstrings x ∈ σ̄ that violate,[31] respectively, the e_1-bubble, the e_2-bubble, and both e_i-bubbles. By inclusion–exclusion, it holds that:

|ρ̄ ∩ σ̄| = |σ̄| − N(e_1) − N(e_2) + N(e_1 e_2) = 27 − 12 − 4 + 1 = 12

in view of |σ̄| = 3·3·3 = 27 (see 2.2), N(e_1) = |(0, 0, 2, 2, n_3, n_3)| = 12, N(e_2) = |(2, 2, 0, 0, 0, 0)| = 4, and N(e_1 e_2) = |(0, 0, 0, 0, 0, 0)| = 1.

Similarly (using obvious notation), one obtains 12 as:

|ρ̄ ∩ σ̄| = |ρ̄| − N(n_1) − N(n_2) − N(n_3) + N(n_1 n_2) + N(n_1 n_3) + N(n_2 n_3) − N(n_1 n_2 n_3) = 45 − 16 − 16 − 12 + 4 + 4 + 4 − 1 = 12

in view of |ρ̄| = 3·15 = 45, …, N(n_3) = |(e_1, e_1, 2, 2, 1, 1)| = 12, …, N(n_1 n_2 n_3) = |(1, 1, 1, 1, 1, 1)| = 1.
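The two inclusion–exclusion computations can be double-checked by brute force. The following Python sketch (illustrative code only, not the article's Mathematica implementation) encodes the e-bubbles of ρ̄ and the n-bubbles of σ̄ as position sets and confirms |ρ̄ ∩ σ̄| = 12:

```python
from itertools import product

# e-bubbles of rho-bar (each needs at least one 1) and n-bubbles of
# sigma-bar (each needs at least one 0), as position sets of the 6 coordinates
e_bubbles = [{0, 1}, {2, 3, 4, 5}]          # rho-bar = (e1,e1,e2,e2,e2,e2)
n_bubbles = [{0, 2}, {1, 3}, {4, 5}]        # sigma-bar = (n1,n2,n1,n2,n3,n3)

def in_rho(x):   return all(any(x[i] for i in B) for B in e_bubbles)
def in_sigma(x): return all(not all(x[i] for i in B) for B in n_bubbles)

brute = sum(1 for x in product((0, 1), repeat=6) if in_rho(x) and in_sigma(x))

# inclusion-exclusion over the e-bubbles, counted inside sigma-bar:
# N(S) = number of bitstrings of sigma-bar whose bubbles in S are all-zero
def N(S):
    zero = set().union(*(e_bubbles[i] for i in S)) if S else set()
    return sum(1 for x in product((0, 1), repeat=6)
               if in_sigma(x) and all(x[i] == 0 for i in zero))

ie = N(()) - N((0,)) - N((1,)) + N((0, 1))   # 27 - 12 - 4 + 1
assert brute == ie == 12
```

Here N(()) = |σ̄| = 27 plays the role of the first summand in the displayed formula.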

In general, we launch inclusion–exclusion on the row with the fewer wildcards. Interestingly, merely deciding whether or not ρ̄ ∩ τ̄ is empty works even faster than inclusion–exclusion (more on that in 11.5.2). This speed of deciding the emptiness of ρ̄ ∩ τ̄ prompts us to state a second badness criterion for semifinal 012e-rows. As opposed to (28), it is sufficient and necessary (albeit somewhat clumsy):

(38) Suppose the 012e-row ρ̄_i is as in (37). Then ρ̄_i is bad iff ρ̄_i ∩ τ̄_j = ∅ for all 1 ≤ j ≤ m.

10 How to calculate MinNotMC ( H ) in the first place

In order to understand how MinNotMC ( H 2 ) in (30) was computed,[32] it pays to momentarily relabel[33] the hyperedges of H 2 in obvious ways:

(39) H_1 = {a, b, ε}, H_2 = {c, d}, H_3 = {d, ε, f}, H_4 = {a, c, ε}, and H_5 = {b, f}.

Let us refine the property "T is MC." Thus, for any set T and fixed u ∈ T, we say "T is u-MC" if crit(u, T) ≔ {H ∈ H : H ∩ T = {u}} is nonempty. Consequently, it holds for all T ⊆ W ≔ {a, b, c, d, ε, f} that:

(40) T is MC ⟺ T is u-MC for all u ∈ T,

(41) T is not-MC ⟺ T is not-u-MC for some u ∈ T ⟺ crit(u, T) = ∅ for some u ∈ T.

For instance, T = {d, c, f} is not-d-MC because from d ∈ T ∩ H_i it always follows that |T ∩ H_i| ≥ 2, the relevant indices being i = 2, 3.

10.1 For u ∈ W, put S_u ≔ {i ∈ [h] : u ∈ H_i}. Therefore,

(42) S a = { 1 , 4 } , S b = { 1 , 5 } , S c = { 2 , 4 } , S d = { 2 , 3 } , S ε = { 1 , 3 , 4 } , and S f = { 3 , 5 } .

The fact that {d, c, f} is not-d-MC can now be seen as tantamount to S_d ⊆ S_c ∪ S_f. Generally, the not-u-MC sets bijectively match the set coverings of S_u by other S_v's.

Our aim is to calculate the family MinNotMC ( H 2 ) of minimal not-MC sets. According to (41), they are found among the minimal not- u -MC sets, where u ranges over W . Let us hence find for each fixed u W all minimal set coverings of S u . The systematic method follows in 10.2, but for H = H 2 , we can proceed by inspecting (42):

  • The minimal set coverings of S a are { S b , S c } , and { S ε } ,

  • The minimal set coverings of S b are { S a , S f } and { S ε , S f } ,

  • The minimal set coverings of S c are { S a , S d } and { S d , S ε } ,

  • The minimal set coverings of S d are { S c , S ε } and { S c , S f } ,

  • The minimal set coverings of S ε are { S a , S d } , { S a , S f } , { S b , S c , S d } , and { S b , S c , S f } ,

  • The minimal set coverings of S f are { S b , S d } and { S b , S ε } .

Therefore, the minimal not-a-MC sets are {a, b, c} and {a, ε}, and so forth until the minimal not-f-MC sets are {f, b, d} and {f, b, ε}. The inclusion-minimal sets among these sets[34] are (in shorthand notation) abc, aε, baf, bεf, cad, cdε, dcf, and fbd. Relabeling back a ↦ 1, …, f ↦ 6 yields MinNotMC(H_2) in (30).
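Since W has only six elements, MinNotMC(H_2) can also be confirmed by exhaustive search. The following Python sketch (a brute-force check, illustrative only and not the covering method of this section) applies definitions (40) and (41) directly; the letter 'e' stands in for the article's ε:

```python
from itertools import combinations

# H2 relabeled as in (39); 'e' stands in for the article's epsilon
EDGES = [set("abe"), set("cd"), set("def"), set("ace"), set("bf")]
W = "abcdef"

def crit(u, T):
    """Hyperedges that cut u out of T sharply: all H with H ∩ T = {u}."""
    return [H for H in EDGES if H & T == {u}]

def is_mc(T):
    return all(crit(u, T) for u in T)       # vacuously True for T = {}

not_mc = [frozenset(c) for k in range(len(W) + 1)
          for c in combinations(W, k) if not is_mc(set(c))]
min_not_mc = {T for T in not_mc if not any(S < T for S in not_mc)}

expected = {frozenset(s) for s in
            ("abc", "ae", "abf", "bef", "acd", "cde", "cdf", "bdf")}
assert min_not_mc == expected
```

Note that not-MC-ness is upward closed (enlarging T can only shrink each crit(u, T)), so taking inclusion-minimal members of the not-MC family is well defined.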

10.2 As is well known, finding minimal set coverings is cryptomorphic to finding minimal hypergraph transversals. Let us make this cryptomorphism explicit by recalculating the set coverings of the set S_ε by the set-system S ≔ {S_a, S_b, S_c, S_d, S_f}. Because S_ε = {1, 3, 4}, at least one member of S must cover 1; only S_a and S_b can do that. Similarly, only S_d and S_f can contain 3, and only S_a and S_c can contain 4. Thus, we define the auxiliary hypergraph triggered by ε as:

(43) H_2^aux(ε) ≔ {{S_a, S_b}, {S_d, S_f}, {S_a, S_c}}.

It follows that the minimal H_2^aux(ε)-hitting sets are exactly the minimal set coverings of S_ε by other S_u's. It is natural to use again the transversal e-algorithm to calculate all minimal H_2^aux(ε)-hitting sets.

The transversal e-algorithm starts by imposing the hyperedge { S a , S b } of H 2 aux ( ε ) and then imposes { S d , S f } . Since the two happen to be disjoint, this is achieved by the single 012e-row r 1 in Table 9. Imposing { S a , S c } upon r 1 yields the two final rows r 2 and r 3 . It happens that both of them are very-good, i.e., Min ( r 2 ) and Min ( r 3 ) need not be pruned further.

Table 9

Calculation of all the minimal set coverings of S ε with the e -algorithm

S a S b S c S d S f Min ( r i )
r 1 = e e 2 e e
r 2 = 1 2 2 e e { { S a , S d } , { S a , S f } }
r 3 = 0 1 1 e e { { S b , S c , S d } , { S b , S c , S f } }
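The two Min(r_i)-columns of Table 9 can likewise be confirmed by brute force. The Python sketch below (illustrative only, not the transversal e-algorithm) enumerates all minimal H_2^aux(ε)-hitting sets over the five-element ground set:

```python
from itertools import combinations

# H2aux(epsilon) from (43), with S_a .. S_f abbreviated to their letters
AUX = [{"a", "b"}, {"d", "f"}, {"a", "c"}]
GROUND = set().union(*AUX)               # {a, b, c, d, f}

def hits(X):
    """X is a hitting set iff it meets every hyperedge of AUX."""
    return all(X & H for H in AUX)

hitting = [frozenset(c) for k in range(len(GROUND) + 1)
           for c in combinations(sorted(GROUND), k) if hits(set(c))]
minimal = {X for X in hitting if not any(Y < X for Y in hitting)}

# matches Min(r_2) and Min(r_3) in Table 9
assert minimal == {frozenset("ad"), frozenset("af"),
                   frozenset("bcd"), frozenset("bcf")}
```

The split into the two 012e-rows r_2 and r_3 corresponds exactly to the case distinction "S_a is used" versus "S_a is not used (hence S_b and S_c are forced)".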

11 Numerical experiments

While the terminology and overall structure of the article in front of you have improved a lot compared to the 2021 version (arXiv:2008.08996v2), there is a problem concerning the 2021 Mathematica experiments: some of the 2021 subroutines could not be replaced in time by implementations of the superior ideas discussed in previous sections. I thus decided to pick a few of the most telling numerical experiments performed in 2021, recast them in Table 10, and describe them thoroughly with adapted terminology (i.e., from this article). In all experiments, very-good rows ρ were detected either after one-by-one enumeration or by virtue of criterion (31). Since repackaging merely-good rows into very-good rows (Section 6) has not yet been implemented, in all experiments, the merely-good rows ρ are either enumerated one-by-one (by Algorithm 3 or 4, never 1 or 2) or |ρ ∩ MHS(H)| is determined with inclusion–exclusion (9.8). Whichever way the merely-good rows ρ were handled, we sum up all numbers |ρ ∩ MHS(H)| in order to record |MHS(H)| (or an extrapolation of it).

Table 10

Numerical evaluation and extrapolation of the minhit algorithm

( w , h , k ) R , av.deg mnMC content (abs/rel) vg, mg, bad
( 60 , 20 , 5 ) 26701, 13  ( 13 s ) 309 (1 s) 1914, 77% (5042 s) 43, 49, 8
( 70 , 20 , 30 ) 77448, 5  ( 39 s ) 8.5, 20% (186 s) 13, 62, 25
( 30 , 50 , 7 ) 123584, 10  ( 56 s ) 55538 (2564 s) 1.02, 20% (66 s) 15, 26, 59
( 70 , 20 , 5 ) 13577, 14  ( 8 s ) 256 (3 s) 113116, 86% 68, 32, 0
( 70 , 20 , 6 ) 41319, 12  ( 21 s ) 730 (3 s) 1694, 82% 37, 62, 2
( 70 , 20 , 12 ) 917377, 10  ( 1546 s ) 42, 27% 33, 60, 7
( 100 , 40 , 3 ) 10367, 33  ( 13 s ) 113 (0.4 s) 3 × 1 0 10 , 99 % 94, 6, 0
( 30 , 5000 , 7 ) 1000 , 18 ( 103 s ) 0.45, 22% 23, 14, 64
( 100 , 80 , 3 ) 1000 , 34 ( 2 s ) 437 (2 s) 3000, 29% 6, 73, 21
( 10000 , 100 , 1000 ) 1000 , 14 ( 158 s ) 1 0 6 , 77 % 55, 40, 5

Here are some more details. All experiments are characterized by the signature (w, h, k) that refers to a hypergraph H ⊆ P[w] whose h hyperedges H ∈ H are random and have uniform cardinality |H| = k. For some signatures (in 11.1), we managed to calculate |MHS(H)| exactly. For other signatures, |MHS(H)| could only be extrapolated, e.g., because MinNotMC(H) could not be conquered (11.2), or even the R semifinal rows could not be conquered (11.3). In 11.4 and 11.5, we speculate on future improvements. Center stage in all of this is taken by MinNotMC(H): either its very computation or its subsequent exploitation. The reader may wish to skip the rather technical Section 11.5 at a first reading. Finally, 11.6 compares our "wildcard approach" with an algorithm of Toda [3], which is based on binary decision diagrams and which therefore also offers some kind of compression.

11.1 Whenever |MHS(H)| could be determined exactly, the procedure usually was as follows. The (transversal) e-algorithm, fed with H, terminates and outputs R many semifinal 01g-rows ρ_i (see (17)). Whenever MinNotMC(H) could be calculated, then likewise all R set-systems K_i(ρ_i) could be calculated (though not yet with the VL way of 9.4). As mentioned earlier, the potential very-goodness of ρ_i (and if yes, |ρ_i|) is then found at once in view of (31). How to process the remaining merely-good or bad rows ρ_j? We mostly used Algorithm 3 (from 5.3, not yet 9.5) or[35] 9.6–9.8 (combining the n-algorithm with inclusion–exclusion).

Thus, one hypergraph H of signature (60, 20, 5) (Table 10) triggered R = 26,701 semifinal 01g-rows ρ_i of average degree 13. The calculation took 13 s. Calculating MinNotMC(H) of cardinality mnMC = 309 took 1 s. Using the 9.6–9.8 way, we found that H had 51,109,682 MHSes. Perhaps more informative than knowing |MHS(H)| is the average 1,914 of the (absolute) contents |ρ_i ∩ MHS(H)|, as well as the average relative content |ρ_i ∩ MHS(H)| / |ρ_i| = 0.77 (77%). (Up to a small rounding error, the reader can retrieve |MHS(H)| by multiplying the average absolute content by R.) As to the (30, 50, 7)-hypergraph, since its semifinal rows have little content and MinNotMC(H) is large, Algorithm 3 was faster.

For some (70, 20, 30)-hypergraph, the precise value of |MHS(H)| was obtained without the aid of MinNotMC(H) because Algorithm 4 from 5.4 managed to process all semifinal rows (including the very good ones) one-by-one.

11.2 For some (w, h, k)-hypergraphs H, it was possible to calculate all R semifinal rows but not the exact value of |MHS(H)|. That is because either MinNotMC(H) was too hard to calculate (see also 11.4) and Algorithm 4 was not up to the task, or, while MinNotMC(H) could be obtained, either R or the sizes |ρ_i| were too large to process the not very-good rows in whatever way (see also 11.5). In this situation, we picked 1,000 among the R semifinal rows at random[36] and used them to extrapolate the average content of semifinal rows.

There is one H that does not quite fit "In this situation." For this H of signature (100, 40, 3), the 113 potential killers in MinNotMC(H) could be calculated in just 0.4 s. Among the 10,367 semifinal rows, 94% were very-good (identified by (31), i.e., K_i(ρ_i) = ∅) and their cardinalities summed up to 3190986028403520327. The remaining semifinal rows were all merely-good and of high density. Algorithm 3 being out of the question due to the size of the |ρ_i|, the author speculates (but does not remember for sure) that the 9.6–9.8 variant also must have failed due to the inferior 2021 subroutine for inclusion–exclusion (see 11.5.2).

11.3 In some cases, not all semifinal rows could be generated, i.e., the transversal e -algorithm failed and so R was unknown. Nevertheless, one can still use the transversal e -algorithm to generate 1,000 random semifinal 01g-rows. The last three lines in Table 10 arose this way. It is interesting to compare the signatures (100, 40, 3) and (100, 80, 3), as well as (30, 50, 7) and (30, 5,000, 7). As usual, if w , k stay fixed while h increases, the absolute content “deteriorates.”

As to the last column in Table 10, if all R semifinal 012e-rows (and whence semifinal 01g-rows) could be classified (whether or not MHS ( H ) was achieved), then we evidently obtain the exact percentages of very-good, merely-good, and bad rows. They appear (rounded) in the last column. If not all semifinal rows could be computed, then the numbers in the last column were extrapolated by applying the Monte-Carlo method to the 1,000 semifinal rows (be it in 11.2 or 11.3) that were computed.

11.4 As to calculating MinNotMC(H), considerably less time was spent on running the w many auxiliary transversal e-algorithms than on minimizing the resulting set-system S to Min(S) = MinNotMC(H). For instance, for the (30, 50, 7)-instance, it took only 61 s to calculate S (of cardinality 252211), but 2503 s to shrink S to MinNotMC(H) (of cardinality 55538). For the (70, 20, 30)-instance, MinNotMC(H) could not be calculated in reasonable time. The problem is that the minimization method used was inferior to the ideas in 2.5 and 2.6.1.

11.4.1 Recall that one purpose of MinNotMC(H) is to obtain K_i(ρ) for each semifinal 01g-row. The set-system K_i(ρ), e.g., enables the very-goodness criterion (31). There are two alternatives to criterion (31). First, while criterion (22) of Section 7 has been experimented with only for small values of h (arXiv:2008.08996v1, Section 6.3), there is hope (7.3) to trim it considerably. Second, criterion (26) based on Rado's theorem (Section 8) also invites experimentation.

So much for K i ( ρ ) and (31). Observe that K i ( ρ ) also underlies the enhanced version of Algorithm 3 in 9.5. Unfortunately, the latter has not yet been implemented.

11.5 As to merely calculating |MHS(H)|, let us first prove (37) from 9.8. There are two approaches to obtain the required 012n-rows τ̄_i. Both are based on possessing MinNotMC(H). The first approach obtains the rows σ̄_j (j ≤ R′) in (33) by feeding the whole of MinNotMC(H) to the noncover n-algorithm. Then, for all 1 ≤ j ≤ R′, we check whether or not ρ̄_i ∩ σ̄_j = ∅ (see 11.5.2) and take as {τ̄_1, …, τ̄_{m_i}} the set of all σ̄_j with ρ̄_i ∩ σ̄_j ≠ ∅. The second approach only feeds K_i(ρ̄_i) instead of MinNotMC(H) to the n-algorithm and thus obtains m′_i many 012n-rows τ̄′ that also do the job.

11.5.1 What are the pros and cons of the two aforementioned approaches to provide each semifinal 012e-row ρ i ¯ with “its” 012n-rows guaranteed by (37)? For starters, recall (9.4) that the calculation of the R set-systems K i ( ρ i ¯ ) is based on MinNotMC ( H ) but requires little extra work. The extra time is often amortized since the sets K i ( ρ i ¯ ) are way smaller than MinNotMC ( H ) , and so applying the n -algorithm to a single K i ( ρ i ¯ ) takes much less time than applying it to MinNotMC ( H ) .

How does m′_i compare to m_i? In lockstep with the shorter time, the number m′_i of produced 012n-rows will also be smaller than the corresponding number m_i. Finally, observe that by construction, all m_i rows τ̄_j intersect ρ̄_i, whereas this need not be the case for the m′_i many rows τ̄′. If, while running the n-algorithm on K_i(ρ̄_i), one keeps discarding candidate sons τ̄′ with ρ̄_i ∩ τ̄′ = ∅ (see 11.5.2), then it is guaranteed that no final 012n-row will be disjoint from ρ̄_i. In this way, one can further reduce m′_i, but perhaps that is not worth the effort. More computational experiments need to be carried out to clarify all of 11.5.1.

11.5.2 Two more loose ends must be addressed. First, the type of inclusion–exclusion proposed in 9.8.1 for calculating |ρ̄ ∩ τ̄| is superior to the type of inclusion–exclusion employed in the 2021 experiments of Table 10. Namely, as detailed in [arXiv:2008.08996v1, Sec. 7.2], this slower kind of inclusion–exclusion relies on a bipartite graph whose shores are the e-wildcards of ρ̄ and the n-wildcards of τ̄, respectively. Since in 9.8.1 we only need one kind of wildcard, the 9.8.1 implementation in spe is up to 2^t times faster than the current implementation. Here, t is the number of wildcards (e or n) of which there are more.

11.5.3 Second, merely deciding whether or not ρ̄ ∩ τ̄ is empty works faster still than 9.8.1-type inclusion–exclusion. For starters, the intersection is clearly empty when 1's in one row clash with 0's in the other row. However, there can be more hidden reasons for emptiness, e.g., (1, 1, e, e) ∩ (n_1, n_2, n_1, n_2) = ∅. The gory details of deciding the emptiness of ρ̄ ∩ τ̄ have been tackled in [arXiv:2008.08996v1, Sec. 8], yet all of that will be recast in a separate publication that also relates the matter to deciding the satisfiability of certain Boolean functions (of type Horn ∧ AntiHorn). Another issue is the Mathematica implementation of it all, and its possible overhead that slows it down for small-size inputs.
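The quoted example of hidden emptiness is easy to verify by brute force. In the Python sketch below (illustrative only), the two n-bubbles of τ̄ force the e-bubble of ρ̄ to be all-zero, so the intersection is empty although no 1 directly clashes with a 0:

```python
from itertools import product

# rho-bar = (1,1,e,e): coordinates 1,2 fixed to 1; e-bubble {3,4} needs a 1
# tau-bar = (n1,n2,n1,n2): n1-bubble {1,3} and n2-bubble {2,4} each need a 0
def in_rho(x):
    return x[0] == 1 and x[1] == 1 and (x[2] == 1 or x[3] == 1)

def in_tau(x):
    return not (x[0] and x[2]) and not (x[1] and x[3])

common = [x for x in product((0, 1), repeat=4) if in_rho(x) and in_tau(x)]
assert common == []   # empty despite the absence of a direct 1-versus-0 clash
```

Indeed, the fixed 1's at positions 1 and 2 force (via the n-bubbles) zeros at positions 3 and 4, which starves the e-bubble.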

11.6 In [1], 19 methods to calculate MHS(H) have been pitted against each other on a common platform, using a variety of real-life datasets. Our method[37] does not post factum fit that platform. It is implemented in high-level Mathematica code and so far only ran on the author's laptop (Dell Latitude 7410). Furthermore, much different from [1], all hypergraphs in Table 10 have random and equicardinal hyperedges. Nevertheless, let us attempt a preliminary comparison with two specific algorithms investigated in [1]. First, the Murakami-Uno algorithm [5] (like us, to some extent) relies on the MC-condition but proceeds one-by-one. Second, building on ideas of Knuth, the Toda algorithm [3], like us, uses compression, but in a more implicit way via binary decision diagrams (BDDs). These two algorithms also happen to be the champions[38] in [1]. Since the MC-condition has received plenty of attention in Sections 9 and 10, let us devote the remainder of 11.6 to Toda's algorithm. Here come four aspects where our method seems to win out (but since talk is cheap, only direct confrontation can ultimately determine the pros and cons of both).

  1. As is well known (and repeated in [6]), having the BDD of a Boolean function f yields at once the cardinality of the model set Mod(f). With a bit more effort (but in linear total time), one obtains the model set of f as a disjoint union of 012-rows. Unfortunately, when Mod(f) = MHS(H), the models are mutually incomparable, and so all 012-rows are necessarily 01-rows, i.e., no compression is achieved. (This is akin to the situation at the end of 9.7.2.) Matters are alleviated but not cured by Toda's use of zero-suppressed BDDs (= zero-suppressed decision diagrams [ZDDs]). Thus, the ZDD provides an implicit compression of MHS(H), which often provided |MHS(H)| faster than the 18 competitors in [1]. But since MHS(H) is only[39] output one-by-one, this did not always mean overall victory. The Toda algorithm is probably faster than us whenever our compression rate[40] is low, such as for the (30, 5,000, 7) signature. With increasing compression rate, the tables begin to turn. Also keep in mind: our compressed representation of MHS(H) may be desirable enough that spending extra time on it is worthwhile.

  2. Even when the final BDD is moderate in size, intermediate BDDs can be excessively large, thus causing memory problems. In contrast, the LIFO-stack used by the transversal e-algorithm can never contain more than h rows (this is a classic result about LIFO-stacks).

  3. In [3, p. 101], Toda hopes to eventually parallelize one part of his algorithm, i.e., the calculation of a BDD that captures HS ( H ) . In contrast, parallelizing[41] our equivalent part (the transversal e-algorithm) is straightforward.

  4. Like our method, some algorithms in [1] also have the potential for cut-off (4.3). Toda’s algorithm seems to be not among them since it does not appear in Table 9 or Table 10 of [1].

12 Enumerating all exact hitting sets

In our last section, all our hypergraphs H ⊆ P[w] of cardinality h ≔ |H| are full in the sense that ∪H = [w] (to avoid trivial cases). An exact hitting set (EHS) with respect to a hypergraph H is a subset X ⊆ [w] such that |X ∩ H| = 1 for all H ∈ H. Because of ∪H = [w], each a ∈ X belongs to some hyperedge H. This implies that each EHS X is "very MC" in the sense that for each a ∈ X, every H containing a cuts it out sharply. Consequently, each exact hitting set is a minimal hitting set by Theorem 2. The converse fails.[42]

In the sequel, we compress the set EHS(H) of all exact H-hitting sets by "imposing" the hyperedges one after the other (12.2–12.3). In doing so, the previously used g-wildcards will be applicable even more directly, yet the trivial feasibility test (10) becomes much harder. One consequence (12.4) concerns the enumeration of all perfect matchings in certain graphs. Sections 12.1 and 12.5 deal with a natural (apparently novel) equivalence relation induced on [w] by every hypergraph H ⊆ P[w]. It prompts one to distinguish between "degenerate" and "nondegenerate" hypergraphs.

12.1 For a hypergraph H = {K_1, …, K_h} ⊆ P(W), we say that x, y ∈ W are (H-)equivalent (written x ∼ y) if they belong to the same hyperedges. In formulas: x ∈ K_i ⟺ y ∈ K_i for all 1 ≤ i ≤ h. If the equivalence relation ∼ is the identity relation, then H is called nondegenerate, otherwise degenerate. For instance, if H is the hypergraph of all stars of a graph (see 12.4), then H is nondegenerate. For each index set I ⊆ [h], let H(I) be the set of a ∈ W that are in all K_i's (i ∈ I) and nowhere else. Formally,

(44) H(I) ≔ ∩{K_i : i ∈ I} ∩ ∩{W \ K_i : i ∈ [h] \ I}.

If H(I) ≠ ∅, then H(I) is a ∼-class, and each ∼-class arises this way.[43] It follows that 2^h < w is a sufficient condition for H to be degenerate.

(45) Let H be a hypergraph and let r be any 01g-row contained in EHS(H). Then, each g-bubble {a, b, …} of r is contained in a ∼-class.

Proof of (45)

Let K ∈ H be arbitrary with a ∈ K. By symmetry, it suffices to show that b ∈ K. By way of contradiction, suppose b ∉ K. Fix any X ∈ r with a ∈ X (by definition of a 01g-row, there is such an X). Then, X ∩ K = {a} since X is an EHS. If Y arises from X by switching a with b, then still Y ∈ r. But Y ∩ K = ∅, which contradicts the fact that Y (being in r) is an (exact) hitting set.□

12.2 Consider the hypergraph H_5 ⊆ P[9] consisting of the three hyperedges

(46) K 1 = { 2 , 3 , 4 , 6 } , K 2 = { 1 , 2 , 3 , 4 , 5 , 7 } , K 3 = { 2 , 8 , 9 } .

If instead of {K_1, K_2, K_3} we just have {K_1}, then the set of exact {K_1}-hitting sets, i.e., {X ⊆ [9] : |X ∩ {2, 3, 4, 6}| = 1}, can be written[44] as the 012g-row r_0 below.

In order to sieve the {K_1, K_2}-EHSes X from r_0, we observe that K_1 ∩ K_2 = {2, 3, 4} and accordingly write r_0 as the disjoint union of the auxiliary rows r_1 and r_2 (Table 11). That helps because sieving the {K_1, K_2}-EHSes from the auxiliary rows r_1 and r_2 is easy. It results in r_1′ and r_2′, respectively. For both rows, the imposition of K_3 is still pending. Each row in the stack must be tagged with this kind of information. Picking the top row of the current working stack {r_1′, r_2′}, we focus on r_1′. It is evident that the subset of all X ∈ r_1′ with |X ∩ K_3| = 1 can be written as the 012g-row r_3 in Table 11. Row r_3 is final in the sense that all hyperedges have been imposed on it; this amounts to r_3 ⊆ EHS(H_5). We hence remove r_3 from the working stack and make it the first final row. It is clear that imposing K_3 on the last row r_2′ in the working stack yields the final rows r_4 and r_5. We hence have EHS(H_5) = r_3 ⊎ r_4 ⊎ r_5. In particular, H_5 has 3·2 + 2·2 + 1 = 11 EHSes.

Table 11

The working stack for the g -algorithm

1 2 3 4 5 6 7 8 9
r_0 = 2 g g g 2 g 2 2 2 Pending K_2
r_1 = 2 0 0 0 2 1 2 2 2
r_2 = 2 g g g 2 0 2 2 2
r_1′ = g 0 0 0 g 1 g 2 2 Pending K_3
r_2′ = 0 g g g 0 0 0 2 2 Pending K_3
r_3 = g_1 0 0 0 g_1 1 g_1 g_2 g_2 Final
r_2′ = 0 g g g 0 0 0 2 2 Pending K_3
r_4 = 0 0 g_1 g_1 0 0 0 g_2 g_2 Final
r_5 = 0 1 0 0 0 0 0 0 0 Final
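The count of 11 EHSes can be confirmed by exhaustive search over the 512 subsets of [9]. Here is a Python sketch (illustrative only, not the g-algorithm itself):

```python
from itertools import combinations

# H5 from (46)
K1, K2, K3 = {2, 3, 4, 6}, {1, 2, 3, 4, 5, 7}, {2, 8, 9}
H5 = [K1, K2, K3]
GROUND = range(1, 10)

# an EHS meets every hyperedge in exactly one element
ehs = [set(c) for k in range(10) for c in combinations(GROUND, k)
       if all(len(set(c) & K) == 1 for K in H5)]

assert len(ehs) == 11          # 3*2 + 2*2 + 1, as packed into r_3, r_4, r_5
assert {2} in ehs              # the singleton row r_5
assert {1, 6, 8} in ehs        # one member of r_3
```

Each of the 11 sets lies in exactly one of the three final 01g-rows, reflecting the disjointness of the union r_3 ⊎ r_4 ⊎ r_5.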

12.3 In order to generally impose a hyperedge K upon a 012g-row, we erect a certain Abraham-flag (boldface in Table 12) akin to (7). Thus, imposing K = {1, …, 6} upon the 012g-row[45] r̄_0 in Table 12 yields r̄_1 to r̄_4.

Table 12

Use of a 0g0 Abraham-flag to impose the exact hyperedge {1, …, 6} upon r̄_0

1 2 3 4 5 6 7 8 9 10 11 12
r̄_0 = g_1 g_1 g_2 g_2 g_3 g_4 g_1 g_1 g_2 g_3 g_3 g_4
r̄_1 = g_1 g_1 0 0 0 0 0 0 1 g_3 g_3 1
r̄_2 = 0 0 g_2 g_2 0 0 g_1 g_1 0 g_3 g_3 1
r̄_3 = 0 0 0 0 1 0 g_1 g_1 1 0 0 1
r̄_4 = 0 0 0 0 0 1 g_1 g_1 1 g_3 g_3 0

Adhering to the terminology of 3.1, we call r̄_1 to r̄_4 the candidate sons of r̄_0 (which arise upon imposing K on r̄_0). Again, we need to know which of the candidate sons r̄_i are feasible in the sense that r̄_i ∩ EHS(H) ≠ ∅. Infeasible candidate sons (= duds) should be canceled. The popular Dancing Links algorithm of Knuth, which decides (though not in polynomial time) whether or not a given hypergraph admits an EHS, is easily adapted to a feasibility test for candidate sons. Again, the surviving candidate sons of r̄_0 are called its sons. The described method will be coined[46] the g-algorithm.
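To convey the flavor of such a feasibility test, here is a plain backtracking sketch in Python, in the spirit of (but much simpler than) Knuth's Dancing Links; it decides EHS-existence for a hypergraph given as a list of sets. This is hypothetical illustrative code, not Knuth's linked-list implementation:

```python
def has_ehs(edges):
    """Backtracking EHS-existence test: does some X meet every edge exactly once?"""
    def solve(chosen, forbidden):
        for H in edges:
            if H & chosen:
                continue                   # this edge is already hit exactly once
            for u in H - forbidden:        # H must still receive its unique element
                # choosing u forbids every other vertex of every edge containing u
                newly = set().union(*(E for E in edges if u in E)) - {u}
                if solve(chosen | {u}, forbidden | newly):
                    return True
            return False                   # first unhit edge cannot be hit: dead end
        return True                        # every edge hit exactly once
    return solve(set(), set())

assert has_ehs([{2, 3, 4, 6}, {1, 2, 3, 4, 5, 7}, {2, 8, 9}])   # H5 has EHSes
assert not has_ehs([{1}, {2}, {1, 2}])                          # over-hit is forced
```

Adapting this to a candidate son r̄_i amounts to pre-seeding `chosen` with the 1's of r̄_i and `forbidden` with its 0's.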

Theorem 3

Let H ⊆ P[w] be a hypergraph. Then, EHS(H) can be enumerated as a disjoint union of R many 01g-rows in time O(Rhw·feas(h, w)). Here, feas(h, w) upper-bounds the time for any chosen subroutine (e.g., Dancing Links) to decide whether a hypergraph with w vertices and h hyperedges has an EHS.

Proof

Throughout the g -algorithm, the top rows in the LIFO-stack match the nodes of a computation tree (rooted at r 0 ) whose R leaves are the final rows. The length of a root-to-leaf path equals the number of impositions that were required to generate that leaf (= final row), and hence, that length is at most h . In the worst case (i.e., when all root-to-leaf paths are mutually disjoint and have maximal length), the number of non-root nodes, i.e., the number of impositions, equals R h .

What is the maximum cost imp(h, w) of imposing a hyperedge on a LIFO top row r? Building the at most τ = τ(H) ≔ max{|H| : H ∈ H} candidate sons r_i of r (by way of 0g0 Abraham-flags) costs O(τw). Letting feas(h, w) be any time bound[47] for checking the feasibility of a 012g-row, it costs O(τ·feas(h, w)) to discard the infeasible candidate sons. A surviving son r_i satisfies a fixed hyperedge K iff in r_i the bits with indices in K are all 0's except for one 1. Hence, it costs O(τhw) to tag each son with its pending hyperedges. We conclude that imp(h, w) = O(τw + τhw + τ·feas(h, w)), and therefore:

(47) The overall cost of imposing the hyperedges of H in order to pack all exact hitting sets of H into R disjoint 01g-rows is O(Rh·imp(h, w)) = O(Rhτ(hw + feas(h, w))).

Since we postulated feas(h, w) ≥ hw and since τ ≤ w, we have O(Rhτ(hw + feas(h, w))) = O(Rhw·feas(h, w)).□

12.4 An important kind of EHS arises from any graph G with vertex set V and edge set E. Namely, if star(v) is the set of all edges incident with vertex v and H ≔ {star(v) : v ∈ V} ⊆ P(E), then the EHSes of H are exactly the perfect matchings of G. Recall that K_{3,3} is the complete bipartite graph both of whose shores have three vertices. A graph G is K_{3,3} minor-free if one cannot obtain K_{3,3} from G by deleting edges and vertices of G, nor by contracting edges of G.
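The correspondence between EHSes of the star hypergraph and perfect matchings is easily illustrated on the 4-cycle. The following Python sketch (illustrative only) recovers its two perfect matchings as EHSes:

```python
from itertools import combinations

# the 4-cycle: vertices 1..4, edges as frozensets
edges = [frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]]
star = {v: {e for e in edges if v in e} for v in range(1, 5)}

# EHSes of the star hypergraph = edge sets meeting each star exactly once,
# i.e., edge sets covering each vertex exactly once = perfect matchings
matchings = [set(c) for k in range(len(edges) + 1)
             for c in combinations(edges, k)
             if all(len(set(c) & star[v]) == 1 for v in star)]

assert len(matchings) == 2
assert {frozenset((1, 2)), frozenset((3, 4))} in matchings
assert {frozenset((2, 3)), frozenset((4, 1))} in matchings
```

In the notation of the section, the ground set of the hypergraph is E, so w = |E| and h = |V|.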

Theorem 4

All perfect matchings of a K 3 , 3 minor-free graph G can be enumerated in polynomial total time.

Proof

In our context, each feasibility test performed by the g-algorithm on a 01g-row r de facto decides whether a certain minor G(0,1) of G has a perfect matching. Specifically, the 0's in r delete edges from G, which thus becomes a sparser graph G(0). The 1's in r constitute a partial matching P in G(0), which wants to be extended to a perfect matching of G(0). This is possible iff a certain subgraph G(0,1) of G(0) has a perfect matching. Namely, G(0,1) is obtained by removing all edges of P, along with all edges incident with them. The arising isolated vertices are also removed. With G, also its minor G(0,1) is K_{3,3} minor-free. By Corollary 1 in [11], one can decide in polynomial time (in fact, even in NC) whether G(0,1) has a perfect matching. Hence, the function feas(h, w) in Theorem 3 is bounded by a polynomial in h, w, causing the overall algorithm to run in polynomial total time.□

One can dispense with K_{3,3} minor-freeness if one allows for randomization because deciding the existence of a perfect matching is in RNC [12, p. 347] (as to the definition of RNC, see [12, p. 337]). Perfect matchings in bipartite graphs have been dealt with before [13].

12.5 Let H = {K_1, …, K_h} ⊆ P[w] be a hypergraph. Generally, if a ∼-class C intersects K_i, then it must be contained in K_i; otherwise, there would be x, y ∈ C, one in K_i and the other not, which is impossible. Therefore, if K̄_i denotes the set of ∼-classes contained in K_i, then K_i = ∪K̄_i. The reduced hypergraph H̄ ≔ {K̄_1, …, K̄_h} has h_0 ≤ h hyperedges and is nondegenerate. For instance, for H_5 in (46), the H_5-classes are 1̄ (= 5̄ = 7̄) = {1, 5, 7}, 2̄ = {2}, 3̄ = {3, 4}, 6̄ = {6}, and 8̄ = {8, 9}. Hence, H̄_5 = {K̄_1, K̄_2, K̄_3}, where K̄_1 = {2̄, 3̄, 6̄}, K̄_2 = {1̄, 2̄, 3̄}, and K̄_3 = {2̄, 8̄}.
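The ∼-classes of H_5 (and the reduced hyperedges K̄_i) can be computed by grouping the vertices according to the set of hyperedges containing them, as in this Python sketch (illustrative only):

```python
# H5 from (46); each vertex's "signature" is the index set of hyperedges containing it
K = {1: {2, 3, 4, 6}, 2: {1, 2, 3, 4, 5, 7}, 3: {2, 8, 9}}
sig = {v: frozenset(i for i in K if v in K[i]) for v in range(1, 10)}

classes = {}
for v, s in sig.items():
    classes.setdefault(s, set()).add(v)    # vertices with equal signature

assert set(map(frozenset, classes.values())) == \
       {frozenset(c) for c in ({1, 5, 7}, {2}, {3, 4}, {6}, {8, 9})}

# reduced hyperedges: the classes contained in each K_i
K_bar = {i: {frozenset(C) for C in classes.values() if C <= K[i]} for i in K}
assert K_bar[3] == {frozenset({2}), frozenset({8, 9})}   # = {2-bar, 8-bar}
```

The signature of a vertex is exactly the index set I with v ∈ H(I) in the notation of (44).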

Let us connect all of this with g-wildcards. The g-bubble of the g-wildcard in row r_0 of Table 11 is {2, 3, 4, 6}. Since this is just K_1, it is a union of ∼-classes. It follows at once from induction and the design of Abraham-flags that this property gets perpetuated:

(48) When applying the g-algorithm to the hypergraph H, each occurring g-bubble is a union of ∼-classes.

However, once the g-algorithm has terminated, all final 01g-rows are subsets of EHS(H), and so by (45), all their g-bubbles are contained in single ∼-classes. This is compatible with (48) only if each g-bubble of a final row actually is a ∼-class.

12.5.1 In particular, when applying the g -algorithm to a nondegenerate hypergraph, each final 01 g -row must be a 01-row (= bitstring). For instance, applying the g -algorithm to the nondegenerate hypergraph H 5 ¯ would give the final 01-rows in the left part of Table 13.

Table 13

The g -algorithm necessarily enumerates EHS ( H 5 ¯ ) one-by-one

1̄ 2̄ 3̄ 6̄ 8̄ | 1 5 7 2 3 4 6 8 9
1 0 0 1 1 | g_1 g_1 g_1 0 0 0 1 g_2 g_2
0 0 1 0 1 | 0 0 0 0 g_1 g_1 0 g_2 g_2
0 1 0 0 0 | 0 0 0 1 0 0 0 0 0

One retrieves the final 01g-rows on the right in Table 13 by inflating each 1 at position k ¯ on the left to a g -wildcard as large as the class k ¯ (with the understanding that 1 stays 1 if k ¯ is a singleton).
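This inflation also yields |EHS(H_5)| directly from the reduced rows: each selected class k̄ contributes a factor |k̄|. A Python sketch (illustrative only):

```python
from math import prod

# the classes of H5 and the final 01-rows of H5-bar (left part of Table 13),
# written as the sets of selected classes
classes = {"1": {1, 5, 7}, "2": {2}, "3": {3, 4}, "6": {6}, "8": {8, 9}}
reduced_ehs = [{"1", "6", "8"}, {"3", "8"}, {"2"}]

# inflating a 1 at class k to a g-wildcard over the class multiplies
# the row's cardinality by |class k|
total = sum(prod(len(classes[k]) for k in X) for X in reduced_ehs)
assert total == 11      # 3*1*2 + 2*2 + 1, agreeing with 12.2
```

This is the same count 3·2 + 2·2 + 1 = 11 obtained by the g-algorithm in 12.2.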

12.6 What is the bottom line in all of that? A devil’s advocate might argue: for nondegenerate hypergraphs, the g -algorithm offers no compression, and for degenerate hypergraphs H , the compression can also be achieved by enumerating the EHSes of H ¯ with any other algorithm, and then inserting g -wildcards in a trivial manner.

Here is the defender’s argument: as elementary as they are, the concepts “degenerate” and “nondegenerate” are new. Likewise for “ g -wildcards” and “Abraham-Flags.” Concerning “other algorithm,” the author could not google any publication concerning the enumeration of all EHSes of a general hypergraph. Even concerning specific hypergraphs, the algorithm in [13] seems to be the only publication.

12.6.1 What is the importance of “degenerate/or not” in the context of MHS ( H ) ? As testified by H 2 in (9), the MHSes of nondegenerate hypergraphs are often compressible, nevertheless. For degenerate H , one could, as we did for EHSes, run all our techniques on the reduced hypergraph H ¯ and later compress further. Whether that actually gives better compression than just sticking to H remains to be seen.

13 Conclusion

This article promotes the compression of MHS ( H ) by means of wildcards. The approach is beneficial for sparse hypergraphs, i.e., those with few but potentially large hyperedges (see the (10,000, 100, 1,000)-example in Table 10), but is not advisable for dense hypergraphs. Our further comments hence concern sparse hypergraphs, although “sparse” is not a clear-cut notion.

As observed already in [4], what works particularly well in the sparse case is the compression of MCHS ( H ) (Section 4). That is because the (unprocessed) semifinal 01 g -rows of degree μ are all very-good and their union equals MCHS ( H ) . Furthermore, the transversal e -algorithm, which produces them, invites parallelization. In several real-life applications, the minimum-cardinality hitting sets are sufficient (see, for instance, Section 3 in [14], where all (suitable) minimum set-coverings are sought; recall from Section 10 that set-coverings are cryptomorphic to hitting sets). Also, [15] makes the case that MHSes of small (or minimum) cardinality often suffice. Finally, note that a semifinal 01 g -row is a convenient data structure for sampling, at random, hitting sets of prescribed cardinalities.
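As a sketch of the sampling idea (assuming the wildcard semantics "at least one 1 among its positions"; under an "exactly one 1" reading, all members of a row share the same cardinality and sampling is trivial), one can count the members of each cardinality by convolving binomial coefficients and then sample wildcard by wildcard. All names are illustrative:

```python
import random
from math import comb

def count_by_card(wildcard_lengths):
    """dp[j] = number of ways the g-wildcards can contribute j ones in total,
    each wildcard contributing at least one 1 (the semantics assumed here)."""
    dp = [1]
    for L in wildcard_lengths:
        new = [0] * (len(dp) + L)
        for j, c in enumerate(dp):
            for k in range(1, L + 1):          # k ones inside this wildcard
                new[j + k] += c * comb(L, k)
        dp = new
    return dp

def sample_member(num_fixed_ones, wildcards, card):
    """Uniformly sample a row member of cardinality card; returns the
    wildcard positions chosen as 1 (the fixed 1s keep their positions)."""
    lengths = [len(w) for w in wildcards]
    need = card - num_fixed_ones               # ones still to place in wildcards
    ones = []
    for i, L in enumerate(lengths):
        rest = count_by_card(lengths[i + 1:])  # completions by later wildcards
        weights = [comb(L, k) * (rest[need - k] if 0 <= need - k < len(rest) else 0)
                   for k in range(1, L + 1)]
        k = random.choices(range(1, L + 1), weights=weights)[0]
        ones += random.sample(wildcards[i], k)
        need -= k
    return ones

# Counts for a row with two wildcards of length 2 and no fixed ones:
print(count_by_card([2, 2]))   # [0, 0, 4, 4, 1]
```

Here `count_by_card([2, 2])` says the row has 4 members of cardinality 2, 4 of cardinality 3, and 1 of cardinality 4, which matches the 3 × 3 = 9 nonempty-subset choices.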

But suppose we insist on compressing the whole of MHS ( H ) . Then, we need to weed out the bad rows, identify the remaining very-good rows (i.e., those of degree > μ ), and repackage the merely-good rows into fresh very-good rows. The author apologizes for having overwhelmed (or not?) the reader with the ensuing plethora of topics: Algorithms 1 to 4, three criteria for very-goodness, inclusion–exclusion, matroid theory, many uses of VL, the fact that MHS ( H ) = HS ( H ) ∩ MC ( H ) , the proposal and calculation of MinNotMC ( H ) , the primal-dual approach ( e - and n -wildcards) for finding MHS ( H ) , and more. While often illustrated with luscious toy examples, many of these ideas await implementation and comparison with other approaches (collaboration is welcome).

As a “side show,” Section 12 turned to exact (as opposed to minimal) hitting sets. The issue of when EHS ( H ) can be compressed is more clear-cut (12.6) than it was for MHS ( H ) . In contrast, testing the feasibility of 012 g -rows (e.g., with Dancing Links) is harder than the feasibility test (10) in the (normal) hitting set framework. One surprising application (Theorem 4) concerns the enumeration of all perfect matchings of a graph.
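The matching connection can be made concrete by brute force (a sketch, not the algorithm behind Theorem 4): view the edges of a graph G as the ground set and the stars of its vertices as hyperedges; the exact hitting sets of this hypergraph are then precisely the perfect matchings of G.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Perfect matchings of G = exact hitting sets of the hypergraph whose
    ground set is edges(G) and whose hyperedges are the vertex stars.
    Brute force, for tiny graphs only."""
    stars = {v: {e for e in edges if v in e} for v in vertices}
    n = len(vertices) // 2                     # a perfect matching has |V|/2 edges
    return [set(M) for M in combinations(edges, n)
            if all(len(set(M) & stars[v]) == 1 for v in vertices)]

# The 4-cycle 1-2-3-4-1 has exactly two perfect matchings.
V = [1, 2, 3, 4]
E = [frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]]
print(perfect_matchings(V, E))
```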

Acknowledgments

One anonymous reviewer (after finally entering the stage) excelled with detailed and constructive criticism, which improved the article a lot.

Conflict of interest: The author states no conflict of interest.

References

[1] A. Gainer-Dewar and P. Vera-Licona, The minimal hitting set generation problem: algorithms and computation, SIAM J. Discrete Math. 31 (2017), no. 1, 63–100, DOI: 10.1137/15M1055024.

[2] M. Hagen, Algorithmic and computational complexity issues of MONET, PhD thesis, Friedrich-Schiller-Universität, Jena, 2008.

[3] T. Toda, Hypergraph transversal computation with binary decision diagrams, in: V. Bonifaci, C. Demetrescu, and A. Marchetti-Spaccamela (Eds.), Experimental Algorithms, SEA 2013, Lecture Notes in Computer Science, Vol. 7933, Springer, Berlin, 2013, DOI: 10.1007/978-3-642-38527-8_10.

[4] M. Wild, Counting or producing all fixed cardinality transversals, Algorithmica 69 (2014), no. 1, 117–129, DOI: 10.1007/s00453-012-9716-5.

[5] K. Murakami and T. Uno, Efficient algorithms for dualizing large-scale hypergraphs, Discrete Appl. Math. 170 (2014), 83–94, DOI: 10.1016/j.dam.2014.01.012.

[6] T. Eiter, Exact transversal hypergraphs and application to Boolean μ-functions, J. Symbolic Comput. 17 (1994), no. 3, 215–225, DOI: 10.1006/jsco.1994.1013.

[7] M. Wild, ALLSAT compressed with wildcards: From CNF's to orthogonal DNF's by imposing the clauses one by one, Comput. J. 65 (2022), 1073–1087, DOI: 10.1093/comjnl/bxaa142.

[8] A. Schrijver, Combinatorial Optimization, Algorithms and Combinatorics, Vol. 24, Springer-Verlag, Berlin, 2003.

[9] L. Shi and X. Cai, An exact fast algorithm for minimum hitting set, in: 2010 Third International Joint Conference on Computational Science and Optimization, Huangshan, 2010, pp. 64–67, DOI: 10.1109/CSO.2010.240.

[10] M. Wild, S. Janson, S. Wagner, and D. Laurie, Coupon collecting and transversal of hypergraphs, Discrete Math. Theor. Comput. Sci. 15 (2013), no. 2, 259–270, DOI: 10.46298/dmtcs.608.

[11] V. V. Vazirani, NC algorithms for computing the number of perfect matchings in K3,3-free graphs and related problems, Inform. and Comput. 80 (1989), no. 2, 152–164, DOI: 10.1016/0890-5401(89)90017-5.

[12] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, Cambridge, 1995, DOI: 10.1017/CBO9780511814075.

[13] T. Uno, A fast algorithm for enumerating bipartite perfect matchings, in: P. Eades and T. Takaoka (Eds.), Algorithms and Computation, ISAAC 2001, Lecture Notes in Computer Science, Vol. 2223, Springer, Berlin, Heidelberg, 2001, pp. 367–379, DOI: 10.1007/3-540-45678-3_32.

[14] T. E. Ideker, V. Thorson, and R. M. Karp, Discovery of regulatory interactions through perturbation: Inference and experimental design, in: Pacific Symposium on Biocomputing 2000 (1999), pp. 302–313, DOI: 10.1142/9789814447331_0029.

[15] I. Pill and T. Quaritsch, Optimization for the Boolean approach to computing minimal hitting sets, in: Frontiers in Artificial Intelligence and Applications, Vol. 242 (ECAI 2012), pp. 648–653.

Received: 2021-05-19
Revised: 2023-05-26
Accepted: 2023-06-02
Published Online: 2023-09-15

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
