1 Introduction
Graph-based algorithms play an important role in constraint programming, especially within propagators for global constraints. For example, Régin's propagator for the AllDifferent constraint is based on a perfect matching algorithm [1], whilst his propagator for the Gcc constraint is based on a network flow algorithm [2]. Both these graph algorithms are derived from the bipartite value graph, in which nodes represent variables and values, and edges represent domains. For example, the Gcc propagator finds a flow in such a graph in which each unit of flow represents the assignment of a particular value to a variable. In this paper, we identify a new way to build graph-based propagators for global constraints: we convert the global constraint into a linear program and then convert this into a network flow. These encodings contain several novelties. For example, the domain bounds of variables can be encoded as costs along the edges. We apply this approach to the Sequence family of constraints. Our results widen the class of global constraints which can be propagated using flow-based algorithms. We conjecture that these methods will be useful for propagating other global constraints.
2 Background
A constraint satisfaction problem (CSP) consists of a set of variables, each with a finite domain of values, and a set of constraints specifying allowed combinations of values for subsets of variables. We use capital letters for variables (e.g. X and Y), and lower case for values (e.g. a and b). A solution is an assignment of values to the variables satisfying the constraints. Constraint solvers typically explore partial assignments enforcing a local consistency property using either specialized or general purpose propagation algorithms. A support for a constraint C is a tuple that assigns to each variable a value from its domain and satisfies C. A bounds support is a tuple that assigns to each variable a value between the minimum and maximum in its domain and satisfies C. A constraint is domain consistent (DC) iff for each variable, every value in the domain of that variable belongs to a support. A constraint is bounds consistent (BC) iff for each variable, there is a bounds support for both the maximum and the minimum value in its domain. A CSP is DC/BC iff each constraint is DC/BC. A constraint is monotone iff there exists a total ordering of the domain values such that, for any two values u and v, if u is smaller than v then u is substitutable for v in any support for the constraint.
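As an illustration of these definitions, domain consistency on a single constraint can be enforced by brute-force enumeration of supports. The following sketch is ours, not from the paper (function name and domain representation are assumptions), and is exponential, so it is for illustration only:

```python
from itertools import product

def enforce_dc(domains, constraint):
    """Prune every value that belongs to no support of `constraint`.

    domains: list of sets (one per variable, mutated in place).
    constraint: predicate over complete tuples of values.
    Brute force over the Cartesian product of the domains."""
    changed = True
    while changed:
        changed = False
        for i, dom in enumerate(domains):
            # Collect the values of variable i that appear in some support.
            supported = set()
            for tup in product(*domains):
                if constraint(tup):
                    supported.add(tup[i])
            if dom - supported:
                domains[i] = supported
                changed = True
    return domains

# Example: an AtMost-style constraint sum(x) <= 1 over three 0/1 variables,
# with the third variable already fixed to 1.
doms = [{0, 1}, {0, 1}, {1}]
enforce_dc(doms, lambda t: sum(t) <= 1)
# The only support is (0, 0, 1), so value 1 is pruned from the first two domains.
```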
We also give some background on flows. A flow network is a weighted directed graph G = (V, E) in which each edge e has a capacity between nonnegative integers l(e) and u(e), and an integer cost w(e). A feasible flow between a source s and a sink t is a function f that satisfies two conditions: l(e) ≤ f(e) ≤ u(e) for every edge e, and the flow conservation law, which ensures that the amount of incoming flow equals the amount of outgoing flow at every node except the source and the sink. The value of a flow is the amount of flow leaving the source s. The cost of a flow is the sum over all edges of w(e) f(e). A minimum cost flow is a feasible flow with the minimum cost. The Ford-Fulkerson algorithm can find a feasible flow of value φ in O(φE) time. If all costs are nonnegative, then a minimum cost feasible flow can be found using the successive shortest path algorithm in O(φ SP) time, where SP is the complexity of finding a shortest path in the residual graph. Given a flow f in the network, the residual graph is the directed graph on the same nodes that contains the arc (x, y) with cost w(x, y) whenever f(x, y) is below the upper capacity of (x, y), and the arc (y, x) with cost -w(x, y) whenever f(x, y) is above the lower capacity of (x, y).
There are other asymptotically faster but more complex algorithms for finding either feasible or minimum-cost flows [3].
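For concreteness, a maximum flow of the kind used later can be computed with BFS augmenting paths. This is a minimal Edmonds-Karp style sketch of the Ford-Fulkerson method for capacities with zero lower bounds and no costs; the networks in this paper also carry lower bounds and costs, which the sketch omits, and all names are ours:

```python
from collections import deque, defaultdict

def max_flow(n, edges, s, t):
    """Edmonds-Karp maximum flow.
    edges: list of (u, v, capacity) with nodes 0..n-1."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)            # reverse (residual) direction
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path and its bottleneck residual capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
```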
In our flow-based encodings, a consistency check will correspond to finding a feasible or minimum cost flow. To enforce DC, we therefore need an algorithm that, given a minimum cost flow f of cost cost(f) and an edge e, checks whether an extra unit of flow can be pushed (or removed) through e with the cost of the resulting flow remaining less than or equal to a given threshold z. We use the residual graph to construct such an algorithm. Suppose we need to check if an extra unit of flow can be pushed through an edge e = (x, y). Let e' be the corresponding arc in the residual graph. If all residual costs are zero, then it is sufficient to compute the strongly connected components (SCC) of the residual graph: an extra unit of flow can be pushed through an edge iff both ends of the edge are in the same strongly connected component. Otherwise, the shortest path between y and x in the residual graph has to be computed. The minimal cost of pushing an extra unit of flow through e equals cost(f) + w(e') + d(y, x), where d(y, x) is the length of this shortest path. If this quantity exceeds z, then we cannot push an extra unit through e. Similarly, we can check whether a unit of flow passing through an edge can be removed.
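The SCC test described above can be sketched as follows, assuming the residual graph is given as an adjacency structure; a residual arc can carry an extra unit of (zero-cost) flow iff its endpoints receive the same component id. This is Kosaraju's algorithm, one of several linear-time choices:

```python
def scc_ids(n, adj):
    """Kosaraju's SCC algorithm; adj[u] = list of successors of u.
    Returns comp, where comp[u] is the component id of node u."""
    # First pass: iterative DFS recording finish order.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        stack = [(s, iter(adj[s]))]
        seen[s] = True
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()
    # Second pass: DFS on the reversed graph in reverse finish order.
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    comp, c = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], c
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    return comp

def can_push_extra_unit(comp, x, y):
    """Zero-cost case: arc (x, y) lies on an augmenting cycle iff
    both endpoints are in the same SCC."""
    return comp[x] == comp[y]
```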
3 The Sequence Constraint
The Sequence constraint was introduced by Beldiceanu and Contejean [4]. It constrains the number of values taken from a given set in any sequence of k consecutive variables. It is useful in staff rostering to specify, for example, that every employee has at least 2 days off in any 7 day period. Another application is sequencing cars along a production line (prob001 in CSPLib). It can specify, for example, that at most 1 in 3 cars along the production line has a sunroof. The Sequence constraint can be defined in terms of a conjunction of Among constraints. Among(l, u, [X1, ..., Xk], v) holds iff l ≤ |{i : Xi ∈ v}| ≤ u. That is, between l and u of the variables take values in v. The Among constraint can be encoded by channelling into 0/1 variables using Yi = 1 iff Xi ∈ v. Since the constraint graph of this encoding is Berge-acyclic, this does not hinder propagation. Consequently, we will simplify notation and consider Among (and Sequence) on 0/1 variables with v = {1}. If l = 0, Among is an AtMost constraint. AtMost is monotone since, given a support, any tuple obtained by flipping variables from 1 to 0 is also a support [5]. The Sequence constraint is a conjunction of overlapping Among constraints. More precisely, Sequence(l, u, k, [X1, ..., Xn], v) holds iff Among(l, u, [Xi, ..., Xi+k-1], v) holds for 1 ≤ i ≤ n - k + 1. A subsequence of the form Xi, ..., Xi+k-1 is called a window. It is easy to see that this decomposition hinders propagation. If l = 0, Sequence is an AtMostSeq constraint. Decomposition in this case does not hinder propagation: enforcing DC on the decomposition of an AtMostSeq constraint is equivalent to enforcing DC on the AtMostSeq constraint [5].
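On 0/1 variables, the decomposition view of Sequence amounts to a sliding-window sum check, as in this small sketch (ours, not the paper's code):

```python
def holds_sequence(l, u, k, x):
    """Check Sequence(l, u, k, [x1..xn]) on 0/1 values:
    every window of k consecutive variables sums to between l and u."""
    return all(l <= sum(x[i:i + k]) <= u for i in range(len(x) - k + 1))

# AtMostSeq example (l = 0): at most 1 in any 3 consecutive positions.
assert holds_sequence(0, 1, 3, [1, 0, 0, 1, 0, 0])
assert not holds_sequence(0, 1, 3, [1, 0, 1, 0, 0, 0])  # window [1,0,1] sums to 2
```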
Several filtering algorithms exist for Sequence and related constraints. Régin and Puget proposed a filtering algorithm for the Global Sequencing constraint (Gsc) that combines a Sequence and a global cardinality constraint (Gcc) [6]. Beldiceanu and Carlsson suggested a greedy filtering algorithm for the CardPath constraint that can be used to propagate the Sequence constraint, but this may hinder propagation [7]. Régin decomposed Gsc into a set of variable disjoint Among and Gcc constraints [8]. Again, this hinders propagation. Bessiere et al. [5] encoded Sequence using a Slide constraint, and gave a domain consistency propagator that runs in O(n 2^k) time. van Hoeve et al. [9] proposed two filtering algorithms that establish domain consistency. The first is based on an encoding into a Regular constraint and runs in O(n 2^k) time, whilst the second is based on cumulative sums and runs in O(n^3) time down a branch of the search tree. Finally, Brand et al. [10] studied a number of different encodings of the Sequence constraint. Their asymptotically fastest encoding is based on separation theory and enforces domain consistency in O(n^2 log n) time down the whole branch of a search tree. One of our contributions is to improve on this bound.
4 Flowbased Propagator for the Sequence Constraint
We will convert the Sequence constraint to a flow by means of a linear program (LP). We use a small Sequence constraint as a running example. Such a constraint can be formulated simply and directly as an integer linear program:
where each variable is 0/1. By introducing surplus/slack variables, we convert this to a set of equalities:
where . In matrix form, this is:
This matrix has the consecutive ones property for columns: each column has a block of consecutive 1's (or -1's) and the remaining elements are 0's. Consequently, we can apply the method of Veinott and Wagner [11] (also described in Application 9.6 of [3]) to simplify the problem. We append a zero last row and subtract the ith row from the (i+1)th row, for i running from the last original row down to the first, so that each subtraction uses the original ith row. These operations do not change the set of solutions. This gives:
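The row transformation just described can be sketched as follows; it appends a zero row and subtracts each row from its successor (bottom-up, so that original rows are used), after which every column of the result contains a single +1 and a single -1. The function name and list-of-lists representation are our own:

```python
def to_network_matrix(A, b):
    """Veinott & Wagner transformation of a matrix with the
    consecutive ones property for columns into a network matrix.
    Also applies the same row operations to the right-hand side b,
    so the transformed system has the same solutions."""
    m = len(A)
    A = [row[:] for row in A] + [[0] * len(A[0])]   # append a zero row
    b = b[:] + [0]
    for i in range(m, 0, -1):                       # bottom-up
        A[i] = [A[i][j] - A[i - 1][j] for j in range(len(A[i]))]
        b[i] = b[i] - b[i - 1]
    return A, b

# A 3x3 system whose columns have consecutive blocks of ones.
A2, b2 = to_network_matrix([[1, 1, 0], [1, 1, 1], [0, 1, 1]], [2, 3, 2])
# Every column of A2 now holds exactly one +1 and one -1,
# and the right-hand sides sum to zero (flow conservation).
```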
This matrix has a single 1 and a single -1 in each column. Hence, it describes a network flow problem [3] on a graph G (that is, it is a network matrix). Each row in the matrix corresponds to a node in G and each column corresponds to an edge in G: down each column, the row equal to 1 and the row equal to -1 give the two endpoints of the edge. We include a source node s and a sink node t in G. Let b be the vector on the right hand side of the equation. If b_i is positive, then there is an edge from s to node i that carries exactly b_i units of flow. If b_i is negative, there is an edge from node i to t that carries exactly -b_i units of flow. The bounds on the variables, which are not expressed in the matrix, are represented as bounds on the capacity of the corresponding edges. The graph for the set of equations in the example is given in Figure 1. A feasible flow in the graph corresponds to a solution. If a feasible flow sends a unit of flow through the edge labeled with a variable, then that variable equals 1 in the solution; otherwise it equals 0. Each even numbered vertex represents a window. The way the incoming flow is shared between the outgoing edges reflects how many variables in the corresponding window are equal to 1. Odd numbered vertices represent transitions from one window to the next (except for the first and last vertices, which represent transitions between a window and nothing). An incoming edge represents the variable omitted in the transition to the next window, while an outgoing edge represents the added variable.

Theorem 4.1
For any Sequence constraint, there is an equivalent network flow graph with O(n) vertices, O(n) edges, and an amount of flow to send that is O(n). There is a one-to-one correspondence between solutions of the constraint and feasible flows in the network.
The time complexity of finding a maximum flow of value φ in a graph with E edges is O(φE) using the Ford-Fulkerson algorithm [12]; in our case this gives O(n^2). Faster algorithms exist for this problem. For example, Goldberg and Rao's algorithm finds a maximum flow in O(E min(V^(2/3), E^(1/2)) log(V^2/E) log U) time, where U is the maximum capacity upper bound for an edge [13]. We follow Régin [1, 2] in building an incremental filtering algorithm from the network flow formulation. A feasible flow in the graph gives us a support for one value in each variable domain. Suppose a variable takes the value v in the solution that corresponds to the feasible flow, where v is either zero or one. To obtain a support for the value 1 - v, we find the SCCs of the residual graph and check if both ends of the edge labeled with that variable are in the same strongly connected component. If so, 1 - v has a support; otherwise 1 - v can be removed from the domain of the variable. Strongly connected components can be found in linear time, because the number of nodes and edges in the flow network for the Sequence constraint is linear in n by Theorem 4.1. The total time complexity for initially enforcing DC is O(n^2) if we use the Ford-Fulkerson algorithm, and asymptotically less if we use Goldberg and Rao's algorithm.
Still following Régin [1, 2], one can make the algorithm incremental. Suppose that during search a variable is fixed to the value v. If the last computed flow was a support for this assignment, then there is no need to recompute the flow. We simply need to recompute the SCCs in the new residual graph and enforce DC in O(n) time. If the last computed flow is not a support, we can find a cycle in the residual graph containing the edge associated with the variable in O(n) time. By pushing a unit of flow over this cycle, we obtain a flow that is a support. Enforcing DC can again be done in O(n) time after computing the SCCs. Consequently, there is an incremental cost of O(n) when a variable is fixed, and the cost of enforcing DC down a branch of the search tree is O(n^2).
5 Soft Sequence Constraint
Soft forms of the Sequence constraint may be useful in practice. The ROADEF 2005 challenge [14], which was proposed and sponsored by Renault, puts forward a violation measure for the Sequence constraint which takes into account by how much each Among constraint is violated. We therefore consider the soft global constraint SoftSequence(l, u, k, z, [X1, ..., Xn], v). This holds iff:

z ≥ Σ_{i=1}^{n-k+1} max(0, l - s_i, s_i - u),  where s_i = |{j ∈ [i, i+k-1] : Xj ∈ v}|.   (1)
As before, we can simplify notation and consider SoftSequence on 0/1 variables with v = {1}.
We again convert to a flow problem by means of a linear program, but this time with an objective function. Consider again our running example. We introduce penalty variables to represent the violations of the lower and upper bound of each window respectively. We can then express this SoftSequence constraint as follows; the objective function gives a lower bound on z.
where all the variables are nonnegative. In matrix form, this is:

If we transform the matrix as before, we get a minimum cost network flow problem:

The flow graph for this system is presented in Figure 2. Dashed edges carry the penalty costs, while all other edges have zero cost. A minimal cost flow in the graph corresponds to a minimal cost solution to the system of equations.
Theorem 5.1
For any SoftSequence constraint, there is an equivalent network flow graph. There is a one-to-one correspondence between solutions of the constraint and feasible flows of cost less than or equal to the upper bound of z.
Using Theorem 5.1, we construct a DC filtering algorithm for the SoftSequence constraint. The SoftSequence constraint is DC iff the following conditions hold:

- the value 1 belongs to the domain of a variable iff there exists a feasible flow of cost at most max(z) that sends a unit of flow through the edge labeled with that variable;

- the value 0 belongs to the domain of a variable iff there exists a feasible flow of cost at most max(z) that does not send any flow through the edge labeled with that variable;

- there exists a feasible flow of cost at most max(z).
A minimal cost flow can be found in O(φ SP) time [3]. Consider the edge e in the residual graph associated with a variable, and let w(e) be its residual cost. If the flow corresponds to an assignment in which the variable equals 0, pushing a unit of flow through e results in a solution in which it equals 1. Symmetrically, if the flow corresponds to an assignment in which the variable equals 1, pushing a unit of flow through e results in a solution in which it equals 0. If the shortest path in the residual graph between the head and the tail of e has length d, then the shortest cycle that contains e has length w(e) + d. Pushing a unit of flow through this cycle results in a flow of cost cost(f) + w(e) + d, which is the minimum cost of a flow that contains the edge e. If cost(f) + w(e) + d exceeds max(z), then no flow containing the edge e has cost smaller than or equal to max(z). The variable must therefore be fixed to the value taken in the current flow. Following Equation 1, the cost variable must be no smaller than the cost of the solution. To enforce BC on the cost variable, we increase the lower bound of z to the cost of the minimum flow in the graph.
To enforce DC on all the variables efficiently, we can use an all pairs shortest path algorithm on the residual graph [15]. This takes O(V^2 log V + VE) time using Johnson's algorithm [12], which here is O(n^2 log n) since the graph has O(n) nodes and edges. This gives an O(n^2 log n) time complexity to enforce DC on SoftSequence. The penalty variables used for SoftSequence arise directly out of the problem description and occur naturally in the LP formulation. We could also view them as arising through the methodology of [16], where edges with costs are added to the network graph for the hard constraint to represent the softened constraint.
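The cost-based test above can be sketched with a single-source shortest path computation on the residual graph. We use Bellman-Ford here for simplicity (the paper relies on Johnson's all-pairs algorithm for efficiency); the function names and arc representation are our assumptions:

```python
def bellman_ford(n, arcs, src):
    """Shortest distances from src; arcs: list of (u, v, cost).
    Costs may be negative, but the graph is assumed free of
    negative cycles."""
    INF = float('inf')
    d = [INF] * n
    d[src] = 0
    for _ in range(n - 1):
        for u, v, c in arcs:
            if d[u] + c < d[v]:
                d[v] = d[u] + c
    return d

def min_cost_through(flow_cost, arcs, n, x, y, cost_xy):
    """Cheapest flow forced to use residual arc (x, y):
    cost(f) + w(x, y) + shortest path from y back to x.
    If this exceeds max(z), the arc cannot be used."""
    return flow_cost + cost_xy + bellman_ford(n, arcs, y)[x]
```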
6 Generalized Sequence Constraint
To model real world problems, we may want windows of different sizes or positions. For example, the window size in a rostering problem may depend on whether it includes a weekend or not. An extension of the Sequence constraint proposed in [9] allows each Among constraint to have its own parameters (start position, window length, and bounds). More precisely, GenSequence holds iff each of the m Among constraints holds on its own window. Whilst the methods in Section 4 easily extend to allow different bounds for each window, dealing with different windows is more difficult. In general, the matrix now does not have the consecutive ones property. It may be possible to reorder the windows to achieve the consecutive ones property. If such a reordering exists, it can be found and performed in O(m + n + r) time, where r is the number of nonzero entries in the matrix [17]. Even when reordering cannot achieve the consecutive ones property there may, nevertheless, be an equivalent network matrix. Bixby and Cunningham [18] give a procedure to find an equivalent network matrix, when one exists, in time polynomial in the size of the matrix. Another procedure is given in [19]. In these cases, the method in Section 4 can be applied to propagate the GenSequence constraint in O(n^2) time down the branch of a search tree.
Not all GenSequence constraints can be expressed as network flows. Consider a GenSequence constraint with identical upper and lower bounds for each window (writing b_j for the common value of the jth bounds) and the 4 windows [1,5], [2,4], [3,5], and [1,3]. We can express it as an integer linear program in which the jth row states that the variables in the jth window sum to b_j:

x1 + x2 + x3 + x4 + x5 = b1,  x2 + x3 + x4 = b2,  x3 + x4 + x5 = b3,  x1 + x2 + x3 = b4.   (2)
Applying the test described in Section 20.1 of [19] to the system (2), we find that the matrix of this problem is not equivalent to any network matrix.
However, all GenSequence constraint matrices satisfy the weaker property of total unimodularity. A matrix is totally unimodular iff every square nonsingular submatrix has a determinant of +1 or -1. The advantage of this property is that a totally unimodular system of inequalities with integral constant terms has an integral solution iff it has a real-valued solution.
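The definition of total unimodularity can be checked directly on tiny matrices by enumerating square submatrices, as in this brute-force sketch (exponential, for illustration only; names are ours):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion (exact on integer matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    """Every square submatrix must have determinant -1, 0 or +1
    (equivalent to the nonsingular-submatrix formulation)."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# An interval matrix (consecutive ones in rows) is totally unimodular:
assert is_totally_unimodular([[1, 1, 0], [0, 1, 1]])
```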
Theorem 6.1
The matrix of the inequalities associated with a GenSequence constraint is totally unimodular.
In practice, only integral values for the bounds are used. Thus the consistency of a GenSequence constraint can be determined via interior point algorithms for LP in polynomial time. Using the failed literal test, we can enforce DC at the cost of solving O(n) linear programs at each node down the branch of a search tree for any GenSequence constraint. This is too expensive to be practical. We can, instead, exploit the fact that the matrix of each GenSequence constraint has the consecutive ones property for rows (before the introduction of slack/surplus variables). Corresponding to the row transformation for matrices with consecutive ones for columns, there is a change-of-variables transformation into partial-sum variables for matrices with consecutive ones for rows. This gives the dual of a network matrix, and it is the basis of one of the encodings of Sequence in [10]. Consequently that encoding extends to GenSequence. Adapting the analysis in [10] to GenSequence, we can enforce DC in O(n^2 log n) time down the branch of a search tree.
In summary, after a one-off compilation cost, we can enforce DC on a GenSequence constraint in O(n^2) time down the branch of a search tree when it has a flow representation, and in O(n^2 log n) time when it does not.
7 A SlidingSum Constraint
The SlidingSum constraint [20] is a generalization of the Sequence constraint from Boolean to integer variables, which we extend to allow arbitrary windows. SlidingSum holds iff the sum of the variables in each window lies between that window's lower and upper bound, where a window is, as with the generalized Sequence, a set of consecutive variables. The constraint can be expressed as a linear program, called the primal, whose matrix encodes these inequalities. Since the constraint represents a satisfaction problem, we minimize the constant 0. The dual is, however, an optimization problem:
(3) 
Von Neumann's Strong Duality Theorem states that if the primal and the dual problems are feasible, then they have the same objective value. Moreover, if the primal is unsatisfiable, the dual is unbounded. The SlidingSum constraint is thus satisfiable if the objective function of the dual problem is zero. It is unsatisfiable if this objective tends to negative infinity.
Note that the matrix has the consecutive ones property on the columns. The dual problem can thus be converted to a network flow using the same transformation as with the Sequence constraint. Consider the dual LP of our running example:

Our usual transformation will turn this into a network flow problem:

The flow associated with this example is given in Figure 3. There are nodes labelled from 1 to n + 1, where node i is connected to node i + 1 with an edge of cost max(Xi), and node i + 1 is connected to node i with an edge of cost -min(Xi). For each window spanning the variables Xa, ..., Xb with bounds l and u, we have an edge from node a to node b + 1 with cost u and an edge from node b + 1 to node a with cost -l. All nodes have a null supply and a null demand. A flow is therefore simply a circulation, i.e., an amount of flow pushed along cycles of the graph.
Theorem 7.1
The SlidingSum constraint is satisfiable if and only if there are no negative cycles in the flow graph associated with the dual linear program.
Proof
If there is a negative cycle in the graph, then we can push an unbounded amount of flow around it, making the cost arbitrarily small. Hence the dual problem is unbounded, and the primal is unsatisfiable. Suppose instead that there are no negative cycles in the graph. Pushing any amount of flow over a cycle of positive cost results in a flow of cost greater than zero. Such a flow is not optimal, since the null flow has a smaller objective value. Pushing any amount of flow over a cycle of null cost does not change the objective value. Therefore the null flow is an optimal solution, and since this solution is bounded, the primal is satisfiable. Note that the objective value of the dual (zero) is in this case equal to the objective value of the primal. ∎
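Theorem 7.1 reduces the satisfiability test to negative-cycle detection, which the Bellman-Ford algorithm performs with one extra relaxation round, as in this sketch (names are ours):

```python
def has_negative_cycle(n, arcs):
    """Bellman-Ford negative-cycle test; arcs: list of (u, v, cost).
    Starting all distances at 0 simulates a virtual source connected
    to every node, so cycles anywhere in the graph are detected.
    After n-1 relaxation rounds, any further improvement proves a
    negative cycle."""
    d = [0] * n
    for _ in range(n - 1):
        for u, v, c in arcs:
            if d[u] + c < d[v]:
                d[v] = d[u] + c
    return any(d[u] + c < d[v] for u, v, c in arcs)

# Cycle 0 -> 1 -> 2 -> 0 with total cost -1: constraint unsatisfiable.
assert has_negative_cycle(3, [(0, 1, 1), (1, 2, -3), (2, 0, 1)])
```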
Based on Theorem 7.1, we build a BC filtering algorithm for the SlidingSum constraint. The SlidingSum constraint is BC iff the following conditions hold:

- the value min(Xi) is the lower bound of the variable Xi iff it is the smallest value in the domain of Xi such that there are no negative cycles through the edge labeled with the lower bound of Xi (the edge of weight -min(Xi));

- the value max(Xi) is the upper bound of the variable Xi iff it is the greatest value in the domain of Xi such that there are no negative cycles through the edge labeled with the upper bound of Xi (the edge of weight max(Xi)).
The flow graph has n + 1 nodes and O(n + m) edges. Testing whether there is a negative cycle takes O(n(n + m)) time using the Bellman-Ford algorithm. We find for each variable the smallest (largest) value in its domain such that assigning this value to the variable does not create a negative cycle. We compute the shortest path between all pairs of nodes using Johnson's algorithm in O(V^2 log V + VE) time, which in our case gives O(n^2 log n + nm) time. Suppose that the shortest path from node i + 1 to node i has length d. For the constraint to be satisfiable, no cycle through the edge from i to i + 1 may be negative, so we need v + d ≥ 0 for any cost v placed on that edge. Since v is a value potentially taken by Xi, we need v ≥ -d. We therefore assign min(Xi) ← max(min(Xi), -d). Similarly, let the length of the shortest path from node i to node i + 1 be d'. For the constraint to be satisfiable, we need -v + d' ≥ 0, i.e. v ≤ d'. We assign max(Xi) ← min(max(Xi), d'). It is not hard to prove that this is sound and complete, removing all bounds that cause negative cycles. Following [10], we can make the propagator incremental, using the algorithm by Cotton and Maler [21] to maintain the shortest paths between pairs of nodes when an edge weight decreases. Each time a lower bound is increased or an upper bound is decreased, the shortest paths can then be updated rather than recomputed from scratch.
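The bound-tightening rules above can be sketched as follows. We compute all-pairs shortest paths with Floyd-Warshall for brevity (the paper uses Johnson's algorithm for efficiency), on a dual graph where node i + 1 follows node i using 0-indexed nodes; the arc orientation and names follow our reading of the construction, and the graph is assumed free of negative cycles:

```python
def tighten_bounds(num_nodes, arcs, lo, hi):
    """All-pairs shortest paths (Floyd-Warshall), then clamp each
    variable X_i (sitting between nodes i and i+1, 0-indexed) with
    min(X_i) >= -d(i+1, i) and max(X_i) <= d(i, i+1)."""
    INF = float('inf')
    d = [[INF] * num_nodes for _ in range(num_nodes)]
    for i in range(num_nodes):
        d[i][i] = 0
    for u, v, c in arcs:
        if c < d[u][v]:
            d[u][v] = c
    for k in range(num_nodes):
        for i in range(num_nodes):
            for j in range(num_nodes):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    new_lo = [max(lo[i], -d[i + 1][i]) for i in range(len(lo))]
    new_hi = [min(hi[i], d[i][i + 1]) for i in range(len(hi))]
    return new_lo, new_hi

# One variable X1 in [0, 5] and one window covering only X1 with
# bounds l = 2, u = 3: variable arcs carry max(X1) and -min(X1),
# window arcs carry u and -l. The bounds tighten to [2, 3].
arcs = [(0, 1, 5), (1, 0, 0), (0, 1, 3), (1, 0, -2)]
print(tighten_bounds(2, arcs, [0], [5]))  # -> ([2], [3])
```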
8 Experimental Results
To evaluate the performance of our filtering algorithms we carried out a series of experiments on random problems. The experimental setup is similar to that in [10]. The first set of experiments compares the performance of the flow-based propagator on a single instance of the Sequence constraint against the propagator of van Hoeve et al. (the third propagator in [9])^1, the encoding of [10], and the Among decomposition of Sequence. The second set of experiments compares the flow-based propagator for the SoftSequence constraint and its decomposition into soft Among constraints. Experiments were run with ILOG 6.1 on an Intel Xeon 4 CPU, 2.0 GHz, 4 GB RAM. The Boost graph library was used to implement the flow-based algorithms. (^1 We would like to thank Willem-Jan van Hoeve for providing us with the implementation of the algorithm.)
8.1 The Sequence constraint
For each possible combination of the parameters shown in the tables, we generated twenty instances with random lower bounds. We used random value and variable orderings and a fixed time out. We used the Ford-Fulkerson algorithm to find a maximum flow. Results for different parameter values are presented in Tables 1, 2 and Figure 4. Table 1 shows results for tight problems and Table 2 for easy problems. To investigate empirically the asymptotic growth of the different propagators, we plot the average time to solve 20 instances against the instance size for each combination of the remaining parameters in Figure 4. First of all, we notice that the encoding of [10] is the best on hard instances and the decomposition is the fastest on easy instances. This result was first observed in [10]. The flow-based propagator is not the fastest one, but it has the most robust performance: it is sensitive only to the number of variables and not to other parameters, such as the length of the window or the hardness of the problem. As can be seen from Figure 4, the flow-based propagator scales better than the other propagators with the size of the problem. It appears to grow roughly linearly with the number of variables, while the other propagators display quadratic growth.
   n    k | solved/time  | solved/time  | solved/time  | solved/time
  500   7 |   8 / 2.13   |  20 / 0.13   |  20 / 0.35   |  20 / 0.30
       15 |   6 / 0.01   |  20 / 0.09   |  20 / 0.30   |  20 / 0.29
       50 |   2 / 0.02   |  20 / 0.07   |  20 / 0.26   |  20 / 0.28
 2000   7 |   4 / 0.04   |  20 / 4.25   |  20 / 18.52  |  20 / 4.76
       15 |   0 / 0      |  20 / 1.84   |  20 / 15.19  |  20 / 4.56
       50 |   1 / 0      |  20 / 1.16   |  20 / 13.24  |  20 / 4.42
 5000   7 |   1 / 0      |  20 / 64.05  |  15 / 262.17 |  20 / 36.09
       15 |   0 / 0      |  20 / 24.46  |  17 / 211.17 |  20 / 34.59
       50 |   0 / 0      |  20 / 8.24   |  19 / 146.63 |  20 / 31.66
 TOTALS
 solved/total        |  37 / 360 | 360 / 360 | 351 / 360 | 360 / 360
 avg time for solved |   0.517   |   9.943   |  60.973   |  11.874
 avg bt for solved   |  17761    |    429    |      0    |       0
   n    k | solved/time  | solved/time  | solved/time  | solved/time
  500   7 |  20 / 0.01   |  20 / 0.58   |  20 / 0.15   |  20 / 0.44
       15 |  20 / 0.01   |  20 / 0.69   |  20 / 0.25   |  20 / 0.44
       50 |  18 / 0.02   |  20 / 0.20   |  20 / 0.37   |  20 / 0.42
 2000   7 |  20 / 0.07   |  20 / 32.41  |  20 / 7.19   |  20 / 6.62
       15 |  20 / 0.07   |  20 / 39.71  |  20 / 14.89  |  20 / 6.63
       50 |   5 / 5.19   |  20 / 9.52   |  20 / 13.71  |  20 / 6.94
 5000   7 |  20 / 0.36   |   0 / 0      |  20 / 109.18 |  20 / 46.42
       15 |  20 / 0.36   |   6 / 160.99 |  17 / 215.97 |  20 / 45.97
       50 |   9 / 0.48   |  20 / 108.34 |  11 / 210.53 |  20 / 44.88
 TOTALS
 solved/total        | 296 / 360 | 308 / 360 | 345 / 360 | 360 / 360
 avg time for solved |   0.236   |  52.708   |  50.698   |  16.200
 avg bt for solved   |    888    |   1053    |      0    |       0
8.2 The Soft Sequence constraint
We evaluated the performance of the soft Sequence constraint on random problems. For each possible combination of the parameters shown in the table, including the number of Sequence constraints per instance, we generated twenty random instances. All variables had domains of size 5. An instance was obtained by selecting random lower bounds, and we excluded combinations of bounds that made instances trivially unsatisfiable. We used a random variable and value ordering, and a fixed time out. All Sequence constraints were enforced on disjoint value sets of cardinality one. Tight instances are hard for Sequence propagators [10], so that any DC propagator can solve only a few of them. Looser instances are easier, but they are still hard to solve because each instance includes four overlapping Sequence constraints. To relax these instances, we allow each Sequence constraint to be violated with a cost that has to be less than or equal to a fixed fraction of the length of the sequence. Experimental results are presented in Table 3. As can be seen from the table, the flow-based algorithm is competitive with the decomposition into soft Among constraints on relatively easy problems and outperforms the decomposition on hard problems in terms of the number of solved problems. We observed, however, that the flow-based propagator for the SoftSequence constraint is very slow in absolute terms, even though its number of backtracks is three orders of magnitude smaller than that of the decomposition. We profiled the algorithm and found that it spends most of its time performing the all pairs shortest path algorithm. Unfortunately, this is difficult to compute incrementally, because the residual graph can be different on every invocation of the propagator.
   n    k | solved/time  | solved/time  | solved/time  | solved/time
   50   7 |   6 / 19.30  |   7 / 27.91  |  20 / 0.01   |  20 / 2.17
       15 |   8 / 36.07  |  13 / 20.41  |  11 / 49.49  |  10 / 30.51
       25 |   6 / 0.73   |  10 / 23.27  |  10 / 6.40   |  10 / 7.41
  100   7 |   1 / 0      |   3 / 7.56   |  19 / 10.50  |  18 / 16.51
       15 |   0 / 0      |   5 / 6.90   |   3 / 0.01   |   3 / 7.20
       25 |   0 / 0      |   5 / 4.96   |   5 / 19.07  |   5 / 23.99
 TOTALS
 solved/total        |  21 / 120 |  43 / 120 |  68 / 120 |  66 / 120
 avg time for solved |  19.463   |  18.034   |  13.286   |  13.051
 avg bt for solved   | 245245    |    343    | 147434    |    128