Three Generalizations of the FOCUS Constraint

The FOCUS constraint expresses the notion that solutions are concentrated. In practice, this constraint suffers from the rigidity of its semantics. To tackle this issue, we propose three generalizations of the FOCUS constraint. For each of them, we provide a complete filtering algorithm and discuss decompositions.


Introduction
Many discrete optimization problems have constraints on the objective function. Being able to represent such constraints is fundamental to deal with many real-world industrial problems. Constraint programming is a promising approach to express and filter such constraints. In particular, several constraints have been proposed for obtaining well-balanced solutions [Pesant and Régin, 2005; Schaus et al., 2007]. Recently, the FOCUS constraint [Petit, 2012] was introduced to express the opposite notion. It captures the concept of concentrating the high values in a sequence of variables into a small number of intervals. We recall its definition. Throughout this paper, X = [x_0, x_1, ..., x_{n-1}] is a sequence of variables and s_{i,j} is a sequence of indices of consecutive variables in X, such that s_{i,j} = [i, i+1, ..., j], 0 ≤ i ≤ j < n. We let |E| be the size of a collection E.
Definition 1 ([Petit, 2012]). Let y_c be a variable. Let k and len be two integers, 1 ≤ len ≤ |X|. An instantiation of X ∪ {y_c} satisfies FOCUS(X, y_c, len, k) iff there exists a set S_X of disjoint sequences of indices s_{i,j} such that the three following conditions are all satisfied:
(1) |S_X| ≤ y_c;
(2) ∀x_l ∈ X, x_l > k ⇔ ∃s_{i,j} ∈ S_X such that l ∈ s_{i,j};
(3) ∀s_{i,j} ∈ S_X, j − i + 1 ≤ len.

FOCUS can be used in various contexts, including cumulative scheduling problems where some excesses of capacity can be tolerated to obtain a solution [Petit, 2012]. In a cumulative scheduling problem, we schedule activities, each of which consumes a certain amount of some resource. The total quantity of the resource available is limited by a capacity. Excesses can be represented by variables [De Clercq et al., 2011]. In practice, excesses might be tolerated by, for example, renting a new machine to produce more resource. Suppose the rental price decreases proportionally to its duration: it is cheaper to rent a machine during a single interval than to make several rentals. On the other hand, rental intervals generally have a maximum possible duration. FOCUS can be set to concentrate (non-null) excesses in a small number of intervals, each of length at most len.
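On a fully instantiated sequence, checking these three conditions reduces to a simple computation: each maximal run of values above k, of length L, must be split into ⌈L/len⌉ sequences of length at most len, and the total must not exceed y_c. A minimal Python sketch (the function name and coding are ours, not from the paper):

```python
from math import ceil

def focus_holds(x, y_c, length, k):
    """Check FOCUS(X, y_c, len, k) on a fully instantiated sequence x.

    Each maximal run of values > k must be covered exactly by disjoint
    sequences of length <= `length`; a run of size L therefore needs
    ceil(L / length) sequences, and the total must not exceed y_c.
    """
    needed, run = 0, 0
    for v in x:
        if v > k:
            run += 1
        else:
            needed += ceil(run / length)
            run = 0
    needed += ceil(run / length)
    return needed <= y_c
```

For instance, with k = 0 the sequence [1, 1, 0, 0, 1] contains two runs of high values, so at least two sequences are needed regardless of len.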
Unfortunately, the usefulness of FOCUS is hindered by the rigidity of its semantics. For example, we might be able to rent a machine from Monday to Sunday but not use it on Friday. It is a pity to miss such a solution, which has a smaller number of rental intervals, because FOCUS imposes that all the variables within each rental interval take a high value. Moreover, a solution with one rental interval of two days is better than a solution with one rental interval of four days. Unfortunately, FOCUS only considers the number of disjoint sequences; it does not consider their length.
We tackle these issues by means of three generalizations of FOCUS. SPRINGYFOCUS tolerates, within each sequence s_{i,j} ∈ S_X, some values v ≤ k. To keep the semantics of grouping high values, their number is limited in each s_{i,j} by an integer argument. WEIGHTEDFOCUS adds a variable to count the total length of the sequences, equal to the number of variables taking a value v > k. The most generic one, WEIGHTEDSPRINGYFOCUS, combines the semantics of SPRINGYFOCUS and WEIGHTEDFOCUS. Propagating such constraints, which are complementary to an objective function, is well known to be important [Petit and Poder, 2008; Schaus et al., 2009]. For each constraint, we present and experiment with filtering algorithms and decompositions.

Springy FOCUS
In Definition 1, each sequence in S_X contains exclusively values v > k. In many practical cases, this property is too strong. Consider one simple instance of the problem in the introduction, in Figure 1, where one variable x_i ∈ X is defined per point in time i (e.g., one day), to represent excesses of capacity. Initially, 4 activities are fixed and one activity a, of duration 5 and that can start from day 1 to day 5, remains to be scheduled (drawing A). If FOCUS(X, y_c = 1, 5, 0) is imposed, then a must start at day 1 (solution B). We have one 5-day rental interval. Assume now that the new machine may not be used every day. Solution (C) gives one rental of 3 days instead of 5. Furthermore, if len = 4 the problem has no solution using FOCUS, while this latter solution still exists in practice. This is paradoxical: the condition that sequences in the set S_X of Definition 1 take only values v > k actually deteriorates the concentration power of the constraint. Therefore, we propose a soft relaxation of FOCUS, where at most h values v ≤ k are tolerated within each sequence in S_X.
Definition 2. Let y_c be a variable and k, len, h be three integers, 1 ≤ len ≤ |X|, 0 ≤ h < len − 1. An instantiation of X ∪ {y_c} satisfies SPRINGYFOCUS(X, y_c, len, h, k) iff there exists a set S_X of disjoint sequences of indices s_{i,j} such that the four following conditions are all satisfied:
(1) |S_X| ≤ y_c;
(2) ∀x_l ∈ X, x_l > k ⇒ ∃s_{i,j} ∈ S_X such that l ∈ s_{i,j};
(3) ∀s_{i,j} ∈ S_X, j − i + 1 ≤ len;
(4) ∀s_{i,j} ∈ S_X, |{l ∈ s_{i,j} | x_l ≤ k}| ≤ h.

Bounds consistency (BC) on SPRINGYFOCUS is equivalent to domain consistency: any solution can be turned into a solution that only uses the lower bound min(x_l) or the upper bound max(x_l) of the domain D(x_l) of each x_l ∈ X (this observation was made for FOCUS [Petit, 2012]). Thus, we propose a BC algorithm. The first step is to traverse X from x_0 to x_{n-1}, to compute the minimum possible number of disjoint sequences in S_X (a lower bound for y_c), called the focus cardinality and denoted fc(X). We use the same notation for subsequences of X. fc(X) depends on k, len and h.
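On a fully instantiated sequence, fc(X) can be computed by a simple dynamic program over the positions holding values above k: any sequence can be trimmed to start and end at such positions without increasing its span or its number of low values. The quadratic Python sketch below is ours, for illustration of the semantics only (the paper's filtering algorithm runs in linear time):

```python
def springy_fc(x, length, h, k):
    """Minimum |S_X| for SPRINGYFOCUS on a fixed sequence x (our sketch).

    Every value > k must be covered by a sequence of span <= `length`
    containing at most h values <= k; sequences are trimmed to start and
    end at positions with value > k, so we only group those positions.
    """
    INF = float("inf")
    highs = [i for i, v in enumerate(x) if v > k]
    m = len(highs)
    # dp[t] = min number of sequences covering the first t high positions
    dp = [0] + [INF] * m
    for t in range(1, m + 1):
        for s in range(t, 0, -1):        # group highs[s-1 .. t-1] together
            i, j = highs[s - 1], highs[t - 1]
            span = j - i + 1
            lows = span - (t - s + 1)    # values <= k inside the span
            if span > length or lows > h:
                break                    # growing the group only gets worse
            if dp[s - 1] + 1 < dp[t]:
                dp[t] = dp[s - 1] + 1
    return dp[m]
```

For example, with x = [1, 0, 1], len = 3 and k = 0, allowing h = 1 merges the two high values into a single sequence, whereas h = 0 (the FOCUS semantics) requires two.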
Definition 3. Given x_l ∈ X, we consider three quantities.
Proof. By construction from Definitions 2 and 3.
To compute the quantities of Definition 3 for x_l ∈ X, we use plen(x_l), the minimum length of a sequence in S_{[x_0,x_1,...,x_l]} containing x_l among instantiations of [x_0, x_1, ..., x_l] where the number of sequences is fc([x_0, x_1, ..., x_l]); plen(x_l) = 0 if ∀s_{i,j} ∈ S_{[x_0,x_1,...,x_l]}, j ≠ l. card(x_l) is the minimum number of values v ≤ k in the current sequence in S_{[x_0,x_1,...,x_l]}, equal to 0 if ∀s_{i,j} ∈ S_{[x_0,x_1,...,x_l]}, j ≠ l. card(x_l) assumes that x_l > k; it has to be decreased by one if x_l ≤ k. For the sake of space, the proofs of the next lemmas are given in the Appendix.
The principle of Algorithm 2 is the following. First, the lower bound lb = fc(X) is computed at x_{n-1}. We execute Algorithm 1 from x_0 to x_{n-1} and conversely (arrays pre and suf). We thus have, for each quantity, two values for each variable x_l. To aggregate them, we implement regret mechanisms directly derived from Propositions 1 and 2, according to the parameters len and h.
Line 4 is optional, but it avoids some work when the variable y_c is fixed, thanks to the same property as for FOCUS (see [Petit, 2012]). Algorithm 2 performs a constant number of traversals of the sequence X. Its time complexity is O(n), which is optimal.

Weighted FOCUS
We present WEIGHTEDFOCUS, which extends FOCUS with a variable z_c limiting the sum of the lengths of all the sequences in S_X, i.e., the number of variables covered by a sequence in S_X. It distinguishes between solutions that are equivalent with respect to the number of sequences in S_X but not with respect to their length, as Figure 2 shows.
Definition 4. Let y_c and z_c be two integer variables and k, len be two integers, such that 1 ≤ len ≤ |X|. An instantiation of X ∪ {y_c} ∪ {z_c} satisfies WEIGHTEDFOCUS(X, y_c, len, k, z_c) iff there exists a set S_X of disjoint sequences of indices s_{i,j} that satisfies the three conditions of Definition 1 and, in addition, Σ_{s_{i,j} ∈ S_X} |s_{i,j}| ≤ z_c.

Definition 5 ([Petit, 2012]). Given an integer k, a variable x_l is penalizing iff min(x_l) > k, neutral iff max(x_l) ≤ k, and undetermined otherwise. We say x_l ∈ P_k iff x_l is penalizing, and similarly x_l ∈ N_k (neutral) and x_l ∈ U_k (undetermined).

Dynamic Programming (DP) Principle. Given a partial instantiation I_X of X and a set of sequences S_X that covers all penalizing variables in I_X, we consider two terms: the number of variables in P_k, and the number of undetermined variables, in U_k, covered by S_X. We want to find a set S_X that minimizes the second term. Given a sequence of indices s_{i,j}, its cost is cst(s_{i,j}) = |{p | x_p ∈ U_k, p ∈ s_{i,j}}|. We denote by cst(S_X) the cost of S_X, the sum cst(S_X) = Σ_{s_{i,j} ∈ S_X} cst(s_{i,j}). Given I_X we consider |P_k| = |{x_i ∈ P_k}|. We have: Σ_{s_{i,j} ∈ S_X} |s_{i,j}| = cst(S_X) + |P_k|.

We start by explaining the main difficulty in building a propagator for WEIGHTEDFOCUS. The constraint has two optimization variables in its scope, and we might not have a solution that optimizes both variables simultaneously.
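The labeling and the cost bookkeeping can be sketched in a few lines of Python (the identifiers are ours, introduced for illustration):

```python
def label(dom, k):
    """Label a variable by its domain: 'P' (penalizing) if every value
    exceeds k, 'N' (neutral) if no value does, 'U' (undetermined) otherwise."""
    if min(dom) > k:
        return 'P'
    if max(dom) <= k:
        return 'N'
    return 'U'

def cost(labels, seqs):
    """cst(S_X): number of undetermined variables covered by the sequences,
    given as (i, j) index pairs; the sum of sequence lengths then equals
    cst(S_X) + |P_k| when only 'P' and 'U' variables are covered."""
    return sum(1 for (i, j) in seqs for p in range(i, j + 1)
               if labels[p] == 'U')
```

For instance, with k = 0 and domains {1, 2}, {0, 2}, {0}, the labels are P, U, N; a single sequence s_{0,2} over labels P, U, P has cost 1, and its length 3 equals cst + |P_k| = 1 + 2.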
Example 1 suggests that we need to fix one of the two optimization variables and optimize only the other one. Our algorithm is based on a dynamic program [Dasgupta et al., 2006]. For each prefix of variables [x_0, x_1, ..., x_j] and a given cost value c, it computes a cover S_{c,j} of minimum cardinality, which covers all penalized variables in [x_0, x_1, ..., x_j] and has cost exactly c. If S_{c,j} does not exist, we assume that S_{c,j} = ∞. S_{c,j} is not unique, as Example 2 demonstrates.
Example 2. Consider the subsequence of variables [x_0, ..., x_5] and S_{1,5}. There are several sets of minimum cardinality that cover all penalized variables in the prefix [x_0, ..., x_5] and have cost 2, e.g., S^1_{1,5} = {s_{0,2}, s_{3,5}} or S^2_{1,5} = {s_{0,4}, s_{5,5}}. Assume we sort the sequences in each set by their starting points. We note that the second set is better if we want to extend the last sequence in the set, as its last sequence s_{5,5} is shorter than the last sequence in S^1_{1,5}, which is s_{3,5}.

Example 2 suggests that we need to put additional conditions on S_{c,j} to take into account that some sets are better than others. We can safely assume that none of the sequences in S_{c,j} starts at an undetermined variable, as we can always set such a variable to zero. Hence, we introduce a notion of an ordering between sets S_{c,j} and define conditions that this set has to satisfy.
Ordering of sequences in S_{c,j}. We introduce an order over the sequences in S_{c,j}: given a set of sequences S_{c,j}, we sort them by their starting points. We denote by last(S_{c,j}) the last sequence in S_{c,j} in this order. If x_j ∈ last(S_{c,j}) then |last(S_{c,j})| is, naturally, the length of last(S_{c,j}); otherwise |last(S_{c,j})| = ∞.
Ordering of sets S_{c,j}, c ∈ [0, max(z_c)], j ∈ {0, 1, ..., n − 1}. We define a comparison operation between two sets S_{c,j} and S_{c',j}: S_{c,j} ≤ S_{c',j} iff |S_{c,j}| < |S_{c',j}|, or |S_{c,j}| = |S_{c',j}| and |last(S_{c,j})| ≤ |last(S_{c',j})|. Note that we do not take the cost into account in the comparison, as the current definition is sufficient for us. Using this operation, we can compare all sets of the same cost for a prefix [x_0, ..., x_j]. We say that S_{c,j} is optimal iff it satisfies the following four conditions. Proposition 3 (Conditions on S_{c,j}).

Bounds disentailment. Each cell in the dynamic programming table is a pair f_{c,j} = {q_{c,j}, l_{c,j}}, where q_{c,j} is the cardinality |S_{c,j}| and l_{c,j} is the length |last(S_{c,j})|.
We say that f_{c,j}/q_{c,j}/l_{c,j} is a dummy (takes a dummy value) iff f_{c,j} = {∞, ∞}/q_{c,j} = ∞/l_{c,j} = ∞. If two values both equal ∞ then we consider them equal. We introduce a dummy variable x_{−1}, D(x_{−1}) = {0}, and a row f_{−1,j}, j = −1, ..., n − 1, to keep notations uniform. Algorithm 3 gives pseudocode for the propagator. The intuition behind the algorithm is as follows. We emphasize again that by cost we mean the number of covered variables in U_k.
If x_j ∈ P_k then we do not increase the cost of S_{c,j} compared to S_{c,j−1}, as the cost only depends on variables in U_k. Hence, the best move is to extend last(S_{c,j−1}) or, if possible, to start a new sequence. This is encoded in lines 9 and 10 of the algorithm. Figure 3(a) gives a schematic representation of these arguments. If x_j ∈ U_k then we have two options. We can obtain S_{c,j} from S_{c−1,j−1} by increasing cst(S_{c−1,j−1}) by one; this means that x_j will be covered by last(S_{c,j}). Alternatively, we can obtain it from S_{c,j−1} by interrupting last(S_{c,j−1}). This is encoded in line 12 of the algorithm (Figure 3(b)). If x_j ∈ N_k then we do not increase the cost of S_{c,j} compared to S_{c,j−1}. Moreover, we must interrupt last(S_{c,j−1}), line 14 (Figure 3(c), ignoring the gray arc).
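This case analysis can be turned into a compact table computation. The Python sketch below is our reading of the recurrences, not the paper's Algorithm 3 itself; each cell stores the pair (q, l), compared lexicographically:

```python
INF = float("inf")
DUMMY = (INF, INF)

def weighted_focus_table(labels, length, z_max):
    """Forward DP for WEIGHTEDFOCUS (a sketch under our reading).

    labels[j] is 'P', 'U' or 'N'.  f[c][t] = (q, l), where q is the minimum
    number of sequences covering all 'P' variables of the prefix of length t
    with exactly c covered 'U' variables, and l is the length of the last
    sequence if it ends at the prefix's last variable (INF otherwise).
    """
    n = len(labels)
    f = [[DUMMY] * (n + 1) for _ in range(z_max + 1)]
    f[0][0] = (0, INF)                        # empty prefix, cost 0
    for j, lab in enumerate(labels):
        for c in range(z_max + 1):
            q, l = f[c][j]
            cands = []
            if lab == 'P':                    # x_j must be covered
                if l < length:
                    cands.append((q, l + 1))  # extend the last sequence
                if q < INF:
                    cands.append((q + 1, 1))  # or start a new one at x_j
            elif lab == 'U':
                if q < INF:
                    cands.append((q, INF))    # leave x_j uncovered
                if c > 0:                     # cover x_j: pay one cost unit
                    q2, l2 = f[c - 1][j]
                    if l2 < length:
                        cands.append((q2, l2 + 1))
                    if q2 < INF:
                        cands.append((q2 + 1, 1))
            else:                             # 'N': must interrupt
                if q < INF:
                    cands.append((q, INF))
            f[c][j + 1] = min(cands, default=DUMMY)
    return f
```

For instance, with labels P, U, P and len = 3, cost 0 forces two sequences, while cost 1 allows a single sequence of length 3, illustrating the trade-off between y_c and z_c.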
First we prove a property of the dynamic programming table. We define a comparison operation between f_{c,j} and f_{c',j}, induced by the comparison operation between S_{c,j} and S_{c',j}: f_{c,j} ≤ f_{c',j} iff (q_{c,j} < q_{c',j}) or (q_{c,j} = q_{c',j} and l_{c,j} ≤ l_{c',j}). In other words, as in the comparison operation between sets, we first compare the cardinalities |S_{c,j}| and |S_{c',j}|, and then the lengths of the last sequences, last(S_{c,j}) and last(S_{c',j}). We omit the proofs of the next two lemmas due to space limitations (see Appendix).
The dynamic programming table f_{c,j} = {q_{c,j}, l_{c,j}}, c ∈ [0, max(z_c)], j = 0, ..., n − 1, is correct, in the sense that if f_{c,j} exists and is non-dummy then a corresponding set of sequences S_{c,j} exists and satisfies conditions 1–4. The time complexity of Algorithm 3 is O(n max(z_c)).
Bounds consistency. To enforce BC on the variables in X, we compute an additional DP table b, with cells b_{c,j}, c ∈ [0, max(z_c)], j ∈ [−1, n − 1], on the reversed sequence of variables.
Lemma 7. Consider WEIGHTEDFOCUS(X, y_c, len, k, z_c). Bounds consistency can be enforced in O(n max(z_c)) time.
Proof. (Sketch) We build the dynamic programming tables f and b. To check whether x_i = v has a support, it is sufficient to examine O(max(z_c)) pairs of values f_{c_1,i−1} and b_{c_2,n−i−2}, c_1, c_2 ∈ [0, max(z_c)], which are the neighbor columns of the ith column. It is easy to show that if we consider all possible pairs of elements in f_{c_1,i−1} and b_{c_2,n−i−2}, then we can determine whether a support exists. The main part of the proof shows that it is sufficient to consider O(max(z_c)) such pairs. In particular, to check a support for a variable-value pair x_i = v, v > k, for each f_{c_1,i−1} it is sufficient to consider only one element b_{c_2,n−i−2} such that b_{c_2,n−i−2} is non-dummy and c_2 is the maximum value satisfying the inequality c_1 + c_2 + 1 ≤ max(z_c). To check a support for a variable-value pair x_i = v, v ≤ k, for each f_{c_1,i−1} it is sufficient to consider only one element b_{c_2,n−i−2} such that b_{c_2,n−i−2} is non-dummy and c_2 is the maximum value satisfying the inequality c_1 + c_2 ≤ max(z_c).
We observe a useful property of the constraint: if there exists f_{c,n−1} such that c < max(z_c) and q_{c,n−1} < max(y_c), then the constraint is BC. This follows from the observation that, given a solution S_X of the constraint, changing the value of one variable can increase cst(S_X) and |S_X| by at most one.
Alternatively we can decompose WEIGHTEDFOCUS using O(n) additional variables and constraints.

Weighted Springy FOCUS
We consider a further generalization of the FOCUS constraint that combines SPRINGYFOCUS and WEIGHTEDFOCUS. We prove that we can propagate this constraint in O(n max(z_c)) time, which is the same as enforcing BC on WEIGHTEDFOCUS.

Definition 6. Let y_c and z_c be two variables and k, len, h be three integers, such that 1 ≤ len ≤ |X| and 0 < h < len − 1. An instantiation of X ∪ {y_c} ∪ {z_c} satisfies WEIGHTEDSPRINGYFOCUS(X, y_c, len, h, k, z_c) iff there exists a set S_X of disjoint sequences of indices s_{i,j} such that the five following conditions are all satisfied:
(1) |S_X| ≤ y_c;
(2) ∀x_l ∈ X, x_l > k ⇒ ∃s_{i,j} ∈ S_X such that l ∈ s_{i,j};
(3) ∀s_{i,j} ∈ S_X, j − i + 1 ≤ len;
(4) ∀s_{i,j} ∈ S_X, |{l ∈ s_{i,j} | x_l ≤ k}| ≤ h;
(5) Σ_{s_{i,j} ∈ S_X} |s_{i,j}| ≤ z_c.

We can again partition the cost of S_X into two terms: Σ_{s_{i,j} ∈ S_X} |s_{i,j}| = Σ_{s_{i,j} ∈ S_X} cst(s_{i,j}) + |P_k|. However, cst(s_{i,j}) is now the number of undetermined and neutral variables covered by s_{i,j}, cst(s_{i,j}) = |{p | x_p ∈ U_k ∪ N_k, p ∈ s_{i,j}}|, as each sequence is allowed to cover up to h neutral variables.
The propagator is again based on a dynamic program that, for each prefix of variables [x_0, x_1, ..., x_j] and a given cost c, computes a cover S_{c,j} of minimum cardinality that covers all penalized variables in the prefix [x_0, x_1, ..., x_j] and has cost exactly c. We face the same problem of how to compare two sets S^1_{c,j} and S^2_{c,j} of minimum cardinality. The issue here is how to compare last(S^1_{c,j}) and last(S^2_{c,j}) if they cover a different number of neutral variables. Luckily, we can avoid this problem thanks to the following monotonicity property. If last(S^1_{c,j}) and last(S^2_{c,j}) are not equal to infinity, then they both end at the same position j. Hence, if last(S^1_{c,j}) ≤ last(S^2_{c,j}), then the number of neutral variables covered by last(S^1_{c,j}) is no larger than the number covered by last(S^2_{c,j}). Therefore, we can define an order on the sets S_{c,j} as we did in Section 3 for WEIGHTEDFOCUS.
Our bounds disentailment detection algorithm for WEIGHTEDSPRINGYFOCUS mimics Algorithm 3. We omit the pseudocode due to space limitations, but highlight two non-trivial differences between this algorithm and Algorithm 3. The first difference is that each cell in the dynamic programming table carries a new parameter h_{c,j}, which stores the number of neutral variables covered by last(S_{c,j}). The second difference is in the way we deal with neutral variables. If x_j ∈ N_k, then we now have two options. We can obtain S_{c,j} from S_{c−1,j−1} by increasing cst(S_{c−1,j−1}) by one and increasing the number of neutral variables covered by last(S_{c,j−1}) (Figure 3(c), the gray arc). Alternatively, we can obtain S_{c,j} from S_{c,j−1} by interrupting last(S_{c,j−1}) (Figure 3(c), the black arc). BC can be enforced using two modifications of the corresponding algorithm for WEIGHTEDFOCUS (a proof is given in the Appendix).

Lemma 8. Consider WEIGHTEDSPRINGYFOCUS(X, y_c, len, h, k, z_c). BC can be enforced in O(n max(z_c)) time.
WEIGHTEDSPRINGYFOCUS can be encoded using the cost-REGULAR constraint. The automaton needs 3 counters to compute len, y_c and h. Hence, the time complexity of this encoding is O(n^4). This automaton is non-deterministic: on seeing v ≤ k, it either covers the variable or interrupts the last sequence. Unfortunately, the non-deterministic cost-REGULAR constraint is not implemented in any constraint solver to our knowledge. In contrast, our algorithm takes just O(n^2) time. WEIGHTEDSPRINGYFOCUS can also be decomposed using the GCC constraint [Régin, 1996]. We define the following variables for all i ∈ [0, max(y_c) − 1] and j ∈ [0, n − 1]: S_i, the start of the ith subsequence, D(S_i) = {0, ..., n + max(y_c)}; E_i, the end of the ith subsequence, D(E_i) = {0, ..., n + max(y_c)}; T_j, the index of the subsequence in S_X containing x_j, D(T_j) = {0, ..., max(y_c)}; Z_j, the index of the subsequence in S_X containing x_j such that the value of x_j is less than or equal to k, D(Z_j) = {0, ..., max(y_c)}; last_c, the cardinality of S_X, D(last_c) = {0, ..., max(y_c)}; and Card, a vector of max(y_c) variables with domain {0, ..., h}. WEIGHTEDSPRINGYFOCUS(X, y_c, len, h, k, z_c) is then decomposed as a conjunction of constraints over these variables.

Experiments
We used the Choco-2.1.5 solver on an Intel Xeon 2.27GHz for the first benchmark and an Intel Xeon 3.20GHz for the last ones, both under Linux. We compared the propagators (denoted by F) of WEIGHTEDFOCUS and WEIGHTEDSPRINGYFOCUS against two decompositions (denoted by D_1 and D_2), using the same search strategies, on three different benchmarks. The first decomposition, restricted to WEIGHTEDFOCUS, is shown in Proposition 4, while the second one is shown in Section 4. In the tables, we report for each set the total number of solved instances (#n), and we average both the number of backtracks (#b) and the resolution time (T), in seconds.

Sports league scheduling (SLS). We extend a single round-robin problem with n = 2p teams. Each week, each team plays a game either at home or away. Each team plays each of the other teams exactly once during a season. We minimize the number of breaks (a break for one team is two consecutive home or two consecutive away games), while fixed weights in {0, 1} are assigned to all games: games with weight 1 are important for TV channels. The goal is to group consecutive weeks where at least one game is important (sum of weights > 0), to increase the price of TV broadcast packages. Packages are limited to 5 weeks and should be as short as possible. Table 2 shows results with 16 and 20 teams, on sets of 50 instances with 10 random important games and a limit of 400K backtracks. max(y_c) = 3, and we search for one solution with h ≤ 7 (instances n-1), h ≤ 6 (n-2) and h ≤ 5 (n-3). In our model, inverse-channeling and ALLDIFFERENT constraints with the strongest propagation level express that each team plays once against each other team. We assign first the sum of breaks by team, then the breaks and places, using the DomOverWDeg strategy.

Cumulative Scheduling with Rentals. Given a horizon of n days and a set of time intervals [s_i, e_i], i ∈ {1, 2, ..., p}, a company needs to rent a machine between l_i and u_i times within each time interval [s_i, e_i]. We assume that the cost of the rental period is proportional to its length. On top of this, each time the machine is rented we pay a fixed cost. The problem is then defined as a conjunction of one WEIGHTEDSPRINGYFOCUS(X, y_c, len, h, 0, z_c) with a set of AMONG constraints. The goal is to build a schedule for rentals that satisfies all demand constraints and minimizes simultaneously the number of rental periods and their total length. We build a Pareto frontier over the two cost variables, as Figure 4 shows for one of the instances of this problem. We generated instances with a fixed maximum subsequence length of 20 (i.e., len = 20) and a probability of 50% of posting an AMONG constraint for each (i, j) s.t. j ≥ i + 5 in the sequence. Each set of instances corresponds to a unique sequence size ({40, 43, 45, 47, 50}) and 20 different seeds. We summarize these tests in Table 3. Results with the decomposition are very poor; we therefore consider only the propagator in this case.

Sorting Chords. We need to sort n distinct chords. Each chord is a set of at most p notes played simultaneously. The goal is to find an ordering that minimizes the number of notes changing between two consecutive chords. The full description and a CP model are in [Petit, 2012]. The main difference here is that we build a Pareto frontier over two cost variables. We generated 4 sets of instances distinguished by the number of chords ({14, 16, 18, 20}). We fixed the length of the subsequences and the maximum number of notes for all the sets, then changed the seed for each instance.
Tables 2, 3 and 4 show that the best results were obtained with our propagators (number of solved instances, average backtracks and CPU time over all the solved instances). Figure 4 confirms the gain of flexibility illustrated by Figure 1 in Section 2: allowing h = 1 variable with a low cost value in each sequence leads to new solutions, with significantly lower values for the target variable y_c.

Conclusion
We have presented flexible tools for capturing the concept of concentrating costs. Our contribution highlights the expressive power of constraint programming, in comparison with other paradigms where such a concept would be very difficult to represent. Our experiments have demonstrated the effectiveness of the proposed new filtering algorithms.