Neural Network And Local Search To Solve Binary CSP

Received Jan 15, 2018; Revised Mar 12, 2018; Accepted Mar 28, 2018

The Continuous Hopfield neural Network (CHN) is one of the effective approaches to solve Constraint Satisfaction Problems (CSPs). However, the main problem with the CHN is that it can reach stabilization with outputs taking real values, which means an inconsistent solution or an incomplete assignment of the CSP variables. In this paper, we propose a new hybrid approach combining the CHN and the min-conflict heuristic to mitigate these problems. The obtained results show an improvement in terms of solution quality: our approach achieves feasible solutions with a high rate of convergence. Furthermore, it can also outperform the conventional CHN in some cases, particularly when the network crashes.


INTRODUCTION
In general, a Constraint Satisfaction Problem (CSP) consists of a finite set of variables, each with a finite domain of values, and a set of constraints that restrict the values the variables may take simultaneously. The goal is to find a complete assignment of the variables which satisfies all constraints. Many real-world problems can be reformulated as CSPs, for example qualitative and symbolic reasoning, diagnosis, scheduling, spatial and temporal planning, hardware design and verification, real-time systems and robot planning. It is known that solving a CSP with finite domains is an NP-complete problem, which requires a combination of heuristics [1] and combinatorial search methods in order to be solved in a reasonable time [2].

Approaches to solve CSPs can be classified into two main categories: exact approaches and heuristic ones. Most exact approaches use the backtracking algorithm (BT) as the main algorithm for solving constraint satisfaction problems. BT applies a depth-first search to instantiate variables and a backtrack mechanism when dead-ends appear. Many works have been devoted to improving its forward and backward phases by introducing look-ahead and look-back schemes. Other search algorithms for classical CSPs, such as Forward Checking, Partial Look-ahead, Full Look-ahead, and Really Full Look-ahead, have been introduced and their performance has been widely studied in the literature [3][4][5][6]. For small problems, exact methods are preferable, but for big instances of NP-hard problems like the CSP they require a high time cost, since all of these methods have exponential worst-case complexity. In this case, non-exact approaches give an acceptable solution in reasonable time.

As far as heuristic approaches are concerned, a very different line of work has investigated the discrete Hopfield Neural Network (HNN) to solve CSPs [7]. In this neural network approach, the CSP constraints are encoded in the network topology, biases, and connection strengths. Recently, many works have applied neural networks to CSPs [8][9][10] or to problems which can be formulated as CSPs [11]. As we can see in [10], the authors propose mapping the CSP to a quadratic problem and elaborate an appropriate parameter setting of the network energy, then solve it by finding the network equilibrium point [8]. Based on the same convergence procedure, Calabuig [12] dynamically adds linear constraints to confine the network to the feasible subspace of solutions; this improvement ensures the validity of the final solution.

Practically, there are two important problems with approaches based on conventional neural network architectures. The first is that the HNN only partially mitigates the problem of getting stuck in a local optimum. The second is due to the Hopfield network dynamics, which continuously explore the search space and will not always stabilize at the borders 0 or 1. When this happens, we get a low-quality solution or an incomplete assignment of the variables. In order to tackle these problems, we propose to improve the CHN solution with the Min-Conflict heuristic (MNC) [13]. The MNC works as follows: it selects, for each variable, the value from its domain for which the total error in the next configuration is minimal. Precisely, MNC is a repair-based stochastic approach, which starts with a complete but inconsistent assignment and then repeatedly repairs variables involved in constraint violations until a consistent assignment is reached; this approach can solve large-scale problems in practical time.
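To make the repair principle concrete, the following is a minimal sketch of the min-conflict heuristic for a binary CSP. The representation of constraints as a dictionary of forbidden value pairs, and the helper names, are illustrative assumptions of this example, not the exact data structures of [13].

```python
import random

def conflicts(assignment, var, value, forbidden):
    """Count constraints violated if `var` takes `value` under `assignment`.
    `forbidden` maps a variable pair (i, j) to the set of disallowed (vi, vj)."""
    count = 0
    for (i, j), pairs in forbidden.items():
        if i == var and j in assignment and (value, assignment[j]) in pairs:
            count += 1
        elif j == var and i in assignment and (assignment[i], value) in pairs:
            count += 1
    return count

def min_conflicts(variables, domains, forbidden, max_steps=10000):
    # Start from a complete (probably inconsistent) random assignment.
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(assignment, v, assignment[v], forbidden) > 0]
        if not conflicted:
            return assignment  # all constraints satisfied
        var = random.choice(conflicted)
        # Repair: give `var` the value that minimizes its conflicts.
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(assignment, var, val, forbidden))
    return assignment  # best effort after max_steps
```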
This paper is organized as follows. In section 1, we present a modelization of the binary CSP as a 0-1 quadratic program and the generalized energy function associated with CSPs. Section 2 is devoted to the implementation details of the proposed approach. In the last section, we present the complexity analysis and the results of the numerical experiments against other approaches, such as the Genetic Algorithm [14][15][16].

BINARY CSPS SOLVED BY CHN

Quadratic model of constraint satisfaction problem
A large number of real problems, such as artificial intelligence problems, scheduling, and assignment problems, can be formulated as Constraint Satisfaction Problems. Solving a CSP requires finding an assignment of all the problem variables under the constraint restrictions. A CSP can be formulated as three sets:

1) A set of $N$ variables $X = \{X_i;\ 1 \le i \le N\}$.
2) A set of domains $D = \{D_i;\ 1 \le i \le N\}$, where $D_i$ is the finite set of possible values of $X_i$.
3) A set of constraints; each constraint $C_i$ is associated with an ordered subset of the variables, called the scope of $C_i$. The arity of a constraint is the number of variables involved.

We can easily reformulate a CSP as a Quadratic Problem (QP) by introducing a binary variable $x_{ik}$ for each CSP variable $X_i$ and each value $k$ in its domain:

$$x_{ik} = \begin{cases} 1 & \text{if } X_i \text{ takes the value } k, \\ 0 & \text{otherwise.} \end{cases}$$

For each binary constraint $C_{ij}$ between the variables $X_i$ and $X_j$, we associate a state function that counts the violated value pairs:

$$f_{ij}(x) = \sum_{(k,l)\ \text{forbidden by } C_{ij}} x_{ik}\, x_{jl} \qquad (2)$$

From all the functions defined in (2), which correspond to the problem constraints, we deduce the objective function of the equivalent QP, together with the linear constraints imposing that each variable takes exactly one value. The model is given as follows:

$$\min\ f(x) = \sum_{C_{ij}} f_{ij}(x) \quad \text{subject to} \quad \sum_{k \in D_i} x_{ik} = 1,\ \ 1 \le i \le N, \qquad x_{ik} \in \{0,1\}.$$

Systematically, to solve this quadratic optimization problem with the Hopfield model, we need to build an energy function such that the feasible solutions of the problem correspond to the minima of the CHN energy function.
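As an illustration, here is a minimal sketch of this 0-1 encoding in Python. The flattened indexing scheme and the dictionary of forbidden pairs are assumptions made for the example.

```python
import numpy as np

def build_qp(domains, forbidden):
    """Build the QP matrix Q of the binary encoding of a CSP.

    domains:   list of domain sizes, domains[i] = |D_i|
    forbidden: dict mapping (i, j) to a list of forbidden value pairs (k, l)
    A binary variable x_ik is flattened to index offset[i] + k.
    """
    offset = np.cumsum([0] + domains[:-1])
    n = sum(domains)
    Q = np.zeros((n, n))
    for (i, j), pairs in forbidden.items():
        for (k, l) in pairs:
            # Each forbidden pair contributes x_ik * x_jl to the objective.
            Q[offset[i] + k, offset[j] + l] += 1
    return Q, offset

def violations(x, Q):
    """Objective f(x) = x^T Q x: number of violated constraints under x."""
    return float(x @ Q @ x)
```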

Hopfield neural network
The Hopfield neural network was introduced by Hopfield and Tank [17][18][19]. It was initially developed to solve combinatorial optimization problems, and was then extended to other areas of application, such as recognition and optimization. The Hopfield neural network can be seen as an efficient local search, which gives an acceptable solution to hard optimization problems in a reasonable time. Basically, this neural network model is a fully connected network whose dynamic states are governed by the following differential equation:

$$\frac{dx}{dt} = -\frac{x}{\tau} + T y + i^{b}$$

where:
- $x$: vector of neuron inputs,
- $y$: vector of neuron outputs,
- $T$: matrix of weights between each pair of neurons,
- $i^{b}$: the biases, which describe the neurons' self-internal noise.
Hopfield proved that if $T$ is symmetric, then the network energy

$$E(y) = -\frac{1}{2}\, y^{t} T y - (i^{b})^{t}\, y$$

is a Lyapunov function of the dynamics. For each neuron, the output is governed by an activation function $y = g(x)$ which varies between 0 and 1, given by:

$$g(x) = \frac{1}{2}\left(1 + \tanh\frac{x}{u_{0}}\right)$$

We define the energy function of the CHN so that it is associated with the cost of the problem to be minimized; this is done by taking the penalty parameters large enough. In this paper, we use the ability of the CHN to solve quadratic optimization models, so we choose a CHN energy function similar to [10], adapted to the quadratic formulation of binary CSPs. Writing the penalty parameters as $\alpha$, $\Phi$, $\beta$ and $\gamma$, it takes the form:

$$E(x) = \frac{\alpha}{2} \sum_{C_{ij}} \sum_{(k,l)\ \text{forbidden}} x_{ik} x_{jl} + \frac{\Phi}{2} \sum_{i} \sum_{k \ne l} x_{ik} x_{il} + \frac{\beta}{2} \sum_{i} \Big(\sum_{k} x_{ik} - 1\Big)^{2} + \frac{\gamma}{2} \sum_{i,k} x_{ik}\,(1 - x_{ik})$$

The first term penalizes pairs of values taken by two linked variables which cause a violation; the second term restricts each variable to take at most one value; the third term aggregates all the linear constraints ($Ax = b$); and the last term enforces the integrality property, which drives the neurons to stabilize at the 0 or 1 state. By identifying this energy function term by term with the generic Hopfield energy above, we deduce the link weights and the biases of the network.
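A minimal sketch of the continuous dynamics is given below, using simple Euler integration; the step size, the gain $u_0$, and the stopping rule are illustrative choices, not the tuned settings of [10].

```python
import numpy as np

def chn_simulate(T, b, x0, u0=0.02, dt=1e-3, max_iter=100000, tol=1e-8):
    """Simulate dx/dt = -x/tau + T y + b with y = g(x) and tau = 1.

    T must be symmetric for the energy to be a Lyapunov function.
    Returns the output vector y at the equilibrium point.
    """
    x = x0.copy()
    for _ in range(max_iter):
        y = 0.5 * (1.0 + np.tanh(x / u0))   # activation g(x)
        dx = -x + T @ y + b                  # network dynamics (tau = 1)
        x += dt * dx
        if np.max(np.abs(dx)) < tol:         # equilibrium reached
            break
    return 0.5 * (1.0 + np.tanh(x / u0))

def energy(y, T, b):
    """Hopfield energy E(y) = -1/2 y^T T y - b^T y (decreases along trajectories)."""
    return -0.5 * y @ T @ y - b @ y
```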
To obtain an equilibrium point of the CHN, the parameter-setting procedure of [10][8] is used. The principle of this procedure is to move the network in a direction that decreases the energy, which speeds up the convergence of the neural network significantly. Furthermore, to ensure stability, we use the hyperplane analysis method, which tracks the evolution of the energy function. The following conditions must be imposed to guarantee convergence.
• The coefficient of the quadratic objective term must be positive, so that we have a minimization problem.
• The first condition avoids the case where two values are assigned to the same variable, i.e., two corresponding neurons satisfy x_ki = x_kj = 1 with i ≠ j; the energy E(x) must penalize such points.
• The second condition ensures that every variable is assigned, and excludes the case where x_ir = 0 for all values r; E(x) must also penalize such points.

Empirically, the best value of the remaining free parameter is 10^-5; the other parameters are then obtained by solving the system of conditions, as in [10]. The CHN parameter setting and the starting point used in this paper are similar to [10]. Furthermore, we add a procedure which forces every other neuron of the same variable to take the value 0 as soon as one of them converges to the border 1. This procedure is called the Moderator (Figure 1).

This approach, which only uses the Continuous Hopfield neural Network (CHN), was extensively studied over different instances of constraint satisfaction problems [8], but simulation showed that it has several disadvantages. For example, the network in continuous dynamics sometimes does not give a valid solution, because it is attracted by the local minimum nearest to the starting point. This attraction effect is a consequence of the deterministic input-output interaction of the units in the network; as a result, the network is unable to escape from a local solution close to the starting point. Moreover, in the continuous-dynamics case, each unit output can take any value between 0 and 1, so the network can be stranded at a local minimum where some units still take real values. In this case, we obtain an invalid solution of the CSP. To overcome these weaknesses of the neural network, the main idea of this work is to repair the solution given by the CHN and improve it with the well-known Min-Conflict algorithm.
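As an illustration of the Moderator procedure mentioned above, here is a minimal sketch, using the flattened neuron indexing of the earlier examples; taking the neuron with the largest output as the one that converges to the border 1 is a simplifying assumption of this example.

```python
import numpy as np

def moderator(y, offsets, domains):
    """Force a clean one-hot assignment per CSP variable.

    When one neuron of a variable converges to the border 1 (here
    approximated as the neuron with the largest output), the other
    neurons of the same variable are forced to 0.
    y:        flat vector of neuron outputs in [0, 1]
    offsets:  offsets[i] is the index of the first neuron of variable i
    domains:  domains[i] is the domain size of variable i
    """
    y = y.copy()
    for i, off in enumerate(offsets):
        block = y[off:off + domains[i]]
        winner = np.argmax(block)        # neuron closest to the border 1
        block[:] = 0.0
        block[winner] = 1.0
    return y
```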

CHN AND MIN CONFLICT HEURISTIC TO SOLVE CSPS
There are many methods which combine two or more non-exact approaches to solve a given optimization problem [11,[20][21][22][23]. In the same direction, we introduce a hybrid approach based on the CHN and MNC. The MNC algorithm [13] is a very simple and fast local repair method for solving CSPs. It starts by assigning all the variables randomly; next, it iteratively selects one variable from the set of conflicting variables, i.e., variables that violate one or more constraints of the CSP; then it assigns to the selected variable the value that minimizes the number of conflicts. MNC has been shown to solve large instances of the queens problem in minutes [24], and it is widely used to construct hybrid algorithms with other optimization methods [25][26][27][28]. In this spirit, the basic idea of our proposed approach is to use MNC to improve the solution reached by the CHN. This is done in two steps (see Figure 2 and the sketch below). First, MNC visits all assigned variables; for each one, we apply Min-Conflict directly to the neural network structure, and it returns the best assignment for the current variable (see Figure 3), the decision being taken from the sum of the weights of all activated neurons. Second, we propagate this assignment iteratively to the remaining set of not-yet-assigned variables by applying the MNC heuristic, to guarantee as much consistency as possible. The diagram of the proposed algorithm is shown in Figure 4.
In the illustrated example, the second phase takes the not-yet-assigned variable m4 and decides its value by comparing the accumulated weights w(0) and w(1) of its candidate values.
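The following is a minimal sketch of the two-phase repair, under the assumptions of the earlier examples (flattened neuron vector, forbidden-pair constraints, and the `conflicts` helper from the min-conflict sketch above); the exact decision rule of Figure 3 may differ.

```python
def repair_chn_output(y, offsets, domains, variables, forbidden, threshold=0.99):
    """Two-phase repair of a CHN output vector.

    Phase 1: variables whose neuron block contains an output at the
    border 1 are considered assigned; min-conflict re-checks their value.
    Phase 2: remaining variables are assigned greedily by min-conflict.
    """
    assignment = {}
    unassigned = []
    for i in variables:
        block = y[offsets[i]:offsets[i] + domains[i]]
        k = int(block.argmax())
        if block[k] >= threshold:
            assignment[i] = k          # phase 1: already decided by the CHN
        else:
            unassigned.append(i)       # phase 2: left to the heuristic
    # Phase 1: repair the assigned variables with min-conflict.
    for i in list(assignment):
        assignment[i] = min(range(domains[i]),
                            key=lambda v: conflicts(assignment, i, v, forbidden))
    # Phase 2: propagate the assignment to the not-yet-assigned variables.
    for i in unassigned:
        assignment[i] = min(range(domains[i]),
                            key=lambda v: conflicts(assignment, i, v, forbidden))
    return assignment
```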

Randomly generated instances
To evaluate the performance of the proposed resolution method, we ran preliminary experiments on randomly generated problems and we compare the solution quality by the number of violations. The simulations were run on a machine with an Intel Core i5 2.35 GHz processor and memory limited to 2 GB. The best values of the CHN parameters were found empirically. To generate random problems, we use a random generator based on the extended Model B, as described in [29][30][31]. This extended model, called Model RB, is able to generate hard satisfiable instances. In summary, a random CSP is defined by the following five input parameters (a generator sketch follows the list):
• k: the arity of each constraint, fixed at k=2, so all constraints are binary;
• n: the number of variables;
• α: determines the domain size d = n^α of each variable;
• r > 0: determines the number m = r·n·ln(n) of constraints;
• 0 < p < 1: determines the number t = p·d^k of disallowed tuples of each relation.
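A minimal sketch of such a Model RB-style generator is given below for the binary setting (k=2) used here; sampling constraint scopes and forbidden tuples without repetition follows the description in [29], while the exact rounding of d, m and t is our assumption.

```python
import math
import random

def generate_rb_instance(n, alpha, r, p):
    """Generate a random binary CSP following Model RB (k = 2)."""
    d = round(n ** alpha)                # domain size
    m = round(r * n * math.log(n))       # number of constraints
    t = round(p * d * d)                 # forbidden tuples per constraint
    domains = [d] * n
    forbidden = {}
    # Pick m distinct variable pairs as constraint scopes.
    scopes = random.sample([(i, j) for i in range(n) for j in range(i + 1, n)], m)
    all_pairs = [(k, l) for k in range(d) for l in range(d)]
    for scope in scopes:
        # For each constraint, forbid t distinct value pairs.
        forbidden[scope] = set(random.sample(all_pairs, t))
    return domains, forbidden
```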
In the following, a class of CSPs is denoted by a tuple of the form <n,d,m,p> (the arity k=2 is omitted since it is fixed). For the simulations, we generate 50 instances for each p value, with k=2, α=0.8, r=3 and n in {20, 30, 40}, and we compute the average number of violated constraints over the instances of the same class. For these parameters, we get the classes <20,11,180,p>, <30,15,306,p> and <40,19,443,p>; the corresponding load curves are given in Figures 5, 6 and 7 respectively. The comparison is made between the solution obtained by the proposed algorithm and the solution given by the original approach [10]. As we can see, Min-Conflict improves the quality of the solution considerably, by around 50% on average, and the success rate of the network reaches 100%, whereas for the pure CHN approach the empirical mean success rate is 72% over all runs.

Typical instances
To show the practical interest of our approach, we also study its performance over problems of different natures (random, academic and real-world problems) [32]. The goal is to evaluate its performance against other evolutionary algorithms [33,34]. Thus, we compare its efficiency with the Genetic Algorithm (GA) [35] and Particle Swarm Optimization (PSO) [36,37]. In practice, rather than the authors' settings, we empirically searched for good GA and PSO parameters adapted to the CSP. For the GA, the population was 200 individuals, the mutation rate was 5% and the crossover rate was 72%; for PSO [36,37], the population size was fixed at 100. Each algorithm was run 200 times. The CHN parameter setting and the starting point used in this paper are similar to [10], and we used the Variable Updating Step (VUS) technique proposed by Talaván and Yáñez in [8]. Table 1 shows the comparison between our approach (CHN-MNC), the original CHN, GA and PSO; in the table, V denotes the number of variables, and composed* denotes the instance composed-25-10-20-5. From Table 1 we learn that GA and CHN-MNC are close, and both are better than PSO; moreover, regarding the mean time taken by all compared algorithms, CHN-MNC is the shortest. The hybrid GA of [35] follows the same principle as our approach, combining a GA with conflict minimization, but it only improves the best individual. Improving the best solution does not guarantee obtaining a good one; it can still be near the worst one in the population. For this reason, the mean value over the multiple runs of our approach was the best, because it improves every solution found.

CONCLUSION
In this paper, we have proposed a new approach for solving binary constraint satisfaction problems. Our hybrid algorithm gives better solution quality than using the CHN alone, and this improvement is obtained without adding significant computation time. Furthermore, thanks to the repair phase, the rate of network success in producing a valid solution reaches 100%. The results of the numerical experiments also show that our approach is competitive with GA and PSO. The basic Hopfield neural network architecture described above handles only binary constraints, but some classes of problems are natively n-ary; the extension of this approach to such problems is possible if we can translate any n-ary problem into an equivalent binary one. Further studies are in progress to apply this approach to many real-world problems such as aircraft conflict resolution, timetabling and meeting scheduling.