NK Hybrid Genetic Algorithm for Clustering

The NK hybrid genetic algorithm (GA) for clustering is proposed in this paper. In order to evaluate the solutions, the hybrid algorithm uses the NK clustering validation criterion 2 (NKCV2). NKCV2 uses information about the disposition of <inline-formula> <tex-math notation="LaTeX">${N}$ </tex-math></inline-formula> small groups of objects. Each group is composed of <inline-formula> <tex-math notation="LaTeX">${K+1}$ </tex-math></inline-formula> objects of the dataset. Experimental results show that density-based regions can be identified by using NKCV2 with fixed small <inline-formula> <tex-math notation="LaTeX">${K}$ </tex-math></inline-formula>. In NKCV2, the relationship between decision variables is known, which in turn allows us to apply gray box optimization. Mutation operators, a partition crossover (PX), and a local search strategy are proposed, all using information about the relationship between decision variables. In PX, the evaluation function is decomposed into <inline-formula> <tex-math notation="LaTeX">${q}$ </tex-math></inline-formula> independent components; PX then deterministically returns the best among <inline-formula> <tex-math notation="LaTeX">${2^{q}}$ </tex-math></inline-formula> possible offspring with computational complexity <inline-formula> <tex-math notation="LaTeX">${O(N)}$ </tex-math></inline-formula>. The NK hybrid GA allows the detection of clusters with arbitrary shapes and the automatic estimation of the number of clusters. In the experiments, the NK hybrid GA produced very good results when compared to another GA approach and to state-of-the-art clustering algorithms.


I. INTRODUCTION
Clustering algorithms are important tools for the analysis and visualization of large complex datasets. Clustering is a grouping problem, where objects in the same cluster should be more similar to each other than to objects in other clusters. An important type of clustering is hard partitional clustering, where a single discrete label is assigned to each object in the dataset [1]. Unsupervised learning algorithms are used to find good partitions in clustering. In this case, only internal validation criteria can be applied, i.e., the disposition of the objects in the input space is the only information available to evaluate partitions.
Unfortunately, a unique definition of cluster does not exist [2], and several internal validation criteria have been proposed [3]. For example, the validation criterion used in k-means is the sum of distances from the objects to the centroids of the clusters [4]. This criterion is not appropriate when the number of clusters is unknown. Moreover, like many internal validation criteria, it works well only on problems with spherical clusters [5].
When employed in clustering and other grouping problems [6], evolutionary algorithms and other metaheuristics use fitness functions based on internal validation criteria [7]. In principle, external validation criteria could be used to evaluate the individuals; however, label information is not available in real-world clustering problems. Different internal validation criteria have been used to evaluate the partitions given by the individuals. For example, in [8], the weighted sum of cluster intradistances and interdistances is employed to evaluate solutions in a genetic algorithm used to group small clusters into larger clusters. In [9], the silhouette width criterion is used in the clustering genetic algorithm. Both internal validation criteria, as well as others generally used to evaluate individuals in evolutionary algorithms, are best suited to datasets with spherical clusters.
In [10], we proposed the NK clustering validation (NKCV) criterion. In the NKCV criterion, neighbourhood relations inside small groups of objects are used to improve the identification of clusters with arbitrary shapes. The NKCV function is composed of N subfunctions, where N is the number of objects in the dataset. Each subfunction is influenced by a small group of K + 1 objects. Other internal clustering criteria have been proposed to allow the detection of clusters with arbitrary shapes [11], but only a fitness function based on the NKCV criterion allows the application of partition crossover [12], [13]. Partition crossover is an efficient deterministic recombination operator that finds the best of 2^q possible offspring, where q is the number of recombining subsets, at O(N) cost per recombination if K is a constant. Partition crossover can only be applied when the evaluation function is decomposable, i.e., it cannot be applied when traditional internal clustering validation criteria are employed.
Here, we introduce the NK hybrid genetic algorithm (GA) for clustering. The NK hybrid GA evaluates candidate solutions using NKCV2, a new version of the NKCV criterion. Unlike other clustering validation criteria, the NK clustering validation criteria make explicit the relationship between decision variables. As a consequence, it is possible to use gray box optimization [14]. In other words, we can use the information about the relationship between variables, described in the evaluation function, to efficiently search for new solutions. In this work, we propose new search operators and strategies that enhance the optimization process in clustering by using the information about the relationship between variables. Gray box optimization is a promising new approach for clustering; similar search operators and strategies have been shown to be very efficient in other applications [13], [15], [16].
The main contributions of the paper are: i) Proposing a new NK clustering validation criterion (NKCV2) that uses the density of the data points, as well as the distances, to define the local groups. In the NKCV criterion [10], only the distances between the objects are used. Unlike the NKCV criterion, the NKCV2 criterion can be used in problems with noise. Using a definition similar to the one presented in [11], the NKCV2 criterion considers that noisy objects present small local density and are distant from the other objects in the group. ii) Proposing mutation operators and a local search method that use the information about the structure of the evaluation function given by the NKCV2 criterion. In other words, the operators explicitly explore the information about the interaction between decision variables obtained by analysing the subfunctions that form the evaluation function. This is also true for the partition crossover. The proposed operators and local search method can be applied to clustering only when the NKCV2 function is employed to evaluate the solutions. iii) Proposing the NK hybrid GA, which incorporates the NKCV2 function to evaluate the solutions, partition crossover, the new mutation operators, and the new local search method.
The proposed NK hybrid GA allows the identification of clusters with arbitrary shape and the automatic estimation of the number of clusters. Although algorithms based on the density of the points in the dataset, like DBSCAN (density-based spatial clustering of applications with noise) [17] and the density peaks (DP) clustering algorithm [5], also efficiently identify clusters with arbitrary shape and automatically estimate the number of clusters, they are sensitive to the algorithm parameters or to the manual selection of candidate prototypes. In fact, it is possible to run these algorithms with different combinations of parameters and find the best partition according to an evaluation function based on an internal validation criterion. The same approach can be used to select the best parameter k, which indicates the number of clusters, in k-means. In the proposed hybrid genetic algorithm, the automatic selection of the number of clusters is intrinsic: a large number of solutions is generated and the best ones are selected according to the NKCV2 criterion.
The remainder of the paper is organized as follows. Related work and some concepts related to hard partitional clustering are presented in Section II. Sections III and IV present the methodology proposed in this work: Section III presents the NKCV2 criterion, while Section IV presents the NK hybrid genetic algorithm. Comparative experiments with k-means, DBSCAN, the DP clustering algorithm, and another genetic algorithm approach are presented in Section V. Finally, the paper is concluded in Section VI.

II. BACKGROUND AND RELATED WORK
Each object is represented by an l-dimensional attribute vector y_j, while a dataset with N objects is represented by Y = {y_1, y_2, ..., y_N}. In hard partitional clustering, given a dataset Y, the best hard partition should be found. Clustering is an NP-hard grouping problem [6].
Three types of encoding are common in evolutionary algorithms applied to hard partitional clustering [7]. In the binary encoding, a bit is associated with each object. A value of 1 at the i-th position of the solution vector x means that the i-th object is a prototype for a cluster. A prototype is a particular vector representing a cluster. In many works, centroids and medoids of the cluster distributions are used as prototypes. A different kind of binary representation uses a binary matrix to indicate whether the i-th object (at the i-th column) belongs to the j-th cluster (at the j-th row). In the real representation, on the other hand, the coordinates of each prototype are represented in the solution vector x ∈ R^l. In this case, the number of clusters, N_c, is known in advance.
In the integer representation, used in the hybrid genetic algorithm proposed here, the number of clusters can change during the optimization process. A solution representing a partition is given by an N-dimensional vector x ∈ N^N. An element x_j = i indicates that y_j ∈ C_i, i.e., the object y_j belongs to cluster C_i. When noise is considered, label 0 can be used to indicate a noisy object, i.e., x_i = 0 indicates that the i-th object is noise. One can observe that the integer representation is redundant. For example, the same partition can be represented by x = {1, 1, 1, 1, 3, 3, 2, 2, 2, 2} or by x = {2, 2, 2, 2, 1, 1, 3, 3, 3, 3}. Such redundancy can cause problems when two solutions are recombined. A way to reduce the effects of redundancy is to use a renumbering process whenever two solutions are recombined [6].
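For illustration, one possible renumbering process can be sketched as follows. The helper name `renumber` and the first-appearance convention are ours, not taken from [6]; label 0 is assumed to mark noise, as in the integer representation above.

```python
def renumber(x):
    """Map cluster labels to a canonical form based on first appearance.

    Label 0 (noise) is preserved; the other labels are renamed 1, 2, ...
    in the order in which they first occur in the solution vector.
    """
    mapping = {}
    out = []
    for label in x:
        if label == 0:          # noise objects keep label 0
            out.append(0)
            continue
        if label not in mapping:
            mapping[label] = len(mapping) + 1
        out.append(mapping[label])
    return out

# The two redundant encodings of the same partition map to one canonical form:
a = renumber([1, 1, 1, 1, 3, 3, 2, 2, 2, 2])
b = renumber([2, 2, 2, 2, 1, 1, 3, 3, 3, 3])
```

After this mapping, two solutions encoding the same partition become identical, which avoids the recombination problems mentioned above.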
The proposed NK hybrid GA (Section IV) is compared in Section V to another GA approach for clustering: the clustering GA (CGA) proposed in [9]. The CGA contains strategies and operators that are usually employed in population metaheuristics applied to clustering. In this way, it is a good option for testing the performance of our approach relative to other population metaheuristics. Like the NK hybrid GA, the CGA uses the integer codification. The silhouette width criterion (Section II-A) is employed to evaluate the individuals. Two mutation operators are used (merge and split). In the first, a randomly chosen cluster is merged with the nearest cluster; the distance between centroids is used to measure the distance between clusters. In the second, a random cluster is split: objects closer to the farthest object (from the centroid of the original cluster) are moved to the new cluster. CGA uses a context-sensitive recombination operator similar to the crossover used in the grouping GA proposed by Falkenauer [6]. In the CGA crossover, k_1 clusters of the offspring are inherited from the first parent. The next step is to copy the non-overlapping clusters from the second parent. Finally, each of the remaining objects is allocated to the nearest cluster according to its distance to the centroids. Crossover is applied with rate p_c; if crossover is not applied, one of the two mutations is applied with equal probability.

A. Clustering Internal Validation
Many internal clustering validation measures have been proposed for hard partitional clustering. The k-means algorithm uses the sum of squared Euclidean distances from the objects to the centroids of the clusters, i.e.:

f_km(x) = Σ_{i=1}^{N_c(x)} Σ_{y_j ∈ C_i(x)} ||y_j − z_i(x)||²   (1)

where ||·|| represents the Euclidean norm, z_i(x) is the i-th centroid, and N_c(x) = k is the number of clusters in solution x. Other distance measures can also be used.
Prototypes are necessary for computing Eq. (1). Also, this validation criterion is generally not used in population metaheuristic approaches applied to clustering because the number of clusters (k) must be known in advance. Alternatively, the silhouette width criterion [2] can be employed:

f_sw(x) = (1/N) Σ_{j=1}^{N} s(y_j, x)   (2)

where:

s(y_j, x) = ( b(y_j, x) − a(y_j, x) ) / max{ a(y_j, x), b(y_j, x) }   (3)

Let d(y_j, C_i(x)) be the average dissimilarity (e.g., the average Euclidean distance) of y_j to all objects belonging to cluster C_i(x). Then a(y_j, x) = d(y_j, C_i(x)) for y_j ∈ C_i(x), i.e., a(y_j, x) is the average dissimilarity of y_j ∈ C_i(x) to all objects in the same cluster C_i(x). The term b(y_j, x) is the average dissimilarity of y_j to its closest neighbour cluster, i.e., b(y_j, x) = min_{i=1,...,N_c(x); y_j ∉ C_i(x)} d(y_j, C_i(x)). When applied as the evaluation function, Eq. (2) should be maximized. Also, by definition, s(y_j, x) = 0 if there is only one object in the cluster. The procedure to compute the evaluation function given by Eq. (2) has cost O(N²). However, this cost can be reduced to O(N) if the distances of y_j to the centroids are used instead of the distances to all objects in the clusters [7]. The silhouette width criterion has been used as an evaluation function in many population metaheuristic approaches applied to clustering [7]. Unlike the silhouette width criterion, the density-based clustering validation (DBCV) [11] was proposed for the validation of clusters with arbitrary shapes. DBCV was developed for the validation of clusters composed of dense objects separated by areas of low density. In order to capture the shape and density of clusters with arbitrary shape, a minimum spanning tree is built for each cluster. The idea of using minimum spanning trees, and more generally graphs, to represent the structure of the clusters is not new; it has already been used in other density-based validation criteria, e.g., in [18] and [19]. The main contribution of DBCV is the use of a symmetric reachability distance to build the minimum spanning tree [11]. DBCV also incorporates a mechanism to handle noisy datasets.
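The O(N²) version of the silhouette width criterion described above can be sketched as follows. The function name and the toy dataset are ours; for simplicity, the sketch assumes the partition has at least two clusters.

```python
import math

def silhouette_width(Y, x):
    """Average silhouette of a hard partition (O(N^2) version).

    Y: list of points (tuples); x: one integer label per point.
    s(y_j) = (b - a) / max(a, b); s = 0 for singleton clusters.
    """
    clusters = {}
    for j, label in enumerate(x):
        clusters.setdefault(label, []).append(j)

    total = 0.0
    for j, label in enumerate(x):
        own = clusters[label]
        if len(own) == 1:                      # singleton cluster: s = 0
            continue
        a = sum(math.dist(Y[j], Y[i]) for i in own if i != j) / (len(own) - 1)
        b = min(                               # closest neighbour cluster
            sum(math.dist(Y[j], Y[i]) for i in members) / len(members)
            for lab, members in clusters.items() if lab != label
        )
        total += (b - a) / max(a, b)
    return total / len(Y)

# Two well separated pairs score close to the maximum of 1:
Y = [(0.0, 0.0), (0.1, 0.0), (10.0, 0.0), (10.1, 0.0)]
score = silhouette_width(Y, [1, 1, 2, 2])
```

A partition that cuts across the two natural groups, e.g. `[1, 2, 1, 2]`, yields a much lower (negative) value on the same data.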

B. Local density
In k-means [4], Eq. (1) is minimized by successively recomputing the centroids of the k clusters and reallocating the objects to the closest current centroids. By allocating each object to the cluster with the nearest centroid, spherical clusters are found.
Another approach is to use the local density of the objects in order to create the clusters. The local density of the i-th object of a dataset with N objects is defined as:

ρ_i = Σ_{j≠i} K(y_i − y_j)   (4)

where K(y_i − y_j) is a kernel function. If the flat kernel is employed, K(y_i − y_j) = 1 if ||y_i − y_j|| < ǫ and 0 otherwise. In this case, ρ_i is equal to the number of objects whose distance to y_i is smaller than the cutoff distance ǫ. In this work, we use the Gaussian kernel, given by:

K(y_i − y_j) = e^{−(||y_i − y_j||/ǫ)²}   (5)

In DBSCAN [17], the definition of the parameters has a great impact on the performance of the algorithm. Clusters are formed by connecting neighbouring objects whose densities (ρ_i) are higher than a threshold minPts; otherwise, the object is considered noise. The definition of the parameters is less critical in the DP clustering algorithm [5]. Besides the local density ρ_i, the distance to the nearest object with higher density, δ_i, is computed for each object y_i. The next step is to plot ρ_i × δ_i for all objects and to manually find the objects presenting the highest densities and the highest distances δ_i. In other words, a rectangle is manually defined in the upper-right corner of the plot ρ_i × δ_i in order to select the objects with the highest densities and the highest distances δ_i. Those objects are taken as prototypes of the clusters, while objects with low densities and high distances are considered noise.
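The local density computation of Eqs. (4)-(5) can be sketched as follows; the function name and the toy dataset are ours, and the Gaussian form is the one stated above.

```python
import math

def local_density(Y, eps, kernel="gaussian"):
    """Local density rho_i for each object (Eq. 4), flat or Gaussian kernel."""
    N = len(Y)
    rho = [0.0] * N
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = math.dist(Y[i], Y[j])
            if kernel == "flat":               # count of neighbours within eps
                rho[i] += 1.0 if d < eps else 0.0
            else:                              # Gaussian kernel (Eq. 5)
                rho[i] += math.exp(-(d / eps) ** 2)
    return rho

# A point inside a tight group is much denser than an isolated one:
Y = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
rho = local_density(Y, eps=0.5)
```

With the flat kernel, `local_density(Y, eps=0.5, kernel="flat")` simply counts neighbours within the cutoff distance.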
The DP clustering algorithm involves a manual selection of the number of prototypes. However, it is not difficult to devise a mechanism similar to the one used to select k in k-means, i.e., running the algorithm with different parameters and using internal validation to select the best partitions. In fact, we use this approach in Section V when the DP algorithm is employed. A more serious disadvantage arises in problems where the points of highest density are not the prototypes of the clusters. For example, one can imagine circular clusters where all the objects have the same local density.
Both the DP clustering algorithm and DBSCAN can be applied in problems where the number of clusters is unknown beforehand. Also, both algorithms have intrinsic mechanisms to deal with noisy objects. Like other density-based clustering algorithms, e.g., the mean shift algorithm [20], they are able to detect clusters with arbitrary shapes.
Here, the local density of the objects (Eq. (4)) is used to build the interaction graph for the NK clustering internal validation criterion. We use the rule of thumb proposed in [5] for the definition of the cutoff distance: ǫ is chosen so that the average number of neighbours is 2% of the total number of objects in the dataset. This rule has shown to be robust across several datasets. More accurate ways to estimate ǫ exist and can be used for datasets with few objects [20].
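Under the 2% rule of thumb described above, the cutoff distance can be estimated by taking a quantile of the sorted pairwise distances: if each object should have about 2% of N neighbours on average, then about 2% of all pairwise distances should fall below ǫ. This quantile-based reading of the rule, and the helper name, are our assumptions.

```python
import math

def cutoff_distance(Y, fraction=0.02):
    """Choose eps so the average neighbour count is ~fraction * N.

    All N(N-1)/2 pairwise distances are sorted and the one at the
    `fraction` quantile is returned.
    """
    dists = sorted(
        math.dist(Y[i], Y[j])
        for i in range(len(Y)) for j in range(i + 1, len(Y))
    )
    pos = max(0, min(len(dists) - 1, round(fraction * len(dists)) - 1))
    return dists[pos]

# Ten collinear points spaced one unit apart:
Y = [(float(i), 0.0) for i in range(10)]
eps = cutoff_distance(Y)
```

For small datasets this quantile is coarse, which is consistent with the remark above that more accurate estimators of ǫ may be preferable when few objects are available.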

III. NK CLUSTERING INTERNAL VALIDATION CRITERION
The silhouette width criterion (Eq. (2)) is composed of a sum of N subfunctions. The same holds for the NKCV2 criterion. However, in the latter, each subfunction depends on a small number of objects K ≪ N, while in the former each subfunction can depend on a large number of objects. This is the main innovation of the NKCV2 criterion: using local information to build the evaluation function. Neighbourhood relations inside a small subset of objects are used to compute the subfunctions. As the subsets are not necessarily disjoint, links are formed between them. The neighbourhood relations represent the linkage between elements of the solution vector.
The NKCV2 criterion is given by:

f(x) = Σ_{i=1}^{N} f_i(x)   (6)

where each subfunction f_i : N^{K+1} → R is influenced by a subset of K + 1 elements of x, i.e., the subfunctions f_i are (K+1)-bounded. A mask m_i ∈ B^N with K + 1 ones can be used to represent the elements of x that influence subfunction f_i. If the element of m_i at position j is equal to one, then x_j influences subfunction f_i. As we will see in Section III-B, the disposition of the objects in the space also influences each subfunction f_i. However, given a dataset, the disposition is fixed, and we can consider that only the labels influence the subfunctions when a partition is evaluated.
The interactions among the elements of x can be represented by an adjacency matrix M = [m_1 m_2 ... m_N]. The directed graph with adjacency matrix M is called the interaction graph and is represented by G_ep. In the interaction graph, the vertex v_i, i = 1, ..., N, represents the i-th element of the solution vector x. The edge (v_j, v_i) indicates that element x_j influences f_i.
The term group is used here for the subset of K + 1 objects interacting in a subfunction. We expect groups to be uniform, i.e., composed of objects in the same cluster, if their respective objects are close. However, groups should not be uniform if they are composed of distant objects. By using Eq. (6), clusters with spherical distributions are not always favoured. The distribution of labels inside the groups is what matters for the evaluation of the partition. Decreasing K, smaller groups are considered.
One can observe that the NKCV2 function (Eq. (6)) is similar to the function used in the NK landscape model [21]. In the NK landscape model, random or adjacent interaction graphs are generally employed and the values of the subfunctions f_i are randomly generated. In NKCV2, the interaction graph is given by the disposition of the objects in the attribute space. Therefore, it does not depend on the partitions (x). The values of the subfunctions f_i are not random; they depend on the disposition of the objects in the attribute space and also on the partitions (x). The method for building the interaction graph, i.e., for defining the masks m_i, is described in Section III-A. The subfunctions f_i are described in Section III-B.
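A decomposable evaluation function of the form of Eq. (6) can be sketched as follows. The subfunction used here is a toy match-reward term, not the NKCV2 subfunction of Section III-B; only the mask-based decomposition is the point of the sketch.

```python
def evaluate(x, masks, subfunctions):
    """Evaluate a (K+1)-bounded decomposable function f(x) = sum_i f_i(x).

    masks[i] lists the indices of the K+1 variables influencing f_i;
    subfunctions[i] maps the tuple of those labels to a real value.
    """
    return sum(
        subfunctions[i](tuple(x[j] for j in masks[i]))
        for i in range(len(masks))
    )

# Toy example with N = 4, K = 1: each f_i rewards matching labels (lower is
# better, in line with minimization of the evaluation function).
masks = [[0, 1], [1, 0], [2, 3], [3, 2]]
f_i = lambda labels: -1.0 if labels[0] == labels[1] else 1.0
subfunctions = [f_i] * 4
good = evaluate([1, 1, 2, 2], masks, subfunctions)   # all groups uniform
bad = evaluate([1, 2, 1, 2], masks, subfunctions)    # no group uniform
```

Because f is a sum of small bounded terms, changing one label only affects the subfunctions whose masks contain it, which is what the gray box operators of Section IV exploit.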

A. Building the interaction graph
After creating a vertex with a self-loop, v_i, for each object y_i of the dataset, two procedures are applied to create the interaction graph G_ep. In the first procedure, for each vertex v_i representing y_i, the vertex representing the nearest object with higher density, v_{a_i}, is identified. Then, an edge is created from v_{a_i} to v_i. We expect that, by connecting objects with densities that are local maxima, connections between different clusters can be formed. This is important because objects in a cluster should be more similar to other objects in the same cluster than to objects in other clusters. Computing the densities of the objects involves computing all the distances between the objects, which has O(N²) time complexity. Computing the densities also involves computing the cutoff distance ǫ used in Eq. (5), which has O(N² log N) time and memory complexity when the method presented in [5] is used.
In the second procedure, edges between vertices representing close objects are added. Incident edges from the nearest objects are added to each vertex v_i until its indegree becomes equal to K + 1. The time complexity of the second procedure is O(N²).
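The two procedures for building G_ep can be sketched as follows. This is a quadratic-time sketch; representing G_ep directly as per-vertex in-neighbour lists (the groups M_i) is our choice, and the function name is ours.

```python
import math

def build_interaction_graph(Y, rho, K):
    """Return the in-neighbour lists (groups M_i) of G_ep.

    Procedure 1: self-loop plus an edge from the nearest object with
    higher density. Procedure 2: edges from the nearest objects until
    the indegree of every vertex equals K + 1.
    """
    N = len(Y)
    masks = []
    for i in range(N):
        in_nbrs = [i]                                   # self-loop
        higher = [j for j in range(N) if rho[j] > rho[i]]
        if higher:                                      # nearest higher-density object
            a_i = min(higher, key=lambda j: math.dist(Y[i], Y[j]))
            in_nbrs.append(a_i)
        by_dist = sorted(                               # fill with nearest objects
            (j for j in range(N) if j != i),
            key=lambda j: math.dist(Y[i], Y[j]),
        )
        for j in by_dist:
            if len(in_nbrs) == K + 1:
                break
            if j not in in_nbrs:
                in_nbrs.append(j)
        masks.append(in_nbrs)
    return masks

# Three nearby points plus one distant point; densities chosen by hand:
Y = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (10.0, 0.0)]
rho = [1.0, 2.0, 1.5, 0.1]
masks = build_interaction_graph(Y, rho, K=2)
```

Note that the vertex with the globally highest density has no higher-density neighbour, so its group is completed using nearest objects only.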
An example of building G_ep is shown in Figure 1. The evaluation function (Eq. (6)) created from the interaction graph (Figure 1.c) is given by Eq. (7).

B. Subfunctions f_i
The subfunctions f_i(.) are influenced by the labels (x) and by the disposition of the objects inside the groups (y). For simplicity, the dependency on y is not shown in f_i(.). The subfunctions f_i(x) are defined as a sum of terms over the objects of the i-th group:

f_i(x) = Σ_{j ∈ M_i} α(x_i, x_j)   (8)

where M_i indicates the indices of the K + 1 objects inside the i-th group, i.e., all indices j for the vertices v_j such that (v_j, v_i) is an edge of G_ep. The subset M_i contains the index i and the other K indices of the objects that influence function f_i. In Eq. (8), the term α(x_i, x_j) depends on the labels of the two objects. When x_i = 0, α(x_i, x_j) is defined using the distance d_{i,j} = ||y_i − y_j|| between objects y_i and y_j, the density ρ_i, and the thresholds c_2, c_ρ ∈ R^+ (see Figure 2.b). We use the same definition of noise employed in [5], i.e., the i-th object is considered noise if its density (ρ_i) is small and it is distant from the other objects in the group. When the labels x_i and x_j of two objects inside group M_i are equal, we expect the objects to be close; α(x_i, x_j) then rewards small distances d_{i,j}, controlled by the thresholds c_1, c_3 ∈ R^+ (see Figure 2.a). Otherwise, i.e., when the labels x_i and x_j of the two objects inside group M_i are different, we expect the objects to be distant. One can observe that the density of the j-th object, ρ_j, is also employed in the definition of α: objects with higher density have more impact on f_i than objects with lower density.
If the matrix of distances between the objects is available, the procedure to compute c_1, c_2, and c_3 has O(NK) time complexity. If the densities ρ_i are available, the procedure to compute c_ρ has O(N) time complexity. The thresholds can be finely adjusted for each dataset. However, using general rules based on means and standard deviations to compute c_1, c_2, c_3, and c_ρ is much simpler. Experiments (not shown here) indicate that the NKCV2 criterion is robust to the choice of the thresholds. Also, experimental results (Section V) indicate that setting the thresholds based on equations (13)-(16) results in good performance for diverse datasets.
Finding the interaction graph G_ep and computing the thresholds and densities are done only once for each dataset. The labels of the objects, given in x, are not used for building G_ep or for computing the thresholds and densities. If more than one partition is evaluated for a given clustering instance, it is not necessary to build G_ep again or to recompute the thresholds and densities. This is the case for metaheuristics applied to clustering, because several individuals are evaluated for a single dataset. Given G_ep as well as the thresholds and densities, the time and memory complexity for evaluating one individual using Eq. (6) is O(NK).

IV. NK HYBRID GENETIC ALGORITHM
In the NK hybrid GA, individuals represent clustering partitions using the integer encoding (Section II). The individuals are evaluated using the NKCV2 function (Eq. (6)), which is similar to the function used in the NK landscape model. The NK landscape model belongs to a class of optimization problems where information about the relationship between variables is available by inspection of the evaluation function [14]. In this class of problems, the interaction graph G_ep can be obtained by analysing the problem structure, i.e., the variables that interact in the subfunctions. Unlike black box optimization algorithms, gray box optimization algorithms use the problem structure, described in the evaluation function, for designing operators and strategies.
The information about the interaction between variables in the subfunctions of the NKCV2 function is employed in a mutation operator and in the local search proposed in this paper for clustering. Two other mutation operators that explore the local density of the objects are also proposed. The local search and the mutation operators are described in Section IV-A. Partition crossover for clustering, presented in Section IV-B, also uses the interaction graph. By using G_ep and the common elements found in the parents, it is possible to decompose the evaluation function in order to find an efficient crossover mask.
The pseudo-code for the NK hybrid GA is presented in Algorithm 1. The population P is composed of size_pop individuals. The initial population is formed by random individuals optimized by local search. Elitism and tournament selection are employed. Partition crossover is applied with rate p_c. Before applying crossover, the labels in one of the parents are mapped to the labels of the other parent using a renumbering process [7]. After crossover, clusters with the same label are relabeled using procedure fixLabels. In procedure fixLabels, the labels of all clusters are compared, and a cluster is relabeled whenever its label is already used by another cluster. When crossover is not applied, the individual is transformed using one of three types of mutation. The first type, which changes only one element of the parent solution, is applied in 60% of the cases. Each of the other two types occurs in 20% of the mutations. In the experiments with the NK hybrid GA (Section V), we used standard values for the parameters or values adjusted in previous experiments (not shown here). Our experience indicates that the performance of the GA is, in general, robust to the choice of parameters like the crossover and mutation rates. In order to preserve the diversity of the population, 30% of the population is replaced by random individuals optimized by local search every 100 generations. Local search is also applied to the remaining individuals.

A. Designing Mutation Operators and a Local Search Method Based on the NK Clustering Validation Criterion
The methods proposed here use the interaction graph, G_ep, in order to change a solution x. All methods use ∆-evaluation of f to speed up the computations. ∆_{i,v}(x) denotes the change in the value of f(x) when variable x_i is assigned a new value v. In formal terms, ∆_{i,v}(x) = f(y) − f(x), where y_j = x_j for all j ≠ i and y_i = v. In the ∆-evaluation, it is necessary to recompute only the subfunctions influenced by the label of the i-th object. In the algorithms, M_p is the subset of indices for the K + 1 objects inside the p-th group, i.e., the neighbours with edges to v_p in G_ep. The time complexity of computing ∆_{i,v}(x) is O(K_out K), where K_out is the maximum number of subfunctions influenced by a single variable, i.e., the maximum outdegree in G_ep.

In the first mutation (mutationNK), the label of the i-th object, x_i, is changed to the label of one of the objects with index belonging to the i-th group (Algorithm 2). In other words, x_i assumes the value of one of the K variables x_j, j ≠ i, that influence f_i. The chosen label is the one that generates the minimum value of ∆_{i,v}(x).

The other two mutation operators are cluster-oriented, i.e., they are not designed to change a single object, but to transform a whole cluster. In mutationMerge (Algorithm 3), two clusters are merged, i.e., the objects in one cluster assume the label of the other cluster. In mutationSplit (Algorithm 4), one cluster is split, i.e., some objects of the selected cluster receive a new label. Several works in the literature use merge and split operators [7]. Here, we propose the use of the local densities of the objects when splitting and merging clusters. In Algorithms 3 and 4, the subset labels(x) contains the N_c(x) labels found in partition x.
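The ∆-evaluation can be sketched as follows for a generic decomposable function; in practice, the list of subfunctions affected by each variable would be precomputed from G_ep rather than rebuilt at each call, and the subfunction used in the example is a toy term, not the NKCV2 one.

```python
def delta_eval(x, i, v, masks, subfunctions):
    """Change in f(x) when x_i is assigned value v (Delta_{i,v}(x)).

    Only the subfunctions whose group contains variable i are recomputed.
    """
    affected = [p for p in range(len(masks)) if i in masks[p]]
    old = sum(subfunctions[p](tuple(x[j] for j in masks[p])) for p in affected)
    y = list(x)
    y[i] = v
    new = sum(subfunctions[p](tuple(y[j] for j in masks[p])) for p in affected)
    return new - old

# Consistency check against a full re-evaluation on a toy instance:
masks = [[0, 1], [1, 2], [2, 0]]
g = lambda labels: -1.0 if labels[0] == labels[1] else 1.0
subs = [g] * 3
full = lambda z: sum(subs[p](tuple(z[j] for j in masks[p])) for p in range(3))
x = [1, 1, 2]
d = delta_eval(x, 2, 1, masks, subs)   # change x_2 from 2 to 1
```

The delta equals the difference of two full evaluations, but touches only the affected subfunctions, which is the source of the speedup.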
In mutationMerge, the label of one cluster (cluster_1) is randomly chosen. Then, the prototype of each cluster is found. We consider here that the prototype of a cluster is the object with the highest density in the cluster. The next step is to choose the label of a second cluster (cluster_2) according to probabilities proportional to the scaled distances between the prototypes: clusters close to C_{cluster_1} have a higher probability of being selected than distant clusters. The last step is to change the labels of all objects in C_{cluster_2} to label cluster_1.
The local densities and distances between objects are computed only once for each dataset, i.e., they are computed only at the beginning of the run (Section III). Thus, the time complexity of Algorithm 3 is O(K_out K|C_{cluster_2}|). Observe that this cost can be worse than the cost O(NK) of evaluating a new solution using the NK evaluation function (Eq. (6)) for large clusters, i.e., if K_out|C_{cluster_2}| is close to N. This can occur because some subfunctions are evaluated more than once when the ∆-evaluation is done in Algorithm 3. As a consequence, when K_out|C_{cluster_2}| is close to N, it is better to evaluate the solution outside mutationMerge using Eq. (6).
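A sketch of mutationMerge follows. The density-based prototypes match the description above; the inverse-distance weights used to favour nearby clusters are our reading of "probabilities proportional to the scaled distances", and the function name is ours.

```python
import math
import random

def mutation_merge(x, Y, rho, rng=random):
    """Merge a randomly chosen cluster with a (preferably nearby) cluster.

    The prototype of a cluster is its highest-density object; the second
    cluster is drawn with probability inversely related to the distance
    between prototypes (our assumption for the distance scaling).
    """
    labels = sorted(set(l for l in x if l != 0))
    if len(labels) < 2:
        return list(x)
    c1 = rng.choice(labels)
    proto = {                                   # highest-density object per cluster
        l: max((j for j in range(len(x)) if x[j] == l), key=lambda j: rho[j])
        for l in labels
    }
    others = [l for l in labels if l != c1]
    weights = [1.0 / (1e-12 + math.dist(Y[proto[c1]], Y[proto[l]])) for l in others]
    c2 = rng.choices(others, weights=weights)[0]
    return [c1 if xi == c2 else xi for xi in x]  # relabel C_cluster2 as cluster1

# Three clusters of two points each; every merge reduces the label count by one:
Y = [(0.0, 0.0), (0.2, 0.0), (1.0, 0.0), (1.2, 0.0), (5.0, 0.0), (5.2, 0.0)]
rho = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0]
x = [1, 1, 2, 2, 3, 3]
x2 = mutation_merge(x, Y, rho, random.Random(0))
```

The operator returns a new solution vector, leaving the parent untouched, so that the caller can decide whether to keep the offspring.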
In mutationSplit, the label of one cluster is randomly chosen according to the distribution of the clusters' sizes: larger clusters have a higher probability of being split. The two objects with the highest densities are then selected; they are the new prototypes for the two new clusters. The labels of the objects inside the cluster are defined according to their distances to the prototypes: if the i-th object is closer to the second prototype than to the first prototype, then x_i receives the new label. The time complexity of Algorithm 4 is O(K_out|C_{cluster_1}|K). Thus, as in mutationMerge, if K_out|C_{cluster_1}| is close to N, it is better to evaluate the solution outside mutationSplit using Eq. (6).
Finally, we propose a local search method based on mutationNK. However, instead of computing the best improvement for each variable x_i, we use the first-improvement strategy. Also, neutral moves, i.e., moves with ∆_{i,v}(x) = 0, are allowed. In Algorithm 5, the index i of the variable to be changed is randomly chosen. As in mutationNK, variable x_i is changed to the label of one of the objects that influence f_i. Only the subfunctions affected by changing x_i are recomputed when the new solution is evaluated. This procedure is repeated until the stopping criterion is met.
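A sketch of the first-improvement local search with neutral moves is given below, on a toy decomposable function; a fixed trial budget is used as the stopping criterion, which is our assumption, and the subfunction is a toy match-reward term rather than the NKCV2 one.

```python
import random

def local_search(x, masks, subfunctions, max_trials=200, rng=random):
    """First-improvement local search with neutral moves (sketch).

    Each trial draws an index i at random and tentatively sets x_i to the
    label of another object in its group M_i; the move is kept whenever
    Delta_{i,v}(x) <= 0, so neutral moves are accepted.
    """
    x = list(x)
    N = len(x)
    # inverse adjacency: subfunctions influenced by each variable
    influences = [[p for p in range(len(masks)) if i in masks[p]] for i in range(N)]

    def partial(z, ps):
        return sum(subfunctions[p](tuple(z[j] for j in masks[p])) for p in ps)

    for _ in range(max_trials):                  # stopping criterion: trial budget
        i = rng.randrange(N)
        others = [j for j in masks[i] if j != i]
        v = x[rng.choice(others)]
        if v == x[i]:
            continue
        before, old = x[i], partial(x, influences[i])
        x[i] = v
        if partial(x, influences[i]) - old > 0:  # worse move: revert
            x[i] = before
    return x

# Toy function on a 4-cycle of variables: matching labels are rewarded.
masks = [[0, 1], [1, 2], [2, 3], [3, 0]]
g = lambda labels: -1.0 if labels[0] == labels[1] else 1.0
subs = [g] * 4
full = lambda z: sum(subs[p](tuple(z[j] for j in masks[p])) for p in range(4))
x0 = [1, 2, 1, 2]
result = local_search(x0, masks, subs, rng=random.Random(42))
```

Since accepted moves never increase f, the returned solution is never worse than the input.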

B. Partition Crossover for Clustering
In [12], Whitley et al. proposed a deterministic recombination operator, named partition crossover (PX), for the travelling salesman problem. PX explores the decomposition of the evaluation function by finding the connected components of the weighted graph formed by the union of the two parent solutions. If q connected components are found, 2^q ways to recombine the solutions are possible, obtained by selecting the edges inside each connected component from one or the other parent. In PX, it is possible to compute the partial evaluation of each connected component for each parent. Thus, if the best partial solutions are chosen, the best among the 2^q possible offspring is found at O(N) computational cost.
In [13], Tinós et al. extended PX to K-bounded pseudo-Boolean optimization; the NK landscape model [21] is an example of such a problem. The key idea for PX in this case is the observation that, if both parents share the same value for one element of the solution vector, the influence of inheriting this element from either parent on the subfunctions is the same. Thus, the edges from vertices whose decision variables are shared by both parents can be removed from the interaction graph G_ep. Finding the connected components of the resulting graph, named the recombination graph (G_rec), allows the decomposition of the evaluation function.
In PX, the values of the subfunctions in the offspring are inherited from the parents. As a consequence, the evaluation of a solution x', obtained by applying PX to two parents x^p1 and x^p2, can be written as the sum of the partial evaluations h_j of the q connected components, each computed using one of the parents; the set S of indices of all component functions can be separated into two disjoint subsets S^p1 and S^p2, one with partial evaluations computed using variables of x^p1 and one using variables of x^p2, where S_j indicates the subset of variables in the j-th connected component. When the variables inside the j-th connected component are inherited from parent x, the partial evaluation h_j(x) is the sum of the subfunctions whose variables lie in S_j. For minimization, the best solution among the 2^q possible offspring is obtained by selecting, for each component, the parent with the best partial evaluation, i.e., x'_i = x^p1_i for all vertices v_i in the j-th component if h_j(x^p1) < h_j(x^p2), and x'_i = x^p2_i otherwise. Here, PX is used in clustering. Thus, the integer codification is used instead of the binary codification employed in K-bounded pseudo-Boolean optimization. This does not prevent the use of PX: since the NKCV2 function is similar to the function used in the NK landscape model, PX can be used without adaptation. Here, the time and memory complexity of PX is O(NK), which is O(N) if K is a constant. PX for clustering is described in Algorithm 6.
PX can only be applied when the evaluation of the offspring can be computed as a sum of partial evaluations found in the parents (Eq. (17)). In other words, the evaluation function should be decomposable. In order to be decomposable, the evaluation function should be a linear sum of subfunctions, each one depending on a small number of objects K << N (see Eq. (6)). To the best of the authors' knowledge, the other internal clustering criteria previously described in the literature are not decomposable. As a consequence, PX can be applied only when the NKCV2 criterion is used to evaluate the candidate solutions.
Algorithm 6 - PX for clustering:
4: Delete vertex v_i, with all its edges, from G_rec
5: end if
6: end for
7: Use breadth-first search to find the q connected components of G_rec
8: for j = 1 to q do
9: Compute the partial evaluations of the j-th connected component for both parents (h_j(x^p1) and h_j(x^p2))
10: if h_j(x^p1) < h_j(x^p2) then
11: x'_i = x^p1_i for all vertices v_i in the j-th connected component
12: else
13: x'_i = x^p2_i for all vertices v_i in the j-th connected component
14: end if
15: end for
16: x'_i = x^p1_i for all remaining elements
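Under the simplifying assumption that each subfunction's variable group is known explicitly, the PX steps above can be sketched as follows; groups, subfun, and the adjacency construction are illustrative, and minimization is assumed:

```python
from collections import deque

def partition_crossover(p1, p2, groups, subfun):
    """Illustrative PX sketch (minimization). groups[j] lists the
    variables read by subfunction f_j; subfun(j, x) evaluates f_j."""
    n = len(p1)
    diff = {i for i in range(n) if p1[i] != p2[i]}
    # recombination graph: connect differing variables that co-occur
    # in some subfunction; shared variables are dropped, as in PX
    adj = {i: set() for i in diff}
    for g in groups:
        d = [i for i in g if i in diff]
        for a in d:
            for b in d:
                if a != b:
                    adj[a].add(b)
    # breadth-first search finds the q connected components of G_rec
    comp_of = {}
    components = []
    for s in diff:
        if s in comp_of:
            continue
        members, queue = [], deque([s])
        comp_of[s] = len(components)
        while queue:
            u = queue.popleft()
            members.append(u)
            for v in adj[u]:
                if v not in comp_of:
                    comp_of[v] = len(components)
                    queue.append(v)
        components.append(members)
    # per component, inherit from the parent with the best partial evaluation
    child = list(p1)  # shared variables come from either parent
    for members in components:
        mset = set(members)
        touched = [j for j, g in enumerate(groups) if mset & set(g)]
        h1 = sum(subfun(j, p1) for j in touched)
        h2 = sum(subfun(j, p2) for j in touched)
        if h2 < h1:
            for i in members:
                child[i] = p2[i]
    return child
```

Because differing variables that share a subfunction end up in the same component, each subfunction is touched by at most one component, so the summed partial evaluations are independent, as PX requires.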

V. EXPERIMENTS
Experiments with synthetic and real-world datasets (Section V-A) are presented. First, the NKCV2 criterion is compared to other criteria. Then, experiments where the NK hybrid GA is compared to other approaches are presented.

A. Experimental Design
1) Datasets:
Experiments with three different types of datasets are presented. The first type is generated using the Gaussian model presented in [22]. Different internal validation criteria have been compared using datasets generated from the Gaussian model [22], [3]. In these datasets, N objects are randomly generated from truncated multivariate normal distributions: each of the l variables of an object is a normally distributed random variable bounded below and above. The model generates N_c non-overlapping clusters from N_c truncated normal distributions, each with a different center and standard deviation. The boundaries of the clusters in each of the l dimensions do not overlap, except in the first dimension. The center and the boundary lengths of each cluster are randomly defined. For each dimension, the standard deviation of the respective normal distribution is 1/3 of the boundary length. In the experiments for testing the NK hybrid GA, datasets generated from the Gaussian model with noise are used; the noisy objects are randomly generated according to a uniform distribution, and the percentage of noisy objects is 1%. In the experiments for testing NKCV2, results for datasets generated from the Gaussian model with and without noise are presented.
We present results with different numbers of clusters (N_c = {2, 5, 8, 11}) and numbers of dimensions (l = {2, 5}). The number of objects in each dataset is N = 800, and the size of each cluster differs according to one of three levels of balance. In the first level of balance, the clusters have approximately the same size. In the second and third levels, the size of the first cluster is, respectively, 0.1N and 0.6N, and the remaining objects are equally distributed among the other clusters. For each combination of l, N_c, and level of balance, 3 random datasets are generated. In this way, the number of datasets generated from the Gaussian model is 72 (3 datasets times the number of combinations of 4 values of N_c, 2 numbers of dimensions, and 3 levels of balance).
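A rough sketch of this data-generation procedure follows; the details that the paper delegates to [22] (slab placement, rejection sampling for truncation) are filled in by guesses, so this is illustrative only:

```python
import random

def gaussian_model(n=800, n_c=5, l=2, balance=1, noise_frac=0.01, seed=0):
    """Rough sketch of the Gaussian data model. N_c clusters are drawn
    from truncated normals with sigma = boundary_length / 3, with
    boundaries non-overlapping in every dimension except the first,
    plus uniformly distributed noise objects (label -1)."""
    rng = random.Random(seed)
    if balance == 1:                      # three levels of balance
        sizes = [n // n_c] * n_c
    else:
        first = int((0.1 if balance == 2 else 0.6) * n)
        sizes = [first] + [(n - first) // (n_c - 1)] * (n_c - 1)

    def truncated_gauss(center, sigma, lo, hi):
        # naive rejection sampling for a truncated normal variable
        while True:
            v = rng.gauss(center, sigma)
            if lo <= v <= hi:
                return v

    X, y = [], []
    for c, size in enumerate(sizes):
        # unit-length slabs, stacked so that only dimension 0 overlaps
        lo = [0.0] + [float(c)] * (l - 1)
        hi = [v + 1.0 for v in lo]
        for _ in range(size):
            X.append([truncated_gauss((a + b) / 2, (b - a) / 3, a, b)
                      for a, b in zip(lo, hi)])
            y.append(c)
    for _ in range(int(noise_frac * n)):  # uniform noise objects
        X.append([rng.uniform(0.0, float(n_c)) for _ in range(l)])
        y.append(-1)
    return X, y
```

With the defaults, this yields 800 clustered objects plus 1% noise, matching the N = 800 setup described above.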
Results for experiments with benchmark problems are also presented (Table I). Seven shape datasets, available in [23], are employed. Results for experiments with four datasets from the UCI machine learning repository [24] are also presented. Some observations about the number of clusters in those datasets are necessary. First, the number of classes may not be adequate for evaluating the number of clusters in class-labelled datasets [25], e.g., the four datasets from the UCI machine learning repository employed here. For example, it is well known that the Iris dataset, with 3 classes, contains only two clusters with obvious separation. Second, the values of N_c presented in the tables for the shape datasets are those suggested in the respective references. It is important to observe that N_c is not intrinsic to all of the benchmark problems. For example, 5 clusters can be distinguished in the dataset Aggregation [26] if we consider that a cluster can be formed by 2 dense regions connected by close objects; otherwise, 7 clusters can be identified.
2) Experiments with the NK Clustering Internal Criterion: In this set of experiments, evaluation functions based on different internal validation criteria are compared using external validation. The method for comparing the evaluation functions is based on that presented in [27]. Given a dataset with known structure, the steps are: i) run clustering algorithms with different parameters, thus obtaining different partitions (candidate solutions); ii) evaluate the quality of the candidate solutions using the internal validation criteria; iii) identify the best candidate solution according to each internal validation criterion; iv) evaluate, using external validation, the best candidate solutions. Two external validation measures are employed for evaluating the best candidate solutions. The first one is the predicted number of clusters, i.e., the number of clusters in the best candidate solution selected according to each internal validation criterion. Several datasets are generated from the Gaussian model for each combination of N_c and l; for these datasets, the percentage of hits for the predicted number of clusters is presented.
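The four-step protocol can be sketched as a small driver; partitions, criteria, and external are illustrative names, and lower criterion values are assumed to be better (the convention would be flipped for criteria such as the silhouette):

```python
def compare_internal_criteria(partitions, criteria, external):
    """Sketch of the four-step comparison protocol (illustrative names).

    partitions: candidate solutions already produced by the clustering
    algorithms (step i); criteria: dict name -> internal criterion,
    lower assumed better; external: external validation function,
    e.g. ARI against the known structure (step iv)."""
    report = {}
    for name, crit in criteria.items():
        scores = [crit(p) for p in partitions]          # step ii
        best = partitions[scores.index(min(scores))]    # step iii
        report[name] = external(best)                   # step iv
    return report
```

Each internal criterion thus receives an external score for the single partition it would have selected, which is exactly how the criteria are ranked in the tables below.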
The second external validation measure is the adjusted Rand index (ARI) of the best partition (candidate solution). The ARI is a measure of similarity between two partitions, according to the cluster agreements and disagreements for pairs of objects [28]. Given two partitions A and B, the ARI is computed based on the numbers of pairs of objects that: i) are in the same cluster in A and in the same cluster in B; ii) are in different clusters in A and in different clusters in B; iii) are in the same cluster in A and in different clusters in B; iv) are in different clusters in A and in the same cluster in B. When computing the ARI for a candidate solution x, the partition represented by x is compared to the optimal partition. The internal validation criteria compared in the experiments are: the proposed NK clustering validation criterion (NKCV2) with K = 2, K = 3, and K = 4 (Section III); the silhouette width criterion; DBCV (Section II-A); and the previous version of the NK clustering validation criterion (NKCV) proposed in [10].
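For reference, the ARI can be computed from the pair counts described above via the standard Hubert-Arabie formula; this is a textbook implementation, not the code used in the paper:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(a, b):
    """ARI between two label vectors, via the contingency table."""
    n = len(a)
    # contingency counts between partitions A and B
    nij = Counter(zip(a, b))
    ai = Counter(a)
    bj = Counter(b)
    sum_ij = sum(comb(v, 2) for v in nij.values())
    sum_a = sum(comb(v, 2) for v in ai.values())
    sum_b = sum(comb(v, 2) for v in bj.values())
    total = comb(n, 2)
    expected = sum_a * sum_b / total
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:          # degenerate (trivial) partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions score 1.0 regardless of how the labels are named, and independent partitions score near 0, which is why the ARI is preferred over the raw Rand index here.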
3) Experiments with the NK Hybrid Genetic Algorithm: First, the NK hybrid GA (Section IV) is compared to another genetic algorithm: the CGA, described in Section II-A. Then, the NK hybrid GA is compared to the k-means, DBSCAN, and DP algorithms. k-means and DBSCAN are two of the most used state-of-the-art algorithms for clustering; the DP clustering algorithm is a recent approach that presents very good performance in many problems.
Here, CGA uses tournament selection with a pool of size 3 and elitism; the same is used in the NK hybrid GA. Also, the population size (size_pop) is the same: 100 individuals. The crossover rate for the NK hybrid GA is p_c = 0.6, and K = 3 for the NKCV2 criterion used to evaluate the solutions. For CGA, p_c = 0.5 [9]. Both algorithms are executed for N/2 seconds, i.e., the stopping criterion in Algorithm 1 is the execution time. The number of runs is 25, each one with a different initial random population. All experiments were executed on a server with 2 Intel Xeon E5-2620 v2 processors (15 MB cache, 2.10 GHz) and 32 GB of RAM.
The solutions for k-means, DBSCAN, and the DP clustering algorithm are generated as described in the last section. When testing the internal evaluation criteria, each algorithm was not analysed independently, i.e., all partitions generated by the three algorithms were used to evaluate the internal criteria regardless of which algorithm generated each partition. For the evaluation of the clustering algorithms, this information is relevant, and the tables show the results for each algorithm. In a real-world clustering application, the desired labels are unknown, i.e., we cannot apply external validation to select the best partition. Thus, here, we use an internal validation criterion, DBCV, to select the best partition for each algorithm. For the NK hybrid GA, NKCV2 is used to evaluate the solutions during the execution of the algorithm, and DBCV is used to select the best solution among the best solutions produced in all runs. DBCV is used because it presented better performance than the silhouette width criterion in the experiments with the different clustering internal criteria (Section V-B1); also, it would not be fair to use NKCV2, because the NK hybrid GA optimizes this criterion. It is important to observe that selecting the partitions according to an internal criterion does not necessarily result in the best external validation (ARI is used for the external validation). However, this is more realistic than selecting the partitions according to the best ARI, because this information is not available in real-world clustering.

B. Experimental Results
1) NK Clustering Internal Criterion: Tables II-III present the results of the comparison of the clustering internal validation criteria. NKCV2 with K = 3 and with K = 4 presents the best results for the datasets generated from the Gaussian model without noise (Table II). For the datasets generated from the Gaussian model with noise (Table II), three criteria present the best results: NKCV2 with K = 3, DBCV, and the silhouette width criterion. DBCV presents the best results for the shape datasets (Table III), followed by NKCV2 with K = 3; while DBCV presents better ARI in 5 datasets, NKCV2 with K = 3 presents better results in 3 datasets. The results for the dataset Path-based 2 (Spiral) are particularly interesting. In this dataset, the objects are disposed in spirals with a coincident center. The silhouette width criterion is unable to identify good partitions, while DBCV and NKCV2 identify the optimal partitions. DBCV and NKCV2 are able to identify the best partitions in this dataset because they efficiently explore the density of the objects in the evaluation of the partitions. NKCV2 does not present good results for 2 shape datasets (Flame and A. K. Jain's Toy Problem); we discuss this when the results for the NK hybrid GA are presented in Section V-B2. For the UCI machine learning datasets (Table III), DBCV presents the best ARI for 1 dataset, while NKCV2 with K = 3 and the silhouette width criterion present the best ARI for 2 datasets each.
When analysing the total number of datasets in each experiment, we observe that NKCV2 with K = 3 yields good performance: in each experiment, it was the best or second-best criterion in the number of datasets for which it resulted in the best ARI. For all datasets, NKCV2 presents better performance than the previous version of the NK clustering validation criterion (NKCV) proposed in [10] (the results of NKCV for the UCI machine learning and shape datasets are not presented here). Unlike the previous version, the NKCV2 criterion: i) can be used in problems with noise; ii) uses the density of the objects when finding the local groups.
2) NK Hybrid Genetic Algorithm: Tables IV-V show the results of the experiments comparing CGA and the NK hybrid GA. The NK hybrid GA presents better performance in all experiments with datasets generated from the Gaussian model with noise. Its performance is better in 5 out of 7 shape datasets and in 3 out of 4 UCI machine learning datasets. For the shape datasets, the NK hybrid GA presents worse results for the instances Flame and A. K. Jain's Toy Problem; NKCV2 also presented bad performance for these instances in the experiments comparing the internal evaluation criteria (Section V-B1).
CGA and the NK hybrid GA use different evaluation criteria. However, the better performance of the NK hybrid GA is not only due to the use of a different evaluation criterion. When comparing the results obtained by CGA (second and third columns of Tables IV-V) with the results obtained by k-means, DBSCAN, and DP, selected according to the best silhouette width criterion (Tables II-III), it is clear that the performance of CGA is often worse than the performance of the best of the three algorithms. The NK hybrid GA obtains better or similar results (a comparison of the NK hybrid GA with k-means, DBSCAN, and DP is made in the second part of this section), which is due to the efficient use of transformation operators and a local search mechanism that explicitly exploit the information about the problem structure available in the evaluation function.
Tables VI-VII present the results of the comparison of k-means, DBSCAN, DP, and the NK hybrid GA. The tables present the ARI of the best partition found by each algorithm. Recall that each algorithm was executed several times with different parameters (in the case of the NK hybrid GA, with different random seeds). The results shown in the tables are for the best partition found according to DBCV for each algorithm. It is important to observe that the best partition according to one validation criterion is not necessarily the partition with the best ARI among the partitions generated by the algorithm.
The NK hybrid GA presents the best average ARI for the datasets generated according to the Gaussian model with noise. When the evaluation (DBCV) is analysed, the best results are those of k-means and the NK hybrid GA. However, the NK hybrid GA is capable of detecting noise, while k-means, in the standard form employed here, is not. As a consequence, the average ARI is better for the NK hybrid GA in Table VI.
For the shape datasets, DP presents the best performance, with 4 best results, followed by the NK hybrid GA, with 3 best results (Table VII). When DP is better, the NK hybrid GA presents competitive results, except for the instance Flame. In this problem, the NK evaluation function did not yield good results (Table III). This problem highlights a disadvantage of NKCV2: when two or more clusters overlap, vertices of the interaction graph corresponding to objects located in different clusters are connected by short-distance edges. As a result, the corresponding clusters are merged. In the case of the Flame dataset, almost all objects are close to other objects, i.e., they are connected by dense regions. As a consequence, the NK hybrid GA finds partitions where practically all objects are located in only one cluster. This observation also helps to explain the merging of some clusters in some instances generated by the Gaussian model with noise. For the UCI machine learning datasets (Table VII), k-means presents the best performance in 3 out of 4 instances; DBSCAN presents the best performance for the Ionosphere dataset.
DBCV was used to select the best solutions generated by every algorithm, including the NK hybrid GA. The NK hybrid GA does not employ DBCV during the optimization. However, even using a different criterion for selecting the best partitions, the NK hybrid GA produced good performance because it efficiently employs the transformation operators and the local search procedure. If NKCV2 is also employed for selecting the best partitions for all algorithms, the results for ARI are similar, despite the fact that the NK hybrid GA presents the best evaluations (NKCV2) for almost all instances when compared with the other algorithms (Tables S4-S9 in the supplementary materials). As observed in Section IV, the computational complexity of some of the methods proposed here depends on the maximum outdegree, K_out, of the interaction graph. In the experiments, we observed that K_out is small when compared to N (Table S10 in the supplementary materials). It is also possible to observe that there is a correlation between K_out and l, but not between K_out and N or N_c.
In order to test the impact of the size and dimensionality of the datasets on the performance of the algorithms, additional experiments were executed. First, the clustering algorithms were applied to two datasets from the UCI machine learning repository [24]: Pen-Based Recognition of Handwritten Digits (Pendigits) and Libras Movement (Librasm). While the first dataset has many objects (N = 10,992), the second has many attributes (l = 90). The algorithms were also applied to instance D31, a large shape dataset with 3,100 objects [23]. Second, the clustering algorithms were applied to instances generated from the Gaussian model with a fixed number of attributes (l = 10) and different numbers of objects (N): 2,000; 4,000; 6,000; 8,000; 10,000. Finally, the algorithms were applied to instances generated from the Gaussian model with a fixed number of objects (N = 1,000) and different numbers of attributes (l): 100; 300; 500; 700; 900. For the instances generated from the Gaussian model, the number of clusters is N_c = 10, and all clusters have approximately the same size (first level of balance).
In the experiments with large datasets, the NK hybrid GA was executed for 3N/2 seconds, except for the dataset Pendigits, for which the algorithm was executed for 4N seconds. The best results for the datasets Pendigits, Librasm, and D31 were obtained by k-means, followed by the NK hybrid GA (Table VIII). The NK hybrid GA is generally worse than k-means when there is superposition of spherical clusters or when spherical clusters are very close to each other. For the dataset D31, k-means found 33 clusters (the optimal solution is composed of 31 clusters).
However, the NK hybrid GA found only 7 clusters, because some of the spherical clusters were merged (Figure S13 in the supplementary materials).
For the large Gaussian model datasets, k-means, DBSCAN, and the NK hybrid GA obtained the best results (Table VIII). The ARI obtained by the algorithms for the Gaussian model datasets with a large number of objects was even better than the ARI for the datasets with N = 800 (Table VI). Increasing the size of the search space (by increasing l) results in clusters separated by longer distances, and increasing the number of objects in the dataset results in clusters with higher density. These two properties result in well-defined clusters, making the partitioning of the datasets easier. Similar results were obtained for the G2 datasets [23] (Table S11 in the supplementary materials).

VI. CONCLUSIONS
The NK hybrid GA is proposed for hard partitional clustering (the source code is freely available on GitHub: https://github.com/rtinos/NKGAclust). The NK hybrid GA uses NKCV2 to evaluate the partitions. Unlike traditional clustering validation measures, NKCV2 is computed using information about the disposition of small groups of objects. Specifically, it is composed of N subfunctions, each one computed using information from a group of K + 1 objects. Objects can belong to different groups, which results in a complex network, called the interaction graph (G_ep). Experimental results show that density-based regions can be identified by using NKCV2 with fixed small K. In the NK hybrid GA, after computing G_ep and the densities of the objects at the beginning of the run, the time complexity for evaluating each individual using NKCV2 becomes O(NK). Experiments showed that a fixed small K is efficient for different numbers of objects (N), which reduces the complexity to O(N).
The information about the relationship between variables allows us to design efficient transformation operators and a local search strategy. In other words, we propose the use of gray box optimization in hard partitional clustering problems. This is possible because NKCV2 allows one to obtain information about the relationship between variables by inspecting G_ep. Three mutation operators and a local search strategy are proposed, all using the information contained in G_ep. The computational complexity of the mutations and of the local search strategy depends on the indegree (K) and the maximum outdegree (K_out) of G_ep. Experimental results show that K_out is small for the clustering instances investigated here, and that small values of K result in good performance. NKCV2 also allows the use of partition crossover for clustering. Partition crossover decomposes the evaluation function into q independent components, which allows one to deterministically find the best among 2^q possible offspring with complexity O(NK), or O(N) if K is a constant. In the experiments, the NK hybrid GA produced very good performance when compared to another GA approach and to state-of-the-art clustering algorithms. The experiments also show the efficiency of the NKCV2 criterion when compared to other criteria.
In the experiments, the initial population of the NK hybrid GA was randomly generated. However, solutions from different algorithms, like DBSCAN, k-means, and DP, can be used as the initial population. Also, the outdegree of G_ep was not restricted. As K_out has an impact on the computational complexity, the investigation of ways of restricting the outdegree of G_ep is an important future work. NKCV2 is similar to the evaluation function used in the NK landscapes problem. Dynamic programming can be efficiently employed in some types of NK landscapes; investigating the adaptation to clustering of dynamic programming, and of other state-of-the-art optimization algorithms used in NK landscapes, is a relevant future work. We would like to thank Dr. Pablo A. Jaskowiak for providing the source code of DBCV.

Fig. 1. Building the interaction graph. a) Dataset with 7 objects (N = 7). b) Each object of the dataset is associated with a vertex with a self-loop. For each vertex, an incident edge from the vertex representing the nearest object with higher density is created (red edges). By definition, the incident edge for the vertex with the highest density is from the vertex representing the nearest object. c) The next step is to add edges between near objects until each vertex has indegree equal to K + 1 (in this example, K = 2). The interaction graph has N = 7 vertices and N(K + 1) edges. d) Each subfunction f_i is influenced by a group of K + 1 decision variables; the group of labels influencing subfunction f_i is defined by the K + 1 edges incident to v_i. The threshold c_ρ is computed based on the mean (m_ρ) and standard deviation (σ_ρ) of the densities ρ_i, for i = 1, ..., N. The thresholds c_1, c_2, and c_3 are computed based on the mean (m_y) and standard deviation (σ_y) of the distances d_i,j inside all groups. The equations for computing the thresholds are based on the 3-sigma rule (or 68-95-99.7 rule). The equations are: c_1 = m_y (13)
Three algorithms are employed to generate the candidate solutions for each dataset: k-means, DBSCAN, and the DP clustering algorithm (see Section II). Candidate solutions are generated by k-means by running the algorithm with the following values of k: 2 ≤ k ≤ √N [3]; the best result of 20 runs for each value of k is recorded. The candidate solutions found by DBSCAN are obtained by running the algorithm with 9 different combinations of the parameters ε and minPts. The parameter minPts assumes one of the following values: {τ_1, τ_2, τ_3} = {3, 4, 5} if l = 2, or {τ_1, τ_2, τ_3} = {3, l + 1, 2l} otherwise. The values of ε are determined by the k-nearest-neighbour graph with k = {τ_1, 5τ_2, 10τ_3}. In the original DP algorithm, the number of prototypes (N_c) is manually selected; here (see Section II), candidate solutions are obtained by running DP with the following values of N_c: 2 ≤ N_c ≤ √N. Thus, the number of candidate solutions evaluated is 2√N + 7 (√N − 1 for k-means, 9 for DBSCAN, and √N − 1 for the DP clustering algorithm).
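The parameter grids described above can be reproduced with a small helper; the ε values themselves, which depend on the k-nearest-neighbour graph of each dataset, are elided here:

```python
import math

def candidate_parameter_grid(n, l):
    """Build the candidate-generation grids described in the text.

    Returns the k values shared by k-means and DP, the minPts values
    (tau_1..tau_3) for DBSCAN, the k values used to derive epsilon from
    the k-nearest-neighbour graph, and the total candidate count."""
    ks = list(range(2, int(math.sqrt(n)) + 1))      # 2 <= k <= sqrt(N)
    if l == 2:
        taus = (3, 4, 5)
    else:
        taus = (3, l + 1, 2 * l)
    knn_ks = (taus[0], 5 * taus[1], 10 * taus[2])   # for choosing epsilon
    # sqrt(N)-1 for k-means, 9 for DBSCAN, sqrt(N)-1 for DP
    n_candidates = 2 * len(ks) + 9
    return ks, taus, knn_ks, n_candidates
```

For N = 100 this gives 9 values of k per algorithm and 27 candidates in total, matching the 2√N + 7 count stated above.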

TABLE II
EVALUATION OF CLUSTERING INTERNAL VALIDATION CRITERIA FOR DATASETS GENERATED FROM THE GAUSSIAN MODEL. THE CRITERIA ARE: NKCV2 WITH K = 2, K = 3, AND K = 4; SILHOUETTE WIDTH CRITERION (SWC); DENSITY-BASED CLUSTERING VALIDATION (DBCV); AND NKCV [10] WITH K = 3. THE MEAN ARI FOR THE BEST HIT AND THE PERCENTAGE OF HITS FOR THE CORRECT NUMBER OF CLUSTERS (INSIDE PARENTHESES) ARE PRESENTED. THE BEST RESULTS FOR THE MEAN ARI ARE IN BOLD.

TABLE VIII
ARI FOR LARGE DATASETS.