Solving the Problem of Target k-Coverage in WSNs Using Fuzzy Clustering Algorithm

The purpose of the present research was to introduce an algorithm that solves the coverage problem in wireless multimedia networks while optimizing energy consumption and network lifetime. To this end, the problem of target k-coverage in WSNs was solved by dividing the environment into areas proportional to the target distribution and by random selection, using a fuzzy clustering algorithm. The results of the proposed algorithm were compared with previous methods such as the genetic and simulated annealing algorithms. The simulation results and comparison with other algorithms show a 27% superiority of the proposed algorithm. It is hoped that this method can be applied to networks of larger dimensions in the future.

Aznoli and Navimipour [7] investigated the coverage problem in this type of network; because camera views are typically directional, the methods proposed for non-directional coverage differ from the existing methods for directional coverage. Much research has also been done on barriers in the environment and their effects.
Cardei et al. [8] stated that barriers affect the data the cameras receive from the environment. As a result, distributed methods for camera sensor networks using a large number of inexpensive video sensors have been proposed. The proposed image-acquisition methods apply when there is a direct line of sight between the camera and the object, with no intervening barriers.
Si et al. [9] suggested that most coverage problems focus on 2D and 3D environments, whereas in the real world complex 3D surfaces are present. Network segmentation algorithms and optimization ideas can be used to improve coverage with respect to the coordinates and deviation angles of the nodes. The proposed method, which is in fact a self-rotation algorithm, increases coverage of the environment and reduces overlap: the cameras automatically update their angle of view according to the state of the neighboring nodes.
Ping Shih et al. [11] stated that mobile camera sensors have been used to increase coverage in the monitored area. In this project, the camera sensors are randomly placed in a connected lattice of locations. A weighted, directed graph is then constructed that indicates which areas are completely covered and which areas are connected. After the graph is formed, a shortest-path algorithm reveals a path connecting the fully covered areas while avoiding camera barriers. Using this algorithm, a map of the desired area covered with the least number of cameras is determined in the form of a minimum coverage tree. Rahimi et al. [12] proposed a new SL-based distributed algorithm that activates camera sensors to cover the desired geographical space. In this algorithm, the SLs are selected hierarchically, so that each small location lies within the depth of field of at least two nodes, which prevents possible overlap between the camera scenes.
Bibhuprada et al. [13] suggested a new algorithm for calculating multimedia coverage. The algorithm begins by exchanging messages between neighbors to gather neighborhood data: all sensors broadcast a HELLO-MSG message to announce their sensor IDs and spatial coordinates. This early messaging ensures that each sensor is aware of its neighbors and their locations. The algorithm has two main phases, field-of-view detection and self-direction. It uses only local data, so messaging overhead occurs only between neighboring nodes, with complexity O(n). The algorithm is also fully distributed: it can operate after initial placement and updates the directions of the multimedia sensors, thereby increasing coverage.
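The neighbor-discovery phase described above can be sketched as follows. This is a minimal illustration, not the exact protocol of [13]; the message fields (`id`, `pos`) and the dictionary-based message format are assumptions.

```python
import math

def hello_msg(sensor_id, x, y):
    # HELLO-MSG carries the sender's ID and its spatial coordinates
    return {"id": sensor_id, "pos": (x, y)}

def discover_neighbors(sensors, radio_range):
    """Each sensor 'broadcasts' one HELLO-MSG (O(n) messages in total);
    every sensor records the senders that lie within its radio range."""
    neighbors = {s["id"]: [] for s in sensors}
    for msg in (hello_msg(s["id"], *s["pos"]) for s in sensors):
        for s in sensors:
            if s["id"] == msg["id"]:
                continue  # a sensor ignores its own broadcast
            if math.dist(s["pos"], msg["pos"]) <= radio_range:
                neighbors[s["id"]].append(msg["id"])
    return neighbors

sensors = [{"id": 0, "pos": (0.0, 0.0)},
           {"id": 1, "pos": (3.0, 4.0)},
           {"id": 2, "pos": (20.0, 20.0)}]
nbrs = discover_neighbors(sensors, radio_range=6.0)
```

After this phase, each sensor knows its neighbors' IDs and can run the field-of-view detection and self-direction phases locally.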
Tezcan and Wang [14] used a genetic algorithm to solve the coverage problem in multimedia sensor networks. In this method, 50 nodes are considered; the initial population has the same size, and the algorithm is repeated 50 times in the environment. Since each environment contains camera nodes, the rotation angle of each node serves as a gene. The algorithm consists of 8 steps. The results show that as the number of nodes increases, the coverage percentage also increases, reaching 100% with 200 nodes.
Luo et al. [15] mentioned that one of the classic algorithms is the link tree with depth-first search (ASM-ST-DFS). However, its time complexity is sensitive to string length, and a pruning strategy can address this problem.
Hu et al. [16] stated that WSNs face challenges such as the scope of work, the types of antennas and targets, the memory technology, the electrical energy supply, and the user interface.
Potdar et al. [17] proposed a new observation model with camera-network reliability that optimizes the coverage rate and reliability of multi-camera networks. In this method, a directional sensor model is first designed using a Boolean 2D model, which can be defined as a parametric array. The perimeter of the sensor is a vector, and by changing the direction of the sensor, the camera can cover a circle. Continuous observations of the camera nodes are used to extract the signal needed to build an environmental model. The network consists of cameras in which each camera node can perform local calculations and extract discrete symbolic observations; these discrete observations are then used to construct a model of the environment.
Guo et al. [18] expressed that open schemas are a tool for analyzing usefully coded genetic algorithms (GAs). By using several different crossover operators, failure points can be predicted for coded GAs (Eshelman and Schaffer [19]).
Rossi et al. tried to solve the problem using metaheuristic methods and a combination of a genetic algorithm and linear programming. Two models are considered for the sensor directions. In the first model, the directions are fixed and predetermined; in the second, the directions are adjustable, with the adjustments made at network startup according to the geographical locations of the targets. However, the sensor directions in both models are assumed to be uniform and non-overlapping, which is not very consistent with the realities of camera networks and limits the algorithm's ability to reach the optimal solution.
Rossi et al. [20] proposed a greedy heuristic algorithm and a genetic metaheuristic algorithm to solve the coverage problem. Greedy algorithms usually find a solution faster than other algorithms, but because of the local nature of the search, the solution may fall into a local optimum; the authors therefore use an evolutionary global search technique (a genetic algorithm) to find the optimal solution. The genetic algorithm uses a 2D matrix representation of coverage sets as chromosomes, then adjusts the orientation of the sensors in subsequent generations through crossover and mutation operations; finally, applying a fitness function leads to the maximum possible coverage set. This continues until the remaining energy of the sensors is wholly depleted or until not all targets can be covered [21].
Xiao et al. investigated the 3D coverage of uneven terrain in directional sensor networks. They presented algorithms to improve area coverage using lattice splitting, simulated annealing, and local-optimum methods. First, they simplified the 3D model of directional sensors; they then proposed algorithms to optimize surface coverage by shifting the positions of the sensors and changing their (top-down) viewing angles over the uneven surface. This improvement is achieved only by covering cavities caused by natural surface irregularities, and the authors do not address area coverage during operation or increasing the network lifetime. Nevertheless, as far as this review goes, it is one of the few recent studies of sensor-network coverage that has been simulated using the evolutionary method of simulated annealing. Barrier coverage is an essential issue in WSNs. In wireless camera sensor networks, the position and angle of a camera sensor determine the specific range in which it can capture images or videos of objects; therefore, the barrier coverage problem in camera sensor networks differs from that in scalar sensor networks.
Min Gil and Han [22], following the full definition of coverage, focused on minimum camera barrier coverage in wireless camera sensor networks where the camera sensors are randomly deployed in a field. First, the target field is divided into several areas, consisting of areas that are fully covered and areas that are not fully observed. The areas and their relations are then modeled as a weighted directed graph. Based on the graph, an algorithm is proposed to solve the minimum camera barrier coverage problem, and the correctness of the solution is proved. In addition, another optimal algorithm is proposed for the problem. The simulation results in that article show acceptable performance.
Funk [23] mentioned that an HSLS-EC-based distributed method has been proposed for wireless multimedia sensor networks. Initially, all camera and scalar sensors are deployed randomly in the area to be tracked. Cameras and scalar sensors broadcast the camera sensor information message (CSIM) and the scalar sensor information message (SSIM), respectively; these messages include the ID and location of the sending sensor. There is also a list, called the activation list, that stores the IDs of active cameras and is initially empty. Additionally, the IDs of all cameras are stored in a list called the waiting list. After receiving the CSIM and SSIM messages, the sensors calculate their Euclidean distances from each other. If the calculated distance between a camera and a scalar sensor is less than the camera's depth of field, it can be concluded that the scalar sensor lies inside the camera's depth of field. The proposed method activates the minimum number of cameras that cover an area, thereby reducing data transfer.
Elhoseny et al. [24] used Newton's gravitational algorithm to optimally cover the area and reduce energy consumption. The article evaluates performance with four parameters, including the minimum number of nodes in the coverage area, the network lifetime (the interval from the start of the first node to the shutdown of the first node), and network power. Repeated simulations show that the proposed algorithm outperforms the two previous methods on these performance parameters.
Serik and Kaddour [25] stated that one way to increase the performance and lifetime of a sensor network is to arrange the nodes so that they cover the entire target area. However, in some areas, such as mountainous environments, nodes may be scattered from aircraft, making coverage of the area a challenge. Researchers therefore consider the coverage problem very complicated.
Elhabyan et al. [26] used BA to select the optimal sensors and the resulting path to reduce energy consumption. The primary purpose was to increase network lifetime by extending the life of the operational sensors; the data collected by these nodes is sent to the sink. BA was chosen for the simplicity and flexibility of its execution. Results of the proposed algorithm were presented for different parameter values and show that the algorithm is scalable: it responds well not only to a particular state but to different network states. Finally, the results were compared with the MSS, MSGSA, MSACO, EEDG, GSA, and EDTC methods. The simulation results and comparison with other algorithms show a 27% superiority of the proposed algorithm.
Sangaiah et al. [27] believed that their proposed activity-scheduling algorithm provides an approach that enables any MSN in a WMSN and exploits the network's energy by observing the essential points. By prioritizing each coverage set, they introduce a new way to select the most appropriate MSN coverage set based on the correlation of the visual data from camera observations. Their simulations and comparisons with existing approaches show that the proposed method exceeds current results in terms of network lifetime, recording rate, and the percentage of active and inactive nodes.
AlNuaimi et al. [28] stated that a new and efficient design is presented in the form of a genetic algorithm, which overcomes several existing metaheuristic weaknesses, along with an accurate method for calculating performance. The proposed genetic algorithm includes an innovative heuristic population method, exact calculation of the integral area for the fitness function, and a combination of the Laplace Crossover and Arithmetic Crossover operators. The results show that this algorithm provides the best performance in terms of solution quality and stability in most tested cases.
(DOI: 10.5281/zenodo.5196330. Received: March 27, 2021. Accepted: August 04, 2021.)
Hanh et al. [29] stated that, to achieve the desired target, the selection of the SL to activate the cameras is done hierarchically, as shown in Fig. 1. Selecting the SL activates the cameras that have the least overlap in depth of field, thereby reducing redundancy in data transfer.
Fig. 1. The hierarchical arrangement of SL [15]
Finally, the subject of coverage in the scientific research literature can be seen in the diagram of Fig. 2, which was compiled according to the amount of research recorded in the IEEE scientific database. Based on this review of previous studies, two challenges emerged as a common theme:
• Previous work focuses on only one aspect of the factors in the network coverage problem, although the problem has several different aspects.

Proposed Method
In the proposed method, by default, a node must have the following characteristics to be an appropriate option for covering a target:
• distance from the targets, the other nodes, and the data collection location;
• battery level;
• energy consumption process.
In the proposed method, the variable energy consumption process is influenced by five factors:
• the number of stored bits;
• the number of bits in the receiving queue;
• the number of bits being processed;
• the number of bits in the sending queue;
• the cost of moving from point x to point y.
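The five-factor energy estimate above can be written as a weighted sum. This is a minimal sketch: the per-bit cost coefficients (`e_store`, `e_rx`, `e_cpu`, `e_tx`) are illustrative assumptions, not values given in the article.

```python
def energy_consumption(stored_bits, rx_queue_bits, processing_bits,
                       tx_queue_bits, move_cost,
                       e_store=1e-9, e_rx=50e-9, e_cpu=5e-9, e_tx=50e-9):
    """Estimate a node's variable energy consumption (joules) from the
    five factors of the proposed method; the coefficients are assumed."""
    return (stored_bits * e_store        # factor 1: stored bits
            + rx_queue_bits * e_rx       # factor 2: receiving queue
            + processing_bits * e_cpu    # factor 3: bits being processed
            + tx_queue_bits * e_tx       # factor 4: sending queue
            + move_cost)                 # factor 5: movement from x to y

# Example: a node with 1 kbit in each category and no movement
e = energy_consumption(1000, 1000, 1000, 1000, move_cost=0.0)
```

The same scalar feeds into the fitness evaluation of candidate sensors described below.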
To solve the coverage problem, the targets in the monitored environment are divided into several areas based on density. To select an appropriate sensor within an area, the proposed method requires a fitness function based on the three variables of distance, battery level, and energy consumption process. This function must be able to determine the suitability of each category of sensors for covering all the targets in each section.
After selecting the most appropriate category of sensors in an area, the best point for the cluster head sensor is chosen based on the distance from the targets, and the sensors are ranked by their fitness and by the energy cost of moving to that point. The highest-ranked sensor is selected as the cluster head; all data collected from the targets is transferred to this sensor and, through it, to the data collection center. The proposed method, in its modeled form, includes the following steps:

Dividing the environment into areas proportional to the number of targets
In the proposed method for dividing the areas, the total number of targets is calculated first; then the ratio of targets to the area, r = T/A, is determined using Equation 3-3, where T and A are the total number of targets and the surface area, respectively. The environment is divided into rA sections. If the density of targets in one area exceeds the standard value for the sensors, that area is subdivided into further areas using Equation 1. Here, each ci is a subset of sensors that can cover all targets according to three factors: distance from the target, energy level, and energy consumption process. In other words, each node in the set ci must meet the following two conditions:
• Its distance from at least one target is less than the range of the node.
• The node's battery level is higher than the energy estimated for collecting and sending data to the cluster center or data collection center.
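The region division and the two node-eligibility conditions can be sketched as below. The form r = T/A for Equation 3-3 is an assumption inferred from the surrounding text ("the ratio of targets to the area" and "divided into rA sections"); the dictionary node representation is also illustrative.

```python
import math

def num_regions(num_targets, area):
    # Assumed form of Equation 3-3: r = T / A; the field is split into r*A parts
    r = num_targets / area
    return max(1, round(r * area))  # r * A equals T here; kept explicit for clarity

def eligible(node, targets, sensing_range, energy_needed):
    """A node may join a coverage subset c_i only if it satisfies both
    conditions: (1) at least one target lies within its range, and
    (2) its battery exceeds the estimated energy for collect-and-send."""
    in_range = any(math.dist(node["pos"], t) < sensing_range for t in targets)
    return in_range and node["battery"] > energy_needed

node = {"pos": (1.0, 1.0), "battery": 5.0}
ok = eligible(node, [(2.0, 2.0)], sensing_range=2.0, energy_needed=1.0)
```

Nodes that pass both checks form the candidate subsets evaluated in the next step.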
The amount of energy a sensor node consumes to perform the target k-coverage task is estimated using Equation 3-5.

Selection of the best subset
The best subset among all identified candidates is the ci with the highest value of Equation 3-6.
In Equation 3-6, Ej, Eaj, and dist are, respectively, the battery level of the j-th node, the power consumed to perform the target k-coverage operations at the j-th node, and the sum of the distances to the targets covered by the j-th sensor node.
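Equation 3-6 itself is not reproduced in the text. As a hedged sketch, one plausible form consistent with the description, rewarding the energy remaining after the coverage task and penalizing the total distance to the covered targets, is the following; the exact combination of Ej, Eaj, and dist is an assumption.

```python
def subset_fitness(nodes):
    """Score a candidate subset c_i by summing per-node scores.
    Assumed per-node form of Equation 3-6: (E_j - Ea_j) / dist_j, where
    E_j is the battery level, Ea_j the k-coverage task energy, and
    dist_j the summed distance to the targets the node covers."""
    return sum((n["E"] - n["Ea"]) / n["dist"] for n in nodes)

c1 = [{"E": 10.0, "Ea": 2.0, "dist": 4.0}]  # per-node score: 2.0
c2 = [{"E": 10.0, "Ea": 6.0, "dist": 4.0}]  # per-node score: 1.0
best = max([c1, c2], key=subset_fitness)    # the subset with the highest value wins
```

Whatever the true formula, the selection rule is the same: evaluate every candidate subset and keep the one with the highest Equation 3-6 value.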

Identifying the cluster head for each area
In the proposed method, each area contains a cluster head, selected from the top subset, which is responsible for sending data to and receiving data from the other cluster heads and the data center. In addition to the energy needed for its own tasks, the cluster head must also have enough energy to send and receive data. The best point (the ideal point) of each area is calculated according to the targets' locations and their relation to the nodes. The node with the highest value of Equation 3-7 is selected as the cluster head.
In Equation 3-7, Emj and distg are, respectively, the energy required to move the j-th sensor to the ideal point and the distance from the ideal point to all targets that can be covered by the j-th sensor. Fig. 3 shows an overview of the proposed method. After determining the appropriate cluster head for each subset, data is collected from the targets and sent to the data collection center, and the network performance is then calculated with Equation 3-2. If the network does not improve compared to before, the locations of the sensors are changed. In this section, the proposed method was reviewed; given the stated goal of increasing the value of EN, the obstacles to achieving it were examined. Since the individual targets obviously cannot all be achieved completely, a method is needed that obtains the best overall results by balancing among them. The proposed fuzzy clustering method tries to provide a way to increase network performance in the optimal identification of targets.
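Equation 3-7 is likewise not reproduced. Assuming it favors nodes that need little energy (Emj) to reach the ideal point and whose coverable targets are close to it (distg), cluster-head selection might be sketched as follows; the reciprocal form of the score is an illustrative assumption.

```python
def select_cluster_head(candidates):
    """candidates: dicts with 'Em' (energy to move the node to the ideal
    point) and 'dist_g' (distance from the ideal point to the targets the
    node can cover). Assumed Equation 3-7 score: 1 / (Em_j + dist_g);
    the node with the highest score becomes the cluster head."""
    return max(candidates, key=lambda c: 1.0 / (c["Em"] + c["dist_g"]))

candidates = [{"id": 1, "Em": 2.0, "dist_g": 3.0},
              {"id": 2, "Em": 1.0, "dist_g": 2.0}]
head = select_cluster_head(candidates)  # node 2: cheaper to move, closer targets
```

The chosen head then relays the area's collected data toward the data collection center, as described above.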

Challenges in the proposed method
Based on the structure and type of the sensor nodes, the set of nodes selected for a sensor network can be homogeneous or heterogeneous. A homogeneous group is a set of sensor nodes that are entirely similar in terms of energy, communication power, and similar parameters. In a heterogeneous group, there are usually more robust nodes (in terms of communication power, radio radius, energy storage, and fault tolerance), called cluster heads, which can categorize and collect the data obtained by weaker nodes, process it, and send it to the base station or sink according to the desired program.
Various structures of homogeneous nodes have been introduced, along with several strategies and solutions for each application to improve their use. Most of these solutions require knowing the exact distance between nodes, which depends on each node's measurement in the same state. In this article, the nodes are assumed to be homogeneous, but a special algorithm is stated that changes the nodes' measurements to prove its effectiveness. Perhaps the most essential factor in developing coverage models is considering energy consumption limitations. Sensor nodes typically use a battery to supply energy which, in most cases, is not rechargeable or replaceable. It is therefore essential to operate so as to reduce energy consumption and increase the sensor nodes' lifetime. There are several ways to achieve this. One popular and efficient method is to put a number of redundant nodes to sleep. A second method is to adjust the transmission range to the distance of the neighboring node, so that only the energy needed to sense and transfer data over that distance is spent.
When sensor nodes are arranged in a hierarchical structure, cluster head nodes can collect data and transfer it to the sink. This removes the burden of routing and data transfer from the intermediate nodes, so they last longer. Efficient data collection and routing can also play an essential role in reducing energy consumption: if several nodes collect the same data, energy consumption naturally increases. Eliminating such redundancy is one of the most essential issues in WSNs.

Network model and problem statement
The coverage problem in WSNs depends on several factors that must be considered when distributing sensor nodes at the desired location. Many of these factors have a software aspect that must be reflected in the capabilities of the sensor nodes. Most researchers have focused on a single model; this article tries to introduce an algorithm that can be used in many different cases. The proposed method is described below. To solve the coverage problem in WSNs, the problem is first formulated; then the theoretical aspects of the proposed method and the reasons for its effectiveness are examined.

Problem formulation
The first step in developing WSNs is to identify the essential targets and parameters for monitoring. Typically, the entire area is searched to identify multiple targets or to detect a defect in a border area. The coverage problem in this article is defined as follows: coverage takes place in an area when every point in the area is detected by at least one category of sensors. An appropriate solution to the target coverage problem should make an optimal selection with respect to the following three factors: A. It increases network lifetime (Nl) as much as possible. This means that the nodes can collect data from all targets and send them to the base station over a more extended period of time.
B. It maximizes network throughput (Nt). This means that the maximum number of targets can be covered at each stage of the network, and the data of those targets can be sent to the base station.
C. It increases the ability to change the network structure (Na) as much as possible. This variable indicates that if the network targets change, the network can change its structure accordingly.
The optimality of a solution to the maximum coverage problem is expressed in Equation 3-1:

EN ∝ Nl + Nt + Na (Equation 3-1)

To convert Equation 3-1 into an equality, coefficients w1, w2, and w3 are attached to the factors:

EN = w1·Nl + w2·Nt + w3·Na (Equation 3-2)

The coefficients w1, w2, and w3 are initialized explicitly for each target coverage problem according to the conditions and the importance of each variable. Here, all three coefficients are set equal to 1; in other words, the three factors are weighted equally. Obviously, the purpose of the proposed method is to increase the value of EN in Equation 3-2.
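The objective EN, a weighted sum of the three factors Nl, Nt, and Na, can be checked with a one-line function; the equal weights below are the ones stated in the text, and the sample factor values are illustrative.

```python
def network_optimality(nl, nt, na, w1=1.0, w2=1.0, w3=1.0):
    # Equation 3-2: EN = w1*Nl + w2*Nt + w3*Na
    # (w1 = w2 = w3 = 1 per the text, so the three factors weigh equally)
    return w1 * nl + w2 * nt + w3 * na

# Illustrative normalized factor values for lifetime, throughput, adaptability
en = network_optimality(nl=0.8, nt=0.7, na=0.5)
```

Maximizing EN is the single objective the proposed fuzzy clustering method optimizes; a re-placement of the sensors is triggered whenever a round fails to increase it.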

Simulation and numerical results
The architecture of existing wireless networks is not designed to meet all users' current needs, including those of telecommunications service providers, customers, and companies. In other words, at design time, wireless network designers faced limitations such as design complexity, conflicting policies in the use of network resources, lack of scalability, dependence on particular nodes, mismatch between market needs and the capabilities of the wireless network, and inability to meet all the needs of the IT science and technology community, which led to deviations from the ideal. In an ideal wireless network, packets enter and exit routers through ports; routing and network topology tables are made up of several sections and layers; the connection between two nodes does not necessarily require a physical link between them; and requests from all sections of the network are covered and responded to.
In this section, the results of testing the hypotheses in a wireless network with a randomized structure are compared in terms of three variables: network lifetime, the response rate to requests sent to the network, and network compatibility with the targets. These three factors ultimately indicate the performance of the proposed method in maximizing the load. For the implementation, the studied network was examined in two structures: the wireless node structure and the wireless network topology. The network is evaluated using three scenarios (network lifetime, response to sent requests, and network compatibility with the targets). To eliminate the effect of randomness, the results are calculated after performing ten simulations, removing the best and worst results, and averaging the rest. The request area of the network environment is 100 by 100. In this article, 0.1 seconds is used as the measurement interval for each step of the studied network variables. The performance of the proposed method is compared with two methods: random state change and the method used in reference [29].

Test environment
To create the same conditions for comparing results, the environment must be constrained so that similar test environments can be built; this section therefore clarifies the test conditions. The OMNeT++ software was used to simulate the stated hypotheses. OMNeT++ is an extensible, module-based simulator written in C++ that uses the libraries and framework of this language. The hardware and conditions defined for the wireless network simulation are shown in Table 1. The main tasks of this simulator include the following:

The structure of a node
A member of the wireless network, hereafter called a node, consists of three layers: data transfer and connections, control, and application. There are two types of nodes in the target network: ordinary nodes and base station nodes. In terms of structure and hardware, these nodes are no different; it is only assumed that the base station node is connected to an energy source, so its energy is never reduced or exhausted.
Data transfer layer: Each node operating in the tested wireless network has three properties in this layer: 1. Node ID: the node ID is the node's name and is a unique number.
2. Gates: each node has two gates, one for data entry and one for data exit. All wireless exchanges of a node are done through them.
3. Energy capacity: this is defined as the total capacity and the remaining energy capacity of each node. This property generally indicates the network lifetime, given the amount of processing performed on the node.
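The three data-transfer-layer properties above can be collected into a small record. This is an illustrative sketch, not the actual OMNeT++ module definition; the field and method names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int                  # property 1: unique node ID (the node's name)
    total_energy: float           # property 3: total energy capacity
    remaining_energy: float       # property 3: remaining capacity
    gates: tuple = ("in", "out")  # property 2: one entry gate, one exit gate

    def consume(self, joules: float) -> bool:
        """Deduct energy for a wireless exchange; returns False once the
        node is depleted, which marks the end of its contribution to the
        network lifetime."""
        self.remaining_energy = max(0.0, self.remaining_energy - joules)
        return self.remaining_energy > 0.0

n = Node(node_id=7, total_energy=10.0, remaining_energy=10.0)
alive = n.consume(4.0)
```

In the simulation, the base station node would simply never have `consume` reduce its energy, matching the assumption stated above.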
Control layer: The control layer of the wireless network has five sections: 1. Node starter: this part initializes the required properties of the nodes. The data loaded at this stage includes: (a) the number of the node's connections with adjacent nodes, (b) the energy capacity of each node, (c) the ID of each node, and the like. This data is sent to the base station at the start of network operation.
2. Message maker: this section creates a message (per the 802.11 standard) with the following features:

Network topology
The final interconnection of the nodes in the desired wireless network is made through the connections of their gates, and this interconnection is random. Requests in the network are likewise created with a random location in each area. To create a request, it is assumed that a change has occurred in one area of the network and that a data receiver (a camera, voice recorder, or any node able to receive data and send it over wireless technology based on the 802.11 standard) should report this change to the base station. The graphical structure of the implemented network, called k-coverage LTA, is shown in Fig. 5.

Scenario 1
In this scenario, the lifetime of the network was examined. The network lifetime is the time it takes for the first network node to reach a zero energy level; it is measured in t/s units. In Tables 2 and 3, the studied algorithms were tested for network lifetime as affected by the number of nodes, and, for a network with 20 nodes, as affected by the number of requests. The simulation results show that increasing the number of nodes increases the network lifetime. They also show that increasing the number of requests reduces the network's average lifetime by about 12%. In fact, the average rate of reduction of the network lifetime in the proposed method is 40% less than random selection and 7% less than reference [29]. Figs. 6 and 7 compare these results visually, where the changes are more evident than in the compared methods.

Scenario 2
In this scenario, the response to the sent requests is the number of requests to the network that are answered correctly, meaning the request is recorded at the location where it was created by the first wireless receiver and sent to the base station. The three studied methods were examined for the response rate to sent requests as affected by two variables: increasing the number of nodes and increasing the number of requests. The results are shown in Tables 4 and 5. In general, the proposed method achieves the best improvements over time, while its worst results occur in the early periods. From Figs. 8 and 9, it can be concluded that the proposed method responds to, on average, 2% more requests than the method in reference [29] and 16% more than random selection.

Scenario 3
In this scenario, the compatibility of the network with the targets is determined by measuring the average Euclidean distance between the locations of the answered requests in the network. This variable is formulated to measure the ability to change the network's structure, i.e., its compatibility with requests. As with the previous variables, the three studied methods were examined for the network's compatibility rate with the targets as affected by two variables: increasing the number of nodes and increasing the number of requests. The results are shown in Tables 6 and 7. Expressing the changes in this variable with respect to its two independent variables, the number of nodes and the number of requests, the proposed method was able to respond to requests spread over a larger spatial area than the methods of reference [29] and random selection: the average distance of answered requests is 46% more than random selection and 2.5% more than reference [29]. These changes can be seen in Figs. 10 and 11.

Verification
In order to analyze and simulate this idea, three scenarios were proposed. In the first scenario, the network lifetime was compared with the article in reference [29] and with random selection; the results are shown in Tables 2 and 3. In the second scenario, the response rate to the submitted requests, i.e., the number of requests to the network that are answered correctly, was reviewed; the proposed method responded to, on average, 2% more requests than the method of reference [29] and 16% more than random selection. In the third scenario, the compatibility of the network with the targets was determined by measuring the average Euclidean distance between the locations of the answered requests in the network, as shown in Tables 6 and 7. Finally, a diagram shows the improvement of the ideas compared to the previous works.
Fig. 12. Comparison of the performance of the proposed method with the two methods of Ki-coverage and random selection

Discussion
In order to increase coverage in WSNs, previous works in this field were reviewed. In earlier works, to increase the network lifetime, some lightly used nodes are put to sleep during certain sending periods to save their energy for subsequent transmissions. In this article, data categorization and assistance from neighboring nodes are used: after data is collected, it is delivered to a neighboring node whose distance is less than the range of the primary node. There are also nodes called sinks that have an inexhaustible energy source, for example a connection to a solar energy source. So far, sensors have been used to collect data in military, environmental, commercial, and other fields; for example, sensors can be used in a hospital or another prominent place. The results of previous work are compared with the idea of this article. The sensors can collect data from the surrounding environment and transfer it to the nearest neighbor and the central node. The achievements are an increased network lifetime and conservation of each sensor's energy after the steps are performed. The greatest advantage of sensor networks is their fast setup; one disadvantage is the short battery life of each sensor.
The main limitation of this article is the amount of data and the lack of access to a real wireless multimedia network environment. In other words, there is no practical access to data from various networks, given the existing rules on administrative structure (set by the owners of such networks) and the security limitations in this area. Another limitation is the high cost of the laboratory hardware needed for testing physical networks.

Conclusion
The purpose of this research was to increase coverage in WSNs. The sensors were placed in the environment sequentially or randomly. In the proposed method, the environment is first divided into different areas; then sensors are placed in each section, and an appropriate cluster head is identified for each section. All sensors collect data from the surrounding environment and deliver it to the cluster head, which sends the data to the collection center. Network efficiency is then measured by three variables: network lifetime, the response rate to requests sent to the network, and network compatibility with the targets. If this process does not improve on the previous one, the locations of the sensors are changed.
The advantages of this method are as follows:
• Considering the weight used in Equation 3-2 for each of the measured variables, the proposed method has better efficiency than the two compared methods.
• It increases the network lifetime.
• It increases the response rate to requests sent to the network.
• It increases the compatibility of more sensors.