
CLC number: TP181

On-line Access: 2026-01-09

Received: 2025-07-28

Revision Accepted: 2025-11-26

Crosschecked: 2026-01-11


 ORCID:

Xiaocheng LIU

https://orcid.org/0009-0001-4105-7789

Meilong LE

https://orcid.org/0000-0002-1748-0819

Yupu LIU

https://orcid.org/0009-0003-1355-6104


Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.12 P.2397-2420

http://doi.org/10.1631/FITEE.2500534


SPID: a deep reinforcement learning-based solution framework for siting low-altitude takeoff and landing facilities#


Author(s):  Xiaocheng LIU, Meilong LE, Yupu LIU, Minghua HU

Affiliation(s):  College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China

Corresponding email(s):   lxc2307084@nuaa.edu.cn, lemeilong@126.com, liuyupu@nuaa.edu.cn, minghuahu@nuaa.edu.cn

Key Words:  Low-altitude planning, Vertiport siting, Deep reinforcement learning, Algorithm exploration


Xiaocheng LIU, Meilong LE, Yupu LIU, Minghua HU. SPID: a deep reinforcement learning-based solution framework for siting low-altitude takeoff and landing facilities#[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(12): 2397-2420.

@article{Liu2025SPID,
title="SPID: a deep reinforcement learning-based solution framework for siting low-altitude takeoff and landing facilities#",
author="Xiaocheng LIU, Meilong LE, Yupu LIU, Minghua HU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="12",
pages="2397-2420",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2500534"
}

%0 Journal Article
%T SPID: a deep reinforcement learning-based solution framework for siting low-altitude takeoff and landing facilities#
%A Xiaocheng LIU
%A Meilong LE
%A Yupu LIU
%A Minghua HU
%J Frontiers of Information Technology & Electronic Engineering
%V 26
%N 12
%P 2397-2420
%@ 2095-9184
%D 2025
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2500534

TY - JOUR
T1 - SPID: a deep reinforcement learning-based solution framework for siting low-altitude takeoff and landing facilities#
A1 - Xiaocheng LIU
A1 - Meilong LE
A1 - Yupu LIU
A1 - Minghua HU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 26
IS - 12
SP - 2397
EP - 2420
SN - 2095-9184
Y1 - 2025
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2500534
ER -


Abstract: 
Siting low-altitude takeoff and landing platforms (vertiports) is a fundamental challenge in developing urban air mobility (UAM). This study formulates the task as a variant of the capacitated facility location problem that incorporates flight-range and service-capacity constraints, and proposes SPID, a deep reinforcement learning (DRL)-based solution framework that models the problem as a Markov decision process. To handle dynamic coverage, SPID uses a multi-head attention mechanism to capture spatiotemporal patterns and integrates dynamic and static information into a unified input state vector. A gated recurrent unit (GRU) then generates the query vector, enhancing sequential decision-making. The action network is trained with a loss function that combines service-distance costs with penalties for unmet demand, enabling end-to-end optimization. Experimental results show that, under flight and capacity constraints, SPID significantly improves solution efficiency and robustness compared with traditional methods. In particular, on the social performance metrics emphasized in this study, SPID outperforms the suboptimal solutions produced by traditional clustering and graph neural network (GNN)-based methods by up to approximately 29%, while the accompanying increase in distance-based cost stays within 10%. Overall, SPID offers an efficient, scalable approach to vertiport siting that supports rapid decision-making in large-scale UAM scenarios.
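To make the underlying optimization problem concrete, the sketch below gives one plausible way to write the capacitated facility location variant described in the abstract. The notation (candidate sites I, demand points J, distances c_ij, capacities Q_i, flight range R, penalty weight lambda) is illustrative and is not taken from the paper.

\begin{align*}
\min_{x,\,y,\,u}\quad & \sum_{i\in I}\sum_{j\in J} c_{ij}\, x_{ij} \;+\; \lambda \sum_{j\in J} u_j \\
\text{s.t.}\quad & \sum_{i\in I} x_{ij} + u_j = d_j \quad \forall j\in J \qquad \text{(demand served or left unmet)} \\
& \sum_{j\in J} x_{ij} \le Q_i\, y_i \quad \forall i\in I \qquad \text{(service-capacity constraint)} \\
& x_{ij} = 0 \ \text{if}\ c_{ij} > R \qquad \text{(flight-range constraint)} \\
& y_i \in \{0,1\},\quad x_{ij} \ge 0,\quad u_j \ge 0,
\end{align*}

where y_i opens a vertiport at candidate site i, x_{ij} is the demand at j assigned to i, and u_j is demand left unserved. The DRL training signal described in the abstract mirrors this objective: a service-distance cost plus a weighted penalty for unmet demand.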

SPID: a deep reinforcement learning-based solution framework for siting low-altitude takeoff and landing facilities

Xiaocheng LIU, Meilong LE, Yupu LIU, Minghua HU
College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Abstract: Siting low-altitude takeoff and landing platforms (vertiports) is a core challenge in developing urban air mobility (UAM). This study casts the problem as a capacitated facility location problem that incorporates flight-distance and service-capacity constraints, and proposes SPID, a deep reinforcement learning (DRL)-based solution framework that models the problem as a Markov decision process. To handle dynamic coverage demand, SPID uses a multi-head attention mechanism to capture spatiotemporal patterns and integrates dynamic and static information into a unified input state vector. A gated recurrent unit (GRU) then generates the query vector to strengthen sequential decision-making. The action network within the DRL network is regulated by a loss function that combines service-distance costs with penalties for unmet demand, enabling end-to-end optimization. Experiments show that, under flight and capacity constraints, SPID markedly improves solution efficiency and robustness over traditional methods. In particular, on the social performance metrics emphasized in this paper, SPID outperforms the suboptimal solutions of traditional clustering and graph neural network (GNN)-based methods by up to about 29%, while the accompanying increase in distance-related cost is kept within 10%. Overall, we provide an efficient and scalable solution for vertiport siting, supporting rapid decision-making in large-scale UAM scenarios.

Keywords: Low-altitude planning; Vertiport siting; Deep reinforcement learning; Algorithm exploration
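For a more concrete picture of the network summarized above, the following is a minimal sketch assuming a PyTorch implementation; it is not the authors' code. Multi-head attention fuses static and dynamic node features into per-node state embeddings, a GRU produces the query vector for the next siting decision, and a pointer-style head scores candidate locations. The class name, layer sizes, feature layouts, and the penalty weight are illustrative assumptions.

import torch
import torch.nn as nn

class SPIDEncoderSketch(nn.Module):
    """Illustrative encoder/decoder step, not the authors' implementation."""
    def __init__(self, static_dim=2, dynamic_dim=2, embed_dim=128, n_heads=8):
        super().__init__()
        # Fuse static features (e.g., candidate-site coordinates) and dynamic
        # features (e.g., remaining demand/capacity) into one state vector per node.
        self.embed = nn.Linear(static_dim + dynamic_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, static_feat, dynamic_feat, last_choice_emb, hidden=None):
        # static_feat, dynamic_feat: (batch, n_nodes, dim); last_choice_emb: (batch, 1, embed_dim)
        x = self.embed(torch.cat([static_feat, dynamic_feat], dim=-1))
        enc, _ = self.attn(x, x, x)                        # spatiotemporal node embeddings
        query, hidden = self.gru(last_choice_emb, hidden)  # sequential decision context
        # Pointer-style scores over candidate sites for the next siting decision.
        logits = torch.bmm(query, enc.transpose(1, 2)).squeeze(1) / enc.size(-1) ** 0.5
        return logits, enc, hidden

def episode_cost(service_distance, unmet_demand, penalty_weight=10.0):
    # Training signal sketched from the abstract: distance cost plus a weighted
    # penalty for unserved demand (the weight here is an assumed value).
    return service_distance + penalty_weight * unmet_demand

# Example forward pass on random data (batch of 4 instances, 50 candidate nodes).
if __name__ == "__main__":
    model = SPIDEncoderSketch()
    static = torch.rand(4, 50, 2)
    dynamic = torch.rand(4, 50, 2)
    last = torch.zeros(4, 1, 128)
    logits, _, _ = model(static, dynamic, last)
    print(logits.shape)  # torch.Size([4, 50])

In a REINFORCE-style training loop, the logits would be sampled to choose the next vertiport, the dynamic features (remaining capacity and unserved demand) would be updated, and the gradient of the episode cost would drive the policy update, matching the end-to-end optimization described in the abstract.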



