Zhou, S., Chen, Y., Liang, W., Li, K. and Meng, W. (2026) TrustHFL: An Efficient Aggregation Method for Trustworthy Hierarchical Federated Learning. IEEE Internet of Things Journal. ISSN 2327-4662
IoT-61203-2026_Proof_hi.pdf - Accepted Version
Available under License Creative Commons Attribution.
Abstract
The hierarchical federated learning framework significantly reduces the communication burden on central servers; however, the introduction of edge servers brings new risks, such as a Single Point of Failure (SPOF), alongside persistent challenges such as imbalanced data distribution. To tackle these challenges, we propose TrustHFL, an innovative aggregation method designed for secure hierarchical federated learning. TrustHFL improves training efficiency through group training, clustering clients with similar data distribution characteristics. Within each cluster, aggregation is synchronous, while aggregation between clusters is asynchronous, alleviating delays caused by bottleneck clients. We also introduce a robust access control mechanism for secure interactions between clients and edge servers, ensuring data privacy and system integrity. Moreover, our design favours off-chain computation and training, limiting on-chain storage to essential information and thereby minimising both the storage and computational demands on the blockchain, ultimately improving training efficiency. Extensive experimental results demonstrate that the proposed method accelerates convergence and improves model accuracy. Compared to existing classical federated learning methods, model accuracy improves by an average of 1.98% across various data distribution scenarios, while the time required to reach the same accuracy is reduced by an average of 65.54%.
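The two-level aggregation described above can be sketched in a few lines. This is an illustrative toy model, not the paper's implementation: the function names, the element-wise FedAvg within a cluster, and the staleness-discounted mixing coefficient for the asynchronous global merge are all assumptions made for the example.

```python
def cluster_aggregate(client_weights):
    """Synchronous aggregation within one cluster:
    element-wise average (FedAvg-style) of client weight vectors."""
    n = len(client_weights)
    dim = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n for i in range(dim)]

def async_global_update(global_model, cluster_model, staleness, base_lr=0.5):
    """Asynchronous merge of a cluster model into the global model.
    The mixing weight decays with staleness, so slow (bottleneck)
    clusters contribute less to the current global state."""
    alpha = base_lr / (1 + staleness)  # assumed staleness discount
    return [(1 - alpha) * g + alpha * c
            for g, c in zip(global_model, cluster_model)]

# Example: two clients in one cluster, then an async merge (staleness 1).
cluster = cluster_aggregate([[1.0, 2.0], [3.0, 4.0]])
print(cluster)  # [2.0, 3.0]
global_model = async_global_update([0.0, 0.0], cluster, staleness=1)
print(global_model)  # [0.5, 0.75]
```

Within a cluster every client is waited on (synchronous), so the average is exact; across clusters each edge server pushes its result as soon as it is ready, and the staleness discount keeps a lagging cluster from dragging the global model backwards.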