TY - GEN
T1 - Enabling Privacy-Preserving Edge AI Federated Learning Enhanced with Forward-Forward Algorithm
AU - Ghader, Mohammadnavid
AU - Kheradpisheh, Saeed Reza
AU - Farahani, Bahar
AU - Fazlali, Mahmood
N1 - © 2024 IEEE.
PY - 2024/8/15
Y1 - 2024/8/15
N2 - Artificial Intelligence (AI) has emerged as a pivotal technology across various sectors, including healthcare, transportation, and the development of smart cities, revolutionizing service delivery and operational efficiency. However, the adoption and introduction of new data-driven services leveraging centralized training models have been hindered by significant concerns over privacy and data security, as these traditional techniques potentially expose sensitive information to breaches. Federated Learning (FL) presents a compelling solution to this dilemma, enabling decentralized data processing without compromising privacy. Integrating Edge AI into this framework, FL enables the collaborative training of models based on data distributed across different clients. Nevertheless, implementing FL on edge devices introduces a set of challenges due to the limited computational and memory resources available on such tiny devices. Specifically, the backpropagation (BP) phase of training models is notably resource-intensive, posing a barrier to efficient deployment. To address this, we replaced the backpropagation phase with a Forward-Forward (FF) algorithm. Moreover, we integrated and compared several loss functions, namely Hinton, Symba, and Swish, to assess their compatibility and efficiency in the context of forward-forward training within the federated learning framework. The study indicates that our novel method leads to a slight decrease in accuracy for large and complex datasets compared to the traditional BP technique. However, it has the potential to enhance runtime and reduce memory overhead. The proposed technique represents a promising path toward the broader adoption of Edge AI by effectively addressing critical technical challenges, namely privacy concerns and on-chip model training.
AB - Artificial Intelligence (AI) has emerged as a pivotal technology across various sectors, including healthcare, transportation, and the development of smart cities, revolutionizing service delivery and operational efficiency. However, the adoption and introduction of new data-driven services leveraging centralized training models have been hindered by significant concerns over privacy and data security, as these traditional techniques potentially expose sensitive information to breaches. Federated Learning (FL) presents a compelling solution to this dilemma, enabling decentralized data processing without compromising privacy. Integrating Edge AI into this framework, FL enables the collaborative training of models based on data distributed across different clients. Nevertheless, implementing FL on edge devices introduces a set of challenges due to the limited computational and memory resources available on such tiny devices. Specifically, the backpropagation (BP) phase of training models is notably resource-intensive, posing a barrier to efficient deployment. To address this, we replaced the backpropagation phase with a Forward-Forward (FF) algorithm. Moreover, we integrated and compared several loss functions, namely Hinton, Symba, and Swish, to assess their compatibility and efficiency in the context of forward-forward training within the federated learning framework. The study indicates that our novel method leads to a slight decrease in accuracy for large and complex datasets compared to the traditional BP technique. However, it has the potential to enhance runtime and reduce memory overhead. The proposed technique represents a promising path toward the broader adoption of Edge AI by effectively addressing critical technical challenges, namely privacy concerns and on-chip model training.
KW - Edge AI
KW - Federated Learning
KW - Forward-Forward Algorithm
KW - Privacy-Preserving ML
UR - http://www.scopus.com/inward/record.url?scp=85202517184&partnerID=8YFLogxK
U2 - 10.1109/COINS61597.2024.10622150
DO - 10.1109/COINS61597.2024.10622150
M3 - Conference contribution
AN - SCOPUS:85202517184
T3 - IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2024
BT - 2024 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2024
PB - Institute of Electrical and Electronics Engineers (IEEE)
CY - London
T2 - 2024 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2024
Y2 - 29 July 2024 through 31 July 2024
ER -