Transactions of China Electrotechnical Society  2024, Vol. 39 Issue (10): 2937-2952    DOI: 10.19595/j.cnki.1000-6753.tces.230265
Magnetic Field Prediction Method Based on Residual U-Net and Self-Attention Transformer Encoder
Jin Liang1,2, Yin Zhenhao1,2, Liu Lu1, Song Juheng1, Liu Yuankai1
1. State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300401, China;
2. Key Laboratory of Electromagnetic Field and Electrical Apparatus Reliability of Hebei Province, Hebei University of Technology, Tianjin 300401, China
Abstract: Magnetic field analysis of electrical machines and transformers with complex geometries by the finite element method suffers from long simulation times, and the results cannot be reused. This paper therefore proposes a magnetic field prediction method based on a residual U-Net and a self-attention Transformer encoder. First, finite element models of a permanent magnet synchronous motor (PMSM) and an amorphous metal transformer (AMT) are built to generate the datasets required for deep learning. Then, Transformer modules are combined with the U-Net model, and a short-residual mechanism is introduced to form the ResUnet-Transformer model, which predicts the magnetic field by predicting the pixels of an image. Finally, the model is optimized with the Targeted Dropout algorithm and a dynamic learning-rate adjustment strategy, which suppresses overfitting and improves prediction accuracy. Worked examples show that the ResUnet-Transformer model achieves a mean absolute percentage error (MAPE) below 1% on the test sets of both the PMSM and AMT datasets, using only 500 samples. The proposed method reduces the time and resource cost of fine-grained simulation and topology optimization under real and multiple operating conditions, and is also a key enabling technique for virtual sensors and, ultimately, digital twins.
Keywords: finite element method; electromagnetic field; deep learning; U-Net; Transformer
Abstract: Accurate simulation of electromagnetic characteristics in electrical equipment relies on the finite element method. However, the increasing complexity of large electrical machines and transformers poses challenges, leading to prolonged simulation time and significant computational resource consumption. At the same time, the finite element method cannot establish an a priori model: when design parameters, structures, or operating conditions change, the model must be rebuilt. Considering the powerful feature extraction ability of deep learning, this paper proposes a magnetic field prediction method based on a residual U-Net and a self-attention Transformer encoder. The finite element method is used to obtain the dataset for deep learning training. The deep learning model can be trained once and used for multiple predictions, addressing the limitations of the finite element method and reducing computational time and resource consumption.
Firstly, this paper leverages the inherent advantages of the convolutional neural network (CNN) in image processing, particularly the U-shaped CNN known as U-Net, built on an encoder-decoder structure. This architecture captures fine details and learns from limited samples better than the traditional CNN. To mitigate network degradation and address the limitations of convolutional operations, short residual connections and Transformer modules are introduced into the U-Net architecture, creating the ResUnet-Transformer model. The short residual connections accelerate network training, while the self-attention mechanism from the Transformer network enables effective interaction of global features. Secondly, this paper introduces the Targeted Dropout algorithm and an adaptive learning rate to suppress overfitting and enhance the accuracy of magnetic field predictions. The Targeted Dropout algorithm incorporates post-pruning strategies into the training process of neural networks, effectively mitigating overfitting and improving the model's generalization. Additionally, an adaptive learning rate is implemented with the cosine annealing algorithm on top of the Adam optimizer, gradually reducing the learning rate as the objective function converges to the optimal value and avoiding oscillation or non-convergence. Finally, the ResUnet-Transformer model is validated through engineering cases involving permanent magnet synchronous motors (PMSM) and amorphous metal transformers (AMT).
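The two training refinements named above can be sketched in isolation. The following is a minimal, illustrative Python sketch, not the authors' code: a cosine-annealing learning-rate schedule of the kind typically paired with Adam, and a Targeted Dropout step that restricts dropout to the gamma fraction of weights with the smallest magnitudes, each dropped with probability alpha (parameter names follow the Targeted Dropout convention; all numeric values are placeholders).

```python
import math
import numpy as np

def cosine_annealing_lr(step, total_steps, lr_max=1e-3, lr_min=1e-6):
    """Cosine annealing: decay the learning rate from lr_max to lr_min
    over total_steps, via lr_min + (lr_max - lr_min)(1 + cos(pi*t/T))/2."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def targeted_dropout(weights, gamma=0.5, alpha=0.5, rng=None):
    """Targeted Dropout on a flat weight vector: the gamma fraction of
    weights with the smallest |w| are pruning candidates; each candidate
    is independently zeroed with probability alpha during training."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float).copy()
    k = int(gamma * w.size)                  # number of low-magnitude candidates
    candidates = np.argsort(np.abs(w))[:k]   # indices of the k smallest |w|
    drop = rng.random(k) < alpha             # Bernoulli(alpha) mask over candidates
    w[candidates[drop]] = 0.0
    return w
```

Near convergence the schedule approaches lr_min smoothly, which is what avoids the oscillation the paragraph mentions; restricting dropout to low-magnitude weights biases the network toward configurations that survive post-training pruning.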
On the PMSM dataset, training the ResUnet-Transformer model with 250 samples and testing it with 100 samples, the mean square error (MSE) and mean absolute percentage error (MAPE) are used as performance evaluation metrics. Compared to CNN, U-Net, and Linknet models, the ResUnet-Transformer model achieves the highest prediction accuracy, with an MSE of 0.07×10⁻³ and a MAPE of 1.4%. The prediction efficiency of the 100 test samples using the ResUnet-Transformer model surpasses the finite element method by 66.1%. Maintaining consistency in structural and parameter settings, introducing the Targeted Dropout algorithm and cosine annealing algorithm improves the prediction accuracy by 36.4% and 26.3%, respectively. To evaluate the model's generalization capability, the number of training samples for PMSM and AMT datasets is varied, and the model is tested using 100 samples. Inadequate training samples result in poor magnetic field prediction performance. When the training dataset size increases to 300, the prediction error does not decrease but shows a slight rise. However, with further increases in the training dataset size, the error significantly decreases, and the MAPE for the PMSM and AMT datasets reaches 0.7% and 0.5%, respectively, with just 500 training samples.
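For reference, the two evaluation metrics quoted above, MSE and MAPE, are standard and can be computed as follows (a plain-Python sketch; in the paper they are evaluated over the pixels of the predicted field images):

```python
def mse(y_true, y_pred):
    """Mean square error over paired samples."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent; y_true entries must be nonzero."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Note that MAPE is undefined where the true value is zero, so in field-image prediction it is typically computed over pixels with non-negligible field magnitude.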
Received: 2023-03-08      Published online: 2024-06-07
PACS: TM153  
Funding: Supported by the General Program of the National Natural Science Foundation of China (51977148), the Major Research Plan of the National Natural Science Foundation of China (92066206), and the Central Government Guided Local Science and Technology Development Fund Free Exploration Project (226Z4503G)
Corresponding author: Jin Liang, male, born in 1982, PhD, Professor. Research interests include engineering electromagnetic fields and magnetic technology, electromagnetic field cloud computing, and electromagnetic nondestructive testing. E-mail: jinliang_email@163.com
About the author: Yin Zhenhao, male, born in 2000, master's student. Research interest: applications of deep learning in electrical equipment. E-mail: 1691150589@qq.com
Cite this article:
Jin Liang, Yin Zhenhao, Liu Lu, Song Juheng, Liu Yuankai. Magnetic Field Prediction Method Based on Residual U-Net and Self-Attention Transformer Encoder. Transactions of China Electrotechnical Society, 2024, 39(10): 2937-2952.
Link to this article:
https://dgjsxb.ces-transaction.com/CN/10.19595/j.cnki.1000-6753.tces.230265          https://dgjsxb.ces-transaction.com/CN/Y2024/V39/I10/2937