Transactions of China Electrotechnical Society  2024, Vol. 39 Issue (19): 6244-6256    DOI: 10.19595/j.cnki.1000-6753.tces.231338
SOC Prediction of Lithium-Ion Batteries Based on Sequence-to-Sequence Model with Adversarial Weighted Attention Mechanism
Chen Zhiming, Liu Jianhua, Ke Tianci, Chen Kewei
1. School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350018, China;
2. Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fuzhou 350018, China

Abstract  In practical use, lithium-ion batteries often operate at extreme temperatures, and different battery systems behave differently under different discharge conditions. Traditional state of charge (SOC) prediction methods based on physical models often overlook these differences, leading to significant prediction errors. Although deep learning techniques such as recurrent neural networks (RNNs) and Transformers have recently gained attention in time-series forecasting, most of them have delivered only limited gains in prediction accuracy. To address these challenges, this paper introduces an adversarial weighted attention sequence-to-sequence (AWAS) model to enhance SOC prediction.
First, features are extracted from the lithium-ion battery dataset to form an input matrix, which is fed into an encoder built on gated recurrent units (GRU) to capture correlations among the features. Next, on top of the multi-head attention mechanism, an additional linear layer is introduced to compute a weight matrix W related to the number of attention heads. The output dimension of this linear layer equals the number of attention heads, enabling extra linear transformations of the queries, keys, and values and mapping them into the weight space of the multi-head attention, which increases the computational flexibility of the attention mechanism. The information extracted by the encoder GRU is passed through this improved weighted attention mechanism and then to the GRU in the decoder, strengthening the extraction of correlated information among the features. Finally, the idea of adversarial training is introduced, with three convolutional layers forming the core of the discriminator: the output of the decoder's GRU is treated as sequence 1, the true SOC values for the corresponding time period as sequence 2, and the sigmoid function is used to evaluate the truthfulness of sequence 1 against sequence 2. Adversarial training mitigates gradient vanishing in the GRU and helps capture long-term dependencies. The decoder's output is then processed by a fully connected layer to obtain the SOC prediction. The results show that, for single-step prediction, the proposed model achieves a root mean square error (RMSE) of 0.141 2% and a mean absolute error (MAE) of 0.109 4% on the LG-HG2 dataset; on the Panasonic dataset the errors drop further to 0.101 3% and 0.080 3%. The Sparse Informer model, the strongest of the baseline models, yields corresponding errors of 0.260 2%, 0.218 2%, 0.380 1%, and 0.278 2%. For the 12-step prediction task, the proposed model achieves the lowest average MAE of 0.108 7% and mean absolute percentage error (MAPE) of 0.173 4% on the Panasonic dataset at -20℃. Ablation experiments show that the average MAPE for 12-step prediction decreases to 0.347 2%, while the average RMSE and MAE for single-step prediction decrease to 0.096 6% and 0.083 7%, respectively. These findings validate the effectiveness of the proposed model architecture.
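As an illustrative sketch only (not code from the paper), the two components described above could be implemented roughly as follows in PyTorch. The exact way the head-weight matrix W is applied to the queries, keys, and values is an assumption inferred from the abstract, and all class names, dimensions, and hyperparameters (d_model, num_heads, hidden) are hypothetical.

# Illustrative sketch only -- not the authors' implementation. Assumes PyTorch.
# How the head-weight matrix W is applied to Q/K/V is an assumption inferred
# from the abstract; all names and dimensions here are hypothetical.
import torch
import torch.nn as nn


class WeightedMultiHeadAttention(nn.Module):
    """Multi-head attention plus an extra linear layer whose output size equals
    the number of heads, yielding a weight matrix W that rescales the per-head
    query/key/value projections."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Extra linear layer: one weight per attention head (the matrix W).
        self.head_weight = nn.Linear(d_model, num_heads)

    def _split(self, x, batch, length):
        # (B, L, d_model) -> (B, H, L, d_head)
        return x.view(batch, length, self.num_heads, self.d_head).transpose(1, 2)

    def forward(self, query, key, value):
        # query: (B, Lq, d_model); key/value: (B, Lk, d_model)
        B, Lq, _ = query.shape
        Lk = key.size(1)
        # Per-head weights mapping Q/K/V into the head-weight space.
        w_q = torch.softmax(self.head_weight(query), dim=-1)  # (B, Lq, H)
        w_k = torch.softmax(self.head_weight(key), dim=-1)    # (B, Lk, H)
        w_v = torch.softmax(self.head_weight(value), dim=-1)  # (B, Lk, H)
        q = self._split(self.q_proj(query), B, Lq) * w_q.transpose(1, 2).unsqueeze(-1)
        k = self._split(self.k_proj(key), B, Lk) * w_k.transpose(1, 2).unsqueeze(-1)
        v = self._split(self.v_proj(value), B, Lk) * w_v.transpose(1, 2).unsqueeze(-1)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (B, H, Lq, Lk)
        attn = torch.softmax(scores, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(B, Lq, -1)    # (B, Lq, d_model)
        return self.out_proj(ctx)


class ConvDiscriminator(nn.Module):
    """Three 1-D convolution layers with a sigmoid output, judging whether a
    sequence is the decoder's predicted SOC (sequence 1) or the true SOC
    trajectory (sequence 2)."""

    def __init__(self, in_channels: int = 1, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, seq):
        # seq: (B, channels, seq_len) -> scalar realism score per sequence
        return torch.sigmoid(self.net(seq).mean(dim=-1))

In such a sketch, the encoder GRU output would serve as the query, key, and value of the weighted attention, and the discriminator would be trained with a binary cross-entropy loss to separate the decoder's SOC sequence from the true SOC sequence, following the adversarial setup described above.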
The experiments lead to the following conclusions: (1) Compared with numerous RNN and Transformer models, incorporating adversarial training into the Seq2Seq model effectively mitigates gradient vanishing and exploding; across the various prediction tasks, the error metrics on the three datasets are consistently the lowest, so introducing adversarial training is appropriate. (2) Conventional multi-head attention mechanisms show a substantial increase in MAPE as the lithium battery temperature rises, whereas the proposed weighted attention mechanism both reduces the errors and curbs their upward trend, indicating that it is the more practical choice.
Key words: State of charge; sequence-to-sequence model; generative adversarial network; sparse informer; attention
Received: 17 August 2023     
CLC: TM912
Cite this article:   
Chen Zhiming, Liu Jianhua, Ke Tianci, et al. SOC Prediction of Lithium-Ion Batteries Based on Sequence-to-Sequence Model with Adversarial Weighted Attention Mechanism[J]. Transactions of China Electrotechnical Society, 2024, 39(19): 6244-6256.
URL:  
https://dgjsxb.ces-transaction.com/EN/10.19595/j.cnki.1000-6753.tces.231338     OR     https://dgjsxb.ces-transaction.com/EN/Y2024/V39/I19/6244
Copyright © Transactions of China Electrotechnical Society