Open access peer-reviewed chapter

Advanced Driving Assistance System for an Electric Vehicle Based on Deep Learning

Written By

Abdelaziz Sahbani and Hela Mahersia

Reviewed: 13 June 2021 Published: 24 August 2021

DOI: 10.5772/intechopen.98870

From the Edited Volume

New Perspectives on Electric Vehicles

Edited by Marian Găiceanu


Abstract

This chapter presents the design of a new speed control method based on artificial intelligence techniques and applied to an autonomous electric vehicle. We develop an Advanced Driver Assistance System (ADAS) that aims to improve driving behaviour and safety, especially when the vehicle is traveling too fast. The proposed model is a complete end-to-end vehicle speed control system that proceeds from a detected speed limit sign to the regulation of the motor's speed: it recognizes the speed limit signs and extracts from them a speed value that is sent, as a reference, to a NARMA-L2 based controller. The study is developed specifically for an electric vehicle driven by a Brushless Direct Current (BLDC) motor. The simulation results, implemented in Matlab-Simulink, show that the speed of the electric vehicle is controlled successfully with the different speed references coming from the image processing unit.

Keywords

  • Brushless DC
  • Deep learning
  • Intelligent electrical vehicle
  • NARMA-L2 controller
  • Speed control
  • Traffic sign recognition

1. Introduction

Nowadays, internal combustion engine vehicles are a major source of air pollution and damage to our health. Electric vehicles are one of the most promising energy-saving and environmentally friendly alternatives to these problems. However, such solutions still lack autonomy. The current trend in vehicle transportation is to implement solutions where the driver and an intelligent system coexist and communicate to improve road safety and security. Advanced driver assistance systems (ADAS) are bridging the gap between traditional electric vehicles and the vehicles of tomorrow, which are intelligent, autonomous and safe for the driver, the passengers and everyone else on the road.

Generally, an autonomous electric vehicle consists of 4 main modules: the energy source module, the auxiliary module, the intelligent module composed of computers, sensors and actuators, and an electrical propulsion module [1], which consists of an electronic controller, a power converter, a mechanical transmission and an electric motor, either AC or DC, as shown in Figure 1. The motor converts the electrical energy coming from the battery into the mechanical energy that moves the vehicle. It can also operate as a generator, sending energy back to the energy source.

Figure 1.

Block diagram of an autonomous electric vehicle.

The choice of motor varies with the specific requirements of autonomous electric vehicle applications, such as robustness, power, speed range and noise level. DC motors have been widely used since they fulfill some of the requirements mentioned before [2]. However, in recent years, they have been replaced by Brushless DC motors (BLDC), which compensate for their electrical losses and mechanical friction while offering high efficiency and low noise [3, 4, 5]. Besides, BLDC motors are known for their large starting torque, small size and low weight. They can also be built into the wheels to reduce the complexity and weight of the driving mechanism.

Speed regulation is an important control challenge for any BLDC motor. In [6], the authors proposed an implementation of Space Vector Pulse Width Modulation (SVPWM) for the control of the power converter and the BLDC motor. They showed that the SVPWM methodology offers minimum switching losses and reduces harmonics compared with other Pulse Width Modulation (PWM) methods.

To enhance the speed adaptation of the autonomous electric vehicle, while assisting the driver and increasing safety and security, ADAS have been developed that rely on inputs from multiple data sources, including automotive imaging, image processing and in-vehicle networking.

In fact, it has been said that 80-90% of the driver's performance depends on visual information [2]. This is why a large number of collisions occur when the driver is not looking forward or is unable to stop at an intersection because of excessive speed. It is therefore very important to automatically detect speed limit signs and control the vehicle's speed when it is traveling too fast. Various ADAS are now designed with a large number of sensors and actuators to analyze the environment and take the appropriate actions. Among the existing ADAS functions, traffic sign recognition holds a lot of potential, as it enhances the driver's safety by warning about possible speed-related dangers.

The automatic recognition of these signs, however, is not easy to carry out because of weather conditions, the blur caused by moving vehicles and varying lighting conditions. Figure 2 shows some of the factors that make it difficult to identify road signs.

Figure 2.

Difficulties that affect the traffic sign recognition systems.

To handle these challenges, researchers recommended the use of image processing and machine learning techniques. The automatic recognition of traffic signs includes, mainly, the traffic sign detection and the traffic sign classification.

Traffic signs have several distinctive features like colors, shapes and symbols. In the detection stage, the input images are preprocessed, enhanced and then segmented according to their color or geometry. Color-based methods usually use the normalized RGB space [7, 8, 9, 10], the HSV space [11, 12, 13, 14, 15, 16] or the YUV space [17] to distinguish between traffic and non-traffic signs. However, these methods are generally affected by weather conditions and illumination variations.

Geometry-based methods are, on the contrary, robust to the illumination changes since they characterize the shape of the traffic sign. Mainly, authors used corner detection [11, 18], distance transform [19], Hough transform [20, 21] or radial geometry voting [22, 23, 24].

Recently, many researchers combined the color-based methods with the shape-based methods to achieve better results [10, 13, 16, 25, 26]. For instance, authors in [16], used color segmentation to roughly identify the sign before applying a matching technique based on the logical XOR operator using the shape extracted details. Similarly, authors in [10] used color segmentation to roughly locate the signs and then, used the shape information to eliminate the false candidates.

As for the classification stage, many methods have been used to identify the class of the traffic signs, such as, Support Vector Machine (SVM) [8, 13, 16], Viola-Jones detector [27, 28], neural networks [2, 15] and random forest [29, 30].

As we can see, all the presented works focused only on recognizing traffic signs, without integrating and testing them in a complete system that controls the mechanical and electrical parts. In [2], the authors designed an end-to-end system in which a NARMA-L2 neuro controller regulates the speed, based on steerable decomposition and Bayesian neural networks. Despite an acceptable accuracy rate (about 0.975), it unfortunately failed to accurately recognize noisy signs.

In this chapter, we continue this line of work and design an end-to-end system, based on deep learning approaches, that enhances autonomous vehicle speed control and performs better in the presence of noisy inputs.

The rest of this chapter is organized as follows: Section 2 describes the proposed model, Section 3 presents the experimental results and Section 4 concludes the chapter.


2. The proposed speed controller based on traffic sign recognition

The proposed speed control system comprises two main modules, an image processing module and a control module. The first module recognizes the speed limit signs and extracts from them a speed value that is sent to the control module. An overview of these units is shown in Figure 3.

Figure 3.

The proposed speed control system.

2.1 The image processing module

This first module aims to recognize the speed limit traffic sign whatever the weather conditions are. Figure 4 presents the proposed model of the speed limit sign recognition system. This system performs two important tasks: detecting the sign and then identifying the speed. As shown in Figure 4, this unit receives an input image and performs a grayscale transformation followed by normalization and noise removal. Lastly, each preprocessed image is fed to a CNN unit for classification.

Figure 4.

The proposed speed limit sign recognition module.

2.1.1 The preprocessing stage

In order to prepare the deep neural network to learn relevant features from speed-limit images, additional processing is required: first of all, we expand the training images, then, we normalize the augmented images, and finally, we filter them with a median filter.

  • Data normalization:

    In this step, we normalize the grayscale images in order to reduce the poor lighting variations observed in the database. Let Im_ij denote the grayscale value of pixel (i, j), let Me and Std denote the estimated mean and standard deviation of Im, respectively, and let Nor_ij denote the normalized grayscale value of pixel (i, j). The normalized image is defined by Eq. (1):

$$Nor_{ij} = \frac{(Im_{ij} - Me)\cdot Constant_1}{Std} + Constant_2 \qquad (1)$$

Constant1 and Constant2 are two constants set experimentally to 50 and 100, respectively. Figure 5 shows the images obtained by Eq. (1). Finally, a median filter is applied to the normalized input image to obtain an enhanced speed sign image.

  • Data augmentation: Deep neural networks require a huge learning database to perform the speed limit recognition task. However, most publicly available databases suffer from a lack of data. Increasing these databases is, therefore, a crucial step toward accurate sign recognition. Moreover, augmenting the training data makes the proposed model more robust to geometric changes. Figure 6 shows a sample speed limit image with the different augmentation techniques applied to it: vertical flipping, rotation by small angles and horizontal translation of 1 unit to both sides, right and left.
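The preprocessing and augmentation steps described above can be sketched in a few lines of NumPy. The function names (`normalize`, `median3`, `augment`) and the 48x48 image size are illustrative assumptions, not part of the chapter's Matlab implementation, and the small-angle rotation is omitted since it would need an imaging library:

```python
import numpy as np

CONSTANT1 = 50.0   # gain, set experimentally in the chapter
CONSTANT2 = 100.0  # offset, set experimentally in the chapter

def normalize(im):
    # Eq. (1): Nor_ij = (Im_ij - Me) * Constant1 / Std + Constant2
    return (im - im.mean()) * CONSTANT1 / im.std() + CONSTANT2

def median3(im):
    # Minimal 3x3 median filter with edge replication; a library filter
    # would normally be used for the median filtering step.
    p = np.pad(im, 1, mode="edge")
    h, w = im.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def augment(im):
    # Vertical flip and 1-pixel horizontal shifts (edge-padded), as in
    # Figure 6; rotation by small angles is left out of this sketch.
    shift_right = np.pad(im, ((0, 0), (1, 0)), mode="edge")[:, :-1]
    shift_left = np.pad(im, ((0, 0), (0, 1)), mode="edge")[:, 1:]
    return [np.flipud(im), shift_right, shift_left]

img = np.random.default_rng(0).uniform(0.0, 255.0, (48, 48))
enhanced = median3(normalize(img))   # normalization, then median filtering
variants = augment(enhanced)
```

Note that, as in Figure 5, the median filter is applied after normalization, so the augmented variants are generated from already-enhanced images.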

Figure 5.

Data normalization: a. Original images, b. after grayscale transformation, c. after normalization by Eq. (1) and d. after median filtering.

Figure 6.

Some of the transformations used for data augmentation.

2.1.2 The deep learning stage

In this chapter, a Convolutional Neural Network (CNN) is used to extract features and organize the speed signs into categories. The architecture of the proposed CNN, which has three convolution layers and one output layer, is presented in Figure 7. The first, third and fifth layers are convolution layers with 8, 16 and 32 kernels, respectively, each of size 5x5. The activation function in the CNN is the rectified linear unit (ReLU). Sub-sampling takes place in the second, fourth and sixth layers, where we used max-pooling with a kernel size of 2x2 and a stride of 2. The last sub-sampling output is flattened into a 1152-dimensional vector and connected directly to the output layer, which uses a soft-max activation function.
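The chapter does not state the input resolution or the convolution padding. The small shape-bookkeeping sketch below shows that a 48x48 grayscale input with 'same'-padded 5x5 convolutions and three 2x2, stride-2 poolings is one configuration consistent with the 1152-dimensional flattened vector (32 channels x 6 x 6); the input size is an assumption, not a figure from the chapter:

```python
def cnn_feature_size(h, w, kernels=(8, 16, 32)):
    # Trace the tensor shape through three stages of
    # (5x5 'same' convolution -> 2x2 max-pooling with stride 2):
    # the convolution keeps h x w, the pooling halves both dimensions.
    for _ in kernels:
        h, w = h // 2, w // 2
    return kernels[-1] * h * w  # channels * height * width after flattening

flat_dim = cnn_feature_size(48, 48)  # -> 1152 for a 48x48 grayscale input
```

With 'valid' (unpadded) convolutions no common input size yields 1152, which is why 'same' padding is assumed here.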

Figure 7.

Proposed CNN’s structure.

2.2 The speed control module

The aim of this unit is to control the speed of the autonomous electric vehicle, using the information coming from the image processing unit as the speed reference for the control unit. The studied traction system is composed of a BLDC motor, a three-phase Mosfet inverter and a gearbox associated with a mechanical differential used to adapt the speed of the shaft to the two wheels (see Figure 8). The traction system can be reduced to a single wheel [31] if the mechanical differential is not taken into consideration.

Figure 8.

Control strategy block diagram of the EV drive system using BLDC motor.

Figure 8 shows the control strategy of the AEV using a BLDC motor. The feedback signals are the motor speed, measured by the speed sensor, and the rotor position, taken from the three Hall sensors. The speed controller unit receives the reference speed signal from the digital processing unit (detected from the traffic speed sign) and the actual speed of the motor (the actual vehicle speed). The generated PWM then depends on the error between the reference speed and the measured speed, as well as on the commutation sequence of the three Hall sensor signals Hsa, Hsb and Hsc (Figure 9), in order to control the three-phase inverter switches [32].

Figure 9.

Hall sensor output signals over a 360 electrical degree cycle.

The three-phase inverter consists of six Mosfet switches Qi, i = 1,..,6 and six freewheeling diodes, as shown in Figure 10. Considering that the motor turns clockwise, the states of the six switches (Sw1, Sw2, Sw3, Sw4, Sw5 and Sw6), depending on the state of the three Hall sensors (Hsa, Hsb and Hsc), are shown in Table 1.

Figure 10.

Equivalent circuit of the BLDC motor associated with three-phase inverter.

2.2.1 Mathematical model of the studied BLDC motor

The BLDC motor has three windings Y-connected on the stator and a permanent-magnet rotor. We neglect saturation effects and assume constant parameters in the three phases.

The electrical equations of the BLDC motor are described by:

$$\begin{bmatrix} V_{as}\\ V_{bs}\\ V_{cs} \end{bmatrix} =
\begin{bmatrix} R_s & 0 & 0\\ 0 & R_s & 0\\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a\\ i_b\\ i_c \end{bmatrix} +
\frac{d}{dt}\left(\begin{bmatrix} L_{aa} & L_{ab} & L_{ac}\\ L_{ba} & L_{bb} & L_{bc}\\ L_{ca} & L_{cb} & L_{cc} \end{bmatrix}
\begin{bmatrix} i_a\\ i_b\\ i_c \end{bmatrix}\right) +
\begin{bmatrix} e_a\\ e_b\\ e_c \end{bmatrix} \qquad (2)$$

Where:

  • Vas, Vbs and Vcs are the stator voltages,

  • Rs is the stator resistance (Ra = Rb = Rc = Rs),

  • ia, ib and ic are the stator currents,

  • Laa, Lbb and Lcc are the self-inductances of phases a, b and c, respectively,

  • Lab, Lac and Lbc are the mutual inductances,

  • ea, eb and ec are back EMF’s.

If $L_{aa} = L_{bb} = L_{cc} = L$ and $L_{ab} = L_{ba} = L_{ac} = L_{ca} = L_{bc} = L_{cb} = M$, (3)

the state-space representation of the motor becomes:

$$\begin{bmatrix} V_{as}\\ V_{bs}\\ V_{cs} \end{bmatrix} =
\begin{bmatrix} R_s & 0 & 0\\ 0 & R_s & 0\\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a\\ i_b\\ i_c \end{bmatrix} +
\frac{d}{dt}\left(\begin{bmatrix} L & M & M\\ M & L & M\\ M & M & L \end{bmatrix}
\begin{bmatrix} i_a\\ i_b\\ i_c \end{bmatrix}\right) +
\begin{bmatrix} e_a\\ e_b\\ e_c \end{bmatrix} \qquad (4)$$

In addition, for balanced motor phases, we have:

$$i_a + i_b + i_c = 0 \qquad (5)$$

and

$$L_s = L - M \qquad (6)$$

so the state space representation is:

$$\begin{bmatrix} V_{as}\\ V_{bs}\\ V_{cs} \end{bmatrix} =
\begin{bmatrix} R_s & 0 & 0\\ 0 & R_s & 0\\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a\\ i_b\\ i_c \end{bmatrix} +
\frac{d}{dt}\left(\begin{bmatrix} L_s & 0 & 0\\ 0 & L_s & 0\\ 0 & 0 & L_s \end{bmatrix}
\begin{bmatrix} i_a\\ i_b\\ i_c \end{bmatrix}\right) +
\begin{bmatrix} e_a\\ e_b\\ e_c \end{bmatrix} \qquad (7)$$

The three back EMFs, which have a trapezoidal form, are represented by:

$$\begin{bmatrix} e_a\\ e_b\\ e_c \end{bmatrix} = \omega_m \,\lambda_m \begin{bmatrix} f_{as}(\theta_r)\\ f_{bs}(\theta_r)\\ f_{cs}(\theta_r) \end{bmatrix} \qquad (8)$$

Where:

  • ωm is the rotor speed (rad/s),

  • θr is the electrical rotor position (rad),

  • λm is the permanent-magnet flux linkage,

  • the functions fas(θr), fbs(θr) and fcs(θr) are given in Table 2.
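Under these assumptions each phase obeys the first-order equation v = Rs·i + Ls·di/dt + e from Eq. (7), which can be integrated numerically. The sketch below uses an explicit Euler step with made-up parameter values, purely to illustrate the per-phase dynamics, not the chapter's Simulink model:

```python
def phase_current_step(i, v, e, rs, ls, dt):
    # One explicit-Euler step of the per-phase equation (Eq. 7):
    # v = rs*i + ls*di/dt + e  =>  di/dt = (v - rs*i - e) / ls
    return i + dt * (v - rs * i - e) / ls

# Illustrative values only (not motor data from the chapter).
rs, ls, dt = 0.5, 1.0e-3, 1.0e-5
i = 0.0
for _ in range(5000):                 # 50 ms of simulated time (25 time constants)
    i = phase_current_step(i, v=12.0, e=2.0, rs=rs, ls=ls, dt=dt)
# With constant v and e the current settles to (v - e) / rs = 20 A.
```

The time constant Ls/Rs (2 ms here) sets how quickly the phase current follows the applied voltage; the Euler step must stay well below it for the integration to be stable.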

Cycle | Ha Hb Hc | Sw1 Sw2 Sw3 Sw4 Sw5 Sw6
1     | 1  0  1  | 1   0   0   1   0   0
2     | 1  0  0  | 1   0   0   0   0   1
3     | 1  1  0  | 0   0   1   0   0   1
4     | 0  1  0  | 0   1   1   0   0   0
5     | 0  1  1  | 0   1   0   0   1   0
6     | 0  0  1  | 0   0   0   1   1   0

Table 1.

Hall sensors output and the switch state.
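The commutation logic of Table 1 is simply a lookup from the Hall-sensor state to the six gate signals. A minimal sketch (the dictionary is transcribed directly from Table 1; the function name is illustrative):

```python
# Hall state (Ha, Hb, Hc) -> inverter gates (Sw1..Sw6), from Table 1.
COMMUTATION = {
    (1, 0, 1): (1, 0, 0, 1, 0, 0),  # cycle 1
    (1, 0, 0): (1, 0, 0, 0, 0, 1),  # cycle 2
    (1, 1, 0): (0, 0, 1, 0, 0, 1),  # cycle 3
    (0, 1, 0): (0, 1, 1, 0, 0, 0),  # cycle 4
    (0, 1, 1): (0, 1, 0, 0, 1, 0),  # cycle 5
    (0, 0, 1): (0, 0, 0, 1, 1, 0),  # cycle 6
}

def gate_signals(ha, hb, hc):
    """Return the switch pattern for one Hall-sensor reading."""
    return COMMUTATION[(ha, hb, hc)]
```

Note that every pattern closes exactly two switches, one high-side (odd-numbered) and one low-side (even-numbered), so only two phases conduct in each 60-degree commutation interval.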

θe (elec. deg.) | fas(θr)      | fbs(θr)      | fcs(θr)
0-60            | 1            | −1           | 1 − 6θe/π
60-120          | 1            | 6θe/π − 3    | −1
120-180         | 5 − 6θe/π    | 1            | −1
180-240         | −1           | 1            | 6θe/π − 7
240-300         | −1           | 9 − 6θe/π    | 1
300-360         | 6θe/π − 11   | −1           | 1

Table 2.

The functions fas(θr), fbs(θr) and fcs(θr) (θe in radians inside the expressions).
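The trapezoidal shape functions of Table 2 can be written compactly by noting that 6θe/π equals θe/30 when θe is expressed in electrical degrees, and that the three phases are the same waveform shifted by 120 electrical degrees. A sketch (function names are illustrative):

```python
def f_as(theta_deg):
    # Normalized trapezoid of phase a from Table 2; theta_e in electrical
    # degrees, taken modulo 360. Note 6*theta_e/pi = theta_deg/30.
    t = theta_deg % 360.0
    if t < 120.0:
        return 1.0
    if t < 180.0:
        return 5.0 - t / 30.0
    if t < 300.0:
        return -1.0
    return t / 30.0 - 11.0

def f_bs(theta_deg):
    return f_as(theta_deg + 240.0)  # phase b: same shape, shifted 240 deg

def f_cs(theta_deg):
    return f_as(theta_deg + 120.0)  # phase c: same shape, shifted 120 deg

def back_emf(omega_m, lambda_m, theta_deg):
    # Eq. (8): e_x = omega_m * lambda_m * f_xs(theta_r)
    return tuple(omega_m * lambda_m * f(theta_deg) for f in (f_as, f_bs, f_cs))
```

Each function is continuous and piecewise linear: flat at +1 or −1 for 120 electrical degrees, with 60-degree linear transitions between the two levels, which matches the rows of Table 2 at every interval boundary.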

2.2.2 Implementation of NARMA-L2 neuro controller for speed regulation

Many speed controllers have been frequently used in the literature, such as PI (proportional-integral) and PID (proportional-integral-derivative) controllers, given their simple structure, rapid reaction and reasonable cost. However, they exhibit a slow response when associated with dynamic loads. Recently, intelligent controllers, such as neural network control (NNC), genetic algorithms and fuzzy logic control, have been exploited for the speed control of BLDC motors [2]. Among these techniques, neural networks are considered in this chapter, because they are the most suitable to handle the non-linearity and the uncertainties of the BLDC system. Thus, an intelligent neural controller is proposed, based on the Nonlinear Auto-regressive Moving Average Level-2 (NARMA-L2) model.

There are two steps involved in the control process. The first step is feedback linearization to identify the system to be controlled, while the second step is the training of the system dynamics. Generally, the NARMA-L2 nonlinear description of the system is represented by the discrete-time nth order equation (Eq. (9)):

$$y(k+d) = f\left[y(k), y(k-1), \ldots, y(k-n+1), u(k-1), \ldots, u(k-n+1)\right] + g\left[y(k), y(k-1), \ldots, y(k-n+1), u(k-1), \ldots, u(k-n+1)\right] \, u(k) \qquad (9)$$

where u(k) is the system input, y(k) the system output and d the system delay. f[·] and g[·] are the additive and multiplicative nonlinear terms, respectively, which are approximated in the training step. Figure 11 shows the structure of the NARMA-L2 model, and Figure 12 gives the specifications of the plant model.
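Once f[·] and g[·] are trained, the NARMA-L2 controller inverts Eq. (9): it chooses the input that makes the predicted output equal the reference. A minimal sketch on a toy linear plant (the plant and all numeric values are illustrative, not the BLDC model from the chapter):

```python
def narma_l2_control(y_ref, f_hat, g_hat):
    # NARMA-L2 control law: with y(k+d) = f[.] + g[.] * u(k),
    # choosing u(k) = (y_ref - f[.]) / g[.] drives the output to y_ref.
    # f_hat and g_hat stand for the two trained sub-network outputs.
    return (y_ref - f_hat) / g_hat

# Toy linear plant y(k+1) = 0.5*y(k) + u(k), i.e. f = 0.5*y and g = 1:
y = 0.0
for _ in range(3):
    u = narma_l2_control(10.0, 0.5 * y, 1.0)
    y = 0.5 * y + u   # plant update: the reference is reached in one step
```

In practice f_hat and g_hat come from the two sub-networks trained on plant data, and the division must be guarded against g_hat approaching zero.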

Figure 11.

NARMA-L2 Model.

Figure 12.

Specifications of the plant model.


3. Experiments and discussion

In this section we present the dataset and the details of the experiments performed in this study. Experiments were carried out to select the best number of folds, the best number of layers and the best optimization function. The final proposed model was implemented using Matlab 2018 on an Intel Core i7 1.8 GHz CPU running Windows 10. The Simulink model of the proposed algorithm is presented in Figure 13.

Figure 13.

Simulink model of our algorithm.

3.1 Used database

In order to validate the proposed method, we used the German Traffic Sign Recognition Benchmark (GTSRB). This database was built for the traffic sign recognition problem and contains signs with varied visual appearance: the image quality varies with the illumination, the contrast and the color. Table 3 gives some details of the dataset used in our work.

Speed limit sign | Class | Nb of training images | Nb of testing images
30           | 1 | 2220  | 17
50           | 2 | 2250  | 27
60           | 3 | 1410  | 229
70           | 4 | 1980  | 304
80           | 5 | 1858  | 573
100          | 6 | 1439  | 281
120          | 7 | 1410  | 450
Total images |   | 12567 | 1881

Table 3.

Characteristics of the used database.

3.2 Speed sign recognition result

The input data were partitioned randomly according to a 10-fold cross-validation procedure: we divided the training database into ten folds. Each time, one fold was used as the validation set and the remaining nine folds were used for training. Training continues until the validation error starts increasing; at that moment, the training procedure is stopped and the network is saved. After running the 10 folds, we selected the neural network with the best validation performance. An example is shown in Figure 14. The final recognition rate is then calculated by applying the test database to the selected network, which assigns the value 1 to the exact speed limit sign and 0 to all other signs (Figure 15). The average speed limit sign recognition rate obtained after several runs is 90.8%, with a validation accuracy of 94.75% (Table 4).
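The fold partitioning described above can be sketched as follows; the random split and the fold count match the text, while the index handling is an illustrative NumPy implementation rather than the actual Matlab code:

```python
import numpy as np

def kfold_split(n, k=10, seed=0):
    # Randomly partition n sample indices into k folds, as in the
    # 10-fold cross-validation used to select the best network.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

folds = kfold_split(12567, k=10)      # 12567 training images (Table 3)
val = folds[0]                        # round 0: fold 0 is the validation set
train = np.concatenate(folds[1:])     # the other nine folds form the training set
```

Each of the ten rounds would rotate which fold plays the validation role, train with early stopping on the validation error, and keep the network with the best validation performance.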

Figure 14.

Example of the best validation performance. The training stops when the validation error is increasing.

Figure 15.

Confusion matrix of the deep learning process.

K-folds   | Validation accuracy | Testing accuracy | AUC
3-folds   | 93.005% | 86.44% | 0.98
5-folds   | 93.95%  | 87.9%  | 0.998
8-folds   | 95.41%  | 90.1%  | 0.993
10-folds  | 94.75%  | 90.8%  | 0.999

Table 4.

Speed limit recognition accuracy with different k-folds.

As can be seen from Figure 16, 100 and 60 are two of the most difficult signs to classify, whereas 30 and 50 are the easiest to recognize. Table 4 summarizes the recognition rates obtained with different numbers of folds (3, 5, 8 and 10) as well as the Area Under the ROC Curve (AUC). The best recognition rates were obtained with 10 folds. Besides, Table 5 shows that the Adam function is the best choice for the optimization of the deep learning network.

Figure 16.

Average recognition rates per speed limit sign.

Function | Testing accuracy
Sgdm     | 86.28%
Rmsprop  | 84.68%
Adam     | 90.8%

Table 5.

Speed limit recognition accuracy with different optimization function, K = 10 folds.

3.3 The NARMA L2 controller’s result

The speed detected by the sign recognition process is taken as the reference input to the NARMA-L2 controller. Here too, the controller must be trained to adjust the vehicle speed. The training data obtained from the NARMA-L2 controller are illustrated in Figure 17.

Figure 17.

Training data of NARMA L2 controller.

Figure 18 displays the speed response of the proposed system for both increases and decreases of the speed reference. We notice that the speed rapidly reaches the desired value. The proposed controller behaves well under disturbances: it stabilizes within 0.05 s with almost no static error.

Figure 18.

Desired Speed curve with the NARMA L2 controller.


4. Conclusions

Deep learning, image processing and a NARMA-L2 controller have been successfully developed and simulated using MATLAB to control the speed of a BLDC motor by recognizing traffic sign images. The simulation results show the effectiveness of the proposed controllers in dealing with the nonlinear motor system over wide dynamic operating regimes.

References

  1. K.-V. Singh, H.-O. Bansal, and D. Singh, A comprehensive review on hybrid electric vehicles: architectures and components, Journal of Modern Transportation, pp. 1-31, 2019
  2. A. Sahbani, NARMA-L2 neuro controller for speed regulation of an intelligent vehicle based on image processing techniques, In Proc. of the 21st IEEE Saudi Computer Society National Computer Conference (NCC 2018), 2018
  3. A.-J. Godfrey and V. Sankaranarayanan, A new electric braking system with energy regeneration for a BLDC motor driven electric vehicle, Engineering Science and Technology, Vol. 21, pp. 704-713, 2018
  4. M.-S. Kumar and S.-T. Revankar, Development scheme and key technology of an electric vehicle: an overview, Renew. Sustain. Energy Rev., Vol. 70, pp. 1266-1285, 2017
  5. C.-L. Jeong and J. Hur, A novel proposal to improve reliability of spoke-type BLDC motor using ferrite permanent magnet, IEEE Trans. Ind. Appl., Vol. 52, No. 5, pp. 3814-3821, 2016
  6. F.-R. Yasien and R.-A. Mahmood, Design new control system for brushless DC motor using SVPWM, International Journal of Applied Engineering, Vol. 13, No. 1, pp. 582-589, 2018
  7. H.Y. Yalic and A.B. Can, Automatic recognition of traffic signs in Turkey roads, In Proc. of the 19th IEEE Signal Processing and Communications Applications Conference, 2011
  8. X. Yuan, X.L. Hao, H.J. Chen and X.Y. Wei, Robust traffic sign recognition based on color global and local oriented edge magnitude patterns, IEEE Trans. on Intelligent Transportation Systems, Vol. 15, pp. 1466-1477, 2014
  9. R. Timofte, K. Zimmermann, and L. Van Gool, Multi-view traffic sign detection, recognition, and 3D localisation, Mach. Vis. Appl., Vol. 25, No. 3, pp. 633-647, 2014
  10. A. Alam and Z.-A. Jaffery, Indian traffic sign detection and recognition, International Journal of Intelligent Transportation Systems Research, pp. 1-15, 2019
  11. A. Escalera, J.M. Armingol and M. Mata, Traffic sign recognition and analysis for intelligent vehicles, Image Vis. Comput., Vol. 21, No. 3, pp. 247-258, 2003
  12. S. Maldonado, S. Arroyo, P. Jimenez, H. Moreno, and F. Ferreras, Road-sign detection and recognition based on support vector machines, IEEE Trans. on Intelligent Transportation Systems, Vol. 8, pp. 264-278, 2007
  13. J. Lillo-Castellano, I. Mora-Jimenez, C. Figuera-Pozuelo and J. Rojo-Alvarez, Traffic sign segmentation and classification using statistical learning methods, Neurocomputing, Vol. 153, pp. 286-299, 2015
  14. A. Ellahyani, M. Ansari and I. Jaafari, Traffic sign detection and recognition based on random forests, Applied Soft Computing, Vol. 46, pp. 805-815, 2016
  15. M.A. Sheikh, A. Kole and T. Maity, Traffic sign detection and classification using colour feature and neural network, In Proc. of the International Conference on Intelligent Control Power and Instrumentation (ICICPI 2016), 2016
  16. A. Madani and R. Yusof, Traffic sign recognition based on color, shape, and pictogram classification using support vector machines, Neural Computing and Applications, Vol. 30, No. 9, pp. 2807-2817, 2018
  17. J. Miura, T. Kanda, S. Nakatani, and Y. Shirai, An active vision system for on-line traffic sign recognition, IEICE Trans. Inf. Syst., Vol. E85-D, No. 11, pp. 1784-1792, 2002
  18. C.F. Paulo and P.L. Correia, Automatic detection and classification of traffic signs, In Proc. of the Eighth International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS'07, pp. 1-11, 2007
  19. E. Moomivand and E. Abolfazli, A modified structural method for shape recognition, In Proc. of the IEEE Symposium on Industrial Electronics and Applications, Malaysia, 2001
  20. A. Gonzalez, Automatic traffic signs and panels inspection system using computer vision, IEEE Trans. Intell. Transp. Syst., Vol. 12, No. 2, pp. 485-499, Jun. 2011
  21. W.-J. Kuo and C.-C. Lin, Two-stage road sign detection and recognition, In Proc. Int. Conf. Multimedia Expo, pp. 1427-1430, 2007
  22. G.B. Loy and N.M. Barnes, Fast shape-based road sign detection for a driver assistance system, In Proc. Int. Conf. Intell. Robots Syst., pp. 70-75, 2004
  23. N. Barnes, A. Zelinsky, and L.-S. Fletcher, Real-time speed sign detection using the radial symmetry detector, IEEE Trans. Intell. Transp. Syst., Vol. 9, No. 2, pp. 322-332, 2008
  24. Y. Gu, T. Yendo, M. P. Tehrani, T. Fujii, and M. Tanimoto, Traffic sign detection in dual-focal active camera system, In Proc. IEEE Intell. Veh. Symp., pp. 1054-1059, 2011
  25. C.S. Liu, F.L. Chang and Z.X. Chen, Rapid multiclass traffic sign detection in high-resolution images, IEEE Trans. on Intelligent Transportation Systems, Vol. 15, pp. 2394-2403, 2014
  26. H.J. Li, F.M. Sun, L.J. Liu, and L. Wang, A novel traffic sign detection method via color segmentation and robust shape matching, Neurocomputing, Vol. 169, pp. 77-88, 2015
  27. M. Mathias, R. Timofte, R. Benenson and L. V. Gool, Traffic sign recognition: How far are we from the solution?, In Proc. Int. Joint Conf. Neural Netw., pp. 1-8, 2013
  28. S. Segvic, K. Brkic, Z. Kalafatic and A. Pinz, Exploiting temporal and spatial constraints in traffic sign detection from a moving vehicle, Mach. Vis. Appl., Vol. 25, No. 3, pp. 649-665, 2014
  29. A. Kouzani, Road-sign identification using ensemble learning, In Proc. of the IEEE Intelligent Vehicles Symposium, pp. 438-443, 2007
  30. F. Zaklouta and B. Stanciulescu, Real-time traffic sign recognition in three stages, Robotics and Autonomous Systems, Vol. 62, pp. 16-24, 2014
  31. W. Lhomme, A. Bouscayrol and P. Barrade, Simulation of series hybrid electric vehicles based on energetic macroscopic representation, In Proc. of IEEE-ISIE'04, Ajaccio, 2004
  32. A. Chen, B. Xie and E. Mao, Electric tractor motor drive control based on FPGA, IFAC-PapersOnLine, 49-16, pp. 271-27, 2016
