Open access peer-reviewed chapter - ONLINE FIRST

Advanced Driving Assistance System for an Electric Vehicle Based on Deep Learning

By Abdelaziz Sahbani and Hela Mahersia

Submitted: March 18th 2021Reviewed: June 13th 2021Published: August 24th 2021

DOI: 10.5772/intechopen.98870


Abstract

This chapter presents the design of a new speed control method using artificial intelligence techniques applied to an autonomous electric vehicle. In this research, we develop an Advanced Driver Assistance System (ADAS) which aims to enhance the driving manner and safety, especially when traveling too fast. The proposed model is a complete end-to-end vehicle speed control system that proceeds from a detected speed limit sign to the regulation of the motor's speed. It recognizes the speed limit signs and extracts from them the speed information that will be sent, as a reference, to a NARMA-L2 based controller. The study is developed specifically for an electric vehicle using a Brushless Direct Current (BLDC) motor. The simulation results, implemented using Matlab-Simulink, show that the speed of the electric vehicle is controlled successfully with different speed references coming from the image processing unit.

Keywords

  • Brushless DC
  • Deep learning
  • Intelligent electrical vehicle
  • NARMA-L2 controller
  • Speed control
  • Traffic sign recognition

1. Introduction

Nowadays, internal combustion engine vehicles are a major source of air pollution and damage to our health. To address these problems, electric vehicles are one of the most promising energy-saving and environmental protection solutions. However, such solutions still lack autonomy. The current trend in vehicle transportation is to implement solutions where the driver and an intelligent system coexist and communicate to improve road safety and security. Advanced Driver Assistance Systems (ADAS) bridge the gap between traditional electric vehicles and the vehicles of tomorrow, which are intelligent, autonomous and safe for the drivers, the passengers and everyone else on the road.

Generally, an autonomous electric vehicle consists of 4 main modules: the energy source module, the auxiliary module, the intelligent module composed of computers, sensors and actuators, and an electrical propulsion module [1], which consists of an electronic controller, a power converter, a mechanical transmission and an electric motor, either AC or DC, as shown in Figure 1. The motor converts the electrical energy coming from the battery into the mechanical energy that allows the vehicle to move. It can also act as a generator when sending energy back to the energy source.

Figure 1.

Block diagram of an autonomous electric vehicle.

According to the specific requirements of autonomous electric vehicle applications, such as robustness, power, speed range and noise level, the choice of motor type can vary. DC motors have been widely used since they fulfill some of the requirements mentioned above [2]. However, in recent years, to avoid their electrical losses and mechanical friction, they have been replaced by Brushless DC (BLDC) motors, given their high efficiency and low noise [3, 4, 5]. Besides, BLDC motors are known to offer a high starting torque together with a small size and low weight. They can also be built into the wheels to reduce the complexity and weight of the driving mechanism.

Speed regulation is an important control challenge for any BLDC motor. In [6], the authors proposed an implementation of Space Vector Pulse Width Modulation (SVPWM) for the control of the power converter and the BLDC motor. They showed that the SVPWM methodology offers minimal switching losses and reduces harmonics compared with other Pulse Width Modulation (PWM) methods.

To enhance the speed adaptation of the autonomous electric vehicle, while at the same time assisting the driver and increasing safety and security, ADAS have been developed, relying on inputs from multiple data sources, including automotive imaging, image processing, and in-vehicle networking.

It has been reported that 80–90% of the driver's performance depends on visual information [2]. This is why a large number of collisions occur when the driver is not looking forward or is unable to stop at an intersection due to excessive speed. It is therefore very important to automatically detect the speed limit sign and control the vehicle's speed when it is traveling too fast. Various ADAS are now designed with a large number of sensors and actuators to analyze the environment and take the appropriate actions. Among the existing ADAS functions, traffic sign recognition holds a lot of potential, as it enhances the driver's safety by notifying him about possible dangers related to speed.

The automatic recognition of these signs, however, is not easy to carry out, due to weather conditions, the blur resulting from vehicle motion and varying lighting conditions. Figure 2 shows some of the factors that make it difficult to identify road signs.

Figure 2.

Difficulties that affect the traffic sign recognition systems.

To handle these challenges, researchers recommended the use of image processing and machine learning techniques. The automatic recognition of traffic signs mainly includes traffic sign detection and traffic sign classification.

Traffic signs have several distinctive features, such as colors, shapes and symbols. In the detection stage, the input images are preprocessed, enhanced and then segmented according to their color or geometry. Color-based methods usually use the normalized RGB space [7, 8, 9, 10], the HSV space [11, 12, 13, 14, 15, 16] or the YUV space [17] to distinguish between traffic and non-traffic signs. However, these methods are generally affected by weather conditions and illumination variations.

Geometry-based methods are, on the contrary, robust to the illumination changes since they characterize the shape of the traffic sign. Mainly, authors used corner detection [11, 18], distance transform [19], Hough transform [20, 21] or radial geometry voting [22, 23, 24].

Recently, many researchers combined the color-based methods with the shape-based methods to achieve better results [10, 13, 16, 25, 26]. For instance, the authors in [16] used color segmentation to roughly identify the sign before applying a matching technique based on the logical XOR operator using the extracted shape details. Similarly, the authors in [10] used color segmentation to roughly locate the signs and then used the shape information to eliminate false candidates.

As for the classification stage, many methods have been used to identify the class of the traffic signs, such as Support Vector Machines (SVM) [8, 13, 16], the Viola-Jones detector [27, 28], neural networks [2, 15] and random forests [29, 30].

As we can see, all of the works presented focus only on recognizing traffic signs, without trying to integrate and test them in a complete system controlling the mechanical and electrical parts. In [2], the authors designed an end-to-end system, proposing a NARMA-L2 neuro-controller for speed regulation based on steerable decomposition and Bayesian neural networks. Despite an acceptable accuracy rate (about 0.975), it unfortunately failed to accurately recognize noisy signs.

In this chapter, we pursue the design of an end-to-end system, based on deep learning approaches, that enhances autonomous vehicle speed control and achieves better performance in the presence of noisy inputs.

The rest of this chapter is organized as follows: Section 2 describes the proposed model, Section 3 presents the experimental results, and Section 4 concludes the chapter.


2. The proposed speed controller based on traffic sign recognition

The proposed speed control system comprises two main modules: an image processing module and a control module. The first module recognizes the speed limit signs and extracts from them the speed information that is then sent to the control module. An overview of these units is shown in Figure 3.

Figure 3.

The proposed speed control system.

2.1 The image processing module

This first module aims to recognize the speed limit traffic sign whatever the weather conditions. Figure 4 presents the proposed model of the speed limit sign recognition system. This system performs two important tasks: detecting the sign and then identifying the speed. As shown in Figure 4, the unit receives an input image and performs a grayscale transformation, followed by normalization and noise removal. Finally, each preprocessed image is fed to a CNN unit for classification.

Figure 4.

The proposed speed limit sign recognition module.

2.1.1 The preprocessing stage

In order to prepare the deep neural network to learn relevant features from speed-limit images, additional processing is required: first of all, we expand the training images, then, we normalize the augmented images, and finally, we filter them with a median filter.

  • Data normalization:

    In this step, we normalize the grayscale images in order to reduce the poor lighting variations observed in the database. Let Im_ij denote the grayscale value of pixel (i, j), let Me and Std denote the estimated mean and standard deviation of Im, respectively, and let Nor_ij denote the normalized grayscale value of pixel (i, j). The normalized image is defined using Eq. (1):

\[ Nor_{ij} = \frac{(Im_{ij} - Me)\cdot Constant_1}{Std} + Constant_2 \tag{E1} \]

Constant_1 and Constant_2 are two constants, set experimentally to 50 and 100, respectively. Figure 5 shows the images obtained by Eq. (1). Finally, a median filter is applied to the normalized input image to obtain an enhanced speed sign image.

  • Data augmentation: Deep neural networks require a huge learning database to perform the speed limit recognition task. However, most publicly available databases suffer from a lack of data. Increasing these databases is, therefore, a crucial step for accurate sign recognition. Moreover, augmenting the training data makes the proposed model more robust to geometric changes. Figure 6 shows a sample speed limit image with different augmentation techniques applied to it: vertical flipping, rotation with small angles (between −8° and 8°) and horizontal translation of 1 unit to both sides, right and left.

Figure 5.

Data normalization: (a) original images, (b) after grayscale transformation, (c) after normalization by Eq. (1) and (d) after median filtering.

Figure 6.

Some of the transformations used for data augmentation.
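The preprocessing chain above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' Matlab code: the 3 × 3 median filter and the augmentation helpers are minimal stand-ins for whatever implementation was actually used.

```python
import numpy as np

C1, C2 = 50.0, 100.0  # Constant_1 and Constant_2 of Eq. (1)

def normalize(im):
    """Eq. (1): map a grayscale image to mean C2 and standard deviation C1."""
    me, std = im.mean(), im.std()
    return (im - me) * C1 / (std + 1e-8) + C2

def median3x3(im):
    """Minimal 3x3 median filter; border pixels are left unchanged."""
    out = im.astype(float).copy()
    for i in range(1, im.shape[0] - 1):
        for j in range(1, im.shape[1] - 1):
            out[i, j] = np.median(im[i-1:i+2, j-1:j+2])
    return out

def augment(im):
    """Vertical flip and 1-pixel horizontal shifts, as in the augmentation step."""
    return [np.flipud(im), np.roll(im, 1, axis=1), np.roll(im, -1, axis=1)]
```

After normalization, every training image has (approximately) mean 100 and standard deviation 50, which removes most of the lighting variation before the CNN sees the data.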

2.1.2 The deep learning stage

In this chapter, in order to extract features and classify speed signs into categories, a Convolutional Neural Network (CNN) is used. The architecture of the proposed CNN, which has three convolution layers and one output layer, is presented in Figure 7. The first, third and fifth layers are convolution layers with 8, 16 and 32 kernels, respectively, each of size 5 × 5. The activation function in the CNN is the rectified linear unit (ReLU). Sub-sampling is performed in the second, fourth and sixth layers, using max pooling with a kernel size of 2 × 2 and a stride of 2. The sub-sampled output is flattened into a 1152-dimensional vector and directly connected to the output layer with a soft-max activation function.

Figure 7.

Proposed CNN’s structure.
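The chapter does not state the network's input resolution, but a 48 × 48 grayscale input with 'same'-padded convolutions is one configuration consistent with the 1152-dimensional flattened vector (6 × 6 × 32). A quick shape trace under that assumption:

```python
def feature_map_sizes(input_hw=48, kernels=(8, 16, 32)):
    """Trace the spatial size through three (5x5 'same' conv -> 2x2/2 max-pool) stages.

    The 48x48 input size is an assumption, chosen because it reproduces the
    1152-dimensional flattened vector mentioned in the text."""
    hw = input_hw
    shapes = []
    for k in kernels:
        # a 'same'-padded 5x5 convolution keeps the spatial size;
        # 2x2 max pooling with stride 2 halves it
        hw //= 2
        shapes.append((hw, hw, k))
    return shapes

h, w, c = feature_map_sizes()[-1]
flattened = h * w * c  # length of the vector fed to the soft-max output layer
```

The trace gives 48 → 24 → 12 → 6, so the last feature map is 6 × 6 × 32 = 1152 values, matching the size of the flattened vector quoted above.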

2.2 The speed control module

The aim of this unit is to control the speed of the autonomous electric vehicle, using the information coming from the image processing unit as the speed reference of the control unit. The studied traction system is composed of a BLDC motor, a three-phase MOSFET inverter, and a gearbox block associated with a mechanical differential used to adapt the speed between the shaft and the two wheels (see Figure 8). The traction system can be reduced to a single wheel [31] if the mechanical differential is not taken into consideration.

Figure 8.

Control strategy block diagram of the EV drive system using BLDC motor.

Figure 8 shows the control strategy of the AEV using a BLDC motor. The feedback signals are the motor speed, measured by the speed sensor, and the rotor's position, taken from the three Hall sensors. The speed controller unit receives the reference speed signal from the digital processing unit (detected from the traffic speed sign) and the actual speed of the motor (the actual vehicle speed). The PWM is then generated from the error between the reference speed and the measured speed, together with the commutation sequence of the three Hall sensor signals Hsa, Hsb and Hsc (Figure 9), in order to control the three-phase inverter switches [32].

Figure 9.

Hall sensor output signals over a cycle of 360 electrical degrees.

The three-phase inverter consists of six MOSFET switches Q_i (i = 1, …, 6) and six freewheeling diodes, as shown in Figure 10. Considering that the motor rotates clockwise, the states of the six switches (Sw1, Sw2, Sw3, Sw4, Sw5 and Sw6), depending on the three Hall sensor states (Hsa, Hsb and Hsc), are shown in Table 1.

Figure 10.

Equivalent circuit of the BLDC motor associated with three-phase inverter.

2.2.1 Mathematical model of the studied BLDC motor

The BLDC motor has three stator windings in a Y-connection and a permanent-magnet rotor. We neglect any saturation effects and assume constant parameters in the three phases.

The electrical equations of the BLDC motor are described by:

\[
\begin{bmatrix} V_{as} \\ V_{bs} \\ V_{cs} \end{bmatrix}
=
\begin{bmatrix} R_s & 0 & 0 \\ 0 & R_s & 0 \\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
+
\frac{d}{dt}
\begin{bmatrix} L_{aa} & L_{ab} & L_{ac} \\ L_{ba} & L_{bb} & L_{bc} \\ L_{ca} & L_{cb} & L_{cc} \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
+
\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix}
\tag{E2}
\]

Where:

  • V_as, V_bs and V_cs are the stator voltages,

  • R_s is the stator resistance (R_a = R_b = R_c = R_s),

  • i_a, i_b and i_c are the stator currents,

  • L_aa, L_bb and L_cc are the self-inductances of phases a, b and c, respectively,

  • L_ab, L_ac and L_bc are the mutual inductances,

  • e_a, e_b and e_c are the back EMFs.

\[ \text{If } L_{aa} = L_{bb} = L_{cc} = L \text{ and } L_{ab} = L_{ba} = L_{ac} = L_{ca} = L_{bc} = L_{cb} = M \tag{E3} \]

the state space representation of the motor becomes:

\[
\begin{bmatrix} V_{as} \\ V_{bs} \\ V_{cs} \end{bmatrix}
=
\begin{bmatrix} R_s & 0 & 0 \\ 0 & R_s & 0 \\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
+
\frac{d}{dt}
\begin{bmatrix} L & M & M \\ M & L & M \\ M & M & L \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
+
\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix}
\tag{E4}
\]

In addition, for balanced motor phases, we have:

\[ i_a + i_b + i_c = 0 \tag{E5} \]

and

\[ L_s = L - M \tag{E6} \]

so the state space representation is:

\[
\begin{bmatrix} V_{as} \\ V_{bs} \\ V_{cs} \end{bmatrix}
=
\begin{bmatrix} R_s & 0 & 0 \\ 0 & R_s & 0 \\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
+
\frac{d}{dt}
\begin{bmatrix} L_s & 0 & 0 \\ 0 & L_s & 0 \\ 0 & 0 & L_s \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
+
\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix}
\tag{E7}
\]

The three back EMFs, which have a trapezoidal form, are given by:

\[
\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix}
= \omega_m \, \lambda_m
\begin{bmatrix} f_{as}(\theta_r) \\ f_{bs}(\theta_r) \\ f_{cs}(\theta_r) \end{bmatrix}
\tag{E8}
\]

Where:

  • ω_m is the rotor speed (rad/s),

  • θ_r is the rotor electrical position (rad),

  • λ_m is the permanent-magnet flux linkage,

  • The functions f_as(θ_r), f_bs(θ_r) and f_cs(θ_r) are given in Table 2.

| Cycle | Ha | Hb | Hc | Sw1 | Sw2 | Sw3 | Sw4 | Sw5 | Sw6 |
|-------|----|----|----|-----|-----|-----|-----|-----|-----|
| 1     | 1  | 0  | 1  | 1   | 0   | 0   | 1   | 0   | 0   |
| 2     | 1  | 0  | 0  | 1   | 0   | 0   | 0   | 0   | 1   |
| 3     | 1  | 1  | 0  | 0   | 0   | 1   | 0   | 0   | 1   |
| 4     | 0  | 1  | 0  | 0   | 1   | 1   | 0   | 0   | 0   |
| 5     | 0  | 1  | 1  | 0   | 1   | 0   | 0   | 1   | 0   |
| 6     | 0  | 0  | 1  | 0   | 0   | 0   | 1   | 1   | 0   |

Table 1.

Hall sensors output and the switch state.
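Table 1 is, in effect, a six-entry lookup from the Hall-sensor reading to the inverter switch commands. A minimal Python sketch of that commutation logic (the tuple values transcribe the table rows; this is an illustration, not the Simulink implementation):

```python
# Table 1 as a lookup: Hall state (Ha, Hb, Hc) -> switch commands (Sw1..Sw6)
COMMUTATION = {
    (1, 0, 1): (1, 0, 0, 1, 0, 0),  # cycle 1
    (1, 0, 0): (1, 0, 0, 0, 0, 1),  # cycle 2
    (1, 1, 0): (0, 0, 1, 0, 0, 1),  # cycle 3
    (0, 1, 0): (0, 1, 1, 0, 0, 0),  # cycle 4
    (0, 1, 1): (0, 1, 0, 0, 1, 0),  # cycle 5
    (0, 0, 1): (0, 0, 0, 1, 1, 0),  # cycle 6
}

def switch_states(ha, hb, hc):
    """Return the six inverter switch commands for a Hall-sensor reading."""
    return COMMUTATION[(ha, hb, hc)]
```

Note that in every cycle exactly two switches conduct (one high-side, one low-side), and each switch stays on for two consecutive cycles, i.e. 120 electrical degrees, as expected for six-step commutation.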

| θ_e (electrical degrees) | f_as(θ_r) | f_bs(θ_r) | f_cs(θ_r) |
|--------------------------|-----------|-----------|-----------|
| 0–60   | 1            | −1           | 1 − 6θ_e/π   |
| 60–120 | 1            | 6θ_e/π − 3   | −1           |
| 120–180| 5 − 6θ_e/π   | 1            | −1           |
| 180–240| −1           | 1            | 6θ_e/π − 7   |
| 240–300| −1           | 9 − 6θ_e/π   | 1            |
| 300–360| 6θ_e/π − 11  | −1           | 1            |

(In the expressions, θ_e is taken in electrical radians.)

Table 2.

Functions f_as(θ_r), f_bs(θ_r) and f_cs(θ_r).

2.2.2 Implementation of NARMA-L2 neuro controller for speed regulation

Many speed controllers have been frequently used in the literature, such as the proportional-integral (PI) and proportional-integral-derivative (PID) controllers, given their simple structure, rapid reaction and reasonable cost. However, they exhibit a slow response when associated with dynamic loads. Recently, intelligent controllers, such as neural network control (NNC), genetic algorithms and fuzzy logic control, have been exploited for the speed control of BLDC motors [2]. Among these techniques, neural networks are considered in this chapter, because they are the most suitable for handling the non-linearity of the BLDC system, which contains uncertainties. Thus, an intelligent neural controller is proposed, based on the Nonlinear Auto-Regressive Moving Average Level-2 (NARMA-L2) model.

There are two steps involved in the control process. The first step is the feedback linearization used to identify the system to be controlled, while the second step is the training of the system's dynamics. Generally, the NARMA-L2 nonlinear description of the system is represented by a discrete-time nth-order equation (Eq. (9)):

\[
y(k+d) = f\big[y(k), y(k-1), \dots, y(k-n+1),\, u(k-1), \dots, u(k-n+1)\big]
+ g\big[y(k), y(k-1), \dots, y(k-n+1),\, u(k-1), \dots, u(k-n+1)\big]\, u(k)
\tag{E9}
\]

where u(k) is the system input, y(k) the system output and d the system delay. f[·] and g[·] are the additive and multiplicative non-linear terms, respectively, to be approximated in the training step. Figure 11 shows the structure of the NARMA-L2 model, and Figure 12 gives the specifications of the plant model.

Figure 11.

NARMA-L2 Model.

Figure 12.

Specifications of the plant model.
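Once f[·] and g[·] have been approximated, the controller simply solves Eq. (9) for u(k). The sketch below uses a toy first-order linear plant in place of the trained neural approximations of f and g, so the numbers are purely illustrative:

```python
def narma_l2_control(y_ref, f_hat, g_hat, eps=1e-6):
    """Feedback-linearizing control law: solve y_ref = f_hat + g_hat * u for u."""
    g = g_hat if abs(g_hat) > eps else eps  # guard against division by ~0
    return (y_ref - f_hat) / g

# Toy plant y(k+1) = 0.5*y(k) + 0.8*u(k), i.e. f = 0.5*y and g = 0.8.
y, y_ref = 0.0, 30.0          # e.g. a 30 km/h reference from the sign recognizer
history = []
for _ in range(5):
    u = narma_l2_control(y_ref, 0.5 * y, 0.8)
    y = 0.5 * y + 0.8 * u     # with a perfect model, y reaches y_ref in one step
    history.append(y)
```

In the real controller, f_hat and g_hat are the outputs of the trained neural sub-networks evaluated on the delayed inputs and outputs, and the same inversion yields the PWM speed command.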


3. Experiments and discussion

In this section, we present the dataset and the details of the experiments performed in this study. Experiments were carried out to select the best number of folds, the best number of layers and the best optimization function. The final proposed model was implemented using Matlab 2018 with an Intel Core i7 1.8 GHz CPU running in a Windows 10 environment. The Simulink model of the proposed algorithm is presented in Figure 13.

Figure 13.

Simulink model of our algorithm.

3.1 Used database

In order to validate the proposed method, we used the German Traffic Sign Recognition Benchmark (GTSRB), a database built specifically for the traffic sign recognition problem, covering a wide variety of visual conditions. The image quality varies depending on the illumination, the contrast and the color. Table 3 shows some details of the dataset used in our work.

| Speed limit sign | Class | Nb of training images | Nb of testing images |
|------------------|-------|-----------------------|----------------------|
| 30    | 1 | 2220  | 17   |
| 50    | 2 | 2250  | 27   |
| 60    | 3 | 1410  | 229  |
| 70    | 4 | 1980  | 304  |
| 80    | 5 | 1858  | 573  |
| 100   | 6 | 1439  | 281  |
| 120   | 7 | 1410  | 450  |
| Total |   | 12567 | 1881 |

Table 3.

Characteristics of the used database.

3.2 Speed sign recognition result

The partitioning of the input data was performed randomly according to a 10-fold cross-validation procedure: we divided the training database into ten folds. Each time, we used one fold as a validation set and trained on the nine remaining folds. Training continues until the validation error starts increasing; at that moment, the training procedure is stopped and the network is saved. After running the 10 folds, we selected the neural network with the best validation performance. An example is shown in Figure 14. The final recognition rate is then calculated by running the test database through the selected network, which assigns the value 1 for the exact speed limit sign and 0 for all other signs (Figure 15). The average speed limit sign recognition rate obtained after several tries is 90.8%, with a validation accuracy of 94.75% (Table 4).

Figure 14.

Example of the best validation performance. The training stops when the validation error is increasing.

Figure 15.

Confusion matrix of the deep learning process.
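The fold construction described above can be sketched as follows; the early-stopping criterion itself depends on the training framework and is only indicated in the comments:

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Shuffle n sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validation_rounds(n, k=10):
    """Yield (train, validation) index lists; each fold is validated once.

    For each round, one trains until the validation error starts to increase
    (early stopping), then keeps the network with the best validation
    performance over the k rounds."""
    folds = kfold_indices(n, k)
    for i in range(k):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, folds[i]
```

Each sample appears in exactly one validation set across the k rounds, so every image contributes to both training and validation over the full procedure.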

| K-folds  | Validation accuracy | Testing accuracy | AUC   |
|----------|---------------------|------------------|-------|
| 3-folds  | 93.005%             | 86.44%           | 0.98  |
| 5-folds  | 93.95%              | 87.9%            | 0.998 |
| 8-folds  | 95.41%              | 90.1%            | 0.993 |
| 10-folds | 94.75%              | 90.8%            | 0.999 |

Table 4.

Speed limit recognition accuracy with different k-folds.

As can be seen from Figure 16, 100 and 60 are two of the most difficult signs to classify, whereas 30 and 50 are the easiest to recognize. Table 4 summarizes the recognition rates obtained with different numbers of folds (3, 5, 8 and 10), as well as the Area Under the ROC Curve (AUC). The best recognition rates were obtained with 10 folds. Moreover, Table 5 shows that the Adam optimizer is the best choice for training the deep learning network.

Figure 16.

Average recognition rates per speed limit sign.

| Function | Testing accuracy |
|----------|------------------|
| Sgdm     | 86.28%           |
| Rmsprop  | 84.68%           |
| Adam     | 90.8%            |

Table 5.

Speed limit recognition accuracy with different optimization function, K = 10 folds.

3.3 The NARMA L2 controller’s result

The speed detected by the sign recognition process is taken as the reference input to the NARMA-L2 controller. Here, we must also train the controller to adjust the vehicle speed. The training data obtained from the NARMA-L2 controller are illustrated in Figure 17.

Figure 17.

Training data of NARMA L2 controller.

Figure 18 displays the speed response of the proposed system for both increasing and decreasing speed references. We notice that the speed rapidly reaches the desired value. The proposed controller behaves well under disturbances: it stabilizes within 0.05 s with almost no static error.

Figure 18.

Desired Speed curve with the NARMA L2 controller.

Advertisement

4. Conclusions

Deep learning, image processing and a NARMA-L2 controller have been successfully developed and simulated using MATLAB to control the speed of a BLDC motor by recognizing traffic sign images. Simulation results show the effectiveness of the proposed controllers in dealing with the nonlinear motor system over a wide range of dynamic operating regimes.


© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
