Open access peer-reviewed chapter

Perspective Chapter: Future Perspectives of Intelligent Autonomous Vehicles

Written By

Yuan Yin

Submitted: 14 August 2022 Reviewed: 30 August 2022 Published: 30 October 2022

DOI: 10.5772/intechopen.107486

From the Edited Volume

The Dynamics of Vehicles - Basics, Simulation and Autonomous Systems

Edited by Hüseyin Turan Arat


Abstract

The chapter examines intelligent autonomous vehicles from a future perspective. It gives readers an overview of future intelligent autonomous vehicles and of their development potential. Specifically, the chapter first reviews the development of autonomous vehicles. It then introduces the potential of intelligent autonomous vehicles, the key technologies needed for future intelligent autonomous vehicles, and how intelligent autonomous vehicles will affect the future. Finally, the chapter discusses barriers to the development of intelligent autonomous vehicles. The chapter is intended as a starting point for people who want to work on intelligent autonomous vehicles and as a help in understanding the general state of future intelligent autonomous vehicles.

Keywords

  • autonomous vehicles
  • autonomous driving
  • self-driving
  • future
  • intelligence

1. Introduction

Autonomous driving is a field with a promising future [1, 2]. Human society is heavily dependent on transportation. According to statistics, there are more than 1.4 billion vehicles in the world [3]. Tens of millions, or even hundreds of millions, of drivers operate vehicles around the world every day, which consumes a very large amount of manpower [4]. Intelligent autonomous vehicles, including driverless vehicles, have become a hot topic for regulators and industry in recent years. Intelligent autonomous driving has been identified as the direction of development for the next 10 years [5].

Based on the degree to which the driving automation system can perform dynamic driving tasks, the role it is assigned in the execution of those tasks, and whether it is restricted to designed operating conditions, driving automation is divided into Levels 0–5 (Table 1).

Level | Name | Vehicle motion control | Target and event detection and response
Level 0 | No Automation | Driver | Driver
Level 1 | Driver Assistance | Driver and system | Driver
Level 2 | Partial Automation | Driver and system | Driver
Level 3 | Conditional Automation | System | Driver or system
Level 4 | High Automation | System | System
Level 5 | Full Automation | System | System

Table 1.

Division of driving automation levels.

Level 0 (No Automation): The driver is in full control of the vehicle. The driving automation system cannot continuously control vehicle motion in dynamic driving tasks, but it may have the ability to detect and respond to some targets and events in dynamic driving tasks. Typical vehicles currently on the road are classified as Level 0.

Level 1 (Driver Assistance): The driving automation system continuously controls vehicle motion in dynamic driving tasks within its designed operating conditions and has the ability to appropriately detect and respond to some of the targets and events related to vehicle motion control. In other words, Level 1 automated systems can sometimes assist the driver with certain driving tasks.

Level 2 (Partial Automation): The driving automation system continuously controls vehicle motion in dynamic driving tasks within the conditions for which it is designed to operate and has the capability to detect and appropriately respond to some of the targets and events related to vehicle motion control. In other words, the automation system is capable of performing certain driving tasks, but the driver needs to monitor the driving environment and complete the rest. The driver must also be ready to take over driving whenever problems arise. At this level, the automation system requires the driver to be ready to correct errors in perception and judgment; this capability is already offered by most car companies. Level 2 can be split by speed and environment into different usage scenarios, such as low-speed traffic jams on ring roads, fast travel on highways, and automatic parking with the driver in the car [6]. Most new vehicles on the market are classified as Level 1 or Level 2. These new vehicles are equipped with assisted driving features such as lane centering and speed control, which are useful for both parking and driving [7]. However, the car remains in the hands of the driver.

Level 3 (Conditional Automation): The automation system is capable of both performing certain driving tasks and monitoring the driving environment in certain situations, but the driver must be ready to regain control when requested by the automated system. At this level, drivers are still unable to sleep or rest deeply. Currently, the highest level that commercial vehicles reach is Level 3 [8]. Conditional automation requires that the vehicle be able to drive autonomously under ideal conditions, such as at a given speed and on a given road type, but there are many limitations once those conditions are exceeded. The meaningful deployments seen so far target highway driving conditions.

Level 4 (High Automation): The automation system is able to perform driving tasks and monitor the driving environment in certain environments and under specific conditions. Specifically, the driving automation system continuously performs all dynamic driving tasks and provides the fallback for those tasks within its designed operating conditions. At this stage, within the range in which automated driving can operate, all driving-related tasks are independent of the driver; responsibility for perceiving the external environment rests entirely with the automated driving system. Level 4 vehicles are almost fully automated, but their automation systems can only be used in known use cases. They cannot be used for off-road driving or in extreme weather conditions; in these unknown situations, the driver must steer the vehicle. Most Level 4 deployments are currently set in urban conditions [9]. They are expected to enable fully automated valet parking or to operate directly in conjunction with a taxi service.

Level 5 (Full Automation): The automation system can perform all driving tasks under all conditions. Specifically, the driving automation system continuously performs all dynamic driving tasks and provides the fallback for those tasks under any drivable condition. Level 5 vehicles are truly driverless cars. To reach Level 5, a vehicle must be able to navigate autonomously through any road condition or hazardous obstacle.

Level 1 and Level 2 cannot be considered autonomous driving; instead, they are Advanced Driver Assistance Systems (ADAS) [10]. Level 3 and above can be called autonomous driving. The majority of Level 3 vehicles are still test vehicles and are not yet commercially available [11]. Current autonomous driving is still not perfect: although the car drives automatically, the driver must stay alert and be ready to take over the vehicle to deal with accidents. It can be expected that, in the future, autonomous driving will become a mature and reliable technology. It is not yet mature, but it has a bright future.


2. The development of autonomous vehicles

2.1 Development history

In August 1925, a radio-controlled car called the "American Miracle" was unveiled by Francis P. Houdina, a U.S. Army electrical engineer [12]. By radio control, the steering wheel, clutch, brakes, and other parts of the vehicle could be operated remotely. According to the New York Times, the radio-controlled vehicle could start its engine, shift its gears, and honk its horn as if a phantom hand were at the wheel. It was a long way from automated driving, but it was the first documented self-driving vehicle in human history.

In 1939, General Motors exhibited the world's first self-driving concept vehicle, Futurama, at the New York World's Fair [13, 14]. Futurama was an electric car guided by a radio-controlled electromagnetic field generated by magnetized metal spikes embedded in the road. It was not until 1958, however, that General Motors brought this concept vehicle to life [14]. General Motors embedded sensors called pickup coils in the front of the car. The current flowing through wires embedded in the road could then be manipulated to tell the vehicle to move the steering wheel to the left or right [15].

Over the next nearly 20 years, the development of autonomous driving technology hit a bottleneck and progressed slowly, with no significant breakthroughs. It was not until the 1970s, especially after computers and IT began to develop rapidly, that autonomous driving technology again entered a period of rapid development. In 1977, Japan's Tsukuba Mechanical Engineering Laboratory improved the pulse signal control method previously used by General Motors. They used a camera system that forwarded data to a computer to process road images. This allowed the car to follow white road markings at 30 kilometers per hour on its own, although it still needed the assistance of steel rails. A decade later, German researchers improved the camera system and developed VaMoR; a vehicle equipped with the camera could drive safely at 90 km/h [16]. As technology has advanced, the environment detection and reaction ability of self-driving cars has also improved.

In 1984, the Defense Advanced Research Projects Agency (DARPA), in partnership with the Army, launched the Autonomous Land Vehicle (ALV) program [17]. The goal of this program was to give vehicles full autonomy, allowing them to detect terrain through cameras and to compute navigation and driving routes through computer systems. The second DARPA Grand Challenge in 2005 was the first time in history that five driverless cars successfully completed a desert course with rough road conditions using their own recognition systems.

Since 2009, Google had been secretly developing a driverless car project, which is now known as Waymo [18]. In 2010, it was reported that Google had been testing self-driving cars at night on highways, driving 1000 miles without human intervention, while occasional human intervention had been required over a total of more than 140,000 miles [19]. In 2014, Google demonstrated a prototype of a driverless car without a steering wheel, accelerator, or brake pedal, making it 100% self-driving [20]. As the driverless program steadily advanced, the potential and opportunities of autonomous driving were discovered by more and more people.

2.2 Industry

In the last 10 years, self-driving cars have become a key area of interest for many companies [21]. The autonomous driving industry has gone through two waves of evolution. The first was triggered by the expectation of industrializing autonomous driving, with many landmark events such as Waymo's spin-off, General Motors' acquisition of Cruise, and Ford's investment in Argo AI [22]. The background of this evolution is that artificial intelligence technology improved tremendously, and deep learning started to be applied at large scale in perception tasks such as image recognition [23]. Sensor technology also developed greatly [24]. However, at that time, the long-tail problem became an important constraint on the deployment of high-level autonomous driving. Specific challenges remain in the robustness of hardware, the redundancy of the system, and the completeness of testing [25].

The second wave comes from the commercialization of autonomous driving. After 3–4 years of technology development, core autonomous driving technologies such as LiDAR, chips, perception, and decision algorithms have developed further [26, 27]. Autonomous driving in specific scenarios (such as mines, ports, and airports) can now be commercialized.

At present, the global autonomous vehicle industry is developing in a positive direction, but few areas have achieved mass production [28]. Autonomous vehicle technology is developing together with 5G communication technology and the related technologies of new energy vehicles.


3. Why intelligent autonomous vehicles have huge potential

Autonomous vehicle technology has the potential to transform commuting and long-distance travel, take people out of high-risk work environments, and allow a higher degree of development and collaboration across industries. It is also key to building the cities of the future.

3.1 Living

In the future, the human relationship with vehicles will be redefined, reducing carbon emissions and paving the way for more sustainable lifestyles. It is estimated that 30% of greenhouse gas emissions in the United States come from transportation [29]. Autonomous vehicles will be able to travel more efficiently on the road than vehicles operated by human drivers, which will lead to a reduction in greenhouse gas emissions. Vehicles can also be grouped into "platoons", reducing the number of accidents. These measures keep traffic flowing continuously. Less congestion means that passengers reach their destinations faster and spend less time on the road, which in turn improves fuel efficiency and reduces CO2 emissions. Vehicles will also likely communicate with road infrastructure, such as traffic signals, to adjust fuel consumption and emissions accordingly.

Cab services, car sharing, and public transportation will become faster and cheaper as the autonomous vehicle revolution progresses. The cost of these services is expected to decrease as maintenance, fuel, and labor requirements decrease. In this way, the cost gap between owning a car and using these travel services will widen to a degree that redefines how people travel. More importantly, the reduced cost of transportation services will drive economic mobility for vulnerable populations. Geographic locations that were previously inaccessible to certain populations due to commuting costs will become accessible, with new beneficial effects on the working population.

When autonomous vehicles become available, navigation will be more efficient and traffic congestion will be reduced. Also, because cabs and car-sharing services will be cheaper than purchasing a car, there will be fewer cars on the road. These changes will ultimately improve commuting efficiency. The saved commute time can be spent on work, socializing, and relaxing. This reduces commuters' anxiety when they arrive at work and lets them work in a better condition. In addition, the reduction in travel time also helps to improve daily productivity.

3.2 Human nature

Laziness is often the first driver of technological progress. Looking back at the history of technology, people invented home appliances to reduce the labor of household chores. To save the pain of walking, human beings made carriages, bicycles, motorcycles, cars, trains, and airplanes. Humans hate repetitive and inefficient tasks; they are too lazy to do them themselves, so they let tools do them and automate the repetitive work. Driving is a relatively repetitive and inefficient task, and developing tools to automate it is clearly in line with the trend of technological development. From an efficiency perspective, the popularity of intelligent autonomous vehicles can improve efficiency and save time.

3.3 Safety

The safety of intelligent autonomous vehicles is defined as their ability to keep people safe and reduce accident rates when the vehicles are operating properly. Research shows that intelligent autonomous vehicles are safer than the average human driver [30]. First, machines are much better at perception than humans. An autonomous vehicle has a variety of sensors, radars, and cameras that can perceive a wider range than a human driver. These hardware upgrades allow vehicles to "see" parts of the world that humans cannot, farther, wider, and more clearly, and to see many different angles at the same time, which is beyond the range of human perception. For example, Tesla has eight cameras mounted around the body (two at the front, two on each side, and two at the rear), providing a 360-degree view and a range of 250 meters [31]. A front-facing enhanced radar provides clearer and more accurate detection data in adverse weather conditions (such as rain, fog, and smog). All of this is beyond human perception. Therefore, autonomous vehicles can make decisions earlier than humans and react faster, which makes driving safer. Furthermore, vehicle-to-vehicle communication will become possible for autonomous vehicles, bringing more communication in various scenarios and further improving safety.

Second, autonomous vehicles do not tire the way humans do. Globally, fatigued driving has become one of the major causes of traffic accidents. According to the National Highway Traffic Safety Administration, about 100,000 traffic accidents occur on U.S. roads each year because drivers fall asleep while driving [32]. Studies show that approximately 94% of crashes are caused by human error. The World Health Organization estimates that more than 1.3 million people die in road traffic accidents each year [32]. The number of deaths and injuries from car accidents caused by distracted drivers continues to increase [32]. Autonomous vehicles do not need a human driver; instead, the driver is a computer that runs a large amount of code and connects to different sensors inside and outside the vehicle. The data and sensors are connected to the cloud and can model the external environment around the vehicle in real time. In this way, an intelligent autonomous vehicle can anticipate the actions that need to be taken based on the current surrounding traffic conditions, and these actions are performed consistently regardless of climate, environment, and traffic conditions. People get fatigued; autonomous vehicles do not. Thus, the safety level of autonomous vehicles is higher than that of human drivers.

Third, autonomous vehicles are more rational than humans. People have emotions and may take dangerous actions out of panic or rage. Autonomous vehicles do not make these mistakes, which is a major advantage. At present, autonomous vehicles may be worse than humans at making decisions, especially in the face of extreme situations and uncertainty, but this is constantly improving. One aim of intelligent autonomous vehicle research is to improve the ability of such vehicles to deal with various extreme situations, covering as many possible extreme cases as feasible, to increase their safety. In addition, the optimal future situation is one in which all vehicles are intelligent and autonomous, which would make most driving behavior predictable.

It must also be acknowledged that autonomous vehicles cannot guarantee a 100% safety rate; they will have failures and flawed algorithms. However, humans cannot achieve a 100% safety rate either. As long as intelligent autonomous vehicles outperform humans, they will help reduce the number of deaths in traffic accidents worldwide each year.


4. Key technologies needed for future intelligent autonomous vehicles

From a future perspective, intelligent autonomous vehicles will develop from low speed to high speed, from carrying goods to carrying people, and from commercial to civil use. An autonomous vehicle is a complex engineering system that requires the integration and precise cooperation of various technologies, which include the algorithms, the client system, and the cloud platform [33]. The algorithms include sensing, which extracts meaningful information from raw sensor data; positioning, which is used to precisely control the driving direction of the vehicle; and perception, which builds an understanding of the vehicle's surroundings and provides safe and reliable planning for the vehicle's travel and arrival. To realize these algorithms, professionals are needed in the following areas: computer vision (including image classification, target detection, target recognition, and semantic segmentation); Kalman filtering (including vehicle position prediction, measurement, updating, and multi-sensor data fusion); Markov localization (including vehicle motion models for localization, Markov processes, Bayesian principles and position filtering, target observation localization, and particle filtering); vehicle control decisions (PID control, including lane departure error control and adaptive adjustment of PID hyperparameters); and model predictive control (MPC, including vehicle dynamics, vehicle trajectory prediction, and finding optimal actuation parameters such as steering angle and acceleration) [23, 34, 35]. The client system consists of the operating system and the hardware system, which cooperate with the algorithms to meet the requirements of real-time performance, reliability, safety, and low energy consumption. The cloud platform provides offline computing and storage to support the testing of continuously updated algorithms, the generation of high-precision maps, and large-scale deep learning model training.
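
As a concrete illustration of the control part of this stack, the sketch below implements a minimal PID controller for lane-departure (cross-track) error, one of the algorithm components listed above. The gains, time step, and the toy kinematic plant used to exercise it are illustrative assumptions, not values from any production system.

```python
class PIDController:
    """Minimal PID controller acting on lane-departure (cross-track) error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Accumulate the integral term and estimate the error derivative.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Steering command that pushes the cross-track error toward zero.
        return self.kp * error + self.ki * self.integral + self.kd * derivative


if __name__ == "__main__":
    # Illustrative gains and a simplified plant: steering reduces lateral offset.
    pid = PIDController(kp=0.8, ki=0.05, kd=0.3, dt=0.1)
    offset = 1.0  # metres from lane centre (assumed initial error)
    for _ in range(50):
        steer = pid.step(offset)
        offset -= 0.5 * steer * pid.dt  # simplified lateral dynamics
    print(f"final lateral offset: {offset:.3f} m")
```

In practice the gains would be tuned (or adapted online, as the text notes) against the real vehicle dynamics; the loop here only shows the structure of the computation.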

4.1 Scenario operation

The future of intelligent autonomous driving also requires the cooperation of technology and scenario operation. Structured scenarios can already support Level 4 operation of intelligent autonomous vehicles, but different scenarios place different requirements on the technical details. Mining scenarios involve magnetic field interference, poor GPS signals, and harsh, dusty working environments [36]. Port scenarios often face rain and wet weather [37]. Airport logistics requires high security [38]. Under these conditions, intelligent autonomous vehicles need to be connected to the overall operation management and dispatching system. Therefore, a deep understanding of each scenario and targeted changes to the algorithms are needed to deploy intelligent autonomous driving, and continuous data acquisition through operation is needed to keep iterating on the technology.

4.2 Machine learning

The machine learning behind autonomous driving is an interesting problem. The dataset for autonomous driving is on-policy, meaning that it changes with the driving strategy. Also, not all data are useful: for autonomous driving, a lot of data are monotonous and repetitive, such as driving in nice weather with no cars or pedestrians around, which does not help much to improve the driving strategy. How to overcome the shortcomings of each sensor and provide real-time, accurate, and non-redundant environmental information is a further requirement for deep learning.
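
A minimal sketch of this data-curation idea follows, assuming each logged frame carries a few hypothetical annotations (ego speed, count of nearby agents, whether a safety-driver intervention occurred); frames that are monotonous by these criteria are dropped before training. The thresholds are invented for illustration.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    speed_kmh: float    # ego speed when the frame was logged
    nearby_agents: int  # detected cars/pedestrians in range
    intervention: bool  # did a safety driver take over near this frame?


def is_informative(frame: Frame) -> bool:
    """Heuristic filter: keep frames likely to teach the model something."""
    if frame.intervention:        # disengagements are always kept
        return True
    if frame.nearby_agents >= 3:  # dense traffic is rarely redundant
        return True
    return frame.speed_kmh < 15   # low-speed manoeuvring (e.g. parking) kept


def curate(log: List[Frame]) -> List[Frame]:
    return [f for f in log if is_informative(f)]


# Example: an empty-highway frame is dropped; the other two are kept.
log = [Frame(110.0, 0, False), Frame(45.0, 5, False), Frame(60.0, 1, True)]
print(len(curate(log)))  # -> 2
```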

Another area that needs to be solved in the future is the iterative improvement of the driving strategy. Assume that the driving strategy performs poorly at the beginning and requires human intervention every kilometer; for every kilometer, some important data are collected, such as the video and radar data associated with the few seconds before the intervention. These data are then used to train the current model and learn a better strategy that avoids the failure. With a better strategy, manual intervention drops to once every 10 kilometers, and the new data are used to train the model again; learning cycles are thus built up. From this process it will be found that the better the driving strategy is, the less frequent manual intervention becomes, the less effective training data can be collected, and the harder it is to continue improving. As the quality of driving improves, the decay in the rate of manual intervention becomes very slow. The key question then is whether the strategy can surpass the level of a real driver. Real scenarios contain a variety of strange and bizarre situations, and there will always be corner cases the algorithm has never seen [39]. As long as the machine learning algorithm still needs a lot of data, the driving strategy seems unlikely to reach human level. Some people think these problems can be solved by adding more sensors [40]. However, as sensors are added, the benefit of the additional signals rises at first, but deployment, maintenance, and coordination costs then rise significantly.
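
A self-contained toy simulation of this learning cycle is sketched below. The Policy class, the exponential model of kilometres between takeovers, and the improvement rule are all invented for illustration; they only reproduce the qualitative effect described above (fewer interventions each round, hence less new training data and slower improvement).

```python
import random


class Policy:
    """Toy stand-in for a driving policy: quality is mean km between takeovers."""

    def __init__(self, quality=1.0):
        self.quality = quality

    def train_on(self, clips):
        # Each failure clip nudges the policy; return an improved copy.
        return Policy(self.quality * (1.0 + 0.3 * len(clips) ** 0.5))


def drive(policy, total_km=1000):
    """Simulate a fleet run: return clips recorded before each human takeover."""
    clips, km = [], 0.0
    while km < total_km:
        km += random.expovariate(1.0 / policy.quality)  # km until next takeover
        if km < total_km:
            clips.append(f"clip@{km:.1f}km")
    return clips


policy = Policy()
for r in range(5):
    clips = drive(policy)
    if not clips:  # no takeovers: nothing new to learn this round
        break
    print(f"round {r}: {len(clips)} interventions over 1000 km")
    policy = policy.train_on(clips)
```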

Current research on perception algorithms mainly focuses on vision, and applying deep learning to target recognition, segmentation, and tracking is a very active topic [41, 42]. However, what happens if a person walks onto the road wearing a t-shirt printed with a stop sign? More problems arise from adversarial examples: a stop sign with a few sticky notes attached may be recognized as a yield sign, and printing special patterns on clothing may make a pedestrian effectively invisible to the detector. These are adversarial examples at the visual level. In the future, strategy-level adversarial problems may emerge as more self-driving cars become available.
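
As an illustration of visual adversarial examples, the sketch below applies the fast gradient sign method (FGSM) to a toy linear "sign classifier". The weights, the flattened 8x8 "image", and the perturbation budget are arbitrary stand-ins, not an attack on any real perception model; the point is only that a small, structured per-pixel change can flip the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": score > 0 means "stop sign", otherwise "yield sign".
w = rng.normal(size=64)
# Construct a clean "image" whose score is positive (classified as stop sign).
x = np.abs(rng.normal(size=64)) * np.sign(w) * 0.5


def score(img):
    return float(w @ img)


# FGSM for a linear model: the gradient of the score w.r.t. the input is w,
# so stepping against sign(w) lowers the score as much as possible per pixel.
eps = 1.0  # perturbation budget per pixel (assumed)
x_adv = x - eps * np.sign(w)

print("clean score:      ", score(x))      # positive -> "stop sign"
print("adversarial score:", score(x_adv))  # pushed negative -> "yield sign"
```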

Using artificial intelligence to make decisions is also imperative. At present, a series of research studies on decision making, such as deep reinforcement learning, predictive learning, and imitation learning, is gradually being carried out.

4.3 Sensors

Autonomous vehicles require the use of multiple sensors. At the same time, this is a challenge for autonomous vehicles: how to balance these different sensing modalities [43]. To overcome this challenge, an intelligent way is needed to combine the various sensors into a safe, high-quality autonomous driving system without significantly increasing size, weight, power, or cost [6]. Improving vehicle sensing focuses on collecting data from individual sensors and applying sensor fusion strategies to maximize their complementarity and compensate for the weaknesses of different sensors under various conditions [6]. For example, cameras are excellent at recognizing signs and colors but perform poorly in bad weather or low-light environments; these weaknesses can be compensated by radar [44]. However, current systems are mainly independent, with little interaction between individual sensing systems, and no single sensing approach can be applied to all applications or environmental conditions.
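
A minimal sketch of the fusion idea, assuming each sensor reports a position estimate together with its own variance: the inverse-variance weighting shown here is a standard textbook combination rule, and the example variances (a camera degrading in rain while the radar is unaffected) are illustrative.

```python
import numpy as np


def fuse(estimates):
    """Inverse-variance (precision-weighted) fusion of independent estimates.

    `estimates` is a list of (position, variance) pairs from different sensors.
    Sensors that are less certain (larger variance) contribute less.
    """
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / v for _, v in estimates])
    fused = float(np.sum(weights * positions) / np.sum(weights))
    fused_var = float(1.0 / np.sum(weights))
    return fused, fused_var


# Clear weather: camera is precise, radar a little noisier.
print(fuse([(10.2, 0.1), (10.6, 0.5)]))
# Heavy rain: camera variance blown up, so the fusion leans on the radar.
print(fuse([(10.2, 2.0), (10.6, 0.5)]))
```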

4.4 Road condition recognition

For road condition recognition, vehicles need to recognize and track surrounding obstacles, traffic signals, pedestrians, and the states of other vehicles. Inertial Measurement Units (IMUs) can detect sudden jolts or deviations caused by potholes or obstacles. Through real-time connectivity, these data can be sent to a central database and used to warn other vehicles about potholes or obstacles [45]. The same is true for camera, radar, LiDAR, and other sensor data [46]. These data are compiled, analyzed, and fused so that the vehicle can use them to make predictions about its driving environment. This allows the vehicle to become a learning machine that promises to make better and safer decisions than humans. To date, however, very few intelligent autonomous vehicles have achieved this.
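
A minimal sketch of the IMU-based pothole flagging described above, assuming a stream of vertical-acceleration samples paired with GPS fixes; the jolt threshold and the report_pothole stub are illustrative placeholders for a real telematics backend.

```python
from typing import Iterable, Tuple

G = 9.81                   # gravity, m/s^2
JOLT_THRESHOLD = 0.6 * G   # assumed deviation from 1 g that counts as a pothole hit


def report_pothole(lat: float, lon: float) -> None:
    # Placeholder for sending the event to a central database.
    print(f"pothole reported at ({lat:.5f}, {lon:.5f})")


def monitor(imu_gps_stream: Iterable[Tuple[float, float, float]]) -> None:
    """Each sample is (vertical_accel_mps2, lat, lon)."""
    for accel_z, lat, lon in imu_gps_stream:
        if abs(accel_z - G) > JOLT_THRESHOLD:  # large deviation from steady 1 g
            report_pothole(lat, lon)


# Smooth road sample, then a jolt over a pothole.
monitor([(9.8, 52.20000, 0.11000), (16.4, 52.20010, 0.11002)])
```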

4.5 Storage

The data of intelligent autonomous vehicles are massive. These data are mostly images, sound, and other natural information; they are unstructured data, whose volumes are much larger than those of structured data. In addition, intelligent autonomous vehicles place very high demands on real-time performance. According to calculations, the amount of data collected by autonomous driving technology is as high as 500 GB to 1 TB per vehicle per hour. The core of the data flow in a vehicle is data storage. The memory capacity of a general computer is a few to dozens of gigabytes (GB), and the memory of a large server is in the hundreds of GB. It is nearly impossible to keep all of the data in the memory of the vehicle's computer given the memory and computing capabilities of existing hardware. A potential solution is to place the data in high-efficiency hard disk storage and send it to memory and the CPU or GPU for computation when needed [47].
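
To make the quoted figure concrete, the short calculation below converts 500 GB–1 TB per hour into the sustained write bandwidth and daily volume the vehicle's storage would have to handle; the 8-hour operating day is an assumption for illustration.

```python
# Sustained storage requirements implied by 500 GB - 1 TB of sensor data per hour.
for gb_per_hour in (500, 1000):
    mb_per_second = gb_per_hour * 1000 / 3600   # decimal units
    daily_tb = gb_per_hour * 8 / 1000           # assumed 8 h of operation per day
    print(f"{gb_per_hour} GB/h -> {mb_per_second:.0f} MB/s sustained, "
          f"{daily_tb:.1f} TB per 8-hour day")
# 500 GB/h -> ~139 MB/s; 1 TB/h -> ~278 MB/s sustained write bandwidth.
```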

Another problem encountered in data storage for autonomous driving is the environment [36, 37, 38, 48]. A computer hard drive is a fragile component, yet vehicles are often used in harsh environments. When the vehicle is on the road, it experiences constant vibration, extreme weather, and even sudden power outages or accidents. The hard drive, which is the "data tank" for autonomous driving, must ensure efficient and stable operation under all of these circumstances. Otherwise, the data will not reach the computing unit, which will lead to serious consequences.

4.6 Electromagnetic interference (EMI)

There are two types of electromagnetic interference (EMI) emissions: conducted and radiated [49]. Conducted emissions are coupled into the product through wires and traces [49]. Since the noise is limited to a specific terminal or connector in the design, compliance with conducted emission requirements can usually be ensured relatively easily early in the development process with the help of a good layout or filter design [50]. Radiated emissions, however, are a different matter. Anything on a circuit board that carries current radiates an electromagnetic field: every trace is an antenna and every copper layer is a resonator. Anything other than a pure sine wave or DC voltage generates noise across the signal spectrum. The power-supply designer does not know how bad the radiated emissions will be until the system is tested, and to make matters worse, radiated emissions can only be tested formally after the design is essentially complete. Filters are often used to attenuate the signal strength at a specific frequency or over a frequency range and thereby reduce EMI [51]. In addition, the energy radiated through space can be attenuated by adding metal and magnetic shielding. EMI cannot be eliminated, but it can be attenuated to a level acceptable to other communication and digital devices. This is what needs to be investigated further in the future.
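
As a simple illustration of the filtering idea, the sketch below computes the attenuation of a first-order RC low-pass filter at a few frequencies; the component values are arbitrary examples, not a recommended EMI filter design.

```python
import math


def rc_lowpass_attenuation_db(r_ohm: float, c_farad: float, f_hz: float) -> float:
    """Attenuation (in dB) of a first-order RC low-pass filter at frequency f."""
    f_cutoff = 1.0 / (2.0 * math.pi * r_ohm * c_farad)
    gain = 1.0 / math.sqrt(1.0 + (f_hz / f_cutoff) ** 2)
    return -20.0 * math.log10(gain)


# Example: R = 100 ohm, C = 100 nF gives a cutoff around 16 kHz.
for f in (1e3, 1e5, 1e7):
    print(f"{f:>12.0f} Hz: {rc_lowpass_attenuation_db(100, 100e-9, f):5.1f} dB")
```

Higher-frequency noise is attenuated more strongly (roughly 20 dB per decade above the cutoff), which is why filter selection depends on where in the spectrum the offending emissions sit.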

4.7 Chips

Under the trend of the "software-defined car", chips, operating systems, algorithms, and data form the closed loop of the intelligent autonomous vehicle computing ecosystem, and chips are at the core of this ecosystem. This also creates strong demand for autonomous vehicle chips. The value of the chips in an autonomous vehicle is not small: all the chips in a cell phone are worth on the order of a hundred dollars, while the chips in an autonomous vehicle are worth up to several hundred dollars. At the same time, an autonomous vehicle contains many kinds of chips. A family car needs hundreds of chips, for functions ranging from tire pressure monitoring, the sunroof, and lights to the back-up camera, back-up radar, and remote-control keys. If any one chip is missing, the car cannot be delivered.

In 2022, the global chip shortage has still not been effectively alleviated. The chips in shortest supply are MCUs, which are the most common chips in autonomous vehicles: window control, seat control, and the Electronic Control Units (ECUs) all depend on them. MCUs account for about 30% of the total number of chips in an autonomous vehicle, and a car needs dozens to hundreds of MCU chips [52]. The shortage has prompted many dealers to hoard chips under the temptation of high profits, which makes the shortage even more serious.

A chip is a piece that contains integrated circuit components. Chips can be roughly divided into two categories: functional chips (such as CPUs and the processing chips in communication base stations) and memory chips. To manufacture a chip, the industry first needs to design it; the functions the chip can provide are determined in this design step, which requires professionals to design the circuits. Next comes production, which is the most tedious step. Finally comes packaging, in which the finished chip is mounted into a saleable product. Of these three steps, the most difficult is design and the easiest is packaging.

The chip crisis is bringing a change that will reshape the autonomous vehicle business model. First, "zero-inventory" management can no longer keep up with development; the single-minded pursuit of supply-chain cost savings brings greater hidden risks. Second, cooperation with chip foundries needs to be more open. Previously, vehicle chip foundries were relatively closed and unable to respond flexibly to unexpected situations; how to cooperate with more chip suppliers on an equal footing in the future is a problem that must be faced. Finally, autonomous vehicle chips need to follow the upgrades of the chip industry. Today, vehicle chips are mainly produced on 8-inch production lines, but from a technical point of view, 8-inch lines are "the past tense" compared with 12-inch lines, and from an economic point of view, 12-inch production lines are more efficient.

Autonomous driving also makes the electrical/electronic (E/E) architecture of vehicles a new trend. The E/E architecture of a vehicle refers to the layout of its electrical and electronic systems. The concept was first proposed by Delphi in 2007; the electrical architecture is dedicated to providing integrated electrical system layout solutions for automakers to address the electrification of vehicles. The original E/E architecture was distributed: each controller targeted one function, which made adding a function simple and quick by adding a controller. In the era of mechanical vehicles, with relatively few electrical and electronic components, the distributed architecture was still comfortable to use. As the level of automotive intelligence increased, the distributed architecture began to be caught off guard by the explosive growth of electrical and electronic components.

Today's autonomous vehicles do not only need strong acceleration; they also need to be able to see and hear in all directions and support human-machine interaction at any time. With so many functions to realize, the number of ECUs has increased dramatically, the complexity of the wiring harness has risen, and updating and maintaining the electrical equipment has become cumbersome. How can these functions be kept from becoming "congested" during use, and how can different ECU functions be called at the same time so the vehicle reacts quickly to complex instructions? The wave of vehicle electrification makes these problems even more difficult.

The new centralized E/E architecture has emerged in response. It divides the whole vehicle into several domains according to the functions of the electronic components (such as the powertrain domain, vehicle safety domain, intelligent cabin domain, and intelligent driving domain). A domain controller (DCU) with stronger processing power then provides unified control of each domain. The emergence of the centralized architecture has led to a significant increase in the integration of vehicle functions: the roles of individual ECUs are consolidated, and complex data processing and control functions are unified in the DCU. As the received signals and their analysis become more complex, the E/E architecture will further evolve toward multi-domain controllers (MDCs). Because of this new trend in automotive E/E architecture, the integration of controllers and the introduction of domain controllers will also affect the chip market, and DCUs, ECUs, and other electronic devices will be further developed.

The centralized E/E architecture also places new demands on automotive software architecture. With the centralization of the E/E architecture, domain controllers and the central computing platform are deployed in a hierarchical or service-oriented way, and the number of ECUs is significantly reduced. The underlying hardware platform needs to provide more powerful computing support, and the software is no longer developed for fixed hardware but must be portable, iterable, and scalable. Therefore, at the software level, automotive software architecture is also gradually being upgraded from a signal-oriented architecture to a service-oriented architecture, to better decouple hardware and software and enable rapid software iteration.
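
A minimal sketch of what "service-oriented" means in plain Python: handlers are registered under stable service names and discovered at runtime, so a caller does not depend on which ECU or DCU hosts them (in contrast to hard-wired, signal-oriented point-to-point messages). The service names and payloads below are invented for illustration.

```python
from typing import Callable, Dict


class ServiceRegistry:
    """Toy service-oriented layer: callers look up services by name at runtime."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable] = {}

    def provide(self, name: str, handler: Callable) -> None:
        self._services[name] = handler

    def call(self, name: str, *args):
        return self._services[name](*args)


registry = ServiceRegistry()
# Hypothetical services offered by an intelligent-driving domain controller.
registry.provide("perception/nearest_obstacle_m", lambda: 12.4)
registry.provide("actuation/set_target_speed", lambda kmh: f"target speed {kmh} km/h")

# A consumer only needs the service name, not the signal wiring behind it.
print(registry.call("perception/nearest_obstacle_m"))
print(registry.call("actuation/set_target_speed", 60))
```

Because consumers bind to names rather than wiring, a service can be moved to a different controller or updated independently, which is the decoupling and rapid-iteration benefit described above.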

4.8 Navigation

In the future, more advanced driver assistance features that can be synchronized with navigation and GPS systems will be developed. Data on every situation that a self-driving car might encounter will be collected and accumulated. Mapping companies need to enhance the 3D mapping data of cities, and automakers and high-tech automotive system suppliers need to work closely together to ensure that light detectors, LiDAR, radar sensors, GPS, and cameras work in concert.

4.9 Iteration

There are currently two types of iteration: evolutionary and revolutionary. The evolutionary route accumulates more perception and driving data (such as scenario road conditions); these data (such as calibrated image data and corner-case driving conditions) and the decision algorithms can then be migrated to Level 4 intelligent autonomous vehicles. The revolutionary route aims to develop fully autonomous vehicles directly. It remains unclear whether either path alone will be successful; the more likely outcome is a symbiotic fusion of the two.


5. How intelligent autonomous vehicles affect the future

The automobile was invented in 1886. In the more than 100 years since then, human social life has changed: life, housing, work, and entertainment have all changed dramatically. Autonomous vehicles may change human life once again in the following areas.

The value chain of the automobile industry will be restructured with the development of intelligent autonomous vehicles. Nowadays, vehicles are mainly hardware devices, and safety is at their core. However, the core of the vehicle is shifting from hardware to IT services, including the autonomous driving system and the vehicle networking system. In addition, automobile insurance is changing. In the future, insurance for human driving may no longer be needed, and insurance companies will have to seek new business; insurance for autonomous vehicles, covering risks such as network security and system failure, may become a new trend.

Because of the cameras, radar, LiDAR, and artificial intelligence systems they carry, the cost of autonomous vehicles will be high. This means that intelligent autonomous vehicles are more likely to be used first by specific transportation industries and groups [53]. Senior and disabled people may be the first group of people to use intelligent autonomous vehicles, since such vehicles give them the freedom to travel without relying on friends and family. Ride-hailing services, buses, cabs, and logistics vehicles may be the first services to adopt autonomous vehicles, which will significantly reduce traffic congestion and environmental degradation. Because public transportation will change [54], offline shopping malls will be disrupted too. From small street stores to large shopping malls, there are usually large parking lots underneath the malls. In the future, however, once self-driving becomes popular, cars will hardly ever stand idle, and people traveling and shopping will no longer need to consider whether there is a large parking lot near the mall. This will have a direct impact on shopping malls.


6. Barriers in intelligent autonomous vehicles development

In the future, it seems inevitable that autonomous driving will replace human driving in most cases. However, new technologies have not been tested over a long period of time and are not guaranteed to be free of pitfalls. These pitfalls mainly include reliability issues and legal and ethical issues, and these two kinds of pitfalls may hinder the development of intelligent autonomous vehicles.

The reliability of intelligent autonomous vehicles is defined as their ability to perform a given function without failure for a given period of time and under given conditions. Failures can come from various directions, such as control system failures or being hacked. Generally, the more complex, automated, and intelligent a system is, the greater the risk to its stability. For example, in Arizona, USA, a self-driving road-test vehicle struck and killed a cyclist. There was a person behind the wheel at the time of the accident, but that person was not actually steering the vehicle. Accidents of this type reduce public confidence in the ability of autonomous vehicles.

Ethical and legal questions are among the most serious issues that autonomous vehicles need to solve if they are to move forward, especially the definition of responsibility for accidents. This involves several sub-questions. First, can an autonomous driving system be a legally recognized driver? The function of driving a vehicle has historically been assigned to a licensed human driver, and this determination directly affects whether a self-driving car can be considered a legal driver. Second, there is the issue of liability. If an autonomous vehicle malfunctions and causes damage to the owner, can the owner seek compensation from the insurance company, and how will liability be allocated between the human driver and the manufacturer in an accident? Legislation on intelligent autonomous vehicles depends on the simultaneous development of technology, regulations, and public consensus. Third, there is the issue of ethics. In an emergency, should a self-driving car protect the people inside the car or the people outside? Traditional cars are centered on protecting the driver and passengers because decisions are made by human drivers, but self-driving cars face such ethical controversies because their behavior can be pre-engineered. Moreover, there is the ethical question of who should be the victim when the vehicle must hit one of two people in certain situations. Thanks to Moore's law, self-driving technology will continue to mature and become safer, but the legal and ethical issues will become the biggest obstacle to the development of intelligent autonomous vehicles.


7. Conclusion

Driven by automated driving, artificial intelligence, and other technologies, a major transformation of the automobile industry has become inevitable, and the integration of information and communication technology with the automobile industry has become an inevitable move. As an important product reflecting a country's industrial strength, intelligent autonomous vehicles are a new representative of the potential growth of the national economy and a manifestation of the integration of a country's manufacturing sector with new technologies. At present, the development space for intelligent autonomous vehicles is growing, and as this type of vehicle penetrates the market, the proportion of autonomous vehicles in the future automobile market will continue to increase.

The chapter first reviewed the levels of driving automation and the development of autonomous vehicles. It then explained why autonomous vehicles have huge potential, the key technologies needed for future intelligent autonomous vehicles, and how intelligent autonomous vehicles will affect the future. Finally, barriers to the development of intelligent autonomous vehicles were discussed. This chapter serves as a starting point for people who want to work on intelligent autonomous vehicles and helps them understand the general state of future intelligent autonomous vehicles.

References

1. Claussmann L, Revilloud M, Gruyer D, Glaser S. A review of motion planning for highway autonomous driving. IEEE Transactions on Intelligent Transportation Systems. 2019;21(5):1826-1848
2. Parida S, Franz M, Abanteriba S, Mallavarapu S. Autonomous Driving Cars: Future Prospects, Obstacles, User Acceptance and Public Opinion. Orlando, Florida, USA: Springer; 2018
3. Liaqat M, Ghadi Y, Adnan M, Fazal MR. Multi-criteria evaluation of portable energy storage technologies for electric vehicles. IEEE Access. 2022;10:64890-64903
4. Kim B. ICT-based business communication with customers in the 4th industrial revolution era. Business Communication Research and Practice. 2019;2(2):55-61
5. Zheng M, Ming X. Construction of cyber-physical system–integrated smart manufacturing workshops: A case study in automobile industry. Advances in Mechanical Engineering. 2017;9(10):1687814017733246
6. Bengler K, Dietmayer K, Farber B, Maurer M, Stiller C, Winner H. Three decades of driver assistance systems: Review and future perspectives. IEEE Intelligent Transportation Systems Magazine. 2014;6(4):6-22
7. Yaqoob I, Khan LU, Kazmi SA, Imran M, Guizani N, Hong CS. Autonomous driving cars in smart cities: Recent advances, requirements, and challenges. IEEE Network. 2019;34(1):174-181
8. Van Brummelen J, O’Brien M, Gruyer D, Najjaran H. Autonomous vehicle perception: The technology of today and tomorrow. Transportation Research Part C: Emerging Technologies. 2018;89:384-406
9. Amarnath A, Pal S, Kassa HT, Vega A, Buyuktosunoglu A, Franke H, et al. Heterogeneity-aware scheduling on SoCs for autonomous vehicles. IEEE Computer Architecture Letters. 2021;20(2):82-85
10. Leiman T. Law and tech collide: Foreseeability, reasonableness and advanced driver assistance systems. Policy and Society. 2021;40(2):250-271
11. Litman T. Autonomous Vehicle Implementation Predictions. Victoria, BC, Canada: Victoria Transport Policy Institute; 2017
12. Emch CB. Why the birth of autonomous driving is the death of our “right” to drive. Pace Law Review. 2019;40:288
13. Arakawa T, Oi K. Verification of autonomous vehicle over-reliance. In: Proceedings of Measuring Behavior. 2016. pp. 177-182
14. Horsch JD, Viano DC, DeCou J, Horsch JH. History of safety research and development on the General Motors energy-absorbing steering system. SAE Transactions. 1991;100:1818-1863
15. Raj P, Sohail A, Sharma A, Sharma M, Mishra A. Self Driving Car [Diploma dissertation]. India: Galgotias University; 2022
16. Dickmanns ED. Developing the sense of vision for autonomous road vehicles at UniBwM. Computer. 2017;50(12):24-31
17. Strat T, Chellappa R, Patel V. Vision and robotics. AI Magazine. 2020;41(2):49-65
18. Margulis C, Goulding CW. Waymo vs. Uber may be the next Edison vs. Westinghouse. Journal of the Patent and Trademark Office Society. 2017;99:500
19. Goodall NJ. Ethical decision making during automated vehicle crashes. Transportation Research Record. 2014;2424(1):58-65
20. Greenblatt JB, Shaheen S. Automated vehicles, on-demand mobility, and environmental impacts. Current Sustainable/Renewable Energy Reports. 2015;2(3):74-81
21. Takács Á, Rudas I, Bösl D, Haidegger T. Highly automated vehicles and self-driving cars [industry tutorial]. IEEE Robotics and Automation Magazine. 2018;25(4):106-112
22. Wu X. Fast Forward: Technography of the Social Integration of Connected and Automated Vehicles into UK Society [Postgraduate dissertation]. UK: University of Edinburgh; 2022
23. Cheng L, Yu T. A new generation of AI: A review and perspective on machine learning technologies applied to smart energy and electric power systems. International Journal of Energy Research. 2019;43(6):1928-1973
24. Xu R, Xiang H, Xia X, Han X, Li J, Ma J. OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. In: 2022 International Conference on Robotics and Automation (ICRA). 2022. pp. 2583-2589. DOI: 10.1109/ICRA46639.2022.9812038
25. Koopman P, Wagner M. Challenges in autonomous vehicle testing and validation. SAE International Journal of Transportation Safety. 2016;4(1):15-24
26. Marti E, De Miguel MA, Garcia F, Perez J. A review of sensor technologies for perception in automated driving. IEEE Intelligent Transportation Systems Magazine. 2019;11(4):94-108
27. Yeong DJ, Velasco-Hernandez G, Barry J, Walsh J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors. 2021;21(6):2140
28. Hussain R, Zeadally S. Autonomous cars: Research results, issues, and future challenges. IEEE Communications Surveys and Tutorials. 2018;21(2):1275-1313
29. Barth M, Boriboonsomsin K. Traffic congestion and greenhouse gases. Access Magazine. 2009;1(35):2-9
30. Nees MA. Safer than the average human driver (who is less safe than me)? Examining a popular safety benchmark for self-driving cars. Journal of Safety Research. 2019;69:61-68
31. Mehr G, Ghorai P, Zhang C, Nayak A, Patel D, Sivashangaran S, et al. X-CAR: An experimental vehicle platform for connected autonomy research. IEEE Intelligent Transportation Systems Magazine. 2022. pp. 2-19. DOI: 10.1109/MITS.2022.3168801
32. Sahayadhas A, Sundaraj K, Murugappan M. Detecting driver drowsiness based on sensors: A review. Sensors. 2012;12(12):16937-16953
33. Boughanja M, Mazri T. Attacks and defenses on autonomous vehicles: A comprehensive study. In: Proceedings of the 4th International Conference on Networking, Information Systems & Security. April 2021. pp. 1-6
34. Vishnukumar HJ, Butting B, Müller C, Sax E. Machine learning and deep neural network—Artificial intelligence core for lab and real-world test and validation for ADAS and autonomous vehicles: AI for efficient and quality test and validation. In: 2017 Intelligent Systems Conference (IntelliSys). 2017. pp. 714-721. DOI: 10.1109/IntelliSys.2017.8324372
35. Liu S, Liu L, Tang J, Yu B, Wang Y, Shi W. Edge computing for autonomous driving: Opportunities and challenges. Proceedings of the IEEE. 2019;107(8):1697-1716
36. Park S, Choi Y. Applications of unmanned aerial vehicles in mining from exploration to reclamation: A review. Minerals. 2020;10(8):663
37. Yuniar D, Djakfar L, Wicaksono A, Efendi A. Truck driver behavior and travel time effectiveness using smart GPS. Civil Engineering Journal. 2020;6(4):724-732
38. Norin A. Airport Logistics: Modeling and Optimizing the Turn-Around Process. Linköping: Linköping University Electronic Press; 2008
39. Sallab AE, Abdou M, Perot E, Yogamani S. Deep reinforcement learning framework for autonomous driving. Electronic Imaging. 2017;2017(19):70-76
40. Sun Z, Bebis G, Miller R. On-road vehicle detection: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(5):694-711
41. De La Escalera A, Moreno LE, Salichs MA, Armingol JM. Road traffic sign detection and classification. IEEE Transactions on Industrial Electronics. 1997;44(6):848-859
42. Wang L, Zhang C, Luo Z, Liu C, Liu J, Zheng X. PDAAA: Progressive defense against adversarial attacks for deep learning-as-a-service in internet of things. In: 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). IEEE; October 2021. pp. 879-886
43. Fayyad J, Jaradat MA, Gruyer D, Najjaran H. Deep learning sensor fusion for autonomous vehicle perception and localization: A review. Sensors. 2020;20(15):4220
44. Waldschmidt C, Meinel H. Future trends and directions in radar concerning the application for autonomous driving. In: 2014 11th European Radar Conference. 2014. pp. 416-419. DOI: 10.1109/EuRAD.2014.6991296
45. Madli R, Hebbar S, Pattar P, Golla V. Automatic detection and notification of potholes and humps on roads to aid drivers. IEEE Sensors Journal. 2015;15(8):4313-4318
46. Liu Z, Cai Y, Wang H, Chen L, Gao H, Jia Y, et al. Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions. IEEE Transactions on Intelligent Transportation Systems. 2021;23(7):6640-6653
47. Chilimbi T, Suzue Y, Apacible J, Kalyanaraman K. Project Adam: Building an efficient and scalable deep learning training system. In: 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14). 2014. pp. 571-582
48. Khayyam H, Javadi B, Jalili M, Jazar RN. Artificial Intelligence and Internet of Things for Autonomous Vehicles. Cham: Springer; 2020
49. Hegarty T. An overview of radiated EMI specifications for power supplies. Texas Instruments Whitepaper. 2018
50. Ott HW. Electromagnetic Compatibility Engineering. Hoboken, New Jersey: John Wiley & Sons; 2011
51. Xu S, Xu S, Xu D, Qian Q, Sun W, Zhu J. A review on recent effort of conductive EMI suppression methods in high-frequency power converters. IET Power Electronics. 2022
52. Cronin MJ. Smart Products, Smarter Services: Strategies for Embedded Control. New York: Cambridge University Press; 2010
53. Latham A, Nattrass M. Autonomous vehicles, car-dominated environments, and cycling: Using an ethnography of infrastructure to reflect on the prospects of a new transportation technology. Journal of Transport Geography. 2019;81:102539
54. Fantin Irudaya Raj E, Appadurai M. Internet of Things-Based Smart Transportation System for Smart Cities. Singapore: Springer; 2022
