Open access peer-reviewed chapter

Future Internet of Things: Connecting the Unconnected World and Things Based on 5/6G Networks and Embedded Technologies

Written By

Seifeddine Messaoud, Rim Amdouni, Adnen Albouchi, Mohamed Ali Hajjaji, Abdellatif Mtibaa and Mohamed Atri

Reviewed: 25 March 2022 Published: 08 February 2023

DOI: 10.5772/intechopen.104673

From the Edited Volume

Internet of Things - New Trends, Challenges and Hurdles

Edited by Manuel Domínguez-Morales, Ángel Varela-Vaca and Lourdes Miró-Amarante


Abstract

Undeniably, the Internet of Things (IoT) ecosystem keeps advancing at a rapid pace, far exceeding all predictions for growth and ubiquity. From sensor to cloud, this massive network continues to push technical limits in a variety of ways, and wireless sensor nodes are likely to become even more prevalent as the number of IoT devices grows into the trillions to connect the world and its unconnected objects. However, their future in the IoT ecosystem remains uncertain, as difficulties such as device connectivity, edge artificial intelligence (AI), security and privacy concerns, rising energy demands, and the choice of the right technologies continue to pull in opposite directions. This chapter provides a brief, forward-looking overview of recent trends, difficulties, and cutting-edge solutions for low-end IoT devices that use reconfigurable computing technologies such as FPGA SoCs and next-generation 5/6G networks, in which tomorrow’s IoT devices will play a critical role. At the end of this chapter, an edge FPGA SoC computing-based IoT application is proposed as a novel edge computing solution for IoT with low power consumption and accelerated processing capability in data exchange.

Keywords

  • IoT
  • challenges
  • AI
  • 5/6G networks
  • FPGA SoC

1. Introduction

Lately, the whole field of networks has undergone a significant technical revolution. Network automation is a trendy issue that has been discussed for a long time, and IoT technology complements it, paving the way for its realization. The Internet of Things [1] is a cross-device environment created by gadgets that focus on three key tasks: data transmission, data reception, and data processing. Initially, local physical devices connected to the Internet for real-time data analysis were considered the IoT network. The scope of IoT has grown over time, from local workstations to industrial IoT frameworks [2]. IoT research describes the proliferation of IoT in healthcare [3], industrial setups [4], business analytics, education, area networks, and more. The associated risks therefore grow with the expected increase in IoT devices deployed across diverse environments.

The Internet of Things is one of the most critical and revolutionary trends of the twenty-first century. The IoT is a global network of billions of interconnected “things” that can sense, act, and communicate with one another and/or the Internet [1, 2]. Current forecasts exceed initial predictions for IoT growth: while Gartner predicted 14.2 billion interconnected things in 2019 (which might rise to 25 billion by 2021 [3]), Arm predicts one trillion additional devices will be manufactured between 2017 and 2035 [4]. This tendency is generating an exponential increase in opportunities for businesses and service providers by affecting all sectors of technology, allowing today’s organizations, large and small, to gather data on basically everything, from anywhere, at any time. The rise of IoT is inextricably connected to the wireless trend, which began with Radio Frequency Identification (RFID) and also benefits from the continued development of other conventional technologies, namely Wi-Fi, Bluetooth, and devices based on IEEE 802.15.4, extensively utilized in traditional wireless sensors [5]. Such systems are typically ad hoc wireless networks made up of a massive number of resource-constrained nodes, i.e., motes, which work together to reach a common goal (e.g., environmental monitoring, intelligent traffic control, industrial and surveillance systems, etc.), transforming physical phenomena into digital data and moving them to the Internet.

In the last few years, motes have been used in a variety of sectors: feedback systems, process control, counting and monitoring, automotive, and automation. Nevertheless, while developing these resource-limited devices, the requirements of small size, weight, low power consumption, and low cost (SWaP-C) are always sought. Physical constraints will persist and be compounded by the demands of recent trends as technologies around the IoT edge expand rapidly and boost their potential, namely: (i) data transfer over the Internet to specific online services in a standardized manner, enabled by connectivity and subsequent interoperability [6, 7, 8]; (ii) the need for higher intelligence at the network’s edge, allowing systems to make choices faster while consuming less energy [9, 10]; (iii) devices designed for security, mitigating risks from the large number of attack surfaces present in the IoT network [11, 12]; and (iv) new energy-saving techniques, allowing autonomous and durable devices [13, 14].

Recent advances in reconfigurable computing technology, specifically Field Programmable Gate Arrays (FPGAs), continue to support the IoT field [3]. Even for low-end IoT endpoints, programmable hardware may provide performance gains, flexibility, scalability [15], hardware-enhanced security, and improved performance-per-watt ratios, making it a suitable choice to handle a wide range of difficulties. Using modern FPGAs in IoT allows for a combination of scalable and flexible resources that are aligned with the SWaP-C premises while also allowing the technology to migrate from the cloud to the edge.

This chapter presents a concise, forward-looking assessment of the usage of reconfigurable technology in upcoming low-end IoT motes. The remainder of the chapter is organized as follows. Section 2 focuses on the key trends and issues confronting current low-end IoT devices. Section 3 provides a full review and up-to-date explanation of the application of reconfigurable computing technology to address such trends and difficulties, as well as a comparative examination of current FPGA SoC-based low-end IoT motes. Section 4 presents the connectivity evolution beyond the 5G revolution. Section 5 presents a QoS-QoR aware CNN FPGA accelerator co-design approach for the future IoT world. Section 6 discusses the results, and Section 7 concludes this chapter.


2. IoT edge: trends and challenges

Four primary trends and problems currently shape the design of IoT devices at the network’s edge: connectivity and interoperability must be guaranteed across a highly heterogeneous landscape of wireless technologies; there is a growing trend to deploy intelligence at the edge as data collection increases and ever more meaningful decisions are required; the presence of a large number of attack vectors and security vulnerabilities necessitates the consideration of scalable security primitives early in the design process; and an already tight power envelope is constantly being compressed by device design constraints.

2.1 Basis for connectivity and interoperability

Myriads of smart devices can now be linked to the Internet as IoT becomes more prevalent. A genuinely standard and lightweight communication stack is necessary to provide connection and compatibility among all existing heterogeneous wireless technologies. A variety of wireless technologies have already been adopted, causing considerable communication heterogeneity and interoperability problems when developing connected IoT devices [16, 17]. The IoT infrastructure’s variability makes standardization exceedingly challenging. With many strong competitors vying for market dominance, “wars of standards” are unavoidable. In addition, no single technology is capable of providing a solution that fully and simultaneously meets all the requirements of the IoT network, including power consumption, endpoint cost, bandwidth, connection density, latency, quality of service, operational expenses, and range. Standardization, on the other hand, is critical because it lowers barriers and promotes interoperability across different vendors and devices, permitting new goods and services to coexist with long-standing ones. Standardization will also be critical to the development and diffusion of IoT, since any communication stack must use methodical algorithms and lightweight protocols to save processing power and energy [18].

The Internet, as we know it, links billions of devices using the Internet Protocol (IP), specifically IP version 4 (IPv4) [8]. Nevertheless, because of its underlying 32-bit addressing scheme, IPv4 faced major scaling issues, which were solved with the development of IPv6. This version includes a distinctive 128-bit address for every connected device, as well as an updated protocol architecture to support a wide range of IoT-based heterogeneous devices [19]. To address these concerns, numerous standards bodies, including the Internet Engineering Task Force (IETF) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE-SA), have outlined a foundation for developing the communication protocols and wireless technologies to be deployed across the IoT market [6]. The IEEE 802.x family of standards was one of these organizations’ most popular achievements. The IEEE 802.15.4 standard, which specifies a short-range radio frequency transmission protocol for low-power, low-rate, lossy networks (LLNs), has aided the seamless transition from existing wireless sensor systems to Internet-connected low-end devices [8]. On top of its physical (PHY) and medium access control (MAC) layers, additional protocols (e.g., ZigBee, Thread, ISA100.11a, WirelessHART, and so on) have arisen, expanding the heterogeneity of the IoT domain. In parallel, the IETF IPv6 over Low-Power WPAN (6LoWPAN) working group committed to the definition of the 6LoWPAN adaptation layer, which allows IPv6 datagrams to be sent across IEEE 802.15.4 networks. The combination of IEEE 802.15.4-compliant radios with the 6LoWPAN protocol allows for easy integration of constrained devices with the Internet, which is an important factor for interoperability and communication between low-end IP devices [6, 8, 18, 19].
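As a concrete illustration of why this pairing integrates so easily, the sketch below (our own example, not code from the cited standards documents) shows how a 6LoWPAN adaptation layer can derive a node's IPv6 interface identifier directly from its IEEE 802.15.4 EUI-64, following RFC 4944/RFC 4291; because the identifier is recoverable from the link-layer address, header compression can elide it from transmitted packets.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Minimal sketch: derive the IPv6 interface identifier (IID) from an
// IEEE 802.15.4 EUI-64 as done by 6LoWPAN (RFC 4944 / RFC 4291). Since the
// IID can be recomputed from the link-layer address, 6LoWPAN header
// compression can omit it entirely from the frame.
std::array<uint8_t, 8> iid_from_eui64(std::array<uint8_t, 8> eui64) {
    eui64[0] ^= 0x02;  // invert the universal/local bit
    return eui64;
}

int main() {
    // Example EUI-64 (illustrative value only).
    std::array<uint8_t, 8> eui64 = {0x00, 0x12, 0x4B, 0x00, 0x01, 0x02, 0x03, 0x04};
    std::array<uint8_t, 8> iid = iid_from_eui64(eui64);
    // The node's link-local address is fe80:: followed by this IID.
    std::printf("fe80::%02x%02x:%02x%02x:%02x%02x:%02x%02x\n",
                iid[0], iid[1], iid[2], iid[3], iid[4], iid[5], iid[6], iid[7]);
    return 0;
}
```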

2.2 Edge intelligence

Massive volumes of data are created, processed, communicated, stored, and analyzed when connectivity and Internet technologies are implemented on embedded devices and the IoT. According to the International Data Corporation (IDC), by 2025 the volume of data generated globally will come predominantly from the edge and will exceed 163 zettabytes (over 160 trillion gigabytes), a tenfold increase over the data produced in 2016 [4, 9]. This paradigm shift will force designers, engineers, and technology providers to reconsider how they construct new hardware solutions that go beyond the norm and cope with artificial intelligence (AI) workloads at the edge. Cloud service providers have been at the forefront of introducing AI to develop and improve their workloads and services over the last decade. Cloud services will be essential for the next generation of smart industries, smart cities, and smart households. Nonetheless, decreased latency requirements, growing privacy concerns, communication bandwidth constraints, and restricted power budgets have fueled the deployment of intelligence at the edge [10, 20]. Cloud-based decisions should be avoided in safety-critical applications such as autonomous driving, since the time it takes to complete a query/decision might compromise the vehicle’s safety, for example during collision avoidance. As a result, local and real-time choices have to take precedence. On top of that, with the end of Moore’s Law, we can no longer rely on the ever-increasing processing power of cloud core technologies to handle the quantity of data created by next-generation IoT systems [21]. Cloud services will remain critical for high-level analytics, yet AI deployment at the edge is increasingly critical as well. Deploying and utilizing intelligence at the edge has inherent dangers as well as a set of needs, both in terms of security and SWaP-C. In terms of security, the increasing complexity of the edge exponentially widens the security flaws, bringing up new attack routes in an infrastructure that is already striving to provide increased protection. Advanced computing techniques, such as machine learning, can greatly increase the processing capabilities of wireless sensor nodes while also lowering total network power consumption through reduced wireless transmissions [22]. Pushing these tasks (data analysis and decision inference) as far toward the edge as feasible will ultimately optimize resource efficiency and responsiveness, leading to more autonomous and intelligent systems [23].
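To put the latency argument into perspective, a simple back-of-the-envelope calculation (ours, for illustration only) shows how far a vehicle travels during a single 100 ms cloud round trip:

$$
d = v \cdot t_{\text{round trip}} \approx \frac{100~\mathrm{km/h}}{3.6} \times 0.1~\mathrm{s} \approx 27.8~\tfrac{\mathrm{m}}{\mathrm{s}} \times 0.1~\mathrm{s} \approx 2.8~\mathrm{m}.
$$

That is, at 100 km/h a car covers roughly 2.8 m before a cloud-based answer could even arrive, which is why collision-avoidance decisions must be made locally within a few milliseconds.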

2.3 Security

Security in the Internet of Things era is not optional, and it should be a fundamental design priority from the start and throughout the device’s lifecycle. As IoT grows deep inside important enterprise infrastructures, the value of the assets contained within such devices rises, making them attractive targets for attackers and hackers. Therefore, ignoring device security as an initial design issue may endanger the whole supply chain, resulting in revenue and brand reputation risks, as well as grave life-threatening circumstances in some cases. The success of the Internet’s next phase is highly reliant on the inherent trust and security of billions of linked heterogeneous network devices [6, 11, 12].

A flexible, multi-layered strategy capable of providing end-to-end security from device to cloud and everything in between is necessary to deliver a security architecture solution that completely covers an IoT platform. While the majority of the initial architectural proposals involve a three-layer design (perception, network, and application layers) [11], a common dominating choice has yet to be determined. In subsequent versions, more abstraction has been incorporated, culminating in a five-layer structure (objects, object abstraction, service management, application, and business layers) [24]. Each layer’s technologies are distinct, with their own set of goals, needs, constraints, and tradeoffs. Nonetheless, the IoT’s diverse set of security challenges and vulnerabilities has an inherent impact on all layers of the architecture.

Information in the IoT architecture is transmitted across all levels and entities, i.e., users, service providers, and devices, to ensure full compatibility between services and devices. This, however, considerably expands the overall attack surface. The four primary categories of attacks include hardware-based attacks (e.g., tampering techniques or side-channel attacks), communication attacks (e.g., weak random number generators, man-in-the-middle), life cycle attacks (e.g., code downgrade, overproduction at the factory), and software attacks (e.g., return-oriented programming approaches, malware). Countermeasures must be implemented for each form of attack because a single weakness may compromise the entire device and spread across the whole network. A list of technologies and mitigation methods can be chosen based on the assets exposed by an IoT-based product to fulfill the essential security standards that must be enforced. Meeting these standards is essential in establishing a reliable and secure IoT infrastructure that provides rigorous guarantees on security primitives. Among these security primitives are the following:

  • Authentication is an essential aid in ensuring the security of communications between different parties [25]. The first barrier of protection against intrusion is access control management. These mechanisms are essential in order to identify and classify objects and manage their identities, establish mutual trust relations between various objects, users, or systems by verifying and distinguishing their identities, and grant, deny, or limit entities’ access to data, resources, or applications (i.e., authorization) [26].

  • Resource availability is one of the fundamentals of IoT security which may be maintained through strict hardware maintenance and safe software/hardware resources. Additional security measures, for example software firewalls and intrusion detection systems, could be deployed to prevent malicious behaviors like denial of service (DoS) attacks.

  • Information authenticity is linked to the source of the data [27]. End-to-end security methods are required to guarantee that data is coming from valid sources. Globally unique identifiers and hierarchical identification methods are fundamental to assuring IoT authenticity [28].

  • Integrity is about maintaining the consistency of data, ensuring that unauthorized entities, or even unidentified causes cannot modify it undetected [29]. Cryptography is commonly used to verify data integrity.

  • Privacy aims to prevent important information from reaching the hands of unauthorized people or devices. This is usually accomplished by establishing several degrees of access for the wanted asset (user/password), biometric verification, two-factor authentication, robust data encryption technologies, security tokens, and other methods.

Since no security primitive by itself provides a standardized solution, it is essential to take a layered approach that provides the foundation to fully defend the entire IoT device architecture and infrastructure, commonly known as defense in depth [30]. Figure 1 shows a high-level overview of the various types of security solutions [31]. All of the layers contribute to strengthening the safety of the IoT system, and each one addresses a distinct security issue.

  • The Foundation Functions layer provides core modules that serve the layers above it, such as cryptographic algorithms/engines backed by true random number generator (TRNG) modules [31]. The following cryptographic schemes stand out from the rest: (i) the Advanced Encryption Standard (AES) symmetric key protocol for bulk information encryption, (ii) the Secure Hash Algorithm (SHA) cryptographic functions, and (iii) the Elliptic Curve Cryptography (ECC) or RSA asymmetric key algorithms for authentication and secure session key transactions. This layer also provides a unique, silicon-bound device identity (the root key) that underpins the various cryptographic algorithms [31]. A root key is usually held in one-time programmable memory, configured during platform manufacturing, or in physically unclonable function (PUF) mechanisms. It gives a strong basis for encrypting additional keys and data.

  • The system security layer is concerned with a system-wide approach to platform security, integrating device and memory access management. Memory protection units (MPUs) are commonly utilized for this purpose. Arm, meanwhile, has lately extended its TrustZone technology, which was previously limited to its microprocessors (Cortex-A), to the microcontroller level (Cortex-M). The latter helps to isolate applications that handle sensitive peripherals or memory regions from the operating system as well as from other hardware modules on the platform. Arm TrustZone-M advocates hardware as the first root of trust and allows any system resource (e.g., CPU, memory, and peripherals) to be trusted. The platform security layer is also in charge of verifying the integrity and authenticity of the software executing in the IoT device. Secure boot is the key technology in this regard [31].

  • The advanced protection layer contains a collection of technologies that defend against physical tampering threats that might compromise the system’s integrity, availability, or confidentiality. As a result, this layer includes technologies to stop: unauthorized access to the IP, code, data, or keys (confidentiality); unauthorized changes to the code, data, or keys stored in the device in an attempt to take control of the system (integrity); and methods for interrupting the system’s normal operation, rendering it unavailable or forcing it to operate in safe mode (availability). Physical tampering attacks can be classified as invasive or non-invasive, depending on whether or not they involve penetrating or damaging the device’s packaging [32]. While detecting invasive attacks is simple with an on/off switch connected to the processing system’s GPIO pins, detecting non-invasive attacks is significantly more difficult.

Figure 1.

A layered overview of the main security technologies used in IoT (adapted from [31]).

Non-invasive attacks are often classified into three categories: side-channel attacks, fault injection attacks, and software attacks [32]. Side-channel attacks concentrate on monitoring the system’s behavior in terms of time (timing attacks), electromagnetic emissions, and power consumption, via simple power analysis (SPA) and differential power analysis (DPA), while it executes secure operations (e.g., cryptography) in order to extract the keys. The most effective technique to prevent timing attacks is to ensure that all operations inside a security function take the same amount of time. Intel has addressed this issue by developing a dedicated Advanced Encryption Standard (AES) instruction set that operates in data-independent time. Kocher [33] presented a platform-independent method for updating the secret key for each execution of a cryptographic scheme, disrupting the timing patterns an attacker could otherwise accumulate. Rambus [34] suggested a set of software libraries and hardware cores that are immune to side-channel attacks such as timing, electromagnetic, SPA, and DPA attacks. In fact, their methods are based on strategies that reduce the signal-to-noise ratio on side channels and introduce randomization into cryptographic operations. They even implement protocol-level countermeasures, changing cryptographic protocols to include key update methods.
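A simple software-level illustration of the principle that security functions should take the same amount of time regardless of the secret data is a constant-time comparison; the sketch below is a generic example of ours, not code from the referenced vendors.

```cpp
#include <cstddef>
#include <cstdint>

// Minimal sketch: compare a computed MAC tag against a received one in
// constant time. A naive loop that returns at the first mismatch leaks,
// through its execution time, how many leading bytes were correct;
// accumulating the differences and deciding only at the end keeps the
// timing independent of the secret data.
bool constant_time_equal(const uint8_t* a, const uint8_t* b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; ++i) {
        diff |= static_cast<uint8_t>(a[i] ^ b[i]);  // no early exit
    }
    return diff == 0;
}
```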

2.4 Energy awareness

Recent technical advancements in the information and communication technologies (ICT) industry have come at a cost: the sector now accounts for roughly 2% of the global carbon footprint. Nonetheless, because of the increasing number of ICT scenarios and their requirements (including a massive and promising IoT ecosystem), it was predicted that by 2020 this share would be in the range of 6–8% [13]. The rapid spread of IoT technologies and their broad acceptance will require further sensing, communication, and performance add-ons, putting even more pressure on these devices’ energy budgets. On the other hand, while IoT infrastructure will increase the carbon footprint over the next few years, it also has the potential to be explored to minimize the environmental footprint of several major sectors of society: habitat monitoring, energy, smart cities, and transportation systems (e.g., smart grid, smart traffic management, etc.). A smart grid anchored by IoT nodes, for example, may optimize total energy consumption. From a macro and “green” standpoint, IoT devices require a more efficient and sustainable use of resources, with the problem of energy consumption at the heart of any IoT system’s design and development [13, 14].

IoT devices should use as little power as possible. Because these devices are expected to operate continuously and indefinitely, stable and reliable power sources are important enablers: repairing or replacing the battery or the device is not a cost-effective approach. Recent advancements in energy harvesting systems provide fundamental approaches for increasing battery life, mobility, and range [35, 36]. Furthermore, system designers must rely on existing and next-generation power management strategies (e.g., low-leakage process technologies, low-power flash and non-volatile memory technologies, low-power clock and operational schemes, lightweight protocols) to minimize the total energy budget. The effective and sustainable use of power resources is critical since energy consumption determines the lifetime achievable from a given battery capacity [37], which necessitates the implementation of a set of control methods and intelligent energy management. As a general rule, motes often operate cyclically, periodically alternating active and low-power phases to reduce their average power consumption and hence extend their lifetime [36]. When a device is active, it typically performs wireless communication, which is commonly the most power-hungry state of a node. In brief, as the IoT’s backbone, wireless sensors must address rising energy demands and problems by introducing new energy-aware primitives.
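The duty-cycling pattern described above can be summarized in a short sketch; the hal_* functions below are hypothetical placeholders for a vendor HAL, and the structure merely illustrates the alternation between brief active phases and long low-power phases.

```cpp
#include <cstdint>

// Minimal duty-cycling sketch (all hal_* functions are hypothetical
// placeholders, not a real API). The mote spends most of its life in deep
// sleep and wakes briefly to sample, filter locally, and transmit only when
// the data is worth sending, since the radio dominates the energy budget.
extern void     hal_deep_sleep_ms(uint32_t ms);   // hypothetical: enter low-power mode
extern uint16_t hal_read_sensor();                // hypothetical: sample the sensor
extern void     hal_radio_send(uint16_t value);   // hypothetical: transmit one sample

void mote_main_loop(uint16_t report_threshold, uint32_t period_ms) {
    for (;;) {
        uint16_t sample = hal_read_sensor();
        if (sample > report_threshold) {   // local filtering avoids needless radio use
            hal_radio_send(sample);
        }
        hal_deep_sleep_ms(period_ms);      // low-power phase dominates the cycle
    }
}
```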


3. Roles of reconfigurable platforms

Over the past years, the semiconductor industry has consistently reduced the size of its devices while increasing their power and efficiency. Moore’s Law drove the cost per transistor down dramatically (by approximately 45%) each time the total number of transistors on a chip was doubled [36]. The fast-paced demand for quicker and smaller products has driven this technology to its limits, making it increasingly hard to raise the density of transistors on a chip as well as its operating frequency, which appears to be nearly saturated [21]. This deceleration of Moore’s Law raises several challenges for system designers, who expected better performance-to-energy ratios from each new generation of devices. This technical deadlock has prepared the way for the introduction of reconfigurable platforms (i.e., FPGA-based platforms) as a novel hardware approach to addressing these difficulties across a wide range of embedded application areas [5, 15].

Figure 2 illustrates the various software and hardware architectures that are now available on the market and commonly employed in the creation of embedded systems [21]. Microcontrollers (MCUs) provide the most flexibility, whereas ASICs (application-specific integrated circuits) offer the maximum performance. FPGA-based solutions, on the other hand, can offer the best of both worlds by providing highly parallel processing capabilities, leading to greater performance than MCUs, together with the ability to be reprogrammed at any moment. In comparison to ASICs, run-time adaptation via partial or dynamic reconfiguration techniques provides greater flexibility. However, because MCUs are software-programmed devices, they offer the best flexibility, making them useful in basic, low-cost embedded systems [38].

Figure 2.

Performance versus flexibility of different processing platforms.

FPGA manufacturers have begun to integrate embedded processors (soft or hard) into their devices in recent years, leading to so-called FPGA Systems-on-Chip (FPGA SoCs), which have emerged as a leading option for balancing flexibility with efficient computing power. FPGA SoCs have progressed from single- or dual-core processor-only platforms to far more powerful platforms with graphics processing units (GPUs), real-time processors, multi-core processors, and specialized hardware blocks such as digital signal processors (DSPs) and video compression components. With this varied array of resources, ranging from efficient systems intended for high-end applications to more resource-constrained platforms, these heterogeneous reconfigurable technologies are better technical options for dealing with the ever-increasing diversity of low-end IoT applications. Nevertheless, depending on the target context and use case, each IoT deployment may adopt a distinct data and transmission architecture, technology, and design process, based on the general requirements imposed by the natural evolution of the IoT ecosystem.

3.1 Connectivity and interoperability

Hardware-assisted technologies that can accelerate widely recognized protocols and standards at the network edge are steadily resolving connectivity and interoperability problems on reconfigurable systems. For example, some data privacy-related communication operations (e.g., authentication, data encryption/decryption) take quite a long time and cost a lot of power. Offloading such activities to hardware (e.g., cryptographic protocols and algorithms) can result in improved performance-to-power tradeoffs. Gomes et al. [39] suggested a 6LoWPAN accelerator that can analyze and filter packets received by a radio transceiver. When compared to software filtering, the findings demonstrated a nearly 13.24% reduction in performance overhead. In addition to speeding up these computing operations, reconfigurable systems can help to reduce the obsolescence of cryptographic primitives through dynamic partial reconfiguration (DPR) [40]. Furthermore, some IoT-based applications have consistently employed FPGAs for networking purposes and obtained promising results in recent years, fostering the growth of different solutions in the industry. In [38], Andina et al. discussed many studies that demonstrate the benefits of employing FPGAs to tackle connectivity challenges in IoT systems.

The growth of software-defined radio (SDR) systems has coincided with the evolution of radio communications. An SDR is a radio communication system in which components traditionally implemented in hardware (e.g., mixers, filters, amplifiers, modulators/demodulators, detectors) are implemented in software. Indeed, this method facilitates the creation of smart communication strategies with great usefulness in a variety of sectors (e.g., mobile phones or military applications), where protocols and radio settings (e.g., new modulation schemes, filters) may be modified in real time. The benefits of reconfigurable platforms paired with the SDR paradigm open up a new set of possibilities in which new radio modules (specified in software but accelerated in hardware) may be developed and dynamically deployed on reconfigurable systems using DPR [41, 42].

3.2 Intelligence

The current trend to solve the problems of the excess information created at the edge and the latencies caused by its transmission through the network has given rise to the concept of edge computing, in which the edge node uses AI, specifically deep learning methodologies, to properly accomplish data analysis at the source. Because of their inherent parallel computation ability and performance-per-watt benefits, FPGA-based platforms are well suited to address AI needs in this situation. By offering hardware-accelerated inference techniques, these systems can fulfill the strict effectiveness and power restrictions of edge devices.

The latest generation of low-density FPGAs, such as the Xilinx 7000 family, can accelerate neural networks in the 1 W to 1 mW region. Each FPGA series has a convolutional neural network (CNN) accelerator that may be configured for accuracy or power consumption [20]. In comparison to previous platforms, Intel’s new FPGA SoCs combine DSP blocks with dedicated floating point capabilities in the FPGA fabric, considerably reducing logic resource consumption and improving overall performance [9]. To be fair, the high computational, storage, and power requirements of classic neural network designs continue to challenge even the newest FPGA-based platforms.

While the market has recently lauded FPGAs’ capabilities for AI acceleration, academia has also thoroughly researched this subject, presenting many accelerators and demonstrating interesting results. In [43], the authors developed a low-precision CNN accelerator that delivered about the precision of a standard CNN while outperforming other solutions by up to 6 times. The authors of [43] also developed a CNN-based image classifier accelerated on a high-end FPGA SoC that exploits both the integrated hardcore and the FPGA fabric holistically. When compared to standard hardware platforms (e.g., CPU, GPU), the solution achieves outstanding performance/power consumption ratios. Other similar papers offer techniques based on hardware-accelerated AI as well. The work cited in [44] reports a 4x performance acceleration compared to other solutions while reducing energy consumption, and its Deep Learning Accelerator written in OpenCL is capable of running AlexNet up to 10 times faster than other leading-edge approaches. From another point of view, the authors of [45] have proposed a series of efficient design techniques to meet the limitations of devices with limited resources. A common element in all these works is that FPGA-based solutions strike a true balance between compute performance and energy efficiency. Furthermore, these platforms are the only option capable of continuously adapting to the high speed of development in AI frameworks, both in terms of algorithm implementation and performance/power needs for future-generation workloads.

3.3 Security

IoT system security is a critical necessity. The attack vector spectrum is expanding, and IoT system developers want solid and very secure countermeasures to efficiently protect the upcoming device generation. Faced with today’s security demands, new reconfigurable systems include a variety of security blocks, ranging from core hardware encryption engines to system-level protection mechanisms.

The basic functions include numerous techniques that support stronger security standards, for example data encryption/decryption before transmission. The latest generation of FPGAs provides a diverse set of integrated, hardware-accelerated blocks and cryptographic resources (e.g., ECC, AES, SHA, and HMAC). Microsemi’s SmartFusion, SmartFusion2, and IGLOO2 devices, for example, provide hardware accelerators for AES-128/256 and SHA-256, which may be utilized for both design and data security (e.g., to validate the integrity and authenticity of a bitstream). In addition, the advantages of employing FPGA-based cryptographic accelerators have been extensively discussed in the literature; Piedra et al. [46] compared the performance and power consumption of cryptographic primitives in commercial IoT nodes to an FPGA-based cryptographic accelerator. The outcomes showed that the latter technique may significantly improve the execution time of sophisticated cryptographic algorithms, and hence their power consumption. Another feature of FPGA-based cryptographic systems is their inherent reconfigurability, which may be used to easily upgrade limited or obsolete cryptographic algorithms and protocols.

A TRNG block is necessary to support cryptographic engines. It generates random cryptographic keys from a statistically independent source of random values. By exploiting many physical sources of entropy (e.g., clock jitter, thermal noise, shot noise), FPGA-based TRNGs can also attain high throughput and function as a source of truly random numbers [47]. Newer FPGA-based platforms, such as Microsemi’s FPGA SoCs, are also equipped with a non-deterministic TRNG that is certified to handle cryptographic applications. The protected root keys, which must be uniquely tied to the device, are another important feature of the basic function class [31]. Today’s cutting-edge implementations depend on PUF technology: unavoidable differences in the nanoscale manufacturing process make PUFs a viable physical device attribute for generating a unique silicon fingerprint [48]. Root keys created from PUFs are derived from the chip rather than stored on it. PUFs are now present in a wide range of devices, from small sensors and microcontrollers to FPGA-based systems. The PUF-based applications in [49] serve as a root of trust for security software on an MCU as well as a basis for authenticating IoT devices in the cloud. In contrast, [50] proposes a PUF-based secure protocol to protect a DPR-compliant IoT design deployed on an FPGA.
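The sketch below illustrates the root-key idea in simplified form; it is our own example, with read_puf_response() as a hypothetical placeholder for the silicon PUF and a toy mixing function standing in for a real cryptographic key derivation function, and it omits the error correction (fuzzy extraction) that practical PUF designs require.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Sketch: a device-unique key is derived on demand from a PUF response
// instead of being stored in non-volatile memory. read_puf_response() is a
// hypothetical placeholder; fnv1a64 is a toy mixing function standing in for
// a proper cryptographic KDF (e.g., HKDF over SHA-256).
extern std::array<uint8_t, 32> read_puf_response();  // hypothetical PUF readout

uint64_t fnv1a64(const uint8_t* data, size_t len, uint64_t h = 1469598103934665603ULL) {
    for (size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

// Derive a per-purpose key so the raw PUF response never leaves this routine.
uint64_t derive_device_key(const char* purpose, size_t purpose_len) {
    std::array<uint8_t, 32> response = read_puf_response();
    uint64_t h = fnv1a64(response.data(), response.size());
    return fnv1a64(reinterpret_cast<const uint8_t*>(purpose), purpose_len, h);
}
```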

System-wide solutions that provide security primitives at the CPU level deliver platform security as well as access control to system components (e.g., FPGA blocks, peripherals, memory). Arm, the dominant architecture in the mobile and embedded categories (with 50 billion devices deployed), launched the most powerful current platform security mechanism, Arm TrustZone, in 2014. Arm TrustZone is a hardware security system that covers both low- and high-end Arm CPUs. It provides a compartmentalized approach to security by offering two hardware-enforced regions of protection: the secure and normal worlds. The two worlds are completely separated by hardware and have asymmetric privileges, preventing non-secure software from directly accessing secure-world components. The TrustZone bit is not confined to the CPU; it extends from the processor through the bus to the hardware’s internal circuitry, and Zynq-based FPGA SoCs are a great example. This technology has been widely employed in academia and industry as a significant enabler for the use of Trusted Execution Environments (TEEs) and to offer strict isolation (security by separation) in critical environments.

3.4 Energy

Users expect tinier, smarter, and longer-lasting IoT products enabled by ultra-low-power, IoT-optimized solutions. FPGAs have lowered power consumption per operation by more than a factor of 1000 since their introduction [51]. These advancements have been driven mostly by process technology and the desire to reach new markets, particularly the consumer sector. Power concerns are now at the forefront of FPGA architecture considerations, and new FPGA families are increasingly geared towards low-cost, high-volume applications. The majority of FPGAs are based on SRAM technology, which necessitates extra non-volatile memory to keep their configuration data, increasing power consumption. Nonetheless, these platforms have changed significantly over time, and newer devices are more energy efficient. For example, Lattice’s iCE40 family of FPGAs can operate at 10 mA in active mode and as little as 35 μA in standby mode. Like other SRAM-based FPGAs, Lattice devices are prone to spikes in start-up current (inrush current) due to the uncertain initial state of the SRAM cells. iCE FPGAs, however, have a maximum inrush current of 1.2 mA, which is a very efficient figure for battery-powered uses.

Flash FPGAs have always fallen behind SRAM-based devices in terms of performance, density, and on-chip IP. However, new developments in flash technology (e.g., flash memory cell shrinking, flash memory integration into advanced logic processes) have dramatically improved these platforms. This technology has very low static energy consumption as well as minimal inrush and setup power. Microsemi FPGAs exploit flash technology. The IGLOO series in their FPGA portfolio, specifically developed for today’s portable and energy-efficient devices, can deliver standby power consumption as low as 2 μW. Furthermore, Microsemi’s FPGA SoCs have Flash*Freeze technology, which places the FPGA fabric in a low-power sleep mode while retaining the state of memory, enabling quick FPGA shutdown and restart. Sensor arrays, which are invariably turned on and off on a regular basis, might benefit immensely from this feature. Furthermore, a system designer may take advantage of combining such technology with other low-power modes provided by the integrated hard-core MCU to fulfill the rigorous energy requirements of numerous IoT applications.

To improve energy efficiency, some solutions incorporate a dynamic power management (DPM) module in their reconfigurable hardware, which permits individual resources to be completely turned off or placed in standby or low-power mode, along with dynamic voltage and frequency scaling (DVFS) to govern the digital processing component. The latter is a power management approach that allows the voltage and clock frequency of the MCU to be lowered when full performance is not required, reducing power consumption. Furthermore, by exploring operating modes with extremely low static power consumption and employing a DPM system paired with DVFS approaches, reconfigurable solutions appear to be an excellent option for heterogeneous, fine-grained low-power designs [52].
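A minimal sketch of the DVFS decision described above, assuming a hypothetical set_operating_point() HAL call and an illustrative voltage/frequency table, is given below: the controller picks the slowest operating point that still meets the workload's deadline.

```cpp
#include <cstdint>

// Illustrative DVFS sketch: the table values are invented, and
// set_operating_point() is a hypothetical placeholder for the platform's
// clock/power-management registers.
struct OperatingPoint {
    uint32_t freq_mhz;
    uint32_t millivolts;
};

static const OperatingPoint kPoints[] = {
    {20, 900}, {50, 1000}, {100, 1100}, {150, 1200}  // sorted by frequency
};

extern void set_operating_point(const OperatingPoint& op);  // hypothetical HAL call

// required_cycles: work to finish before deadline_us elapses.
void scale_for_workload(uint64_t required_cycles, uint64_t deadline_us) {
    for (const OperatingPoint& op : kPoints) {
        uint64_t available_cycles = op.freq_mhz * deadline_us;  // MHz * us = cycles
        if (available_cycles >= required_cycles) {
            set_operating_point(op);   // slowest point that still meets the deadline
            return;
        }
    }
    set_operating_point(kPoints[3]);   // fall back to the fastest point
}
```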

3.5 Combination of reconfigurable platforms and IoT Motes

FPGA-based platforms are quite diverse, ranging from compact-form-factor, ultra-low-power, production-priced solutions to fully SoC-enabled platforms with considerable hardware resources to fulfill today’s customer demands. This technology, which has a high level of maturity, is a good option for designing customized solutions for wireless sensing applications. The authors of [53] have published a complete survey addressing the wide range of hardware devices available for low-end IoT motes. By presenting numerous solutions based on stand-alone FPGA platforms or heterogeneous designs that integrate an MCU and an FPGA, that survey highlights the rising interest in reconfigurable architectures in this domain. FPGA-based designs have permitted the optimization of numerous components of wireless sensors in terms of performance and power consumption, while some works have also strengthened device security.

Several methods aimed at wireless sensor systems have previously been presented, including PowWow [54], CookiesWSN [37], HaLoMote, and CUTE mote [55]. These references highlight the recent state of the art in low-end IoT motes that leverage reconfigurable technology in their design, describing how they differ from previously established designs as well as their most essential qualities and attributes: available network accelerators, radio device used, SoC adopted, MCU design, security-related hardware/software, application-specific accelerators, and maturity level. PowWow and CookiesWSN are the first low-end mote implementations that integrate a low-power MCU (TI MSP430) with a tiny low-power flash FPGA as well as a radio transceiver. PowWow uses the FPGA to build low-level network-bound accelerators such as forward error correction (FEC) methods and investigates energy management approaches to control the digital processing element in order to enhance energy efficiency. While both feature an Elliptic Curve Cryptography (ECC) accelerator, CookiesWSN adds an application-specific sensor data processing (SDP) accelerator as well as a reconfigurable Kalman filter to reduce noisy samples in the data acquisition process. Despite the major accomplishments of PowWow and CookiesWSN, the utilization of discrete MCU and radio frequency (RF) components resulted in slower communications and worse power efficiency.

Recent alternatives, such as HaLoMote and CUTE mote, have solved some of the previous methods’ limitations. HaLoMote, a hardware-accelerated low-power mote aimed at IoT, combines an RF-SoC transceiver (ATmega256RFR2) with a Microsemi IGLOO M1AGL1000 coupled to it to speed up heavy computation tasks such as sensor data aggregation in an SDP. Furthermore, the system offers a DPM accelerator, which enables low-power standby modes with extremely low static power consumption, resulting in decreased power consumption. The CUTE mote, on the other hand, is described as a programmable and dependable terminal device that is specifically built for low-power IoT applications. The design is implemented on an FPGA SoC platform (Microsemi SmartFusion2), which combines an Arm Cortex-M3 hardcore MCU tightly linked with a flash-based FPGA and an externally connected IEEE 802.15.4 radio transceiver. Offloaded hardware accelerators are provided as hardware devices to the MCU and are accessed using a standard on-chip communication protocol, which simplifies design and minimizes access time. The contribution in [55] used a micro-positioning measurement system to evaluate and deploy their platform. An application-specific SDP, a root mean square (RMS) statistical procedure for data evaluation and analysis, a Fast Fourier Transform (FFT) for digital signal processing, a finite impulse response (FIR) filter, and other signal and image compression techniques have been used. Other relevant contributions in this field [56], despite being at a lower maturity level, analyze significant improvements in reconfigurable systems dedicated to standards-compliant, FPGA-based wireless sensing for low-end IoT, where application-specific, network, and security-related tasks are still suggested to be deployed in the FPGA. Although they contribute to a common vision, some of these contributions are still in the design phase.

Despite variations on multiple levels, all of the studies referenced above share a common point of view, which we defend throughout this chapter: reconfigurable platforms will play a crucial role in the future of IoT-enabled devices, where essential problems such as connectivity and interoperability, edge AI, hardware and energy efficiency, and data security will surely remain the top trends and difficulties for future low-end IoT motes.


4. Connecting the unconnected world and things: an evolution in connectivity beyond the 5G revolution

The future of the connected world is not only about the latest cutting-edge technologies, such as high-speed 5G and constellations of low earth orbit satellites. Much will be defined by the advancement and development of current advanced connectivity technologies, such as fiber, low- to mid-band 5G, 6G, and various other long- and short-range solutions. The modern connectivity architecture also includes cloud and edge computing, which is accessible with less expensive and more efficient devices and platforms such as the FPGA SoC (discussed in the previous section), as depicted in Figure 3. Computing power, storage, and sensors are all becoming more capable and affordable. As these trends converge, the connectivity ecosystem will encompass more technologies, services, and vendors than ever before.

Figure 3.

The future trend of the connected world and things.

The new and improved networks will enable and complement other critical technologies such as cloud computing and FPGA SoC-based edge computing. These developments, when combined, will allow some of the most data-intensive applications of the future. Cloud computing will continue to serve as a processing backbone for use cases that need a high level of computing power, storage capacity, and complex data analysis capabilities. This computing is required for a variety of tasks ranging from storing videos to training artificial intelligence systems. Users’ devices may not be able to run the most complex applications without a boost from cloud computing, or they may have to be considerably more expensive. On numerous fronts, FPGA SoC-based edge computing tries to alleviate some of the constraints of cloud computing. Instead of sending data to central cloud servers that may be hundreds or even thousands of miles away from the end user, FPGA edge computing delivers computing power, storage, and networking closer to where data is created or consumed. Actual computing could then take place in smaller-scale data centers on the outskirts of major cities (the metro periphery), at the base of radio access network base stations (the micro-periphery), in wiring closets at end-user premises (the edge gateway), or even on the device that generates the data itself (the edge device).

A number of factors are driving the urge to bring processing and storage closer to the end user. The first is the proliferation of connected devices, particularly as the Internet of Things is implemented in an increasing number of locations. According to a recent IDC prediction [57], there may be up to 42 billion connected IoT devices by 2025. These technologies are also growing more complicated, progressing from simple smart devices to intelligent connected systems and processes. As the number of increasingly complicated devices grows, so does the volume of data created, which may surpass what a centralized cloud can handle, especially as IoT applications rely more on video processing and ultra-high-definition audio. As a result, there is an increased demand for efficient storage that assures data protection. Another important driver of edge computing growth is the desire for real-time analytics, decision making, and adjustments. These features are critical for applications such as augmented and virtual reality, connected vehicles, drones, video surveillance, and remote control of industrial machinery. This requirement for low latency leaves little budget for transmission time to the cloud.

Also, application development is moving towards new solutions such as container-centric architecture, micro-services architecture, and server-less computing platforms. These solutions provide lightweight, portable alternatives for running applications at the edge, allowing developers to perform testing and maintenance faster and more efficiently. Finally, edge computing addresses a fundamental requirement for industrial operators managing transportation and logistics networks or remote facilities. They may now connect to compute, storage, and analytics resources in contexts with sporadic or restricted connection, as well as in extremely remote locations.

All of these factors point to the accelerating adoption of edge computing around the world. While it took 10–15 years for cloud computing to mature, edge computing is on a faster trajectory. The cloud ushered in a paradigm shift that moved software and computing power from owned products to delivered services. Edge computing can be seen as an extension of this move towards a more decentralized model. The emphasis today is on defining the architecture (especially emerging industry standards for application development and maintenance, and for interoperability between edge, device, and cloud). Its adoption may pick up speed once such an architecture becomes available.


5. Proposed QoS-QoR aware CNN FPGA accelerator co-design approach for the future IoT world

5.1 QoS-QoR CNN accelerator for IoT devices

Motivated by the ideas and challenges discussed above, we propose a QoS-QoR aware CNN FPGA accelerator co-design process that includes a hardware-oriented CNN topology and an accelerator design that takes into consideration CNN-specific properties. CNNs and accelerators are created in tandem to find the best balance between QoS and QoR. Targeted QoS, QoR, and hardware resource limitations are inputs to this procedure, while the resulting CNN model and its related accelerator architecture are the outputs. The entire process is broken down into three steps:

  • Step One: Bundle creation and QoS assessment. We pick CNN components at random from the layer pool and create bundles (the basic building blocks of the generated CNNs) with various layer combinations. Analytical models are used to analyze each bundle in order to capture hardware parameters (e.g., latency, compute and memory needs, resource utilization), allowing a QoS estimate to be made at the start of the CNN exploration.

  • Step Two: Selection of bundles based on QoR and QoS. To find the most promising bundles, we first analyze each bundle’s QoR potential by replicating it n times to create a CNN prototype. For fast results, all CNN prototypes are trained briefly (20 epochs) on the selected dataset. Based on the QoS predicted in Step One, we group the CNN prototypes whose QoS is comparable to the input targets and choose the best bundle candidates from each class.

  • Step Three: Hardware-aware CNN exploration and exploitation. We begin the CNN exploration by stacking the selected bundle and utilizing stochastic coordinate descent (SCD) to explore CNNs under the provided QoS and QoR restrictions. The QoS of SCD’s CNN outputs is precisely assessed before being fed back to SCD to update the CNN model. To increase QoR, the generated CNNs that match the QoS criteria are selected for training and tuning.

Based on this, we propose an accelerator design for the selected CNN that provides a pipeline architecture for efficient CNN implementation with a maximum resource-sharing technique. It contains a folded structure that reuses the same hardware components to compute CNN bundles sequentially, saving resources when targeting tiny IoT devices. To improve QoS, it also uses an unfolded structure to compute the operations inside a bundle in a pipelined manner. By combining the two levels of design, the proposed architecture can benefit from both the folded and the pipelined structures. The acceleration steps, on the other hand, are carried out using Xilinx Vivado High-Level Synthesis (HLS).
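For clarity, the sketch below restates the three-step flow of Section 5.1 as pseudocode; all types and functions (estimate_qos, train_briefly, scd_perturb, and so on) are hypothetical placeholders of ours, not the actual implementation of the proposed co-design.

```cpp
#include <vector>

// Illustrative pseudocode of the bundle-based QoS-QoR exploration flow.
// All types and functions below are hypothetical placeholders.
struct Bundle {};                 // a combination of CNN layers
struct CnnModel {};               // a CNN built by stacking a bundle
struct Qos { double latency_ms; };
struct Constraints { double max_latency_ms; };

extern std::vector<Bundle> build_random_bundles();            // Step 1: bundle generation
extern Qos       estimate_qos(const Bundle&);                 // Step 1: analytical model
extern CnnModel  replicate(const Bundle&, int n);             // Step 2: prototype = bundle x n
extern double    train_briefly(CnnModel&, int epochs);        // Step 2: ~20-epoch QoR probe
extern CnnModel  scd_perturb(const CnnModel&);                // Step 3: SCD exploration move
extern Qos       assess_qos(const CnnModel&);                 // Step 3: precise QoS assessment
extern double    train_and_tune(CnnModel&);                   // Step 3: full training of survivors

CnnModel explore(const Constraints& c, int iterations) {
    // Steps 1 and 2: keep the bundle whose prototype best trades off QoS and QoR.
    Bundle best_bundle;
    double best_score = -1.0;
    for (const Bundle& b : build_random_bundles()) {
        if (estimate_qos(b).latency_ms > c.max_latency_ms) continue;   // QoS pre-filter
        CnnModel proto = replicate(b, 4);
        double acc = train_briefly(proto, 20);
        if (acc > best_score) { best_score = acc; best_bundle = b; }
    }
    // Step 3: stochastic coordinate descent under the QoS constraint, tuning QoR.
    CnnModel model = replicate(best_bundle, 4), best_model = model;
    double best_acc = 0.0;
    for (int i = 0; i < iterations; ++i) {
        CnnModel candidate = scd_perturb(model);
        if (assess_qos(candidate).latency_ms > c.max_latency_ms) continue;
        double acc = train_and_tune(candidate);
        if (acc > best_acc) { best_acc = acc; best_model = candidate; }
        model = candidate;
    }
    return best_model;
}
```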

5.2 Proposed architecture: Acceleration and designing tools

HLS approaches have improved the productivity of FPGA-based hardware design in recent years by enabling FPGAs to be programmed in high-level languages (e.g., C/C++) [58]. Designing a high-performance FPGA-based CNN accelerator, on the other hand, is far from simple, as it necessitates specific hardware expertise, repeated hardware/software testing to ensure functional correctness, and efficient design space exploration of the accelerator settings. In order to increase the effectiveness of accelerator design, there has been rising interest in automation frameworks that develop CNN accelerators from a higher level of abstraction, using CNN-specific algorithmic descriptions and high-quality predefined hardware templates for rapid design and prototyping. However, design issues remain, as new development patterns in cloud and embedded FPGAs create fundamentally distinct challenges in satisfying the diverse demands of CNN applications. For example, multiple arrays are frequently employed in the newest cloud FPGAs to multiply available resources and deliver better throughput. When accelerator architectures struggle to scale up or down to match the chip size, cross-routing and distributed on-chip memory can easily create timing violations and reduce achievable performance. On the other hand, embedded FPGAs combine heterogeneous components (such as a CPU and a GPU) to efficiently handle various aspects of the targeted tasks. It is very difficult to fully use on-chip resources and reap the benefits of the specialized hardware without an extremely flexible task-partitioning scheme. Meanwhile, many researchers are experimenting with fast convolution algorithms to see whether they can further improve performance [59]. While these accelerators deliver superior performance compared to classic designs, they are constrained by their use cases and necessitate more complicated design approaches. As shown in Figure 4, the proposed QoS-QoR aware CNN FPGA accelerator co-design consists of a Zynq processor that handles all task management, such as predictions, GPIO management, and automatically mapping the CNN accelerator with the right parameters. An AXI DMA is used to speed up data and communication exchange between the DDR memory and the CNN accelerator. This co-design aims to test the created CNN accelerators on an edge object detection application.

Figure 4.

QoS-QoR aware DNN/CNN FPGA accelerator Co-design.
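To give a flavor of what the HLS-based acceleration step looks like, the snippet below is a simplified, generic Vivado HLS kernel for a 3x3 depthwise convolution, the layer type used in the bundles of Table 1; it is an illustrative sketch under assumed fixed dimensions, not the accelerator code generated by the proposed co-design.

```cpp
// Simplified Vivado HLS sketch of a 3x3 depthwise convolution layer.
// Dimensions are fixed for illustration; a real accelerator would
// parameterize them and use fixed-point types matching the W16/F16 precision.
#define CH   48   // channels
#define DIM  40   // feature-map width/height

void dw_conv3x3(const short in[CH][DIM][DIM],
                const short weights[CH][3][3],
                short out[CH][DIM - 2][DIM - 2]) {
#pragma HLS ARRAY_PARTITION variable=weights complete dim=2
#pragma HLS ARRAY_PARTITION variable=weights complete dim=3
    for (int c = 0; c < CH; ++c) {
        for (int y = 0; y < DIM - 2; ++y) {
            for (int x = 0; x < DIM - 2; ++x) {
#pragma HLS PIPELINE II=1
                int acc = 0;
                for (int ky = 0; ky < 3; ++ky) {
                    for (int kx = 0; kx < 3; ++kx) {
#pragma HLS UNROLL
                        acc += in[c][y + ky][x + kx] * weights[c][ky][kx];
                    }
                }
                out[c][y][x] = (short)acc;  // a real design would saturate/requantize here
            }
        }
    }
}
```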

Advertisement

6. Results and discussion

Considering the work in the literature [60] and in order to test the proposed co-design, we used the same accelerated CNNs (CNN_A, CNN_B) with the different layers and configurations summarized in Table 1. The parameters are given for different data precisions for the weights (W) and feature maps (F). We then test the proposed QoS-QoR aware CNN co-design on an object detection task. The suggested co-design scheme finds the most promising CNN topology for the intended hardware system and application as a bundle containing depth-wise Cnv3 (DW-Cnv3), point-wise Cnv1 (PW-Cnv1), and max-pooling layers. Based on this, the co-design investigates three CNN configurations, each with a distinct quantization strategy, in order to meet the QoR and QoS requirements. Table 1 shows the results of the proposed scheme. Using the Pynq Z1 FPGA board, the proposed architecture achieved its best results with the two CNNs (CNN_A, CNN_B). According to these results, CNN_A occupies 27% of the FFs, 78% of the BRAMs, 84% of the DSPs, and 76% of the LUTs at a working frequency of 150 MHz. In addition, it reached 23 FPS with a maximum latency of 44 ms, a maximum power of about 2.6 W, and an energy efficiency of about 0.114 J/image. On the other hand, CNN_B with the W16 & F8 configuration occupies 38% of the FFs, 96% of the BRAMs, 91% of the DSPs, and 83% of the LUTs at a working frequency of 150 MHz. Given this higher hardware cost compared to the first topology, CNN_B cannot surpass it in terms of energy efficiency, which is about 0.16 J/image.

| CNN Topologies | CNN_A (W16 & F16) | CNN_B (W16 & F8) |
|---|---|---|
| Input Layer | Input RGB Image (160×160) | Input RGB Image (160×160) |
| Layer 1 | DW-Cnv3 (3) | DW-Cnv3 (3) |
| Layer 2 | PW-Cnv1 (48) | PW-Cnv1 (48) |
| Layer 3 | Max-Pool (2×2) | Max-Pool (2×2) |
| Layer 4 | DW-Cnv3 (48) | DW-Cnv3 (48) |
| Layer 5 | PW-Cnv1 (96) | PW-Cnv1 (96) |
| Layer 6 | Max-Pool (2×2) | Max-Pool (2×2) |
| Layer 7 | DW-Cnv3 (96) | DW-Cnv3 (96) |
| Layer 8 | PW-Cnv1 (192) | PW-Cnv1 (192) |
| Layer 9 | Max-Pool (2×2) | Max-Pool (2×2) |
| Layer 10 | DW-Cnv3 (192) | DW-Cnv3 (192) |
| Layer 11 | PW-Cnv1 (384) | PW-Cnv1 (384) |
| Layer 12 | PW-Cnv1 (10) | PW-Cnv1 (10) |
| Hardware Cost | | |
| FFs (%) | 27 | 38 |
| BRAMs (%) | 78 | 96 |
| DSPs (%) | 84 | 91 |
| LUTs (%) | 76 | 83 |
| Performances | | |
| Frequency (MHz) | 150 | 150 |
| FPS | 23 | 18 |
| Latency (ms) | 44 | 63.1 |
| Power (W) | 2.6 | 2.55 |
| Energy Efficiency (J/image) | 0.114 | 0.160 |

Table 1.

Results analysis.
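As a quick sanity check, the energy-efficiency column of Table 1 is consistent with energy per image ≈ power × latency. The short C++ snippet below reproduces the reported values (≈0.114 J/image for CNN_A and ≈0.16 J/image for CNN_B) under that assumption.

```cpp
// Sanity check of the energy-efficiency column in Table 1:
// energy per image (J) ≈ average power (W) × per-frame latency (s).
#include <cstdio>

int main() {
  const double power_w[2]   = {2.60, 2.55};     // CNN_A, CNN_B (Table 1)
  const double latency_s[2] = {0.044, 0.0631};  // 44 ms and 63.1 ms
  const char  *name[2]      = {"CNN_A", "CNN_B"};
  for (int i = 0; i < 2; i++) {
    // Prints ~0.114 J/image for CNN_A and ~0.161 J/image for CNN_B.
    std::printf("%s: %.3f J/image\n", name[i], power_w[i] * latency_s[i]);
  }
  return 0;
}
```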


7. Conclusion

This forward-looking chapter provides an outlook on low-end motes in the age of IoT. After a full assessment of the trends and challenges that the IoT paradigm poses to low-end devices, it illustrates how current reprogrammable platforms are the best choice to adapt to the ever-changing IoT environment. Clearly, the ever-increasing volume of data created by IoT motes, along with the end of Moore's Law, necessitates new IoT system designs that are decentralized from the cloud, where the majority of data processing operations are handled today. This tendency is even more visible in security-critical contexts, where IoT motes must make real-time decisions that cannot be offloaded to cloud services because of the long data transmission delays of the infrastructure network. Although microcontrollers provide the most programming freedom, their technology has reached its limits and cannot deliver the increased computational power required by the upcoming generation of IoT devices. ASICs could satisfy this requirement, but they lack the programming and design flexibility that IoT systems demand. In this respect, it is clear that reconfigurable platforms are an excellent implementation option for the upcoming generation of low-end IoT motes, as they provide unique competitive advantages such as flexibility through reconfigurable logic, versatility of hardware resources, high performance thanks to parallelism, and low power consumption with high security.

Acronyms and abbreviations

Nomenclature

FFs: Flip Flops
BRAMs: Block RAMs
DSPs: Digital Signal Processors
LUTs: Look Up Tables
FPS: Frames Per Second

Abbreviations

5/6G: Fifth and Sixth Generation Networks
IoT: Internet of Things
AI: Artificial Intelligence
FPGA: Field Programmable Gate Array
SoC: System on Chip
RFID: Radio Frequency Identification
IPv4: Internet Protocol version 4
LLN: Low-Power and Lossy Network
MAC: Medium Access Control
IDC: International Data Corporation
TRNG: True Random Number Generator
AES: Advanced Encryption Standard
SHA: Secure Hash Algorithm
ECC: Elliptic Curve Cryptography
MPU: Memory Protection Unit
ASICs: Application-Specific Integrated Circuits
DPR: Dynamic Partial Reconfiguration
SDR: Software-Defined Radio
FEC: Forward Error Correction
QoS: Quality of Service
QoR: Quality of Result

References

1. Sinha BB, Dhanalakshmi R. Recent advancements and challenges of internet of things in smart agriculture: A survey. Future Generation Computer Systems. 2022;126:169-184
2. Messaoud S, Bradai A, Hashim S, Bukhari R, Qung PTA, Ahmed OB, et al. A survey on machine learning in internet of things: Algorithms, strategies, and applications. Internet of Things. 2020;12:100314. ISSN 2542-6605
3. Di Martino B, Li KC, Yang LT, Esposito A. Trends and strategic researches in internet of everything. In: Internet of Everything. Singapore: Springer; 2018. pp. 1-12
4. Posadas DV Jr. After the gold rush: The boom of the internet of things, and the busts of data-security and privacy. Fordham Intellectual Property, Media & Entertainment Law Journal. 2017;28:69
5. Buyya R, Dastjerdi AV, editors. Internet of Things: Principles and Paradigms. Elsevier; 2016
6. Rodríguez-Andina JJ, Valdes-Pena MD, Moure MJ. Advanced features and industrial applications of FPGAs—A review. IEEE Transactions on Industrial Informatics. 2015;11(4):853-864
7. Messaoud S, Bouaafia S, Maraoui A, Ammari AC, Khriji L, Machhout M. Deep convolutional neural networks-based hardware–software on-chip system for computer vision application. Computers & Electrical Engineering. 2022;98:107671
8. Sheng Z, Yang S, Yu Y, Vasilakos AV, McCann JA, Leung KK. A survey on the IETF protocol suite for the internet of things: Standards, challenges, and opportunities. IEEE Wireless Communications. 2013;20(6):91-98
9. Shantharama P, Thyagaturu AS, Reisslein M. Hardware-accelerated platforms and infrastructures for network functions: A survey of enabling technologies and research studies. IEEE Access. 2020;8:132021-132085
10. Lai L, Suda N, Chandra V. CMSIS-NN: Efficient neural network kernels for Arm Cortex-M CPUs. arXiv preprint arXiv:1801.06601. 2018. DOI: 10.48550/arXiv.1801.06601
11. Alaba FA, Othman M, Hashem IAT, Alotaibi F. Internet of things security: A survey. Journal of Network and Computer Applications. 2017;88:10-28
12. Pinto S, Garlati C. User mode interrupts—A must for securing embedded systems. In: Proceedings of the Embedded World Conference. Nuremberg, Germany; 2019. pp. 505-510. Available from: https://bringyourownit.com/2019/03/03/user-mode-interrupts-a-must-for-securing-embedded-systems/
13. Shaikh FK, Zeadally S, Exposito E. Enabling technologies for green internet of things. IEEE Systems Journal. 2015;11(2):983-994
14. Wang K, Wang Y, Sun Y, Guo S, Wu J. Green industrial internet of things architecture: An energy-efficient perspective. IEEE Communications Magazine. 2016;54(12):48-54
15. Pena MDV, Rodriguez-Andina JJ, Manic M. The internet of things: The role of reconfigurable platforms. IEEE Industrial Electronics Magazine. 2017;11(3):6-19
16. Tsai CW, Lai CF, Vasilakos AV. Future internet of things: Open issues and challenges. Wireless Networks. 2014;20(8):2201-2217
17. Al-Kashoash HA, Kharrufa H, Al-Nidawi Y, Kemp AH. Congestion control in wireless sensor and 6LoWPAN networks: Toward the internet of things. Wireless Networks. 2019;25(8):4493-4522
18. Da Xu L, He W, Li S. Internet of things in industries: A survey. IEEE Transactions on Industrial Informatics. 2014;10(4):2233-2243
19. Javed F, Afzal MK, Sharif M, Kim BS. Internet of things (IoT) operating systems support, networking technologies, applications, and challenges: A comparative review. IEEE Communications Surveys & Tutorials. 2018;20(3):2062-2100
20. Lammie C, Olsen A, Carrick T, Azghadi MR. Low-power and high-speed deep FPGA inference engines for weed classification at the edge. IEEE Access. 2019;7:51171-51184
21. Molanes RF, Amarasinghe K, Rodriguez-Andina J, Manic M. Deep learning and reconfigurable platforms in the internet of things: Challenges and opportunities in algorithms and hardware. IEEE Industrial Electronics Magazine. 2018;12(2):36-49
22. Luo T, Nagarajan SG. Distributed anomaly detection using autoencoder neural networks in WSN for IoT. In: 2018 IEEE International Conference on Communications (ICC). Kansas City, MO, USA; 2018. pp. 1-6. DOI: 10.1109/ICC.2018.8422402
23. Koohang A, Sargent CS, Nord JH, Paliszkiewicz J. Internet of things (IoT): From awareness to continued use. International Journal of Information Management. 2022;62:102442
24. Granjal J, Monteiro E, Silva JS. Security for the internet of things: A survey of existing protocols and open research issues. IEEE Communications Surveys & Tutorials. 2015;17(3):1294-1312
25. Chen K, Zhang S, Li Z, Zhang Y, Deng Q, Ray S, et al. Internet-of-things security and vulnerabilities: Taxonomy, challenges, and practice. Journal of Hardware and Systems Security. 2018;2(2):97-110
26. Chen K, Zhang S, Li Z, Zhang Y, Deng Q, Ray S, et al. Internet-of-things security and vulnerabilities: Taxonomy, challenges, and practice. Journal of Hardware and Systems Security. 2018;2(2):97-110
27. Pennekamp J, Henze M, Schmidt S, Niemietz P, Fey M, Trauth D, et al. Dataflow challenges in an internet of production: A security & privacy perspective. In: Proceedings of the ACM Workshop on Cyber-Physical Systems Security & Privacy. New York, NY, USA; 2019. pp. 27-38. DOI: 10.1145/3338499.3357357
28. Benabdessalem R, Hamdi M, Kim TH. A survey on security models, techniques, and tools for the internet of things. In: 2014 7th International Conference on Advanced Software Engineering and its Applications. Hainan, China; 2014. pp. 44-48. DOI: 10.1109/ASEA.2014.15
29. Tan YS, Ko RKL, Holmes G. Security and data accountability in distributed systems: A provenance survey. In: 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing. Zhangjiajie, China; 2013. pp. 1571-1578. DOI: 10.1109/HPCC.and.EUC.2013.221
30. Restuccia F, D'Oro S, Melodia T. Securing the internet of things in the age of machine learning and software-defined networking. IEEE Internet of Things Journal. 2018;5(6):4829-4842
31. Chatterjee S, Kar AK. Regulation and governance of the internet of things in India. 2018;20(5):399-412. DOI: 10.1108/DPRG-04-2018-0017
32. Nisarga B, Peeters E. System-Level Tamper Protection Using MSP MCUs. Dallas, TX, USA: Texas Instruments; 2016. Available from: https://e2e.ti.com/
33. Kocher PC. Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In: Annual International Cryptology Conference. Berlin, Heidelberg: Springer; 1996. pp. 104-113
34. Illuri B, Jose D, David S, Nagarjuan M. Machine learning based and reconfigurable architecture with a countermeasure for side channel attacks. In: Inventive Communication and Computational Technologies. Singapore: Springer; 2022. pp. 175-187
35. Kamalinejad P, Mahapatra C, Sheng Z, Mirabbasi S, Leung VC, Guan YL. Wireless energy harvesting for the internet of things. IEEE Communications Magazine. 2015;53(6):102-108
36. Alioto M, Shahghasemi M. The internet of things on its edge: Trends toward its tipping point. IEEE Consumer Electronics Magazine. 2017;7(1):77-87
37. Rosello V, Portilla J, Riesgo T. Ultra low power FPGA-based architecture for wake-up radio in wireless sensor networks. In: IECON 2011—37th Annual Conference of the IEEE Industrial Electronics Society. Melbourne, VIC, Australia; 2011. pp. 3826-3831. DOI: 10.1109/IECON.2011.6119933
38. Monmasson E. FPGAs: Fundamentals, advanced features, and applications in industrial electronics [book news]. IEEE Industrial Electronics Magazine. 2017;11(2):73-74
39. Gomes T, Salgado F, Pinto S, Cabral J, Tavares A. A 6LoWPAN accelerator for internet of things endpoint devices. IEEE Internet of Things Journal. 2017;5(1):371-377
40. Rao M, Newe T, Grout I, Mathur A. An FPGA-based reconfigurable IPSec AH core with efficient implementation of SHA-3 for high speed IoT applications. Security and Communication Networks. 2016;9(16):3282-3295
41. Givehchi O, Landsdorf K, Simoens P, Colombo AW. Interoperability for industrial cyber-physical systems: An approach for legacy systems. IEEE Transactions on Industrial Informatics. 2017;13(6):3370-3378
42. Messaoud S, Bradai A, Ahmed OB, Quang PTA, Atri M, Hossain MS. Deep federated Q-learning-based network slicing for industrial IoT. IEEE Transactions on Industrial Informatics. 2020;17(8):5572-5582
43. Qiu J, Wang J, Yao S, Guo K, Li B, Zhou E, et al. Going deeper with embedded FPGA platform for convolutional neural network. In: ACM International Symposium on Field-Programmable Gate Arrays. New York, NY, USA; 2016. DOI: 10.1145/2847263.2847265
44. Zhang J, Li J. Improving the performance of OpenCL-based FPGA accelerator for convolutional neural network. In: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. New York, NY, USA; 2017. pp. 25-34. DOI: 10.1145/3020078.3021698
45. Zhang X, et al. Machine learning on FPGAs to face the IoT revolution. In: 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). Irvine, CA, USA; 2017. pp. 894-901. DOI: 10.1109/ICCAD.2017.8203875
46. de la Piedra A, Braeken A, Touhafi A. A performance comparison study of ECC and AES in commercial and research sensor nodes. In: Eurocon 2013. Zagreb, Croatia: IEEE; 2013. pp. 347-354. DOI: 10.1109/EUROCON.2013.6625007
47. Xu X, Wang Y. High speed true random number generator based on FPGA. In: 2016 International Conference on Information Systems Engineering (ICISE). Los Angeles, CA, USA; 2016. pp. 18-21. DOI: 10.1109/ICISE.2016.14
48. Cicek I, Al Khas A. A new read–write collision-based SRAM PUF implemented on Xilinx FPGAs. Journal of Cryptographic Engineering. 2022:1-18. DOI: 10.1007/s13389-021-00281-8
49. Schrijen GJ, Garlati C. Physical unclonable functions to the rescue. In: Proceedings of the Embedded World Conference. Nuremberg, Germany; 27 February–1 March 2018
50. Johnson AP, Chakraborty RS, Mukhopadhyay D. A PUF-enabled secure architecture for FPGA-based IoT applications. IEEE Transactions on Multi-Scale Computing Systems. 2015;1(2):110-122
51. Trimberger SMS. Three ages of FPGAs: A retrospective on the first thirty years of FPGA technology: This paper reflects on how Moore's law has driven the design of FPGAs through three epochs: The age of invention, the age of expansion, and the age of accumulation. IEEE Solid-State Circuits Magazine. 2018;10(2):16-29
52. Ahmed I, Zhao S, Meijers J, Trescases O, Betz V. Automatic BRAM testing for robust dynamic voltage scaling for FPGAs. In: 2018 28th International Conference on Field Programmable Logic and Applications (FPL). Dublin, Ireland; 2018. pp. 68-687. DOI: 10.1109/FPL.2018.00020
53. Karray F, Jmal MW, Garcia-Ortiz A, Abid M, Obeid AM. A comprehensive survey on wireless sensor node hardware platforms. Computer Networks. 2018;144:89-110
54. Berder O, Sentieys O. PowWow: Power optimized hardware/software framework for wireless motes. In: Proceedings of the 2010 23rd International Conference on Architecture of Computing Systems (ARCS). Hannover, Germany: VDE; 2010. pp. 1-5
55. Vera-Salas LA, Moreno-Tapia SV, Osornio-Rios RA, Romero-Troncoso Rd. Reconfigurable node processing unit for a low-power wireless sensor network. In: 2010 International Conference on Reconfigurable Computing and FPGAs. Cancun, Mexico; 2010. pp. 173-178. DOI: 10.1109/ReConFig.2010.48
56. Nyländen T, Boutellier J, Nikunen K, Hannuksela J, Silvén O. Reconfigurable miniature sensor nodes for condition monitoring. In: 2012 International Conference on Embedded Computer Systems (SAMOS). Samos, Greece; 2012. pp. 113-119. DOI: 10.1109/SAMOS.2012.6404164
57. MacGillivray C, Reinsel D. Worldwide Global DataSphere IoT Device and Data Forecast, 2019–2023 (IDC #US45066919). Framingham, MA, USA: International Data Corporation; 2019
58. Huang L, Li DL, Wang KP, Gao T, Tavares A. A survey on performance optimization of high-level synthesis tools. Journal of Computer Science and Technology. 2020;35:697-720
59. Coussy P, Gajski DD, Meredith M, Takach A. An introduction to high-level synthesis. IEEE Design & Test of Computers. 2009;26(4):8-17
60. Zhang X, Hao C, Li Y, Chen Y, Xiong J, Hwu WM, et al. A bidirectional co-design approach to enable deep learning on IoT devices. arXiv preprint arXiv:1905.08369. 2019
