Open access peer-reviewed chapter

System-Scenario Methodology to Design a Highly Reliable Radiation-Hardened Memory for Space Applications

Written By

Azam Seyedi, Per Gunnar Kjeldsberg and Roger Birkeland

Reviewed: 29 September 2023 Published: 30 October 2023

DOI: 10.5772/intechopen.113327

From the Edited Volume

Computer Memory and Data Storage

Edited by Azam Seyedi


Abstract

Cache memories are a major concern in computing systems, especially with respect to power consumption, reliability, and performance. Voltage-scaling techniques can be used to reduce the total power consumption of caches. However, aggressive voltage scaling significantly increases the probability of memory failure, especially in environments with high radiation levels, such as space. It is, therefore, important to deploy techniques that deal with reliability issues alongside voltage scaling. In this chapter, we present a system-scenario methodology for radiation-hardened memory design that maintains reliability during voltage scaling. Although any SRAM array can benefit from the design, we frame our study around the recently proposed radiation-hardened cell, Nwise, which provides a high level of tolerance against single-event and multi-event upsets in memories. To reduce power consumption while upholding reliability, we leverage the system-scenario-based design methodology to optimize energy consumption in applications where system requirements vary dynamically at run time. We demonstrate the use of the methodology with a use case related to satellite systems and solar activity. Our simulations show power savings of up to 49.3% compared to a cache design with a fixed nominal power supply level.

Keywords

  • space applications
  • solar activity
  • radiation hardening
  • single event upset (SEU)
  • voltage scaling
  • system scenario
  • SRAM design
  • reliability
  • Nwise cell

1. Introduction

Memory subsystems are among the main contributors to the total energy consumption, area, and performance of today's computing systems; there is, therefore, a critical need for low-power, reliable, and high-performance memories for emerging applications. Cache memories are a particular concern. Caches are designed to keep frequently used data and instructions close to the processing unit, avoiding power-hungry and slow accesses to main memory. Almost all modern CPU cores, from ultra-low-power chips, such as the ARM Cortex-A53 [1], to higher-end processors, such as the Intel Core i3-1215UL [2], use caches. However, caches are known to consume a large portion (30–70%) of the total processor power [3, 4]. On-chip cache size will also continue to grow due to device scaling coupled with increased performance requirements.

Voltage scaling is a well-known technique for reducing the total power consumption of caches. Due to the quadratic relationship between dynamic energy consumption and supply voltage, this is worthwhile even if it incurs overhead in other parts of the system. However, aggressive voltage scaling significantly increases the probability of memory failure, so further reduction of the supply voltage is not possible without the risk of erroneous computations. This problem is dramatically worse in environments with high levels of radiation, such as space, where systems are exposed to high doses of radiation strikes for long periods of time and have limited energy budgets [5]. Advanced techniques that address reliability during voltage scaling are therefore critical for energy-efficient memory design, especially for space applications.
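
To make the quadratic relationship concrete, the following minimal Python sketch (with illustrative voltage points, not values from our simulations) shows how the dynamic energy per access shrinks under voltage scaling, assuming E_dyn ∝ C·VDD²:

```python
# Minimal sketch: dynamic energy per access scales quadratically with the
# supply voltage (E_dyn ~ C * VDD^2). Voltage points are illustrative only.

def dynamic_energy_ratio(v_scaled: float, v_nominal: float = 1.0) -> float:
    """Return E_dyn(v_scaled) / E_dyn(v_nominal), assuming E_dyn ∝ VDD²."""
    return (v_scaled / v_nominal) ** 2

for vdd in (1.0, 0.8, 0.6, 0.5):
    print(f"VDD = {vdd:.1f} V -> {dynamic_energy_ratio(vdd):.0%} of nominal dynamic energy")
```

Scaling from 1 V to 0.5 V thus cuts the dynamic energy per access to a quarter, which is why voltage scaling remains attractive despite the reliability cost discussed next.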

A memory fault is typically modeled as a single-event upset (SEU) or a multi-event upset (MEU), where a single node or multiple nodes in a memory cell may change state due to the charge deposited by an energetic radiation particle [6]. There is a rich body of work on the design of low-power, fault-tolerant caches at different abstraction levels. Circuit-level techniques are used to improve the reliability of each SRAM cell at low voltage levels. Apart from the standard six-transistor (6T) SRAM cell, 8T and 10T SRAM cells have been proposed [7, 8]. These designs have a large area overhead, which in turn limits performance and increases the power consumption of the caches. In addition, they do not provide enough tolerance against the strong radiation strikes encountered in space.

A hardened 10T SRAM cell called Quatro-10T is proposed in [9] to provide higher area efficiency and higher reliability in low-voltage applications, with a larger noise margin and lower leakage current. It uses negative internal feedback to improve the immunity of the cell nodes against SEUs. However, it cannot provide full immunity to all SEUs, and some internal nodes may flip during a 0→1 SEU [10]. The RHBD-10T cell is proposed with a low area overhead compared to previous radiation-hardened memory cells [10]. However, it suffers from a high read access time, which limits its use wherever high speed is necessary [10, 11]. Furthermore, even though it provides SEU tolerance regardless of the upset polarity, its tolerance capability, that is, the radiation-induced charge it can resist, is not high compared to previously proposed cells [11]. Two quadruple cross-coupled storage cells, QUCCE 10T and QUCCE 12T, are proposed in [12]. QUCCE 10T is a proper cell design for high-speed applications, while QUCCE 12T is a promising candidate for low voltage and high reliability. Their SEU tolerance is lower than that of several previous designs such as [11], though QUCCE 12T tolerates a higher deposited charge at the expense of a larger area. These cells may therefore be less suitable for space applications that need high cell robustness and low area.

Nwise and Pwise have recently been proposed as two highly reliable radiation-hardened SRAM cells [11, 13]. Simulation results show that, compared to the state of the art, they are competitive radiation-hardened SRAM cells for use in various memory blocks for space applications. Both have the highest level of SEU tolerance capability for the temperature range encountered in space applications, as well as the highest level of tolerance to MEUs. However, all those simulations were done at nominal voltage levels, and voltage scaling was not addressed in those papers.

Circuit designs proposed in [14, 15] combine data duplication/triplication with error correction schemes to increase reliability while reducing the supply voltage. These designs can save energy through voltage scaling and ensure reliable behavior in harsh environments, however, at the cost of the high area overhead of additional memory cells.

At the architecture level, several schemes have been proposed to save energy by reducing the voltage while maintaining reliability through sophisticated fault-tolerance mechanisms, such as block/set pairing [16], address remapping [17], and block/set-level replication [18]. In another set of schemes, various complex error-correcting codes (ECCs) are used to protect against both permanent and transient errors while reducing the voltage [19, 20, 21].

Some software-level approaches use language extensions to give the programmer the ability to perform relaxations [22]. For instance, a framework has been developed to expose hardware errors to software in specified code regions. This allows programmers to mark certain code regions as relaxed and to decrease the processor's voltage and frequency below the critical threshold when executing such regions.

For other parts of the embedded systems domain, the so-called system-scenario-based design methodology has been developed to optimize energy consumption. System-scenario-based techniques [23, 24] exploit application dynamism through fine-grained run-time system tuning. At design time, profiling is used to determine the behavior of different run-time situations. Run-time situations are then clustered into system scenarios based on similarity from a multidimensional cost perspective, covering, for example, execution time, energy consumption, and memory footprint [25]. Optimal platform configurations, such as dynamic voltage and frequency settings, memory configuration, and task mapping, are then determined for each scenario, and efficient scenario prediction and switching mechanisms are developed. At run time, based on the current run-time situation and the scenario knowledge, the application and platform are switched to the optimal configuration.
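
The following sketch illustrates the design-time/run-time split in a deliberately simplified form; the one-dimensional cost metric, the clustering rule, and the configuration knobs are hypothetical placeholders rather than the actual algorithms of [23, 24]:

```python
# Hedged sketch of the system-scenario flow: run-time situations (RTSs),
# profiled at design time, are clustered into scenarios, each with a
# pre-computed platform configuration; at run time the predicted RTS is
# mapped to its scenario and the platform is switched accordingly.
# All names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Scenario:
    cost_range: tuple  # (min, max) of the clustered RTS cost metric
    config: dict       # platform knobs fixed at design time (e.g., VDD)

def cluster_rts(costs, n_scenarios):
    """Design time: cluster 1-D RTS costs into equal-width scenarios."""
    lo, hi = min(costs), max(costs)
    step = (hi - lo) / n_scenarios
    return [Scenario((lo + i * step, lo + (i + 1) * step), {"vdd": 0.5 + 0.1 * i})
            for i in range(n_scenarios)]

def select_scenario(scenarios, predicted_cost):
    """Run time: pick the scenario whose cost range covers the prediction."""
    for s in scenarios:
        if s.cost_range[0] <= predicted_cost <= s.cost_range[1]:
            return s
    return scenarios[-1]  # out of range: fall back to the last scenario

scenarios = cluster_rts(costs=[10, 25, 40, 55, 70, 90], n_scenarios=3)
print(select_scenario(scenarios, predicted_cost=48).config)  # -> {'vdd': 0.6}
```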

However, little work has attempted to leverage a combination of hardware- and software-level techniques to simultaneously manage unreliability and reduce power consumption, especially in the space electronics industry. To this end, our work adopts an approach where the circuit and run-time levels of the system cooperate to address energy and reliability issues. Our main goal is to design a low-power, fault-tolerant cache memory accompanied by a system-scenario-based design methodology, which makes it possible to minimize power consumption by adapting voltage and frequency levels to a dynamically changing exposure to different doses of particle strikes. The contributions of this chapter are as follows:

  1. We present a radiation-hardened memory circuit design that maintains reliability during voltage scaling. This design allows operation from nominal voltage levels down to low voltage levels for space applications.

  2. We leverage the system-scenario-based design methodology to optimize the energy consumption at run time.

The rest of the chapter is organized as follows: In Section 2, we first briefly review the Nwise cell design [11, 13]. We then move on to the adapted version of the Nwise cell, which can operate reliably during voltage scaling, describing its schematic details and operational behavior and analyzing its SEU robustness. Furthermore, circuit simulation results, including read and write delay times, read and write power consumption, and robustness simulations during voltage scaling, are presented in this section. In Section 3, we briefly describe the system-scenario-based design methodology and explain how we deploy it in our design. A case study is then presented to confirm the applicability of our design in space, showing how the system-scenario methodology can optimize power consumption when the system is operating in space. To confirm the usefulness of the proposed method, the memory energy consumption is compared in two cases: when the memory is equipped with the system-scenario method and when it operates without any scenario decisions at run time. Finally, Section 4 concludes the chapter.


2. Memory design details

2.1 Cell schematics

The Nwise and Pwise cells have recently been proposed as area-efficient and highly reliable radiation-hardened SRAM cells [11, 13]. We focus on the Nwise cell in this chapter since it is well suited for cache designs.

Figure 1 shows the details of the Nwise cell circuit. The main storage part of the cell is a cross-coupled pair consisting of transistors N1 and N2; the backup part is another cross-coupled pair consisting of transistors N5 and N6. The Nwise cell thus has four storage nodes: Q, QB, P, and PB. Transistors N7 and N8 connect BL and BLB to Q and QB, respectively, and are controlled by the word line (WL): when WL is high, N7 and N8 are ON, and read/write operations are performed. Two feedback paths (P1-N4 and P2-N3) help the storage nodes recover their initial values after particle strikes, securing robustness under high radiation conditions.

Figure 1.

Circuit details of the Nwise cell.

Although Nwise improves fault tolerance, it can suffer from poor cell stability under voltage scaling [8], a common problem also for the other radiation-hardened cells presented in the previous section. To overcome the stability problem, a single-ended read-port SRAM cell is proposed in [26], which separates the read bit line from the storage node with the help of two added NMOS transistors. In this way, the stability of the SRAM cell is improved during read operations, allowing further scaling toward threshold levels [8]. Inspired by this technique, we derive the Adapted Nwise cell used in our simulations in this chapter.

We show the structure of the Adapted Nwise cell in Figure 2. The main storage part is the same as in the Nwise cell, but the access paths are separated: N7 and N8 are write access transistors that connect the main storage nodes to the write bit lines (BLW and BLBW). When the write word line (WWL) is high, N7 and N8 are ON, and a write operation is performed. N9 and N10 connect the storage node QB to the read bit line (BLR). During a read operation, the read word line (RWL) is high and turns on N10.

Figure 2.

Circuit details of the adapted Nwise cell.

2.2 Operation analysis

Figure 3 shows the test bench circuit used in our simulations, which consists of a one-column set containing 64 Adapted Nwise cells and associated peripheral circuitry (read and write precharge circuits and appropriate circuits for read [7] and write operations [27, 28]).

Figure 3.

The test bench circuit consists of a one-column set containing 64 adapted Nwise cells and appropriate circuits for read and write operations [7, 27, 28].

Transistors P3, P4, and P5 keep BLW and BLBW precharged to VDD before the write operation begins. Signal EnableW is set to VDD so that Data and DataB can be transferred to BLW and BLBW. When signal WWL is set to VDD, the write operation starts, and the data can be written to the storage nodes. Before a read operation starts, transistor P6 keeps BLR precharged to VDD. The read operation begins when RWL becomes high, connecting BLR to the internal storage node through N9 and N10. When the read data is on BLR, EnableR becomes high, and the NAND gate routes the data to out.

We design the Adapted Nwise cell in a 45 nm technology. The simulations are done with LTspice [29] at different voltage levels (1 V down to 0.5 V). Figure 4 shows the transient simulation results of an Adapted Nwise cell located in the test-bench column set for the operation sequence "Write 1, Read 1, Write 0, Read 0", confirming that the write and read operations complete successfully.

Figure 4.

The simulation waveform of the adapted Nwise cell for a sequential set of operations, write 1, read 1, write 0, and read 0.

2.3 SEU recovery analysis

The SEU robustness of the proposed cell for VDD = 0.5 V is depicted in Figure 5. When an energetic particle passes through a semiconductor device, electron-hole pairs are created along its path as it loses energy [30, 31]. If such a particle strikes a reverse-biased junction depletion region, the injected charge is transported by drift current, leading to an accumulation of extra charge at the node [31]. This produces a transient current pulse that changes the value of the node if the injected charge exceeds the critical charge of the node, Qcrit [30, 31]. Hence, the sensitive cell nodes are those surrounded by the reverse-biased drain junctions of transistors biased in the OFF state [32]. When a radiation particle strikes a PMOS transistor, only a positive transient pulse (1→1 or 0→1) is generated, whereas when a radiation particle strikes an NMOS transistor, only a negative transient pulse (0→0 or 1→0) is induced [32]. Let us assume that the Adapted Nwise cell is in state 1 (Q = 1, QB = 0, PB = 0, and P = 1). Transistors N2, N4, N6, and P1 are then ON, and the rest are OFF. Hence, Q, QB, and P are sensitive nodes, while PB is not sensitive to particle strikes.

Figure 5.

SEU tolerance simulation of the adapted Nwise cell for VDD = 0.5 V.

As shown in Figure 5, fault injections that could result in SEUs occur at 25, 40, 70, and 80 ns. At 25 ns, QB is hit by a particle strike (QB: 0→1). QB returns to its initial state, followed by the other nodes. At 40 ns, node Q is hit (Q: 1→0); again, the internal nodes of the Adapted Nwise cell recover their initial states after the injected fault. At 70 ns, P is hit (P: 0→1), and node P comes back to its initial state, followed by the other internal nodes. Finally, PB is hit at 80 ns (PB: 1→0) and also returns to its initial state, followed by the other nodes. Hence, the cell is robust against SEUs on all nodes.

The core cell is the same as the Nwise cell, and since the SEU robustness simulations are done in hold mode, the SEU robustness of the Adapted Nwise cell matches that of the Nwise cell [11, 13]. Due to lack of space, we only briefly summarize the relevant explanations in this chapter and refer the reader to [11, 13]. In our simulations, we inject charge at cell nodes using the model outlined in Section 2.4. A charge just below the Qcrit of the given node is injected to show how this affects the node voltage and how the SRAM cell recovers its original state. If a charge equal to or greater than Qcrit were injected, the SRAM cell would not be able to recover, and an actual SEU would occur.

2.4 Design methodology and simulation results

As mentioned, we design and simulate a one-column set consisting of 64 Adapted Nwise cells and associated peripheral circuitry. The high-level structure is a column in a cache memory as described in [33, 34]. Necessary wire capacitances are added to the bitlines (BLW, BLBW, and BLR) and the wordlines (WWL and RWL) based on the sizes reported in [12, 31, 35].

Transistor sizes of the memory cells, as well as of the peripheral circuitry, are optimized for maximum robustness together with minimum read/write power consumption, minimum read/write access times, and minimum area. When these optimization goals conflict, more weight is given to robustness. Transistor sizes are chosen for the worst case (VDD = 0.5 V): WP1 = WP2 = 60 nm, WN1 = WN2 = 240 nm, WN3 = WN4 = 240 nm, WN5 = WN6 = 60 nm, WN7 = WN8 = 180 nm, and WN9 = WN10 = 180 nm.

We use standard manual optimization, tuning transistor sizes to find a globally optimized solution. The simulations are performed with LTspice at different voltage levels (1 V down to 0.5 V) at a temperature of 27°C. The circuits are designed with a 45 nm predictive technology model. From the simulations, we obtain the read and write power consumption, read and write access times, and Qcrit for the different voltage levels.

To simulate the effect of a particle strike on a cell node, we use the same model as in [13]: the double-exponential current model presented in Eq. (1), which has been widely used by researchers [12, 36, 37]. Qdep is the total charge deposited at the node hit by the particle strike, τr is the collection time constant of the junction, and τf is the ion-track establishment time constant; both are material-dependent time constants [36]. Following [38], we set τr = 164 ps and τf = 50 ps. In the simulations, we gradually increase Qdep in Eq. (1) until the data value stored in the SRAM cell changes and cannot be recovered; this charge is the critical charge of the given node. The cell Qcrit is the minimum Qcrit among all sensitive nodes of the cell.

I(t) = \frac{Q_{dep}}{\tau_f - \tau_r} \left( e^{-t/\tau_f} - e^{-t/\tau_r} \right) \qquad (1)
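
As a sanity check of Eq. (1), the following sketch implements the double-exponential current pulse with the time constants above and numerically integrates it; the recovered charge should approach Qdep. The code is an illustrative stand-alone model, not part of our LTspice setup:

```python
# Double-exponential particle-strike current of Eq. (1), with the chapter's
# time constants. Integrating I(t) over a long window recovers Q_dep.

import math

TAU_R = 164e-12  # junction collection time constant (s)
TAU_F = 50e-12   # ion-track establishment time constant (s)

def strike_current(t: float, q_dep: float) -> float:
    """I(t) = Q_dep / (tau_f - tau_r) * (exp(-t/tau_f) - exp(-t/tau_r))."""
    return q_dep / (TAU_F - TAU_R) * (math.exp(-t / TAU_F) - math.exp(-t / TAU_R))

q_dep = 15.9e-15  # e.g., the 0.5 V cell Qcrit from Table 1, in coulombs
dt = 1e-13        # 0.1 ps integration step
total = sum(strike_current(i * dt, q_dep) * dt for i in range(20000))  # 2 ns >> tau_r
print(f"recovered charge = {total * 1e15:.2f} fC (target {q_dep * 1e15:.1f} fC)")
```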

Table 1 shows the simulation results: a full performance comparison of the Adapted Nwise cell during voltage scaling, covering power consumption during read and write operations, write access times, read access times, and Qcrit.

| Supply voltage level | Write power (µW) | Read power (µW) | Write access time (ps) | Read access time (ps) | Qcrit (fC) |
|---|---|---|---|---|---|
| 1 V | 19.19 | 8.49 | 53.16 | 29.51 | 101.5 |
| 0.9 V | 16.61 | 4.93 | 63.48 | 35.31 | 86.6 |
| 0.8 V | 15.85 | 3.06 | 78.67 | 43.39 | 70.7 |
| 0.7 V | 16.26 | 2.02 | 111.48 | 59.49 | 51.3 |
| 0.6 V | 14.67 | 1.58 | 176.72 | 95.74 | 31.9 |
| 0.5 V | 9.93 | 1.10 | 400.95 | 196.48 | 15.9 |

Table 1.

Cost comparison for the adapted Nwise cell for different supply voltage levels.


3. System-scenario-based design methodology

As mentioned in Section 1, the system-scenario-based design methodology is a combined design-time and run-time methodology [39] that exploits an application's dynamic behavior at run time, offering significant optimization potential. According to our simulations, energy consumption is reduced by combining voltage-scaling techniques applied to the SRAM cells with the system-scenario-based design method in our framework. The two-phase design-time/run-time system-scenario methodology is detailed in [24]; here, we describe it in terms of our use case.

3.1 Case study: space weather forecast

The main idea is to adapt the supply voltage according to the probability of SEUs. When the probability is high, we increase the supply voltage to make the memory more robust against radiation. On the other hand, we reduce the supply voltage to save power when the probability of SEUs is low.

In space, a satellite is exposed to different doses of radiation, depending on, for example, solar activity. Various agencies provide forecasts of the space weather, that is, the electron or proton flux, based on solar activity. As a case study, we use the online historical solar weather information from [40], extracting proton flux numbers from November 2004 to May 2022 from the published graph. The graph shows samples of the average solar flux per month, and we choose this parameter as the knob for selecting the appropriate scenario. In this case, the granularity is relatively coarse, and the power supply voltage can only be adjusted once a month, since these are coarse-grained historical statistics. An operational system would instead be based on space weather predictions, which are typically valid for a 3-day period with a forecast granularity down to 3-hour windows, in which VDD can be adjusted to the predicted case for each window. Table 2 shows the number of samples in each range of solar flux.

| Range of flux level | Number of one-month samples in each range |
|---|---|
| 0–45 MeV | 0 |
| 45–60 MeV | 0 |
| 60–85 MeV | 115 |
| 85–110 MeV | 47 |
| 110–130 MeV | 32 |
| 130–160 MeV | 17 |

Table 2.

Number of samples in each range of solar flux [40].

A correlation between the solar flux level and the probability of SEUs has been reported in [41]. We also assume that the energy of the particles reaching the chip surface is sufficiently moderated by passing through metallic shielding [42]. Under these realistic assumptions, we observe flux levels between 5 and 160 MeV. At design time, we identify run-time situations (RTSs) and cluster them into the following six scenarios (a selection sketch follows the list):

  1. The first scenario is defined for flux levels below 45 MeV. For this situation, we choose VDD = 0.5 V. The system is in low-power consumption mode, but at the cost of the lowest fault tolerance.

  2. The second scenario is defined for flux levels between 45 and 60 MeV (45 MeV < flux ≤ 60 MeV). For this situation, we choose VDD = 0.6 V.

  3. The third scenario is defined for flux levels between 60 and 85 MeV (60 MeV < flux ≤ 85 MeV). For this situation, we choose VDD = 0.7 V.

  4. The fourth scenario is defined for flux levels between 85 and 110 MeV (85 MeV < flux ≤ 110 MeV). For this situation, we choose VDD = 0.8 V.

  5. The fifth scenario is defined for flux levels between 110 and 130 MeV (110 MeV < flux ≤ 130 MeV). For this situation, we choose VDD = 0.9 V.

  6. The sixth scenario is defined for flux levels between 130 and 160 MeV (130 MeV < flux ≤ 160 MeV). For this situation, we choose VDD = 1 V. The system has the highest level of fault tolerance, but at the cost of the highest power consumption.
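
The following sketch shows how such a run-time selection could look. The flux boundaries and voltage levels follow the six scenarios above, while the function and data structure are illustrative assumptions:

```python
# Map a forecast solar flux level to the supply voltage of the matching
# scenario. Boundaries and VDD values follow the six scenarios listed above.

SCENARIOS = [  # (upper flux bound in MeV, VDD in volts)
    (45, 0.5), (60, 0.6), (85, 0.7), (110, 0.8), (130, 0.9), (160, 1.0),
]

def vdd_for_flux(flux_mev: float) -> float:
    """Return the scenario supply voltage for a forecast flux level."""
    for upper, vdd in SCENARIOS:
        if flux_mev <= upper:
            return vdd
    return 1.0  # above the modeled range: stay at the most robust setting

print(vdd_for_flux(72))   # third scenario  -> 0.7
print(vdd_for_flux(140))  # sixth scenario  -> 1.0
```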

At run time, the satellite is informed from Earth when the solar activity changes, and the scenario is dynamically selected accordingly. We now calculate the total power consumption in two cases: (a) when the system-scenario method is applied and the system can run at different voltage levels, and (b) when the system runs only at a fixed nominal voltage level.

To calculate the power consumption, we need the following information: Table 1, which gives the cost of the Adapted Nwise cell at the different supply voltage levels; Table 2, which gives the number of samples in each solar flux range; and the number of level-one cache operations, which is available for a tested benchmark [43]. According to [43], the numbers of read and write operations for Genome, a tested application from the STAMP benchmark suite, are 400,449 and 25,028, respectively.

We calculate the total power consumption in the first case as follows: (1) The per-sample power consumption at each voltage level is calculated from Table 1 and the operation counts reported in [43]. (2) For each scenario, the number of samples is multiplied by the corresponding per-sample power calculated in step (1). (3) The results of all scenarios are added together.

As an example, consider the sixth scenario: the number of samples in this range is 17, and we choose VDD = 1 V. The power consumption for each sample is calculated as follows:

\mathrm{Power}_{\mathrm{sample}} = \frac{400449 \times 8.49\,\mu\mathrm{W} + 25028 \times 19.19\,\mu\mathrm{W}}{400449 + 25028} \qquad (2)

This gives approximately 9.12 µW per sample. The total power consumption for this scenario is then obtained by multiplying by the number of samples in the range (17), which equals 155.03 µW. We calculate the total power consumption of each scenario similarly and add them all together, resulting in 842.47 µW.

For the second case: (1) The per-sample power consumption is calculated only for VDD = 1 V, again from Table 1 and the operation counts reported in [43]; it is the same as in Eq. (2). (2) The total number of samples (211) is multiplied by this per-sample power, which gives 1924.11 µW.
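
The following sketch reproduces this comparison from Tables 1 and 2 and the operation counts of [43]. Apart from those published numbers, all names are illustrative, and small rounding differences from the quoted totals are expected:

```python
# Reproduce the two-case power comparison: (a) scenario-controlled VDD,
# (b) fixed nominal VDD = 1 V. Data from Tables 1 and 2 and from [43].

READS, WRITES = 400449, 25028  # Genome (STAMP) L1 operation counts [43]

# Table 1 excerpt: VDD (V) -> (write power, read power), both in uW.
POWER = {1.0: (19.19, 8.49), 0.9: (16.61, 4.93), 0.8: (15.85, 3.06),
         0.7: (16.26, 2.02), 0.6: (14.67, 1.58), 0.5: (9.93, 1.10)}

# Table 2 mapped onto the scenario VDDs: VDD (V) -> one-month samples.
SAMPLES = {0.5: 0, 0.6: 0, 0.7: 115, 0.8: 47, 0.9: 32, 1.0: 17}

def per_sample_power(vdd: float) -> float:
    """Operation-weighted average power (uW) at one supply voltage, as in Eq. (2)."""
    p_write, p_read = POWER[vdd]
    return (READS * p_read + WRITES * p_write) / (READS + WRITES)

scenario_total = sum(n * per_sample_power(v) for v, n in SAMPLES.items())
fixed_total = sum(SAMPLES.values()) * per_sample_power(1.0)

print(f"system-scenario total: {scenario_total:.2f} uW")  # ~842 uW
print(f"fixed nominal total:   {fixed_total:.2f} uW")     # ~1924 uW
print(f"saving: {1 - scenario_total / fixed_total:.1%}")  # ~56.2%
```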

In this way, we demonstrate a 56.2% power saving compared to using the Adapted Nwise cell with a fixed nominal power supply level. We repeat the calculation for a cache consisting of the original Nwise cells depicted in Figure 1. The total power consumption of this cache is calculated as follows:

\mathrm{Power}_{\mathrm{total}} = 211 \times \frac{400449 \times 7.49\,\mu\mathrm{W} + 25028 \times 14.14\,\mu\mathrm{W}}{400449 + 25028} \qquad (3)

which equals 1662.93 µW, 13.6% lower than that of a cache consisting of Adapted Nwise cells (at VDD = 1 V). Using the Adapted Nwise cell in combination with system-scenario-controlled voltage scaling reduces the power consumption by 49.3% compared to using the original Nwise cell. Table 3 summarizes the results.

| Cache design | Power consumption (µW) |
|---|---|
| Original Nwise cache design with a fixed nominal power supply level | 1662.93 |
| Adapted Nwise cache design with a fixed nominal power supply level | 1924.11 |
| Adapted Nwise cache design with power supply levels adapted according to the system-scenario design methodology | 842.47 |

Table 3.

The total power consumption of each cache design.

Note that we do not consider the losses induced by the circuitry generating the different voltage levels. We assume that this memory will be used in systems where dynamic voltage and frequency scaling is already used for the regular logic.


4. Conclusions and future work

In this chapter, we have presented a radiation-hardened memory design that maintains reliability during voltage scaling. Inspired by the Nwise cell, a recently proposed highly reliable radiation-hardened SRAM cell, we presented the design of the Adapted Nwise cell, including calculations of its Qcrit, energy consumption, and access time for each operation during voltage scaling. Simulations show that the design can operate from nominal voltage levels down to low voltage levels for space applications.

In addition, we leveraged the system-scenario-based design methodology to optimize energy consumption at run time, according to our use case of solar activity. Our simulations show power savings of up to 49.3% compared to a cache design with a fixed nominal power supply level.

As future work, an operational system will be made practical by leveraging information from space weather forecasts to adjust the VDD value based on the expected solar flux. Space weather forecasts are given as three-day predictions, and a methodology must be developed to extract the relevant prediction parameters and distribute them in time to a space system using the Adapted Nwise memory. In addition, process, voltage, and temperature (PVT) variations will be investigated in our future work.


Acknowledgments

This work was funded by the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 799481.

The authors thank Gunnar Maehlum and Dirk Meier from the company Integrated Detector Electronics AS (IDEAS) for their help and fruitful comments that greatly improved the design and experiments.

References

  1. ARM Developer [Internet]. 2012. Available from: https://developer.arm.com/Processors/Cortex-A53.php [Accessed: October 21, 2022]
  2. Intel Core i3-1215UL Processor [Internet]. Available from: https://www.intel.com/content/www/us/en/products/sku/230902/intel-core-i31215ul-processor-10m-cache-up-to-4-40-ghz/specifications.html [Accessed: October 21, 2022]
  3. Wong W, Koh C, Chen Y, Li H. VOSCH: Voltage scaled cache hierarchies. In: Proceedings of the 25th International Conference on Computer Design (ICCD). Lake Tahoe, CA, USA: IEEE; 2007. pp. 496-503
  4. Zhang C, Vahid F, Najjar W. A highly configurable cache for low energy embedded systems. ACM Transactions on Embedded Computing Systems. 2005;4:363-387
  5. Lin D, Xu Y, Liu X, Zhu W, Dai L, Zhang M, et al. A novel highly reliable and low-power radiation hardened SRAM bit-cell design. IEICE Electronics Express. 2018;15(3):20171129
  6. Lin S, Kim Y, Lombardi F. Analysis and design of nanoscale CMOS storage elements for single-event hardening with multiple-node upset. IEEE Transactions on Device and Materials Reliability. 2012;12(1):68-77
  7. Calhoun BH, Chandrakasan A. A 256kb sub-threshold SRAM in 65 nm CMOS. In: Proceedings of the IEEE International Solid-State Circuits Conference - Digest of Technical Papers. San Francisco, CA, USA: IEEE; 2006. pp. 2592-2601
  8. Verma N, Chandrakasan A. A 256 kb 65 nm 8T subthreshold SRAM employing sense-amplifier redundancy. IEEE Journal of Solid-State Circuits. 2008;43(1):141-149
  9. Jahinuzzaman SM, Rennie DJ, Sachdev M. A soft error tolerant 10T SRAM bit-cell with differential read capability. IEEE Transactions on Nuclear Science. 2009;56(6):3768-3773
  10. Guo J, Zhu L, Sun Y, Cao H, Huang H, Wang T, et al. Design of area-efficient and highly reliable RHBD 10T memory cell for aerospace applications. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2018;26(5):991-994
  11. Seyedi A, Aunet S, Kjeldsberg PG. Nwise and Pwise: 10T radiation hardened SRAM cells for space applications with high reliability requirements. IEEE Access. 2022;10:30624-30642
  12. Jiang J, Xu Y, Zhu W, Xiao J, Zou S. Quadruple cross-coupled latch-based 10T and 12T SRAM bit-cell designs for highly reliable terrestrial applications. IEEE Transactions on Circuits and Systems I: Regular Papers. 2019;66(3):967-977
  13. Seyedi A, Aunet S, Kjeldsberg PG. Nwise: An area efficient and highly reliable radiation hardened memory cell designed for space applications. In: Proceedings of the IEEE Nordic Circuits and Systems Conference (NORCAS): NORCHIP and International Symposium of System-on-Chip (SoC). Helsinki, Finland: IEEE; 2019. pp. 1-6
  14. Seyedi A, Yalcin G, Unsal O, Cristal A. Circuit design of a novel adaptable and reliable L1 data cache. In: Proceedings of the 23rd ACM Great Lakes Symposium on VLSI (GLSVLSI). Paris, France: Association for Computing Machinery; 2013. pp. 333-334
  15. Yalcin G, Seyedi A, Unsal O, Cristal A. Flexicache: Highly reliable and low-power cache under supply voltage scaling. High Performance Computing. 2014;1:173-190
  16. Wilkerson C, Gao H, Alameldeen AR, Chishti Z, Khellah M, Lu S-L. Trading off cache capacity for reliability to enable low voltage operation. In: Proceedings of the International Symposium on Computer Architecture (ISCA). Beijing, China: IEEE; 2008. pp. 203-214
  17. Ansari A, Feng S, Gupta S, Mahlke S. Archipelago: A polymorphic cache design for enabling robust near-threshold operation. In: Proceedings of the IEEE 17th International Symposium on High Performance Computer Architecture. San Antonio, TX, USA: IEEE; 2011. pp. 539-550
  18. Banaiyan MA, Homayoun H, Dutt N. FFT-cache: A flexible fault-tolerant cache architecture for ultra-low voltage operation. In: Proceedings of the 14th International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES). Taipei, Taiwan: IEEE; 2011. pp. 95-104
  19. Alameldeen AR, Wagner I, Chishti Z, Wu W, Wilkerson C, Lu S-L. Energy-efficient cache design using variable-strength error-correcting codes. In: Proceedings of the 38th Annual International Symposium on Computer Architecture (ISCA). San Jose, CA, USA: IEEE; 2011. pp. 461-471
  20. Chishti Z, Alameldeen AR, Wilkerson C, Wu W, Lu S-L. Improving cache lifetime reliability at ultra-low voltages. In: Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). New York, NY, USA: IEEE; 2009. pp. 89-99
  21. Qureshi MK, Chishti Z. Operating SECDED-based caches at ultra-low voltage with FLAIR. In: Proceedings of the 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). Budapest, Hungary: IEEE; 2013. pp. 1-11
  22. Carbin M, Misailovic S, Rinard MC. Verifying quantitative reliability of programs that execute on unreliable hardware. ACM SIGPLAN Notices. 2013;48(10):33-52
  23. Filippopoulos I, Catthoor F, Kjeldsberg PG. Exploration of energy efficient memory organisations for dynamic multimedia applications using system scenarios. Design Automation for Embedded Systems. 2013;17:669-692
  24. Gheorghita S, Palkovic M, Hamers J, Vandecappelle A, Mamagkakis S, Basten T, et al. System-scenario-based design of dynamic embedded systems. ACM Transactions on Design Automation of Electronic Systems. 2009;14(1):1-45
  25. Oleynik Y, Gerndt M, Schuchart J, Kjeldsberg PG, Nagel WE. Run-time exploitation of application dynamism for energy-efficient exascale computing (READEX). In: Proceedings of the 18th International Conference on Computational Science and Engineering. Porto, Portugal: IEEE Computer Society; 2015. pp. 347-350
  26. Chang L, Fried DM, Hergenrother J, Sleight JW, Dennard RH, Montoye RK, et al. Stable SRAM cell design for the 32 nm node and beyond. In: Digest of Technical Papers, Symposium on VLSI Technology. Kyoto, Japan: IEEE; 2005. pp. 128-129
  27. Maroof N, Kong BS. Charge sharing write driver and half-VDD precharge 8T SRAM with virtual ground for low-power write and read operation. IET Circuits, Devices & Systems. 2018;12(1):94-98
  28. Kim J, Mazumder P. A robust 12T SRAM cell with improved write margin for ultra-low power applications in 40 nm CMOS. Integration. 2017;57:1-10
  29. LTspice [Internet]. Available from: https://www.analog.com/en/design-center/design-tools-and-calculators/ltspice-simulator.html [Accessed: October 21, 2022]
  30. Atias L, Teman A, Giterman R, Meinerzhagen P, Fish A. A low-voltage radiation-hardened 13T SRAM bitcell for ultra-low power space applications. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2016;24(8):2622-2633
  31. Guo J, Xiao L, Mao Z. Novel low-power and highly reliable radiation hardened memory cell for 65 nm CMOS technology. IEEE Transactions on Circuits and Systems I: Regular Papers. 2014;61(7):1994-2001
  32. Kelin LH, Klas L, Mounaim B, Prasanthi R, Linscott IR, Inan US, et al. LEAP: Layout design through error-aware transistor positioning for soft-error resilient sequential cell design. In: Proceedings of the IEEE International Reliability Physics Symposium. Anaheim, CA, USA: IEEE; 2010. pp. 203-212
  33. Seyedi A, Armejach A, Cristal A, Unsal O, Hur I, Valero M. Circuit design of a dual-versioning L1 data cache for optimistic concurrency. In: Proceedings of the 21st ACM Great Lakes Symposium on VLSI (GLSVLSI). Lausanne, Switzerland: Association for Computing Machinery; 2011. pp. 325-330
  34. Armejach A, Seyedi A, Titos-Gil R, Hur I, Cristal A, Unsal OS, et al. Using a reconfigurable L1 data cache for efficient version management in hardware transactional memory. In: Proceedings of the International Conference on Parallel Architectures and Compilation Techniques. Galveston, TX, USA: IEEE; 2011. pp. 361-371
  35. Guo J, Xiao L, Wang T, Liu S, Wang X, Mao Z. Soft error hardened memory design for nanoscale complementary metal oxide semiconductor technology. IEEE Transactions on Reliability. 2015;64(2):596-602
  36. Messenger GC. Collection of charge on junction nodes from ion tracks. IEEE Transactions on Nuclear Science. 1982;29(6):2024-2031
  37. Lin S, Kim Y, Lombardi F. An 11-transistor nanoscale CMOS memory cell for hardening to soft errors. IEEE Transactions on VLSI Systems. 2011;19(5):900-904
  38. Qi C, Xiao L, Wang T, Li J, Li L. A highly reliable memory cell design combined with layout-level approach to tolerant single-event upsets. IEEE Transactions on Device and Materials Reliability. 2016;16(3):388-395
  39. Yassin Y, Catthoor F, Kjeldsberg PG, Perkis A. Techniques for dynamic hardware management of streaming media applications using a framework for system scenarios. Microprocessors and Microsystems. 2018;56:157-168
  40. Space Weather Prediction Center [Internet]. Available from: https://www.swpc.noaa.gov/communities/space-weather-enthusiasts-dashboard [Accessed: October 21, 2022]
  41. The Space Environment Information System [Internet]. Available from: https://www.spenvis.oma.be/help/background/flare/flare.html [Accessed: October 21, 2022]
  42. Petersen E. Single Event Effects in Aerospace. Wiley-IEEE Press; 2011. Online ISBN: 9781118084328. DOI: 10.1002/9781118084328
  43. Seyedi A, Armejach A, Cristal A, Unsal OS, Valero M. Novel SRAM bias control circuits for a low power L1 data cache. In: Proceedings of NORCHIP. Copenhagen, Denmark: IEEE; 2012. pp. 1-6
