Open access peer-reviewed chapter

Uniform Interfaces for Resource-Sharing Components in Hierarchically Scheduled Real-Time Systems

Written By

Martijn M. H. P. van den Heuvel, Reinder J. Bril, Johan J. Lukkien, Moris Behnam and Thomas Nolte

Submitted: 26 October 2015 Reviewed: 24 February 2016 Published: 08 June 2016

DOI: 10.5772/62691

From the Edited Volume

Real-time Systems

Edited by Kuodi Jian

Abstract

In the literature, several hierarchical scheduling frameworks (HSFs) have been proposed for enabling resource sharing between components on a uni-processor system. Each HSF comes with its own set of composition rules, which take into account a specific synchronization protocol for arbitrating access to resources. However, the inventors of these synchronization protocols have also chosen to describe these composition rules with the help of protocol-specific component interfaces. This creates unnecessary framework dependencies on components.

Keywords

  • hierarchical scheduling frameworks
  • resource sharing
  • component composition
  • real-time interfaces
  • synchronization protocol

1. Introduction

Hierarchical scheduling frameworks (HSFs) have been developed to enable composition and reusability of real-time components in complex systems, for example, as described by Holenderski et al. [1] for the automotive domain. During the development of such systems, component models have become important in order to separate and structure the development of system parts over engineering teams (or third parties). The increasing system complexity therefore demands a decoupling of (i) the development and analysis of individual components and (ii) the integration of components on a shared platform. This decoupling requires component interfaces covering functional as well as non-functional aspects, such as timing. An HSF supports system composition from a timing perspective because it isolates components by allocating a processor budget to each component. A component that is validated to meet its timing constraints when executed in isolation will continue to meet its timing constraints after integration (or admission) on a shared uni-processor platform. The HSF is therefore a promising solution for industrial standards, e.g., the AUTomotive Open System ARchitecture (AUTOSAR), which increasingly specify that an underlying operating system should prevent timing faults in any component from propagating to other components on the same processor.

Independent analysis of components and their integration in HSFs is enabled through a set of composition rules (e.g., as proposed by Shin and Lee [2]). By splitting the timing analysis into complementary parts, one can establish global (system-level) timing properties by composing independently specified and analyzed local (component-level) timing properties. Local timing properties are analyzed by assuming a worst-case supply of processor resources to a component. The way of modeling the provisioning of the processor budget to a component is defined by a resource-supply model, e.g., the periodic resource model by Shin and Lee [2] or the bounded-delay model by Feng and Mok [3]. These models make it possible to combine the timing constraints of the tasks within a component (typically deadlines) and abstract from the way tasks are locally scheduled. A component can therefore be represented by a single real-time constraint, called a real-time interface. Components can be composed into an HSF by combining a set of real-time interfaces, which treats each component as if it were a single task. This enables reuse of components.

The global scheduling environment (a parent component) can provide more resources to its (child) components than just processor resources. For example, components may use operating system services, memory-mapped devices, and shared communication devices requiring mutually exclusive access. An HSF with support for resource sharing makes it possible to share serially accessible resources (from now on referred to as resources) between arbitrary tasks, located in arbitrary components, in a mutually exclusive manner. A resource that is only shared by tasks within a single component is a local shared resource; the local scheduling impact of such resources is easily abstracted by real-time interfaces. A resource that is used in more than one component is a globally shared resource.

Any access to a resource (local or global) is assumed to be arbitrated by a synchronization protocol. In practical situations, a component developer is typically unconcerned about the sharing scope of resources. A component may access resources whose merely local or truly global usage is determined only upon integration of components into the HSF. Fortunately, the syntax of the primitives for accessing local and global resources can be the same, even though the synchronization protocols differ (e.g., as implemented by van den Heuvel et al. [4]). The actual binding of function calls to scope-dependent synchronization primitives, which arbitrate either global or local resource access, can be done at compile time or when the component is loaded. Dynamic binding of primitives makes it possible to decouple the specification of global resources in the interface from their use in the implementation. This flexible decoupling of the sharing scope of resources in the application's programming interface is called opacity by Martinez et al. [5], and it abstracts whether or not a resource is globally shared in the system.
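To make this dynamic binding concrete, consider the following minimal Python sketch; all names in it (bind, LocalSRP, GlobalProtocol) are hypothetical and merely illustrate the idea of opaque synchronization primitives, not an existing API.

```python
# Sketch of 'opaque' synchronization primitives: the component calls
# lock()/unlock() without knowing the sharing scope of a resource.
# All names below are hypothetical illustrations.

class LocalSRP:
    """Arbitrates a resource shared only within one component."""
    def lock(self, resource):
        print(f"local SRP: raise component ceiling for {resource}")
    def unlock(self, resource):
        print(f"local SRP: restore component ceiling after {resource}")

class GlobalProtocol:
    """Arbitrates a globally shared resource (e.g., SIRAP, HSRP, or BROE)."""
    def __init__(self, name):
        self.name = name
    def lock(self, resource):
        print(f"{self.name}: raise system ceiling for {resource}")
    def unlock(self, resource):
        print(f"{self.name}: restore system ceiling after {resource}")

# Binding table, filled at integration (load) time, not by the component.
_binding = {}

def bind(resource, arbiter):
    _binding[resource] = arbiter

def lock(resource):            # the only primitives the component sees
    _binding[resource].lock(resource)

def unlock(resource):
    _binding[resource].unlock(resource)

# The integrator decides the scope: R1 turns out to be global, R2 local.
bind("R1", GlobalProtocol("SIRAP"))
bind("R2", LocalSRP())

lock("R1"); unlock("R1")       # component code is identical in both cases
lock("R2"); unlock("R2")
```

The component's implementation is identical whether R1 is local or global, which is exactly the decoupling that opacity demands.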

This chapter presents an extension of this notion of opacity to component analysis and the corresponding derivation of a real-time interface of a component. Opacity requires that the implementation of a component, as well as the way in which interface parameters are derived (the local analysis), is unaware of the global synchronization protocol. In this way, components cannot make use of any knowledge about the constraints and modifications imposed on a component by the global synchronization protocol. By definition of opacity, all computed interface parameters of a component are made independent of a global synchronization protocol.

Based on this observation, we present the following contributions:

  • We present a uniform representation of component interfaces and a corresponding opaque analysis to derive these interfaces.

  • We survey the existing analyses for components that are assumed to run in HSFs with a particular synchronization protocol. We characterize the opacity compliance of their analyses.

2. Related work

In hierarchically scheduled systems, a group of recurring tasks, forming a component, is mapped on a reservation; reservations were originally introduced by Mercer et al. [6] and Rajkumar et al. [7]. We first review existing works on hierarchical scheduling of independent components. Secondly, we lift the assumption of independence of components, so that tasks may share resources with other tasks, either within the same component or located in other components. This means that resource sharing extends across reservations, which calls for specialized synchronization protocols for arbitrating access to resources. Finally, we discuss the extension of real-time interface representations for components requiring access to shared resources through a synchronization protocol.

2.1. Timing interfaces of independent real-time components

The increasing complexity of real-time systems has led to growing attention to component-based systems. Deng and Liu [8] therefore proposed a two-level HSF for open systems, where components may be independently developed and validated. The corresponding timing analysis of the HSF has been presented by Kuo and Li [9] for fixed-priority preemptive scheduling (FPPS) and by Lipari and Baruah [10] for earliest-deadline-first (EDF) global schedulers.

Later, the research community identified the challenges of separating the timing analysis of the HSF by means of real-time interfaces for components. A real-time interface separates the component's internals (i.e., its tasks and scheduling policy) from its global resource-allocation strategy. Wandeler and Thiele [11] calculate demand and service curves for components using real-time calculus. Shin and Lee [2] proposed the periodic resource model to specify periodic processor allocations to components. The explicit-deadline periodic (EDP) resource model by Easwaran et al. [12] extends the periodic resource model of Shin and Lee [2] by specifying an explicit relative deadline for the allocation of budgets. The bounded-delay model by Feng and Mok [3] describes linear service curves with a bounded initial service delay.

Many works have presented approximated [e.g., 13–15] and exact [e.g., 2, 12, 14] budget allocations for the bounded-delay and periodic resource models under preemptive scheduling policies. Both Lipari and Bini [14] and Shin and Lee [16] have presented methods to convert the bounded-delay model into a periodic resource model. In this chapter, we extend these models in order to support task synchronization.

2.2. Task synchronization in hierarchically scheduled systems

In the literature, several alternatives have been presented to accommodate resource sharing between tasks in reservation-based systems. de Niz et al. [17] support this in their fixed-priority preemptively scheduled Linux/RK resource kernel based on the immediate priority ceiling protocol (IPCP) by Sha et al. [18]. Steinberg et al. [19] implemented a capacity-reserve donation protocol to solve the problem of priority inversion for tasks scheduled in a fixed-priority reservation-based system. A similar approach is described by Lipari et al. [20] for EDF-based systems and termed bandwidth inheritance (BWI).

BWI regulates resource access between tasks that each have their own dedicated budget. It works similarly to the priority-inheritance protocol (PIP) by Sha et al. [18]: when a task blocks on a resource, it donates its remaining budget to the task that causes the blocking. BWI does not require a priori knowledge of tasks, i.e., no ceilings need to be precalculated. BWI-like protocols are therefore not very suitable for arbitrating hard real-time tasks in HSFs, because the worst-case interference of all tasks in other components that access global resources needs to be added to a component's budget at integration time in order to guarantee the schedulability of its internal tasks also in case budget needs to be donated. This leads to pessimistic budget allocations for hard real-time components. To accommodate resource sharing in HSFs, three synchronization protocols have therefore been proposed based on the stack resource policy (SRP) by Baker [21]: HSRP by Davis and Burns [22], SIRAP by Behnam et al. [23], and BROE by Bertogna et al. [24].

2.3. Timing interfaces for resource-sharing components

Global resource sharing in HSFs is often based on the SRP by Baker [21] in order to compute blocking delays in the schedule; these computations follow a similar approach as the SRP, which is then re-used at the various scheduling levels in the HSF. In addition, resource sharing requires scheduling mechanisms which have an impact on the local scheduling of a component. If a task that accesses a globally shared resource is suspended during its execution due to the exhaustion of its (processor) budget, excessive blocking periods can occur, which may hamper the timeliness of other components (see Ghazalie and Baker [25]). To prevent such budget depletion during global resource access (see Figure 1), four synchronization protocols have been proposed. These are based on two general mechanisms to prevent budget depletion during the execution of a task's critical section:

Figure 1.

When the budget Qs (allocated every period Ps) of a task depletes while a task executes on a global resource, tasks in other components may experience excessive blocking durations, B.

  1. self-blocking: defer access to a resource when the remaining budget is insufficient to complete the resource access entirely. Self-blocking comes in two flavors: (i) the subsystem integration and resource allocation policy (SIRAP) by Behnam et al. [23] and (ii) the bounded-delay resource open environment (BROE) by Bertogna et al. [24]. With SIRAP, a self-blocked task essentially spin-locks, i.e., it idles the component's budget away while the task is waiting for its budget to replenish. Instead, BROE delays the remaining processor supply to a component if there is insufficient budget to complete the entire critical section and if the budget supplied so far is running ahead with respect to the guaranteed processor utilization.

    The idea of self-blocking has also been considered in different contexts, e.g., see [26] for supporting soft real-time tasks and see Holman and Anderson [27] for a zone-based protocol in a pfair scheduling environment. SIRAP by Behnam et al. [23] and BROE by Bertogna et al. [24] use self-blocking for hard real-time tasks in HSFs on a single processor, and their associated analyses support composition. Behnam et al. [28] have significantly improved the original SIRAP analysis by Behnam et al. [23] for arbitrating multiple shared resources; similarly, Biondi et al. [29] have improved the analysis of BROE for arbitrating multiple shared resources. However, these improvements also complicate the analysis and make it more protocol-specific.

  2. overrun: execute over the budget boundary until the resource is released—called the hierarchical stack resource policy (HSRP) by Davis and Burns [22]. HSRP has two flavors: overrun with payback (OWP) and overrun without payback (ONP). The term without payback means that the additional amount of budget consumed during an overrun does not have to be returned in the next budget period.

    The overrun mechanism (with payback) was first introduced by Ghazalie and Baker [25] in the context of aperiodic servers. This mechanism was later re-used in HSRP in the context of two-level HSFs by Davis and Burns [22] and complemented with a variant without payback. Although the analysis presented by Davis and Burns [22] does not integrate into HSFs, because it lacks support for independent analysis of components, this limitation is lifted by Behnam et al. [30].

Although these four resource-arbitration protocols prevent budget depletion during a task's resource access, processor resources may need to be delivered differently in order to do so. This, in turn, may add constraints to the supply of processor resources in order to preserve the local deadline constraints of tasks. Protocol developers deal with these constraints in different ways, and sometimes these constraints are already taken into account in the local analysis of a component (e.g., see [28–30]). This may therefore result in protocol-specific interfaces of components.

We present a uniform way to model the local constraints on the component's processor supply imposed by resource sharing, by extending the bounded-delay model of Feng and Mok [3] with a periodic constraint, as presented for independent components by Shin and Lee [16]. It is therefore important to know which resources a task will access in order to support independent analysis of each of the resource-sharing components. Our local analysis then yields the same timing interface, regardless of the protocol being used for global resource synchronization. During the integration of components, we take those interfaces and we analyze the impact of synchronization penalties with the help of protocol-specific composition rules.

3. Real-time scheduling model

A system contains a single processor and a set ℛ of M resources R1, …, RM. The processor and (some of) these resources need to be shared by N components, C1, …, CN, and each component executes its work through a set of (concurrent) tasks (as depicted in Figure 2).

Figure 2.

Overview of our system model. A parent component implements a global scheduler to allocate a share of the processor and a share of other resources, e.g., R1 and R2, to each of its child components, C1 … CN. Each child component, Cs, contains a set of tasks, τs1 … τsn, and a local scheduler. Tasks, located in arbitrary components, may share resources. Tasks receive their share of the resources as specified by their component interface, Γs.

3.1. Component and task model

Each component Cs contains a set Ts of ns sporadic, deadline-constrained tasks τs1, …, τsns. A sporadic task generates an infinite sequence of jobs whose activations are separated by at least Tsi time units. Each sporadic job may arrive at an arbitrary moment in time, i.e., it may delay its arrival for an arbitrarily long period. A sporadic task can be seen as a sporadically periodic task which exhibits its worst-case processor demand when subsequent jobs arrive minimally separated in time, i.e., similar to a periodic task under arbitrary phasing (see Liu and Layland [31]).

We extend the timing characteristics of a sporadic task, as introduced by Mok [32], with a parameter to capture its resource requirements. The timing characteristics of a task τsi ∈ Ts are therefore specified by means of a quadruple (Tsi, Esi, Dsi, ℋsi), where Tsi ∈ ℝ⁺ denotes its minimum inter-arrival time, Esi ∈ ℝ⁺ its worst-case execution time (WCET), Dsi ∈ ℝ⁺ its (relative) deadline (where 0 < Esi ≤ Dsi ≤ Tsi), and ℋsi denotes the set of WCETs of its critical sections. The WCET of task τsi within a critical section accessing global resource Rℓ is denoted hsiℓ (i.e., a value contained in ℋsi), where hsiℓ ∈ ℝ⁺ ∪ {0} and hsiℓ ≤ Esi. For tasks, we also adopt the basic assumptions of Liu and Layland [31]: jobs do not suspend themselves, a job of a task does not start before its previous job is completed, and the overhead of context switching and task scheduling is ignored. For notational convenience, tasks (and components) are given in deadline-monotonic order, i.e., τs1 has the smallest deadline and τsns has the largest deadline.

A task set is said to be schedulable if all jobs of the tasks are able to complete their WCET of Esi time units within Dsi time units from their arrival. The tasks of a component have to meet their deadlines with a particular budget on the processor and on each resource being used. These budgets specify the periodically guaranteed fraction of the resource that the tasks may use. The timing interface of a component Cs specifies these budgets, i.e., the interface is denoted by a triple Γs = (Ps, Qs, 𝒳s), where Ps ∈ ℝ⁺ denotes the component's period, Qs ∈ ℝ⁺ denotes its budget on the processor, and 𝒳s denotes the set of resource holding times of global resources (which may be seen as budgets on resources). The maximum value in 𝒳s is denoted by Xs and, just like any budget, a resource holding time must fit within the component's period: 0 ≤ Xs ≤ Ps. The period Ps therefore serves as an implicit deadline of the component.

The set ℛs denotes the subset of global resources accessed by component Cs, so that (∃i : hsiℓ > 0) ⇔ Rℓ ∈ ℛs; the cardinality of ℛs is denoted by ms (which equals the cardinality of 𝒳s). The maximum time that a component Cs executes on the processor while accessing resource Rℓ ∈ ℛs is called the resource holding time, denoted by Xsℓ, where Xsℓ ∈ ℝ⁺ ∪ {0} and Xsℓ > 0 ⇔ Rℓ ∈ ℛs. The relation between the WCET of a critical section (hsiℓ) and the resource holding times (Xsℓ) of a component is further explained in Section 4.1.

3.2. Scheduling model

A single system-level (global) scheduler selects which component executes on the shared processor and when. The component-level (local) scheduler of the executing component decides which of its tasks is allocated the processor. The global scheduler and each of the local schedulers of individual components may apply different scheduling policies. As scheduling policies, we consider EDF, an optimal dynamic uniprocessor scheduling algorithm, and the deadline-monotonic (DM) algorithm, an optimal priority assignment for FPPS of deadline-constrained tasks. The SRP by Baker [21] is used to arbitrate access to shared resources between components at the global level; similarly, the SRP is used at the local level to arbitrate access to shared resources between tasks.

3.3. Synchronization protocol

This chapter focuses on arbitrating global shared resources using the SRP. To be able to use the SRP for synchronizing global resources, its associated ceiling terms need to be extended.

3.3.1. Preemption levels

With the SRP, each task τsi is assigned a static preemption level equal to πsi = 1/Dsi. Similarly, we assign a component a preemption level equal to Πs = 1/Ps, where period Ps serves as a relative deadline. If components (or tasks) have the same calculated preemption level, then the smallest index determines the highest preemption level.

3.3.2. Resource ceilings

With every global resource Rℓ, two types of resource ceilings are associated: a global resource ceiling RCℓ for global scheduling and a local resource ceiling rcsℓ for local scheduling. These ceilings are statically calculated values, defined as the highest preemption level of any component or task that shares the resource. According to the SRP, these ceilings are defined as follows:

$$RC_\ell = \max\bigl(\Pi_N,\ \max\{\Pi_s \mid R_\ell \in \mathcal{R}_s\}\bigr) \tag{1}$$
$$rc_{s\ell} = \max\bigl(\pi_{s n_s},\ \max\{\pi_{si} \mid h_{si\ell} > 0\}\bigr) \tag{2}$$

We use the outermost max in (1) and (2) to define RCℓ and rcsℓ in those situations where no component or task uses Rℓ. The values of the local and global ceilings as defined in (1) and (2) by definition guarantee mutually exclusive access to their corresponding resource Rℓ by the sharing tasks and components; therefore, the values of these ceilings cannot be decreased any further. In some situations—as further investigated by Shin et al. [33] and van den Heuvel et al. [34]—it may be desirable to limit preemptions more than is strictly required for mutually exclusive resource access, which can be established by artificially increasing the value of the local resource ceilings in (2). On the contrary, increasing the global ceiling, i.e., the value of RCℓ in (1), never improves schedulability.
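For illustration, the ceilings in (1) and (2) can be computed as in the following Python sketch; the dictionary-based data layout is our own assumption and not part of the model.

```python
# Sketch: compute a global ceiling RC_l (Eq. 1) and a local ceiling rc_sl
# (Eq. 2). Preemption levels: Pi_s = 1/P_s and pi_si = 1/D_si.

def global_ceiling(levels, sharers):
    """levels: {component: Pi_s}; sharers: components accessing R_l.
    The outermost max with the lowest level (Pi_N) covers the case
    in which no component uses R_l."""
    return max([min(levels.values())] + [levels[c] for c in sharers])

def local_ceiling(levels, h):
    """levels: {task: pi_si}; h: {task: h_sil} critical-section WCETs on R_l."""
    return max([min(levels.values())] + [levels[t] for t in h if h[t] > 0])

# Example: components C1..C3 with periods 5, 10, and 20; C1 and C3 share R1.
Pi = {"C1": 1 / 5, "C2": 1 / 10, "C3": 1 / 20}
print(global_ceiling(Pi, {"C1", "C3"}))   # RC_1 = 1/5 = 0.2
```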

3.3.3. System and component ceilings

The system ceiling and the component ceiling are dynamic parameters that change during execution. The system ceiling is equal to the highest global resource ceiling of a currently locked resource in the system. Similarly, the component ceiling is equal to the highest local resource ceiling of a currently locked resource within a component. Under the SRP, a task can only preempt the currently executing task if its preemption level is higher than its component ceiling. A similar condition for preemption holds for components.

4. Opaque schedulability analysis of a component

After developing a component and before publishing it to a framework integrator, a component is packaged as a re-usable entity. This includes deriving a timing interface to abstract from deadline constraints of tasks. Such an abstraction requires an explicit choice for a resource-supply model, capturing the processor supply to a component. Moreover, a component specifies what it needs in terms of resources and exposes those resources that may be shared globally in its interface. Whether or not a global resource is actually used by other components is unknown within the context of a component.

There are several ways to account for local scheduling penalties due to global resource sharing. One might assume that each resource must be globally shared and, subsequently, account for the worst-case overhead inside the local analysis. Alternatively, we opt for the assumption that all resources are just locally shared during the local analysis and we compensate for global sharing between components at integration time. These assumptions are often made tacitly.

The latter alternative presents the same view as during component development, i.e., a component has the entire platform at its disposal and all resources. Whenever a synchronization protocol for global resources is used that is compliant with a synchronization protocol for local resources, the local analysis of a component can be based on local properties only. We call such a local analysis opaque because it separates local and global resource arbitration.

Definition 1 An opaque analysis provides a sufficient local schedulability condition for an individual component. It treats all resources accessed by the component as local, so that, even under global sharing, properties of the global synchronization protocol do not influence the computed interface parameters.

The key consequence of an opaque local analysis is the absence of assumptions on the global synchronization protocol. Section 4.1 shows how the resource holding times, Xsℓ ∈ 𝒳s, can be computed without making assumptions on the global synchronization protocol. Next, we accomplish the same for the budget parameter Qs in the interface of the component. This means that the values of the resource holding times should be absent in the equations that validate the local tasks' schedulability. Table 1 gives an overview of local analyses found in the literature by indicating their opacity. This section proceeds with an opaque analysis which ultimately results in a uniform representation of component interfaces, Γs = (Ps, Qs, 𝒳s).

| Analysis of resource-sharing strategies | Authors | Opacity | Impact on local analysis |
| --- | --- | --- | --- |
| BROE | Bertogna et al. [24] | Yes | — |
| Enhanced BROE | Biondi et al. [29] | No | Proc.-supply model uses RHTs |
| HSRP—overrun without payback (ONP) | Davis and Burns [22] | No | Not compositional |
| HSRP—overrun without payback (ONP) | Behnam et al. [30] | Yes | — |
| Enhanced overrun | Behnam et al. [30] | No | Proc.-supply model uses RHTs |
| Improved overrun without payback | Behnam et al. [35] | No | Proc.-supply model uses RHTs |
| HSRP—overrun with payback (OWP) | Davis and Burns [22] | No | Not compositional |
| HSRP—overrun with payback (OWP) | Behnam et al. [30] | No | Proc.-supply model uses RHTs |
| SIRAP | Behnam et al. [23,28] | No | Proc.-supply model uses RHTs |

Table 1.

Overview of the synchronization protocols' support for integrating resource-sharing components into the HSF with an opaque analysis.

4.1. Computing resource holding times

The resource holding times were introduced by Bertogna et al. [36] in order to represent the cumulative processor time consumed by the tasks within the same component Cs that can preempt a task τsi while it is holding a resource Rℓ. The way of computing the resource holding times of tasks therefore depends on the local scheduling policy, because the scheduling policy determines the possible preemptions. Besides the scheduling policy, preemptions may be limited by a resource ceiling, i.e., the value of the local resource ceiling rcsℓ also influences the resource holding times (see Figure 3). In an HSF, the resource holding time represents the longest critical-section length as experienced by blocked tasks in other components.

In the literature, various system assumptions in the description of a particular global synchronization protocol have been shown to affect the way of computing resource holding times [e.g., see 23, 24, 30]. However, all these methods can be simplified and unified (independently of the local scheduling policy and the global synchronization protocol) by assuming that the component's period Ps is smaller than the tasks' periods.

Figure 3.

The resource holding time (RHT) represents the cumulatively consumed processor time of any task of a component while one task holds a resource. In order to guarantee mutually exclusive access to resource Rℓ, the associated resource ceiling (rcsℓ) is at least equal to the highest preemption level of any (local) task sharing resource Rℓ. One may consider further limiting preemptions during critical sections by increasing the resource ceiling. On the one hand, this may lead to longer blocking delays for tasks with a higher preemption level (in this case: τs1). On the other hand, it decreases the tasks' RHTs (in this case: of τs2 or τs3).

The main observation leading to this simplification is that an access to a global resource must be followed by a release of the resource in the same component period, for example, as established by the self-blocking or overrun mechanisms considered in the real-time literature. If a resource must be accessed and released in the same component period, which is smaller than the task periods, then the number of preemptions within a critical section is limited and this, in turn, leads to a smaller resource holding time. The resource holding time of a task τsi accessing a resource Rℓ is captured by a value Xsiℓ, which represents the amount of processor time supplied to component Cs from the access until the release of resource Rℓ by task τsi. We now present a lemma that captures the possible preemptions of task τsi, regardless of other system assumptions.

Lemma 1 (Taken from van den Heuvel et al. [34]). Given $P_s < T_s^{\min}$, where $T_s^{\min} = \min\{T_{si} \mid 1 \le i \le n_s\}$, all tasks τsj that are allowed to preempt a critical section accessing a global shared resource Rℓ, i.e., πsj > rcsℓ, can preempt at most once during an access to resource Rℓ when using any global SRP-compliant protocol, regardless of whether the local scheduler is EDF or FPPS.

Lemma 1 makes it possible to compute the resource holding time Xsiℓ of task τsi for resource Rℓ as follows:

$$X_{si\ell} = h_{si\ell} + \sum_{\pi_{sj} > rc_{s\ell}} E_{sj} \tag{3}$$

and the maximum resource holding time within a component Cs is computed as

$$X_{s\ell} = \max\{X_{si\ell} \mid 1 \le i \le n_s\} \tag{4}$$

The computed values of Xsℓ are included in the set 𝒳s, which is part of the component's interface Γs. We recall that opacity requires that the way of computing the interface parameters Qs and 𝒳s of a component is independent of the global synchronization protocol; Lemma 1 establishes this requirement for the set of resource holding times 𝒳s of a component.
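A minimal Python sketch of (3) and (4), valid under the assumption of Lemma 1 (i.e., Ps smaller than the smallest task period, so each task above the ceiling preempts at most once):

```python
# Sketch of Eqs. (3) and (4): resource holding times under Lemma 1.

def holding_time(h_sil, E, pi, rc_sl):
    """h_sil: critical-section WCET of the accessing task on R_l;
    E: {task: WCET}; pi: {task: preemption level}; rc_sl: local ceiling."""
    return h_sil + sum(E[j] for j in E if pi[j] > rc_sl)      # Eq. (3)

def max_holding_time(h, E, pi, rc_sl):
    """h: {task: h_sil} for every task accessing R_l."""
    return max(holding_time(h[i], E, pi, rc_sl) for i in h)   # Eq. (4)

# Example: deadlines 7, 14, 28 give pi = 1/D; tau_s2 holds R_l for 1 time
# unit, and rc_sl = 1/14 lets only tau_s1 preempt (at most once).
E  = {"t1": 1.0, "t2": 3.0, "t3": 4.0}
pi = {"t1": 1 / 7, "t2": 1 / 14, "t3": 1 / 28}
print(max_holding_time({"t2": 1.0}, E, pi, rc_sl=1 / 14))     # 1.0 + E[t1] = 2.0
```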

4.2. Computing a processor budget

The traditional schedulability analysis of tasks fills task characteristics into a demand-bound function or a request-bound function and compares the tasks' requirements with the supplied processor resources. The same schedulability analysis holds for tasks executing within a component, although the processor supply is modeled in a more complicated way.

The processor supply refers to the amount of processor resources that a component Cs can provide to its tasks in order to satisfy their deadline constraints. The linear lower bound on the processor resources supplied to a component with a periodically assigned processor budget (specified by an interface Γs = (Ps, Qs, 𝒳s)) is given by [2]:

$$\mathrm{lsbf}_{\Gamma_s}(t) = \frac{Q_s}{P_s}\bigl(t - 2(P_s - Q_s)\bigr) \tag{5}$$

The longest interval in which a component may receive no processor supply on a periodic resource Γs = (Ps, Qs, 𝒳s) is called the blackout duration, i.e.,

$$BD_s = \max\{t \mid \mathrm{lsbf}_{\Gamma_s}(t) = 0\} = 2(P_s - Q_s) \tag{6}$$

The lsbfΓs(t) in (5) is not only a linear approximation of the supplied processor resources in an interval of length t; it also models a bounded-delay resource supply as defined by [3], with a continuous, fractional provisioning of Qs/Ps of the shared processor (also referred to as the virtual processor speed) and a longest initial service delay of BDs time units.
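The supply bound (5) and blackout duration (6) translate directly into code; a minimal sketch:

```python
# Sketch of Eqs. (5) and (6) for a periodic interface (P_s, Q_s).

def lsbf(P, Q, t):
    """Linear lower bound on the processor supply in an interval of length t."""
    return max(0.0, (Q / P) * (t - 2 * (P - Q)))

def blackout(P, Q):
    """Longest interval without guaranteed supply: BD_s = 2(P_s - Q_s)."""
    return 2 * (P - Q)

print(lsbf(10, 2.5, 27))    # 0.25 * (27 - 15) = 3.0
print(blackout(10, 2.5))    # 15.0
```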

4.2.1. Testing interfaces with earliest-deadline-first scheduling of tasks

Assume we are given a component Cs whose tasks have to execute on a periodic budget Qs every period Ps. If this processor budget is allocated to the tasks according to an EDF scheduling policy, then the following sufficient schedulability condition holds (as described by Shin and Lee [16]):

$$\forall t \in \mathcal{S}_s : \mathrm{dbf}_s(t) \le \mathrm{lsbf}_{\Gamma_s}(t) \tag{7}$$

where dbfs(t) denotes the cumulative processor demand of all tasks of component Cs in a time interval of length t, and the set 𝒮s denotes a non-empty finite set of time-interval lengths (see Baruah [37]), i.e.,

$$\mathcal{S}_s \stackrel{\mathrm{def}}{=} \bigl\{\, t = b\,T_{si} + D_{si} \;\big|\; 1 \le i \le n_s;\ b \in \mathbb{N};\ t \in \bigl(0, \operatorname{lcm}(T_{s1}, \ldots, T_{sn_s})\bigr] \,\bigr\} \tag{8}$$

The dbfs(t) is fully compliant with the schedulability analysis for task sets on a dedicated unit-speed processor, i.e.,

$$\mathrm{dbf}_s(t) = b_s(t) + \sum_{1 \le i \le n_s} \left\lfloor \frac{t - D_{si} + T_{si}}{T_{si}} \right\rfloor E_{si} \tag{9}$$

The blocking term, bs(t), is defined according to the SRP, as described by Baruah [37]:

$$b_s(t) = \max\{\, h_{sj\ell} \mid \exists k : h_{sk\ell} > 0 \wedge D_{sk} \le t < D_{sj} \,\} \tag{10}$$

The algorithmic complexity of verifying the scheduling condition in (7) is pseudo-polynomial in the number of tasks.
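The following Python sketch assembles (7)–(10) into an executable test. It assumes integer task periods (for the lcm in (8)) and represents each task as a tuple (T, E, D, H), with H a dict of critical-section WCETs per resource; this layout is our own choice.

```python
# Sketch of the local EDF test (Eqs. 7-10) on a periodic budget (P_s, Q_s).
from functools import reduce
from math import floor, gcd

def lsbf(P, Q, t):                                    # Eq. (5)
    return max(0.0, (Q / P) * (t - 2 * (P - Q)))

def dbf(tasks, t):                                    # Eq. (9), without blocking
    return sum(floor((t - D + T) / T) * E
               for (T, E, D, H) in tasks if t >= D)

def blocking(tasks, t):                               # Eq. (10)
    if not any(D <= t for (_, _, D, H) in tasks if H):
        return 0.0
    return max((max(H.values()) for (_, _, D, H) in tasks if H and D > t),
               default=0.0)

def edf_schedulable(tasks, P, Q):                     # Eqs. (7) and (8)
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 (T for (T, _, _, _) in tasks))
    points = sorted({b * T + D for (T, _, D, _) in tasks
                     for b in range(lcm // T + 1) if 0 < b * T + D <= lcm})
    return all(blocking(tasks, t) + dbf(tasks, t) <= lsbf(P, Q, t)
               for t in points)

# One task (T = D = 27, E = 5) with a local critical section of 0.5:
print(edf_schedulable([(27, 5, 27, {"R1": 0.5})], P=10, Q=3.6))   # True
```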

4.2.2. Testing interfaces with fixed-priority preemptive scheduling of tasks

Assume we are given a component Cs whose tasks have to execute on a periodic budget Qs every period Ps. If this processor budget is allocated to the tasks according to an FPPS policy, then the following sufficient schedulability condition holds (as described by Shin and Lee [16]):

$$\forall\, 1 \le i \le n_s : \exists t \in \mathcal{S}_{si} : \mathrm{rbf}_s(t, i) \le \mathrm{lsbf}_{\Gamma_s}(t) \tag{11}$$

where rbfs(t, i) denotes the worst-case cumulative processor request of τsi in a time interval of length t, and the set 𝒮si denotes a non-empty finite set of time-interval lengths (see Lehoczky et al. [38]), i.e.,

$$\mathcal{S}_{si} \stackrel{\mathrm{def}}{=} \{\, t = b\,T_{sa} \mid a < i;\ b \in \mathbb{N}^+;\ t \in (0, D_{si}] \,\} \cup \{D_{si}\} \tag{12}$$

The rbfs(t, i) is fully compliant with the schedulability analysis for task sets on a dedicated unit-speed processor, i.e.,

$$\mathrm{rbf}_s(t, i) = b_{si} + \sum_{1 \le j \le i} \left\lceil \frac{t}{T_{sj}} \right\rceil E_{sj} \tag{13}$$

The blocking term, bsi, is defined according to the SRP, as described by Baker [21]:

$$b_{si} = \max\{\, h_{sj\ell} \mid \pi_{sj} < \pi_{si} \le rc_{s\ell} \,\} \tag{14}$$

The algorithmic complexity of verifying the scheduling condition in (11) is pseudo-polynomial in the number of tasks.
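Analogously, (11)–(14) can be sketched as follows. Tasks are assumed to be listed in deadline-monotonic order, and the ceiling condition of (14) is simplified here to the largest critical section of any lower-priority task (i.e., all local ceilings are assumed high enough to block task i):

```python
# Sketch of the local FPPS test (Eqs. 11-14) on a periodic budget (P_s, Q_s).
from math import ceil

def lsbf(P, Q, t):                                    # Eq. (5)
    return max(0.0, (Q / P) * (t - 2 * (P - Q)))

def rbf(tasks, i, b_i, t):                            # Eq. (13)
    return b_i + sum(ceil(t / T) * E for (T, E, _, _) in tasks[:i + 1])

def fpps_schedulable(tasks, P, Q):                    # Eqs. (11) and (12)
    for i, (_, _, D_i, _) in enumerate(tasks):
        # Simplified Eq. (14): largest lower-priority critical section.
        b_i = max((max(H.values()) for (_, _, _, H) in tasks[i + 1:] if H),
                  default=0.0)
        pts = sorted({b * T for (T, _, _, _) in tasks[:i]
                      for b in range(1, int(D_i // T) + 1)} | {D_i})
        if not any(rbf(tasks, i, b_i, t) <= lsbf(P, Q, t) for t in pts):
            return False
    return True

# Two tasks in deadline-monotonic order; the second one locks R1 for 0.5:
print(fpps_schedulable([(20, 2, 20, {}), (40, 4, 40, {"R1": 0.5})],
                       P=5, Q=2))                     # True
```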

4.2.3. Deriving the processor budget from the scheduling test

Computing a processor budget for a component Cs involves a function that takes a fixed period Ps, a task set, and a local scheduling policy as input and returns the smallest component budget Qs. Depending on the local scheduling policy, the function must satisfy the condition in (7) or (11).

One may approximate the size of the smallest required processor budget by means of a binary search in the range Qs ∈ (0, Ps). Alternatively, as suggested by, amongst others, Shin and Lee [2], the smallest value of budget Qs can be found by intersecting the left-hand sides and the right-hand sides of the inequalities; this intersection amounts to solving a simple quadratic equation (e.g., see Lipari and Bini [14]).
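Both approaches are easy to sketch in Python. The closed form below solves the quadratic obtained by equating (5) to a fixed demand at a single testing point, in the spirit of Lipari and Bini [14]; the binary search treats any of the schedulability tests above as a monotone predicate:

```python
# Sketch: two ways to find the smallest budget Q_s for a fixed period P_s.
from math import sqrt

def smallest_budget(schedulable, P, eps=1e-6):
    """Binary search; schedulable(Q) must be monotone (more budget never hurts)."""
    lo, hi = 0.0, P
    if not schedulable(hi):
        return None                   # infeasible even with the full period
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if schedulable(mid) else (mid, hi)
    return hi

def budget_at_point(P, demand, t):
    """Solve (Q/P)(t - 2(P - Q)) = demand for Q, i.e., the positive root of
    2Q^2 + (t - 2P)Q - P*demand = 0 (cf. Lipari and Bini [14])."""
    b, c = t - 2.0 * P, -P * demand
    return (-b + sqrt(b * b - 8.0 * c)) / 4.0

# Single task (T = D = 27, E = 5) on a period-10 budget; the binding testing
# point is t = 27 with a demand of 5 time units:
print(budget_at_point(P=10, demand=5, t=27))                      # ~3.547
print(smallest_budget(lambda q: (q / 10) * (27 - 2 * (10 - q)) >= 5, P=10))
```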

5. Integration of components and global schedulability

In this section, we present the composition rules for components in HSFs in the presence of globally shared resources. A global integration test implements the admission control for components based on the resource requirements specified in their interfaces. The global analysis explicitly takes into account the corresponding penalties for global resource sharing, which depend on the synchronization protocol applied at the top-level scheduler. These penalties include (i) blocking between components and (ii) protocol-specific penalties (in our case, of either BROE, ONP, OWP, or SIRAP). Depending on the chosen global synchronization protocol, the latter influences either the processor requests by a component or the processor supplies to a component. To analyze these scheduling penalties appropriately, it is reasonable to assume that the synchronization protocol of the HSF is known during component integration.

Looking at resource sharing between components, the effectively used processor bandwidth of a component therefore depends on two parts (see Section 3.1): the processor budget (denoted by Qs in interface Γs) and the set of budgets on global resources (the resource holding times, denoted by 𝒳s in interface Γs). The budget Qs should be sufficient to meet the deadline constraints of the tasks, and no other constraints should influence the size of Qs (e.g., constraints related to global synchronization should be avoided). The resource holding times define the amount of execution time that a component receives for accessing a global resource. In other words, if component Cs is granted access to resource Rℓ, it receives Xsℓ time units of execution time on resource Rℓ prior to the implicit deadline Ps. The global synchronization protocol defines how this is established, and the run-time rules of the protocol may or may not lead to an overlap of the processor allocation times to a component as contained in Qs and 𝒳s. In the remainder of this section, we show the integration of components for two scheduling policies (EDF and FPPS) applied to the allocated bandwidth of the components.

5.1. Earliest-deadline-first scheduling of components

In the processor supply model, we assumed that the component's period also serves as a deadline for the provisioning of its processor budget. The following utilization-based schedulability condition, as defined by Baruah [37], can therefore be applied to the top-level EDF-scheduler:

$$\forall w : 1 \le w \le N : \frac{B(P_w)}{P_w} + \sum_{1 \le s \le w} \frac{Q_s + O_s(P_s)}{P_s} \le 1 \tag{15}$$

The blocking term B(t) represents the resource holding time of a potentially interfering, resource-sharing component with a deadline beyond that of the considered component Cw; it is defined by Baruah [37]:

$$B(t) = \max\{\, X_{u\ell} \mid \exists s : R_\ell \in \mathcal{R}_u \cap \mathcal{R}_s \wedge P_s \le t < P_u \,\} \tag{16}$$

The term Os(t) defines the additional amount of budget that a component Cs requires under a certain global synchronization protocol in order to prevent excessive blocking durations for other components in the system.

With ONP, a component can request an additional amount of Xs time units of processor time each period Ps. Similarly, with SIRAP, a task of a component may idle away at most Xs time units of processor time each period Ps. Hence, the synchronization penalties of both SIRAP and ONP can be modeled by allocating Xs time units of processor time in addition to the regular processor budget Qs in each period Ps, so that the term Os(t) is defined by Behnam et al. [30] for ONP and by van den Heuvel et al. [39] for SIRAP:

$$O_s(t) = \left\lceil \frac{t}{P_s} \right\rceil X_s \tag{17}$$

With OWP, a component can only request an additional amount of Xs time units of processor time once. Hence, the term Os(t) is defined by [30]:

$$O_s(t) = \begin{cases} X_s & \text{if } t \ge P_s \\ 0 & \text{otherwise} \end{cases} \tag{18}$$

For both SIRAP and BROE, it is required that Xs ≤ Qs in order to be able to complete an entire critical section within a single budget of size Qs. For SIRAP, we establish this condition by allocating Qs + Xs time units of processor budget every period Ps. For BROE, however, we increase Qs by Os(t) time units if it is too small to fit Xs time units contiguously, where Os(t) is defined as follows:

$$O_s(t) = \left\lceil \frac{t}{P_s} \right\rceil \max\bigl(0,\ X_s - Q_s\bigr) \tag{19}$$
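The admission test (15) with the protocol-specific terms (17)–(19) can be sketched as follows. For simplicity, the sketch assumes a single globally shared resource, so that the blocking term (16) reduces to the largest resource holding time of a longer-period component; this simplification is ours.

```python
# Sketch of the global EDF test (Eq. 15) with overhead terms O_s (Eqs. 17-19).
# Components are triples (P, Q, X); a single shared resource is assumed.
from math import ceil

def O(protocol, P, Q, X, t):
    if protocol in ("ONP", "SIRAP"):          # Eq. (17)
        return ceil(t / P) * X
    if protocol == "OWP":                     # Eq. (18)
        return X if t >= P else 0.0
    if protocol == "BROE":                    # Eq. (19)
        return ceil(t / P) * max(0.0, X - Q)
    raise ValueError(protocol)

def B(components, t):                         # Eq. (16), single-resource case
    return max((X for (P, _, X) in components if P > t), default=0.0)

def edf_admissible(components, protocol):     # Eq. (15)
    comps = sorted(components)                # non-decreasing periods P_1..P_N
    for w in range(len(comps)):
        P_w = comps[w][0]
        load = B(comps, P_w) / P_w + sum(
            (Q + O(protocol, P, Q, X, P)) / P for (P, Q, X) in comps[:w + 1])
        if load > 1.0:
            return False
    return True

ifaces = [(10, 2.5, 0.5), (20, 4.0, 1.0), (40, 6.0, 0.0)]
print(edf_admissible(ifaces, "ONP"))          # True for this example
```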

5.2. Fixed-priority preemptive scheduling of components

For global FPPS of components—which by definition disallows BROE—the following sufficient scheduling condition can be applied (as defined by Lehoczky et al. [38]):

$$\forall\, 1 \le s \le N : \exists t \in (0, P_s] : B_s + \sum_{1 \le r \le s} \left( O_r(t) + \left\lceil \frac{t}{P_r} \right\rceil Q_r \right) \le t \tag{20}$$

The blocking term, Bs, is defined by the resource holding time of a lower-priority, resource-sharing component (in line with Baker [21]):

$$B_s = \max\{\, X_{u\ell} \mid \Pi_u < \Pi_s \le RC_\ell \,\} \tag{21}$$

and the term Or(t) defines the additional amount of budget that a component Cr requires under a certain global synchronization protocol in order to prevent excessive blocking durations for other components in the system.

Similar to EDF, under global FPPS the term Or(t) is defined by Behnam et al. [30] for ONP and by van den Heuvel et al. [39] for SIRAP:

$$O_r(t) = \left\lceil \frac{t}{P_r} \right\rceil X_r \tag{22}$$

Also under FPPS, a component arbitrated by OWP can request an additional amount of Xr time units of processor time only once. Hence, the term Or(t) becomes time independent and is defined by [30]:

$$O_r(t) = X_r \tag{23}$$

Just as with tasks, a finite set of time-interval lengths t can be tested in order to determine the schedulability of a set of components, i.e., the set can be specified as in (12), using the component period Ps as the deadline for the execution of budget (WCET) Qs. The algorithmic complexity of verifying the scheduling condition in (20) is then pseudo-polynomial in the number of components.
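A corresponding sketch of the global FPPS test (20)–(23), under the same single-resource simplification as before:

```python
# Sketch of the global FPPS test (Eqs. 20-23); components (P, Q, X) are in
# deadline-monotonic order after sorting, and BROE is excluded under FPPS.
from math import ceil

def O(protocol, P, X, t):
    if protocol in ("ONP", "SIRAP"):          # Eq. (22)
        return ceil(t / P) * X
    if protocol == "OWP":                     # Eq. (23), time independent
        return X
    raise ValueError(protocol)

def fpps_admissible(components, protocol):
    comps = sorted(components)                # highest priority: smallest P
    for s, (P_s, _, _) in enumerate(comps):
        B_s = max((X for (_, _, X) in comps[s + 1:]), default=0.0)  # Eq. (21)
        pts = sorted({b * P for (P, _, _) in comps[:s + 1]
                      for b in range(1, int(P_s // P) + 1)} | {P_s})
        if not any(B_s + sum(O(protocol, P, X, t) + ceil(t / P) * Q
                             for (P, Q, X) in comps[:s + 1]) <= t
                   for t in pts):             # Eq. (20)
            return False
    return True

ifaces = [(10, 2.5, 0.5), (20, 4.0, 1.0), (40, 6.0, 0.0)]
print(fpps_admissible(ifaces, "ONP"))         # True for this example
```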

6. On the importance of opacity and its properties

Traditional protocols such as the PCP by Sha et al. [18] and the SRP by Baker [21] can be used for local resource sharing in HSFs, as observed by Almeida and Pedreiras [13]. With an opaque local analysis, we can re-use the same local analysis of components in the presence of globally shared resources. The local analysis for HSFs with the ONP protocol, as presented by [30], already satisfied the notion of opacity, because it applies a simple overrun upon integration and nothing protocol-specific locally. In the previous sections, we also unified the local analysis of HSFs with the other resource-sharing protocols (OWP, SIRAP, and BROE). This means that the interface of a component is independent of the resource-arbitration protocol. In this section, we briefly review non-opaque analyses and we highlight some important properties of opacity.

6.1. Monotonicity of the analysis

In Sections 4 and 5, we have summarized the compositional timing analysis of an HSF: the global analysis verifies the admission of a set of components into the HSF, and the local analysis verifies the deadline constraints of the tasks of each component in isolation on a periodically allocated budget, Qs. The local scheduling conditions in (7) and (11) determine the smallest size of Qs. For these conditions, we observe that increasing a local resource ceiling rcsℓ cannot lead to less blocking of local tasks (term bs(t) or bsi) and, thus, it cannot lead to a smaller budget Qs. As a result, we have the following property.

Property 1 In an analysis satisfying (7) for EDF or (11) for FPPS, the total requested resources of a component, reflected in the allocated budget Qs, are monotonically non-decreasing with an increase of a local resource ceiling rcsℓ, for any Rℓ ∈ ℛs.

Although not mentioned explicitly, this property is tacitly assumed in some analyses (e.g., by Shin et al. [33], Behnam et al. [40], and Behnam et al. [30]), and our analysis supports it as well. It holds for all global synchronization protocols except for some non-opaque analyses (see Table 1).

6.1.1. SIRAP and its opacity

The analysis as traditionally presented for SIRAP is non-opaque. However, SIRAP is an important and widely used protocol, so we side-step this problem by applying the same local and global analysis of ONP also to SIRAP, i.e., inserting Xs units of idle time every period Ps. The intuition behind this idea is that SIRAP never idles away more processor time in one component period Ps than ONP requires for overrun (see van den Heuvel et al. [39] for the details). This adjusted analysis of SIRAP satisfies Property 1.

6.1.2. How Property 1 could be violated

Since global resources may need to be shared with tasks in other components, the idea underlying most of the non-opaque analyses in Table 1 is to use the resource holding times of local tasks to tighten the analysis of wasted resources. Often this means that the tasks of a component are penalized by changes in the processor supply due to arbitrated accesses to global resources. The properties of a synchronization protocol are then reflected in the computed value of budget Qs of a component. For example, this could work based on the following observations:

  • With OWP, see Behnam et al. [30], the resource holding time can be used to account for the processor time that is being exchanged between two consecutive component periods due to overruns and paybacks.

  • With ONP, see Behnam et al. [35], the resource holding times can be used to tighten the delivery of budget Qs in component period Ps, because an overrun must fit in each component period as well.

  • With SIRAP, see Behnam et al. [23], the resource holding times can be used to determine the amount of resources that can be idled away by the tasks in each component period.

  • With BROE, see Biondi et al. [29], the resource holding times can be used to bound an additional delay due to resource sharing experienced by tasks compared to the regular delay of their resource supply (BDs).

Intuitively, resource sharing comes with penalties for the tasks involved. However, sometimes the local tasks may also benefit from resource sharing, i.e., the tasks may require fewer processor resources. Section 6.1.3 presents an example of such a scenario using a non-opaque analysis of ONP. This clearly shows that a non-opaque analysis may violate our monotonicity property (Property 1).

6.1.3. Motivating example

We now demonstrate the effect of using properties of the global synchronization protocol for optimizing the parameters of the component's interface in the local analysis. We consider a simple non-opaque analysis for ONP. Behnam et al. [35] improved ONP by observing that the normal budget Qs of a component Cs has to be served before Ps − Xs instead of by the regular relative deadline Ps (as we assumed for our analysis). The reason is that an overrun of at most length Xs must also fit in each period Ps after budget Qs has been depleted. This means that the blackout duration of the processor supply becomes shorter, so that tasks wait less long before they are selected for execution by the local scheduler. Behnam et al. [35] model their idea with the help of the explicit-deadline periodic resource model by Easwaran et al. [12]. The explicit deadline Ps − Xs improves the required budget of the tasks in a non-opaque way, because it uses resource holding times to tighten the deadline.

Example 1 Consider a component C1 with a period P1 = 10 and a single task τ11 = (27, 5, 27, {0.5}), which specifies an access to a global resource Rℓ for a duration of h11ℓ = X11ℓ = X1 = 0.5 time units. We use ONP for arbitrating access to global resources.

According to the improved ONP analysis of Behnam et al. [35], where the resource holding time of 0.5 time units is exploited to tighten the deadline for budget Q1, it is sufficient to allocate Q1 = 2.5 time units every period of 10 time units. This budget allocation can be captured by the interface Γ1 = (P1, Q1, {X1}) = (10, 2.5, {0.5}) with explicit deadline P1 − X1. This interface is derived based on the assumption that an additional amount of 0.5 time units may need to be supplied within one component period to complete a resource access by means of a budget overrun.

If resource Rℓ turns out to be local to component C1, i.e., component C1 is independent of other components in the system, then budget overruns are unnecessary for accessing resource Rℓ. An independent component C1 would have required a periodic budget of Q1 = 8/3 time units every period of 10 time units. We recall, however, that the 2.5 time units must be supplied within 9.5 time units from the budget's release, leading to a density of processor allocations of 2.5/9.5. This density is higher than the one without resource sharing, i.e., (8/3)/10 < 2.5/9.5.

Once the HSF is composed, one may admit a component into the system requiring 8/3 time units every 10 time units, while one may need to reject a component requiring 2.5 time units before relative deadline 9.5 every 10 time units. This depends on the resource requirements of other components in the HSF. Hence, a non-opaque analysis may give the illusion of resource efficiency by artificially creating resource dependencies.

By making assumptions in the local analysis on how ONP changes the processor supply to a component, a manufacturer may give an untruthful impression on the efficiency of the component. Such impressions cannot be given through an opaque analysis due to its monotonicity property. For some protocol-specific local analysis, monotonicity of the local analysis is hard to prove or disprove. Nevertheless, the above example clearly shows the importance of monotonicity for multi-vendor environments, for example, obtained through an opaque local analysis.

6.2. Detecting and accounting for shared resources

An additional problem with a non-opaque analysis (e.g., see the analyses in Table 1) is that it charges the budget of a component with synchronization penalties, regardless of whether or not an accessed resource is shared with other components. As a result, the synchronization penalties are incorporated in the component's timing interface and cannot be taken out any longer. Hence, it prevents us from accounting for the synchronization penalties of just those resources that really need to be shared between components, as detected at integration time.

Since a component is unaware of other components, it is also unknown which resources actually need to be shared. Instead of directly deriving the interface of a component, one may therefore perform an intermediate step. One may specify partial interfaces for components for each of the resources the component requires. Upon integration of components, these partial interfaces can then be combined into a true interface by selecting just the interfaces corresponding to the resources that are globally shared with other components.

This procedure is intuitive with an opaque analysis and works as follows. Given a component Cs, we assume that Ps is given by the system designer and is fixed during the whole process of merging partial interfaces into a single interface. A partial interface Γsℓ considers one global resource Rℓ in isolation, i.e., Rℓ can be globally shared or it can be local to the component.

Definition 2 (Partial interface candidate) (Taken from van den Heuvel et al. [34]) A partial interface candidate Γsℓ = (Ps, Qsℓ, {Xsℓ}) of component Cs accessing resource Rℓ is a valid interface Γs for component Cs—i.e., an interface that satisfies the local scheduling condition in (7) for EDF or (11) for FPPS—under the assumption that only resource Rℓ may be globally shared with other components.

A partial interface Γsℓ is a valid interface for the restrictive case that resource Rℓ is the only resource being exposed globally. It specifies the budget and the resource holding time for resource Rℓ of the component, but other resources accessed by the same component are excluded from the interface. The remaining problem is to derive one interface for the case where a component accesses more than one globally shared resource. Lemma 2 shows how to merge partial interfaces into a single interface.

Lemma 2 (Taken from van den Heuvel et al. [34]). Assume a component Cs accesses ms resources. Let a selection of partial interfaces of component Cs be a series Γsℓ = (Ps, Qsℓ, {Xsℓ}), i.e., one partial interface Γsℓ is selected for each resource Rℓ. The local tasks' deadlines of component Cs are met by interface Γs = (Ps, Qs, {Xsℓ | Rℓ ∈ ℛs}), where Qs = max{Qsℓ | Rℓ ∈ ℛs}.

The intuition behind this lemma is as follows. By virtue of the SRP [21], a task τsi can be blocked by just one (outermost) critical section of a task τsj with a lower preemption level (where πsi ≥ πsj) before τsi can start its execution. Hence, the largest partial budget Qsℓ is sufficient to accommodate one local blocking occurrence.
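A small Python sketch of this merging step; the dictionary layout of partial interfaces is our own assumption:

```python
# Sketch of Lemma 2: merge partial interfaces (P_s, Q_sl, {X_sl}), one per
# accessed resource, once the globally shared resources are known.

def merge(partials, globally_shared):
    """partials: {resource: (P, Q_sl, X_sl)}; globally_shared: resources
    that turn out to be shared with other components at integration time."""
    P = next(iter(partials.values()))[0]      # every partial uses period P_s
    selected = {r: partials[r] for r in globally_shared} or partials
    Q = max(q for (_, q, _) in selected.values())        # Q_s = max Q_sl
    X = {r: x for r, (_, _, x) in selected.items() if r in globally_shared}
    return (P, Q, X)

partials = {"R1": (10, 2.5, 0.5), "R2": (10, 2.8, 0.7), "R3": (10, 2.6, 0.4)}
# At integration time, only R1 and R3 turn out to be globally shared:
print(merge(partials, {"R1", "R3"}))   # period 10, Q_s = 2.6, X for R1 and R3
```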

Lemma 2 presents an important result for open environments, in which components may be loaded or removed from the platform after deployment. It enables us to incrementally analyze the resource dependencies of the components in the HSF. Prior to the integration of components, it is still unclear which of the global resources need to be shared with other components and which resources can be treated as local. Upon integration of components, the set ℛs of resources accessed by component Cs and the resources that truly need to be shared globally are known. We can then make an appropriate selection of partial interfaces and combine them into a single interface for each component. Figure 4 illustrates this procedure as defined by Lemma 2. The result is that we account for the synchronization penalties of just the globally shared resources. If components enter or leave the HSF, one may use the partial interfaces to detect the updated resource dependencies between the components in the HSF.

Figure 4.

Partial interfaces define the resource requirements of a component on each accessed resource separately. They can be combined into a single interface which captures all resource requirements of a component. The resources that do not need to be shared between components can be ignored (in this example, Rs and Rh can be ignored), so that resource arbitration and the corresponding penalties can be avoided for those resources.

7. Conclusion

This chapter introduced the notion of uniform interfaces for resource-sharing components that need to be integrated on a uni-processor platform. The interface of a component abstracts from global resource sharing until component integration. The local timing analysis of a component that returns such an interface is called opaque. Sufficient conditions for opacity are

  • component periods are smaller than the local tasks' periods, so that resource holding times of a component are defined independently of the global synchronization protocol;

  • resource holding times must disappear from the local schedulability test, so that the budget parameter of a component can be solely computed with the purpose of meeting deadline constraints of tasks (and independently of the global synchronization protocol).

As a result of both conditions, when the SRP arbitrates access to shared resources between periodic components, the necessary condition of opacity is satisfied: all interface parameters of a component are computed independently of a global synchronization protocol. Moreover, component dependencies on shared resources and the corresponding synchronization penalties can be optimized at component integration.

We applied opacity to four existing global synchronization protocols: SIRAP, ONP, OWP, and BROE. For some systems that deploy such a protocol, a non-opaque analysis has been shown to significantly improve schedulability (e.g., as demonstrated by Behnam et al. [28] and Biondi et al. [29]). However, this requires that components are delivered to system integrators with an interface that includes worst-case synchronization penalties, which may never occur in practice. We therefore believe that the simplicity of an opaque analysis and its opportunities to analyze systems incrementally may be beneficial for complex systems in which component development, test, analysis, and integration are spread over different research and development teams.

8. Glossary

This section gives an overview of the abbreviations and the symbols being used throughout this chapter.

Abbreviation Description
AUTOSAR AUTomotive Open System Architecture
BROE Bounded-delay resource open environment
BD Blackout duration
BWI Bandwidth inheritance
DM Deadline monotonic
EDF Earliest-deadline-first
EDP Explicit-deadline periodic
FPPS Fixed-priority preemptive scheduling
HSFs Hierarchical scheduling frameworks
HSRP Hierarchical SRP
IPCP Immediate PCP
LCM Least common multiple
ONP Overrun without payback
OWP Overrun with payback
PCP Priority ceiling protocol
RHT Resource holding time
LSBF Linear supply-bound function
SIRAP Subsystem integration and resource allocation policy
SRP Stack resource policy
WCET Worst-case execution time

Symbol Description
N Number of components in the system
M Number of global resources
Set of global resources
Rℓ ℓ-th global resource
RCℓ Global resource ceiling of Rℓ
Cs s-th component
Πs Preemption level of Cs
Ps Period of the resource allocations to component Cs
Ds Relative deadline for the resource allocations to component Cs
Qs Periodically allocated processor time for Cs
ℛs Set of global resources accessed by Cs
𝒳s Set of holding times of global resources accessed by Cs
Xsℓ Resource holding time of Cs for Rℓ
Xs Maximum of the resource holding times of Cs
Os Processor time for Cs dedicated merely to preventing excessive blocking
Γs Interface of Cs defining periodic resource demands of Cs
Γsℓ Partial interface defining Cs's demands for a given resource Rℓ
BDs Longest duration for Cs without any processor supply
dbfs(t) Demand-bound function of the tasks of Cs in an interval of length t
rbfs(t, i) Request-bound function of task τsi and its higher-priority tasks in an interval of length t
lsbf(t) Linear lower bound on the processor supply in any sliding window of length t
Ts Task set of a component
ns Number of tasks composing component Cs
τsi i-th task of component Cs
Tsi Minimal inter-arrival time of task τsi
Esi WCET of τsi
Dsi (Relative) deadline of τsi
Ssi Set of time instants used to determine the schedulability of task τsi
ℋsi Set of WCETs of task τsi on resources
rcsℓ Local resource ceiling of resource Rℓ
πsi Preemption level of task τsi
hsiℓ WCET of τsi's largest critical section on Rℓ
Xsiℓ Largest resource holding time of τsi for Rℓ

References

1. M. Holenderski, R. J. Bril, and J. J. Lukkien. An efficient hierarchical scheduling framework for the automotive domain. In Seyed Morteza Babamir, editor, Real-Time Systems, Architecture, Scheduling, and Application, pages 67–94. InTech, 2012.
2. I. Shin and I. Lee. Compositional real-time scheduling framework with periodic model. ACM Transactions on Embedded Computing Systems (TECS), 7(3):1–39, April 2008.
3. X. A. Feng and A. K. Mok. A model of hierarchical real-time virtual resources. In Real-Time Systems Symposium (RTSS), pages 26–35, December 2002.
4. M. M. H. P. van den Heuvel, R. J. Bril, and J. J. Lukkien. Transparent synchronization protocols for compositional real-time systems. IEEE Transactions on Industrial Informatics (TII), 8(2):322–336, May 2012.
5. P. López Martínez, L. Barros, and J. Drake. Scheduling configuration of real-time component-based applications. In Reliable Software Technologies—Ada-Europe, volume 6106 of Lecture Notes in Computer Science (LNCS), pages 181–195. Springer, 2010.
6. C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves: operating system support for multimedia applications. In International Conference on Multimedia Computing and Systems (ICMCS), pages 90–99, May 1994.
7. R. Rajkumar, K. Juvva, A. Molano, and S. Oikawa. Resource kernels: a resource-centric approach to real-time and multimedia systems. In SPIE/ACM Conference on Multimedia Computing and Networking (CMCN), pages 150–164, January 1998.
8. Z. Deng and J. W.-S. Liu. Scheduling real-time applications in an open environment. In Real-Time Systems Symposium (RTSS), pages 308–319, December 1997.
9. T.-W. Kuo and C.-H. Li. A fixed-priority-driven open environment for real-time applications. In Real-Time Systems Symposium (RTSS), pages 256–267, December 1999.
10. G. Lipari and S. K. Baruah. Efficient scheduling of real-time multi-task applications in dynamic systems. In Real-Time Technology and Applications Symposium (RTAS), pages 166–175, May 2000.
11. E. Wandeler and L. Thiele. Real-time interfaces for interface-based design of real-time systems with fixed priority scheduling. In Conference on Embedded Software (EMSOFT), pages 80–89, September 2005.
12. A. Easwaran, M. Anand, and I. Lee. Compositional analysis framework using EDP resource models. In Real-Time Systems Symposium (RTSS), pages 129–138, December 2007.
13. L. Almeida and P. Pedreiras. Scheduling within temporal partitions: response-time analysis and server design. In Conference on Embedded Software (EMSOFT), pages 95–103, September 2004.
14. G. Lipari and E. Bini. A methodology for designing hierarchical scheduling systems. Journal of Embedded Computing (JEC), 1(2):257–269, April 2005.
15. N. Fisher and F. Dewan. A bandwidth allocation scheme for compositional real-time systems with periodic resources. Real-Time Systems, 48(3):223–263, 2012.
16. I. Shin and I. Lee. Compositional real-time scheduling framework. In Real-Time Systems Symposium (RTSS), pages 57–67, December 2004.
17. D. de Niz, L. Abeni, S. Saewong, and R. Rajkumar. Resource sharing in reservation-based systems. In Real-Time Systems Symposium (RTSS), pages 171–180, December 2001.
18. L. Sha, R. Rajkumar, and J. P. Lehoczky. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers (TC), 39(9):1175–1185, September 1990.
19. U. Steinberg, J. Wolter, and H. Härtig. Fast component interaction for real-time systems. In Euromicro Conference on Real-Time Systems (ECRTS), pages 89–97, July 2005.
20. G. Lipari, G. Lamastra, and L. Abeni. Task synchronization in reservation-based real-time systems. IEEE Transactions on Computers (TC), 53(12):1591–1601, December 2004.
21. T. P. Baker. Stack-based scheduling of realtime processes. Real-Time Systems, 3(1):67–99, March 1991.
22. R. I. Davis and A. Burns. Resource sharing in hierarchical fixed priority pre-emptive systems. In Real-Time Systems Symposium (RTSS), pages 257–267, December 2006.
23. M. Behnam, I. Shin, T. Nolte, and M. Nolin. SIRAP: a synchronization protocol for hierarchical resource sharing in real-time open systems. In Conference on Embedded Software (EMSOFT), pages 279–288, October 2007.
24. M. Bertogna, N. Fisher, and S. Baruah. Resource-sharing servers for open environments. IEEE Transactions on Industrial Informatics (TII), 5(3):202–219, August 2009.
25. T. M. Ghazalie and T. P. Baker. Aperiodic servers in a deadline scheduling environment. Real-Time Systems, 9(1):31–67, July 1995.
26. M. Caccamo and L. Sha. Aperiodic servers with resource constraints. In Real-Time Systems Symposium (RTSS), pages 161–170, December 2001.
27. P. Holman and J. H. Anderson. Locking in pfair-scheduled multiprocessor systems. In Real-Time Systems Symposium (RTSS), pages 149–158, December 2002.
28. M. Behnam, T. Nolte, and R. J. Bril. Bounding the number of self-blocking occurrences of SIRAP. In Real-Time Systems Symposium (RTSS), pages 61–72, December 2010.
29. A. Biondi, A. Melani, M. Bertogna, and G. Buttazzo. Optimal design for reservation servers under shared resources. In Euromicro Conference on Real-Time Systems (ECRTS), pages 153–164, July 2014.
30. M. Behnam, T. Nolte, M. Sjödin, and I. Shin. Overrun methods and resource holding times for hierarchical scheduling of semi-independent real-time systems. IEEE Transactions on Industrial Informatics (TII), 6(1):93–104, February 2010.
31. C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1):46–61, January 1973.
32. A. K.-L. Mok. Fundamental design problems of distributed systems for the hard-real-time environment. PhD thesis, Massachusetts Institute of Technology, May 1983. http://www.lcs.mit.edu/publications/pubs/pdf/MIT-LCS-TR-297.pdf.
33. I. Shin, M. Behnam, T. Nolte, and M. Nolin. Synthesis of optimal interfaces for hierarchical scheduling with resources. In Real-Time Systems Symposium (RTSS), pages 209–220, December 2008.
34. M. M. H. P. van den Heuvel, M. Behnam, R. J. Bril, J. J. Lukkien, and T. Nolte. Optimal and fast composition of resource-sharing components in hierarchical real-time systems. In IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), pages 1–12, August 2014.
35. M. Behnam, T. Nolte, and R. J. Bril. Tighter schedulability analysis of synchronization protocols based on overrun without payback for hierarchical scheduling frameworks. In International Conference on Engineering of Complex Computer Systems (ICECCS), pages 35–44, April 2011.
36. M. Bertogna, N. Fisher, and S. Baruah. Static-priority scheduling and resource hold times. In Parallel and Distributed Processing Symposium (IPDPS), March 2007.
37. S. K. Baruah. Resource sharing in EDF-scheduled systems: a closer look. In Real-Time Systems Symposium (RTSS), pages 379–387, December 2006.
38. J. P. Lehoczky, L. Sha, and Y. Ding. The rate monotonic scheduling algorithm: exact characterization and average case behavior. In Real-Time Systems Symposium (RTSS), pages 166–171, December 1989.
39. M. M. H. P. van den Heuvel, M. Behnam, R. J. Bril, J. J. Lukkien, and T. Nolte. Opaque analysis for resource sharing in compositional real-time systems. In 4th Workshop on Compositional Theory and Technology for Real-Time Embedded Systems (CRTS), pages 3–10, November 2011.
40. M. Behnam, T. Nolte, and N. Fisher. On optimal real-time subsystem-interface generation in the presence of shared resources. In Conference on Emerging Technologies and Factory Automation (ETFA), September 2010.
