RT-Seed: Real-Time Middleware for Semi-Fixed-Priority Scheduling

Hiroyuki Chishiro
Department of Information and Computer Science, Keio University, Japan

Abstract—The continuing economic boom has led many people to become interested in automated trading systems with timing constraints and quality of service, called real-time trading systems. In order to realize such real-time trading systems, multi-/many-core processors are required. However, real-time trading systems are somewhat complex, and real-time operating systems struggle to offer continuous support for them because they fall short of real-time middleware in customization, robustness, maintainability, and portability. An imprecise computation model is employed to support these trading systems, and semi-fixed-priority scheduling is a representative imprecise real-time scheduling technique for multi-/many-core processors. Unfortunately, there is no real-time middleware that supports semi-fixed-priority scheduling. This paper presents the RT-Seed real-time middleware, which implements a semi-fixed-priority scheduling algorithm, called Partitioned Rate Monotonic with Wind-up Part (P-RMWP), on Linux. P-RMWP supports the parallel-extended imprecise computation model, which executes optional parts in parallel; these are called parallel optional parts. This paper also describes how to terminate parallel optional parts in the user space. The performance of P-RMWP is evaluated using Intel's Xeon Phi many-core system.

I. INTRODUCTION

The recent favorable economic environment has made investment a hot topic, and many people have started to use automated trading for investment. Such trading systems require both timing constraints and Quality of Service (QoS). Hence, this paper focuses on automated trading systems with timing constraints and QoS, called real-time trading systems. To meet these requirements, real-time trading systems employ real-time scheduling. However, in Liu and Layland's model [1], real-time scheduling algorithms support timing constraints but not QoS. In contrast, the imprecise computation model [2] supports both timing constraints and QoS. The key idea of the imprecise computation model is that the computation of each task is split into two parts: a mandatory part and an optional part. The mandatory part is a real-time part that affects the correctness of the result, whereas the optional part is a non-real-time part that only affects QoS. By restricting the execution of the optional part to after the completion of the mandatory part, imprecise real-time applications can terminate the optional part and still produce a correct output, albeit with lower QoS. However, the imprecise computation model is impractical, because terminating an optional part does not by itself guarantee schedulability. To guarantee schedulability after the termination of the optional part, an extended imprecise computation model with a second mandatory (wind-up) part has been developed [3].

The extended imprecise computation model was first supported by dynamic-priority scheduling on uniprocessors [4]. However, dynamic-priority scheduling is difficult to apply on multi-/many-core processors, because the available time of the optional part is calculated online. In contrast, semi-fixed-priority scheduling [5] performs part-level fixed-priority scheduling in the extended imprecise computation model on multi-/many-core processors [6], [7]. The optional deadline, which determines the termination time of the optional part offline, enables semi-fixed-priority scheduling to guarantee the schedulability of the wind-up part. In the RT-Est real-time operating system [8], semi-fixed-priority scheduling is implemented only in the kernel space. A user-space (i.e., real-time middleware) approach is likely to be more readily useful than a kernel-space (i.e., real-time operating system) approach for reasons of customization, robustness, maintainability, and portability, as discussed in [9]. Therefore, real-time middleware represents one solution for the continuous support of semi-fixed-priority scheduling. Unfortunately, there is no real-time middleware that supports semi-fixed-priority scheduling. In addition, the optional parts of each task are not executed in parallel, which would improve QoS under semi-fixed-priority scheduling.

This paper describes the RT-Seed real-time middleware for Linux, which implements a semi-fixed-priority scheduling algorithm called Partitioned Rate Monotonic with Wind-up Part (P-RMWP) [7]. P-RMWP supports the parallel-extended imprecise computation model, which executes optional parts in parallel; these are called parallel optional parts. The schedulability analysis shows that semi-fixed-priority scheduling in the parallel-extended imprecise computation model has the same schedulability as that in the extended imprecise computation model. The performance of P-RMWP is evaluated using Intel's Xeon Phi many-core system.

Contribution: The contribution of this paper is to design, implement, and evaluate semi-fixed-priority scheduling in the RT-Seed real-time middleware. RT-Seed uses the SCHED_FIFO scheduling policy of POSIX threads to achieve semi-fixed-priority scheduling in the user space on Linux. Therefore, RT-Seed does not require any modifications to Linux. The other contribution is to present the parallel-extended imprecise computation model, which executes optional parts in parallel to improve QoS. Finally, the author believes that many traders will be able to use RT-Seed to build real-time trading systems.

The remainder of this paper is organized as follows. In Section II, the parallel-extended imprecise computation model is introduced. Section III explains semi-fixed-priority scheduling in the extended imprecise computation model and RMWP. Section IV describes the RT-Seed real-time middleware and the implementation of P-RMWP in the parallel-extended imprecise computation model. Section V evaluates the effectiveness of P-RMWP using Intel's Xeon Phi many-core system. Section VI compares the author's work with related work, and Section VII concludes this paper.

Fig. 1. Parallel-extended imprecise computation model

Fig. 2. Optional deadline

II. SYSTEM MODEL
A. Parallel-Extended Imprecise Computation Model

This paper presents a new model for parallel computing based on the extended imprecise computation model [3], called the parallel-extended imprecise computation model, for real-time trading systems. Figure 1 illustrates the parallel-extended imprecise computation model. The model supports the parallel execution of optional parts, called parallel optional parts. The parallel optional parts in the parallel-extended imprecise computation model can achieve higher QoS than the optional parts in the extended imprecise computation model. Each parallel optional part is completed, terminated, or discarded independently, and hence the parallel optional parts can be flexibly adapted to imprecise real-time applications. When there is only one parallel optional part, the parallel-extended imprecise computation model is identical to the extended imprecise computation model. This paper assumes that the parallel optional parts are assigned to a processor when they are created, and do not migrate among processors during execution.

In real-time trading systems, the parallel-extended imprecise computation model can be adapted for technical and/or fundamental analysis. Technical analysis forecasts the direction of prices, such as exchange rates, through the study of past data. In contrast, fundamental analysis makes forecasts using the financial statements of companies and/or countries. For example, the mandatory part obtains exchange data (e.g., EUR/USD) from a stock company; the parallel optional parts conduct technical analysis (e.g., Bollinger Bands [10]) and/or fundamental analysis (e.g., GDP) in parallel to improve the QoS of a trading decision; and the wind-up part collects the results from the parallel optional parts, makes a trading decision, and either sends a trade request (i.e., bid or ask) to the stock company or takes a wait-and-see attitude (i.e., no trade). When parallel optional parts overrun, they are terminated and the wind-up part is executed to produce a trading decision with low QoS.
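As an illustration of this division of work (this is not code from the paper), the trading example above could be structured as follows; all types and helper functions here are hypothetical placeholders.

#include <cstdio>

/* Illustrative sketch only: how the trading example maps onto the
 * mandatory / parallel optional / wind-up split. All helpers are hypothetical. */
enum class Decision { Bid, Ask, Wait };

struct TradingJob {
    double rate = 0.0;               /* exchange data (e.g., EUR/USD) */
    double technical_signal = 0.0;
    double fundamental_signal = 0.0;

    /* Mandatory part: obtain exchange data from the stock company. */
    void mandatory() { rate = fetch_rate(); }

    /* Parallel optional parts: technical and fundamental analysis; each may be
     * terminated at the optional deadline, leaving its signal at a default value. */
    void optional_technical()   { technical_signal = bollinger_bands(rate); }
    void optional_fundamental() { fundamental_signal = gdp_forecast(); }

    /* Wind-up part: combine whatever results completed and trade (or wait). */
    void windup() {
        Decision d = decide(technical_signal, fundamental_signal);
        if (d != Decision::Wait)
            send_order(d);           /* bid or ask; otherwise wait and see */
    }

    /* Hypothetical helpers, stubbed so that the sketch is self-contained. */
    static double fetch_rate()              { return 1.0; }
    static double bollinger_bands(double)   { return 0.0; }
    static double gdp_forecast()            { return 0.0; }
    static Decision decide(double, double)  { return Decision::Wait; }
    static void send_order(Decision)        { std::puts("trade request"); }
};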

This model is similar to the fork-join model used in OpenMP [11] and Cilk [12]. The primary difference from the fork-join model is that the parallel execution may not be completed, and can be terminated at any time, to support imprecise computation.

The proposed model assumes that a task set Γ has n periodic independent tasks τ1, ..., τn on M identical multi-/many-core processors P1, ..., PM. The task set is synchronous (i.e., all tasks are initially released at the same time). Each task τi has a Worst Case Execution Time (WCET) Ci, a period Ti, and a relative deadline Di. The relative deadline Di of task τi is equal to its period Ti. The utilization of each task is Ui = Ci/Ti, and the system utilization is U = (1/M) Σ_{i=1}^{n} Ui. Each instance of a task is called a job. The number of parallel optional parts in task τi is denoted by npi. The overheads of real-time scheduling are included in the WCETs of the mandatory/wind-up parts and in the execution times of the parallel optional parts.

The parallel-extended imprecise computation model has a second mandatory (wind-up) part, and hence the WCET of each task is Ci = mi + wi, where mi is the WCET of the mandatory part and wi is the WCET of the wind-up part. The execution time of the k-th parallel optional part of task τi is denoted by oi,k, and its utilization is Ui^o = Σ_{k=1}^{npi} oi,k / Ti. The longer the optional parts of each task execute, the higher its QoS. Ui does not include the execution times of the parallel optional parts because the parallel optional parts of each task are non-real-time parts, and hence their completion is not relevant to the successful scheduling of the task set.

B. Optional Deadline

The relative optional deadline ODi of task τi is defined as the time when an optional part is terminated and a wind-up part is released [5]. Each wind-up part is ready for execution after its optional deadline, and can be completed if the corresponding mandatory part is completed by the optional deadline. If the mandatory part of a task is not completed by its optional deadline, the corresponding wind-up part may miss its deadline.

Figure 2 shows the optional deadline of each task in the extended imprecise computation model. The solid up-arrows, solid down-arrows, and dotted down-arrows represent the release times, deadlines, and optional deadlines, respectively. Task τ1 completes its mandatory part before optional deadline OD1, and then executes its optional part until OD1. After OD1, task τ1 executes its wind-up part. In contrast, task τ2 does not complete its mandatory part by optional deadline OD2. As a result, when τ2 completes its mandatory part, its wind-up part is executed but its optional part is not executed.

III. SEMI-FIXED-PRIORITY SCHEDULING

Fig. 3. General scheduling and semi-fixed-priority scheduling

Fig. 4. Task queue

Semi-fixed-priority scheduling [5] is defined as part-level fixed-priority scheduling in the extended imprecise computation model [3]. That is, semi-fixed-priority scheduling fixes the priority of each part in an extended imprecise task, and changes the priority of each extended imprecise task in just two cases: (i) when the extended imprecise task completes its mandatory part and executes its optional part; and (ii) when the extended imprecise task terminates or completes its optional part and executes its wind-up part.

Figure 3 shows the difference between general scheduling in Liu and Layland's model [1] and semi-fixed-priority scheduling in the extended imprecise computation model. In this case, no task suffers interference from higher-priority tasks. In general scheduling, when task τi is released at time 0, the remaining execution time Ri(t) is set to mi + wi and monotonically decreases until Ri(t) becomes 0 at time mi + wi. In semi-fixed-priority scheduling, when task τi is released at time 0, Ri(t) is set to mi and monotonically decreases until Ri(t) becomes 0 at time mi. When Ri(t) reaches 0 at time mi, τi sleeps until time ODi. When τi is released at time ODi, Ri(t) is set to wi and monotonically decreases until Ri(t) becomes 0 at time ODi + wi. If τi has not completed its mandatory part by time ODi, then Ri(t) is set to wi when τi completes its mandatory part. In both general scheduling and semi-fixed-priority scheduling, τi completes its wind-up part by time Di.
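For the interference-free case shown in Figure 3, this behavior can be restated compactly as a piecewise function; this is only a paraphrase of the description above, for the case in which the mandatory part completes by time mi:

$$R_i(t) = \begin{cases} m_i - t, & 0 \le t \le m_i \quad \text{(mandatory part)} \\ 0, & m_i < t < OD_i \quad \text{(optional part or sleep)} \\ w_i - (t - OD_i), & OD_i \le t \le OD_i + w_i \quad \text{(wind-up part)} \end{cases}$$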

Fig. 5. Overall architecture of RT-Seed on Linux

RMWP [5] is a semi-fixed-priority scheduling algorithm that uses the extended imprecise computation model on uniprocessors. As shown in Figure 4, RMWP manages three task queues: the Real-Time Queue (RTQ), the Non-Real-Time Queue (NRTQ), and the Sleep Queue (SQ). RTQ holds tasks that are ready to execute their mandatory or wind-up parts, in RM order [1]. Tasks are not allowed to execute their mandatory and wind-up parts simultaneously. NRTQ holds tasks that are ready to execute their optional parts, in RM order. Every task in RTQ has higher priority than any task in NRTQ. SQ holds tasks that have completed their optional parts by their optional deadlines or their wind-up parts by their deadlines. The calculation of each optional deadline in RMWP is given by Theorem 2 of [5].
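To make the queue management concrete, the following minimal sketch encodes the transitions implied by the description above; the enum and function names are illustrative only and are not RT-Seed's actual interface.

// Illustrative sketch of the RMWP queue transitions described above;
// names are hypothetical, not RT-Seed's actual API.
enum class Queue { RTQ, NRTQ, SQ };

enum class Event {
  TaskReleased,             // a new job becomes ready
  MandatoryCompleted,       // mandatory part completes before the optional deadline
  OptionalCompletedEarly,   // optional part completes before the optional deadline
  OptionalDeadlineExpired,  // optional deadline expires (optional part terminated)
  WindupCompleted           // wind-up part completes by the deadline
};

Queue next_queue(Event e) {
  switch (e) {
    case Event::TaskReleased:            return Queue::RTQ;  // run mandatory part
    case Event::MandatoryCompleted:      return Queue::NRTQ; // run optional part
    case Event::OptionalCompletedEarly:  return Queue::SQ;   // sleep until optional deadline
    case Event::OptionalDeadlineExpired: return Queue::RTQ;  // run wind-up part
    case Event::WindupCompleted:         return Queue::SQ;   // sleep until next release
  }
  return Queue::SQ;  // unreachable; silences compiler warnings
}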

IV. THE RT-SEED REAL-TIME MIDDLEWARE

This paper presents RT-Seed, a real-time middleware for semi-fixed-priority scheduling with parallel optional parts on Linux. The RT-Seed real-time middleware supports the parallel-extended imprecise computation model to improve QoS. The first goal of RT-Seed is to evaluate semi-fixed-priority scheduling algorithms against other real-time scheduling algorithms in the user space. Since there is no real-time middleware that implements a semi-fixed-priority scheduling algorithm, RT-Seed is a test bed for investigating the performance of semi-fixed-priority scheduling algorithms in the user space. The second goal is to become the de facto standard real-time middleware supporting imprecise computation. To the best of the author's knowledge, there is no real-time middleware supporting imprecise computation. The third goal is to make use of semi-fixed-priority scheduling algorithms in real-time trading systems that require timing constraints and QoS. The author believes that RT-Seed can be used to achieve real-time trading systems.

First, this section proves theorems concerning the schedulability of semi-fixed-priority scheduling in the parallel-extended imprecise computation model. Next, the design and implementation of semi-fixed-priority scheduling for the parallel-extended imprecise computation model are discussed. Finally, the termination of parallel optional parts is explained.

A. Analysis

The author first analyzes the calculation of an optional deadline for semi-fixed-priority scheduling in the parallel-extended imprecise computation model.

Theorem 1 (Optional Deadline). All semi-fixed-priority scheduling algorithms in the parallel-extended imprecise computation model calculate the same optional deadline as those in the extended imprecise computation model.

Proof: Under semi-fixed-priority scheduling, all mandatory and wind-up parts have higher priority than all parallel optional parts in the parallel-extended imprecise computation model. That is, none of the parallel optional parts interfere with any mandatory or wind-up parts. Hence, this theorem holds.

By Theorem 1, the schedulability of semi-fixed-priority scheduling in the parallel-extended imprecise computation model is as follows.

Theorem 2 (Schedulability). All semi-fixed-priority scheduling algorithms in the parallel-extended imprecise computation model have the same schedulability as those in the extended imprecise computation model.

Proof: By Theorem 1, it is clear that the mandatory and wind-up parts of all tasks in the parallel-extended imprecise computation model have the same schedule as those in the extended imprecise computation model. Hence, this theorem holds.

By Theorems 1 and 2, semi-fixed-priority scheduling in the parallel-extended imprecise computation model can use the schedulability analysis and the calculation of optional deadlines described in [5], [6].

B. Design

Figure 5 shows the overall architecture of RT-Seed on Linux. RT-Seed uses a partitioned semi-fixed-priority scheduling algorithm, called P-RMWP [7], because partitioned scheduling assigns tasks to processors offline and they do not migrate among processors online. The G-RMWP [6] semi-fixed-priority scheduling algorithm is not used in this paper. This is because (i) global scheduling, such as in G-RMWP, allows tasks to migrate among processors, resulting in high overheads, and (ii) middleware-level global scheduling is unsuitable: global scheduling requires fine-grained processor control, but middleware sits atop an operating system that may not expose fine-grained scheduling information or control mechanisms [13]. Therefore, RT-Seed uses a partitioned scheduling approach. A parallel-extended imprecise task is represented as a real-time process in the user space. Each real-time process has two types of threads: a mandatory thread and parallel optional threads. The mandatory thread executes both the mandatory and wind-up parts, and the parallel optional threads execute the parallel optional parts to improve QoS. The parallel-extended imprecise task does not allow the mandatory and wind-up parts to migrate among processors during their execution. In contrast, the parallel optional parts migrate to their specified processors prior to execution. When the parallel optional parts become ready or are running, they do not migrate among processors.

To create a real-time process, RT-Seed uses Linux's SCHED_FIFO scheduling policy. Under SCHED_FIFO, FIFO thread queues with 99 priority levels exist on each processor in the kernel space, with larger values denoting higher priority. Each FIFO queue manages threads using a double circular linked list. The ready queue has four types of queues: RTQ, NRTQ, SQ, and a Highest Priority Queue (HPQ). Priority level 99 in the HPQ is reserved for the highest-priority task1. The priority of the mandatory thread in RTQ is in the range [50, 98], and the priorities of the parallel optional threads in NRTQ are taken from [1, 49]. In addition, the difference between the priorities of the mandatory and parallel optional threads is 49. For example, when the priority of the mandatory thread is 90, the parallel optional threads have priority 41 (= 90 − 49). All mandatory and parallel optional threads are assigned to specified processors offline. SQ manages tasks that are sleeping until their optional deadlines or their deadlines (next release times). When a task has completed its mandatory or wind-up part, it is enqueued to SQ. When a task becomes ready or its optional deadline expires, it is enqueued to RTQ and its mandatory or wind-up part is executed. When a task finishes its periodic execution, it is dequeued from the ready queue in the kernel space and destroyed.

Linux makes a scheduling decision for each processor in the kernel space. In contrast, RT-Seed sets the thread priorities, assigns threads to specified processors, and sends them to sleep in the user space. Note that RT-Seed does not require any modifications to Linux, and the ready and sleep queues in the kernel space are already implemented in Linux. The design of RT-Seed on Linux can be adapted to other real-time operating systems, such as VxWorks [15], that implement fixed-priority scheduling [1]. Hence, this design has high portability.
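As a concrete illustration of this design, the following minimal sketch shows how a mandatory thread could make its process a SCHED_FIFO real-time process, and how a parallel optional thread could be pinned to a hardware thread with a priority derived from the mandatory thread's priority (a difference of 49). The helper names, the error handling, and the chosen CPU IDs are assumptions of this sketch, not RT-Seed's actual interface.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE   /* for CPU_ZERO/CPU_SET and sched_setaffinity */
#endif
#include <sched.h>
#include <pthread.h>

/* Illustrative sketch only (Section IV-B): priority mapping and CPU pinning. */
static const int kPriorityGap = 49;   /* mandatory priority - optional priority */

/* Make the calling process a SCHED_FIFO real-time process (mandatory thread).
 * mandatory_prio is expected to lie in [50, 98]. */
static int become_realtime_process(int mandatory_prio)
{
    struct sched_param sp;
    sp.sched_priority = mandatory_prio;
    return sched_setscheduler(0, SCHED_FIFO, &sp);
}

/* Pin the calling (parallel optional) thread to one hardware thread and
 * lower its priority into [1, 49], e.g., 90 -> 41. */
static int setup_optional_thread(int mandatory_prio, int cpu_id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu_id, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        return -1;

    struct sched_param sp;
    sp.sched_priority = mandatory_prio - kPriorityGap;
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}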

C. Implementation

RT-Seed is implemented in C++, and the parallel-extended imprecise task is implemented as class Task on Linux. The primary member functions in class Task are as follows.

• execMandatory: executes the mandatory part.
• execOptional: executes the parallel optional parts.
• execWindup: executes the wind-up part.

Figure 6 illustrates the execution of a parallel-extended imprecise task in P-RMWP. RT-Seed implements P-RMWP using POSIX threads on Linux. In this example, there are three parallel optional parts, which are terminated because of their overruns. A parallel-extended imprecise task is created as a real-time process by the sched_setscheduler function using the SCHED_FIFO scheduling policy. The mandatory thread creates the parallel optional threads, which migrate to their specified processors via the sched_setaffinity function. Next, the parallel optional threads wait until they receive a wake-up signal from the mandatory thread in the pthread_cond_wait function. The mandatory thread then sleeps until its release time in the clock_nanosleep function. When the parallel-extended imprecise task is released, its mandatory part is executed in the execMandatory member function.

1 For example, Rate Monotonic with Utilization Separation [14] assigns the highest priority to task τi if Ui > M/(3M − 2).

Fig. 6. Example of the execution of a parallel-extended imprecise task in P-RMWP

Note that the first parallel optional thread is executed on the processor that executes the mandatory thread. When the mandatory thread has completed its mandatory part, the pthread_cond_signal function is called to wake up the parallel optional threads. The mandatory thread then waits until it receives the wake-up signal from the parallel optional parts in the pthread_cond_wait function. After that, the parallel optional threads set up the optional deadline timer with the timer_settime function, and execute their parallel optional parts in the execOptional member function until their optional deadlines expire. When their optional deadlines expire, the parallel optional parts execute their timer interrupt routines and terminate their execution in the timer_handler function. When all parallel optional threads have terminated, the wake-up signal is sent to the mandatory thread via the pthread_cond_signal function. Next, the mandatory thread executes the wind-up part in the execWindup function. After completing the wind-up part, the mandatory thread sleeps until the deadline (next release time) in the clock_nanosleep function. The parallel-extended imprecise task is thus executed periodically over the release-time and deadline intervals.

RT-Seed does not use the pthread_cond_broadcast function because the parallel optional parts are not always executed after the mandatory part has been completed. Therefore, the mandatory thread sends signals to the specified parallel optional threads with pthread_cond_signal as their jobs are executed. Under this implementation, parallel optional parts are completed, terminated, or discarded independently, as described in Subsection II-A.
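The mandatory thread's side of this protocol can be summarized by the following sketch. It reuses member functions that appear in Figure 7 (execMandatory, execWindup, getOptionalCond, getMandatoryCond, isActive), whereas assignedCpus, getMandatoryMutex, and the release-time bookkeeping are hypothetical additions for this sketch; spurious wake-ups and error handling are omitted for brevity.

#include <pthread.h>
#include <time.h>

/* Illustrative sketch of the mandatory thread's periodic loop (Section IV-C).
 * Task is RT-Seed's class; assignedCpus() and getMandatoryMutex() are assumed. */
void mandatory_main(Task &task, struct timespec next_release,
                    const struct timespec &period)
{
    do {
        /* sleep until the release time of the current job */
        clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &next_release, NULL);

        task.execMandatory();                       /* mandatory part */

        /* wake up the parallel optional thread on each assigned hardware thread */
        for (int cpu : task.assignedCpus())
            pthread_cond_signal(&task.getOptionalCond()[cpu]);

        /* wait until all parallel optional parts complete or are terminated */
        pthread_mutex_lock(&task.getMandatoryMutex());
        pthread_cond_wait(&task.getMandatoryCond(), &task.getMandatoryMutex());
        pthread_mutex_unlock(&task.getMandatoryMutex());

        task.execWindup();                          /* wind-up part */

        /* advance to the next release time (the deadline of the current job) */
        next_release.tv_sec  += period.tv_sec;
        next_release.tv_nsec += period.tv_nsec;
        if (next_release.tv_nsec >= 1000000000L) {
            next_release.tv_sec  += 1;
            next_release.tv_nsec -= 1000000000L;
        }
    } while (task.isActive());
}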

D. Termination

The problem with the parallel-extended imprecise computation model is how to terminate the parallel optional parts in the user space. The solution is implemented as follows.

#define NR_CPUS 228

sigjmp_buf jmp_buf[NR_CPUS];
Task task;

/* called by parallel optional threads
 * when optional deadline expires */
void timer_handler(int sig)
{
    /* restore stack context
     * and signal mask information */
    siglongjmp(jmp_buf[sched_getcpu()], sig);
}

/* called by parallel optional threads
 * when they are initialized */
void optional_main(void)
{
    int cpu = sched_getcpu();
    timer_t timer_id;
    struct itimerspec itval, stop_itval;
    struct sigaction act;

    act.sa_handler = timer_handler;
    sigaction(SIGALRM, &act, NULL);
    timer_create(CLOCK_REALTIME, NULL, &timer_id);

    do {
        /* wait until parallel optional parts are
         * ready to be executed */
        pthread_cond_wait(&task.getOptionalCond()[cpu],
                          &task.getOptionalMutex()[cpu]);
        /* save stack context
         * and signal mask information */
        if (sigsetjmp(jmp_buf[cpu], true) == 0) {
            /* set up interval
             * for optional deadline timer */
            task.setOdt(&itval);
            /* start optional deadline timer
             * (one-shot timer) */
            timer_settime(timer_id, TIMER_ABSTIME, &itval, NULL);
            /* execute parallel optional parts */
            task.execOptional();
            /* complete parallel optional parts,
             * and hence stop optional deadline timer */
            timer_settime(timer_id, 0, &stop_itval, NULL);
        }
        /* check if ending all parallel optional parts */
        if (task.endOptionalPart()) {
            /* send wake-up signal to mandatory thread */
            pthread_cond_signal(&task.getMandatoryCond());
        }
        /* continue while task is active */
    } while (task.isActive());

    timer_delete(timer_id);
}

Fig. 7. Implementation of the termination of parallel optional parts on Linux in C++

Figure 7 describes the implementation of terminating parallel optional parts on Linux in C++. Because of space limitations, irrelevant initialization routines have been omitted. In addition, the implementation of the mandatory and wind-up parts has been omitted, as they only call the execMandatory and execWindup member functions, respectively. The number of hardware threads (CPUs), represented by NR_CPUS, is defined as 228, because RT-Seed is implemented on Intel's Xeon Phi 3120A (57 cores/228 hardware threads). First, the parallel optional parts execute the optional_main function when they are initialized. This function calls the timer_create function to create the optional deadline timer. The parallel optional parts then wait until they are executed, i.e., until they receive the wake-up signal from the mandatory thread, in the pthread_cond_wait function.

Fig. 8. Assigning parallel optional parts to hardware threads in the case of 171 parallel optional parts: (a) One by One, (b) Two by Two, (c) All by All

TABLE I. IMPLEMENTATION OF THE TERMINATION OF PARALLEL OPTIONAL PARTS

Implementation         | Any Time Termination | Signal Mask Restoration
sigsetjmp/siglongjmp   | X                    | X
Periodic Check         |                      | (unnecessary)
try-catch              | X                    |

After receiving the wake-up signal, the parallel optional parts call the sigsetjmp function to save their stack contexts and signal mask information for later termination. The setOdt member function is called to set up the optional deadline timer, and the timer_settime function is called to start the optional deadline timer (a one-shot timer). A one-shot timer is used because the parallel optional parts are not always ready to be executed; if there is no time to execute the parallel optional parts, they are discarded (i.e., the parallel optional threads do not receive the wake-up signal from the mandatory thread), as described in Figure 1. The parallel optional parts are then executed in the execOptional member function of class Task. When the parallel optional parts have been completed, the timer_settime function is called to stop the optional deadline timer. When the parallel optional parts are terminated, the timer_handler function is called by the SIGALRM interrupt. This calls the siglongjmp function, which restores the stack context and signal mask information. After siglongjmp is called, control returns from sigsetjmp with a non-zero return value. Hence, the condition of the if statement is false, and the parallel optional parts are successfully terminated. After completing or terminating the parallel optional parts, the endOptionalPart member function is called to check whether all parallel optional parts have ended. If so, the wake-up signal is sent to the mandatory thread. Finally, the task continues its periodic execution if it is still active, as checked by the isActive member function. Otherwise, the do-while loop ends, and the optional deadline timer is deleted by the timer_delete function.

Other implementations of terminating parallel optional parts are now discussed. First, the parallel optional threads could periodically check, without the optional deadline timer, whether their optional deadlines have expired. With this approach, an optional part cannot be terminated at an arbitrary time, and hence the periodic check degrades the improvement of QoS. On the other hand, the implementation using the sigsetjmp and siglongjmp functions with the optional deadline timer cannot safely reserve resources (e.g., allocating memory with the malloc function) or acquire mutexes, semaphores, or locks.

Reserving resources is unsafe because the application has no way of determining the state of these resources when the parallel optional threads are terminated. Fortunately, this problem does not occur here, because the parallel-extended imprecise computation model assumes that the parallel optional parts do not reserve resources and execute pure CPU-bound loops in parallel to improve QoS. Next, the try-catch statement in C++ with the optional deadline timer can terminate an optional part at any time, but this statement does not save and restore the signal mask information. That is, the timer interrupt of the next job does not occur because the signal mask is not cleared. Table I summarizes how the parallel optional parts can be terminated. The implementation using the sigsetjmp/siglongjmp functions with the optional deadline timer is an effective approach to terminating the parallel optional parts.
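For comparison, the periodic-check alternative in the second row of Table I can be sketched as follows; the work-chunk helpers are hypothetical, and, as noted above, termination can only occur at the check points between chunks rather than at an arbitrary time.

#include <time.h>

/* Illustrative sketch of the "Periodic Check" implementation in Table I:
 * no optional deadline timer is used; the optional part polls the clock
 * between work chunks. Helper functions are hypothetical. */
bool has_more_work(void);        /* hypothetical: remaining optional work? */
void process_next_chunk(void);   /* hypothetical: one bounded slice of work */

static bool optional_deadline_expired(const struct timespec &od)
{
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);
    return (now.tv_sec > od.tv_sec) ||
           (now.tv_sec == od.tv_sec && now.tv_nsec >= od.tv_nsec);
}

/* od is the absolute optional deadline of the current job. */
void exec_optional_with_periodic_check(const struct timespec &od)
{
    while (has_more_work()) {
        process_next_chunk();
        if (optional_deadline_expired(od))  /* check only at chunk boundaries */
            return;                         /* terminate the optional part */
    }
}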

V. EXPERIMENTAL EVALUATIONS

A. Experimental Environment

Experiments are conducted using Linux 3.13.0 with Intel's many-core platform software stack 3.3.4 (released on March 13, 2015) [16] on the Xeon Phi 3120A at 1.1 GHz (57 cores/228 hardware threads) with 6 GB of GDDR5 SDRAM. The Xeon Phi 3120A has four hardware threads on each core. RT-Seed is compiled with k1om-mpss-linux-g++ 4.7.0 (a g++ cross compiler for the Xeon Phi). The author uses the full tickless mode [17] to reduce latency and the boot parameter isolcpus=1-227 to avoid running regular tasks on CPU IDs 1–227 in Linux.

The number of tasks n is set to one, meaning that only task τ1 exists in this evaluation. This is because the author assumes that the system has many-core processors, there are fewer tasks than processors, and each task executes its optional parts in parallel. Hence, multiple tasks do not necessarily have to be executed on the same processors, and a single-task evaluation is applicable to real-time trading systems. The author assumes that the real-time trading system works with the OANDA Japan trading company [18]. As this company usually provides one exchange rate per second, the period of task τ1 is set to 1 s. The optional deadline of task τ1 is calculated as OD1 = D1 − w1 by Theorem 2 of [5]. In addition, the number of parallel optional parts np1 is selected from the set {4, 8, 16, 32, 57, 114, 171, 228}. The WCET of the mandatory part m1 is set to 250 ms, the execution time of the optional part is 1 s, and the WCET of the wind-up part w1 is set to 250 ms. Note that the execution times of all parallel optional parts o1,k are equal to o1, and hence all optional parts always overrun and are terminated, in order to measure the overheads under worst-case conditions.
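With these parameters, the optional deadline and the utilization of τ1 follow directly; the first equality is the single-task instance of Theorem 2 of [5] quoted above, and the second applies the utilization definition from Section II-A:

$$OD_1 = D_1 - w_1 = 1\,\mathrm{s} - 250\,\mathrm{ms} = 750\,\mathrm{ms}, \qquad U_1 = \frac{m_1 + w_1}{T_1} = \frac{250\,\mathrm{ms} + 250\,\mathrm{ms}}{1\,\mathrm{s}} = 0.5$$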


Fig. 9. Execution of one parallel-extended task with four types of overheads

The number of jobs executed in task τ1 is set to 100. The mandatory and wind-up parts of task τ1 are executed on hardware thread ID 0 of core ID 0, and do not migrate among the hardware threads/cores. The parallel optional parts are assigned to hardware threads once the mandatory part has been completed. The assignment policy is an important factor in improving QoS. This paper examines three policies for assigning parallel optional parts to hardware threads.

1) One by One: Parallel optional parts are assigned to one hardware thread on each core, one by one. When each core has one parallel optional part and further parallel optional parts exist, they are assigned to the other hardware threads on each core, one by one.
2) Two by Two: Parallel optional parts are assigned to two hardware threads on each core, two by two. When each core has two optional parts and further optional parts exist, they are assigned to the other hardware threads on each core, two by two.
3) All by All: Parallel optional parts are assigned to all hardware threads on each core, all by all (four by four on the Xeon Phi 3120A).
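The three policies differ only in how many hardware threads per core are filled in each round. The following sketch illustrates this; it assumes, purely for illustration, that hardware thread h of core c corresponds to CPU ID 4c + h on the Xeon Phi 3120A, which need not match the actual Linux CPU numbering used by RT-Seed.

/* Illustrative sketch of the three assignment policies (Section V-A).
 * Assumption: hardware thread h of core c is CPU ID 4*c + h. */
static const int kCores = 57;            /* Xeon Phi 3120A cores */
static const int kThreadsPerCore = 4;    /* hardware threads per core */

/* threads_per_round: 1 (One by One), 2 (Two by Two) or 4 (All by All).
 * Returns the CPU ID of the k-th parallel optional part (0 <= k < 228). */
int assign_cpu(int k, int threads_per_round)
{
    int per_round = kCores * threads_per_round;  /* parts placed per round */
    int round     = k / per_round;
    int offset    = k % per_round;
    int core      = offset / threads_per_round;
    int hw_thread = round * threads_per_round + offset % threads_per_round;
    return core * kThreadsPerCore + hw_thread;   /* assumed 4*c + h mapping */
}

For example, with 171 parallel optional parts, assign_cpu reproduces the distributions in Figure 8: One by One places three parts on every core; Two by Two places four parts on C0–C27, three on C28, and two on C29–C56; and All by All places four parts on C0–C41, three on C42, and none on C43–C56.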

Figure 8 shows the assignment of parallel optional parts to hardware threads in the case of 171 parallel optional parts. The black squares represent assigned hardware threads and the white squares represent empty hardware threads. C0–C56 represent core IDs 0–56 on the Xeon Phi 3120A processor. In Figure 8(a), three hardware threads are assigned on each of C0–C56 (all cores). In Figure 8(b), four hardware threads are assigned on C0–C27, three hardware threads on C28, and two hardware threads on C29–C56. In Figure 8(c), four hardware threads are assigned on C0–C41, three hardware threads on C42, and no hardware threads on C43–C56.

B. Overhead Measurements

The following four overheads are measured in the parallel-extended imprecise computation model.

• ∆m: the overhead between the release time and the beginning of the mandatory part.
• ∆b: the overhead of calling the pthread_cond_signal function for all parallel optional threads in the mandatory thread.
• ∆s: the overhead of switching from the mandatory thread to the optional thread.
• ∆e: the overhead between the optional deadline and the beginning of the wind-up part.

These overheads are measured by the rdtscp instruction, which reads the time stamp counter of each core.
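A minimal sketch of such a measurement is shown below. It reads the time stamp counter with the rdtscp instruction via inline assembly around the measured region; the conversion from cycles to microseconds assumes the Xeon Phi 3120A's 1.1 GHz clock and is an assumption of this sketch.

#include <stdint.h>

/* Illustrative sketch: timestamping with the rdtscp instruction (Section V-B). */
static inline uint64_t rdtscp(void)
{
    uint32_t lo, hi, aux;
    /* rdtscp also returns the processor ID in ECX (aux), unused here */
    __asm__ __volatile__("rdtscp" : "=a"(lo), "=d"(hi), "=c"(aux));
    return ((uint64_t)hi << 32) | lo;
}

static inline double cycles_to_us(uint64_t cycles)
{
    return (double)cycles / 1100.0;   /* 1.1 GHz: 1100 cycles per microsecond */
}

/* Usage: e.g., the overhead between two points of interest.
 *   uint64_t t0 = rdtscp();
 *   ... region whose overhead is measured ...
 *   uint64_t t1 = rdtscp();
 *   double overhead_us = cycles_to_us(t1 - t0);
 */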

Figure 9 illustrates the execution of one parallel-extended imprecise task τ1 with the four types of overheads. When the parallel-extended imprecise task is released, the mandatory part is executed after its initialization has been processed. When the parallel-extended imprecise task has completed its mandatory part, the parallel optional parts are executed once they receive the wake-up signal via the pthread_cond_signal function. When the optional deadline expires, the optional thread is ended (if it overruns) after handling the timer interrupt of the optional deadline. If the task completes its optional part before its optional deadline, the optional deadline timer is canceled. Finally, the task completes its wind-up part by its deadline.

The overhead measurements include three background tasks: No load (in which no background tasks are executed), CPU load (which executes infinite-loop tasks on all hardware threads), and CPU-Memory load (which executes 512 KB (equal to the L2 cache size on the Xeon Phi 3120A) read/write tasks in infinite loops on all hardware threads). The CPU-Memory load pollutes the cache so as to miss the L1 and L2 caches and read/write data from/to memory. These loads run in the background, and hence the overhead measurements are performed under high-load conditions.

Figure 10 illustrates the overhead associated with beginning the mandatory part. The overheads of all assignment policies depend on the number of tasks. As there is only one task in this experiment, the overheads are approximately constant, regardless of the number of parallel optional parts. In Figure 10(a), there are no executions of the parallel optional parts when the mandatory part is ready to be executed, and hence there is no conflict over hardware resources. In Figures 10(b) and 10(c), these overheads under the CPU load and CPU-Memory load are again approximately constant, and are larger than the overhead under no load. Additionally, the overhead under the CPU-Memory load is larger than that under the CPU load. This is because the background task causes some contention over hardware resources such as the CPU and memory, and the cache misses caused by the CPU-Memory load strongly affect the overhead at the beginning of the mandatory part.

Figure 11 shows the overhead of switching the mandatory thread to the optional thread. In Figure 11(a), the overheads of all assignment policies increase as the number of parallel optional parts becomes larger. This is because switching the mandatory thread to the optional thread causes other parallel optional threads in each core to be executed. As the number of parallel optional parts becomes large, the frequency of conflicts over hardware resources increases.

Fig. 10. Overhead of beginning the mandatory part

Fig. 11. Overhead of switching from the mandatory thread to the optional thread

In particular, with 228 parallel optional parts, there is a dramatic increase in the overhead of switching the mandatory thread to the optional thread. In Figures 11(b) and 11(c), the overheads under the CPU load and CPU-Memory load are similar and approximately constant. Under these load conditions, the overheads do not depend on the number of parallel optional parts.

Figure 12 shows the overhead of beginning the parallel optional parts. All assignment policies suffer from increasing overheads as the number of parallel optional parts becomes larger. In addition, the differences between assignment policies are not large. Interestingly, the absolute overhead under the CPU load is higher than that under the CPU-Memory load, which means that the overhead under the CPU-Memory load is not always higher than that under the CPU load. This is because the pthread_cond_signal function is called many times when beginning the parallel optional parts. This function uses many if statements, requiring the branch units in the CPU to be exercised. However, the CPU-load loop also uses the branch unit many times, because it is an infinite-loop program without other operations. Therefore, processing the beginning of the parallel optional parts suffers considerably more interference under the CPU load than under the CPU-Memory load.

Figure 13 shows the overhead of ending the parallel optional parts. As in Figure 12, all assignment policies suffer increased overheads as the number of parallel optional parts increases. Unlike Figure 12, the absolute overhead under the CPU load is lower than that under the CPU-Memory load. The absolute overheads of ending the parallel optional parts are higher than those of beginning the parallel optional parts, because this overhead includes handling the timer interrupt, restoring the stack context, and sending the wake-up signal to the mandatory thread. In Figure 13(a), all assignment policies have approximately the same overheads. From Figures 13(b) and 13(c), it can be seen that the one by one assignment policy has the highest overhead, whereas the all by all assignment policy has the lowest overhead of all assignment policies. That is, the one by one assignment policy has a high overhead under the load conditions.

The time complexity of beginning and ending the parallel optional parts is O(npi), where npi is the number of parallel optional parts. Hence, the overheads of beginning and ending the parallel optional parts increase linearly as the number of parallel optional parts becomes large. The overhead of ending the parallel optional parts is the largest of all types of overhead. Therefore, the overall overhead strongly depends on the assignment policy.

signal to the mandatory thread. In Figure 13(a), all assignment policies have approximately the same overheads, regardless of assignment policies. From Figures 13(b) and 13(c), it can be seen that the one by one assignment policy has the highest overhead, whereas the all by all assignment policy has the lowest overhead for all assignment policies. That is, the one by one assignment policy has a high overhead under the load conditions. The time complexity of beginning and ending the parallel optional parts is O(npi ), where npi is the number of parallel optional parts. Hence, the overheads of beginning and ending the parallel optional parts increase linearly when the number of parallel optional parts becomes large. The overhead of ending the parallel optional parts is the largest of all types of overhead. Therefore, the overall overhead strongly depends on the assignment policy. VI.

R ELATED W ORK

TAO [19] is a distributed real-time middleware that implements Real-Time CORBA [20]. TAO uses the framework components and patterns of ACE [21], a middleware that supports many operating systems. In addition, some realtime middleware implements real-time scheduling algorithms [22], [13], [23], [9]. The iLAND middleware [24] uses the SCHED_RR scheduling policy, which schedules real-time tasks by round-robin, in Linux. The RT-Middleware2 [25] does not focus on real-time operation, although a real-time extension of RT-Middleware was presented in the author’s previous work [26]. However, these approaches do not support the imprecise computation model [2] for achieving real-time trading systems. 2 Note

that RT denotes Robot Technology here, not real-time.

Fig. 12. Overhead of the beginning of the parallel optional parts

Fig. 13. Overhead of ending the parallel optional parts

Parallel real-time scheduling algorithms in the imprecise computation model have been developed [27], [28]. However, the imprecise computation model is not practical compared with the parallel-extended imprecise computation model described in this paper, and only theoretical and simulation studies have been performed. The RT-Frontier real-time operating system [29] supports dynamic real-time scheduling algorithms in the extended imprecise computation model on uniprocessors, but does not support multi-/many-core processors. In contrast, the RT-Est real-time operating system [8] supports both uniprocessor and multi-/many-processor semi-fixed-priority scheduling algorithms [5], [6], [7] in the kernel space, whereas the RT-Seed real-time middleware implements semi-fixed-priority scheduling in the user space. Therefore, RT-Seed is superior to RT-Est with respect to customization, robustness, maintainability, and portability.

RT-OpenMP [30], a real-time extension of OpenMP [11], supports a parallel synchronous task model consisting of a sequence of segments, where each segment consists of one or more parallel execution strands3. This is similar to the parallel-extended imprecise computation model, with the important difference that parallel optional parts can be terminated at any time. Hence, by Theorem 2, all semi-fixed-priority scheduling algorithms in the parallel-extended imprecise computation model have the same schedulability as those in the extended imprecise computation model. NVIDIA's GPU scheduling [31], [32] is another approach that supports parallel real-time computing.

are fundamental units of executable code.

GPUs are non-preemptive devices, but the Xeon Phi is a preemptive processor. Hence, the Xeon Phi can implement fully preemptive real-time scheduling. In addition, the parallel-extended imprecise computation model assumes that the parallel optional parts can be terminated at any time. Therefore, the Xeon Phi can be adapted to implement imprecise real-time scheduling such as semi-fixed-priority scheduling [5].

VII. CONCLUSION

This paper presented the RT-Seed real-time middleware for semi-fixed-priority scheduling in real-time trading systems. In particular, RT-Seed supports the parallel-extended imprecise computation model, which has parallel optional parts to improve QoS. The schedulability analysis shows that semi-fixed-priority scheduling in the parallel-extended imprecise computation model has the same schedulability as that in the extended imprecise computation model. The design and implementation of semi-fixed-priority scheduling in the parallel-extended imprecise computation model were introduced. This design can be easily adapted to other operating systems such as VxWorks [15]. The termination of parallel optional parts is effectively implemented using the sigsetjmp/siglongjmp functions with the optional deadline timer. Experimental evaluations investigated the overheads of the parallel optional parts in the parallel-extended imprecise computation model for various hardware thread assignment policies on Intel's Xeon Phi many-core system. Experimental results show that the one by one assignment policy suffers the highest overhead. However, this policy has the potential to improve QoS compared with the other assignment policies, because it assigns parallel optional parts to cores in a uniform manner, thus reducing the contention of hardware resources.

The author's findings suggest that traders should choose an appropriate number of parallel optional parts by considering the overheads associated with beginning and ending the parallel optional parts. The author believes that research on real-time trading systems will become an important topic.


In future work, a practical imprecise computation model [33] that has multiple mandatory parts will be supported for various real-time trading systems. Real-time trading experiments using technical and/or fundamental analyses in RT-Seed are planned using the demo/practice accounts of the OANDA Japan trading company [18]. In addition, new benchmarks will be developed to evaluate the performance of real-time trading systems.


ACKNOWLEDGMENT


This research was supported in part by Keio Gijuku Academic Development Funds and Keio Kougakukai.


REFERENCES

[1] C. L. Liu and J. W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment," Journal of the ACM, vol. 20, no. 1, pp. 46–61, Jan. 1973.
[2] K. Lin, S. Natarajan, and J. Liu, "Imprecise Results: Utilizing Partial Computations in Real-Time Systems," in Proceedings of the 8th IEEE Real-Time Systems Symposium, Dec. 1987, pp. 210–217.
[3] H. Kobayashi and N. Yamasaki, "An Integrated Approach for Implementing Imprecise Computations," IEICE Transactions on Information and Systems, vol. 86, no. 10, pp. 2040–2048, Oct. 2003.
[4] H. Kobayashi, "Real-Time Scheduling of Practical Imprecise Tasks under Transient and Persistent Overload," Ph.D. dissertation, Keio University, Mar. 2006.
[5] H. Chishiro, A. Takeda, K. Funaoka, and N. Yamasaki, "Semi-Fixed-Priority Scheduling: New Priority Assignment Policy for Practical Imprecise Computation," in Proceedings of the 16th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Aug. 2010, pp. 339–348.
[6] H. Chishiro and N. Yamasaki, "Global Semi-Fixed-Priority Scheduling on Multiprocessors," in Proceedings of the 17th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Aug. 2011, pp. 218–223.
[7] ——, "Experimental Evaluation of Global and Partitioned Semi-Fixed-Priority Scheduling Algorithms on Multicore Systems," in Proceedings of the 15th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing, Apr. 2012, pp. 127–134.
[8] ——, "RT-Est: Real-Time Operating System for Semi-Fixed-Priority Scheduling Algorithms," in Proceedings of the 2011 International Symposium on Embedded and Pervasive Systems, Oct. 2011, pp. 358–365.
[9] M. S. Mollison and J. H. Anderson, "Bringing Theory Into Practice: A Userspace Library for Multicore Real-Time Scheduling," in Proceedings of the 19th IEEE Real-Time and Embedded Technology and Applications Symposium, Apr. 2013, pp. 283–292.
[10] J. A. Bollinger, Bollinger on Bollinger Bands, 1st ed. McGraw-Hill, Aug. 2001.
[11] OpenMP Architecture Review Board, OpenMP Application Program Interface Version 3.1, Jul. 2011, http://www.openmp.org/mp-documents/OpenMP3.1.pdf.
[12] M. Frigo, C. E. Leiserson, and K. H. Randall, "The Implementation of the Cilk-5 Multithreaded Language," ACM SIGPLAN Notices, vol. 33, no. 5, pp. 212–223, May 1998.
[13] Y. Zhang, C. Gill, and C. Lu, "Real-Time Performance and Middleware for Multiprocessor and Multicore Linux Platforms," in Proceedings of the 15th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Aug. 2009, pp. 437–446.
[14] B. Andersson, S. K. Baruah, and J. Jonsson, "Static-Priority Scheduling on Multiprocessors," in Proceedings of the 22nd IEEE Real-Time Systems Symposium, Dec. 2001, pp. 193–202.
[15] J. Fiddler, E. Stromberg, and D. N. Wilner, "Software Considerations for Real-Time RISC," in Proceedings of the Compcon Spring 90 Digest of Papers: Thirty-Fifth IEEE Computer Society International Conference, Feb. 1990, pp. 274–277.
[16] Intel Corporation, "Intel Manycore Platform Software Stack (MPSS)," https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-mpss, Jul. 2014.
[17] J. Corbet, "(Nearly) full tickless operation in 3.10," http://lwn.net/Articles/549580/, May 2013.
[18] OANDA Japan Inc., http://www.oanda.jp/, (in Japanese).
[19] D. C. Schmidt, B. Natarajan, A. G. N. Wang, and C. Gill, "TAO: A Pattern-Oriented Object Request Broker for Distributed Real-time and Embedded Systems," IEEE Distributed Systems Online, vol. 3, no. 2, pp. 1027–1033, Jan. 2002.
[20] Object Management Group, Real-time CORBA Specification, formal/05-01-04 ed., Jan. 2005.
[21] D. C. Schmidt, "The ADAPTIVE Communication Environment: An Object-Oriented Network Programming Toolkit for Developing Communication Software," in Proceedings of the 12th Annual Sun Users Group Conference, Dec. 1993, pp. 1–25.
[22] P. Li, B. Ravindran, S. Suhaib, and S. Feizabadi, "A Formally Verified Application-Level Framework for Real-Time Scheduling on POSIX Real-Time Operating Systems," IEEE Transactions on Software Engineering, vol. 30, no. 9, pp. 613–629, Sep. 2004.
[23] A. Zerzelidis and A. Wellings, "A Framework for Flexible Scheduling in the RTSJ," ACM Transactions on Embedded Computing Systems, vol. 10, no. 1, pp. 3:1–3:44, Aug. 2010.
[24] M. García-Valls, I. R. López, and L. F. Villar, "iLAND: An Enhanced Middleware for Real-Time Reconfiguration of Service Oriented Distributed Real-Time Systems," IEEE Transactions on Industrial Informatics, vol. 9, no. 1, pp. 228–236, Feb. 2013.
[25] N. Ando, T. Suehiro, K. Kitagaki, T. Kotoku, and W. K. Yoon, "RT-Middleware: Distributed Component Middleware for RT (Robot Technology)," in Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Aug. 2005, pp. 3555–3560.
[26] H. Chishiro, Y. Fujita, A. Takeda, Y. Kojima, K. Funaoka, S. Kato, and N. Yamasaki, "Extended RT-Component Framework for RT-Middleware," in Proceedings of the 12th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing, Mar. 2009, pp. 161–168.
[27] A. C. Yu and K.-J. Lin, "Scheduling Parallelizable Imprecise Computations on Multiprocessors," in Proceedings of the Fifth International Parallel Processing Symposium, Apr. 1991, pp. 531–536.
[28] H. Fouad, B. Narahari, and J. K. Hahn, "A Real-Time Parallel Scheduler for the Imprecise Computation Model," Parallel and Distributed Real-time Systems, vol. 2, no. 1, pp. 25–36, Jan. 2001.
[29] H. Kobayashi and N. Yamasaki, "RT-Frontier: A Real-Time Operating System for Practical Imprecise Computation," in Proceedings of the 10th IEEE Real-Time and Embedded Technology and Applications Symposium, May 2004, pp. 255–264.
[30] D. Ferry, J. Li, M. Mahadevan, K. Agrawal, C. Gill, and C. Lu, "A Real-Time Scheduling Service for Parallel Tasks," in Proceedings of the 19th IEEE Real-Time and Embedded Technology and Applications Symposium, Apr. 2013, pp. 261–272.
[31] S. Kato, K. Lakshmanan, A. Kumar, M. Kelkar, Y. Ishikawa, and R. Rajkumar, "RGEM: A Responsive GPGPU Execution Model for Runtime Engines," in Proceedings of the 32nd IEEE Real-Time Systems Symposium, Dec. 2011, pp. 57–66.
[32] G. Elliott, B. Ward, and J. Anderson, "GPUSync: A Framework for Real-Time GPU Management," in Proceedings of the 35th IEEE Real-Time Systems Symposium, Dec. 2013, pp. 33–44.
[33] H. Chishiro and N. Yamasaki, "Semi-Fixed-Priority Scheduling with Multiple Mandatory Parts," in Proceedings of the 16th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing, Jun. 2013, pp. 1–8.
