Machine-independent implementation of cooperative multi-threading in C

By Frans Faase

One would expect that to implement any form of multi-threading or multi-tasking, it would be required to use some kind of low-level machine programming, or at least to have access to some system calls of the operating system. Here we want to show that some form of multi-threading can be implemented in C in a way that is completely machine and operating system independent. It requires some additional coding, but with some cleverly chosen preprocessor symbols, the C preprocessor can be made to do most of the work.

Introduction

Multi-tasking is the ability of an operating system to execute more than one executable at the same time. Each executable runs in its own address space, meaning that the executables have no way to share any of their memory. This is nice, because it makes it impossible for any program to damage the execution of any of the other programs running on the system. The programs have no way to exchange information except through the operating system (or by reading files stored on the file system).

Multi-threading is in a sense like multi-tasking, except that inside a single executable (process) there are multiple lines of execution, also called threads. All these threads execute in the same memory space. This requires some mechanism to keep the different threads from modifying the memory in a disorderly way. This is usually done with a locking mechanism based on semaphores. The programmer has to take care that these are applied in the correct places, otherwise the program can behave in unexpected ways. Using multi-threading inside an executable often requires a lot of effort from the programmer, so why would you want to use multi-threading at all? In case a program has to perform tasks with different priorities, multi-threading may be the only solution. For example, one may want the system to respond immediately to each key-stroke and mouse interaction, but at the same time be able to do some form of background processing. In this case it is handy to run the background processing in a different thread than the foreground processing, which needs to respond to all the user interaction. (Note that it would also be possible to implement this scheme in a multi-tasking environment with interprocess communication.)

There are two forms of multi-tasking and multi-threading, namely: preemptive and cooperative. Cooperative multi-threading means that the threads themselves decide at which moments they give another thread an opportunity to do some processing. Preemptive multi-threading means that there is some kind of outside agent that stops threads at regular intervals and determines which thread runs next. The problem with cooperative multi-threading is that it depends on the willingness of threads to make room for other threads. If one thread gets stuck in an endless loop, none of the other threads will ever run again. (This is why in Windows 3.11 one program could hang the whole system.) The problem with preemptive multi-threading is that the execution of each thread can be interrupted at any moment. At first this might not seem a problem, but there are times at which a thread may not want to be interrupted. It also means that the number of possible interleavings of the execution of the threads increases significantly, beyond comprehension.

But the intention of this page is not to discuss all the ins and outs of multi-threaded programming, as this subject is described elsewhere in sufficient detail. We will continue with the question of how to implement cooperative multi-threading in standard C without any operating-system-specific or special libraries.

The nature of procedural execution

In most imperative languages (including C), each execution of a procedure works on its own set of local parameters. That is why recursive procedure (or function) calls are possible, where a procedure calls itself (possibly with some other procedures in between). Furthermore, the execution of the calling procedure is suspended until the procedure being called has finished executing. Very early on it was discovered that this form of procedural execution could be implemented very efficiently using a stack built into the processor. At first this processor stack was only used to remember where to continue execution in the calling procedure (the so-called return address); later on it was also used to hold the local variables of the procedures.

In order to implement multi-threading in a language with procedural execution, a separate processor stack is needed for each thread. In a machine-specific approach this means that there must be some provision in the operating system to swap between processor stacks. This differs from operating system to operating system. A machine-independent solution is not to use the processor stack, but to build a stack for each thread using memory allocated from the heap. (Of course, the processor stack is still used by the procedures managing all the threads.) In order to implement this idea, the following needs to be done:

Representing the list of nested procedure calls

The list of nested procedure calls can easily be represented by a linked list, as for example is done by the following C structure definitions:
typedef struct procedure_T
{
  struct procedure_T *called_by;
} procedure_t; 

typedef struct thread_T
{
  procedure_t *cur_procedure;
} thread_t;
Note that the thread structure points to the deepest nested procedure being called, and that each procedure points to its calling procedure.

Storing local data

Because each procedure has its own set of parameters and local variables, we need to define a structure for each procedure that can be part of a thread. Take the following procedure definition as an example:
void f(int a, int b, int *r)
{
  int c;
  c = a + b;
  c += 2 * (c + a) * b;
  *r = c;
}
Then the following structure should be defined:
typedef struct
{ int a;
  int b;
  int *r;
  int c;
} f_local_data_t;
Now each procedure needs to be rewritten into a procedure which is called with a pointer to this structure. For the above procedure this would look like:
 
void exec_f(f_local_data_t *local_data)
{
  local_data->c = local_data->a + local_data->b;
  local_data->c += 2 * (local_data->c + local_data->a) * local_data->b;
  *(local_data->r) = local_data->c;
}  
This can be made more readable, by introducing a preprocessor symbol for the local_data-> part, as follows:
#define V(X) (local_data->X)

void exec_f(f_local_data_t *local_data)
{
  V(c) = V(a) + V(b);
  V(c) += 2 * (V(c) + V(a)) * V(b);
  *V(r) = V(c);
}  
(If you are worried about the fact that this code is less efficient than the original code: there is a trick to overcome this.
Read on.)

Extending the stack structure

In the stack structure there should be pointers to the local data and to the functions (like the one just above) which can execute on this local data. Because the stack structure has to point to all kinds of structures like f_local_data_t, we will use a void pointer with the name local_data in the type procedure_t. Furthermore, if we want the stack structure to point to all the possible procedures like exec_f, they should have the same signature. One way to solve this is to let them operate on the thread structure. For our example function exec_f, this means it needs to be changed into:
void exec_f(thread_t *thread)
{ f_local_data_t *local_data = (f_local_data_t*)thread->cur_procedure->local_data;
  V(c) = V(a) + V(b);
  V(c) += 2 * (V(c) + V(a)) * V(b);
  *V(r) = V(c);
}
Now the stack types can be extended in the following way:
typedef struct procedure_T
{
  struct procedure_T *called_by;
  void (*exec_procedure)(struct thread_T *);
  void *local_data;
} procedure_t; 

typedef struct thread_T
{
  procedure_t *cur_procedure;
} thread_t;

Calling an execution procedure

Below we first define a procedure which implements the calling of a procedure with our stack structure. As arguments it takes a pointer to a thread, a pointer to the execution procedure, and a pointer to its local data (as a void pointer).
#define MALLOC(T) (T*)malloc(sizeof(T))

void call_procedure(thread_t *thread, void (*exec_procedure)(thread_t *), void *local_data)
{
 procedure_t *new_procedure = MALLOC(procedure_t);
 
 new_procedure->called_by = thread->cur_procedure;
 new_procedure->exec_procedure = exec_procedure;
 new_procedure->local_data = local_data;
 thread->cur_procedure = new_procedure;
}
Now, a call to our original procedure f of the form "f(x, y, &z);", needs to be translated into:
  { f_local_data_t *local_data_f = MALLOC(f_local_data_t);

    local_data_f->a = x;
    local_data_f->b = y;
    local_data_f->r = &z;

    call_procedure(thread, exec_f, (void*)local_data_f);
  }
The above code does not actually call the procedure f; it only modifies the stack such that it can be executed. Assuming that we follow the convention that all parameters are given names like arg1, arg2, and so on, it is possible to define a preprocessor symbol which does most of the work. This would look like:
#define CALL3(T,F,A,B,C) { F##_local_data_t *local_data_##F = MALLOC(F##_local_data_t);\
         local_data_##F->arg1 = A;\
         local_data_##F->arg2 = B;\
         local_data_##F->arg3 = C;\
         call_procedure(T, exec_##F, (void*)local_data_##F); }
(The use of "##" inside a C preprocessor macro definition means that the tokens on both sides are glued together after substitution. Not all older preprocessors support this token-pasting operator; some have other means to achieve the same.) Calling the original procedure f would now look like:
  CALL3(thread, f, x, y, &z)
Now at the end of the definition of exec_f there should be some code that removes it from the stack again. This can be done by calling the following procedure:
void return_procedure(thread_t *thread)
{
 procedure_t *called_procedure = thread->cur_procedure;
 
 thread->cur_procedure = called_procedure->called_by;
 free(called_procedure->local_data);
 free(called_procedure);
}
In order to execute a thread the following procedure can be used:
void run_thread(thread_t *thread)
{
  while(thread->cur_procedure != NULL)
    (thread->cur_procedure->exec_procedure)(thread);
}
To summarize the above, the following code is needed to execute the original procedure call "f(x, y, &z);" in a thread:
  { thread_t *thread = MALLOC(thread_t);
    thread->cur_procedure = NULL;

    CALL3(thread, f, x, y, &z)

    run_thread(thread);

    free(thread);
  }  
Assuming that f_local_data_t and exec_f are defined as follows:
typedef struct
{ int arg1;
  int arg2;
  int *arg3;
  int c;
} f_local_data_t;

void exec_f(thread_t *thread)
{ f_local_data_t *local_data = (f_local_data_t*)thread->cur_procedure->local_data;

  V(c) = V(arg1) + V(arg2);
  V(c) += 2 * (V(c) + V(arg1)) * V(arg2);
  *V(arg3) = V(c);

  return_procedure(thread);
}

Execution of multiple threads

So far, only the method to execute a single procedure in a thread was discussed. If several threads are to be executed in parallel, there should be a way to switch between threads. With cooperative multi-threading the threads themselves have to indicate when they are willing to give room to another thread. Such a point is called a reschedule point. Let us assume we have the following procedure:
void exec_proc1(thread_t *thread)
{ 
  /* some code */

  /* a reschedule point */

  /* some more code */
}
This means that at the reschedule point the procedure is willing to let another thread continue with the execution, and that after some time the rest of the code is executed. If the execution of exec_proc1 is halted at the reschedule point, some mechanism is needed to remember where the execution should continue the next time. To solve this, the
type procedure_t is extended with an integer state field, and now becomes:
typedef struct procedure_T
{
  struct procedure_T *called_by;
  void (*exec_procedure)(struct thread_T *);
  void *local_data;
  int state;
} procedure_t; 
Of course, the procedure exec_proc1 should be adapted to deal with the state. At each reschedule point there will be a statement setting the state field of the type procedure_t, followed by a return statement. The next time exec_proc1 is called, it should jump to the statement following the reschedule point. C allows us to jump to any statement in a procedure. To achieve this, a switch statement needs to be added, which depending on the state jumps to the right label using a goto statement. Yes, I know, goto statements are considered evil, but in this case they are the best available solution [1]. To ease the pain a little, the following preprocessor symbol definitions can be used to camouflage the use of goto:
#define START_RP_DEF  switch (thread->cur_procedure->state) {
#define RP_DEF(X) case X: goto rp##X;
#define END_RP_DEF }

#define RESCHEDULE(X) thread->cur_procedure->state = X; return; rp##X:
With these defines, the execution procedure exec_proc1 can now be written like:
void exec_proc1(thread_t *thread)
{ 
  START_RP_DEF
    RP_DEF(1)
  END_RP_DEF

  /* some code */

  /* a reschedule point */
  RESCHEDULE(1)

  /* some more code */

}
We assume that the state field has been initialized to zero before the execution procedure is called the first time. The only care that needs to be taken is that each reschedule point has an RP_DEF line. If this is omitted, the execution will continue from the start again. Putting in an RP_DEF for a non-existing reschedule point will result in a compile-time error.

Making nested calls inside threads

To call an execution procedure from inside an execution procedure, it is important to exit the current execution procedure and let the run_thread procedure continue with the execution procedure being called. Calling a procedure thus also introduces a reschedule point. The procedure call_procedure needs to be adapted to also set the state field, in the following way:
#define MALLOC(T) (T*)malloc(sizeof(T))

void call_procedure(thread_t *thread, int rp, void (*exec_procedure)(thread_t *), void *local_data)
{
 procedure_t *new_procedure = MALLOC(procedure_t);

 thread->cur_procedure->state = rp; 
 new_procedure->called_by = thread->cur_procedure;
 new_procedure->exec_procedure = exec_procedure;
 new_procedure->local_data = local_data;
 new_procedure->state = 0;
 thread->cur_procedure = new_procedure;
}
The preprocessor define for calling an execution procedure with three arguments should now become:
#define CALL3(R,F,A,B,C) { F##_local_data_t *local_data_##F = MALLOC(F##_local_data_t);\
         local_data_##F->arg1 = A;\
         local_data_##F->arg2 = B;\
         local_data_##F->arg3 = C;\
         call_procedure(thread, R, exec_##F, (void*)local_data_##F); }\
         return; rp##R:
Note that the first argument to this preprocessor symbol is the unique reschedule point. Of course an RP_DEF line with this reschedule point should occur at the top of the execution procedure making the call. It is assumed that the thread parameter is defined in the context where this symbol is used.

The thread manager

Although we suggested that at a reschedule point another thread would continue execution, the
run_thread procedure defined above would just continue executing the one and only thread. To manage more than one thread, a thread manager is needed. The thread manager should manage a collection of threads, and the run_thread procedure should stop executing a thread at a reschedule point. For this the thread structure needs to be extended with a running mode. These modifications result in the following redefinition of the thread_t type:
/* possible values for running_mode */
#define RM_SUSPENDED  0
#define RM_EXECUTING  1

typedef struct thread_T
{
  struct thread_T *next;
  int running_mode;
  procedure_t *cur_procedure;
} thread_t;
The definitions of RESCHEDULE and run_thread should now become:
#define RESCHEDULE(X) thread->cur_procedure->state = X; \
                      thread->running_mode = RM_SUSPENDED; return; rp##X:

void run_thread(thread_t *thread)
{
  while(thread->running_mode == RM_EXECUTING &&
        thread->cur_procedure != NULL)
    (thread->cur_procedure->exec_procedure)(thread);
}
A thread manager should also implement a strategy for executing the threads after each other. The most commonly used strategy is the round-robin strategy. As soon as a thread signals a reschedule, it is put at the end of the queue of threads, and the next thread in the queue is executed. For this we need a procedure for adding a thread to the end of a list of threads:
void append_thread(thread_t **ref_the_threads, thread_t *a_thread)
{
  while (*ref_the_threads != NULL)
    ref_the_threads = &((*ref_the_threads)->next);
  *ref_the_threads = a_thread;
}
A simple thread manager would now look like:
thread_t *runnable_threads = NULL;

void exec_threads()
{
  while (runnable_threads != NULL)
  { thread_t *first_thread;

    /* Take the first thread of the list */
    first_thread = runnable_threads;
    runnable_threads = first_thread->next;
    first_thread->next = NULL;

    /* Execute the first thread */
    first_thread->running_mode = RM_EXECUTING;
    run_thread(first_thread);

    /* If thread has not completed, append it to the end of the list */
    if (first_thread->cur_procedure != NULL)
      append_thread(&runnable_threads, first_thread);
  }
}
The variable runnable_threads was placed outside the procedure on purpose, otherwise it would not be possible to create threads from within threads. The next section deals with the topic of thread creation.

Thread creation

Whenever a thread is created, the procedure which should be executed as the top-level procedure of the thread, and its local data should be provided. The following procedure creates a thread with these parameters and places it at the end of the queue of threads.
void create_thread(void (*exec_procedure)(thread_t *), void *local_data)
{
 procedure_t *new_procedure = MALLOC(procedure_t);
 thread_t *new_thread = MALLOC(thread_t);

 new_procedure->called_by = NULL;
 new_procedure->exec_procedure = exec_procedure;
 new_procedure->local_data = local_data;
 new_procedure->state = 0;

 new_thread->running_mode = RM_SUSPENDED;
 new_thread->next = NULL;
 new_thread->cur_procedure = new_procedure;

 append_thread(&runnable_threads, new_thread);
}
Note that thread creation can be done both outside the execution of exec_threads, as a way to create the initial queue of threads, and during the execution of a thread within exec_threads, as a way of dynamic thread creation.

Similarly to the CALL defines, we can also make defines for thread creation. Below the definition of CREATE3 is given.
#define CREATE3(F,A,B,C) { F##_local_data_t *local_data_##F = MALLOC(F##_local_data_t);\
         local_data_##F->arg1 = A;\
         local_data_##F->arg2 = B;\
         local_data_##F->arg3 = C;\
         create_thread(exec_##F, (void*)local_data_##F); }

Synchronization between threads

With cooperative multi-threading it is, technically speaking, not necessary to implement synchronization between threads, as it would be with preemptive multi-threading, because threads can simply delay rescheduling until they are done with their work. However, if some form of interrupt handling were implemented, it could be possible that some threads want to sleep until they are woken by a certain event.

Below a simple signaling method is described. Basically, there are two primitives. One is "wait for signal", the other is "give signal". Both of these primitives need an argument to identify the particular signal they synchronize on. For this purpose the type signal_t is introduced. This type just contains a list of waiting threads, which are later queued in the list of runnable threads. The type signal_t is defined as follows:
typedef struct
{
  thread_t *waiting_threads;
} signal_t;
The "wait for signal" primitive should place the thread from which it is executed into the list of waiting threads of the signal. It should also introduce a waiting point, and to keep
exec_threads from putting it in the queue of runnable threads, an additional running mode needs to be introduced. Because the execution of the thread is suspended, a reschedule point needs to be introduced as well. Below the preprocessor symbol implementing the "wait for signal" primitive is given:
#define RM_WAIT_FOR_SIGNAL 2
#define WAIT_FOR_SIGNAL(X,S) append_thread(&(S)->waiting_threads, thread); \
                      thread->cur_procedure->state = X; \
                      thread->running_mode = RM_WAIT_FOR_SIGNAL; return; rp##X:
The procedure exec_threads needs to be modified. An additional condition should be added to prevent the thread from being queued as a runnable thread:
    /* If thread has not completed, append it to the end of the list */
    if (first_thread->cur_procedure != NULL
        && first_thread->running_mode == RM_SUSPENDED)
      append_thread(&runnable_threads, first_thread);
Now only the primitive "give signal" needs to be implemented, which is rather straightforward. Because signal is already the name of a standard C library function, the procedure is named give_signal:
void give_signal(signal_t *a_signal)
{
  append_thread(&runnable_threads, a_signal->waiting_threads);
  a_signal->waiting_threads = NULL;
}

#define GIVE_SIGNAL(S) give_signal((S));
(I admit, this code is a little tricky, because it uses the fact that append_thread does not modify the next field of the thread being added. Thus it allows you to append a whole list of threads.)

Resource locking

Another form of waiting occurs when a thread wants to make use of a shared resource. In this case only the first thread waiting for the resource needs to be made runnable when the resource is released. It is also necessary to remember which thread has claimed the resource. There are two primitives: "claim resource" and "release resource". Again we define a type, called resource_t, as the means of identifying a resource and administrating its state.
typedef struct
{
  thread_t *requesting_threads;
  thread_t *claimed_by;
} resource_t;
The "claim resource" primitive makes use of the function claim_resource, which returns one if the resource could be claimed, and zero otherwise. If the resource could not be claimed, the thread has to be suspended until the resource becomes available. For this a reschedule point is needed. Also a new running mode is introduced. This results in the following definitions:
#define RM_WAIT_FOR_RESOURCE 3
#define CLAIM_RESOURCE(X,R) if (!claim_resource(thread, R)) \
                      { thread->cur_procedure->state = X; \
                        thread->running_mode = RM_WAIT_FOR_RESOURCE; return; } \
                      rp##X:

int claim_resource(thread_t *a_thread, resource_t *a_resource)
{
  if (a_resource->claimed_by == NULL)
  {
    a_resource->claimed_by = a_thread;
    return 1;
  }
  else
  {
    append_thread(&a_resource->requesting_threads, a_thread);
    return 0;
  }
}
Although there is no reason to reschedule a thread which is releasing a resource, it still might be wise to do so, because otherwise there is a chance that the current thread will claim the resource again before any of the other threads can do so, which could be an undesirable effect. Here we leave it open to the user to add a RESCHEDULE statement or not. The procedure for releasing a resource looks as follows:
#define RELEASE_RESOURCE(R) release_resource(R);

void release_resource(resource_t *a_resource)
{
  a_resource->claimed_by = NULL;
  if (a_resource->requesting_threads != NULL)
  {
    a_resource->claimed_by = a_resource->requesting_threads;
    a_resource->requesting_threads = a_resource->requesting_threads->next;
    append_thread(&runnable_threads, a_resource->claimed_by);
  }
}
As you can see, this procedure takes care of making the next thread requesting the resource runnable, and also makes it claim the resource immediately, because otherwise another thread might claim it in the meantime.

Dead-locks

As soon as resource locking has been introduced, dead-locks can occur whenever there is a cycle of threads, each having locked some resource that the next thread in the cycle is waiting for. There is a simple test to see if a dead-lock has occurred, as there must be a last thread closing the cycle. Each thread can wait for at most one resource, and each resource can be claimed by at most one thread. It is thus sufficient to add a pointer in the
thread structure to the resource it is waiting for, as follows [2]:
typedef struct thread_T
{
  struct thread_T *next;
  int running_mode;
  procedure_t *cur_procedure;
  resource_t *waiting_for_resource;
} thread_t;
Furthermore, the new field waiting_for_resource has to be set correctly in claim_resource and release_resource as follows:
int claim_resource(thread_t *a_thread, resource_t *a_resource)
{
  if (a_resource->claimed_by == NULL)
  {
    a_resource->claimed_by = a_thread;
    return 1;
  }
  else
  {
    a_thread->waiting_for_resource = a_resource;
    append_thread(&a_resource->requesting_threads, a_thread);
    return 0;
  }
}

void release_resource(resource_t *a_resource)
{
  a_resource->claimed_by = NULL;
  if (a_resource->requesting_threads != NULL)
  {
    a_resource->claimed_by = a_resource->requesting_threads;
    a_resource->requesting_threads = a_resource->requesting_threads->next;
    a_resource->claimed_by->waiting_for_resource = NULL;
    append_thread(&runnable_threads, a_resource->claimed_by);
  }
}
With this, we can define a function that returns one in case claiming a given resource by a given thread would result in a dead-lock situation, and zero otherwise:
int dead_lock(thread_t *a_thread, resource_t *a_resource)
{
  if (a_resource->claimed_by != NULL)
  {
    thread_t *cb_thread = a_resource->claimed_by;

    while (cb_thread != NULL && cb_thread != a_thread)
    {
      resource_t *wf_resource = cb_thread->waiting_for_resource;

      if (wf_resource == NULL)
        cb_thread = NULL;
      else
        cb_thread = wf_resource->claimed_by;
    }

    return cb_thread == a_thread;
  }

  return 0;
}
This function can be used when the programmer suspects that a certain resource claim could result in a dead-lock situation. Normally, the occurrence of dead-locks should be prevented by always claiming resources in a fixed order. In some cases this cannot be avoided, or is less convenient. However, it requires additional programming to deal with the occurrence of a dead-lock, which means releasing earlier claimed resources. Here, too, prevention is better than cure.

Timers

Sometimes it is needed to suspend a thread until a certain period has elapsed. In our implementation, no special provision for timers is needed. It can simply be handled by the thread itself, which in a loop checks whether the time has passed, and performs a rescheduling as long as the desired time has not been reached. Of course, it can be implemented as part of the thread structure, if needed.

Prioritized threads

A challenging extension is to give threads different priorities depending on their importance. Implementing priorities is far from trivial, and requires a lot of changes to the code that was presented so far.


Performance improvements

It is indeed the case that the use of local_data->X introduces an additional pointer dereference. All parameters and local variables need at least one pointer dereference, because they are stored on the stack. On modern processors, even global variables need a pointer dereference, because they are placed in a global data segment. The use of local_data->X adds one more pointer dereference for the local variables. If the local variables are referenced a lot, there is a construct that might actually save execution time. The idea is to store the data that local_data points to on the stack with an additional copy operation, and to copy it back afterwards. So we rewrite the example code:
#define V(X) local_data->X

void exec_f(thread_t *thread)
{ f_local_data_t *local_data = (f_local_data_t*)thread->cur_procedure->local_data;

  V(c) = V(arg1) + V(arg2);
  V(c) += 2 * (V(c) + V(arg1)) * V(arg2);
  *V(arg3) = V(c);

  return_procedure(thread);
}
into the code:
#define V(X) local_data.X

void exec_f(thread_t *thread)
{ f_local_data_t local_data;
  memcpy(&local_data, thread->cur_procedure->local_data, sizeof(local_data));

  V(c) = V(arg1) + V(arg2);
  V(c) += 2 * (V(c) + V(arg1)) * V(arg2);
  *V(arg3) = V(c);

  memcpy(thread->cur_procedure->local_data, &local_data, sizeof(local_data));
  return_procedure(thread);
}
Good optimizing compilers translate memcpy into memory moving instructions, which are known to be very fast.

Use the C Preprocessor

Conclusion

....

The trick we have used to implement cooperative multi-threading is that of making objects out of procedure calls. In an object-oriented language we would have created one class (probably as a subclass of a class named ThreadProcedure) for each procedure. Calling a procedure is then nothing more than creating an object of this class. In practice the technique of turning procedures into classes is also used for other types of implementation problems. Think for example of the abstract factory design pattern, in which the procedure to create and initialize an object is encapsulated in a class which produces these kinds of objects. Also the state design pattern is a method of encapsulating a procedure inside an object.

...


Footnotes

  1. The alternative would be to place all the different parts of the code inside a big switch statement. This was my initial idea for solving this problem. But if the reschedule point occurs in the middle of a control statement (e.g., an if or while statement), the code needs to be flattened out. And there is no easy way of doing this other than with the use of goto statements, which are exactly the things we wanted to avoid in the first place.

  2. With this type definition we did not take into account the particular order in which the various type definitions occur in the actual source. It might thus be needed to replace "resource_t" by something like "struct resource_T", and to modify the definition of resource_t accordingly.

