Multiplexed thread scheduling
Multiplexed threads are subject to two levels of scheduling:
- Threads Library Scheduling:
  The Threads Library scheduler assigns multiplexed threads to LWPs for
  execution and, at times, preempts them so the LWP can pick up another
  thread.
- System Scheduling:
  The kernel assigns LWPs to (hardware) processors and later preempts them.
The Threads Library maintains a ``priority level''
for each multiplexed thread.
This value plays a role in the selection of a thread for assignment
to an LWP.
The priority value of a multiplexed thread can be modified
by any other thread in the process via the
thr_setprio(S)
function.
int thr_setprio(
        thread_t tid,   /* thread whose priority is to be changed */
        int prio        /* new priority value */
);
Runnable, multiplexed threads are scheduled
for execution as follows:
- A thread with a higher priority value will be scheduled to run
  before a thread with a lower value.
- Threads with the same priority will be scheduled on a
  first-come, first-served basis.
- The valid range of priorities is 0 to MAXINT-1; however, the
  Threads Library is optimized for a maximum priority of 126 (or less).
The Threads Library must select a thread for assignment to an LWP
on the following occasions:
- When an LWP becomes available, the highest-priority runnable
  multiplexed thread will be assigned to it. For example, an LWP
  becomes available when a thread exits, when a multiplexed thread
  blocks on a thread synchronization mechanism (discussed later),
  or when the concurrency level is increased.
- When a multiplexed thread becomes runnable
  (perhaps a mutex has been released by one thread and acquired by another),
  it will preempt a running multiplexed thread of lower priority.
- When an executing thread calls thr_yield(S), it deliberately
  surrenders its LWP to a runnable thread of equal or higher
  priority (if any).
Threads Library scheduling and system scheduling
are independent of each other.
- The Threads Library can assign a thread to an LWP but cannot
  control when that LWP will actually execute.
- The kernel is unaware that the Threads Library is using LWPs
  to implement (user-level) threads.
The kernel maintains its own scheduling context
(for example, current priority, ``nice value'', priority class)
that is separate from similar features that the Threads Library
maintains for threads.
The interaction of these two levels of scheduling can produce some
interesting effects. Additional points to consider:
- A thread that is blocked in a system call will remain with its
  LWP until that system call returns.
- Each LWP in the pool used for multiplexed threads is of the same
  kernel scheduling class (that is, time-sharing or fixed priority).
  That class is determined by the scheduling class of the LWP
  running the initial thread of the program.
- One step in associating a thread with an LWP is to make the
  signal mask of the LWP agree with that of the thread. On each
  thread context switch there is a check for agreement; if the mask
  of the new thread differs from that of the prior thread, a system
  call is made to update the mask of the LWP. One implication is
  that using threads with a wide variety of signal masks can add to
  the cost of switching threads.
© 2005 The SCO Group, Inc. All rights reserved.
SCO OpenServer Release 6.0.0 -- 02 June 2005