/design/trunk/src/ch_time.xml
---
45,18 → 45,18
<para>The rest of this section will, for the sake of clarity, focus on the
two-register scheme. The decrementer scheme is very similar.</para>
<para>The kernel must reinitialize the counter registers after each clock
interrupt in order to schedule the next interrupt. However, this step is
tricky and must be done with caution. Imagine that the clock interrupt is
masked, either because the kernel is servicing another interrupt or because
the processor has locally disabled interrupts for a while. If the clock
interrupt occurs during this period, it will be pending until interrupts
are enabled again. In theory, that can happen an arbitrary number of
counter register ticks later. What is worse, the ideal time period between
two non-delayed clock interrupts can also elapse an arbitrary number of
times before the delayed interrupt gets serviced. The architecture-specific
part of the clock interrupt driver must take proactive counter-measures to
avoid the time drift that these delays would otherwise cause.</para>
<para>The kernel must reinitialize one of the two registers after each
clock interrupt in order to schedule the next interrupt. However, this step
is tricky and must be done with caution. Imagine that the clock interrupt
is masked, either because the kernel is servicing another interrupt or
because the processor has locally disabled interrupts for a while. If the
clock interrupt occurs during this period, it will be pending until
interrupts are enabled again. In theory, that can happen an arbitrary
number of counter register ticks later. What is worse, the ideal time
period between two non-delayed clock interrupts can also elapse an
arbitrary number of times before the delayed interrupt gets serviced. The
architecture-specific part of the clock interrupt driver must take
proactive counter-measures to avoid the time drift that these delays would
otherwise cause.</para>
<para>Let us assume that the kernel wants each clock interrupt to be
generated every <constant>TICKCONST</constant> ticks. This value
/design/trunk/src/ch_arch_overview.xml
---
51,7 → 51,9
maps to one kernel thread. Threads are grouped into tasks by the
functionality they provide (i.e., several threads implement the
functionality of one task). Tasks serve as containers of threads; they
provide linkage to the address
space and are communication endpoints for IPC.</para>
space and are communication endpoints for IPC. Finally, tasks can be
holders of capabilities that entitle them to perform certain sensitive
operations (e.g., access raw hardware and physical memory).</para>
<para>The scheduler deploys several run queues on each processor. A thread
ready for execution is put into one of the run queues, depending on its
109,6 → 111,4
tasks. Calls can be synchronous or asynchronous and can be forwarded from
one task to another.</para>
</section>
</chapter>
/design/trunk/src/ch_scheduling.xml
---
42,7 → 42,7
architecture. To highlight some, the program counter and stack pointer
take part in the synchronous register context. These are the registers
that must be preserved across a procedure call and during synchronous
context switches. </para>
context switches.</para>
<para>The next type of context understood by the kernel is the
asynchronous register context. On an interrupt, the interrupted execution
103,8 → 103,47
</section>
<section>
<title>Scheduler</title>
<title>Threads</title>
<para>How the scheduler is designed and how it works.</para>
<para>A thread is the basic executable entity, comprising some code and a
stack. While the code, implemented by a C language function, can be shared
by several threads, the stack is always private to each instance of the
thread. Each thread belongs to exactly one task, through which it shares
the address space with its sibling threads. Threads that execute purely in
the kernel do not have any userspace memory allocated. However, when a
thread is to run in userspace as well, it must be allocated a userspace
stack. The distinction between the two groups is made by referring to the
former as kernel threads and to the latter as userspace threads. Both
kernel and userspace threads are visible to the scheduler and can become
subject to kernel preemption and thread migration during times when
preemption is possible.</para>
<para>The HelenOS userspace layer knows even smaller units of execution.
Each userspace thread can make use of an arbitrary number of pseudo
threads. These pseudo threads have their own synchronous register context,
userspace code and stack. They live their own life within their containing
userspace thread, and the scheduler knows nothing about them because they
are implemented entirely by the userspace library. This implies several
things:</para>
<itemizedlist>
<listitem>
<para>pseudo threads schedule themselves cooperatively within the time
slice given to their userspace thread,</para>
</listitem>
<listitem>
<para>pseudo threads share the FPU context of their containing thread
and</para>
</listitem>
<listitem>
<para>all pseudo threads of one userspace thread block when one of them
goes to sleep.</para>
</listitem>
</itemizedlist>
</section>
</chapter>