      <listitem>
        <para>all pseudothreads of one userspace thread block when one of them
        goes to sleep.</para>
      </listitem>
    </itemizedlist>
  </section>

  <section>
    <title>Scheduler</title>

    <section>
      <title>Run queues</title>

      <para>There is an array of several run queues on each processor. The
      current version of HelenOS uses 16 run queues implemented by 16 doubly
      linked lists. Each run queue is associated with a thread priority. The
      lower the run queue index in the array, the higher the priority of the
      threads linked in that run queue and the shorter the time slices in
      which those threads execute. When kernel code wants to access a run
      queue, it must first acquire its lock.</para>
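
      <para>The following C fragment sketches one possible layout of this
      per-processor structure. The type and field names are illustrative only
      and do not correspond to the actual HelenOS sources.</para>

      <programlisting><![CDATA[
#include <stdbool.h>

#define RQ_COUNT  16                 /* one run queue per priority level */

/* Doubly linked list linkage embedded in each ready thread. */
typedef struct list_node {
    struct list_node *prev;
    struct list_node *next;
} list_node_t;

/* One run queue: a lock, the list head and the number of linked threads. */
typedef struct run_queue {
    bool lock;                       /* stand-in for a real spinlock */
    list_node_t head;                /* threads of this priority ready to run */
    int count;
} run_queue_t;

/* Per-processor scheduling state: index 0 holds the highest priority
 * threads, index RQ_COUNT - 1 the lowest priority ones. */
typedef struct cpu {
    run_queue_t rq[RQ_COUNT];
} cpu_t;
]]></programlisting>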
    </section>

    <section>
      <title>Scheduler operation</title>

      <para>The scheduler is invoked either explicitly, when a thread calls
      the <code>scheduler</code> function (e.g. because it goes to sleep or
      merely wants to relinquish the processor for a while), or implicitly on
      a periodic basis, when the generic clock interrupt preempts the current
      thread. After its invocation, the scheduler saves the synchronous
      register context of the current thread and switches to its private
      stack. Afterwards, a new thread is selected according to the scheduling
      policy. If there is no suitable thread, the processor stays idle and no
      thread executes on it. Note that the act of switching to the private
      scheduler stack is essential. If the processor kept running on the
      stack of the preempted thread, it could damage it, because the old
      thread can be migrated to another processor and scheduled there. In the
      worst case scenario, two execution flows would be using the same
      stack.</para>
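
      <para>The control flow described above can be summarized by the
      following simplified C sketch. All functions are stubs standing in for
      architecture-specific primitives; none of the names below are the
      actual HelenOS interfaces.</para>

      <programlisting><![CDATA[
typedef struct context { int placeholder; } context_t;
typedef struct thread { context_t saved_context; } thread_t;

typedef struct cpu {
    thread_t *current;
    char sched_stack[4096];              /* private scheduler stack */
} cpu_t;

/* Stubs standing in for the architecture-specific primitives. */
static void context_save_stub(context_t *ctx) { (void) ctx; }
static void switch_to_stack_stub(void *stack_top) { (void) stack_top; }
static thread_t *find_best_thread_stub(cpu_t *cpu) { return cpu->current; }
static void context_restore_stub(context_t *ctx) { (void) ctx; }

/* Entered either explicitly by the running thread or from the clock
 * interrupt handler when the current time slice expires. */
void scheduler_sketch(cpu_t *cpu)
{
    /* 1. Save the synchronous register context of the current thread. */
    context_save_stub(&cpu->current->saved_context);

    /* 2. Leave the preempted thread's stack; from here on the scheduler
     *    runs on the processor's private stack, so the old thread can be
     *    migrated and run elsewhere without two flows sharing one stack. */
    switch_to_stack_stub(cpu->sched_stack + sizeof(cpu->sched_stack));

    /* 3. Select the next thread (or idle if none is ready) and resume it. */
    thread_t *next = find_best_thread_stub(cpu);
    cpu->current = next;
    context_restore_stub(&next->saved_context);
}
]]></programlisting>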

      <para>The scheduling policy is implemented in the function
      <code>find_best_thread</code>. This function walks the processor's run
      queues from lower towards higher indices and looks for a thread. If the
      visited run queue is empty, it simply searches the next run queue. If
      it is known in advance that there are no ready threads waiting for
      execution, <code>find_best_thread</code> interruptibly halts the
      processor or busy waits until some threads arrive. This process repeats
      until <code>find_best_thread</code> succeeds.</para>
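
      <para>A hedged sketch of this search loop follows. The helper functions
      and the <code>nrdy</code> counter are assumptions introduced for the
      example; only the walking order reflects the description above.</para>

      <programlisting><![CDATA[
#include <stddef.h>

#define RQ_COUNT 16

typedef struct thread thread_t;              /* opaque for this sketch */

typedef struct run_queue {
    int count;                               /* ready threads in this queue */
} run_queue_t;

typedef struct cpu {
    run_queue_t rq[RQ_COUNT];
    int nrdy;                                /* total ready threads on this CPU */
} cpu_t;

/* Stand-ins: take the first thread of a queue / wait for work to arrive. */
static thread_t *rq_dequeue_stub(run_queue_t *rq) { rq->count--; return NULL; }
static void cpu_idle_stub(void) { }          /* interruptible halt or busy wait */

thread_t *find_best_thread_sketch(cpu_t *cpu)
{
    for (;;) {
        if (cpu->nrdy == 0) {
            cpu_idle_stub();                 /* no ready threads: wait for some */
            continue;
        }

        /* Walk from the highest priority queue (index 0) towards the
         * lowest; skip empty queues, return the first thread found. */
        for (int i = 0; i < RQ_COUNT; i++) {
            if (cpu->rq[i].count == 0)
                continue;
            return rq_dequeue_stub(&cpu->rq[i]);
        }
    }
}
]]></programlisting>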

      <para>After the best thread is chosen, the scheduler switches to the
      thread's task and memory management context. Finally, the saved
      synchronous register context is restored and the thread runs. Each
      scheduled thread is given a time slice depending on its priority (i.e.
      its run queue). The higher the priority, the shorter the time slice. To
      summarize, this policy schedules threads with high priorities more
      frequently but gives them smaller time slices. On the other hand, lower
      priority threads are scheduled less frequently, but run for longer
      periods of time.</para>
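
      <para>As a concrete, though hypothetical, illustration of this rule, a
      time slice table indexed by run queue could look as follows; the
      quantum values below are invented for the example and are not the ones
      used by HelenOS.</para>

      <programlisting><![CDATA[
#define RQ_COUNT 16

/* Hypothetical time slices in clock ticks, indexed by run queue:
 * index 0 (highest priority) gets the shortest slice, index 15 the longest. */
static const unsigned int quantum_ticks[RQ_COUNT] = {
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
};

static inline unsigned int timeslice_for(int run_queue_index)
{
    return quantum_ticks[run_queue_index];
}
]]></programlisting>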

      <para>When a thread uses up its entire time slice, it is preempted and
      put back into the run queue that immediately follows the run queue from
      which the thread ran. Threads that are woken up from sleep are put into
      the highest priority run queue. Low priority threads are therefore
      those that seldom go to sleep and just occupy the processor.</para>
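
      <para>The two requeueing rules can be captured in a few lines of C; the
      enqueue helper, the clamping at the lowest queue and the field names
      are assumptions made for this sketch.</para>

      <programlisting><![CDATA[
#define RQ_COUNT 16

typedef struct thread {
    int priority;                   /* index of the run queue it last ran from */
} thread_t;

/* Stand-in for linking the thread into the given run queue. */
static void rq_enqueue_stub(int queue_index, thread_t *t)
{
    t->priority = queue_index;
}

/* The thread exhausted its time slice: demote it by one queue,
 * clamping at the lowest priority queue (an assumption of this sketch). */
void on_timeslice_exhausted(thread_t *t)
{
    int next = t->priority + 1;
    if (next > RQ_COUNT - 1)
        next = RQ_COUNT - 1;
    rq_enqueue_stub(next, t);
}

/* The thread was woken up from sleep: it starts again at the top. */
void on_wakeup(thread_t *t)
{
    rq_enqueue_stub(0, t);
}
]]></programlisting>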

      <para>In order to avoid complete starvation of low priority threads,
      the scheduler will, from time to time, provide them with a bonus of a
      one point priority increase. In other words, the scheduler will now and
      then move entire run queues one level up.</para>
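
      <para>The occasional boost can be pictured as relinking whole queues
      one level towards higher priority, roughly as in the following sketch;
      the list splicing helper is an assumption, not HelenOS code.</para>

      <programlisting><![CDATA[
#define RQ_COUNT 16

typedef struct run_queue {
    int count;                       /* threads linked in this queue */
} run_queue_t;

/* Stand-in for splicing the whole source queue onto the destination queue. */
static void rq_move_all_stub(run_queue_t *dst, run_queue_t *src)
{
    dst->count += src->count;
    src->count = 0;
}

/* Shift every queue one level towards higher priority; iterating from
 * index 0 upwards ensures each thread is moved exactly once. */
void boost_priorities_sketch(run_queue_t rq[RQ_COUNT])
{
    for (int i = 0; i < RQ_COUNT - 1; i++)
        rq_move_all_stub(&rq[i], &rq[i + 1]);
}
]]></programlisting>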
    </section>

    <section>
      <title>Processor load balancing</title>

      <para>Normally, for the sake of cache locality, threads are scheduled
      on one of the processors and don't leave it. Nevertheless, a situation
      in which one processor is heavily overloaded while others sit idle can
      occur. HelenOS deploys special kernel threads to help mitigate this
      problem. Each processor is associated with one load balancing thread
      called <code>kcpulb</code> that wakes up regularly to see whether its
      processor is underloaded or not. If it is, the thread attempts to
      migrate threads from the run queues of other, overloaded processors to
      its own processor's run queues. When the job is done or there is no
      need for load balancing, the thread goes to sleep.</para>
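
      <para>A rough sketch of such a balancing thread's main loop is given
      below; all helpers, the balancing condition and the types are
      illustrative assumptions, not the actual <code>kcpulb</code>
      implementation.</para>

      <programlisting><![CDATA[
#include <stdbool.h>
#include <stddef.h>

typedef struct cpu { int nrdy; } cpu_t;            /* ready-thread count only */
typedef struct thread { int priority; } thread_t;

/* Trivial placeholder helpers for the sketch. */
static void sleep_a_while_stub(void) { }                       /* periodic wakeup */
static bool underloaded_stub(cpu_t *me) { return me->nrdy == 0; }
static cpu_t *pick_overloaded_cpu_stub(void) { return NULL; }  /* NULL: nothing to steal */
static thread_t *steal_low_priority_thread_stub(cpu_t *victim)
{
    (void) victim;
    return NULL;
}
static void enqueue_here_stub(cpu_t *me, thread_t *t) { (void) me; (void) t; }

void kcpulb_sketch(cpu_t *me)
{
    for (;;) {
        sleep_a_while_stub();                   /* wake up regularly */

        while (underloaded_stub(me)) {
            cpu_t *victim = pick_overloaded_cpu_stub();
            if (victim == NULL)
                break;                          /* no overloaded processor found */

            thread_t *t = steal_low_priority_thread_stub(victim);
            if (t == NULL)
                break;                          /* nothing suitable to migrate */

            enqueue_here_stub(me, t);           /* move it to our run queues */
        }
    }
}
]]></programlisting>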

      <para>The balancing threads operate very gently and try to migrate low
      priority threads first; a <code>kcpulb</code> never steals from the
      same processor twice in a row. Neither the load balancing threads nor
      threads that have just been stolen can be migrated. The
      <code>kcpulb</code> threads are wired to their processors and cannot be
      migrated whatsoever, while ordinary threads are protected only until
      they are rescheduled.</para>
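
      <para>These protection rules amount to a simple predicate, sketched
      below with hypothetical flags; HelenOS keeps the corresponding
      information in its own thread structure.</para>

      <programlisting><![CDATA[
#include <stdbool.h>

typedef struct thread {
    bool wired;          /* pinned to its processor (e.g. a kcpulb thread) */
    bool just_stolen;    /* migrated recently and not yet rescheduled */
} thread_t;

static bool can_migrate_sketch(const thread_t *t)
{
    if (t->wired)
        return false;    /* kcpulb threads are never migrated */
    if (t->just_stolen)
        return false;    /* protected until it is rescheduled */
    return true;
}
]]></programlisting>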
    </section>
  </section>
</chapter>