<para>The semantics of the test-and-set operation is that the spinlock
remains unavailable until this operation, called on the respective
spinlock, returns zero. HelenOS builds two functions on top of the
test-and-set operation. The first function is the unconditional attempt
to acquire the spinlock and is called <code>spinlock_lock</code>. It
simply loops until the test-and-set returns a zero value. The other
function, <code>spinlock_trylock</code>, is the conditional lock
operation and calls the test-and-set only once to find out whether it
managed to acquire the spinlock or not. The conditional operation is
useful in situations in which an algorithm cannot acquire more spinlocks
in the proper order and a deadlock cannot otherwise be avoided. In such
a case, the algorithm detects the danger and, instead of possibly
deadlocking the system, simply releases some spinlocks it already holds
and retries the whole operation with the hope that it will succeed next
time. The unlock function, <code>spinlock_unlock</code>, is quite easy -
it merely clears the spinlock variable.</para>

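<para>The three operations can be sketched in portable C atomics. This is an
illustration of the technique only, not the HelenOS implementation (the
kernel uses per-architecture test-and-set code and a richer lock type):</para>

```c
#include <stdatomic.h>
#include <stdbool.h>

/* A spinlock built on an atomic test-and-set flag (illustrative sketch). */
typedef struct {
    atomic_flag flag;
} spinlock_t;

static void spinlock_initialize(spinlock_t *sl)
{
    atomic_flag_clear(&sl->flag);
}

/* Unconditional acquire: loop until the test-and-set reports the lock was free. */
static void spinlock_lock(spinlock_t *sl)
{
    while (atomic_flag_test_and_set_explicit(&sl->flag, memory_order_acquire))
        ;  /* busy wait */
}

/* Conditional acquire: a single test-and-set attempt; true on success. */
static bool spinlock_trylock(spinlock_t *sl)
{
    return !atomic_flag_test_and_set_explicit(&sl->flag, memory_order_acquire);
}

/* Release: merely clear the spinlock variable. */
static void spinlock_unlock(spinlock_t *sl)
{
    atomic_flag_clear_explicit(&sl->flag, memory_order_release);
}
```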
<para>Nevertheless, there is a special issue related to hardware
optimizations that modern processors implement. Particularly problematic
is the out-of-order execution of instructions within the critical
section protected by a spinlock. The processors are always
instructions. However, the dependency between instructions inside the
critical section and those that implement locking and unlocking of the
respective spinlock is not implicit on some processor architectures. As
a result, the processor needs to be explicitly told about each
occurrence of such a dependency. Therefore, HelenOS adds
architecture-specific hooks to all <code>spinlock_lock</code>,
<code>spinlock_trylock</code> and <code>spinlock_unlock</code> functions
to prevent the instructions inside the critical section from permeating
out. On some architectures, these hooks can be void because the
dependencies are implicitly there because of the special properties of
locking and unlocking instructions. However, other architectures need to
instrument these hooks with different memory barriers, depending on what
operations could permeate out.</para>

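<para>A sketch of where such hooks sit follows. The macro names and the C11
fences are assumptions chosen for illustration; a concrete architecture may
define the hooks as stronger barriers, special instructions, or no-ops:</para>

```c
#include <stdatomic.h>

/* Illustrative barrier hooks; each architecture would supply its own
 * definitions, possibly empty where the locking instructions already
 * order memory accesses. */
#define CS_ENTER_BARRIER()  atomic_thread_fence(memory_order_acquire)
#define CS_LEAVE_BARRIER()  atomic_thread_fence(memory_order_release)

typedef struct {
    atomic_flag flag;
} spinlock_t;

static void spinlock_lock(spinlock_t *sl)
{
    while (atomic_flag_test_and_set(&sl->flag))
        ;               /* busy wait */
    CS_ENTER_BARRIER(); /* critical-section accesses may not move above this */
}

static void spinlock_unlock(spinlock_t *sl)
{
    CS_LEAVE_BARRIER(); /* critical-section accesses may not move below this */
    atomic_flag_clear(&sl->flag);
}
```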
<para>Spinlocks have one significant drawback: when held for longer time
periods, they harm both parallelism and concurrency. The processor
executing <code>spinlock_lock</code> does not do any fruitful work and
is effectively halted until it can grab the lock and proceed.
Similarly, other execution flows cannot execute on the processor that
holds the spinlock because the kernel disables preemption on that
processor when a spinlock is held. The reason for disabling preemption
is avoidance of the priority inversion problem. For the same reason,
threads are strongly discouraged from sleeping when they hold a

wait queue as a missed wakeup and later forwarded to the first thread
that decides to wait in the queue. The inner structures of the wait
queue are protected by a spinlock.</para>

<para>The thread that wants to wait for a wait queue event uses the
<code>waitq_sleep_timeout</code> function. The algorithm then checks the
wait queue's counter of missed wakeups; if there are any missed
wakeups, the call returns immediately. The call also returns immediately
if only a conditional wait was requested. Otherwise the thread is
enqueued in the wait queue's list of sleeping threads and its state is
changed to <constant>Sleeping</constant>. It then sleeps until one of
the following events happens:</para>

<orderedlist>
  <listitem>
    <para>another thread calls <code>waitq_wakeup</code> and the thread
    is the first thread in the wait queue's list of sleeping
    threads;</para>
  </listitem>

  <listitem>
    <para>another thread calls <code>waitq_interrupt_sleep</code> on the
    sleeping thread;</para>
  </listitem>

  <listitem>
    <para>the sleep times out provided that none of the previous events
    occurred within a specified time limit; the limit can be
    infinity.</para>
  </listitem>
</orderedlist>

<para>All five possibilities (immediate return on success, immediate
return on failure, wakeup after sleep, interruption and timeout) are
distinguishable by the return value of <code>waitq_sleep_timeout</code>.
Being able to interrupt a sleeping thread is essential for externally
initiated thread termination. The ability to wait only for a certain
amount of time is used, for instance, to passively delay thread
execution by several microseconds or even seconds in the
<code>thread_sleep</code> function. Due to the fact that all other
passive kernel synchronization primitives are based on wait queues, they
also have the option of being interrupted and, more importantly, can
time out. All of them also implement the conditional operation.
Furthermore, this very fundamental interface reaches up to the
implementation of futexes - a userspace synchronization primitive, which
makes it possible for a userspace thread to request a synchronization
operation with a timeout or a conditional operation.</para>

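<para>The missed-wakeup bookkeeping behind the immediate-return cases can be
modeled in a few lines. The sketch below is single-threaded, uses invented
return-code names, represents blocking by a counter, and omits timeouts and
interruption entirely:</para>

```c
#include <stdbool.h>

/* Simplified model of a wait queue's missed-wakeup accounting. */
typedef struct {
    int missed_wakeups;   /* events no thread was waiting for */
    int sleepers;         /* threads "blocked" in the queue */
} waitq_t;

typedef enum {
    SLEEP_SUCCESS_IMMEDIATE,   /* a missed wakeup was consumed */
    SLEEP_FAILURE_IMMEDIATE,   /* conditional wait would have blocked */
    SLEEP_BLOCKED              /* the caller would now be Sleeping */
} sleep_result_t;

static sleep_result_t waitq_sleep(waitq_t *wq, bool conditional)
{
    if (wq->missed_wakeups > 0) {
        wq->missed_wakeups--;             /* forwarded missed wakeup */
        return SLEEP_SUCCESS_IMMEDIATE;
    }
    if (conditional)
        return SLEEP_FAILURE_IMMEDIATE;   /* only a conditional wait was requested */
    wq->sleepers++;                       /* enqueue; state changes to Sleeping */
    return SLEEP_BLOCKED;
}

static void waitq_wakeup(waitq_t *wq)
{
    if (wq->sleepers > 0)
        wq->sleepers--;                   /* hand the event to the first sleeper */
    else
        wq->missed_wakeups++;             /* no one is waiting: remember the event */
}
```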
<para>From the description above, it should be apparent that when a
sleeping thread is woken by <code>waitq_wakeup</code> or when
<code>waitq_sleep_timeout</code> succeeds immediately, the thread can be
sure that the event has occurred. The thread need not and should not
verify this fact. This approach is called direct hand-off and is
characteristic of all passive HelenOS synchronization primitives, with
the exception described below.</para>
</section>

<section>
  <title>Semaphores</title>

<para>The interesting point about wait queues is that the number of
missed wakeups is equal to the number of threads that will not block in
<code>waitq_sleep_timeout</code> and would immediately succeed instead.
On the other hand, semaphores are synchronization primitives that will
let a predefined number of threads into their critical section and block
any other threads above this count. However, these two cases are exactly
the same. Semaphores in HelenOS are therefore implemented as wait queues
with a single semantic change: their wait queue is initialized to have
as many missed wakeups as the number of threads that the semaphore
intends to let into its critical section simultaneously.</para>

<para>In the semaphore language, the wait queue operation
<code>waitq_sleep_timeout</code> corresponds to the semaphore
<code>down</code> operation, represented by the function
<code>semaphore_down_timeout</code>; similarly, the wait queue operation
<code>waitq_wakeup</code> corresponds to the semaphore <code>up</code>
operation, represented by the function <code>semaphore_up</code>. The
conditional down operation is called
<code>semaphore_trydown</code>.</para>
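<para>The equivalence drawn above can be sketched as a semaphore whose wait
queue is pre-loaded with missed wakeups. This is a simplified, single-threaded
model in which <code>down</code> consumes a pre-loaded wakeup and
<code>up</code> either wakes a sleeper or records a new one; the real
<code>semaphore_down_timeout</code> would block rather than fail:</para>

```c
#include <stdbool.h>

/* Simplified wait queue (see the wait queue model earlier). */
typedef struct {
    int missed_wakeups;
    int sleepers;
} waitq_t;

/* A semaphore is a wait queue with a pre-loaded missed-wakeup count. */
typedef struct {
    waitq_t wq;
} semaphore_t;

static void semaphore_initialize(semaphore_t *s, int value)
{
    s->wq.missed_wakeups = value;   /* as many wakeups as admitted threads */
    s->wq.sleepers = 0;
}

/* down: succeed immediately on a missed wakeup, otherwise (would) block. */
static bool semaphore_trydown(semaphore_t *s)
{
    if (s->wq.missed_wakeups > 0) {
        s->wq.missed_wakeups--;
        return true;
    }
    return false;
}

/* up: wake a sleeper if there is one, otherwise record a missed wakeup. */
static void semaphore_up(semaphore_t *s)
{
    if (s->wq.sleepers > 0)
        s->wq.sleepers--;
    else
        s->wq.missed_wakeups++;
}
```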
</section>

<section>
  <title>Mutexes</title>

critical section. Indeed, mutexes in HelenOS are implemented exactly in
this way: they are built on top of semaphores. From another point of
view, they can be viewed as spinlocks without busy waiting. Their
semaphore heritage provides a good basis for both the conditional
operation and the operation with a timeout. The locking operation is
called <code>mutex_lock</code>, the conditional locking operation is
called <code>mutex_trylock</code> and the unlocking operation is called
<code>mutex_unlock</code>.</para>
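<para>The layering described here amounts to initializing a semaphore to one.
The counting semaphore below is a stand-in for the real HelenOS type, and the
mutex operations simply delegate to it:</para>

```c
#include <stdbool.h>

/* Simplified counting semaphore; the real one would block in down(). */
typedef struct {
    int count;
} semaphore_t;

static void semaphore_initialize(semaphore_t *s, int value) { s->count = value; }

static bool semaphore_trydown(semaphore_t *s)
{
    if (s->count > 0) {
        s->count--;
        return true;
    }
    return false;
}

static void semaphore_up(semaphore_t *s) { s->count++; }

/* A mutex is a semaphore that admits exactly one thread. */
typedef struct {
    semaphore_t sem;
} mutex_t;

static void mutex_initialize(mutex_t *mtx)
{
    semaphore_initialize(&mtx->sem, 1);   /* one thread in the critical section */
}

static bool mutex_trylock(mutex_t *mtx) { return semaphore_trydown(&mtx->sem); }
static void mutex_unlock(mutex_t *mtx)  { semaphore_up(&mtx->sem); }
```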
</section>

<section>
  <title>Reader/writer locks</title>

whether a simpler but equivalently fair solution exists.</para>

<para>The implementation of rwlocks, as has already been noted, makes
use of a single wait queue for both readers and writers, thus avoiding
any possibility of starvation. In fact, rwlocks use a mutex rather than
a bare wait queue. This mutex is called <code>exclusive</code> and is
used to synchronize writers. The writer's lock operation,
<code>rwlock_write_lock_timeout</code>, simply tries to acquire the
exclusive mutex. If it succeeds, the writer is granted the rwlock.
However, if the operation fails (e.g. times out), the writer must check
for potential readers at the head of the list of sleeping threads
associated with the mutex's wait queue and then proceed according to the
procedure outlined above.</para>

<para>The exclusive mutex plays an important role in reader
synchronization as well. However, a reader doing the reader's lock
operation, <code>rwlock_read_lock_timeout</code>, may bypass this mutex
when it detects that:</para>

<orderedlist>
  <listitem>
    <para>there are other readers in the critical section and</para>
  </listitem>
<para><programlisting><function>mutex_lock</function>(<varname>mtx</varname>);
<varname>condition</varname> = <constant>true</constant>;
<function>condvar_signal</function>(<varname>cv</varname>);  /* <remark>condvar_broadcast(cv);</remark> */
<function>mutex_unlock</function>(<varname>mtx</varname>);</programlisting></para>

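<para>For illustration, the waiting side of this exchange can be sketched as
follows (a sketch only, with the argument list abbreviated; the loop over the
condition reflects the fact that condition variables are the exception to the
direct hand-off approach described above):</para>

<para><programlisting><function>mutex_lock</function>(<varname>mtx</varname>);
while (!<varname>condition</varname>)
        <function>condvar_wait</function>(<varname>cv</varname>, <varname>mtx</varname>);
/* <remark>process the event</remark> */
<function>mutex_unlock</function>(<varname>mtx</varname>);</programlisting></para>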
<para>The wait operation, <code>condvar_wait_timeout</code>, always puts
the calling thread to sleep. The thread then sleeps until another thread
invokes <code>condvar_broadcast</code> on the same condition variable or
until it is woken up by <code>condvar_signal</code>. The
<code>condvar_signal</code> operation unblocks the first thread blocking
on the condition variable while the <code>condvar_broadcast</code>
operation unblocks all threads blocking there. If there are no blocking
threads, these two operations have no effect.</para>

<para>Note that the threads must synchronize over a dedicated mutex. To
prevent a race condition between <code>condvar_wait_timeout</code> and
<code>condvar_signal</code> or <code>condvar_broadcast</code>, the mutex
is passed to <code>condvar_wait_timeout</code>, which then atomically
puts the calling thread to sleep and unlocks the mutex. When the thread
eventually wakes up, <code>condvar_wait</code> regains the mutex and
returns.</para>

<para>Also note that there is no conditional operation for condition
variables. Such an operation would make no sense since condition
variables are defined to forget events for which there is no waiting
thread and because <code>condvar_wait</code> must always go to sleep.
The operation with a timeout is supported as usual.</para>

<para>In HelenOS, condition variables are based on wait queues. As
already mentioned above, wait queues remember missed events while
condition variables must not do so. The reason is that condition
variables are designed for scenarios in which an event might occur many
times without being picked up by any waiting thread. On the other hand,
wait queues would remember any event that had not been picked up by a
call to <code>waitq_sleep_timeout</code>. Therefore, if wait queues were
used directly and without any changes to implement condition variables,
the missed_wakeup counter would hurt performance of the implementation:
the <code>while</code> loop in <code>condvar_wait_timeout</code> would
effectively do busy waiting until all missed wakeups were
discarded.</para>

<para>The requirement on the wait operation to atomically put the caller
to sleep and release the mutex poses an interesting problem for
<code>condvar_wait_timeout</code>. More precisely, the thread should
sleep in the condvar's wait queue prior to releasing the mutex, but it
must not hold the mutex while it is sleeping.</para>

<para>Problems described in the two previous paragraphs are addressed in
HelenOS by dividing the <code>waitq_sleep_timeout</code> function into
three pieces:</para>

<orderedlist>
  <listitem>
    <para><code>waitq_sleep_prepare</code> prepares the thread to go to
    sleep by, among other things, locking the wait queue;</para>
  </listitem>

  <listitem>
    <para><code>waitq_sleep_timeout_unsafe</code> implements the core
    blocking logic;</para>
  </listitem>

  <listitem>
    <para><code>waitq_sleep_finish</code> performs cleanup after
    <code>waitq_sleep_timeout_unsafe</code>; after this call, the wait
    queue spinlock is guaranteed to be unlocked by the caller.</para>
  </listitem>
</orderedlist>

<para>The stock <code>waitq_sleep_timeout</code> is then a mere wrapper
that calls these three functions. It is provided for convenience in
cases where the caller does not require such low-level control. However,
the implementation of <code>condvar_wait_timeout</code> does need this
finer-grained control because it has to interleave calls to these
functions with other actions. It carries its operations out in the
following order:</para>

<orderedlist>
  <listitem>
    <para>calls <code>waitq_sleep_prepare</code> in order to lock the
    condition variable's wait queue,</para>
  </listitem>

  <listitem>
    <para>releases the mutex,</para>
  </listitem>

  <listitem>
    <para>clears the counter of missed wakeups,</para>
  </listitem>

  <listitem>
    <para>calls <code>waitq_sleep_timeout_unsafe</code>,</para>
  </listitem>

  <listitem>
    <para>retakes the mutex,</para>
  </listitem>

  <listitem>
    <para>calls <code>waitq_sleep_finish</code>.</para>
  </listitem>
</orderedlist>
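<para>These six steps can be put together in a sketch. The model below is
single-threaded: the mutex and wait-queue operations are stand-ins that only
track lock state, so only the interleaving itself is illustrated, not real
blocking:</para>

```c
#include <stdbool.h>

/* Stand-in types: the booleans only record whether each lock is held. */
typedef struct { bool locked; } mutex_t;
typedef struct { bool locked; int missed_wakeups; } waitq_t;
typedef struct { waitq_t wq; } condvar_t;

static void mutex_lock(mutex_t *m)   { m->locked = true; }
static void mutex_unlock(mutex_t *m) { m->locked = false; }

static void waitq_sleep_prepare(waitq_t *wq)        { wq->locked = true; }
static int  waitq_sleep_timeout_unsafe(waitq_t *wq) { (void)wq; return 0; }
static void waitq_sleep_finish(waitq_t *wq)         { wq->locked = false; }

/* The interleaving performed by condvar_wait_timeout (model). */
static int condvar_wait_timeout(condvar_t *cv, mutex_t *mtx)
{
    waitq_sleep_prepare(&cv->wq);                  /* 1. lock the condvar's wait queue */
    mutex_unlock(mtx);                             /* 2. release the mutex */
    cv->wq.missed_wakeups = 0;                     /* 3. clear missed wakeups */
    int rc = waitq_sleep_timeout_unsafe(&cv->wq);  /* 4. core blocking logic */
    mutex_lock(mtx);                               /* 5. retake the mutex */
    waitq_sleep_finish(&cv->wq);                   /* 6. cleanup: queue unlocked */
    return rc;
}
```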
</section>
</section>