  <para>Due to the high intertask communication traffic, IPC becomes a
  critical subsystem for microkernels, putting high demands on the speed,
  latency and reliability of the IPC model and its implementation. Although
  the use of an asynchronous messaging system looks promising in theory, it
  is not often implemented because it makes the implementation of end user
  applications problematic. HelenOS implements a fully asynchronous
  messaging system with a special layer that provides the application
  developer with a reasonably synchronous multithreaded environment
  sufficient to develop complex protocols.</para>

  <section>
    <title>Services provided by kernel</title>
      <emphasis>manager</emphasis> task is found or a new one is created and
      control is transferred to this manager task. The manager task pops
      messages from the answerbox and puts them into the appropriate queues
      of running tasks. If a task waiting for a message is not running,
      control is transferred to it.</para>

      <figure float="1">
        <mediaobject id="ipc2">
          <imageobject role="pdf">
            <imagedata fileref="images/ipc2.pdf" format="PDF" />
          </imageobject>
        </mediaobject>

        <title>Single point of entry</title>
      </figure>
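
      <para>The manager loop described above can be sketched roughly as
      follows. All identifiers, the types as well as the helper functions,
      are illustrative only and do not claim to match the actual library
      sources.</para>

      <programlisting><![CDATA[
/* Sketch of the manager loop - all identifiers are illustrative only. */
static void manager_loop(void)
{
    while (1) {
        /* Block in the kernel until a message or an answer arrives
         * in the answerbox of this task. */
        call_t call;
        callid_t callid = wait_for_call(&call);

        /* Is some userspace thread already waiting for this answer? */
        uthread_t *thread = find_waiting_thread(callid, &call);

        if (thread != NULL) {
            /* Put the message into the thread's queue and transfer
             * control to it. */
            enqueue_message(thread, callid, &call);
            switch_to_thread(thread);
        } else {
            /* Nobody waits for it - queue it as a new request. */
            enqueue_request(callid, &call);
        }
    }
}
]]></programlisting>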

      <para>A very similar situation arises when a task decides to send a
      lot of messages and reaches the kernel limit of asynchronous messages.
      In such a situation two remedies are available: the userspace library
      can either cache the message locally and resend it when some answers
      arrive, or it can block the thread and let it go on only after the
      answer arrives.</para>
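
      <para>The sending path with both remedies can be sketched as follows.
      The names (<function>pending_messages</function>,
      <function>wait_for_free_slot</function>, the limit constant and so on)
      are assumptions made for this example, not the actual library
      interface.</para>

      <programlisting><![CDATA[
/* Illustrative only - the sending path with both possible remedies. */
#define CACHE_LOCALLY  1    /* select which remedy is used */

static int send_async(connection_t *conn, message_t *msg)
{
    if (pending_messages(conn) >= KERNEL_ASYNC_LIMIT) {
        if (CACHE_LOCALLY) {
            /* Remedy 1: remember the message locally; it is resent
             * later, when some answers arrive and free a slot. */
            list_append(&conn->backlog, msg);
            return 0;
        }
        /* Remedy 2: block this thread until some answers arrive. */
        wait_for_free_slot(conn);
    }
    return ipc_send(conn, msg);
}
]]></programlisting>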
    </section>

    <section>
      <title>Ordering problem</title>

      <para>Unfortunately, the real world is never so simple. E.g. if a
      server handles incoming requests and sends asynchronous messages as
      part of its response, it can easily be preempted and another thread
      may start intervening. This can happen even if the application
      utilizes only one kernel thread. Classical synchronization using
      semaphores is not possible, as locking on them would block the thread
      completely, so that the answer could never be processed. The IPC
      framework therefore allows a developer to specify that a part of the
      code should not be preempted by any other thread (except notification
      handlers), while still being able to queue messages belonging to other
      threads and regain control when the answer arrives.</para>
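
      <para>Such a non-preemptible sequence might be used as sketched below.
      The <function>preemption_disable</function> and
      <function>preemption_enable</function> calls stand for whatever
      primitive the IPC library actually exports; all identifiers are
      illustrative.</para>

      <programlisting><![CDATA[
/* Illustrative only - keep a multi-message reply together. */
static void answer_request(connection_t *conn, callid_t callid)
{
    /* From here on no other userspace thread of this task (except
     * notification handlers) is scheduled, so the messages below
     * cannot be interleaved with messages of other threads. */
    preemption_disable();

    send_answer(callid, 0);
    send_async_part(conn, DATA_PART_1);
    send_async_part(conn, DATA_PART_2);

    preemption_enable();
}
]]></programlisting>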

      <para>This mechanism works transparently in a multithreaded
      environment, where an additional locking mechanism (futexes) should be
      used. The IPC framework ensures that there will always be enough free
      kernel threads to handle incoming answers, allowing the application to
      run more user-space threads inside the kernel threads without the
      danger of locking all kernel threads in futexes.</para>
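
      <para>Shared data accessed from several userspace threads is then
      protected in the usual way, e.g. as in the sketch below. The futex
      names and the initializer are assumptions made for illustration.</para>

      <programlisting><![CDATA[
/* Illustrative only - classical futex-protected critical section. */
static futex_t stats_futex = FUTEX_INITIALIZER(1);
static unsigned long requests_served;

static void account_request(void)
{
    futex_down(&stats_futex);    /* may block this kernel thread */
    requests_served++;
    futex_up(&stats_futex);
}
]]></programlisting>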
    </section>

    <section>
      <title>The interface</title>

      <para>The interface was developed to be as simple to use as possible.
      Classical applications simply send messages and occasionally wait for
      an answer and check the result. If the number of sent messages exceeds
      the kernel limit, the flow of the application is stopped until some
      answers arrive. Server applications, on the other hand, are expected
      to work in a multithreaded environment.</para>
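
      <para>A classical client might therefore look like the sketch below.
      The <function>async_send</function> and
      <function>async_wait_for</function> names are assumptions made for
      this example and need not match the library exactly.</para>

      <programlisting><![CDATA[
/* Illustrative only - a classical client: send, work, wait, check. */
static int get_value(int phone, int key)
{
    /* The message is queued and sent asynchronously; if the kernel
     * limit is reached, the calling thread is transparently stopped
     * until some answers arrive. */
    request_t req = async_send(phone, METHOD_GET, key);

    /* ... other work can be done here ... */

    /* Wait for the answer of this particular request and check its
     * result. */
    int retval;
    async_wait_for(req, &retval);
    return retval;
}
]]></programlisting>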

      <para>The server interface requires the developer to specify a
      <function>connection_thread</function> function. When a new connection
      is detected, a new userspace thread is automatically created and
      control is transferred to this function. The code then decides whether
      to accept the connection and creates a normal event loop. The
      userspace IPC library ensures correct switching between several
      userspace threads within the kernel environment.</para>
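
      <para>A typical connection function is sketched below. Apart from
      <function>connection_thread</function> itself, the identifiers
      (<function>get_call</function>, <function>answer</function> and the
      hangup method constant) are illustrative assumptions only.</para>

      <programlisting><![CDATA[
/* Illustrative only - per-connection userspace thread. */
static void connection_thread(callid_t iid, call_t *icall)
{
    /* Accept the connection by answering the initial request. */
    answer(iid, 0);

    /* The normal event loop of this connection. */
    while (1) {
        call_t call;
        callid_t callid = get_call(&call);

        if (method_of(&call) == METHOD_HANGUP)
            break;    /* the client hung up */

        /* ... handle the request ... */
        answer(callid, 0);
    }
}
]]></programlisting>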
|
    </section>
  </section>
</chapter>