<?xml version="1.0" encoding="UTF-8"?>
<chapter id="mm">
  <?dbhtml filename="mm.html"?>

  <title>Memory management</title>

  <para>In previous chapters, this book described the scheduling subsystem as
  the creator of the impression that threads execute in parallel. The memory
  management subsystem, on the other hand, creates the impression that there
  is enough physical memory for the kernel and that userspace tasks have the
  entire address space only for themselves.</para>

  <section>
    <title>Physical memory management</title>

    <section id="zones_and_frames">
      <title>Zones and frames</title>

      <para>HelenOS represents contiguous areas of physical memory in
      structures called frame zones (abbreviated as zones). Each zone contains
      information about the number of allocated and unallocated physical
      memory frames as well as the physical base address of the zone and the
      number of frames contained in it. A zone also contains an array of frame
      structures describing each frame of the zone. Last but not least, each
      zone is equipped with a buddy system that facilitates effective
      allocation of power-of-two sized blocks of frames.</para>

      <para>This organization of physical memory provides good preconditions
      for hot-plugging of more zones. There is also one currently unused zone
      attribute: <code>flags</code>. The attribute could be used to give a
      special meaning to some zones in the future.</para>

      <para>The zones are linked in a doubly-linked list. This might seem a
      bit ineffective because the zone list is walked every time a frame is
      allocated or deallocated. However, this does not represent a significant
      performance problem as it is expected that the number of zones will be
      rather low. Moreover, most architectures merge all zones into
      one.</para>

      <para>For each physical memory frame found in a zone, there is a frame
      structure that contains the number of references and data used by the
      buddy system.</para>
    </section>

    <section id="frame_allocator">
      <title>Frame allocator</title>

      <para>The frame allocator satisfies kernel requests to allocate
      power-of-two sized blocks of physical memory. Because of the zonal
      organization of physical memory, the frame allocator is always working
      within the context of some frame zone. In order to carry out the
      allocation requests, the frame allocator is tightly integrated with the
      buddy system belonging to the zone. The frame allocator is also
      responsible for updating information about the number of free and busy
      frames in the zone. <figure>
          <mediaobject id="frame_alloc">
            <imageobject role="html">
              <imagedata fileref="images/frame_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/frame_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Frame allocator scheme.</title>
        </figure></para>

      <formalpara>
        <title>Allocation / deallocation</title>

        <para>Upon an allocation request via the <code>frame_alloc</code>
        function, the frame allocator first tries to find a zone that can
        satisfy the request (i.e. that has the required number of free
        frames). Once a suitable zone is found, the frame allocator uses the
        buddy allocator on the zone's buddy system to perform the allocation.
        During deallocation, which is triggered by a call to
        <code>frame_free</code>, the frame allocator looks up the respective
        zone that contains the frame being deallocated. Afterwards, it calls
        the buddy allocator again, this time to take care of deallocation
        within the zone's buddy system.</para>
      </formalpara>
83
    </section>

    <section id="buddy_allocator">
      <title>Buddy allocator</title>

      <para>In the buddy system, the memory is broken down into power-of-two
      sized naturally aligned blocks. These blocks are organized in an array
      of lists, in which the list with index i contains all unallocated blocks
      of size <mathphrase>2<superscript>i</superscript></mathphrase>. The
      index i is called the order of the block. Should there be two adjacent
      equally sized blocks in the list i (i.e. buddies), the buddy allocator
      would coalesce them and put the resulting block in the list
      <mathphrase>i + 1</mathphrase>, provided that the resulting block would
      be naturally aligned. Similarly, when the allocator is asked to
      allocate a block of size
      <mathphrase>2<superscript>i</superscript></mathphrase>, it first tries
      to satisfy the request from the list with index i. If the request cannot
      be satisfied (i.e. the list i is empty), the buddy allocator will try to
      allocate and split a larger block from the list with index i + 1. Both
      of these algorithms are recursive. The recursion ends either when there
      are no blocks to coalesce in the former case or when there are no blocks
      that can be split in the latter case.</para>

      <para>This approach greatly reduces external fragmentation of memory and
      helps in allocating bigger contiguous blocks of memory aligned to their
      size. On the other hand, the buddy allocator suffers from increased
      internal fragmentation of memory and is not suitable for general kernel
      allocations. This purpose is better addressed by the <link
      linkend="slab">slab allocator</link>.<figure>
          <mediaobject id="buddy_alloc">
            <imageobject role="html">
              <imagedata fileref="images/buddy_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/buddy_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Buddy system scheme.</title>
        </figure></para>

      <section>
        <title>Implementation</title>

        <para>The buddy allocator is, in fact, an abstract framework which can
        be easily specialized to serve one particular task. It knows nothing
        about the nature of the memory it helps to allocate. To overcome this
        lack of knowledge, the buddy allocator exports an interface that each
        of its clients is required to implement. When supplied with an
        implementation of this interface, the buddy allocator can use
        specialized external functions to find a buddy for a block, split and
        coalesce blocks, manipulate block order and mark blocks busy or
        available.</para>

        <formalpara>
          <title>Data organization</title>

          <para>Each entity allocable by the buddy allocator is required to
          contain space for storing the block order number and a link variable
          used to interconnect blocks within the same order.</para>

          <para>Whatever entities are allocated by the buddy allocator, the
          first entity within a block is used to represent the entire block.
          The first entity keeps the order of the whole block. Other entities
          within the block are assigned the magic value
          <constant>BUDDY_INNER_BLOCK</constant>. This is especially important
          for effective identification of buddies in a one-dimensional array
          because the entity that represents a potential buddy cannot be
          associated with <constant>BUDDY_INNER_BLOCK</constant> (i.e. if it
          is associated with <constant>BUDDY_INNER_BLOCK</constant> then it is
          not a buddy).</para>
        </formalpara>
      </section>
    </section>

    <section id="slab">
      <title>Slab allocator</title>

      <para>The majority of memory allocation requests in the kernel are for
      small, frequently used data structures. The basic idea behind the slab
      allocator is that commonly used objects are preallocated in contiguous
      areas of physical memory called slabs<footnote>
          <para>Slabs are in fact blocks of physical memory frames allocated
          from the frame allocator.</para>
        </footnote>. Whenever an object is to be allocated, the slab allocator
      returns the first available item from a suitable slab corresponding to
      the object type<footnote>
          <para>The mechanism is rather more complicated, see the next
          paragraph.</para>
        </footnote>. Due to the fact that the sizes of the requested and
      allocated object match, the slab allocator significantly reduces
      internal fragmentation.</para>

      <para>Slabs of one object type are organized in a structure called slab
      cache. There are usually more slabs in the slab cache, depending on
      previous allocations. If the slab cache runs out of available slabs,
      new slabs are allocated. In order to exploit parallelism and to avoid
      locking of shared spinlocks, slab caches can have variants of
      CPU-private slabs called magazines. Each object begins its life in a
      slab. When it is allocated from there, the slab allocator calls a
      constructor that is registered in the respective slab cache. The
      constructor initializes and brings the object into a known state. The
      object is then used by the user. When the user later frees the object,
      the slab allocator puts it into a CPU-private magazine, from where it
      can be handed out again by subsequent allocations. Note that
      allocations satisfied from a magazine are already initialized by the
      constructor.</para>

      <para>Should HelenOS run short of memory, it would start deallocating
      objects from magazines, calling the slab cache destructor on them and
      putting them back into slabs. When a slab contains no allocated
      objects, it is immediately freed.</para>

      <para><figure>
          <mediaobject id="slab_alloc">
            <imageobject role="html">
              <imagedata fileref="images/slab_alloc.png" format="PNG" />
            </imageobject>
          </mediaobject>

          <title>Slab allocator scheme.</title>
        </figure></para>

      <section>
        <title>Implementation</title>

        <para>The slab allocator is closely modelled after the <ulink
        url="http://www.usenix.org/events/usenix01/full_papers/bonwick/bonwick_html/">
        OpenSolaris slab allocator by Jeff Bonwick and Jonathan Adams </ulink>
        with the following exceptions:<itemizedlist>
            <listitem>
               empty magazines are deallocated when not needed
            </listitem>
          </itemizedlist> The following features are not currently supported
        but would be easy to add: <itemizedlist>
            <listitem>
               cache coloring
            </listitem>

            <listitem>
               dynamic magazine growing (different magazine sizes are already supported, but the allocation strategy would need to be adjusted)
            </listitem>
          </itemizedlist></para>

        <section>
          <title>Magazine layer</title>

          <para>The classic slab allocator design suffers from an extensive
          bottleneck on SMP architectures: its global locking mechanism
          serializes the processing of all slab allocation requests. To
          achieve good SMP scaling, a new layer was therefore introduced into
          the design, and the slab allocator was extended to support per-CPU
          caches called magazines. <termdef>The slab SMP performance
          bottleneck was resolved by introducing a per-CPU caching scheme
          called the <glossterm>magazine layer</glossterm></termdef>.</para>

          <para>A magazine is an N-element cache of objects, so each magazine
          can satisfy N allocations. A magazine behaves like the magazine of
          an automatic weapon (LIFO, stack), so allocation and deallocation
          become simple push and pop pointer operations. The trick is that the
          CPU does not access global slab allocator data when allocating from
          its magazine, thus making parallel allocations between CPUs
          possible.</para>

          <para>The implementation also requires another feature: the
          CPU-bound magazine is actually a pair of magazines, in order to
          avoid thrashing when allocations and deallocations of a single item
          oscillate around the magazine size boundary. LIFO order is enforced,
          which should avoid fragmentation as much as possible.</para>

          <para>Another important entity of the magazine layer is the common
          full magazine list (also called a depot), which stores full
          magazines that may be used by any of the CPU magazine caches to
          reload the active CPU magazine. This list of magazines can be
          pre-filled with full magazines during initialization, but in the
          current implementation it is filled during object deallocation, when
          a CPU magazine becomes full.</para>

          <para>Slab allocator control structures are allocated from special
          slabs that are marked by a special flag indicating that they should
          not be used for the slab magazine layer. This is done to avoid
          possible infinite recursion and deadlock during conventional slab
          allocation requests.</para>
        </section>

        <section>
          <title>Allocation/deallocation</title>

          <para>Every cache contains a list of full slabs and a list of
          partially full slabs. Empty slabs are immediately freed (thrashing
          will be avoided because of the magazines).</para>

          <para>The slab allocator allocates a lot of space and does not free
          it. When the frame allocator fails to allocate a frame, it calls
          slab_reclaim(). It tries 'light reclaim' first, then brutal reclaim.
          The light reclaim releases slabs from the CPU-shared magazine list
          until at least one slab is deallocated in each cache (this algorithm
          should probably change). The brutal reclaim removes all cached
          objects, even from CPU-bound magazines.</para>

          <formalpara>
            <title>Allocation</title>

            <para><emphasis>Step 1.</emphasis> When an allocation request
            arrives, the slab allocator first checks the availability of
            memory in the local CPU-bound magazine. If the memory is there,
            the allocator just "pops" the CPU magazine and returns the pointer
            to the object.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            empty, the allocator will attempt to reload the magazine by
            swapping it with the second CPU magazine and returns to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now we are in the situation
            when both CPU-bound magazines are empty, which makes the allocator
            access the shared full-magazines depot to reload the CPU-bound
            magazines. If the reload is successful (meaning there are full
            magazines in the depot), the algorithm continues at Step
            1.</para>

            <para><emphasis>Step 4.</emphasis> Final step of the allocation.
            In this step the object is allocated from the conventional slab
            layer and a pointer is returned.</para>
          </formalpara>

          <formalpara>
            <title>Deallocation</title>

            <para><emphasis>Step 1.</emphasis> During a deallocation request,
            the slab allocator checks whether the local CPU-bound magazine is
            full. If it is not, the pointer is simply pushed into the
            magazine.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            full, the allocator will attempt to reload the magazine by
            swapping it with the second CPU magazine and returns to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now we are in the situation
            when both CPU-bound magazines are full, which makes the allocator
            access the shared full-magazines depot to put one of the magazines
            into the depot and create a new empty magazine. The algorithm then
            continues at Step 1.</para>
          </formalpara>
        </section>
      </section>
    </section>

    <!-- End of Physmem -->
  </section>

  <section>
    <title>Virtual memory management</title>

    <section>
      <title>Introduction</title>

      <para>Virtual memory is a special memory management technique used by
      the kernel to achieve several mission-critical goals: <itemizedlist>
          <listitem>
             Isolate each task from other tasks that are running on the system at the same time.
          </listitem>

          <listitem>
             Allow allocating more memory than the actual physical memory size of the machine.
          </listitem>

          <listitem>
             Allow, in general, loading and executing two programs that are linked on the same address without complicated relocations.
          </listitem>
        </itemizedlist></para>

      <para><!--

                TLB shootdown ASID/ASID:PAGE/ALL.
                TLB shootdown requests can come in asynchronously
                so there is a cache of TLB shootdown requests. Upon cache overflow TLB shootdown ALL is executed


                <para>
                        Address spaces. Address space area (B+ tree). Only for uspace. Set of syscalls (shrink/extend etc).
                        Special address space area type - device - prohibits shrink/extend syscalls to call on it.
                        Address space has link to mapping tables (hierarchical - per Address space, hash - global tables).
                </para>

--></para>
    </section>

    <section>
      <title>Paging</title>

      <para>Virtual memory usually uses a paged memory model, where the
      virtual memory address space is divided into <emphasis>pages</emphasis>
      (usually 4096 bytes in size) and physical memory is divided into frames
      (of the same size as a page, of course). Each page may be mapped to some
      frame and then, upon a memory access to the virtual address, the CPU
      performs <emphasis>address translation</emphasis> during the instruction
      execution. A non-existing mapping generates a page fault exception,
      calling the kernel exception handler, thus allowing the kernel to
      manipulate the rules of memory access. Information about page mappings
      is stored by the kernel in the <link linkend="page_tables">page
      tables</link>.</para>

      <para>The majority of the architectures use multi-level page tables,
      which means that physical memory must be accessed several times before
      the physical address is obtained. This fact would create serious
      performance overhead in virtual memory management. To avoid it, the
      <link linkend="tlb">Translation Lookaside Buffer (TLB)</link> is
      used.</para>
    </section>

    <section>
      <title>Address spaces</title>

      <section>
        <title>Address space areas</title>

        <para>Each address space consists of mutually disjoint contiguous
        address space areas. An address space area is precisely defined by its
        base address and the number of frames/pages it contains.</para>

        <para>Address space areas carry flags that define the behaviour of
        and permissions on the particular area. <itemizedlist>
            <listitem>
              <emphasis>AS_AREA_READ</emphasis> flag indicates reading
              permission.
            </listitem>

            <listitem>
              <emphasis>AS_AREA_WRITE</emphasis> flag indicates writing
              permission.
            </listitem>

            <listitem>
              <emphasis>AS_AREA_EXEC</emphasis> flag indicates code execution
              permission. Some architectures do not support execution
              permission restriction. In this case this flag has no effect.
            </listitem>

            <listitem>
              <emphasis>AS_AREA_DEVICE</emphasis> flag marks the area as
              mapped to device memory.
            </listitem>
          </itemizedlist></para>

        <para>The kernel provides the possibility for tasks to
        create/expand/shrink/share their address space via a set of
        syscalls.</para>
      </section>

      <section>
        <title>Address Space ID (ASID)</title>

        <para>When switching to a different task, the kernel also needs to
        switch to a different address space mapping. If the TLB cannot
        distinguish between address space mappings, all mapping information in
        the TLB from the old address space must be flushed, which can create
        certain unnecessary overhead during task switching. To avoid this,
        some architectures have the capability to segregate different address
        spaces on the hardware level by introducing an address space
        identifier as a part of the TLB record, telling the virtual address
        translation unit to which address space this record is
        applicable.</para>

        <para>The HelenOS kernel can take advantage of this hardware-supported
        identifier by having an ASID abstraction which is related to the
        corresponding architecture identifier. I.e. on ia64 the kernel ASID is
        derived from the RID (region identifier) and on mips32 the kernel ASID
        is actually the hardware identifier. As expected, this ASID
        information record is part of the <emphasis>as_t</emphasis>
        structure.</para>

        <para>Due to hardware limitations, the hardware ASID has a limited
        length, from 8 bits on mips32 to 24 bits on ia64, which makes it
        impossible to use it as a unique address space identifier for all
        tasks running in the system. In such situations a special ASID
        stealing algorithm is used, which takes an ASID from an inactive task
        and assigns it to the active task.</para>

        <para><classname>ASID stealing algorithm here.</classname></para>
      </section>
    </section>

    <section>
      <title>Virtual address translation</title>

      <section id="page_tables">
        <title>Page tables</title>

        <para>The HelenOS kernel has two different approaches to the paging
        implementation: <emphasis>4-level page tables</emphasis> and
        <emphasis>global hash tables</emphasis>, which are accessible via a
        generic paging abstraction layer. This duality is caused by the major
        architectural differences between the supported platforms. The
        abstraction is implemented with the help of a global structure of
        pointers to the basic mapping functions,
        <emphasis>page_mapping_operations</emphasis>. To achieve different
        page table functionality, the corresponding layer must implement the
        functions declared in
        <emphasis>page_mapping_operations</emphasis>.</para>

        <formalpara>
          <title>4-level page tables</title>

          <para>4-level page tables are a generalization of the hardware
          capabilities of several architectures.<itemizedlist>
              <listitem>
                 ia32 uses 2-level page tables, with full hardware support.
              </listitem>

              <listitem>
                 amd64 uses 4-level page tables, also coming with full hardware support.
              </listitem>

              <listitem>
                 mips and ppc32 have 2-level tables, with software simulated support.
              </listitem>
            </itemizedlist></para>
        </formalpara>

        <formalpara>
          <title>Global hash tables</title>

          <para>The global page hash table: only one exists in the whole
          system (ia64 uses it; note that ia64 currently has the VHPT turned
          off). A generic hash table with separate collision chains is used.
          ASID support is required to use global hash tables.</para>
        </formalpara>

        <para>Thanks to the abstract paging interface, it is possible to add
        more paging implementations, for example B-tree page
        tables.</para>
      </section>

      <section id="tlb">
        <title>Translation Lookaside Buffer</title>

        <para>Due to the extensive overhead of page mapping lookups in the
        page tables, all architectures have a fast associative cache memory
        built into the CPU. This memory, called the TLB, stores recently used
        page table entries.</para>

        <section id="tlb_shootdown">
          <title>TLB consistency. TLB shootdown algorithm.</title>

          <para>The operating system is responsible for keeping the TLB
          consistent by invalidating its contents whenever there is some
          change in the page tables. Such changes may occur when a page or a
          group of pages is unmapped, when a mapping is changed or when the
          system switches the active address space in order to schedule a new
          task (which amounts to a batch unmap of all the address space
          mappings). Moreover, this invalidation operation must be done on all
          system CPUs because each CPU has its own independent TLB cache.
          Thus, maintaining TLB consistency in an SMP configuration is not as
          trivial a task as it might look at first glance. A naive solution
          would assume remote TLB invalidation, which is not possible on most
          architectures, because of the simple fact that flushing the TLB is
          allowed only on the local CPU and there is no way to access other
          CPUs' TLB caches.</para>

          <para>The technique of remote invalidation of TLB entries is called
          "TLB shootdown". HelenOS uses a variation of the algorithm described
          by D. Black et al., "Translation Lookaside Buffer Consistency: A
          Software Approach," Proc. Third Int'l Conf. Architectural Support
          for Programming Languages and Operating Systems, 1989, pp.
          113-122.</para>

          <para>As the situation demands, partial invalidation of the TLB
          caches may be needed. In the case of a simple memory mapping change,
          it is necessary to invalidate only one or more adjacent pages. If
          the architecture is aware of ASIDs, then during address space
          switching the kernel invalidates only entries from the particular
          address space. The final option of TLB invalidation is the complete
          TLB cache invalidation, which is the operation that flushes all
          entries in the TLB.</para>

          <para>TLB shootdown is performed in two phases. First, the initiator
          processor sends an IPI message indicating the TLB shootdown request
          to the rest of the CPUs. Then, it waits until all CPUs confirm that
          they have executed the TLB invalidating action.</para>
        </section>
      </section>
    </section>

    <section>
      <title>---</title>

      <para>At the moment HelenOS does not support swapping.</para>

      <para>Page faults are used to allocate frames on demand within an
      as_area. On architectures that support it, non-executable pages are
      supported.</para>
    </section>
  </section>
</chapter>