<?xml version="1.0" encoding="UTF-8"?>
<chapter id="mm">
  <?dbhtml filename="mm.html"?>

  <title>Memory management</title>

  <para>In previous chapters, this book described the scheduling subsystem as
  the creator of the impression that threads execute in parallel. The memory
  management subsystem, on the other hand, creates the impression that there
  is enough physical memory for the kernel and that userspace tasks have the
  entire address space only for themselves.</para>

  <section>
    <title>Physical memory management</title>

    <section id="zones_and_frames">
      <title>Zones and frames</title>

      <para>HelenOS represents contiguous areas of physical memory in
      structures called frame zones (abbreviated as zones). Each zone contains
      information about the number of allocated and unallocated physical
      memory frames as well as the physical base address of the zone and the
      number of frames contained in it. A zone also contains an array of frame
      structures describing each frame of the zone and, last but not least,
      each zone is equipped with a buddy system that facilitates effective
      allocation of power-of-two sized blocks of frames.</para>

      <para>This organization of physical memory provides good preconditions
      for hot-plugging of more zones. There is also one currently unused zone
      attribute: <code>flags</code>. The attribute could be used to give a
      special meaning to some zones in the future.</para>

      <para>The zones are linked in a doubly-linked list. This might seem a
      bit inefficient because the zone list is walked every time a frame is
      allocated or deallocated. However, this does not represent a significant
      performance problem as it is expected that the number of zones will be
      rather low. Moreover, most architectures merge all zones into
      one.</para>

      <para>For each physical memory frame found in a zone, there is a frame
      structure that contains the number of references to the frame and data
      used by the buddy system.</para>
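      <para>A minimal sketch of how these two structures might look follows;
      the field names are illustrative approximations, not the kernel's exact
      definitions.</para>

      <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

typedef struct link {
        struct link *prev, *next;       /* illustrative list link */
} link_t;

typedef struct buddy_system buddy_system_t;

typedef struct {
        int refcount;                   /* number of references */
        uint8_t buddy_order;            /* order of the block this frame
                                           starts, or BUDDY_INNER_BLOCK */
        link_t buddy_link;              /* link within one order */
} frame_t;

typedef struct {
        link_t link;                    /* link in the list of zones */
        uintptr_t base;                 /* physical base address */
        size_t count;                   /* number of frames */
        size_t free_count;              /* unallocated frames */
        size_t busy_count;              /* allocated frames */
        int flags;                      /* currently unused attribute */
        frame_t *frames;                /* array of frame structures */
        buddy_system_t *buddy_system;   /* per-zone buddy system */
} zone_t;
]]></programlisting>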
    </section>

    <section id="frame_allocator">
      <title>Frame allocator</title>

      <para>The frame allocator satisfies kernel requests to allocate
      power-of-two sized blocks of physical memory. Because of the zonal
      organization of physical memory, the frame allocator is always working
      within the context of some frame zone. In order to carry out the
      allocation requests, the frame allocator is tightly integrated with the
      buddy system belonging to the zone. The frame allocator is also
      responsible for updating information about the number of free and busy
      frames in the zone. <figure>
          <mediaobject id="frame_alloc">
            <imageobject role="html">
              <imagedata fileref="images/frame_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/frame_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Frame allocator scheme.</title>
        </figure></para>

      <formalpara>
        <title>Allocation / deallocation</title>

        <para>Upon an allocation request via the function <code>frame_alloc</code>,
        the frame allocator first tries to find a zone that can satisfy the
        request (i.e. has the required amount of free frames). Once a suitable
        zone is found, the frame allocator uses the buddy allocator on the
        zone's buddy system to perform the allocation. During deallocation,
        which is triggered by a call to <code>frame_free</code>, the frame
        allocator looks up the respective zone that contains the frame being
        deallocated. Afterwards, it calls the buddy allocator again, this time
        to take care of deallocation within the zone's buddy system.</para>
      </formalpara>
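      <para>For illustration, a typical call sequence might look as follows;
      the prototypes shown here are an assumption and should be checked
      against the kernel sources.</para>

      <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

/* Assumed prototypes, for illustration only. */
extern void *frame_alloc(uint8_t order, int flags);
extern void frame_free(uintptr_t frame);

void example(void)
{
        /* Request a naturally aligned block of 2^3 = 8 frames. */
        void *frames = frame_alloc(3, 0);

        if (frames != NULL) {
                /* ... use the block ... */
                frame_free((uintptr_t) frames);
        }
}
]]></programlisting>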
    </section>

    <section id="buddy_allocator">
      <title>Buddy allocator</title>

      <para>In the buddy system, the memory is broken down into power-of-two
      sized naturally aligned blocks. These blocks are organized in an array
      of lists, in which the list with index i contains all unallocated blocks
      of size <mathphrase>2<superscript>i</superscript></mathphrase>. The
      index i is called the order of the block. Should there be two adjacent
      equally sized blocks in the list i (i.e. buddies), the
      buddy allocator would coalesce them and put the resulting block in list
      <mathphrase>i + 1</mathphrase>, provided that the resulting block would
      be naturally aligned. Similarly, when the allocator is asked to
      allocate a block of size
      <mathphrase>2<superscript>i</superscript></mathphrase>, it first tries
      to satisfy the request from the list with index i. If the request cannot
      be satisfied (i.e. the list i is empty), the buddy allocator will try to
      allocate and split a larger block from the list with index i + 1. Both
      of these algorithms are recursive. The recursion ends either when there
      are no blocks to coalesce in the former case or when there are no blocks
      that can be split in the latter case.</para>
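      <para>Natural alignment makes finding a block's buddy a matter of bit
      arithmetic: for a block of order i starting at frame index b, the buddy
      starts at b with bit i flipped. A minimal illustration:</para>

      <programlisting><![CDATA[
#include <stdint.h>
#include <stdio.h>

/* Buddy of the naturally aligned order-i block starting at index b. */
static uint32_t buddy_index(uint32_t b, unsigned i)
{
        return b ^ (1U << i);
}

int main(void)
{
        /* An order-2 block (4 frames) starting at frame 8 has its buddy
         * at frame 12; coalescing them yields the naturally aligned
         * order-3 block starting at frame 8. */
        printf("%u\n", buddy_index(8, 2));   /* prints 12 */
        printf("%u\n", buddy_index(12, 2));  /* prints 8 */
        return 0;
}
]]></programlisting>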
      <para>This approach greatly reduces external fragmentation of memory and
      helps in allocating bigger contiguous blocks of memory aligned to their
      size. On the other hand, the buddy allocator suffers from increased
      internal fragmentation of memory and is not suitable for general kernel
      allocations. This purpose is better addressed by the <link
      linkend="slab">slab allocator</link>.<figure>
          <mediaobject id="buddy_alloc">
            <imageobject role="html">
              <imagedata fileref="images/buddy_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/buddy_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Buddy system scheme.</title>
        </figure></para>

      <section>
        <title>Implementation</title>

        <para>The buddy allocator is, in fact, an abstract framework which can
        be easily specialized to serve one particular task. It knows nothing
        about the nature of the memory it helps to allocate. In order to make
        up for the lack of this knowledge, the buddy allocator exports an
        interface that each of its clients is required to implement. When
        supplied with an implementation of this interface, the buddy allocator
        can use specialized external functions to find a buddy for a block,
        split and coalesce blocks, manipulate block order and mark blocks busy
        or available.</para>
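        <para>A sketch of what such a client interface might look like
        follows; the member names mirror the operations described above and
        are illustrative rather than verbatim kernel declarations.</para>

        <programlisting><![CDATA[
#include <stdint.h>

typedef struct buddy_system buddy_system_t;
typedef struct link link_t;

typedef struct {
        /* Find the buddy of a given block. */
        link_t *(*find_buddy)(buddy_system_t *, link_t *);
        /* Split a block in two and return the second half. */
        link_t *(*bisect)(buddy_system_t *, link_t *);
        /* Coalesce two buddies into a single block. */
        link_t *(*coalesce)(buddy_system_t *, link_t *, link_t *);
        /* Manipulate the block order. */
        void (*set_order)(buddy_system_t *, link_t *, uint8_t);
        uint8_t (*get_order)(buddy_system_t *, link_t *);
        /* Mark a block busy or available. */
        void (*mark_busy)(buddy_system_t *, link_t *);
        void (*mark_available)(buddy_system_t *, link_t *);
} buddy_system_operations_t;
]]></programlisting>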
        <formalpara>
          <title>Data organization</title>

          <para>Each entity allocable by the buddy allocator is required to
          contain space for storing its block order number and a link variable
          used to interconnect blocks within the same order.</para>

          <para>Whatever entities are allocated by the buddy allocator, the
          first entity within a block is used to represent the entire block.
          The first entity keeps the order of the whole block. Other entities
          within the block are assigned the magic value
          <constant>BUDDY_INNER_BLOCK</constant>. This is especially important
          for effective identification of buddies in a one-dimensional array
          because the entity that represents a potential buddy cannot be
          associated with <constant>BUDDY_INNER_BLOCK</constant> (i.e. if it
          is associated with <constant>BUDDY_INNER_BLOCK</constant> then it is
          not a buddy).</para>
        </formalpara>
      </section>
    </section>

    <section id="slab">
      <title>Slab allocator</title>

      <para>The majority of memory allocation requests in the kernel are for
      small, frequently used data structures. The basic idea behind the slab
      allocator is the deployment of lists of preallocated, commonly used
      objects. Whenever an object is to be allocated, the slab allocator takes
      the first item from the list corresponding to the object type. This
      avoids the overhead of allocating and deallocating commonly used types
      of objects such as threads, B+tree nodes etc. Because the sizes of the
      requested and allocated object match, the slab allocator largely
      eliminates internal fragmentation. Moreover, each list can
      have a constructor and a destructor, which leads to performance gains
      because constructed and then destroyed objects don't need to be
      reinitialized<footnote>
          <para>Provided that the assumption that the destructor leaves the
          object in a consistent state holds.</para>
        </footnote>.</para>

      <para>In the slab allocator, objects of one type are kept in contiguous
      areas of physical memory called slabs. Slabs can span from one to
      several physical memory frames. Slabs of objects of one type are stored
      in slab caches. When the allocator needs to allocate an object, it
      searches the available slabs. When no slab contains a free
      object, a new slab is allocated and added to the cache. Contrary to
      allocation, deallocated objects are returned to their respective slabs.
      Empty slabs are deallocated immediately while empty slab caches are not
      freed until HelenOS runs short of memory.</para>
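      <para>A typical usage pattern might look as follows; the interface
      shown is an assumption in the spirit of Bonwick's original object
      caching API, not the verbatim kernel prototypes.</para>

      <programlisting><![CDATA[
#include <stddef.h>

typedef struct slab_cache slab_cache_t;

/* Assumed prototypes, for illustration only. */
extern slab_cache_t *slab_cache_create(const char *name, size_t size,
    int (*constructor)(void *obj, int flags),
    void (*destructor)(void *obj));
extern void *slab_alloc(slab_cache_t *cache, int flags);
extern void slab_free(slab_cache_t *cache, void *obj);

typedef struct {
        int id;                 /* illustrative object type */
} thread_t;

void example(void)
{
        slab_cache_t *threads = slab_cache_create("thread_t",
            sizeof(thread_t), NULL, NULL);

        /* A constructed and later freed object comes back from the
         * cache without needing to be reinitialized. */
        thread_t *t = slab_alloc(threads, 0);
        slab_free(threads, t);
}
]]></programlisting>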
      <para><figure>
          <mediaobject id="slab_alloc">
            <imageobject role="html">
              <imagedata fileref="images/slab_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/slab_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Slab allocator scheme.</title>
        </figure></para>

      <section>
        <title>Overview</title>

        <para>
          <termdef><glossterm>Slab</glossterm> represents a contiguous piece
          of memory, usually made of several physically contiguous
          pages.</termdef>

          <termdef><glossterm>Slab cache</glossterm> consists of one or more
          slabs.</termdef>
        </para>
      </section>

      <section>
        <title>Implementation</title>

        <para>The slab allocator is closely modelled after the <ulink
        url="http://www.usenix.org/events/usenix01/full_papers/bonwick/bonwick_html/">
        OpenSolaris slab allocator by Jeff Bonwick and Jonathan Adams </ulink>
        with the following exceptions:<itemizedlist>
            <listitem>
               empty slabs are deallocated immediately
            </listitem>

            <listitem>
               empty magazines are deallocated when not needed
            </listitem>
          </itemizedlist> The following features are not currently supported
        but would be easy to add: <itemizedlist>
            <listitem>
               cache coloring
            </listitem>

            <listitem>
               dynamic magazine growing (different magazine sizes are already supported, but the allocation strategy would need to be adjusted)
            </listitem>
          </itemizedlist></para>

        <section>
          <title>Magazine layer</title>

          <para>Due to the extensive bottleneck on SMP architectures, caused
          by the global slab locking mechanism that serialized the processing
          of all slab allocation requests, a new layer was introduced into the
          classic slab allocator design. The slab allocator was extended to
          support per-CPU caches called 'magazines' in order to achieve good
          SMP scaling. <termdef>The slab SMP performance bottleneck was
          resolved by introducing a per-CPU caching scheme called the
          <glossterm>magazine layer</glossterm></termdef>.</para>

          <para>A magazine is an N-element cache of objects, so each magazine
          can satisfy N allocations. A magazine behaves like an automatic
          weapon magazine (LIFO, stack), so allocation and deallocation become
          simple push and pop pointer operations. The trick is that a CPU does
          not access the global slab allocator data during allocation from its
          magazine, thus making parallel allocations between CPUs
          possible.</para>
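          <para>A minimal sketch of a magazine as a LIFO stack of object
          pointers follows; the size and field names are illustrative.</para>

          <programlisting><![CDATA[
#include <stddef.h>

#define MAGAZINE_SIZE 8         /* illustrative capacity (N) */

typedef struct {
        size_t busy;                    /* number of cached objects */
        void *objs[MAGAZINE_SIZE];      /* cached object pointers */
} magazine_t;

/* Allocation path: pop the most recently freed object, if any. */
static void *magazine_pop(magazine_t *mag)
{
        if (mag->busy == 0)
                return NULL;    /* empty: fall back to depot/slab layer */
        return mag->objs[--mag->busy];
}

/* Deallocation path: push the object back, if there is room. */
static int magazine_push(magazine_t *mag, void *obj)
{
        if (mag->busy == MAGAZINE_SIZE)
                return -1;      /* full: hand over to the depot */
        mag->objs[mag->busy++] = obj;
        return 0;
}
]]></programlisting>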
          <para>The implementation also requires one more feature: the
          CPU-bound magazine is actually a pair of magazines, in order to
          avoid thrashing when allocations and deallocations of one item
          alternate right at the magazine size boundary. LIFO order is
          enforced, which should avoid fragmentation as much as
          possible.</para>

          <para>Another important entity of the magazine layer is the common
          full magazine list (also called a depot), which stores full
          magazines that may be used by any of the CPU magazine caches to
          reload the active CPU magazine. This list of magazines can be
          pre-filled with full magazines during initialization, but in the
          current implementation it is filled during object deallocation, when
          a CPU magazine becomes full.</para>

          <para>Slab allocator control structures are allocated from special
          slabs that are marked by a special flag indicating that they should
          not use the slab magazine layer. This is done to avoid possible
          infinite recursion and deadlock during conventional slab allocation
          requests.</para>
        </section>

        <section>
          <title>Allocation/deallocation</title>

          <para>Every cache contains a list of full slabs and a list of
          partially full slabs. Empty slabs are freed immediately (thrashing
          will be avoided because of the magazines).</para>

          <para>The slab allocator allocates a lot of space and does not free
          it. When the frame allocator fails to allocate a frame, it calls
          slab_reclaim(). It tries 'light reclaim' first, then brutal reclaim.
          The light reclaim releases slabs from the cpu-shared magazine list
          until at least one slab is deallocated in each cache (this algorithm
          should probably change). The brutal reclaim removes all cached
          objects, even from CPU-bound magazines.</para>

          <formalpara>
            <title>Allocation</title>

            <para><emphasis>Step 1.</emphasis> When an allocation request
            arrives, the slab allocator first checks the availability of
            memory in the local CPU-bound magazine. If an object is there, it
            simply "pops" the CPU magazine and returns the pointer to the
            object.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            empty, the allocator will attempt to reload the magazine by
            swapping it with the second CPU magazine, and returns to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now we are in the situation
            where both CPU-bound magazines are empty, which makes the
            allocator access the shared full-magazines depot to reload the
            CPU-bound magazines. If the reload is successful (meaning there
            are full magazines in the depot), the algorithm continues at
            Step 1.</para>

            <para><emphasis>Step 4.</emphasis> The final step of the
            allocation. In this step the object is allocated from the
            conventional slab layer and a pointer is returned.</para>
          </formalpara>
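          <para>The four steps can be summarized in code as follows; all type
          and helper names here are illustrative, not the kernel's own.</para>

          <programlisting><![CDATA[
#include <stddef.h>

typedef struct magazine magazine_t;
typedef struct {
        magazine_t *active;     /* magazine used for alloc/free */
        magazine_t *last;       /* the second magazine of the pair */
} cpu_cache_t;
typedef struct slab_cache {
        cpu_cache_t *cpu_cache; /* one entry per CPU */
} slab_cache_t;

extern unsigned CPU_ID;
extern void *magazine_pop(magazine_t *);                 /* Steps 1, 2 */
extern int depot_reload(slab_cache_t *, cpu_cache_t *);  /* Step 3 */
extern void *slab_layer_alloc(slab_cache_t *);           /* Step 4 */

static void swap_magazines(cpu_cache_t *cc)
{
        magazine_t *tmp = cc->active;
        cc->active = cc->last;
        cc->last = tmp;
}

void *slab_alloc_sketch(slab_cache_t *cache)
{
        cpu_cache_t *cc = &cache->cpu_cache[CPU_ID];
        void *obj;

        /* Step 1: pop from the active CPU-bound magazine. */
        if ((obj = magazine_pop(cc->active)) != NULL)
                return obj;

        /* Step 2: swap with the second CPU magazine and retry. */
        swap_magazines(cc);
        if ((obj = magazine_pop(cc->active)) != NULL)
                return obj;

        /* Step 3: reload from the shared full-magazines depot. */
        if (depot_reload(cache, cc))
                return magazine_pop(cc->active);

        /* Step 4: fall back to the conventional slab layer. */
        return slab_layer_alloc(cache);
}
]]></programlisting>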
          <formalpara>
            <title>Deallocation</title>

            <para><emphasis>Step 1.</emphasis> During a deallocation request,
            the slab allocator checks whether the local CPU-bound magazine is
            full. If it is not, it simply pushes the pointer onto this
            magazine.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            full, the allocator will attempt to reload the magazine by
            swapping it with the second CPU magazine, and returns to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now we are in the situation
            where both CPU-bound magazines are full, which makes the allocator
            access the shared full-magazines depot, put one of the magazines
            into the depot and create a new empty magazine. The algorithm
            continues at Step 1.</para>
          </formalpara>
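          <para>The deallocation path mirrors the allocation path; the sketch
          below reuses the illustrative declarations from the allocation
          sketch above and again assumes hypothetical helper names.</para>

          <programlisting><![CDATA[
extern int magazine_push(magazine_t *, void *);
extern int depot_swap_in_empty(slab_cache_t *, cpu_cache_t *);
extern void slab_layer_free(slab_cache_t *, void *);

void slab_free_sketch(slab_cache_t *cache, void *obj)
{
        cpu_cache_t *cc = &cache->cpu_cache[CPU_ID];

        for (;;) {
                /* Step 1: push onto the CPU-bound magazine if not full. */
                if (magazine_push(cc->active, obj) == 0)
                        return;

                /* Step 2: swap with the second CPU magazine and retry. */
                swap_magazines(cc);
                if (magazine_push(cc->active, obj) == 0)
                        return;

                /* Step 3: hand one full magazine over to the depot and
                 * install a fresh empty one; then continue at Step 1. */
                if (depot_swap_in_empty(cache, cc) != 0) {
                        /* No magazine available: return the object to
                         * the conventional slab layer directly. */
                        slab_layer_free(cache, obj);
                        return;
                }
        }
}
]]></programlisting>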
        </section>
      </section>
    </section>

    <!-- End of Physmem -->
  </section>

  <section>
    <title>Virtual memory management</title>

    <section>
      <title>Introduction</title>

      <para>Virtual memory is a special memory management technique used by
      the kernel to achieve several mission-critical goals: <itemizedlist>
          <listitem>
             Isolate each task from the other tasks that are running on the system at the same time.
          </listitem>

          <listitem>
             Allow more memory to be allocated than the actual physical memory size of the machine.
          </listitem>

          <listitem>
             Allow, in general, two programs linked at the same address to be loaded and executed without complicated relocations.
          </listitem>
        </itemizedlist></para>

      <para><!--

                TLB shootdown ASID/ASID:PAGE/ALL.
                TLB shootdown requests can come in asynchroniously
                so there is a cache of TLB shootdown requests. Upon cache overflow TLB shootdown ALL is executed


                <para>
                        Address spaces. Address space area (B+ tree). Only for uspace. Set of syscalls (shrink/extend etc).
                        Special address space area type - device - prohibits shrink/extend syscalls to call on it.
                        Address space has link to mapping tables (hierarchical - per Address space, hash - global tables).
                </para>

--></para>
    </section>

    <section>
      <title>Paging</title>

      <para>Virtual memory usually uses a paged memory model, where the
      virtual memory address space is divided into <emphasis>pages</emphasis>
      (usually 4096 bytes in size) and physical memory is divided into frames
      (of the same size as a page, of course). Each page may be mapped to some
      frame and then, upon memory access to the virtual address, the CPU
      performs <emphasis>address translation</emphasis> during the instruction
      execution. A non-existing mapping generates a page fault exception,
      calling the kernel exception handler, thus allowing the kernel to
      manipulate the rules of memory access. Information for page mapping is
      stored by the kernel in the
      <link linkend="page_tables">page tables</link>.</para>
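      <para>The split of a virtual address into a page part and an offset part
      can be illustrated with a few lines of C (4 KiB pages assumed):</para>

      <programlisting><![CDATA[
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void)
{
        uintptr_t va = 0x12345678;

        /* Only the page part is translated; the offset is carried over
         * unchanged into the physical address. */
        uintptr_t page   = va & ~((uintptr_t) (PAGE_SIZE - 1));
        uintptr_t offset = va & (PAGE_SIZE - 1);

        printf("page 0x%lx, offset 0x%lx\n",
            (unsigned long) page, (unsigned long) offset);
        return 0;
}
]]></programlisting>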

      <para>The majority of architectures use multi-level page tables, which
      means that physical memory needs to be accessed several times before the
      physical address is obtained. This would introduce a serious performance
      overhead into virtual memory management. To avoid it, the <link
      linkend="tlb">Translation Lookaside Buffer (TLB)</link> is used.</para>
    </section>

    <section>
      <title>Address spaces</title>

      <section>
        <title>Address space areas</title>

        <para>Each address space consists of mutually disjoint contiguous
        address space areas. An address space area is precisely defined by its
        base address and the number of frames/pages it contains.</para>

        <para>Each address space area has a set of flags that define the
        behaviour and permissions of the particular area: <itemizedlist>
            <listitem>
              <emphasis>AS_AREA_READ</emphasis>
               flag indicates reading permission.
            </listitem>

            <listitem>
              <emphasis>AS_AREA_WRITE</emphasis>
               flag indicates writing permission.
            </listitem>

            <listitem>
              <emphasis>AS_AREA_EXEC</emphasis>
               flag indicates code execution permission. Some architectures do not support execution permission restriction. In this case this flag has no effect.
            </listitem>

            <listitem>
              <emphasis>AS_AREA_DEVICE</emphasis>
               marks the area as mapped to device memory.
            </listitem>
          </itemizedlist></para>
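        <para>A sketch of how these flags might be represented and checked,
        e.g. in a page fault handler, follows; the bit values are
        illustrative, not the kernel's own.</para>

        <programlisting><![CDATA[
#define AS_AREA_READ    (1 << 0)
#define AS_AREA_WRITE   (1 << 1)
#define AS_AREA_EXEC    (1 << 2)
#define AS_AREA_DEVICE  (1 << 3)

/* Decide whether an access may proceed on an area with given flags. */
static int access_allowed(int area_flags, int write_access)
{
        if (write_access)
                return (area_flags & AS_AREA_WRITE) != 0;

        return (area_flags & AS_AREA_READ) != 0;
}
]]></programlisting>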

        <para>The kernel provides tasks with the possibility to create, expand,
        shrink and share their address space via a set of syscalls.</para>
      </section>

      <section>
        <title>Address Space ID (ASID)</title>

        <para>When switching to a different task, the kernel also needs to
        switch mappings to a different address space. If the TLB cannot
        distinguish address space mappings, all mapping information in the TLB
        from the old address space must be flushed, which can create certain
        unnecessary overhead during task switching. To avoid this, some
        architectures have the capability to segregate different address
        spaces at the hardware level by introducing an address space
        identifier as a part of the TLB record, telling the virtual address
        space translation unit to which address space this record is
        applicable.</para>

        <para>The HelenOS kernel can take advantage of this hardware-supported
        identifier by having an ASID abstraction which is derived from the
        corresponding architecture-specific identifier. E.g. on ia64 the
        kernel ASID is derived from the RID (region identifier) and on mips32
        the kernel ASID is actually the hardware identifier. As expected, this
        ASID information record is part of the <emphasis>as_t</emphasis>
        structure.</para>

        <para>Due to hardware limitations, the hardware ASID has limited
        length, from 8 bits on mips32 to 24 bits on ia64, which makes it
        impossible to use it as a unique address space identifier for all
        tasks running in the system. In such situations a special ASID
        stealing algorithm is used, which takes the ASID from an inactive task
        and assigns it to the active task.</para>

        <para><classname>ASID stealing algorithm here.</classname></para>
      </section>
    </section>

    <section>
      <title>Virtual address translation</title>

      <section id="page_tables">
        <title>Page tables</title>

        <para>The HelenOS kernel has two different approaches to the paging
        implementation: <emphasis>4-level page tables</emphasis> and
        <emphasis>global hash tables</emphasis>, which are accessible via a
        generic paging abstraction layer. This dual implementation is a
        consequence of the major architectural differences between the
        supported platforms. The abstraction is implemented with the help of
        the global structure of pointers to the basic mapping functions,
        <emphasis>page_mapping_operations</emphasis>. To achieve different
        functionality of page tables, the corresponding layer must implement
        the functions declared in
        <emphasis>page_mapping_operations</emphasis>.</para>
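        <para>Such an operations structure might look roughly as follows; the
        member names are an illustrative assumption, not the verbatim kernel
        declaration.</para>

        <programlisting><![CDATA[
#include <stdint.h>

typedef struct as as_t;         /* address space */
typedef struct pte pte_t;       /* page table entry */

typedef struct {
        /* Insert a mapping of a page to a frame. */
        void (*mapping_insert)(as_t *as, uintptr_t page,
            uintptr_t frame, int flags);
        /* Remove the mapping of a page. */
        void (*mapping_remove)(as_t *as, uintptr_t page);
        /* Find the entry that maps a page, if any. */
        pte_t *(*mapping_find)(as_t *as, uintptr_t page);
} page_mapping_operations_t;

/* Each page table implementation provides its own instance and the
 * generic layer calls through this pointer. */
extern page_mapping_operations_t *page_mapping_operations;
]]></programlisting>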

        <formalpara>
          <title>4-level page tables</title>

          <para>4-level page tables are a generalization of the hardware
          capabilities of several architectures:<itemizedlist>
              <listitem>
                 ia32 uses 2-level page tables, with full hardware support.
              </listitem>

              <listitem>
                 amd64 uses 4-level page tables, also coming with full hardware support.
              </listitem>

              <listitem>
                 mips and ppc32 have 2-level tables, with software-simulated support.
              </listitem>
            </itemizedlist></para>
        </formalpara>
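        <para>A generic 4-level translation splits the virtual address into
        four table indices plus a page offset. A sketch with illustrative bit
        widths (9 bits per level and 4 KiB pages, as on amd64):</para>

        <programlisting><![CDATA[
#include <stdint.h>

#define PAGE_BITS   12                          /* 4 KiB pages */
#define LEVEL_BITS  9                           /* 512 entries per table */
#define LEVEL_MASK  ((1u << LEVEL_BITS) - 1)

/* Index into the page table at the given level (0 = topmost); each
 * entry points to the next-level table, the last one to the frame. */
static inline unsigned ptl_index(uint64_t va, unsigned level)
{
        return (unsigned) (va >> (PAGE_BITS + LEVEL_BITS * (3 - level)))
            & LEVEL_MASK;
}
]]></programlisting>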

        <formalpara>
          <title>Global hash tables</title>

          <para>Global page hash table: there is only one in the whole system
          (ia64 uses it); note that ia64 has the VHPT turned off for now. A
          generic hash table with separate collision chains is used. ASID
          support is required to use global hash tables.</para>
        </formalpara>

        <para>Thanks to the abstract paging interface, the possibility remains
        open to add further paging implementations, for example B-tree page
        tables.</para>
      </section>

      <section id="tlb">
        <title>Translation Lookaside Buffer</title>

        <para>Due to the extensive overhead of page mapping lookups in the
        page tables, all architectures have a fast associative cache memory
        built into the CPU. This memory, called the TLB, stores recently used
        page table entries.</para>

        <section id="tlb_shootdown">
          <title>TLB consistency. TLB shootdown algorithm.</title>

          <para>The operating system is responsible for keeping the TLB
          consistent by invalidating its contents whenever there is some
          change in the page tables. Those changes may occur when a page or a
          group of pages is unmapped, a mapping is changed, or the system
          switches the active address space to schedule a new system task
          (which is a batch unmap of all address space mappings). Moreover,
          this invalidation operation must be done on all system CPUs because
          each CPU has its own independent TLB cache. Thus maintaining TLB
          consistency in an SMP configuration is not as trivial a task as it
          looks at first glance. A naive solution would assume remote TLB
          invalidation, which is not possible on most of the architectures,
          because of the simple fact that flushing the TLB is allowed only on
          the local CPU and there is no possibility to access other CPUs' TLB
          caches.</para>

          <para>The technique of remote invalidation of TLB entries is called
          "TLB shootdown". HelenOS uses a variation of the algorithm described
          by D. Black et al., "Translation Lookaside Buffer Consistency: A
          Software Approach," Proc. Third Int'l Conf. Architectural Support
          for Programming Languages and Operating Systems, 1989, pp.
          113-122.</para>

          <para>As the situation demands, partial invalidation of TLB caches
          is often desirable. In the case of a simple memory mapping change it
          is necessary to invalidate only one or more adjacent pages. If the
          architecture is aware of ASIDs, then during address space switching
          the kernel invalidates only entries from this particular address
          space. The final option of TLB invalidation is the complete TLB
          cache invalidation, which is the operation that flushes all entries
          in the TLB.</para>

          <para>TLB shootdown is performed in two phases. First, the initiator
          processor sends an IPI message indicating the TLB shootdown request
          to the rest of the CPUs. Then, it waits until all CPUs confirm the
          execution of the TLB invalidating action.</para>
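          <para>A minimal sketch of the two phases follows; all names here are
          illustrative assumptions, not the kernel's actual interface.</para>

          <programlisting><![CDATA[
extern void ipi_broadcast(int vector);   /* send IPI to all other CPUs */
extern void tlb_invalidate_local(void);  /* flush the local TLB */
extern unsigned cpu_count;

static volatile unsigned acks;           /* CPUs that have confirmed */

#define VECTOR_TLB_SHOOTDOWN 1

/* Phases 1 and 2, executed on the initiator. */
void tlb_shootdown_sketch(void)
{
        acks = 0;
        ipi_broadcast(VECTOR_TLB_SHOOTDOWN);
        tlb_invalidate_local();

        /* Wait until every other CPU confirms the invalidation. */
        while (acks < cpu_count - 1)
                ;
}

/* Executed on every other CPU in the IPI handler. */
void tlb_shootdown_ipi_handler(void)
{
        tlb_invalidate_local();
        __atomic_fetch_add(&acks, 1, __ATOMIC_SEQ_CST);
}
]]></programlisting>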
        </section>
      </section>
    </section>

    <section>
      <title>---</title>

      <para>At the moment HelenOS does not support swapping.</para>

      <para>We use page faults to allocate frames on demand within an
      as_area. On architectures that support it, we support non-executable
      pages.</para>
    </section>
  </section>
255
 
-
 
256
  <!-- End of VM -->
-
 
257
 
-
 
258
  <section>
-
 
259
    <!-- Phys mem -->
-
 
260
 
-
 
261
    <title>Physical memory management</title>
-
 
262
 
-
 
263
    <section id="zones_and_frames">
-
 
264
      <title>Zones and frames</title>
-
 
265
 
-
 
266
      <para><!--graphic fileref="images/mm2.png" /--><!--graphic fileref="images/buddy_alloc.svg" format="SVG" /--></para>
-
 
267
 
-
 
268
      <para>On some architectures not whole physical memory is available for
-
 
269
      conventional usage. This limitations require from kernel to maintain a
-
 
270
      table of available and unavailable ranges of physical memory addresses.
-
 
271
      Main idea of zones is in creating memory zone entity, that is a
-
 
272
      continuous chunk of memory available for allocation. If some chunk is
-
 
273
      not available, we simply do not put it in any zone.</para>
-
 
274
 
-
 
275
      <para>Zone is also serves for informational purposes, containing
-
 
276
      information about number of free and busy frames. Physical memory
-
 
277
      allocation is also done inside the certain zone. Allocation of zone
-
 
278
      frame must be organized by the <link linkend="frame_allocator">frame
-
 
279
      allocator</link> associated with the zone.</para>
-
 
280
 
-
 
281
      <para>Some of the architectures (mips32, ppc32) have only one zone, that
-
 
282
      covers whole physical memory, and the others (like ia32) may have
-
 
283
      multiple zones. Information about zones on current machine is stored in
-
 
284
      BIOS hardware tables or can be hardcoded into kernel during compile
-
 
285
      time.</para>
-
 
286
    </section>
-
 
287
 
-
 
288
    <section id="frame_allocator">
-
 
289
      <title>Frame allocator</title>
-
 
290
 
-
 
291
      <figure><mediaobject id="frame_alloc">
-
 
292
          <imageobject role="html">
-
 
293
            <imagedata fileref="images/frame_alloc.png" format="PNG" />
-
 
294
          </imageobject>
-
 
295
 
-
 
296
          <imageobject role="fop">
-
 
297
            <imagedata fileref="images.vector/frame_alloc.svg" format="SVG" />
-
 
298
          </imageobject>
-
 
299
        </mediaobject>
-
 
300
    <title>Frame allocator scheme.</title>
-
 
301
    </figure>
-
 
302
 
-
 
303
      <formalpara>
-
 
304
        <title>Overview</title>
-
 
305
 
-
 
306
        <para>Frame allocator provides physical memory allocation for the
-
 
307
        kernel. Because of zonal organization of physical memory, frame
-
 
308
        allocator is always working in context of some zone, thus making
-
 
309
        impossible to allocate a piece of memory, which lays in different
-
 
310
        zone, which cannot happen, because two adjacent zones can be merged
-
 
311
        into one. Frame allocator is also being responsible to update
-
 
312
        information on the number of free/busy frames in zone. Physical memory
-
 
313
        allocation inside one <link linkend="zones_and_frames">memory
-
 
314
        zone</link> is being handled by an instance of <link
-
 
315
        linkend="buddy_allocator">buddy allocator</link> tailored to allocate
-
 
316
        blocks of physical memory frames.</para>
-
 
317
      </formalpara>
-
 
318
 
-
 
319
      <formalpara>
-
 
320
        <title>Allocation / deallocation</title>
-
 
321
 
-
 
322
        <para>Upon allocation request, frame allocator tries to find first
-
 
323
        zone, that can satisfy the incoming request (has required amount of
-
 
324
        free frames to allocate). During deallocation, frame allocator needs
-
 
325
        to find zone, that contain deallocated frame. This approach could
-
 
326
        bring up two potential problems: <itemizedlist>
-
 
327
            <listitem>
-
 
328
               Linear search of zones does not any good to performance, but number of zones is not expected to be high. And if yes, list of zones can be replaced with more time-efficient B-tree.
-
 
329
            </listitem>
-
 
330
 
-
 
331
            <listitem>
-
 
332
               Quickly find out if zone contains required number of frames to allocate and if this chunk of memory is properly aligned. This issue is perfectly solved bu the buddy allocator.
-
 
333
            </listitem>
-
 
334
          </itemizedlist></para>
-
 
335
      </formalpara>
-
 
336
    </section>
-
 
337
 
-
 
338
    <section id="buddy_allocator">
-
 
339
      <title>Buddy allocator</title>
-
 
340
 
-
 
341
      <section>
-
 
342
        <title>Overview</title>
-
 
343
 
-
 
344
        <figure><mediaobject id="buddy_alloc">
-
 
345
            <imageobject role="html">
-
 
346
              <imagedata fileref="images/buddy_alloc.png" format="PNG" />
-
 
347
            </imageobject>
-
 
348
 
-
 
349
            <imageobject role="fop">
-
 
350
              <imagedata fileref="images.vector/buddy_alloc.svg" format="SVG" />
-
 
351
            </imageobject>
-
 
352
          </mediaobject>
-
 
353
      <title>Buddy system scheme.</title>
-
 
354
      </figure>
-
 
355
 
-
 
356
        <para>In the buddy allocator, the memory is broken down into
-
 
357
        power-of-two sized naturally aligned blocks. These blocks are
-
 
358
        organized in an array of lists, in which the list with index i
-
 
359
        contains all unallocated blocks of size
-
 
360
        <mathphrase>2<superscript>i</superscript></mathphrase>. The index i is
-
 
361
        called the order of block. Should there be two adjacent equally sized
-
 
362
        blocks in the list i<mathphrase />(i.e. buddies), the buddy allocator
-
 
363
        would coalesce them and put the resulting block in list <mathphrase>i
-
 
364
        + 1</mathphrase>, provided that the resulting block would be naturally
-
 
365
        aligned. Similarily, when the allocator is asked to allocate a block
-
 
366
        of size <mathphrase>2<superscript>i</superscript></mathphrase>, it
-
 
367
        first tries to satisfy the request from the list with index i. If the
-
 
368
        request cannot be satisfied (i.e. the list i is empty), the buddy
-
 
369
        allocator will try to allocate and split a larger block from the list
-
 
370
        with index i + 1. Both of these algorithms are recursive. The
-
 
371
        recursion ends either when there are no blocks to coalesce in the
-
 
372
        former case or when there are no blocks that can be split in the
-
 
373
        latter case.</para>
-
 
374
 
-
 
375
        <!--graphic fileref="images/mm1.png" format="EPS" /-->
-
 
376
 
-
 
377
        <para>This approach greatly reduces external fragmentation of memory
-
 
378
        and helps in allocating bigger continuous blocks of memory aligned to
-
 
379
        their size. On the other hand, the buddy allocator suffers increased
-
 
380
        internal fragmentation of memory and is not suitable for general
-
 
381
        kernel allocations. This purpose is better addressed by the <link
-
 
382
        linkend="slab">slab allocator</link>.</para>
-
 
383
      </section>
-
 
384
 
-
 
385
      <section>
-
 
386
        <title>Implementation</title>
-
 
387
 
-
 
388
        <para>The buddy allocator is, in fact, an abstract framework wich can
-
 
389
        be easily specialized to serve one particular task. It knows nothing
-
 
390
        about the nature of memory it helps to allocate. In order to beat the
-
 
391
        lack of this knowledge, the buddy allocator exports an interface that
-
 
392
        each of its clients is required to implement. When supplied with an
-
 
393
        implementation of this interface, the buddy allocator can use
-
 
394
        specialized external functions to find a buddy for a block, split and
-
 
395
        coalesce blocks, manipulate block order and mark blocks busy or
-
 
396
        available. For precise documentation of this interface, refer to
-
 
397
        <emphasis>"HelenOS Generic Kernel Reference Manual"</emphasis>.</para>
-
 
398
 
-
 
399
        <formalpara>
-
 
400
          <title>Data organization</title>
-
 
401
 
-
 
402
          <para>Each entity allocable by the buddy allocator is required to
-
 
403
          contain space for storing block order number and a link variable
-
 
404
          used to interconnect blocks within the same order.</para>
-
 
405
 
-
 
406
          <para>Whatever entities are allocated by the buddy allocator, the
-
 
407
          first entity within a block is used to represent the entire block.
-
 
408
          The first entity keeps the order of the whole block. Other entities
-
 
409
          within the block are assigned the magic value
-
 
410
          <constant>BUDDY_INNER_BLOCK</constant>. This is especially important
-
 
411
          for effective identification of buddies in a one-dimensional array
-
 
412
          because the entity that represents a potential buddy cannot be
-
 
413
          associated with <constant>BUDDY_INNER_BLOCK</constant> (i.e. if it
-
 
414
          is associated with <constant>BUDDY_INNER_BLOCK</constant> then it is
-
 
415
          not a buddy).</para>
-
 
416
 
-
 
417
          <para>The buddy allocator always uses the first frame to represent
-
 
418
          the frame block. This frame contains <varname>buddy_order</varname>
-
 
419
          variable to provide information about the block size it actually
-
 
420
          represents (
-
 
421
          <mathphrase>2<superscript>buddy_order</superscript></mathphrase>
-
 
422
          frames block). Other frames in block have this value set to magic
-
 
423
          <constant>BUDDY_INNER_BLOCK</constant> that is much greater than
-
 
424
          buddy <varname>max_order</varname> value.</para>
-
 
425
 
-
 
426
          <para>Each <varname>frame_t</varname> also contains pointer member
-
 
427
          to hold frame structure in the linked list inside one order.</para>
-
 
428
        </formalpara>
-
 
429
 
-
 
430
        <formalpara>
-
 
431
          <title>Allocation algorithm</title>
-
 
432
 
-
 
433
          <para>Upon <mathphrase>2<superscript>i</superscript></mathphrase>
-
 
434
          frames block allocation request, allocator checks if there are any
-
 
435
          blocks available at the order list <varname>i</varname>. If yes,
-
 
436
          removes block from order list and returns its address. If no,
-
 
437
          recursively allocates
-
 
438
          <mathphrase>2<superscript>i+1</superscript></mathphrase> frame
-
 
439
          block, splits it into two
-
 
440
          <mathphrase>2<superscript>i</superscript></mathphrase> frame blocks.
-
 
441
          Then adds one of the blocks to the <varname>i</varname> order list
-
 
442
          and returns address of another.</para>
-
 
443
        </formalpara>
-
 
444
 
-
 
445
        <formalpara>
-
 
446
          <title>Deallocation algorithm</title>
-
 
447
 
-
 
448
          <para>Check if block has so called buddy (another free
-
 
449
          <mathphrase>2<superscript>i</superscript></mathphrase> frame block
-
 
450
          that can be linked with freed block into the
-
 
451
          <mathphrase>2<superscript>i+1</superscript></mathphrase> block).
-
 
452
          Technically, buddy is a odd/even block for even/odd block
-
 
453
          respectively. Plus we can put an extra requirement, that resulting
-
 
454
          block must be aligned to its size. This requirement guarantees
-
 
455
          natural block alignment for the blocks coming out the allocation
-
 
456
          system.</para>
-
 
457
 
-
 
458
          <para>Using direct pointer arithmetics,
-
 
459
          <varname>frame_t::ref_count</varname> and
-
 
460
          <varname>frame_t::buddy_order</varname> variables, finding buddy is
-
 
461
          done at constant time.</para>
-
 
462
        </formalpara>
-
 
463
      </section>
-
 
464
    </section>
-
 
465
 
-
 
466
    <section id="slab">
-
 
467
      <title>Slab allocator</title>
-
 
468
 
-
 
469
      <section>
-
 
470
        <title>Overview</title>
-
 
471
 
-
 
472
        <para><termdef><glossterm>Slab</glossterm> represents a contiguous
-
 
473
        piece of memory, usually made of several physically contiguous
-
 
474
        pages.</termdef> <termdef><glossterm>Slab cache</glossterm> consists
-
 
475
        of one or more slabs.</termdef></para>
-
 
476
 
-
 
477
        <para>The majority of memory allocation requests in the kernel are for
-
 
478
        small, frequently used data structures. For this purpose the slab
-
 
479
        allocator is a perfect solution. The basic idea behind the slab
-
 
480
        allocator is to have lists of commonly used objects available packed
-
 
481
        into pages. This avoids the overhead of allocating and destroying
-
 
482
        commonly used types of objects such threads, virtual memory structures
-
 
483
        etc. Also due to the exact allocated size matching, slab allocation
-
 
484
        completely eliminates internal fragmentation issue.</para>
      </section>

      <section>
        <title>Implementation</title>

        <figure>
          <title>Slab allocator scheme.</title>

          <mediaobject id="slab_alloc">
            <imageobject role="html">
              <imagedata fileref="images/slab_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/slab_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>
        </figure>

        <para>The slab allocator is closely modelled after the <ulink
        url="http://www.usenix.org/events/usenix01/full_papers/bonwick/bonwick_html/">
        OpenSolaris slab allocator by Jeff Bonwick and Jonathan
        Adams</ulink> with the following exceptions: <itemizedlist>
            <listitem>
              empty slabs are deallocated immediately (in Linux they are
              kept on a linked list, in Solaris ???)
            </listitem>

            <listitem>
              empty magazines are deallocated when not needed (in Solaris
              they are kept on a linked list in the slab cache)
            </listitem>
          </itemizedlist> The following features are not currently supported
        but would be easy to add: <itemizedlist>
            <listitem>
              cache coloring
            </listitem>

            <listitem>
              dynamic magazine growing (different magazine sizes are already
              supported, but the allocation strategy would need to be
              adjusted)
            </listitem>
          </itemizedlist></para>

        <section>
          <title>Magazine layer</title>

          <para>Due to the extensive bottleneck on SMP architectures, caused
          by the global slab locking mechanism that serialized processing of
          all slab allocation requests, a new layer was introduced into the
          classic slab allocator design. The slab allocator was extended
          with per-CPU caches called magazines in order to achieve good SMP
          scaling. <termdef>The slab SMP performance bottleneck was resolved
          by introducing a per-CPU caching scheme called the
          <glossterm>magazine layer</glossterm></termdef>.</para>

          <para>A magazine is an N-element cache of objects, so each
          magazine can satisfy N allocations. A magazine behaves like an
          automatic weapon magazine (LIFO, stack), so allocation and
          deallocation become simple push and pop pointer operations. The
          trick is that the CPU does not access global slab allocator data
          while allocating from its magazine, which makes parallel
          allocations on different CPUs possible.</para>
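
          <para>A minimal sketch of the magazine fast path is shown below;
          the structure layout and field names are assumptions for
          illustration and do not mirror the actual HelenOS
          definitions.</para>

          <programlisting><![CDATA[
#include <stdbool.h>
#include <stddef.h>

/* Illustrative layout; field names are assumptions. */
typedef struct {
        size_t busy;            /* number of objects currently cached */
        size_t size;            /* capacity N of the magazine */
        void *objs[];           /* LIFO stack of object pointers */
} magazine_t;

/* Pop: satisfy an allocation from the CPU-bound magazine. */
static void *magazine_pop(magazine_t *mag)
{
        if (mag->busy == 0)
                return NULL;                    /* empty, caller falls back */
        return mag->objs[--mag->busy];
}

/* Push: return an object into the CPU-bound magazine. */
static bool magazine_push(magazine_t *mag, void *obj)
{
        if (mag->busy == mag->size)
                return false;                   /* full, caller falls back */
        mag->objs[mag->busy++] = obj;
        return true;
}
]]></programlisting>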

          <para>The implementation also adds another feature: the CPU-bound
          magazine is actually a pair of magazines, in order to avoid
          thrashing when allocations and deallocations of a single item
          alternate at the magazine size boundary. LIFO order is enforced,
          which should avoid fragmentation as much as possible.</para>

          <para>Another important entity of the magazine layer is the common
          full-magazine list (also called a depot), which stores full
          magazines that may be used by any of the CPU magazine caches to
          reload the active CPU magazine. This list of magazines can be
          pre-filled with full magazines during initialization, but in the
          current implementation it is filled during object deallocation,
          when a CPU magazine becomes full.</para>

          <para>Slab allocator control structures are allocated from special
          slabs marked with a special flag indicating that they should not
          use the slab magazine layer. This is done to avoid possible
          infinite recursion and deadlock during conventional slab
          allocation requests.</para>
        </section>

        <section>
          <title>Allocation/deallocation</title>

          <para>Every cache contains a list of full slabs and a list of
          partially full slabs. Empty slabs are freed immediately (thrashing
          is avoided thanks to the magazines).</para>

          <para>The slab allocator allocates a lot of space and does not
          free it. When the frame allocator fails to allocate a frame, it
          calls <code>slab_reclaim()</code>, which tries a 'light reclaim'
          first and then a brutal reclaim. The light reclaim releases slabs
          from the CPU-shared magazine list until at least one slab is
          deallocated in each cache (this algorithm should probably change).
          The brutal reclaim removes all cached objects, even those in the
          CPU-bound magazines.</para>
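
          <para>A rough sketch of the two-phase reclaim is given below; the
          flag names, the cache list traversal and the helper routines are
          assumptions for illustration only.</para>

          <programlisting><![CDATA[
/* Illustrative sketch; names and helpers are assumptions. */
#define RECLAIM_LIGHT   0       /* drain shared (depot) magazines only */
#define RECLAIM_ALL     1       /* additionally flush CPU-bound magazines */

size_t slab_reclaim_sketch(unsigned int flags)
{
        size_t freed = 0;

        /* Walk all caches; the light pass drains only the shared depot. */
        for (slab_cache_t *cache = cache_list; cache; cache = cache->next) {
                freed += depot_drain(cache);

                /* The brutal pass also empties the per-CPU magazines. */
                if (flags == RECLAIM_ALL)
                        freed += cpu_magazines_drain(cache);
        }

        return freed;           /* frames returned to the frame allocator */
}
]]></programlisting>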

          <formalpara>
            <title>Allocation</title>

            <para><emphasis>Step 1.</emphasis> When an allocation request
            arrives, the slab allocator first checks whether an object is
            available in the local CPU-bound magazine. If so, it simply pops
            the CPU magazine and returns the pointer to the object.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            empty, the allocator attempts to reload it by swapping it with
            the second CPU magazine, and returns to the first step.</para>

            <para><emphasis>Step 3.</emphasis> If both CPU-bound magazines
            are empty, the allocator accesses the shared full-magazines
            depot to reload the CPU-bound magazines. If the reload is
            successful (meaning there are full magazines in the depot), the
            algorithm continues at Step 1.</para>

            <para><emphasis>Step 4.</emphasis> In the final step of the
            allocation, the object is allocated from the conventional slab
            layer and the pointer is returned.</para>
          </formalpara>
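
          <para>The four steps can be summarized in C-like pseudocode as
          follows; this is a simplified sketch under assumed helper names
          (locking omitted), not the actual implementation.</para>

          <programlisting><![CDATA[
/* Simplified sketch; helper names are assumptions, locking is omitted. */
void *slab_alloc_sketch(slab_cache_t *cache)
{
        cpu_magazines_t *cc = &cache->mag_cache[CPU->id];
        void *obj;

        /* Step 1: pop from the active CPU-bound magazine if possible. */
        if ((obj = magazine_pop(cc->current)))
                return obj;

        /* Step 2: swap with the second CPU magazine and retry. */
        magazine_swap(&cc->current, &cc->last);
        if ((obj = magazine_pop(cc->current)))
                return obj;

        /* Step 3: reload the CPU-bound magazines from the shared depot. */
        if (depot_reload(cache, cc))
                return magazine_pop(cc->current);

        /* Step 4: fall back to the conventional slab layer. */
        return slab_layer_alloc(cache);
}
]]></programlisting>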

          <formalpara>
            <title>Deallocation</title>

            <para><emphasis>Step 1.</emphasis> During a deallocation
            request, the slab allocator checks whether the local CPU-bound
            magazine has room. If so, it simply pushes the pointer onto this
            magazine.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            full, the allocator attempts to replace it by swapping it with
            the second CPU magazine, and returns to the first step.</para>

            <para><emphasis>Step 3.</emphasis> If both CPU-bound magazines
            are full, the allocator accesses the shared full-magazines
            depot, puts one of the magazines into the depot and creates a
            new empty magazine. The algorithm then continues at
            Step 1.</para>
          </formalpara>
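
          <para>The corresponding deallocation path, again as an
          illustrative sketch with assumed helpers:</para>

          <programlisting><![CDATA[
/* Simplified sketch mirroring the allocation path; helpers are assumptions. */
void slab_free_sketch(slab_cache_t *cache, void *obj)
{
        cpu_magazines_t *cc = &cache->mag_cache[CPU->id];

        /* Step 1: push onto the active CPU-bound magazine if it has room. */
        if (magazine_push(cc->current, obj))
                return;

        /* Step 2: swap with the second CPU magazine and retry. */
        magazine_swap(&cc->current, &cc->last);
        if (magazine_push(cc->current, obj))
                return;

        /* Step 3: hand the full magazine over to the depot, install a
         * fresh empty magazine and retry the push. */
        depot_put(cache, cc->current);
        cc->current = magazine_empty_create(cache);
        magazine_push(cc->current, obj);
}
]]></programlisting>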
        </section>
      </section>
    </section>

    <!-- End of Physmem -->
  </section>

  <section>
    <title>Memory sharing</title>

    <para>Not implemented yet(?)</para>
  </section>
</chapter>