<chapter id="mm">
  <?dbhtml filename="mm.html"?>

  <title>Memory management</title>

  <para>In previous chapters, this book described the scheduling subsystem as
  the creator of the impression that threads execute in parallel. The memory
  management subsystem, on the other hand, creates the impression that there
  is enough physical memory for the kernel and that userspace tasks have the
  entire address space only for themselves.</para>

  <section>
    <title>Physical memory management</title>

    <section id="zones_and_frames">
      <title>Zones and frames</title>

      <para>HelenOS represents continuous areas of physical memory in
      structures called frame zones (abbreviated as zones). Each zone contains
      information about the number of allocated and unallocated physical
      memory frames as well as the physical base address of the zone and the
      number of frames contained in it. A zone also contains an array of frame
      structures describing each frame of the zone and, last but not least,
      each zone is equipped with a buddy system that facilitates effective
      allocation of power-of-two sized blocks of frames.</para>

      <para>This organization of physical memory provides good preconditions
      for hot-plugging of more zones. There is also one currently unused zone
      attribute: <code>flags</code>. The attribute could be used to give a
      special meaning to some zones in the future.</para>

      <para>The zones are linked in a doubly-linked list. This might seem a
      bit inefficient because the zone list is walked every time a frame is
      allocated or deallocated. However, this does not represent a significant
      performance problem as it is expected that the number of zones will be
      rather low. Moreover, most architectures merge all zones into
      one.</para>

      <para>For each physical memory frame found in a zone, there is a frame
      structure that contains the number of references to the frame and data
      used by the buddy system.</para>
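
      <para>The following C fragment is an illustrative sketch of how such a
      zone and its frame structures could be declared. It is deliberately
      simplified; the type and field names are invented for this text and do
      not mirror the actual HelenOS sources.</para>

      <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only -- simplified, not the actual HelenOS definitions. */

typedef struct frame {
    size_t refcount;            /* number of references to this frame */
    uint8_t buddy_order;        /* order of the buddy block starting here */
    struct frame *buddy_link;   /* link to other blocks of the same order */
} frame_t;

typedef struct zone {
    struct zone *next;          /* zones are kept in a doubly-linked list */
    struct zone *prev;
    uintptr_t base;             /* physical base address of the zone */
    size_t count;               /* total number of frames in the zone */
    size_t free_count;          /* unallocated frames */
    size_t busy_count;          /* allocated frames */
    int flags;                  /* currently unused zone attribute */
    frame_t *frames;            /* array of frame structures, one per frame */
} zone_t;
]]></programlisting>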
    </section>

    <section id="frame_allocator">
      <title>Frame allocator</title>

      <para>The frame allocator satisfies kernel requests to allocate
      power-of-two sized blocks of physical memory. Because of the zonal
      organization of physical memory, the frame allocator is always working
      within the context of some frame zone. In order to carry out the
      allocation requests, the frame allocator is tightly integrated with the
      buddy system belonging to the zone. The frame allocator is also
      responsible for updating information about the number of free and busy
      frames in the zone. <figure>
          <mediaobject id="frame_alloc">
            <imageobject role="html">
              <imagedata fileref="images/frame_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/frame_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Frame allocator scheme.</title>
        </figure></para>

      <formalpara>
        <title>Allocation / deallocation</title>

        <para>Upon allocation request via function <code>frame_alloc</code>,
        the frame allocator first tries to find a zone that can satisfy the
        request (i.e. has the required amount of free frames). Once a suitable
        zone is found, the frame allocator uses the buddy allocator on the
        zone's buddy system to perform the allocation. During deallocation,
        which is triggered by a call to <code>frame_free</code>, the frame
        allocator looks up the respective zone that contains the frame being
        deallocated. Afterwards, it calls the buddy allocator again, this time
        to take care of deallocation within the zone's buddy system.</para>
      </formalpara>
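
      <para>A hedged sketch of this allocation and deallocation path is shown
      below. The names <code>frame_alloc</code> and <code>frame_free</code>
      come from the text above, but the bodies, the constant
      <code>FRAME_SIZE</code> and the helpers <code>zone_buddy_alloc</code>
      and <code>zone_buddy_free</code> are hypothetical; the fragment builds
      on the simplified <code>zone_t</code> from the earlier sketch and only
      illustrates the zone walk and the delegation to the zone's buddy
      system.</para>

      <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

#define FRAME_SIZE 4096            /* illustrative frame size */

extern zone_t *zone_list_head;     /* head of the zone list (sketch) */

/* Hypothetical helpers provided by the zone's buddy system. */
extern void *zone_buddy_alloc(zone_t *zone, uint8_t order);
extern uint8_t zone_buddy_free(zone_t *zone, void *addr);

void *frame_alloc(uint8_t order)
{
    /* Walk the zone list and use the first zone with enough free frames. */
    for (zone_t *z = zone_list_head; z != NULL; z = z->next) {
        if (z->free_count < ((size_t) 1 << order))
            continue;
        void *addr = zone_buddy_alloc(z, order);
        if (addr != NULL) {
            z->free_count -= (size_t) 1 << order;
            z->busy_count += (size_t) 1 << order;
            return addr;
        }
    }
    return NULL;   /* no zone could satisfy the request */
}

void frame_free(void *addr)
{
    /* Look up the zone whose physical range contains the frame. */
    for (zone_t *z = zone_list_head; z != NULL; z = z->next) {
        uintptr_t pa = (uintptr_t) addr;
        if (pa < z->base || pa >= z->base + z->count * FRAME_SIZE)
            continue;
        uint8_t order = zone_buddy_free(z, addr);   /* coalescing happens here */
        z->free_count += (size_t) 1 << order;
        z->busy_count -= (size_t) 1 << order;
        return;
    }
}
]]></programlisting>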
    </section>

    <section id="buddy_allocator">
      <title>Buddy allocator</title>

      <para>In the buddy system, the memory is broken down into power-of-two
      sized naturally aligned blocks. These blocks are organized in an array
      of lists, in which the list with index i contains all unallocated blocks
      of size <mathphrase>2<superscript>i</superscript></mathphrase>. The
      index i is called the order of the block. Should there be two adjacent
      equally sized blocks in the list i (i.e. buddies), the buddy allocator
      would coalesce them and put the resulting block in list
      <mathphrase>i + 1</mathphrase>, provided that the resulting block would
      be naturally aligned. Similarly, when the allocator is asked to
      allocate a block of size
      <mathphrase>2<superscript>i</superscript></mathphrase>, it first tries
      to satisfy the request from the list with index i. If the request cannot
      be satisfied (i.e. the list i is empty), the buddy allocator will try to
      allocate and split a larger block from the list with index i + 1. Both
      of these algorithms are recursive. The recursion ends either when there
      are no blocks to coalesce in the former case or when there are no blocks
      that can be split in the latter case.</para>
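
      <para>The allocation half of this recursion can be sketched in C as
      follows. The listing is illustrative only: the block type, the
      <code>order_list</code> array, the list helpers and the constant
      <code>BLOCK_UNIT</code> are hypothetical and merely demonstrate the
      split-on-allocation idea; coalescing on deallocation works
      symmetrically.</para>

      <programlisting><![CDATA[
#include <stddef.h>

#define MAX_ORDER  16
#define BLOCK_UNIT 4096   /* size of an order-0 block, illustrative */

typedef struct block {
    struct block *next;                     /* free-list link within one order */
} block_t;

static block_t *order_list[MAX_ORDER + 1];  /* order_list[i]: free 2^i blocks */

static block_t *list_pop(block_t **head)
{
    block_t *b = *head;
    if (b != NULL)
        *head = b->next;
    return b;
}

static void list_push(block_t **head, block_t *b)
{
    b->next = *head;
    *head = b;
}

block_t *buddy_alloc(unsigned i)
{
    block_t *b = list_pop(&order_list[i]);
    if (b != NULL)
        return b;                 /* a free block of order i was available */

    if (i == MAX_ORDER)
        return NULL;              /* recursion ends: nothing left to split */

    /* Allocate a block of order i + 1 and split it into two buddies. */
    block_t *big = buddy_alloc(i + 1);
    if (big == NULL)
        return NULL;

    /* The second half starts 2^i units after the first one. */
    block_t *second = (block_t *) ((char *) big + ((size_t) 1 << i) * BLOCK_UNIT);
    list_push(&order_list[i], second);   /* keep one half on the order i list */
    return big;                          /* and return the other half */
}
]]></programlisting>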

      <para>This approach greatly reduces external fragmentation of memory and
      helps in allocating bigger continuous blocks of memory aligned to their
      size. On the other hand, the buddy allocator suffers from increased
      internal fragmentation of memory and is not suitable for general kernel
      allocations. This purpose is better addressed by the <link
      linkend="slab">slab allocator</link>.<figure>
          <mediaobject id="buddy_alloc">
            <imageobject role="html">
              <imagedata fileref="images/buddy_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/buddy_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Buddy system scheme.</title>
        </figure></para>

      <section>
        <title>Implementation</title>

        <para>The buddy allocator is, in fact, an abstract framework which can
        be easily specialized to serve one particular task. It knows nothing
        about the nature of memory it helps to allocate. In order to overcome
        the lack of this knowledge, the buddy allocator exports an interface
        that each of its clients is required to implement. When supplied with
        an implementation of this interface, the buddy allocator can use
        specialized external functions to find a buddy for a block, split and
        coalesce blocks, manipulate block order and mark blocks busy or
        available.</para>
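
        <para>An illustrative C sketch of what such a client-supplied
        interface can look like is given below. The structure and member names
        are invented for this example and are not the actual HelenOS
        declarations.</para>

        <programlisting><![CDATA[
#include <stdint.h>

/* Hypothetical interface that a buddy system client would implement. */
typedef struct buddy_client_ops {
    /* find the buddy of a block of the given order, or NULL if it is not free */
    void *(*find_buddy)(void *block, uint8_t order);
    /* split a block and return its second half */
    void *(*bisect)(void *block, uint8_t order);
    /* join a block with its buddy into one block of the next order */
    void *(*coalesce)(void *block, void *buddy, uint8_t order);
    /* read and write the order stored in the block's representing entity */
    uint8_t (*get_order)(void *block);
    void (*set_order)(void *block, uint8_t order);
    /* mark the representing entity busy or available */
    void (*mark_busy)(void *block);
    void (*mark_available)(void *block);
} buddy_client_ops_t;
]]></programlisting>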

        <formalpara>
          <title>Data organization</title>

          <para>Each entity allocable by the buddy allocator is required to
          contain space for storing the block order number and a link variable
          used to interconnect blocks within the same order.</para>

          <para>Whatever entities are allocated by the buddy allocator, the
          first entity within a block is used to represent the entire block.
          The first entity keeps the order of the whole block. Other entities
          within the block are assigned the magic value
          <constant>BUDDY_INNER_BLOCK</constant>. This is especially important
          for effective identification of buddies in a one-dimensional array
          because the entity that represents a potential buddy cannot be
          associated with <constant>BUDDY_INNER_BLOCK</constant> (i.e. if it
          is associated with <constant>BUDDY_INNER_BLOCK</constant> then it is
          not a buddy).</para>
        </formalpara>
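
        <para>The role of <constant>BUDDY_INNER_BLOCK</constant> in buddy
        identification can be illustrated with the short fragment below. It
        reuses the simplified <code>frame_t</code> from the earlier sketch;
        the constant's value, the index arithmetic and the helper name are
        assumptions made only for the example.</para>

        <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

#define BUDDY_INNER_BLOCK 0xff   /* illustrative: larger than any valid order */

/* frames[] is the zone's frame array and index is the position of the
 * block's first frame within that array. */
static frame_t *find_buddy(frame_t *frames, size_t index, uint8_t order)
{
    /* The buddy of a naturally aligned block is obtained by flipping
     * bit 'order' of the block's index. */
    size_t buddy_index = index ^ ((size_t) 1 << order);
    frame_t *buddy = &frames[buddy_index];

    /* An entity marked BUDDY_INNER_BLOCK lies inside a larger block and
     * therefore cannot represent a buddy. */
    if (buddy->buddy_order == BUDDY_INNER_BLOCK)
        return NULL;

    return buddy;
}
]]></programlisting>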
      </section>
    </section>

    <section id="slab">
      <title>Slab allocator</title>

      <para>The majority of memory allocation requests in the kernel are for
      small, frequently used data structures. The basic idea behind the slab
      allocator is the deployment of lists of preallocated, commonly used
      objects. Whenever an object is to be allocated, the slab allocator takes
      the first item from the list corresponding to the object type. This
      avoids the overhead of allocating and deallocating commonly used types
      of objects such as threads, B+tree nodes etc. Because the sizes of the
      requested and allocated object match, the slab allocator largely
      eliminates internal fragmentation. Moreover, each list can have a
      constructor and a destructor, which leads to performance gains because
      constructed and then destroyed objects don't need to be
      reinitialized<footnote>
          <para>Provided that the assumption that the destructor leaves the
          object in a consistent state holds.</para>
        </footnote>.</para>

      <para>In the slab allocator, objects of one type are kept in continuous
      areas of physical memory called slabs. Slabs can span from one to
      several physical memory frames. Slabs of objects of one type are stored
      in slab caches. When the allocator needs to allocate an object, it
      searches the available slabs. When no slab contains a free object, a new
      slab is allocated and added to the cache. Contrary to allocation,
      deallocated objects are returned to their respective slabs. Empty slabs
      are deallocated immediately while empty slab caches are not freed until
      HelenOS runs short of memory.</para>
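
      <para>The relationship between a slab cache, its slabs and the
      constructor/destructor hooks can be pictured with the following
      illustrative declarations; the names are invented for this example and
      do not mirror the actual HelenOS sources.</para>

      <programlisting><![CDATA[
#include <stddef.h>

/* Illustrative, simplified slab structures -- not the HelenOS definitions. */

typedef struct slab {
    struct slab *next;          /* link in one of the cache's slab lists */
    void *free_list;            /* first free object inside this slab */
    size_t inuse;               /* number of allocated objects in the slab */
} slab_t;

typedef struct slab_cache {
    struct slab_cache *next;    /* link in the global list of caches */
    const char *name;           /* e.g. "thread_t" or "btree_node_t" */
    size_t object_size;         /* every object in the cache has this size */
    slab_t *full_slabs;         /* slabs with no free object left */
    slab_t *partial_slabs;      /* slabs with at least one free object */
    /* Optional hooks: freed objects stay constructed, so they do not have
     * to be reinitialized on the next allocation. */
    int (*constructor)(void *obj);
    void (*destructor)(void *obj);
} slab_cache_t;
]]></programlisting>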

      <para><figure>
          <mediaobject id="slab_alloc">
            <imageobject role="html">
              <imagedata fileref="images/slab_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/slab_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Slab allocator scheme.</title>
        </figure></para>

      <section>
        <para><termdef><glossterm>Slab</glossterm> represents a contiguous
        piece of memory, usually made of several physically contiguous
        pages.</termdef> <termdef><glossterm>Slab cache</glossterm> consists
        of one or more slabs.</termdef></para>
      </section>

      <section>
        <title>Implementation</title>

        <para>The slab allocator is closely modelled after the <ulink
        url="http://www.usenix.org/events/usenix01/full_papers/bonwick/bonwick_html/">
        OpenSolaris slab allocator by Jeff Bonwick and Jonathan Adams </ulink>
        with the following exceptions:<itemizedlist>
            <listitem>
               empty slabs are deallocated immediately (in Linux they are kept in a linked list, in Solaris ???)
            </listitem>

            <listitem>
               empty magazines are deallocated when not needed
            </listitem>
          </itemizedlist> The following features are not currently supported
        but would be easy to add: <itemizedlist>
            <listitem>
               cache coloring
            </listitem>

            <listitem>
               dynamic magazine growth (different magazine sizes are already supported, but the allocation strategy would need to be adjusted)
            </listitem>
          </itemizedlist></para>

        <section>
          <title>Magazine layer</title>

          <para>Due to the extensive bottleneck on SMP architectures, caused
          by the global slab locking mechanism that serializes the processing
          of all slab allocation requests, a new layer was introduced into the
          classic slab allocator design. The slab allocator was extended to
          support per-CPU caches called magazines in order to achieve good SMP
          scaling. <termdef>The slab SMP performance bottleneck was resolved
          by introducing a per-CPU caching scheme called the
          <glossterm>magazine layer</glossterm></termdef>.</para>

          <para>A magazine is an N-element cache of objects, so each magazine
          can satisfy N allocations. A magazine behaves like an automatic
          weapon magazine (LIFO, stack), so allocation and deallocation become
          simple push and pop pointer operations. The trick is that the CPU
          does not access global slab allocator data when allocating from its
          magazine, thus making parallel allocations on different CPUs
          possible.</para>
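
          <para>The LIFO behaviour can be pictured with the following
          illustrative fragment; the magazine structure and helper names are
          simplified for the example and are not the kernel's actual
          definitions.</para>

          <programlisting><![CDATA[
#include <stdbool.h>
#include <stddef.h>

/* Illustrative magazine: a fixed-capacity stack of object pointers. */
typedef struct magazine {
    size_t busy;       /* number of objects currently held */
    size_t size;       /* capacity N of the magazine */
    void *objs[];      /* the cached object pointers */
} magazine_t;

/* Pop an object during allocation; returns NULL when the magazine is empty. */
static void *magazine_obj_get(magazine_t *mag)
{
    if (mag->busy == 0)
        return NULL;
    return mag->objs[--mag->busy];
}

/* Push an object during deallocation; fails when the magazine is full. */
static bool magazine_obj_put(magazine_t *mag, void *obj)
{
    if (mag->busy == mag->size)
        return false;
    mag->objs[mag->busy++] = obj;
    return true;
}
]]></programlisting>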

          <para>The implementation also adds another feature: the CPU-bound
          magazine is actually a pair of magazines, which avoids thrashing
          during repeated allocation and deallocation of a single item at the
          magazine size boundary. LIFO order is enforced, which should avoid
          fragmentation as much as possible.</para>

          <para>Another important entity of the magazine layer is the common
          full magazine list (also called a depot), which stores full
          magazines that may be used by any of the CPU magazine caches to
          reload the active CPU magazine. This list of magazines can be
          pre-filled with full magazines during initialization, but in the
          current implementation it is filled during object deallocation, when
          a CPU magazine becomes full.</para>

          <para>Slab allocator control structures are allocated from special
          slabs that are marked by a special flag indicating that they should
          not be used for the slab magazine layer. This is done to avoid
          possible infinite recursion and deadlock during conventional slab
          allocation requests.</para>
        </section>

        <section>
          <title>Allocation/deallocation</title>

          <para>Every cache contains a list of full slabs and a list of
          partially full slabs. Empty slabs are freed immediately (thrashing
          is avoided thanks to the magazines).</para>

          <para>The slab allocator allocates a lot of space and does not free
          it. When the frame allocator fails to allocate a frame, it calls
          <code>slab_reclaim()</code>. It tries 'light reclaim' first, then
          brutal reclaim. The light reclaim releases slabs from the CPU-shared
          magazine list until at least one slab is deallocated in each cache
          (this algorithm should probably change). The brutal reclaim removes
          all cached objects, even from the CPU-bound magazines.</para>
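
          <para>The reclaim policy can be summarized by the fragment below.
          Only the name <code>slab_reclaim()</code> comes from the text; the
          reclaim flags, the global cache list and the per-cache helper are
          assumptions, and the <code>slab_cache_t</code> type is the one
          sketched earlier.</para>

          <programlisting><![CDATA[
#include <stddef.h>

#define RECLAIM_LIGHT  0   /* release slabs cached in the shared magazine list */
#define RECLAIM_BRUTAL 1   /* additionally flush the CPU-bound magazines */

extern slab_cache_t *cache_list;                         /* all caches (sketch) */
extern size_t cache_reclaim(slab_cache_t *cache, int mode);

size_t slab_reclaim(void)
{
    size_t frames_freed = 0;

    /* Light pass first: only the CPU-shared magazine lists are drained. */
    for (slab_cache_t *c = cache_list; c != NULL; c = c->next)
        frames_freed += cache_reclaim(c, RECLAIM_LIGHT);

    if (frames_freed > 0)
        return frames_freed;

    /* Brutal pass: drop all cached objects, even from CPU-bound magazines. */
    for (slab_cache_t *c = cache_list; c != NULL; c = c->next)
        frames_freed += cache_reclaim(c, RECLAIM_BRUTAL);

    return frames_freed;
}
]]></programlisting>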

          <formalpara>
            <title>Allocation</title>

            <para><emphasis>Step 1.</emphasis> Upon an allocation request, the
            slab allocator first checks whether an object is available in the
            local CPU-bound magazine. If it is, the allocator simply pops the
            CPU magazine and returns the pointer to the object.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            empty, the allocator attempts to reload the magazine by swapping
            it with the second CPU magazine and returns to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now both CPU-bound magazines
            are empty, which makes the allocator access the shared depot of
            full magazines in order to reload the CPU-bound magazines. If the
            reload is successful (meaning there are full magazines in the
            depot), the algorithm continues at Step 1.</para>

            <para><emphasis>Step 4.</emphasis> This is the final step of the
            allocation. The object is allocated from the conventional slab
            layer and a pointer to it is returned.</para>
          </formalpara>
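
          <para>Put together, the four steps can be sketched as the following
          C fragment. All helper functions are hypothetical placeholders for
          the operations described in the steps above, and the loop simply
          re-runs Step 1 after a successful reload.</para>

          <programlisting><![CDATA[
#include <stdbool.h>

/* Hypothetical helpers; magazine_t and slab_cache_t are the structures
 * sketched earlier. */
typedef struct magazine magazine_t;
typedef struct slab_cache slab_cache_t;

extern magazine_t *current_cpu_magazine(slab_cache_t *cache);
extern void *magazine_obj_get(magazine_t *mag);
extern bool swap_cpu_magazines(slab_cache_t *cache);   /* true: new one non-empty */
extern bool reload_from_depot(slab_cache_t *cache);    /* true: full magazine installed */
extern void *slab_obj_create(slab_cache_t *cache);     /* conventional slab layer */

void *slab_alloc_sketch(slab_cache_t *cache)
{
    for (;;) {
        /* Step 1: try to pop an object from the active CPU-bound magazine. */
        void *obj = magazine_obj_get(current_cpu_magazine(cache));
        if (obj != NULL)
            return obj;

        /* Step 2: the active magazine is empty -- swap it with the second
         * CPU-bound magazine and retry if that one has objects. */
        if (swap_cpu_magazines(cache))
            continue;

        /* Step 3: both CPU-bound magazines are empty -- exchange one of
         * them for a full magazine from the shared depot, if any. */
        if (reload_from_depot(cache))
            continue;

        /* Step 4: fall back to the conventional slab layer. */
        return slab_obj_create(cache);
    }
}
]]></programlisting>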

          <formalpara>
            <title>Deallocation</title>

            <para><emphasis>Step 1.</emphasis> Upon a deallocation request,
            the slab allocator checks whether the local CPU-bound magazine is
            not full. If it is not, the pointer is simply pushed to this
            magazine.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            full, the allocator attempts to reload the magazine by swapping it
            with the second CPU magazine and returns to the first step.</para>

            <para><emphasis>Step 3.</emphasis> Now both CPU-bound magazines
            are full, which makes the allocator access the shared
            full-magazines depot, put one of the magazines into the depot and
            create a new empty magazine. The algorithm continues at Step
            1.</para>
          </formalpara>
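
          <para>The deallocation path mirrors the allocation path. The sketch
          below again uses hypothetical helpers;
          <code>swap_cpu_magazines()</code> is assumed here to return true
          when the newly active magazine still has room, and
          <code>hand_full_magazine_to_depot()</code> stands for Step 3.</para>

          <programlisting><![CDATA[
#include <stdbool.h>

/* Hypothetical helpers; magazine_t and slab_cache_t are the structures
 * sketched earlier. */
typedef struct magazine magazine_t;
typedef struct slab_cache slab_cache_t;

extern magazine_t *current_cpu_magazine(slab_cache_t *cache);
extern bool magazine_obj_put(magazine_t *mag, void *obj);
extern bool swap_cpu_magazines(slab_cache_t *cache);      /* true: new one not full */
extern void hand_full_magazine_to_depot(slab_cache_t *cache);

void slab_free_sketch(slab_cache_t *cache, void *obj)
{
    for (;;) {
        /* Step 1: push the object to the active CPU-bound magazine if it
         * still has room. */
        if (magazine_obj_put(current_cpu_magazine(cache), obj))
            return;

        /* Step 2: the active magazine is full -- swap it with the second
         * CPU-bound magazine and retry. */
        if (swap_cpu_magazines(cache))
            continue;

        /* Step 3: both CPU-bound magazines are full -- hand one of them
         * over to the shared depot, install a new empty magazine and
         * retry Step 1. */
        hand_full_magazine_to_depot(cache);
    }
}
]]></programlisting>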
        </section>
      </section>
    </section>

    <!-- End of Physmem -->
  </section>

  <section>
    <title>Virtual memory management</title>

    <section>
      <title>Introduction</title>
      <para>- page faults are used to allocate frames on demand within an
      as_area - on architectures that support it, non-executable pages are
      supported</para>
    </section>
  </section>
</chapter>