<?xml version="1.0" encoding="UTF-8"?>
<chapter id="mm">
  <?dbhtml filename="mm.html"?>

  <title>Memory management</title>

  <para>In previous chapters, this book described the scheduling subsystem as
  the creator of the impression that threads execute in parallel. The memory
  management subsystem, on the other hand, creates the impression that there
  is enough physical memory for the kernel and that userspace tasks have the
  entire address space only for themselves.</para>

  <section>
    <title>Physical memory management</title>

    <section id="zones_and_frames">
      <title>Zones and frames</title>

      <para>HelenOS represents continuous areas of physical memory in
      structures called frame zones (abbreviated as zones). Each zone contains
      information about the number of allocated and unallocated physical
      memory frames as well as the physical base address of the zone and the
      number of frames contained in it. A zone also contains an array of frame
      structures describing each frame of the zone and, last but not least,
      each zone is equipped with a buddy system that facilitates effective
      allocation of power-of-two sized blocks of frames.</para>

      <para>This organization of physical memory provides good preconditions
      for hot-plugging of more zones. There is also one currently unused zone
      attribute: <code>flags</code>. The attribute could be used to give a
      special meaning to some zones in the future.</para>

      <para>The zones are linked in a doubly-linked list. This might seem a
      bit ineffective because the zone list is walked every time a frame is
      allocated or deallocated. However, this does not represent a significant
      performance problem as it is expected that the number of zones will be
      rather low. Moreover, most architectures merge all zones into
      one.</para>

      <para>For each physical memory frame found in a zone, there is a frame
      structure that contains the number of references and data used by the
      buddy system.</para>
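
      <para>A minimal sketch of the zone and frame structures follows; the
      field names and types are illustrative assumptions and need not match
      the actual HelenOS declarations.</para>

      <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

typedef struct link {
    struct link *prev, *next;   /* doubly-linked list handle */
} link_t;

typedef struct buddy_system buddy_system_t;     /* opaque here */

/* Hypothetical per-frame bookkeeping. */
typedef struct {
    size_t refcount;        /* number of references to the frame */
    uint8_t buddy_order;    /* block order used by the buddy system */
    link_t buddy_link;      /* links free blocks of equal order */
} frame_t;

/* Hypothetical, simplified view of a frame zone. */
typedef struct {
    link_t link;            /* link in the doubly-linked list of zones */
    uintptr_t base;         /* physical base address of the zone */
    size_t count;           /* number of frames in the zone */
    size_t free_count;      /* number of unallocated frames */
    size_t busy_count;      /* number of allocated frames */
    int flags;              /* currently unused zone attribute */
    frame_t *frames;        /* frame structures, one per frame */
    buddy_system_t *buddy;  /* buddy system of the zone */
} zone_t;
]]></programlisting>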
    </section>

    <section id="frame_allocator">
      <title>Frame allocator</title>

      <para>The frame allocator satisfies kernel requests to allocate
      power-of-two sized blocks of physical memory. Because of the zonal
      organization of physical memory, the frame allocator is always working
      within the context of some frame zone. In order to carry out the
      allocation requests, the frame allocator is tightly integrated with the
      buddy system belonging to the zone. The frame allocator is also
      responsible for updating information about the number of free and busy
      frames in the zone. <figure>
          <mediaobject id="frame_alloc">
            <imageobject role="html">
              <imagedata fileref="images/frame_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/frame_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Frame allocator scheme.</title>
        </figure></para>

      <formalpara>
        <title>Allocation / deallocation</title>

        <para>Upon an allocation request via the function
        <code>frame_alloc</code>, the frame allocator first tries to find a
        zone that can satisfy the request (i.e. has the required amount of
        free frames). Once a suitable zone is found, the frame allocator uses
        the buddy allocator on the zone's buddy system to perform the
        allocation. During deallocation, which is triggered by a call to
        <code>frame_free</code>, the frame allocator looks up the respective
        zone that contains the frame being deallocated. Afterwards, it calls
        the buddy allocator again, this time to take care of deallocation
        within the zone's buddy system.</para>
      </formalpara>
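
      <para>The allocation path can be sketched as follows; this is a
      simplified illustration of the walk over the zone list, reusing the
      <code>zone_t</code> sketch above, with <code>zone_first()</code>,
      <code>zone_next()</code>, <code>buddy_alloc()</code> and
      <code>frame_addr()</code> as hypothetical helpers.</para>

      <programlisting><![CDATA[
/* Hypothetical sketch: allocate a block of 2^order frames. */
void *frame_alloc(uint8_t order)
{
    /* The zone list is walked on every allocation. */
    for (zone_t *z = zone_first(); z != NULL; z = zone_next(z)) {
        if (z->free_count < ((size_t) 1 << order))
            continue;   /* not enough free frames in this zone */

        /* Delegate the actual work to the zone's buddy system. */
        frame_t *frame = buddy_alloc(z->buddy, order);
        if (frame != NULL) {
            z->free_count -= (size_t) 1 << order;
            z->busy_count += (size_t) 1 << order;
            return frame_addr(z, frame);
        }
    }
    return NULL;    /* no zone can satisfy the request */
}
]]></programlisting>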
    </section>

    <section id="buddy_allocator">
      <title>Buddy allocator</title>

      <para>In the buddy system, the memory is broken down into power-of-two
      sized naturally aligned blocks. These blocks are organized in an array
      of lists, in which the list with index i contains all unallocated blocks
      of size <mathphrase>2<superscript>i</superscript></mathphrase>. The
      index i is called the order of the block. Should there be two adjacent
      equally sized blocks in the list i (i.e. buddies), the buddy allocator
      would coalesce them and put the resulting block in the list
      <mathphrase>i + 1</mathphrase>, provided that the resulting block would
      be naturally aligned. Similarly, when the allocator is asked to allocate
      a block of size <mathphrase>2<superscript>i</superscript></mathphrase>,
      it first tries to satisfy the request from the list with index i. If the
      request cannot be satisfied (i.e. the list i is empty), the buddy
      allocator will try to allocate and split a larger block from the list
      with index i + 1. Both of these algorithms are recursive. The recursion
      ends either when there are no blocks to coalesce in the former case or
      when there are no blocks that can be split in the latter case.</para>
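
      <para>Because the blocks are naturally aligned, buddies and parent
      blocks can be computed with simple bit operations on frame indices, as
      the following sketch illustrates.</para>

      <programlisting><![CDATA[
#include <stddef.h>
#include <stdint.h>

/* Index of the buddy of a block of the given order starting at 'index'.
 * Flipping bit 'order' toggles between the two halves of the naturally
 * aligned parent block of size 2^(order + 1). */
size_t buddy_index(size_t index, uint8_t order)
{
    return index ^ ((size_t) 1 << order);
}

/* Starting index of the coalesced parent block: clearing bit 'order'
 * aligns the index down to the 2^(order + 1) boundary. */
size_t parent_index(size_t index, uint8_t order)
{
    return index & ~((size_t) 1 << order);
}
]]></programlisting>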

      <para>This approach greatly reduces external fragmentation of memory and
      helps in allocating bigger continuous blocks of memory aligned to their
      size. On the other hand, the buddy allocator suffers increased internal
      fragmentation of memory and is not suitable for general kernel
      allocations. This purpose is better addressed by the <link
      linkend="slab">slab allocator</link>.<figure>
          <mediaobject id="buddy_alloc">
            <imageobject role="html">
              <imagedata fileref="images/buddy_alloc.png" format="PNG" />
            </imageobject>

            <imageobject role="fop">
              <imagedata fileref="images.vector/buddy_alloc.svg" format="SVG" />
            </imageobject>
          </mediaobject>

          <title>Buddy system scheme.</title>
        </figure></para>

      <section>
        <title>Implementation</title>

        <para>The buddy allocator is, in fact, an abstract framework which can
        be easily specialized to serve one particular task. It knows nothing
        about the nature of the memory it helps to allocate. In order to beat
        the lack of this knowledge, the buddy allocator exports an interface
        that each of its clients is required to implement. When supplied with
        an implementation of this interface, the buddy allocator can use
        specialized external functions to find a buddy for a block, split and
        coalesce blocks, manipulate block order and mark blocks busy or
        available.</para>
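
        <para>A sketch of such a client interface is shown below; the
        operation names and signatures are illustrative rather than the exact
        HelenOS identifiers, and <code>link_t</code> is the list handle from
        the earlier zone sketch.</para>

        <programlisting><![CDATA[
/* Hypothetical client interface of the abstract buddy framework. */
typedef struct buddy_system buddy_system_t;

typedef struct {
    /* Find the buddy of a block, or return NULL if it is not free. */
    link_t *(*find_buddy)(buddy_system_t *b, link_t *block);
    /* Split a block and return its second half. */
    link_t *(*bisect)(buddy_system_t *b, link_t *block);
    /* Coalesce two buddies into a single block of higher order. */
    link_t *(*coalesce)(buddy_system_t *b, link_t *block, link_t *buddy);
    /* Manipulate the block order. */
    uint8_t (*get_order)(buddy_system_t *b, link_t *block);
    void (*set_order)(buddy_system_t *b, link_t *block, uint8_t order);
    /* Mark a block busy or available. */
    void (*mark_busy)(buddy_system_t *b, link_t *block);
    void (*mark_available)(buddy_system_t *b, link_t *block);
} buddy_system_operations_t;
]]></programlisting>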

        <formalpara>
          <title>Data organization</title>

          <para>Each entity allocable by the buddy allocator is required to
          contain space for storing the block order number and a link variable
          used to interconnect blocks within the same order.</para>

          <para>Whatever entities are allocated by the buddy allocator, the
          first entity within a block is used to represent the entire block.
          The first entity keeps the order of the whole block. Other entities
          within the block are assigned the magic value
          <constant>BUDDY_INNER_BLOCK</constant>. This is especially important
          for effective identification of buddies in a one-dimensional array
          because the entity that represents a potential buddy cannot be
          associated with <constant>BUDDY_INNER_BLOCK</constant> (i.e. if it
          is associated with <constant>BUDDY_INNER_BLOCK</constant> then it is
          not a buddy).</para>
        </formalpara>
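
        <para>A hedged sketch of the buddy test this convention enables
        follows, reusing the <code>frame_t</code> and
        <code>buddy_index()</code> sketches above and assuming that
        <constant>BUDDY_INNER_BLOCK</constant> is a magic value stored in the
        order field.</para>

        <programlisting><![CDATA[
/* Hypothetical buddy check for the frame-zone client. */
bool is_buddy(zone_t *zone, size_t index, uint8_t order)
{
    frame_t *candidate = &zone->frames[buddy_index(index, order)];

    /* An entity marked BUDDY_INNER_BLOCK lies inside a larger block
     * and therefore cannot be a buddy. */
    if (candidate->buddy_order == BUDDY_INNER_BLOCK)
        return false;

    /* A buddy must be a free block of the same order. */
    return candidate->buddy_order == order && candidate->refcount == 0;
}
]]></programlisting>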
      </section>
    </section>

    <section id="slab">
      <title>Slab allocator</title>

      <para>The majority of memory allocation requests in the kernel are for
      small, frequently used data structures. The basic idea behind the slab
      allocator is that commonly used objects are preallocated in continuous
      areas of physical memory called slabs<footnote>
          <para>Slabs are in fact blocks of physical memory frames allocated
          from the frame allocator.</para>
        </footnote>. Whenever an object is to be allocated, the slab allocator
      returns the first available item from a suitable slab corresponding to
      the object type<footnote>
          <para>The mechanism is rather more complicated, see the next
          paragraph.</para>
        </footnote>. Due to the fact that the sizes of the requested and
      allocated object match, the slab allocator significantly reduces
      internal fragmentation.</para>

      <para>Slabs of one object type are organized in a structure called a
      slab cache. There are usually more slabs in the slab cache, depending on
      previous allocations. If the slab cache runs out of available slabs, new
      slabs are allocated. In order to exploit parallelism and to avoid
      locking of shared spinlocks, slab caches can have variants of
      processor-private slabs called magazines. On each processor, there is a
      two-magazine cache. Full magazines that are not part of any
      per-processor magazine cache are stored in a global list of full
      magazines.</para>
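
      <para>A hedged usage sketch of such a cache follows; the function names
      and signatures are illustrative assumptions, not the exact HelenOS
      API.</para>

      <programlisting><![CDATA[
/* Hypothetical object type frequently allocated by the kernel. */
typedef struct {
    int state;
    /* ... more fields ... */
} example_t;

static slab_cache_t *example_cache;

/* Constructor registered in the cache: brings an object coming fresh
 * from a slab into a known state before its first use. */
static int example_ctor(void *obj)
{
    ((example_t *) obj)->state = 0;
    return 0;
}

void example_init(void)
{
    /* One cache per object type; its slabs are carved into
     * sizeof(example_t) sized items. */
    example_cache = slab_cache_create("example_t", sizeof(example_t),
        example_ctor, NULL);
}

example_t *example_new(void)
{
    /* Satisfied from a per-CPU magazine when possible; objects coming
     * from a magazine are already constructed. */
    return slab_alloc(example_cache);
}
]]></programlisting>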

      <para>Each object begins its life in a slab. When it is allocated from
      there, the slab allocator calls a constructor that is registered in the
      respective slab cache. The constructor initializes and brings the object
      into a known state. The object is then used by the user. When the user
      later frees the object, the slab allocator puts it into a processor
      private magazine cache, from where it can be readily allocated again.
      Note that allocations satisfied from a magazine are already initialized
      by the constructor. When both of the processor cached magazines get
      full, the allocator will move one of the magazines to the list of full
      magazines. Similarly, when allocating from an empty processor magazine
      cache, the kernel will reload only one magazine from the list of full
      magazines. In other words, the slab allocator tries to keep the
      processor magazine cache only half-full in order to prevent thrashing
      when allocations and deallocations interleave on magazine
      boundaries.</para>

      <para>Should HelenOS run short of memory, it would start deallocating
      objects from magazines, calling the slab cache destructor on them and
      putting them back into slabs. When a slab contains no allocated object,
      it is immediately freed.</para>

      <para><figure>
          <mediaobject id="slab_alloc">
            <imageobject role="html">
              <imagedata fileref="images/slab_alloc.png" format="PNG" />
            </imageobject>
          </mediaobject>

          <title>Slab allocator scheme.</title>
        </figure></para>

      <section>
        <title>Implementation</title>

        <para>The slab allocator is closely modelled after the OpenSolaris
        slab allocator by Jeff Bonwick and Jonathan Adams with the following
        exceptions:<itemizedlist>
            <listitem>
              <para>empty slabs are immediately deallocated,</para>
            </listitem>

            <listitem>
              <para>empty magazines are deallocated when not needed.</para>
            </listitem>
          </itemizedlist> The following features are not currently supported
        but would be easy to add: <itemizedlist>
            <listitem>
              <para>cache coloring,</para>
            </listitem>

            <listitem>
              <para>dynamic magazine growing (different magazine sizes are
              already supported, but the allocation strategy would need to be
              adjusted).</para>
            </listitem>
          </itemizedlist></para>

        <section>
          <title>Magazine layer</title>

          <para>Due to the extensive bottleneck on SMP architectures caused by
          the global slab locking mechanism, which makes the processing of all
          slab allocation requests serialized, a new layer was introduced into
          the classic slab allocator design. The slab allocator was extended
          to support per-CPU caches called magazines to achieve good SMP
          scaling. <termdef>The slab SMP performance bottleneck was resolved
          by introducing a per-CPU caching scheme called the
          <glossterm>magazine layer</glossterm></termdef>.</para>

          <para>A magazine is an N-element cache of objects, so each magazine
          can satisfy N allocations. A magazine behaves like an automatic
          weapon magazine (LIFO, stack), so allocation and deallocation become
          simple push and pop pointer operations. The trick is that the CPU
          does not access global slab allocator data during the allocation
          from its magazine, thus making parallel allocations between CPUs
          possible.</para>
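
          <para>A minimal sketch of a magazine as a LIFO stack follows; the
          <code>magazine_t</code> layout is an illustrative assumption.</para>

          <programlisting><![CDATA[
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical N-element magazine: a stack of object pointers. */
typedef struct {
    size_t busy;    /* number of objects currently cached */
    size_t size;    /* capacity N of the magazine */
    void *objs[];   /* cached object pointers */
} magazine_t;

/* Allocation from a magazine is a pop; no global slab allocator data
 * is touched, so CPUs can allocate in parallel. */
void *magazine_pop(magazine_t *mag)
{
    if (mag->busy == 0)
        return NULL;    /* empty: the caller must reload the magazine */
    return mag->objs[--mag->busy];
}

/* Deallocation into a magazine is a push. */
bool magazine_push(magazine_t *mag, void *obj)
{
    if (mag->busy == mag->size)
        return false;   /* full: the caller must swap or deposit it */
    mag->objs[mag->busy++] = obj;
    return true;
}
]]></programlisting>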

          <para>The implementation also requires adding another feature: the
          CPU-bound magazine is actually a pair of magazines, to avoid
          thrashing during allocation/deallocation of one item at the magazine
          size boundary. LIFO order is enforced, which should avoid
          fragmentation as much as possible.</para>

          <para>Another important entity of the magazine layer is the common
          full magazine list (also called a depot), which stores full
          magazines that may be used by any of the CPU magazine caches to
          reload the active CPU magazine. This list of magazines can be
          pre-filled with full magazines during initialization, but in the
          current implementation it is filled during object deallocation, when
          a CPU magazine becomes full.</para>

          <para>Slab allocator control structures are allocated from special
          slabs that are marked by a special flag, indicating that they should
          not be used for the slab magazine layer. This is done to avoid
          possible infinite recursion and deadlock during conventional slab
          allocation requests.</para>
        </section>

        <section>
          <title>Allocation/deallocation</title>

          <para>Every cache contains a list of full slabs and a list of
          partially full slabs. Empty slabs are immediately freed (thrashing
          will be avoided because of the magazines).</para>

          <para>The slab allocator allocates a lot of space and does not free
          it. When the frame allocator fails to allocate a frame, it calls
          <code>slab_reclaim()</code>. It tries 'light reclaim' first, then
          brutal reclaim. The light reclaim releases slabs from the CPU-shared
          magazine list, until at least one slab is deallocated in each cache
          (this algorithm should probably change). The brutal reclaim removes
          all cached objects, even from CPU-bound magazines.</para>

          <formalpara>
            <title>Allocation</title>

            <para><emphasis>Step 1.</emphasis> When an allocation request
            comes, the slab allocator first of all checks the availability of
            memory in the local CPU-bound magazine. If the memory is there, it
            just "pops" the CPU magazine and returns the pointer to the
            object.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            empty, the allocator will attempt to reload the magazine, swapping
            it with the second CPU magazine, and will return to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now we are in the situation
            when both CPU-bound magazines are empty, which makes the allocator
            access the shared full-magazines depot to reload the CPU-bound
            magazines. If the reload is successful (meaning there are full
            magazines in the depot), the algorithm continues at Step 1.</para>

            <para><emphasis>Step 4.</emphasis> In the final step of the
            allocation, the object is allocated from the conventional slab
            layer and a pointer is returned.</para>
          </formalpara>
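
          <para>The four allocation steps map onto a short control loop. The
          following is a hedged sketch of the logic only; the
          <code>cpu_magazines_t</code> layout, the <code>depot_*</code> and
          <code>magazine_*</code> helpers and the omitted locking are
          illustrative assumptions.</para>

          <programlisting><![CDATA[
/* Hypothetical sketch of the four-step magazine allocation path. */
void *slab_alloc(slab_cache_t *cache)
{
    /* Pair of CPU-bound magazines of the executing CPU. */
    cpu_magazines_t *mags = &cache->mag_cache[current_cpu_id()];

    while (true) {
        /* Step 1: try to pop from the current CPU-bound magazine. */
        void *obj = magazine_pop(mags->current);
        if (obj != NULL)
            return obj;

        /* Step 2: current magazine is empty; swap with the second one. */
        if (!magazine_empty(mags->last)) {
            magazine_swap(&mags->current, &mags->last);
            continue;
        }

        /* Step 3: both are empty; reload one magazine from the depot. */
        magazine_t *full = depot_get_full(cache);
        if (full != NULL) {
            depot_put_empty(cache, mags->current);
            mags->current = full;
            continue;
        }

        /* Step 4: fall back to the conventional slab layer. */
        return slab_layer_alloc(cache);
    }
}
]]></programlisting>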

          <formalpara>
            <title>Deallocation</title>

            <para><emphasis>Step 1.</emphasis> During a deallocation request,
            the slab allocator checks whether the local CPU-bound magazine is
            not full. If it is not, the pointer is simply pushed into the
            magazine.</para>

            <para><emphasis>Step 2.</emphasis> If the CPU-bound magazine is
            full, the allocator will attempt to reload the magazine, swapping
            it with the second CPU magazine, and will return to the first
            step.</para>

            <para><emphasis>Step 3.</emphasis> Now we are in the situation
            when both CPU-bound magazines are full, which makes the allocator
            access the shared full-magazines depot to put one of the magazines
            into the depot and create a new empty magazine. The algorithm
            continues at Step 1.</para>
          </formalpara>
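
          <para>Deallocation mirrors the allocation path; a sketch under the
          same assumptions as above:</para>

          <programlisting><![CDATA[
/* Hypothetical sketch of the three-step magazine deallocation path. */
void slab_free(slab_cache_t *cache, void *obj)
{
    cpu_magazines_t *mags = &cache->mag_cache[current_cpu_id()];

    while (true) {
        /* Step 1: push into the current magazine if it is not full. */
        if (magazine_push(mags->current, obj))
            return;

        /* Step 2: current magazine is full; swap with the second one. */
        if (!magazine_full(mags->last)) {
            magazine_swap(&mags->current, &mags->last);
            continue;
        }

        /* Step 3: both are full; deposit one full magazine and
         * continue with a fresh empty one. */
        depot_put_full(cache, mags->current);
        mags->current = magazine_create(cache);
    }
}
]]></programlisting>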
        </section>
      </section>
    </section>

    <!-- End of Physmem -->
  </section>

  <section>
    <title>Virtual memory management</title>

    <section>
      <title>Introduction</title>

      <para>Virtual memory is a special memory management technique used by
      the kernel to achieve several mission critical goals. <itemizedlist>
          <listitem>
            <para>Isolate each task from other tasks that are running on the
            system at the same time.</para>
          </listitem>

          <listitem>
            <para>Allow allocation of more memory than the actual physical
            memory size of the machine.</para>
          </listitem>

          <listitem>
            <para>Allow, in general, loading and executing two programs that
            are linked to the same address without complicated
            relocations.</para>
          </listitem>
        </itemizedlist></para>

      <para><!--
                <para>
                        Address spaces. Address space area (B+ tree). Only for uspace. Set of syscalls (shrink/extend etc).
                        Special address space area type - device - prohibits shrink/extend syscalls to call on it.
                        Address space has link to mapping tables (hierarchical - per Address space, hash - global tables).
                </para>

--></para>
    </section>

    <section>
      <title>Address spaces</title>

      <section>
        <title>Address space areas</title>

        <para>Each address space consists of mutually disjunctive continuous
        address space areas. An address space area is precisely defined by its
        base address and the number of frames/pages it contains.</para>

        <para>Each address space area has a set of flags that define behaviour
        and permissions on the particular area (a sketch of the flag values
        follows the list). <itemizedlist>
            <listitem>
              <para>The <emphasis>AS_AREA_READ</emphasis> flag indicates
              reading permission.</para>
            </listitem>

            <listitem>
              <para>The <emphasis>AS_AREA_WRITE</emphasis> flag indicates
              writing permission.</para>
            </listitem>

            <listitem>
              <para>The <emphasis>AS_AREA_EXEC</emphasis> flag indicates code
              execution permission. Some architectures do not support
              execution permission restriction. In this case this flag has no
              effect.</para>
            </listitem>

            <listitem>
              <para>The <emphasis>AS_AREA_DEVICE</emphasis> flag marks the
              area as mapped to device memory.</para>
            </listitem>
          </itemizedlist></para>
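
        <para>As a sketch, the flags can be thought of as a bitmask; the
        values below are illustrative, not the actual HelenOS ones.</para>

        <programlisting><![CDATA[
/* Hypothetical flag values for address space areas. */
#define AS_AREA_READ    (1 << 0)    /* reading permitted */
#define AS_AREA_WRITE   (1 << 1)    /* writing permitted */
#define AS_AREA_EXEC    (1 << 2)    /* execution permitted, if supported */
#define AS_AREA_DEVICE  (1 << 3)    /* mapped to device memory */

/* Example: a writable, non-executable data area. */
int data_area_flags = AS_AREA_READ | AS_AREA_WRITE;
]]></programlisting>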

        <para>The kernel provides the possibility for tasks to create, expand,
        shrink and share their address space via a set of syscalls.</para>
      </section>

      <section>
        <title>Address Space ID (ASID)</title>

        <para>When switching to a different task, the kernel also needs to
        switch mappings to a different address space. In case the TLB cannot
        distinguish address space mappings, all mapping information in the TLB
        from the old address space must be flushed, which can create certain
        unnecessary overhead during task switching. To avoid this, some
        architectures have the capability to segregate different address
        spaces on the hardware level by introducing an address space
        identifier as a part of the TLB record, telling the virtual address
        translation unit to which address space this record is
        applicable.</para>

        <para>The HelenOS kernel can take advantage of this hardware-supported
        identifier by having an ASID abstraction which is somehow related to
        the corresponding architecture identifier. I.e. on ia64 the kernel
        ASID is derived from the RID (region identifier) and on mips32 the
        kernel ASID is actually the hardware identifier. As expected, this
        ASID information record is part of the <emphasis>as_t</emphasis>
        structure.</para>

        <para>Due to hardware limitations, the hardware ASID has limited
        length, from 8 bits on mips32 to 24 bits on ia64, which makes it
        impossible to use it as a unique address space identifier for all
        tasks running in the system. In such situations a special ASID
        stealing algorithm is used, which takes the ASID from an inactive task
        and assigns it to the active task.</para>

        <para><classname>ASID stealing algorithm here.</classname></para>
      </section>
    </section>

    <section>
      <title>Virtual address translation</title>

      <section id="paging">
        <title>Paging</title>

        <section>
          <title>Introduction</title>

          <para>Virtual memory usually uses a paged memory model, where the
          virtual memory address space is divided into
          <emphasis>pages</emphasis> (usually having a size of 4096 bytes) and
          physical memory is divided into frames (same sized as a page, of
          course). Each page may be mapped to some frame and then, upon memory
          access to the virtual address, the CPU performs <emphasis>address
          translation</emphasis> during the instruction execution. A
          non-existing mapping generates a page fault exception, calling the
          kernel exception handler, thus allowing the kernel to manipulate the
          rules of memory access. Information for page mapping is stored by
          the kernel in the <link linkend="paging">page tables</link>.</para>

          <para>The majority of the architectures use multi-level page tables,
          which means the need to access physical memory several times before
          getting the physical address. This fact would cause serious
          performance overhead in virtual memory management. To avoid this, a
          <link linkend="tlb">Translation Lookaside Buffer (TLB)</link> is
          used.</para>

          <para>The HelenOS kernel has two different approaches to the paging
          implementation: <emphasis>4-level page tables</emphasis> and a
          <emphasis>global hash table</emphasis>, which are accessible via a
          generic paging abstraction layer. Such different functionality was
          caused by the major architectural differences between supported
          platforms. This abstraction is implemented with the help of the
          global structure of pointers to basic mapping functions,
          <emphasis>page_mapping_operations</emphasis>. To achieve different
          functionality of page tables, the corresponding layer must implement
          the functions declared in
          <emphasis>page_mapping_operations</emphasis>.</para>
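
          <para>A hedged sketch of such an operations structure follows; the
          member names and signatures are illustrative and need not match the
          actual HelenOS declaration.</para>

          <programlisting><![CDATA[
/* Hypothetical generic paging abstraction: each page table
 * implementation supplies its own basic mapping functions. */
typedef struct {
    /* Map a page in the given address space to a physical frame. */
    void (*mapping_insert)(as_t *as, uintptr_t page, uintptr_t frame,
        int flags);
    /* Undo the mapping of a page. */
    void (*mapping_remove)(as_t *as, uintptr_t page);
    /* Find the page table entry mapping the page, if any. */
    pte_t *(*mapping_find)(as_t *as, uintptr_t page);
} page_mapping_operations_t;

/* Selected once at boot: e.g. the hierarchical 4-level implementation
 * or the global hash table implementation. */
extern page_mapping_operations_t *page_mapping_operations;
]]></programlisting>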

          <para>Thanks to the abstract paging interface, there was a place
          left for more paging implementations (besides the already
          implemented hierarchical page tables and hash table), for example
          B-Tree based page tables.</para>
        </section>

        <section>
          <title>Hierarchical 4-level page tables</title>

          <para>Hierarchical 4-level page tables are the generalization of the
          hardware capabilities of most architectures. Each address space has
          its own page tables (a lookup sketch follows the
          list).<itemizedlist>
              <listitem>
                <para>ia32 uses 2-level page tables, with full hardware
                support.</para>
              </listitem>

              <listitem>
                <para>amd64 uses 4-level page tables, also coming with full
                hardware support.</para>
              </listitem>

              <listitem>
                <para>mips and ppc32 have 2-level tables, with software
                simulated support.</para>
              </listitem>
            </itemizedlist></para>
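
          <para>On architectures with fewer hardware levels, the generic
          4-level lookup still applies with the extra levels collapsed. A
          hedged sketch of the translation walk follows; the
          <code>PTLn_INDEX</code> macros and the <code>follow()</code> helper
          are illustrative assumptions.</para>

          <programlisting><![CDATA[
/* Hypothetical sketch of a walk through 4-level page tables. */
pte_t *pt_mapping_find(as_t *as, uintptr_t page)
{
    /* Each PTLn_INDEX macro extracts the bits of the virtual address
     * that index level n of the page table tree. */
    uintptr_t *ptl0 = as->page_table;

    uintptr_t *ptl1 = follow(ptl0[PTL0_INDEX(page)]);
    if (ptl1 == NULL)
        return NULL;

    uintptr_t *ptl2 = follow(ptl1[PTL1_INDEX(page)]);
    if (ptl2 == NULL)
        return NULL;

    uintptr_t *ptl3 = follow(ptl2[PTL2_INDEX(page)]);
    if (ptl3 == NULL)
        return NULL;

    return (pte_t *) &ptl3[PTL3_INDEX(page)];
}
]]></programlisting>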
        </section>

        <section>
          <title>Global hash table</title>

          <para>Implementation of the global hash table was encouraged by the
          ia64 architecture support. One of the major differences between the
          global hash table and hierarchical tables is that the global hash
          table exists only once in the system while the hierarchical tables
          are maintained per address space.</para>

          <para>Thus, the hash table contains information about all address
          space mappings in the system, so the hash of an entry must contain
          information about both the address space pointer or id and the
          virtual address of the page. The generic hash table implementation
          assumes that the addresses of the pointers to the address spaces are
          likely to be close to each other, so it uses their least significant
          bits for the hash; it also assumes that the virtual page addresses
          have roughly the same probability of occurring, so the least
          significant bits of the VPN compose the hash index.</para>
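
          <para>A hedged sketch of such a hash function, assuming a
          power-of-two number of buckets and illustrative
          <code>PAGE_WIDTH</code> and <code>BUCKETS</code> constants:</para>

          <programlisting><![CDATA[
/* Hypothetical hash of an (address space, page) pair. */
size_t page_hash(as_t *as, uintptr_t page)
{
    /* Combine the least significant bits of the address space pointer
     * with the least significant bits of the virtual page number. */
    uintptr_t vpn = page >> PAGE_WIDTH;
    return (size_t) (((uintptr_t) as ^ vpn) & (BUCKETS - 1));
}
]]></programlisting>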

          <para>Collision chains ...</para>
        </section>
      </section>

      <section id="tlb">
        <title>Translation Lookaside buffer</title>

        <para>Due to the extensive overhead of the page mapping lookup in the
        page tables, all architectures have a fast associative cache memory
        built into the CPU. This memory, called the TLB, stores recently used
        page table entries.</para>

        <section id="tlb_shootdown">
          <title>TLB consistency. TLB shootdown algorithm.</title>

          <para>The operating system is responsible for keeping the TLB
          consistent by invalidating its contents whenever there is some
          change in the page tables. Those changes may occur when a page or a
          group of pages is unmapped, a mapping is changed, or the system
          switches the active address space to schedule a new system task.
          Moreover, this invalidation operation must be done on all system
          CPUs because each CPU has its own independent TLB cache. Thus
          maintaining TLB consistency in an SMP configuration is not as
          trivial a task as it looks at first glance. A naive solution would
          assume that the CPU which wants to invalidate the TLB will
          invalidate the TLB caches of other CPUs. This is not possible on
          most architectures because of a simple fact: flushing the TLB is
          allowed only on the local CPU and there is no possibility to access
          other CPUs' TLB caches and thus invalidate the TLB remotely.</para>

          <para>The technique of remote invalidation of TLB entries is called
          "TLB shootdown". HelenOS uses a variation of the algorithm described
          by D. Black et al., "Translation Lookaside Buffer Consistency: A
          Software Approach," Proc. Third Int'l Conf. Architectural Support
          for Programming Languages and Operating Systems, 1989, pp.
          113-122.</para>

          <para>As the situation demands, partial invalidation of the TLB
          caches may be wanted. In case of a simple memory mapping change, it
          is necessary to invalidate only one or more adjacent pages. In case
          the architecture is aware of ASIDs, when the kernel needs to
          reassign some ASID for use by another task, it invalidates only
          entries from this particular address space. The final option is the
          complete TLB cache invalidation, which is the operation that flushes
          all entries in the TLB.</para>

          <para>TLB shootdown is performed in two phases.</para>

          <formalpara>
            <title>Phase 1.</title>

            <para>First, the initiator locks the global TLB spinlock, then the
            request is put into the local request cache of every other CPU in
            the system, each protected by its own spinlock. In case a cache is
            full, all requests in the cache are replaced by one request
            indicating a global TLB flush. Then the initiator thread sends an
            IPI message indicating the TLB shootdown request to the rest of
            the CPUs and waits actively until all CPUs confirm the TLB
            invalidating action execution by setting up a special flag. After
            setting this flag, each such thread blocks on the TLB spinlock,
            held by the initiator.</para>
          </formalpara>

          <formalpara>
            <title>Phase 2.</title>

            <para>All CPUs are waiting on the TLB spinlock to execute the TLB
            invalidation action and have indicated their intention to the
            initiator. The initiator continues, cleaning up its TLB and
            releasing the global TLB spinlock. After this, all other CPUs
            acquire and immediately release the TLB spinlock and perform their
            TLB invalidation actions.</para>
          </formalpara>
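
          <para>The two phases can be sketched as follows; all identifiers are
          illustrative assumptions and the request cache handling is
          simplified.</para>

          <programlisting><![CDATA[
/* Hypothetical sketch of the two-phase TLB shootdown. */
void tlb_shootdown(shootdown_request_t req)
{
    /* Phase 1: the initiator holds the global TLB spinlock. */
    spinlock_lock(&tlb_lock);
    for (unsigned int i = 0; i < cpu_count; i++) {
        if (i == current_cpu_id())
            continue;
        /* Enqueue into the per-CPU request cache; on overflow the
         * cache collapses into one "flush everything" request. */
        request_cache_put(&cpus[i], req);
    }
    ipi_broadcast(IPI_TLB_SHOOTDOWN);
    while (!all_cpus_acknowledged())
        ;   /* active wait; the other CPUs then block on tlb_lock */

    /* Phase 2: invalidate locally and release the lock; the other
     * CPUs acquire and immediately release it, then invalidate. */
    tlb_invalidate_local();
    spinlock_unlock(&tlb_lock);
}
]]></programlisting>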
        </section>
      </section>
    </section>
  </section>
</chapter>