Subversion Repositories HelenOS-doc

Compare Revisions

Rev 168 → Rev 169

/design/trunk/src/ch_ipc.xml
314,7 → 314,7
<title>The Interface</title>
 
<para>The interface was developed to be as simple to use as possible.
- Classical applications simply send messages and occasionally wait for an
+ Typical applications simply send messages and occasionally wait for an
answer and check results. If the number of sent messages is higher than
the kernel limit, the flow of application is stopped until some answers
arrive. On the other hand, server applications are expected to work in a
/design/trunk/src/ap_arch.xml
119,8 → 119,8
<title>Intel IA-32</title>
 
<para>The ia32 architecture uses 4K pages and processor supported 2-level
- page tables. Along with amd64 It is one of the 2 architectures that fully
- supports SMP configurations. The architecture is mostly similar to amd64,
+ page tables. Along with amd64, it is one of the two architectures that fully
+ support SMP configurations. The architecture is mostly similar to amd64,
it even shares a lot of code. The debugging support is the same as with
amd64. The thread local storage uses GS register.</para>
</section>
/design/trunk/src/ch_memory_management.xml
143,7 → 143,7
<section>
<title>Implementation</title>
 
- <para>The buddy allocator is, in fact, an abstract framework wich can
+ <para>The buddy allocator is, in fact, an abstract framework which can
be easily specialized to serve one particular task. It knows nothing
about the nature of memory it helps to allocate. In order to beat the
lack of this knowledge, the buddy allocator exports an interface that
365,7 → 365,7
virtual memory: segmentation and paging. Even though some processor
architectures supported by HelenOS<footnote>
<para>ia32 has full-fledged segmentation.</para>
- </footnote> provide both mechanism, the kernel makes use solely of
+ </footnote> provide both mechanisms, the kernel makes use solely of
paging.</para>
 
<section id="paging">
375,7 → 375,7
divided into small power-of-two sized naturally aligned blocks called
pages. The processor implements a translation mechanism, that allows the
operating system to manage mappings between set of pages and set of
- indentically sized and identically aligned pieces of physical memory
+ identically sized and identically aligned pieces of physical memory
called frames. In a result, references to continuous virtual memory
areas don't necessarily need to reference continuos area of physical
memory. Supported page sizes usually range from several kilobytes to
462,7 → 462,7
table is called PTL0, the two middle levels are called PTL1 and PTL2,
and, finally, the leaf level is called PTL3. All architectures using
this mechanism are required to use PTL0 and PTL3. However, the middle
- levels can be left out, depending on the hardware hierachy or
+ levels can be left out, depending on the hardware hierarchy or
structure of software-only page tables. The genericity is achieved
through a set of macros that define transitions from one level to
another. Unused levels are optimised out by the compiler.
786,7 → 786,7
taken from the mips32 terminology, is used to refer to the address space
identification number. The advantage of having ASIDs is that TLB does
not have to be invalidated on thread context switch as long as ASIDs are
- unique. Unfortunatelly, architectures supported by HelenOS use all
+ unique. Unfortunately, architectures supported by HelenOS use all
different widths of ASID numbers<footnote>
<para>amd64 and ia32 don't use similar abstraction at all, mips32
has 8-bit ASIDs and ia64 can have ASIDs between 18 to 24 bits
/design/trunk/src/ch_scheduling.xml
7,7 → 7,7
<para>One of the key aims of the operating system is to create and support
the impression that several activities are executing contemporarily. This is
true for both uniprocessor as well as multiprocessor systems. In the case of
- multiprocessor systems, the activities are trully happening in parallel. The
+ multiprocessor systems, the activities are truly happening in parallel. The
scheduler helps to materialize this impression by planning threads on as
many processors as possible and, when this strategy reaches its limits, by
quickly switching among threads executing on a single processor.</para>