Rev | Age | Author | Path | Log message
4490 | 5646 d 18 h | decky | /trunk/kernel/ | remove redundant index_t and count_t types (which were always quite ambiguous and not actually needed)
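
Such a cleanup usually amounts to replacing project-specific integer aliases with standard C types whose meaning is unambiguous. A minimal sketch of the pattern; the typedef definitions below are illustrative guesses, not the original HelenOS ones.

    #include <stddef.h>              /* size_t */

    /* Before: ambiguous project-specific aliases (illustrative
     * definitions, not the originals). */
    typedef unsigned long index_t;
    typedef unsigned long count_t;

    /* After: standard types carry the meaning directly. */
    size_t frame_index;              /* an index into a table of frames */
    size_t frame_count;              /* a number of frames              */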
|
4174 | 5718 d 11 h | decky | /trunk/kernel/generic/ | add malloc slab caches for up to 4 MB blocks
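
The entry above backs malloc with slab caches for blocks up to 4 MB. The usual scheme keeps one cache per power-of-two size class and routes each request to the smallest class that fits; the sketch below shows only that class selection, and the order bounds and helper name are assumptions rather than the HelenOS implementation.

    #include <stddef.h>
    #include <stdio.h>

    #define MALLOC_MIN_ORDER  4    /* smallest class: 16 B (assumed)      */
    #define MALLOC_MAX_ORDER  22   /* largest class:  4 MiB (per the log) */

    /* Pick the power-of-two size class (cache index) serving `size` bytes,
     * or -1 when the request exceeds the largest (4 MiB) class. */
    static int malloc_cache_index(size_t size)
    {
        for (int order = MALLOC_MIN_ORDER; order <= MALLOC_MAX_ORDER; order++) {
            if (size <= ((size_t) 1 << order))
                return order - MALLOC_MIN_ORDER;
        }
        return -1;
    }

    int main(void)
    {
        printf("100 B -> class %d\n", malloc_cache_index(100));        /* 128 B cache */
        printf("3 MiB -> class %d\n", malloc_cache_index(3u << 20));   /* 4 MiB cache */
        printf("8 MiB -> class %d\n", malloc_cache_index(8u << 20));   /* -1: too big */
        return 0;
    }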
|
3973 | 5743 d 2 h | decky | /trunk/kernel/ | kernel memory management revisited (phase 2): map physical memory according to zones
- ia32: register reserved and ACPI zones
- pareas are now used only for mapping of present physical memory (hw_area() is gone)
- firmware zones and physical addresses outside any zones are allowed to be mapped generally
- fix nasty ancient bug in zones_insert_zone()
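
The r3973 entry splits the physical address space into present memory (mapped only via pareas), firmware zones, and addresses outside any zone, with the latter two allowed to be mapped generally. A runnable sketch of that per-address decision; the zone table, flags, and function name are made up for illustration, not the kernel's structures.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Made-up zone descriptor and flags; stand-ins for the kernel's own. */
    typedef struct {
        uintptr_t base;
        size_t    size;
        unsigned  flags;
    } zone_t;

    #define ZONE_AVAILABLE  0x1   /* ordinary usable RAM                   */
    #define ZONE_RESERVED   0x2   /* e.g. ranges reported reserved by BIOS */
    #define ZONE_FIRMWARE   0x4   /* e.g. ACPI tables                      */

    static const zone_t zones[] = {
        { 0x00000000, 0x0009f000, ZONE_AVAILABLE },
        { 0x000e0000, 0x00020000, ZONE_FIRMWARE  },
        { 0x00100000, 0x3fe00000, ZONE_AVAILABLE },
    };

    /* Firmware zones and addresses outside any zone may be mapped
     * generally; present physical memory is handled via pareas instead,
     * so a generic mapping of it is refused. */
    static bool may_map_generally(uintptr_t pa)
    {
        for (size_t i = 0; i < sizeof(zones) / sizeof(zones[0]); i++) {
            if ((pa >= zones[i].base) && (pa < zones[i].base + zones[i].size))
                return (zones[i].flags & ZONE_FIRMWARE) != 0;
        }
        return true;  /* outside any zone */
    }

    int main(void)
    {
        printf("0x000b8000 -> %d\n", may_map_generally(0x000b8000)); /* outside zones: 1 */
        printf("0x00200000 -> %d\n", may_map_generally(0x00200000)); /* available RAM: 0 */
        printf("0x000e4000 -> %d\n", may_map_generally(0x000e4000)); /* firmware:      1 */
        return 0;
    }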
|
3972 | 5743 d 14 h | decky | /trunk/kernel/ | kernel memory management revisited (phase 1): proper support for zone flags
- the zone_t structures are now statically allocated to be easily available
- the locking scheme was simplified
- new flags for non-available zones were introduced
- FRAME_LOW_4_GiB flag is removed, the functionality will eventually be reimplemented using a generic mechanism
|
3940 | 5749 d 20 h | decky | /trunk/kernel/ | make hw_area API more generic
this allows mapping of EGA VRAM on ia32/amd64
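
For context, the EGA/VGA text-mode frame buffer sits at physical address 0xB8000 on ia32/amd64, so a generic "map this physical range" facility is what lets a driver reach it. The sketch below is only a userspace analogue using Linux /dev/mem and mmap(2) to show the shape of the operation; it is not the kernel's hw_area/parea interface.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define EGA_VRAM_PHYS  0xb8000UL
    #define EGA_VRAM_SIZE  0x8000UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        volatile uint16_t *vram = mmap(NULL, EGA_VRAM_SIZE,
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, (off_t) EGA_VRAM_PHYS);
        if (vram == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* Cell 0 of the text console: character 'A', white on black. */
        vram[0] = (uint16_t) ((0x07 << 8) | 'A');

        munmap((void *) vram, EGA_VRAM_SIZE);
        close(fd);
        return 0;
    }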
|
3908 | 5753 d 19 h | decky | /trunk/ | overhaul pareas: use a single physical area for the physical address space not belonging to physical memory
|
3240 | 5954 d 3 h | decky | / | move unfinished ObjC support to a separate branch
|
3233 | 5957 d 23 h | decky | /trunk/ | remove dummy page coloring facility, which is currently not used
|
3222 | 5976 d 22 h | svoboda | /trunk/ | Merge program-loader related stuff from dynload branch to trunk. (huge)
|
3208 | 5978 d 18 h | jermar | /trunk/kernel/generic/ | The real intention of the previous commit was to put the boundary on 4 GiB, not 16 GiB.
|
3207 | 5978 d 18 h | jermar | /trunk/kernel/generic/ | Introduce FRAME_LOW_16_GiB slab/frame allocator flag. When specified, the allocators will not allocate memory above 16 GiB. Each architecture needs to make sure not to merge zones from below and above 16 GiB. Allocations that require memory below 16 GiB need to be altered to use this flag.
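
The flag works by restricting the allocators to zones that lie entirely below a fixed boundary, which is why no zone may straddle it (and, per r3208 above, the intended boundary was actually 4 GiB). A small runnable sketch of such a zone test, with made-up names rather than the kernel's:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GiB            (1ull << 30)
    #define LOW_BOUNDARY   (4 * GiB)   /* 4 GiB per r3208; r3207 said 16 GiB */
    #define FRAME_LOW_MEM  0x1         /* made-up stand-in for the flag */

    typedef struct {
        uint64_t base;
        uint64_t size;
    } zone_t;

    /* Architecture code must keep zones from straddling the boundary,
     * otherwise "zone entirely below the boundary" is not a usable test. */
    static bool zone_usable(const zone_t *z, unsigned flags)
    {
        if (flags & FRAME_LOW_MEM)
            return (z->base + z->size) <= LOW_BOUNDARY;
        return true;
    }

    int main(void)
    {
        zone_t low  = { 1 * GiB, 2 * GiB };
        zone_t high = { 5 * GiB, 8 * GiB };

        printf("low zone,  flag set: %d\n", zone_usable(&low, FRAME_LOW_MEM));  /* 1 */
        printf("high zone, flag set: %d\n", zone_usable(&high, FRAME_LOW_MEM)); /* 0 */
        printf("high zone, no flag:  %d\n", zone_usable(&high, 0));             /* 1 */
        return 0;
    }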
|
3206 | 5978 d 19 h | jermar | /trunk/kernel/generic/ | Avoid deadlock during the 'zone n' kconsole command. Buddy allocator detail is no longer printed because the effort to avoid the deadlock was simply not worth it.
|
3182 | 5992 d 23 h | jermar | /trunk/kernel/generic/include/mm/ | cstyle for slab.h
|
2745 | 6109 d 2 h | decky | /trunk/ | code cleanup (mostly signed/unsigned)
allow extra compiler warnings
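
A typical instance of the signed/unsigned mismatch such a cleanup removes, together with the extra warning that flags it (GCC's -Wsign-compare, enabled by -Wextra). The snippet is illustrative, not taken from the commit.

    /* Compile with: gcc -Wall -Wextra -c cleanup.c */
    #include <stddef.h>

    static int table[16];

    /* Before: 'i' is signed while the sizeof expression is size_t, so the
     * comparison triggers -Wsign-compare. */
    int sum_before(void)
    {
        int sum = 0;
        for (int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
            sum += table[i];
        return sum;
    }

    /* After: an unsigned index of the proper width, no warning. */
    int sum_after(void)
    {
        int sum = 0;
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
            sum += table[i];
        return sum;
    }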
|
2725 | 6129 d 23 h | decky | /trunk/kernel/ | remove config.memory_size, get_memory_size() and memory_init.{c|d}
the amount of available memory can be calculated from the sizes of the zones
add FRAMES2SIZE, SIZE2KB and SIZE2MB functions/macros (code readability)
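
A sketch of what readability helpers of this kind commonly look like, assuming 4 KiB frames; these are not the literal HelenOS definitions.

    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_WIDTH     12                              /* 4 KiB frames (assumed) */
    #define FRAMES2SIZE(f)  ((uint64_t) (f) << FRAME_WIDTH) /* frames -> bytes */
    #define SIZE2KB(s)      ((uint64_t) (s) >> 10)          /* bytes  -> KiB   */
    #define SIZE2MB(s)      ((uint64_t) (s) >> 20)          /* bytes  -> MiB   */

    int main(void)
    {
        uint64_t frames = 65536;            /* e.g. the sum of all zone sizes */
        uint64_t bytes = FRAMES2SIZE(frames);

        printf("%llu frames = %llu KiB = %llu MiB\n",
            (unsigned long long) frames,
            (unsigned long long) SIZE2KB(bytes),
            (unsigned long long) SIZE2MB(bytes));
        return 0;
    }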
|
2556 | 6260 d 3 h | jermar | /trunk/kernel/generic/ | Rename as_get_size() to as_area_get_size() and add a doxygen comment.
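
A sketch of the kind of doxygen comment the entry refers to; the signature below is assumed for illustration rather than copied from the kernel.

    #include <stddef.h>   /* size_t */
    #include <stdint.h>   /* uintptr_t */

    /** Return the size of an address space area.
     *
     * @param base Base address of the address space area.
     *
     * @return Size of the address space area in bytes,
     *         or zero if no area starts at the given address.
     */
    size_t as_area_get_size(uintptr_t base);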
|
2465 | 6372 d 23 h | jermar | /trunk/ | Merge arm32 into trunk.
|
2444 | 6380 d 14 h | jermar | /trunk/kernel/ | First fixes for suncc support.
It is going to be a long way...
|
2183 | 6436 d 22 h | jermar | /trunk/kernel/generic/ | Continue to de-oversynchronize the kernel.
- replace as->refcount with an atomic counter; accesses to this reference counter are not to be done when the as->lock mutex is held; this lets us get rid of mutex_lock_active()
Remove the possibility of a deadlock between TLB shootdown and asidlock.
- get rid of mutex_lock_active() on as->lock
- when locking the asidlock spinlock, always do it conditionally and with preemption disabled; in the unsuccessful case, enable interrupts and try again
- there should be no deadlock between TLB shootdown and the as->lock mutexes
- PLEASE REVIEW !!!
Add DEADLOCK_PROBEs to places where we have spinlock_trylock() loops.
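
The r2183 entry above takes asidlock only conditionally, with preemption disabled, and backs off (re-enabling interrupts) whenever the trylock fails. A stand-alone sketch of that pattern, using C11 atomics and no-op stand-ins for the kernel primitives; it is not the actual HelenOS code.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_flag asidlock = ATOMIC_FLAG_INIT;

    /* Stand-ins for the kernel primitives. */
    static void preemption_disable(void) { /* no-op in this sketch */ }
    static void preemption_enable(void)  { /* no-op in this sketch */ }
    static bool interrupts_disable(void) { return true; /* "previous state" */ }
    static void interrupts_restore(bool ipl) { (void) ipl; }

    static bool spinlock_trylock(atomic_flag *lock)
    {
        return !atomic_flag_test_and_set(lock);
    }

    static void spinlock_unlock(atomic_flag *lock)
    {
        atomic_flag_clear(lock);
    }

    /* Take asidlock conditionally with preemption disabled; on failure,
     * restore interrupts, re-enable preemption and try again, so the CPU
     * never spins in a state that could deadlock with a TLB shootdown. */
    static bool asidlock_lock(void)
    {
        for (;;) {
            preemption_disable();
            bool ipl = interrupts_disable();
            if (spinlock_trylock(&asidlock))
                return ipl;            /* held, preemption still disabled */
            interrupts_restore(ipl);
            preemption_enable();
            /* a DEADLOCK_PROBE would typically go here to catch livelock */
        }
    }

    static void asidlock_unlock(bool ipl)
    {
        spinlock_unlock(&asidlock);
        interrupts_restore(ipl);
        preemption_enable();
    }

    int main(void)
    {
        bool ipl = asidlock_lock();
        puts("asidlock held");
        asidlock_unlock(ipl);
        return 0;
    }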
|
2170 | 6441 d 15 h | jermar | /trunk/kernel/ | Simplify synchronization in as_switch().
The function was oversynchronized, which was causing deadlocks on the address space mutex.
Now, address spaces can only be switched when the asidlock is held. This also protects stealing of ASIDs. No other synchronization is necessary.
|