Rev | Age | Author | Path | Log message
4490 | 5644 d 13 h | decky | /trunk/kernel/
    remove redundant index_t and count_t types (which were always quite ambiguous and not actually needed)
|
4174 | 5716 d 6 h | decky | /trunk/kernel/generic/
    add malloc slab caches for up to 4 MB blocks
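
An entry like this implies that malloc() requests are served from a family of slab caches keyed by power-of-two block sizes. The sketch below is only an illustration of how such a front-end might pick a cache, assuming block sizes from 16 B up to 4 MiB; MIN_ORDER, MAX_ORDER and malloc_cache_index() are hypothetical names, not the actual HelenOS code.

```c
/*
 * Hypothetical sketch: choose a slab cache for a malloc() request by
 * rounding the size up to the next power of two, with caches covering
 * 16 B .. 4 MiB blocks. Not the actual HelenOS implementation.
 */
#include <stddef.h>

#define MIN_ORDER  4U    /* smallest cache: 2^4  = 16 B  */
#define MAX_ORDER  22U   /* largest cache:  2^22 = 4 MiB */

static unsigned malloc_cache_index(size_t size)
{
	unsigned order = MIN_ORDER;

	/* Find the smallest power-of-two block that fits the request. */
	while ((((size_t) 1) << order) < size && order < MAX_ORDER)
		order++;

	/* Index into an array of (MAX_ORDER - MIN_ORDER + 1) caches. */
	return order - MIN_ORDER;
}
```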
|
3973 | 5740 d 21 h | decky | /trunk/kernel/
    kernel memory management revisited (phase 2): map physical memory according to zones
    - ia32: register reserved and ACPI zones
    - pareas are now used only for mapping of present physical memory (hw_area() is gone)
    - firmware zones and physical addresses outside any zones are allowed to be mapped generally
    - fix nasty ancient bug in zones_insert_zone()
|
3972 | 5741 d 9 h | decky | /trunk/kernel/
    kernel memory management revisited (phase 1): proper support for zone flags
    - the zone_t structures are now statically allocated to be easily available
    - the locking scheme was simplified
    - new flags for non-available zones were introduced
    - the FRAME_LOW_4_GiB flag is removed; the functionality will eventually be reimplemented using a generic mechanism
|
3940 | 5747 d 15 h | decky | /trunk/kernel/
    make hw_area API more generic
    this allows mapping of EGA VRAM on ia32/amd64
|
3908 | 5751 d 14 h | decky | /trunk/
    overhaul pareas: use one single physical area for the physical address space not belonging to physical memory
|
3240 | 5951 d 22 h | decky | /
    move unfinished ObjC support to a separate branch
|
3233 | 5955 d 18 h | decky | /trunk/
    remove dummy page coloring facility, which is currently not used
|
3222 | 5974 d 17 h | svoboda | /trunk/
    Merge program-loader related stuff from dynload branch to trunk. (huge)
|
3208 | 5976 d 13 h | jermar | /trunk/kernel/generic/
    The real intention of the previous commit was to put the boundary at 4 GiB, not 16 GiB.
|
3207 | 5976 d 13 h | jermar | /trunk/kernel/generic/
    Introduce FRAME_LOW_16_GiB slab/frame allocator flag. When specified, the
    allocators will not allocate memory above 16 GiB. Each architecture needs to
    make sure not to merge zones from below and above 16 GiB. Allocations that
    require memory below 16 GiB need to be altered to use this flag.
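
As a rough illustration of why zones must never be merged across the boundary, the sketch below shows the check such a flag implies: a zone can only satisfy a "low memory" allocation if it lies entirely below 16 GiB. The zone_t layout and the helper name here are simplified assumptions, not the actual allocator code.

```c
/*
 * Illustrative sketch only: how a zone-based frame allocator can honour a
 * "low memory" flag such as FRAME_LOW_16_GiB. A zone qualifies only if it
 * lies entirely below the boundary, which is why zones from below and above
 * 16 GiB must not be merged into one.
 */
#include <stdbool.h>
#include <stdint.h>

#define LOW_MEM_BOUNDARY  (UINT64_C(16) << 30)  /* 16 GiB */

typedef struct {
	uint64_t base;   /* physical base address of the zone */
	uint64_t size;   /* zone size in bytes */
} zone_t;                /* simplified stand-in for the kernel's zone_t */

static bool zone_usable_for_low_alloc(const zone_t *zone, bool low_16_gib)
{
	if (!low_16_gib)
		return true;
	return (zone->base + zone->size) <= LOW_MEM_BOUNDARY;
}
```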
|
3206 | 5976 d 14 h | jermar | /trunk/kernel/generic/
    Avoid deadlock during the 'zone n' kconsole command. Buddy allocator detail is
    no longer printed because the effort to avoid the deadlock was simply not worth it.
|
3182 | 5990 d 18 h | jermar | /trunk/kernel/generic/include/mm/
    cstyle for slab.h
|
2745 | 6106 d 21 h | decky | /trunk/
    code cleanup (mostly signed/unsigned)
    allow extra compiler warnings
|
2725 | 6127 d 18 h | decky | /trunk/kernel/
    remove config.memory_size, get_memory_size() and memory_init.{c|d}
    the amount of available memory can be calculated from the sizes of the zones
    add FRAMES2SIZE, SIZE2KB and SIZE2MB functions/macros (code readability)
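
The helpers named in this entry are straightforward shift-based conversions. Below is a sketch of plausible definitions, assuming 4 KiB frames (FRAME_WIDTH = 12); the exact definitions in the HelenOS tree may differ.

```c
/*
 * Sketch of the readability helpers named above, assuming 4 KiB frames.
 * The actual definitions in the HelenOS tree may differ.
 */
#include <stddef.h>

#define FRAME_WIDTH  12   /* log2 of the frame size: 4 KiB frames */

#define FRAMES2SIZE(frames)  ((size_t) (frames) << FRAME_WIDTH)  /* frames -> bytes */
#define SIZE2KB(size)        ((size_t) (size) >> 10)             /* bytes  -> KiB   */
#define SIZE2MB(size)        ((size_t) (size) >> 20)             /* bytes  -> MiB   */
```

With helpers like these, the total available memory can be reported as, e.g., SIZE2MB(FRAMES2SIZE(total_frames)), where total_frames is the sum of the frame counts of all zones.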
|
2556 | 6257 d 22 h | jermar | /trunk/kernel/generic/
    Rename as_get_size() to as_area_get_size() and add a doxygen comment.
|
2465 | 6370 d 18 h | jermar | /trunk/
    Merge arm32 into trunk.
|
2444 | 6378 d 9 h | jermar | /trunk/kernel/
    First fixes for suncc support.
    It is going to be a long way...
|
2183 | 6434 d 17 h | jermar | /trunk/kernel/generic/
    Continue to de-oversynchronize the kernel.
    - replace as->refcount with an atomic counter; accesses to this reference
      counter are not to be done when the as->lock mutex is held; this gets
      rid of mutex_lock_active()
    Remove the possibility of a deadlock between TLB shootdown and asidlock.
    - get rid of mutex_lock_active() on as->lock
    - when locking the asidlock spinlock, always do it conditionally and with
      preemption disabled; in the unsuccessful case, enable interrupts and try again
    - there should be no deadlock between TLB shootdown and the as->lock mutexes
    - PLEASE REVIEW !!!
    Add DEADLOCK_PROBE's to places where we have spinlock_trylock() loops.
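
The conditional-locking idiom described in the asidlock bullet above looks, in rough outline, like the sketch below. It assumes the kernel's spinlock_trylock(), spinlock_unlock(), preemption_disable()/preemption_enable() and interrupts_disable()/interrupts_restore() primitives; it is a simplified fragment, not the actual code in as.c.

```c
/*
 * Simplified sketch (kernel-context fragment): acquire asidlock only via
 * trylock with preemption disabled, backing off with interrupts enabled so
 * that a pending TLB shootdown IPI can be serviced before retrying.
 */
ipl_t ipl;

preemption_disable();
for (;;) {
	ipl = interrupts_disable();

	if (spinlock_trylock(&asidlock))
		break;   /* got the lock; keep interrupts disabled */

	/*
	 * The lock is busy: re-enable interrupts so a pending TLB shootdown
	 * IPI can be serviced, then retry. A DEADLOCK_PROBE() in loops like
	 * this helps catch the case where we spin forever.
	 */
	interrupts_restore(ipl);
}

/* ... switch the address space / steal an ASID while holding asidlock ... */

spinlock_unlock(&asidlock);
interrupts_restore(ipl);
preemption_enable();
```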
|
2170 | 6439 d 10 h | jermar | /trunk/kernel/
    Simplify synchronization in as_switch().
    The function was oversynchronized, which was causing deadlocks on the
    address space mutex. Now, address spaces can only be switched when the
    asidlock is held. This also protects stealing of ASIDs. No other
    synchronization is necessary.
|