The plan is for the kernel to be just a scheduler, an IPC relay, a physical memory manager and (probably) a virtual memory manager.
The system uses seL4-like capabilities, but stored in a global linear array instead of a CNode tree, and physical memory allocation is managed by the kernel.
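A global linear array makes capability lookup a bounds-checked index instead of a CNode-tree walk. A minimal sketch of what that could look like; all names here (`Capability`, `caps`, `lookup`) are illustrative, not the actual kernel API:

```zig
const std = @import("std");

/// Illustrative capability entry; the real kernel tracks more state.
const Capability = struct {
    owner: u32 = 0, // owning thread id, 0 marks a free slot
    object: u64 = 0, // object id / physical address this cap refers to
    rights: u8 = 0, // permission bits (read/write/map/...)
};

/// Hypothetical global capability table: a flat array gives O(1),
/// bounds-checked lookups instead of a tree walk.
var caps = [_]Capability{.{}} ** 4096;

const CapError = error{ InvalidHandle, NotOwner };

/// Resolve a handle for a given owner, validating bounds and ownership.
fn lookup(handle: u32, owner: u32) CapError!*Capability {
    if (handle >= caps.len) return CapError.InvalidHandle;
    const cap = &caps[handle];
    if (cap.owner == 0) return CapError.InvalidHandle;
    if (cap.owner != owner) return CapError.NotOwner;
    return cap;
}
```

The trade-off versus a CNode tree: lookups and revocation are trivially fast, but the table size is fixed at boot and handles are global rather than per-address-space.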
```sh
zig build run # that's it
# read 'Project-Specific Options' from `zig build --help` for more options
zig build run -Dtest=true # include the custom unit test runner
zig build # generates the os.iso in zig-out/os.iso
zig build run --prominent-compile-errors --summary none -freference-trace \
    -Doptimize=ReleaseSmall -Duefi=false -Ddebug=1 -Dgdb=false -Ddisplay=false -Dtest=true
```
- kernel: src/kernel
- kernel/user interface: src/abi
- root process: src/userspace/root
-
kernel
- PMM
- VMM
- VMM arch implementation back in the kernel:
  the user-space vmm manages mapping of capabilities
  to the (single per-thread) vmem capability.
  Frame should be the only mappable capability,
  and it is dynamically sized: `0x1000 * 2^size` bytes.
- GDT, TSS, IDT
- ACPI, APIC
- SMP
- user space
- HPET
- TSC
- scheduler
- binary loader
- message IPC, shared memory IPC
- multiple parallel recvs and calls to the same endpoint
- signaling system (IPC without messages)
- figure out userland interrupts (PS/2 keyboard, ..)
- capabilities
- allocate capabilities
- deallocate capabilities
- map capabilities
- unmap capabilities
- send capabilities
- disallow mapping a frame twice without cloning the cap
- disallow overlapping maps
- syscalls
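Since a Frame capability is dynamically sized as `0x1000 * 2^size`, its byte size is a single shift. A small sketch; the function name `frameBytes` is illustrative:

```zig
const std = @import("std");

/// Byte size of a dynamically sized Frame capability: 0x1000 * 2^size.
/// size = 0 -> 4 KiB (one page), size = 9 -> 2 MiB, size = 18 -> 1 GiB.
fn frameBytes(size: u6) u64 {
    return @as(u64, 0x1000) << size;
}
```

This keeps the size field small (a handful of bits) while still covering everything from a single page up to huge mappings.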
-
root + initfsd process
- decompress initfs.tar.gz
- execute initfs:///sbin/init and give it a capability to IPC with the initfs
-
vmm server process
- handles virtual memory for everything
-
proc server process
- handles individual processes and their threads
-
initfs:///sbin/initd process
- launch initfs:///sbin/rngd
- launch initfs:///sbin/vfsd
- launch services from initfs://
-
initfs:///sbin/vfsd process
- create fs://
- exec required root filesystem drivers
- read /etc/fstab before mounting root (root= kernel cli arg)
- mount everything according to /etc/fstab
- exec other filesystem drivers lazily
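The fstab steps above boil down to reading whitespace-separated `device mountpoint fstype options ...` lines and skipping comments. A sketch of the parsing, assuming the classic fstab(5) line format; `FstabEntry` and `parseFstabLine` are illustrative names, not the vfsd API:

```zig
const std = @import("std");

/// One mount described by an /etc/fstab line (dump/pass fields ignored here).
const FstabEntry = struct {
    device: []const u8,
    mount_point: []const u8,
    fs_type: []const u8,
    options: []const u8,
};

/// Parse a single fstab line; comments and blank lines yield null.
fn parseFstabLine(line: []const u8) ?FstabEntry {
    const trimmed = std.mem.trim(u8, line, " \t\r");
    if (trimmed.len == 0 or trimmed[0] == '#') return null;
    var it = std.mem.tokenizeAny(u8, trimmed, " \t");
    return .{
        .device = it.next() orelse return null,
        .mount_point = it.next() orelse return null,
        .fs_type = it.next() orelse return null,
        .options = it.next() orelse "defaults",
    };
}
```

vfsd would match each entry's `fs_type` against the available drivers, exec'ing one lazily on first use.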
-
initfs:///sbin/fsd.fat32
-
initfs:///sbin/rngd process
-
/sbin/inputd process
-
/sbin/outputd process
-
/sbin/kbd process
-
/sbin/moused process
-
/sbin/timed process
-
/sbin/fbd process
-
/sbin/pcid process
-
/sbin/usbd process
Approximate synchronous IPC performance: a `call` + `replyRecv` loop takes about 10µs per round trip (about 100 000 per second):

```zig
// server
while (true) {
    try rx.replyRecv(&msg);
}

// client
while (true) {
    try tx.call(&msg);
}
```