For now I've just put all the stub functions that are needed to link the
kernel into a file called stubs.cpp. I've not yet moved across the interrupt
handling code or the ELF64 relocation code to the x86 directory. Once those
have been moved I can get rid of the x86_64 headers/source directories.
Not many changes seeing as there's not much x86_64 stuff done yet. Small
differences are handled with ifdefs, large differences (descriptors.h,
struct iframe) have separate headers under arch/x86/32 and arch/x86/64.
The setup procedure is fairly simple: create a 64-bit GDT and 64-bit page
tables that include all kernel mappings from the 32-bit address space, but at
the correct 64-bit address, then go through kernel_args and change all virtual
addresses to 64-bit addresses, and finally switch to long mode and jump to the
kernel.
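The final "switch to long mode and jump to the kernel" step boils down to the
standard sequence sketched below. The bit values come from the CPU manuals;
the function itself and its name are illustrative assumptions rather than the
loader's actual code, and the GDT/page table creation and kernel_args rewrite
happen before it is called.

    #include <stdint.h>

    // Runs in 32-bit protected mode; pml4Physical is the physical address of
    // the 64-bit page table root built from the kernel's 32-bit mappings.
    static inline void
    enable_long_mode_paging(uint32_t pml4Physical)
    {
        // CR4.PAE (bit 5) must be set before long mode paging is enabled.
        asm volatile("movl %%cr4, %%eax; orl $0x20, %%eax; movl %%eax, %%cr4"
            ::: "eax");

        // Point CR3 at the 64-bit PML4.
        asm volatile("movl %0, %%cr3" :: "r"(pml4Physical));

        // Set EFER.LME (bit 8) via MSR 0xC0000080.
        asm volatile("movl $0xC0000080, %%ecx; rdmsr; orl $0x100, %%eax; wrmsr"
            ::: "eax", "ecx", "edx");

        // Enable paging (CR0.PG, bit 31). The CPU is now in compatibility
        // mode; a far jump through a 64-bit code descriptor in the new GDT
        // completes the switch and lands in the kernel's entry point.
        asm volatile(
            "movl %%cr0, %%eax; orl $0x80000000, %%eax; movl %%eax, %%cr0"
            ::: "eax");
    }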
* platform_allocate_elf_region() is removed; it is implemented in platform-
independent code now (ELF*Class::AllocateRegion). For ELF64 it is now
assumed that 64-bit addresses are mapped in the loader's 32-bit address space
as (address - KERNEL_BASE_64BIT + KERNEL_BASE).
* mapped_delta field from preloaded_*_image removed; this is now handled at
compile time using the ELF*Class::Map method.
* Also link the kernel with -z max-page-size=0x1000, which removes the need for
2MB alignment on the data segment (not going to map the kernel with large
pages for the time being).
* Added a FixedWidthPointer template class which uses 64-bit storage to hold
a pointer. This is used in place of raw pointers in kernel_args (a rough
sketch follows after this list).
* Added __attribute__((packed)) to kernel_args and all structures contained
within it. This is necessary due to different alignment behaviour for
32-bit and 64-bit compilation with GCC.
* With these changes, kernel_args will now come out the same size for both
the x86_64 kernel and the loader, excluding the preloaded_image structure
which has not yet been changed.
* Tested both an x86 GCC2 and GCC4 build, no problems caused by these changes.
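For illustration, a FixedWidthPointer along these lines could look like the
sketch below; it is a paraphrase of the idea, not the actual Haiku class.

    #include <stdint.h>

    // Stores a pointer in 64 bits regardless of the host pointer size, so
    // kernel_args has the same layout for the 32-bit loader and the 64-bit
    // kernel. Packed to avoid 32-bit vs. 64-bit alignment differences.
    template<typename Type>
    struct __attribute__((packed)) FixedWidthPointer {
        uint64_t value;

        Type* Pointer() const { return (Type*)(uintptr_t)value; }
        void SetTo(Type* pointer) { value = (uintptr_t)pointer; }

        Type* operator->() const { return Pointer(); }
        Type& operator*() const { return *Pointer(); }
    };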
* x86_64 is using the existing *_ia32 boot platforms.
* Special flags are required when compiling the loader to get GCC to compile
32-bit code. This adds a new set of rules for compiling boot code, used in
place of the kernel rules, that compile with the necessary flags.
* Some x86_64 private headers have been stubbed by #include'ing the x86
versions. These will be replaced later.
AMD C1E is a BIOS-controlled C3 state. Certain processor families may cut
off the TSC and the local APIC timer when in a deep C state, including the
C1E state, so the CPU can't be woken up and the system hangs.
This patch first adds support for selecting the idle routine during boot.
It then implements the amdc1e_noarat_idle() routine, which checks the MSR
containing the C1eOnCmpHalt (bit 28) and SmiOnCmpHalt (bit 27) flags before
executing the halt instruction and clears them if they are set.
Intel C1E, however, doesn't have this problem. The difference between AMD
C1E and C3 is that the transition into C1E is not initiated by the operating
system: the system enters the C1E state automatically when both cores enter
the C1 state. Intel C1E, by contrast, just means "reduce CPU voltage before
entering the corresponding Cx-state".
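A hedged sketch of such an idle routine follows. The MSR number (0xC0010055,
the AMD interrupt-pending message register) and the helper functions are
assumptions based on the bit names above, not code taken from the patch.

    #include <stdint.h>

    static const uint32_t kAmdIntPendingMessageMSR = 0xC0010055; // assumed
    static const uint64_t kSmiOnCmpHalt = 1ULL << 27;
    static const uint64_t kC1eOnCmpHalt = 1ULL << 28;

    static inline uint64_t
    read_msr(uint32_t msr)
    {
        uint32_t low, high;
        asm volatile("rdmsr" : "=a"(low), "=d"(high) : "c"(msr));
        return ((uint64_t)high << 32) | low;
    }

    static inline void
    write_msr(uint32_t msr, uint64_t value)
    {
        asm volatile("wrmsr" :: "a"((uint32_t)value),
            "d"((uint32_t)(value >> 32)), "c"(msr));
    }

    static void
    amdc1e_noarat_idle()
    {
        uint64_t value = read_msr(kAmdIntPendingMessageMSR);

        // If the BIOS armed C1E/SMI on compare-halt, clear the bits so that
        // halting cannot drag the package into the C1E (C3-like) state where
        // the TSC and local APIC timer stop ticking.
        if ((value & (kC1eOnCmpHalt | kSmiOnCmpHalt)) != 0) {
            write_msr(kAmdIntPendingMessageMSR,
                value & ~(kC1eOnCmpHalt | kSmiOnCmpHalt));
        }

        asm volatile("sti; hlt");   // halt with interrupts enabled
    }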
This patch may fix #8111, #3999, #7562, #7940 and #8060.
Copied from the description of #3999:
>but for some reason I hit the power button instead of the reset one. And
>the boot continued!!
The reason is that the CPUs are woken up once the power button is hit.
Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
* Prepend x86_ to non-static x86 code
* Add x86_init_fpu function to kernel header
* Don't init fpu multiple times on smp systems
* Verified fpu is still started on smp and non-smp
* SSE code still generates general protection faults
on smp systems though
* Rename init_sse to init_fpu and handle FPU setup.
* Stop trying to set up the FPU before VM init.
We tried to set up the FPU before VM init, then
set it up again after VM init with SSE extensions,
which caused SSE and MMX applications to crash.
* Be more logical in FPU setup by detecting the CPU flag prior
to enabling the FPU (it's unlikely Haiku will run on
a processor without an FPU... but let's be consistent); see the sketch
after this list.
* SSE2 gcc code now runs (faster even) without GPF
* tqh confirms his previously crashing mmx code now works
* The non-SSE FPU enable after VM init still needs testing!
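A minimal sketch of the detect-then-enable order described above; the CR0/CR4
bit values are from the Intel manuals, everything else (names, structure) is
illustrative rather than the Haiku implementation.

    #include <cpuid.h>

    static inline unsigned long read_cr0()
    { unsigned long v; asm volatile("mov %%cr0, %0" : "=r"(v)); return v; }
    static inline void write_cr0(unsigned long v)
    { asm volatile("mov %0, %%cr0" :: "r"(v)); }
    static inline unsigned long read_cr4()
    { unsigned long v; asm volatile("mov %%cr4, %0" : "=r"(v)); return v; }
    static inline void write_cr4(unsigned long v)
    { asm volatile("mov %0, %%cr4" :: "r"(v)); }

    static void
    init_fpu()
    {
        unsigned int eax, ebx, ecx, edx;
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);

        if ((edx & (1 << 0)) == 0)      // CPUID.1:EDX bit 0: FPU present
            return;

        // Native FPU: clear EM (bit 2), set MP (bit 1) and NE (bit 5).
        write_cr0((read_cr0() & ~0x4UL) | 0x2UL | 0x20UL);
        asm volatile("fninit");

        if ((edx & (1 << 25)) != 0) {   // CPUID.1:EDX bit 25: SSE
            // OSFXSR (bit 9) and OSXMMEXCPT (bit 10) enable FXSAVE/FXRSTOR
            // based state switching and SSE exceptions.
            write_cr4(read_cr4() | (1UL << 9) | (1UL << 10));
        }
    }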
* The vm86 code or the code running in virtual 8086 mode may clobber the
%fs register that we use for the CPU dependent thread local storage
(TLS). Previously the vm86 code would simply restore %fs on exit, but
this doesn't always work. If the thread got unscheduled while running
in virtual 8086 mode and was then rescheduled on a different CPU, the
vm86 exit code would restore the %fs register with the TLS value of
the old CPU, causing anything using TLS in userland to crash later on.
Instead we skip the %fs register restore on exit (as do the other
interrupt return functions) and explicitly update the potentially
clobbered %fs by calling x86_set_tls_context(). This will repopulate
the %fs register with the TLS value for the right CPU. Fixes #8068.
* Made the static set_tls_context() into x86_set_tls_context() and made
it available to others to facilitate the above; a sketch follows after
this list.
* Sync the vm86 specific interrupt code with the changes from hrev23370,
using the iframe pop macro to properly return. Previously what was
pushed in int_bottom wasn't popped on return.
* Account for the time update macro resetting the in_kernel flag and
reset it to 1, as we aren't actually returning to userland. This
didn't cause any harm though as only the time tracking is using that
flag so far.
* Some minor cleanup.
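The effect of x86_set_tls_context() amounts to rewriting the base of the
current CPU's TLS descriptor and reloading %fs, roughly as in the sketch
below. The descriptor layout is the standard x86 GDT format; the function and
its parameters are illustrative, not the Haiku symbols.

    #include <stdint.h>

    struct segment_descriptor {
        uint16_t limit_low;
        uint16_t base_low;
        uint8_t  base_mid;
        uint8_t  access;
        uint8_t  limit_high_flags;
        uint8_t  base_high;
    };

    // Point a TLS descriptor at the thread's TLS block and reload %fs so the
    // new base takes effect on the CPU we are currently running on.
    static void
    set_tls_base(segment_descriptor* descriptor, uint16_t selector,
        uint32_t base)
    {
        descriptor->base_low = base & 0xffff;
        descriptor->base_mid = (base >> 16) & 0xff;
        descriptor->base_high = (base >> 24) & 0xff;

        asm volatile("movw %0, %%fs" :: "r"(selector));
    }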
* If we detect ACPI 2.0 or higher, the spec says we should use the XSDT rather
than the RSDT. Attempt to do so, falling back to the RSDT if the former fails
to be mapped/validated.
* Refactored acpi_find_table into a templated version to account for the fact
that the XSDT uses different pointer widths than the RSDT for its links to
other tables.
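The templated walk boils down to something like the following; the header
layout follows the ACPI specification, but the function shape and names are
an illustration (the real code also has to map each table before reading it).

    #include <stdint.h>
    #include <string.h>

    // Common 36-byte header shared by all ACPI system description tables.
    struct acpi_descriptor_header {
        char     signature[4];
        uint32_t length;
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        uint32_t creator_id;
        uint32_t creator_revision;
    } __attribute__((packed));

    // PointerType is uint32_t for the RSDT and uint64_t for the XSDT; the
    // pointer width is the only difference between the two walks.
    template<typename PointerType>
    static acpi_descriptor_header*
    acpi_find_table(const char* signature, acpi_descriptor_header* sdt)
    {
        int32_t count = (sdt->length - sizeof(acpi_descriptor_header))
            / sizeof(PointerType);
        PointerType* pointers
            = (PointerType*)((uint8_t*)sdt + sizeof(acpi_descriptor_header));

        for (int32_t i = 0; i < count; i++) {
            acpi_descriptor_header* table
                = (acpi_descriptor_header*)(uintptr_t)pointers[i];
            if (memcmp(table->signature, signature, 4) == 0)
                return table;
        }
        return NULL;
    }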
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42133 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Reorganized the kernel locking related to threads and teams.
* We now discriminate correctly between process and thread signals. Signal
handlers have been moved to teams. Fixes #5679.
* Implemented real-time signal support, including signal queuing, SA_SIGINFO
support, sigqueue(), sigwaitinfo(), sigtimedwait(), waitid(), and the addition
of the real-time signal range (a small usage example follows below). Closes
#1935 and #2695.
* Gave SIGBUS a separate signal number. Fixes #6704.
* Implemented <time.h> clock and timer support, and fixed/completed alarm() and
[set]itimer(). Closes #5682.
* Implemented support for thread cancellation. Closes #5686.
* Moved send_signal() from <signal.h> to <OS.h>. Fixes #7554.
* Lots of smaller, more or less related changes.
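A small userland example exercising the new real-time signal pieces (plain
POSIX usage, not code from this change):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main()
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &set, NULL);     // let the signal queue up

        union sigval value;
        value.sival_int = 42;
        sigqueue(getpid(), SIGRTMIN, value);    // queue it with a payload

        siginfo_t info;
        if (sigwaitinfo(&set, &info) == SIGRTMIN)
            printf("got SIGRTMIN, payload %d\n", info.si_value.sival_int);

        return 0;
    }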
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42116 a95241bf-73f2-0310-859d-f6bbb57e9c96
at the override entry to trigger the overridden vector so that we don't need
to configure any additional redirections.
* Also configures the polarity and trigger modes found in the override entry.
* When disabling the legacy PIC, retrieve the enabled interrupts and re-enable
them in the IO-APIC. This will, for example, make the ACPI SCI work when it is
installed prior to switching interrupt models. Through the transparent support
for interrupt source overrides it'll also automatically relay from the old to
the new vector.
This should make ACPI interrupts work and should support relocating the ISA PIT
from irq 0 to a different global system interrupt (usually 2) so that it can
still work when IO-APICs are in use.
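For reference, the MADT entry this works from is the ACPI "Interrupt Source
Override" structure shown below (layout per the ACPI specification; the field
names are paraphrased). The PIT relocation mentioned above appears as source 0
mapped to, typically, global system interrupt 2, and the kernel programs that
IO-APIC input with the polarity/trigger bits from the flags field.

    #include <stdint.h>

    struct acpi_madt_interrupt_source_override {
        uint8_t  type;                      // 2 = interrupt source override
        uint8_t  length;                    // 10
        uint8_t  bus;                       // 0 = ISA
        uint8_t  source;                    // original ISA IRQ
        uint32_t global_system_interrupt;   // IO-APIC input it is wired to
        uint16_t flags;                     // polarity (bits 0-1) and
                                            // trigger mode (bits 2-3)
    } __attribute__((packed));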
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41528 a95241bf-73f2-0310-859d-f6bbb57e9c96
at all and, since there can be multiple IO-APICs, we need to do the
enumeration again in the kernel anyway. Also only set ioapic_phys the first
time we encounter an IO-APIC object as it looks cleaner when we arrive at the
first IO-APIC default address.
* Therefore we don't have to worry about already mapped IO-APICs when
enumerating them in the kernel.
* Also remove the mapping function that is now not used anymore.
* We still use the ioapic_phys field of the kernel args to determine whether
there is an IO-APIC at all to avoid needlessly doing the enumeration again.
This fixes multi IO-APIC configurations, because before we would indeed map
the last IO-APIC listed in the MADT, but then in the kernel assumed we mapped
the first one. We'd end up mapping the last listed IO-APIC twice and the
first IO-APIC never, always programming the last one when we actually targeted
the first one.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41476 a95241bf-73f2-0310-859d-f6bbb57e9c96
mark the ISA interrupts as unusable and then use ioapic_is_interrupt_available
to determine if that vector is possibly taken by an IO-APIC. If IO-APICs are
not used, this will simply always return false, leaving all vectors free for
MSI use.
* The msi_init() now has to be done after a potential IO-APIC init, so it is now
done after ioapic_init() instead of inside apic_init().
* Add apic_disable_local_ints() to clear the local ints on the local APIC once
we are in APIC mode (i.e. the IO-APIC is set up and we don't need the external
routing anymore).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41445 a95241bf-73f2-0310-859d-f6bbb57e9c96
functional change intended.
* Use an appropriately sized sLevelTriggeredInterrupts for each controller type.
This also fixes an out-of-bounds access for IO-APICs with more than 32 entries
and returns the right mode in such cases.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@41426 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The team and thread kernel structures have been renamed to Team and Thread
respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
private kernel headers are included that are no longer C compatible.
Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
The kernel doesn't use it, and it could be regenerated in the kernel if it did
need it. This also unlocks the APIC ID range the BIOS can use. Previously the
APIC IDs would have to fit within 0..MAX_CPUS or the CPU would be rejected.
Some boxes (mine in particular) seem to populate the APIC IDs sparsely, so the
range is pretty large.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37108 a95241bf-73f2-0310-859d-f6bbb57e9c96
was used.
* Renamed X86VMTranslationMap to X86VMTranslationMap32Bit and pulled the paging
method agnostic part into new base class X86VMTranslationMap.
* Moved X86PagingStructures into its own header/source pair.
* Moved pgdir_virt from X86PagingStructures to X86PagingStructures32Bit where
it is actually used.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37055 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed i386_context_switch() to x86_context_switch().
* x86_context_switch() no longer sets the page directory.
arch_thread_context_switch() does that explicitly now. This makes it possible
to solve the TODO by reordering the release of the previous paging structures
reference and the setting of the new page directory.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37024 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed vm_translation_map_arch_info to X86PagingStructures, and all
members and local variables of that type accordingly.
* arch_thread_context_switch(): Added TODO: The still active paging structures
can indeed be deleted before we stop using them.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37022 a95241bf-73f2-0310-859d-f6bbb57e9c96
where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in
vm_page.{h,cpp}.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96
interrupts (MSI).
* Add the remaining IDT entries and redirection functions in the interrupt code.
* Make the PIC end_of_interrupt() return a result to indicate whether the vector
was handled by this PIC. If it wasn't, we now issue an apic_end_of_interrupt()
on the assumption that it was a local APIC interrupt, an MSI or an IPI. This
also removes
the need for the gUsingIOAPIC global and doing manual apic_end_of_interrupt()
calls in the SMP and timer code.
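The dispatch has roughly the shape sketched below; the declarations stand in
for the real PIC/APIC code and the names are approximations.

    #include <stdint.h>

    bool pic_end_of_interrupt(int32_t vector);  // true if the PIC owned it
    void apic_end_of_interrupt();

    void
    end_of_interrupt(int32_t vector)
    {
        // Let the PIC try first; it reports whether the vector was its own.
        if (pic_end_of_interrupt(vector))
            return;

        // Not a PIC vector: assume a local APIC interrupt, MSI or IPI.
        apic_end_of_interrupt();
    }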
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36221 a95241bf-73f2-0310-859d-f6bbb57e9c96
other places where previously the same functionality was duplicated. Also
separated the header which was originally arch_smp.h into apic.h and arch_smp.h
again, as some of it is MP related and not actually APIC.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36182 a95241bf-73f2-0310-859d-f6bbb57e9c96
Added fields for temporary storage of the debug registers dr6 and dr7 to the
arch_cpu_info structure. The actual registers are stored at the beginning of
x86_exit_user_debug_at_kernel_entry() and read in
x86_handle_debug_exception().
The problem was that x86_exit_user_debug_at_kernel_entry() itself overwrote
dr7 and, if kernel breakpoints were enabled, dr6 could be overwritten anytime
after. So x86_handle_debug_exception() would find incorrect values in the
registers (definitely in dr7) and thus interpret the detected debug condition
incorrectly. Usually watchpoints were recognized as breakpoints.
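The idea, reduced to a self-contained sketch (the per-CPU storage and helper
names are stand-ins, not the actual arch_cpu_info fields):

    #include <stdint.h>

    struct debug_register_stash {
        uint32_t dr6;
        uint32_t dr7;
    };
    static debug_register_stash sStash[8];  // one slot per CPU (example size)

    static inline uint32_t
    read_dr6()
    {
        uint32_t value;
        asm volatile("mov %%dr6, %0" : "=r"(value));
        return value;
    }

    static inline uint32_t
    read_dr7()
    {
        uint32_t value;
        asm volatile("mov %%dr7, %0" : "=r"(value));
        return value;
    }

    // On kernel entry: capture the userland debug condition before dr7 is
    // rewritten for kernel breakpoints and before dr6 can be clobbered. The
    // debug exception handler then reads the stashed copies instead of the
    // live registers.
    static void
    save_user_debug_registers(int currentCpu)
    {
        sStash[currentCpu].dr6 = read_dr6();
        sStash[currentCpu].dr7 = read_dr7();
    }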
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35951 a95241bf-73f2-0310-859d-f6bbb57e9c96
arch_debug_registers instead.
* Call arch_debug_save_registers() on all CPUs when entering the kernel
debugger.
* Added debug_get_debug_registers() to return a specified CPU's saved
registers.
* x86:
- Replaced the previous arch_debug_save_registers() implementation. Disabled
getting the registers via the gdb interface for the time being.
- Fixed the "sc", "call", and "calling" commands to also work for threads
running on another CPU.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35907 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Made the page table allocation more flexible. Got rid of sMaxVirtualAddress
and added new virtual_end address to the architecture specific kernel args.
* Increased the virtual space we reserve for the kernel to 16 MB. That
should suffice for quite a while. The previous 2 MB were too tight when
building the kernel with debug info.
* mmu_init(): The way we were translating the BIOS' extended memory map to
our physical ranges arrays was broken. Small gaps between usable memory
ranges would be ignored and instead marked allocated. This worked fine for
the boot loader and during the early kernel initialization, but after the
VM has been fully set up it frees all physical ranges that have not been
claimed otherwise. So those ranges could be entered into the free pages
list and would be used later. This could possibly cause all kinds of weird
problems, probably including ACPI issues. Now we add only the actually
usable ranges to our list.
Kernel:
* vm_page_init(): The pages of the ranges between the usable physical memory
ranges are now marked PAGE_STATE_UNUSED, the allocated ranges
PAGE_STATE_WIRED.
* unmap_and_free_physical_pages(): Don't free pages marked as unused.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35726 a95241bf-73f2-0310-859d-f6bbb57e9c96
needs to be or'ed to the address specification), "uncached" is assumed.
* Set the memory type for the "BIOS" and "DMA" areas to write-back. Not sure if
that's correct, but that's what was effectively used on my machines before.
* Changed x86_set_mtrrs() and the CPU module hook to also set the default memory
type.
* Rewrote the MTRR computation once more:
- Now we know all used memory ranges, so we are free to extend used ranges
into unused ones in order to simplify them for MTRR setup.
- Leverage the subtractive properties of uncached and write-through ranges to
simplify ranges of any other type or of write-back type, respectively.
- Set the default memory type to write-back, so we don't need MTRRs for the
RAM ranges.
- If a new range intersects with an existing one, we no longer just fail.
Instead we use the strictest requirements implied by the ranges. This fixes
#5383.
Overall the new algorithm should be sufficient with far fewer MTRRs than before
(on my desktop machine 4 are used at maximum, while 8 didn't quite suffice
before). A drawback of the current implementation is that it doesn't deal with
the case of running out of MTRRs at all, which might result in some ranges
having weaker caching/memory ordering properties than requested.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35515 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Pulled the physical page mapping functions out of vm_translation_map into
a new interface VMPhysicalPageMapper.
* Renamed vm_translation_map to VMTranslationMap and made it a proper C++
class. The functions in the operations vector have become methods.
* Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper
as far as possible (without actually writing new code).
* Adjusted the x86 and the PPC specifics accordingly (untested for the
latter). For the other architectures the build is, I'm afraid, seriously
broken.
The next steps will modify and extend the VMTranslationMap interface, so that
it will be possible to fix the bugs in vm_unmap_page[s]() and employ
architecture specific optimizations.
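Paraphrased shape of the new interfaces (method lists abbreviated and types
simplified to standard integer types; this is not the actual Haiku header):

    #include <stdint.h>

    // The former C operations vector of vm_translation_map becomes virtual
    // methods on a proper class.
    struct VMTranslationMap {
        virtual ~VMTranslationMap() {}

        virtual int32_t Map(uintptr_t virtualAddress,
            uint64_t physicalAddress, uint32_t attributes) = 0;
        virtual int32_t Unmap(uintptr_t start, uintptr_t end) = 0;
        virtual int32_t Query(uintptr_t virtualAddress,
            uint64_t* _physicalAddress, uint32_t* _flags) = 0;
    };

    // The physical page mapping functions move into their own interface.
    struct VMPhysicalPageMapper {
        virtual ~VMPhysicalPageMapper() {}

        // Temporarily map a physical page so the kernel can access it.
        virtual int32_t GetPage(uint64_t physicalAddress,
            uintptr_t* _virtualAddress, void** _handle) = 0;
        virtual int32_t PutPage(uintptr_t virtualAddress, void* handle) = 0;
    };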
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
system_time_nsecs(), returning the system time in nanoseconds. The function
is only really implemented for x86. For the other architectures
system_time() * 1000 is returned.
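The fallback for those architectures is just the conversion described above
(return type simplified here; system_time() is the existing microsecond
clock):

    #include <stdint.h>

    int64_t system_time();          // microseconds since boot

    int64_t
    system_time_nsecs()
    {
        return system_time() * 1000;    // microseconds -> nanoseconds
    }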
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34543 a95241bf-73f2-0310-859d-f6bbb57e9c96
all MTRRs at once.
* Added a respective x86_set_mtrrs() kernel function.
* x86 CPU module:
- Implemented the new hook.
- Prefixed most debug output with the CPU index. Otherwise it gets quite
confusing with multiple CPUs.
- generic_init_mtrrs(): No longer clear all MTRRs if they are already
enabled. This lets us benefit from the BIOS's setup until we install our
own -- otherwise with caching disabled things are *really* slow.
* arch_vm.cpp: Completely rewrote the MTRR handling as the old one was not
only slow (O(2^n)), but also broken (resulting in incorrect setups (e.g.
with cachable ranges larger than requested)), and not working by design for
certain cases (subtractive setups intersecting ranges added later).
Now we maintain an array with the successfully set ranges. When a new range
is added, we recompute the complete MTRR setup as we need to. The new
algorithm analyzing the ranges has linear complexity and also handles range
base addresses with an alignment not matching the range size (e.g. a range
at address 0x1000 with size 0x2000) and joining of adjacent/overlapping
ranges of the same type.
This fixes the slow graphics on my 4 GB machine (though unfortunately the
8 MTRRs aren't enough to fully cover the complete frame buffer (about 35
pixel lines remain uncachable), but that can't be helped without rounding up
the frame buffer size, for which we don't have enough information). It might
also fix #1823.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34197 a95241bf-73f2-0310-859d-f6bbb57e9c96