Commit Graph

109 Commits

X512
8ca0f03d0c riscv64/smp: Implement multi-processor support
* Works under QEMU with -smp 1 and with 2+ CPUs
* Works on the SiFive Unmatched
* x86_64 EFI is not broken by the smp_boot_other_cpus change

Change-Id: I32ebc17913e46ed082be9ade8f56448bbf12f16e
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4705
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
2021-12-12 15:35:24 +00:00
Jérôme Duval
45872f7f9c kernel/vm: restrict permission changes on shared file-mapped areas
A protection_max attribute is added to VMArea.
A file opened read-only already can't be mapped shared read-write at the moment,
but such a mapping can later be changed to read-write with mprotect() or
set_area_protection(). When creating the VMArea, the actual maximum protection
is now stored in the area, so that it can be checked when needed.
This fixes a VM TODO.

Change-Id: I33b144c192034eeb059f1dede5dbef5af947280d
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3804
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
2021-03-23 17:40:31 +00:00
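
For illustration, a minimal user-space sketch of the restriction described in the commit above: a file opened read-only and mapped MAP_SHARED can no longer be upgraded to a writable mapping afterwards (error handling trimmed; the file path is taken from the command line rather than assumed):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char** argv)
    {
        if (argc < 2)
            return 1;

        size_t pageSize = sysconf(_SC_PAGESIZE);
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0)
            return 1;

        // Shared mapping of a file that was opened read-only.
        void* address = mmap(NULL, pageSize, PROT_READ, MAP_SHARED, fd, 0);
        if (address == MAP_FAILED)
            return 1;

        // With protection_max stored in the VMArea, upgrading the mapping
        // to read-write must now fail instead of being silently allowed.
        if (mprotect(address, pageSize, PROT_READ | PROT_WRITE) != 0)
            perror("mprotect");

        munmap(address, pageSize);
        close(fd);
        return 0;
    }
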
Adrien Destugues
a959262cd0 implement mlock(), munlock()
Change-Id: I2f04b8986d2ed32bb4d30d238d668e21a1505778
Reviewed-on: https://review.haiku-os.org/c/haiku/+/1991
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
2020-12-03 07:58:05 +00:00
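
A minimal sketch of the newly available calls: lock a buffer's pages into memory, work on them, then release the lock again (standard POSIX semantics assumed):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main()
    {
        const size_t size = 64 * 1024;
        void* buffer = malloc(size);
        if (buffer == NULL)
            return 1;

        // Pin the buffer's pages so they cannot be paged out ...
        if (mlock(buffer, size) != 0)
            return 1;

        memset(buffer, 0, size);  // work on memory that stays resident

        // ... and release the lock again when done.
        munlock(buffer, size);
        free(buffer);
        return 0;
    }
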
Michael Lotz
75a10a74e8 kernel/vm: Make vm_copy_area take page protections into account.
When copying an area with vm_copy_area only the new protection would be
applied and any possibly existing page protections on the source area
were ignored.

For areas with stricter area protection than page protection, this led
to faults when accessing the copy. In the opposite case it led to overly
relaxed protection. The only current user of vm_copy_area is
fork_team, which goes through all areas of the parent and copies them to
the new team. Hence page protections were ignored in all forked teams.

Remove the protection argument and instead always carry over the source
area protection and duplicate the page protections when present.

Also make sure to take the page protections into account for deciding
whether or not the copy is writable and therefore needs to have copy on
write semantics.

Change-Id: I52f295f2aaa66e31b4900b754343b3be9a19ba30
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3166
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-08-23 00:55:58 +00:00
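
The effect of the commit above is observable from user space: per-page protections set in the parent now carry over to the child on fork(). A hedged sketch (POSIX calls, error handling omitted):

    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        size_t pageSize = sysconf(_SC_PAGESIZE);

        // Two pages with area protection read+write ...
        char* pages = (char*)mmap(NULL, 2 * pageSize, PROT_READ | PROT_WRITE,
            MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

        // ... but with a stricter page protection on the second page.
        mprotect(pages + pageSize, pageSize, PROT_READ);

        if (fork() == 0) {
            // Child (forked team): with the fix the copied area keeps the
            // page protections, so this write faults just like in the parent.
            pages[pageSize] = 1;
            _exit(0);
        }

        wait(NULL);
        return 0;
    }
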
Michael Lotz
8e74e30784 kernel/vm: Add discard_address_range that discards pages.
Pages in the given range are unmapped and freed without getting written
back anywhere. It can be used whenever a caller does not care about the
data in the given range anymore and wants to reduce page pressure.

Change-Id: I8bcce68fab278efef710d3714677e1d463504a56
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2843
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-08-01 19:23:27 +00:00
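
The commit above names no public wrapper; a rough user-space analogue of "the data in this range no longer matters" is the advisory POSIX call below, shown purely for illustration:

    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        size_t pageSize = sysconf(_SC_PAGESIZE);
        size_t size = 256 * pageSize;

        char* scratch = (char*)mmap(NULL, size, PROT_READ | PROT_WRITE,
            MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (scratch == MAP_FAILED)
            return 1;

        // ... use scratch as temporary working memory ...

        // Advise the kernel that the contents are no longer needed; the
        // pages may be dropped without write-back, reducing page pressure.
        posix_madvise(scratch, size, POSIX_MADV_DONTNEED);

        munmap(scratch, size);
        return 0;
    }
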
Michael Lotz
31cee26cfe kernel: Whitespace cleanup only. 2020-06-13 23:24:27 +02:00
Michael Lotz
a6926d4287 kernel/vm: Introduce and use VMAddressSpace::AreaRangeIterator.
It iterates over all areas intersecting a given address range and
removes the need for manually skipping uninteresting initial areas. It
uses VMAddressSpace::FindClosestArea() to efficiently find the starting
area.

This speeds up the two iterations in unmap_address_range and one in
wait_if_address_range_is_wired and resolves a TODO in the latter hinting
at such a solution.

Change-Id: Iba1d39942db4e4b27e17706be194496f9d4279ed
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2841
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-05-30 02:29:41 +00:00
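
A hedged sketch of how such an iterator is typically used in the VM code; the exact method names (GetAreaRangeIterator(), Next()) are assumptions based on the commit message:

    // Hypothetical kernel-side sketch: visit all areas intersecting
    // [address, address + size) without skipping preceding areas by hand.
    VMAddressSpace::AreaRangeIterator it
        = addressSpace->GetAreaRangeIterator(address, size);
    while (VMArea* area = it.Next()) {
        // unmap the range, wait for wired ranges, etc.
    }
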
Michael Lotz
a626bdab77 kernel/vm: Remove linear search from _get_next_area_info.
This introduces VMAddressSpace::FindClosestArea() that can be used to
find the closest area to a given address in either direction. This is
now trivial and efficient since both kernel and user address spaces use
a binary search tree.

Using FindClosestArea(), getting multiple area infos is sped up
dramatically as it removes the need for a linear search from the first
area to the one given in the cookie on each successive invocation.

Change-Id: I227da87d915f6f3d3ef88bfeb6be5d4c97c3baaa
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2840
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-05-30 02:29:41 +00:00
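
The public API whose performance this improves is the area-info iteration loop, which previously degraded with the number of areas; for reference, a typical use of the Haiku user-space API:

    #include <stdio.h>
    #include <OS.h>

    int main()
    {
        area_info info;
        ssize_t cookie = 0;

        // Each call now locates the next area via FindClosestArea() instead
        // of walking linearly from the first area to the cookie position.
        while (get_next_area_info(0, &cookie, &info) == B_OK)  // 0: this team
            printf("%s: %p, %zu bytes\n", info.name, info.address, info.size);

        return 0;
    }
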
Michael Lotz
428bc69ab8 VMCache: Factor out a _FreePageRange method.
The code in the Resize and Rebase methods was identical except for the
iterator.

Change-Id: I9f6b3c2c09af0c26778215bd627fed030c4d46f1
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2835
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-05-30 01:47:40 +00:00
Michael Lotz
d6ddb118f3 kernel/vm: Whitespace cleanup only. 2020-05-10 23:55:25 +02:00
Michael Lotz
4e2b49bc0c kernel/vm: Implement swap adoption for cut_area middle case.
Rename MovePageRange to Adopt and group it with Resize/Rebase as it
covers the third, middle cut case.

Implement VMAnonymousCache::Adopt() to actually adopt swap pages. This
has to recreate swap blocks instead of taking them over from the source
cache as the cut offset or base offset between the caches may not be
swap block aligned. This means that adoption may fail due to memory
shortage in allocating the swap blocks.

For the middle cut case it is therefore now possible for the adoption to
fail, in which case the previous cache restore logic is applied. Since
the re-adoption of the pages from the second cache can fail for the same
reason, there is a slight chance that we can't restore and thus lose pages.
For now, just panic in such a case and add a TODO to free memory and
retry.

Change-Id: I9a661f00c8f03bbbea2fe6dee90371c68d7951e6
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2588
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-05-08 21:56:56 +00:00
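
The middle-cut case handled above is reachable from user space by unmapping the interior of an existing mapping, which splits the area in two; for swap-backed anonymous memory this is where VMAnonymousCache::Adopt() now takes over any swapped-out pages. A minimal sketch:

    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        size_t pageSize = sysconf(_SC_PAGESIZE);

        // One anonymous area of 16 pages ...
        char* base = (char*)mmap(NULL, 16 * pageSize, PROT_READ | PROT_WRITE,
            MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (base == MAP_FAILED)
            return 1;

        // ... cut in the middle: pages [0, 4) stay in the first area,
        // [4, 8) are discarded, and [8, 16) move into a second area.
        munmap(base + 4 * pageSize, 4 * pageSize);

        munmap(base, 16 * pageSize);
        return 0;
    }
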
Hamish Morrison
c6657ffe02 Resize caches in all cases when cutting areas
* Adds VMCache::MovePageRange() and VMCache::Rebase() to facilitate
  this.

Applied on top of hrev45098 and rebased with the hrev45564 page_num_t to
off_t change included.

Change-Id: Ie61bf43696783e3376fb4144ddced3781aa092ba
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2581
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
2020-05-08 21:56:56 +00:00
Augustin Cavalier
39665db167 kernel/vm: Inline the VMArea::name string.
B_OS_NAME_LENGTH is 32, char* is 8 (on x64), and this structure
has quite a lot of pointers in it so it is not like we really
needed to save those 24 bytes. Hitting malloc() in here is not
so great, especially because we usually have B_DONT_LOCK_KERNEL_SPACE
turned on, so just inline the name and avoid the allocation.

Change-Id: I5c94955324cfda08972895826b61748c3b69096a
2019-07-13 13:42:49 -04:00
Augustin Cavalier
513403d420 Revert team and thread changes for COMPAT_MODE (hrev52010 & hrev52011).
This reverts commit c558f9c8fe.
This reverts commit 44f24718b1.
This reverts commit a69cb33030.
This reverts commit 951182620e.

There have been multiple reports that these changes break mounting NTFS partitions
(on all systems, see #14204) and shutting down (on certain systems, see #12405).
Until they can be fixed, they are being backed out.
2018-06-14 22:25:06 -04:00
Jérôme Duval
951182620e kernel/x86_64: setup a new team in compatibility mode.
* In load_image_internal(), elf32_load_user_image() checks whether the binary
format requires the compatibility mode.
* We then set the THREAD_FLAGS_COMPAT_MODE flag and the address space size.
* The compatibility-mode runtime_loader is hardcoded to x86/runtime_loader.
* If needed, the 64-bit flat_args structure is converted in-place to its 32-bit
layout.
* A 32-bit flat_args isn't handled yet (the case of a 32-bit team exec()ing a
64-bit binary).

Change-Id: Ia6a066bde8d1774d85de29b48dc500e27ae9668f
2018-06-12 17:56:55 +02:00
Michael Lotz
321372e3ef kernel: Make size argument to create_area_etc() size_t.
It was limited to a uint32 and could, for example, be overflowed by the
slab MemoryManager, which uses size_t on 64-bit systems.

This aligns the signature with create_area() that already uses size_t
for the size argument.

Note that the function is currently private, so the impact should be
limited.
2018-04-04 00:07:59 +02:00
Augustin Cavalier
bf77c15232 kernel/vm: Correct virtual function declarations.
The base VMCache class changed to the generic_ types with their
introduction in 2011 (435c43f591),
but these classes were never properly adapted. These functions should not
be called here (they panic() -- but the base class only returns B_ERROR,
so that is a difference at least.)

Found by Clang's -Woverloaded-virtual.
2017-12-02 21:42:50 -05:00
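
Not the actual VMCache code, but a minimal example of the kind of mismatch -Woverloaded-virtual catches: a derived class that kept the old parameter types hides the base virtual instead of overriding it:

    #include <stdint.h>

    typedef uint64_t generic_addr_t;  // stand-ins for the generic_ types
    typedef uint64_t generic_size_t;

    struct Cache {
        // The base class was switched to the generic_ types ...
        virtual int Read(generic_addr_t address, generic_size_t length)
        {
            return -1;  // roughly "return B_ERROR" in the real code
        }
    };

    struct VnodeCache : Cache {
        // ... but the old signature was left behind: this does not override,
        // it merely hides Cache::Read() -- exactly what the warning flags.
        virtual int Read(uint32_t address, uint32_t length)
        {
            return 0;
        }
    };
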
Augustin Cavalier
30c9d3c0cc kernel: Correct class/struct mixups.
Almost certainly harmless. Spotted by Clang.
2017-12-01 20:27:15 -05:00
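
For illustration, the pattern in question (with a made-up type name): a forward declaration whose tag disagrees with the definition, which Clang flags via -Wmismatched-tags even though it is harmless in practice:

    // Forward-declared as a struct in one header ...
    struct PageList;

    // ... but defined as a class elsewhere: legal and harmless, yet
    // inconsistent, so the tags were made to match.
    class PageList {
    public:
        int count;
    };
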
Ingo Weinhold
078a965f65 vm_soft_fault(): Avoid deadlock waiting for wired ranges
* VMArea::AddWaiterIfWired(): Replace the ignoreRange argument by a
flags argument and introduce (currently only) flag
IGNORE_WRITE_WIRED_RANGES. If specified, ranges wired for writing
are ignored. Ignoring just a single specified range doesn't cut it
in vm_soft_fault(), and there aren't any other users of that feature.
* vm_soft_fault(): When having to unmap a page of a lower cache, this
page cannot be wired for writing. So we can safely ignore all
write-wired ranges, instead of just our own. We even have to do that
in case there's another thread that concurrently tries to write-wire
the same page, since otherwise we'd deadlock waiting for each other.
2014-10-29 12:37:25 +01:00
Ingo Weinhold
9da590f73e Add vm_page_free_etc()
It additionally gets a vm_page_reservation* argument. If not NULL, the
page count of the reservation is incremented for the freed page.
2014-10-29 02:36:08 +01:00
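
A hedged kernel-side sketch of the resulting call; the parameter order is inferred from the existing vm_page_free(cache, page) and is an assumption:

    // With `VMCache* cache` and `vm_page* page` from the caller's context:
    // free the page and credit it back to an active reservation, so a later
    // allocation from that reservation is guaranteed to succeed.
    vm_page_free_etc(cache, page, &reservation);
        // passing NULL behaves like plain vm_page_free(cache, page)
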
Pawel Dziepak
d0f2d8282f Merge branch 'scheduler'
Conflicts:
	build/jam/packages/Haiku
	headers/os/kernel/OS.h
	headers/os/opengl/GLRenderer.h
	headers/private/shared/cpu_type.h
	src/add-ons/kernel/drivers/power/acpi_battery/acpi_battery.h
	src/bin/sysinfo.cpp
	src/bin/top.c
	src/system/kernel/arch/x86/arch_system_info.cpp
	src/system/kernel/port.cpp
2014-01-17 04:06:15 +01:00
Pawel Dziepak
d02aaee17e kernel, libroot: Add more memory info in system_info
system_info now contains all information previously available only
through __get_system_info_etc(B_MEMORY_INFO, ...).
2013-12-16 04:53:46 +01:00
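
For reference, a sketch of reading the now-consolidated memory statistics through the public call; the exact set of fields shown is an assumption based on current headers:

    #include <stdio.h>
    #include <OS.h>

    int main()
    {
        system_info info;
        if (get_system_info(&info) != B_OK)
            return 1;

        // Memory figures that previously required
        // __get_system_info_etc(B_MEMORY_INFO, ...).
        printf("pages: %" B_PRIu64 " used / %" B_PRIu64 " total, "
            "%" B_PRIu64 " cached\n",
            info.used_pages, info.max_pages, info.cached_pages);
        return 0;
    }
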
Ingo Weinhold
7b83ce1142 Add KDL command "mapping"
* VMTranslationMap:
  - Add DebugPrintMappingInfo(): Given a virtual address it is supposed
    to print the paging structure information for that address. To be
    implemented by derived classes.
  - Add DebugGetReverseMappingInfo(): Given a physical address it is
    supposed to find all virtual addresses mapped to it. To be
    implemented by derived classes.
* X86VMTranslationMapPAE: Implement the new methods
  DebugPrintMappingInfo() and DebugGetReverseMappingInfo().
* Add KDL command "mapping". It supports both virtual address lookups
  and reverse lookups.
2013-12-05 05:13:21 +01:00
Ingo Weinhold
7bf85edf58 Allow disabling ASLR via DISABLE_ASLR environment variable
* VMAddressSpace: Add randomizingEnabled property.
* VMUserAddressSpace: Randomize addresses only when randomizingEnabled
  property is set.
* create_team_arg(): Check whether the team's environment contains
  "DISABLE_ASLR=1". Set the team's address space property
  randomizingEnabled accordingly in load_image_internal() and
  exec_team().
2013-12-01 02:51:50 +01:00
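
A small sketch of launching a team with randomization disabled by placing the variable in the child's environment; the target binary here is just a placeholder:

    #include <unistd.h>

    int main()
    {
        const char* argv[] = { "/bin/listarea", NULL };
        const char* envp[] = { "DISABLE_ASLR=1", NULL };

        // create_team_arg() in the new team sees DISABLE_ASLR=1 and turns
        // off address space randomization for it.
        execve(argv[0], (char* const*)argv, (char* const*)envp);
        return 1;  // only reached if execve() failed
    }
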
Ingo Weinhold
e5f6591382 VM: vm_memset_physical(): Correct length parameter type 2013-11-11 22:27:52 +01:00
Pawel Dziepak
73ad2473e7 Remove remaining unnecessary 'volatile' qualifiers 2013-11-06 00:03:07 +01:00
Pawel Dziepak
8614737f71 elf: restore correct region protection after relocation 2013-04-16 03:44:38 +02:00
Pawel Dziepak
966f207668 x86: enable data execution prevention
Set the execute-disable bit for any page that belongs to an area with neither
B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set.

In order to take advantage of the NX bit in 32-bit protected mode, PAE must be
enabled. Thus, from now on it is also enabled when the CPU supports the NX bit.

vm_page_fault() takes an additional argument which indicates whether the page
fault was caused by an illegal instruction fetch.
2013-04-04 15:22:23 +02:00
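
From the application side this means memory that will hold generated code must explicitly be created executable; a hedged sketch using the public area API:

    #include <OS.h>

    int main()
    {
        // Without B_EXECUTE_AREA the pages get the execute-disable bit set
        // and jumping into them faults; request execute permission up front.
        void* code = NULL;
        area_id area = create_area("generated code", &code, B_ANY_ADDRESS,
            B_PAGE_SIZE, B_NO_LOCK,
            B_READ_AREA | B_WRITE_AREA | B_EXECUTE_AREA);
        if (area < 0)
            return 1;

        // ... emit machine code into `code` and call it ...

        delete_area(area);
        return 0;
    }
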
Hamish Morrison
d1f280c805 Add support for pthread_attr_get/setguardsize()
* Added the aforementioned functions.
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range
  only.
2012-12-28 18:02:58 +00:00
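
Typical use of the newly supported attribute, reserving a larger guard region below the thread stack (standard POSIX usage):

    #include <pthread.h>
    #include <stdio.h>

    static void* worker(void* arg)
    {
        return NULL;
    }

    int main()
    {
        pthread_attr_t attributes;
        pthread_attr_init(&attributes);

        // Ask for a 64 KiB guard region; create_area_etc() now receives this
        // as its guard size parameter when the stack area is created.
        pthread_attr_setguardsize(&attributes, 64 * 1024);

        size_t guardSize;
        pthread_attr_getguardsize(&attributes, &guardSize);
        printf("guard size: %zu\n", guardSize);

        pthread_t thread;
        pthread_create(&thread, &attributes, worker, NULL);
        pthread_join(thread, NULL);

        pthread_attr_destroy(&attributes);
        return 0;
    }
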
Alex Smith
6e2f6d1ace Changed cookie type for get_next_area_info() to ssize_t.
The cookie is used to store the base address of the area that was just
visited. On 64-bit systems, int32 is not sufficient. Therefore, changed
to ssize_t which retains compatibility on x86 while expanding to a
sufficient size on x86_64.
2012-07-29 09:31:14 +01:00
Michael Lotz
7418dbd908 Introduce debug page-wise kernel area protection functions.
This adds a pair of functions, vm_prepare_kernel_area_debug_protection()
and vm_set_kernel_area_debug_protection(), to set a kernel area up for
page-wise protection and to actually protect individual pages,
respectively.

It was already possible to read and write protect full areas via area
protection flags and not mapping any actual pages. For areas that
actually have mapped pages this doesn't work, however, as no fault at
which the permissions could be checked is generated on access.

These new functions use the debug helpers of the translation map to mark
individual pages as non-present without unmapping them. This allows them
to be "protected", i.e. causing a fault on read and write access. As they
aren't actually unmapped they can later be marked present again.

Note that these are debug helpers with quite a few restrictions, as
described in the comment above the functions, and they are only useful for
some very specific and constrained use cases.
2011-12-03 19:49:18 +01:00
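
A hedged kernel-side sketch of how the pair is meant to be used; the exact signatures are assumptions derived from the description above:

    // Hypothetical usage inside the kernel, with `area` and `address` from
    // the surrounding context: prepare the area once, then toggle protection
    // of single pages to catch stray reads and writes.
    void* cookie;
    if (vm_prepare_kernel_area_debug_protection(area, &cookie) == B_OK) {
        // Mark one page non-present: any access now faults ...
        vm_set_kernel_area_debug_protection(cookie, address, B_PAGE_SIZE, 0);

        // ... and make it accessible again later without remapping.
        vm_set_kernel_area_debug_protection(cookie, address, B_PAGE_SIZE,
            B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
    }
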
Michael Lotz
643cf35ee8 Add debug helper functions to mark pages present.
They can be used to mark pages as present/non-present without actually
unmapping them. Marking pages as non-present causes every access to
fault. We can use that for debugging as it allows us to "read protect"
individual kernel pages.
2011-12-03 19:45:31 +01:00
Michael Lotz
2cdd33d4ff Add an assert to ensure the wired count doesn't wrap. 2011-11-13 22:02:36 +01:00
Michael Lotz
90c6930ebb Add VM page allocation tracking similar to what happens in the slab already.
Introduces "page_allocation_infos" and "page_allocations_per_caller" KDL
commands.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43190 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-11-04 18:07:07 +00:00
Ingo Weinhold
f8154d172d mmlr (distracted) + bonefish:
* Turn VMCache::consumers C list into a DoublyLinkedList.
* Use object caches for the different VMCache types and the VMCacheRefs.
  The purpose is to reduce slab area fragmentation.
* Requires the introduction of a pure virtual VMCache::DeleteObject()
  method, implemented in the derived classes.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43133 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-11-02 21:14:11 +00:00
Rene Gollent
e8d73efc9c Should have been part of previous commit.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42130 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-06-12 20:18:01 +00:00
Ingo Weinhold
4535495d80 Merged the signals branch into trunk, with these changes:
* The team and thread kernel structures have been renamed to Team and Thread
  respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
  private kernel headers are included that are no longer C compatible.

Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-01-10 21:54:38 +00:00
Ingo Weinhold
a0d93d1416 * VMCache::Unlock(): Renamed local variable consumerLocked to avoid shadowing
the parameter (CID 5329).
* _MergeWithOnlyConsumer(): Removed the somewhat weird consumerLocked
  parameter. The caller can unlock itself, if desired. Improves Unlock()
  readability.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40084 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-01-03 00:44:40 +00:00
Ingo Weinhold
b944766870 * Moved the vm_page initialization from vm_page.cpp:vm_page_init() to the new
vm_page::Init().
* Made vm_page::wired_count private and added accessor methods.
* Added VMCache::fWiredPagesCount (the number of wired pages the cache
  contains) and accessor methods.
* Made more use of vm_page::IsMapped().
* vm_copy_on_write_area(): Added a vm_page_reservation* parameter that can be
  used to request special handling for wired pages. If given, the wired pages
  are replaced by copies and the original pages are moved to the upper cache.
* vm_copy_area():
  - We don't need to do any wired ranges handling, if the source area is a
    B_SHARED_AREA, since we don't touch the area's mappings in this case.
  - We no longer wait for wired ranges of the concerned areas to disappear.
    Instead we use the new vm_copy_on_write_area() feature and just let it
    copy the wired pages. This fixes #6288, an issue introduced with the use
    of user mutexes in libroot: When executing multiple concurrent fork()s all
    but the first one would wait on the fork mutex, which (being a user mutex)
    would wire a page that the vm_copy_area() of the first fork() would wait
    for.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37460 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-07-10 15:08:13 +00:00
Fredrik Holmqvist
e5dca54f1a r37259 fails to compile on one of my systems without this header. Checking out a fresh tree and reconfiguring didn't help, so I don't really know why.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37263 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-26 17:24:11 +00:00
Ingo Weinhold
b46540452a Added vm_page_max_address() which returns the greatest address of accessible
physical memory.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37230 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-23 13:52:32 +00:00
Ingo Weinhold
13638944db Removed never read VMCache::scan_skip.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37195 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-21 15:10:37 +00:00
Ingo Weinhold
0d5ab7a14d Moved duplicate code from the VMTranslationMap subclasses' UnmapPage() and
ClearAccessedAndModified() implementations into helper methods PageUnmapped()
and UnaccessedPageUnmapped() in the base class.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37187 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-21 13:32:33 +00:00
Ingo Weinhold
c955359cb6 Added vm_available_not_needed_memory_debug(), a
vm_available_not_needed_memory() version that can be called from within the
kernel debugger.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37167 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-18 20:57:05 +00:00
Ingo Weinhold
f8e263c184 Slipped by in r37138: Added VMCache::Dump() and removed GetLock().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37141 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-15 00:07:18 +00:00
Ingo Weinhold
377ecfe797 * Renamed cache_type_to_string() to vm_cache_type_to_string() and made it
kernel private.
* Moved dumping code from dump_cache() to new VMCache::Dump().
* Override VMCache::Dump() in VMVnodeCache to also print the vnode.
* Removed no longer needed VMCache::GetLock().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37138 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-14 23:57:00 +00:00
Ingo Weinhold
a8ad734f1c * Introduced structures {virtual,physical}_address_restrictions, which specify
restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
  - Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not
    taken into account.
  - Takes a physical_address_restrictions instead of base/limit and also
    supports alignment and boundary restrictions, now.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/
  ReserveAddressRange() take a virtual_address_restrictions parameter, now. They
  also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Take
  {virtual,physical}_address_restrictions parameters, now.
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
  - Fixed potential overflows of uint32 when initializing from device node
    attributes.
  - Fixed bounce buffer creation TODOs: By using create_area_etc() with the
    new restrictions parameters we can directly support physical high address,
    boundary, and alignment.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-14 16:25:14 +00:00
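
A hedged sketch of filling in the new restriction structures for a DMA-style allocation; the field names are assumptions based on the commit text:

    // Kernel-side sketch: describe "any kernel address, physically below
    // 4 GiB, 64 KiB aligned, not crossing a 128 KiB boundary" -- the kind of
    // constraints the DMAResources bounce buffers need.
    virtual_address_restrictions virtualRestrictions = {};
    virtualRestrictions.address_specification = B_ANY_KERNEL_ADDRESS;

    physical_address_restrictions physicalRestrictions = {};
    physicalRestrictions.high_address = 4ULL * 1024 * 1024 * 1024;
    physicalRestrictions.alignment = 64 * 1024;
    physicalRestrictions.boundary = 128 * 1024;

    // Both are then passed to create_area_etc() / vm_create_anonymous_area(),
    // which hand the physical part down to vm_page_allocate_page_run().
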
Ingo Weinhold
1d26c7248f vm_page_allocate_page_run(): Added parameter "limit", specifying the upper
physical address limit for the page run to allocate.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37086 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-10 17:30:49 +00:00
Ingo Weinhold
641b3c82df Renamed allocate_early_physical_page() to vm_allocate_early_physical_page()
and made it public.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37072 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-09 21:21:18 +00:00
Ingo Weinhold
2ea7b17cf3 * vm_allocate_early(): Replace "bool blockAlign" parameter by a more flexible
"addr_t aligmnent".
* X86PagingMethod32Bit::PhysicalPageSlotPool::InitInitial(),
  generic_vm_physical_page_mapper_init(): Use vm_allocate_early()'s alignment
  feature instead of aligning by hand.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37070 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-09 11:15:43 +00:00