[SCM] GNU Mach branch, master, updated. v1.7-46-ge29b779
From: Richard Braun
Subject: [SCM] GNU Mach branch, master, updated. v1.7-46-ge29b779
Date: Tue, 20 Sep 2016 22:41:57 +0000 (UTC)
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU Mach".
The branch, master has been updated
via e29b7797dc2aebcfb00fc08201c31ef0caf5f4d3 (commit)
via 6923672268ae8e51e3cf303314fca196dc369e19 (commit)
via 39fb13e762817b814aa0fc6e49305b5c0fd0c083 (commit)
via 5d1258459ad618481a4f239e8ce020bdecda1d3f (commit)
via 783ad37f65384994dfa5387ab3847a8a4d77b90b (commit)
via 38aca37c00548f9b31bf17e74ab4a36c73521782 (commit)
via 66a878640573dd9101e3915db44408b661220038 (commit)
via 8322083864500f5726f4f04f80427acee4b52c9a (commit)
from c78fe96446794f71a2db7d7e3d43cb15658590a3 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit e29b7797dc2aebcfb00fc08201c31ef0caf5f4d3
Author: Richard Braun <address@hidden>
Date: Wed Sep 21 00:36:22 2016 +0200
Enable high memory
* i386/i386at/biosmem.c (biosmem_setup): Load the HIGHMEM segment if
present.
(biosmem_free_usable): Report high memory as usable.
* vm/vm_page.c (vm_page_boot_table_size, vm_page_table_size,
vm_page_mem_size, vm_page_mem_free): Scan all segments.
* vm/vm_resident.c (vm_page_grab): Describe allocation strategy
with regard to the HIGHMEM segment.
commit 6923672268ae8e51e3cf303314fca196dc369e19
Author: Richard Braun <address@hidden>
Date: Wed Sep 21 00:35:26 2016 +0200
Update device drivers for highmem support
Unconditionally use bounce buffers for now.
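A minimal sketch of the bounce-buffer pattern, for illustration only (the helper and its use of kalloc/memcpy are assumptions, not the actual glue code): data that may live in HIGHMEM is copied into a directly-mapped kernel buffer before being handed to the device.

    #include <kern/kalloc.h>
    #include <string.h>

    /* Illustrative helper: copy a payload that may reside in HIGHMEM into
       a directly-mapped buffer the device can address.  The caller hands
       the copy to the hardware and kfree()s it once the I/O completes. */
    static void *
    bounce_copy(const void *data, vm_size_t len)
    {
        void *bounce = (void *) kalloc(len);

        if (bounce != NULL)
            memcpy(bounce, data, len);

        return bounce;
    }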
* linux/dev/glue/net.c (device_write): Unconditionally use a
bounce buffer.
* xen/block.c (device_write): Likewise.
* xen/net.c: Include <device/ds_routines.h>.
(device_write): Unconditionally use a bounce buffer.
commit 39fb13e762817b814aa0fc6e49305b5c0fd0c083
Author: Richard Braun <address@hidden>
Date: Wed Sep 21 00:33:35 2016 +0200
Update Linux block layer glue code
The Linux block layer glue code needs to access page nodes through the
appropriate interface now that they have been redefined as a struct list.
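A minimal sketch of what that migration looks like, assuming the list_init, list_insert_tail, list_empty, list_first_entry and list_remove primitives from <kern/list.h> and the page `node' member mentioned below; the function itself is illustrative, not the actual glue code.

    #include <kern/list.h>

    struct temp_data {
        struct list pages;              /* formerly a page queue */
    };

    /* Illustrative only: pages are linked through their `node' member. */
    static void
    temp_data_example(struct temp_data *d, struct vm_page *m)
    {
        struct vm_page *first;

        list_init(&d->pages);
        list_insert_tail(&d->pages, &m->node);

        if (!list_empty(&d->pages)) {
            first = list_first_entry(&d->pages, struct vm_page, node);
            list_remove(&first->node);
        }
    }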
* linux/dev/glue/block.c: Include <kern/list.h>.
(struct temp_data): Define member `pages' as a struct list.
(alloc_buffer): Update to use list_xxx functions.
(free_buffer, INIT_DATA, device_open, device_read): Likewise.
commit 5d1258459ad618481a4f239e8ce020bdecda1d3f
Author: Richard Braun <address@hidden>
Date: Tue Sep 20 23:44:23 2016 +0200
Rework pageout to handle multiple segments
As we're about to use a new HIGHMEM segment, potentially much larger
than the existing DMA and DIRECTMAP ones, it's now compulsory to make
the pageout daemon aware of those segments.
And while we're at it, let's fix some of the defects that have been
plaguing pageout forever, such as throttling and the handling of
internal versus external pages (this commit notably introduces a
hardcoded policy in which as many external pages as possible are
selected before internal pages are considered).
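As an illustration of that selection policy (the function and its helper are made up for this sketch, not the actual vm_page.c code): a pass over a segment's inactive pages first pulls pages from external objects only, and falls back to internal pages when no external page is left.

    /* Illustrative sketch only: prefer external (pager-backed) pages. */
    static struct vm_page *
    seg_pull_page_for_eviction(struct vm_page_seg *seg)
    {
        struct vm_page *page;

        /* First pass: pages belonging to external objects only. */
        page = seg_pull_inactive_page(seg, TRUE);   /* external_only */

        if (page != NULL)
            return page;

        /* Second pass: accept internal pages as well. */
        return seg_pull_inactive_page(seg, FALSE);
    }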
* kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release.
* vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>.
(VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM,
VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM,
VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW,
VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM,
VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES,
VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New macros.
(struct vm_page_queue): New type.
(struct vm_page_seg): Add new members `min_free_pages', `low_free_pages',
`high_free_pages', `active_pages', `nr_active_pages', `high_active_pages',
`inactive_pages', `nr_inactive_pages'.
(vm_page_alloc_paused): New variable.
(vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New
functions.
(vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout
daemon as appropriate.
(vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove,
vm_page_queue_first, vm_page_seg_get, vm_page_seg_index,
vm_page_seg_compute_pageout_thresholds): New functions.
(vm_page_seg_init): Initialize the new segment members.
(vm_page_seg_add_active_page, vm_page_seg_remove_active_page,
vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page,
vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page,
vm_page_seg_pull_cache_page): New functions.
(vm_page_seg_min_page_available, vm_page_seg_page_available,
vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock,
vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict,
vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive,
vm_page_lookup_seg, vm_page_check): New functions.
(vm_page_alloc_pa): Handle allocation failure from VM privileged thread.
(vm_page_info_all): Display additional segment properties.
(vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate,
vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments.
(vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance,
vm_page_balance_once, vm_page_balance, vm_page_evict_once): New functions.
(VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros.
(vm_page_evict, vm_page_refill_inactive): New functions.
* vm/vm_page.h: Include <kern/list.h>.
(struct vm_page): Remove member `pageq', reuse the `node' member instead,
move the `listq' and `next' members above `vm_page_header'.
(VM_PAGE_CHECK): Define as an alias to vm_page_check.
(vm_page_check): New function declaration.
(vm_page_queue_fictitious, vm_page_queue_active, vm_page_queue_inactive,
vm_page_free_target, vm_page_free_min, vm_page_inactive_target,
vm_page_free_reserved, vm_page_free_wanted): Remove extern declarations.
(vm_page_external_pagedout): New extern declaration.
(vm_page_release): Update declaration.
(VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove.
(VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros.
(VM_PT_KERNEL): Update value.
(vm_page_queues_remove, vm_page_balance, vm_page_evict,
vm_page_refill_inactive): New function declarations.
* vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN,
VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX,
VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN,
VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL,
VM_PAGEOUT_RESERVED_REALLY): Remove macros.
(vm_pageout_reserved_internal, vm_pageout_reserved_really,
vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait,
vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max,
vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock,
vm_pageout_inactive_busy, vm_pageout_inactive_absent,
vm_pageout_inactive_used, vm_pageout_inactive_clean,
vm_pageout_inactive_dirty, vm_pageout_inactive_double,
vm_pageout_inactive_cleaned_external): Remove variables.
(vm_pageout_requested, vm_pageout_continue): New variables.
(vm_pageout_setup): Wait for page allocation to succeed instead of
falling back to flush, update double paging protocol with caller,
add pageout throttling setup.
(vm_pageout_scan): Rewrite to use the new vm_page balancing,
eviction and inactive queue refill functions.
(vm_pageout_scan_continue, vm_pageout_continue): Remove functions.
(vm_pageout): Rewrite.
(vm_pageout_start, vm_pageout_resume): New functions.
* vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue): Remove
function declarations.
(vm_pageout_start, vm_pageout_resume): New function declarations.
* vm/vm_resident.c: Include <kern/list.h>.
(vm_page_queue_fictitious): Define as a struct list.
(vm_page_free_wanted, vm_page_external_count, vm_page_free_avail,
vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target,
vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved):
Remove variables.
(vm_page_external_pagedout): New variable.
(vm_page_bootstrap): Don't initialize removed variable, update
initialization of vm_page_queue_fictitious.
(vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate.
(vm_page_remove): Likewise.
(vm_page_grab_fictitious): Update to use list_xxx functions.
(vm_page_release_fictitious): Likewise.
(vm_page_grab): Remove pageout related code.
(vm_page_release): Add `laundry' and `external' parameters for
pageout throttling.
(vm_page_grab_contig): Remove pageout related code.
(vm_page_free_contig): Likewise.
(vm_page_free): Remove pageout related code, update call to
vm_page_release.
(vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate,
vm_page_activate): Move to vm/vm_page.c.
commit 783ad37f65384994dfa5387ab3847a8a4d77b90b
Author: Richard Braun <address@hidden>
Date: Tue Sep 20 22:59:42 2016 +0200
Redefine what an external page is
Instead of a "page considered external", which apparently takes into
account whether a page is dirty or not, redefine this property to
reliably mean "is in an external object".
This commit mostly deals with the impact of this change on the page
allocation interface.
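A minimal sketch of the new meaning (illustrative, not the actual vm_resident.c code, and assuming the object's `internal' flag): the `external' bit of a page now simply mirrors whether the object it is inserted into is backed by an external pager, regardless of the page's dirty state.

    /* Illustrative only: decided at insertion time, never by dirtiness. */
    static void
    mark_external(struct vm_page *m, const struct vm_object *object)
    {
        m->external = object->internal ? FALSE : TRUE;
    }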
* i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to
vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of
vm_page_grab_contig.
(kmem_pagefree_physmem): Use vm_page_release instead of
vm_page_free_contig.
* linux/dev/glue/block.c (alloc_buffer, device_read): Update call
to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and
vm_page_convert.
* vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab.
* vm/vm_page.h (struct vm_page): Remove `extcounted' member.
(vm_page_external_limit, vm_page_external_count): Remove extern
declarations.
(vm_page_convert, vm_page_grab): Update declarations.
(vm_page_release, vm_page_grab_phys_addr): New function declarations.
* vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro.
(VM_PAGE_EXTERNAL_TARGET): Likewise.
(vm_page_external_target): Remove variable.
(vm_pageout_scan): Remove specific handling of external pages.
(vm_pageout): Don't set vm_page_external_limit and
vm_page_external_target.
* vm/vm_resident.c (vm_page_external_limit): Remove variable.
(vm_page_insert, vm_page_replace, vm_page_remove): Update external
page tracking.
(vm_page_convert): Remove `external' parameter.
(vm_page_grab): Likewise. Remove specific handling of external pages.
(vm_page_grab_phys_addr): Update call to vm_page_grab.
(vm_page_release): Remove `external' parameter and remove specific
handling of external pages.
(vm_page_wait): Remove specific handling of external pages.
(vm_page_alloc): Update call to vm_page_grab.
(vm_page_free): Update call to vm_page_release.
* xen/block.c (device_read): Update call to vm_page_grab.
* xen/net.c (device_write): Likewise.
commit 38aca37c00548f9b31bf17e74ab4a36c73521782
Author: Richard Braun <address@hidden>
Date: Tue Sep 20 22:11:24 2016 +0200
Replace vm_offset_t with phys_addr_t where appropriate
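For illustration only (these are hypothetical definitions, not GNU Mach's actual ones): on i386 with PAE enabled, a physical address can be wider than the 32-bit virtual address space, which is why physical addresses get a type distinct from vm_offset_t.

    /* Hypothetical definitions, for illustration only. */
    typedef unsigned long      vm_offset_t;   /* virtual address, pointer-sized    */
    typedef unsigned long long phys_addr_t;   /* physical address, wider under PAE */

    /* Returning a physical address through vm_offset_t would silently
       truncate the upper bits on a 32-bit kernel, so a routine such as
       kvtophys is declared as:

           phys_addr_t kvtophys(vm_offset_t addr);                          */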
* i386/i386/phys.c (pmap_zero_page, pmap_copy_page, copy_to_phys,
copy_from_phys, kvtophys): Use the phys_addr_t type for physical
addresses.
* i386/intel/pmap.c (pmap_map, pmap_map_bd, pmap_destroy,
pmap_remove_range, pmap_page_protect, pmap_enter, pmap_extract,
pmap_collect, phys_attribute_clear, phys_attribute_test,
pmap_clear_modify, pmap_is_modified, pmap_clear_reference,
pmap_is_referenced): Likewise.
* i386/intel/pmap.h (pt_entry_t): Unconditionally define as a
phys_addr_t.
(pmap_zero_page, pmap_copy_page, kvtophys): Use the phys_addr_t
type for physical addresses.
* vm/pmap.h (pmap_enter, pmap_page_protect, pmap_clear_reference,
pmap_is_referenced, pmap_clear_modify, pmap_is_modified,
pmap_extract, pmap_map_bd): Likewise.
* vm/vm_page.h (vm_page_fictitious_addr): Declare as a phys_addr_t.
* vm/vm_resident.c (vm_page_fictitious_addr): Likewise.
(vm_page_grab_phys_addr): Change return type to phys_addr_t.
commit 66a878640573dd9101e3915db44408b661220038
Author: Richard Braun <address@hidden>
Date: Tue Sep 20 21:34:07 2016 +0200
Remove phys_first_addr and phys_last_addr global variables
The old assumption that all physical memory is directly mapped in
kernel space is about to go away. Those variables are directly linked
to that assumption.
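As an illustration of the replacement pattern (the parameter type and the exact vm_page_lookup_pa signature are assumptions here): checks that used to compare an address against phys_last_addr now ask the vm_page module whether the address belongs to a known segment.

    /* Illustrative sketch of the rewritten check, not the actual code. */
    static inline boolean_t
    valid_page(phys_addr_t addr)
    {
        return vm_page_lookup_pa(addr) != NULL;
    }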
* i386/i386/model_dep.h (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT
instead of phys_last_addr.
(pmap_copy_page, copy_to_phys, copy_from_phys): Likewise.
* i386/i386/trap.c (user_trap): Remove check against phys_last_addr.
* i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set
phys_last_addr.
* i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if
a physical address references physical memory.
* i386/i386at/model_dep.c (phys_first_addr): Remove variable.
(phys_last_addr): Likewise.
(pmap_free_pages, pmap_valid_page): Remove functions.
* i386/intel/pmap.c: Include i386at/biosmem.h.
(pa_index): Turn into an alias for vm_page_table_index.
(pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr
as appropriate.
(pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr
and phys_last_addr to obtain the number of physical pages.
(pmap_verify_free): Remove function.
(valid_page): Turn this macro into an inline function and rewrite
using vm_page_lookup_pa.
(pmap_page_table_page_alloc): Build the pmap VM object using
vm_page_table_size to determine its size.
(pmap_remove_range, pmap_page_protect, phys_attribute_clear,
phys_attribute_test): Turn page indexes into unsigned long integers.
(pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or
biosmem_directmap_end to determine if a physical address references
physical memory.
* i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of
phys_last_addr to obtain the number of physical pages.
* kern/startup.c (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the
appropriate segment selector instead of phys_last_addr to determine
where high memory starts.
* vm/pmap.h: Update requirements description.
(pmap_free_pages, pmap_valid_page): Remove declarations.
* vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size,
vm_page_table_size, vm_page_table_index): New functions.
* vm/vm_page.h (vm_page_seg_end, vm_page_table_size,
vm_page_table_index): New function declarations.
* vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define
as unsigned long integers.
(vm_page_bootstrap): Compute VP table size based on the page table
size instead of the value returned by pmap_free_pages.
commit 8322083864500f5726f4f04f80427acee4b52c9a
Author: Richard Braun <address@hidden>
Date: Tue Sep 20 20:43:34 2016 +0200
VM: remove commented out code
The vm_page_direct_va, vm_page_direct_pa and vm_page_direct_ptr
functions were imported along with the new vm_page module, but
never actually used since the kernel already has phystokv and
kvtophys functions.
-----------------------------------------------------------------------
Summary of changes:
i386/i386/model_dep.h | 7 -
i386/i386/phys.c | 22 +-
i386/i386/trap.c | 10 -
i386/i386at/biosmem.c | 24 +-
i386/i386at/mem.c | 22 +-
i386/i386at/model_dep.c | 22 -
i386/intel/pmap.c | 117 ++--
i386/intel/pmap.h | 12 +-
i386/xen/xen.c | 2 +-
kern/slab.c | 4 +-
kern/startup.c | 1 -
linux/dev/glue/block.c | 35 +-
linux/dev/glue/net.c | 63 +-
linux/dev/init/main.c | 2 +-
vm/pmap.h | 32 +-
vm/vm_fault.c | 8 +-
vm/vm_map.c | 2 +-
vm/vm_page.c | 1714 +++++++++++++++++++++++++++++++++++++++++------
vm/vm_page.h | 169 +++--
vm/vm_pageout.c | 690 +++----------------
vm/vm_pageout.h | 4 +-
vm/vm_resident.c | 385 +++--------
xen/block.c | 51 +-
xen/net.c | 49 +-
24 files changed, 1974 insertions(+), 1473 deletions(-)
hooks/post-receive
--
GNU Mach