bmalloc should do partial scavenges more frequently
author      sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 10 Apr 2018 23:34:42 +0000 (23:34 +0000)
committer   sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 10 Apr 2018 23:34:42 +0000 (23:34 +0000)
https://bugs.webkit.org/show_bug.cgi?id=184176

Reviewed by Filip Pizlo.

This patch adds the ability for bmalloc to do a partial scavenge.
bmalloc will now periodically do a partial scavenge even while the
heap is growing.
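
As a rough sketch, the scavenger thread keeps two timestamps and chooses
between a full and a partial scavenge based on them. The interval names
below are placeholders, not the patch's constants; the real policy lives
in Scavenger::threadRunLoop (listed in the ChangeLog below but not shown
in full in this excerpt):

    // Hypothetical run-loop decision; fullScavengeInterval and
    // partialScavengeInterval are illustrative names only.
    if (timeSinceLastFullScavenge() > fullScavengeInterval)
        scavenge();          // full scavenge: decommit everything freeable
    else if (timeSinceLastPartialScavenge() > partialScavengeInterval)
        partialScavenge();   // partial scavenge: only go past the high water marks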

For Heap, this means tracking the high water mark of where the Heap
has allocated since the last scavenge. Partial scavenging is just
decommitting entries in the LargeMap that are past this high water
mark. Because we allocate out of the LargeMap in first-fit order,
tracking the high water mark is a good heuristic for how much memory
a partial scavenge should decommit.
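
Condensed from the Heap changes in the diff below (bookkeeping elided):

    // In Heap::tryAllocateLarge: on success, remember the highest address
    // handed out since the last scavenge.
    void* result = splitAndAllocate(lock, range, alignment, size).begin();
    m_highWatermark = std::max(m_highWatermark, result);

    // A partial scavenge then only decommits free large ranges past that mark.
    void Heap::scavengeToHighWatermark(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
    {
        void* newHighWaterMark = nullptr;
        for (LargeRange& range : m_largeFree) {
            if (range.begin() <= m_highWatermark)
                newHighWaterMark = std::min(newHighWaterMark, static_cast<void*>(range.begin()));
            else
                decommitLargeRange(lock, range, decommitter);
        }
        m_highWatermark = newHighWaterMark;
    }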

For IsoHeaps, each IsoDirectory also keeps track of a high water mark
for the furthest page it allocates into. Similar to Heap, we scavenge
pages past that high water mark. IsoHeapImpl then tracks the high water
mark for the IsoDirectory it allocates into. We then scavenge all
directories at and past the directory high water mark. This includes
scavenging the inline directory when it's the only thing we have
allocated out of since the last scavenge.
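
In sketch form (condensed from IsoDirectoryInlines.h and
IsoHeapImplInlines.h in the diff below):

    // Each directory only decommits empty, committed pages past the furthest
    // page index it handed out since the last scavenge.
    template<typename Config, unsigned passedNumPages>
    void IsoDirectory<Config, passedNumPages>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
    {
        (m_empty & m_committed).forEachSetBit(
            [&] (size_t index) {
                if (index > m_highWatermark)
                    scavengePage(index, decommits);
            });
        m_highWatermark = 0;
    }

    // The heap walks only directories at or past its directory-level mark,
    // plus the inline directory when nothing else has been allocated from.
    template<typename Config>
    void IsoHeapImpl<Config>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
    {
        if (!m_directoryHighWatermark)
            m_inlineDirectory.scavengeToHighWatermark(decommits);
        for (IsoDirectoryPage<Config>* page = m_headDirectory; page; page = page->next) {
            if (page->index() >= m_directoryHighWatermark)
                page->payload.scavengeToHighWatermark(decommits);
        }
        m_directoryHighWatermark = 0;
    }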

This patch also adds some other capabilities to bmalloc:

Heaps and IsoHeaps now track how much memory is freeable. Querying
this number is now cheap.
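
For example, the query is now just a counter read; the counter is kept
up to date wherever memory becomes, or stops being, decommittable
(condensed from the diff below):

    size_t Heap::freeableMemory(std::lock_guard<Mutex>&)
    {
        return m_freeableMemory; // adjusted in allocateSmallPage, deallocateSmallLine,
                                 // splitAndAllocate, deallocateLarge, scavenge, ...
    }

    template<typename Config>
    void IsoHeapImpl<Config>::isNowFreeable(void* ptr, size_t bytes)
    {
        BUNUSED_PARAM(ptr);
        m_freeableMemory += bytes;
    }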

Heaps no longer hold the global lock when decommitting large ranges.
Instead, each such range is marked as ineligible for allocation. Then,
without the lock held, the scavenger decommits those ranges. Once that
is done, the scavenger reacquires the lock and marks the ranges as
eligible again. This lessens lock contention between the scavenger and
the allocation slow path, since threads taking the allocation slow path
can now allocate concurrently with the scavenger's decommits. The main
consideration in adding this functionality is that a large allocation
may fail while the scavenger is in the process of decommitting memory.
When the Heap fails to allocate a large range while the scavenger is in
the middle of a decommit, the Heap waits for the Scavenger to finish and
then retries the allocation.
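
The protocol, condensed from Heap.cpp and Scavenger.cpp in the diff
below (bookkeeping elided):

    // 1. Under the lock: record the range for a later decommit and make it
    //    ineligible for allocation.
    void Heap::decommitLargeRange(std::lock_guard<Mutex>&, LargeRange& range, BulkDecommit& decommitter)
    {
        decommitter.addLazy(range.begin(), range.size());
        m_hasPendingDecommits = true;
        range.setEligible(false);
    }

    // 2. Without the lock: the scavenger runs decommitter.processLazy(), then
    //    retakes the lock and calls markAllLargeAsEligibile(), which flips the
    //    ranges back to eligible, clears m_hasPendingDecommits, and notifies
    //    m_condition.

    // 3. In Heap::tryAllocateLarge: if nothing eligible is found while decommits
    //    are pending, wait for the scavenger and retry.
    if (m_hasPendingDecommits) {
        m_condition.wait(lock, [&]() { return !m_hasPendingDecommits; });
        return tryAllocateLarge(lock, alignment, size);
    }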

Decommitting from Heap now aggregates the ranges to decommit and tries
to merge them to lower the number of calls to vmDeallocatePhysicalPages.
This is analogous to what IsoHeaps already do.
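
A sketch of how the scavenger uses this (condensed from BulkDecommit.h
and Scavenger::scavenge in the diff below):

    BulkDecommit decommitter;
    {
        std::lock_guard<Mutex> lock(Heap::mutex());
        heap.scavenge(lock, decommitter); // only records page-aligned (begin, size) pairs
        decommitter.processEager();       // small pages: decommitted while still under the lock
    }
    decommitter.processLazy();            // large ranges: sorted by address, adjacent runs merged,
                                          // one vmDeallocatePhysicalPages call per merged run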

* bmalloc.xcodeproj/project.pbxproj:
* bmalloc/Allocator.cpp:
(bmalloc::Allocator::tryAllocate):
(bmalloc::Allocator::allocateImpl):
(bmalloc::Allocator::reallocate):
(bmalloc::Allocator::refillAllocatorSlowCase):
(bmalloc::Allocator::allocateLarge):
* bmalloc/BulkDecommit.h: Added.
(bmalloc::BulkDecommit::addEager):
(bmalloc::BulkDecommit::addLazy):
(bmalloc::BulkDecommit::processEager):
(bmalloc::BulkDecommit::processLazy):
(bmalloc::BulkDecommit::add):
(bmalloc::BulkDecommit::process):
* bmalloc/Deallocator.cpp:
(bmalloc::Deallocator::scavenge):
(bmalloc::Deallocator::processObjectLog):
(bmalloc::Deallocator::deallocateSlowCase):
* bmalloc/Deallocator.h:
(bmalloc::Deallocator::lineCache):
* bmalloc/Heap.cpp:
(bmalloc::Heap::freeableMemory):
(bmalloc::Heap::markAllLargeAsEligibile):
(bmalloc::Heap::decommitLargeRange):
(bmalloc::Heap::scavenge):
(bmalloc::Heap::scavengeToHighWatermark):
(bmalloc::Heap::deallocateLineCache):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::deallocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::deallocateSmallLine):
(bmalloc::Heap::allocateSmallBumpRangesByMetadata):
(bmalloc::Heap::allocateSmallBumpRangesByObject):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::isLarge):
(bmalloc::Heap::largeSize):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge):
(bmalloc::Heap::externalCommit):
(bmalloc::Heap::externalDecommit):
* bmalloc/Heap.h:
(bmalloc::Heap::allocateSmallBumpRanges):
(bmalloc::Heap::derefSmallLine):
* bmalloc/IsoDirectory.h:
* bmalloc/IsoDirectoryInlines.h:
(bmalloc::passedNumPages>::takeFirstEligible):
(bmalloc::passedNumPages>::didBecome):
(bmalloc::passedNumPages>::didDecommit):
(bmalloc::passedNumPages>::scavengePage):
(bmalloc::passedNumPages>::scavenge):
(bmalloc::passedNumPages>::scavengeToHighWatermark):
(bmalloc::passedNumPages>::freeableMemory): Deleted.
* bmalloc/IsoHeapImpl.h:
* bmalloc/IsoHeapImplInlines.h:
(bmalloc::IsoHeapImpl<Config>::takeFirstEligible):
(bmalloc::IsoHeapImpl<Config>::scavenge):
(bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark):
(bmalloc::IsoHeapImpl<Config>::freeableMemory):
(bmalloc::IsoHeapImpl<Config>::isNowFreeable):
(bmalloc::IsoHeapImpl<Config>::isNoLongerFreeable):
* bmalloc/LargeMap.cpp:
(bmalloc::LargeMap::remove):
(bmalloc::LargeMap::markAllAsEligibile):
* bmalloc/LargeMap.h:
(bmalloc::LargeMap::size):
(bmalloc::LargeMap::at):
* bmalloc/LargeRange.h:
(bmalloc::LargeRange::setEligible):
(bmalloc::LargeRange::isEligibile const):
(bmalloc::canMerge):
* bmalloc/ObjectType.cpp:
(bmalloc::objectType):
* bmalloc/Scavenger.cpp:
(bmalloc::PrintTime::PrintTime):
(bmalloc::PrintTime::~PrintTime):
(bmalloc::PrintTime::print):
(bmalloc::Scavenger::timeSinceLastFullScavenge):
(bmalloc::Scavenger::timeSinceLastPartialScavenge):
(bmalloc::Scavenger::scavenge):
(bmalloc::Scavenger::partialScavenge):
(bmalloc::Scavenger::freeableMemory):
(bmalloc::Scavenger::threadRunLoop):
* bmalloc/Scavenger.h:
* bmalloc/SmallLine.h:
(bmalloc::SmallLine::refCount):
(bmalloc::SmallLine::ref):
(bmalloc::SmallLine::deref):
* bmalloc/SmallPage.h:
(bmalloc::SmallPage::refCount):
(bmalloc::SmallPage::hasFreeLines const):
(bmalloc::SmallPage::setHasFreeLines):
(bmalloc::SmallPage::ref):
(bmalloc::SmallPage::deref):
* bmalloc/bmalloc.cpp:
(bmalloc::api::tryLargeZeroedMemalignVirtual):
(bmalloc::api::freeLargeVirtual):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@230501 268f45cc-cd09-0410-ab3c-d52691b4dbfc

21 files changed:
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
Source/bmalloc/bmalloc/Allocator.cpp
Source/bmalloc/bmalloc/BulkDecommit.h [new file with mode: 0644]
Source/bmalloc/bmalloc/Deallocator.cpp
Source/bmalloc/bmalloc/Deallocator.h
Source/bmalloc/bmalloc/Heap.cpp
Source/bmalloc/bmalloc/Heap.h
Source/bmalloc/bmalloc/IsoDirectory.h
Source/bmalloc/bmalloc/IsoDirectoryInlines.h
Source/bmalloc/bmalloc/IsoHeapImpl.h
Source/bmalloc/bmalloc/IsoHeapImplInlines.h
Source/bmalloc/bmalloc/LargeMap.cpp
Source/bmalloc/bmalloc/LargeMap.h
Source/bmalloc/bmalloc/LargeRange.h
Source/bmalloc/bmalloc/ObjectType.cpp
Source/bmalloc/bmalloc/Scavenger.cpp
Source/bmalloc/bmalloc/Scavenger.h
Source/bmalloc/bmalloc/SmallLine.h
Source/bmalloc/bmalloc/SmallPage.h
Source/bmalloc/bmalloc/bmalloc.cpp

index bed0fa2..bd3889e 100644 (file)
                4426E2831C839547008EB042 /* BSoftLinking.h in Headers */ = {isa = PBXBuildFile; fileRef = 4426E2821C839547008EB042 /* BSoftLinking.h */; };
                6599C5CC1EC3F15900A2F7BB /* AvailableMemory.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6599C5CA1EC3F15900A2F7BB /* AvailableMemory.cpp */; };
                6599C5CD1EC3F15900A2F7BB /* AvailableMemory.h in Headers */ = {isa = PBXBuildFile; fileRef = 6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               7939885B2076EEB60074A2E7 /* BulkDecommit.h in Headers */ = {isa = PBXBuildFile; fileRef = 7939885A2076EEB50074A2E7 /* BulkDecommit.h */; settings = {ATTRIBUTES = (Private, ); }; };
                795AB3C7206E0D340074FE76 /* PhysicalPageMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 795AB3C6206E0D250074FE76 /* PhysicalPageMap.h */; settings = {ATTRIBUTES = (Private, ); }; };
                AD0934331FCF406D00E85EB5 /* BCompiler.h in Headers */ = {isa = PBXBuildFile; fileRef = AD0934321FCF405000E85EB5 /* BCompiler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                AD14AD29202529C400890E3B /* ProcessCheck.h in Headers */ = {isa = PBXBuildFile; fileRef = AD14AD27202529A600890E3B /* ProcessCheck.h */; };
                4426E2821C839547008EB042 /* BSoftLinking.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BSoftLinking.h; path = bmalloc/darwin/BSoftLinking.h; sourceTree = "<group>"; };
                6599C5CA1EC3F15900A2F7BB /* AvailableMemory.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AvailableMemory.cpp; path = bmalloc/AvailableMemory.cpp; sourceTree = "<group>"; };
                6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AvailableMemory.h; path = bmalloc/AvailableMemory.h; sourceTree = "<group>"; };
+               7939885A2076EEB50074A2E7 /* BulkDecommit.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BulkDecommit.h; path = bmalloc/BulkDecommit.h; sourceTree = "<group>"; };
                795AB3C6206E0D250074FE76 /* PhysicalPageMap.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = PhysicalPageMap.h; path = bmalloc/PhysicalPageMap.h; sourceTree = "<group>"; };
                AD0934321FCF405000E85EB5 /* BCompiler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BCompiler.h; path = bmalloc/BCompiler.h; sourceTree = "<group>"; };
                AD14AD27202529A600890E3B /* ProcessCheck.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ProcessCheck.h; path = bmalloc/ProcessCheck.h; sourceTree = "<group>"; };
                14D9DB4E17F2866E00EAAB79 /* heap */ = {
                        isa = PBXGroup;
                        children = (
+                               7939885A2076EEB50074A2E7 /* BulkDecommit.h */,
                                140FA00219CE429C00FFD3C8 /* BumpRange.h */,
                                147DC6E21CA5B70B00724E8D /* Chunk.h */,
                                142B44341E2839E7001DA6E9 /* DebugHeap.cpp */,
                                1448C30118F3754C00502839 /* bmalloc.h in Headers */,
                                0F7EB84D1F9541C700F1ABCB /* BMalloced.h in Headers */,
                                14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */,
+                               7939885B2076EEB60074A2E7 /* BulkDecommit.h in Headers */,
                                4426E2831C839547008EB042 /* BSoftLinking.h in Headers */,
                                14DD789C18F48D4A00950702 /* BumpAllocator.h in Headers */,
                                140FA00319CE429C00FFD3C8 /* BumpRange.h in Headers */,
index cda8d8a..770e78a 100644 (file)
@@ -60,7 +60,7 @@ void* Allocator::tryAllocate(size_t size)
     if (size <= smallMax)
         return allocate(size);
 
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     return m_heap.tryAllocateLarge(lock, alignment, size);
 }
 
@@ -89,7 +89,7 @@ void* Allocator::allocateImpl(size_t alignment, size_t size, bool crashOnFailure
     if (size <= smallMax && alignment <= smallMax)
         return allocate(roundUpToMultipleOf(alignment, size));
 
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     if (crashOnFailure)
         return m_heap.allocateLarge(lock, alignment, size);
     return m_heap.tryAllocateLarge(lock, alignment, size);
@@ -112,7 +112,7 @@ void* Allocator::reallocate(void* object, size_t newSize)
         break;
     }
     case ObjectType::Large: {
-        std::lock_guard<Mutex> lock(Heap::mutex());
+        std::unique_lock<Mutex> lock(Heap::mutex());
         oldSize = m_heap.largeSize(lock, object);
 
         if (newSize < oldSize && newSize > smallMax) {
@@ -153,7 +153,7 @@ BNO_INLINE void Allocator::refillAllocatorSlowCase(BumpAllocator& allocator, siz
 {
     BumpRangeCache& bumpRangeCache = m_bumpRangeCaches[sizeClass];
 
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     m_deallocator.processObjectLog(lock);
     m_heap.allocateSmallBumpRanges(lock, sizeClass, allocator, bumpRangeCache, m_deallocator.lineCache(lock));
 }
@@ -168,7 +168,7 @@ BINLINE void Allocator::refillAllocator(BumpAllocator& allocator, size_t sizeCla
 
 BNO_INLINE void* Allocator::allocateLarge(size_t size)
 {
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     return m_heap.allocateLarge(lock, alignment, size);
 }
 
diff --git a/Source/bmalloc/bmalloc/BulkDecommit.h b/Source/bmalloc/bmalloc/BulkDecommit.h
new file mode 100644 (file)
index 0000000..ef9341d
--- /dev/null
@@ -0,0 +1,95 @@
+/*
+ * Copyright (C) 2018 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "VMAllocate.h"
+#include <vector>
+
+namespace bmalloc {
+
+class BulkDecommit {
+    using Data = std::vector<std::pair<char*, size_t>>;
+
+public:
+    void addEager(void* ptr, size_t size)
+    {
+        add(m_eager, ptr, size);
+    }
+    void addLazy(void* ptr, size_t size)
+    {
+        add(m_lazy, ptr, size);
+    }
+    void processEager()
+    {
+        process(m_eager);
+    }
+    void processLazy()
+    {
+        process(m_lazy);
+    }
+
+private:
+    void add(Data& data, void* ptr, size_t size)
+    {
+        char* begin = roundUpToMultipleOf(vmPageSizePhysical(), static_cast<char*>(ptr));
+        char* end = roundDownToMultipleOf(vmPageSizePhysical(), static_cast<char*>(ptr) + size);
+        if (begin >= end)
+            return;
+        data.push_back({begin, end - begin});
+    }
+
+    void process(BulkDecommit::Data& decommits)
+    {
+        std::sort(
+            decommits.begin(), decommits.end(),
+            [&] (const auto& a, const auto& b) -> bool {
+                return a.first < b.first;
+            });
+
+        char* run = nullptr;
+        size_t runSize = 0;
+        for (unsigned i = 0; i < decommits.size(); ++i) {
+            auto& pair = decommits[i];
+            if (run + runSize != pair.first) {
+                if (run)
+                    vmDeallocatePhysicalPages(run, runSize);
+                run = pair.first;
+                runSize = pair.second;
+            } else {
+                BASSERT(run);
+                runSize += pair.second;
+            }
+        }
+
+        if (run)
+            vmDeallocatePhysicalPages(run, runSize);
+    }
+
+    Data m_eager;
+    Data m_lazy;
+};
+
+} // namespace bmalloc
index 6095682..1b37051 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -60,13 +60,13 @@ void Deallocator::scavenge()
     if (m_debugHeap)
         return;
 
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
 
     processObjectLog(lock);
     m_heap.deallocateLineCache(lock, lineCache(lock));
 }
 
-void Deallocator::processObjectLog(std::lock_guard<Mutex>& lock)
+void Deallocator::processObjectLog(std::unique_lock<Mutex>& lock)
 {
     for (Object object : m_objectLog)
         m_heap.derefSmallLine(lock, object, lineCache(lock));
@@ -81,7 +81,7 @@ void Deallocator::deallocateSlowCase(void* object)
     if (!object)
         return;
 
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     if (m_heap.isLarge(lock, object)) {
         m_heap.deallocateLarge(lock, object);
         return;
index 6c33524..325d1df 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -47,9 +47,9 @@ public:
     void deallocate(void*);
     void scavenge();
     
-    void processObjectLog(std::lock_guard<Mutex>&);
+    void processObjectLog(std::unique_lock<Mutex>&);
     
-    LineCache& lineCache(std::lock_guard<Mutex>&) { return m_lineCache; }
+    LineCache& lineCache(std::unique_lock<Mutex>&) { return m_lineCache; }
 
 private:
     bool deallocateFastCase(void*);
index 5d5d19f..53b3e36 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -26,6 +26,7 @@
 #include "Heap.h"
 
 #include "AvailableMemory.h"
+#include "BulkDecommit.h"
 #include "BumpAllocator.h"
 #include "Chunk.h"
 #include "Environment.h"
@@ -38,6 +39,7 @@
 #include "VMHeap.h"
 #include "bmalloc.h"
 #include <thread>
+#include <vector>
 
 namespace bmalloc {
 
@@ -140,20 +142,7 @@ void Heap::initializePageMetadata()
 
 size_t Heap::freeableMemory(std::lock_guard<Mutex>&)
 {
-    size_t result = 0;
-    for (auto& list : m_freePages) {
-        for (auto* chunk : list) {
-            for (auto* page : chunk->freePages()) {
-                if (page->hasPhysicalPages())
-                    result += physicalPageSizeSloppy(page->begin()->begin(), pageSize(&list - &m_freePages[0]));
-            }
-        }
-    }
-    
-    for (auto& range : m_largeFree)
-        result += range.totalPhysicalSize();
-
-    return result;
+    return m_freeableMemory;
 }
 
 size_t Heap::footprint()
@@ -162,7 +151,29 @@ size_t Heap::footprint()
     return m_footprint;
 }
 
-void Heap::scavenge(std::lock_guard<Mutex>&)
+void Heap::markAllLargeAsEligibile(std::lock_guard<Mutex>&)
+{
+    m_largeFree.markAllAsEligibile();
+    m_hasPendingDecommits = false;
+    m_condition.notify_all();
+}
+
+void Heap::decommitLargeRange(std::lock_guard<Mutex>&, LargeRange& range, BulkDecommit& decommitter)
+{
+    m_footprint -= range.totalPhysicalSize();
+    m_freeableMemory -= range.totalPhysicalSize();
+    decommitter.addLazy(range.begin(), range.size());
+    m_hasPendingDecommits = true;
+    range.setStartPhysicalSize(0);
+    range.setTotalPhysicalSize(0);
+    BASSERT(range.isEligibile());
+    range.setEligible(false);
+#if ENABLE_PHYSICAL_PAGE_MAP 
+    m_physicalPageMap.decommit(range.begin(), range.size());
+#endif
+}
+
+void Heap::scavenge(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
 {
     for (auto& list : m_freePages) {
         for (auto* chunk : list) {
@@ -171,8 +182,10 @@ void Heap::scavenge(std::lock_guard<Mutex>&)
                     continue;
 
                 size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
-                m_footprint -= physicalPageSizeSloppy(page->begin()->begin(), pageSize);
-                vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
+                size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
+                m_freeableMemory -= decommitSize;
+                m_footprint -= decommitSize;
+                decommitter.addEager(page->begin()->begin(), pageSize);
                 page->setHasPhysicalPages(false);
 #if ENABLE_PHYSICAL_PAGE_MAP 
                 m_physicalPageMap.decommit(page->begin()->begin(), pageSize);
@@ -186,18 +199,27 @@ void Heap::scavenge(std::lock_guard<Mutex>&)
             deallocateSmallChunk(list.pop(), &list - &m_chunkCache[0]);
     }
 
-    for (auto& range : m_largeFree) {
-        m_footprint -= range.totalPhysicalSize();
-        vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
-        range.setStartPhysicalSize(0);
-        range.setTotalPhysicalSize(0);
-#if ENABLE_PHYSICAL_PAGE_MAP 
-        m_physicalPageMap.decommit(range.begin(), range.size());
-#endif
+    for (LargeRange& range : m_largeFree) {
+        m_highWatermark = std::min(m_highWatermark, static_cast<void*>(range.begin()));
+        decommitLargeRange(lock, range, decommitter);
     }
+
+    m_freeableMemory = 0;
+}
+
+void Heap::scavengeToHighWatermark(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
+{
+    void* newHighWaterMark = nullptr;
+    for (LargeRange& range : m_largeFree) {
+        if (range.begin() <= m_highWatermark)
+            newHighWaterMark = std::min(newHighWaterMark, static_cast<void*>(range.begin()));
+        else
+            decommitLargeRange(lock, range, decommitter);
+    }
+    m_highWatermark = newHighWaterMark;
 }
 
-void Heap::deallocateLineCache(std::lock_guard<Mutex>&, LineCache& lineCache)
+void Heap::deallocateLineCache(std::unique_lock<Mutex>&, LineCache& lineCache)
 {
     for (auto& list : lineCache) {
         while (!list.isEmpty()) {
@@ -207,7 +229,7 @@ void Heap::deallocateLineCache(std::lock_guard<Mutex>&, LineCache& lineCache)
     }
 }
 
-void Heap::allocateSmallChunk(std::lock_guard<Mutex>& lock, size_t pageClass)
+void Heap::allocateSmallChunk(std::unique_lock<Mutex>& lock, size_t pageClass)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
     
@@ -228,6 +250,8 @@ void Heap::allocateSmallChunk(std::lock_guard<Mutex>& lock, size_t pageClass)
             page->setHasFreeLines(lock, true);
             chunk->freePages().push(page);
         });
+
+        m_freeableMemory += chunkSize;
         
         m_scavenger->schedule(0);
 
@@ -244,19 +268,26 @@ void Heap::deallocateSmallChunk(Chunk* chunk, size_t pageClass)
     size_t size = m_largeAllocated.remove(chunk);
     size_t totalPhysicalSize = size;
 
+    size_t accountedInFreeable = 0;
+
     bool hasPhysicalPages = true;
     forEachPage(chunk, pageSize(pageClass), [&](SmallPage* page) {
+        size_t physicalSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
         if (!page->hasPhysicalPages()) {
-            totalPhysicalSize -= physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
+            totalPhysicalSize -= physicalSize;
             hasPhysicalPages = false;
-        }
+        } else
+            accountedInFreeable += physicalSize;
     });
 
+    m_freeableMemory -= accountedInFreeable;
+    m_freeableMemory += totalPhysicalSize;
+
     size_t startPhysicalSize = hasPhysicalPages ? size : 0;
     m_largeFree.add(LargeRange(chunk, size, startPhysicalSize, totalPhysicalSize));
 }
 
-SmallPage* Heap::allocateSmallPage(std::lock_guard<Mutex>& lock, size_t sizeClass, LineCache& lineCache)
+SmallPage* Heap::allocateSmallPage(std::unique_lock<Mutex>& lock, size_t sizeClass, LineCache& lineCache)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
 
@@ -282,10 +313,13 @@ SmallPage* Heap::allocateSmallPage(std::lock_guard<Mutex>& lock, size_t sizeClas
         if (chunk->freePages().isEmpty())
             m_freePages[pageClass].remove(chunk);
 
-        if (!page->hasPhysicalPages()) {
-            size_t pageSize = bmalloc::pageSize(pageClass);
+        size_t pageSize = bmalloc::pageSize(pageClass);
+        size_t physicalSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
+        if (page->hasPhysicalPages())
+            m_freeableMemory -= physicalSize;
+        else {
             m_scavenger->scheduleIfUnderMemoryPressure(pageSize);
-            m_footprint += physicalPageSizeSloppy(page->begin()->begin(), pageSize);
+            m_footprint += physicalSize;
             vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
             page->setHasPhysicalPages(true);
 #if ENABLE_PHYSICAL_PAGE_MAP 
@@ -300,7 +334,7 @@ SmallPage* Heap::allocateSmallPage(std::lock_guard<Mutex>& lock, size_t sizeClas
     return page;
 }
 
-void Heap::deallocateSmallLine(std::lock_guard<Mutex>& lock, Object object, LineCache& lineCache)
+void Heap::deallocateSmallLine(std::unique_lock<Mutex>& lock, Object object, LineCache& lineCache)
 {
     BASSERT(!object.line()->refCount(lock));
     SmallPage* page = object.page();
@@ -317,6 +351,8 @@ void Heap::deallocateSmallLine(std::lock_guard<Mutex>& lock, Object object, Line
     size_t sizeClass = page->sizeClass();
     size_t pageClass = m_pageClasses[sizeClass];
 
+    m_freeableMemory += physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
+
     List<SmallPage>::remove(page); // 'page' may be in any thread's line cache.
     
     Chunk* chunk = Chunk::get(page);
@@ -339,7 +375,7 @@ void Heap::deallocateSmallLine(std::lock_guard<Mutex>& lock, Object object, Line
 }
 
 void Heap::allocateSmallBumpRangesByMetadata(
-    std::lock_guard<Mutex>& lock, size_t sizeClass,
+    std::unique_lock<Mutex>& lock, size_t sizeClass,
     BumpAllocator& allocator, BumpRangeCache& rangeCache,
     LineCache& lineCache)
 {
@@ -403,7 +439,7 @@ void Heap::allocateSmallBumpRangesByMetadata(
 }
 
 void Heap::allocateSmallBumpRangesByObject(
-    std::lock_guard<Mutex>& lock, size_t sizeClass,
+    std::unique_lock<Mutex>& lock, size_t sizeClass,
     BumpAllocator& allocator, BumpRangeCache& rangeCache,
     LineCache& lineCache)
 {
@@ -459,7 +495,7 @@ void Heap::allocateSmallBumpRangesByObject(
     }
 }
 
-LargeRange Heap::splitAndAllocate(std::lock_guard<Mutex>&, LargeRange& range, size_t alignment, size_t size)
+LargeRange Heap::splitAndAllocate(std::unique_lock<Mutex>&, LargeRange& range, size_t alignment, size_t size)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
 
@@ -491,11 +527,15 @@ LargeRange Heap::splitAndAllocate(std::lock_guard<Mutex>&, LargeRange& range, si
 #endif
     }
     
-    if (prev)
+    if (prev) {
+        m_freeableMemory += prev.totalPhysicalSize();
         m_largeFree.add(prev);
+    }
 
-    if (next)
+    if (next) {
+        m_freeableMemory += next.totalPhysicalSize();
         m_largeFree.add(next);
+    }
 
     m_objectTypes.set(Chunk::get(range.begin()), ObjectType::Large);
 
@@ -503,7 +543,7 @@ LargeRange Heap::splitAndAllocate(std::lock_guard<Mutex>&, LargeRange& range, si
     return range;
 }
 
-void* Heap::tryAllocateLarge(std::lock_guard<Mutex>& lock, size_t alignment, size_t size)
+void* Heap::tryAllocateLarge(std::unique_lock<Mutex>& lock, size_t alignment, size_t size)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
 
@@ -526,6 +566,12 @@ void* Heap::tryAllocateLarge(std::lock_guard<Mutex>& lock, size_t alignment, siz
 
     LargeRange range = m_largeFree.remove(alignment, size);
     if (!range) {
+        if (m_hasPendingDecommits) {
+            m_condition.wait(lock, [&]() { return !m_hasPendingDecommits; });
+            // Now we're guaranteed we're looking at all available memory.
+            return tryAllocateLarge(lock, alignment, size);
+        }
+
         if (usingGigacage())
             return nullptr;
 
@@ -534,31 +580,34 @@ void* Heap::tryAllocateLarge(std::lock_guard<Mutex>& lock, size_t alignment, siz
             return nullptr;
         
         m_largeFree.add(range);
-
         range = m_largeFree.remove(alignment, size);
     }
 
-    return splitAndAllocate(lock, range, alignment, size).begin();
+    m_freeableMemory -= range.totalPhysicalSize();
+
+    void* result = splitAndAllocate(lock, range, alignment, size).begin();
+    m_highWatermark = std::max(m_highWatermark, result);
+    return result;
 }
 
-void* Heap::allocateLarge(std::lock_guard<Mutex>& lock, size_t alignment, size_t size)
+void* Heap::allocateLarge(std::unique_lock<Mutex>& lock, size_t alignment, size_t size)
 {
     void* result = tryAllocateLarge(lock, alignment, size);
     RELEASE_BASSERT(result);
     return result;
 }
 
-bool Heap::isLarge(std::lock_guard<Mutex>&, void* object)
+bool Heap::isLarge(std::unique_lock<Mutex>&, void* object)
 {
     return m_objectTypes.get(Object(object).chunk()) == ObjectType::Large;
 }
 
-size_t Heap::largeSize(std::lock_guard<Mutex>&, void* object)
+size_t Heap::largeSize(std::unique_lock<Mutex>&, void* object)
 {
     return m_largeAllocated.get(object);
 }
 
-void Heap::shrinkLarge(std::lock_guard<Mutex>& lock, const Range& object, size_t newSize)
+void Heap::shrinkLarge(std::unique_lock<Mutex>& lock, const Range& object, size_t newSize)
 {
     BASSERT(object.size() > newSize);
 
@@ -569,23 +618,24 @@ void Heap::shrinkLarge(std::lock_guard<Mutex>& lock, const Range& object, size_t
     m_scavenger->schedule(size);
 }
 
-void Heap::deallocateLarge(std::lock_guard<Mutex>&, void* object)
+void Heap::deallocateLarge(std::unique_lock<Mutex>&, void* object)
 {
     if (m_debugHeap)
         return m_debugHeap->freeLarge(object);
 
     size_t size = m_largeAllocated.remove(object);
     m_largeFree.add(LargeRange(object, size, size, size));
+    m_freeableMemory += size;
     m_scavenger->schedule(size);
 }
 
 void Heap::externalCommit(void* ptr, size_t size)
 {
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     externalCommit(lock, ptr, size);
 }
 
-void Heap::externalCommit(std::lock_guard<Mutex>&, void* ptr, size_t size)
+void Heap::externalCommit(std::unique_lock<Mutex>&, void* ptr, size_t size)
 {
     BUNUSED_PARAM(ptr);
 
@@ -597,11 +647,11 @@ void Heap::externalCommit(std::lock_guard<Mutex>&, void* ptr, size_t size)
 
 void Heap::externalDecommit(void* ptr, size_t size)
 {
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     externalDecommit(lock, ptr, size);
 }
 
-void Heap::externalDecommit(std::lock_guard<Mutex>&, void* ptr, size_t size)
+void Heap::externalDecommit(std::unique_lock<Mutex>&, void* ptr, size_t size)
 {
     BUNUSED_PARAM(ptr);
 
index e16d2a0..7146dd7 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #include "SmallPage.h"
 #include "Vector.h"
 #include <array>
+#include <condition_variable>
 #include <mutex>
+#include <vector>
 
 namespace bmalloc {
 
 class BeginTag;
+class BulkDecommit;
 class BumpAllocator;
 class DebugHeap;
 class EndTag;
@@ -62,30 +65,36 @@ public:
     
     DebugHeap* debugHeap() { return m_debugHeap; }
 
-    void allocateSmallBumpRanges(std::lock_guard<Mutex>&, size_t sizeClass,
+    void allocateSmallBumpRanges(std::unique_lock<Mutex>&, size_t sizeClass,
         BumpAllocator&, BumpRangeCache&, LineCache&);
-    void derefSmallLine(std::lock_guard<Mutex>&, Object, LineCache&);
-    void deallocateLineCache(std::lock_guard<Mutex>&, LineCache&);
+    void derefSmallLine(std::unique_lock<Mutex>&, Object, LineCache&);
+    void deallocateLineCache(std::unique_lock<Mutex>&, LineCache&);
 
-    void* allocateLarge(std::lock_guard<Mutex>&, size_t alignment, size_t);
-    void* tryAllocateLarge(std::lock_guard<Mutex>&, size_t alignment, size_t);
-    void deallocateLarge(std::lock_guard<Mutex>&, void*);
+    void* allocateLarge(std::unique_lock<Mutex>&, size_t alignment, size_t);
+    void* tryAllocateLarge(std::unique_lock<Mutex>&, size_t alignment, size_t);
+    void deallocateLarge(std::unique_lock<Mutex>&, void*);
 
-    bool isLarge(std::lock_guard<Mutex>&, void*);
-    size_t largeSize(std::lock_guard<Mutex>&, void*);
-    void shrinkLarge(std::lock_guard<Mutex>&, const Range&, size_t);
+    bool isLarge(std::unique_lock<Mutex>&, void*);
+    size_t largeSize(std::unique_lock<Mutex>&, void*);
+    void shrinkLarge(std::unique_lock<Mutex>&, const Range&, size_t);
 
-    void scavenge(std::lock_guard<Mutex>&);
+    void scavenge(std::lock_guard<Mutex>&, BulkDecommit&);
+    void scavenge(std::lock_guard<Mutex>&, BulkDecommit&, size_t& freed, size_t goal);
+    void scavengeToHighWatermark(std::lock_guard<Mutex>&, BulkDecommit&);
 
     size_t freeableMemory(std::lock_guard<Mutex>&);
     size_t footprint();
 
     void externalDecommit(void* ptr, size_t);
-    void externalDecommit(std::lock_guard<Mutex>&, void* ptr, size_t);
+    void externalDecommit(std::unique_lock<Mutex>&, void* ptr, size_t);
     void externalCommit(void* ptr, size_t);
-    void externalCommit(std::lock_guard<Mutex>&, void* ptr, size_t);
+    void externalCommit(std::unique_lock<Mutex>&, void* ptr, size_t);
+
+    void markAllLargeAsEligibile(std::lock_guard<Mutex>&);
 
 private:
+    void decommitLargeRange(std::lock_guard<Mutex>&, LargeRange&, BulkDecommit&);
+
     struct LargeObjectHash {
         static unsigned hash(void* key)
         {
@@ -103,22 +112,22 @@ private:
     void initializeLineMetadata();
     void initializePageMetadata();
 
-    void allocateSmallBumpRangesByMetadata(std::lock_guard<Mutex>&,
+    void allocateSmallBumpRangesByMetadata(std::unique_lock<Mutex>&,
         size_t sizeClass, BumpAllocator&, BumpRangeCache&, LineCache&);
-    void allocateSmallBumpRangesByObject(std::lock_guard<Mutex>&,
+    void allocateSmallBumpRangesByObject(std::unique_lock<Mutex>&,
         size_t sizeClass, BumpAllocator&, BumpRangeCache&, LineCache&);
 
-    SmallPage* allocateSmallPage(std::lock_guard<Mutex>&, size_t sizeClass, LineCache&);
-    void deallocateSmallLine(std::lock_guard<Mutex>&, Object, LineCache&);
+    SmallPage* allocateSmallPage(std::unique_lock<Mutex>&, size_t sizeClass, LineCache&);
+    void deallocateSmallLine(std::unique_lock<Mutex>&, Object, LineCache&);
 
-    void allocateSmallChunk(std::lock_guard<Mutex>&, size_t pageClass);
+    void allocateSmallChunk(std::unique_lock<Mutex>&, size_t pageClass);
     void deallocateSmallChunk(Chunk*, size_t pageClass);
 
     void mergeLarge(BeginTag*&, EndTag*&, Range&);
     void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
 
-    LargeRange splitAndAllocate(std::lock_guard<Mutex>&, LargeRange&, size_t alignment, size_t);
+    LargeRange splitAndAllocate(std::unique_lock<Mutex>&, LargeRange&, size_t alignment, size_t);
 
     HeapKind m_kind;
     
@@ -139,14 +148,20 @@ private:
     DebugHeap* m_debugHeap { nullptr };
 
     size_t m_footprint { 0 };
+    size_t m_freeableMemory { 0 };
+
+    bool m_hasPendingDecommits { false };
+    std::condition_variable_any m_condition;
 
 #if ENABLE_PHYSICAL_PAGE_MAP 
     PhysicalPageMap m_physicalPageMap;
 #endif
+
+    void* m_highWatermark { nullptr };
 };
 
 inline void Heap::allocateSmallBumpRanges(
-    std::lock_guard<Mutex>& lock, size_t sizeClass,
+    std::unique_lock<Mutex>& lock, size_t sizeClass,
     BumpAllocator& allocator, BumpRangeCache& rangeCache,
     LineCache& lineCache)
 {
@@ -155,7 +170,7 @@ inline void Heap::allocateSmallBumpRanges(
     return allocateSmallBumpRangesByObject(lock, sizeClass, allocator, rangeCache, lineCache);
 }
 
-inline void Heap::derefSmallLine(std::lock_guard<Mutex>& lock, Object object, LineCache& lineCache)
+inline void Heap::derefSmallLine(std::unique_lock<Mutex>& lock, Object object, LineCache& lineCache)
 {
     if (!object.line()->deref(lock))
         return;
index 0be7c13..2dc913b 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -75,17 +75,14 @@ public:
     // Iterate over all empty and committed pages, and put them into the vector. This also records the
     // pages as being decommitted. It's the caller's job to do the actual decommitting.
     void scavenge(Vector<DeferredDecommit>&);
-
-    // This is only here for debugging purposes.
-    // FIXME: Make this fast so we can use it to help determine when to
-    // run the scavenger:
-    // https://bugs.webkit.org/show_bug.cgi?id=184176
-    size_t freeableMemory();
+    void scavengeToHighWatermark(Vector<DeferredDecommit>&);
 
     template<typename Func>
     void forEachCommittedPage(const Func&);
     
 private:
+    void scavengePage(size_t, Vector<DeferredDecommit>&);
+
     // NOTE: I suppose that this could be two bitvectors. But from working on the GC, I found that the
     // number of bitvectors does not matter as much as whether or not they make intuitive sense.
     Bits<numPages> m_eligible;
@@ -93,6 +90,7 @@ private:
     Bits<numPages> m_committed;
     std::array<IsoPage<Config>*, numPages> m_pages;
     unsigned m_firstEligible { 0 };
+    unsigned m_highWatermark { 0 };
 };
 
 } // namespace bmalloc
index 6c95a93..12a48e5 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -50,6 +50,8 @@ EligibilityResult<Config> IsoDirectory<Config, passedNumPages>::takeFirstEligibl
     m_firstEligible = pageIndex;
     if (pageIndex >= numPages)
         return EligibilityKind::Full;
+
+    m_highWatermark = std::max(pageIndex, m_highWatermark);
     
     Scavenger& scavenger = *PerProcess<Scavenger>::get();
     scavenger.didStartGrowing();
@@ -74,6 +76,9 @@ EligibilityResult<Config> IsoDirectory<Config, passedNumPages>::takeFirstEligibl
 
         m_committed[pageIndex] = true;
         this->m_heap.didCommit(page, IsoPageBase::pageSize);
+    } else {
+        if (m_empty[pageIndex])
+            this->m_heap.isNoLongerFreeable(page, IsoPageBase::pageSize);
     }
     
     RELEASE_BASSERT(page);
@@ -100,6 +105,8 @@ void IsoDirectory<Config, passedNumPages>::didBecome(IsoPage<Config>* page, IsoP
     case IsoPageTrigger::Empty:
         if (verbose)
             fprintf(stderr, "%p: %p did become empty.\n", this, page);
+        BASSERT(!!m_committed[pageIndex]);
+        this->m_heap.isNowFreeable(page, IsoPageBase::pageSize);
         m_empty[pageIndex] = true;
         PerProcess<Scavenger>::get()->schedule(IsoPageBase::pageSize);
         return;
@@ -114,30 +121,40 @@ void IsoDirectory<Config, passedNumPages>::didDecommit(unsigned index)
     // to be a frequently executed path, in the sense that decommitting perf will be dominated by the
     // syscall itself (which has to do many hard things).
     std::lock_guard<Mutex> locker(this->m_heap.lock);
+    BASSERT(!!m_committed[index]);
+    this->m_heap.isNoLongerFreeable(m_pages[index], IsoPageBase::pageSize);
     m_committed[index] = false;
     this->m_heap.didDecommit(m_pages[index], IsoPageBase::pageSize);
 }
 
 template<typename Config, unsigned passedNumPages>
+void IsoDirectory<Config, passedNumPages>::scavengePage(size_t index, Vector<DeferredDecommit>& decommits)
+{
+    // Make sure that this page is now off limits.
+    m_empty[index] = false;
+    m_eligible[index] = false;
+    decommits.push(DeferredDecommit(this, m_pages[index], index));
+}
+
+template<typename Config, unsigned passedNumPages>
 void IsoDirectory<Config, passedNumPages>::scavenge(Vector<DeferredDecommit>& decommits)
 {
     (m_empty & m_committed).forEachSetBit(
         [&] (size_t index) {
-            // Make sure that this page is now off limits.
-            m_empty[index] = false;
-            m_eligible[index] = false;
-            decommits.push(DeferredDecommit(this, m_pages[index], index));
+            scavengePage(index, decommits);
         });
+    m_highWatermark = 0;
 }
 
 template<typename Config, unsigned passedNumPages>
-size_t IsoDirectory<Config, passedNumPages>::freeableMemory()
+void IsoDirectory<Config, passedNumPages>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
 {
-    size_t result = 0;
-    (m_empty & m_committed).forEachSetBit([&] (size_t) {
-        result += IsoPageBase::pageSize;
-    });
-    return result;
+    (m_empty & m_committed).forEachSetBit(
+        [&] (size_t index) {
+            if (index > m_highWatermark)
+                scavengePage(index, decommits);
+        });
+    m_highWatermark = 0;
 }
 
 template<typename Config, unsigned passedNumPages>
index 1282623..b29a004 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,6 +40,7 @@ public:
     virtual ~IsoHeapImplBase();
     
     virtual void scavenge(Vector<DeferredDecommit>&) = 0;
+    virtual void scavengeToHighWatermark(Vector<DeferredDecommit>&) = 0;
     virtual size_t freeableMemory() = 0;
     virtual size_t footprint() = 0;
     
@@ -71,11 +72,8 @@ public:
     void didBecomeEligible(IsoDirectory<Config, IsoDirectoryPage<Config>::numPages>*);
     
     void scavenge(Vector<DeferredDecommit>&) override;
+    void scavengeToHighWatermark(Vector<DeferredDecommit>&) override;
 
-    // This is only here for debugging purposes.
-    // FIXME: Make this fast so we can use it to help determine when to
-    // run the scavenger:
-    // https://bugs.webkit.org/show_bug.cgi?id=184176
     size_t freeableMemory() override;
 
     size_t footprint() override;
@@ -99,6 +97,9 @@ public:
 
     void didCommit(void* ptr, size_t bytes);
     void didDecommit(void* ptr, size_t bytes);
+
+    void isNowFreeable(void* ptr, size_t bytes);
+    void isNoLongerFreeable(void* ptr, size_t bytes);
     
     // It's almost always the caller's responsibility to grab the lock. This lock comes from the
     // PerProcess<IsoTLSDeallocatorEntry<Config>>::get()->lock. That's pretty weird, and we don't
@@ -112,10 +113,12 @@ private:
     IsoDirectoryPage<Config>* m_headDirectory { nullptr };
     IsoDirectoryPage<Config>* m_tailDirectory { nullptr };
     size_t m_footprint { 0 };
+    size_t m_freeableMemory { 0 };
 #if ENABLE_PHYSICAL_PAGE_MAP
     PhysicalPageMap m_physicalPageMap;
 #endif
-    unsigned m_numDirectoryPages { 0 };
+    unsigned m_nextDirectoryPageIndex { 1 }; // We start at 1 so that the high water mark being zero means we've only allocated in the inline directory since the last scavenge.
+    unsigned m_directoryHighWatermark { 0 };
     
     bool m_isInlineDirectoryEligible { true };
     IsoDirectoryPage<Config>* m_firstEligibleDirectory { nullptr };
index 9a11e9e..987ceb5 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -60,11 +60,13 @@ EligibilityResult<Config> IsoHeapImpl<Config>::takeFirstEligible()
     
     for (; m_firstEligibleDirectory; m_firstEligibleDirectory = m_firstEligibleDirectory->next) {
         EligibilityResult<Config> result = m_firstEligibleDirectory->payload.takeFirstEligible();
-        if (result.kind != EligibilityKind::Full)
+        if (result.kind != EligibilityKind::Full) {
+            m_directoryHighWatermark = std::max(m_directoryHighWatermark, m_firstEligibleDirectory->index());
             return result;
+        }
     }
     
-    auto* newDirectory = new IsoDirectoryPage<Config>(*this, m_numDirectoryPages++);
+    auto* newDirectory = new IsoDirectoryPage<Config>(*this, m_nextDirectoryPageIndex++);
     if (m_headDirectory) {
         m_tailDirectory->next = newDirectory;
         m_tailDirectory = newDirectory;
@@ -73,6 +75,7 @@ EligibilityResult<Config> IsoHeapImpl<Config>::takeFirstEligible()
         m_headDirectory = newDirectory;
         m_tailDirectory = newDirectory;
     }
+    m_directoryHighWatermark = newDirectory->index();
     m_firstEligibleDirectory = newDirectory;
     EligibilityResult<Config> result = newDirectory->payload.takeFirstEligible();
     RELEASE_BASSERT(result.kind != EligibilityKind::Full);
@@ -102,17 +105,25 @@ void IsoHeapImpl<Config>::scavenge(Vector<DeferredDecommit>& decommits)
         [&] (auto& directory) {
             directory.scavenge(decommits);
         });
+    m_directoryHighWatermark = 0;
+}
+
+template<typename Config>
+void IsoHeapImpl<Config>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
+{
+    if (!m_directoryHighWatermark)
+        m_inlineDirectory.scavengeToHighWatermark(decommits);
+    for (IsoDirectoryPage<Config>* page = m_headDirectory; page; page = page->next) {
+        if (page->index() >= m_directoryHighWatermark)
+            page->payload.scavengeToHighWatermark(decommits);
+    }
+    m_directoryHighWatermark = 0;
 }
 
 template<typename Config>
 size_t IsoHeapImpl<Config>::freeableMemory()
 {
-    size_t result = 0;
-    forEachDirectory(
-        [&] (auto& directory) {
-            result += directory.freeableMemory();
-        });
-    return result;
+    return m_freeableMemory;
 }
 
 template<typename Config>
@@ -207,5 +218,19 @@ void IsoHeapImpl<Config>::didDecommit(void* ptr, size_t bytes)
 #endif
 }
 
+template<typename Config>
+void IsoHeapImpl<Config>::isNowFreeable(void* ptr, size_t bytes)
+{
+    BUNUSED_PARAM(ptr);
+    m_freeableMemory += bytes;
+}
+
+template<typename Config>
+void IsoHeapImpl<Config>::isNoLongerFreeable(void* ptr, size_t bytes)
+{
+    BUNUSED_PARAM(ptr);
+    m_freeableMemory -= bytes;
+}
+
 } // namespace bmalloc
 
index 443ad91..1cd7b49 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -34,6 +34,9 @@ LargeRange LargeMap::remove(size_t alignment, size_t size)
 
     LargeRange* candidate = m_free.end();
     for (LargeRange* it = m_free.begin(); it != m_free.end(); ++it) {
+        if (!it->isEligibile())
+            continue;
+
         if (it->size() < size)
             continue;
 
@@ -76,4 +79,10 @@ void LargeMap::add(const LargeRange& range)
     m_free.push(merged);
 }
 
+void LargeMap::markAllAsEligibile()
+{
+    for (LargeRange& range : m_free)
+        range.setEligible(true);
+}
+
 } // namespace bmalloc
index fe4f2c6..fc1c9e2 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,6 +40,10 @@ public:
     void add(const LargeRange&);
     LargeRange remove(size_t alignment, size_t);
     Vector<LargeRange>& ranges() { return m_free; }
+    void markAllAsEligibile();
+
+    size_t size() { return m_free.size(); }
+    LargeRange& at(size_t i) { return m_free[i]; }
 
 private:
     Vector<LargeRange> m_free;
index 879c84f..5558255 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -80,16 +80,26 @@ public:
 
     std::pair<LargeRange, LargeRange> split(size_t) const;
 
+    void setEligible(bool eligible) { m_isEligible = eligible; }
+    bool isEligibile() const { return m_isEligible; }
+
     bool operator<(const void* other) const { return begin() < other; }
     bool operator<(const LargeRange& other) const { return begin() < other.begin(); }
 
 private:
     size_t m_startPhysicalSize;
     size_t m_totalPhysicalSize;
+    bool m_isEligible { true };
 };
 
 inline bool canMerge(const LargeRange& a, const LargeRange& b)
 {
+    if (!a.isEligibile() || !b.isEligibile()) {
+        // FIXME: We can make this work if we find it's helpful as long as the merged
+        // range is only eligible if a and b are eligible.
+        return false;
+    }
+
     if (a.end() == b.begin())
         return true;
     
index 360c2af..c06e578 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -38,7 +38,7 @@ ObjectType objectType(HeapKind kind, void* object)
         if (!object)
             return ObjectType::Small;
 
-        std::lock_guard<Mutex> lock(Heap::mutex());
+        std::unique_lock<Mutex> lock(Heap::mutex());
         if (PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).isLarge(lock, object))
             return ObjectType::Large;
     }
index d64499b..66fc75a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -27,6 +27,7 @@
 
 #include "AllIsoHeapsInlines.h"
 #include "AvailableMemory.h"
+#include "BulkDecommit.h"
 #include "Environment.h"
 #include "Heap.h"
 #if BOS(DARWIN)
@@ -41,6 +42,28 @@ namespace bmalloc {
 
 static constexpr bool verbose = false;
 
+struct PrintTime {
+    PrintTime(const char* str) 
+        : string(str)
+    { }
+
+    ~PrintTime()
+    {
+        if (!printed)
+            print();
+    }
+    void print()
+    {
+        if (verbose) {
+            fprintf(stderr, "%s %lfms\n", string, static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock::now() - start).count()) / 1000);
+            printed = true;
+        }
+    }
+    const char* string;
+    std::chrono::steady_clock::time_point start { std::chrono::steady_clock::now() };
+    bool printed { false };
+};
+
 Scavenger::Scavenger(std::lock_guard<Mutex>&)
 {
 #if BOS(DARWIN)
@@ -143,8 +166,22 @@ inline void dumpStats()
     dump("bmalloc-footprint", PerProcess<Scavenger>::get()->footprint());
 }
 
+std::chrono::milliseconds Scavenger::timeSinceLastFullScavenge()
+{
+    std::unique_lock<Mutex> lock(m_mutex);
+    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastFullScavengeTime);
+}
+
+std::chrono::milliseconds Scavenger::timeSinceLastPartialScavenge()
+{
+    std::unique_lock<Mutex> lock(m_mutex);
+    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastPartialScavengeTime);
+}
+
 void Scavenger::scavenge()
 {
+    std::unique_lock<Mutex> lock(m_scavengingMutex);
+
     if (verbose) {
         fprintf(stderr, "--------------------------------\n");
         fprintf(stderr, "--before scavenging--\n");
@@ -152,16 +189,36 @@ void Scavenger::scavenge()
     }
 
     {
-        std::lock_guard<Mutex> lock(Heap::mutex());
-        for (unsigned i = numHeaps; i--;) {
-            if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                continue;
-            PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock);
+        BulkDecommit decommitter;
+
+        {
+            PrintTime printTime("\nfull scavenge under lock time");
+            std::lock_guard<Mutex> lock(Heap::mutex());
+            for (unsigned i = numHeaps; i--;) {
+                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
+                    continue;
+                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter);
+            }
+            decommitter.processEager();
+        }
+
+        {
+            PrintTime printTime("full scavenge lazy decommit time");
+            decommitter.processLazy();
+        }
+
+        {
+            PrintTime printTime("full scavenge mark all as eligible time");
+            std::lock_guard<Mutex> lock(Heap::mutex());
+            for (unsigned i = numHeaps; i--;) {
+                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
+                    continue;
+                PerProcess<PerHeapKind<Heap>>::get()->at(i).markAllLargeAsEligibile(lock);
+            }
         }
     }
-    
+
     {
-        std::lock_guard<Mutex> locker(m_isoScavengeLock);
         RELEASE_BASSERT(!m_deferredDecommits.size());
         PerProcess<AllIsoHeaps>::get()->forEach(
             [&] (IsoHeapImplBase& heap) {
@@ -176,6 +233,78 @@ void Scavenger::scavenge()
         dumpStats();
         fprintf(stderr, "--------------------------------\n");
     }
+
+    {
+        std::unique_lock<Mutex> lock(m_mutex);
+        m_lastFullScavengeTime = std::chrono::steady_clock::now();
+    }
+}
+
+void Scavenger::partialScavenge()
+{
+    std::unique_lock<Mutex> lock(m_scavengingMutex);
+
+    if (verbose) {
+        fprintf(stderr, "--------------------------------\n");
+        fprintf(stderr, "--before partial scavenging--\n");
+        dumpStats();
+    }
+
+    {
+        BulkDecommit decommitter;
+        {
+            PrintTime printTime("\npartialScavenge under lock time");
+            std::lock_guard<Mutex> lock(Heap::mutex());
+            for (unsigned i = numHeaps; i--;) {
+                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
+                    continue;
+                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
+                size_t freeableMemory = heap.freeableMemory(lock);
+                if (freeableMemory < 4 * MB)
+                    continue;
+                heap.scavengeToHighWatermark(lock, decommitter);
+            }
+
+            decommitter.processEager();
+        }
+
+        {
+            PrintTime printTime("partialScavenge lazy decommit time");
+            decommitter.processLazy();
+        }
+
+        {
+            PrintTime printTime("partialScavenge mark all as eligible time");
+            std::lock_guard<Mutex> lock(Heap::mutex());
+            for (unsigned i = numHeaps; i--;) {
+                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
+                    continue;
+                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
+                heap.markAllLargeAsEligibile(lock);
+            }
+        }
+    }
+
+    {
+        RELEASE_BASSERT(!m_deferredDecommits.size());
+        PerProcess<AllIsoHeaps>::get()->forEach(
+            [&] (IsoHeapImplBase& heap) {
+                heap.scavengeToHighWatermark(m_deferredDecommits);
+            });
+        IsoHeapImplBase::finishScavenging(m_deferredDecommits);
+        m_deferredDecommits.shrink(0);
+    }
+
+    if (verbose) {
+        fprintf(stderr, "--after partial scavenging--\n");
+        dumpStats();
+        fprintf(stderr, "--------------------------------\n");
+    }
+
+    {
+        std::unique_lock<Mutex> lock(m_mutex);
+        m_lastPartialScavengeTime = std::chrono::steady_clock::now();
+    }
 }
 
 size_t Scavenger::freeableMemory()
@@ -190,7 +319,6 @@ size_t Scavenger::freeableMemory()
         }
     }
 
-    std::lock_guard<Mutex> locker(m_isoScavengeLock);
     PerProcess<AllIsoHeaps>::get()->forEach(
         [&] (IsoHeapImplBase& heap) {
             result += heap.freeableMemory();
@@ -251,23 +379,62 @@ void Scavenger::threadRunLoop()
         
         setSelfQOSClass();
         
-        {
-            if (verbose) {
-                fprintf(stderr, "--------------------------------\n");
-                fprintf(stderr, "considering running scavenger\n");
-                dumpStats();
-                fprintf(stderr, "--------------------------------\n");
+        if (verbose) {
+            fprintf(stderr, "--------------------------------\n");
+            fprintf(stderr, "considering running scavenger\n");
+            dumpStats();
+            fprintf(stderr, "--------------------------------\n");
+        }
+
+        enum class ScavengeMode {
+            None,
+            Partial,
+            Full
+        };
+
+        size_t freeableMemory = this->freeableMemory();
+
+        ScavengeMode scavengeMode = [&] {
+            auto timeSinceLastFullScavenge = this->timeSinceLastFullScavenge();
+            auto timeSinceLastPartialScavenge = this->timeSinceLastPartialScavenge();
+            auto timeSinceLastScavenge = std::min(timeSinceLastPartialScavenge, timeSinceLastFullScavenge);
+            if (isUnderMemoryPressure() && freeableMemory > 4 * MB && timeSinceLastScavenge > std::chrono::milliseconds(5))
+                return ScavengeMode::Full;
+
+            if (!m_isProbablyGrowing) {
+                if (timeSinceLastFullScavenge < std::chrono::milliseconds(1000))
+                    return ScavengeMode::Partial;
+                return ScavengeMode::Full;
             }
 
-            std::unique_lock<Mutex> lock(m_mutex);
-            if (m_isProbablyGrowing && !isUnderMemoryPressure()) {
-                m_isProbablyGrowing = false;
-                runSoonHoldingLock();
-                continue;
+            if (timeSinceLastScavenge < std::chrono::milliseconds(8000)) {
+                // Rate limit partial scavenges.
+                return ScavengeMode::None;
             }
+            if (freeableMemory < 50 * MB)
+                return ScavengeMode::None;
+            if (5 * freeableMemory < footprint())
+                return ScavengeMode::None;
+            return ScavengeMode::Partial;
+        }();
+
+        m_isProbablyGrowing = false;
+
+        switch (scavengeMode) {
+        case ScavengeMode::None: {
+            runSoon();
+            break;
+        }
+        case ScavengeMode::Partial: {
+            partialScavenge();
+            runSoon();
+            break;
+        }
+        case ScavengeMode::Full: {
+            scavenge();
+            break;
+        }
         }
-
-        scavenge();
     }
 }
 
index 41b39bc..858d66a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -30,6 +30,7 @@
 #include "Mutex.h"
 #include "PerProcess.h"
 #include "Vector.h"
+#include <chrono>
 #include <condition_variable>
 #include <mutex>
 
@@ -85,21 +86,27 @@ private:
     void setSelfQOSClass();
     void setThreadName(const char*);
 
+    std::chrono::milliseconds timeSinceLastFullScavenge();
+    std::chrono::milliseconds timeSinceLastPartialScavenge();
+    void partialScavenge();
+
     std::atomic<State> m_state { State::Sleep };
     size_t m_scavengerBytes { 0 };
     bool m_isProbablyGrowing { false };
     
     Mutex m_mutex;
+    Mutex m_scavengingMutex;
     std::condition_variable_any m_condition;
 
     std::thread m_thread;
+    std::chrono::steady_clock::time_point m_lastFullScavengeTime { std::chrono::steady_clock::now() };
+    std::chrono::steady_clock::time_point m_lastPartialScavengeTime { std::chrono::steady_clock::now() };
     
 #if BOS(DARWIN)
     dispatch_source_t m_pressureHandlerDispatchSource;
     qos_class_t m_requestedScavengerThreadQOSClass { QOS_CLASS_USER_INITIATED };
 #endif
     
-    Mutex m_isoScavengeLock;
     Vector<DeferredDecommit> m_deferredDecommits;
 };
 
index c3199c5..6be85d3 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,9 +35,9 @@ namespace bmalloc {
 
 class SmallLine {
 public:
-    void ref(std::lock_guard<Mutex>&, unsigned char = 1);
-    bool deref(std::lock_guard<Mutex>&);
-    unsigned refCount(std::lock_guard<Mutex>&) { return m_refCount; }
+    void ref(std::unique_lock<Mutex>&, unsigned char = 1);
+    bool deref(std::unique_lock<Mutex>&);
+    unsigned refCount(std::unique_lock<Mutex>&) { return m_refCount; }
     
     char* begin();
     char* end();
@@ -51,13 +51,13 @@ static_assert(
 
 };
 
-inline void SmallLine::ref(std::lock_guard<Mutex>&, unsigned char refCount)
+inline void SmallLine::ref(std::unique_lock<Mutex>&, unsigned char refCount)
 {
     BASSERT(!m_refCount);
     m_refCount = refCount;
 }
 
-inline bool SmallLine::deref(std::lock_guard<Mutex>&)
+inline bool SmallLine::deref(std::unique_lock<Mutex>&)
 {
     BASSERT(m_refCount);
     --m_refCount;
index 919d142..e024c8d 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -38,15 +38,15 @@ class SmallLine;
 
 class SmallPage : public ListNode<SmallPage> {
 public:
-    void ref(std::lock_guard<Mutex>&);
-    bool deref(std::lock_guard<Mutex>&);
-    unsigned refCount(std::lock_guard<Mutex>&) { return m_refCount; }
+    void ref(std::unique_lock<Mutex>&);
+    bool deref(std::unique_lock<Mutex>&);
+    unsigned refCount(std::unique_lock<Mutex>&) { return m_refCount; }
     
     size_t sizeClass() { return m_sizeClass; }
     void setSizeClass(size_t sizeClass) { m_sizeClass = sizeClass; }
     
-    bool hasFreeLines(std::lock_guard<Mutex>&) const { return m_hasFreeLines; }
-    void setHasFreeLines(std::lock_guard<Mutex>&, bool hasFreeLines) { m_hasFreeLines = hasFreeLines; }
+    bool hasFreeLines(std::unique_lock<Mutex>&) const { return m_hasFreeLines; }
+    void setHasFreeLines(std::unique_lock<Mutex>&, bool hasFreeLines) { m_hasFreeLines = hasFreeLines; }
     
     bool hasPhysicalPages() { return m_hasPhysicalPages; }
     void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
@@ -70,14 +70,14 @@ static_assert(
 
 using LineCache = std::array<List<SmallPage>, sizeClassCount>;
 
-inline void SmallPage::ref(std::lock_guard<Mutex>&)
+inline void SmallPage::ref(std::unique_lock<Mutex>&)
 {
     BASSERT(!m_slide);
     ++m_refCount;
     BASSERT(m_refCount);
 }
 
-inline bool SmallPage::deref(std::lock_guard<Mutex>&)
+inline bool SmallPage::deref(std::unique_lock<Mutex>&)
 {
     BASSERT(!m_slide);
     BASSERT(m_refCount);
index 30dd767..be5aca2 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -52,7 +52,7 @@ void* tryLargeZeroedMemalignVirtual(size_t alignment, size_t size, HeapKind kind
 
     void* result;
     {
-        std::lock_guard<Mutex> lock(Heap::mutex());
+        std::unique_lock<Mutex> lock(Heap::mutex());
         result = heap.tryAllocateLarge(lock, alignment, size);
         if (result) {
             // Don't track this as dirty memory that dictates how we drive the scavenger.
@@ -72,7 +72,7 @@ void freeLargeVirtual(void* object, size_t size, HeapKind kind)
 {
     kind = mapToActiveHeapKind(kind);
     Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
-    std::lock_guard<Mutex> lock(Heap::mutex());
+    std::unique_lock<Mutex> lock(Heap::mutex());
     // Balance out the externalDecommit when we allocated the zeroed virtual memory.
     heap.externalCommit(lock, object, size);
     heap.deallocateLarge(lock, object);
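The std::lock_guard-to-std::unique_lock conversions in this file and in the SmallLine/SmallPage headers above are mechanical: these call paths now pass a std::unique_lock<Mutex>&, presumably so that code further down the allocation path can release and retake the heap lock mid-scope (for instance while waiting out an in-flight decommit), which std::lock_guard cannot do. A minimal illustration of the difference using the standard library directly (the function names here are hypothetical):

    #include <mutex>

    std::mutex m;

    void withLockGuard()
    {
        std::lock_guard<std::mutex> lock(m);
        // lock.unlock();  // no such member; the mutex stays held to scope end
    }

    void withUniqueLock()
    {
        std::unique_lock<std::mutex> lock(m);
        lock.unlock();     // drop the lock, e.g. while blocking on another thread
        // ... expensive or blocking work without the lock ...
        lock.lock();       // retake it before touching shared state again
    }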