bmalloc should compute its own estimate of its footprint
author     sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Mon, 2 Apr 2018 21:09:45 +0000 (21:09 +0000)
committer  sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Mon, 2 Apr 2018 21:09:45 +0000 (21:09 +0000)
https://bugs.webkit.org/show_bug.cgi?id=184121

Reviewed by Filip Pizlo.

Source/bmalloc:

This patch makes it so that bmalloc keeps track of its own physical
footprint.

Doing this for IsoHeaps is trivial: they allocate and deallocate pages of a
fixed size, one at a time, so IsoHeapImpl just updates a count every time
a page is committed or decommitted.
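
A minimal sketch of that bookkeeping (simplified from the IsoHeapImplInlines.h
changes below; the optional PhysicalPageMap cross-check is omitted):

    template<typename Config>
    void IsoHeapImpl<Config>::didCommit(void*, size_t bytes)
    {
        m_footprint += bytes; // a page just became backed by physical memory
    }

    template<typename Config>
    void IsoHeapImpl<Config>::didDecommit(void*, size_t bytes)
    {
        m_footprint -= bytes; // a page was just returned to the OS
    }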

Making Heap track its footprint was a bit trickier because of how
LargeRange is constructed. Before this patch, LargeRange kept track
of the amount of physical memory at the start of its range. This
patch extends LargeRange to also keep track of the total physical memory
in the range, just for footprint bookkeeping. This was needed to make
Heap's footprint come close to resembling reality, because as we merge and
split large ranges, the start physical size often becomes wildly inaccurate.
The total physical size number stored in LargeRange is still just an
estimate. It's possible that, as ranges are split, the total physical
size divided between the two resulting ranges doesn't resemble reality.
This can happen when the physical memory really all sits at one end of the
split, but we mark it as being split proportionally between the resulting
two ranges. In practice, I did not notice this being a problem. The footprint
estimate tracks reality very closely (in my testing, within less than 1MB for
heaps with sizes upwards of 1GB). The other nice thing about total physical
size is that even if it diverges from reality in terms of how much memory is
actually resident in physical RAM, it stays internally consistent inside
bmalloc's own data structures.
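
As a concrete illustration, this is roughly what happens when the split point
falls past startPhysicalSize (condensed from the LargeRange::split change
below): totalPhysicalSize is divided proportionally between the two halves,
clamped so the left half gets at least startPhysicalSize.

    double ratio = static_cast<double>(leftSize) / static_cast<double>(this->size());
    size_t leftTotalPhysicalSize = std::max(startPhysicalSize(), static_cast<size_t>(ratio * totalPhysicalSize()));
    size_t rightTotalPhysicalSize = totalPhysicalSize() - leftTotalPhysicalSize;

    LargeRange left(begin(), leftSize, startPhysicalSize(), leftTotalPhysicalSize);
    LargeRange right(left.end(), rightSize, 0, rightTotalPhysicalSize);

The two halves always sum to the original totalPhysicalSize, which is what
keeps the footprint internally consistent even when the proportional guess is
wrong.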

The main oversight of this patch is how it deals with Wasm memory. bmalloc
views all Wasm memory as taking up physical space even when it may not be.
Wasm memory starts off as purely virtual pages; only when a page is first
accessed does the OS page it in and cause it to use physical RAM. I opened
a bug to come up with a solution to this problem:
https://bugs.webkit.org/show_bug.cgi?id=184207
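
For context, the footprint is charged when bmalloc commits memory, not when
the OS actually pages it in. For example, in the Heap::splitAndAllocate change
below, any not-yet-committed portion of a vended range is committed and
charged in full, even though the OS only pages memory in on first access:

    if (range.startPhysicalSize() < range.size()) {
        m_scavenger->scheduleIfUnderMemoryPressure(range.size());
        // Charge the whole uncommitted tail to the footprint up front.
        m_footprint += range.size() - range.totalPhysicalSize();
        vmAllocatePhysicalPagesSloppy(range.begin() + range.startPhysicalSize(), range.size() - range.startPhysicalSize());
        range.setStartPhysicalSize(range.size());
        range.setTotalPhysicalSize(range.size());
    }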

* bmalloc.xcodeproj/project.pbxproj:
* bmalloc/AvailableMemory.cpp:
(bmalloc::memoryStatus):
* bmalloc/BPlatform.h:
* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap):
(bmalloc::Heap::freeableMemory):
(bmalloc::Heap::footprint):
(bmalloc::Heap::scavenge):
(bmalloc::Heap::deallocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge):
(bmalloc::Heap::externalCommit):
(bmalloc::Heap::externalDecommit):
* bmalloc/Heap.h:
* bmalloc/IsoDirectory.h:
* bmalloc/IsoDirectoryInlines.h:
(bmalloc::passedNumPages>::takeFirstEligible):
(bmalloc::passedNumPages>::didDecommit):
(bmalloc::passedNumPages>::freeableMemory):
* bmalloc/IsoHeapImpl.h:
* bmalloc/IsoHeapImplInlines.h:
(bmalloc::IsoHeapImpl<Config>::freeableMemory):
(bmalloc::IsoHeapImpl<Config>::footprint):
(bmalloc::IsoHeapImpl<Config>::didCommit):
(bmalloc::IsoHeapImpl<Config>::didDecommit):
* bmalloc/LargeRange.h:
(bmalloc::LargeRange::LargeRange):
(bmalloc::LargeRange::startPhysicalSize const):
(bmalloc::LargeRange::setStartPhysicalSize):
(bmalloc::LargeRange::totalPhysicalSize const):
(bmalloc::LargeRange::setTotalPhysicalSize):
(bmalloc::merge):
(bmalloc::LargeRange::split const):
(bmalloc::LargeRange::physicalSize const): Deleted.
(bmalloc::LargeRange::setPhysicalSize): Deleted.
* bmalloc/PhysicalPageMap.h: Added.
This class is added for debugging purposes. When hacking on the code that
calculates the footprint, it's useful to use this map as a sanity check.
It's just a simple implementation that keeps a set of all the committed pages.

(bmalloc::PhysicalPageMap::commit):
(bmalloc::PhysicalPageMap::decommit):
(bmalloc::PhysicalPageMap::footprint):
(bmalloc::PhysicalPageMap::forEachPhysicalPage):
* bmalloc/Scavenger.cpp:
(bmalloc::dumpStats):
(bmalloc::Scavenger::scavenge):
(bmalloc::Scavenger::freeableMemory):
This is here just for debugging for now, but we should implement an
efficient version of it to help decide when to run the scavenger.

(bmalloc::Scavenger::footprint):
(bmalloc::Scavenger::threadRunLoop):
* bmalloc/Scavenger.h:
* bmalloc/VMAllocate.h:
(bmalloc::physicalPageSizeSloppy):
* bmalloc/VMHeap.cpp:
(bmalloc::VMHeap::tryAllocateLargeChunk):
* bmalloc/bmalloc.cpp:
(bmalloc::api::commitAlignedPhysical):
(bmalloc::api::decommitAlignedPhysical):
* bmalloc/bmalloc.h:

Source/JavaScriptCore:

* heap/IsoAlignedMemoryAllocator.cpp:
(JSC::IsoAlignedMemoryAllocator::~IsoAlignedMemoryAllocator):
(JSC::IsoAlignedMemoryAllocator::tryAllocateAlignedMemory):
(JSC::IsoAlignedMemoryAllocator::freeAlignedMemory):

Source/WTF:

* wtf/FastMalloc.cpp:
(WTF::fastCommitAlignedMemory):
(WTF::fastDecommitAlignedMemory):
* wtf/FastMalloc.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@230187 268f45cc-cd09-0410-ab3c-d52691b4dbfc

23 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/FastMalloc.cpp
Source/WTF/wtf/FastMalloc.h
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
Source/bmalloc/bmalloc/AvailableMemory.cpp
Source/bmalloc/bmalloc/BPlatform.h
Source/bmalloc/bmalloc/Heap.cpp
Source/bmalloc/bmalloc/Heap.h
Source/bmalloc/bmalloc/IsoDirectory.h
Source/bmalloc/bmalloc/IsoDirectoryInlines.h
Source/bmalloc/bmalloc/IsoHeapImpl.h
Source/bmalloc/bmalloc/IsoHeapImplInlines.h
Source/bmalloc/bmalloc/LargeRange.h
Source/bmalloc/bmalloc/PhysicalPageMap.h [new file with mode: 0644]
Source/bmalloc/bmalloc/Scavenger.cpp
Source/bmalloc/bmalloc/Scavenger.h
Source/bmalloc/bmalloc/VMAllocate.h
Source/bmalloc/bmalloc/VMHeap.cpp
Source/bmalloc/bmalloc/bmalloc.cpp
Source/bmalloc/bmalloc/bmalloc.h

index 6e86ad7..36902c1 100644 (file)
@@ -1,3 +1,15 @@
+2018-04-02  Saam Barati  <sbarati@apple.com>
+
+        bmalloc should compute its own estimate of its footprint
+        https://bugs.webkit.org/show_bug.cgi?id=184121
+
+        Reviewed by Filip Pizlo.
+
+        * heap/IsoAlignedMemoryAllocator.cpp:
+        (JSC::IsoAlignedMemoryAllocator::~IsoAlignedMemoryAllocator):
+        (JSC::IsoAlignedMemoryAllocator::tryAllocateAlignedMemory):
+        (JSC::IsoAlignedMemoryAllocator::freeAlignedMemory):
+
 2018-04-02  Mark Lam  <mark.lam@apple.com>
 
         We should not trash the stack pointer on OSR entry.
index 158688a..abddece 100644 (file)
@@ -37,7 +37,7 @@ IsoAlignedMemoryAllocator::~IsoAlignedMemoryAllocator()
     for (unsigned i = 0; i < m_blocks.size(); ++i) {
         void* block = m_blocks[i];
         if (!m_committed[i])
-            OSAllocator::commit(block, MarkedBlock::blockSize, true, false);
+            WTF::fastCommitAlignedMemory(block, MarkedBlock::blockSize);
         fastAlignedFree(block);
     }
 }
@@ -55,7 +55,7 @@ void* IsoAlignedMemoryAllocator::tryAllocateAlignedMemory(size_t alignment, size
     if (m_firstUncommitted < m_blocks.size()) {
         m_committed[m_firstUncommitted] = true;
         void* result = m_blocks[m_firstUncommitted];
-        OSAllocator::commit(result, MarkedBlock::blockSize, true, false);
+        WTF::fastCommitAlignedMemory(result, MarkedBlock::blockSize);
         return result;
     }
     
@@ -80,7 +80,7 @@ void IsoAlignedMemoryAllocator::freeAlignedMemory(void* basePtr)
     unsigned index = iter->value;
     m_committed[index] = false;
     m_firstUncommitted = std::min(index, m_firstUncommitted);
-    OSAllocator::decommit(basePtr, MarkedBlock::blockSize);
+    WTF::fastDecommitAlignedMemory(basePtr, MarkedBlock::blockSize);
 }
 
 void IsoAlignedMemoryAllocator::dump(PrintStream& out) const
index 82e81f3..0fe3360 100644 (file)
@@ -1,3 +1,15 @@
+2018-04-02  Saam Barati  <sbarati@apple.com>
+
+        bmalloc should compute its own estimate of its footprint
+        https://bugs.webkit.org/show_bug.cgi?id=184121
+
+        Reviewed by Filip Pizlo.
+
+        * wtf/FastMalloc.cpp:
+        (WTF::fastCommitAlignedMemory):
+        (WTF::fastDecommitAlignedMemory):
+        * wtf/FastMalloc.h:
+
 2018-03-30  Filip Pizlo  <fpizlo@apple.com>
 
         Strings and Vectors shouldn't do index masking
index e1918f0..986b0f1 100644 (file)
@@ -102,6 +102,8 @@ TryMallocReturnValue tryFastZeroedMalloc(size_t n)
 
 #if defined(USE_SYSTEM_MALLOC) && USE_SYSTEM_MALLOC
 
+#include <wtf/OSAllocator.h>
+
 #if OS(WINDOWS)
 #include <malloc.h>
 #endif
@@ -238,6 +240,16 @@ size_t fastMallocSize(const void* p)
 #endif
 }
 
+void fastCommitAlignedMemory(void* ptr, size_t size)
+{
+    OSAllocator::commit(ptr, size, true, false);
+}
+
+void fastDecommitAlignedMemory(void* ptr, size_t size)
+{
+    OSAllocator::decommit(ptr, size);
+}
+
 } // namespace WTF
 
 #else // defined(USE_SYSTEM_MALLOC) && USE_SYSTEM_MALLOC
@@ -361,6 +373,16 @@ FastMallocStatistics fastMallocStatistics()
     return statistics;
 }
 
+void fastCommitAlignedMemory(void* ptr, size_t size)
+{
+    bmalloc::api::commitAlignedPhysical(ptr, size);
+}
+
+void fastDecommitAlignedMemory(void* ptr, size_t size)
+{
+    bmalloc::api::decommitAlignedPhysical(ptr, size);
+}
+
 } // namespace WTF
 
 #endif // defined(USE_SYSTEM_MALLOC) && USE_SYSTEM_MALLOC
index 6672ce4..41ef69f 100644 (file)
@@ -70,6 +70,9 @@ WTF_EXPORT_PRIVATE size_t fastMallocGoodSize(size_t);
 WTF_EXPORT_PRIVATE void releaseFastMallocFreeMemory();
 WTF_EXPORT_PRIVATE void releaseFastMallocFreeMemoryForThisThread();
 
+WTF_EXPORT_PRIVATE void fastCommitAlignedMemory(void*, size_t);
+WTF_EXPORT_PRIVATE void fastDecommitAlignedMemory(void*, size_t);
+
 struct FastMallocStatistics {
     size_t reservedVMBytes;
     size_t committedVMBytes;
index 3fb805e..79ac3e7 100644 (file)
@@ -1,3 +1,111 @@
+2018-04-02  Saam Barati  <sbarati@apple.com>
+
+        bmalloc should compute its own estimate of its footprint
+        https://bugs.webkit.org/show_bug.cgi?id=184121
+
+        Reviewed by Filip Pizlo.
+
+        This patch makes it so that bmalloc keeps track of its own physical
+        footprint.
+        
+        Doing this for IsoHeaps is trivial. It allocates/deallocates fixed
+        page sizes at a time. IsoHeapImpl just updates a count every time
+        a page is committed/decommitted.
+        
+        Making Heap keep its footprint was a bit trickier because of how
+        LargeRange is constructed. Before this patch, LargeRange kept track
+        of the amount of physical memory at the start of its range. This
+        patch extends large range to also keep track of the total physical memory
+        in the range just for footprint bookkeeping. This was needed to make
+        Heap's footprint come close to resembling reality, because as we merge and split
+        large ranges, the start physical size often becomes wildly inaccurate.
+        The total physical size number stored in LargeRange is still just an
+        estimate. It's possible that as ranges are split, that the total physical
+        size split amongst the two ranges doesn't resemble reality. This can
+        happen when the total physical size is really all in one end of the split,
+        but we mark it as being proportionally split amongst the resulting two
+        ranges. In practice, I did not notice this being a problem. The footprint
+        estimate tracks reality very closely (in my testing, within less than 1MB for
+        heaps with sizes upwards of 1GB). The other nice thing about total physical
+        size is that even if it diverges from reality in terms of how memory is
+        using up physical RAM, it stays internally consistent inside bmalloc's
+        own data structures.
+        
+        The main oversight of this patch is how it deals with Wasm memory. All Wasm
+        memory will be viewed by bmalloc as taking up physical space even when it
+        may not be. Wasm memory starts off as taking up purely virtual pages. When a
+        page is first accessed, only then will the OS page it in and cause it to use
+        physical RAM. I opened a bug to come up with a solution to this problem:
+        https://bugs.webkit.org/show_bug.cgi?id=184207
+
+        * bmalloc.xcodeproj/project.pbxproj:
+        * bmalloc/AvailableMemory.cpp:
+        (bmalloc::memoryStatus):
+        * bmalloc/BPlatform.h:
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::Heap):
+        (bmalloc::Heap::freeableMemory):
+        (bmalloc::Heap::footprint):
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::deallocateSmallChunk):
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::shrinkLarge):
+        (bmalloc::Heap::deallocateLarge):
+        (bmalloc::Heap::externalCommit):
+        (bmalloc::Heap::externalDecommit):
+        * bmalloc/Heap.h:
+        * bmalloc/IsoDirectory.h:
+        * bmalloc/IsoDirectoryInlines.h:
+        (bmalloc::passedNumPages>::takeFirstEligible):
+        (bmalloc::passedNumPages>::didDecommit):
+        (bmalloc::passedNumPages>::freeableMemory):
+        * bmalloc/IsoHeapImpl.h:
+        * bmalloc/IsoHeapImplInlines.h:
+        (bmalloc::IsoHeapImpl<Config>::freeableMemory):
+        (bmalloc::IsoHeapImpl<Config>::footprint):
+        (bmalloc::IsoHeapImpl<Config>::didCommit):
+        (bmalloc::IsoHeapImpl<Config>::didDecommit):
+        * bmalloc/LargeRange.h:
+        (bmalloc::LargeRange::LargeRange):
+        (bmalloc::LargeRange::startPhysicalSize const):
+        (bmalloc::LargeRange::setStartPhysicalSize):
+        (bmalloc::LargeRange::totalPhysicalSize const):
+        (bmalloc::LargeRange::setTotalPhysicalSize):
+        (bmalloc::merge):
+        (bmalloc::LargeRange::split const):
+        (bmalloc::LargeRange::physicalSize const): Deleted.
+        (bmalloc::LargeRange::setPhysicalSize): Deleted.
+        * bmalloc/PhysicalPageMap.h: Added.
+        This class is added for debugging purposes. It's useful when hacking
+        on the code that calculates the footprint to use this map as a sanity
+        check. It's just a simple implementation that has a set of all the committed pages.
+
+        (bmalloc::PhysicalPageMap::commit):
+        (bmalloc::PhysicalPageMap::decommit):
+        (bmalloc::PhysicalPageMap::footprint):
+        (bmalloc::PhysicalPageMap::forEachPhysicalPage):
+        * bmalloc/Scavenger.cpp:
+        (bmalloc::dumpStats):
+        (bmalloc::Scavenger::scavenge):
+        (bmalloc::Scavenger::freeableMemory):
+        This is here just for debugging for now. But we should implement an
+        efficient version of this to use when driving when to run the
+        scavenger.
+
+        (bmalloc::Scavenger::footprint):
+        (bmalloc::Scavenger::threadRunLoop):
+        * bmalloc/Scavenger.h:
+        * bmalloc/VMAllocate.h:
+        (bmalloc::physicalPageSizeSloppy):
+        * bmalloc/VMHeap.cpp:
+        (bmalloc::VMHeap::tryAllocateLargeChunk):
+        * bmalloc/bmalloc.cpp:
+        (bmalloc::api::commitAlignedPhysical):
+        (bmalloc::api::decommitAlignedPhysical):
+        * bmalloc/bmalloc.h:
+
 2018-03-28  Commit Queue  <commit-queue@webkit.org>
 
         Unreviewed, rolling out r230005.
index e32618c..3d4e069 100644 (file)
                4426E2831C839547008EB042 /* BSoftLinking.h in Headers */ = {isa = PBXBuildFile; fileRef = 4426E2821C839547008EB042 /* BSoftLinking.h */; };
                6599C5CC1EC3F15900A2F7BB /* AvailableMemory.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6599C5CA1EC3F15900A2F7BB /* AvailableMemory.cpp */; };
                6599C5CD1EC3F15900A2F7BB /* AvailableMemory.h in Headers */ = {isa = PBXBuildFile; fileRef = 6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               795AB3C7206E0D340074FE76 /* PhysicalPageMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 795AB3C6206E0D250074FE76 /* PhysicalPageMap.h */; settings = {ATTRIBUTES = (Private, ); }; };
                AD0934331FCF406D00E85EB5 /* BCompiler.h in Headers */ = {isa = PBXBuildFile; fileRef = AD0934321FCF405000E85EB5 /* BCompiler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                AD14AD29202529C400890E3B /* ProcessCheck.h in Headers */ = {isa = PBXBuildFile; fileRef = AD14AD27202529A600890E3B /* ProcessCheck.h */; };
                AD14AD2A202529C700890E3B /* ProcessCheck.mm in Sources */ = {isa = PBXBuildFile; fileRef = AD14AD28202529B000890E3B /* ProcessCheck.mm */; };
                4426E2821C839547008EB042 /* BSoftLinking.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BSoftLinking.h; path = bmalloc/darwin/BSoftLinking.h; sourceTree = "<group>"; };
                6599C5CA1EC3F15900A2F7BB /* AvailableMemory.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AvailableMemory.cpp; path = bmalloc/AvailableMemory.cpp; sourceTree = "<group>"; };
                6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AvailableMemory.h; path = bmalloc/AvailableMemory.h; sourceTree = "<group>"; };
+               795AB3C6206E0D250074FE76 /* PhysicalPageMap.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = PhysicalPageMap.h; path = bmalloc/PhysicalPageMap.h; sourceTree = "<group>"; };
                AD0934321FCF405000E85EB5 /* BCompiler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BCompiler.h; path = bmalloc/BCompiler.h; sourceTree = "<group>"; };
                AD14AD27202529A600890E3B /* ProcessCheck.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ProcessCheck.h; path = bmalloc/ProcessCheck.h; sourceTree = "<group>"; };
                AD14AD28202529B000890E3B /* ProcessCheck.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; name = ProcessCheck.mm; path = bmalloc/ProcessCheck.mm; sourceTree = "<group>"; };
                                144BE11E1CA346520099C8C0 /* Object.h */,
                                14105E8318E14374003A106E /* ObjectType.cpp */,
                                1485656018A43DBA00ED6942 /* ObjectType.h */,
+                               795AB3C6206E0D250074FE76 /* PhysicalPageMap.h */,
                                AD14AD27202529A600890E3B /* ProcessCheck.h */,
                                AD14AD28202529B000890E3B /* ProcessCheck.mm */,
                                0F5BF1501F22E1570029D91D /* Scavenger.cpp */,
                                14DD78C918F48D7500950702 /* BInline.h in Headers */,
                                0F7EB8471F9541B000F1ABCB /* IsoDeallocator.h in Headers */,
                                0F7EB8241F9541B000F1ABCB /* IsoHeapImplInlines.h in Headers */,
+                               795AB3C7206E0D340074FE76 /* PhysicalPageMap.h in Headers */,
                                144C07F51C7B70260051BB6A /* LargeMap.h in Headers */,
                                14C8992D1CC578330027A057 /* LargeRange.h in Headers */,
                                140FA00519CE4B6800FFD3C8 /* LineMetadata.h in Headers */,
index fcc9b98..6abbded 100644 (file)
@@ -25,6 +25,9 @@
 
 #include "AvailableMemory.h"
 
+#include "Environment.h"
+#include "PerProcess.h"
+#include "Scavenger.h"
 #include "Sizes.h"
 #include <mutex>
 #if BOS(DARWIN)
@@ -95,12 +98,16 @@ size_t availableMemory()
 #if BPLATFORM(IOS)
 MemoryStatus memoryStatus()
 {
-    task_vm_info_data_t vmInfo;
-    mach_msg_type_number_t vmSize = TASK_VM_INFO_COUNT;
-    
-    size_t memoryFootprint = 0;
-    if (KERN_SUCCESS == task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)(&vmInfo), &vmSize))
-        memoryFootprint = static_cast<size_t>(vmInfo.phys_footprint);
+    size_t memoryFootprint;
+    if (PerProcess<Environment>::get()->isDebugHeapEnabled()) {
+        task_vm_info_data_t vmInfo;
+        mach_msg_type_number_t vmSize = TASK_VM_INFO_COUNT;
+        
+        memoryFootprint = 0;
+        if (KERN_SUCCESS == task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)(&vmInfo), &vmSize))
+            memoryFootprint = static_cast<size_t>(vmInfo.phys_footprint);
+    } else
+        memoryFootprint = PerProcess<Scavenger>::get()->footprint();
 
     double percentInUse = static_cast<double>(memoryFootprint) / static_cast<double>(availableMemory());
     double percentAvailableMemoryInUse = std::min(percentInUse, 1.0);
index 6e8216b..5aefa05 100644 (file)
 #if !defined(BUNUSED_PARAM)
 #define BUNUSED_PARAM(variable) (void)variable
 #endif
+
+/* This is used for debugging when hacking on how bmalloc calculates its physical footprint. */
+#define ENABLE_PHYSICAL_PAGE_MAP 0
index 180244f..1801445 100644 (file)
@@ -59,7 +59,7 @@ Heap::Heap(HeapKind kind, std::lock_guard<StaticMutex>&)
 #if GIGACAGE_ENABLED
         if (usingGigacage()) {
             RELEASE_BASSERT(gigacageBasePtr());
-            m_largeFree.add(LargeRange(gigacageBasePtr(), gigacageSize(), 0));
+            m_largeFree.add(LargeRange(gigacageBasePtr(), gigacageSize(), 0, 0));
         }
 #endif
     }
@@ -138,6 +138,30 @@ void Heap::initializePageMetadata()
         m_pageClasses[i] = (computePageSize(i) - 1) / smallPageSize;
 }
 
+size_t Heap::freeableMemory(std::lock_guard<StaticMutex>&)
+{
+    size_t result = 0;
+    for (auto& list : m_freePages) {
+        for (auto* chunk : list) {
+            for (auto* page : chunk->freePages()) {
+                if (page->hasPhysicalPages())
+                    result += physicalPageSizeSloppy(page->begin()->begin(), pageSize(&list - &m_freePages[0]));
+            }
+        }
+    }
+    
+    for (auto& range : m_largeFree)
+        result += range.totalPhysicalSize();
+
+    return result;
+}
+
+size_t Heap::footprint()
+{
+    BASSERT(!m_debugHeap);
+    return m_footprint;
+}
+
 void Heap::scavenge(std::lock_guard<StaticMutex>&)
 {
     for (auto& list : m_freePages) {
@@ -146,9 +170,13 @@ void Heap::scavenge(std::lock_guard<StaticMutex>&)
                 if (!page->hasPhysicalPages())
                     continue;
 
-                vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(&list - &m_freePages[0]));
-
+                size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
+                m_footprint -= physicalPageSizeSloppy(page->begin()->begin(), pageSize);
+                vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
                 page->setHasPhysicalPages(false);
+#if ENABLE_PHYSICAL_PAGE_MAP 
+                m_physicalPageMap.decommit(page->begin()->begin(), pageSize);
+#endif
             }
         }
     }
@@ -159,9 +187,13 @@ void Heap::scavenge(std::lock_guard<StaticMutex>&)
     }
 
     for (auto& range : m_largeFree) {
+        m_footprint -= range.totalPhysicalSize();
         vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
-
-        range.setPhysicalSize(0);
+        range.setStartPhysicalSize(0);
+        range.setTotalPhysicalSize(0);
+#if ENABLE_PHYSICAL_PAGE_MAP 
+        m_physicalPageMap.decommit(range.begin(), range.size());
+#endif
     }
 }
 
@@ -210,15 +242,18 @@ void Heap::deallocateSmallChunk(Chunk* chunk, size_t pageClass)
     m_objectTypes.set(chunk, ObjectType::Large);
     
     size_t size = m_largeAllocated.remove(chunk);
+    size_t totalPhysicalSize = size;
 
     bool hasPhysicalPages = true;
     forEachPage(chunk, pageSize(pageClass), [&](SmallPage* page) {
-        if (!page->hasPhysicalPages())
+        if (!page->hasPhysicalPages()) {
+            totalPhysicalSize -= physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
             hasPhysicalPages = false;
+        }
     });
-    size_t physicalSize = hasPhysicalPages ? size : 0;
 
-    m_largeFree.add(LargeRange(chunk, size, physicalSize));
+    size_t startPhysicalSize = hasPhysicalPages ? size : 0;
+    m_largeFree.add(LargeRange(chunk, size, startPhysicalSize, totalPhysicalSize));
 }
 
 SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t sizeClass, LineCache& lineCache)
@@ -248,10 +283,14 @@ SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t si
             m_freePages[pageClass].remove(chunk);
 
         if (!page->hasPhysicalPages()) {
-            m_scavenger->scheduleIfUnderMemoryPressure(pageSize(pageClass));
-
-            vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(pageClass));
+            size_t pageSize = bmalloc::pageSize(pageClass);
+            m_scavenger->scheduleIfUnderMemoryPressure(pageSize);
+            m_footprint += physicalPageSizeSloppy(page->begin()->begin(), pageSize);
+            vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
             page->setHasPhysicalPages(true);
+#if ENABLE_PHYSICAL_PAGE_MAP 
+            m_physicalPageMap.commit(page->begin()->begin(), pageSize);
+#endif
         }
 
         return page;
@@ -420,7 +459,7 @@ void Heap::allocateSmallBumpRangesByObject(
     }
 }
 
-LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size)
+LargeRange Heap::splitAndAllocate(std::lock_guard<StaticMutex>&, LargeRange& range, size_t alignment, size_t size)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
 
@@ -441,10 +480,15 @@ LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t si
         next = pair.second;
     }
     
-    if (range.physicalSize() < range.size()) {
+    if (range.startPhysicalSize() < range.size()) {
         m_scavenger->scheduleIfUnderMemoryPressure(range.size());
-        vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
-        range.setPhysicalSize(range.size());
+        m_footprint += range.size() - range.totalPhysicalSize();
+        vmAllocatePhysicalPagesSloppy(range.begin() + range.startPhysicalSize(), range.size() - range.startPhysicalSize());
+        range.setStartPhysicalSize(range.size());
+        range.setTotalPhysicalSize(range.size());
+#if ENABLE_PHYSICAL_PAGE_MAP 
+        m_physicalPageMap.commit(range.begin(), range.size());
+#endif
     }
     
     if (prev)
@@ -459,7 +503,7 @@ LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t si
     return range;
 }
 
-void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
+void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
 
@@ -494,7 +538,7 @@ void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, si
         range = m_largeFree.remove(alignment, size);
     }
 
-    return splitAndAllocate(range, alignment, size).begin();
+    return splitAndAllocate(lock, range, alignment, size).begin();
 }
 
 void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
@@ -514,13 +558,13 @@ size_t Heap::largeSize(std::lock_guard<StaticMutex>&, void* object)
     return m_largeAllocated.get(object);
 }
 
-void Heap::shrinkLarge(std::lock_guard<StaticMutex>&, const Range& object, size_t newSize)
+void Heap::shrinkLarge(std::lock_guard<StaticMutex>& lock, const Range& object, size_t newSize)
 {
     BASSERT(object.size() > newSize);
 
     size_t size = m_largeAllocated.remove(object.begin());
-    LargeRange range = LargeRange(object, size);
-    splitAndAllocate(range, alignment, newSize);
+    LargeRange range = LargeRange(object, size, size);
+    splitAndAllocate(lock, range, alignment, newSize);
 
     m_scavenger->schedule(size);
 }
@@ -531,8 +575,40 @@ void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
         return m_debugHeap->freeLarge(object);
 
     size_t size = m_largeAllocated.remove(object);
-    m_largeFree.add(LargeRange(object, size, size));
+    m_largeFree.add(LargeRange(object, size, size, size));
     m_scavenger->schedule(size);
 }
 
+void Heap::externalCommit(void* ptr, size_t size)
+{
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    externalCommit(lock, ptr, size);
+}
+
+void Heap::externalCommit(std::lock_guard<StaticMutex>&, void* ptr, size_t size)
+{
+    BUNUSED_PARAM(ptr);
+
+    m_footprint += size;
+#if ENABLE_PHYSICAL_PAGE_MAP 
+    m_physicalPageMap.commit(ptr, size);
+#endif
+}
+
+void Heap::externalDecommit(void* ptr, size_t size)
+{
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    externalDecommit(lock, ptr, size);
+}
+
+void Heap::externalDecommit(std::lock_guard<StaticMutex>&, void* ptr, size_t size)
+{
+    BUNUSED_PARAM(ptr);
+
+    m_footprint -= size;
+#if ENABLE_PHYSICAL_PAGE_MAP 
+    m_physicalPageMap.decommit(ptr, size);
+#endif
+}
+
 } // namespace bmalloc
index d8d75be..583e76b 100644 (file)
@@ -37,6 +37,7 @@
 #include "Object.h"
 #include "PerHeapKind.h"
 #include "PerProcess.h"
+#include "PhysicalPageMap.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
 #include "Vector.h"
@@ -76,6 +77,14 @@ public:
 
     void scavenge(std::lock_guard<StaticMutex>&);
 
+    size_t freeableMemory(std::lock_guard<StaticMutex>&);
+    size_t footprint();
+
+    void externalDecommit(void* ptr, size_t);
+    void externalDecommit(std::lock_guard<StaticMutex>&, void* ptr, size_t);
+    void externalCommit(void* ptr, size_t);
+    void externalCommit(std::lock_guard<StaticMutex>&, void* ptr, size_t);
+
 private:
     struct LargeObjectHash {
         static unsigned hash(void* key)
@@ -109,7 +118,7 @@ private:
     void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
 
-    LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t);
+    LargeRange splitAndAllocate(std::lock_guard<StaticMutex>&, LargeRange&, size_t alignment, size_t);
 
     HeapKind m_kind;
     
@@ -128,6 +137,12 @@ private:
 
     Scavenger* m_scavenger { nullptr };
     DebugHeap* m_debugHeap { nullptr };
+
+    size_t m_footprint { 0 };
+
+#if ENABLE_PHYSICAL_PAGE_MAP 
+    PhysicalPageMap m_physicalPageMap;
+#endif
 };
 
 inline void Heap::allocateSmallBumpRanges(
index f8c8f8c..0be7c13 100644 (file)
@@ -76,6 +76,12 @@ public:
     // pages as being decommitted. It's the caller's job to do the actual decommitting.
     void scavenge(Vector<DeferredDecommit>&);
 
+    // This is only here for debugging purposes.
+    // FIXME: Make this fast so we can use it to help determine when to
+    // run the scavenger:
+    // https://bugs.webkit.org/show_bug.cgi?id=184176
+    size_t freeableMemory();
+
     template<typename Func>
     void forEachCommittedPage(const Func&);
     
index c98d013..6c95a93 100644 (file)
@@ -71,8 +71,9 @@ EligibilityResult<Config> IsoDirectory<Config, passedNumPages>::takeFirstEligibl
             vmAllocatePhysicalPages(page, IsoPageBase::pageSize);
             new (page) IsoPage<Config>(*this, pageIndex);
         }
-        
+
         m_committed[pageIndex] = true;
+        this->m_heap.didCommit(page, IsoPageBase::pageSize);
     }
     
     RELEASE_BASSERT(page);
@@ -114,6 +115,7 @@ void IsoDirectory<Config, passedNumPages>::didDecommit(unsigned index)
     // syscall itself (which has to do many hard things).
     std::lock_guard<Mutex> locker(this->m_heap.lock);
     m_committed[index] = false;
+    this->m_heap.didDecommit(m_pages[index], IsoPageBase::pageSize);
 }
 
 template<typename Config, unsigned passedNumPages>
@@ -129,6 +131,16 @@ void IsoDirectory<Config, passedNumPages>::scavenge(Vector<DeferredDecommit>& de
 }
 
 template<typename Config, unsigned passedNumPages>
+size_t IsoDirectory<Config, passedNumPages>::freeableMemory()
+{
+    size_t result = 0;
+    (m_empty & m_committed).forEachSetBit([&] (size_t) {
+        result += IsoPageBase::pageSize;
+    });
+    return result;
+}
+
+template<typename Config, unsigned passedNumPages>
 template<typename Func>
 void IsoDirectory<Config, passedNumPages>::forEachCommittedPage(const Func& func)
 {
index ca6e209..a3275fc 100644 (file)
@@ -28,6 +28,7 @@
 #include "BMalloced.h"
 #include "IsoDirectoryPage.h"
 #include "IsoTLSAllocatorEntry.h"
+#include "PhysicalPageMap.h"
 
 namespace bmalloc {
 
@@ -39,6 +40,8 @@ public:
     virtual ~IsoHeapImplBase();
     
     virtual void scavenge(Vector<DeferredDecommit>&) = 0;
+    virtual size_t freeableMemory() = 0;
+    virtual size_t footprint() = 0;
     
     void scavengeNow();
     static void finishScavenging(Vector<DeferredDecommit>&);
@@ -53,7 +56,7 @@ private:
 };
 
 template<typename Config>
-class IsoHeapImpl : public IsoHeapImplBase {
+class IsoHeapImpl final : public IsoHeapImplBase {
     // Pick a size that makes us most efficiently use the bitvectors.
     static constexpr unsigned numPagesInInlineDirectory = 32;
     
@@ -67,6 +70,14 @@ public:
     void didBecomeEligible(IsoDirectory<Config, IsoDirectoryPage<Config>::numPages>*);
     
     void scavenge(Vector<DeferredDecommit>&) override;
+
+    // This is only here for debugging purposes.
+    // FIXME: Make this fast so we can use it to help determine when to
+    // run the scavenger:
+    // https://bugs.webkit.org/show_bug.cgi?id=184176
+    size_t freeableMemory() override;
+
+    size_t footprint() override;
     
     unsigned allocatorOffset();
     unsigned deallocatorOffset();
@@ -84,6 +95,9 @@ public:
     // This is only accurate when all threads are scavenged. Otherwise it will overestimate.
     template<typename Func>
     void forEachLiveObject(const Func&);
+
+    void didCommit(void* ptr, size_t bytes);
+    void didDecommit(void* ptr, size_t bytes);
     
     // It's almost always the caller's responsibility to grab the lock. This lock comes from the
     // PerProcess<IsoTLSDeallocatorEntry<Config>>::get()->lock. That's pretty weird, and we don't
@@ -96,6 +110,10 @@ private:
     IsoDirectory<Config, numPagesInInlineDirectory> m_inlineDirectory;
     IsoDirectoryPage<Config>* m_headDirectory { nullptr };
     IsoDirectoryPage<Config>* m_tailDirectory { nullptr };
+    size_t m_footprint { 0 };
+#if ENABLE_PHYSICAL_PAGE_MAP
+    PhysicalPageMap m_physicalPageMap;
+#endif
     unsigned m_numDirectoryPages { 0 };
     
     bool m_isInlineDirectoryEligible { true };
index 31637b8..bbdac92 100644 (file)
@@ -104,6 +104,17 @@ void IsoHeapImpl<Config>::scavenge(Vector<DeferredDecommit>& decommits)
 }
 
 template<typename Config>
+size_t IsoHeapImpl<Config>::freeableMemory()
+{
+    size_t result = 0;
+    forEachDirectory(
+        [&] (auto& directory) {
+            result += directory.freeableMemory();
+        });
+    return result;
+}
+
+template<typename Config>
 unsigned IsoHeapImpl<Config>::allocatorOffset()
 {
     return m_allocator.offset();
@@ -166,5 +177,34 @@ void IsoHeapImpl<Config>::forEachLiveObject(const Func& func)
         });
 }
 
+template<typename Config>
+size_t IsoHeapImpl<Config>::footprint()
+{
+#if ENABLE_PHYSICAL_PAGE_MAP
+    RELEASE_BASSERT(m_footprint == m_physicalPageMap.footprint());
+#endif
+    return m_footprint;
+}
+
+template<typename Config>
+void IsoHeapImpl<Config>::didCommit(void* ptr, size_t bytes)
+{
+    BUNUSED_PARAM(ptr);
+    m_footprint += bytes;
+#if ENABLE_PHYSICAL_PAGE_MAP
+    m_physicalPageMap.commit(ptr, bytes);
+#endif
+}
+
+template<typename Config>
+void IsoHeapImpl<Config>::didDecommit(void* ptr, size_t bytes)
+{
+    BUNUSED_PARAM(ptr);
+    m_footprint -= bytes;
+#if ENABLE_PHYSICAL_PAGE_MAP
+    m_physicalPageMap.decommit(ptr, bytes);
+#endif
+}
+
 } // namespace bmalloc
 
index 231546f..2b81c25 100644 (file)
@@ -35,26 +35,48 @@ class LargeRange : public Range {
 public:
     LargeRange()
         : Range()
-        , m_physicalSize(0)
+        , m_startPhysicalSize(0)
+        , m_totalPhysicalSize(0)
     {
     }
 
-    LargeRange(const Range& other, size_t physicalSize)
+    LargeRange(const Range& other, size_t startPhysicalSize, size_t totalPhysicalSize)
         : Range(other)
-        , m_physicalSize(physicalSize)
+        , m_startPhysicalSize(startPhysicalSize)
+        , m_totalPhysicalSize(totalPhysicalSize)
     {
+        BASSERT(size() >= this->totalPhysicalSize());
+        BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
     }
 
-    LargeRange(void* begin, size_t size, size_t physicalSize)
+    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize)
         : Range(begin, size)
-        , m_physicalSize(physicalSize)
+        , m_startPhysicalSize(startPhysicalSize)
+        , m_totalPhysicalSize(totalPhysicalSize)
     {
+        BASSERT(this->size() >= this->totalPhysicalSize());
+        BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
     }
 
-    // Returns a lower bound on physical size. Ranges that span non-physical
-    // fragments only remember the physical size of the first fragment.
-    size_t physicalSize() const { return m_physicalSize; }
-    void setPhysicalSize(size_t physicalSize) { m_physicalSize = physicalSize; }
+    // Returns a lower bound on physical size at the start of the range. Ranges that
+    // span non-physical fragments use this number to remember the physical size of
+    // the first fragment.
+    size_t startPhysicalSize() const { return m_startPhysicalSize; }
+    void setStartPhysicalSize(size_t startPhysicalSize) { m_startPhysicalSize = startPhysicalSize; }
+
+    // This is accurate in the sense that if you take a range A and split it N ways
+    // and sum totalPhysicalSize over each of the N splits, you'll end up with A's
+    // totalPhysicalSize. This means if you take a LargeRange out of a LargeMap, split it,
+    // then insert the subsequent two ranges back into the LargeMap, the sum of the
+    // totalPhysicalSize of each LargeRange in the LargeMap will stay constant. This
+    // property is not true of startPhysicalSize. This invariant about totalPhysicalSize
+    // is good enough to get an accurate footprint estimate for memory used in bmalloc.
+    // The reason this is just an estimate is that splitting LargeRanges may lead to this
+    // number being rebalanced in arbitrary ways between the two resulting ranges. This
+    // is why the footprint is just an estimate. In practice, this arbitrary rebalance
+    // doesn't really affect accuracy.
+    size_t totalPhysicalSize() const { return m_totalPhysicalSize; }
+    void setTotalPhysicalSize(size_t totalPhysicalSize) { m_totalPhysicalSize = totalPhysicalSize; }
 
     std::pair<LargeRange, LargeRange> split(size_t) const;
 
@@ -62,7 +84,8 @@ public:
     bool operator<(const LargeRange& other) const { return begin() < other.begin(); }
 
 private:
-    size_t m_physicalSize;
+    size_t m_startPhysicalSize;
+    size_t m_totalPhysicalSize;
 };
 
 inline bool canMerge(const LargeRange& a, const LargeRange& b)
@@ -79,31 +102,40 @@ inline bool canMerge(const LargeRange& a, const LargeRange& b)
 inline LargeRange merge(const LargeRange& a, const LargeRange& b)
 {
     const LargeRange& left = std::min(a, b);
-    if (left.size() == left.physicalSize()) {
+    if (left.size() == left.startPhysicalSize()) {
         return LargeRange(
             left.begin(),
             a.size() + b.size(),
-            a.physicalSize() + b.physicalSize());
+            a.startPhysicalSize() + b.startPhysicalSize(),
+            a.totalPhysicalSize() + b.totalPhysicalSize());
     }
 
     return LargeRange(
         left.begin(),
         a.size() + b.size(),
-        left.physicalSize());
+        left.startPhysicalSize(),
+        a.totalPhysicalSize() + b.totalPhysicalSize());
 }
 
-inline std::pair<LargeRange, LargeRange> LargeRange::split(size_t size) const
+inline std::pair<LargeRange, LargeRange> LargeRange::split(size_t leftSize) const
 {
-    BASSERT(size <= this->size());
-    
-    if (size <= physicalSize()) {
-        LargeRange left(begin(), size, size);
-        LargeRange right(left.end(), this->size() - size, physicalSize() - size);
+    BASSERT(leftSize <= this->size());
+    size_t rightSize = this->size() - leftSize;
+
+    if (leftSize <= startPhysicalSize()) {
+        BASSERT(totalPhysicalSize() >= leftSize);
+        LargeRange left(begin(), leftSize, leftSize, leftSize);
+        LargeRange right(left.end(), rightSize, startPhysicalSize() - leftSize, totalPhysicalSize() - leftSize);
         return std::make_pair(left, right);
     }
 
-    LargeRange left(begin(), size, physicalSize());
-    LargeRange right(left.end(), this->size() - size, 0);
+    double ratio = static_cast<double>(leftSize) / static_cast<double>(this->size());
+    size_t leftTotalPhysicalSize = static_cast<size_t>(ratio * totalPhysicalSize());
+    leftTotalPhysicalSize = std::max(startPhysicalSize(), leftTotalPhysicalSize);
+    size_t rightTotalPhysicalSize = totalPhysicalSize() - leftTotalPhysicalSize;
+
+    LargeRange left(begin(), leftSize, startPhysicalSize(), leftTotalPhysicalSize);
+    LargeRange right(left.end(), rightSize, 0, rightTotalPhysicalSize);
     return std::make_pair(left, right);
 }
 
diff --git a/Source/bmalloc/bmalloc/PhysicalPageMap.h b/Source/bmalloc/bmalloc/PhysicalPageMap.h
new file mode 100644 (file)
index 0000000..c509ec1
--- /dev/null
@@ -0,0 +1,75 @@
+/*
+ * Copyright (C) 2018 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#if ENABLE_PHYSICAL_PAGE_MAP 
+
+#include "VMAllocate.h"
+#include <unordered_set>
+
+namespace bmalloc {
+
+// This class is useful for debugging bmalloc's footprint.
+class PhysicalPageMap {
+public:
+
+    void commit(void* ptr, size_t size)
+    {
+        forEachPhysicalPage(ptr, size, [&] (void* ptr) {
+            m_physicalPages.insert(ptr);
+        });
+    }
+
+    void decommit(void* ptr, size_t size)
+    {
+        forEachPhysicalPage(ptr, size, [&] (void* ptr) {
+            m_physicalPages.erase(ptr);
+        });
+    }
+
+    size_t footprint()
+    {
+        return static_cast<size_t>(m_physicalPages.size()) * vmPageSizePhysical();
+    }
+
+private:
+    template <typename F>
+    void forEachPhysicalPage(void* ptr, size_t size, F f)
+    {
+        char* begin = roundUpToMultipleOf(vmPageSizePhysical(), static_cast<char*>(ptr));
+        char* end = roundDownToMultipleOf(vmPageSizePhysical(), static_cast<char*>(ptr) + size);
+        while (begin < end) {
+            f(begin);
+            begin += vmPageSizePhysical();
+        }
+    }
+
+    std::unordered_set<void*> m_physicalPages;
+};
+
+} // namespace bmalloc
+
+#endif // ENABLE_PHYSICAL_PAGE_MAP 
index 030cfcb..670725b 100644 (file)
 
 #include "AllIsoHeapsInlines.h"
 #include "AvailableMemory.h"
+#include "Environment.h"
 #include "Heap.h"
+#if BOS(DARWIN)
+#import <dispatch/dispatch.h>
+#import <mach/host_info.h>
+#import <mach/mach.h>
+#import <mach/mach_error.h>
+#endif
 #include <thread>
 
 namespace bmalloc {
 
+static constexpr bool verbose = false;
+
 Scavenger::Scavenger(std::lock_guard<StaticMutex>&)
 {
 #if BOS(DARWIN)
@@ -115,8 +124,33 @@ void Scavenger::schedule(size_t bytes)
     runSoonHoldingLock();
 }
 
+inline void dumpStats()
+{
+    auto dump = [] (auto* string, auto size) {
+        fprintf(stderr, "%s %zuMB\n", string, static_cast<size_t>(size) / 1024 / 1024);
+    };
+
+#if BOS(DARWIN)
+    task_vm_info_data_t vmInfo;
+    mach_msg_type_number_t vmSize = TASK_VM_INFO_COUNT;
+    if (KERN_SUCCESS == task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)(&vmInfo), &vmSize)) {
+        dump("phys_footrpint", vmInfo.phys_footprint);
+        dump("internal+compressed", vmInfo.internal + vmInfo.compressed);
+    }
+#endif
+
+    dump("bmalloc-freeable", PerProcess<Scavenger>::get()->freeableMemory());
+    dump("bmalloc-footprint", PerProcess<Scavenger>::get()->footprint());
+}
+
 void Scavenger::scavenge()
 {
+    if (verbose) {
+        fprintf(stderr, "--------------------------------\n");
+        fprintf(stderr, "--before scavenging--\n");
+        dumpStats();
+    }
+
     {
         std::lock_guard<StaticMutex> lock(Heap::mutex());
         for (unsigned i = numHeaps; i--;) {
@@ -126,14 +160,62 @@ void Scavenger::scavenge()
         }
     }
     
+    {
+        std::lock_guard<Mutex> locker(m_isoScavengeLock);
+        RELEASE_BASSERT(!m_deferredDecommits.size());
+        PerProcess<AllIsoHeaps>::get()->forEach(
+            [&] (IsoHeapImplBase& heap) {
+                heap.scavenge(m_deferredDecommits);
+            });
+        IsoHeapImplBase::finishScavenging(m_deferredDecommits);
+        m_deferredDecommits.shrink(0);
+    }
+
+    if (verbose) {
+        fprintf(stderr, "--after scavenging--\n");
+        dumpStats();
+        fprintf(stderr, "--------------------------------\n");
+    }
+}
+
+size_t Scavenger::freeableMemory()
+{
+    size_t result = 0;
+    {
+        std::lock_guard<StaticMutex> lock(Heap::mutex());
+        for (unsigned i = numHeaps; i--;) {
+            if (!isActiveHeapKind(static_cast<HeapKind>(i)))
+                continue;
+            result += PerProcess<PerHeapKind<Heap>>::get()->at(i).freeableMemory(lock);
+        }
+    }
+
     std::lock_guard<Mutex> locker(m_isoScavengeLock);
-    RELEASE_BASSERT(!m_deferredDecommits.size());
     PerProcess<AllIsoHeaps>::get()->forEach(
         [&] (IsoHeapImplBase& heap) {
-            heap.scavenge(m_deferredDecommits);
+            result += heap.freeableMemory();
         });
-    IsoHeapImplBase::finishScavenging(m_deferredDecommits);
-    m_deferredDecommits.shrink(0);
+
+    return result;
+}
+
+size_t Scavenger::footprint()
+{
+    RELEASE_BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
+
+    size_t result = 0;
+    for (unsigned i = numHeaps; i--;) {
+        if (!isActiveHeapKind(static_cast<HeapKind>(i)))
+            continue;
+        result += PerProcess<PerHeapKind<Heap>>::get()->at(i).footprint();
+    }
+
+    PerProcess<AllIsoHeaps>::get()->forEach(
+        [&] (IsoHeapImplBase& heap) {
+            result += heap.footprint();
+        });
+
+    return result;
 }
 
 void Scavenger::threadEntryPoint(Scavenger* scavenger)
@@ -169,6 +251,13 @@ void Scavenger::threadRunLoop()
         setSelfQOSClass();
         
         {
+            if (verbose) {
+                fprintf(stderr, "--------------------------------\n");
+                fprintf(stderr, "considering running scavenger\n");
+                dumpStats();
+                fprintf(stderr, "--------------------------------\n");
+            }
+
             std::unique_lock<Mutex> lock(m_mutex);
             if (m_isProbablyGrowing && !isUnderMemoryPressure()) {
                 m_isProbablyGrowing = false;
@@ -176,7 +265,7 @@ void Scavenger::threadRunLoop()
                 continue;
             }
         }
-        
+
         scavenge();
     }
 }
index a5df58d..dd2422b 100644 (file)
@@ -62,6 +62,15 @@ public:
     BEXPORT void scheduleIfUnderMemoryPressure(size_t bytes);
     BEXPORT void schedule(size_t bytes);
 
+    // This is only here for debugging purposes.
+    // FIXME: Make this fast so we can use it to help determine when to
+    // run the scavenger:
+    // https://bugs.webkit.org/show_bug.cgi?id=184176
+    size_t freeableMemory();
+    // This doesn't do any synchronization, so it might return a slightly out of date answer.
+    // It's unlikely, but possible.
+    size_t footprint();
+
 private:
     enum class State { Sleep, Run, RunSoon };
     
@@ -74,7 +83,7 @@ private:
     void threadRunLoop();
     
     void setSelfQOSClass();
-    
+
     std::atomic<State> m_state { State::Sleep };
     size_t m_scavengerBytes { 0 };
     bool m_isProbablyGrowing { false };
index 713e9c9..23ebc20 100644 (file)
@@ -219,6 +219,18 @@ inline void vmAllocatePhysicalPages(void* p, size_t vmSize)
 #endif
 }
 
+// Returns how much memory you would commit/decommit had you called
+// vmDeallocate/AllocatePhysicalPagesSloppy with p and size.
+inline size_t physicalPageSizeSloppy(void* p, size_t size)
+{
+    char* begin = roundUpToMultipleOf(vmPageSizePhysical(), static_cast<char*>(p));
+    char* end = roundDownToMultipleOf(vmPageSizePhysical(), static_cast<char*>(p) + size);
+
+    if (begin >= end)
+        return 0;
+    return end - begin;
+}
+
 // Trims requests that are un-page-aligned.
 inline void vmDeallocatePhysicalPagesSloppy(void* p, size_t size)
 {
index 785103d..24f6f49 100644 (file)
@@ -57,7 +57,7 @@ LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size)
     PerProcess<Zone>::get()->addRange(Range(chunk->bytes(), size));
 #endif
 
-    return LargeRange(chunk->bytes(), size, 0);
+    return LargeRange(chunk->bytes(), size, 0, 0);
 }
 
 } // namespace bmalloc
index 4e4b824..92ea3b4 100644 (file)
@@ -91,5 +91,21 @@ void setScavengerThreadQOSClass(qos_class_t overrideClass)
 }
 #endif
 
+void commitAlignedPhysical(void* object, size_t size, HeapKind kind)
+{
+    vmValidatePhysical(object, size);
+    vmAllocatePhysicalPages(object, size);
+    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
+    heap.externalCommit(object, size);
+}
+
+void decommitAlignedPhysical(void* object, size_t size, HeapKind kind)
+{
+    vmValidatePhysical(object, size);
+    vmDeallocatePhysicalPages(object, size);
+    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
+    heap.externalDecommit(object, size);
+}
+
 } } // namespace bmalloc::api
 
index 7fd6796..8bf7fa9 100644 (file)
@@ -94,6 +94,11 @@ inline void scavengeThisThread()
 BEXPORT void scavenge();
 
 BEXPORT bool isEnabled(HeapKind kind = HeapKind::Primary);
+
+// ptr must be aligned to vmPageSizePhysical and size must be divisible 
+// by vmPageSizePhysical.
+BEXPORT void decommitAlignedPhysical(void* object, size_t, HeapKind = HeapKind::Primary);
+BEXPORT void commitAlignedPhysical(void* object, size_t, HeapKind = HeapKind::Primary);
     
 inline size_t availableMemory()
 {