[BMalloc] Scavenger should react to recent memory activity
authormsaboff@apple.com <msaboff@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 19 Mar 2019 17:31:01 +0000 (17:31 +0000)
committermsaboff@apple.com <msaboff@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 19 Mar 2019 17:31:01 +0000 (17:31 +0000)
https://bugs.webkit.org/show_bug.cgi?id=195895

Reviewed by Geoffrey Garen.

This change adds a recently-used bit to objects that are scavenged.  When an object is allocated, that bit is set.
When we scavenge, if the bit is set, we clear it.  If the bit was already clear, we decommit the object.  The timing
of scavenging has been changed as well.  We perform our first scavenge almost immediately (10ms) after bmalloc is
initialized.  Subsequent scavenges are scheduled after a delay computed as a multiple of the time the previous
scavenge took, bounded between a minimum and a maximum.  Through empirical testing, the multiplier, minimum and
maximum are 150x, 100ms and 10,000ms respectively.  For mini-mode, when the JIT is disabled, we use the much more
aggressive values of 50x, 25ms and 500ms.
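
As a condensed sketch, the scavenger thread's new scheduling works as follows (distilled from the Scavenger.cpp
changes below; folding the mini-mode and regular cases into one conditional expression is editorial, not the
literal patch):

    auto start = std::chrono::steady_clock::now();
    scavenge();
    auto timeSpentScavenging = std::chrono::steady_clock::now() - start;

    // Sleep for a multiple of how long the scavenge took, clamped between the minimum and maximum.
    timeSpentScavenging *= m_isInMiniMode ? 50 : 150;
    auto newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
    m_waitTime = m_isInMiniMode
        ? std::min(std::max(newWaitTime, std::chrono::milliseconds(25)), std::chrono::milliseconds(500))
        : std::min(std::max(newWaitTime, std::chrono::milliseconds(100)), std::chrono::milliseconds(10000));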

Eliminated partial scavenging, since this change allows any scavenge to be effectively partial or full based on
the recent use of the objects on the various free lists.
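
The per-object decision is the same wherever the new bit lives (SmallPage, Chunk, LargeRange).  A minimal sketch
of the pattern, with decommit() standing in for the list-specific decommit path:

    if (object->usedSinceLastScavenge()) {
        // Touched since the last pass: spare it this time, but count it so the
        // scavenger reschedules itself (RunSoon) instead of going back to sleep.
        object->clearUsedSinceLastScavenge();
        deferredDecommits++;
        continue;
    }
    // Free and untouched for a full interval: reclaim its physical pages.
    decommit(object);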

* bmalloc/Chunk.h:
(bmalloc::Chunk::usedSinceLastScavenge):
(bmalloc::Chunk::clearUsedSinceLastScavenge):
(bmalloc::Chunk::setUsedSinceLastScavenge):
* bmalloc/Heap.cpp:
(bmalloc::Heap::scavenge):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::scavengeToHighWatermark): Deleted.
* bmalloc/Heap.h:
* bmalloc/IsoDirectory.h:
* bmalloc/IsoDirectoryInlines.h:
(bmalloc::passedNumPages>::takeFirstEligible):
(bmalloc::passedNumPages>::scavenge):
(bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
* bmalloc/IsoHeapImpl.h:
* bmalloc/IsoHeapImplInlines.h:
(bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
* bmalloc/LargeRange.h:
(bmalloc::LargeRange::LargeRange):
(bmalloc::LargeRange::usedSinceLastScavenge):
(bmalloc::LargeRange::clearUsedSinceLastScavenge):
(bmalloc::LargeRange::setUsedSinceLastScavenge):
(): Deleted.
* bmalloc/Scavenger.cpp:
(bmalloc::Scavenger::Scavenger):
(bmalloc::Scavenger::threadRunLoop):
(bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
(bmalloc::Scavenger::partialScavenge): Deleted.
* bmalloc/Scavenger.h:
* bmalloc/SmallPage.h:
(bmalloc::SmallPage::usedSinceLastScavenge):
(bmalloc::SmallPage::clearUsedSinceLastScavenge):
(bmalloc::SmallPage::setUsedSinceLastScavenge):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@243144 268f45cc-cd09-0410-ab3c-d52691b4dbfc

13 files changed:
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc/Chunk.h
Source/bmalloc/bmalloc/Heap.cpp
Source/bmalloc/bmalloc/Heap.h
Source/bmalloc/bmalloc/IsoDirectory.h
Source/bmalloc/bmalloc/IsoDirectoryInlines.h
Source/bmalloc/bmalloc/IsoHeapImpl.h
Source/bmalloc/bmalloc/IsoHeapImplInlines.h
Source/bmalloc/bmalloc/LargeMap.cpp
Source/bmalloc/bmalloc/LargeRange.h
Source/bmalloc/bmalloc/Scavenger.cpp
Source/bmalloc/bmalloc/Scavenger.h
Source/bmalloc/bmalloc/SmallPage.h

index 86886c5..d3da3b4 100644 (file)
@@ -1,3 +1,58 @@
+2019-03-18  Michael Saboff  <msaboff@apple.com>
+
+        [BMalloc] Scavenger should react to recent memory activity
+        https://bugs.webkit.org/show_bug.cgi?id=195895
+
+        Reviewed by Geoffrey Garen.
+
+        This change adds a recently-used bit to objects that are scavenged.  When an object is allocated, that bit is set.
+        When we scavenge, if the bit is set, we clear it.  If the bit was already clear, we decommit the object.  The timing
+        of scavenging has been changed as well.  We perform our first scavenge almost immediately (10ms) after bmalloc is
+        initialized.  Subsequent scavenges are scheduled after a delay computed as a multiple of the time the previous
+        scavenge took, bounded between a minimum and a maximum.  Through empirical testing, the multiplier, minimum and
+        maximum are 150x, 100ms and 10,000ms respectively.  For mini-mode, when the JIT is disabled, we use the much more
+        aggressive values of 50x, 25ms and 500ms.
+
+        Eliminated partial scavenging, since this change allows any scavenge to be effectively partial or full based
+        on the recent use of the objects on the various free lists.
+
+        * bmalloc/Chunk.h:
+        (bmalloc::Chunk::usedSinceLastScavenge):
+        (bmalloc::Chunk::clearUsedSinceLastScavenge):
+        (bmalloc::Chunk::setUsedSinceLastScavenge):
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::allocateSmallChunk):
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::scavengeToHighWatermark): Deleted.
+        * bmalloc/Heap.h:
+        * bmalloc/IsoDirectory.h:
+        * bmalloc/IsoDirectoryInlines.h:
+        (bmalloc::passedNumPages>::takeFirstEligible):
+        (bmalloc::passedNumPages>::scavenge):
+        (bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
+        * bmalloc/IsoHeapImpl.h:
+        * bmalloc/IsoHeapImplInlines.h:
+        (bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
+        * bmalloc/LargeRange.h:
+        (bmalloc::LargeRange::LargeRange):
+        (bmalloc::LargeRange::usedSinceLastScavenge):
+        (bmalloc::LargeRange::clearUsedSinceLastScavenge):
+        (bmalloc::LargeRange::setUsedSinceLastScavenge):
+        (): Deleted.
+        * bmalloc/Scavenger.cpp:
+        (bmalloc::Scavenger::Scavenger):
+        (bmalloc::Scavenger::threadRunLoop):
+        (bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
+        (bmalloc::Scavenger::partialScavenge): Deleted.
+        * bmalloc/Scavenger.h:
+        * bmalloc/SmallPage.h:
+        (bmalloc::SmallPage::usedSinceLastScavenge):
+        (bmalloc::SmallPage::clearUsedSinceLastScavenge):
+        (bmalloc::SmallPage::setUsedSinceLastScavenge):
+
 2019-03-14  Yusuke Suzuki  <ysuzuki@apple.com>
 
         [bmalloc] Add StaticPerProcess for known types to save pages
index 31db6f2..17ebeb5 100644 (file)
@@ -45,6 +45,10 @@ public:
     void deref() { BASSERT(m_refCount); --m_refCount; }
     unsigned refCount() { return m_refCount; }
 
+    bool usedSinceLastScavenge() { return m_usedSinceLastScavenge; }
+    void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
+    void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
+
     size_t offset(void*);
 
     char* address(size_t offset);
@@ -59,6 +63,7 @@ public:
 
 private:
     size_t m_refCount { };
+    bool m_usedSinceLastScavenge: 1;
     List<SmallPage> m_freePages { };
 
     std::array<SmallLine, chunkSize / smallLineSize> m_lines { };
index ef03fd4..2686634 100644 (file)
@@ -175,13 +175,18 @@ void Heap::decommitLargeRange(std::lock_guard<Mutex>&, LargeRange& range, BulkDe
 #endif
 }
 
-void Heap::scavenge(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
+void Heap::scavenge(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter, size_t& deferredDecommits)
 {
     for (auto& list : m_freePages) {
         for (auto* chunk : list) {
             for (auto* page : chunk->freePages()) {
                 if (!page->hasPhysicalPages())
                     continue;
+                if (page->usedSinceLastScavenge()) {
+                    page->clearUsedSinceLastScavenge();
+                    deferredDecommits++;
+                    continue;
+                }
 
                 size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
                 size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
@@ -189,36 +194,36 @@ void Heap::scavenge(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
                 m_footprint -= decommitSize;
                 decommitter.addEager(page->begin()->begin(), pageSize);
                 page->setHasPhysicalPages(false);
-#if ENABLE_PHYSICAL_PAGE_MAP 
+#if ENABLE_PHYSICAL_PAGE_MAP
                 m_physicalPageMap.decommit(page->begin()->begin(), pageSize);
 #endif
             }
         }
     }
-    
+
     for (auto& list : m_chunkCache) {
-        while (!list.isEmpty())
-            deallocateSmallChunk(list.pop(), &list - &m_chunkCache[0]);
+        for (auto iter = list.begin(); iter != list.end(); ) {
+            Chunk* chunk = *iter;
+            if (chunk->usedSinceLastScavenge()) {
+                chunk->clearUsedSinceLastScavenge();
+                deferredDecommits++;
+                ++iter;
+                continue;
+            }
+            ++iter;
+            list.remove(chunk);
+            deallocateSmallChunk(chunk, &list - &m_chunkCache[0]);
+        }
     }
 
     for (LargeRange& range : m_largeFree) {
-        m_highWatermark = std::min(m_highWatermark, static_cast<void*>(range.begin()));
+        if (range.usedSinceLastScavenge()) {
+            range.clearUsedSinceLastScavenge();
+            deferredDecommits++;
+            continue;
+        }
         decommitLargeRange(lock, range, decommitter);
     }
-
-    m_freeableMemory = 0;
-}
-
-void Heap::scavengeToHighWatermark(std::lock_guard<Mutex>& lock, BulkDecommit& decommitter)
-{
-    void* newHighWaterMark = nullptr;
-    for (LargeRange& range : m_largeFree) {
-        if (range.begin() <= m_highWatermark)
-            newHighWaterMark = std::min(newHighWaterMark, static_cast<void*>(range.begin()));
-        else
-            decommitLargeRange(lock, range, decommitter);
-    }
-    m_highWatermark = newHighWaterMark;
 }
 
 void Heap::deallocateLineCache(std::unique_lock<Mutex>&, LineCache& lineCache)
@@ -249,6 +254,7 @@ void Heap::allocateSmallChunk(std::unique_lock<Mutex>& lock, size_t pageClass)
 
         forEachPage(chunk, pageSize, [&](SmallPage* page) {
             page->setHasPhysicalPages(true);
+            page->setUsedSinceLastScavenge();
             page->setHasFreeLines(lock, true);
             chunk->freePages().push(page);
         });
@@ -310,6 +316,7 @@ SmallPage* Heap::allocateSmallPage(std::unique_lock<Mutex>& lock, size_t sizeCla
         Chunk* chunk = m_freePages[pageClass].tail();
 
         chunk->ref();
+        chunk->setUsedSinceLastScavenge();
 
         SmallPage* page = chunk->freePages().pop();
         if (chunk->freePages().isEmpty())
@@ -324,10 +331,11 @@ SmallPage* Heap::allocateSmallPage(std::unique_lock<Mutex>& lock, size_t sizeCla
             m_footprint += physicalSize;
             vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
             page->setHasPhysicalPages(true);
-#if ENABLE_PHYSICAL_PAGE_MAP 
+#if ENABLE_PHYSICAL_PAGE_MAP
             m_physicalPageMap.commit(page->begin()->begin(), pageSize);
 #endif
         }
+        page->setUsedSinceLastScavenge();
 
         return page;
     }();
@@ -585,7 +593,6 @@ void* Heap::tryAllocateLarge(std::unique_lock<Mutex>& lock, size_t alignment, si
     m_freeableMemory -= range.totalPhysicalSize();
 
     void* result = splitAndAllocate(lock, range, alignment, size).begin();
-    m_highWatermark = std::max(m_highWatermark, result);
     return result;
 }
 
index b67cf66..96735c5 100644 (file)
@@ -76,9 +76,8 @@ public:
     size_t largeSize(std::unique_lock<Mutex>&, void*);
     void shrinkLarge(std::unique_lock<Mutex>&, const Range&, size_t);
 
-    void scavenge(std::lock_guard<Mutex>&, BulkDecommit&);
+    void scavenge(std::lock_guard<Mutex>&, BulkDecommit&, size_t& deferredDecommits);
     void scavenge(std::lock_guard<Mutex>&, BulkDecommit&, size_t& freed, size_t goal);
-    void scavengeToHighWatermark(std::lock_guard<Mutex>&, BulkDecommit&);
 
     size_t freeableMemory(std::lock_guard<Mutex>&);
     size_t footprint();
@@ -153,8 +152,6 @@ private:
 #if ENABLE_PHYSICAL_PAGE_MAP 
     PhysicalPageMap m_physicalPageMap;
 #endif
-
-    void* m_highWatermark { nullptr };
 };
 
 inline void Heap::allocateSmallBumpRanges(
index 2dc913b..30df8a6 100644 (file)
@@ -75,7 +75,6 @@ public:
     // Iterate over all empty and committed pages, and put them into the vector. This also records the
     // pages as being decommitted. It's the caller's job to do the actual decommitting.
     void scavenge(Vector<DeferredDecommit>&);
-    void scavengeToHighWatermark(Vector<DeferredDecommit>&);
 
     template<typename Func>
     void forEachCommittedPage(const Func&);
@@ -90,7 +89,6 @@ private:
     Bits<numPages> m_committed;
     std::array<IsoPage<Config>*, numPages> m_pages;
     unsigned m_firstEligible { 0 };
-    unsigned m_highWatermark { 0 };
 };
 
 } // namespace bmalloc
index f388ded..640d82e 100644 (file)
@@ -51,8 +51,6 @@ EligibilityResult<Config> IsoDirectory<Config, passedNumPages>::takeFirstEligibl
     if (pageIndex >= numPages)
         return EligibilityKind::Full;
 
-    m_highWatermark = std::max(pageIndex, m_highWatermark);
-    
     Scavenger& scavenger = *Scavenger::get();
     scavenger.didStartGrowing();
     
@@ -143,18 +141,6 @@ void IsoDirectory<Config, passedNumPages>::scavenge(Vector<DeferredDecommit>& de
         [&] (size_t index) {
             scavengePage(index, decommits);
         });
-    m_highWatermark = 0;
-}
-
-template<typename Config, unsigned passedNumPages>
-void IsoDirectory<Config, passedNumPages>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
-{
-    (m_empty & m_committed).forEachSetBit(
-        [&] (size_t index) {
-            if (index > m_highWatermark)
-                scavengePage(index, decommits);
-        });
-    m_highWatermark = 0;
 }
 
 template<typename Config, unsigned passedNumPages>
index b29a004..5e643dd 100644 (file)
@@ -40,7 +40,6 @@ public:
     virtual ~IsoHeapImplBase();
     
     virtual void scavenge(Vector<DeferredDecommit>&) = 0;
-    virtual void scavengeToHighWatermark(Vector<DeferredDecommit>&) = 0;
     virtual size_t freeableMemory() = 0;
     virtual size_t footprint() = 0;
     
@@ -72,7 +71,6 @@ public:
     void didBecomeEligible(IsoDirectory<Config, IsoDirectoryPage<Config>::numPages>*);
     
     void scavenge(Vector<DeferredDecommit>&) override;
-    void scavengeToHighWatermark(Vector<DeferredDecommit>&) override;
 
     size_t freeableMemory() override;

index ed7e36f..4586f26 100644 (file)
@@ -110,19 +110,6 @@ void IsoHeapImpl<Config>::scavenge(Vector<DeferredDecommit>& decommits)
 }
 
 template<typename Config>
-void IsoHeapImpl<Config>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
-{
-    std::lock_guard<Mutex> locker(this->lock);
-    if (!m_directoryHighWatermark)
-        m_inlineDirectory.scavengeToHighWatermark(decommits);
-    for (IsoDirectoryPage<Config>* page = m_headDirectory; page; page = page->next) {
-        if (page->index() >= m_directoryHighWatermark)
-            page->payload.scavengeToHighWatermark(decommits);
-    }
-    m_directoryHighWatermark = 0;
-}
-
-template<typename Config>
 size_t IsoHeapImpl<Config>::freeableMemory()
 {
     return m_freeableMemory;
index 1cd7b49..310ed8f 100644 (file)
@@ -75,7 +75,8 @@ void LargeMap::add(const LargeRange& range)
 
         merged = merge(merged, m_free.pop(i--));
     }
-    
+
+    merged.setUsedSinceLastScavenge();
     m_free.push(merged);
 }
 
index 5558255..915ce15 100644 (file)
@@ -37,6 +37,8 @@ public:
         : Range()
         , m_startPhysicalSize(0)
         , m_totalPhysicalSize(0)
+        , m_isEligible(true)
+        , m_usedSinceLastScavenge(false)
     {
     }
 
@@ -44,15 +46,19 @@ public:
         : Range(other)
         , m_startPhysicalSize(startPhysicalSize)
         , m_totalPhysicalSize(totalPhysicalSize)
+        , m_isEligible(true)
+        , m_usedSinceLastScavenge(false)
     {
         BASSERT(this->size() >= this->totalPhysicalSize());
         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
     }
 
-    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize)
+    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize, bool usedSinceLastScavenge = false)
         : Range(begin, size)
         , m_startPhysicalSize(startPhysicalSize)
         , m_totalPhysicalSize(totalPhysicalSize)
+        , m_isEligible(true)
+        , m_usedSinceLastScavenge(usedSinceLastScavenge)
     {
         BASSERT(this->size() >= this->totalPhysicalSize());
         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
@@ -83,13 +89,18 @@ public:
     void setEligible(bool eligible) { m_isEligible = eligible; }
     bool isEligibile() const { return m_isEligible; }
 
+    bool usedSinceLastScavenge() const { return m_usedSinceLastScavenge; }
+    void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
+    void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
+
     bool operator<(const void* other) const { return begin() < other; }
     bool operator<(const LargeRange& other) const { return begin() < other.begin(); }
 
 private:
     size_t m_startPhysicalSize;
     size_t m_totalPhysicalSize;
-    bool m_isEligible { true };
+    unsigned m_isEligible: 1;
+    unsigned m_usedSinceLastScavenge: 1;
 };
 
 inline bool canMerge(const LargeRange& a, const LargeRange& b)
@@ -112,19 +123,22 @@ inline bool canMerge(const LargeRange& a, const LargeRange& b)
 inline LargeRange merge(const LargeRange& a, const LargeRange& b)
 {
     const LargeRange& left = std::min(a, b);
+    bool mergedUsedSinceLastScavenge = a.usedSinceLastScavenge() || b.usedSinceLastScavenge();
     if (left.size() == left.startPhysicalSize()) {
         return LargeRange(
             left.begin(),
             a.size() + b.size(),
             a.startPhysicalSize() + b.startPhysicalSize(),
-            a.totalPhysicalSize() + b.totalPhysicalSize());
+            a.totalPhysicalSize() + b.totalPhysicalSize(),
+            mergedUsedSinceLastScavenge);
     }
 
     return LargeRange(
         left.begin(),
         a.size() + b.size(),
         left.startPhysicalSize(),
-        a.totalPhysicalSize() + b.totalPhysicalSize());
+        a.totalPhysicalSize() + b.totalPhysicalSize(),
+        mergedUsedSinceLastScavenge);
 }
 
 inline std::pair<LargeRange, LargeRange> LargeRange::split(size_t leftSize) const
index d668fe0..6a7ca73 100644 (file)
@@ -80,7 +80,8 @@ Scavenger::Scavenger(std::lock_guard<Mutex>&)
     dispatch_resume(m_pressureHandlerDispatchSource);
     dispatch_release(queue);
 #endif
-    
+    m_waitTime = std::chrono::milliseconds(10);
+
     m_thread = std::thread(&threadEntryPoint, this);
 }
 
@@ -177,12 +178,6 @@ std::chrono::milliseconds Scavenger::timeSinceLastFullScavenge()
     return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastFullScavengeTime);
 }
 
-std::chrono::milliseconds Scavenger::timeSinceLastPartialScavenge()
-{
-    std::unique_lock<Mutex> lock(m_mutex);
-    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastPartialScavengeTime);
-}
-
 void Scavenger::enableMiniMode()
 {
     m_isInMiniMode = true; // We just store to this racily. The scavenger thread will eventually pick up the right value.
@@ -205,13 +200,17 @@ void Scavenger::scavenge()
 
         {
             PrintTime printTime("\nfull scavenge under lock time");
+            size_t deferredDecommits = 0;
             std::lock_guard<Mutex> lock(Heap::mutex());
             for (unsigned i = numHeaps; i--;) {
                 if (!isActiveHeapKind(static_cast<HeapKind>(i)))
                     continue;
-                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter);
+                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter, deferredDecommits);
             }
             decommitter.processEager();
+
+            if (deferredDecommits)
+                m_state = State::RunSoon;
         }
 
         {
@@ -252,73 +251,6 @@ void Scavenger::scavenge()
     }
 }
 
-void Scavenger::partialScavenge()
-{
-    std::unique_lock<Mutex> lock(m_scavengingMutex);
-
-    if (verbose) {
-        fprintf(stderr, "--------------------------------\n");
-        fprintf(stderr, "--before partial scavenging--\n");
-        dumpStats();
-    }
-
-    {
-        BulkDecommit decommitter;
-        {
-            PrintTime printTime("\npartialScavenge under lock time");
-            std::lock_guard<Mutex> lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                size_t freeableMemory = heap.freeableMemory(lock);
-                if (freeableMemory < 4 * MB)
-                    continue;
-                heap.scavengeToHighWatermark(lock, decommitter);
-            }
-
-            decommitter.processEager();
-        }
-
-        {
-            PrintTime printTime("partialScavenge lazy decommit time");
-            decommitter.processLazy();
-        }
-
-        {
-            PrintTime printTime("partialScavenge mark all as eligible time");
-            std::lock_guard<Mutex> lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                heap.markAllLargeAsEligibile(lock);
-            }
-        }
-    }
-
-    {
-        RELEASE_BASSERT(!m_deferredDecommits.size());
-        AllIsoHeaps::get()->forEach(
-            [&] (IsoHeapImplBase& heap) {
-                heap.scavengeToHighWatermark(m_deferredDecommits);
-            });
-        IsoHeapImplBase::finishScavenging(m_deferredDecommits);
-        m_deferredDecommits.shrink(0);
-    }
-
-    if (verbose) {
-        fprintf(stderr, "--after partial scavenging--\n");
-        dumpStats();
-        fprintf(stderr, "--------------------------------\n");
-    }
-
-    {
-        std::unique_lock<Mutex> lock(m_mutex);
-        m_lastPartialScavengeTime = std::chrono::steady_clock::now();
-    }
-}
-
 size_t Scavenger::freeableMemory()
 {
     size_t result = 0;
@@ -386,7 +318,7 @@ void Scavenger::threadRunLoop()
         
         if (m_state == State::RunSoon) {
             std::unique_lock<Mutex> lock(m_mutex);
-            m_condition.wait_for(lock, std::chrono::milliseconds(m_isInMiniMode ? 200 : 2000), [&]() { return m_state != State::RunSoon; });
+            m_condition.wait_for(lock, m_waitTime, [&]() { return m_state != State::RunSoon; });
         }
         
         m_state = State::Sleep;
@@ -400,67 +332,31 @@ void Scavenger::threadRunLoop()
             fprintf(stderr, "--------------------------------\n");
         }
 
-        enum class ScavengeMode {
-            None,
-            Partial,
-            Full
-        };
-
-        size_t freeableMemory = this->freeableMemory();
-
-        ScavengeMode scavengeMode = [&] {
-            auto timeSinceLastFullScavenge = this->timeSinceLastFullScavenge();
-            auto timeSinceLastPartialScavenge = this->timeSinceLastPartialScavenge();
-            auto timeSinceLastScavenge = std::min(timeSinceLastPartialScavenge, timeSinceLastFullScavenge);
+        std::chrono::steady_clock::time_point start { std::chrono::steady_clock::now() };
+        
+        scavenge();

-            if (isUnderMemoryPressure() && freeableMemory > 1 * MB && timeSinceLastScavenge > std::chrono::milliseconds(5))
-                return ScavengeMode::Full;
+        auto timeSpentScavenging = std::chrono::steady_clock::now() - start;

-            if (!m_isProbablyGrowing) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(1000) && !m_isInMiniMode)
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
+        if (verbose) {
+            fprintf(stderr, "time spent scavenging %lfms\n",
+                static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(timeSpentScavenging).count()) / 1000);
+        }

-            if (m_isInMiniMode) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(200))
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
+        std::chrono::milliseconds newWaitTime;

-#if BCPU(X86_64)
-            auto partialScavengeInterval = std::chrono::milliseconds(12000);
-#else
-            auto partialScavengeInterval = std::chrono::milliseconds(8000);
-#endif
-            if (timeSinceLastScavenge < partialScavengeInterval) {
-                // Rate limit partial scavenges.
-                return ScavengeMode::None;
-            }
-            if (freeableMemory < 25 * MB)
-                return ScavengeMode::None;
-            if (5 * freeableMemory < footprint())
-                return ScavengeMode::None;
-            return ScavengeMode::Partial;
-        }();
-
-        m_isProbablyGrowing = false;
-
-        switch (scavengeMode) {
-        case ScavengeMode::None: {
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Partial: {
-            partialScavenge();
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Full: {
-            scavenge();
-            break;
-        }
+        if (m_isInMiniMode) {
+            timeSpentScavenging *= 50;
+            newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
+            newWaitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(25)), std::chrono::milliseconds(500));
+        } else {
+            timeSpentScavenging *= 150;
+            newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
+            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(100)), std::chrono::milliseconds(10000));
         }
+
+        if (verbose)
+            fprintf(stderr, "new wait time %lldms\n", m_waitTime.count());
     }
 }
 
index d52a0de..d3efa9f 100644 (file)
@@ -89,11 +89,10 @@ private:
     void setThreadName(const char*);
 
     std::chrono::milliseconds timeSinceLastFullScavenge();
-    std::chrono::milliseconds timeSinceLastPartialScavenge();
-    void partialScavenge();
 
     std::atomic<State> m_state { State::Sleep };
     size_t m_scavengerBytes { 0 };
+    std::chrono::milliseconds m_waitTime;
     bool m_isProbablyGrowing { false };
     bool m_isInMiniMode { false };
     
@@ -103,7 +102,6 @@ private:
 
     std::thread m_thread;
     std::chrono::steady_clock::time_point m_lastFullScavengeTime { std::chrono::steady_clock::now() };
-    std::chrono::steady_clock::time_point m_lastPartialScavengeTime { std::chrono::steady_clock::now() };
     
 #if BOS(DARWIN)
     dispatch_source_t m_pressureHandlerDispatchSource;
index e024c8d..c38bd1e 100644 (file)
@@ -51,6 +51,10 @@ public:
     bool hasPhysicalPages() { return m_hasPhysicalPages; }
     void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
     
+    bool usedSinceLastScavenge() { return m_usedSinceLastScavenge; }
+    void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
+    void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
+
     SmallLine* begin();
 
     unsigned char slide() const { return m_slide; }
@@ -59,6 +63,7 @@ public:
 private:
     unsigned char m_hasFreeLines: 1;
     unsigned char m_hasPhysicalPages: 1;
+    unsigned char m_usedSinceLastScavenge: 1;
     unsigned char m_refCount: 7;
     unsigned char m_sizeClass;
     unsigned char m_slide;