bmalloc: Pathological madvise churn on the free(malloc(x)) benchmark
author ggaren@apple.com <ggaren@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 28 Feb 2015 00:29:22 +0000 (00:29 +0000)
committer ggaren@apple.com <ggaren@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 28 Feb 2015 00:29:22 +0000 (00:29 +0000)
https://bugs.webkit.org/show_bug.cgi?id=142058

Reviewed by Andreas Kling.

The churn was caused by repeatedly splitting an object that had physical
pages off of an object that did not, and then merging the two back together.
The merge would conservatively forget that we had physical pages, forcing
a new madvise call on the next allocation.
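
For concreteness, the benchmark boils down to a tight allocate/free loop
over a single size; a minimal sketch (the 64 kB size here is illustrative,
not the benchmark's actual parameter):

    // Sketch of the free(malloc(x)) pattern. Before this patch, each free
    // could merge a physically-backed range with a virtual-only neighbor
    // from the VM heap; the merged tag conservatively dropped "has physical
    // pages", so the next malloc re-ran madvise over pages that were in
    // fact already resident.
    #include <cstdlib>

    int main()
    {
        for (int i = 0; i < 1000000; ++i)
            std::free(std::malloc(64 * 1024));
        return 0;
    }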

This patch more strictly segregates objects in the heap from objects in
the VM heap, with these changes:

(1) Objects in the heap are not allowed to merge with objects in the VM
heap, and vice versa -- since that would erase our precise knowledge of
which physical pages had been allocated.

(2) The VM heap is exclusively responsible for allocating and deallocating
physical pages.

(3) The heap free list must consider entries for objects that are in the
VM heap to be invalid, and vice versa. (This condition can arise
because the free list does not eagerly remove items.)

With these changes, we know that any valid object in the heap's free
list already has physical pages, so allocating it never needs to call madvise.
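
A compilable model of the resulting free-list validity rule (the real
check is LargeObject::isValidAndFree, shown in the diff below; Tag here
stands in for bmalloc's boundary tags, and the compactBegin check is
omitted for brevity):

    #include <cstddef>

    enum class Owner : unsigned { VMHeap, Heap };

    struct Tag {
        bool isFree;
        std::size_t size;
        Owner owner;
    };

    // A stale entry fails one of these tests: the object was reallocated
    // (no longer free), split or merged (wrong size), or handed to the
    // other heap (wrong owner) since it was pushed onto the free list.
    bool isValidAndFree(const Tag& tag, Owner expectedOwner, std::size_t expectedSize)
    {
        return tag.isFree
            && tag.size == expectedSize
            && tag.owner == expectedOwner;
    }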

Note that the VM heap -- as before -- might sometimes contain ranges
or pieces of ranges that have physical pages, since we allow splitting
of ranges at granularities smaller than the VM page size. These ranges
can eventually merge with ranges in the heap during scavenging.

* bmalloc.xcodeproj/project.pbxproj:

* bmalloc/BoundaryTag.h:
(bmalloc::BoundaryTag::owner):
(bmalloc::BoundaryTag::setOwner):
(bmalloc::BoundaryTag::initSentinel):
(bmalloc::BoundaryTag::hasPhysicalPages): Deleted.
(bmalloc::BoundaryTag::setHasPhysicalPages): Deleted. Replaced the concept
of "has physical pages" with a bit indicating which heap owns the large
object. This is a more precise concept, since the old bit was really a
Yes / Maybe bit.
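
The new field as it lands in BoundaryTag (condensed from the diff below;
the other tag fields are elided):

    enum class Owner : unsigned { VMHeap, Heap };

    class BoundaryTag {
    public:
        Owner owner() { return m_owner; }
        void setOwner(Owner owner) { m_owner = owner; }

    private:
        Owner m_owner : 1; // was: bool m_hasPhysicalPages : 1
    };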

* bmalloc/Deallocator.cpp:

* bmalloc/FreeList.cpp: Adopted the new owner API.
(bmalloc::FreeList::takeGreedy):
(bmalloc::FreeList::take):
(bmalloc::FreeList::removeInvalidAndDuplicateEntries):
* bmalloc/FreeList.h:
(bmalloc::FreeList::push): Added API for considering the owner when
deciding if a free list entry is valid.

* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap): Adopt new API.

(bmalloc::Heap::scavengeLargeRanges): Scavenge all ranges, with no minimum
size, since some ranges might be able to merge with ranges in the VM heap,
and they won't be allowed to do so until we scavenge them.

(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::allocateMediumPage):
(bmalloc::Heap::allocateLarge): New VM heap API makes this function
simpler, since we always get back physical pages now.

* bmalloc/Heap.h:
* bmalloc/LargeObject.h:
(bmalloc::LargeObject::end):
(bmalloc::LargeObject::owner):
(bmalloc::LargeObject::setOwner):
(bmalloc::LargeObject::isValidAndFree):
(bmalloc::LargeObject::merge): Do not merge objects across heaps since
that causes madvise churn.
(bmalloc::LargeObject::validateSelf):
(bmalloc::LargeObject::init):
(bmalloc::LargeObject::hasPhysicalPages): Deleted.
(bmalloc::LargeObject::setHasPhysicalPages): Deleted. Propagate the Owner API.

* bmalloc/Owner.h: Added.

* bmalloc/SegregatedFreeList.cpp:
(bmalloc::SegregatedFreeList::SegregatedFreeList):
(bmalloc::SegregatedFreeList::insert):
(bmalloc::SegregatedFreeList::takeGreedy):
(bmalloc::SegregatedFreeList::take):
* bmalloc/SegregatedFreeList.h: Propagate the owner API.

* bmalloc/VMAllocate.h:
(bmalloc::vmDeallocatePhysicalPagesSloppy):
(bmalloc::vmAllocatePhysicalPagesSloppy): Clarified these functions and
removed an edge case.
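
A worked example of the sloppy trimming on the deallocate path (a 4 KiB
page size is assumed and the addresses are illustrative). Rounding inward
releases only fully covered pages, and a sub-page request now rounds to
begin >= end and becomes a no-op -- the edge case the old
BASSERT(size >= vmPageSize) used to forbid:

    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    constexpr std::uintptr_t vmPageSize = 4096;

    std::uintptr_t roundUp(std::uintptr_t x) { return (x + vmPageSize - 1) & ~(vmPageSize - 1); }
    std::uintptr_t roundDown(std::uintptr_t x) { return x & ~(vmPageSize - 1); }

    int main()
    {
        std::uintptr_t p = 0x1064;  // un-page-aligned start
        std::uintptr_t size = 8192; // two pages' worth, misaligned
        std::uintptr_t begin = roundUp(p), end = roundDown(p + size);

        if (begin >= end)
            std::printf("nothing page-aligned to release\n");
        else
            std::printf("release [0x%" PRIxPTR ", 0x%" PRIxPTR ")\n", begin, end);
        // Prints: release [0x2000, 0x3000) -- one full page, not two.
        return 0;
    }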

* bmalloc/VMHeap.cpp:
(bmalloc::VMHeap::VMHeap):
* bmalloc/VMHeap.h:
(bmalloc::VMHeap::allocateSmallPage):
(bmalloc::VMHeap::allocateMediumPage):
(bmalloc::VMHeap::allocateLargeObject):
(bmalloc::VMHeap::deallocateLargeObject): Be sure to give each object
a new chance to merge, since it might have been prohibited from merging
before by virtue of not being in the VM heap.
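
The locking pattern here is worth calling out; a minimal model of it
follows (the real code is VMHeap::deallocateLargeObject in the diff below):

    #include <mutex>

    struct Object { bool isFree = false; };

    // Mark the object allocated so no other thread can merge with it or
    // take it, drop the heap lock across the slow physical-page release,
    // then re-mark it free under the lock.
    void deallocate(std::unique_lock<std::mutex>& lock, Object& object)
    {
        object.isFree = false;
        lock.unlock();
        // ... the physical pages would be released here, without the lock ...
        lock.lock();
        object.isFree = true;
    }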

(bmalloc::VMHeap::allocateLargeRange): Deleted.
(bmalloc::VMHeap::deallocateLargeRange): Deleted.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@180797 268f45cc-cd09-0410-ab3c-d52691b4dbfc

15 files changed:
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
Source/bmalloc/bmalloc/BoundaryTag.h
Source/bmalloc/bmalloc/Deallocator.cpp
Source/bmalloc/bmalloc/FreeList.cpp
Source/bmalloc/bmalloc/FreeList.h
Source/bmalloc/bmalloc/Heap.cpp
Source/bmalloc/bmalloc/Heap.h
Source/bmalloc/bmalloc/LargeObject.h
Source/bmalloc/bmalloc/Owner.h [new file with mode: 0644]
Source/bmalloc/bmalloc/SegregatedFreeList.cpp
Source/bmalloc/bmalloc/SegregatedFreeList.h
Source/bmalloc/bmalloc/VMAllocate.h
Source/bmalloc/bmalloc/VMHeap.cpp
Source/bmalloc/bmalloc/VMHeap.h

index a412122..337ec68 100644 (file)
@@ -26,6 +26,7 @@
                14C6216F1A9A9A6200E72293 /* LargeObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C6216E1A9A9A6200E72293 /* LargeObject.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C919C818FCC59F0028DB43 /* BPlatform.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14CC394C18EA8858004AFE34 /* libbmalloc.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 14F271BE18EA3963008C152F /* libbmalloc.a */; };
+               14D2CD9B1AA12CFB00770440 /* Owner.h in Headers */ = {isa = PBXBuildFile; fileRef = 14D2CD9A1AA12CFB00770440 /* Owner.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD788C18F48CAE00950702 /* LargeChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 147AAA8818CD17CE002201E4 /* LargeChunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD788D18F48CC600950702 /* BeginTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F64518B54A700076FA3F /* BeginTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD788E18F48CCD00950702 /* BoundaryTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1485655E18A43AF900ED6942 /* BoundaryTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14C6216E1A9A9A6200E72293 /* LargeObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LargeObject.h; path = bmalloc/LargeObject.h; sourceTree = "<group>"; };
                14C919C818FCC59F0028DB43 /* BPlatform.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BPlatform.h; path = bmalloc/BPlatform.h; sourceTree = "<group>"; };
                14CC394418EA8743004AFE34 /* libmbmalloc.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libmbmalloc.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
+               14D2CD9A1AA12CFB00770440 /* Owner.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Owner.h; path = bmalloc/Owner.h; sourceTree = "<group>"; };
                14D9DB4517F2447100EAAB79 /* FixedVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = FixedVector.h; path = bmalloc/FixedVector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
                14DA32071885F9E6007269E0 /* Line.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Line.h; path = bmalloc/Line.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
                14DA320C18875B09007269E0 /* Heap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Heap.h; path = bmalloc/Heap.h; sourceTree = "<group>"; };
                                143EF9AE1A9FABF6004F5C77 /* FreeList.h */,
                                147AAA8818CD17CE002201E4 /* LargeChunk.h */,
                                14C6216E1A9A9A6200E72293 /* LargeObject.h */,
+                               14D2CD9A1AA12CFB00770440 /* Owner.h */,
                                146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */,
                                146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */,
                        );
                                14DD788E18F48CCD00950702 /* BoundaryTag.h in Headers */,
                                14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
                                14DD78B718F48D6B00950702 /* MediumLine.h in Headers */,
+                               14D2CD9B1AA12CFB00770440 /* Owner.h in Headers */,
                                14DD78B618F48D6B00950702 /* MediumChunk.h in Headers */,
                                14DD78BC18F48D6B00950702 /* SmallLine.h in Headers */,
                                14DD789818F48D4A00950702 /* Allocator.h in Headers */,
index 99c4c50..408a4fa 100644 (file)
@@ -27,6 +27,7 @@
 #define BoundaryTag_h
 
 #include "BAssert.h"
+#include "Owner.h"
 #include "Range.h"
 #include "Sizes.h"
 #include <cstring>
@@ -49,8 +50,8 @@ public:
     bool isEnd() { return m_isEnd; }
     void setEnd(bool isEnd) { m_isEnd = isEnd; }
 
-    bool hasPhysicalPages() { return m_hasPhysicalPages; }
-    void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
+    Owner owner() { return m_owner; }
+    void setOwner(Owner owner) { m_owner = owner; }
     
     bool isMarked() { return m_isMarked; }
     void setMarked(bool isMarked) { m_isMarked = isMarked; }
@@ -84,7 +85,7 @@ private:
 
     bool m_isFree: 1;
     bool m_isEnd: 1;
-    bool m_hasPhysicalPages: 1;
+    Owner m_owner: 1;
     bool m_isMarked: 1;
     unsigned m_compactBegin: compactBeginBits;
     unsigned m_size: sizeBits;
@@ -121,6 +122,7 @@ inline void BoundaryTag::initSentinel()
 {
     setRange(Range(nullptr, largeMin));
     setFree(false);
+    setOwner(Owner::VMHeap);
 }
 
 } // namespace bmalloc
index bc1c4e3..393b677 100644 (file)
@@ -24,7 +24,6 @@
  */
 
 #include "BAssert.h"
-#include "BeginTag.h"
 #include "LargeChunk.h"
 #include "Deallocator.h"
 #include "Heap.h"
index d2dca87..06ecc0e 100644 (file)
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
  */
 
-#include "BeginTag.h"
 #include "LargeChunk.h"
 #include "FreeList.h"
 #include "Vector.h"
 
 namespace bmalloc {
 
-LargeObject FreeList::takeGreedy(size_t size)
+LargeObject FreeList::takeGreedy(Owner owner)
 {
     for (size_t i = m_vector.size(); i-- > 0; ) {
         // We don't eagerly remove items when we merge and/or split ranges,
         // so we need to validate each free list entry before using it.
         LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(m_vector[i].size())) {
+        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
             m_vector.pop(i);
             continue;
         }
 
-        if (largeObject.size() < size)
-            continue;
-
         m_vector.pop(i);
         return largeObject;
     }
@@ -51,7 +47,7 @@ LargeObject FreeList::takeGreedy(size_t size)
     return LargeObject();
 }
 
-LargeObject FreeList::take(size_t size)
+LargeObject FreeList::take(Owner owner, size_t size)
 {
     LargeObject first;
     size_t end = m_vector.size() > freeListSearchDepth ? m_vector.size() - freeListSearchDepth : 0;
@@ -59,7 +55,7 @@ LargeObject FreeList::take(size_t size)
         // We don't eagerly remove items when we merge and/or split ranges, so
         // we need to validate each free list entry before using it.
         LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(m_vector[i].size())) {
+        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
             m_vector.pop(i);
             continue;
         }
@@ -76,7 +72,7 @@ LargeObject FreeList::take(size_t size)
     return first;
 }
 
-LargeObject FreeList::take(size_t alignment, size_t size, size_t unalignedSize)
+LargeObject FreeList::take(Owner owner, size_t alignment, size_t size, size_t unalignedSize)
 {
     BASSERT(isPowerOfTwo(alignment));
     size_t alignmentMask = alignment - 1;
@@ -87,7 +83,7 @@ LargeObject FreeList::take(size_t alignment, size_t size, size_t unalignedSize)
         // We don't eagerly remove items when we merge and/or split ranges, so
         // we need to validate each free list entry before using it.
         LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(m_vector[i].size())) {
+        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
             m_vector.pop(i);
             continue;
         }
@@ -107,11 +103,11 @@ LargeObject FreeList::take(size_t alignment, size_t size, size_t unalignedSize)
     return first;
 }
 
-void FreeList::removeInvalidAndDuplicateEntries()
+void FreeList::removeInvalidAndDuplicateEntries(Owner owner)
 {
     for (size_t i = m_vector.size(); i-- > 0; ) {
         LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(m_vector[i].size())) {
+        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
             m_vector.pop(i);
             continue;
         }
index 914da2d..a409085 100644 (file)
@@ -37,14 +37,14 @@ class FreeList {
 public:
     FreeList();
 
-    void push(const LargeObject&);
+    void push(Owner, const LargeObject&);
 
-    LargeObject take(size_t);
-    LargeObject take(size_t alignment, size_t, size_t unalignedSize);
+    LargeObject take(Owner, size_t);
+    LargeObject take(Owner, size_t alignment, size_t, size_t unalignedSize);
     
-    LargeObject takeGreedy(size_t);
+    LargeObject takeGreedy(Owner);
 
-    void removeInvalidAndDuplicateEntries();
+    void removeInvalidAndDuplicateEntries(Owner);
     
 private:
     Vector<Range> m_vector;
@@ -57,11 +57,11 @@ inline FreeList::FreeList()
 {
 }
 
-inline void FreeList::push(const LargeObject& largeObject)
+inline void FreeList::push(Owner owner, const LargeObject& largeObject)
 {
     BASSERT(largeObject.isFree());
     if (m_vector.size() == m_limit) {
-        removeInvalidAndDuplicateEntries();
+        removeInvalidAndDuplicateEntries(owner);
         m_limit = std::max(m_vector.size() * freeListGrowFactor, freeListSearchDepth);
     }
     m_vector.push(largeObject.range());
index 27ddc93..4aa5946 100644 (file)
@@ -46,7 +46,8 @@ static inline void sleep(std::unique_lock<StaticMutex>& lock, std::chrono::milli
 }
 
 Heap::Heap(std::lock_guard<StaticMutex>&)
-    : m_isAllocatingPages(false)
+    : m_largeObjects(Owner::Heap)
+    , m_isAllocatingPages(false)
     , m_scavenger(*this, &Heap::concurrentScavenge)
 {
     initializeLineMetadata();
@@ -144,10 +145,10 @@ void Heap::scavengeLargeRanges(std::unique_lock<StaticMutex>& lock, std::chrono:
             continue;
         }
 
-        LargeObject largeObject = m_largeObjects.takeGreedy(vmPageSize);
+        LargeObject largeObject = m_largeObjects.takeGreedy();
         if (!largeObject)
             return;
-        m_vmHeap.deallocateLargeRange(lock, largeObject);
+        m_vmHeap.deallocateLargeObject(lock, largeObject);
     }
 }
 
@@ -240,10 +241,7 @@ SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t si
     SmallPage* page = [this, sizeClass]() {
         if (m_smallPages.size())
             return m_smallPages.pop();
-        
-        SmallPage* page = m_vmHeap.allocateSmallPage();
-        vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
-        return page;
+        return m_vmHeap.allocateSmallPage();
     }();
 
     page->setSizeClass(sizeClass);
@@ -265,10 +263,7 @@ MediumPage* Heap::allocateMediumPage(std::lock_guard<StaticMutex>& lock, size_t
     MediumPage* page = [this, sizeClass]() {
         if (m_mediumPages.size())
             return m_mediumPages.pop();
-        
-        MediumPage* page = m_vmHeap.allocateMediumPage();
-        vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
-        return page;
+        return m_vmHeap.allocateMediumPage();
     }();
 
     page->setSizeClass(sizeClass);
@@ -375,12 +370,6 @@ void* Heap::allocateLarge(std::lock_guard<StaticMutex>&, LargeObject& largeObjec
     }
 
     largeObject.setFree(false);
-
-    if (!largeObject.hasPhysicalPages()) {
-        vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
-        largeObject.setHasPhysicalPages(true);
-    }
-    
     return largeObject.begin();
 }
 
@@ -394,7 +383,7 @@ void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t size)
 
     LargeObject largeObject = m_largeObjects.take(size);
     if (!largeObject)
-        largeObject = m_vmHeap.allocateLargeRange(size);
+        largeObject = m_vmHeap.allocateLargeObject(size);
 
     return allocateLarge(lock, largeObject, size);
 }
@@ -415,21 +404,17 @@ void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment,
 
     LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
     if (!largeObject)
-        largeObject = m_vmHeap.allocateLargeRange(alignment, size, unalignedSize);
+        largeObject = m_vmHeap.allocateLargeObject(alignment, size, unalignedSize);
 
     size_t alignmentMask = alignment - 1;
-    if (!test(largeObject.begin(), alignmentMask))
-        return allocateLarge(lock, largeObject, size);
-
-    // Because we allocate VM left-to-right, we must explicitly allocate the
-    // unaligned space on the left in order to break off the aligned space
-    // we want in the middle.
-    size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
-    std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
-    allocateLarge(lock, pair.first, prefixSize);
-    allocateLarge(lock, pair.second, size);
-    deallocateLarge(lock, pair.first);
-    return pair.second.begin();
+    if (test(largeObject.begin(), alignmentMask)) {
+        size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
+        std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
+        m_largeObjects.insert(pair.first);
+        largeObject = pair.second;
+    }
+
+    return allocateLarge(lock, largeObject, size);
 }
 
 void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject& largeObject)
index 6ce22ba..6a82f26 100644 (file)
@@ -86,8 +86,8 @@ private:
 
     void splitLarge(BeginTag*, size_t, EndTag*&, Range&);
     void mergeLarge(BeginTag*&, EndTag*&, Range&);
-    void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& hasPhysicalPages);
-    void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& hasPhysicalPages);
+    void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
+    void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     
     void concurrentScavenge();
     void scavengeSmallPages(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
index 9d883cb..dba8b93 100644 (file)
@@ -46,19 +46,20 @@ public:
     bool operator!() { return !m_object; }
 
     char* begin() const { return static_cast<char*>(m_object); }
+    char* end() const { return begin() + size(); }
     size_t size() const { return m_beginTag->size(); }
     Range range() const { return Range(m_object, size()); }
 
     void setFree(bool) const;
     bool isFree() const;
     
-    bool hasPhysicalPages() const;
-    void setHasPhysicalPages(bool) const;
+    Owner owner() const;
+    void setOwner(Owner) const;
     
     bool isMarked() const;
     void setMarked(bool) const;
     
-    bool isValidAndFree(size_t) const;
+    bool isValidAndFree(Owner, size_t) const;
 
     LargeObject merge() const;
     std::pair<LargeObject, LargeObject> split(size_t) const;
@@ -116,17 +117,17 @@ inline bool LargeObject::isFree() const
     return m_beginTag->isFree();
 }
 
-inline bool LargeObject::hasPhysicalPages() const
+inline Owner LargeObject::owner() const
 {
     validate();
-    return m_beginTag->hasPhysicalPages();
+    return m_beginTag->owner();
 }
 
-inline void LargeObject::setHasPhysicalPages(bool hasPhysicalPages) const
+inline void LargeObject::setOwner(Owner owner) const
 {
     validate();
-    m_beginTag->setHasPhysicalPages(hasPhysicalPages);
-    m_endTag->setHasPhysicalPages(hasPhysicalPages);
+    m_beginTag->setOwner(owner);
+    m_endTag->setOwner(owner);
 }
 
 inline bool LargeObject::isMarked() const
@@ -142,7 +143,7 @@ inline void LargeObject::setMarked(bool isMarked) const
     m_endTag->setMarked(isMarked);
 }
 
-inline bool LargeObject::isValidAndFree(size_t expectedSize) const
+inline bool LargeObject::isValidAndFree(Owner expectedOwner, size_t expectedSize) const
 {
     if (!m_beginTag->isFree())
         return false;
@@ -156,6 +157,9 @@ inline bool LargeObject::isValidAndFree(size_t expectedSize) const
     if (m_beginTag->compactBegin() != BoundaryTag::compactBegin(m_object))
         return false;
 
+    if (m_beginTag->owner() != expectedOwner)
+        return false;
+    
     return true;
 }
 
@@ -164,17 +168,15 @@ inline LargeObject LargeObject::merge() const
     validate();
     BASSERT(isFree());
 
-    bool hasPhysicalPages = m_beginTag->hasPhysicalPages();
-
     BeginTag* beginTag = m_beginTag;
     EndTag* endTag = m_endTag;
     Range range = this->range();
+    Owner owner = this->owner();
     
     EndTag* prev = beginTag->prev();
-    if (prev->isFree()) {
+    if (prev->isFree() && prev->owner() == owner) {
         Range left(range.begin() - prev->size(), prev->size());
         range = Range(left.begin(), left.size() + range.size());
-        hasPhysicalPages &= prev->hasPhysicalPages();
 
         prev->clear();
         beginTag->clear();
@@ -183,12 +185,10 @@ inline LargeObject LargeObject::merge() const
     }
 
     BeginTag* next = endTag->next();
-    if (next->isFree()) {
+    if (next->isFree() && next->owner() == owner) {
         Range right(range.end(), next->size());
         range = Range(range.begin(), range.size() + right.size());
 
-        hasPhysicalPages &= next->hasPhysicalPages();
-
         endTag->clear();
         next->clear();
 
@@ -197,7 +197,7 @@ inline LargeObject LargeObject::merge() const
 
     beginTag->setRange(range);
     beginTag->setFree(true);
-    beginTag->setHasPhysicalPages(hasPhysicalPages);
+    beginTag->setOwner(owner);
     endTag->init(beginTag);
 
     return LargeObject(beginTag, endTag, range.begin());
@@ -238,7 +238,7 @@ inline void LargeObject::validateSelf() const
 
     BASSERT(m_beginTag->size() == m_endTag->size());
     BASSERT(m_beginTag->isFree() == m_endTag->isFree());
-    BASSERT(m_beginTag->hasPhysicalPages() == m_endTag->hasPhysicalPages());
+    BASSERT(m_beginTag->owner() == m_endTag->owner());
     BASSERT(m_beginTag->isMarked() == m_endTag->isMarked());
 }
 
@@ -264,7 +264,7 @@ inline Range LargeObject::init(LargeChunk* chunk)
     BeginTag* beginTag = LargeChunk::beginTag(range.begin());
     beginTag->setRange(range);
     beginTag->setFree(true);
-    beginTag->setHasPhysicalPages(false);
+    beginTag->setOwner(Owner::VMHeap);
 
     EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
     endTag->init(beginTag);
diff --git a/Source/bmalloc/bmalloc/Owner.h b/Source/bmalloc/bmalloc/Owner.h
new file mode 100644 (file)
index 0000000..8554623
--- /dev/null
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef Owner_h
+#define Owner_h
+
+namespace bmalloc {
+
+enum class Owner : unsigned {
+    VMHeap,
+    Heap
+};
+
+} // namespace bmalloc
+
+#endif // Owner_h
index a7ba983..96f6671 100644 (file)
 
 namespace bmalloc {
 
-SegregatedFreeList::SegregatedFreeList()
+SegregatedFreeList::SegregatedFreeList(Owner owner)
+    : m_owner(owner)
 {
     BASSERT(static_cast<size_t>(&select(largeMax) - m_freeLists.begin()) == m_freeLists.size() - 1);
 }
 
 void SegregatedFreeList::insert(const LargeObject& largeObject)
 {
+    BASSERT(largeObject.owner() == m_owner);
     auto& list = select(largeObject.size());
-    list.push(largeObject);
+    list.push(m_owner, largeObject);
 }
 
-LargeObject SegregatedFreeList::takeGreedy(size_t size)
+LargeObject SegregatedFreeList::takeGreedy()
 {
     for (size_t i = m_freeLists.size(); i-- > 0; ) {
-        LargeObject largeObject = m_freeLists[i].takeGreedy(size);
+        LargeObject largeObject = m_freeLists[i].takeGreedy(m_owner);
         if (!largeObject)
             continue;
 
@@ -53,7 +55,7 @@ LargeObject SegregatedFreeList::takeGreedy(size_t size)
 LargeObject SegregatedFreeList::take(size_t size)
 {
     for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
-        LargeObject largeObject = list->take(size);
+        LargeObject largeObject = list->take(m_owner, size);
         if (!largeObject)
             continue;
 
@@ -65,7 +67,7 @@ LargeObject SegregatedFreeList::take(size_t size)
 LargeObject SegregatedFreeList::take(size_t alignment, size_t size, size_t unalignedSize)
 {
     for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
-        LargeObject largeObject = list->take(alignment, size, unalignedSize);
+        LargeObject largeObject = list->take(m_owner, alignment, size, unalignedSize);
         if (!largeObject)
             continue;
 
index 752981a..7479db7 100644 (file)
@@ -33,7 +33,7 @@ namespace bmalloc {
 
 class SegregatedFreeList {
 public:
-    SegregatedFreeList();
+    SegregatedFreeList(Owner);
 
     void insert(const LargeObject&);
 
@@ -54,11 +54,12 @@ public:
     // fit is found. Never returns LargeObject() spuriously. Incrementally
     // removes stale items from the free list while searching. Eagerly removes
     // the returned object from the free list.
-    LargeObject takeGreedy(size_t);
-    
+    LargeObject takeGreedy();
+
 private:
     FreeList& select(size_t);
 
+    Owner m_owner;
     std::array<FreeList, 19> m_freeLists;
 };
 
index 8f3ea5f..31f79b4 100644 (file)
@@ -131,18 +131,16 @@ inline void vmAllocatePhysicalPages(void* p, size_t vmSize)
 #endif
 }
 
-// Trims requests that are un-page-aligned. NOTE: size must be at least a page.
+// Trims requests that are un-page-aligned.
 inline void vmDeallocatePhysicalPagesSloppy(void* p, size_t size)
 {
-    BASSERT(size >= vmPageSize);
-
     char* begin = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p));
     char* end = roundDownToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);
 
-    Range range(begin, end - begin);
-    if (!range)
+    if (begin >= end)
         return;
-    vmDeallocatePhysicalPages(range.begin(), range.size());
+
+    vmDeallocatePhysicalPages(begin, end - begin);
 }
 
 // Expands requests that are un-page-aligned. NOTE: Allocation must proceed left-to-right.
@@ -151,10 +149,10 @@ inline void vmAllocatePhysicalPagesSloppy(void* p, size_t size)
     char* begin = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p));
     char* end = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);
 
-    Range range(begin, end - begin);
-    if (!range)
+    if (begin >= end)
         return;
-    vmAllocatePhysicalPages(range.begin(), range.size());
+
+    vmAllocatePhysicalPages(begin, end - begin);
 }
 
 } // namespace bmalloc
index 2d5136d..26ffb1f 100644 (file)
@@ -33,6 +33,7 @@
 namespace bmalloc {
 
 VMHeap::VMHeap()
+    : m_largeObjects(Owner::VMHeap)
 {
 }
 
index 792963e..d581ab6 100644 (file)
@@ -52,14 +52,15 @@ public:
 
     SmallPage* allocateSmallPage();
     MediumPage* allocateMediumPage();
-    LargeObject allocateLargeRange(size_t);
-    LargeObject allocateLargeRange(size_t alignment, size_t, size_t unalignedSize);
+    LargeObject allocateLargeObject(size_t);
+    LargeObject allocateLargeObject(size_t alignment, size_t, size_t unalignedSize);
 
     void deallocateSmallPage(std::unique_lock<StaticMutex>&, SmallPage*);
     void deallocateMediumPage(std::unique_lock<StaticMutex>&, MediumPage*);
-    void deallocateLargeRange(std::unique_lock<StaticMutex>&, LargeObject&);
+    void deallocateLargeObject(std::unique_lock<StaticMutex>&, LargeObject&);
 
 private:
+    LargeObject allocateLargeObject(LargeObject&, size_t);
     void grow();
 
     Vector<SmallPage*> m_smallPages;
@@ -75,7 +76,9 @@ inline SmallPage* VMHeap::allocateSmallPage()
     if (!m_smallPages.size())
         grow();
 
-    return m_smallPages.pop();
+    SmallPage* page = m_smallPages.pop();
+    vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
+    return page;
 }
 
 inline MediumPage* VMHeap::allocateMediumPage()
@@ -83,10 +86,27 @@ inline MediumPage* VMHeap::allocateMediumPage()
     if (!m_mediumPages.size())
         grow();
 
-    return m_mediumPages.pop();
+    MediumPage* page = m_mediumPages.pop();
+    vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
+    return page;
 }
 
-inline LargeObject VMHeap::allocateLargeRange(size_t size)
+inline LargeObject VMHeap::allocateLargeObject(LargeObject& largeObject, size_t size)
+{
+    BASSERT(largeObject.isFree());
+
+    if (largeObject.size() - size > largeMin) {
+        std::pair<LargeObject, LargeObject> split = largeObject.split(size);
+        largeObject = split.first;
+        m_largeObjects.insert(split.second);
+    }
+
+    vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
+    largeObject.setOwner(Owner::Heap);
+    return largeObject.begin();
+}
+
+inline LargeObject VMHeap::allocateLargeObject(size_t size)
 {
     LargeObject largeObject = m_largeObjects.take(size);
     if (!largeObject) {
@@ -94,10 +114,11 @@ inline LargeObject VMHeap::allocateLargeRange(size_t size)
         largeObject = m_largeObjects.take(size);
         BASSERT(largeObject);
     }
-    return largeObject;
+
+    return allocateLargeObject(largeObject, size);
 }
 
-inline LargeObject VMHeap::allocateLargeRange(size_t alignment, size_t size, size_t unalignedSize)
+inline LargeObject VMHeap::allocateLargeObject(size_t alignment, size_t size, size_t unalignedSize)
 {
     LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
     if (!largeObject) {
@@ -105,7 +126,11 @@ inline LargeObject VMHeap::allocateLargeRange(size_t alignment, size_t size, siz
         largeObject = m_largeObjects.take(alignment, size, unalignedSize);
         BASSERT(largeObject);
     }
-    return largeObject;
+
+    size_t alignmentMask = alignment - 1;
+    if (test(largeObject.begin(), alignmentMask))
+        return allocateLargeObject(largeObject, unalignedSize);
+    return allocateLargeObject(largeObject, size);
 }
 
 inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, SmallPage* page)
@@ -126,20 +151,25 @@ inline void VMHeap::deallocateMediumPage(std::unique_lock<StaticMutex>& lock, Me
     m_mediumPages.push(page);
 }
 
-inline void VMHeap::deallocateLargeRange(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
+inline void VMHeap::deallocateLargeObject(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
 {
-    // Temporarily mark this range as allocated to prevent clients from merging
-    // with it and then reallocating it while we're messing with its physical pages.
-    largeObject.setFree(false);
+    largeObject.setOwner(Owner::VMHeap);
+    
+    // If we couldn't merge with our neighbors before because they were in the
+    // VM heap, we can merge with them now.
+    LargeObject merged = largeObject.merge();
+
+    // Temporarily mark this object as allocated to prevent clients from merging
+    // with it or allocating it while we're messing with its physical pages.
+    merged.setFree(false);
 
     lock.unlock();
-    vmDeallocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
+    vmDeallocatePhysicalPagesSloppy(merged.begin(), merged.size());
     lock.lock();
 
-    largeObject.setFree(true);
-    largeObject.setHasPhysicalPages(false);
+    merged.setFree(true);
 
-    m_largeObjects.insert(largeObject);
+    m_largeObjects.insert(merged);
 }
 
 } // namespace bmalloc