bmalloc: Merge the large and xlarge allocators
author ggaren@apple.com <ggaren@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 19 Apr 2016 23:36:20 +0000 (23:36 +0000)
committer ggaren@apple.com <ggaren@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 19 Apr 2016 23:36:20 +0000 (23:36 +0000)
https://bugs.webkit.org/show_bug.cgi?id=156734

Reviewed by Andreas Kling.

This gives us better defense against worst-case memory usage:

                                      Baseline                Patch                    Δ
    Peak Memory:
        nimlang                      198,132kB            181,468kB      ^ 1.09x smaller

It also eliminates inline metadata for large objects, fixing the
regression introduced in r198675, and more:

    run-malloc-benchmarks Baseline:~/OpenSource/WebKitBuildBaseline/Release/ Patch:~/OpenSource/WebKitBuild/Release/

                                                          Baseline                          Patch                              Δ
    Memory at End:
        big                                               10,880kB                        3,328kB                ^ 3.27x smaller
        facebook                                           3,112kB                        2,868kB                ^ 1.09x smaller
        fragment --parallel                                1,848kB                          760kB                ^ 2.43x smaller
        fragment_iterate --parallel                        4,908kB                          776kB                ^ 6.32x smaller
        big --parallel                                    48,076kB                       11,892kB                ^ 4.04x smaller

Overall memory use looks OK:

    run-malloc-benchmarks --memory_warning Baseline:~/OpenSource/WebKitBuildBaseline/Release/ Patch:~/OpenSource/WebKitBuild/Release/

                                                Baseline                               Patch                                   Δ
    Memory at End:
        <arithmetic mean>                       13,992kB                            13,987kB                      ^ 1.0x smaller

Overall throughput looks OK:

    run-malloc-benchmarks Baseline:~/OpenSource/WebKitBuildBaseline/Release/ Patch:~/OpenSource/WebKitBuild/Release/

                                                          Baseline                          Patch                              Δ
    Execution Time:
        <arithmetic mean>                                    103ms                          104ms                 ! 1.01x slower

We're a bit slower on the "all-out large allocations on all cores"
benchmark, but I think that's an OK price to pay:

                                                          Baseline                          Patch                              Δ
    Execution Time:
        big --parallel                                       125ms                          136ms                 ! 1.09x slower

This patch net removes 1.5k lines of code. It turns out that large
allocations are rare, and free memory fragments are also rare, so the
combination is super rare, and a simple O(n) algorithm that ensures good
memory behavior is the best option.
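
To illustrate the approach, here is a hedged sketch of the first-fit linear
scan; the container and helper names are stand-ins for illustration, not the
code this patch adds:

    // First fit: walk the (short) list of free ranges and take the first one
    // that is big enough. O(n), but n is tiny in practice.
    XLargeRange takeFirstFit(std::vector<XLargeRange>& freeRanges, size_t size)
    {
        for (size_t i = 0; i < freeRanges.size(); ++i) {
            if (freeRanges[i].size() < size)
                continue;
            XLargeRange range = freeRanges[i];
            freeRanges.erase(freeRanges.begin() + i);
            return range;
        }
        return XLargeRange(); // no fit; the caller grows the heap instead
    }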

Fun fact: In practice, the odds that the old code would save memory
were *worse* than the odds that it would contain a bug that wasted
memory. :)

* bmalloc.xcodeproj/project.pbxproj:

* bmalloc/Allocator.cpp:
(bmalloc::Allocator::tryAllocate): largeMax is the new xLargeMax since
xLargeMax is gone now.

(bmalloc::Allocator::allocate): I moved the rounding code into allocateLarge,
so we don't have to do it here.

(bmalloc::Allocator::reallocate):
(bmalloc::Allocator::allocateSlowCase):
(bmalloc::Allocator::allocateXLarge): Deleted. No more XLarge case.
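
For example, the aligned-allocation path (quoted from the Allocator.cpp hunk
below) collapses from the old rounding-and-boundary-tag dance to a single call
into the Heap:

    if (size <= largeMax && alignment <= largeMax / 2) {
        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
        return PerProcess<Heap>::getFastCase()->allocateLarge(lock, alignment, size);
    }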

* bmalloc/Allocator.h:

* bmalloc/BeginTag.h: Removed.
* bmalloc/BoundaryTag.h: Removed.

* bmalloc/Chunk.h:
(bmalloc::ChunkHash::hash): Added a hash function. The best hash value
is a unique and monotonically increasing integer, and that's exactly what
we typically get from the high bits of a Chunk's address, since the OS
allocates Chunks at unique and increasing addresses.
(bmalloc::Chunk::boundaryTags): Deleted.
(bmalloc::Chunk::objectType): Deleted.
(bmalloc::Chunk::beginTag): Deleted.
(bmalloc::Chunk::endTag): Deleted.
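
Concretely (copied from the Chunk.h hunk below, with a comment added), the hash
is just the Chunk's address divided by chunkSize, i.e. its high bits:

    struct ChunkHash {
        static unsigned hash(Chunk* key)
        {
            // Chunks are chunkSize-aligned, so this quotient is a unique and
            // typically increasing integer per Chunk.
            return static_cast<unsigned>(
                reinterpret_cast<uintptr_t>(key) / chunkSize);
        }
    };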

* bmalloc/Deallocator.cpp:
(bmalloc::Deallocator::deallocateSlowCase): We no longer know for sure,
by looking at its bit pattern, whether a pointer is small or large.
Instead, any pointer with large alignment *might* be large, and when
we occasionally encounter such an object, we have to consult a hash
table in the Heap to find out for sure. This turns out to be just as
cheap in practice.

We don't deallocate large objects on the fast path anymore. We can't,
because large objects have out-of-line metadata now.

(bmalloc::Deallocator::deallocateXLarge): Deleted.
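
A hedged sketch of the new dispatch; only mightBeLarge, isLarge, and
deallocateLarge are named by this patch, and the other helper and the exact
signatures are assumptions:

    void Deallocator::deallocateSlowCase(void* object)
    {
        if (!object)
            return;

        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
        Heap* heap = PerProcess<Heap>::getFastCase();

        // A pointer with large alignment only *might* be large; the Heap's
        // hash table gives the definitive answer.
        if (mightBeLarge(object) && heap->isLarge(lock, object)) {
            heap->deallocateLarge(lock, object);
            return;
        }

        logSmallObject(lock, object); // hypothetical helper: the small path
    }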

* bmalloc/Deallocator.h:
(bmalloc::Deallocator::deallocateFastCase): See deallocateSlowCase.

* bmalloc/EndTag.h: Removed.
* bmalloc/FreeList.cpp: Removed.
* bmalloc/FreeList.h: Removed.

* bmalloc/Heap.cpp:
(bmalloc::Heap::allocateSmallPage): Be sure to track each chunk in
the object type map, so we can distinguish small vs large objects.

(bmalloc::Heap::deallocateSmallLine): No need to check object type
because we know object type now by virtue of being on the small object
path.

(bmalloc::Heap::splitAndAllocate): Be sure to track each chunk in
the object type map, so we can distinguish small vs large objects. Large
objects can split across chunks, so we need to add each large object's
chunk as it is allocated.
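
A hedged sketch of that bookkeeping; the member name m_objectTypes and the
call sites are assumptions based on this ChangeLog:

    // When a chunk is carved into small pages:
    m_objectTypes.set(chunk, ObjectType::Small);

    // When splitAndAllocate hands out a large object, record its chunk too,
    // so objectType(object) can answer correctly later:
    m_objectTypes.set(Chunk::get(largeObject.begin()), ObjectType::Large);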

(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::isLarge):
(bmalloc::Heap::largeSize):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge): Merged in existing XLarge logic for
large objects.

(bmalloc::Heap::scavengeXLargeObjects): Deleted.
(bmalloc::Heap::allocateXLarge): Deleted.
(bmalloc::Heap::tryAllocateXLarge): Deleted.
(bmalloc::Heap::xLargeSize): Deleted.
(bmalloc::Heap::shrinkXLarge): Deleted.
(bmalloc::Heap::deallocateXLarge): Deleted.

* bmalloc/Heap.h:
(bmalloc::Heap::LargeObjectHash::hash):

* bmalloc/LargeObject.h: Removed.

* bmalloc/Map.h: Added.
(bmalloc::Map::size):
(bmalloc::Map::capacity):
(bmalloc::Map::get):
(bmalloc::Map::set):
(bmalloc::Map::remove):
(bmalloc::Map::shouldGrow):
(bmalloc::Map::shouldShrink):
(bmalloc::Map::find):
(bmalloc::Hash>::rehash): Simple hash table.
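
A hedged usage sketch of Map; the template parameters and return-value
conventions shown here are assumptions, not quotes from Map.h:

    Map<void*, size_t, LargeObjectHash> largeSizes; // e.g. large object -> size

    largeSizes.set(object, objectSize);    // insert or update; grows as needed
    size_t found = largeSizes.get(object); // lookup
    largeSizes.remove(object);             // erase; shrinks when sparse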

* bmalloc/Object.h:

* bmalloc/ObjectType.cpp:
(bmalloc::objectType):
* bmalloc/ObjectType.h:
(bmalloc::mightBeLarge): See deallocateSlowCase.
(bmalloc::isXLarge): Deleted.
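
A hedged sketch of the mightBeLarge check (the real test may differ in detail):
large objects are always largeAlignment-aligned, so any pointer that isn't
aligned that way is definitely small:

    inline bool mightBeLarge(void* object)
    {
        return !(reinterpret_cast<uintptr_t>(object) & (largeAlignment - 1));
    }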

* bmalloc/SegregatedFreeList.cpp: Removed.
* bmalloc/SegregatedFreeList.h: Removed.

* bmalloc/Sizes.h: Upped smallMax to 64kB. Upping to 32kB is pretty
reasonable, since sizes between 16kB and 32kB share page sizes. I went
all the way up to 64kB because the GC uses 64kB blocks, and also just
for extra padding to ensure that large allocations are indeed rare.
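
Roughly what the new constant looks like (a sketch; neighboring definitions in
Sizes.h are omitted):

    static const size_t kB = 1024;
    static const size_t smallMax = 64 * kB;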

* bmalloc/SortedVector.h: Removed.

* bmalloc/VMHeap.cpp:
(bmalloc::VMHeap::tryAllocateLargeChunk):
(bmalloc::VMHeap::allocateSmallChunk):
(bmalloc::VMHeap::VMHeap): Deleted.
(bmalloc::VMHeap::allocateChunk): Deleted.
* bmalloc/VMHeap.h:
(bmalloc::VMHeap::deallocateSmallPage):
(bmalloc::VMHeap::allocateLargeObject): Deleted.
(bmalloc::VMHeap::deallocateLargeObject): Deleted. Nixed all the boundary
tag logic since metadata is out of line now.

* bmalloc/VMState.h: Removed. Instead of an abstract state, we track
the precise amount of committed physical memory at the head of a VM
range. This allows us to merge aggressively without triggering an madvise
storm most of the time.
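
A hedged sketch of the idea; the XLargeRange constructor and accessor shapes
here are assumptions (see the XLargeRange.h entry below):

    // Each free range remembers how many of its leading bytes are still
    // committed. Merging two neighbors preserves that count, so a later
    // decommit only touches memory that is actually committed.
    XLargeRange merge(const XLargeRange& a, const XLargeRange& b)
    {
        const XLargeRange& front = a.begin() < b.begin() ? a : b;
        const XLargeRange& back = a.begin() < b.begin() ? b : a;

        size_t physicalSize = front.physicalSize();
        if (front.physicalSize() == front.size()) // front fully committed, so
            physicalSize += back.physicalSize();  // the committed prefix extends
        return XLargeRange(front.begin(), a.size() + b.size(), physicalSize);
    }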

* bmalloc/Vector.h:
(bmalloc::Vector<T>::Vector):
(bmalloc::Vector<T>::insert):
(bmalloc::Vector<T>::remove):
(bmalloc::Vector<T>::resize): Filled out some missing helpers.

* bmalloc/XLargeMap.cpp:
(bmalloc::XLargeMap::remove):
(bmalloc::XLargeMap::add):
(bmalloc::XLargeMap::removePhysical):
(bmalloc::XLargeMap::takeFree): Deleted.
(bmalloc::XLargeMap::addFree): Deleted.
(bmalloc::XLargeMap::addAllocated): Deleted.
(bmalloc::XLargeMap::getAllocated): Deleted.
(bmalloc::XLargeMap::takeAllocated): Deleted.
(bmalloc::XLargeMap::shrinkToFit): Deleted.
(bmalloc::XLargeMap::takePhysical): Deleted.
(bmalloc::XLargeMap::addVirtual): Deleted.
* bmalloc/XLargeMap.h:
(bmalloc::XLargeMap::Allocation::operator<): Deleted. We don't track
object sizes anymore -- just free space. (The Heap tracks object sizes.)
We use plain old linear search for free space. (See intro.)

* bmalloc/XLargeRange.h:
(bmalloc::XLargeRange::physicalSize):
(bmalloc::XLargeRange::setPhysicalSize):
(bmalloc::merge):
(bmalloc::XLargeRange::split):
(bmalloc::XLargeRange::vmState): Deleted.
(bmalloc::XLargeRange::setVMState): Deleted. See VMState.h.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@199746 268f45cc-cd09-0410-ab3c-d52691b4dbfc

31 files changed:
Source/bmalloc/CMakeLists.txt
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
Source/bmalloc/bmalloc/Allocator.cpp
Source/bmalloc/bmalloc/Allocator.h
Source/bmalloc/bmalloc/BeginTag.h [deleted file]
Source/bmalloc/bmalloc/BoundaryTag.h [deleted file]
Source/bmalloc/bmalloc/Chunk.h
Source/bmalloc/bmalloc/Deallocator.cpp
Source/bmalloc/bmalloc/Deallocator.h
Source/bmalloc/bmalloc/EndTag.h [deleted file]
Source/bmalloc/bmalloc/FreeList.cpp [deleted file]
Source/bmalloc/bmalloc/FreeList.h [deleted file]
Source/bmalloc/bmalloc/Heap.cpp
Source/bmalloc/bmalloc/Heap.h
Source/bmalloc/bmalloc/LargeObject.h [deleted file]
Source/bmalloc/bmalloc/Map.h [new file with mode: 0644]
Source/bmalloc/bmalloc/Object.h
Source/bmalloc/bmalloc/ObjectType.cpp
Source/bmalloc/bmalloc/ObjectType.h
Source/bmalloc/bmalloc/SegregatedFreeList.cpp [deleted file]
Source/bmalloc/bmalloc/SegregatedFreeList.h [deleted file]
Source/bmalloc/bmalloc/Sizes.h
Source/bmalloc/bmalloc/SortedVector.h [deleted file]
Source/bmalloc/bmalloc/VMHeap.cpp
Source/bmalloc/bmalloc/VMHeap.h
Source/bmalloc/bmalloc/VMState.h [deleted file]
Source/bmalloc/bmalloc/Vector.h
Source/bmalloc/bmalloc/XLargeMap.cpp
Source/bmalloc/bmalloc/XLargeMap.h
Source/bmalloc/bmalloc/XLargeRange.h

diff --git a/Source/bmalloc/CMakeLists.txt b/Source/bmalloc/CMakeLists.txt
index b494d64..337c8c1 100644
@@ -9,11 +9,9 @@ set(bmalloc_SOURCES
     bmalloc/Cache.cpp
     bmalloc/Deallocator.cpp
     bmalloc/Environment.cpp
-    bmalloc/FreeList.cpp
     bmalloc/Heap.cpp
     bmalloc/Logging.cpp
     bmalloc/ObjectType.cpp
-    bmalloc/SegregatedFreeList.cpp
     bmalloc/StaticMutex.cpp
     bmalloc/VMHeap.cpp
     bmalloc/XLargeMap.cpp
diff --git a/Source/bmalloc/ChangeLog b/Source/bmalloc/ChangeLog
index 9eca8f3..8ebbc64 100644
@@ -1,3 +1,218 @@
+2016-04-19  Geoffrey Garen  <ggaren@apple.com>
+
+        bmalloc: Merge the large and xlarge allocators
+        https://bugs.webkit.org/show_bug.cgi?id=156734
+
+        Reviewed by Andreas Kling.
+
+        This gives us better defense against worst-case memory usage:
+
+                                              Baseline                Patch                    Δ
+            Peak Memory:
+                nimlang                      198,132kB            181,468kB      ^ 1.09x smaller
+
+        It also eliminates inline metadata for large objects, fixing the
+        regression introduced in r198675, and more:
+
+            run-malloc-benchmarks Baseline:~/OpenSource/WebKitBuildBaseline/Release/ Patch:~/OpenSource/WebKitBuild/Release/
+
+                                                                  Baseline                          Patch                              Δ
+            Memory at End:
+                big                                               10,880kB                        3,328kB                ^ 3.27x smaller
+                facebook                                           3,112kB                        2,868kB                ^ 1.09x smaller
+                fragment --parallel                                1,848kB                          760kB                ^ 2.43x smaller
+                fragment_iterate --parallel                        4,908kB                          776kB                ^ 6.32x smaller
+                big --parallel                                    48,076kB                       11,892kB                ^ 4.04x smaller
+
+        Overall memory use looks OK:
+
+            run-malloc-benchmarks --memory_warning Baseline:~/OpenSource/WebKitBuildBaseline/Release/ Patch:~/OpenSource/WebKitBuild/Release/
+
+                                                        Baseline                               Patch                                   Δ
+            Memory at End:
+                <arithmetic mean>                       13,992kB                            13,987kB                      ^ 1.0x smaller
+
+        Overall throughput looks OK:
+
+            run-malloc-benchmarks Baseline:~/OpenSource/WebKitBuildBaseline/Release/ Patch:~/OpenSource/WebKitBuild/Release/
+
+                                                                  Baseline                          Patch                              Δ
+            Execution Time:
+                <arithmetic mean>                                    103ms                          104ms                 ! 1.01x slower
+
+        We're a bit slower on the "all-out large allocations on all cores"
+        benchmark, but I think that's an OK price to pay:
+
+                                                                  Baseline                          Patch                              Δ
+            Execution Time:
+                big --parallel                                       125ms                          136ms                 ! 1.09x slower
+
+        This patch net removes 1.5k lines of code. It turns out that large
+        allocations are rare, and free memory fragments are also rare, so the
+        combination is super rare, and a simple O(n) algorithm that ensures good
+        memory behavior is the best option.
+
+        Fun fact: In practice, the odds that the old code would save memory
+        were *worse* than the odds that it would contain a bug that wasted
+        memory. :)
+
+        * bmalloc.xcodeproj/project.pbxproj:
+
+        * bmalloc/Allocator.cpp:
+        (bmalloc::Allocator::tryAllocate): largeMax is the new xLargeMax since
+        xLargeMax is gone now.
+
+        (bmalloc::Allocator::allocate): I moved the rounding code into allocateLarge,
+        so we don't have to do it here.
+
+        (bmalloc::Allocator::reallocate):
+        (bmalloc::Allocator::allocateSlowCase):
+        (bmalloc::Allocator::allocateXLarge): Deleted. No more XLarge case.
+
+        * bmalloc/Allocator.h:
+
+        * bmalloc/BeginTag.h: Removed.
+        * bmalloc/BoundaryTag.h: Removed.
+
+        * bmalloc/Chunk.h:
+        (bmalloc::ChunkHash::hash): Added a hash function. The best hash value
+        is a unique and monotonically increasing integer, and that's exactly what
+        we typically get from the high bits of a Chunk's address, since the OS
+        allocates Chunks at unique and increasing addresses.
+        (bmalloc::Chunk::boundaryTags): Deleted.
+        (bmalloc::Chunk::objectType): Deleted.
+        (bmalloc::Chunk::beginTag): Deleted.
+        (bmalloc::Chunk::endTag): Deleted.
+
+        * bmalloc/Deallocator.cpp:
+        (bmalloc::Deallocator::deallocateSlowCase): We no longer know for sure,
+        by looking at its bit pattern, whether a pointer is small or large.
+        Instead, any pointer with large alignment *might* be large, and when
+        we occasionally encounter such an object, we have to consult a hash
+        table in the Heap to find out for sure. This turns out to be just as
+        cheap in practice.
+
+        We don't deallocate large objects on the fast path anymore. We can't,
+        because large objects have out-of-line metadata now.
+
+        (bmalloc::Deallocator::deallocateXLarge): Deleted.
+
+        * bmalloc/Deallocator.h:
+        (bmalloc::Deallocator::deallocateFastCase): See deallocateSlowCase.
+
+        * bmalloc/EndTag.h: Removed.
+        * bmalloc/FreeList.cpp: Removed.
+        * bmalloc/FreeList.h: Removed.
+
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::allocateSmallPage): Be sure to track each chunk in
+        the object type map, so we can distinguish small vs large objects.
+
+        (bmalloc::Heap::deallocateSmallLine): No need to check object type
+        because we know object type now by virtue of being on the small object
+        path.
+
+        (bmalloc::Heap::splitAndAllocate): Be sure to track each chunk in
+        the object type map, so we can distinguish small vs large objects. Large
+        objects can split across chunks, so we need to add each large object's
+        chunk as it is allocated.
+
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::isLarge):
+        (bmalloc::Heap::largeSize):
+        (bmalloc::Heap::shrinkLarge):
+        (bmalloc::Heap::deallocateLarge): Merged in existing XLarge logic for
+        large objects.
+
+        (bmalloc::Heap::scavengeXLargeObjects): Deleted.
+        (bmalloc::Heap::allocateXLarge): Deleted.
+        (bmalloc::Heap::tryAllocateXLarge): Deleted.
+        (bmalloc::Heap::xLargeSize): Deleted.
+        (bmalloc::Heap::shrinkXLarge): Deleted.
+        (bmalloc::Heap::deallocateXLarge): Deleted.
+
+        * bmalloc/Heap.h:
+        (bmalloc::Heap::LargeObjectHash::hash):
+
+        * bmalloc/LargeObject.h: Removed.
+
+        * bmalloc/Map.h: Added.
+        (bmalloc::Map::size):
+        (bmalloc::Map::capacity):
+        (bmalloc::Map::get):
+        (bmalloc::Map::set):
+        (bmalloc::Map::remove):
+        (bmalloc::Map::shouldGrow):
+        (bmalloc::Map::shouldShrink):
+        (bmalloc::Map::find):
+        (bmalloc::Hash>::rehash): Simple hash table.
+
+        * bmalloc/Object.h:
+
+        * bmalloc/ObjectType.cpp:
+        (bmalloc::objectType):
+        * bmalloc/ObjectType.h:
+        (bmalloc::mightBeLarge): See deallocateSlowCase.
+        (bmalloc::isXLarge): Deleted.
+
+        * bmalloc/SegregatedFreeList.cpp: Removed.
+        * bmalloc/SegregatedFreeList.h: Removed.
+
+        * bmalloc/Sizes.h: Upped smallMax to 64kB. Upping to 32kB is pretty
+        reasonable, since sizes between 16kB and 32kB share page sizes. I went
+        all the way up to 64kB because the GC uses 64kB blocks, and also just
+        for extra padding to ensure that large allocations are indeed rare.
+
+        * bmalloc/SortedVector.h: Removed.
+
+        * bmalloc/VMHeap.cpp:
+        (bmalloc::VMHeap::tryAllocateLargeChunk):
+        (bmalloc::VMHeap::allocateSmallChunk):
+        (bmalloc::VMHeap::VMHeap): Deleted.
+        (bmalloc::VMHeap::allocateChunk): Deleted.
+        * bmalloc/VMHeap.h:
+        (bmalloc::VMHeap::deallocateSmallPage):
+        (bmalloc::VMHeap::allocateLargeObject): Deleted.
+        (bmalloc::VMHeap::deallocateLargeObject): Deleted. Nixed all the boundary
+        tag logic since metadata is out of line now.
+
+        * bmalloc/VMState.h: Removed. Instead of an abstract state, we track
+        the precise amount of committed physical memory at the head of a VM
+        range. This allows us to merge aggressively without triggering an madvise
+        storm most of the time.
+
+        * bmalloc/Vector.h:
+        (bmalloc::Vector<T>::Vector):
+        (bmalloc::Vector<T>::insert):
+        (bmalloc::Vector<T>::remove):
+        (bmalloc::Vector<T>::resize): Filled out some missing helpers.
+
+        * bmalloc/XLargeMap.cpp:
+        (bmalloc::XLargeMap::remove):
+        (bmalloc::XLargeMap::add):
+        (bmalloc::XLargeMap::removePhysical):
+        (bmalloc::XLargeMap::takeFree): Deleted.
+        (bmalloc::XLargeMap::addFree): Deleted.
+        (bmalloc::XLargeMap::addAllocated): Deleted.
+        (bmalloc::XLargeMap::getAllocated): Deleted.
+        (bmalloc::XLargeMap::takeAllocated): Deleted.
+        (bmalloc::XLargeMap::shrinkToFit): Deleted.
+        (bmalloc::XLargeMap::takePhysical): Deleted.
+        (bmalloc::XLargeMap::addVirtual): Deleted.
+        * bmalloc/XLargeMap.h:
+        (bmalloc::XLargeMap::Allocation::operator<): Deleted. We don't track
+        object sizes anymore -- just free space. (The Heap tracks object sizes.)
+        We use plain old linear search for free space. (See intro.)
+
+        * bmalloc/XLargeRange.h:
+        (bmalloc::XLargeRange::physicalSize):
+        (bmalloc::XLargeRange::setPhysicalSize):
+        (bmalloc::merge):
+        (bmalloc::XLargeRange::split):
+        (bmalloc::XLargeRange::vmState): Deleted.
+        (bmalloc::XLargeRange::setVMState): Deleted. See VMState.h.
+
 2016-04-11  Fujii Hironori  <Hironori.Fujii@jp.sony.com>
 
         [CMake] Make FOLDER property INHERITED
diff --git a/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj b/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
index 647efa7..7f7a2ec 100644
@@ -9,14 +9,11 @@
 /* Begin PBXBuildFile section */
                1400274918F89C1300115C97 /* Heap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14DA320C18875B09007269E0 /* Heap.h */; settings = {ATTRIBUTES = (Private, ); }; };
                1400274A18F89C2300115C97 /* VMHeap.h in Headers */ = {isa = PBXBuildFile; fileRef = 144F7BFC18BFC517003537F3 /* VMHeap.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               1400274C18F89C3D00115C97 /* SegregatedFreeList.h in Headers */ = {isa = PBXBuildFile; fileRef = 146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */; settings = {ATTRIBUTES = (Private, ); }; };
                140FA00319CE429C00FFD3C8 /* BumpRange.h in Headers */ = {isa = PBXBuildFile; fileRef = 140FA00219CE429C00FFD3C8 /* BumpRange.h */; settings = {ATTRIBUTES = (Private, ); }; };
                140FA00519CE4B6800FFD3C8 /* LineMetadata.h in Headers */ = {isa = PBXBuildFile; fileRef = 140FA00419CE4B6800FFD3C8 /* LineMetadata.h */; settings = {ATTRIBUTES = (Private, ); }; };
                141D9B001C8E51C0000ABBA0 /* List.h in Headers */ = {isa = PBXBuildFile; fileRef = 141D9AFF1C8E51C0000ABBA0 /* List.h */; settings = {ATTRIBUTES = (Private, ); }; };
                143CB81C19022BC900B16A45 /* StaticMutex.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 143CB81A19022BC900B16A45 /* StaticMutex.cpp */; };
                143CB81D19022BC900B16A45 /* StaticMutex.h in Headers */ = {isa = PBXBuildFile; fileRef = 143CB81B19022BC900B16A45 /* StaticMutex.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               143EF9AF1A9FABF6004F5C77 /* FreeList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 143EF9AD1A9FABF6004F5C77 /* FreeList.cpp */; };
-               143EF9B01A9FABF6004F5C77 /* FreeList.h in Headers */ = {isa = PBXBuildFile; fileRef = 143EF9AE1A9FABF6004F5C77 /* FreeList.h */; settings = {ATTRIBUTES = (Private, ); }; };
                1440AFCB1A95261100837FAA /* Zone.h in Headers */ = {isa = PBXBuildFile; fileRef = 1440AFCA1A95261100837FAA /* Zone.h */; settings = {ATTRIBUTES = (Private, ); }; };
                1440AFCD1A9527AF00837FAA /* Zone.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1440AFCC1A9527AF00837FAA /* Zone.cpp */; };
                1448C30018F3754600502839 /* mbmalloc.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1448C2FF18F3754300502839 /* mbmalloc.cpp */; };
                144BE11F1CA346520099C8C0 /* Object.h in Headers */ = {isa = PBXBuildFile; fileRef = 144BE11E1CA346520099C8C0 /* Object.h */; settings = {ATTRIBUTES = (Private, ); }; };
                144C07F41C7B70260051BB6A /* XLargeMap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 144C07F21C7B70260051BB6A /* XLargeMap.cpp */; };
                144C07F51C7B70260051BB6A /* XLargeMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 144C07F31C7B70260051BB6A /* XLargeMap.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               146041E71C7FF2EF00E9F94E /* SortedVector.h in Headers */ = {isa = PBXBuildFile; fileRef = 146041E61C7FF2EF00E9F94E /* SortedVector.h */; settings = {ATTRIBUTES = (Private, ); }; };
                147DC6E31CA5B70B00724E8D /* Chunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 147DC6E21CA5B70B00724E8D /* Chunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14895D911A3A319C0006235D /* Environment.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14895D8F1A3A319C0006235D /* Environment.cpp */; };
                14895D921A3A319C0006235D /* Environment.h in Headers */ = {isa = PBXBuildFile; fileRef = 14895D901A3A319C0006235D /* Environment.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               14C6216F1A9A9A6200E72293 /* LargeObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C6216E1A9A9A6200E72293 /* LargeObject.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               14C8992B1CC485E70027A057 /* Map.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C8992A1CC485E70027A057 /* Map.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               14C8992D1CC578330027A057 /* XLargeRange.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C8992C1CC578330027A057 /* XLargeRange.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C919C818FCC59F0028DB43 /* BPlatform.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14CC394C18EA8858004AFE34 /* libbmalloc.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 14F271BE18EA3963008C152F /* libbmalloc.a */; };
-               14D2CD9B1AA12CFB00770440 /* VMState.h in Headers */ = {isa = PBXBuildFile; fileRef = 14D2CD9A1AA12CFB00770440 /* VMState.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               14DD788D18F48CC600950702 /* BeginTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F64518B54A700076FA3F /* BeginTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               14DD788E18F48CCD00950702 /* BoundaryTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1485655E18A43AF900ED6942 /* BoundaryTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD789018F48CEB00950702 /* Sizes.h in Headers */ = {isa = PBXBuildFile; fileRef = 145F6874179DF84100D65598 /* Sizes.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               14DD789218F48CFC00950702 /* EndTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F64618B54A700076FA3F /* EndTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD789318F48D0F00950702 /* ObjectType.h in Headers */ = {isa = PBXBuildFile; fileRef = 1485656018A43DBA00ED6942 /* ObjectType.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD789818F48D4A00950702 /* Allocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 145F6856179DC8CA00D65598 /* Allocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD789918F48D4A00950702 /* Cache.h in Headers */ = {isa = PBXBuildFile; fileRef = 144469E517A46BFE00F9EA1D /* Cache.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD78CE18F48D7500950702 /* Syscall.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F64F18B7280C0076FA3F /* Syscall.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD78CF18F48D7500950702 /* Vector.h in Headers */ = {isa = PBXBuildFile; fileRef = 1479E21217A1A255006D4E9D /* Vector.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14DD78D018F48D7500950702 /* VMAllocate.h in Headers */ = {isa = PBXBuildFile; fileRef = 1479E21417A1A63E006D4E9D /* VMAllocate.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               14EB79EA1C7C1BC4005E834F /* XLargeRange.h in Headers */ = {isa = PBXBuildFile; fileRef = 14EB79E81C7C1BC4005E834F /* XLargeRange.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14F271C318EA3978008C152F /* Allocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 145F6855179DC8CA00D65598 /* Allocator.cpp */; };
                14F271C418EA397B008C152F /* Cache.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 144469E417A46BFE00F9EA1D /* Cache.cpp */; };
                14F271C518EA397E008C152F /* Deallocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 145F6859179DC90200D65598 /* Deallocator.cpp */; };
-               14F271C618EA3983008C152F /* SegregatedFreeList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */; };
                14F271C718EA3990008C152F /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14DA320E18875D9F007269E0 /* Heap.cpp */; };
                14F271C818EA3990008C152F /* ObjectType.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14105E8318E14374003A106E /* ObjectType.cpp */; };
                14F271C918EA3990008C152F /* VMHeap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 144F7BFB18BFC517003537F3 /* VMHeap.cpp */; };
@@ -85,8 +76,6 @@
                1413E460189DCE1E00546D68 /* Inline.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Inline.h; path = bmalloc/Inline.h; sourceTree = "<group>"; };
                1413E462189DE1CD00546D68 /* BumpAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = BumpAllocator.h; path = bmalloc/BumpAllocator.h; sourceTree = "<group>"; };
                1413E468189EEDE400546D68 /* BAssert.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BAssert.h; path = bmalloc/BAssert.h; sourceTree = "<group>"; };
-               1417F64518B54A700076FA3F /* BeginTag.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BeginTag.h; path = bmalloc/BeginTag.h; sourceTree = "<group>"; };
-               1417F64618B54A700076FA3F /* EndTag.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = EndTag.h; path = bmalloc/EndTag.h; sourceTree = "<group>"; };
                1417F64F18B7280C0076FA3F /* Syscall.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Syscall.h; path = bmalloc/Syscall.h; sourceTree = "<group>"; };
                1417F65218BA88A00076FA3F /* AsyncTask.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AsyncTask.h; path = bmalloc/AsyncTask.h; sourceTree = "<group>"; };
                141D9AFF1C8E51C0000ABBA0 /* List.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = List.h; path = bmalloc/List.h; sourceTree = "<group>"; };
@@ -94,8 +83,6 @@
                143CB81A19022BC900B16A45 /* StaticMutex.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = StaticMutex.cpp; path = bmalloc/StaticMutex.cpp; sourceTree = "<group>"; };
                143CB81B19022BC900B16A45 /* StaticMutex.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = StaticMutex.h; path = bmalloc/StaticMutex.h; sourceTree = "<group>"; };
                143E29ED18CAE90500FE8A0F /* SmallPage.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = SmallPage.h; path = bmalloc/SmallPage.h; sourceTree = "<group>"; };
-               143EF9AD1A9FABF6004F5C77 /* FreeList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FreeList.cpp; path = bmalloc/FreeList.cpp; sourceTree = "<group>"; };
-               143EF9AE1A9FABF6004F5C77 /* FreeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FreeList.h; path = bmalloc/FreeList.h; sourceTree = "<group>"; };
                1440AFCA1A95261100837FAA /* Zone.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Zone.h; path = bmalloc/Zone.h; sourceTree = "<group>"; };
                1440AFCC1A9527AF00837FAA /* Zone.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = Zone.cpp; path = bmalloc/Zone.cpp; sourceTree = "<group>"; };
                144469E417A46BFE00F9EA1D /* Cache.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; lineEnding = 0; name = Cache.cpp; path = bmalloc/Cache.cpp; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.cpp; };
                145F685A179DC90200D65598 /* Deallocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Deallocator.h; path = bmalloc/Deallocator.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
                145F6874179DF84100D65598 /* Sizes.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Sizes.h; path = bmalloc/Sizes.h; sourceTree = "<group>"; };
                145F6878179E3A4400D65598 /* Range.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Range.h; path = bmalloc/Range.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
-               146041E61C7FF2EF00E9F94E /* SortedVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = SortedVector.h; path = bmalloc/SortedVector.h; sourceTree = "<group>"; };
-               146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = SegregatedFreeList.h; path = bmalloc/SegregatedFreeList.h; sourceTree = "<group>"; };
-               146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = SegregatedFreeList.cpp; path = bmalloc/SegregatedFreeList.cpp; sourceTree = "<group>"; };
                1479E21217A1A255006D4E9D /* Vector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Vector.h; path = bmalloc/Vector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
                1479E21417A1A63E006D4E9D /* VMAllocate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = VMAllocate.h; path = bmalloc/VMAllocate.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
                147DC6E21CA5B70B00724E8D /* Chunk.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Chunk.h; path = bmalloc/Chunk.h; sourceTree = "<group>"; };
-               1485655E18A43AF900ED6942 /* BoundaryTag.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BoundaryTag.h; path = bmalloc/BoundaryTag.h; sourceTree = "<group>"; };
                1485656018A43DBA00ED6942 /* ObjectType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ObjectType.h; path = bmalloc/ObjectType.h; sourceTree = "<group>"; };
                14895D8F1A3A319C0006235D /* Environment.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = Environment.cpp; path = bmalloc/Environment.cpp; sourceTree = "<group>"; };
                14895D901A3A319C0006235D /* Environment.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Environment.h; path = bmalloc/Environment.h; sourceTree = "<group>"; };
                14B650C618F39F4800751968 /* bmalloc.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = bmalloc.xcconfig; sourceTree = "<group>"; };
                14B650C718F39F4800751968 /* DebugRelease.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = DebugRelease.xcconfig; sourceTree = "<group>"; };
                14B650C918F3A04200751968 /* mbmalloc.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = mbmalloc.xcconfig; sourceTree = "<group>"; };
-               14C6216E1A9A9A6200E72293 /* LargeObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LargeObject.h; path = bmalloc/LargeObject.h; sourceTree = "<group>"; };
+               14C8992A1CC485E70027A057 /* Map.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Map.h; path = bmalloc/Map.h; sourceTree = "<group>"; };
+               14C8992C1CC578330027A057 /* XLargeRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XLargeRange.h; path = bmalloc/XLargeRange.h; sourceTree = "<group>"; };
                14C919C818FCC59F0028DB43 /* BPlatform.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BPlatform.h; path = bmalloc/BPlatform.h; sourceTree = "<group>"; };
                14CC394418EA8743004AFE34 /* libmbmalloc.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libmbmalloc.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
-               14D2CD9A1AA12CFB00770440 /* VMState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = VMState.h; path = bmalloc/VMState.h; sourceTree = "<group>"; };
                14D9DB4517F2447100EAAB79 /* FixedVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = FixedVector.h; path = bmalloc/FixedVector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
                14DA320C18875B09007269E0 /* Heap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Heap.h; path = bmalloc/Heap.h; sourceTree = "<group>"; };
                14DA320E18875D9F007269E0 /* Heap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = Heap.cpp; path = bmalloc/Heap.cpp; sourceTree = "<group>"; };
-               14EB79E81C7C1BC4005E834F /* XLargeRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XLargeRange.h; path = bmalloc/XLargeRange.h; sourceTree = "<group>"; };
                14F271BE18EA3963008C152F /* libbmalloc.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libbmalloc.a; sourceTree = BUILT_PRODUCTS_DIR; };
                4426E27E1C838EE0008EB042 /* Logging.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = Logging.cpp; path = bmalloc/Logging.cpp; sourceTree = "<group>"; };
                4426E27F1C838EE0008EB042 /* Logging.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Logging.h; path = bmalloc/Logging.h; sourceTree = "<group>"; };
                        name = api;
                        sourceTree = "<group>";
                };
-               144C07F71C7B707D0051BB6A /* heap: xlarge */ = {
-                       isa = PBXGroup;
-                       children = (
-                               144C07F21C7B70260051BB6A /* XLargeMap.cpp */,
-                               144C07F31C7B70260051BB6A /* XLargeMap.h */,
-                               14EB79E81C7C1BC4005E834F /* XLargeRange.h */,
-                       );
-                       name = "heap: xlarge";
-                       sourceTree = "<group>";
-               };
                145F6836179DC45F00D65598 = {
                        isa = PBXGroup;
                        children = (
                                14D9DB4D17F2865C00EAAB79 /* cache */,
                                147AAA9C18CE6010002201E4 /* heap: large */,
                                147AAA9A18CE5FD3002201E4 /* heap: small */,
-                               144C07F71C7B707D0051BB6A /* heap: xlarge */,
                                14D9DB4E17F2866E00EAAB79 /* heap */,
                                14D9DB4F17F2868900EAAB79 /* stdlib */,
                                14B650C418F39F4800751968 /* Configurations */,
                147AAA9C18CE6010002201E4 /* heap: large */ = {
                        isa = PBXGroup;
                        children = (
-                               1417F64518B54A700076FA3F /* BeginTag.h */,
-                               1485655E18A43AF900ED6942 /* BoundaryTag.h */,
-                               1417F64618B54A700076FA3F /* EndTag.h */,
-                               143EF9AD1A9FABF6004F5C77 /* FreeList.cpp */,
-                               143EF9AE1A9FABF6004F5C77 /* FreeList.h */,
-                               14C6216E1A9A9A6200E72293 /* LargeObject.h */,
-                               146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */,
-                               146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */,
-                               14D2CD9A1AA12CFB00770440 /* VMState.h */,
+                               144C07F21C7B70260051BB6A /* XLargeMap.cpp */,
+                               144C07F31C7B70260051BB6A /* XLargeMap.h */,
+                               14C8992C1CC578330027A057 /* XLargeRange.h */,
                        );
                        name = "heap: large";
                        sourceTree = "<group>";
                                141D9AFF1C8E51C0000ABBA0 /* List.h */,
                                4426E27E1C838EE0008EB042 /* Logging.cpp */,
                                4426E27F1C838EE0008EB042 /* Logging.h */,
+                               14C8992A1CC485E70027A057 /* Map.h */,
                                144DCED617A649D90093B2F2 /* Mutex.h */,
                                14446A0717A61FA400F9EA1D /* PerProcess.h */,
                                144469FD17A61F1F00F9EA1D /* PerThread.h */,
                                145F6878179E3A4400D65598 /* Range.h */,
-                               146041E61C7FF2EF00E9F94E /* SortedVector.h */,
                                143CB81A19022BC900B16A45 /* StaticMutex.cpp */,
                                143CB81B19022BC900B16A45 /* StaticMutex.h */,
                                1417F64F18B7280C0076FA3F /* Syscall.h */,
                        files = (
                                14DD78CF18F48D7500950702 /* Vector.h in Headers */,
                                14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */,
-                               14DD789218F48CFC00950702 /* EndTag.h in Headers */,
                                1440AFCB1A95261100837FAA /* Zone.h in Headers */,
                                140FA00519CE4B6800FFD3C8 /* LineMetadata.h in Headers */,
                                14DD78CC18F48D7500950702 /* PerThread.h in Headers */,
                                144BE11F1CA346520099C8C0 /* Object.h in Headers */,
                                143CB81D19022BC900B16A45 /* StaticMutex.h in Headers */,
                                1448C30118F3754C00502839 /* bmalloc.h in Headers */,
-                               14C6216F1A9A9A6200E72293 /* LargeObject.h in Headers */,
                                14DD789A18F48D4A00950702 /* Deallocator.h in Headers */,
-                               1400274C18F89C3D00115C97 /* SegregatedFreeList.h in Headers */,
-                               14DD788D18F48CC600950702 /* BeginTag.h in Headers */,
                                14DD78CD18F48D7500950702 /* Range.h in Headers */,
                                14DD789C18F48D4A00950702 /* BumpAllocator.h in Headers */,
                                14DD789918F48D4A00950702 /* Cache.h in Headers */,
                                14DD789018F48CEB00950702 /* Sizes.h in Headers */,
                                14DD78C718F48D7500950702 /* BAssert.h in Headers */,
                                14DD78D018F48D7500950702 /* VMAllocate.h in Headers */,
-                               14EB79EA1C7C1BC4005E834F /* XLargeRange.h in Headers */,
-                               143EF9B01A9FABF6004F5C77 /* FreeList.h in Headers */,
                                14DD78CE18F48D7500950702 /* Syscall.h in Headers */,
                                14DD78C618F48D7500950702 /* AsyncTask.h in Headers */,
                                14DD78C918F48D7500950702 /* Inline.h in Headers */,
                                140FA00319CE429C00FFD3C8 /* BumpRange.h in Headers */,
                                4426E2811C838EE0008EB042 /* Logging.h in Headers */,
                                14DD78C518F48D7500950702 /* Algorithm.h in Headers */,
+                               14C8992B1CC485E70027A057 /* Map.h in Headers */,
                                14DD78BD18F48D6B00950702 /* SmallPage.h in Headers */,
-                               14DD788E18F48CCD00950702 /* BoundaryTag.h in Headers */,
-                               146041E71C7FF2EF00E9F94E /* SortedVector.h in Headers */,
                                14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
+                               14C8992D1CC578330027A057 /* XLargeRange.h in Headers */,
                                144C07F51C7B70260051BB6A /* XLargeMap.h in Headers */,
-                               14D2CD9B1AA12CFB00770440 /* VMState.h in Headers */,
                                147DC6E31CA5B70B00724E8D /* Chunk.h in Headers */,
                                14DD78BC18F48D6B00950702 /* SmallLine.h in Headers */,
                                14DD789818F48D4A00950702 /* Allocator.h in Headers */,
                        buildActionMask = 2147483647;
                        files = (
                                143CB81C19022BC900B16A45 /* StaticMutex.cpp in Sources */,
-                               14F271C618EA3983008C152F /* SegregatedFreeList.cpp in Sources */,
                                14F271C318EA3978008C152F /* Allocator.cpp in Sources */,
                                14895D911A3A319C0006235D /* Environment.cpp in Sources */,
-                               143EF9AF1A9FABF6004F5C77 /* FreeList.cpp in Sources */,
                                4426E2801C838EE0008EB042 /* Logging.cpp in Sources */,
                                14F271C718EA3990008C152F /* Heap.cpp in Sources */,
                                14F271C918EA3990008C152F /* VMHeap.cpp in Sources */,
diff --git a/Source/bmalloc/bmalloc/Allocator.cpp b/Source/bmalloc/bmalloc/Allocator.cpp
index deb2236..0aa303d 100644
@@ -28,7 +28,6 @@
 #include "Chunk.h"
 #include "Deallocator.h"
 #include "Heap.h"
-#include "LargeObject.h"
 #include "PerProcess.h"
 #include "Sizes.h"
 #include <algorithm>
@@ -56,12 +55,12 @@ void* Allocator::tryAllocate(size_t size)
     if (!m_isBmallocEnabled)
         return malloc(size);
 
-    if (size <= largeMax)
+    if (size <= smallMax)
         return allocate(size);
 
-    if (size <= xLargeMax) {
+    if (size <= largeMax) {
         std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-        return PerProcess<Heap>::getFastCase()->tryAllocateXLarge(lock, alignment, size);
+        return PerProcess<Heap>::getFastCase()->tryAllocateLarge(lock, alignment, size);
     }
 
     return nullptr;
@@ -84,20 +83,9 @@ void* Allocator::allocate(size_t alignment, size_t size)
     if (size <= smallMax && alignment <= smallMax)
         return allocate(roundUpToMultipleOf(alignment, size));
 
-    if (size <= largeMax && alignment <= largeMax) {
-        size = std::max(largeMin, roundUpToMultipleOf<largeAlignment>(size));
-        alignment = roundUpToMultipleOf<largeAlignment>(alignment);
-        size_t unalignedSize = largeMin + alignment - largeAlignment + size;
-        if (unalignedSize <= largeMax && alignment <= chunkSize / 2) {
-            std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-            m_deallocator.processObjectLog(lock);
-            return PerProcess<Heap>::getFastCase()->allocateLarge(lock, alignment, size, unalignedSize);
-        }
-    }
-
-    if (size <= xLargeMax && alignment <= xLargeMax) {
+    if (size <= largeMax && alignment <= largeMax / 2) {
         std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-        return PerProcess<Heap>::getFastCase()->allocateXLarge(lock, alignment, size);
+        return PerProcess<Heap>::getFastCase()->allocateLarge(lock, alignment, size);
     }
 
     BCRASH();
@@ -112,35 +100,20 @@ void* Allocator::reallocate(void* object, size_t newSize)
     size_t oldSize = 0;
     switch (objectType(object)) {
     case ObjectType::Small: {
+        BASSERT(objectType(nullptr) == ObjectType::Small);
+        if (!object)
+            break;
+
         size_t sizeClass = Object(object).page()->sizeClass();
         oldSize = objectSize(sizeClass);
         break;
     }
     case ObjectType::Large: {
         std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-
-        LargeObject largeObject(object);
-        oldSize = largeObject.size();
+        oldSize = PerProcess<Heap>::getFastCase()->largeSize(lock, object);
 
         if (newSize < oldSize && newSize > smallMax) {
-            if (oldSize - newSize >= largeMin) {
-                newSize = roundUpToMultipleOf<largeAlignment>(newSize);
-                PerProcess<Heap>::getFastCase()->shrinkLarge(lock, largeObject, newSize);
-                return object;
-            }
-        }
-        break;
-    }
-    case ObjectType::XLarge: {
-        BASSERT(objectType(nullptr) == ObjectType::XLarge);
-        if (!object)
-            break;
-
-        std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
-        oldSize = PerProcess<Heap>::getFastCase()->xLargeSize(lock, object);
-
-        if (newSize < oldSize && newSize > largeMax) {
-            PerProcess<Heap>::getFastCase()->shrinkXLarge(lock, Range(object, oldSize), newSize);
+            PerProcess<Heap>::getFastCase()->shrinkLarge(lock, Range(object, oldSize), newSize);
             return object;
         }
         break;
@@ -192,17 +165,8 @@ INLINE void Allocator::refillAllocator(BumpAllocator& allocator, size_t sizeClas
 
 NO_INLINE void* Allocator::allocateLarge(size_t size)
 {
-    size = roundUpToMultipleOf<largeAlignment>(size);
-
     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    m_deallocator.processObjectLog(lock);
-    return PerProcess<Heap>::getFastCase()->allocateLarge(lock, size);
-}
-
-NO_INLINE void* Allocator::allocateXLarge(size_t size)
-{
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    return PerProcess<Heap>::getFastCase()->allocateXLarge(lock, size);
+    return PerProcess<Heap>::getFastCase()->allocateLarge(lock, alignment, size);
 }
 
 NO_INLINE void* Allocator::allocateLogSizeClass(size_t size)
@@ -232,9 +196,6 @@ void* Allocator::allocateSlowCase(size_t size)
     if (size <= largeMax)
         return allocateLarge(size);
 
-    if (size <= xLargeMax)
-        return allocateXLarge(size);
-
     BCRASH();
     return nullptr;
 }
diff --git a/Source/bmalloc/bmalloc/Allocator.h b/Source/bmalloc/bmalloc/Allocator.h
index 206c786..0c91a40 100644
@@ -54,7 +54,6 @@ private:
     
     void* allocateLogSizeClass(size_t);
     void* allocateLarge(size_t);
-    void* allocateXLarge(size_t);
     
     void refillAllocator(BumpAllocator&, size_t sizeClass);
     void refillAllocatorSlowCase(BumpAllocator&, size_t sizeClass);
diff --git a/Source/bmalloc/bmalloc/BeginTag.h b/Source/bmalloc/bmalloc/BeginTag.h
deleted file mode 100644
index e870590..0000000
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef BeginTag_h
-#define BeginTag_h
-
-#include "BoundaryTag.h"
-
-namespace bmalloc {
-
-class BeginTag : public BoundaryTag {
-};
-
-} // namespace bmalloc
-
-#endif // BeginTag_h
diff --git a/Source/bmalloc/bmalloc/BoundaryTag.h b/Source/bmalloc/bmalloc/BoundaryTag.h
deleted file mode 100644
index 41e9b5f..0000000
+++ /dev/null
@@ -1,130 +0,0 @@
-/*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef BoundaryTag_h
-#define BoundaryTag_h
-
-#include "BAssert.h"
-#include "Range.h"
-#include "Sizes.h"
-#include "VMState.h"
-#include <cstring>
-
-namespace bmalloc {
-
-class BeginTag;
-class EndTag;
-class Chunk;
-class Range;
-
-class BoundaryTag {
-public:
-    static Range init(Chunk*);
-    static unsigned compactBegin(void*);
-
-    bool isFree() { return m_isFree; }
-    void setFree(bool isFree) { m_isFree = isFree; }
-    
-    bool isEnd() { return m_isEnd; }
-    void setEnd(bool isEnd) { m_isEnd = isEnd; }
-
-    VMState vmState() { return VMState(m_vmState); }
-    void setVMState(VMState vmState) { m_vmState = static_cast<unsigned>(vmState); }
-
-    bool isMarked() { return m_isMarked; }
-    void setMarked(bool isMarked) { m_isMarked = isMarked; }
-
-    bool isNull() { return !m_size; }
-    void clear() { std::memset(this, 0, sizeof(*this)); }
-    
-    size_t size() { return m_size; }
-    unsigned compactBegin() { return m_compactBegin; }
-
-    void setRange(const Range&);
-    
-    bool isSentinel() { return !m_compactBegin; }
-    void initSentinel();
-    
-    EndTag* prev();
-    BeginTag* next();
-
-private:
-    static const size_t flagBits = 5;
-    static const size_t compactBeginBits = 4;
-    static const size_t sizeBits = bitCount<unsigned>() - flagBits - compactBeginBits;
-
-    static_assert(
-        (1 << compactBeginBits) - 1 >= (largeMin - 1) / largeAlignment,
-        "compactBegin must be encodable in a BoundaryTag.");
-
-    static_assert(
-        (1 << sizeBits) - 1 >= largeObjectMax,
-        "largeObjectMax must be encodable in a BoundaryTag.");
-
-    bool m_isFree: 1;
-    bool m_isEnd: 1;
-    unsigned m_vmState: 2;
-    bool m_isMarked: 1;
-    unsigned m_compactBegin: compactBeginBits;
-    unsigned m_size: sizeBits;
-};
-
-inline unsigned BoundaryTag::compactBegin(void* object)
-{
-    return static_cast<unsigned>(
-        reinterpret_cast<uintptr_t>(mask(object, largeMin - 1)) / largeAlignment);
-}
-
-inline void BoundaryTag::setRange(const Range& range)
-{
-    m_compactBegin = compactBegin(range.begin());
-    BASSERT(this->compactBegin() == compactBegin(range.begin()));
-
-    m_size = static_cast<unsigned>(range.size());
-    BASSERT(this->size() == range.size());
-}
-
-inline EndTag* BoundaryTag::prev()
-{
-    BoundaryTag* prev = this - 1;
-    return reinterpret_cast<EndTag*>(prev);
-}
-
-inline BeginTag* BoundaryTag::next()
-{
-    BoundaryTag* next = this + 1;
-    return reinterpret_cast<BeginTag*>(next);
-}
-
-inline void BoundaryTag::initSentinel()
-{
-    setRange(Range(nullptr, largeMin));
-    setFree(false);
-    setVMState(VMState::Virtual);
-}
-
-} // namespace bmalloc
-
-#endif // BoundaryTag_h
diff --git a/Source/bmalloc/bmalloc/Chunk.h b/Source/bmalloc/bmalloc/Chunk.h
index 9e768fc..bc9750f 100644 (file)
 #ifndef Chunk_h
 #define Chunk_h
 
-#include "BeginTag.h"
-#include "EndTag.h"
 #include "Object.h"
-#include "ObjectType.h"
 #include "Sizes.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
 namespace bmalloc {
 
 class Chunk {
-    // Our metadata layout includes a left and right edge sentinel.
-    // Metadata takes up enough space to leave at least the first two
-    // boundary tag slots unused.
-    //
-    //      So, boundary tag space looks like this:
-    //
-    //          [OOXXXXX...]
-    //
-    //      And BoundaryTag::get subtracts one, producing:
-    //
-    //          [OXXXXX...O].
-    //
-    // We use the X's for boundary tags and the O's for edge sentinels.
-
-    static const size_t boundaryTagCount = chunkSize / largeMin;
-    static_assert(boundaryTagCount > 2, "Chunk must have space for two sentinel boundary tags");
-
 public:
     static Chunk* get(void*);
 
-    static BeginTag* beginTag(void*);
-    static EndTag* endTag(void*, size_t);
-
-    Chunk(std::lock_guard<StaticMutex>&, ObjectType);
+    Chunk(std::lock_guard<StaticMutex>&);
 
     size_t offset(void*);
 
@@ -73,33 +50,23 @@ public:
     char* bytes() { return reinterpret_cast<char*>(this); }
     SmallLine* lines() { return m_lines.begin(); }
     SmallPage* pages() { return m_pages.begin(); }
-    std::array<BoundaryTag, boundaryTagCount>& boundaryTags() { return m_boundaryTags; }
-
-    ObjectType objectType() { return m_objectType; }
 
 private:
-    union {
-        // The first few bytes of metadata cover the metadata region, so they're
-        // not used. We can steal them to store m_objectType.
-        ObjectType m_objectType;
-        std::array<SmallLine, chunkSize / smallLineSize> m_lines;
-    };
-
-    union {
-        // A chunk is either small or large for its lifetime, so we can union
-        // small and large metadata, and then use one or the other at runtime.
-        std::array<SmallPage, chunkSize / smallPageSize> m_pages;
-        std::array<BoundaryTag, boundaryTagCount> m_boundaryTags;
-    };
+    std::array<SmallLine, chunkSize / smallLineSize> m_lines;
+    std::array<SmallPage, chunkSize / smallPageSize> m_pages;
 };
 
 static_assert(sizeof(Chunk) + largeMax <= chunkSize, "largeMax is too big");
 
-static_assert(sizeof(Chunk) / smallLineSize > sizeof(ObjectType),
-    "Chunk::m_objectType overlaps with metadata");
+struct ChunkHash {
+    static unsigned hash(Chunk* key)
+    {
+        return static_cast<unsigned>(
+            reinterpret_cast<uintptr_t>(key) / chunkSize);
+    }
+};
 
-inline Chunk::Chunk(std::lock_guard<StaticMutex>&, ObjectType objectType)
-    : m_objectType(objectType)
+inline Chunk::Chunk(std::lock_guard<StaticMutex>&)
 {
 }
 
@@ -108,26 +75,6 @@ inline Chunk* Chunk::get(void* object)
     return static_cast<Chunk*>(mask(object, chunkMask));
 }
 
-inline BeginTag* Chunk::beginTag(void* object)
-{
-    Chunk* chunk = get(object);
-    size_t boundaryTagNumber = (static_cast<char*>(object) - reinterpret_cast<char*>(chunk)) / largeMin - 1; // - 1 to offset from the right sentinel.
-    return static_cast<BeginTag*>(&chunk->m_boundaryTags[boundaryTagNumber]);
-}
-
-inline EndTag* Chunk::endTag(void* object, size_t size)
-{
-    Chunk* chunk = get(object);
-    char* end = static_cast<char*>(object) + size;
-
-    // We subtract largeMin before computing the end pointer's boundary tag. An
-    // object's size need not be an even multiple of largeMin. Subtracting
-    // largeMin rounds down to the last boundary tag prior to our neighbor.
-
-    size_t boundaryTagNumber = (end - largeMin - reinterpret_cast<char*>(chunk)) / largeMin - 1; // - 1 to offset from the right sentinel.
-    return static_cast<EndTag*>(&chunk->m_boundaryTags[boundaryTagNumber]);
-}
-
 inline size_t Chunk::offset(void* object)
 {
     BASSERT(object >= this);
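Note on the new ChunkHash above: Chunk::get masks an object pointer down to a chunkSize boundary, so every Chunk* is chunkSize-aligned, and dividing by chunkSize yields a dense per-chunk index that serves as the hash for the Heap's new chunk-to-ObjectType map. A minimal standalone sketch of that arithmetic, not part of the patch (the 2MB value mirrors the new chunkSize constant; the helper names are illustrative):

    #include <cstdint>
    #include <cstdio>

    static const uintptr_t kChunkSize = 2 * 1024 * 1024; // mirrors Sizes::chunkSize

    // Equivalent of Chunk::get: round an interior pointer down to its chunk.
    static void* chunkBase(void* object)
    {
        return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(object) & ~(kChunkSize - 1));
    }

    // Equivalent of ChunkHash::hash: chunk bases are chunkSize-aligned, so the
    // quotient is a small, dense chunk index.
    static unsigned chunkIndex(void* chunk)
    {
        return static_cast<unsigned>(reinterpret_cast<uintptr_t>(chunk) / kChunkSize);
    }

    int main()
    {
        void* object = reinterpret_cast<void*>(kChunkSize * 100 + 4096);
        std::printf("%u\n", chunkIndex(chunkBase(object))); // prints 100
    }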
diff --git a/Source/bmalloc/bmalloc/Deallocator.cpp b/Source/bmalloc/bmalloc/Deallocator.cpp
index 1c10339..c4fba11 100644 (file)
@@ -59,12 +59,6 @@ void Deallocator::scavenge()
         processObjectLog();
 }
 
-void Deallocator::deallocateXLarge(void* object)
-{
-    std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
-    PerProcess<Heap>::getFastCase()->deallocateXLarge(lock, object);
-}
-
 void Deallocator::processObjectLog(std::lock_guard<StaticMutex>& lock)
 {
     Heap* heap = PerProcess<Heap>::getFastCase();
@@ -83,8 +77,6 @@ void Deallocator::processObjectLog()
 
 void Deallocator::deallocateSlowCase(void* object)
 {
-    BASSERT(!deallocateFastCase(object));
-    
     if (!m_isBmallocEnabled) {
         free(object);
         return;
@@ -93,11 +85,15 @@ void Deallocator::deallocateSlowCase(void* object)
     if (!object)
         return;
 
-    if (isXLarge(object))
-        return deallocateXLarge(object);
+    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+    if (PerProcess<Heap>::getFastCase()->isLarge(lock, object)) {
+        PerProcess<Heap>::getFastCase()->deallocateLarge(lock, object);
+        return;
+    }
+
+    if (m_objectLog.size() == m_objectLog.capacity())
+        processObjectLog(lock);
 
-    BASSERT(m_objectLog.size() == m_objectLog.capacity());
-    processObjectLog();
     m_objectLog.push(object);
 }
 
diff --git a/Source/bmalloc/bmalloc/Deallocator.h b/Source/bmalloc/bmalloc/Deallocator.h
index 78adfcf..7fca956 100644 (file)
@@ -51,16 +51,14 @@ private:
     bool deallocateFastCase(void*);
     void deallocateSlowCase(void*);
 
-    void deallocateXLarge(void*);
-
     FixedVector<void*, deallocatorLogCapacity> m_objectLog;
     bool m_isBmallocEnabled;
 };
 
 inline bool Deallocator::deallocateFastCase(void* object)
 {
-    BASSERT(isXLarge(nullptr));
-    if (isXLarge(object))
+    BASSERT(mightBeLarge(nullptr));
+    if (mightBeLarge(object))
         return false;
 
     if (m_objectLog.size() == m_objectLog.capacity())
diff --git a/Source/bmalloc/bmalloc/EndTag.h b/Source/bmalloc/bmalloc/EndTag.h
deleted file mode 100644 (file)
index 88a591e..0000000
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef EndTag_h
-#define EndTag_h
-
-#include "BoundaryTag.h"
-
-namespace bmalloc {
-
-class EndTag : public BoundaryTag {
-public:
-    void init(BeginTag*);
-};
-
-inline void EndTag::init(BeginTag* other)
-{
-    // To save space, an object can have only one tag, representing both
-    // its begin and its end. In that case, we must avoid initializing the
-    // end tag, since there is no end tag.
-    if (static_cast<BoundaryTag*>(this) == static_cast<BoundaryTag*>(other))
-        return;
-
-    std::memcpy(this, other, sizeof(BoundaryTag));
-    setEnd(true);
-}
-
-} // namespace bmalloc
-
-#endif // EndTag_h
diff --git a/Source/bmalloc/bmalloc/FreeList.cpp b/Source/bmalloc/bmalloc/FreeList.cpp
deleted file mode 100644 (file)
index adc884a..0000000
+++ /dev/null
@@ -1,140 +0,0 @@
-/*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include "FreeList.h"
-#include "Chunk.h"
-#include "Vector.h"
-
-namespace bmalloc {
-
-// We don't eagerly remove invalidated entries from the free list when we merge
-// large objects. This means that the free list can contain invalid and/or
-// duplicate entries. (Repeating a merge-and-then-split produces a duplicate.)
-
-// To avoid infinite growth in invalid entries, we incrementally remove
-// invalid entries as we discover them during allocation, and we also garbage
-// collect the free list as it grows.
-
-LargeObject FreeList::takeGreedy(VMState::HasPhysical hasPhysical)
-{
-    for (size_t i = 0; i < m_vector.size(); ++i) {
-        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(hasPhysical, m_vector[i].size())) {
-            m_vector.pop(i--);
-            continue;
-        }
-
-        m_vector.pop(i--);
-        return largeObject;
-    }
-
-    return LargeObject();
-}
-
-LargeObject FreeList::take(VMState::HasPhysical hasPhysical, size_t size)
-{
-    LargeObject candidate;
-    size_t candidateIndex;
-    size_t begin = m_vector.size() > freeListSearchDepth ? m_vector.size() - freeListSearchDepth : 0;
-    for (size_t i = begin; i < m_vector.size(); ++i) {
-        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(hasPhysical, m_vector[i].size())) {
-            m_vector.pop(i--);
-            continue;
-        }
-
-        if (largeObject.size() < size)
-            continue;
-
-        if (!!candidate && candidate.begin() < largeObject.begin())
-            continue;
-
-        candidate = largeObject;
-        candidateIndex = i;
-    }
-
-    if (!!candidate)
-        m_vector.pop(candidateIndex);
-    return candidate;
-}
-
-LargeObject FreeList::take(VMState::HasPhysical hasPhysical, size_t alignment, size_t size, size_t unalignedSize)
-{
-    BASSERT(isPowerOfTwo(alignment));
-    size_t alignmentMask = alignment - 1;
-
-    LargeObject candidate;
-    size_t candidateIndex;
-    size_t begin = m_vector.size() > freeListSearchDepth ? m_vector.size() - freeListSearchDepth : 0;
-    for (size_t i = begin; i < m_vector.size(); ++i) {
-        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(hasPhysical, m_vector[i].size())) {
-            m_vector.pop(i--);
-            continue;
-        }
-
-        if (largeObject.size() < size)
-            continue;
-
-        if (test(largeObject.begin(), alignmentMask) && largeObject.size() < unalignedSize)
-            continue;
-
-        if (!!candidate && candidate.begin() < largeObject.begin())
-            continue;
-
-        candidate = largeObject;
-        candidateIndex = i;
-    }
-    
-    if (!!candidate)
-        m_vector.pop(candidateIndex);
-    return candidate;
-}
-
-void FreeList::removeInvalidAndDuplicateEntries(VMState::HasPhysical hasPhysical)
-{
-    for (size_t i = 0; i < m_vector.size(); ++i) {
-        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (!largeObject.isValidAndFree(hasPhysical, m_vector[i].size())) {
-            m_vector.pop(i--);
-            continue;
-        }
-        
-        largeObject.setMarked(false);
-    }
-
-    for (size_t i = 0; i < m_vector.size(); ++i) {
-        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-        if (largeObject.isMarked()) {
-            m_vector.pop(i--);
-            continue;
-        }
-
-        largeObject.setMarked(true);
-    }
-}
-
-
-} // namespace bmalloc
diff --git a/Source/bmalloc/bmalloc/FreeList.h b/Source/bmalloc/bmalloc/FreeList.h
deleted file mode 100644 (file)
index 0929f14..0000000
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef FreeList_h
-#define FreeList_h
-
-#include "LargeObject.h"
-#include "Vector.h"
-
-namespace bmalloc {
-
-// Helper object for SegregatedFreeList.
-
-class FreeList {
-public:
-    FreeList();
-
-    void push(VMState::HasPhysical, const LargeObject&);
-
-    LargeObject take(VMState::HasPhysical, size_t);
-    LargeObject take(VMState::HasPhysical, size_t alignment, size_t, size_t unalignedSize);
-
-    LargeObject takeGreedy(VMState::HasPhysical);
-
-    void removeInvalidAndDuplicateEntries(VMState::HasPhysical);
-
-private:
-    Vector<Range> m_vector;
-    size_t m_limit;
-};
-
-inline FreeList::FreeList()
-    : m_vector()
-    , m_limit(freeListSearchDepth)
-{
-}
-
-inline void FreeList::push(VMState::HasPhysical hasPhysical, const LargeObject& largeObject)
-{
-    BASSERT(largeObject.isFree());
-    BASSERT(largeObject.vmState().hasPhysical() == static_cast<bool>(hasPhysical));
-    BASSERT(!largeObject.prevCanMerge());
-    BASSERT(!largeObject.nextCanMerge());
-    if (m_vector.size() == m_limit) {
-        removeInvalidAndDuplicateEntries(hasPhysical);
-        m_limit = std::max(m_vector.size() * freeListGrowFactor, freeListSearchDepth);
-    }
-    m_vector.push(largeObject.range());
-}
-
-} // namespace bmalloc
-
-#endif // FreeList_h
diff --git a/Source/bmalloc/bmalloc/Heap.cpp b/Source/bmalloc/bmalloc/Heap.cpp
index 7dfd71c..fe31b05 100644 (file)
@@ -26,7 +26,6 @@
 #include "Heap.h"
 #include "BumpAllocator.h"
 #include "Chunk.h"
-#include "LargeObject.h"
 #include "PerProcess.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
@@ -36,13 +35,11 @@ namespace bmalloc {
 
 Heap::Heap(std::lock_guard<StaticMutex>&)
     : m_vmPageSizePhysical(vmPageSizePhysical())
-    , m_largeObjects(VMState::HasPhysical::True)
     , m_isAllocatingPages(false)
     , m_scavenger(*this, &Heap::concurrentScavenge)
 {
     RELEASE_BASSERT(vmPageSizePhysical() >= smallPageSize);
     RELEASE_BASSERT(vmPageSize() >= vmPageSizePhysical());
-    RELEASE_BASSERT(xLargeAlignment >= vmPageSize());
 
     initializeLineMetadata();
     initializePageMetadata();
@@ -91,7 +88,7 @@ void Heap::initializePageMetadata()
         for (size_t pageSize = m_vmPageSizePhysical;
             pageSize < pageSizeMax;
             pageSize += m_vmPageSizePhysical) {
-            RELEASE_BASSERT(pageSize <= largeObjectMax);
+            RELEASE_BASSERT(pageSize <= chunkSize / 2);
             size_t waste = pageSize % size;
             if (waste <= pageSize / pageSizeWasteFactor)
                 return pageSize;
@@ -116,7 +113,6 @@ void Heap::scavenge(std::unique_lock<StaticMutex>& lock, std::chrono::millisecon
 
     scavengeSmallPages(lock, sleepDuration);
     scavengeLargeObjects(lock, sleepDuration);
-    scavengeXLargeObjects(lock, sleepDuration);
 
     sleep(lock, sleepDuration);
 }
@@ -135,157 +131,16 @@ void Heap::scavengeSmallPages(std::unique_lock<StaticMutex>& lock, std::chrono::
 
 void Heap::scavengeLargeObjects(std::unique_lock<StaticMutex>& lock, std::chrono::milliseconds sleepDuration)
 {
-    while (LargeObject largeObject = m_largeObjects.takeGreedy()) {
-        m_vmHeap.deallocateLargeObject(lock, largeObject);
-        waitUntilFalse(lock, sleepDuration, m_isAllocatingPages);
-    }
-}
-
-void Heap::scavengeXLargeObjects(std::unique_lock<StaticMutex>& lock, std::chrono::milliseconds sleepDuration)
-{
-    while (XLargeRange range = m_xLargeMap.takePhysical()) {
+    while (XLargeRange range = m_largeFree.removePhysical()) {
         lock.unlock();
         vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
         lock.lock();
         
-        range.setVMState(VMState::Virtual);
-        m_xLargeMap.addVirtual(range);
+        range.setPhysicalSize(0);
+        m_largeFree.add(range);
 
         waitUntilFalse(lock, sleepDuration, m_isAllocatingPages);
     }
-
-    m_xLargeMap.shrinkToFit();
-}
-
-inline LargeObject& Heap::splitAndAllocate(std::lock_guard<StaticMutex>& lock, LargeObject& largeObject, size_t size)
-{
-    BASSERT(largeObject.isFree());
-
-    LargeObject nextLargeObject;
-
-    if (largeObject.size() - size >= largeMin) {
-        std::pair<LargeObject, LargeObject> split = largeObject.split(size);
-        largeObject = split.first;
-        nextLargeObject = split.second;
-    }
-
-    largeObject.setFree(false);
-    Object object(largeObject.begin());
-    object.line()->ref(lock);
-    BASSERT(object.chunk()->objectType() == ObjectType::Large);
-
-    if (nextLargeObject) {
-        BASSERT(!nextLargeObject.nextCanMerge());
-        m_largeObjects.insert(nextLargeObject);
-    }
-
-    return largeObject;
-}
-
-inline LargeObject& Heap::splitAndAllocate(std::lock_guard<StaticMutex>& lock, LargeObject& largeObject, size_t alignment, size_t size)
-{
-    LargeObject prevLargeObject;
-    LargeObject nextLargeObject;
-
-    size_t alignmentMask = alignment - 1;
-    if (test(largeObject.begin(), alignmentMask)) {
-        size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
-        std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
-        prevLargeObject = pair.first;
-        largeObject = pair.second;
-    }
-
-    BASSERT(largeObject.isFree());
-
-    if (largeObject.size() - size >= largeMin) {
-        std::pair<LargeObject, LargeObject> split = largeObject.split(size);
-        largeObject = split.first;
-        nextLargeObject = split.second;
-    }
-
-    largeObject.setFree(false);
-    Object object(largeObject.begin());
-    object.line()->ref(lock);
-    BASSERT(object.chunk()->objectType() == ObjectType::Large);
-
-    if (prevLargeObject) {
-        LargeObject merged = prevLargeObject.merge();
-        m_largeObjects.insert(merged);
-    }
-
-    if (nextLargeObject) {
-        LargeObject merged = nextLargeObject.merge();
-        m_largeObjects.insert(merged);
-    }
-
-    return largeObject;
-}
-
-void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t size)
-{
-    BASSERT(size <= largeMax);
-    BASSERT(size >= largeMin);
-    BASSERT(size == roundUpToMultipleOf<largeAlignment>(size));
-    
-    LargeObject largeObject = m_largeObjects.take(size);
-    if (!largeObject)
-        largeObject = m_vmHeap.allocateLargeObject(lock, size);
-
-    if (largeObject.vmState().hasVirtual()) {
-        m_isAllocatingPages = true;
-        // We commit before we split in order to avoid split/merge commit/decommit churn.
-        vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
-        largeObject.setVMState(VMState::Physical);
-    }
-
-    largeObject = splitAndAllocate(lock, largeObject, size);
-
-    return largeObject.begin();
-}
-
-void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size, size_t unalignedSize)
-{
-    BASSERT(size <= largeMax);
-    BASSERT(size >= largeMin);
-    BASSERT(size == roundUpToMultipleOf<largeAlignment>(size));
-    BASSERT(unalignedSize <= largeMax);
-    BASSERT(unalignedSize >= largeMin);
-    BASSERT(unalignedSize == roundUpToMultipleOf<largeAlignment>(unalignedSize));
-    BASSERT(alignment <= chunkSize / 2);
-    BASSERT(alignment >= largeAlignment);
-    BASSERT(isPowerOfTwo(alignment));
-
-    LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
-    if (!largeObject)
-        largeObject = m_vmHeap.allocateLargeObject(lock, alignment, size, unalignedSize);
-
-    if (largeObject.vmState().hasVirtual()) {
-        m_isAllocatingPages = true;
-        // We commit before we split in order to avoid split/merge commit/decommit churn.
-        vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
-        largeObject.setVMState(VMState::Physical);
-    }
-
-    largeObject = splitAndAllocate(lock, largeObject, alignment, size);
-
-    return largeObject.begin();
-}
-
-void Heap::shrinkLarge(std::lock_guard<StaticMutex>& lock, LargeObject& largeObject, size_t newSize)
-{
-    std::pair<LargeObject, LargeObject> split = largeObject.split(newSize);
-    deallocateLarge(lock, split.second);
-}
-
-void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject& largeObject)
-{
-    BASSERT(!largeObject.isFree());
-    BASSERT(Object(largeObject.begin()).chunk()->objectType() == ObjectType::Large);
-    largeObject.setFree(true);
-    
-    LargeObject merged = largeObject.merge();
-    m_largeObjects.insert(merged);
-    m_scavenger.run();
 }
 
 SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t sizeClass)
@@ -299,7 +154,10 @@ SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t si
             return m_smallPages[pageClass].pop();
 
         m_isAllocatingPages = true;
-        return m_vmHeap.allocateSmallPage(lock, pageClass);
+
+        SmallPage* page = m_vmHeap.allocateSmallPage(lock, pageClass);
+        m_objectTypes.set(Chunk::get(page), ObjectType::Small);
+        return page;
     }();
 
     page->setSizeClass(sizeClass);
@@ -310,10 +168,8 @@ void Heap::deallocateSmallLine(std::lock_guard<StaticMutex>& lock, Object object
 {
     BASSERT(!object.line()->refCount(lock));
     SmallPage* page = object.page();
-    if (object.chunk()->objectType() == ObjectType::Large)
-        return deallocateLarge(lock, LargeObject(object.begin()));
-
     page->deref(lock);
+
     if (!page->hasFreeLines(lock)) {
         page->setHasFreeLines(lock, true);
         m_smallPagesWithFreeLines[page->sizeClass()].push(page);
@@ -446,18 +302,6 @@ void Heap::allocateSmallBumpRangesByObject(
     }
 }
 
-void* Heap::allocateXLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
-{
-    void* result = tryAllocateXLarge(lock, alignment, size);
-    RELEASE_BASSERT(result);
-    return result;
-}
-
-void* Heap::allocateXLarge(std::lock_guard<StaticMutex>& lock, size_t size)
-{
-    return allocateXLarge(lock, alignment, size);
-}
-
 XLargeRange Heap::splitAndAllocate(XLargeRange& range, size_t alignment, size_t size)
 {
     XLargeRange prev;
@@ -471,76 +315,85 @@ XLargeRange Heap::splitAndAllocate(XLargeRange& range, size_t alignment, size_t
         range = pair.second;
     }
 
-    if (range.size() - size >= xLargeAlignment) {
-        size_t alignedSize = roundUpToMultipleOf<xLargeAlignment>(size);
-        std::pair<XLargeRange, XLargeRange> pair = range.split(alignedSize);
+    if (range.size() - size > size / pageSizeWasteFactor) {
+        std::pair<XLargeRange, XLargeRange> pair = range.split(size);
         range = pair.first;
         next = pair.second;
     }
+    
+    if (range.physicalSize() < range.size()) {
+        m_isAllocatingPages = true;
 
-    // At this point our range might contain an unused tail fragment. This is
-    // common. We can't allocate the tail fragment because it's aligned to less
-    // than xLargeAlignment. So, we pair the allocation with its tail fragment
-    // in the allocated list. This is an important optimization because it
-    // keeps the free list short, speeding up allocation and merging.
-
-    std::pair<XLargeRange, XLargeRange> allocated = range.split(roundUpToMultipleOf(m_vmPageSizePhysical, size));
-    if (allocated.first.vmState().hasVirtual()) {
-        vmAllocatePhysicalPagesSloppy(allocated.first.begin(), allocated.first.size());
-        allocated.first.setVMState(VMState::Physical);
+        vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
+        range.setPhysicalSize(range.size());
     }
+    
+    if (prev)
+        m_largeFree.add(prev);
 
-    m_xLargeMap.addAllocated(prev, allocated, next);
-    return allocated.first;
+    if (next)
+        m_largeFree.add(next);
+
+    m_objectTypes.set(Chunk::get(range.begin()), ObjectType::Large);
+
+    m_largeAllocated.set(range.begin(), range.size());
+    return range;
 }
 
-void* Heap::tryAllocateXLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
+void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
 {
+    BASSERT(size <= largeMax);
+    BASSERT(size <= largeMax / 2);
     BASSERT(isPowerOfTwo(alignment));
-    BASSERT(alignment < xLargeMax);
 
-    m_isAllocatingPages = true;
+    size = size ? roundUpToMultipleOf(largeAlignment, size) : largeAlignment;
+    alignment = roundUpToMultipleOf<largeAlignment>(alignment);
 
-    size = std::max(m_vmPageSizePhysical, size);
-    alignment = roundUpToMultipleOf<xLargeAlignment>(alignment);
-
-    XLargeRange range = m_xLargeMap.takeFree(alignment, size);
+    XLargeRange range = m_largeFree.remove(alignment, size);
     if (!range) {
-        // We allocate VM in aligned multiples to increase the chances that
-        // the OS will provide contiguous ranges that we can merge.
-        size_t alignedSize = roundUpToMultipleOf<xLargeAlignment>(size);
-
-        void* begin = tryVMAllocate(alignment, alignedSize);
-        if (!begin)
+        range = m_vmHeap.tryAllocateLargeChunk(lock, alignment, size);
+        if (!range)
             return nullptr;
-        range = XLargeRange(begin, alignedSize, VMState::Virtual);
+
+        m_largeFree.add(range);
+        range = m_largeFree.remove(alignment, size);
     }
 
     return splitAndAllocate(range, alignment, size).begin();
 }
 
-size_t Heap::xLargeSize(std::unique_lock<StaticMutex>&, void* object)
+void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
+{
+    void* result = tryAllocateLarge(lock, alignment, size);
+    RELEASE_BASSERT(result);
+    return result;
+}
+
+bool Heap::isLarge(std::lock_guard<StaticMutex>&, void* object)
 {
-    return m_xLargeMap.getAllocated(object).size();
+    return m_objectTypes.get(Object(object).chunk()) == ObjectType::Large;
 }
 
-void Heap::shrinkXLarge(std::unique_lock<StaticMutex>&, const Range& object, size_t newSize)
+size_t Heap::largeSize(std::lock_guard<StaticMutex>&, void* object)
+{
+    return m_largeAllocated.get(object);
+}
+
+void Heap::shrinkLarge(std::lock_guard<StaticMutex>&, const Range& object, size_t newSize)
 {
     BASSERT(object.size() > newSize);
 
-    if (object.size() - newSize < m_vmPageSizePhysical)
-        return;
-    
-    XLargeRange range = m_xLargeMap.takeAllocated(object.begin());
-    splitAndAllocate(range, xLargeAlignment, newSize);
+    size_t size = m_largeAllocated.remove(object.begin());
+    XLargeRange range = XLargeRange(object, size);
+    splitAndAllocate(range, alignment, newSize);
 
     m_scavenger.run();
 }
 
-void Heap::deallocateXLarge(std::unique_lock<StaticMutex>&, void* object)
+void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
 {
-    XLargeRange range = m_xLargeMap.takeAllocated(object);
-    m_xLargeMap.addFree(range);
+    size_t size = m_largeAllocated.remove(object);
+    m_largeFree.add(XLargeRange(object, size, size));
     
     m_scavenger.run();
 }
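One detail of the new Heap::splitAndAllocate above is the split policy: a free range is only split when the leftover tail is larger than size / pageSizeWasteFactor; smaller tails stay attached to the allocation, which keeps tiny fragments off the free list at the cost of a bounded amount of internal waste. A minimal sketch of that decision, assuming pageSizeWasteFactor is still 8 (the helper name is illustrative, not bmalloc API):

    #include <cstddef>

    // Mirrors the `range.size() - size > size / pageSizeWasteFactor` test.
    static bool shouldSplitTail(size_t rangeSize, size_t requestedSize, size_t wasteFactor = 8)
    {
        return rangeSize - requestedSize > requestedSize / wasteFactor;
    }

    // Examples, with a 1MB request:
    //   shouldSplitTail(1100 * 1024, 1024 * 1024) == false  // ~76kB tail <= 128kB: keep it attached
    //   shouldSplitTail(2048 * 1024, 1024 * 1024) == true   // 1MB tail  >  128kB: return it to the free list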
diff --git a/Source/bmalloc/bmalloc/Heap.h b/Source/bmalloc/bmalloc/Heap.h
index 72cd6d2..d863e38 100644 (file)
@@ -30,9 +30,9 @@
 #include "Environment.h"
 #include "LineMetadata.h"
 #include "List.h"
+#include "Map.h"
 #include "Mutex.h"
 #include "Object.h"
-#include "SegregatedFreeList.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
 #include "VMHeap.h"
@@ -56,20 +56,25 @@ public:
     void allocateSmallBumpRanges(std::lock_guard<StaticMutex>&, size_t sizeClass, BumpAllocator&, BumpRangeCache&);
     void derefSmallLine(std::lock_guard<StaticMutex>&, Object);
 
-    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t);
-    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, size_t unalignedSize);
-    void shrinkLarge(std::lock_guard<StaticMutex>&, LargeObject&, size_t);
+    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
+    void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
+    void deallocateLarge(std::lock_guard<StaticMutex>&, void*);
 
-    void* allocateXLarge(std::lock_guard<StaticMutex>&, size_t);
-    void* allocateXLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
-    void* tryAllocateXLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
-    size_t xLargeSize(std::unique_lock<StaticMutex>&, void*);
-    void shrinkXLarge(std::unique_lock<StaticMutex>&, const Range&, size_t newSize);
-    void deallocateXLarge(std::unique_lock<StaticMutex>&, void*);
+    bool isLarge(std::lock_guard<StaticMutex>&, void*);
+    size_t largeSize(std::lock_guard<StaticMutex>&, void*);
+    void shrinkLarge(std::lock_guard<StaticMutex>&, const Range&, size_t);
 
     void scavenge(std::unique_lock<StaticMutex>&, std::chrono::milliseconds sleepDuration);
 
 private:
+    struct LargeObjectHash {
+        static unsigned hash(void* key)
+        {
+            return static_cast<unsigned>(
+                reinterpret_cast<uintptr_t>(key) / smallMax);
+        }
+    };
+
     ~Heap() = delete;
     
     void initializeLineMetadata();
@@ -83,10 +88,7 @@ private:
     SmallPage* allocateSmallPage(std::lock_guard<StaticMutex>&, size_t sizeClass);
 
     void deallocateSmallLine(std::lock_guard<StaticMutex>&, Object);
-    void deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject&);
 
-    LargeObject& splitAndAllocate(std::lock_guard<StaticMutex>&, LargeObject&, size_t);
-    LargeObject& splitAndAllocate(std::lock_guard<StaticMutex>&, LargeObject&, size_t, size_t);
     void mergeLarge(BeginTag*&, EndTag*&, Range&);
     void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
@@ -96,7 +98,6 @@ private:
     void concurrentScavenge();
     void scavengeSmallPages(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
     void scavengeLargeObjects(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
-    void scavengeXLargeObjects(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
 
     size_t m_vmPageSizePhysical;
     Vector<LineMetadata> m_smallLineMetadata;
@@ -105,9 +106,10 @@ private:
     std::array<List<SmallPage>, sizeClassCount> m_smallPagesWithFreeLines;
     std::array<List<SmallPage>, pageClassCount> m_smallPages;
 
-    SegregatedFreeList m_largeObjects;
-    
-    XLargeMap m_xLargeMap;
+    Map<void*, size_t, LargeObjectHash> m_largeAllocated;
+    XLargeMap m_largeFree;
+
+    Map<Chunk*, ObjectType, ChunkHash> m_objectTypes;
 
     bool m_isAllocatingPages;
     AsyncTask<Heap, decltype(&Heap::concurrentScavenge)> m_scavenger;
diff --git a/Source/bmalloc/bmalloc/LargeObject.h b/Source/bmalloc/bmalloc/LargeObject.h
deleted file mode 100644 (file)
index 70d91b2..0000000
+++ /dev/null
@@ -1,275 +0,0 @@
-/*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef LargeObject_h
-#define LargeObject_h
-
-#include "BeginTag.h"
-#include "Chunk.h"
-#include "EndTag.h"
-#include "Range.h"
-
-namespace bmalloc {
-
-class LargeObject {
-public:
-    LargeObject();
-    LargeObject(void*);
-
-    enum DoNotValidateTag { DoNotValidate };
-    LargeObject(DoNotValidateTag, void*);
-    
-    operator bool() { return !!*this; }
-    bool operator!() { return !m_object; }
-
-    char* begin() const { return static_cast<char*>(m_object); }
-    char* end() const { return begin() + size(); }
-    size_t size() const { return m_beginTag->size(); }
-    Range range() const { return Range(m_object, size()); }
-
-    void setFree(bool) const;
-    bool isFree() const;
-
-    bool prevCanMerge() const;
-    bool nextCanMerge() const;
-
-    VMState vmState() const;
-    void setVMState(VMState) const;
-
-    bool isMarked() const;
-    void setMarked(bool) const;
-
-    bool isValidAndFree(VMState::HasPhysical, size_t) const;
-
-    LargeObject merge() const;
-    std::pair<LargeObject, LargeObject> split(size_t) const;
-
-private:
-    LargeObject(BeginTag*, EndTag*, void*);
-
-    void validate() const;
-    void validateSelf() const;
-
-    BeginTag* m_beginTag;
-    EndTag* m_endTag;
-    void* m_object;
-};
-
-inline LargeObject::LargeObject()
-    : m_beginTag(nullptr)
-    , m_endTag(nullptr)
-    , m_object(nullptr)
-{
-}
-
-inline LargeObject::LargeObject(void* object)
-    : m_beginTag(Chunk::beginTag(object))
-    , m_endTag(Chunk::endTag(object, m_beginTag->size()))
-    , m_object(object)
-{
-    validate();
-}
-
-inline LargeObject::LargeObject(DoNotValidateTag, void* object)
-    : m_beginTag(Chunk::beginTag(object))
-    , m_endTag(Chunk::endTag(object, m_beginTag->size()))
-    , m_object(object)
-{
-}
-
-inline LargeObject::LargeObject(BeginTag* beginTag, EndTag* endTag, void* object)
-    : m_beginTag(beginTag)
-    , m_endTag(endTag)
-    , m_object(object)
-{
-}
-
-inline void LargeObject::setFree(bool isFree) const
-{
-    validate();
-    m_beginTag->setFree(isFree);
-    m_endTag->setFree(isFree);
-}
-
-inline bool LargeObject::isFree() const
-{
-    validate();
-    return m_beginTag->isFree();
-}
-
-inline bool LargeObject::prevCanMerge() const
-{
-    return m_beginTag->prev()->isFree();
-}
-
-inline bool LargeObject::nextCanMerge() const
-{
-    return m_endTag->next()->isFree();
-}
-
-inline VMState LargeObject::vmState() const
-{
-    validate();
-    return m_beginTag->vmState();
-}
-
-inline void LargeObject::setVMState(VMState vmState) const
-{
-    validate();
-    m_beginTag->setVMState(vmState);
-    m_endTag->setVMState(vmState);
-}
-
-inline bool LargeObject::isMarked() const
-{
-    validate();
-    return m_beginTag->isMarked();
-}
-
-inline void LargeObject::setMarked(bool isMarked) const
-{
-    validate();
-    m_beginTag->setMarked(isMarked);
-    m_endTag->setMarked(isMarked);
-}
-
-inline bool LargeObject::isValidAndFree(VMState::HasPhysical hasPhysical, size_t expectedSize) const
-{
-    if (!m_beginTag->isFree())
-        return false;
-    
-    if (m_beginTag->isEnd())
-        return false;
-
-    if (m_beginTag->size() != expectedSize)
-        return false;
-    
-    if (m_beginTag->compactBegin() != BoundaryTag::compactBegin(m_object))
-        return false;
-
-    if (m_beginTag->vmState().hasPhysical() != static_cast<bool>(hasPhysical))
-        return false;
-
-    return true;
-}
-
-inline LargeObject LargeObject::merge() const
-{
-    validate();
-    BASSERT(isFree());
-
-    BeginTag* beginTag = m_beginTag;
-    EndTag* endTag = m_endTag;
-    Range range = this->range();
-    VMState vmState = this->vmState();
-
-    EndTag* prev = beginTag->prev();
-    if (prev->isFree()) {
-        vmState.merge(prev->vmState());
-        Range left(range.begin() - prev->size(), prev->size());
-        range = Range(left.begin(), left.size() + range.size());
-
-        prev->clear();
-        beginTag->clear();
-
-        beginTag = Chunk::beginTag(range.begin());
-    }
-
-    BeginTag* next = endTag->next();
-    if (next->isFree()) {
-        vmState.merge(next->vmState());
-        Range right(range.end(), next->size());
-        range = Range(range.begin(), range.size() + right.size());
-
-        endTag->clear();
-        next->clear();
-
-        endTag = Chunk::endTag(range.begin(), range.size());
-    }
-
-    beginTag->setRange(range);
-    beginTag->setFree(true);
-    beginTag->setVMState(vmState);
-    endTag->init(beginTag);
-
-    return LargeObject(beginTag, endTag, range.begin());
-}
-
-inline std::pair<LargeObject, LargeObject> LargeObject::split(size_t size) const
-{
-    BASSERT(size <= this->size());
-    Range split(begin(), size);
-    Range leftover = Range(split.end(), this->size() - size);
-    BASSERT(leftover.size() >= largeMin);
-
-    BeginTag* splitBeginTag = m_beginTag;
-    EndTag* splitEndTag = Chunk::endTag(split.begin(), size);
-
-    BeginTag* leftoverBeginTag = Chunk::beginTag(leftover.begin());
-    EndTag* leftoverEndTag = m_endTag;
-
-    splitBeginTag->setRange(split);
-    splitEndTag->init(splitBeginTag);
-
-    *leftoverBeginTag = *splitBeginTag;
-    leftoverBeginTag->setRange(leftover);
-    leftoverEndTag->init(leftoverBeginTag);
-
-    return std::make_pair(
-        LargeObject(splitBeginTag, splitEndTag, split.begin()),
-        LargeObject(leftoverBeginTag, leftoverEndTag, leftover.begin()));
-}
-
-inline void LargeObject::validateSelf() const
-{
-    BASSERT(!m_beginTag->isEnd());
-    BASSERT(m_endTag->isEnd() || static_cast<BoundaryTag*>(m_endTag) == static_cast<BoundaryTag*>(m_beginTag));
-
-    BASSERT(size() >= largeMin);
-
-    BASSERT(m_beginTag->size() == m_endTag->size());
-    BASSERT(m_beginTag->isFree() == m_endTag->isFree());
-    BASSERT(m_beginTag->vmState() == m_endTag->vmState());
-    BASSERT(m_beginTag->isMarked() == m_endTag->isMarked());
-}
-
-inline void LargeObject::validate() const
-{
-    if (!m_beginTag->prev()->isSentinel()) {
-        LargeObject prev(DoNotValidate, begin() - m_beginTag->prev()->size());
-        prev.validateSelf();
-    }
-
-    validateSelf();
-
-    if (!m_endTag->next()->isSentinel()) {
-        LargeObject next(DoNotValidate, begin() + size());
-        next.validateSelf();
-    }
-}
-
-} // namespace bmalloc
-
-#endif // LargeObject_h
diff --git a/Source/bmalloc/bmalloc/Map.h b/Source/bmalloc/bmalloc/Map.h
new file mode 100644 (file)
index 0000000..1124a8c
--- /dev/null
@@ -0,0 +1,134 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef Map_h
+#define Map_h
+
+#include "Inline.h"
+#include "Sizes.h"
+#include "Vector.h"
+
+namespace bmalloc {
+
+class SmallPage;
+
+template<typename Key, typename Value, typename Hash> class Map {
+    static_assert(std::is_trivially_destructible<Key>::value, "Map must have a trivial destructor.");
+    static_assert(std::is_trivially_destructible<Value>::value, "Map must have a trivial destructor.");
+public:
+    struct Bucket {
+        Key key;
+        Value value;
+    };
+    
+    size_t size() { return m_keyCount; }
+    size_t capacity() { return m_table.size(); }
+
+    // key must be in the map.
+    Value& get(const Key& key)
+    {
+        auto& bucket = find(key, [&](const Bucket& bucket) { return bucket.key == key; });
+        return bucket.value;
+    }
+
+    void set(const Key& key, const Value& value)
+    {
+        if (shouldGrow())
+            rehash();
+
+        auto& bucket = find(key, [&](const Bucket& bucket) { return !bucket.key || bucket.key == key; });
+        if (!bucket.key) {
+            bucket.key = key;
+            ++m_keyCount;
+        }
+        bucket.value = value;
+    }
+
+    // key must be in the map.
+    Value remove(const Key& key)
+    {
+        if (shouldShrink())
+            rehash();
+
+        auto& bucket = find(key, [&](const Bucket& bucket) { return bucket.key == key; });
+        Value value = bucket.value;
+        bucket.key = Key();
+        --m_keyCount;
+        return value;
+    }
+
+private:
+    static const unsigned minCapacity = 16;
+    static const unsigned maxLoad = 2;
+    static const unsigned rehashLoad = 4;
+    static const unsigned minLoad = 8;
+
+    bool shouldGrow() { return m_keyCount * maxLoad >= capacity(); }
+    bool shouldShrink() { return m_keyCount * minLoad <= capacity() && capacity() > minCapacity; }
+
+    void rehash();
+
+    template<typename Predicate>
+    Bucket& find(const Key& key, const Predicate& predicate)
+    {
+        for (unsigned h = Hash::hash(key); ; ++h) {
+            unsigned i = h & m_tableMask;
+
+            Bucket& bucket = m_table[i];
+            if (predicate(bucket))
+                return bucket;
+        }
+    }
+
+    unsigned m_keyCount;
+    unsigned m_tableMask;
+    Vector<Bucket> m_table;
+};
+
+template<typename Key, typename Value, typename Hash>
+void Map<Key, Value, Hash>::rehash()
+{
+    auto oldTable = std::move(m_table);
+
+    size_t newCapacity = std::max(minCapacity, m_keyCount * rehashLoad);
+    m_table.resize(newCapacity);
+
+    m_keyCount = 0;
+    m_tableMask = newCapacity - 1;
+
+    for (auto& bucket : oldTable) {
+        if (!bucket.key)
+            continue;
+
+        BASSERT(!shouldGrow());
+        set(bucket.key, bucket.value);
+    }
+}
+
+template<typename Key, typename Value, typename Hash> const unsigned Map<Key, Value, Hash>::minCapacity;
+
+} // namespace bmalloc
+
+#endif // Map_h
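Map is a small open-addressing hash table with linear probing; an empty bucket is one whose key equals a default-constructed Key, so a null or default key can never be stored, and get/remove require the key to be present. A hedged usage sketch shaped like the Heap's new pointer-to-size table (PointerHash is a stand-in for Heap::LargeObjectHash, and the static storage mirrors the zero-initialized PerProcess storage the Heap's members rely on):

    #include <cstdint>
    #include "Map.h"

    struct PointerHash {
        static unsigned hash(void* key)
        {
            return static_cast<unsigned>(reinterpret_cast<uintptr_t>(key) >> 16);
        }
    };

    // Static storage is zero-initialized, like the Heap's PerProcess-allocated members.
    static bmalloc::Map<void*, size_t, PointerHash> allocatedSizes;

    size_t exampleRoundTrip(void* object)
    {
        allocatedSizes.set(object, 512 * 1024);   // record an allocation and its size
        size_t size = allocatedSizes.get(object); // precondition: key is present
        return allocatedSizes.remove(object) == size ? size : 0;
    }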
diff --git a/Source/bmalloc/bmalloc/Object.h b/Source/bmalloc/bmalloc/Object.h
index a6f6800..89650c3 100644 (file)
@@ -26,6 +26,8 @@
 #ifndef Object_h
 #define Object_h
 
+#include <cstddef>
+
 namespace bmalloc {
 
 class Chunk;
diff --git a/Source/bmalloc/bmalloc/ObjectType.cpp b/Source/bmalloc/bmalloc/ObjectType.cpp
index ca4e6f1..a8d3397 100644 (file)
 #include "ObjectType.h"
 
 #include "Chunk.h"
+#include "Heap.h"
 #include "Object.h"
+#include "PerProcess.h"
 
 namespace bmalloc {
 
 ObjectType objectType(void* object)
 {
-    if (isXLarge(object))
-        return ObjectType::XLarge;
+    if (mightBeLarge(object)) {
+        if (!object)
+            return ObjectType::Small;
+
+        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+        if (PerProcess<Heap>::getFastCase()->isLarge(lock, object))
+            return ObjectType::Large;
+    }
     
-    return Object(object).chunk()->objectType();
+    return ObjectType::Small;
 }
 
 } // namespace bmalloc
diff --git a/Source/bmalloc/bmalloc/ObjectType.h b/Source/bmalloc/bmalloc/ObjectType.h
index e870a2b..2cc3ab0 100644 (file)
 
 namespace bmalloc {
 
-enum class ObjectType : unsigned char { Small, Large, XLarge };
+enum class ObjectType : unsigned char { Small, Large };
 
 ObjectType objectType(void*);
 
-inline bool isXLarge(void* object)
+inline bool mightBeLarge(void* object)
 {
-    return !test(object, ~xLargeMask);
+    return !test(object, largeAlignmentMask);
 }
 
 } // namespace bmalloc
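mightBeLarge replaces isXLarge as the lock-free filter on the deallocation fast path: any pointer with nonzero bits under largeAlignmentMask is definitely a small allocation and can be logged without taking the Heap lock, while aligned pointers (and null) fall through to the slow path, which asks Heap::isLarge under the lock. A standalone sketch of the predicate, assuming the new constants (smallMax = 64kB, pageSizeWasteFactor = 8, so largeAlignment = 8kB); illustrative, not bmalloc source:

    #include <cstdint>

    static const uintptr_t kLargeAlignment = 64 * 1024 / 8; // 8kB, mirrors Sizes::largeAlignment
    static const uintptr_t kLargeAlignmentMask = kLargeAlignment - 1;

    // True for null and for any 8kB-aligned pointer; false means the object is
    // certainly small, so it can be freed without consulting the Heap.
    static bool mightBeLargeSketch(const void* object)
    {
        return !(reinterpret_cast<uintptr_t>(object) & kLargeAlignmentMask);
    }

    // mightBeLargeSketch(nullptr)                           == true
    // mightBeLargeSketch(reinterpret_cast<void*>(0x200000)) == true   (aligned: might be large)
    // mightBeLargeSketch(reinterpret_cast<void*>(0x200010)) == false  (unaligned: definitely small)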
diff --git a/Source/bmalloc/bmalloc/SegregatedFreeList.cpp b/Source/bmalloc/bmalloc/SegregatedFreeList.cpp
deleted file mode 100644 (file)
index 395845c..0000000
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include "SegregatedFreeList.h"
-
-namespace bmalloc {
-
-SegregatedFreeList::SegregatedFreeList(VMState::HasPhysical hasPhysical)
-    : m_hasPhysical(hasPhysical)
-{
-    BASSERT(static_cast<size_t>(&select(largeObjectMax) - m_freeLists.begin()) == m_freeLists.size() - 1);
-}
-
-void SegregatedFreeList::insert(const LargeObject& largeObject)
-{
-    auto& list = select(largeObject.size());
-    list.push(hasPhysical(), largeObject);
-}
-
-LargeObject SegregatedFreeList::takeGreedy()
-{
-    for (size_t i = m_freeLists.size(); i-- > 0; ) {
-        LargeObject largeObject = m_freeLists[i].takeGreedy(hasPhysical());
-        if (!largeObject)
-            continue;
-
-        return largeObject;
-    }
-    return LargeObject();
-}
-
-LargeObject SegregatedFreeList::take(size_t size)
-{
-    for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
-        LargeObject largeObject = list->take(hasPhysical(), size);
-        if (!largeObject)
-            continue;
-
-        return largeObject;
-    }
-    return LargeObject();
-}
-
-LargeObject SegregatedFreeList::take(size_t alignment, size_t size, size_t unalignedSize)
-{
-    for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
-        LargeObject largeObject = list->take(hasPhysical(), alignment, size, unalignedSize);
-        if (!largeObject)
-            continue;
-
-        return largeObject;
-    }
-    return LargeObject();
-}
-
-INLINE auto SegregatedFreeList::select(size_t size) -> FreeList&
-{
-    size_t alignCount = (size - largeMin) / largeAlignment;
-    size_t result = 0;
-    while (alignCount) {
-        ++result;
-        alignCount >>= 1;
-    }
-    return m_freeLists[result];
-}
-
-} // namespace bmalloc
diff --git a/Source/bmalloc/bmalloc/SegregatedFreeList.h b/Source/bmalloc/bmalloc/SegregatedFreeList.h
deleted file mode 100644 (file)
index 0f363ad..0000000
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef SegregatedFreeList_h
-#define SegregatedFreeList_h
-
-#include "FreeList.h"
-#include <array>
-
-namespace bmalloc {
-
-class SegregatedFreeList {
-public:
-    SegregatedFreeList(VMState::HasPhysical);
-
-    void insert(const LargeObject&);
-
-    // Returns a reasonable fit for the provided size, or LargeObject() if no fit
-    // is found. May return LargeObject() spuriously if searching takes too long.
-    // Incrementally removes stale items from the free list while searching.
-    // Does not eagerly remove the returned object from the free list.
-    LargeObject take(size_t);
-
-    // Returns a reasonable fit for the provided alignment and size, or
-    // a reasonable fit for the provided unaligned size, or LargeObject() if no
-    // fit is found. May return LargeObject() spuriously if searching takes too
-    // long. Incrementally removes stale items from the free list while
-    // searching. Does not eagerly remove the returned object from the free list.
-    LargeObject take(size_t alignment, size_t, size_t unalignedSize);
-
-    // Returns an unreasonable fit for the provided size, or LargeObject() if no
-    // fit is found. Never returns LargeObject() spuriously. Incrementally
-    // removes stale items from the free list while searching. Eagerly removes
-    // the returned object from the free list.
-    LargeObject takeGreedy();
-
-    VMState::HasPhysical hasPhysical() const { return m_hasPhysical; }
-private:
-    FreeList& select(size_t);
-
-    VMState::HasPhysical m_hasPhysical;
-    std::array<FreeList, 16> m_freeLists;
-};
-
-} // namespace bmalloc
-
-#endif // SegregatedFreeList_h
diff --git a/Source/bmalloc/bmalloc/Sizes.h b/Source/bmalloc/bmalloc/Sizes.h
index fc10fa4..5789772 100644 (file)
@@ -46,33 +46,25 @@ namespace Sizes {
     static const size_t alignment = 8;
     static const size_t alignmentMask = alignment - 1ul;
 
+    static const size_t chunkSize = 2 * MB;
+    static const size_t chunkMask = ~(chunkSize - 1ul);
+
     static const size_t smallLineSize = 256;
     static const size_t smallPageSize = 4 * kB;
     static const size_t smallPageLineCount = smallPageSize / smallLineSize;
 
     static const size_t maskSizeClassMax = 512;
-    static const size_t smallMax = 16 * kB;
+    static const size_t smallMax = 64 * kB;
 
-    static const size_t pageSizeMax = 32 * kB;
+    static const size_t pageSizeMax = smallMax * 2;
     static const size_t pageClassCount = pageSizeMax / smallPageSize;
 
     static const size_t pageSizeWasteFactor = 8;
     static const size_t logWasteFactor = 8;
 
-    static const size_t chunkSize = 2 * MB;
-    static const size_t chunkMask = ~(chunkSize - 1ul);
-
-    static const size_t largeAlignment = 64;
-    static const size_t largeMin = 1 * kB;
-    static const size_t largeObjectMax = chunkSize;
-    static const size_t largeMax = largeObjectMax / 2;
-
-    static const size_t xLargeAlignment = chunkSize;
-    static const size_t xLargeMask = ~(xLargeAlignment - 1);
-    static const size_t xLargeMax = std::numeric_limits<size_t>::max() - xLargeAlignment; // Make sure that rounding up to xLargeAlignment does not overflow.
-
-    static const size_t freeListSearchDepth = 16;
-    static const size_t freeListGrowFactor = 2;
+    static const size_t largeAlignment = smallMax / pageSizeWasteFactor;
+    static const size_t largeAlignmentMask = largeAlignment - 1;
+    static const size_t largeMax = std::numeric_limits<size_t>::max() - largeAlignment; // Make sure that rounding up to largeAlignment does not overflow.
 
     static const size_t deallocatorLogCapacity = 256;
     static const size_t bumpRangeCacheCapacity = 3;
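For reference, the new derived constants work out as follows: pageSizeMax = smallMax * 2 = 128kB, pageClassCount = 128kB / 4kB = 32, largeAlignment = smallMax / pageSizeWasteFactor = 8kB, and largeAlignmentMask covers the low 13 bits. A short standalone check of that arithmetic (illustrative only, not part of Sizes.h):

    #include <cstddef>

    static const size_t kB = 1024;
    static const size_t MB = 1024 * kB;

    static_assert(64 * kB * 2 == 128 * kB, "pageSizeMax = smallMax * 2 = 128kB");
    static_assert(128 * kB / (4 * kB) == 32, "pageClassCount = pageSizeMax / smallPageSize = 32");
    static_assert(64 * kB / 8 == 8 * kB, "largeAlignment = smallMax / pageSizeWasteFactor = 8kB");
    static_assert(8 * kB - 1 == 0x1FFF, "largeAlignmentMask is the low 13 bits");
    static_assert(2 * MB % (8 * kB) == 0, "a 2MB chunk is a whole number of largeAlignment units");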
diff --git a/Source/bmalloc/bmalloc/SortedVector.h b/Source/bmalloc/bmalloc/SortedVector.h
deleted file mode 100644 (file)
index bd4c4b7..0000000
+++ /dev/null
@@ -1,169 +0,0 @@
-/*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef SortedVector_h
-#define SortedVector_h
-
-#include "Vector.h"
-#include <algorithm>
-
-namespace bmalloc {
-
-template<typename T>
-class SortedVector {
-    static_assert(std::is_trivially_destructible<T>::value, "SortedVector must have a trivial destructor.");
-
-    struct Bucket {
-        explicit Bucket(T value)
-            : value(value)
-            , isDeleted(false)
-        {
-        }
-        
-        template<typename U> bool operator<(const U& other) const
-        {
-            return value < other;
-        }
-
-        T value;
-        bool isDeleted;
-    };
-
-public:
-    class iterator : public std::iterator<std::forward_iterator_tag, T> {
-    public:
-        iterator(Bucket* bucket, Bucket* end)
-            : m_bucket(bucket)
-            , m_end(end)
-        {
-            skipDeletedBuckets();
-        }
-        
-        iterator(const iterator& other)
-            : m_bucket(other.m_bucket)
-            , m_end(other.m_end)
-        {
-        }
-
-        iterator& operator++()
-        {
-            BASSERT(m_bucket != m_end);
-            ++m_bucket;
-            skipDeletedBuckets();
-            return *this;
-        }
-
-        bool operator!=(const iterator& other)
-        {
-            return m_bucket != other.m_bucket;
-        }
-
-        T& operator*()
-        {
-            BASSERT(m_bucket < m_end);
-            BASSERT(!m_bucket->isDeleted);
-            return m_bucket->value;
-        }
-
-        T* operator->()  { return &operator*(); }
-
-    private:
-        friend class SortedVector;
-
-        void skipDeletedBuckets()
-        {
-            while (m_bucket != m_end && m_bucket->isDeleted)
-                ++m_bucket;
-        }
-
-        Bucket* m_bucket;
-        Bucket* m_end;
-    };
-
-    iterator begin() { return iterator(m_vector.begin(), m_vector.end()); }
-    iterator end() { return iterator(m_vector.end(), m_vector.end()); }
-
-    void insert(const T&);
-
-    template<typename U> iterator find(const U&);
-    template<typename U> T get(const U&);
-    template<typename U> T take(const U&);
-
-    void shrinkToFit();
-
-private:
-    Vector<Bucket> m_vector;
-};
-
-template<typename T>
-void SortedVector<T>::insert(const T& value)
-{
-    auto it = std::lower_bound(m_vector.begin(), m_vector.end(), value);
-    if (it != m_vector.end() && it->isDeleted) {
-        *it = Bucket(value);
-        return;
-    }
-
-    m_vector.insert(it, Bucket(value));
-}
-
-template<typename T> template<typename U>
-typename SortedVector<T>::iterator SortedVector<T>::find(const U& value)
-{
-    auto it = std::lower_bound(m_vector.begin(), m_vector.end(), value);
-    return iterator(it, m_vector.end());
-}
-
-template<typename T> template<typename U>
-T SortedVector<T>::get(const U& value)
-{
-    return *find(value);
-}
-
-template<typename T> template<typename U>
-T SortedVector<T>::take(const U& value)
-{
-    auto it = find(value);
-    it.m_bucket->isDeleted = true;
-    return it.m_bucket->value;
-}
-
-template<typename T>
-void SortedVector<T>::shrinkToFit()
-{
-    auto isDeleted = [](const Bucket& bucket) {
-        return bucket.isDeleted;
-    };
-
-    auto newEnd = std::remove_if(m_vector.begin(), m_vector.end(), isDeleted);
-    size_t newSize = newEnd - m_vector.begin();
-    m_vector.shrink(newSize);
-
-    m_vector.shrinkToFit();
-}
-
-} // namespace bmalloc
-
-#endif // SortedVector_h
diff --git a/Source/bmalloc/bmalloc/VMHeap.cpp b/Source/bmalloc/bmalloc/VMHeap.cpp
index 801e04f..bff44c0 100644 (file)
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
  */
 
-#include "LargeObject.h"
 #include "PerProcess.h"
 #include "VMHeap.h"
 #include <thread>
 
 namespace bmalloc {
 
-VMHeap::VMHeap()
-    : m_largeObjects(VMState::HasPhysical::False)
+XLargeRange VMHeap::tryAllocateLargeChunk(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
 {
-}
+    // We allocate VM in aligned multiples to increase the chances that
+    // the OS will provide contiguous ranges that we can merge.
+    alignment = roundUpToMultipleOf<chunkSize>(alignment);
+    size = roundUpToMultipleOf<chunkSize>(size);
 
-LargeObject VMHeap::allocateChunk(std::lock_guard<StaticMutex>& lock)
-{
-    Chunk* chunk =
-        new (vmAllocate(chunkSize, chunkSize)) Chunk(lock, ObjectType::Large);
+    void* memory = tryVMAllocate(alignment, size);
+    if (!memory)
+        return XLargeRange();
 
+    Chunk* chunk = new (memory) Chunk(lock);
+    
 #if BOS(DARWIN)
     m_zone.addChunk(chunk);
 #endif
 
-    size_t alignment = largeAlignment;
-    size_t metadataSize = roundUpToMultipleOf(alignment, sizeof(Chunk));
-
-    Range range(chunk->bytes() + metadataSize, chunkSize - metadataSize);
-    BASSERT(range.size() <= largeObjectMax);
-
-    BeginTag* beginTag = Chunk::beginTag(range.begin());
-    beginTag->setRange(range);
-    beginTag->setFree(true);
-    beginTag->setVMState(VMState::Virtual);
-
-    EndTag* endTag = Chunk::endTag(range.begin(), range.size());
-    endTag->init(beginTag);
-
-    // Mark the left and right edges of our range as allocated. This naturally
-    // prevents merging logic from overflowing left (into metadata) or right
-    // (beyond our chunk), without requiring special-case checks.
-
-    EndTag* leftSentinel = beginTag->prev();
-    BASSERT(leftSentinel >= chunk->boundaryTags().begin());
-    BASSERT(leftSentinel < chunk->boundaryTags().end());
-    leftSentinel->initSentinel();
-
-    BeginTag* rightSentinel = endTag->next();
-    BASSERT(rightSentinel >= chunk->boundaryTags().begin());
-    BASSERT(rightSentinel < chunk->boundaryTags().end());
-    rightSentinel->initSentinel();
-
-    return LargeObject(range.begin());
+    return XLargeRange(chunk->bytes(), size, 0);
 }
 
 void VMHeap::allocateSmallChunk(std::lock_guard<StaticMutex>& lock, size_t pageClass)
 {
     Chunk* chunk =
-        new (vmAllocate(chunkSize, chunkSize)) Chunk(lock, ObjectType::Small);
+        new (vmAllocate(chunkSize, chunkSize)) Chunk(lock);
 
 #if BOS(DARWIN)
     m_zone.addChunk(chunk);
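
tryAllocateLargeChunk rounds both the requested alignment and size up to whole chunks before calling tryVMAllocate, so every large chunk reservation is a chunkSize multiple, and the returned XLargeRange starts with physicalSize == 0 because nothing has been committed yet. The helper below is a standalone stand-in for bmalloc's roundUpToMultipleOf (assumed power-of-two arithmetic, not the real header), just to show what the rounding does.

    #include <cassert>
    #include <cstddef>

    // Stand-in for bmalloc's roundUpToMultipleOf<divisor>(x); assumes a power-of-two divisor.
    template<size_t divisor>
    size_t roundUpToMultipleOf(size_t x)
    {
        static_assert(divisor && !(divisor & (divisor - 1)), "divisor must be a power of two");
        return (x + divisor - 1) & ~(divisor - 1);
    }

    int main()
    {
        constexpr size_t chunkSize = 2 * 1024 * 1024;

        // Rounding a 3MB request yields a 4MB VM reservation; rounding 1 byte yields a full chunk.
        assert(roundUpToMultipleOf<chunkSize>(3 * 1024 * 1024) == 4 * 1024 * 1024);
        assert(roundUpToMultipleOf<chunkSize>(1) == chunkSize);
        return 0;
    }
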
diff --git a/Source/bmalloc/bmalloc/VMHeap.h b/Source/bmalloc/bmalloc/VMHeap.h
index c5fb516..d91d8a6 100644 (file)
 #include "AsyncTask.h"
 #include "Chunk.h"
 #include "FixedVector.h"
-#include "LargeObject.h"
-#include "Range.h"
-#include "SegregatedFreeList.h"
-#include "VMState.h"
+#include "Map.h"
 #include "Vector.h"
+#include "XLargeRange.h"
 #if BOS(DARWIN)
 #include "Zone.h"
 #endif
@@ -46,23 +44,16 @@ class Heap;
 
 class VMHeap {
 public:
-    VMHeap();
-    
     SmallPage* allocateSmallPage(std::lock_guard<StaticMutex>&, size_t);
     void deallocateSmallPage(std::unique_lock<StaticMutex>&, size_t, SmallPage*);
 
-    LargeObject allocateLargeObject(std::lock_guard<StaticMutex>&, size_t);
-    LargeObject allocateLargeObject(std::lock_guard<StaticMutex>&, size_t, size_t, size_t);
-
-    void deallocateLargeObject(std::unique_lock<StaticMutex>&, LargeObject);
+    XLargeRange tryAllocateLargeChunk(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
     
 private:
-    LargeObject allocateChunk(std::lock_guard<StaticMutex>&);
     void allocateSmallChunk(std::lock_guard<StaticMutex>&, size_t);
 
     std::array<List<SmallPage>, pageClassCount> m_smallPages;
-    SegregatedFreeList m_largeObjects;
-
+    
 #if BOS(DARWIN)
     Zone m_zone;
 #endif
@@ -87,46 +78,6 @@ inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, siz
     m_smallPages[pageClass].push(page);
 }
 
-inline LargeObject VMHeap::allocateLargeObject(std::lock_guard<StaticMutex>& lock, size_t size)
-{
-    if (LargeObject largeObject = m_largeObjects.take(size))
-        return largeObject;
-
-    BASSERT(size <= largeMax);
-    return allocateChunk(lock);
-}
-
-inline LargeObject VMHeap::allocateLargeObject(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size, size_t unalignedSize)
-{
-    if (LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize))
-        return largeObject;
-
-    BASSERT(unalignedSize <= largeMax);
-    return allocateChunk(lock);
-}
-
-inline void VMHeap::deallocateLargeObject(std::unique_lock<StaticMutex>& lock, LargeObject largeObject)
-{
-    // Multiple threads might scavenge concurrently, meaning that new merging opportunities
-    // become visible after we reacquire the lock. Therefore we loop.
-    do {
-        largeObject = largeObject.merge();
-
-        // Temporarily mark this object as allocated to prevent clients from merging
-        // with it or allocating it while we're messing with its physical pages.
-        largeObject.setFree(false);
-
-        lock.unlock();
-        vmDeallocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
-        lock.lock();
-
-        largeObject.setFree(true);
-    } while (largeObject.prevCanMerge() || largeObject.nextCanMerge());
-
-    largeObject.setVMState(VMState::Virtual);
-    m_largeObjects.insert(largeObject);
-}
-
 } // namespace bmalloc
 
 #endif // VMHeap_h
diff --git a/Source/bmalloc/bmalloc/VMState.h b/Source/bmalloc/bmalloc/VMState.h
deleted file mode 100644 (file)
index b87aefe..0000000
+++ /dev/null
@@ -1,86 +0,0 @@
-/*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef VMState_h
-#define VMState_h
-
-namespace bmalloc {
-
-class VMState {
-public:
-    enum class HasPhysical : bool {
-        False = false,
-        True = true
-    };
-
-    enum State : unsigned {
-        Invalid = 0x0,
-        Physical = 0x1,
-        Virtual = 0x2,
-        Mixed = 0x3
-    };
-
-    VMState(State vmState)
-        : m_state(vmState)
-    {
-    }
-
-    explicit VMState(unsigned vmState)
-        : m_state(static_cast<State>(vmState))
-    {
-    }
-
-    inline bool hasPhysical()
-    {
-        return !!(m_state & VMState::Physical);
-    }
-
-    inline bool hasVirtual()
-    {
-        return !!(m_state & VMState::Virtual);
-    }
-
-    inline void merge(VMState otherVMState)
-    {
-        m_state = static_cast<State>(m_state | otherVMState.m_state);
-    }
-
-    bool operator==(VMState other) const { return m_state == other.m_state; }
-    explicit operator unsigned() const { return m_state; }
-
-private:
-    State m_state;
-};
-
-inline VMState merge(VMState a, VMState b)
-{
-    VMState result(a);
-    result.merge(b);
-    return result;
-}
-
-} // namespace bmalloc
-
-#endif // VMState_h
diff --git a/Source/bmalloc/bmalloc/Vector.h b/Source/bmalloc/bmalloc/Vector.h
index d257fc5..5681c34 100644 (file)
@@ -43,10 +43,8 @@ public:
     typedef T* iterator;
     typedef const T* const_iterator;
 
-    Vector(const Vector&) = delete;
-    Vector& operator=(const Vector&) = delete;
-
     Vector();
+    Vector(Vector&&);
     ~Vector();
 
     iterator begin() { return m_buffer; }
@@ -65,9 +63,11 @@ public:
     T pop(const_iterator it) { return pop(it - begin()); }
     
     void insert(iterator, const T&);
+    T remove(iterator);
 
     void grow(size_t);
     void shrink(size_t);
+    void resize(size_t);
 
     void shrinkToFit();
 
@@ -87,13 +87,24 @@ private:
 
 template<typename T>
 inline Vector<T>::Vector()
-    : m_buffer(0)
+    : m_buffer(nullptr)
     , m_size(0)
     , m_capacity(0)
 {
 }
 
 template<typename T>
+inline Vector<T>::Vector(Vector&& other)
+    : m_buffer(other.m_buffer)
+    , m_size(other.m_size)
+    , m_capacity(other.m_capacity)
+{
+    other.m_buffer = nullptr;
+    other.m_size = 0;
+    other.m_capacity = 0;
+}
+
+template<typename T>
 Vector<T>::~Vector()
 {
     if (m_buffer)
@@ -138,12 +149,24 @@ void Vector<T>::insert(iterator it, const T& value)
     size_t index = it - begin();
     size_t moveCount = end() - it;
 
-    if (m_size == m_capacity)
-        growCapacity();
-
+    grow(m_size + 1);
     std::memmove(&m_buffer[index + 1], &m_buffer[index], moveCount * sizeof(T));
+
     m_buffer[index] = value;
-    m_size++;
+}
+
+template<typename T>
+T Vector<T>::remove(iterator it)
+{
+    size_t index = it - begin();
+    size_t moveCount = end() - it - 1;
+
+    T result = *it;
+
+    std::memmove(&m_buffer[index], &m_buffer[index + 1], moveCount * sizeof(T));
+    shrink(m_size - 1);
+    
+    return result;
 }
 
 template<typename T>
@@ -164,6 +187,15 @@ inline void Vector<T>::shrink(size_t size)
 }
 
 template<typename T>
+inline void Vector<T>::resize(size_t size)
+{
+    if (size <= m_size)
+        shrink(size);
+    else
+        grow(size);
+}
+
+template<typename T>
 void Vector<T>::reallocateBuffer(size_t newCapacity)
 {
     size_t vmSize = bmalloc::vmSize(newCapacity * sizeof(T));
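
Vector picks up a move constructor plus order-preserving remove() and resize(), which the XLargeMap scans below rely on. A small usage sketch of the remove() semantics, with std::vector standing in for bmalloc::Vector (the behavior shown matches the patch; the container and helper name here are illustrative):

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Mirrors the new Vector<T>::remove(iterator): returns the removed element
    // and shifts the tail down by one, preserving order.
    template<typename T>
    T removeAt(std::vector<T>& vector, size_t index)
    {
        T result = vector[index];
        vector.erase(vector.begin() + index);
        return result;
    }

    int main()
    {
        std::vector<int> values { 10, 20, 30, 40 };
        int removed = removeAt(values, 1);
        assert(removed == 20);
        assert(values.size() == 3 && values[1] == 30); // order preserved, tail shifted
        return 0;
    }
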
diff --git a/Source/bmalloc/bmalloc/XLargeMap.cpp b/Source/bmalloc/bmalloc/XLargeMap.cpp
index a82a04a..ab38ffe 100644 (file)
  */
 
 #include "XLargeMap.h"
+#include <utility>
 
 namespace bmalloc {
 
-XLargeRange XLargeMap::takeFree(size_t alignment, size_t size)
+XLargeRange XLargeMap::remove(size_t alignment, size_t size)
 {
     size_t alignmentMask = alignment - 1;
 
@@ -61,14 +62,12 @@ XLargeRange XLargeMap::takeFree(size_t alignment, size_t size)
     return m_free.pop(candidate);
 }
 
-void XLargeMap::addFree(const XLargeRange& range)
+void XLargeMap::add(const XLargeRange& range)
 {
     XLargeRange merged = range;
 
     for (size_t i = 0; i < m_free.size(); ++i) {
-        auto& other = m_free[i];
-
-        if (!canMerge(merged, other))
+        if (!canMerge(merged, m_free[i]))
             continue;
 
         merged = merge(merged, m_free.pop(i--));
@@ -77,75 +76,16 @@ void XLargeMap::addFree(const XLargeRange& range)
     m_free.push(merged);
 }
 
-void XLargeMap::addAllocated(const XLargeRange& prev, const std::pair<XLargeRange, XLargeRange>& allocated, const XLargeRange& next)
-{
-    if (prev)
-        m_free.push(prev);
-    
-    if (next)
-        m_free.push(next);
-
-    m_allocated.insert({ allocated.first, allocated.second });
-}
-
-XLargeRange XLargeMap::getAllocated(void* object)
-{
-    return m_allocated.find(object)->object;
-}
-
-XLargeRange XLargeMap::takeAllocated(void* object)
-{
-    Allocation allocation = m_allocated.take(object);
-    return merge(allocation.object, allocation.unused);
-}
-
-void XLargeMap::shrinkToFit()
+XLargeRange XLargeMap::removePhysical()
 {
-    m_free.shrinkToFit();
-    m_allocated.shrinkToFit();
-}
-
-XLargeRange XLargeMap::takePhysical() {
-    auto hasPhysical = [](const XLargeRange& range) {
-        return range.vmState().hasPhysical();
-    };
-
-    auto it = std::find_if(m_free.begin(), m_free.end(), hasPhysical);
-    if (it != m_free.end())
-        return m_free.pop(it);
-
-    auto hasUnused = [](const Allocation& allocation) {
-        return allocation.unused && allocation.unused.vmState().hasPhysical();
-    };
-    
-    XLargeRange swapped;
-    auto it2 = std::find_if(m_allocated.begin(), m_allocated.end(), hasUnused);
-    if (it2 != m_allocated.end())
-        std::swap(it2->unused, swapped);
-    
-    return swapped;
-}
-
-void XLargeMap::addVirtual(const XLargeRange& range)
-{
-    auto canMerge = [&range](const Allocation& other) {
-        return other.object.end() == range.begin();
-    };
-
-    if (range.size() < xLargeAlignment) {
-        // This is an unused fragment, so it might belong in the allocated list.
-        auto it = std::find_if(m_allocated.begin(), m_allocated.end(), canMerge);
-        if (it != m_allocated.end()) {
-            BASSERT(!it->unused);
-            it->unused = range;
-            return;
-        }
+    auto it = std::find_if(m_free.begin(), m_free.end(), [](const XLargeRange& range) {
+        return range.physicalSize();
+    });
 
-        // If we didn't find a neighbor in the allocated list, our neighbor must
-        // have been freed. We'll merge with it below.
-    }
+    if (it == m_free.end())
+        return XLargeRange();
 
-    addFree(range);
+    return m_free.pop(it);
 }
 
 } // namespace bmalloc
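
With SortedVector and the allocated map gone, XLargeMap keeps a single Vector of free ranges and scans it linearly. The toy model below mirrors the shape of the add() merge loop using plain [begin, end) byte pairs instead of the real XLargeRange class (all names and types here are illustrative, not bmalloc API):

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Simplified stand-in for XLargeRange: a [begin, end) byte range.
    struct Range { size_t begin; size_t end; };

    static bool canMerge(const Range& a, const Range& b)
    {
        return a.end == b.begin || b.end == a.begin;
    }

    // Mirrors the shape of XLargeMap::add: O(n) scan, coalesce with every
    // adjacent free range, then push the merged result.
    static void add(std::vector<Range>& freeList, Range incoming)
    {
        for (size_t i = 0; i < freeList.size(); ++i) {
            if (!canMerge(incoming, freeList[i]))
                continue;
            Range other = freeList[i];
            freeList.erase(freeList.begin() + i--);
            incoming = { std::min(incoming.begin, other.begin),
                         std::max(incoming.end, other.end) };
        }
        freeList.push_back(incoming);
    }

    int main()
    {
        std::vector<Range> freeList;
        add(freeList, { 0, 4096 });
        add(freeList, { 8192, 12288 });
        add(freeList, { 4096, 8192 }); // bridges the two existing ranges
        assert(freeList.size() == 1);
        assert(freeList[0].begin == 0 && freeList[0].end == 12288);
        return 0;
    }
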
diff --git a/Source/bmalloc/bmalloc/XLargeMap.h b/Source/bmalloc/bmalloc/XLargeMap.h
index 7a88509..4f168e9 100644 (file)
@@ -26,7 +26,6 @@
 #ifndef XLargeMap_h
 #define XLargeMap_h
 
-#include "SortedVector.h"
 #include "Vector.h"
 #include "XLargeRange.h"
 #include <algorithm>
@@ -35,29 +34,12 @@ namespace bmalloc {
 
 class XLargeMap {
 public:
-    void addFree(const XLargeRange&);
-    XLargeRange takeFree(size_t alignment, size_t);
-
-    void addAllocated(const XLargeRange& prev, const std::pair<XLargeRange, XLargeRange>&, const XLargeRange& next);
-    XLargeRange getAllocated(void*);
-    XLargeRange takeAllocated(void*);
-
-    XLargeRange takePhysical();
-    void addVirtual(const XLargeRange&);
-    
-    void shrinkToFit();
+    void add(const XLargeRange&);
+    XLargeRange remove(size_t alignment, size_t);
+    XLargeRange removePhysical();
 
 private:
-    struct Allocation {
-        bool operator<(const Allocation& other) const { return object < other.object; }
-        bool operator<(void* ptr) const { return object.begin() < ptr; }
-
-        XLargeRange object;
-        XLargeRange unused;
-    };
-
     Vector<XLargeRange> m_free;
-    SortedVector<Allocation> m_allocated;
 };
 
 } // namespace bmalloc
diff --git a/Source/bmalloc/bmalloc/XLargeRange.h b/Source/bmalloc/bmalloc/XLargeRange.h
index ed7f831..45013e1 100644 (file)
 #ifndef XLargeRange_h
 #define XLargeRange_h
 
-#include "VMState.h"
-
 namespace bmalloc {
 
 class XLargeRange : public Range {
 public:
     XLargeRange()
         : Range()
-        , m_vmState(VMState::Virtual)
+        , m_physicalSize(0)
+    {
+    }
+
+    XLargeRange(const Range& other, size_t physicalSize)
+        : Range(other)
+        , m_physicalSize(physicalSize)
     {
     }
 
-    XLargeRange(void* begin, size_t size, VMState vmState)
+    XLargeRange(void* begin, size_t size, size_t physicalSize)
         : Range(begin, size)
-        , m_vmState(vmState)
+        , m_physicalSize(physicalSize)
     {
     }
-    
-    VMState vmState() const { return m_vmState; }
-    void setVMState(VMState vmState) { m_vmState = vmState; }
+
+    size_t physicalSize() const { return m_physicalSize; }
+    void setPhysicalSize(size_t physicalSize) { m_physicalSize = physicalSize; }
 
     std::pair<XLargeRange, XLargeRange> split(size_t) const;
 
+    bool operator<(const void* other) const { return begin() < other; }
+    bool operator<(const XLargeRange& other) const { return begin() < other.begin(); }
+
 private:
-    VMState m_vmState;
+    size_t m_physicalSize;
 };
 
 inline bool canMerge(const XLargeRange& a, const XLargeRange& b)
@@ -66,18 +73,32 @@ inline bool canMerge(const XLargeRange& a, const XLargeRange& b)
 
 inline XLargeRange merge(const XLargeRange& a, const XLargeRange& b)
 {
+    const XLargeRange& left = std::min(a, b);
+    if (left.size() == left.physicalSize()) {
+        return XLargeRange(
+            left.begin(),
+            a.size() + b.size(),
+            a.physicalSize() + b.physicalSize());
+    }
+
     return XLargeRange(
-        std::min(a.begin(), b.begin()),
+        left.begin(),
         a.size() + b.size(),
-        merge(a.vmState(), b.vmState()));
+        left.physicalSize());
 }
 
 inline std::pair<XLargeRange, XLargeRange> XLargeRange::split(size_t size) const
 {
     BASSERT(size <= this->size());
+    
+    if (size <= physicalSize()) {
+        XLargeRange left(begin(), size, size);
+        XLargeRange right(left.end(), this->size() - size, physicalSize() - size);
+        return std::make_pair(left, right);
+    }
 
-    XLargeRange left(begin(), size, vmState());
-    XLargeRange right(left.end(), this->size() - size, vmState());
+    XLargeRange left(begin(), size, physicalSize());
+    XLargeRange right(left.end(), this->size() - size, 0);
     return std::make_pair(left, right);
 }
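
physicalSize now means "size of the committed prefix of the range", which is why merge() only adds the two physical sizes when the left range is fully committed, and why split() hands the committed prefix to the left half first. A standalone mirror of that arithmetic (a sketch, not the real XLargeRange class) with a worked example:

    #include <cassert>
    #include <cstddef>
    #include <utility>

    // Standalone mirror of XLargeRange's bookkeeping: physicalSize is the size
    // of the committed prefix of the range.
    struct R { size_t begin; size_t size; size_t physicalSize; };

    static R merge(const R& a, const R& b)
    {
        const R& left = a.begin < b.begin ? a : b;
        // Only when the left range is fully physical can the right range's
        // committed prefix extend it; otherwise the prefix ends inside left.
        if (left.size == left.physicalSize)
            return { left.begin, a.size + b.size, a.physicalSize + b.physicalSize };
        return { left.begin, a.size + b.size, left.physicalSize };
    }

    static std::pair<R, R> split(const R& r, size_t size)
    {
        assert(size <= r.size);
        if (size <= r.physicalSize)
            return { { r.begin, size, size },
                     { r.begin + size, r.size - size, r.physicalSize - size } };
        return { { r.begin, size, r.physicalSize },
                 { r.begin + size, r.size - size, 0 } };
    }

    int main()
    {
        R a { 0, 4096, 4096 };    // fully committed
        R b { 4096, 4096, 1024 }; // only the first 1kB committed
        R m = merge(a, b);
        assert(m.physicalSize == 5120);

        auto halves = split(m, 4096);
        assert(halves.first.physicalSize == 4096);
        assert(halves.second.physicalSize == 1024);
        return 0;
    }
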