Make MarkedBlock state tracking support overlapped allocation and marking state
author    fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 20 Sep 2016 18:12:18 +0000 (18:12 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 20 Sep 2016 18:12:18 +0000 (18:12 +0000)
https://bugs.webkit.org/show_bug.cgi?id=161581

Reviewed by Geoffrey Garen.

JSTests:

Add a microbenchmark for why we want to reclaim empty blocks from other allocators.

* microbenchmarks/switching-size-classes.js: Added.

Source/JavaScriptCore:

Concurrent GCs must allow for mutation and allocation during collection. We already know
how to mutate during collection. We have a write barrier for that. Allocation during
collection is more involved: the collector modifies the mark bits, as well as other
kinds of MarkedBlock state, in-place during a collection. The allocator uses that same
MarkedBlock state to decide which regions of memory are free. This works if the allocator
never runs while the collector is running, but if we want to allow them to run at the same
time, then we need to have two versions of the state: one version built up by the
collector and another consumed by the allocator. We clear the collector state at the
beginning of collection, and splat the collector state onto the allocator state after
collection.

This could be super expensive, but we can make it cheap with some cleverness. The biggest
observation is just that most of the state is a handful of bits per block: is the block
free-listed? is it completely full? completely empty? in the incremental sweeper's
snapshot? is it retired? is it in eden? There is also state inside blocks, like the mark
bits, but I have a solid plan there and I'll save it for another patch. Once we view the
state of blocks as bits, we can put that state into bitvectors, so that if the collector
needs to transform the state of some blocks, it can do it with a single operation over
bitvectors. I like to think of this as 32-way parallelizing block operations, since
doing one operation on a 32-bit word in one of those bitvectors instantly affects 32
blocks.

This change converts all previous collections of MarkedBlocks, along with the MarkedBlock
state, into 8 bitvectors (live, empty, allocated, canAllocateButNotEmpty, eden, unswept,
markingNotEmpty, and markingRetired). The bitvectors separate allocator state (empty,
allocated, canAllocateButNotEmpty) from marking state (markingNotEmpty, markingRetired).

As a nice side-effect of switching to bitvectors, we get size class rebalancing for free.
It used to be that if a MarkedAllocator had an empty block, we would only allow that
memory to be reused by a different MarkedAllocator if we did an incremental sweep or a
full eager sweep. Now we hunt down all destructorless empty blocks before allocating new
MarkedBlocks. It would be relatively easy to also hunt down destructor empty blocks, but
the theory is that those might be expensive to sweep, so it might still be better to leave
those to the incremental sweeper.
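
The rebalancing idea can be sketched as follows. This is a hypothetical model of
the bookkeeping, not MarkedSpace's real code; the names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Each allocator tracks which of its blocks are empty. Before a new
// MarkedBlock is allocated from the OS, the space scans allocators for an
// empty block that can be handed off and reused.
struct Allocator {
    std::vector<bool> empty; // one bit per block owned by this allocator
    bool destructorless { true };

    int takeEmptyBlock()
    {
        for (size_t i = 0; i < empty.size(); ++i) {
            if (empty[i]) {
                empty[i] = false; // block is handed off for reuse
                return static_cast<int>(i);
            }
        }
        return -1; // no empty block here
    }
};

// Only destructorless allocators are scanned: sweeping a block that has
// destructors could be expensive, so those blocks are left to the
// incremental sweeper, as explained above.
int findEmptyBlock(std::vector<Allocator>& allocators)
{
    for (Allocator& allocator : allocators) {
        if (!allocator.destructorless)
            continue;
        int block = allocator.takeEmptyBlock();
        if (block != -1)
            return block;
    }
    return -1; // caller must allocate a fresh MarkedBlock
}
```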

This change is perf-neutral all around. I did some tests with two different kinds of
allocation strategies - something that is somewhat easier to do now that you can look for
blocks that are candidates for allocation by just scanning some bitvectors. I tried two
variants:

- Allocate out of non-empty blocks first, leaving empty blocks for last in case a
  different allocator needed them. This is sort of a best-fit strategy. I tried this
  first, and it can be expressed as:

  m_allocationCursor = m_canAllocateButNotEmpty.findBit(m_allocationCursor, true)

- Allocate out of lower-indexed blocks first, treating empty and canAllocateButNotEmpty
  blocks equally. This is sort of a first-fit strategy. This is what I ended up settling
  on, and it can be expressed as:

  m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true)

The best-fit strategy meant 1% regressions in LongSpider and Octane overall, and an 11%
regression on Octane/earley. First-fit means perf-neutrality. Most great allocators skew
towards first-fit because it's empirically better, so this result is not surprising.
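
The two cursor updates above can be modeled with a toy bitvector. The findBit
here is an illustrative stand-in for WTF::FastBitVector::findBit, which does the
same thing but scans word-at-a-time:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for a block-state bitvector. findBit returns the index
// of the first bit with the given value at or after `start`, or size() if
// there is none.
struct Bits {
    std::vector<bool> bits;
    size_t findBit(size_t start, bool value) const
    {
        for (size_t i = start; i < bits.size(); ++i) {
            if (bits[i] == value)
                return i;
        }
        return bits.size();
    }
    Bits operator|(const Bits& other) const
    {
        Bits result = *this;
        for (size_t i = 0; i < bits.size(); ++i)
            result.bits[i] = result.bits[i] || other.bits[i];
        return result;
    }
};

// Best-fit: only consider partially-filled blocks, leaving empties for
// other allocators.
size_t bestFitCursor(const Bits& canAllocateButNotEmpty, size_t cursor)
{
    return canAllocateButNotEmpty.findBit(cursor, true);
}

// First-fit: take the lowest-indexed block that is either empty or
// partially filled.
size_t firstFitCursor(const Bits& canAllocateButNotEmpty, const Bits& empty, size_t cursor)
{
    return (canAllocateButNotEmpty | empty).findBit(cursor, true);
}
```

Given blocks [empty, full, partial, full], best-fit skips the empty block at
index 0 and lands on index 2, while first-fit takes index 0.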

Overall, the performance of this patch on my machine is as follows, where "neutral" means
less than 1% and not statistically significant.

run-jsc-benchmarks:
    SunSpider: neutral
    LongSpider: 0.6% slower
    V8Spider: neutral
    Octane: neutral
    Kraken: neutral
    Microbenchmarks: 0.37% slower
    AsmBench: neutral
    CompressionBench: maybe 1% faster

For browser benchmarks, I report the ratio of means (bigger / smaller) along with a T-test
from Mathematica reported as % chance of not [sic] the null hypothesis. Note that we
normally consider anything less than 95% confidence to be inconclusive.

Browser benchmarks:
    PLT3: 0.3% faster with 67% confidence
    membuster:
        Snap2FinishedLoadingPost: 0.68% more memory with 50% confidence
        Snap3EndPost: 2.4% more memory with 61% confidence
    JetStream: 0.2% slower with 32% confidence
    Speedometer: 0.7% faster with 82% confidence

Additionally, Octane/splay's heap capacity goes down to ~180KB from ~200KB, so about a 10%
progression. This is due to the allocator rebalancing feature.

Finally, this breaks --useImmortalObjects. It was already broken as far as I can tell. I
filed a bug to reimplement it (bug 162296). Unless someone urgently needs this internal
tool, it's probably best to reimplement it after I'm done refactoring MarkedSpace.

* JavaScriptCore.xcodeproj/project.pbxproj:
* debugger/Debugger.cpp:
* heap/CellContainer.h:
* heap/CellContainerInlines.h:
(JSC::CellContainer::vm):
(JSC::CellContainer::heap):
(JSC::CellContainer::isMarkedOrNewlyAllocated):
(JSC::CellContainer::aboutToMark):
(JSC::CellContainer::isMarked): Deleted.
(JSC::CellContainer::flipIfNecessary): Deleted.
* heap/ConservativeRoots.cpp:
* heap/Heap.cpp:
(JSC::Heap::beginMarking):
(JSC::Heap::endMarking):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectImpl):
(JSC::Heap::snapshotMarkedSpace):
(JSC::Heap::prepareForAllocation):
(JSC::Heap::zombifyDeadObjects):
(JSC::MarkedBlockSnapshotFunctor::MarkedBlockSnapshotFunctor): Deleted.
(JSC::MarkedBlockSnapshotFunctor::operator()): Deleted.
(JSC::Heap::resetAllocators): Deleted.
* heap/Heap.h:
* heap/HeapInlines.h:
(JSC::Heap::isMarked):
(JSC::Heap::isMarkedConcurrently):
(JSC::Heap::testAndSetMarked):
* heap/HeapStatistics.cpp:
* heap/HeapUtil.h:
(JSC::HeapUtil::findGCObjectPointersForMarking):
(JSC::HeapUtil::isPointerGCObjectJSCell):
* heap/HeapVerifier.cpp:
* heap/IncrementalSweeper.cpp:
(JSC::IncrementalSweeper::IncrementalSweeper):
(JSC::IncrementalSweeper::doSweep):
(JSC::IncrementalSweeper::sweepNextBlock):
(JSC::IncrementalSweeper::startSweeping):
(JSC::IncrementalSweeper::willFinishSweeping):
* heap/IncrementalSweeper.h:
* heap/LargeAllocation.h:
(JSC::LargeAllocation::isMarked):
(JSC::LargeAllocation::isMarkedConcurrently):
(JSC::LargeAllocation::isMarkedOrNewlyAllocated):
(JSC::LargeAllocation::aboutToMark):
(JSC::LargeAllocation::isMarkedDuringWeakVisiting): Deleted.
(JSC::LargeAllocation::flipIfNecessary): Deleted.
(JSC::LargeAllocation::flipIfNecessaryDuringMarking): Deleted.
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::MarkedAllocator):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::findEmptyBlock):
(JSC::MarkedAllocator::tryAllocateWithoutCollectingImpl):
(JSC::MarkedAllocator::allocateIn):
(JSC::MarkedAllocator::tryAllocateIn):
(JSC::MarkedAllocator::allocateSlowCaseImpl):
(JSC::MarkedAllocator::tryAllocateBlock):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::stopAllocating):
(JSC::MarkedAllocator::prepareForAllocation):
(JSC::MarkedAllocator::lastChanceToFinalize):
(JSC::MarkedAllocator::resumeAllocating):
(JSC::MarkedAllocator::beginMarkingForFullCollection):
(JSC::MarkedAllocator::endMarking):
(JSC::MarkedAllocator::snapshotForEdenCollection):
(JSC::MarkedAllocator::snapshotForFullCollection):
(JSC::MarkedAllocator::findBlockToSweep):
(JSC::MarkedAllocator::sweep):
(JSC::MarkedAllocator::shrink):
(JSC::MarkedAllocator::assertSnapshotEmpty):
(JSC::MarkedAllocator::dump):
(JSC::MarkedAllocator::dumpBits):
(JSC::MarkedAllocator::retire): Deleted.
(JSC::MarkedAllocator::filterNextBlock): Deleted.
(JSC::MarkedAllocator::setNextBlockToSweep): Deleted.
(JSC::MarkedAllocator::reset): Deleted.
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::forEachBitVector):
(JSC::MarkedAllocator::forEachBitVectorWithName):
(JSC::MarkedAllocator::nextAllocator):
(JSC::MarkedAllocator::setNextAllocator):
(JSC::MarkedAllocator::forEachBlock):
(JSC::MarkedAllocator::resumeAllocating): Deleted.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::tryCreate):
(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::~Handle):
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::Handle::specializedSweep):
(JSC::MarkedBlock::Handle::sweep):
(JSC::MarkedBlock::Handle::sweepHelperSelectScribbleMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectEmptyMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated):
(JSC::MarkedBlock::Handle::sweepHelperSelectSweepMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectFlipMode):
(JSC::MarkedBlock::Handle::unsweepWithNoNewlyAllocated):
(JSC::MarkedBlock::Handle::setIsFreeListed):
(JSC::MarkedBlock::Handle::stopAllocating):
(JSC::MarkedBlock::Handle::lastChanceToFinalize):
(JSC::MarkedBlock::Handle::resumeAllocating):
(JSC::MarkedBlock::aboutToMarkSlow):
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::Handle::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::Handle::didConsumeFreeList):
(JSC::MarkedBlock::markCount):
(JSC::MarkedBlock::Handle::isEmpty):
(JSC::MarkedBlock::noteMarkedSlow):
(JSC::MarkedBlock::Handle::removeFromAllocator):
(JSC::MarkedBlock::Handle::didAddToAllocator):
(JSC::MarkedBlock::Handle::didRemoveFromAllocator):
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::Handle::isLiveCell):
(JSC::MarkedBlock::Handle::sweepHelperSelectStateAndSweepMode): Deleted.
(JSC::MarkedBlock::flipIfNecessary): Deleted.
(JSC::MarkedBlock::Handle::flipIfNecessary): Deleted.
(JSC::MarkedBlock::flipIfNecessarySlow): Deleted.
(JSC::MarkedBlock::flipIfNecessaryDuringMarkingSlow): Deleted.
(JSC::MarkedBlock::Handle::willRemoveBlock): Deleted.
(WTF::printInternal): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::Handle::isFreeListed):
(JSC::MarkedBlock::Handle::index):
(JSC::MarkedBlock::aboutToMark):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::isMarkedConcurrently):
(JSC::MarkedBlock::Handle::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::Handle::isOnBlocksToSweep): Deleted.
(JSC::MarkedBlock::Handle::setIsOnBlocksToSweep): Deleted.
(JSC::MarkedBlock::Handle::state): Deleted.
(JSC::MarkedBlock::flipIfNecessary): Deleted.
(JSC::MarkedBlock::flipIfNecessaryDuringMarking): Deleted.
(JSC::MarkedBlock::Handle::flipIfNecessary): Deleted.
(JSC::MarkedBlock::Handle::flipIfNecessaryDuringMarking): Deleted.
(JSC::MarkedBlock::Handle::flipForEdenCollection): Deleted.
(JSC::MarkedBlock::isMarkedDuringWeakVisiting): Deleted.
(JSC::MarkedBlock::Handle::isLive): Deleted.
(JSC::MarkedBlock::Handle::isLiveCell): Deleted.
(JSC::MarkedBlock::Handle::forEachLiveCell): Deleted.
(JSC::MarkedBlock::Handle::forEachDeadCell): Deleted.
(JSC::MarkedBlock::Handle::needsSweeping): Deleted.
(JSC::MarkedBlock::Handle::isAllocated): Deleted.
(JSC::MarkedBlock::Handle::isMarked): Deleted.
* heap/MarkedBlockInlines.h: Added.
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::Handle::isLiveCell):
(JSC::MarkedBlock::Handle::forEachLiveCell):
(JSC::MarkedBlock::Handle::forEachDeadCell):
(JSC::MarkedBlock::resetVersion):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::allocate):
(JSC::MarkedSpace::tryAllocate):
(JSC::MarkedSpace::sweep):
(JSC::MarkedSpace::prepareForAllocation):
(JSC::MarkedSpace::shrink):
(JSC::MarkedSpace::clearNewlyAllocated):
(JSC::MarkedSpace::beginMarking):
(JSC::MarkedSpace::endMarking):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::findEmptyBlock):
(JSC::MarkedSpace::snapshot):
(JSC::MarkedSpace::assertSnapshotEmpty):
(JSC::MarkedSpace::dumpBits):
(JSC::MarkedSpace::zombifySweep): Deleted.
(JSC::MarkedSpace::resetAllocators): Deleted.
(JSC::VerifyMarked::operator()): Deleted.
(JSC::MarkedSpace::flip): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::nextVersion):
(JSC::MarkedSpace::firstAllocator):
(JSC::MarkedSpace::allocatorForEmptyAllocation):
(JSC::MarkedSpace::forEachAllocator):
(JSC::MarkedSpace::blocksWithNewObjects): Deleted.
(JSC::MarkedSpace::setIsMarking): Deleted.
(JSC::MarkedSpace::forEachLiveCell): Deleted.
(JSC::MarkedSpace::forEachDeadCell): Deleted.
* heap/MarkedSpaceInlines.h: Added.
(JSC::MarkedSpace::forEachLiveCell):
(JSC::MarkedSpace::forEachDeadCell):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
(JSC::SlotVisitor::markAuxiliary):
(JSC::SlotVisitor::visitChildren):
* heap/Weak.h:
(WTF::HashTraits<JSC::Weak<T>>::emptyValue):
(WTF::HashTraits<JSC::Weak<T>>::peek):
* heap/WeakBlock.cpp:
(JSC::WeakBlock::specializedVisit):
(JSC::WeakBlock::reap):
* heap/WeakInlines.h:
(WTF::HashTraits<JSC::Weak<T>>::emptyValue): Deleted.
(WTF::HashTraits<JSC::Weak<T>>::peek): Deleted.
* jit/JITThunks.h:
* runtime/JSGlobalObject.cpp:
* runtime/PrototypeMap.h:
* runtime/SamplingProfiler.cpp:
* runtime/WeakGCMap.h:
* tools/JSDollarVMPrototype.cpp:

Source/WTF:

The main change here is to bring back FastBitVector.cpp, so that I could outline some
large slow path functions. This also adds some utilities, like atomicSetAndCheck() and
isEmpty(). The GC uses these.

* WTF.xcodeproj/project.pbxproj:
* wtf/CMakeLists.txt:
* wtf/FastBitVector.cpp: Added.
(WTF::FastBitVectorWordOwner::setEqualsSlow):
(WTF::FastBitVectorWordOwner::resizeSlow):
* wtf/FastBitVector.h:
(WTF::FastBitVectorWordOwner::operator=):
(WTF::FastBitVectorWordOwner::resize):
(WTF::FastBitVectorImpl::isEmpty):
(WTF::FastBitVector::atomicSetAndCheck):
(WTF::FastBitVector::operator[]): Deleted.

Tools:

Remove the always-trigger-copy-phase configuration.

* Scripts/run-jsc-stress-tests:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@206154 268f45cc-cd09-0410-ab3c-d52691b4dbfc

52 files changed:
JSTests/ChangeLog
JSTests/microbenchmarks/switching-size-classes.js [new file with mode: 0644]
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Scripts/builtins/builtins_generate_internals_wrapper_header.py
Source/JavaScriptCore/debugger/Debugger.cpp
Source/JavaScriptCore/heap/AllocationScope.h [new file with mode: 0644]
Source/JavaScriptCore/heap/CellContainer.h
Source/JavaScriptCore/heap/CellContainerInlines.h
Source/JavaScriptCore/heap/ConservativeRoots.cpp
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapStatistics.cpp
Source/JavaScriptCore/heap/HeapUtil.h
Source/JavaScriptCore/heap/HeapVerifier.cpp
Source/JavaScriptCore/heap/IncrementalSweeper.cpp
Source/JavaScriptCore/heap/IncrementalSweeper.h
Source/JavaScriptCore/heap/LargeAllocation.h
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedAllocator.h
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedBlockInlines.h [new file with mode: 0644]
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/MarkedSpaceInlines.h [new file with mode: 0644]
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/Weak.h
Source/JavaScriptCore/heap/WeakBlock.cpp
Source/JavaScriptCore/heap/WeakInlines.h
Source/JavaScriptCore/jit/JITThunks.h
Source/JavaScriptCore/runtime/JSGlobalObject.cpp
Source/JavaScriptCore/runtime/JSObject.cpp
Source/JavaScriptCore/runtime/JSProxy.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/PrototypeMap.cpp
Source/JavaScriptCore/runtime/PrototypeMap.h
Source/JavaScriptCore/runtime/PrototypeMapInlines.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/RegExpCache.h
Source/JavaScriptCore/runtime/SamplingProfiler.cpp
Source/JavaScriptCore/runtime/WeakGCMap.h
Source/JavaScriptCore/runtime/WeakGCMapInlines.h
Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
Source/WTF/ChangeLog
Source/WTF/WTF.xcodeproj/project.pbxproj
Source/WTF/wtf/CMakeLists.txt
Source/WTF/wtf/FastBitVector.cpp [new file with mode: 0644]
Source/WTF/wtf/FastBitVector.h
Source/WebCore/testing/GCObservation.cpp
Tools/ChangeLog
Tools/Scripts/run-jsc-stress-tests

diff --git a/JSTests/microbenchmarks/switching-size-classes.js b/JSTests/microbenchmarks/switching-size-classes.js
new file mode 100644 (file)
index 0000000..e5ca1c0
--- /dev/null
@@ -0,0 +1,4 @@
+for (var i = 0; i < 10; ++i) {
+    for (var j = 0; j < 100000; ++j)
+        new Array(10 * i);
+}
index 0fc4d4a..282f109 100644 (file)
                0F7C39FB1C8F629300480151 /* RegExpInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FA1C8F629300480151 /* RegExpInlines.h */; };
                0F7C39FD1C8F659500480151 /* RegExpObjectInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FC1C8F659500480151 /* RegExpObjectInlines.h */; };
                0F7C39FF1C90C55B00480151 /* DFGOpInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FE1C90C55B00480151 /* DFGOpInfo.h */; };
+               0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */; };
+               0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */; };
                0F8023EA1613832B00A0BA45 /* ByValInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F8023E91613832300A0BA45 /* ByValInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F8335B71639C1E6001443B5 /* ArrayAllocationProfile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F8335B41639C1E3001443B5 /* ArrayAllocationProfile.cpp */; };
                0F8335B81639C1EA001443B5 /* ArrayAllocationProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F8335B51639C1E3001443B5 /* ArrayAllocationProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F9FB4F417FCB91700CB67F8 /* DFGStackLayoutPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F9FB4F217FCB91700CB67F8 /* DFGStackLayoutPhase.cpp */; };
                0F9FB4F517FCB91700CB67F8 /* DFGStackLayoutPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9FB4F317FCB91700CB67F8 /* DFGStackLayoutPhase.h */; };
                0F9FC8C514E1B60400D52AE0 /* PutKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9FC8C114E1B5FB00D52AE0 /* PutKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FA131711D8DD72B00EC130A /* PrototypeMapInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA131701D8DD72900EC130A /* PrototypeMapInlines.h */; };
                0FA2C17B17D7CF84009D015F /* TestRunnerUtils.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA2C17917D7CF84009D015F /* TestRunnerUtils.cpp */; };
                0FA2C17C17D7CF84009D015F /* TestRunnerUtils.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA2C17A17D7CF84009D015F /* TestRunnerUtils.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FA581BA150E952C00B9A2D9 /* DFGNodeFlags.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA581B7150E952A00B9A2D9 /* DFGNodeFlags.cpp */; };
                0FB4FB731BC843140025CA5A /* FTLLazySlowPath.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB4FB701BC843140025CA5A /* FTLLazySlowPath.cpp */; };
                0FB4FB741BC843140025CA5A /* FTLLazySlowPath.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */; };
                0FB4FB751BC843140025CA5A /* FTLLazySlowPathCall.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4FB721BC843140025CA5A /* FTLLazySlowPathCall.h */; };
+               0FB530391D90404C00F28AE3 /* AllocationScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB530381D90404600F28AE3 /* AllocationScope.h */; };
                0FB5467714F59B5C002C2989 /* LazyOperandValueProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB5467614F59AD1002C2989 /* LazyOperandValueProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FB5467914F5C46B002C2989 /* LazyOperandValueProfile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB5467814F5C468002C2989 /* LazyOperandValueProfile.cpp */; };
                0FB5467B14F5C7E1002C2989 /* MethodOfGettingAValueProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB5467A14F5C7D4002C2989 /* MethodOfGettingAValueProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F7C39FA1C8F629300480151 /* RegExpInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegExpInlines.h; sourceTree = "<group>"; };
                0F7C39FC1C8F659500480151 /* RegExpObjectInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegExpObjectInlines.h; sourceTree = "<group>"; };
                0F7C39FE1C90C55B00480151 /* DFGOpInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOpInfo.h; path = dfg/DFGOpInfo.h; sourceTree = "<group>"; };
+               0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedBlockInlines.h; sourceTree = "<group>"; };
+               0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedSpaceInlines.h; sourceTree = "<group>"; };
                0F8023E91613832300A0BA45 /* ByValInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ByValInfo.h; sourceTree = "<group>"; };
                0F8335B41639C1E3001443B5 /* ArrayAllocationProfile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ArrayAllocationProfile.cpp; sourceTree = "<group>"; };
                0F8335B51639C1E3001443B5 /* ArrayAllocationProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ArrayAllocationProfile.h; sourceTree = "<group>"; };
                0F9FB4F217FCB91700CB67F8 /* DFGStackLayoutPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStackLayoutPhase.cpp; path = dfg/DFGStackLayoutPhase.cpp; sourceTree = "<group>"; };
                0F9FB4F317FCB91700CB67F8 /* DFGStackLayoutPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStackLayoutPhase.h; path = dfg/DFGStackLayoutPhase.h; sourceTree = "<group>"; };
                0F9FC8C114E1B5FB00D52AE0 /* PutKind.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PutKind.h; sourceTree = "<group>"; };
+               0FA131701D8DD72900EC130A /* PrototypeMapInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PrototypeMapInlines.h; sourceTree = "<group>"; };
                0FA2C17917D7CF84009D015F /* TestRunnerUtils.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = TestRunnerUtils.cpp; sourceTree = "<group>"; };
                0FA2C17A17D7CF84009D015F /* TestRunnerUtils.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TestRunnerUtils.h; sourceTree = "<group>"; };
                0FA581B7150E952A00B9A2D9 /* DFGNodeFlags.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGNodeFlags.cpp; path = dfg/DFGNodeFlags.cpp; sourceTree = "<group>"; };
                0FB4FB701BC843140025CA5A /* FTLLazySlowPath.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLazySlowPath.cpp; path = ftl/FTLLazySlowPath.cpp; sourceTree = "<group>"; };
                0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLazySlowPath.h; path = ftl/FTLLazySlowPath.h; sourceTree = "<group>"; };
                0FB4FB721BC843140025CA5A /* FTLLazySlowPathCall.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLazySlowPathCall.h; path = ftl/FTLLazySlowPathCall.h; sourceTree = "<group>"; };
+               0FB530381D90404600F28AE3 /* AllocationScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AllocationScope.h; sourceTree = "<group>"; };
                0FB5467614F59AD1002C2989 /* LazyOperandValueProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LazyOperandValueProfile.h; sourceTree = "<group>"; };
                0FB5467814F5C468002C2989 /* LazyOperandValueProfile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LazyOperandValueProfile.cpp; sourceTree = "<group>"; };
                0FB5467A14F5C7D4002C2989 /* MethodOfGettingAValueProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MethodOfGettingAValueProfile.h; sourceTree = "<group>"; };
                142E312A134FF0A600AFADB5 /* heap */ = {
                        isa = PBXGroup;
                        children = (
+                               0FB530381D90404600F28AE3 /* AllocationScope.h */,
                                0F9630351D4192C3005609D9 /* AllocatorAttributes.cpp */,
                                0F9630361D4192C3005609D9 /* AllocatorAttributes.h */,
                                0F070A421D543A89006E7232 /* CellContainer.h */,
                                C2B916C114DA014E00CBAC86 /* MarkedAllocator.h */,
                                142D6F0613539A2800B02E86 /* MarkedBlock.cpp */,
                                142D6F0713539A2800B02E86 /* MarkedBlock.h */,
+                               0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */,
                                141448CA13A176EC00F5BA1A /* MarkedBlockSet.h */,
                                14D2F3D8139F4BE200491031 /* MarkedSpace.cpp */,
                                14D2F3D9139F4BE200491031 /* MarkedSpace.h */,
+                               0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */,
                                142D6F0E13539A4100B02E86 /* MarkStack.cpp */,
                                142D6F0F13539A4100B02E86 /* MarkStack.h */,
                                ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
                                65C02FBB0637462A003E7EE6 /* Protect.h */,
                                14D844A216AA2C7000A65AF0 /* PrototypeMap.cpp */,
                                14D844A316AA2C7000A65AF0 /* PrototypeMap.h */,
+                               0FA131701D8DD72900EC130A /* PrototypeMapInlines.h */,
                                79B00CB81C6AB07E0088C65D /* ProxyConstructor.cpp */,
                                79B00CB91C6AB07E0088C65D /* ProxyConstructor.h */,
                                79B00CBA1C6AB07E0088C65D /* ProxyObject.cpp */,
                                62D755D71B84FB4A001801FA /* CallFrameShuffler.h in Headers */,
                                0F0B83B114BCF71800885B4F /* CallLinkInfo.h in Headers */,
                                0F93329E14CA7DC50085F3C6 /* CallLinkStatus.h in Headers */,
+                               0FB530391D90404C00F28AE3 /* AllocationScope.h in Headers */,
                                627673241B680C1E00FD9F2E /* CallMode.h in Headers */,
                                0F0B83B914BCF95F00885B4F /* CallReturnOffsetToBytecodeOffset.h in Headers */,
                                0F3B7E2B19A11B8000D9BC56 /* CallVariant.h in Headers */,
                                0F66E16B14DF3F1600B7B2E4 /* DFGAdjacencyList.h in Headers */,
                                0FFB921816D02EB20055A5DB /* DFGAllocator.h in Headers */,
                                0F1E3A461534CBAF000F9456 /* DFGArgumentPosition.h in Headers */,
+                               0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */,
                                0F2DD8121AB3D8BE00BBB8E8 /* DFGArgumentsEliminationPhase.h in Headers */,
                                0F2DD8141AB3D8BE00BBB8E8 /* DFGArgumentsUtilities.h in Headers */,
                                0F485322187750560083B687 /* DFGArithMode.h in Headers */,
                                86E3C614167BABD7006D760A /* JSExport.h in Headers */,
                                A7B4ACAF1484C9CE00B38A36 /* JSExportMacros.h in Headers */,
                                0F2B66EF17B6B5AB00A7AE3F /* JSFloat32Array.h in Headers */,
+                               0FA131711D8DD72B00EC130A /* PrototypeMapInlines.h in Headers */,
                                0F2B66F017B6B5AB00A7AE3F /* JSFloat64Array.h in Headers */,
                                BC18C41F0E16F5CD00B34460 /* JSFunction.h in Headers */,
                                A72028BA1797603D0098028C /* JSFunctionInlines.h in Headers */,
                                A503FA2A188F105900110F14 /* JSGlobalObjectScriptDebugServer.h in Headers */,
                                79A228361D35D71F00D8E067 /* ArithProfile.h in Headers */,
                                A513E5C0185BFACC007E95AD /* JSInjectedScriptHost.h in Headers */,
+                               0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */,
                                A513E5C2185BFACC007E95AD /* JSInjectedScriptHostPrototype.h in Headers */,
                                C442CB251A6CDB8C005D3D7C /* JSInputs.json in Headers */,
                                0F2B66F817B6B5AB00A7AE3F /* JSInt16Array.h in Headers */,
index d916c4e..56c58c4 100644 (file)
@@ -61,9 +61,8 @@ class BuiltinsInternalsWrapperHeaderGenerator(BuiltinsGenerator):
 
     def generate_secondary_header_includes(self):
         header_includes = [
-            (["WebCore"],
-                ("JavaScriptCore", "runtime/VM.h"),
-            ),
+            (["WebCore"], ("JavaScriptCore", "heap/WeakInlines.h")),
+            (["WebCore"], ("JavaScriptCore", "runtime/VM.h"))
         ]
         for object in self.internals:
             header_includes.append((["WebCore"], ("WebCore", object.object_name + "Builtins.h")))
index 55e607d..c0ced33 100644 (file)
@@ -31,6 +31,7 @@
 #include "JSFunction.h"
 #include "JSGlobalObject.h"
 #include "JSCInlines.h"
+#include "MarkedSpaceInlines.h"
 #include "Parser.h"
 #include "Protect.h"
 #include "VMEntryScope.h"
diff --git a/Source/JavaScriptCore/heap/AllocationScope.h b/Source/JavaScriptCore/heap/AllocationScope.h
new file mode 100644 (file)
index 0000000..2e4a654
--- /dev/null
@@ -0,0 +1,53 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "Heap.h"
+
+namespace JSC {
+
+class AllocationScope {
+public:
+    AllocationScope(Heap& heap)
+        : m_heap(heap)
+        , m_lastOperation(m_heap.m_operationInProgress)
+    {
+        ASSERT(m_lastOperation == NoOperation || m_lastOperation == Allocation);
+        m_heap.m_operationInProgress = Allocation;
+    }
+    
+    ~AllocationScope()
+    {
+        ASSERT(m_heap.m_operationInProgress == Allocation);
+        m_heap.m_operationInProgress = m_lastOperation;
+    }
+private:
+    Heap& m_heap;
+    HeapOperation m_lastOperation;
+};
+
+} // namespace JSC
+
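The new AllocationScope above is a classic RAII guard: it records the heap's previous operation on entry, marks Allocation, and restores the saved state on exit so scopes nest safely. A minimal standalone sketch of the same pattern, using a mock state holder rather than JSC's real Heap (all names here are illustrative, not from the patch):

```cpp
#include <cassert>

// Hypothetical stand-in for Heap's m_operationInProgress field.
enum Operation { NoOperation, Allocation };

struct MockHeap {
    Operation operationInProgress = NoOperation;
};

// Mirrors AllocationScope: save the previous state, mark Allocation,
// restore on destruction so nested scopes unwind correctly.
class ScopedAllocation {
public:
    explicit ScopedAllocation(MockHeap& heap)
        : m_heap(heap)
        , m_last(heap.operationInProgress)
    {
        m_heap.operationInProgress = Allocation;
    }
    ~ScopedAllocation() { m_heap.operationInProgress = m_last; }
private:
    MockHeap& m_heap;
    Operation m_last;
};

// Returns true if the guard sets and then restores the state.
inline bool scopeRestoresState(MockHeap& heap)
{
    {
        ScopedAllocation scope(heap);
        if (heap.operationInProgress != Allocation)
            return false;
    }
    return heap.operationInProgress == NoOperation;
}
```

The destructor-based restore is what lets allocation paths nest inside one another without explicit cleanup at every early return.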
index 3658faf..0f75c96 100644 (file)
 
 namespace JSC {
 
+class Heap;
 class HeapCell;
 class LargeAllocation;
 class MarkedBlock;
 class WeakSet;
+class VM;
 
 typedef uint32_t HeapVersion;
 
@@ -56,6 +58,9 @@ public:
     {
     }
     
+    VM* vm() const;
+    Heap* heap() const;
+    
     explicit operator bool() const { return !!m_encodedPointer; }
     
     bool isMarkedBlock() const { return m_encodedPointer && !(m_encodedPointer & isLargeAllocationBit); }
@@ -73,13 +78,13 @@ public:
         return *bitwise_cast<LargeAllocation*>(m_encodedPointer - isLargeAllocationBit);
     }
     
-    void flipIfNecessary(HeapVersion);
-    void flipIfNecessary();
+    void aboutToMark(HeapVersion);
     bool needsFlip() const;
     
-    bool isMarked() const;
     bool isMarked(HeapCell*) const;
+    bool isMarked(HeapVersion, HeapCell*) const;
     bool isMarkedOrNewlyAllocated(HeapCell*) const;
+    bool isMarkedOrNewlyAllocated(HeapVersion, HeapCell*) const;
     
     void noteMarked();
     
index def5bad..88ee30e 100644 (file)
 #include "JSCell.h"
 #include "LargeAllocation.h"
 #include "MarkedBlock.h"
+#include "VM.h"
 
 namespace JSC {
 
-inline bool CellContainer::isMarked() const
+inline VM* CellContainer::vm() const
 {
     if (isLargeAllocation())
-        return true;
-    return markedBlock().handle().isMarked();
+        return largeAllocation().vm();
+    return markedBlock().vm();
+}
+
+inline Heap* CellContainer::heap() const
+{
+    return &vm()->heap;
 }
 
 inline bool CellContainer::isMarked(HeapCell* cell) const
@@ -46,6 +52,13 @@ inline bool CellContainer::isMarked(HeapCell* cell) const
     return markedBlock().isMarked(cell);
 }
 
+inline bool CellContainer::isMarked(HeapVersion version, HeapCell* cell) const
+{
+    if (isLargeAllocation())
+        return largeAllocation().isMarked();
+    return markedBlock().isMarked(version, cell);
+}
+
 inline bool CellContainer::isMarkedOrNewlyAllocated(HeapCell* cell) const
 {
     if (isLargeAllocation())
@@ -53,6 +66,13 @@ inline bool CellContainer::isMarkedOrNewlyAllocated(HeapCell* cell) const
     return markedBlock().isMarkedOrNewlyAllocated(cell);
 }
 
+inline bool CellContainer::isMarkedOrNewlyAllocated(HeapVersion version, HeapCell* cell) const
+{
+    if (isLargeAllocation())
+        return largeAllocation().isMarkedOrNewlyAllocated();
+    return markedBlock().isMarkedOrNewlyAllocated(version, cell);
+}
+
 inline void CellContainer::noteMarked()
 {
     if (!isLargeAllocation())
@@ -73,16 +93,10 @@ inline WeakSet& CellContainer::weakSet() const
     return markedBlock().weakSet();
 }
 
-inline void CellContainer::flipIfNecessary(HeapVersion heapVersion)
-{
-    if (!isLargeAllocation())
-        markedBlock().flipIfNecessary(heapVersion);
-}
-
-inline void CellContainer::flipIfNecessary()
+inline void CellContainer::aboutToMark(HeapVersion heapVersion)
 {
     if (!isLargeAllocation())
-        markedBlock().flipIfNecessary();
+        markedBlock().aboutToMark(heapVersion);
 }
 
 inline bool CellContainer::needsFlip() const
index 02397cf..8bd2a98 100644 (file)
@@ -34,6 +34,7 @@
 #include "JSCell.h"
 #include "JSObject.h"
 #include "JSCInlines.h"
+#include "MarkedBlockInlines.h"
 #include "Structure.h"
 #include <wtf/OSAllocator.h>
 
index 26e1ebb..667fdf8 100644 (file)
@@ -45,6 +45,7 @@
 #include "JSGlobalObject.h"
 #include "JSLock.h"
 #include "JSVirtualMachineInternal.h"
+#include "MarkedSpaceInlines.h"
 #include "SamplingProfiler.h"
 #include "ShadowChicken.h"
 #include "SuperSampler.h"
@@ -527,11 +528,9 @@ void Heap::beginMarking()
     }
     
     {
-        TimingScope clearMarksTimingScope(*this, "m_objectSpace.clearMarks");
-        m_objectSpace.flip();
+        TimingScope clearMarksTimingScope(*this, "m_objectSpace.beginMarking");
+        m_objectSpace.beginMarking();
     }
-    
-    m_objectSpace.setIsMarking(true);
 }
 
 void Heap::visitExternalRememberedSet()
@@ -780,7 +779,7 @@ void Heap::endMarking()
     ASSERT(m_sharedMarkStack.isEmpty());
     m_weakReferenceHarvesters.removeAll();
     
-    m_objectSpace.setIsMarking(false);
+    m_objectSpace.endMarking();
 }
 
 size_t Heap::objectCount()
@@ -937,10 +936,19 @@ void Heap::collectAllGarbage()
     if (UNLIKELY(Options::useImmortalObjects()))
         sweeper()->willFinishSweeping();
     else {
+        double before = 0;
+        if (Options::logGC()) {
+            dataLog("[Full sweep: ", capacity() / 1024, " kb ");
+            before = currentTimeMS();
+        }
         m_objectSpace.sweep();
         m_objectSpace.shrink();
+        if (Options::logGC()) {
+            double after = currentTimeMS();
+            dataLog("=> ", capacity() / 1024, " kb, ", after - before, " ms]\n");
+        }
     }
-    ASSERT(m_blockSnapshot.isEmpty());
+    m_objectSpace.assertNoUnswept();
 
     sweepAllLogicallyEmptyWeakBlocks();
 }
@@ -1043,7 +1051,7 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
     reapWeakHandles();
     pruneStaleEntriesFromWeakGCMaps();
     sweepArrayBuffers();
-    snapshotMarkedSpace();
+    snapshotUnswept();
     finalizeUnconditionalFinalizers();
     removeDeadCompilerWorklistEntries();
     deleteUnmarkedCompiledCode();
@@ -1052,7 +1060,7 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
     notifyIncrementalSweeper();
     writeBarrierCurrentlyExecutingCodeBlocks();
 
-    resetAllocators();
+    prepareForAllocation();
     updateAllocationLimits();
     didFinishCollection(gcStartTime);
     resumeCompilerThreads();
@@ -1067,6 +1075,11 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
         double after = currentTimeMS();
         dataLog(after - before, " ms]\n");
     }
+    
+    if (false) {
+        dataLog("Heap state after GC:\n");
+        m_objectSpace.dumpBits();
+    }
 }
 
 void Heap::sweepLargeAllocations()
@@ -1166,44 +1179,10 @@ void Heap::sweepArrayBuffers()
     m_arrayBuffers.sweep();
 }
 
-struct MarkedBlockSnapshotFunctor : public MarkedBlock::VoidFunctor {
-    MarkedBlockSnapshotFunctor(Vector<MarkedBlock::Handle*>& blocks) 
-        : m_index(0) 
-        , m_blocks(blocks)
-    {
-    }
-
-    void operator()(MarkedBlock::Handle* block) const
-    {
-        block->setIsOnBlocksToSweep(true);
-        m_blocks[m_index++] = block;
-    }
-
-    // FIXME: This is a mutable field becaue this isn't a C++ lambda.
-    // https://bugs.webkit.org/show_bug.cgi?id=159644
-    mutable size_t m_index;
-    Vector<MarkedBlock::Handle*>& m_blocks;
-};
-
-void Heap::snapshotMarkedSpace()
+void Heap::snapshotUnswept()
 {
-    TimingScope timingScope(*this, "Heap::snapshotMarkedSpace");
-    // FIXME: This should probably be renamed. It's not actually snapshotting all of MarkedSpace.
-    // This is used by IncrementalSweeper, so it only needs to snapshot blocks. However, if we ever
-    // wanted to add other snapshotting login, we'd probably put it here.
-    
-    if (m_operationInProgress == EdenCollection) {
-        for (MarkedBlock::Handle* handle : m_objectSpace.blocksWithNewObjects()) {
-            if (handle->isOnBlocksToSweep())
-                continue;
-            m_blockSnapshot.append(handle);
-            handle->setIsOnBlocksToSweep(true);
-        }
-    } else {
-        m_blockSnapshot.resizeToFit(m_objectSpace.blocks().set().size());
-        MarkedBlockSnapshotFunctor functor(m_blockSnapshot);
-        m_objectSpace.forEachBlock(functor);
-    }
+    TimingScope timingScope(*this, "Heap::snapshotUnswept");
+    m_objectSpace.snapshotUnswept();
 }
 
 void Heap::deleteSourceProviderCaches()
@@ -1226,9 +1205,9 @@ void Heap::writeBarrierCurrentlyExecutingCodeBlocks()
     m_codeBlocks->writeBarrierCurrentlyExecutingCodeBlocks(this);
 }
 
-void Heap::resetAllocators()
+void Heap::prepareForAllocation()
 {
-    m_objectSpace.resetAllocators();
+    m_objectSpace.prepareForAllocation();
 }
 
 void Heap::updateAllocationLimits()
@@ -1477,7 +1456,7 @@ public:
 void Heap::zombifyDeadObjects()
 {
     // Sweep now because destructors will crash once we're zombified.
-    m_objectSpace.zombifySweep();
+    m_objectSpace.sweep();
     HeapIterationScope iterationScope(*this);
     m_objectSpace.forEachDeadCell(iterationScope, Zombify());
 }
index f9ef6d0..bdb23ed 100644 (file)
@@ -49,6 +49,7 @@
 
 namespace JSC {
 
+class AllocationScope;
 class CodeBlock;
 class CodeBlockSet;
 class EdenGCActivityCallback;
@@ -255,6 +256,7 @@ public:
     void didFreeBlock(size_t capacity);
 
 private:
+    friend class AllocationScope;
     friend class CodeBlock;
     friend class DeferGC;
     friend class DeferGCForAWhile;
@@ -327,11 +329,11 @@ private:
     void reapWeakHandles();
     void pruneStaleEntriesFromWeakGCMaps();
     void sweepArrayBuffers();
-    void snapshotMarkedSpace();
+    void snapshotUnswept();
     void deleteSourceProviderCaches();
     void notifyIncrementalSweeper();
     void writeBarrierCurrentlyExecutingCodeBlocks();
-    void resetAllocators();
+    void prepareForAllocation();
     void harvestWeakReferences();
     void finalizeUnconditionalFinalizers();
     void clearUnmarkedExecutables();
@@ -422,7 +424,6 @@ private:
     RefPtr<FullGCActivityCallback> m_fullActivityCallback;
     RefPtr<GCActivityCallback> m_edenActivityCallback;
     std::unique_ptr<IncrementalSweeper> m_sweeper;
-    Vector<MarkedBlock::Handle*> m_blockSnapshot;
 
     Vector<HeapObserver*> m_observers;
 
index 1a9d076..fdf5c96 100644 (file)
@@ -81,9 +81,8 @@ ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
     if (cell->isLargeAllocation())
         return cell->largeAllocation().isMarked();
     MarkedBlock& block = cell->markedBlock();
-    if (block.needsFlip(block.vm()->heap.objectSpace().version()))
-        return false;
-    return block.isMarked(cell);
+    return block.isMarked(
+        block.vm()->heap.objectSpace().version(), cell);
 }
 
 ALWAYS_INLINE bool Heap::isMarkedConcurrently(const void* rawCell)
@@ -92,10 +91,8 @@ ALWAYS_INLINE bool Heap::isMarkedConcurrently(const void* rawCell)
     if (cell->isLargeAllocation())
         return cell->largeAllocation().isMarked();
     MarkedBlock& block = cell->markedBlock();
-    if (block.needsFlip(block.vm()->heap.objectSpace().version()))
-        return false;
-    WTF::loadLoadFence();
-    return block.isMarked(cell);
+    return block.isMarkedConcurrently(
+        block.vm()->heap.objectSpace().version(), cell);
 }
 
 ALWAYS_INLINE bool Heap::testAndSetMarked(HeapVersion version, const void* rawCell)
@@ -104,7 +101,7 @@ ALWAYS_INLINE bool Heap::testAndSetMarked(HeapVersion version, const void* rawCe
     if (cell->isLargeAllocation())
         return cell->largeAllocation().testAndSetMarked();
     MarkedBlock& block = cell->markedBlock();
-    block.flipIfNecessaryDuringMarking(version);
+    block.aboutToMark(version);
     return block.testAndSetMarked(cell);
 }
 
index ed5e398..1afa098 100644 (file)
@@ -30,6 +30,7 @@
 #include "HeapIterationScope.h"
 #include "JSCInlines.h"
 #include "JSObject.h"
+#include "MarkedSpaceInlines.h"
 #include "Options.h"
 #include <stdlib.h>
 #include <wtf/CurrentTime.h>
index 92bf051..a09b55c 100644 (file)
@@ -84,9 +84,8 @@ public:
             if (!filter.ruleOut(bitwise_cast<Bits>(previousCandidate))
                 && set.contains(previousCandidate)
                 && previousCandidate->handle().cellKind() == HeapCell::Auxiliary) {
-                previousCandidate->flipIfNecessary(heapVersion);
                 previousPointer = static_cast<char*>(previousCandidate->handle().cellAlign(previousPointer));
-                if (previousCandidate->handle().isLiveCell(previousPointer))
+                if (previousCandidate->handle().isLiveCell(heapVersion, previousPointer))
                     func(previousPointer);
             }
         }
@@ -99,10 +98,8 @@ public:
         if (!set.contains(candidate))
             return;
         
-        candidate->flipIfNecessary(heapVersion);
-
         auto tryPointer = [&] (void* pointer) {
-            if (candidate->handle().isLiveCell(pointer))
+            if (candidate->handle().isLiveCell(heapVersion, pointer))
                 func(pointer);
         };
     
@@ -170,7 +167,6 @@ public:
         if (candidate->handle().cellKind() != HeapCell::JSCell)
             return false;
         
-        candidate->flipIfNecessary();
         if (!candidate->handle().isLiveCell(pointer))
             return false;
         
index 7b9d19a..153881f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -30,6 +30,7 @@
 #include "HeapIterationScope.h"
 #include "JSCInlines.h"
 #include "JSObject.h"
+#include "MarkedSpaceInlines.h"
 
 namespace JSC {
 
index 5204e34..d92041b 100644 (file)
@@ -50,7 +50,7 @@ static const double sweepTimeMultiplier = 1.0 / sweepTimeTotal;
 #if USE(CF)
 IncrementalSweeper::IncrementalSweeper(Heap* heap, CFRunLoopRef runLoop)
     : HeapTimer(heap->vm(), runLoop)
-    , m_blocksToSweep(heap->m_blockSnapshot)
+    , m_currentAllocator(nullptr)
 {
 }
 
@@ -66,7 +66,7 @@ void IncrementalSweeper::cancelTimer()
 #elif PLATFORM(EFL)
 IncrementalSweeper::IncrementalSweeper(Heap* heap)
     : HeapTimer(heap->vm())
-    , m_blocksToSweep(heap->m_blockSnapshot)
+    , m_currentAllocator(nullptr)
 {
 }
 
@@ -86,7 +86,7 @@ void IncrementalSweeper::cancelTimer()
 #elif USE(GLIB)
 IncrementalSweeper::IncrementalSweeper(Heap* heap)
     : HeapTimer(heap->vm())
-    , m_blocksToSweep(heap->m_blockSnapshot)
+    , m_currentAllocator(nullptr)
 {
 }
 
@@ -121,19 +121,20 @@ void IncrementalSweeper::doSweep(double sweepBeginTime)
         return;
     }
 
-    m_blocksToSweep.clear();
     cancelTimer();
 }
 
 bool IncrementalSweeper::sweepNextBlock()
 {
-    while (!m_blocksToSweep.isEmpty()) {
-        MarkedBlock::Handle* block = m_blocksToSweep.takeLast();
-        block->setIsOnBlocksToSweep(false);
-
-        if (!block->needsSweeping())
-            continue;
-
+    MarkedBlock::Handle* block = nullptr;
+    
+    for (; m_currentAllocator; m_currentAllocator = m_currentAllocator->nextAllocator()) {
+        block = m_currentAllocator->findBlockToSweep();
+        if (block)
+            break;
+    }
+    
+    if (block) {
         DeferGCForAWhile deferGC(m_vm->heap);
         block->sweep();
         m_vm->heap.objectSpace().freeOrShrinkBlock(block);
@@ -146,11 +147,12 @@ bool IncrementalSweeper::sweepNextBlock()
 void IncrementalSweeper::startSweeping()
 {
     scheduleTimer();
+    m_currentAllocator = m_vm->heap.objectSpace().firstAllocator();
 }
 
 void IncrementalSweeper::willFinishSweeping()
 {
-    m_blocksToSweep.clear();
+    m_currentAllocator = nullptr;
     if (m_vm)
         cancelTimer();
 }
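The sweeper no longer drains a snapshot vector; it walks the allocator chain via `nextAllocator()` and asks each allocator for its next unswept block. A minimal sketch of that traversal pattern, using simplified stand-in types rather than the real JSC classes:

```cpp
#include <vector>

// Toy stand-ins for MarkedBlock::Handle and MarkedAllocator, showing only the
// traversal used by IncrementalSweeper::sweepNextBlock().
struct Block {
    bool needsSweeping = true;
    void sweep() { needsSweeping = false; }
};

struct Allocator {
    Allocator* next = nullptr;
    std::vector<Block*> unswept; // in JSC this is the m_unswept bitvector plus a cursor

    Block* findBlockToSweep()
    {
        while (!unswept.empty()) {
            Block* block = unswept.back();
            unswept.pop_back();
            if (block->needsSweeping)
                return block;
        }
        return nullptr;
    }
};

// Mirrors sweepNextBlock(): advance the current-allocator cursor until some
// allocator yields a block, sweep it, and report whether work was done.
bool sweepNextBlock(Allocator*& current)
{
    Block* block = nullptr;
    for (; current; current = current->next) {
        block = current->findBlockToSweep();
        if (block)
            break;
    }
    if (!block)
        return false;
    block->sweep();
    return true;
}
```

The cursor persists across calls, so each timer tick resumes where the last one stopped instead of rescanning swept allocators.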
index c610c8d..d2f168d 100644 (file)
 #define IncrementalSweeper_h
 
 #include "HeapTimer.h"
-#include "MarkedBlock.h"
 #include <wtf/Vector.h>
 
 namespace JSC {
 
 class Heap;
-class MarkedBlock;
+class MarkedAllocator;
 
 class IncrementalSweeper : public HeapTimer {
     WTF_MAKE_FAST_ALLOCATED;
@@ -56,7 +55,7 @@ private:
     void scheduleTimer();
     void cancelTimer();
     
-    Vector<MarkedBlock::Handle*>& m_blocksToSweep;
+    MarkedAllocator* m_currentAllocator;
 #endif
 };
 
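Inside each allocator, the block-discovery methods the sweeper now calls (`findBlockToSweep()`, and similarly `findEmptyBlockToSteal()`) are all built on the same cursor-over-bitvector idiom. A self-contained sketch of that idiom, with a hypothetical `findBit` helper standing in for `FastBitVector::findBit`:

```cpp
#include <bitset>
#include <cstddef>

// Hypothetical equivalent of FastBitVector::findBit(start, value): index of
// the first bit >= start with the given value, or N when none exists.
template<size_t N>
size_t findBit(const std::bitset<N>& bits, size_t start, bool value)
{
    for (size_t i = start; i < N; ++i) {
        if (bits[i] == value)
            return i;
    }
    return N;
}

// Mirrors the m_unswept/m_unsweptCursor pair: the cursor only moves forward,
// and the caller clears the bit after processing so the same index is never
// returned twice.
struct UnsweptCursor {
    std::bitset<64> unswept;
    size_t cursor = 0;

    // Returns the next unswept block index, or 64 when none remain.
    size_t next()
    {
        cursor = findBit(unswept, cursor, true);
        return cursor;
    }

    void didSweep(size_t index) { unswept[index] = false; }
};
```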
index f246d13..717b921 100644 (file)
@@ -70,9 +70,10 @@ public:
     
     bool isNewlyAllocated() const { return m_isNewlyAllocated; }
     ALWAYS_INLINE bool isMarked() { return m_isMarked.load(std::memory_order_relaxed); }
+    ALWAYS_INLINE bool isMarked(HeapCell*) { return m_isMarked.load(std::memory_order_relaxed); }
+    ALWAYS_INLINE bool isMarkedConcurrently(HeapVersion, HeapCell*) { return m_isMarked.load(std::memory_order_relaxed); }
     bool isMarkedOrNewlyAllocated() { return isMarked() || isNewlyAllocated(); }
     bool isMarkedOrNewlyAllocated(HeapCell*) { return isMarkedOrNewlyAllocated(); }
-    bool isMarkedDuringWeakVisiting(HeapVersion, HeapCell*) { return isMarked(); }
     bool isLive() { return isMarkedOrNewlyAllocated(); }
     
     bool hasValidCell() const { return m_hasValidCell; }
@@ -106,8 +107,7 @@ public:
     
     const AllocatorAttributes& attributes() const { return m_attributes; }
     
-    void flipIfNecessary(uint64_t) { }
-    void flipIfNecessaryDuringMarking(uint64_t) { }
+    void aboutToMark(HeapVersion) { }
     
     ALWAYS_INLINE bool testAndSetMarked()
     {
index 375395f..d961022 100644 (file)
 #include "config.h"
 #include "MarkedAllocator.h"
 
+#include "AllocationScope.h"
 #include "GCActivityCallback.h"
 #include "Heap.h"
 #include "IncrementalSweeper.h"
 #include "JSCInlines.h"
+#include "MarkedBlockInlines.h"
 #include "SuperSampler.h"
 #include "VM.h"
 #include <wtf/CurrentTime.h>
@@ -39,7 +41,6 @@ namespace JSC {
 MarkedAllocator::MarkedAllocator(Heap* heap, MarkedSpace* markedSpace, size_t cellSize, const AllocatorAttributes& attributes)
     : m_currentBlock(0)
     , m_lastActiveBlock(0)
-    , m_nextBlockToSweep(nullptr)
     , m_cellSize(static_cast<unsigned>(cellSize))
     , m_attributes(attributes)
     , m_heap(heap)
@@ -50,11 +51,13 @@ MarkedAllocator::MarkedAllocator(Heap* heap, MarkedSpace* markedSpace, size_t ce
 bool MarkedAllocator::isPagedOut(double deadline)
 {
     unsigned itersSinceLastTimeCheck = 0;
-    MarkedBlock::Handle* block = m_blockList.begin();
-    while (block) {
-        block = filterNextBlock(block->next());
-        if (block)
-            block->flipIfNecessary(); // Forces us to touch the memory of the block, but has no semantic effect.
+    for (size_t index = 0; index < m_blocks.size(); ++index) {
+        MarkedBlock::Handle* block = m_blocks[index];
+        if (block) {
+            // Forces us to touch the memory of the block, but has no semantic effect.
+            if (block->needsFlip())
+                block->block().resetVersion();
+        }
         ++itersSinceLastTimeCheck;
         if (itersSinceLastTimeCheck >= Heap::s_timeCheckResolution) {
             double currentTime = WTF::monotonicallyIncreasingTime();
@@ -66,84 +69,96 @@ bool MarkedAllocator::isPagedOut(double deadline)
     return false;
 }
 
-void MarkedAllocator::retire(MarkedBlock::Handle* block)
+bool MarkedAllocator::shouldStealEmptyBlocksFromOtherAllocators() const
 {
-    LockHolder locker(m_lock); // This will be called in parallel during GC.
-    if (block == m_currentBlock) {
-        // This happens when the mutator is running. We finished a full GC and marked too few things
-        // to retire. Then we started allocating in this block. Then a barrier ran, which marked an
-        // object in this block, which put it over the retirement threshold. It's OK to simply do
-        // nothing in that case.
-        return;
-    }
-    if (block == m_lastActiveBlock) {
-        // This can easily happen during marking. It would be easy to handle this case, but it's
-        // just as easy to ignore it.
-        return;
-    }
-    RELEASE_ASSERT(block->isOnList());
-    if (block == m_nextBlockToSweep)
-        m_nextBlockToSweep = filterNextBlock(block->next());
-    block->remove();
-    m_retiredBlocks.push(block);
+    return !needsDestruction();
 }
 
-MarkedBlock::Handle* MarkedAllocator::filterNextBlock(MarkedBlock::Handle* block)
+MarkedBlock::Handle* MarkedAllocator::findEmptyBlockToSteal()
 {
-    if (block == m_blockList.end())
+    // Don't allow others to steal from us if we wouldn't steal from others.

+    if (!shouldStealEmptyBlocksFromOtherAllocators())
+        return nullptr;
+    
+    m_emptyCursor = m_empty.findBit(m_emptyCursor, true);
+    if (m_emptyCursor >= m_blocks.size())
         return nullptr;
-    return block;
+    return m_blocks[m_emptyCursor];
 }
 
-void MarkedAllocator::setNextBlockToSweep(MarkedBlock::Handle* block)
+void MarkedAllocator::didConsumeFreeList()
 {
-    m_nextBlockToSweep = filterNextBlock(block);
+    if (m_currentBlock)
+        m_currentBlock->didConsumeFreeList();
+    
+    setFreeList(FreeList());
+    m_currentBlock = nullptr;
 }
 
-void* MarkedAllocator::tryAllocateWithoutCollectingImpl()
+void* MarkedAllocator::tryAllocateWithoutCollecting()
 {
     SuperSamplerScope superSamplerScope(false);
     
-    if (m_currentBlock) {
-        ASSERT(m_currentBlock == m_nextBlockToSweep);
-        m_currentBlock->didConsumeFreeList();
-        setNextBlockToSweep(m_currentBlock->next());
-    }
+    ASSERT(!m_currentBlock);
+    ASSERT(!m_freeList);
     
-    setFreeList(FreeList());
-
-    RELEASE_ASSERT(m_nextBlockToSweep != m_blockList.end());
-
-    MarkedBlock::Handle* next;
-    for (MarkedBlock::Handle*& block = m_nextBlockToSweep; block; block = next) {
-        next = filterNextBlock(block->next());
-
-        // It would be super weird if the blocks we are sweeping have anything allocated during this
-        // cycle.
-        ASSERT(!block->hasAnyNewlyAllocated());
+    for (;;) {
+        m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true);
+        if (m_allocationCursor >= m_blocks.size())
+            break;
         
-        FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
-        
-        // It's possible to stumble on a complete-full block. Marking tries to retire these, but
-        // that algorithm is racy and may forget to do it sometimes.
-        if (freeList.allocationWillFail()) {
-            ASSERT(block->isFreeListed());
-            block->unsweepWithNoNewlyAllocated();
-            ASSERT(block->isMarked());
-            retire(block);
-            continue;
-        }
+        setIsCanAllocateButNotEmpty(m_allocationCursor, false);
 
-        m_currentBlock = block;
-        setFreeList(freeList);
-        break;
+        if (void* result = tryAllocateIn(m_blocks[m_allocationCursor]))
+            return result;
     }
     
-    if (!m_freeList) {
-        m_currentBlock = 0;
-        return 0;
+    if (Options::stealEmptyBlocksFromOtherAllocators()
+        && shouldStealEmptyBlocksFromOtherAllocators()) {
+        if (MarkedBlock::Handle* block = m_markedSpace->findEmptyBlockToSteal()) {
+            block->sweep();
+            
+            // It's good that this clears canAllocateButNotEmpty as well as all other bits,
+            // because there is a remote chance that a block may have both canAllocateButNotEmpty
+            // and empty set at the same time.
+            block->removeFromAllocator();
+            addBlock(block);
+            return allocateIn(block);
+        }
     }
+    
+    return nullptr;
+}
 
+void* MarkedAllocator::allocateIn(MarkedBlock::Handle* block)
+{
+    void* result = tryAllocateIn(block);
+    RELEASE_ASSERT(result);
+    return result;
+}
+
+void* MarkedAllocator::tryAllocateIn(MarkedBlock::Handle* block)
+{
+    ASSERT(block);
+    ASSERT(!block->hasAnyNewlyAllocated());
+    ASSERT(!block->isFreeListed());
+    
+    FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
+    
+    // It's possible to stumble on a completely full block. Marking tries to retire these, but
+    // that algorithm is racy and may forget to do it sometimes.
+    if (freeList.allocationWillFail()) {
+        ASSERT(block->isFreeListed());
+        block->unsweepWithNoNewlyAllocated();
+        ASSERT(!block->isFreeListed());
+        ASSERT(!isEmpty(block));
+        ASSERT(!isCanAllocateButNotEmpty(block));
+        return nullptr;
+    }
+    
+    m_currentBlock = block;
+    setFreeList(freeList);
+    
     void* result;
     if (m_freeList.remaining) {
         unsigned cellSize = m_cellSize;
@@ -155,21 +170,11 @@ void* MarkedAllocator::tryAllocateWithoutCollectingImpl()
         result = head;
     }
     RELEASE_ASSERT(result);
+    setIsEden(m_currentBlock, true);
     m_markedSpace->didAllocateInBlock(m_currentBlock);
     return result;
 }
 
-inline void* MarkedAllocator::tryAllocateWithoutCollecting()
-{
-    ASSERT(!m_heap->isBusy());
-    m_heap->m_operationInProgress = Allocation;
-    void* result = tryAllocateWithoutCollectingImpl();
-
-    m_heap->m_operationInProgress = NoOperation;
-    ASSERT(result || !m_currentBlock);
-    return result;
-}
-
 ALWAYS_INLINE void MarkedAllocator::doTestCollectionsIfNeeded()
 {
     if (!Options::slowPathAllocsBetweenGCs())
@@ -206,19 +211,17 @@ void* MarkedAllocator::allocateSlowCaseImpl(bool crashOnFailure)
     ASSERT(!m_markedSpace->isIterating());
     m_heap->didAllocate(m_freeList.originalSize);
     
+    didConsumeFreeList();
+    
+    m_heap->collectIfNecessaryOrDefer();
+    
+    AllocationScope allocationScope(*m_heap);
+
     void* result = tryAllocateWithoutCollecting();
     
     if (LIKELY(result != 0))
         return result;
     
-    if (m_heap->collectIfNecessaryOrDefer()) {
-        result = tryAllocateWithoutCollecting();
-        if (result)
-            return result;
-    }
-
-    ASSERT(!m_heap->shouldCollect());
-    
     MarkedBlock::Handle* block = tryAllocateBlock();
     if (!block) {
         if (crashOnFailure)
@@ -227,8 +230,7 @@ void* MarkedAllocator::allocateSlowCaseImpl(bool crashOnFailure)
             return nullptr;
     }
     addBlock(block);
-        
-    result = tryAllocateWithoutCollecting();
+    result = allocateIn(block);
     ASSERT(result);
     return result;
 }
@@ -249,37 +251,75 @@ size_t MarkedAllocator::blockSizeForBytes(size_t bytes)
 MarkedBlock::Handle* MarkedAllocator::tryAllocateBlock()
 {
     SuperSamplerScope superSamplerScope(false);
-    return MarkedBlock::tryCreate(*m_heap, this, m_cellSize, m_attributes);
+    
+    MarkedBlock::Handle* handle = MarkedBlock::tryCreate(*m_heap);
+    if (!handle)
+        return nullptr;
+    
+    m_markedSpace->didAddBlock(handle);
+    
+    return handle;
 }
 
 void MarkedAllocator::addBlock(MarkedBlock::Handle* block)
 {
-    ASSERT(!m_currentBlock);
-    ASSERT(!m_freeList);
+    size_t index;
+    if (m_freeBlockIndices.isEmpty()) {
+        index = m_blocks.size();
+
+        size_t oldCapacity = m_blocks.capacity();
+        m_blocks.append(block);
+        if (m_blocks.capacity() != oldCapacity) {
+            forEachBitVector(
+                [&] (FastBitVector& vector) {
+                    ASSERT_UNUSED(vector, vector.numBits() == oldCapacity);
+                });
+            
+            ASSERT(m_blocks.capacity() > oldCapacity);
+            
+            forEachBitVector(
+                [&] (FastBitVector& vector) {
+                    vector.resize(m_blocks.capacity());
+                });
+        }
+    } else {
+        index = m_freeBlockIndices.takeLast();
+        ASSERT(!m_blocks[index]);
+        m_blocks[index] = block;
+    }
     
-    m_blockList.append(block);
-    setNextBlockToSweep(block);
-    m_markedSpace->didAddBlock(block);
+    forEachBitVector(
+        [&] (FastBitVector& vector) {
+            ASSERT_UNUSED(vector, !vector[index]);
+        });
+
+    // This is the point at which the block learns of its cellSize() and attributes().
+    block->didAddToAllocator(this, index);
+    
+    setIsLive(index, true);
+    setIsEmpty(index, true);
 }
 
 void MarkedAllocator::removeBlock(MarkedBlock::Handle* block)
 {
-    if (m_currentBlock == block) {
-        m_currentBlock = filterNextBlock(m_currentBlock->next());
-        setFreeList(FreeList());
-    }
-    if (m_nextBlockToSweep == block)
-        setNextBlockToSweep(m_nextBlockToSweep->next());
+    ASSERT(block->allocator() == this);
+    ASSERT(m_blocks[block->index()] == block);
 
-    block->willRemoveBlock();
-    m_blockList.remove(block);
+    m_blocks[block->index()] = nullptr;
+    m_freeBlockIndices.append(block->index());
+    
+    forEachBitVector(
+        [&] (FastBitVector& vector) {
+            vector[block->index()] = false;
+        });
+    
+    block->didRemoveFromAllocator();
 }
 
 void MarkedAllocator::stopAllocating()
 {
-    if (m_heap->operationInProgress() == FullCollection)
-        m_blockList.takeFrom(m_retiredBlocks);
-
+    if (false)
+        dataLog(RawPointer(this), ": MarkedAllocator::stopAllocating!\n");
     ASSERT(!m_lastActiveBlock);
     if (!m_currentBlock) {
         ASSERT(!m_freeList);
@@ -292,29 +332,27 @@ void MarkedAllocator::stopAllocating()
     m_freeList = FreeList();
 }
 
-void MarkedAllocator::reset()
+void MarkedAllocator::prepareForAllocation()
 {
-    m_lastActiveBlock = 0;
-    m_currentBlock = 0;
+    m_lastActiveBlock = nullptr;
+    m_currentBlock = nullptr;
     setFreeList(FreeList());
 
-    setNextBlockToSweep(m_blockList.begin());
+    m_allocationCursor = 0;
+    m_emptyCursor = 0;
+    m_unsweptCursor = 0;
+    
+    m_eden.clearAll();
 
     if (UNLIKELY(Options::useImmortalObjects())) {
-        MarkedBlock::Handle* next;
-        for (MarkedBlock::Handle*& block = m_nextBlockToSweep; block; block = next) {
-            next = filterNextBlock(block->next());
-
-            FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
-            block->zap(freeList);
-            retire(block);
-        }
+        // FIXME: Make this work again.
+        // https://bugs.webkit.org/show_bug.cgi?id=162296
+        RELEASE_ASSERT_NOT_REACHED();
     }
 }
 
 void MarkedAllocator::lastChanceToFinalize()
 {
-    m_blockList.takeFrom(m_retiredBlocks);
     forEachBlock(
         [&] (MarkedBlock::Handle* block) {
             block->lastChanceToFinalize();
@@ -326,4 +364,121 @@ void MarkedAllocator::setFreeList(const FreeList& freeList)
     m_freeList = freeList;
 }
 
+void MarkedAllocator::resumeAllocating()
+{
+    if (!m_lastActiveBlock)
+        return;
+
+    m_freeList = m_lastActiveBlock->resumeAllocating();
+    m_currentBlock = m_lastActiveBlock;
+    m_lastActiveBlock = nullptr;
+}
+
+void MarkedAllocator::beginMarkingForFullCollection()
+{
+    // Mark bits are sticky and so is our summary of mark bits. We only clear these during full
+    // collections, so if you survived the last collection you will survive the next one so long
+    // as the next one is eden.
+    m_markingNotEmpty.clearAll();
+    m_markingRetired.clearAll();
+}
+
+void MarkedAllocator::endMarking()
+{
+    m_allocated.clearAll();
+    
+    // It's surprising and frustrating to comprehend, but the end-of-marking flip does not need to
+    // know what kind of collection it is. That knowledge is already encoded in the m_markingXYZ
+    // vectors.
+    
+    if (needsDestruction()) {
+        // If blocks need destruction then nothing is empty! This is a correct assertion but may
+        // become wrong once we go full concurrent: when we create a new block, it will flicker
+        // into the empty set for a tiny moment. On the other hand, this code is likely to be run
+        // in stopTheWorld.
+        ASSERT(m_empty.isEmpty());
+        m_canAllocateButNotEmpty = m_live & ~m_markingRetired;
+        return;
+    }
+    
+    m_empty = m_live & ~m_markingNotEmpty;
+    m_canAllocateButNotEmpty = m_live & m_markingNotEmpty & ~m_markingRetired;
+    
+    if (false) {
+        dataLog("Bits for ", m_cellSize, ", ", m_attributes, " after endMarking:\n");
+        dumpBits(WTF::dataFile());
+    }
+}
+
+void MarkedAllocator::snapshotUnsweptForEdenCollection()
+{
+    m_unswept |= m_eden;
+}
+
+void MarkedAllocator::snapshotUnsweptForFullCollection()
+{
+    m_unswept = m_live;
+}
+
+MarkedBlock::Handle* MarkedAllocator::findBlockToSweep()
+{
+    m_unsweptCursor = m_unswept.findBit(m_unsweptCursor, true);
+    if (m_unsweptCursor >= m_blocks.size())
+        return nullptr;
+    return m_blocks[m_unsweptCursor];
+}
+
+void MarkedAllocator::sweep()
+{
+    m_unswept.forEachSetBit(
+        [&] (size_t index) {
+            m_blocks[index]->sweep();
+        });
+}
+
+void MarkedAllocator::shrink()
+{
+    m_empty.forEachSetBit(
+        [&] (size_t index) {
+            m_markedSpace->freeBlock(m_blocks[index]);
+        });
+}
+
+void MarkedAllocator::assertNoUnswept()
+{
+    if (ASSERT_DISABLED)
+        return;
+    
+    if (m_unswept.isEmpty())
+        return;
+    
+    dataLog("Assertion failed: unswept not empty in ", *this, ".\n");
+    dumpBits();
+    ASSERT_NOT_REACHED();
+}
+
+void MarkedAllocator::dump(PrintStream& out) const
+{
+    out.print(RawPointer(this), ":", m_cellSize, "/", m_attributes);
+}
+
+void MarkedAllocator::dumpBits(PrintStream& out)
+{
+    unsigned maxNameLength = 0;
+    forEachBitVectorWithName(
+        [&] (FastBitVector&, const char* name) {
+            unsigned length = strlen(name);
+            maxNameLength = std::max(maxNameLength, length);
+        });
+    
+    forEachBitVectorWithName(
+        [&] (FastBitVector& vector, const char* name) {
+            out.print("    ", name, ": ");
+            for (unsigned i = maxNameLength - strlen(name); i--;)
+                out.print(" ");
+            out.print(vector, "\n");
+        });
+}
+
 } // namespace JSC
+
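`endMarking()` above is where the collector's view gets splatted onto the allocator's view, and the whole transformation is a handful of whole-bitvector operations. A self-contained sketch of the same arithmetic for the non-destructor case, using `std::bitset` in place of `WTF::FastBitVector` (the 64-block size is an arbitrary choice for illustration):

```cpp
#include <bitset>

// One bit per block index, as in MarkedAllocator's FastBitVectors.
using BlockSet = std::bitset<64>;

struct AllocatorBits {
    BlockSet live;                   // indices that hold actual blocks
    BlockSet markingNotEmpty;        // built up by the collector during marking
    BlockSet markingRetired;         // built up by the collector during marking
    BlockSet allocated;              // consumed by the allocator
    BlockSet empty;                  // consumed by the allocator
    BlockSet canAllocateButNotEmpty; // consumed by the allocator

    // Mirrors MarkedAllocator::endMarking() for non-destructor blocks: splat
    // the collector-facing sets onto the allocator-facing sets, touching each
    // machine word of the vectors once rather than each block once.
    void endMarking()
    {
        allocated.reset();
        empty = live & ~markingNotEmpty;
        canAllocateButNotEmpty = live & markingNotEmpty & ~markingRetired;
    }
};
```

This is the "32-way parallelizing" observation from the commit message: one word-sized bitwise operation updates the state of many blocks at once.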
index d49d1f8..eea17fb 100644 (file)
@@ -29,7 +29,9 @@
 #include "AllocatorAttributes.h"
 #include "FreeList.h"
 #include "MarkedBlock.h"
+#include <wtf/FastBitVector.h>
 #include <wtf/SentinelLinkedList.h>
+#include <wtf/Vector.h>
 
 namespace JSC {
 
@@ -37,6 +39,95 @@ class Heap;
 class MarkedSpace;
 class LLIntOffsetsExtractor;
 
+#define FOR_EACH_MARKED_ALLOCATOR_BIT(macro) \
+    macro(live, Live) /* The set of block indices that have actual blocks. */\
+    macro(empty, Empty) /* The set of all blocks that have no live objects and nothing to destroy. */ \
+    macro(allocated, Allocated) /* The set of all blocks that are full of live objects. */\
+    macro(canAllocateButNotEmpty, CanAllocateButNotEmpty) /* The set of all blocks that are neither empty nor retired (i.e. are more than minMarkedBlockUtilization full). */ \
+    macro(eden, Eden) /* The set of all blocks that have new objects since the last GC. */\
+    macro(unswept, Unswept) /* The set of all blocks that could be swept by the incremental sweeper. */\
+    \
+    /* These are computed during marking. */\
+    macro(markingNotEmpty, MarkingNotEmpty) /* The set of all blocks that are not empty. */ \
+    macro(markingRetired, MarkingRetired) /* The set of all blocks that are retired. */
+
+// FIXME: We defined canAllocateButNotEmpty and empty to be exclusive:
+//
+//     canAllocateButNotEmpty & empty == 0
+//
+// Instead of calling it canAllocate and making it inclusive:
+//
+//     canAllocate & empty == empty
+//
+// The latter is probably better. I'll leave it to a future bug to fix that, since breathing on
+// this code leads to regressions for days, and it's not clear that making this change would
+// improve perf since it would not change the collector's behavior, and either way the allocator
+// has to look at both bitvectors.
+// https://bugs.webkit.org/show_bug.cgi?id=162121
+
+// Note that this collector supports overlapping allocator state with marking state, since in a
+// concurrent collector you allow allocation while marking is running. So it's best to visualize a
+// full mutate->eden collect->mutate->full collect cycle and see how the bits above get affected.
+// The example below tries to be exhaustive about what happens to the bits, but omits a lot of
+// things that happen to other state.
+//
+// Create allocator
+// - all bits are empty
+// Start allocating in some block
+// - allocate the block and set the live bit.
+// - the empty bit for the block flickers on and then gets immediately cleared by sweeping.
+// - set the eden bit.
+// Finish allocating in that block
+// - set the allocated bit.
+// Do that to a lot of blocks and then start an eden collection.
+// - beginMarking() has nothing to do.
+// - by default we have cleared markingNotEmpty/markingRetired bits.
+// - marking builds up markingNotEmpty/markingRetired bits.
+// We do endMarking()
+// - clear all allocated bits.
+// - for destructor blocks: canAllocateButNotEmpty = live & ~markingRetired
+// - for non-destructor blocks:
+//       empty = live & ~markingNotEmpty
+//       canAllocateButNotEmpty = live & markingNotEmpty & ~markingRetired
+// Snapshotting.
+// - unswept |= eden
+// Prepare for allocation.
+// - clear eden
+// Finish collection.
+// Allocate in some block that had some free and some live objects.
+// - clear the canAllocateButNotEmpty bit
+// - clear the unswept bit
+// - set the eden bit
+// Finish allocating (set the allocated bit).
+// Allocate in some block that was completely empty.
+// - clear the empty bit
+// - clear the unswept bit
+// - set the eden bit.
+// Finish allocating (set the allocated bit).
+// Allocate in some block that was completely empty in another allocator.
+// - the other allocator clears all of its bits for that block (removeFromAllocator).
+// - this allocator sets the live bit and, briefly, the empty bit (addBlock).
+// - sweeping to a free list clears the empty and unswept bits.
+// - set the eden bit.
+// Finish allocating (set the allocated bit).
+// Start a full collection.
+// - beginMarking() clears markingNotEmpty, markingRetired
+// - the heap version is incremented
+// - marking rebuilds markingNotEmpty/markingRetired bits.
+// We do endMarking()
+// - clear all allocated bits.
+// - set canAllocateButNotEmpty/empty the same way as in eden collection.
+// Snapshotting.
+// - unswept = live
+// Prepare for allocation.
+// - clear eden.
+// Finish collection.
+//
+// Notice how in this scheme, the empty/canAllocateButNotEmpty state stays separate from the
+// markingNotEmpty/markingRetired state. This is one step towards having separated allocation and
+// marking state.
+
 class MarkedAllocator {
     friend class LLIntOffsetsExtractor;
 
@@ -46,9 +137,16 @@ public:
 
     MarkedAllocator(Heap*, MarkedSpace*, size_t cellSize, const AllocatorAttributes&);
     void lastChanceToFinalize();
-    void reset();
+    void prepareForAllocation();
     void stopAllocating();
     void resumeAllocating();
+    void beginMarkingForFullCollection();
+    void endMarking();
+    void snapshotUnsweptForEdenCollection();
+    void snapshotUnsweptForFullCollection();
+    void sweep();
+    void shrink();
+    void assertNoUnswept();
     size_t cellSize() const { return m_cellSize; }
     const AllocatorAttributes& attributes() const { return m_attributes; }
     bool needsDestruction() const { return m_attributes.destruction == NeedsDestruction; }
@@ -72,35 +170,89 @@ public:
     bool isPagedOut(double deadline);
     
     static size_t blockSizeForBytes(size_t);
+    
+#define MARKED_ALLOCATOR_BIT_ACCESSORS(lowerBitName, capitalBitName)     \
+    bool is ## capitalBitName(size_t index) const { return m_ ## lowerBitName[index]; } \
+    bool is ## capitalBitName(MarkedBlock::Handle* block) const { return is ## capitalBitName(block->index()); } \
+    void setIs ## capitalBitName(size_t index, bool value) { m_ ## lowerBitName[index] = value; } \
+    void setIs ## capitalBitName(MarkedBlock::Handle* block, bool value) { setIs ## capitalBitName(block->index(), value); } \
+    bool atomicSetAndCheckIs ## capitalBitName(size_t index, bool value) { return m_ ## lowerBitName.atomicSetAndCheck(index, value); } \
+    bool atomicSetAndCheckIs ## capitalBitName(MarkedBlock::Handle* block, bool value) { return m_ ## lowerBitName.atomicSetAndCheck(block->index(), value); }
+    FOR_EACH_MARKED_ALLOCATOR_BIT(MARKED_ALLOCATOR_BIT_ACCESSORS)
+#undef MARKED_ALLOCATOR_BIT_ACCESSORS
+
+    template<typename Func>
+    void forEachBitVector(const Func& func)
+    {
+#define MARKED_ALLOCATOR_BIT_CALLBACK(lowerBitName, capitalBitName) \
+        func(m_ ## lowerBitName);
+        FOR_EACH_MARKED_ALLOCATOR_BIT(MARKED_ALLOCATOR_BIT_CALLBACK);
+#undef MARKED_ALLOCATOR_BIT_CALLBACK
+    }
+    
+    template<typename Func>
+    void forEachBitVectorWithName(const Func& func)
+    {
+#define MARKED_ALLOCATOR_BIT_CALLBACK(lowerBitName, capitalBitName) \
+        func(m_ ## lowerBitName, #capitalBitName);
+        FOR_EACH_MARKED_ALLOCATOR_BIT(MARKED_ALLOCATOR_BIT_CALLBACK);
+#undef MARKED_ALLOCATOR_BIT_CALLBACK
+    }
+    
+    MarkedAllocator* nextAllocator() const { return m_nextAllocator; }
+    
+    // MarkedSpace calls this during init.
+    void setNextAllocator(MarkedAllocator* allocator) { m_nextAllocator = allocator; }
+    
+    MarkedBlock::Handle* findEmptyBlockToSteal();
+    
+    MarkedBlock::Handle* findBlockToSweep();
+    
+    void dump(PrintStream&) const;
+    void dumpBits(PrintStream& = WTF::dataFile());
    
 private:
     friend class MarkedBlock;
     
+    bool shouldStealEmptyBlocksFromOtherAllocators() const;
+    
     JS_EXPORT_PRIVATE void* allocateSlowCase();
     JS_EXPORT_PRIVATE void* tryAllocateSlowCase();
     void* allocateSlowCaseImpl(bool crashOnFailure);
+    void didConsumeFreeList();
     void* tryAllocateWithoutCollecting();
-    void* tryAllocateWithoutCollectingImpl();
     MarkedBlock::Handle* tryAllocateBlock();
+    void* tryAllocateIn(MarkedBlock::Handle*);
+    void* allocateIn(MarkedBlock::Handle*);
     ALWAYS_INLINE void doTestCollectionsIfNeeded();
-    void retire(MarkedBlock::Handle*);
     
     void setFreeList(const FreeList&);
     
-    MarkedBlock::Handle* filterNextBlock(MarkedBlock::Handle*);
-    void setNextBlockToSweep(MarkedBlock::Handle*);
-    
     FreeList m_freeList;
+    
+    Vector<MarkedBlock::Handle*> m_blocks;
+    Vector<unsigned> m_freeBlockIndices;
+    
+#define MARKED_ALLOCATOR_BIT_DECLARATION(lowerBitName, capitalBitName) \
+    FastBitVector m_ ## lowerBitName;
+    FOR_EACH_MARKED_ALLOCATOR_BIT(MARKED_ALLOCATOR_BIT_DECLARATION)
+#undef MARKED_ALLOCATOR_BIT_DECLARATION
+    
+    // After you do something to a block based on one of these cursors, you clear the bit in the
+    // corresponding bitvector and leave the cursor where it was.
+    size_t m_allocationCursor { 0 }; // Points to the next block that is a candidate for allocation.
+    size_t m_emptyCursor { 0 }; // Points to the next block that is a candidate for empty allocation (allocating in empty blocks).
+    size_t m_unsweptCursor { 0 }; // Points to the next block that is a candidate for incremental sweeping.
+    
     MarkedBlock::Handle* m_currentBlock;
     MarkedBlock::Handle* m_lastActiveBlock;
-    MarkedBlock::Handle* m_nextBlockToSweep;
-    SentinelLinkedList<MarkedBlock::Handle, BasicRawSentinelNode<MarkedBlock::Handle>> m_blockList;
-    SentinelLinkedList<MarkedBlock::Handle, BasicRawSentinelNode<MarkedBlock::Handle>> m_retiredBlocks;
+
     Lock m_lock;
     unsigned m_cellSize;
     AllocatorAttributes m_attributes;
     Heap* m_heap;
     MarkedSpace* m_markedSpace;
+    MarkedAllocator* m_nextAllocator;
 };
 
 inline ptrdiff_t MarkedAllocator::offsetOfFreeList()
@@ -149,20 +301,12 @@ ALWAYS_INLINE void* MarkedAllocator::allocate()
     return head;
 }
 
-inline void MarkedAllocator::resumeAllocating()
-{
-    if (!m_lastActiveBlock)
-        return;
-
-    m_freeList = m_lastActiveBlock->resumeAllocating();
-    m_currentBlock = m_lastActiveBlock;
-    m_lastActiveBlock = 0;
-}
-
 template <typename Functor> inline void MarkedAllocator::forEachBlock(const Functor& functor)
 {
-    m_blockList.forEach(functor);
-    m_retiredBlocks.forEach(functor);
+    m_live.forEachSetBit(
+        [&] (size_t index) {
+            functor(m_blocks[index]);
+        });
 }
 
 } // namespace JSC
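The `FOR_EACH_MARKED_ALLOCATOR_BIT` X-macro above stamps out one bitvector plus a family of accessors per state name, so adding a new per-block state bit is a one-line change. A minimal, self-contained illustration of the pattern, with a toy two-entry state list and `std::bitset` standing in for `FastBitVector`:

```cpp
#include <bitset>
#include <cstddef>

// Toy subset of the state names; the real list has live, empty, allocated,
// canAllocateButNotEmpty, eden, unswept, markingNotEmpty, markingRetired.
#define FOR_EACH_STATE_BIT(macro) \
    macro(live, Live) \
    macro(empty, Empty)

struct Bits {
    // One bitvector per state name.
#define DECLARE_BIT(lowerName, capitalName) \
    std::bitset<64> m_ ## lowerName;
    FOR_EACH_STATE_BIT(DECLARE_BIT)
#undef DECLARE_BIT

    // isLive(i)/setIsLive(i, v), isEmpty(i)/setIsEmpty(i, v), generated by
    // token pasting, as in MARKED_ALLOCATOR_BIT_ACCESSORS.
#define DECLARE_ACCESSORS(lowerName, capitalName) \
    bool is ## capitalName(size_t index) const { return m_ ## lowerName[index]; } \
    void setIs ## capitalName(size_t index, bool value) { m_ ## lowerName[index] = value; }
    FOR_EACH_STATE_BIT(DECLARE_ACCESSORS)
#undef DECLARE_ACCESSORS

    // Like forEachBitVector(): apply a functor to every generated vector,
    // which is how addBlock()/removeBlock() resize and clear them in bulk.
    template<typename Func>
    void forEachBitVector(const Func& func)
    {
#define CALLBACK(lowerName, capitalName) func(m_ ## lowerName);
        FOR_EACH_STATE_BIT(CALLBACK)
#undef CALLBACK
    }
};
```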
index 0390664..83d035b 100644 (file)
@@ -29,6 +29,7 @@
 #include "JSCell.h"
 #include "JSDestructibleObject.h"
 #include "JSCInlines.h"
+#include "MarkedBlockInlines.h"
 #include "SuperSampler.h"
 
 namespace JSC {
@@ -36,7 +37,7 @@ namespace JSC {
 static const bool computeBalance = false;
 static size_t balance;
 
-MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap, MarkedAllocator* allocator, size_t cellSize, const AllocatorAttributes& attributes)
+MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap)
 {
     if (computeBalance) {
         balance++;
@@ -48,26 +49,17 @@ MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap, MarkedAllocator* allocat
         return nullptr;
     if (scribbleFreeCells())
         scribble(blockSpace, blockSize);
-    return new Handle(heap, allocator, cellSize, attributes, blockSpace);
+    return new Handle(heap, blockSpace);
 }
 
-MarkedBlock::Handle::Handle(Heap& heap, MarkedAllocator* allocator, size_t cellSize, const AllocatorAttributes& attributes, void* blockSpace)
-    : m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
-    , m_endAtom(atomsPerBlock - m_atomsPerCell + 1)
-    , m_attributes(attributes)
-    , m_state(New) // All cells start out unmarked.
-    , m_allocator(allocator)
-    , m_weakSet(allocator->heap()->vm(), CellContainer())
+MarkedBlock::Handle::Handle(Heap& heap, void* blockSpace)
+    : m_weakSet(heap.vm(), CellContainer())
 {
     m_block = new (NotNull, blockSpace) MarkedBlock(*heap.vm(), *this);
     
     m_weakSet.setContainer(*m_block);
     
     heap.didAllocateBlock(blockSize);
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-    ASSERT(allocator);
-    if (m_attributes.cellKind != HeapCell::JSCell)
-        RELEASE_ASSERT(m_attributes.destruction == DoesNotNeedDestruction);
 }
 
 MarkedBlock::Handle::~Handle()
@@ -78,45 +70,35 @@ MarkedBlock::Handle::~Handle()
         if (!(balance % 10))
             dataLog("MarkedBlock Balance: ", balance, "\n");
     }
+    removeFromAllocator();
     m_block->~MarkedBlock();
     fastAlignedFree(m_block);
     heap.didFreeBlock(blockSize);
 }
 
 MarkedBlock::MarkedBlock(VM& vm, Handle& handle)
-    : m_needsDestruction(handle.needsDestruction())
-    , m_version(vm.heap.objectSpace().version())
+    : m_version(MarkedSpace::nullVersion)
     , m_handle(handle)
     , m_vm(&vm)
 {
-    unsigned cellsPerBlock = MarkedSpace::blockPayload / handle.cellSize();
-    double markCountBias = -(Options::minMarkedBlockUtilization() * cellsPerBlock);
-    
-    // The mark count bias should be comfortably within this range.
-    RELEASE_ASSERT(markCountBias > static_cast<double>(std::numeric_limits<int16_t>::min()));
-    RELEASE_ASSERT(markCountBias < 0);
-    
-    m_markCountBias = static_cast<int16_t>(markCountBias);
-    
-    m_biasedMarkCount = m_markCountBias; // This means we haven't marked anything yet.
 }
 
-template<MarkedBlock::BlockState blockState, MarkedBlock::Handle::SweepMode sweepMode, DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode>
+template<MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepMode sweepMode, DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode, MarkedBlock::Handle::FlipMode flipMode>
 FreeList MarkedBlock::Handle::specializedSweep()
 {
-    SuperSamplerScope superSamplerScope(false);
-    ASSERT(blockState == New || blockState == Marked);
-    ASSERT(!(destructionMode == DoesNotNeedDestruction && sweepMode == SweepOnly));
+    RELEASE_ASSERT(!(destructionMode == DoesNotNeedDestruction && sweepMode == SweepOnly));
     
-    assertFlipped();
+    SuperSamplerScope superSamplerScope(false);
+
     MarkedBlock& block = this->block();
     
-    bool isNewBlock = blockState == New;
-    bool isEmptyBlock = !block.hasAnyMarked()
-        && newlyAllocatedMode == DoesNotHaveNewlyAllocated
-        && destructionMode == DoesNotNeedDestruction;
-    if (Options::useBumpAllocator() && (isNewBlock || isEmptyBlock)) {
-        ASSERT(block.m_marks.isEmpty());
+    if (false)
+        dataLog(RawPointer(this), ": MarkedBlock::Handle::specializedSweep!\n");
+    
+    if (Options::useBumpAllocator()
+        && emptyMode == IsEmpty
+        && newlyAllocatedMode == DoesNotHaveNewlyAllocated) {
+        ASSERT(flipMode == NeedsFlip);
         
         char* startOfLastCell = static_cast<char*>(cellAlign(block.atoms() + m_endAtom - 1));
         char* payloadEnd = startOfLastCell + cellSize();
@@ -124,7 +106,10 @@ FreeList MarkedBlock::Handle::specializedSweep()
         char* payloadBegin = bitwise_cast<char*>(block.atoms() + firstAtom());
         if (scribbleMode == Scribble)
             scribble(payloadBegin, payloadEnd - payloadBegin);
-        m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Marked);
+        if (sweepMode == SweepToFreeList)
+            setIsFreeListed();
+        else
+            m_allocator->setIsEmpty(this, true);
         FreeList result = FreeList::bump(payloadEnd, payloadEnd - payloadBegin);
         if (false)
             dataLog("Quickly swept block ", RawPointer(this), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
@@ -136,15 +121,18 @@ FreeList MarkedBlock::Handle::specializedSweep()
     // order of the free list.
     FreeCell* head = 0;
     size_t count = 0;
+    bool isEmpty = true;
     for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        if (blockState == Marked
-            && (block.m_marks.get(i)
-                || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated->get(i))))
+        if (emptyMode == NotEmpty
+            && ((flipMode == DoesNotNeedFlip && block.m_marks.get(i))
+                || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated->get(i)))) {
+            isEmpty = false;
             continue;
-
+        }
+        
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&block.atoms()[i]);
 
-        if (destructionMode == NeedsDestruction && blockState != New)
+        if (destructionMode == NeedsDestruction && emptyMode == NotEmpty)
             static_cast<JSCell*>(cell)->callDestructor(*vm());
 
         if (sweepMode == SweepToFreeList) {
@@ -163,7 +151,10 @@ FreeList MarkedBlock::Handle::specializedSweep()
         m_newlyAllocated = nullptr;
 
     FreeList result = FreeList::list(head, count * cellSize());
-    m_state = (sweepMode == SweepToFreeList ? FreeListed : Marked);
+    if (sweepMode == SweepToFreeList)
+        setIsFreeListed();
+    else if (isEmpty)
+        m_allocator->setIsEmpty(this, true);
     if (false)
         dataLog("Slowly swept block ", RawPointer(&block), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
     return result;
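The fast path above builds a bump free list for empty blocks and the slow path builds a pop (linked) free list. A minimal sketch of the two shapes (simplified types, not JSC's actual FreeList class):

```cpp
#include <cassert>
#include <cstddef>

// Simplified sketch of the two free-list shapes specializedSweep can produce.
// A bump free list is just "payload end plus remaining bytes": allocation
// subtracts the cell size from the remaining count. A pop free list threads a
// linked list through the dead cells and allocation pops the head. Bump is
// cheaper, but it is only valid when the whole payload is known to be free.
struct FreeCell {
    FreeCell* next;
};

struct FreeListSketch {
    FreeCell* head { nullptr };   // pop variant
    char* payloadEnd { nullptr }; // bump variant
    size_t remaining { 0 };       // bytes left in the bump range

    static FreeListSketch bump(char* payloadEnd, size_t remaining)
    {
        FreeListSketch result;
        result.payloadEnd = payloadEnd;
        result.remaining = remaining;
        return result;
    }

    static FreeListSketch list(FreeCell* head)
    {
        FreeListSketch result;
        result.head = head;
        return result;
    }

    void* allocate(size_t cellSize)
    {
        if (remaining) {
            remaining -= cellSize;
            return payloadEnd - remaining - cellSize;
        }
        if (!head)
            return nullptr;
        FreeCell* result = head;
        head = result->next;
        return result;
    }
};
```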
@@ -171,15 +162,20 @@ FreeList MarkedBlock::Handle::specializedSweep()
 
 FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode)
 {
-    flipIfNecessary();
+    m_allocator->setIsUnswept(this, false);
     
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
     m_weakSet.sweep();
 
     if (sweepMode == SweepOnly && m_attributes.destruction == DoesNotNeedDestruction)
         return FreeList();
 
+    if (UNLIKELY(m_isFreeListed)) {
+        RELEASE_ASSERT(sweepMode == SweepToFreeList);
+        return FreeList();
+    }
+    
+    ASSERT(!m_allocator->isAllocated(this));
+    
     if (m_attributes.destruction == NeedsDestruction)
         return sweepHelperSelectScribbleMode<NeedsDestruction>(sweepMode);
     return sweepHelperSelectScribbleMode<DoesNotNeedDestruction>(sweepMode);
@@ -189,47 +185,59 @@ template<DestructionMode destructionMode>
 FreeList MarkedBlock::Handle::sweepHelperSelectScribbleMode(SweepMode sweepMode)
 {
     if (scribbleFreeCells())
-        return sweepHelperSelectStateAndSweepMode<destructionMode, Scribble>(sweepMode);
-    return sweepHelperSelectStateAndSweepMode<destructionMode, DontScribble>(sweepMode);
+        return sweepHelperSelectEmptyMode<destructionMode, Scribble>(sweepMode);
+    return sweepHelperSelectEmptyMode<destructionMode, DontScribble>(sweepMode);
 }
 
 template<DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode>
-FreeList MarkedBlock::Handle::sweepHelperSelectStateAndSweepMode(SweepMode sweepMode)
-{
-    switch (m_state) {
-    case New:
-        ASSERT(sweepMode == SweepToFreeList);
-        return specializedSweep<New, SweepToFreeList, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>();
-    case FreeListed:
-        // Happens when a block transitions to fully allocated.
-        ASSERT(sweepMode == SweepToFreeList);
-        return FreeList();
-    case Allocated:
-        RELEASE_ASSERT_NOT_REACHED();
-        return FreeList();
-    case Marked:
-        if (m_newlyAllocated) {
-            return sweepMode == SweepToFreeList
-                ? specializedSweep<Marked, SweepToFreeList, destructionMode, scribbleMode, HasNewlyAllocated>()
-                : specializedSweep<Marked, SweepOnly, destructionMode, scribbleMode, HasNewlyAllocated>();
-        } else {
-            return sweepMode == SweepToFreeList
-                ? specializedSweep<Marked, SweepToFreeList, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>()
-                : specializedSweep<Marked, SweepOnly, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>();
-        }
-    }
-    RELEASE_ASSERT_NOT_REACHED();
-    return FreeList();
+FreeList MarkedBlock::Handle::sweepHelperSelectEmptyMode(SweepMode sweepMode)
+{
+    // It's not obvious, but this is the only way to know if the block is empty. It's the only
+    // bit that captures these caveats:
+    // - It's true when the block is freshly allocated.
+    // - It's true if the block had been swept in the past, all destructors were called, and that
+    //   sweep proved that the block is empty.
+    // - It's false if there are any destructors that need to be called, even if the block has no
+    //   live objects.
+    if (m_allocator->isEmpty(this))
+        return sweepHelperSelectHasNewlyAllocated<IsEmpty, destructionMode, scribbleMode>(sweepMode);
+    return sweepHelperSelectHasNewlyAllocated<NotEmpty, destructionMode, scribbleMode>(sweepMode);
+}
+
+template<MarkedBlock::Handle::EmptyMode emptyMode, DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode>
+FreeList MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated(SweepMode sweepMode)
+{
+    if (m_newlyAllocated)
+        return sweepHelperSelectSweepMode<emptyMode, destructionMode, scribbleMode, HasNewlyAllocated>(sweepMode);
+    return sweepHelperSelectSweepMode<emptyMode, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>(sweepMode);
+}
+
+template<MarkedBlock::Handle::EmptyMode emptyMode, DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode>
+FreeList MarkedBlock::Handle::sweepHelperSelectSweepMode(SweepMode sweepMode)
+{
+    if (sweepMode == SweepToFreeList)
+        return sweepHelperSelectFlipMode<emptyMode, SweepToFreeList, destructionMode, scribbleMode, newlyAllocatedMode>();
+    return sweepHelperSelectFlipMode<emptyMode, SweepOnly, destructionMode, scribbleMode, newlyAllocatedMode>();
+}
+
+template<MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepMode sweepMode, DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode>
+FreeList MarkedBlock::Handle::sweepHelperSelectFlipMode()
+{
+    if (needsFlip())
+        return specializedSweep<emptyMode, sweepMode, destructionMode, scribbleMode, newlyAllocatedMode, NeedsFlip>();
+    return specializedSweep<emptyMode, sweepMode, destructionMode, scribbleMode, newlyAllocatedMode, DoesNotNeedFlip>();
 }
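The sweepHelperSelect* chain above converts one runtime condition at a time into a template argument, so specializedSweep is compiled once per mode combination with every mode test folded away in the per-cell loop. A stripped-down sketch of the pattern (toy enums and return values, not the real sweep logic):

```cpp
#include <cassert>

// Sketch of the dispatch pattern used by the sweepHelperSelect* chain: each
// helper tests one runtime condition and forwards it as a compile-time
// template argument, so the innermost function sees every mode as a constant
// and its loop compiles with no mode branches left in it.
enum EmptyMode { IsEmpty, NotEmpty };
enum FlipMode { NeedsFlip, DoesNotNeedFlip };

template<EmptyMode emptyMode, FlipMode flipMode>
int specializedOperation()
{
    // In the real sweep this is the per-cell loop; the mode tests below are
    // resolved at compile time in each instantiation.
    int result = 0;
    if (emptyMode == IsEmpty)
        result += 1;
    if (flipMode == NeedsFlip)
        result += 2;
    return result;
}

template<EmptyMode emptyMode>
int selectFlipMode(bool needsFlip)
{
    if (needsFlip)
        return specializedOperation<emptyMode, NeedsFlip>();
    return specializedOperation<emptyMode, DoesNotNeedFlip>();
}

int selectEmptyMode(bool isEmpty, bool needsFlip)
{
    if (isEmpty)
        return selectFlipMode<IsEmpty>(needsFlip);
    return selectFlipMode<NotEmpty>(needsFlip);
}
```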
 
 void MarkedBlock::Handle::unsweepWithNoNewlyAllocated()
 {
-    flipIfNecessary();
-    
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-    
-    RELEASE_ASSERT(m_state == FreeListed);
-    m_state = Marked;
+    RELEASE_ASSERT(m_isFreeListed);
+    m_isFreeListed = false;
+}
+
+void MarkedBlock::Handle::setIsFreeListed()
+{
+    m_allocator->setIsEmpty(this, false);
+    m_isFreeListed = true;
 }
 
 class SetNewlyAllocatedFunctor : public MarkedBlock::VoidFunctor {
@@ -252,26 +260,17 @@ private:
 
 void MarkedBlock::Handle::stopAllocating(const FreeList& freeList)
 {
-    flipIfNecessary();
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
-    if (m_state == Marked) {
-        // If the block is in the Marked state then we know that one of these
-        // conditions holds:
-        //
-        // - It was not used for allocation during the previous allocation cycle.
-        //   It may have dead objects, and we only know them to be dead by the
-        //   fact that their mark bits are unset.
-        //
-        // - Someone had already done stopAllocating(), for example because of
-        //   heap iteration, and they had already 
-        // Hence if the block is Marked we need to leave it Marked.
+    if (false)
+        dataLog(RawPointer(this), ": MarkedBlock::Handle::stopAllocating!\n");
+    ASSERT(!allocator()->isAllocated(this));
+
+    if (!isFreeListed()) {
+        // This means that we either didn't use this block at all for allocation since last GC,
+        // or someone had already done stopAllocating() before.
         ASSERT(freeList.allocationWillFail());
         return;
     }
     
-    ASSERT(m_state == FreeListed);
-    
     // Roll back to a coherent state for Heap introspection. Cells newly
     // allocated from our free list are not currently marked, so we need another
     // way to tell what's live vs dead. 
@@ -290,11 +289,12 @@ void MarkedBlock::Handle::stopAllocating(const FreeList& freeList)
             clearNewlyAllocated(cell);
         });
     
-    m_state = Marked;
+    m_isFreeListed = false;
 }
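The rolled-back state that stopAllocating() produces can be sketched as follows. Cells handed out from a free list are not marked, so liveness during heap iteration is "marked OR newly allocated"; we start by assuming every cell was allocated, then clear the bit for each cell still on the free list (a simplified model, not the real SetNewlyAllocatedFunctor):

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>

// Sketch of the stopAllocating() bookkeeping using a tiny 8-cell "block".
constexpr size_t cellsPerBlock = 8;

std::bitset<cellsPerBlock> stopAllocating(const std::bitset<cellsPerBlock>& freeCells)
{
    std::bitset<cellsPerBlock> newlyAllocated;
    newlyAllocated.set(); // Assume every cell was allocated...
    for (size_t i = 0; i < cellsPerBlock; ++i) {
        if (freeCells[i])
            newlyAllocated.reset(i); // ...except cells still on the free list.
    }
    return newlyAllocated;
}

bool isLive(const std::bitset<cellsPerBlock>& marks,
    const std::bitset<cellsPerBlock>& newlyAllocated, size_t cell)
{
    return marks[cell] || newlyAllocated[cell];
}
```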
 
 void MarkedBlock::Handle::lastChanceToFinalize()
 {
+    allocator()->setIsAllocated(this, false);
     m_block->clearMarks();
     m_weakSet.lastChanceToFinalize();
 
@@ -304,14 +304,11 @@ void MarkedBlock::Handle::lastChanceToFinalize()
 
 FreeList MarkedBlock::Handle::resumeAllocating()
 {
-    flipIfNecessary();
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
-    ASSERT(m_state == Marked);
+    ASSERT(!allocator()->isAllocated(this));
+    ASSERT(!isFreeListed());
 
     if (!m_newlyAllocated) {
-        // We didn't have to create a "newly allocated" bitmap. That means we were already Marked
-        // when we last stopped allocation, so return an empty free list and stay in the Marked state.
+        // This means we had already exhausted the block when we stopped allocation.
         return FreeList();
     }
 
@@ -345,40 +342,28 @@ void MarkedBlock::Handle::forEachFreeCell(const FreeList& freeList, const Func&
     }
 }
 
-void MarkedBlock::flipIfNecessary()
-{
-    flipIfNecessary(vm()->heap.objectSpace().version());
-}
-
-void MarkedBlock::Handle::flipIfNecessary()
-{
-    block().flipIfNecessary();
-}
-
-void MarkedBlock::flipIfNecessarySlow()
-{
-    ASSERT(needsFlip());
-    ASSERT(!vm()->heap.objectSpace().isMarking());
-    clearMarks();
-}
-
-void MarkedBlock::flipIfNecessaryDuringMarkingSlow()
+void MarkedBlock::aboutToMarkSlow(HeapVersion heapVersion)
 {
     ASSERT(vm()->heap.objectSpace().isMarking());
     LockHolder locker(m_lock);
-    if (needsFlip())
-        clearMarks();
+    if (needsFlip(heapVersion)) {
+        clearMarks(heapVersion);
+        // This means we're the first ones to mark any object in this block.
+        handle().allocator()->atomicSetAndCheckIsMarkingNotEmpty(&handle(), true);
+    }
 }
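aboutToMarkSlow() relies on an atomic set-and-check to detect the first marker into a block. A hedged sketch of that idiom (the real atomicSetAndCheckIsMarkingNotEmpty operates on the allocator's bitvectors; this toy version uses one word):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of the atomicSetAndCheck idiom: set one block's bit in a shared
// state word and report whether this call was the one that changed it. Only
// the first thread to mark into a block sees "true", so per-block one-time
// work (like moving the block out of the empty set) runs exactly once even
// with concurrent markers.
bool atomicSetAndCheck(std::atomic<uint32_t>& bits, unsigned index)
{
    uint32_t mask = 1u << index;
    uint32_t old = bits.fetch_or(mask, std::memory_order_relaxed);
    return !(old & mask); // True only for the caller that flipped the bit.
}
```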
 
 void MarkedBlock::clearMarks()
 {
+    clearMarks(vm()->heap.objectSpace().version());
+}
+
+void MarkedBlock::clearMarks(HeapVersion heapVersion)
+{
     m_marks.clearAll();
     clearHasAnyMarked();
-    // This will become true at the end of the mark phase. We set it now to
-    // avoid an extra pass to do so later.
-    handle().m_state = Marked;
     WTF::storeStoreFence();
-    m_version = vm()->heap.objectSpace().version();
+    m_version = heapVersion;
 }
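The version-stamped clearMarks() above is what makes the flip lazy: bumping the heap version at the start of collection logically clears every block's marks at once, and a block whose stored version is stale reports everything unmarked until it is physically cleared on first touch. A minimal single-threaded sketch:

```cpp
#include <bitset>
#include <cassert>
#include <cstdint>

typedef uint32_t HeapVersion;

// Sketch of version-based lazy mark clearing on a tiny 8-atom "block".
struct VersionedMarks {
    std::bitset<8> marks;
    HeapVersion version { 0 };

    bool needsFlip(HeapVersion heapVersion) const { return heapVersion != version; }

    bool isMarked(HeapVersion heapVersion, size_t atom) const
    {
        // A stale version means the marks are logically clear.
        return needsFlip(heapVersion) ? false : marks[atom];
    }

    void aboutToMark(HeapVersion heapVersion)
    {
        if (needsFlip(heapVersion)) {
            marks.reset(); // Physically clear, first touch only.
            version = heapVersion;
        }
    }
};
```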
 
 #if !ASSERT_DISABLED
@@ -398,31 +383,38 @@ bool MarkedBlock::Handle::needsFlip()
     return m_block->needsFlip();
 }
 
-void MarkedBlock::Handle::willRemoveBlock()
+bool MarkedBlock::isMarked(const void* p)
 {
-    flipIfNecessary();
+    return isMarked(vm()->heap.objectSpace().version(), p);
 }
 
-void MarkedBlock::Handle::didConsumeFreeList()
+bool MarkedBlock::Handle::isMarkedOrNewlyAllocated(const HeapCell* cell)
 {
-    flipIfNecessary();
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-    
-    ASSERT(m_state == FreeListed);
+    return isMarkedOrNewlyAllocated(vm()->heap.objectSpace().version(), cell);
+}
+
+bool MarkedBlock::isMarkedOrNewlyAllocated(const HeapCell* cell)
+{
+    return isMarkedOrNewlyAllocated(vm()->heap.objectSpace().version(), cell);
+}
 
-    m_state = Allocated;
+void MarkedBlock::Handle::didConsumeFreeList()
+{
+    if (false)
+        dataLog(RawPointer(this), ": MarkedBlock::Handle::didConsumeFreeList!\n");
+    ASSERT(isFreeListed());
+    m_isFreeListed = false;
+    allocator()->setIsAllocated(this, true);
 }
 
 size_t MarkedBlock::markCount()
 {
-    flipIfNecessary();
-    return m_marks.count();
+    return needsFlip() ? 0 : m_marks.count();
 }
 
 bool MarkedBlock::Handle::isEmpty()
 {
-    flipIfNecessary();
-    return m_state == Marked && !block().hasAnyMarked() && m_weakSet.isEmpty() && (!m_newlyAllocated || m_newlyAllocated->isEmpty());
+    return m_allocator->isEmpty(this);
 }
 
 void MarkedBlock::clearHasAnyMarked()
@@ -432,29 +424,78 @@ void MarkedBlock::clearHasAnyMarked()
 
 void MarkedBlock::noteMarkedSlow()
 {
-    handle().m_allocator->retire(&handle());
+    handle().allocator()->atomicSetAndCheckIsMarkingRetired(&handle(), true);
+}
+
+void MarkedBlock::Handle::removeFromAllocator()
+{
+    if (!m_allocator)
+        return;
+    
+    m_allocator->removeBlock(this);
+}
+
+void MarkedBlock::Handle::didAddToAllocator(MarkedAllocator* allocator, size_t index)
+{
+    ASSERT(m_index == std::numeric_limits<size_t>::max());
+    ASSERT(!m_allocator);
+    
+    m_index = index;
+    m_allocator = allocator;
+    
+    size_t cellSize = allocator->cellSize();
+    m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
+    m_endAtom = atomsPerBlock - m_atomsPerCell + 1;
+    
+    m_attributes = allocator->attributes();
+
+    if (m_attributes.cellKind != HeapCell::JSCell)
+        RELEASE_ASSERT(m_attributes.destruction == DoesNotNeedDestruction);
+    
+    block().m_needsDestruction = needsDestruction();
+    
+    unsigned cellsPerBlock = MarkedSpace::blockPayload / cellSize;
+    double markCountBias = -(Options::minMarkedBlockUtilization() * cellsPerBlock);
+    
+    // The mark count bias should be comfortably within this range.
+    RELEASE_ASSERT(markCountBias > static_cast<double>(std::numeric_limits<int16_t>::min()));
+    RELEASE_ASSERT(markCountBias < 0);
+    
+    // This means we haven't marked anything yet.
+    block().m_biasedMarkCount = block().m_markCountBias = static_cast<int16_t>(markCountBias);
+}
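The biased mark count seeded at the end of didAddToAllocator() makes two queries cheap: "any marks at all" is one comparison against the bias, and "utilization crossed the retirement threshold" is the counter reaching zero. A sketch of that arithmetic (the increment-and-test shape here is an illustration, not the exact noteMarked() code):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the biased mark count trick: start the counter at
// -(minUtilization * cellsPerBlock). Each mark increments it toward zero.
struct BiasedMarkCount {
    int16_t markCountBias;
    int16_t biasedMarkCount;

    BiasedMarkCount(unsigned cellsPerBlock, double minUtilization)
    {
        double bias = -(minUtilization * cellsPerBlock);
        markCountBias = static_cast<int16_t>(bias);
        biasedMarkCount = markCountBias; // Nothing marked yet.
    }

    bool hasAnyMarked() const { return biasedMarkCount != markCountBias; }

    // Returns true when this mark pushes the block over the threshold.
    bool noteMarked() { return !++biasedMarkCount; }
};
```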
+
+void MarkedBlock::Handle::didRemoveFromAllocator()
+{
+    ASSERT(m_index != std::numeric_limits<size_t>::max());
+    ASSERT(m_allocator);
+    
+    m_index = std::numeric_limits<size_t>::max();
+    m_allocator = nullptr;
+}
+
+bool MarkedBlock::Handle::isLive(const HeapCell* cell)
+{
+    return isLive(vm()->heap.objectSpace().version(), cell);
+}
+
+bool MarkedBlock::Handle::isLiveCell(const void* p)
+{
+    return isLiveCell(vm()->heap.objectSpace().version(), p);
 }
 
 } // namespace JSC
 
 namespace WTF {
 
-using namespace JSC;
-
-void printInternal(PrintStream& out, MarkedBlock::BlockState blockState)
+void printInternal(PrintStream& out, JSC::MarkedBlock::Handle::SweepMode mode)
 {
-    switch (blockState) {
-    case MarkedBlock::New:
-        out.print("New");
-        return;
-    case MarkedBlock::FreeListed:
-        out.print("FreeListed");
-        return;
-    case MarkedBlock::Allocated:
-        out.print("Allocated");
+    switch (mode) {
+    case JSC::MarkedBlock::Handle::SweepToFreeList:
+        out.print("SweepToFreeList");
         return;
-    case MarkedBlock::Marked:
-        out.print("Marked");
+    case JSC::MarkedBlock::Handle::SweepOnly:
+        out.print("SweepOnly");
         return;
     }
     RELEASE_ASSERT_NOT_REACHED();
index 047ca03..88cd27e 100644 (file)
@@ -44,20 +44,6 @@ class MarkedAllocator;
 typedef uintptr_t Bits;
 typedef uint32_t HeapVersion;
 
-// Set to log state transitions of blocks.
-#define HEAP_LOG_BLOCK_STATE_TRANSITIONS 0
-
-#if HEAP_LOG_BLOCK_STATE_TRANSITIONS
-#define HEAP_LOG_BLOCK_STATE_TRANSITION(handle) do {            \
-        dataLogF(                                               \
-            "%s:%d %s: block %s = %p, %d\n",                    \
-            __FILE__, __LINE__, __FUNCTION__,                   \
-            #handle, &(handle)->block(), (handle)->m_state);    \
-    } while (false)
-#else
-#define HEAP_LOG_BLOCK_STATE_TRANSITION(handle) ((void)0)
-#endif
-
 // A marked block is a page-aligned container for heap-allocated objects.
 // Objects are allocated within cells of the marked block. For a given
 // marked block, all cells have the same size. Objects smaller than the
@@ -76,8 +62,6 @@ public:
 private:
     friend class Handle;
 public:
-    enum BlockState : uint8_t { New, FreeListed, Allocated, Marked };
-        
     static const size_t atomSize = 16; // bytes
     static const size_t blockSize = 16 * KB;
     static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
@@ -86,12 +70,12 @@ public:
 
     static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
     static_assert(!(MarkedBlock::blockSize & (MarkedBlock::blockSize - 1)), "MarkedBlock::blockSize must be a power of two.");
-
+    
     struct VoidFunctor {
         typedef void ReturnType;
         void returnValue() { }
     };
-
+    
     class CountFunctor {
     public:
         typedef size_t ReturnType;
@@ -106,10 +90,9 @@ public:
         mutable ReturnType m_count;
     };
         
-    class Handle : public BasicRawSentinelNode<Handle> {
+    class Handle {
         WTF_MAKE_NONCOPYABLE(Handle);
         WTF_MAKE_FAST_ALLOCATED;
-        friend class DoublyLinkedListNode<Handle>;
         friend class LLIntOffsetsExtractor;
         friend class MarkedBlock;
         friend struct VerifyMarked;
@@ -130,6 +113,15 @@ public:
         VM* vm() const;
         WeakSet& weakSet();
             
+        // Sweeping ensures that destructors get called and removes the block from the unswept
+        // set. Sweeping to free list also removes the block from the empty set, if it was in that
+        // set. Sweeping with SweepOnly may add this block to the empty set, if the block is found
+        // to be empty.
+        //
+        // Note that you need to make sure that the empty bit reflects reality. If it's not set
+        // and the block is freshly created, then we'll make the mistake of running destructors in
+        // the block. If it's not set and the block has nothing marked, then we'll make the
+        // mistake of making a pop freelist rather than a bump freelist.
         enum SweepMode { SweepOnly, SweepToFreeList };
         FreeList sweep(SweepMode = SweepOnly);
         
@@ -153,8 +145,6 @@ public:
         // and was successfully cleared and false otherwise.
         bool clearNewlyAllocated();
             
-        void flipForEdenCollection();
-            
         size_t cellSize();
         const AllocatorAttributes& attributes() const;
         DestructionMode destruction() const;
@@ -164,9 +154,14 @@ public:
         size_t markCount();
         size_t size();
             
+        inline bool isLive(HeapVersion, const HeapCell*);
+        inline bool isLiveCell(HeapVersion, const void*);
+
         bool isLive(const HeapCell*);
         bool isLiveCell(const void*);
+
         bool isMarkedOrNewlyAllocated(const HeapCell*);
+        bool isMarkedOrNewlyAllocated(HeapVersion, const HeapCell*);
             
         bool isNewlyAllocated(const void*);
         void setNewlyAllocated(const void*);
@@ -174,31 +169,25 @@ public:
         
         bool hasAnyNewlyAllocated() const { return !!m_newlyAllocated; }
             
-        bool isAllocated() const;
-        bool isMarked() const;
-        bool isFreeListed() const;
-        bool needsSweeping() const;
-        void willRemoveBlock();
-
         template <typename Functor> IterationStatus forEachCell(const Functor&);
-        template <typename Functor> IterationStatus forEachLiveCell(const Functor&);
-        template <typename Functor> IterationStatus forEachDeadCell(const Functor&);
+        template <typename Functor> inline IterationStatus forEachLiveCell(const Functor&);
+        template <typename Functor> inline IterationStatus forEachDeadCell(const Functor&);
             
         bool needsFlip();
-            
-        void flipIfNecessaryDuringMarking(HeapVersion);
-        void flipIfNecessary(HeapVersion);
-        void flipIfNecessary();
-            
+        
         void assertFlipped();
             
-        bool isOnBlocksToSweep() const { return m_isOnBlocksToSweep; }
-        void setIsOnBlocksToSweep(bool value) { m_isOnBlocksToSweep = value; }
+        bool isFreeListed() const { return m_isFreeListed; }
+        
+        size_t index() const { return m_index; }
+        
+        void removeFromAllocator();
+        
+        void didAddToAllocator(MarkedAllocator*, size_t index);
+        void didRemoveFromAllocator();
         
-        BlockState state() const { return m_state; }
-            
     private:
-        Handle(Heap&, MarkedAllocator*, size_t cellSize, const AllocatorAttributes&, void*);
+        Handle(Heap&, void*);
             
         template<DestructionMode>
         FreeList sweepHelperSelectScribbleMode(SweepMode = SweepOnly);
@@ -206,35 +195,50 @@ public:
         enum ScribbleMode { DontScribble, Scribble };
             
         template<DestructionMode, ScribbleMode>
-        FreeList sweepHelperSelectStateAndSweepMode(SweepMode = SweepOnly);
+        FreeList sweepHelperSelectEmptyMode(SweepMode = SweepOnly);
             
+        enum EmptyMode { IsEmpty, NotEmpty };
+        
+        template<EmptyMode, DestructionMode, ScribbleMode>
+        FreeList sweepHelperSelectHasNewlyAllocated(SweepMode = SweepOnly);
+        
         enum NewlyAllocatedMode { HasNewlyAllocated, DoesNotHaveNewlyAllocated };
-            
-        template<BlockState, SweepMode, DestructionMode, ScribbleMode, NewlyAllocatedMode>
+        
+        template<EmptyMode, DestructionMode, ScribbleMode, NewlyAllocatedMode>
+        FreeList sweepHelperSelectSweepMode(SweepMode = SweepOnly);
+        
+        template<EmptyMode, SweepMode, DestructionMode, ScribbleMode, NewlyAllocatedMode>
+        FreeList sweepHelperSelectFlipMode();
+        
+        enum FlipMode { NeedsFlip, DoesNotNeedFlip };
+        
+        template<EmptyMode, SweepMode, DestructionMode, ScribbleMode, NewlyAllocatedMode, FlipMode>
         FreeList specializedSweep();
             
         template<typename Func>
         void forEachFreeCell(const FreeList&, const Func&);
-            
+        
+        void setIsFreeListed();
+        
         MarkedBlock::Handle* m_prev;
         MarkedBlock::Handle* m_next;
             
-        size_t m_atomsPerCell;
-        size_t m_endAtom; // This is a fuzzy end. Always test for < m_endAtom.
+        size_t m_atomsPerCell { std::numeric_limits<size_t>::max() };
+        size_t m_endAtom { std::numeric_limits<size_t>::max() }; // This is a fuzzy end. Always test for < m_endAtom.
             
         std::unique_ptr<WTF::Bitmap<atomsPerBlock>> m_newlyAllocated;
             
         AllocatorAttributes m_attributes;
-        BlockState m_state;
-        bool m_isOnBlocksToSweep { false };
+        bool m_isFreeListed { false };
             
-        MarkedAllocator* m_allocator;
+        MarkedAllocator* m_allocator { nullptr };
+        size_t m_index { std::numeric_limits<size_t>::max() };
         WeakSet m_weakSet;
             
-        MarkedBlock* m_block;
+        MarkedBlock* m_block { nullptr };
     };
         
-    static MarkedBlock::Handle* tryCreate(Heap&, MarkedAllocator*, size_t cellSize, const AllocatorAttributes&);
+    static MarkedBlock::Handle* tryCreate(Heap&);
         
     Handle& handle();
         
@@ -248,34 +252,35 @@ public:
     size_t markCount();
 
     bool isMarked(const void*);
+    bool isMarked(HeapVersion, const void*);
+    bool isMarkedConcurrently(HeapVersion, const void*);
     bool testAndSetMarked(const void*);
         
     bool isMarkedOrNewlyAllocated(const HeapCell*);
+    bool isMarkedOrNewlyAllocated(HeapVersion, const HeapCell*);
     
-    bool isMarkedDuringWeakVisiting(HeapVersion, const HeapCell*);
-
     bool isAtom(const void*);
     void clearMarked(const void*);
-        
+    
     size_t cellSize();
     const AllocatorAttributes& attributes() const;
-
+    
     bool hasAnyMarked() const;
     void noteMarked();
         
     WeakSet& weakSet();
-
+    
     bool needsFlip(HeapVersion);
     bool needsFlip();
-        
-    void flipIfNecessaryDuringMarking(HeapVersion);
-    void flipIfNecessary(HeapVersion);
-    void flipIfNecessary();
+    
+    void aboutToMark(HeapVersion);
         
     void assertFlipped();
         
     bool needsDestruction() const { return m_needsDestruction; }
-        
+    
+    inline void resetVersion();
+    
 private:
     static const size_t atomAlignmentMask = atomSize - 1;
 
@@ -284,9 +289,9 @@ private:
     MarkedBlock(VM&, Handle&);
     Atom* atoms();
         
-    void flipIfNecessaryDuringMarkingSlow();
-    void flipIfNecessarySlow();
+    void aboutToMarkSlow(HeapVersion);
     void clearMarks();
+    void clearMarks(HeapVersion);
     void clearHasAnyMarked();
     
     void noteMarkedSlow();
@@ -469,40 +474,13 @@ inline bool MarkedBlock::needsFlip(HeapVersion heapVersion)
     return heapVersion != m_version;
 }
 
-inline void MarkedBlock::flipIfNecessary(HeapVersion heapVersion)
+inline void MarkedBlock::aboutToMark(HeapVersion heapVersion)
 {
     if (UNLIKELY(needsFlip(heapVersion)))
-        flipIfNecessarySlow();
-}
-
-inline void MarkedBlock::flipIfNecessaryDuringMarking(HeapVersion heapVersion)
-{
-    if (UNLIKELY(needsFlip(heapVersion)))
-        flipIfNecessaryDuringMarkingSlow();
+        aboutToMarkSlow(heapVersion);
     WTF::loadLoadFence();
 }
 
-inline void MarkedBlock::Handle::flipIfNecessary(HeapVersion heapVersion)
-{
-    block().flipIfNecessary(heapVersion);
-}
-
-inline void MarkedBlock::Handle::flipIfNecessaryDuringMarking(HeapVersion heapVersion)
-{
-    block().flipIfNecessaryDuringMarking(heapVersion);
-}
-
-inline void MarkedBlock::Handle::flipForEdenCollection()
-{
-    assertFlipped();
-        
-    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-    
-    ASSERT(m_state != New && m_state != FreeListed);
-    
-    m_state = Marked;
-}
-
 #if ASSERT_DISABLED
 inline void MarkedBlock::assertFlipped()
 {
@@ -514,9 +492,16 @@ inline void MarkedBlock::Handle::assertFlipped()
     block().assertFlipped();
 }
 
-inline bool MarkedBlock::isMarked(const void* p)
+inline bool MarkedBlock::isMarked(HeapVersion heapVersion, const void* p)
 {
-    assertFlipped();
+    return needsFlip(heapVersion) ? false : m_marks.get(atomNumber(p));
+}
+
+inline bool MarkedBlock::isMarkedConcurrently(HeapVersion heapVersion, const void* p)
+{
+    if (needsFlip(heapVersion))
+        return false;
+    WTF::loadLoadFence();
     return m_marks.get(atomNumber(p));
 }
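isMarkedConcurrently() pairs a load-load fence with the store-store fence in clearMarks(): the writer clears the bits, fences, then publishes the version; the reader checks the version, fences, then reads the bits, so seeing the new version guarantees seeing the cleared bits. A single-threaded sketch of the protocol using standard fences in place of WTF's (the interleaving itself is not exercised here):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of the fence pairing behind isMarkedConcurrently on a toy block
// whose marks fit in one word.
struct ConcurrentMarks {
    uint32_t marks { 0 };
    std::atomic<uint32_t> version { 0 };

    void clearMarks(uint32_t heapVersion)
    {
        marks = 0;
        std::atomic_thread_fence(std::memory_order_release); // storeStoreFence
        version.store(heapVersion, std::memory_order_relaxed);
    }

    bool isMarkedConcurrently(uint32_t heapVersion, unsigned atom)
    {
        if (version.load(std::memory_order_relaxed) != heapVersion)
            return false; // Stale version: marks are logically clear.
        std::atomic_thread_fence(std::memory_order_acquire); // loadLoadFence
        return marks & (1u << atom);
    }
};
```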
 
@@ -550,43 +535,14 @@ inline bool MarkedBlock::Handle::clearNewlyAllocated()
     return false;
 }
 
-inline bool MarkedBlock::Handle::isMarkedOrNewlyAllocated(const HeapCell* cell)
-{
-    ASSERT(m_state == Marked);
-    return m_block->isMarked(cell) || (m_newlyAllocated && isNewlyAllocated(cell));
-}
-
-inline bool MarkedBlock::isMarkedOrNewlyAllocated(const HeapCell* cell)
+inline bool MarkedBlock::Handle::isMarkedOrNewlyAllocated(HeapVersion version, const HeapCell* cell)
 {
-    ASSERT(m_handle.m_state == Marked);
-    return isMarked(cell) || (m_handle.m_newlyAllocated && m_handle.isNewlyAllocated(cell));
+    return m_block->isMarked(version, cell) || (m_newlyAllocated && isNewlyAllocated(cell));
 }
 
-inline bool MarkedBlock::isMarkedDuringWeakVisiting(HeapVersion heapVersion, const HeapCell* cell)
+inline bool MarkedBlock::isMarkedOrNewlyAllocated(HeapVersion version, const HeapCell* cell)
 {
-    if (needsFlip(heapVersion))
-        return false;
-    return isMarked(cell);
-}
-
-inline bool MarkedBlock::Handle::isLive(const HeapCell* cell)
-{
-    assertFlipped();
-    switch (m_state) {
-    case Allocated:
-        return true;
-
-    case Marked:
-        return isMarkedOrNewlyAllocated(cell);
-
-    case New:
-    case FreeListed:
-        RELEASE_ASSERT_NOT_REACHED();
-        return false;
-    }
-
-    RELEASE_ASSERT_NOT_REACHED();
-    return false;
+    return isMarked(version, cell) || (m_handle.m_newlyAllocated && m_handle.isNewlyAllocated(cell));
 }
 
 inline bool MarkedBlock::isAtom(const void* p)
@@ -603,13 +559,6 @@ inline bool MarkedBlock::isAtom(const void* p)
     return true;
 }
 
-inline bool MarkedBlock::Handle::isLiveCell(const void* p)
-{
-    if (!m_block->isAtom(p))
-        return false;
-    return isLive(static_cast<const HeapCell*>(p));
-}
-
 template <typename Functor>
 inline IterationStatus MarkedBlock::Handle::forEachCell(const Functor& functor)
 {
@@ -622,62 +571,6 @@ inline IterationStatus MarkedBlock::Handle::forEachCell(const Functor& functor)
     return IterationStatus::Continue;
 }
 
-template <typename Functor>
-inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& functor)
-{
-    flipIfNecessary();
-    HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
-        if (!isLive(cell))
-            continue;
-
-        if (functor(cell, kind) == IterationStatus::Done)
-            return IterationStatus::Done;
-    }
-    return IterationStatus::Continue;
-}
-
-template <typename Functor>
-inline IterationStatus MarkedBlock::Handle::forEachDeadCell(const Functor& functor)
-{
-    flipIfNecessary();
-    HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
-        if (isLive(cell))
-            continue;
-
-        if (functor(cell, kind) == IterationStatus::Done)
-            return IterationStatus::Done;
-    }
-    return IterationStatus::Continue;
-}
-
-inline bool MarkedBlock::Handle::needsSweeping() const
-{
-    const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
-    return m_state == Marked;
-}
-
-inline bool MarkedBlock::Handle::isAllocated() const
-{
-    const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
-    return m_state == Allocated;
-}
-
-inline bool MarkedBlock::Handle::isMarked() const
-{
-    const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
-    return m_state == Marked;
-}
-
-inline bool MarkedBlock::Handle::isFreeListed() const
-{
-    const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
-    return m_state == FreeListed;
-}
-
 inline bool MarkedBlock::hasAnyMarked() const
 {
     return m_biasedMarkCount != m_markCountBias;
@@ -711,7 +604,7 @@ template<> struct DefaultHash<JSC::MarkedBlock*> {
     typedef MarkedBlockHash Hash;
 };
 
-void printInternal(PrintStream& out, JSC::MarkedBlock::BlockState);
+void printInternal(PrintStream& out, JSC::MarkedBlock::Handle::SweepMode);
 
 } // namespace WTF
 
diff --git a/Source/JavaScriptCore/heap/MarkedBlockInlines.h b/Source/JavaScriptCore/heap/MarkedBlockInlines.h
new file mode 100644
index 0000000..d09fafb
--- /dev/null
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "MarkedAllocator.h"
+#include "MarkedBlock.h"
+
+namespace JSC {
+
+inline bool MarkedBlock::Handle::isLive(HeapVersion version, const HeapCell* cell)
+{
+    ASSERT(!isFreeListed());
+    
+    if (UNLIKELY(hasAnyNewlyAllocated())) {
+        if (isNewlyAllocated(cell))
+            return true;
+    }
+    
+    MarkedBlock& block = this->block();
+    
+    if (allocator()->isAllocated(this))
+        return true;
+    
+    if (block.needsFlip(version))
+        return false;
+
+    return block.isMarked(cell);
+}
+
+inline bool MarkedBlock::Handle::isLiveCell(HeapVersion version, const void* p)
+{
+    if (!m_block->isAtom(p))
+        return false;
+    return isLive(version, static_cast<const HeapCell*>(p));
+}
+
+template <typename Functor>
+inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& functor)
+{
+    HeapCell::Kind kind = m_attributes.cellKind;
+    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+        if (!isLive(cell))
+            continue;
+
+        if (functor(cell, kind) == IterationStatus::Done)
+            return IterationStatus::Done;
+    }
+    return IterationStatus::Continue;
+}
+
+template <typename Functor>
+inline IterationStatus MarkedBlock::Handle::forEachDeadCell(const Functor& functor)
+{
+    HeapCell::Kind kind = m_attributes.cellKind;
+    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+        if (isLive(cell))
+            continue;
+
+        if (functor(cell, kind) == IterationStatus::Done)
+            return IterationStatus::Done;
+    }
+    return IterationStatus::Continue;
+}
+
+inline void MarkedBlock::resetVersion()
+{
+    m_version = MarkedSpace::nullVersion;
+}
+
+} // namespace JSC
+
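The new `MarkedBlock::Handle::isLive()` above replaces the old per-block state enum with three checks in priority order: newly allocated cells are live, cells in fully allocated blocks are live, and otherwise the mark bit counts only if the block's version matches the heap's. The following is a simplified standalone model of that ordering; the struct and its flags are stand-ins for the real newly-allocated bits, the allocator's per-block "allocated" bitvector, and the block's mark bits, not actual JSC API.

```cpp
using HeapVersion = unsigned;

// Simplified model of MarkedBlock::Handle::isLive(). Each flag stands in for
// a query against real block/allocator state.
struct CellState {
    bool newlyAllocated;      // allocated since the last beginMarking()
    bool blockAllocated;      // allocator's "allocated" bit for the whole block
    HeapVersion blockVersion; // version the block's mark bits belong to
    bool marked;              // the cell's mark bit
};

bool isLive(const CellState& s, HeapVersion heapVersion)
{
    if (s.newlyAllocated)
        return true;  // new objects are live even if their mark bits are stale
    if (s.blockAllocated)
        return true;  // every cell in a fully allocated block is live
    if (s.blockVersion != heapVersion)
        return false; // mark bits are from an older collection: treat as dead
    return s.marked;
}
```

Note that the version check makes clearing mark bits O(1) per collection: instead of zeroing bits, the collector bumps the heap version and stale bits are simply ignored.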
index 65d46b4..1ccba3b 100644
@@ -24,6 +24,7 @@
 #include "IncrementalSweeper.h"
 #include "JSObject.h"
 #include "JSCInlines.h"
+#include "MarkedBlockInlines.h"
 #include "SuperSampler.h"
 #include <wtf/ListDump.h>
 
@@ -195,6 +196,19 @@ MarkedSpace::MarkedSpace(Heap* heap)
             
             return IterationStatus::Continue;
         });
+    
+    MarkedAllocator* previous = nullptr;
+    forEachSubspace(
+        [&] (Subspace& subspace, AllocatorAttributes) -> IterationStatus {
+            for (MarkedAllocator* allocator : subspace.bagOfAllocators) {
+                allocator->setNextAllocator(previous);
+                previous = allocator;
+            }
+            
+            return IterationStatus::Continue;
+        });
+    m_firstAllocator = previous;
+    m_allocatorForEmptyAllocation = previous;
 }
 
 MarkedSpace::~MarkedSpace()
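The constructor hunk above threads every allocator into a single intrusive singly linked list by walking each subspace's bag and prepending, which is what lets the new `forEachAllocator` iterate without revisiting the subspaces. A minimal model of the prepend loop (names hypothetical, not the real JSC types):

```cpp
#include <vector>

struct Allocator {
    Allocator* next = nullptr;
    void setNextAllocator(Allocator* n) { next = n; }
};

// Thread allocators into one singly linked list by prepending, as the
// MarkedSpace constructor does. Iteration order ends up reversed relative to
// discovery order, which is fine: callers only need "all of them".
Allocator* threadAllocators(const std::vector<Allocator*>& bag)
{
    Allocator* previous = nullptr;
    for (Allocator* allocator : bag) {
        allocator->setNextAllocator(previous);
        previous = allocator;
    }
    return previous; // head of the list (m_firstAllocator)
}
```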
@@ -222,15 +236,19 @@ void MarkedSpace::lastChanceToFinalize()
 
 void* MarkedSpace::allocate(Subspace& subspace, size_t bytes)
 {
-    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes))
-        return allocator->allocate();
+    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes)) {
+        void* result = allocator->allocate();
+        return result;
+    }
     return allocateLarge(subspace, bytes);
 }
 
 void* MarkedSpace::tryAllocate(Subspace& subspace, size_t bytes)
 {
-    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes))
-        return allocator->tryAllocate();
+    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes)) {
+        void* result = allocator->tryAllocate();
+        return result;
+    }
     return tryAllocateLarge(subspace, bytes);
 }
 
@@ -259,9 +277,10 @@ void* MarkedSpace::tryAllocateLarge(Subspace& subspace, size_t size)
 void MarkedSpace::sweep()
 {
     m_heap->sweeper()->willFinishSweeping();
-    forEachBlock(
-        [&] (MarkedBlock::Handle* block) {
-            block->sweep();
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) -> IterationStatus {
+            allocator.sweep();
+            return IterationStatus::Continue;
         });
 }
 
@@ -284,33 +303,23 @@ void MarkedSpace::sweepLargeAllocations()
     m_largeAllocationsNurseryOffset = m_largeAllocations.size();
 }
 
-void MarkedSpace::zombifySweep()
-{
-    if (Options::logGC())
-        dataLog("Zombifying sweep...");
-    m_heap->sweeper()->willFinishSweeping();
-    forEachBlock(
-        [&] (MarkedBlock::Handle* block) {
-            if (block->needsSweeping())
-                block->sweep();
-        });
-}
-
-void MarkedSpace::resetAllocators()
+void MarkedSpace::prepareForAllocation()
 {
     forEachAllocator(
         [&] (MarkedAllocator& allocator) -> IterationStatus {
-            allocator.reset();
+            allocator.prepareForAllocation();
             return IterationStatus::Continue;
         });
 
-    m_blocksWithNewObjects.clear();
     m_activeWeakSets.takeFrom(m_newActiveWeakSets);
+    
     if (m_heap->operationInProgress() == EdenCollection)
         m_largeAllocationsNurseryOffsetForSweep = m_largeAllocationsNurseryOffset;
     else
         m_largeAllocationsNurseryOffsetForSweep = 0;
     m_largeAllocationsNurseryOffset = m_largeAllocations.size();
+    
+    m_allocatorForEmptyAllocation = m_firstAllocator;
 }
 
 void MarkedSpace::visitWeakSets(HeapRootVisitor& heapRootVisitor)
@@ -410,11 +419,11 @@ void MarkedSpace::freeOrShrinkBlock(MarkedBlock::Handle* block)
 
 void MarkedSpace::shrink()
 {
-    forEachBlock(
-        [&] (MarkedBlock::Handle* block) {
-            freeOrShrinkBlock(block);
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) -> IterationStatus {
+            allocator.shrink();
+            return IterationStatus::Continue;
         });
-    // For LargeAllocations, we do the moral equivalent in sweepLargeAllocations().
 }
 
 void MarkedSpace::clearNewlyAllocated()
@@ -429,57 +438,61 @@ void MarkedSpace::clearNewlyAllocated()
     for (unsigned i = m_largeAllocationsOffsetForThisCollection; i < m_largeAllocations.size(); ++i)
         m_largeAllocations[i]->clearNewlyAllocated();
 
-#if !ASSERT_DISABLED
-    forEachBlock(
-        [&] (MarkedBlock::Handle* block) {
-            ASSERT(!block->clearNewlyAllocated());
-        });
-
-    for (LargeAllocation* allocation : m_largeAllocations)
-        ASSERT(!allocation->isNewlyAllocated());
-#endif // !ASSERT_DISABLED
-}
-
-#ifndef NDEBUG 
-struct VerifyMarked : MarkedBlock::VoidFunctor { 
-    void operator()(MarkedBlock::Handle* block) const
-    {
-        if (block->needsFlip())
-            return;
-        switch (block->m_state) {
-        case MarkedBlock::Marked:
-            return;
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-        }
+    if (!ASSERT_DISABLED) {
+        forEachBlock(
+            [&] (MarkedBlock::Handle* block) {
+                ASSERT_UNUSED(block, !block->clearNewlyAllocated());
+            });
+        
+        for (LargeAllocation* allocation : m_largeAllocations)
+            ASSERT_UNUSED(allocation, !allocation->isNewlyAllocated());
     }
-}; 
-#endif 
+}
 
-void MarkedSpace::flip()
+void MarkedSpace::beginMarking()
 {
-    if (m_heap->operationInProgress() == EdenCollection) {
-        for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
-            m_blocksWithNewObjects[i]->flipForEdenCollection();
-    } else {
-        HeapVersion nextVersion = m_version + 1;
-        if (UNLIKELY(nextVersion == initialVersion)) {
-            // Oh no! Version wrap-around! We handle this by flipping all blocks. This happens
-            // super rarely, probably never for most users.
+    if (m_heap->operationInProgress() == FullCollection) {
+        forEachAllocator(
+            [&] (MarkedAllocator& allocator) -> IterationStatus {
+                allocator.beginMarkingForFullCollection();
+                return IterationStatus::Continue;
+            });
+
+        m_version = nextVersion(m_version);
+        
+        if (UNLIKELY(m_version == initialVersion)) {
+            // Oh no! Version wrap-around! We handle this by setting all block versions to null.
             forEachBlock(
                 [&] (MarkedBlock::Handle* handle) {
-                    handle->flipIfNecessary();
+                    handle->block().resetVersion();
                 });
         }
-        m_version = nextVersion; // Henceforth, flipIfNecessary() will trigger on all blocks.
+        
         for (LargeAllocation* allocation : m_largeAllocations)
             allocation->flip();
     }
 
-#ifndef NDEBUG
-    VerifyMarked verifyFunctor;
-    forEachBlock(verifyFunctor);
-#endif
+    if (!ASSERT_DISABLED) {
+        forEachBlock(
+            [&] (MarkedBlock::Handle* block) {
+                if (block->needsFlip())
+                    return;
+                ASSERT(!block->isFreeListed());
+            });
+    }
+    
+    m_isMarking = true;
+}
+
+void MarkedSpace::endMarking()
+{
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) -> IterationStatus {
+            allocator.endMarking();
+            return IterationStatus::Continue;
+        });
+    
+    m_isMarking = false;
 }
 
 void MarkedSpace::willStartIterating()
@@ -540,19 +553,66 @@ void MarkedSpace::addActiveWeakSet(WeakSet* weakSet)
 
 void MarkedSpace::didAddBlock(MarkedBlock::Handle* block)
 {
+    // WARNING: This function is called before block is fully initialized. The block will not know
+    // its cellSize() or attributes(). The latter implies that you can't ask things like
+    // needsDestruction().
     m_capacity += MarkedBlock::blockSize;
     m_blocks.add(&block->block());
 }
 
 void MarkedSpace::didAllocateInBlock(MarkedBlock::Handle* block)
 {
-    block->assertFlipped();
-    m_blocksWithNewObjects.append(block);
-    
     if (block->weakSet().isOnList()) {
         block->weakSet().remove();
         m_newActiveWeakSets.append(&block->weakSet());
     }
 }
 
+MarkedBlock::Handle* MarkedSpace::findEmptyBlockToSteal()
+{
+    for (; m_allocatorForEmptyAllocation; m_allocatorForEmptyAllocation = m_allocatorForEmptyAllocation->nextAllocator()) {
+        if (MarkedBlock::Handle* block = m_allocatorForEmptyAllocation->findEmptyBlockToSteal())
+            return block;
+    }
+    return nullptr;
+}
+
+void MarkedSpace::snapshotUnswept()
+{
+    if (m_heap->operationInProgress() == EdenCollection) {
+        forEachAllocator(
+            [&] (MarkedAllocator& allocator) -> IterationStatus {
+                allocator.snapshotUnsweptForEdenCollection();
+                return IterationStatus::Continue;
+            });
+    } else {
+        forEachAllocator(
+            [&] (MarkedAllocator& allocator) -> IterationStatus {
+                allocator.snapshotUnsweptForFullCollection();
+                return IterationStatus::Continue;
+            });
+    }
+}
+
+void MarkedSpace::assertNoUnswept()
+{
+    if (ASSERT_DISABLED)
+        return;
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) -> IterationStatus {
+            allocator.assertNoUnswept();
+            return IterationStatus::Continue;
+        });
+}
+
+void MarkedSpace::dumpBits(PrintStream& out)
+{
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) -> IterationStatus {
+            out.print("Bits for ", allocator, ":\n");
+            allocator.dumpBits(out);
+            return IterationStatus::Continue;
+        });
+}
+
 } // namespace JSC
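`MarkedSpace::findEmptyBlockToSteal()` above keeps a cursor (`m_allocatorForEmptyAllocation`) into the allocator list and advances it across calls, so within one cycle each allocator's empty blocks are scanned at most once; `prepareForAllocation()` resets the cursor to the head. A hypothetical sketch of that cursor behavior (simplified types, not the real JSC classes):

```cpp
#include <vector>

struct Block {};

struct Allocator {
    std::vector<Block*> emptyBlocks;
    Allocator* next = nullptr;

    // Hand over one empty block, if any remain.
    Block* findEmptyBlockToSteal()
    {
        if (emptyBlocks.empty())
            return nullptr;
        Block* result = emptyBlocks.back();
        emptyBlocks.pop_back();
        return result;
    }
};

struct Space {
    Allocator* cursor = nullptr; // models m_allocatorForEmptyAllocation

    // Resume scanning from the cursor; exhausted allocators are skipped
    // permanently until the cursor is reset at the next collection cycle.
    Block* findEmptyBlockToSteal()
    {
        for (; cursor; cursor = cursor->next) {
            if (Block* block = cursor->findEmptyBlockToSteal())
                return block;
        }
        return nullptr;
    }
};
```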
index ddd20c2..684b4a9 100644
@@ -65,7 +65,16 @@ public:
 
     static const size_t numSizeClasses = largeCutoff / sizeStep;
     
-    static const HeapVersion initialVersion = 42;  // This can be any value, including random garbage, so long as it's consistent for the lifetime of the process.
+    static const HeapVersion nullVersion = 0; // The version of freshly allocated blocks.
+    static const HeapVersion initialVersion = 1; // The version that the heap starts out with.
+    
+    HeapVersion nextVersion(HeapVersion version)
+    {
+        version++;
+        if (version == nullVersion)
+            version++;
+        return version;
+    }
     
     static size_t sizeClassToIndex(size_t size)
     {
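The `nextVersion()` helper in the hunk above increments the heap version but must never return `nullVersion`, since that is the version of freshly allocated blocks; on unsigned wrap-around it skips past zero. A self-contained sketch of the same arithmetic (assuming a 32-bit `HeapVersion`, which is an assumption of this example, not stated in the patch):

```cpp
#include <cstdint>

using HeapVersion = uint32_t;

const HeapVersion nullVersion = 0;    // version of freshly allocated blocks
const HeapVersion initialVersion = 1; // version the heap starts out with

// Bump the heap version, skipping nullVersion on unsigned wrap-around so a
// live heap version can never collide with a fresh block's null version.
HeapVersion nextVersion(HeapVersion version)
{
    version++;
    if (version == nullVersion)
        version++;
    return version;
}
```

Wrap-around is why `beginMarking()` still has a slow path: when the version returns to `initialVersion`, all blocks must have their versions reset so stale mark bits are not misread as current.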
@@ -114,7 +123,7 @@ public:
     Subspace& subspaceForObjectsWithoutDestructor() { return m_normalSpace; }
     Subspace& subspaceForAuxiliaryData() { return m_auxiliarySpace; }
     
-    void resetAllocators();
+    void prepareForAllocation();
 
     void visitWeakSets(HeapRootVisitor&);
     void reapWeakSets();
@@ -144,11 +153,13 @@ public:
     void didConsumeFreeList(MarkedBlock::Handle*);
     void didAllocateInBlock(MarkedBlock::Handle*);
 
-    void flip();
+    void beginMarking();
+    void endMarking();
+    void snapshotUnswept();
     void clearNewlyAllocated();
     void sweep();
     void sweepLargeAllocations();
-    void zombifySweep();
+    void assertNoUnswept();
     size_t objectCount();
     size_t size();
     size_t capacity();
@@ -157,8 +168,6 @@ public:
     
     HeapVersion version() const { return m_version; }
 
-    const Vector<MarkedBlock::Handle*>& blocksWithNewObjects() const { return m_blocksWithNewObjects; }
-    
     const Vector<LargeAllocation*>& largeAllocations() const { return m_largeAllocations; }
     unsigned largeAllocationsNurseryOffset() const { return m_largeAllocationsNurseryOffset; }
     unsigned largeAllocationsOffsetForThisCollection() const { return m_largeAllocationsOffsetForThisCollection; }
@@ -169,13 +178,16 @@ public:
     LargeAllocation** largeAllocationsForThisCollectionEnd() const { return m_largeAllocationsForThisCollectionEnd; }
     unsigned largeAllocationsForThisCollectionSize() const { return m_largeAllocationsForThisCollectionSize; }
     
+    MarkedAllocator* firstAllocator() const { return m_firstAllocator; }
+    MarkedAllocator* allocatorForEmptyAllocation() const { return m_allocatorForEmptyAllocation; }
+    
+    MarkedBlock::Handle* findEmptyBlockToSteal();
+    
     // When this is true it means that we have flipped but the mark bits haven't converged yet.
     bool isMarking() const { return m_isMarking; }
     
-    // FIXME: After https://bugs.webkit.org/show_bug.cgi?id=161581, MarkedSpace will control this
-    // flag directly.
-    void setIsMarking(bool value) { m_isMarking = value; }
-
+    void dumpBits(PrintStream& = WTF::dataFile());
+    
 private:
     friend class LLIntOffsetsExtractor;
     friend class JIT;
@@ -190,8 +202,8 @@ private:
     
     void initializeSubspace(Subspace&);
 
-    template<typename Functor> void forEachAllocator(const Functor&);
-    template<typename Functor> void forEachSubspace(const Functor&);
+    template<typename Functor> inline void forEachAllocator(const Functor&);
+    template<typename Functor> inline void forEachSubspace(const Functor&);
     
     void addActiveWeakSet(WeakSet*);
 
@@ -205,7 +217,6 @@ private:
     bool m_isIterating;
     bool m_isMarking { false };
     MarkedBlockSet m_blocks;
-    Vector<MarkedBlock::Handle*> m_blocksWithNewObjects;
     Vector<LargeAllocation*> m_largeAllocations;
     unsigned m_largeAllocationsNurseryOffset { 0 };
     unsigned m_largeAllocationsOffsetForThisCollection { 0 };
@@ -215,40 +226,11 @@ private:
     unsigned m_largeAllocationsForThisCollectionSize { 0 };
     SentinelLinkedList<WeakSet, BasicRawSentinelNode<WeakSet>> m_activeWeakSets;
     SentinelLinkedList<WeakSet, BasicRawSentinelNode<WeakSet>> m_newActiveWeakSets;
+    
+    MarkedAllocator* m_firstAllocator { nullptr };
+    MarkedAllocator* m_allocatorForEmptyAllocation { nullptr };
 };
 
-template<typename Functor> inline void MarkedSpace::forEachLiveCell(HeapIterationScope&, const Functor& functor)
-{
-    ASSERT(isIterating());
-    BlockIterator end = m_blocks.set().end();
-    for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
-        if ((*it)->handle().forEachLiveCell(functor) == IterationStatus::Done)
-            return;
-    }
-    for (LargeAllocation* allocation : m_largeAllocations) {
-        if (allocation->isLive()) {
-            if (functor(allocation->cell(), allocation->attributes().cellKind) == IterationStatus::Done)
-                return;
-        }
-    }
-}
-
-template<typename Functor> inline void MarkedSpace::forEachDeadCell(HeapIterationScope&, const Functor& functor)
-{
-    ASSERT(isIterating());
-    BlockIterator end = m_blocks.set().end();
-    for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
-        if ((*it)->handle().forEachDeadCell(functor) == IterationStatus::Done)
-            return;
-    }
-    for (LargeAllocation* allocation : m_largeAllocations) {
-        if (!allocation->isLive()) {
-            if (functor(allocation->cell(), allocation->attributes().cellKind) == IterationStatus::Done)
-                return;
-        }
-    }
-}
-
 inline MarkedAllocator* MarkedSpace::allocatorFor(Subspace& space, size_t bytes)
 {
     ASSERT(bytes);
@@ -304,15 +286,10 @@ template <typename Functor> inline void MarkedSpace::forEachBlock(const Functor&
 template <typename Functor>
 void MarkedSpace::forEachAllocator(const Functor& functor)
 {
-    forEachSubspace(
-        [&] (Subspace& subspace, AllocatorAttributes) -> IterationStatus {
-            for (MarkedAllocator* allocator : subspace.bagOfAllocators) {
-                if (functor(*allocator) == IterationStatus::Done)
-                    return IterationStatus::Done;
-            }
-            
-            return IterationStatus::Continue;
-        });
+    for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocator()) {
+        if (functor(*allocator) == IterationStatus::Done)
+            return;
+    }
 }
 
 template<typename Functor>
diff --git a/Source/JavaScriptCore/heap/MarkedSpaceInlines.h b/Source/JavaScriptCore/heap/MarkedSpaceInlines.h
new file mode 100644 (file)
index 0000000..e629cb0
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "MarkedBlockInlines.h"
+#include "MarkedSpace.h"
+
+namespace JSC {
+
+template<typename Functor> inline void MarkedSpace::forEachLiveCell(HeapIterationScope&, const Functor& functor)
+{
+    ASSERT(isIterating());
+    BlockIterator end = m_blocks.set().end();
+    for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
+        if ((*it)->handle().forEachLiveCell(functor) == IterationStatus::Done)
+            return;
+    }
+    for (LargeAllocation* allocation : m_largeAllocations) {
+        if (allocation->isLive()) {
+            if (functor(allocation->cell(), allocation->attributes().cellKind) == IterationStatus::Done)
+                return;
+        }
+    }
+}
+
+template<typename Functor> inline void MarkedSpace::forEachDeadCell(HeapIterationScope&, const Functor& functor)
+{
+    ASSERT(isIterating());
+    BlockIterator end = m_blocks.set().end();
+    for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
+        if ((*it)->handle().forEachDeadCell(functor) == IterationStatus::Done)
+            return;
+    }
+    for (LargeAllocation* allocation : m_largeAllocations) {
+        if (!allocation->isLive()) {
+            if (functor(allocation->cell(), allocation->attributes().cellKind) == IterationStatus::Done)
+                return;
+        }
+    }
+}
+
+} // namespace JSC
+
index 269b1d5..3f8b233 100644
@@ -189,6 +189,8 @@ void SlotVisitor::setMarkedAndAppendToMarkStack(JSCell* cell)
     validate(cell);
 #endif
     
+    //dataLog("    Marking ", RawPointer(cell), "\n");
+    
     if (cell->isLargeAllocation())
         setMarkedAndAppendToMarkStack(cell->largeAllocation(), cell);
     else
@@ -198,7 +200,7 @@ void SlotVisitor::setMarkedAndAppendToMarkStack(JSCell* cell)
 template<typename ContainerType>
 ALWAYS_INLINE void SlotVisitor::setMarkedAndAppendToMarkStack(ContainerType& container, JSCell* cell)
 {
-    container.flipIfNecessaryDuringMarking(m_version);
+    container.aboutToMark(m_version);
     
     if (container.testAndSetMarked(cell))
         return;
@@ -246,10 +248,8 @@ void SlotVisitor::markAuxiliary(const void* base)
     
     ASSERT(cell->heap() == heap());
     
-    if (Heap::testAndSetMarked(m_version, cell)) {
-        RELEASE_ASSERT(Heap::isMarkedConcurrently(cell));
+    if (Heap::testAndSetMarked(m_version, cell))
         return;
-    }
     
     noteLiveAuxiliaryCell(cell);
 }
index 2d82f67..bbdfdc2 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2009, 2012, 2013, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,7 +28,9 @@
 
 #include "JSExportMacros.h"
 #include <cstddef>
+#include <wtf/HashTraits.h>
 #include <wtf/Noncopyable.h>
+#include <wtf/VectorTraits.h>
 
 namespace JSC {
 
@@ -51,33 +53,33 @@ public:
     {
     }
 
-    Weak(T*, WeakHandleOwner* = 0, void* context = 0);
+    inline Weak(T*, WeakHandleOwner* = 0, void* context = 0);
 
     enum HashTableDeletedValueTag { HashTableDeletedValue };
-    bool isHashTableDeletedValue() const;
-    Weak(HashTableDeletedValueTag);
+    inline bool isHashTableDeletedValue() const;
+    inline Weak(HashTableDeletedValueTag);
 
-    Weak(Weak&&);
+    inline Weak(Weak&&);
 
     ~Weak()
     {
         clear();
     }
 
-    void swap(Weak&);
+    inline void swap(Weak&);
 
-    Weak& operator=(Weak&&);
+    inline Weak& operator=(Weak&&);
 
-    bool operator!() const;
-    T* operator->() const;
-    T& operator*() const;
-    T* get() const;
+    inline bool operator!() const;
+    inline T* operator->() const;
+    inline T& operator*() const;
+    inline T* get() const;
 
-    bool was(T*) const;
+    inline bool was(T*) const;
 
-    explicit operator bool() const;
+    inline explicit operator bool() const;
 
-    WeakImpl* leakImpl() WARN_UNUSED_RETURN;
+    inline WeakImpl* leakImpl() WARN_UNUSED_RETURN;
     void clear()
     {
         if (!m_impl)
@@ -86,11 +88,30 @@ public:
     }
     
 private:
-    static WeakImpl* hashTableDeletedValue();
+    static inline WeakImpl* hashTableDeletedValue();
 
     WeakImpl* m_impl;
 };
 
 } // namespace JSC
 
+namespace WTF {
+
+template<typename T> struct VectorTraits<JSC::Weak<T>> : SimpleClassVectorTraits {
+    static const bool canCompareWithMemcmp = false;
+};
+
+template<typename T> struct HashTraits<JSC::Weak<T>> : SimpleClassHashTraits<JSC::Weak<T>> {
+    typedef JSC::Weak<T> StorageType;
+
+    typedef std::nullptr_t EmptyValueType;
+    static EmptyValueType emptyValue() { return nullptr; }
+
+    typedef T* PeekType;
+    static PeekType peek(const StorageType& value) { return value.get(); }
+    static PeekType peek(EmptyValueType) { return PeekType(); }
+};
+
+} // namespace WTF
+
 #endif // Weak_h
index 00dbc0e..aed147f 100644
@@ -113,7 +113,7 @@ void WeakBlock::specializedVisit(ContainerType& container, HeapRootVisitor& heap
             continue;
 
         const JSValue& jsValue = weakImpl->jsValue();
-        if (container.isMarkedDuringWeakVisiting(version, jsValue.asCell()))
+        if (container.isMarkedConcurrently(version, jsValue.asCell()))
             continue;
         
         if (!weakHandleOwner->isReachableFromOpaqueRoots(Handle<Unknown>::wrapSlot(&const_cast<JSValue&>(jsValue)), weakImpl->context(), visitor))
@@ -147,17 +147,14 @@ void WeakBlock::reap()
     // If this WeakBlock doesn't belong to a CellContainer, we won't even be here.
     ASSERT(m_container);
     
-    m_container.flipIfNecessary();
-
-    // We only reap after marking.
-    ASSERT(m_container.isMarked());
+    HeapVersion version = m_container.heap()->objectSpace().version();
 
     for (size_t i = 0; i < weakImplCount(); ++i) {
         WeakImpl* weakImpl = &weakImpls()[i];
         if (weakImpl->state() > WeakImpl::Dead)
             continue;
 
-        if (m_container.isMarked(weakImpl->jsValue().asCell())) {
+        if (m_container.isMarked(version, weakImpl->jsValue().asCell())) {
             ASSERT(weakImpl->state() == WeakImpl::Live);
             continue;
         }
index 4653a9f..f8679f7 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2009, 2012, 2013, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -29,7 +29,6 @@
 #include "JSCell.h"
 #include "WeakSetInlines.h"
 #include <wtf/Assertions.h>
-#include <wtf/HashTraits.h>
 
 namespace JSC {
 
@@ -149,23 +148,4 @@ template<typename T> inline void weakClear(Weak<T>& weak, T* cell)
 
 } // namespace JSC
 
-namespace WTF {
-
-template<typename T> struct VectorTraits<JSC::Weak<T>> : SimpleClassVectorTraits {
-    static const bool canCompareWithMemcmp = false;
-};
-
-template<typename T> struct HashTraits<JSC::Weak<T>> : SimpleClassHashTraits<JSC::Weak<T>> {
-    typedef JSC::Weak<T> StorageType;
-
-    typedef std::nullptr_t EmptyValueType;
-    static EmptyValueType emptyValue() { return nullptr; }
-
-    typedef T* PeekType;
-    static PeekType peek(const StorageType& value) { return value.get(); }
-    static PeekType peek(EmptyValueType) { return PeekType(); }
-};
-
-} // namespace WTF
-
 #endif // WeakInlines_h
index 4176d23..1862a42 100644
 #include "ThunkGenerator.h"
 #include "Weak.h"
 #include "WeakHandleOwner.h"
-#include "WeakInlines.h"
 #include <tuple>
 #include <wtf/HashMap.h>
 #include <wtf/ThreadingPrimitives.h>
+#include <wtf/text/StringHash.h>
 
 namespace JSC {
 
index 1bbab0c..973c2e8 100644
 #include "MapConstructor.h"
 #include "MapIteratorPrototype.h"
 #include "MapPrototype.h"
+#include "MarkedSpaceInlines.h"
 #include "MathObject.h"
 #include "Microtask.h"
 #include "ModuleLoaderPrototype.h"
index 86bc94a..b7aa202 100644
@@ -43,6 +43,7 @@
 #include "ObjectPrototype.h"
 #include "PropertyDescriptor.h"
 #include "PropertyNameArray.h"
+#include "PrototypeMapInlines.h"
 #include "ProxyObject.h"
 #include "Reject.h"
 #include "SlotVisitorInlines.h"
index 192ad4b..9f5d985 100644
@@ -28,6 +28,7 @@
 
 #include "JSGlobalObject.h"
 #include "JSCInlines.h"
+#include "PrototypeMapInlines.h"
 
 namespace JSC {
 
index fec51ad..00f0b44 100644
@@ -187,6 +187,7 @@ typedef const char* optionString;
     v(unsigned, largeAllocationCutoff, 100000, Normal, nullptr) \
     v(bool, dumpSizeClasses, false, Normal, nullptr) \
     v(bool, useBumpAllocator, true, Normal, nullptr) \
+    v(bool, stealEmptyBlocksFromOtherAllocators, true, Normal, nullptr) \
     v(bool, eagerlyUpdateTopCallFrame, false, Normal, nullptr) \
     \
     v(bool, useOSREntryToDFG, true, Normal, nullptr) \
@@ -302,7 +303,6 @@ typedef const char* optionString;
     v(unsigned, numberOfGCMarkers, computeNumberOfGCMarkers(7), Normal, nullptr) \
     v(unsigned, opaqueRootMergeThreshold, 1000, Normal, nullptr) \
     v(double, minHeapUtilization, 0.8, Normal, nullptr) \
-    v(double, minCopiedBlockUtilization, 0.9, Normal, nullptr) \
     v(double, minMarkedBlockUtilization, 0.9, Normal, nullptr) \
     v(unsigned, slowPathAllocsBetweenGCs, 0, Normal, "force a GC on every Nth slow path alloc, where N is specified by this option") \
     v(bool, deferGCShouldCollectWithProbability, false, Normal, "If true, we perform a collection based on flipping a coin according the probability in the 'deferGCProbability' option when DeferGC is destructed.") \
index d9a150d..d9e0c08 100644 (file)
@@ -29,6 +29,7 @@
 #include "IndexingType.h"
 #include "JSGlobalObject.h"
 #include "JSCInlines.h"
+#include "PrototypeMapInlines.h"
 
 namespace JSC {
 
index 4d2122d..9628224 100644 (file)
@@ -26,6 +26,8 @@
 #ifndef PrototypeMap_h
 #define PrototypeMap_h
 
+#include "IndexingType.h"
+#include "JSTypeInfo.h"
 #include "WeakGCMap.h"
 #include <wtf/TriState.h>
 
@@ -48,7 +50,7 @@ public:
     JS_EXPORT_PRIVATE Structure* emptyStructureForPrototypeFromBaseStructure(JSObject*, Structure*);
     void clearEmptyObjectStructureForPrototype(JSObject*, unsigned inlineCapacity);
     JS_EXPORT_PRIVATE void addPrototype(JSObject*);
-    TriState isPrototype(JSObject*) const; // Returns a conservative estimate.
+    inline TriState isPrototype(JSObject*) const; // Returns a conservative estimate.
 
 private:
     Structure* createEmptyStructure(JSObject* prototype, const TypeInfo&, const ClassInfo*, IndexingType, unsigned inlineCapacity);
@@ -58,18 +60,6 @@ private:
     StructureMap m_structures;
 };
 
-inline TriState PrototypeMap::isPrototype(JSObject* object) const
-{
-    if (!m_prototypes.contains(object))
-        return FalseTriState;
-
-    // We know that 'object' was used as a prototype at one time, so be
-    // conservative and say that it might still be so. (It would be expensive
-    // to find out for sure, and we don't know of any cases where being precise
-    // would improve performance.)
-    return MixedTriState;
-}
-
 } // namespace JSC
 
 #endif // PrototypeMap_h
diff --git a/Source/JavaScriptCore/runtime/PrototypeMapInlines.h b/Source/JavaScriptCore/runtime/PrototypeMapInlines.h
new file mode 100644 (file)
index 0000000..0777668
--- /dev/null
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "PrototypeMap.h"
+#include "WeakGCMapInlines.h"
+
+namespace JSC {
+
+inline TriState PrototypeMap::isPrototype(JSObject* object) const
+{
+    if (!m_prototypes.contains(object))
+        return FalseTriState;
+
+    // We know that 'object' was used as a prototype at one time, so be
+    // conservative and say that it might still be so. (It would be expensive
+    // to find out for sure, and we don't know of any cases where being precise
+    // would improve performance.)
+    return MixedTriState;
+}
+
+} // namespace JSC
+
index 0df04cb..e078d2a 100644 (file)
@@ -29,7 +29,6 @@
 #include "RegExpKey.h"
 #include "Strong.h"
 #include "Weak.h"
-#include "WeakInlines.h"
 #include <array>
 #include <wtf/HashMap.h>
 
index 81f7a7d..f00488b 100644 (file)
@@ -42,6 +42,7 @@
 #include "LLIntPCRanges.h"
 #include "MarkedBlock.h"
 #include "MarkedBlockSet.h"
+#include "MarkedSpaceInlines.h"
 #include "PCToCodeOriginMap.h"
 #include "SlotVisitor.h"
 #include "SlotVisitorInlines.h"
index 2627030..d59ec83 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2009, 2015-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -27,7 +27,6 @@
 #define WeakGCMap_h
 
 #include "Weak.h"
-#include "WeakInlines.h"
 #include <wtf/HashMap.h>
 
 namespace JSC {
@@ -81,19 +80,9 @@ public:
         return false;
     }
 
-    iterator find(const KeyType& key)
-    {
-        iterator it = m_map.find(key);
-        iterator end = m_map.end();
-        if (it != end && !it->value) // Found a zombie value.
-            return end;
-        return it;
-    }
+    inline iterator find(const KeyType& key);
 
-    const_iterator find(const KeyType& key) const
-    {
-        return const_cast<WeakGCMap*>(this)->find(key);
-    }
+    inline const_iterator find(const KeyType& key) const;
 
     template<typename Functor>
     void forEach(Functor functor)
@@ -104,10 +93,7 @@ public:
         }
     }
 
-    bool contains(const KeyType& key) const
-    {
-        return find(key) != m_map.end();
-    }
+    inline bool contains(const KeyType& key) const;
 
     void pruneStaleEntries();
 
index b90acc0..849fd4b 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,6 +28,7 @@
 
 #include "HeapInlines.h"
 #include "WeakGCMap.h"
+#include "WeakInlines.h"
 
 namespace JSC {
 
@@ -47,6 +48,28 @@ inline WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::~WeakGCMap()
 }
 
 template<typename KeyArg, typename ValueArg, typename HashArg, typename KeyTraitsArg>
+inline typename WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::iterator WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::find(const KeyType& key)
+{
+    iterator it = m_map.find(key);
+    iterator end = m_map.end();
+    if (it != end && !it->value) // Found a zombie value.
+        return end;
+    return it;
+}
+
+template<typename KeyArg, typename ValueArg, typename HashArg, typename KeyTraitsArg>
+inline typename WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::const_iterator WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::find(const KeyType& key) const
+{
+    return const_cast<WeakGCMap*>(this)->find(key);
+}
+
+template<typename KeyArg, typename ValueArg, typename HashArg, typename KeyTraitsArg>
+inline bool WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::contains(const KeyType& key) const
+{
+    return find(key) != m_map.end();
+}
+
+template<typename KeyArg, typename ValueArg, typename HashArg, typename KeyTraitsArg>
 NEVER_INLINE void WeakGCMap<KeyArg, ValueArg, HashArg, KeyTraitsArg>::pruneStaleEntries()
 {
     m_map.removeIf([](typename HashMapType::KeyValuePairType& entry) {
index e961c61..514d0b9 100644 (file)
@@ -31,6 +31,7 @@
 #include "HeapIterationScope.h"
 #include "JSCInlines.h"
 #include "JSFunction.h"
+#include "MarkedSpaceInlines.h"
 #include "StackVisitor.h"
 #include <wtf/DataLog.h>
 #include <wtf/StringPrintStream.h>
index 46c9a67..fc528de 100644 (file)
@@ -1,3 +1,26 @@
+2016-09-20  Filip Pizlo  <fpizlo@apple.com>
+
+        Make MarkedBlock state tracking support overlapped allocation and marking state
+        https://bugs.webkit.org/show_bug.cgi?id=161581
+
+        Reviewed by Geoffrey Garen.
+        
+        The main change here is to bring back FastBitVector.cpp, so that I could
+        outline some large slow-path functions. It also adds some utilities that the
+        GC uses, such as atomicSetAndCheck() and isEmpty().
+
+        * WTF.xcodeproj/project.pbxproj:
+        * wtf/CMakeLists.txt:
+        * wtf/FastBitVector.cpp: Added.
+        (WTF::FastBitVectorWordOwner::setEqualsSlow):
+        (WTF::FastBitVectorWordOwner::resizeSlow):
+        * wtf/FastBitVector.h:
+        (WTF::FastBitVectorWordOwner::operator=):
+        (WTF::FastBitVectorWordOwner::resize):
+        (WTF::FastBitVectorImpl::isEmpty):
+        (WTF::FastBitVector::atomicSetAndCheck):
+        (WTF::FastBitVector::operator[]): Deleted.
+
 2016-09-20  Jonathan Bedard  <jbedard@apple.com>
 
         Undefined behavior: Left shift negative number
index f8b2121..4a3ab35 100644 (file)
@@ -28,6 +28,7 @@
                0F4570431BE5B58F0062A629 /* Dominators.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4570421BE5B58F0062A629 /* Dominators.h */; };
                0F4570451BE834410062A629 /* BubbleSort.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4570441BE834410062A629 /* BubbleSort.h */; };
                0F725CAC1C50461600AD943A /* RangeSet.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F725CAB1C50461600AD943A /* RangeSet.h */; };
+               0F7C5FB61D885CF20044F5E2 /* FastBitVector.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7C5FB51D885CF20044F5E2 /* FastBitVector.cpp */; };
                0F824A681B7443A0002E345D /* ParkingLot.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F824A641B7443A0002E345D /* ParkingLot.cpp */; };
                0F824A691B7443A0002E345D /* ParkingLot.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F824A651B7443A0002E345D /* ParkingLot.h */; };
                0F87105A16643F190090B0AD /* RawPointer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F87105916643F190090B0AD /* RawPointer.h */; };
                0F4570421BE5B58F0062A629 /* Dominators.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Dominators.h; sourceTree = "<group>"; };
                0F4570441BE834410062A629 /* BubbleSort.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BubbleSort.h; sourceTree = "<group>"; };
                0F725CAB1C50461600AD943A /* RangeSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RangeSet.h; sourceTree = "<group>"; };
+               0F7C5FB51D885CF20044F5E2 /* FastBitVector.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = FastBitVector.cpp; sourceTree = "<group>"; };
                0F824A641B7443A0002E345D /* ParkingLot.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ParkingLot.cpp; sourceTree = "<group>"; };
                0F824A651B7443A0002E345D /* ParkingLot.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ParkingLot.h; sourceTree = "<group>"; };
                0F87105916643F190090B0AD /* RawPointer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RawPointer.h; sourceTree = "<group>"; };
                                A8A47298151A825A004123FF /* dtoa.h */,
                                1AEA88E11D6BBCF400E5AD64 /* EnumTraits.h */,
                                A8A4729F151A825A004123FF /* ExportMacros.h */,
+                               0F7C5FB51D885CF20044F5E2 /* FastBitVector.cpp */,
                                0FD81AC4154FB22E00983E72 /* FastBitVector.h */,
                                A8A472A1151A825A004123FF /* FastMalloc.cpp */,
                                A8A472A2151A825A004123FF /* FastMalloc.h */,
                                CD5497AC15857D0300B5BC30 /* MediaTime.cpp in Sources */,
                                A8A473EC151A825B004123FF /* MetaAllocator.cpp in Sources */,
                                A8A473F4151A825B004123FF /* NumberOfCores.cpp in Sources */,
+                               0F7C5FB61D885CF20044F5E2 /* FastBitVector.cpp in Sources */,
                                A8A473F7151A825B004123FF /* OSAllocatorPosix.cpp in Sources */,
                                A8A473F9151A825B004123FF /* OSRandomSource.cpp in Sources */,
                                A8A47402151A825B004123FF /* PageBlock.cpp in Sources */,
index 857b06c..867999e 100644 (file)
@@ -181,6 +181,7 @@ set(WTF_SOURCES
     DataLog.cpp
     DateMath.cpp
     DecimalNumber.cpp
+    FastBitVector.cpp
     FastMalloc.cpp
     FilePrintStream.cpp
     FunctionDispatcher.cpp
diff --git a/Source/WTF/wtf/FastBitVector.cpp b/Source/WTF/wtf/FastBitVector.cpp
new file mode 100644 (file)
index 0000000..5a76ad7
--- /dev/null
@@ -0,0 +1,57 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "FastBitVector.h"
+
+namespace WTF {
+
+void FastBitVectorWordOwner::setEqualsSlow(const FastBitVectorWordOwner& other)
+{
+    uint32_t* newArray = static_cast<uint32_t*>(
+        fastCalloc(other.arrayLength(), sizeof(uint32_t)));
+    memcpy(newArray, other.m_words, other.arrayLength() * sizeof(uint32_t));
+    if (m_words)
+        fastFree(m_words);
+    m_words = newArray;
+    m_numBits = other.m_numBits;
+}
+
+void FastBitVectorWordOwner::resizeSlow(size_t numBits)
+{
+    size_t newLength = fastBitVectorArrayLength(numBits);
+    
+    // Use fastCalloc instead of fastRealloc because we expect the common
+    // use case for this method to be initializing the size of the bitvector.
+    
+    uint32_t* newArray = static_cast<uint32_t*>(fastCalloc(newLength, sizeof(uint32_t)));
+    memcpy(newArray, m_words, arrayLength() * sizeof(uint32_t));
+    if (m_words)
+        fastFree(m_words);
+    m_words = newArray;
+}
+
+} // namespace WTF
+
index bb2f252..2a65ad4 100644 (file)
@@ -26,6 +26,7 @@
 #pragma once
 
 #include <string.h>
+#include <wtf/Atomics.h>
 #include <wtf/FastMalloc.h>
 #include <wtf/PrintStream.h>
 #include <wtf/StdLibExtras.h>
@@ -91,17 +92,12 @@ public:
     
     FastBitVectorWordOwner& operator=(const FastBitVectorWordOwner& other)
     {
-        size_t length = other.arrayLength();
-        if (length == arrayLength()) {
-            memcpy(m_words, other.m_words, length * sizeof(uint32_t));
-            return *this;
+        if (arrayLength() != other.arrayLength())
+            setEqualsSlow(other);
+        else {
+            memcpy(m_words, other.m_words, arrayLength() * sizeof(uint32_t));
+            m_numBits = other.m_numBits;
         }
-        uint32_t* newArray = static_cast<uint32_t*>(fastCalloc(length, sizeof(uint32_t)));
-        memcpy(newArray, other.m_words, length * sizeof(uint32_t));
-        if (m_words)
-            fastFree(m_words);
-        m_words = newArray;
-        m_numBits = other.m_numBits;
         return *this;
     }
     
@@ -140,18 +136,8 @@ public:
     
     void resize(size_t numBits)
     {
-        if (numBits == m_numBits)
-            return;
-        
-        // Use fastCalloc instead of fastRealloc because we expect the common
-        // use case for this method to be initializing the size of the bitvector.
-        
-        size_t newLength = fastBitVectorArrayLength(numBits);
-        uint32_t* newArray = static_cast<uint32_t*>(fastCalloc(newLength, sizeof(uint32_t)));
-        memcpy(newArray, m_words, arrayLength() * sizeof(uint32_t));
-        if (m_words)
-            fastFree(m_words);
-        m_words = newArray;
+        if (arrayLength() != fastBitVectorArrayLength(numBits))
+            resizeSlow(numBits);
         m_numBits = numBits;
     }
     
@@ -171,6 +157,9 @@ public:
     uint32_t* words() { return m_words; }
 
 private:
+    WTF_EXPORT_PRIVATE void setEqualsSlow(const FastBitVectorWordOwner& other);
+    WTF_EXPORT_PRIVATE void resizeSlow(size_t numBits);
+    
     uint32_t* m_words { nullptr };
     size_t m_numBits { 0 };
 };
@@ -320,6 +309,15 @@ public:
         return result;
     }
     
+    bool isEmpty() const
+    {
+        for (size_t index = arrayLength(); index--;) {
+            if (m_words.word(index))
+                return false;
+        }
+        return true;
+    }
+    
     template<typename OtherWords>
     FastBitVectorImpl<FastBitVectorAndWords<typename Words::ViewType, typename OtherWords::ViewType>> operator&(const FastBitVectorImpl<OtherWords>& other) const
     {
@@ -380,7 +378,7 @@ public:
         // written this way so that it performs well regardless of whether value is a constant.
         uint32_t skipValue = -(static_cast<uint32_t>(value) ^ 1);
         
-        size_t numWords = m_words.arrayLength();
+        size_t numWords = fastBitVectorArrayLength(m_words.numBits());
         
         size_t wordIndex = startIndex / 32;
         size_t startIndexInWord = startIndex - wordIndex * 32;
@@ -471,6 +469,7 @@ public:
         m_words.clearAll();
     }
 
+    // Returns true if the contents of this bitvector changed.
     template<typename OtherWords>
     bool setAndCheck(const FastBitVectorImpl<OtherWords>& other)
     {
@@ -550,6 +549,28 @@ public:
     {
         return at(index);
     }
+    
+    // Returns true if the contents changed.
+    ALWAYS_INLINE bool atomicSetAndCheck(size_t index, bool value)
+    {
+        uint32_t* pointer = &m_words.word(index >> 5);
+        uint32_t mask = 1 << (index & 31);
+        for (;;) {
+            uint32_t oldValue = *pointer;
+            uint32_t newValue;
+            if (value) {
+                if (oldValue & mask)
+                    return false;
+                newValue = oldValue | mask;
+            } else {
+                if (!(oldValue & mask))
+                    return false;
+                newValue = oldValue & ~mask;
+            }
+            if (weakCompareAndSwap(pointer, oldValue, newValue))
+                return true;
+        }
+    }
 };
 
 } // namespace WTF
index 0657aa3..0309192 100644 (file)
@@ -27,6 +27,7 @@
 #include "GCObservation.h"
 
 #include <heap/HeapInlines.h>
+#include <heap/WeakInlines.h>
 
 namespace WebCore {
 
index 8afaf20..556c5b7 100644 (file)
@@ -1,3 +1,14 @@
+2016-09-20  Filip Pizlo  <fpizlo@apple.com>
+
+        Make MarkedBlock state tracking support overlapped allocation and marking state
+        https://bugs.webkit.org/show_bug.cgi?id=161581
+
+        Reviewed by Geoffrey Garen.
+        
+        Remove the always-trigger-copy-phase configuration.
+
+        * Scripts/run-jsc-stress-tests:
+
 2016-09-20  Don Olmstead  <don.olmstead@am.sony.com>
 
         [WinCairo] Use find_package cairo in build
index d0a8cc6..3b5984b 100755 (executable)
@@ -869,10 +869,6 @@ def runFTLEagerNoCJITOSRValidation
     run("ftl-eager-no-cjit-osr-validation", "--validateFTLOSRExitLiveness=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS))
 end
 
-def runAlwaysTriggerCopyPhase
-    run("always-trigger-copy-phase", "--minHeapUtilization=2.0", "--minCopiedBlockUtilization=2.0")
-end
-
 def runNoCJITNoASO
     run("no-cjit-no-aso", "--useArchitectureSpecificOptimizations=false", *NO_CJIT_OPTIONS)
 end
@@ -910,7 +906,6 @@ def defaultRun
         defaultQuickRun
     else
         runDefault
-        runAlwaysTriggerCopyPhase
         if $jitTests
             runNoLLInt
             runNoCJITValidatePhases
@@ -936,7 +931,6 @@ def defaultNoNoLLIntRun
         defaultQuickRun
     else
         runDefault
-        runAlwaysTriggerCopyPhase
         if $jitTests
             runNoCJITValidatePhases
             runDFGEager
@@ -988,7 +982,6 @@ end
 # by counting recompilations.
 def defaultNoEagerRun
     runDefault
-    runAlwaysTriggerCopyPhase
     if $jitTests
         runNoLLInt
         runNoCJITValidatePhases
@@ -1003,7 +996,6 @@ end
 
 def defaultNoSamplingProfilerRun
     runDefault
-    runAlwaysTriggerCopyPhase
     if $jitTests
         runNoLLInt
         runNoCJITValidatePhases
@@ -1141,7 +1133,6 @@ end
 
 def runModules
     run("default-modules", "-m")
-    run("always-trigger-copy-phase-modules", "-m", "--minHeapUtilization=2.0", "--minCopiedBlockUtilization=2.0")
 
     if !$jitTests
         return