The collector thread should only start when the mutator doesn't have heap access
author fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 22 Feb 2017 00:58:15 +0000 (00:58 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 22 Feb 2017 00:58:15 +0000 (00:58 +0000)
https://bugs.webkit.org/show_bug.cgi?id=167737

Reviewed by Keith Miller.
JSTests:

Add versions of splay that flash heap access, to simulate what might happen if a third-party app
were running concurrent GC. In that case, we might actually start the collector thread.

* stress/splay-flash-access-1ms.js: Added.
(performance.now):
(this.Setup.setup.setup):
(this.TearDown.tearDown.tearDown):
(Benchmark):
(BenchmarkResult):
(BenchmarkResult.prototype.valueOf):
(BenchmarkSuite):
(alert):
(Math.random):
(BenchmarkSuite.ResetRNG):
(RunStep):
(BenchmarkSuite.RunSuites):
(BenchmarkSuite.CountBenchmarks):
(BenchmarkSuite.GeometricMean):
(BenchmarkSuite.GeometricMeanTime):
(BenchmarkSuite.AverageAbovePercentile):
(BenchmarkSuite.GeometricMeanLatency):
(BenchmarkSuite.FormatScore):
(BenchmarkSuite.prototype.NotifyStep):
(BenchmarkSuite.prototype.NotifyResult):
(BenchmarkSuite.prototype.NotifyError):
(BenchmarkSuite.prototype.RunSingleBenchmark):
(RunNextSetup):
(RunNextBenchmark):
(RunNextTearDown):
(BenchmarkSuite.prototype.RunStep):
(GeneratePayloadTree):
(GenerateKey):
(SplayUpdateStats):
(InsertNewNode):
(SplaySetup):
(SplayTearDown):
(SplayRun):
(SplayTree):
(SplayTree.prototype.isEmpty):
(SplayTree.prototype.insert):
(SplayTree.prototype.remove):
(SplayTree.prototype.find):
(SplayTree.prototype.findMax):
(SplayTree.prototype.findGreatestLessThan):
(SplayTree.prototype.exportKeys):
(SplayTree.prototype.splay_):
(SplayTree.Node):
(SplayTree.Node.prototype.traverse_):
(jscSetUp):
(jscTearDown):
(jscRun):
(averageAbovePercentile):
(printPercentile):
* stress/splay-flash-access.js: Added.
(performance.now):
(this.Setup.setup.setup):
(this.TearDown.tearDown.tearDown):
(Benchmark):
(BenchmarkResult):
(BenchmarkResult.prototype.valueOf):
(BenchmarkSuite):
(alert):
(Math.random):
(BenchmarkSuite.ResetRNG):
(RunStep):
(BenchmarkSuite.RunSuites):
(BenchmarkSuite.CountBenchmarks):
(BenchmarkSuite.GeometricMean):
(BenchmarkSuite.GeometricMeanTime):
(BenchmarkSuite.AverageAbovePercentile):
(BenchmarkSuite.GeometricMeanLatency):
(BenchmarkSuite.FormatScore):
(BenchmarkSuite.prototype.NotifyStep):
(BenchmarkSuite.prototype.NotifyResult):
(BenchmarkSuite.prototype.NotifyError):
(BenchmarkSuite.prototype.RunSingleBenchmark):
(RunNextSetup):
(RunNextBenchmark):
(RunNextTearDown):
(BenchmarkSuite.prototype.RunStep):
(GeneratePayloadTree):
(GenerateKey):
(SplayUpdateStats):
(InsertNewNode):
(SplaySetup):
(SplayTearDown):
(SplayRun):
(SplayTree):
(SplayTree.prototype.isEmpty):
(SplayTree.prototype.insert):
(SplayTree.prototype.remove):
(SplayTree.prototype.find):
(SplayTree.prototype.findMax):
(SplayTree.prototype.findGreatestLessThan):
(SplayTree.prototype.exportKeys):
(SplayTree.prototype.splay_):
(SplayTree.Node):
(SplayTree.Node.prototype.traverse_):
(jscSetUp):
(jscTearDown):
(jscRun):
(averageAbovePercentile):
(printPercentile):

Source/JavaScriptCore:

This turns the collector thread's workflow into a state machine, so that the mutator thread can
run it directly. This reduces the amount of synchronization we do with the collector thread, and
means that most apps will never start the collector thread. The collector thread will still start
when we need to finish collecting and we don't have heap access.

In this new world, "stopping the world" means relinquishing control of collection to the mutator.
This means tracking who is conducting collection. I use the GCConductor enum to say who is
conducting. It's either GCConductor::Mutator or GCConductor::Collector. I use the term "conn" to
refer to the concept of conducting (having the conn, relinquishing the conn, taking the conn).
So, stopping the world means giving the mutator the conn. Releasing heap access means giving the
collector the conn.
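
The following is a minimal, self-contained model of that state machine, not the real JSC code:
CollectorPhase and GCConductor are the enums this patch adds, but HeapModel and the phase bodies
are simplified stand-ins for the logic in Heap.cpp (runBeginPhase, runFixpointPhase,
runConcurrentPhase, and friends).

    enum class GCConductor { Mutator, Collector };
    enum class CollectorPhase { NotRunning, Begin, Fixpoint, Concurrent, End };

    struct HeapModel {
        CollectorPhase phase { CollectorPhase::NotRunning };

        // Runs one increment of collection on behalf of whoever has the conn.
        // Returns true if that conductor should keep going, false if it should
        // relinquish the conn to the other side.
        bool runCurrentPhase(GCConductor conn)
        {
            switch (phase) {
            case CollectorPhase::NotRunning:
                return false;
            case CollectorPhase::Begin:
                phase = CollectorPhase::Fixpoint; // stand-in for runBeginPhase()
                return true;
            case CollectorPhase::Fixpoint:
                phase = CollectorPhase::Concurrent; // stand-in for runFixpointPhase()
                return true;
            case CollectorPhase::Concurrent:
                // In the real code this is where the mutator can hand the conn
                // away (e.g. when it releases heap access), which is the one
                // case where the collector thread still needs to start.
                if (conn == GCConductor::Mutator)
                    return false;
                phase = CollectorPhase::End;
                return true;
            case CollectorPhase::End:
                phase = CollectorPhase::NotRunning; // stand-in for runEndPhase()
                return true;
            }
            return false;
        }
    };

In this model, stopping the world is just "the mutator calls runCurrentPhase(GCConductor::Mutator)
in a loop", and releasing heap access hands the same loop to the collector thread.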

This meant bringing back the conservative scan of the calling thread. It turns out that this
scan was too slow to be called on each GC increment because apparently setjmp() now does system
calls. So, I wrote our own callee-save register saving for the GC. Then I had doubts about
whether it was correct, so I also made it so that the GC only rarely asks for the register
state. I think we still want to use my register-saving code instead of setjmp, because setjmp
seems to save things we don't need, and that could make us overly conservative.
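
The register capture amounts to spilling the callee-save registers into a stack-allocated buffer
so that the conservative scan sees any heap pointers living only in registers. Here is a hedged
sketch of the idea behind heap/RegisterState.h; the macro name, struct layout, and x86-64-only
inline asm are illustrative assumptions, not the actual WebKit API.

    // Illustrative only: the real header must cover every supported architecture.
    struct RegisterState {
        void* calleeSaves[6]; // rbx, rbp, r12-r15 on x86-64 SysV
    };

    #define SAVE_REGISTER_STATE(state) \
        asm volatile( \
            "movq %%rbx,  0(%0)\n\t" \
            "movq %%rbp,  8(%0)\n\t" \
            "movq %%r12, 16(%0)\n\t" \
            "movq %%r13, 24(%0)\n\t" \
            "movq %%r14, 32(%0)\n\t" \
            "movq %%r15, 40(%0)\n\t" \
            : : "r"((state).calleeSaves) : "memory")

    void gatherStackRootsFromCurrentThread()
    {
        // The buffer lives on the stack, so conservatively scanning from
        // &state up to the stack origin covers both the spilled registers
        // and all locals, without paying for a setjmp() system call.
        RegisterState state;
        SAVE_REGISTER_STATE(state);
        // ... conservatively scan [&state, stackOrigin) for heap pointers ...
    }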

It turns out that this new scheduling discipline makes the old space-time scheduler perform
better than the new stochastic space-time scheduler on systems with fewer than 4 cores. This is
because the mutator having the conn enables us to time the mutator<->collector context switches
by polling. The OS is never involved. So, we can use super precise timing. This allows the old
space-time scheduler to shine like it hadn't before.
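
Concretely, "timing by polling" just means the mutator checks a monotonic clock between
increments instead of asking the OS to suspend or resume anything. A sketch in the spirit of
Heap::performIncrement, reusing HeapModel from the sketch above (the budget parameter and clock
choice are assumptions):

    #include <chrono>

    // The mutator drives collection for a bounded slice, then simply returns
    // to running JS. Because the "context switch" is just a function return,
    // the slice boundary can be enforced as precisely as the clock allows.
    void performIncrement(HeapModel& heap, std::chrono::microseconds budget)
    {
        auto deadline = std::chrono::steady_clock::now() + budget;
        while (heap.runCurrentPhase(GCConductor::Mutator)) {
            if (std::chrono::steady_clock::now() >= deadline)
                break;
        }
    }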

The splay results imply that this is all a good thing. On 2-core systems, this reduces pause
times by 40% and increases throughput by about 5%. On 1-core systems, it reduces pause times by
half and reduces throughput by 8%. On 4-or-more-core systems, this doesn't seem to have much
effect.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitChildren):
* dfg/DFGWorklist.cpp:
(JSC::DFG::Worklist::ThreadBody::ThreadBody):
(JSC::DFG::Worklist::dump):
(JSC::DFG::numberOfWorklists):
(JSC::DFG::ensureWorklistForIndex):
(JSC::DFG::existingWorklistForIndexOrNull):
(JSC::DFG::existingWorklistForIndex):
* dfg/DFGWorklist.h:
(JSC::DFG::numberOfWorklists): Deleted.
(JSC::DFG::ensureWorklistForIndex): Deleted.
(JSC::DFG::existingWorklistForIndexOrNull): Deleted.
(JSC::DFG::existingWorklistForIndex): Deleted.
* heap/CollectingScope.h: Added.
(JSC::CollectingScope::CollectingScope):
(JSC::CollectingScope::~CollectingScope):
* heap/CollectorPhase.cpp: Added.
(JSC::worldShouldBeSuspended):
(WTF::printInternal):
* heap/CollectorPhase.h: Added.
* heap/EdenGCActivityCallback.cpp:
(JSC::EdenGCActivityCallback::lastGCLength):
* heap/FullGCActivityCallback.cpp:
(JSC::FullGCActivityCallback::doCollection):
(JSC::FullGCActivityCallback::lastGCLength):
* heap/GCConductor.cpp: Added.
(JSC::gcConductorShortName):
(WTF::printInternal):
* heap/GCConductor.h: Added.
* heap/GCFinalizationCallback.cpp: Added.
(JSC::GCFinalizationCallback::GCFinalizationCallback):
(JSC::GCFinalizationCallback::~GCFinalizationCallback):
* heap/GCFinalizationCallback.h: Added.
(JSC::GCFinalizationCallbackFuncAdaptor::GCFinalizationCallbackFuncAdaptor):
(JSC::createGCFinalizationCallback):
* heap/Heap.cpp:
(JSC::Heap::Thread::Thread):
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::gatherStackRoots):
(JSC::Heap::updateObjectCounts):
(JSC::Heap::sweepSynchronously):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectAsync):
(JSC::Heap::collectSync):
(JSC::Heap::shouldCollectInCollectorThread):
(JSC::Heap::collectInCollectorThread):
(JSC::Heap::checkConn):
(JSC::Heap::runNotRunningPhase):
(JSC::Heap::runBeginPhase):
(JSC::Heap::runFixpointPhase):
(JSC::Heap::runConcurrentPhase):
(JSC::Heap::runReloopPhase):
(JSC::Heap::runEndPhase):
(JSC::Heap::changePhase):
(JSC::Heap::finishChangingPhase):
(JSC::Heap::stopThePeriphery):
(JSC::Heap::resumeThePeriphery):
(JSC::Heap::stopTheMutator):
(JSC::Heap::resumeTheMutator):
(JSC::Heap::stopIfNecessarySlow):
(JSC::Heap::collectInMutatorThread):
(JSC::Heap::waitForCollector):
(JSC::Heap::acquireAccessSlow):
(JSC::Heap::releaseAccessSlow):
(JSC::Heap::relinquishConn):
(JSC::Heap::finishRelinquishingConn):
(JSC::Heap::handleNeedFinalize):
(JSC::Heap::notifyThreadStopping):
(JSC::Heap::finalize):
(JSC::Heap::addFinalizationCallback):
(JSC::Heap::requestCollection):
(JSC::Heap::waitForCollection):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::didFinishCollection):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC::Heap::notifyIsSafeToCollect):
(JSC::Heap::preventCollection):
(JSC::Heap::performIncrement):
(JSC::Heap::markToFixpoint): Deleted.
(JSC::Heap::shouldCollectInThread): Deleted.
(JSC::Heap::collectInThread): Deleted.
(JSC::Heap::stopTheWorld): Deleted.
(JSC::Heap::resumeTheWorld): Deleted.
* heap/Heap.h:
(JSC::Heap::machineThreads):
(JSC::Heap::lastFullGCLength):
(JSC::Heap::lastEdenGCLength):
(JSC::Heap::increaseLastFullGCLength):
* heap/HeapInlines.h:
(JSC::Heap::mutatorIsStopped): Deleted.
* heap/HeapStatistics.cpp: Removed.
* heap/HeapStatistics.h: Removed.
* heap/HelpingGCScope.h: Removed.
* heap/IncrementalSweeper.cpp:
(JSC::IncrementalSweeper::stopSweeping):
(JSC::IncrementalSweeper::willFinishSweeping): Deleted.
* heap/IncrementalSweeper.h:
* heap/MachineStackMarker.cpp:
(JSC::MachineThreads::gatherFromCurrentThread):
(JSC::MachineThreads::gatherConservativeRoots):
(JSC::callWithCurrentThreadState):
* heap/MachineStackMarker.h:
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::allocateSlowCaseImpl):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::Handle::sweep):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::sweep):
* heap/MutatorState.cpp:
(WTF::printInternal):
* heap/MutatorState.h:
* heap/RegisterState.h: Added.
* heap/RunningScope.h: Added.
(JSC::RunningScope::RunningScope):
(JSC::RunningScope::~RunningScope):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::SlotVisitor):
(JSC::SlotVisitor::drain):
(JSC::SlotVisitor::drainFromShared):
(JSC::SlotVisitor::drainInParallelPassively):
(JSC::SlotVisitor::donateAll):
(JSC::SlotVisitor::donate):
* heap/SlotVisitor.h:
(JSC::SlotVisitor::codeName):
* heap/StochasticSpaceTimeMutatorScheduler.cpp:
(JSC::StochasticSpaceTimeMutatorScheduler::beginCollection):
(JSC::StochasticSpaceTimeMutatorScheduler::synchronousDrainingDidStall):
(JSC::StochasticSpaceTimeMutatorScheduler::timeToStop):
* heap/SweepingScope.h: Added.
(JSC::SweepingScope::SweepingScope):
(JSC::SweepingScope::~SweepingScope):
* jit/JITWorklist.cpp:
(JSC::JITWorklist::Thread::Thread):
* jsc.cpp:
(GlobalObject::finishCreation):
(functionFlashHeapAccess):
* runtime/InitializeThreading.cpp:
(JSC::initializeThreading):
* runtime/JSCellInlines.h:
(JSC::JSCell::classInfo):
* runtime/Options.cpp:
(JSC::overrideDefaults):
* runtime/Options.h:
* runtime/TestRunnerUtils.cpp:
(JSC::finalizeStatsAtEndOfTesting):

Source/WebCore:

Added new tests in JSTests.

The WebCore changes involve:

- Refactoring around new header discipline.

- Adding crazy GC APIs to window.internals to enable us to test the GC's runloop discipline.

* ForwardingHeaders/heap/GCFinalizationCallback.h: Added.
* ForwardingHeaders/heap/IncrementalSweeper.h: Added.
* ForwardingHeaders/heap/MachineStackMarker.h: Added.
* ForwardingHeaders/heap/RunningScope.h: Added.
* bindings/js/CommonVM.cpp:
* testing/Internals.cpp:
(WebCore::Internals::parserMetaData):
(WebCore::Internals::isReadableStreamDisturbed):
(WebCore::Internals::isGCRunning):
(WebCore::Internals::addGCFinalizationCallback):
(WebCore::Internals::stopSweeping):
(WebCore::Internals::startSweeping):
* testing/Internals.h:
* testing/Internals.idl:

Source/WTF:

Extend the use of AbstractLocker so that we can use more locking idioms.
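
The idiom here is that private helpers take a const AbstractLocker&, so the type system
documents, and call sites prove, that some lock is already held, regardless of which concrete
Locker acquired it. A minimal sketch; Counter is a made-up example, while AbstractLocker,
Locker, and LockHolder are the real WTF types:

    #include <wtf/Lock.h>
    #include <wtf/Locker.h>

    class Counter {
    public:
        void increment()
        {
            WTF::LockHolder locker(m_lock);
            incrementWhileLocked(locker); // proof-of-lock passed by type
        }

    private:
        // Reachable only by callers who already hold a lock: the parameter
        // is unusable except as evidence that locking happened.
        void incrementWhileLocked(const WTF::AbstractLocker&) { m_value++; }

        WTF::Lock m_lock;
        unsigned m_value { 0 };
    };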

* wtf/AutomaticThread.cpp:
(WTF::AutomaticThreadCondition::notifyOne):
(WTF::AutomaticThreadCondition::notifyAll):
(WTF::AutomaticThreadCondition::add):
(WTF::AutomaticThreadCondition::remove):
(WTF::AutomaticThreadCondition::contains):
(WTF::AutomaticThread::AutomaticThread):
(WTF::AutomaticThread::tryStop):
(WTF::AutomaticThread::isWaiting):
(WTF::AutomaticThread::notify):
(WTF::AutomaticThread::start):
(WTF::AutomaticThread::threadIsStopping):
* wtf/AutomaticThread.h:
* wtf/NumberOfCores.cpp:
(WTF::numberOfProcessorCores):
* wtf/ParallelHelperPool.cpp:
(WTF::ParallelHelperClient::finish):
(WTF::ParallelHelperClient::claimTask):
(WTF::ParallelHelperPool::Thread::Thread):
(WTF::ParallelHelperPool::didMakeWorkAvailable):
(WTF::ParallelHelperPool::hasClientWithTask):
(WTF::ParallelHelperPool::getClientWithTask):
* wtf/ParallelHelperPool.h:

Tools:

Make more tests collect continuously.

* Scripts/run-jsc-stress-tests:

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@212778 268f45cc-cd09-0410-ab3c-d52691b4dbfc

56 files changed:
JSTests/ChangeLog
JSTests/stress/splay-flash-access-1ms.js [new file with mode: 0644]
JSTests/stress/splay-flash-access.js [new file with mode: 0644]
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/dfg/DFGWorklist.cpp
Source/JavaScriptCore/dfg/DFGWorklist.h
Source/JavaScriptCore/heap/CollectingScope.h [moved from Source/JavaScriptCore/heap/HeapStatistics.h with 71% similarity]
Source/JavaScriptCore/heap/CollectorPhase.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/CollectorPhase.h [new file with mode: 0644]
Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp
Source/JavaScriptCore/heap/FullGCActivityCallback.cpp
Source/JavaScriptCore/heap/GCConductor.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/GCConductor.h [new file with mode: 0644]
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapStatistics.cpp [deleted file]
Source/JavaScriptCore/heap/IncrementalSweeper.cpp
Source/JavaScriptCore/heap/IncrementalSweeper.h
Source/JavaScriptCore/heap/MachineStackMarker.cpp
Source/JavaScriptCore/heap/MachineStackMarker.h
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MutatorState.cpp
Source/JavaScriptCore/heap/MutatorState.h
Source/JavaScriptCore/heap/RegisterState.h [new file with mode: 0644]
Source/JavaScriptCore/heap/RunningScope.h [moved from Source/JavaScriptCore/heap/HelpingGCScope.h with 89% similarity]
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/StochasticSpaceTimeMutatorScheduler.cpp
Source/JavaScriptCore/heap/SweepingScope.h [new file with mode: 0644]
Source/JavaScriptCore/jit/JITWorklist.cpp
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/runtime/InitializeThreading.cpp
Source/JavaScriptCore/runtime/JSCellInlines.h
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/AutomaticThread.cpp
Source/WTF/wtf/AutomaticThread.h
Source/WTF/wtf/NumberOfCores.cpp
Source/WTF/wtf/ParallelHelperPool.cpp
Source/WTF/wtf/ParallelHelperPool.h
Source/WebCore/ChangeLog
Source/WebCore/ForwardingHeaders/heap/GCFinalizationCallback.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/IncrementalSweeper.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/MachineStackMarker.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/RunningScope.h [new file with mode: 0644]
Source/WebCore/bindings/js/CommonVM.cpp
Tools/ChangeLog
Tools/Scripts/run-jsc-stress-tests

index 8c24266..a062dbb 100644
@@ -1,3 +1,114 @@
+2017-02-20  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+        
+        Add versions of splay that flash heap access, to simulate what might happen if a third-party app
+        were running concurrent GC. In that case, we might actually start the collector thread.
+
+        * stress/splay-flash-access-1ms.js: Added.
+        (performance.now):
+        (this.Setup.setup.setup):
+        (this.TearDown.tearDown.tearDown):
+        (Benchmark):
+        (BenchmarkResult):
+        (BenchmarkResult.prototype.valueOf):
+        (BenchmarkSuite):
+        (alert):
+        (Math.random):
+        (BenchmarkSuite.ResetRNG):
+        (RunStep):
+        (BenchmarkSuite.RunSuites):
+        (BenchmarkSuite.CountBenchmarks):
+        (BenchmarkSuite.GeometricMean):
+        (BenchmarkSuite.GeometricMeanTime):
+        (BenchmarkSuite.AverageAbovePercentile):
+        (BenchmarkSuite.GeometricMeanLatency):
+        (BenchmarkSuite.FormatScore):
+        (BenchmarkSuite.prototype.NotifyStep):
+        (BenchmarkSuite.prototype.NotifyResult):
+        (BenchmarkSuite.prototype.NotifyError):
+        (BenchmarkSuite.prototype.RunSingleBenchmark):
+        (RunNextSetup):
+        (RunNextBenchmark):
+        (RunNextTearDown):
+        (BenchmarkSuite.prototype.RunStep):
+        (GeneratePayloadTree):
+        (GenerateKey):
+        (SplayUpdateStats):
+        (InsertNewNode):
+        (SplaySetup):
+        (SplayTearDown):
+        (SplayRun):
+        (SplayTree):
+        (SplayTree.prototype.isEmpty):
+        (SplayTree.prototype.insert):
+        (SplayTree.prototype.remove):
+        (SplayTree.prototype.find):
+        (SplayTree.prototype.findMax):
+        (SplayTree.prototype.findGreatestLessThan):
+        (SplayTree.prototype.exportKeys):
+        (SplayTree.prototype.splay_):
+        (SplayTree.Node):
+        (SplayTree.Node.prototype.traverse_):
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        (averageAbovePercentile):
+        (printPercentile):
+        * stress/splay-flash-access.js: Added.
+        (performance.now):
+        (this.Setup.setup.setup):
+        (this.TearDown.tearDown.tearDown):
+        (Benchmark):
+        (BenchmarkResult):
+        (BenchmarkResult.prototype.valueOf):
+        (BenchmarkSuite):
+        (alert):
+        (Math.random):
+        (BenchmarkSuite.ResetRNG):
+        (RunStep):
+        (BenchmarkSuite.RunSuites):
+        (BenchmarkSuite.CountBenchmarks):
+        (BenchmarkSuite.GeometricMean):
+        (BenchmarkSuite.GeometricMeanTime):
+        (BenchmarkSuite.AverageAbovePercentile):
+        (BenchmarkSuite.GeometricMeanLatency):
+        (BenchmarkSuite.FormatScore):
+        (BenchmarkSuite.prototype.NotifyStep):
+        (BenchmarkSuite.prototype.NotifyResult):
+        (BenchmarkSuite.prototype.NotifyError):
+        (BenchmarkSuite.prototype.RunSingleBenchmark):
+        (RunNextSetup):
+        (RunNextBenchmark):
+        (RunNextTearDown):
+        (BenchmarkSuite.prototype.RunStep):
+        (GeneratePayloadTree):
+        (GenerateKey):
+        (SplayUpdateStats):
+        (InsertNewNode):
+        (SplaySetup):
+        (SplayTearDown):
+        (SplayRun):
+        (SplayTree):
+        (SplayTree.prototype.isEmpty):
+        (SplayTree.prototype.insert):
+        (SplayTree.prototype.remove):
+        (SplayTree.prototype.find):
+        (SplayTree.prototype.findMax):
+        (SplayTree.prototype.findGreatestLessThan):
+        (SplayTree.prototype.exportKeys):
+        (SplayTree.prototype.splay_):
+        (SplayTree.Node):
+        (SplayTree.Node.prototype.traverse_):
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        (averageAbovePercentile):
+        (printPercentile):
+
 2017-02-21  Ryan Haddad  <ryanhaddad@apple.com>
 
         Unreviewed, rolling out r212712.
diff --git a/JSTests/stress/splay-flash-access-1ms.js b/JSTests/stress/splay-flash-access-1ms.js
new file mode 100644
index 0000000..eaad964
--- /dev/null
@@ -0,0 +1,902 @@
+//@ runNoisyTestDefault
+//@ runNoisyTestNoCJIT
+
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Copyright (C) 2015 Apple Inc. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+// Performance.now is used in latency benchmarks; the fallback is Date.now.
+var performance = performance || {};
+performance.now = (function() {
+  return performance.now       ||
+         performance.mozNow    ||
+         performance.msNow     ||
+         performance.oNow      ||
+         performance.webkitNow ||
+         Date.now;
+})();
+
+// Simple framework for running the benchmark suites and
+// computing a score based on the timing measurements.
+
+
+// A benchmark has a name (string) and a function that will be run to
+// do the performance measurement. The optional setup and tearDown
+// arguments are functions that will be invoked before and after
+// running the benchmark, but the running time of these functions will
+// not be accounted for in the benchmark score.
+function Benchmark(name, doWarmup, doDeterministic, run, setup, tearDown, latencyResult, minIterations) {
+  this.name = name;
+  this.doWarmup = doWarmup;
+  this.doDeterministic = doDeterministic;
+  this.run = run;
+  this.Setup = setup ? setup : function() { };
+  this.TearDown = tearDown ? tearDown : function() { };
+  this.latencyResult = latencyResult ? latencyResult : null; 
+  this.minIterations = minIterations ? minIterations : 32;
+}
+
+
+// Benchmark results hold the benchmark and the measured time used to
+// run the benchmark. The benchmark score is computed later once a
+// full benchmark suite has run to completion. If latency is set to 0
+// then there is no latency score for this benchmark.
+function BenchmarkResult(benchmark, time, latency) {
+  this.benchmark = benchmark;
+  this.time = time;
+  this.latency = latency;
+}
+
+
+// Automatically convert results to numbers. Used by the geometric
+// mean computation.
+BenchmarkResult.prototype.valueOf = function() {
+  return this.time;
+}
+
+
+// Suites of benchmarks consist of a name and the set of benchmarks in
+// addition to the reference timing that the final score will be based
+// on. This way, all scores are relative to a reference run and higher
+// scores imply better performance.
+function BenchmarkSuite(name, reference, benchmarks) {
+  this.name = name;
+  this.reference = reference;
+  this.benchmarks = benchmarks;
+  BenchmarkSuite.suites.push(this);
+}
+
+
+// Keep track of all declared benchmark suites.
+BenchmarkSuite.suites = [];
+
+// Scores are not comparable across versions. Bump the version if
+// you're making changes that will affect that scores, e.g. if you add
+// a new benchmark or change an existing one.
+BenchmarkSuite.version = '9';
+
+// Override the alert function to throw an exception instead.
+alert = function(s) {
+  throw "Alert called with argument: " + s;
+};
+
+
+// To make the benchmark results predictable, we replace Math.random
+// with a 100% deterministic alternative.
+BenchmarkSuite.ResetRNG = function() {
+  Math.random = (function() {
+    var seed = 49734321;
+    return function() {
+      // Robert Jenkins' 32 bit integer hash function.
+      seed = ((seed + 0x7ed55d16) + (seed << 12))  & 0xffffffff;
+      seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
+      seed = ((seed + 0x165667b1) + (seed << 5))   & 0xffffffff;
+      seed = ((seed + 0xd3a2646c) ^ (seed << 9))   & 0xffffffff;
+      seed = ((seed + 0xfd7046c5) + (seed << 3))   & 0xffffffff;
+      seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
+      return (seed & 0xfffffff) / 0x10000000;
+    };
+  })();
+}
+
+
+// Runs all registered benchmark suites and optionally yields between
+// each individual benchmark to avoid running for too long in the
+// context of browsers. Once done, the final score is reported to the
+// runner.
+BenchmarkSuite.RunSuites = function(runner) {
+  var continuation = null;
+  var suites = BenchmarkSuite.suites;
+  var length = suites.length;
+  BenchmarkSuite.scores = [];
+  var index = 0;
+  function RunStep() {
+    while (continuation || index < length) {
+      if (continuation) {
+        continuation = continuation();
+      } else {
+        var suite = suites[index++];
+        if (runner.NotifyStart) runner.NotifyStart(suite.name);
+        continuation = suite.RunStep(runner);
+      }
+      if (continuation && typeof window != 'undefined' && window.setTimeout) {
+        window.setTimeout(RunStep, 25);
+        return;
+      }
+    }
+
+    // show final result
+    if (runner.NotifyScore) {
+      var score = BenchmarkSuite.GeometricMean(BenchmarkSuite.scores);
+      var formatted = BenchmarkSuite.FormatScore(100 * score);
+      runner.NotifyScore(formatted);
+    }
+  }
+  RunStep();
+}
+
+
+// Counts the total number of registered benchmarks. Useful for
+// showing progress as a percentage.
+BenchmarkSuite.CountBenchmarks = function() {
+  var result = 0;
+  var suites = BenchmarkSuite.suites;
+  for (var i = 0; i < suites.length; i++) {
+    result += suites[i].benchmarks.length;
+  }
+  return result;
+}
+
+
+// Computes the geometric mean of a set of numbers.
+BenchmarkSuite.GeometricMean = function(numbers) {
+  var log = 0;
+  for (var i = 0; i < numbers.length; i++) {
+    log += Math.log(numbers[i]);
+  }
+  return Math.pow(Math.E, log / numbers.length);
+}
+
+
+// Computes the geometric mean of a set of throughput time measurements.
+BenchmarkSuite.GeometricMeanTime = function(measurements) {
+  var log = 0;
+  for (var i = 0; i < measurements.length; i++) {
+    log += Math.log(measurements[i].time);
+  }
+  return Math.pow(Math.E, log / measurements.length);
+}
+
+
+// Computes the average of the worst samples. For example, if percentile is 99, this will report the
+// average of the worst 1% of the samples.
+BenchmarkSuite.AverageAbovePercentile = function(numbers, percentile) {
+  // Don't change the original array.
+  numbers = numbers.slice();
+  
+  // Sort in ascending order.
+  numbers.sort(function(a, b) { return a - b; });
+  
+  // Now the elements we want are at the end. Keep removing them until the array size shrinks too much.
+  // Examples assuming percentile = 99:
+  //
+  // - numbers.length starts at 100: we will remove just the worst entry and then not remove anymore,
+  //   since then numbers.length / originalLength = 0.99.
+  //
+  // - numbers.length starts at 1000: we will remove the ten worst.
+  //
+  // - numbers.length starts at 10: we will remove just the worst.
+  var numbersWeWant = [];
+  var originalLength = numbers.length;
+  while (numbers.length / originalLength > percentile / 100)
+    numbersWeWant.push(numbers.pop());
+  
+  var sum = 0;
+  for (var i = 0; i < numbersWeWant.length; ++i)
+    sum += numbersWeWant[i];
+  
+  var result = sum / numbersWeWant.length;
+  
+  // Do a sanity check.
+  if (numbers.length && result < numbers[numbers.length - 1]) {
+    throw "Sanity check fail: the worst case result is " + result +
+      " but we didn't take into account " + numbers;
+  }
+  
+  return result;
+}
+
+
+// Computes the geometric mean of a set of latency measurements.
+BenchmarkSuite.GeometricMeanLatency = function(measurements) {
+  var log = 0;
+  var hasLatencyResult = false;
+  for (var i = 0; i < measurements.length; i++) {
+    if (measurements[i].latency != 0) {
+      log += Math.log(measurements[i].latency);
+      hasLatencyResult = true;
+    }
+  }
+  if (hasLatencyResult) {
+    return Math.pow(Math.E, log / measurements.length);
+  } else {
+    return 0;
+  }
+}
+
+
+// Converts a score value to a string with at least three significant
+// digits.
+BenchmarkSuite.FormatScore = function(value) {
+  if (value > 100) {
+    return value.toFixed(0);
+  } else {
+    return value.toPrecision(3);
+  }
+}
+
+// Notifies the runner that we're done running a single benchmark in
+// the benchmark suite. This can be useful to report progress.
+BenchmarkSuite.prototype.NotifyStep = function(result) {
+  this.results.push(result);
+  if (this.runner.NotifyStep) this.runner.NotifyStep(result.benchmark.name);
+}
+
+
+// Notifies the runner that we're done with running a suite and that
+// we have a result which can be reported to the user if needed.
+BenchmarkSuite.prototype.NotifyResult = function() {
+  var mean = BenchmarkSuite.GeometricMeanTime(this.results);
+  var score = this.reference[0] / mean;
+  BenchmarkSuite.scores.push(score);
+  if (this.runner.NotifyResult) {
+    var formatted = BenchmarkSuite.FormatScore(100 * score);
+    this.runner.NotifyResult(this.name, formatted);
+  }
+  if (this.reference.length == 2) {
+    var meanLatency = BenchmarkSuite.GeometricMeanLatency(this.results);
+    if (meanLatency != 0) {
+      var scoreLatency = this.reference[1] / meanLatency;
+      BenchmarkSuite.scores.push(scoreLatency);
+      if (this.runner.NotifyResult) {
+        var formattedLatency = BenchmarkSuite.FormatScore(100 * scoreLatency)
+        this.runner.NotifyResult(this.name + "Latency", formattedLatency);
+      }
+    }
+  }
+}
+
+
+// Notifies the runner that running a benchmark resulted in an error.
+BenchmarkSuite.prototype.NotifyError = function(error) {
+  if (this.runner.NotifyError) {
+    this.runner.NotifyError(this.name, error);
+  }
+  if (this.runner.NotifyStep) {
+    this.runner.NotifyStep(this.name);
+  }
+}
+
+
+// Runs a single benchmark for at least a second and computes the
+// average time it takes to run a single iteration.
+BenchmarkSuite.prototype.RunSingleBenchmark = function(benchmark, data) {
+  function Measure(data) {
+    var elapsed = 0;
+    var start = new Date();
+  
+  // Run either for 1 second or for the number of iterations specified
+  // by minIterations, depending on the config flag doDeterministic.
+    for (var i = 0; (benchmark.doDeterministic ? 
+      i<benchmark.minIterations : elapsed < 1000); i++) {
+      benchmark.run();
+      elapsed = new Date() - start;
+    }
+    if (data != null) {
+      data.runs += i;
+      data.elapsed += elapsed;
+    }
+  }
+
+  // Sets up data so that the warmup phase is skipped when doWarmup is false.
+  if (!benchmark.doWarmup && data == null) {
+    data = { runs: 0, elapsed: 0 };
+  }
+
+  if (data == null) {
+    Measure(null);
+    return { runs: 0, elapsed: 0 };
+  } else {
+    Measure(data);
+    // If we've run too few iterations, we continue for another second.
+    if (data.runs < benchmark.minIterations) return data;
+    var usec = (data.elapsed * 1000) / data.runs;
+    var latencySamples = (benchmark.latencyResult != null) ? benchmark.latencyResult() : [0];
+    var percentile = 99.5;
+    var latency = BenchmarkSuite.AverageAbovePercentile(latencySamples, percentile) * 1000;
+    this.NotifyStep(new BenchmarkResult(benchmark, usec, latency));
+    return null;
+  }
+}
+
+
+// This function starts running a suite, but stops between each
+// individual benchmark in the suite and returns a continuation
+// function which can be invoked to run the next benchmark. Once the
+// last benchmark has been executed, null is returned.
+BenchmarkSuite.prototype.RunStep = function(runner) {
+  BenchmarkSuite.ResetRNG();
+  this.results = [];
+  this.runner = runner;
+  var length = this.benchmarks.length;
+  var index = 0;
+  var suite = this;
+  var data;
+
+  // Run the setup, the actual benchmark, and the tear down in three
+  // separate steps to allow the framework to yield between any of the
+  // steps.
+
+  function RunNextSetup() {
+    if (index < length) {
+      try {
+        suite.benchmarks[index].Setup();
+      } catch (e) {
+        suite.NotifyError(e);
+        return null;
+      }
+      return RunNextBenchmark;
+    }
+    suite.NotifyResult();
+    return null;
+  }
+
+  function RunNextBenchmark() {
+    try {
+      data = suite.RunSingleBenchmark(suite.benchmarks[index], data);
+    } catch (e) {
+      suite.NotifyError(e);
+      return null;
+    }
+    // If data is null, we're done with this benchmark.
+    return (data == null) ? RunNextTearDown : RunNextBenchmark();
+  }
+
+  function RunNextTearDown() {
+    try {
+      suite.benchmarks[index++].TearDown();
+    } catch (e) {
+      suite.NotifyError(e);
+      return null;
+    }
+    return RunNextSetup;
+  }
+
+  // Start out running the setup.
+  return RunNextSetup();
+}
+// Copyright 2009 the V8 project authors. All rights reserved.
+// Copyright (C) 2015 Apple Inc. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// This benchmark is based on a JavaScript log processing module used
+// by the V8 profiler to generate execution time profiles for runs of
+// JavaScript applications, and it effectively measures how fast the
+// JavaScript engine is at allocating nodes and reclaiming the memory
+// used for old nodes. Because of the way splay trees work, the engine
+// also has to deal with a lot of changes to the large tree object
+// graph.
+
+var Splay = new BenchmarkSuite('Splay', [81491, 2739514], [
+  new Benchmark("Splay", true, false, 
+    SplayRun, SplaySetup, SplayTearDown, SplayLatency)
+]);
+
+
+// Configuration.
+var kSplayTreeSize = 8000;
+var kSplayTreeModifications = 80;
+var kSplayTreePayloadDepth = 5;
+
+var splayTree = null;
+var splaySampleTimeStart = 0.0;
+
+function GeneratePayloadTree(depth, tag) {
+  if (depth == 0) {
+    return {
+      array  : [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ],
+      string : 'String for key ' + tag + ' in leaf node'
+    };
+  } else {
+    return {
+      left:  GeneratePayloadTree(depth - 1, tag),
+      right: GeneratePayloadTree(depth - 1, tag)
+    };
+  }
+}
+
+
+function GenerateKey() {
+  // The benchmark framework guarantees that Math.random is
+  // deterministic; see base.js.
+  return Math.random();
+}
+
+var splaySamples = [];
+
+function SplayLatency() {
+  return splaySamples;
+}
+
+function SplayUpdateStats(time) {
+  var pause = time - splaySampleTimeStart;
+  splaySampleTimeStart = time;
+  splaySamples.push(pause);
+}
+
+function InsertNewNode() {
+  // Insert new node with a unique key.
+  var key;
+  do {
+    key = GenerateKey();
+  } while (splayTree.find(key) != null);
+  var payload = GeneratePayloadTree(kSplayTreePayloadDepth, String(key));
+  splayTree.insert(key, payload);
+  return key;
+}
+
+
+function SplaySetup() {
+  // Check if the platform has the performance.now high resolution timer.
+  // If not, throw exception and quit.
+  if (!performance.now) {
+    throw "PerformanceNowUnsupported";
+  }
+
+  splayTree = new SplayTree();
+  splaySampleTimeStart = performance.now()
+  for (var i = 0; i < kSplayTreeSize; i++) {
+    InsertNewNode();
+    if ((i+1) % 20 == 19) {
+      SplayUpdateStats(performance.now());
+    }
+  }
+}
+
+
+function SplayTearDown() {
+  // Allow the garbage collector to reclaim the memory
+  // used by the splay tree no matter how we exit the
+  // tear down function.
+  var keys = splayTree.exportKeys();
+  splayTree = null;
+
+  splaySamples = [];
+
+  // Verify that the splay tree has the right size.
+  var length = keys.length;
+  if (length != kSplayTreeSize) {
+    throw new Error("Splay tree has wrong size");
+  }
+
+  // Verify that the splay tree has sorted, unique keys.
+  for (var i = 0; i < length - 1; i++) {
+    if (keys[i] >= keys[i + 1]) {
+      throw new Error("Splay tree not sorted");
+    }
+  }
+}
+
+
+function SplayRun() {
+  // Replace a few nodes in the splay tree.
+  for (var i = 0; i < kSplayTreeModifications; i++) {
+    var key = InsertNewNode();
+    var greatest = splayTree.findGreatestLessThan(key);
+    if (greatest == null) splayTree.remove(key);
+    else splayTree.remove(greatest.key);
+  }
+  SplayUpdateStats(performance.now());
+}
+
+
+/**
+ * Constructs a Splay tree.  A splay tree is a self-balancing binary
+ * search tree with the additional property that recently accessed
+ * elements are quick to access again. It performs basic operations
+ * such as insertion, look-up and removal in O(log(n)) amortized time.
+ *
+ * @constructor
+ */
+function SplayTree() {
+};
+
+
+/**
+ * Pointer to the root node of the tree.
+ *
+ * @type {SplayTree.Node}
+ * @private
+ */
+SplayTree.prototype.root_ = null;
+
+
+/**
+ * @return {boolean} Whether the tree is empty.
+ */
+SplayTree.prototype.isEmpty = function() {
+  return !this.root_;
+};
+
+
+/**
+ * Inserts a node into the tree with the specified key and value if
+ * the tree does not already contain a node with the specified key. If
+ * the value is inserted, it becomes the root of the tree.
+ *
+ * @param {number} key Key to insert into the tree.
+ * @param {*} value Value to insert into the tree.
+ */
+SplayTree.prototype.insert = function(key, value) {
+  if (this.isEmpty()) {
+    this.root_ = new SplayTree.Node(key, value);
+    return;
+  }
+  // Splay on the key to move the last node on the search path for
+  // the key to the root of the tree.
+  this.splay_(key);
+  if (this.root_.key == key) {
+    return;
+  }
+  var node = new SplayTree.Node(key, value);
+  if (key > this.root_.key) {
+    node.left = this.root_;
+    node.right = this.root_.right;
+    this.root_.right = null;
+  } else {
+    node.right = this.root_;
+    node.left = this.root_.left;
+    this.root_.left = null;
+  }
+  this.root_ = node;
+};
+
+
+/**
+ * Removes a node with the specified key from the tree if the tree
+ * contains a node with this key. The removed node is returned. If the
+ * key is not found, an exception is thrown.
+ *
+ * @param {number} key Key to find and remove from the tree.
+ * @return {SplayTree.Node} The removed node.
+ */
+SplayTree.prototype.remove = function(key) {
+  if (this.isEmpty()) {
+    throw Error('Key not found: ' + key);
+  }
+  this.splay_(key);
+  if (this.root_.key != key) {
+    throw Error('Key not found: ' + key);
+  }
+  var removed = this.root_;
+  if (!this.root_.left) {
+    this.root_ = this.root_.right;
+  } else {
+    var right = this.root_.right;
+    this.root_ = this.root_.left;
+    // Splay to make sure that the new root has an empty right child.
+    this.splay_(key);
+    // Insert the original right child as the right child of the new
+    // root.
+    this.root_.right = right;
+  }
+  return removed;
+};
+
+
+/**
+ * Returns the node having the specified key or null if the tree doesn't contain
+ * a node with the specified key.
+ *
+ * @param {number} key Key to find in the tree.
+ * @return {SplayTree.Node} Node having the specified key.
+ */
+SplayTree.prototype.find = function(key) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  this.splay_(key);
+  return this.root_.key == key ? this.root_ : null;
+};
+
+
+/**
+ * @return {SplayTree.Node} Node having the maximum key value.
+ */
+SplayTree.prototype.findMax = function(opt_startNode) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  var current = opt_startNode || this.root_;
+  while (current.right) {
+    current = current.right;
+  }
+  return current;
+};
+
+
+/**
+ * @return {SplayTree.Node} Node having the maximum key value that
+ *     is less than the specified key value.
+ */
+SplayTree.prototype.findGreatestLessThan = function(key) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  // Splay on the key to move the node with the given key or the last
+  // node on the search path to the top of the tree.
+  this.splay_(key);
+  // Now the result is either the root node or the greatest node in
+  // the left subtree.
+  if (this.root_.key < key) {
+    return this.root_;
+  } else if (this.root_.left) {
+    return this.findMax(this.root_.left);
+  } else {
+    return null;
+  }
+};
+
+
+/**
+ * @return {Array<*>} An array containing all the keys of tree's nodes.
+ */
+SplayTree.prototype.exportKeys = function() {
+  var result = [];
+  if (!this.isEmpty()) {
+    this.root_.traverse_(function(node) { result.push(node.key); });
+  }
+  return result;
+};
+
+
+/**
+ * Perform the splay operation for the given key. Moves the node with
+ * the given key to the top of the tree.  If no node has the given
+ * key, the last node on the search path is moved to the top of the
+ * tree. This is the simplified top-down splaying algorithm from:
+ * "Self-adjusting Binary Search Trees" by Sleator and Tarjan
+ *
+ * @param {number} key Key to splay the tree on.
+ * @private
+ */
+SplayTree.prototype.splay_ = function(key) {
+  if (this.isEmpty()) {
+    return;
+  }
+  // Create a dummy node.  The use of the dummy node is a bit
+  // counter-intuitive: The right child of the dummy node will hold
+  // the L tree of the algorithm.  The left child of the dummy node
+  // will hold the R tree of the algorithm.  Using a dummy node, left
+  // and right will always be nodes and we avoid special cases.
+  var dummy, left, right;
+  dummy = left = right = new SplayTree.Node(null, null);
+  var current = this.root_;
+  while (true) {
+    if (key < current.key) {
+      if (!current.left) {
+        break;
+      }
+      if (key < current.left.key) {
+        // Rotate right.
+        var tmp = current.left;
+        current.left = tmp.right;
+        tmp.right = current;
+        current = tmp;
+        if (!current.left) {
+          break;
+        }
+      }
+      // Link right.
+      right.left = current;
+      right = current;
+      current = current.left;
+    } else if (key > current.key) {
+      if (!current.right) {
+        break;
+      }
+      if (key > current.right.key) {
+        // Rotate left.
+        var tmp = current.right;
+        current.right = tmp.left;
+        tmp.left = current;
+        current = tmp;
+        if (!current.right) {
+          break;
+        }
+      }
+      // Link left.
+      left.right = current;
+      left = current;
+      current = current.right;
+    } else {
+      break;
+    }
+  }
+  // Assemble.
+  left.right = current.left;
+  right.left = current.right;
+  current.left = dummy.right;
+  current.right = dummy.left;
+  this.root_ = current;
+};
+
+
+/**
+ * Constructs a Splay tree node.
+ *
+ * @param {number} key Key.
+ * @param {*} value Value.
+ */
+SplayTree.Node = function(key, value) {
+  this.key = key;
+  this.value = value;
+};
+
+
+/**
+ * @type {SplayTree.Node}
+ */
+SplayTree.Node.prototype.left = null;
+
+
+/**
+ * @type {SplayTree.Node}
+ */
+SplayTree.Node.prototype.right = null;
+
+
+/**
+ * Performs an ordered traversal of the subtree starting at
+ * this SplayTree.Node.
+ *
+ * @param {function(SplayTree.Node)} f Visitor function.
+ * @private
+ */
+SplayTree.Node.prototype.traverse_ = function(f) {
+  var current = this;
+  while (current) {
+    var left = current.left;
+    if (left) left.traverse_(f);
+    f(current);
+    current = current.right;
+  }
+};
+function jscSetUp() {
+    SplaySetup();
+}
+
+function jscTearDown() {
+    SplayTearDown();
+}
+
+function jscRun() {
+    SplayRun();
+}
+
+jscSetUp();
+var __before = preciseTime();
+var times = [];
+for (var i = 0; i < 2000; ++i) {
+    var _before = preciseTime();
+    jscRun();
+    var _after = preciseTime();
+    times.push(_after - _before);
+    flashHeapAccess(1);
+}
+var __after = preciseTime();
+jscTearDown();
+
+function averageAbovePercentile(numbers, percentile) {
+    // Don't change the original array.
+    numbers = numbers.slice();
+    
+    // Sort in ascending order.
+    numbers.sort(function(a, b) { return a - b; });
+    
+    // Now the elements we want are at the end. Keep removing them until the array size shrinks too much.
+    // Examples assuming percentile = 99:
+    //
+    // - numbers.length starts at 100: we will remove just the worst entry and then not remove anymore,
+    //   since then numbers.length / originalLength = 0.99.
+    //
+    // - numbers.length starts at 1000: we will remove the ten worst.
+    //
+    // - numbers.length starts at 10: we will remove just the worst.
+    var numbersWeWant = [];
+    var originalLength = numbers.length;
+    while (numbers.length / originalLength > percentile / 100)
+        numbersWeWant.push(numbers.pop());
+    
+    var sum = 0;
+    for (var i = 0; i < numbersWeWant.length; ++i)
+        sum += numbersWeWant[i];
+    
+    var result = sum / numbersWeWant.length;
+    
+    // Do a sanity check.
+    if (numbers.length && result < numbers[numbers.length - 1]) {
+        throw "Sanity check fail: the worst case result is " + result +
+            " but we didn't take into account " + numbers;
+    }
+    
+    return result;
+}
+
+print("That took " + (__after - __before) * 1000 + " ms.");
+
+function printPercentile(percentile)
+{
+    print("Above " + percentile + "%: " + averageAbovePercentile(times, percentile) * 1000 + " ms.");
+}
+
+printPercentile(99.9);
+printPercentile(99.5);
+printPercentile(99);
+printPercentile(97.5);
+printPercentile(95);
+printPercentile(90);
+printPercentile(75);
+printPercentile(50);
+printPercentile(0);
+
+gc();
diff --git a/JSTests/stress/splay-flash-access.js b/JSTests/stress/splay-flash-access.js
new file mode 100644
index 0000000..e66c41b
--- /dev/null
@@ -0,0 +1,903 @@
+//@ runNoisyTestDefault
+//@ runNoisyTestNoCJIT
+
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Copyright (C) 2015 Apple Inc. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+// Performance.now is used in latency benchmarks; the fallback is Date.now.
+var performance = performance || {};
+performance.now = (function() {
+  return performance.now       ||
+         performance.mozNow    ||
+         performance.msNow     ||
+         performance.oNow      ||
+         performance.webkitNow ||
+         Date.now;
+})();
+
+// Simple framework for running the benchmark suites and
+// computing a score based on the timing measurements.
+
+
+// A benchmark has a name (string) and a function that will be run to
+// do the performance measurement. The optional setup and tearDown
+// arguments are functions that will be invoked before and after
+// running the benchmark, but the running time of these functions will
+// not be accounted for in the benchmark score.
+function Benchmark(name, doWarmup, doDeterministic, run, setup, tearDown, latencyResult, minIterations) {
+  this.name = name;
+  this.doWarmup = doWarmup;
+  this.doDeterministic = doDeterministic;
+  this.run = run;
+  this.Setup = setup ? setup : function() { };
+  this.TearDown = tearDown ? tearDown : function() { };
+  this.latencyResult = latencyResult ? latencyResult : null; 
+  this.minIterations = minIterations ? minIterations : 32;
+}
+
+
+// Benchmark results hold the benchmark and the measured time used to
+// run the benchmark. The benchmark score is computed later once a
+// full benchmark suite has run to completion. If latency is set to 0
+// then there is no latency score for this benchmark.
+function BenchmarkResult(benchmark, time, latency) {
+  this.benchmark = benchmark;
+  this.time = time;
+  this.latency = latency;
+}
+
+
+// Automatically convert results to numbers. Used by the geometric
+// mean computation.
+BenchmarkResult.prototype.valueOf = function() {
+  return this.time;
+}
+
+
+// Suites of benchmarks consist of a name and the set of benchmarks in
+// addition to the reference timing that the final score will be based
+// on. This way, all scores are relative to a reference run and higher
+// scores imply better performance.
+function BenchmarkSuite(name, reference, benchmarks) {
+  this.name = name;
+  this.reference = reference;
+  this.benchmarks = benchmarks;
+  BenchmarkSuite.suites.push(this);
+}
+
+
+// Keep track of all declared benchmark suites.
+BenchmarkSuite.suites = [];
+
+// Scores are not comparable across versions. Bump the version if
+// you're making changes that will affect that scores, e.g. if you add
+// a new benchmark or change an existing one.
+BenchmarkSuite.version = '9';
+
+// Override the alert function to throw an exception instead.
+alert = function(s) {
+  throw "Alert called with argument: " + s;
+};
+
+
+// To make the benchmark results predictable, we replace Math.random
+// with a 100% deterministic alternative.
+BenchmarkSuite.ResetRNG = function() {
+  Math.random = (function() {
+    var seed = 49734321;
+    return function() {
+      // Robert Jenkins' 32 bit integer hash function.
+      seed = ((seed + 0x7ed55d16) + (seed << 12))  & 0xffffffff;
+      seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
+      seed = ((seed + 0x165667b1) + (seed << 5))   & 0xffffffff;
+      seed = ((seed + 0xd3a2646c) ^ (seed << 9))   & 0xffffffff;
+      seed = ((seed + 0xfd7046c5) + (seed << 3))   & 0xffffffff;
+      seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
+      return (seed & 0xfffffff) / 0x10000000;
+    };
+  })();
+}
+
+
+// Runs all registered benchmark suites and optionally yields between
+// each individual benchmark to avoid running for too long in the
+// context of browsers. Once done, the final score is reported to the
+// runner.
+BenchmarkSuite.RunSuites = function(runner) {
+  var continuation = null;
+  var suites = BenchmarkSuite.suites;
+  var length = suites.length;
+  BenchmarkSuite.scores = [];
+  var index = 0;
+  function RunStep() {
+    while (continuation || index < length) {
+      if (continuation) {
+        continuation = continuation();
+      } else {
+        var suite = suites[index++];
+        if (runner.NotifyStart) runner.NotifyStart(suite.name);
+        continuation = suite.RunStep(runner);
+      }
+      if (continuation && typeof window != 'undefined' && window.setTimeout) {
+        window.setTimeout(RunStep, 25);
+        return;
+      }
+    }
+
+    // show final result
+    if (runner.NotifyScore) {
+      var score = BenchmarkSuite.GeometricMean(BenchmarkSuite.scores);
+      var formatted = BenchmarkSuite.FormatScore(100 * score);
+      runner.NotifyScore(formatted);
+    }
+  }
+  RunStep();
+}
+
+
+// Counts the total number of registered benchmarks. Useful for
+// showing progress as a percentage.
+BenchmarkSuite.CountBenchmarks = function() {
+  var result = 0;
+  var suites = BenchmarkSuite.suites;
+  for (var i = 0; i < suites.length; i++) {
+    result += suites[i].benchmarks.length;
+  }
+  return result;
+}
+
+
+// Computes the geometric mean of a set of numbers.
+BenchmarkSuite.GeometricMean = function(numbers) {
+  var log = 0;
+  for (var i = 0; i < numbers.length; i++) {
+    log += Math.log(numbers[i]);
+  }
+  return Math.pow(Math.E, log / numbers.length);
+}
+
+
+// Computes the geometric mean of a set of throughput time measurements.
+BenchmarkSuite.GeometricMeanTime = function(measurements) {
+  var log = 0;
+  for (var i = 0; i < measurements.length; i++) {
+    log += Math.log(measurements[i].time);
+  }
+  return Math.pow(Math.E, log / measurements.length);
+}
+
+
+// Computes the average of the worst samples. For example, if percentile is 99, this will report the
+// average of the worst 1% of the samples.
+BenchmarkSuite.AverageAbovePercentile = function(numbers, percentile) {
+  // Don't change the original array.
+  numbers = numbers.slice();
+  
+  // Sort in ascending order.
+  numbers.sort(function(a, b) { return a - b; });
+  
+  // Now the elements we want are at the end. Keep removing them until the array size shrinks too much.
+  // Examples assuming percentile = 99:
+  //
+  // - numbers.length starts at 100: we will remove just the worst entry and then not remove anymore,
+  //   since then numbers.length / originalLength = 0.99.
+  //
+  // - numbers.length starts at 1000: we will remove the ten worst.
+  //
+  // - numbers.length starts at 10: we will remove just the worst.
+  var numbersWeWant = [];
+  var originalLength = numbers.length;
+  while (numbers.length / originalLength > percentile / 100)
+    numbersWeWant.push(numbers.pop());
+  
+  var sum = 0;
+  for (var i = 0; i < numbersWeWant.length; ++i)
+    sum += numbersWeWant[i];
+  
+  var result = sum / numbersWeWant.length;
+  
+  // Do a sanity check.
+  if (numbers.length && result < numbers[numbers.length - 1]) {
+    throw "Sanity check fail: the worst case result is " + result +
+      " but we didn't take into account " + numbers;
+  }
+  
+  return result;
+}
+
+
+// Computes the geometric mean of a set of latency measurements.
+BenchmarkSuite.GeometricMeanLatency = function(measurements) {
+  var log = 0;
+  var hasLatencyResult = false;
+  for (var i = 0; i < measurements.length; i++) {
+    if (measurements[i].latency != 0) {
+      log += Math.log(measurements[i].latency);
+      hasLatencyResult = true;
+    }
+  }
+  if (hasLatencyResult) {
+    return Math.pow(Math.E, log / measurements.length);
+  } else {
+    return 0;
+  }
+}
+
+
+// Converts a score value to a string with at least three significant
+// digits.
+BenchmarkSuite.FormatScore = function(value) {
+  if (value > 100) {
+    return value.toFixed(0);
+  } else {
+    return value.toPrecision(3);
+  }
+}
+
+// Notifies the runner that we're done running a single benchmark in
+// the benchmark suite. This can be useful to report progress.
+BenchmarkSuite.prototype.NotifyStep = function(result) {
+  this.results.push(result);
+  if (this.runner.NotifyStep) this.runner.NotifyStep(result.benchmark.name);
+}
+
+
+// Notifies the runner that we're done with running a suite and that
+// we have a result which can be reported to the user if needed.
+BenchmarkSuite.prototype.NotifyResult = function() {
+  var mean = BenchmarkSuite.GeometricMeanTime(this.results);
+  var score = this.reference[0] / mean;
+  BenchmarkSuite.scores.push(score);
+  if (this.runner.NotifyResult) {
+    var formatted = BenchmarkSuite.FormatScore(100 * score);
+    this.runner.NotifyResult(this.name, formatted);
+  }
+  if (this.reference.length == 2) {
+    var meanLatency = BenchmarkSuite.GeometricMeanLatency(this.results);
+    if (meanLatency != 0) {
+      var scoreLatency = this.reference[1] / meanLatency;
+      BenchmarkSuite.scores.push(scoreLatency);
+      if (this.runner.NotifyResult) {
+        var formattedLatency = BenchmarkSuite.FormatScore(100 * scoreLatency);
+        this.runner.NotifyResult(this.name + "Latency", formattedLatency);
+      }
+    }
+  }
+}
+
+
+// Notifies the runner that running a benchmark resulted in an error.
+BenchmarkSuite.prototype.NotifyError = function(error) {
+  if (this.runner.NotifyError) {
+    this.runner.NotifyError(this.name, error);
+  }
+  if (this.runner.NotifyStep) {
+    this.runner.NotifyStep(this.name);
+  }
+}
+
+
+// Runs a single benchmark for at least a second and computes the
+// average time it takes to run a single iteration.
+BenchmarkSuite.prototype.RunSingleBenchmark = function(benchmark, data) {
+  function Measure(data) {
+    var elapsed = 0;
+    var start = new Date();
+
+    // Run either for 1 second or for the number of iterations specified
+    // by minIterations, depending on the config flag doDeterministic.
+    for (var i = 0; (benchmark.doDeterministic ?
+      i < benchmark.minIterations : elapsed < 1000); i++) {
+      benchmark.run();
+      elapsed = new Date() - start;
+    }
+    if (data != null) {
+      data.runs += i;
+      data.elapsed += elapsed;
+    }
+  }
+
+  // If warmup is disabled, initialize data up front so the first Measure call already records results.
+  if (!benchmark.doWarmup && data == null) {
+    data = { runs: 0, elapsed: 0 };
+  }
+
+  if (data == null) {
+    Measure(null);
+    return { runs: 0, elapsed: 0 };
+  } else {
+    Measure(data);
+    // If we've run too few iterations, we continue for another second.
+    if (data.runs < benchmark.minIterations) return data;
+    var usec = (data.elapsed * 1000) / data.runs;
+    var latencySamples = (benchmark.latencyResult != null) ? benchmark.latencyResult() : [0];
+    var percentile = 99.5;
+    var latency = BenchmarkSuite.AverageAbovePercentile(latencySamples, percentile) * 1000;
+    this.NotifyStep(new BenchmarkResult(benchmark, usec, latency));
+    return null;
+  }
+}
+
+
+// This function starts running a suite, but stops between each
+// individual benchmark in the suite and returns a continuation
+// function which can be invoked to run the next benchmark. Once the
+// last benchmark has been executed, null is returned.
+BenchmarkSuite.prototype.RunStep = function(runner) {
+  BenchmarkSuite.ResetRNG();
+  this.results = [];
+  this.runner = runner;
+  var length = this.benchmarks.length;
+  var index = 0;
+  var suite = this;
+  var data;
+
+  // Run the setup, the actual benchmark, and the tear down in three
+  // separate steps to allow the framework to yield between any of the
+  // steps.
+
+  function RunNextSetup() {
+    if (index < length) {
+      try {
+        suite.benchmarks[index].Setup();
+      } catch (e) {
+        suite.NotifyError(e);
+        return null;
+      }
+      return RunNextBenchmark;
+    }
+    suite.NotifyResult();
+    return null;
+  }
+
+  function RunNextBenchmark() {
+    try {
+      data = suite.RunSingleBenchmark(suite.benchmarks[index], data);
+    } catch (e) {
+      suite.NotifyError(e);
+      return null;
+    }
+    // If data is null, we're done with this benchmark.
+    return (data == null) ? RunNextTearDown : RunNextBenchmark();
+  }
+
+  function RunNextTearDown() {
+    try {
+      suite.benchmarks[index++].TearDown();
+    } catch (e) {
+      suite.NotifyError(e);
+      return null;
+    }
+    return RunNextSetup;
+  }
+
+  // Start out running the setup.
+  return RunNextSetup();
+}
+// Copyright 2009 the V8 project authors. All rights reserved.
+// Copyright (C) 2015 Apple Inc. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// This benchmark is based on a JavaScript log processing module used
+// by the V8 profiler to generate execution time profiles for runs of
+// JavaScript applications, and it effectively measures how fast the
+// JavaScript engine is at allocating nodes and reclaiming the memory
+// used for old nodes. Because of the way splay trees work, the engine
+// also has to deal with a lot of changes to the large tree object
+// graph.
+
+var Splay = new BenchmarkSuite('Splay', [81491, 2739514], [
+  new Benchmark("Splay", true, false, 
+    SplayRun, SplaySetup, SplayTearDown, SplayLatency)
+]);
+
+
+// Configuration.
+var kSplayTreeSize = 8000;
+var kSplayTreeModifications = 80;
+var kSplayTreePayloadDepth = 5;
+
+var splayTree = null;
+var splaySampleTimeStart = 0.0;
+
+function GeneratePayloadTree(depth, tag) {
+  if (depth == 0) {
+    return {
+      array  : [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ],
+      string : 'String for key ' + tag + ' in leaf node'
+    };
+  } else {
+    return {
+      left:  GeneratePayloadTree(depth - 1, tag),
+      right: GeneratePayloadTree(depth - 1, tag)
+    };
+  }
+}
+
+
+function GenerateKey() {
+  // The benchmark framework guarantees that Math.random is
+  // deterministic; see base.js.
+  return Math.random();
+}
+
+var splaySamples = [];
+
+function SplayLatency() {
+  return splaySamples;
+}
+
+function SplayUpdateStats(time) {
+  var pause = time - splaySampleTimeStart;
+  splaySampleTimeStart = time;
+  splaySamples.push(pause);
+}
+
+function InsertNewNode() {
+  // Insert new node with a unique key.
+  var key;
+  do {
+    key = GenerateKey();
+  } while (splayTree.find(key) != null);
+  var payload = GeneratePayloadTree(kSplayTreePayloadDepth, String(key));
+  splayTree.insert(key, payload);
+  return key;
+}
+
+
+function SplaySetup() {
+  // Check if the platform has the performance.now high resolution timer.
+  // If not, throw exception and quit.
+  if (!performance.now) {
+    throw "PerformanceNowUnsupported";
+  }
+
+  splayTree = new SplayTree();
+  splaySampleTimeStart = performance.now();
+  for (var i = 0; i < kSplayTreeSize; i++) {
+    InsertNewNode();
+    if ((i+1) % 20 == 19) {
+      SplayUpdateStats(performance.now());
+    }
+  }
+}
+
+
+function SplayTearDown() {
+  // Allow the garbage collector to reclaim the memory
+  // used by the splay tree no matter how we exit the
+  // tear down function.
+  var keys = splayTree.exportKeys();
+  splayTree = null;
+
+  splaySamples = [];
+
+  // Verify that the splay tree has the right size.
+  var length = keys.length;
+  if (length != kSplayTreeSize) {
+    throw new Error("Splay tree has wrong size");
+  }
+
+  // Verify that the splay tree has sorted, unique keys.
+  for (var i = 0; i < length - 1; i++) {
+    if (keys[i] >= keys[i + 1]) {
+      throw new Error("Splay tree not sorted");
+    }
+  }
+}
+
+
+function SplayRun() {
+  // Replace a few nodes in the splay tree.
+  for (var i = 0; i < kSplayTreeModifications; i++) {
+    var key = InsertNewNode();
+    var greatest = splayTree.findGreatestLessThan(key);
+    if (greatest == null) splayTree.remove(key);
+    else splayTree.remove(greatest.key);
+  }
+  SplayUpdateStats(performance.now());
+}
+
+
+/**
+ * Constructs a Splay tree.  A splay tree is a self-balancing binary
+ * search tree with the additional property that recently accessed
+ * elements are quick to access again. It performs basic operations
+ * such as insertion, look-up and removal in O(log(n)) amortized time.
+ *
+ * @constructor
+ */
+function SplayTree() {
+}
+
+
+/**
+ * Pointer to the root node of the tree.
+ *
+ * @type {SplayTree.Node}
+ * @private
+ */
+SplayTree.prototype.root_ = null;
+
+
+/**
+ * @return {boolean} Whether the tree is empty.
+ */
+SplayTree.prototype.isEmpty = function() {
+  return !this.root_;
+};
+
+
+/**
+ * Inserts a node into the tree with the specified key and value if
+ * the tree does not already contain a node with the specified key. If
+ * the value is inserted, it becomes the root of the tree.
+ *
+ * @param {number} key Key to insert into the tree.
+ * @param {*} value Value to insert into the tree.
+ */
+SplayTree.prototype.insert = function(key, value) {
+  if (this.isEmpty()) {
+    this.root_ = new SplayTree.Node(key, value);
+    return;
+  }
+  // Splay on the key to move the last node on the search path for
+  // the key to the root of the tree.
+  this.splay_(key);
+  if (this.root_.key == key) {
+    return;
+  }
+  var node = new SplayTree.Node(key, value);
+  if (key > this.root_.key) {
+    node.left = this.root_;
+    node.right = this.root_.right;
+    this.root_.right = null;
+  } else {
+    node.right = this.root_;
+    node.left = this.root_.left;
+    this.root_.left = null;
+  }
+  this.root_ = node;
+};
+
+
+/**
+ * Removes a node with the specified key from the tree if the tree
+ * contains a node with this key. The removed node is returned. If the
+ * key is not found, an exception is thrown.
+ *
+ * @param {number} key Key to find and remove from the tree.
+ * @return {SplayTree.Node} The removed node.
+ */
+SplayTree.prototype.remove = function(key) {
+  if (this.isEmpty()) {
+    throw Error('Key not found: ' + key);
+  }
+  this.splay_(key);
+  if (this.root_.key != key) {
+    throw Error('Key not found: ' + key);
+  }
+  var removed = this.root_;
+  if (!this.root_.left) {
+    this.root_ = this.root_.right;
+  } else {
+    var right = this.root_.right;
+    this.root_ = this.root_.left;
+    // Splay to make sure that the new root has an empty right child.
+    this.splay_(key);
+    // Insert the original right child as the right child of the new
+    // root.
+    this.root_.right = right;
+  }
+  return removed;
+};
+
+
+/**
+ * Returns the node having the specified key or null if the tree doesn't contain
+ * a node with the specified key.
+ *
+ * @param {number} key Key to find in the tree.
+ * @return {SplayTree.Node} Node having the specified key.
+ */
+SplayTree.prototype.find = function(key) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  this.splay_(key);
+  return this.root_.key == key ? this.root_ : null;
+};
+
+
+/**
+ * @return {SplayTree.Node} Node having the maximum key value.
+ */
+SplayTree.prototype.findMax = function(opt_startNode) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  var current = opt_startNode || this.root_;
+  while (current.right) {
+    current = current.right;
+  }
+  return current;
+};
+
+
+/**
+ * @return {SplayTree.Node} Node having the maximum key value that
+ *     is less than the specified key value.
+ */
+SplayTree.prototype.findGreatestLessThan = function(key) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  // Splay on the key to move the node with the given key or the last
+  // node on the search path to the top of the tree.
+  this.splay_(key);
+  // Now the result is either the root node or the greatest node in
+  // the left subtree.
+  if (this.root_.key < key) {
+    return this.root_;
+  } else if (this.root_.left) {
+    return this.findMax(this.root_.left);
+  } else {
+    return null;
+  }
+};
+
+
+/**
+ * @return {Array<*>} An array containing all the keys of the tree's nodes.
+ */
+SplayTree.prototype.exportKeys = function() {
+  var result = [];
+  if (!this.isEmpty()) {
+    this.root_.traverse_(function(node) { result.push(node.key); });
+  }
+  return result;
+};
+
+
+/**
+ * Perform the splay operation for the given key. Moves the node with
+ * the given key to the top of the tree.  If no node has the given
+ * key, the last node on the search path is moved to the top of the
+ * tree. This is the simplified top-down splaying algorithm from:
+ * "Self-adjusting Binary Search Trees" by Sleator and Tarjan
+ *
+ * @param {number} key Key to splay the tree on.
+ * @private
+ */
+SplayTree.prototype.splay_ = function(key) {
+  if (this.isEmpty()) {
+    return;
+  }
+  // Create a dummy node.  The use of the dummy node is a bit
+  // counter-intuitive: The right child of the dummy node will hold
+  // the L tree of the algorithm.  The left child of the dummy node
+  // will hold the R tree of the algorithm.  Using a dummy node, left
+  // and right will always be nodes and we avoid special cases.
+  var dummy, left, right;
+  dummy = left = right = new SplayTree.Node(null, null);
+  var current = this.root_;
+  while (true) {
+    if (key < current.key) {
+      if (!current.left) {
+        break;
+      }
+      if (key < current.left.key) {
+        // Rotate right.
+        var tmp = current.left;
+        current.left = tmp.right;
+        tmp.right = current;
+        current = tmp;
+        if (!current.left) {
+          break;
+        }
+      }
+      // Link right.
+      right.left = current;
+      right = current;
+      current = current.left;
+    } else if (key > current.key) {
+      if (!current.right) {
+        break;
+      }
+      if (key > current.right.key) {
+        // Rotate left.
+        var tmp = current.right;
+        current.right = tmp.left;
+        tmp.left = current;
+        current = tmp;
+        if (!current.right) {
+          break;
+        }
+      }
+      // Link left.
+      left.right = current;
+      left = current;
+      current = current.right;
+    } else {
+      break;
+    }
+  }
+  // Assemble.
+  left.right = current.left;
+  right.left = current.right;
+  current.left = dummy.right;
+  current.right = dummy.left;
+  this.root_ = current;
+};
+
+
+/**
+ * Constructs a Splay tree node.
+ *
+ * @param {number} key Key.
+ * @param {*} value Value.
+ */
+SplayTree.Node = function(key, value) {
+  this.key = key;
+  this.value = value;
+};
+
+
+/**
+ * @type {SplayTree.Node}
+ */
+SplayTree.Node.prototype.left = null;
+
+
+/**
+ * @type {SplayTree.Node}
+ */
+SplayTree.Node.prototype.right = null;
+
+
+/**
+ * Performs an ordered traversal of the subtree starting at
+ * this SplayTree.Node.
+ *
+ * @param {function(SplayTree.Node)} f Visitor function.
+ * @private
+ */
+SplayTree.Node.prototype.traverse_ = function(f) {
+  var current = this;
+  while (current) {
+    var left = current.left;
+    if (left) left.traverse_(f);
+    f(current);
+    current = current.right;
+  }
+};
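+
+// Illustrative use of the SplayTree API defined above (editor's example, not
+// part of the benchmark):
+//
+//   var tree = new SplayTree();
+//   tree.insert(42, "payload");      // 42 becomes the root.
+//   tree.find(42).value;             // "payload"
+//   tree.findGreatestLessThan(100);  // the node with key 42
+//   tree.remove(42);                 // returns the removed node
+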
+function jscSetUp() {
+    SplaySetup();
+}
+
+function jscTearDown() {
+    SplayTearDown();
+}
+
+function jscRun() {
+    SplayRun();
+}
+
+jscSetUp();
+var __before = preciseTime();
+var times = [];
+for (var i = 0; i < 10000; ++i) {
+//for (var i = 0; i < 1000000; ++i) {
+    var _before = preciseTime();
+    jscRun();
+    var _after = preciseTime();
+    times.push(_after - _before);
+    flashHeapAccess();
+}
+var __after = preciseTime();
+jscTearDown();
+
+function averageAbovePercentile(numbers, percentile) {
+    // Don't change the original array.
+    numbers = numbers.slice();
+    
+    // Sort in ascending order.
+    numbers.sort(function(a, b) { return a - b; });
+    
+    // Now the elements we want are at the end. Keep removing them until the array size shrinks too much.
+    // Examples assuming percentile = 99:
+    //
+    // - numbers.length starts at 100: we will remove just the worst entry and then not remove anymore,
+    //   since then numbers.length / originalLength = 0.99.
+    //
+    // - numbers.length starts at 1000: we will remove the ten worst.
+    //
+    // - numbers.length starts at 10: we will remove just the worst.
+    var numbersWeWant = [];
+    var originalLength = numbers.length;
+    while (numbers.length / originalLength > percentile / 100)
+        numbersWeWant.push(numbers.pop());
+    
+    var sum = 0;
+    for (var i = 0; i < numbersWeWant.length; ++i)
+        sum += numbersWeWant[i];
+    
+    var result = sum / numbersWeWant.length;
+    
+    // Do a sanity check.
+    if (numbers.length && result < numbers[numbers.length - 1]) {
+        throw "Sanity check fail: the worst case result is " + result +
+            " but we didn't take into account " + numbers;
+    }
+    
+    return result;
+}
+
+print("That took " + (__after - __before) * 1000 + " ms.");
+
+function printPercentile(percentile)
+{
+    print("Above " + percentile + "%: " + averageAbovePercentile(times, percentile) * 1000 + " ms.");
+}
+
+printPercentile(99.9);
+printPercentile(99.5);
+printPercentile(99);
+printPercentile(97.5);
+printPercentile(95);
+printPercentile(90);
+printPercentile(75);
+printPercentile(50);
+printPercentile(0);
+
+gc();
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index ba73878..f8b8797 100644
@@ -475,6 +475,7 @@ set(JavaScriptCore_SOURCES
     heap/CellContainer.cpp
     heap/CodeBlockSet.cpp
     heap/CollectionScope.cpp
+    heap/CollectorPhase.cpp
     heap/ConservativeRoots.cpp
     heap/DeferGC.cpp
     heap/DestructionMode.cpp
@@ -482,6 +483,7 @@ set(JavaScriptCore_SOURCES
     heap/FullGCActivityCallback.cpp
     heap/FreeList.cpp
     heap/GCActivityCallback.cpp
+    heap/GCConductor.cpp
     heap/GCLogging.cpp
     heap/HandleSet.cpp
     heap/HandleStack.cpp
@@ -491,7 +493,6 @@ set(JavaScriptCore_SOURCES
     heap/HeapProfiler.cpp
     heap/HeapSnapshot.cpp
     heap/HeapSnapshotBuilder.cpp
-    heap/HeapStatistics.cpp
     heap/HeapTimer.cpp
     heap/HeapVerifier.cpp
     heap/IncrementalSweeper.cpp
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index ea97484..911e5ef 100644
@@ -1,3 +1,190 @@
+2017-02-20  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+        
+        This turns the collector thread's workflow into a state machine, so that the mutator thread can
+        run it directly. This reduces the amount of synchronization we do with the collector thread, and
+        means that most apps will never start the collector thread. The collector thread will still start
+        when we need to finish collecting and we don't have heap access.
+        
+        In this new world, "stopping the world" means relinquishing control of collection to the mutator.
+        This means tracking who is conducting collection. I use the GCConductor enum to say who is
+        conducting. It's either GCConductor::Mutator or GCConductor::Collector. I use the term "conn" to
+        refer to the concept of conducting (having the conn, relinquishing the conn, taking the conn).
+        So, stopping the world means giving the mutator the conn. Releasing heap access means giving the
+        collector the conn.
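+
+        As a rough sketch of the conducting concept (an editor's illustration;
+        only the GCConductor name and its two values appear in this change, and
+        the helper below is hypothetical):
+
+            enum class GCConductor : uint8_t { Mutator, Collector };
+
+            // Whoever has the conn runs collector phases; handing off the conn
+            // replaces suspending the other side's thread.
+            inline bool mutatorHasConn(GCConductor conn)
+            {
+                return conn == GCConductor::Mutator;
+            }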
+        
+        This meant bringing back the conservative scan of the calling thread. It turns out that this
+        scan was too slow to be called on each GC increment because apparently setjmp() now does system
+        calls. So, I wrote our own callee-save register saving for the GC. Then I had doubts about
+        whether or not it was correct, so I also made it so that the GC only rarely asks for the register
+        state. I think we still want to use my register saving code instead of setjmp because setjmp
+        seems to save things we don't need, and that could make us overly conservative.
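+
+        For reference, the setjmp()-based alternative discussed above amounts
+        to something like this (an editor's sketch, not the contents of
+        RegisterState.h; the scan callback is hypothetical):
+
+            #include <setjmp.h>
+
+            void scanCallingThreadRegisters(void (*scan)(void*, size_t))
+            {
+                jmp_buf buffer;
+                setjmp(buffer); // Spills registers, callee-saves included, into buffer.
+                scan(&buffer, sizeof(buffer)); // Conservatively scan the spilled state.
+            }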
+        
+        It turns out that this new scheduling discipline makes the old space-time scheduler perform
+        better than the new stochastic space-time scheduler on systems with fewer than 4 cores. This is
+        because the mutator having the conn enables us to time the mutator<->collector context switches
+        by polling. The OS is never involved. So, we can use super precise timing. This allows the old
+        space-time scheduler to shine like it hadn't before.
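+
+        In shape, the polling is just a tight loop over a monotonic clock (a
+        hypothetical sketch; the real logic lives in the mutator schedulers and
+        Heap::performIncrement):
+
+            // The mutator runs bounded slices of collector work and checks the
+            // clock itself, so no OS-level thread suspension is involved.
+            while (monotonicClockNow() < sliceDeadline)
+                runOneCollectorIncrement();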
+        
+        The splay results imply that this is all a good thing. On 2-core systems, this reduces pause
+        times by 40% and it increases throughput about 5%. On 1-core systems, this reduces pause times by
+        half and reduces throughput by 8%. On 4-or-more-core systems, this doesn't seem to have much
+        effect.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::visitChildren):
+        * dfg/DFGWorklist.cpp:
+        (JSC::DFG::Worklist::ThreadBody::ThreadBody):
+        (JSC::DFG::Worklist::dump):
+        (JSC::DFG::numberOfWorklists):
+        (JSC::DFG::ensureWorklistForIndex):
+        (JSC::DFG::existingWorklistForIndexOrNull):
+        (JSC::DFG::existingWorklistForIndex):
+        * dfg/DFGWorklist.h:
+        (JSC::DFG::numberOfWorklists): Deleted.
+        (JSC::DFG::ensureWorklistForIndex): Deleted.
+        (JSC::DFG::existingWorklistForIndexOrNull): Deleted.
+        (JSC::DFG::existingWorklistForIndex): Deleted.
+        * heap/CollectingScope.h: Added.
+        (JSC::CollectingScope::CollectingScope):
+        (JSC::CollectingScope::~CollectingScope):
+        * heap/CollectorPhase.cpp: Added.
+        (JSC::worldShouldBeSuspended):
+        (WTF::printInternal):
+        * heap/CollectorPhase.h: Added.
+        * heap/EdenGCActivityCallback.cpp:
+        (JSC::EdenGCActivityCallback::lastGCLength):
+        * heap/FullGCActivityCallback.cpp:
+        (JSC::FullGCActivityCallback::doCollection):
+        (JSC::FullGCActivityCallback::lastGCLength):
+        * heap/GCConductor.cpp: Added.
+        (JSC::gcConductorShortName):
+        (WTF::printInternal):
+        * heap/GCConductor.h: Added.
+        * heap/GCFinalizationCallback.cpp: Added.
+        (JSC::GCFinalizationCallback::GCFinalizationCallback):
+        (JSC::GCFinalizationCallback::~GCFinalizationCallback):
+        * heap/GCFinalizationCallback.h: Added.
+        (JSC::GCFinalizationCallbackFuncAdaptor::GCFinalizationCallbackFuncAdaptor):
+        (JSC::createGCFinalizationCallback):
+        * heap/Heap.cpp:
+        (JSC::Heap::Thread::Thread):
+        (JSC::Heap::Heap):
+        (JSC::Heap::lastChanceToFinalize):
+        (JSC::Heap::gatherStackRoots):
+        (JSC::Heap::updateObjectCounts):
+        (JSC::Heap::sweepSynchronously):
+        (JSC::Heap::collectAllGarbage):
+        (JSC::Heap::collectAsync):
+        (JSC::Heap::collectSync):
+        (JSC::Heap::shouldCollectInCollectorThread):
+        (JSC::Heap::collectInCollectorThread):
+        (JSC::Heap::checkConn):
+        (JSC::Heap::runNotRunningPhase):
+        (JSC::Heap::runBeginPhase):
+        (JSC::Heap::runFixpointPhase):
+        (JSC::Heap::runConcurrentPhase):
+        (JSC::Heap::runReloopPhase):
+        (JSC::Heap::runEndPhase):
+        (JSC::Heap::changePhase):
+        (JSC::Heap::finishChangingPhase):
+        (JSC::Heap::stopThePeriphery):
+        (JSC::Heap::resumeThePeriphery):
+        (JSC::Heap::stopTheMutator):
+        (JSC::Heap::resumeTheMutator):
+        (JSC::Heap::stopIfNecessarySlow):
+        (JSC::Heap::collectInMutatorThread):
+        (JSC::Heap::waitForCollector):
+        (JSC::Heap::acquireAccessSlow):
+        (JSC::Heap::releaseAccessSlow):
+        (JSC::Heap::relinquishConn):
+        (JSC::Heap::finishRelinquishingConn):
+        (JSC::Heap::handleNeedFinalize):
+        (JSC::Heap::notifyThreadStopping):
+        (JSC::Heap::finalize):
+        (JSC::Heap::addFinalizationCallback):
+        (JSC::Heap::requestCollection):
+        (JSC::Heap::waitForCollection):
+        (JSC::Heap::updateAllocationLimits):
+        (JSC::Heap::didFinishCollection):
+        (JSC::Heap::collectIfNecessaryOrDefer):
+        (JSC::Heap::notifyIsSafeToCollect):
+        (JSC::Heap::preventCollection):
+        (JSC::Heap::performIncrement):
+        (JSC::Heap::markToFixpoint): Deleted.
+        (JSC::Heap::shouldCollectInThread): Deleted.
+        (JSC::Heap::collectInThread): Deleted.
+        (JSC::Heap::stopTheWorld): Deleted.
+        (JSC::Heap::resumeTheWorld): Deleted.
+        * heap/Heap.h:
+        (JSC::Heap::machineThreads):
+        (JSC::Heap::lastFullGCLength):
+        (JSC::Heap::lastEdenGCLength):
+        (JSC::Heap::increaseLastFullGCLength):
+        * heap/HeapInlines.h:
+        (JSC::Heap::mutatorIsStopped): Deleted.
+        * heap/HeapStatistics.cpp: Removed.
+        * heap/HeapStatistics.h: Removed.
+        * heap/HelpingGCScope.h: Removed.
+        * heap/IncrementalSweeper.cpp:
+        (JSC::IncrementalSweeper::stopSweeping):
+        (JSC::IncrementalSweeper::willFinishSweeping): Deleted.
+        * heap/IncrementalSweeper.h:
+        * heap/MachineStackMarker.cpp:
+        (JSC::MachineThreads::gatherFromCurrentThread):
+        (JSC::MachineThreads::gatherConservativeRoots):
+        (JSC::callWithCurrentThreadState):
+        * heap/MachineStackMarker.h:
+        * heap/MarkedAllocator.cpp:
+        (JSC::MarkedAllocator::allocateSlowCaseImpl):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::Handle::sweep):
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::sweep):
+        * heap/MutatorState.cpp:
+        (WTF::printInternal):
+        * heap/MutatorState.h:
+        * heap/RegisterState.h: Added.
+        * heap/RunningScope.h: Added.
+        (JSC::RunningScope::RunningScope):
+        (JSC::RunningScope::~RunningScope):
+        * heap/SlotVisitor.cpp:
+        (JSC::SlotVisitor::SlotVisitor):
+        (JSC::SlotVisitor::drain):
+        (JSC::SlotVisitor::drainFromShared):
+        (JSC::SlotVisitor::drainInParallelPassively):
+        (JSC::SlotVisitor::donateAll):
+        (JSC::SlotVisitor::donate):
+        * heap/SlotVisitor.h:
+        (JSC::SlotVisitor::codeName):
+        * heap/StochasticSpaceTimeMutatorScheduler.cpp:
+        (JSC::StochasticSpaceTimeMutatorScheduler::beginCollection):
+        (JSC::StochasticSpaceTimeMutatorScheduler::synchronousDrainingDidStall):
+        (JSC::StochasticSpaceTimeMutatorScheduler::timeToStop):
+        * heap/SweepingScope.h: Added.
+        (JSC::SweepingScope::SweepingScope):
+        (JSC::SweepingScope::~SweepingScope):
+        * jit/JITWorklist.cpp:
+        (JSC::JITWorklist::Thread::Thread):
+        * jsc.cpp:
+        (GlobalObject::finishCreation):
+        (functionFlashHeapAccess):
+        * runtime/InitializeThreading.cpp:
+        (JSC::initializeThreading):
+        * runtime/JSCellInlines.h:
+        (JSC::JSCell::classInfo):
+        * runtime/Options.cpp:
+        (JSC::overrideDefaults):
+        * runtime/Options.h:
+        * runtime/TestRunnerUtils.cpp:
+        (JSC::finalizeStatsAtEndOfTesting):
+
 2017-02-21  Saam Barati  <sbarati@apple.com>
 
         Air should have a disassembly mode that dumps IR and assembly intermixed
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index 1f0bc60..fcb702a 100644
                0F2BDC4D1522818600CD8910 /* DFGMinifiedNode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC4C1522818300CD8910 /* DFGMinifiedNode.cpp */; };
                0F2BDC4F15228BF300CD8910 /* DFGValueSource.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC4E15228BE700CD8910 /* DFGValueSource.cpp */; };
                0F2BDC5115228FFD00CD8910 /* DFGVariableEvent.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC5015228FFA00CD8910 /* DFGVariableEvent.cpp */; };
+               0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2C63A91E4FA42C00C13839 /* RunningScope.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F2D4DDD19832D34007D4B19 /* DebuggerScope.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2D4DDB19832D34007D4B19 /* DebuggerScope.cpp */; };
                0F2D4DDE19832D34007D4B19 /* DebuggerScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2D4DDC19832D34007D4B19 /* DebuggerScope.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F2D4DE819832DAC007D4B19 /* ToThisStatus.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2D4DE519832DAC007D4B19 /* ToThisStatus.cpp */; };
                0FA762051DB9242900B7A2FD /* CollectionScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762011DB9242300B7A2FD /* CollectionScope.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FA762061DB9243100B7A2FD /* MutatorState.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA762021DB9242300B7A2FD /* MutatorState.cpp */; };
                0FA762071DB9243300B7A2FD /* MutatorState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762031DB9242300B7A2FD /* MutatorState.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */; };
                0FA7620B1DB959F900B7A2FD /* AllocatingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA7620A1DB959F600B7A2FD /* AllocatingScope.h */; };
                0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8E918B413C80052371D /* Reg.cpp */; };
                0FA7A8EC18B413C80052371D /* Reg.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA7A8EA18B413C80052371D /* Reg.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FCEFAAC1804C13E00472CE4 /* FTLSaveRestore.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCEFAAA1804C13E00472CE4 /* FTLSaveRestore.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FCEFADF180738C000472CE4 /* FTLLocation.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FCEFADD180738C000472CE4 /* FTLLocation.cpp */; };
                0FCEFAE0180738C000472CE4 /* FTLLocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCEFADE180738C000472CE4 /* FTLLocation.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E81E43D3470006AB08 /* GCConductor.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */; };
+               0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */; };
+               0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */; };
+               0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */; };
                0FD2C92416D01EE900C7803F /* StructureInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD2C92316D01EE900C7803F /* StructureInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FD3C82614115D4000FD81CB /* DFGDriver.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD3C82014115CF800FD81CB /* DFGDriver.cpp */; };
                0FD3C82814115D4F00FD81CB /* DFGDriver.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD3C82214115D0E00FD81CB /* DFGDriver.h */; };
                14B723B812D7DA6F003BD5ED /* MachineStackMarker.h in Headers */ = {isa = PBXBuildFile; fileRef = 14B7234012D7D0DA003BD5ED /* MachineStackMarker.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14B8EC720A5652090062BE54 /* CoreFoundation.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 6560A4CF04B3B3E7008AE952 /* CoreFoundation.framework */; };
                14BA78F113AAB88F005B7C2C /* SlotVisitor.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BA78F013AAB88F005B7C2C /* SlotVisitor.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; };
+               14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; settings = {COMPILER_FLAGS = "-fno-optimize-sibling-calls"; }; };
                14BA7A9813AADFF8005B7C2C /* Heap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BA7A9613AADFF8005B7C2C /* Heap.h */; settings = {ATTRIBUTES = (Private, ); }; };
                14BD59C50A3E8F9F00BAF59C /* JavaScriptCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 932F5BD90822A1C700736975 /* JavaScriptCore.framework */; };
                14BD5A300A3E91F600BAF59C /* JSContextRef.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BD5A290A3E91F600BAF59C /* JSContextRef.cpp */; };
                C2181FC218A948FB0025A235 /* JSExportTests.mm in Sources */ = {isa = PBXBuildFile; fileRef = C2181FC118A948FB0025A235 /* JSExportTests.mm */; };
                C225494315F7DBAA0065E898 /* SlotVisitor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C225494215F7DBAA0065E898 /* SlotVisitor.cpp */; };
                C22B31B9140577D700DB475A /* SamplingCounter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F77008E1402FDD60078EB39 /* SamplingCounter.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */; };
-               C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */ = {isa = PBXBuildFile; fileRef = C24D31E1161CD695002AA4DB /* HeapStatistics.h */; settings = {ATTRIBUTES = (Private, ); }; };
                C25D709B16DE99F400FCA6BC /* JSManagedValue.mm in Sources */ = {isa = PBXBuildFile; fileRef = C25D709916DE99F400FCA6BC /* JSManagedValue.mm */; };
                C25D709C16DE99F400FCA6BC /* JSManagedValue.h in Headers */ = {isa = PBXBuildFile; fileRef = C25D709A16DE99F400FCA6BC /* JSManagedValue.h */; settings = {ATTRIBUTES = (Public, ); }; };
                C25F8BCD157544A900245B71 /* IncrementalSweeper.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */; };
                0F2BDC4C1522818300CD8910 /* DFGMinifiedNode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGMinifiedNode.cpp; path = dfg/DFGMinifiedNode.cpp; sourceTree = "<group>"; };
                0F2BDC4E15228BE700CD8910 /* DFGValueSource.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGValueSource.cpp; path = dfg/DFGValueSource.cpp; sourceTree = "<group>"; };
                0F2BDC5015228FFA00CD8910 /* DFGVariableEvent.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableEvent.cpp; path = dfg/DFGVariableEvent.cpp; sourceTree = "<group>"; };
+               0F2C63A91E4FA42C00C13839 /* RunningScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RunningScope.h; sourceTree = "<group>"; };
                0F2D4DDB19832D34007D4B19 /* DebuggerScope.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DebuggerScope.cpp; sourceTree = "<group>"; };
                0F2D4DDC19832D34007D4B19 /* DebuggerScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DebuggerScope.h; sourceTree = "<group>"; };
                0F2D4DDF19832D91007D4B19 /* TypeProfilerLog.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = TypeProfilerLog.cpp; sourceTree = "<group>"; };
                0FA762011DB9242300B7A2FD /* CollectionScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectionScope.h; sourceTree = "<group>"; };
                0FA762021DB9242300B7A2FD /* MutatorState.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MutatorState.cpp; sourceTree = "<group>"; };
                0FA762031DB9242300B7A2FD /* MutatorState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MutatorState.h; sourceTree = "<group>"; };
-               0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HelpingGCScope.h; sourceTree = "<group>"; };
                0FA7620A1DB959F600B7A2FD /* AllocatingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AllocatingScope.h; sourceTree = "<group>"; };
                0FA7A8E918B413C80052371D /* Reg.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Reg.cpp; sourceTree = "<group>"; };
                0FA7A8EA18B413C80052371D /* Reg.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Reg.h; sourceTree = "<group>"; };
                0FCEFAAA1804C13E00472CE4 /* FTLSaveRestore.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLSaveRestore.h; path = ftl/FTLSaveRestore.h; sourceTree = "<group>"; };
                0FCEFADD180738C000472CE4 /* FTLLocation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLocation.cpp; path = ftl/FTLLocation.cpp; sourceTree = "<group>"; };
                0FCEFADE180738C000472CE4 /* FTLLocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLocation.h; path = ftl/FTLLocation.h; sourceTree = "<group>"; };
+               0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CollectorPhase.cpp; sourceTree = "<group>"; };
+               0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectorPhase.h; sourceTree = "<group>"; };
+               0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCConductor.cpp; sourceTree = "<group>"; };
+               0FD0E5E81E43D3470006AB08 /* GCConductor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCConductor.h; sourceTree = "<group>"; };
+               0FD0E5ED1E468A540006AB08 /* SweepingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SweepingScope.h; sourceTree = "<group>"; };
+               0FD0E5EF1E46BF230006AB08 /* RegisterState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterState.h; sourceTree = "<group>"; };
+               0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectingScope.h; sourceTree = "<group>"; };
                0FD2C92316D01EE900C7803F /* StructureInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureInlines.h; sourceTree = "<group>"; };
                0FD3C82014115CF800FD81CB /* DFGDriver.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGDriver.cpp; path = dfg/DFGDriver.cpp; sourceTree = "<group>"; };
                0FD3C82214115D0E00FD81CB /* DFGDriver.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGDriver.h; path = dfg/DFGDriver.h; sourceTree = "<group>"; };
                C2181FC018A948FB0025A235 /* JSExportTests.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = JSExportTests.h; path = API/tests/JSExportTests.h; sourceTree = "<group>"; };
                C2181FC118A948FB0025A235 /* JSExportTests.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; name = JSExportTests.mm; path = API/tests/JSExportTests.mm; sourceTree = "<group>"; };
                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SlotVisitor.cpp; sourceTree = "<group>"; };
-               C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapStatistics.cpp; sourceTree = "<group>"; };
-               C24D31E1161CD695002AA4DB /* HeapStatistics.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapStatistics.h; sourceTree = "<group>"; };
                C25D709916DE99F400FCA6BC /* JSManagedValue.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; path = JSManagedValue.mm; sourceTree = "<group>"; };
                C25D709A16DE99F400FCA6BC /* JSManagedValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSManagedValue.h; sourceTree = "<group>"; };
                C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = IncrementalSweeper.cpp; sourceTree = "<group>"; };
                                0FD8A31117D4326C00CA2C40 /* CodeBlockSet.cpp */,
                                0FD8A31217D4326C00CA2C40 /* CodeBlockSet.h */,
                                0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */,
+                               0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */,
                                0FA762001DB9242300B7A2FD /* CollectionScope.cpp */,
                                0FA762011DB9242300B7A2FD /* CollectionScope.h */,
+                               0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */,
+                               0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */,
                                146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */,
                                149DAAF212EB559D0083B12B /* ConservativeRoots.h */,
                                0F7DF12F1E2970D50095951B /* ConstraintVolatility.h */,
                                2AACE63A18CA5A0300ED0191 /* GCActivityCallback.cpp */,
                                2AACE63B18CA5A0300ED0191 /* GCActivityCallback.h */,
                                BCBE2CAD14E985AA000593AD /* GCAssertions.h */,
+                               0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */,
+                               0FD0E5E81E43D3470006AB08 /* GCConductor.h */,
                                0FB4767C1D99AEA7008EA6CB /* GCDeferralContext.h */,
                                0FB4767D1D99AEA7008EA6CB /* GCDeferralContextInlines.h */,
                                0F2B66A817B6B53D00A7AE3F /* GCIncomingRefCounted.h */,
                                A54C2AAF1C6544D100A18D78 /* HeapSnapshot.h */,
                                A5311C341C77CEAC00E6B1B6 /* HeapSnapshotBuilder.cpp */,
                                A5311C351C77CEAC00E6B1B6 /* HeapSnapshotBuilder.h */,
-                               C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */,
-                               C24D31E1161CD695002AA4DB /* HeapStatistics.h */,
                                C2E526BB1590EF000054E48D /* HeapTimer.cpp */,
                                C2E526BC1590EF000054E48D /* HeapTimer.h */,
                                0FADE6721D4D23BC00768457 /* HeapUtil.h */,
                                FE7BA60D1A1A7CEC00F1F7B4 /* HeapVerifier.cpp */,
                                FE7BA60E1A1A7CEC00F1F7B4 /* HeapVerifier.h */,
-                               0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */,
                                C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */,
                                C25F8BCC157544A900245B71 /* IncrementalSweeper.h */,
                                0F766D2915A8CC34008F363E /* JITStubRoutineSet.cpp */,
                                0FA762031DB9242300B7A2FD /* MutatorState.h */,
                                ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
                                0FBB73B61DEF3AAC002C009E /* PreventCollectionScope.h */,
+                               0FD0E5EF1E46BF230006AB08 /* RegisterState.h */,
                                0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */,
+                               0F2C63A91E4FA42C00C13839 /* RunningScope.h */,
                                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
                                14BA78F013AAB88F005B7C2C /* SlotVisitor.h */,
                                0FCB408515C0A3C30048932B /* SlotVisitorInlines.h */,
                                0F7DF1311E2970D50095951B /* Subspace.cpp */,
                                0F7DF1321E2970D50095951B /* Subspace.h */,
                                0F7DF1331E2970D50095951B /* SubspaceInlines.h */,
+                               0FD0E5ED1E468A540006AB08 /* SweepingScope.h */,
                                0F1FB38A1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp */,
                                0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */,
                                141448CC13A1783700F5BA1A /* TinyBloomFilter.h */,
                                0F6B8AE31C4EFE1700969052 /* B3BreakCriticalEdges.h in Headers */,
                                DC9A0C201D2D9CB30085124E /* B3CaseCollection.h in Headers */,
                                DC9A0C1F1D2D9CB10085124E /* B3CaseCollectionInlines.h in Headers */,
+                               0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */,
                                0F338DFA1BE96AA80013C88F /* B3CCallValue.h in Headers */,
                                0F33FCFB1C1625BE00323F67 /* B3CFG.h in Headers */,
                                0FEC85061BDACDAC0080FF74 /* B3CheckSpecial.h in Headers */,
                                0F2BDC16151C5D4F00CD8910 /* DFGFixupPhase.h in Headers */,
                                0F2017801DCADC3500EA5950 /* DFGFlowIndexing.h in Headers */,
                                0F2017821DCADD4200EA5950 /* DFGFlowMap.h in Headers */,
+                               0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */,
                                0F9D339717FFC4E60073C2BC /* DFGFlushedAt.h in Headers */,
                                A7D89CF817A0B8CC00773AD8 /* DFGFlushFormat.h in Headers */,
                                0F2DD8151AB3D8BE00BBB8E8 /* DFGForAllKills.h in Headers */,
                                A5398FAB1C750DA40060A963 /* HeapProfiler.h in Headers */,
                                A54C2AB11C6544F200A18D78 /* HeapSnapshot.h in Headers */,
                                A5311C361C77CEC500E6B1B6 /* HeapSnapshotBuilder.h in Headers */,
-                               C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */,
                                C2E526BE1590EF000054E48D /* HeapTimer.h in Headers */,
+                               0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */,
                                0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */,
                                FE7BA6101A1A7CEC00F1F7B4 /* HeapVerifier.h in Headers */,
-                               0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */,
                                0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */,
                                DC2143071CA32E55000A8869 /* ICStats.h in Headers */,
                                BC18C40F0E16F5CD00B34460 /* Identifier.h in Headers */,
                                A1B9E23E1B4E0D6700BC7FED /* IntlCollatorPrototype.h in Headers */,
                                A18193E41B4E0CDB00FC1029 /* IntlCollatorPrototype.lut.h in Headers */,
                                A1587D6E1B4DC14100D69849 /* IntlDateTimeFormat.h in Headers */,
+                               0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */,
                                A1587D701B4DC14100D69849 /* IntlDateTimeFormatConstructor.h in Headers */,
                                A1587D751B4DC1C600D69849 /* IntlDateTimeFormatConstructor.lut.h in Headers */,
                                A1587D721B4DC14100D69849 /* IntlDateTimeFormatPrototype.h in Headers */,
                                A125846E1B45A36000CC7F6C /* IntlNumberFormatConstructor.lut.h in Headers */,
                                A1D793011B43864B004516F5 /* IntlNumberFormatPrototype.h in Headers */,
                                A125846F1B45A36000CC7F6C /* IntlNumberFormatPrototype.lut.h in Headers */,
+                               0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */,
                                A12BBFF21B044A8B00664B69 /* IntlObject.h in Headers */,
                                708EBE241CE8F35800453146 /* IntlObjectInlines.h in Headers */,
                                860BD801148EA6F200112B2F /* Intrinsic.h in Headers */,
                                AD2FCBED1DB58DAD00B3E736 /* WebAssemblyCompileErrorConstructor.h in Headers */,
                                AD2FCC161DB59CB200B3E736 /* WebAssemblyCompileErrorConstructor.lut.h in Headers */,
                                AD2FCBEF1DB58DAD00B3E736 /* WebAssemblyCompileErrorPrototype.h in Headers */,
+                               0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */,
                                AD2FCC171DB59CB200B3E736 /* WebAssemblyCompileErrorPrototype.lut.h in Headers */,
                                AD4937D41DDD27DE0077C807 /* WebAssemblyFunction.h in Headers */,
                                AD2FCBF11DB58DAD00B3E736 /* WebAssemblyInstanceConstructor.h in Headers */,
                                A5398FAC1C750DA60060A963 /* HeapProfiler.cpp in Sources */,
                                A54C2AB01C6544EE00A18D78 /* HeapSnapshot.cpp in Sources */,
                                A5311C371C77CECA00E6B1B6 /* HeapSnapshotBuilder.cpp in Sources */,
-                               C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */,
                                C2E526BD1590EF000054E48D /* HeapTimer.cpp in Sources */,
                                FE7BA60F1A1A7CEC00F1F7B4 /* HeapVerifier.cpp in Sources */,
                                0F4680D414BBD24900BFE272 /* HostCallReturnValue.cpp in Sources */,
                                0FAF7EFD165BA91B000C8455 /* JITDisassembler.cpp in Sources */,
                                FE187A0E1C030D640038BBCA /* JITDivGenerator.cpp in Sources */,
                                0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */,
+                               0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */,
                                0FB14E1E18124ACE009B6B4D /* JITInlineCacheGenerator.cpp in Sources */,
                                FE3A06BD1C11040D00390FDD /* JITLeftShiftGenerator.cpp in Sources */,
                                FE187A011BFBE55E0038BBCA /* JITMulGenerator.cpp in Sources */,
                                6540C7A01B82E1C3000F6B79 /* RegisterAtOffset.cpp in Sources */,
                                6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */,
                                0FC3141518146D7000033232 /* RegisterSet.cpp in Sources */,
+                               0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */,
                                A57D23ED1891B5540031C7FA /* RegularExpression.cpp in Sources */,
                                992ABCF91BEA9BD2006403A0 /* RemoteAutomationTarget.cpp in Sources */,
                                992F56B41E4E84A40035953B /* RemoteConnectionToTargetCocoa.mm in Sources */,
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index 16c5871..1aa4249 100644
@@ -2532,7 +2532,12 @@ void CodeBlock::visitChildren(SlotVisitor& visitor)
         visitor.reportExtraMemoryVisited(m_jitCode->size());
     if (m_instructions.size()) {
         unsigned refCount = m_instructions.refCount();
-        RELEASE_ASSERT(refCount);
+        if (!refCount) {
+            dataLog("CodeBlock: ", RawPointer(this), "\n");
+            dataLog("m_instructions.data(): ", RawPointer(m_instructions.data()), "\n");
+            dataLog("refCount: ", refCount, "\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
         visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount);
     }
 
diff --git a/Source/JavaScriptCore/dfg/DFGWorklist.cpp b/Source/JavaScriptCore/dfg/DFGWorklist.cpp
index af3b63e..cef990c 100644
@@ -40,7 +40,7 @@ namespace JSC { namespace DFG {
 
 class Worklist::ThreadBody : public AutomaticThread {
 public:
-    ThreadBody(const LockHolder& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)
+    ThreadBody(const AbstractLocker& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)
         : AutomaticThread(locker, lock, condition)
         , m_worklist(worklist)
         , m_data(data)
@@ -49,7 +49,7 @@ public:
     }
     
 protected:
-    PollResult poll(const LockHolder& locker) override
+    PollResult poll(const AbstractLocker& locker) override
     {
         if (m_worklist.m_queue.isEmpty())
             return PollResult::Wait;
@@ -150,7 +150,7 @@ protected:
         m_longLivedState = std::make_unique<LongLivedState>();
     }
     
-    void threadIsStopping(const LockHolder&) override
+    void threadIsStopping(const AbstractLocker&) override
     {
         // We're holding the Worklist::m_lock, so we should be careful not to deadlock.
         
@@ -479,7 +479,7 @@ void Worklist::dump(PrintStream& out) const
     dump(locker, out);
 }
 
-void Worklist::dump(const LockHolder&, PrintStream& out) const
+void Worklist::dump(const AbstractLocker&, PrintStream& out) const
 {
     out.print(
         "Worklist(", RawPointer(this), ")[Queue Length = ", m_queue.size(),
@@ -535,6 +535,41 @@ Worklist& ensureGlobalWorklistFor(CompilationMode mode)
     return ensureGlobalDFGWorklist();
 }
 
+unsigned numberOfWorklists() { return 2; }
+
+Worklist& ensureWorklistForIndex(unsigned index)
+{
+    switch (index) {
+    case 0:
+        return ensureGlobalDFGWorklist();
+    case 1:
+        return ensureGlobalFTLWorklist();
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        return ensureGlobalDFGWorklist();
+    }
+}
+
+Worklist* existingWorklistForIndexOrNull(unsigned index)
+{
+    switch (index) {
+    case 0:
+        return existingGlobalDFGWorklistOrNull();
+    case 1:
+        return existingGlobalFTLWorklistOrNull();
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        return 0;
+    }
+}
+
+Worklist& existingWorklistForIndex(unsigned index)
+{
+    Worklist* result = existingWorklistForIndexOrNull(index);
+    RELEASE_ASSERT(result);
+    return *result;
+}
+
 void completeAllPlansForVM(VM& vm)
 {
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
index 28d9abf..eb8282c 100644
@@ -93,7 +93,7 @@ private:
     
     void removeAllReadyPlansForVM(VM&, Vector<RefPtr<Plan>, 8>&);
 
-    void dump(const LockHolder&, PrintStream&) const;
+    void dump(const AbstractLocker&, PrintStream&) const;
     
     CString m_threadName;
     
@@ -132,37 +132,10 @@ Worklist* existingGlobalFTLWorklistOrNull();
 Worklist& ensureGlobalWorklistFor(CompilationMode);
 
 // Simplify doing things for all worklists.
-inline unsigned numberOfWorklists() { return 2; }
-inline Worklist& ensureWorklistForIndex(unsigned index)
-{
-    switch (index) {
-    case 0:
-        return ensureGlobalDFGWorklist();
-    case 1:
-        return ensureGlobalFTLWorklist();
-    default:
-        RELEASE_ASSERT_NOT_REACHED();
-        return ensureGlobalDFGWorklist();
-    }
-}
-inline Worklist* existingWorklistForIndexOrNull(unsigned index)
-{
-    switch (index) {
-    case 0:
-        return existingGlobalDFGWorklistOrNull();
-    case 1:
-        return existingGlobalFTLWorklistOrNull();
-    default:
-        RELEASE_ASSERT_NOT_REACHED();
-        return 0;
-    }
-}
-inline Worklist& existingWorklistForIndex(unsigned index)
-{
-    Worklist* result = existingWorklistForIndexOrNull(index);
-    RELEASE_ASSERT(result);
-    return *result;
-}
+unsigned numberOfWorklists();
+Worklist& ensureWorklistForIndex(unsigned index);
+Worklist* existingWorklistForIndexOrNull(unsigned index);
+Worklist& existingWorklistForIndex(unsigned index);
 
 #endif // ENABLE(DFG_JIT)
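
The LockHolder-to-AbstractLocker changes here and in Heap.cpp below all have the same rationale: in WTF, concrete lockers such as LockHolder derive from AbstractLocker, so a function that takes const AbstractLocker& demands proof that some lock is held without dictating which locker type acquired it. A self-contained mimic of the idiom (simplified; the real WTF types differ):

#include <mutex>

class AbstractLocker {
protected:
    AbstractLocker() = default; // Only constructible as a base of a real locker.
};

template<typename LockType>
class Locker : public AbstractLocker {
public:
    explicit Locker(LockType& lock) : m_lock(lock) { m_lock.lock(); }
    ~Locker() { m_lock.unlock(); }
private:
    LockType& m_lock;
};

class Worklist {
public:
    void dump()
    {
        Locker<std::mutex> locker(m_lock);
        dump(locker); // The locker argument proves m_lock is held.
    }
private:
    // Callers cannot reach this without presenting *some* lock witness.
    void dump(const AbstractLocker&) { /* read guarded state safely */ }
    std::mutex m_lock;
};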
 
similarity index 71%
rename from Source/JavaScriptCore/heap/HeapStatistics.h
rename to Source/JavaScriptCore/heap/CollectingScope.h
index 890da5e..c1f860a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 #pragma once
 
-#include "JSExportMacros.h"
-#include <wtf/Vector.h>
+#include "Heap.h"
 
 namespace JSC {
 
-class Heap;
-
-class HeapStatistics {
+class CollectingScope {
 public:
-    NO_RETURN static void exitWithFailure();
-    JS_EXPORT_PRIVATE static void reportSuccess();
-
-    static void initialize();
-    static void recordGCPauseTime(double start, double end);
-
-    static void dumpObjectStatistics(Heap*);
+    CollectingScope(Heap& heap)
+        : m_heap(heap)
+        , m_oldState(m_heap.m_mutatorState)
+    {
+        m_heap.m_mutatorState = MutatorState::Collecting;
+    }
+    
+    ~CollectingScope()
+    {
+        m_heap.m_mutatorState = m_oldState;
+    }
 
 private:
-    static void logStatistics();
-    static Vector<double>* s_pauseTimeStarts;
-    static Vector<double>* s_pauseTimeEnds;
-    static double s_startTime;
-    static double s_endTime;
+    Heap& m_heap;
+    MutatorState m_oldState;
 };
 
 } // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/CollectorPhase.cpp b/Source/JavaScriptCore/heap/CollectorPhase.cpp
new file mode 100644
index 0000000..610fed1
--- /dev/null
@@ -0,0 +1,84 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "CollectorPhase.h"
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+bool worldShouldBeSuspended(CollectorPhase phase)
+{
+    switch (phase) {
+    case CollectorPhase::NotRunning:
+    case CollectorPhase::Concurrent:
+        return false;
+        
+    case CollectorPhase::Begin:
+    case CollectorPhase::Fixpoint:
+    case CollectorPhase::Reloop:
+    case CollectorPhase::End:
+        return true;
+    }
+    
+    RELEASE_ASSERT_NOT_REACHED();
+    return false;
+}
+
+} // namespace JSC
+
+namespace WTF {
+
+using namespace JSC;
+
+void printInternal(PrintStream& out, JSC::CollectorPhase phase)
+{
+    switch (phase) {
+    case CollectorPhase::NotRunning:
+        out.print("NotRunning");
+        return;
+    case CollectorPhase::Begin:
+        out.print("Begin");
+        return;
+    case CollectorPhase::Fixpoint:
+        out.print("Fixpoint");
+        return;
+    case CollectorPhase::Concurrent:
+        out.print("Concurrent");
+        return;
+    case CollectorPhase::Reloop:
+        out.print("Reloop");
+        return;
+    case CollectorPhase::End:
+        out.print("End");
+        return;
+    }
+    
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
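
The WTF-namespace printInternal overload above is the standard way to make a new enum printable by dataLog and friends: the printing machinery resolves an unqualified printInternal call per argument type. A toy standalone mimic of that dispatch (not the real WTF machinery):

#include <iostream>

class PrintStream {
public:
    template<typename T>
    void print(const T& value) { printInternal(*this, value); }
};

void printInternal(PrintStream&, const char* string) { std::cout << string; }

enum class Phase { NotRunning, Begin };

void printInternal(PrintStream& out, Phase phase)
{
    switch (phase) {
    case Phase::NotRunning:
        printInternal(out, "NotRunning");
        return;
    case Phase::Begin:
        printInternal(out, "Begin");
        return;
    }
}

int main()
{
    PrintStream out;
    out.print(Phase::Begin); // Picks printInternal(PrintStream&, Phase) by overload resolution.
    out.print("\n");
}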
diff --git a/Source/JavaScriptCore/heap/CollectorPhase.h b/Source/JavaScriptCore/heap/CollectorPhase.h
new file mode 100644
index 0000000..d5a0035
--- /dev/null
@@ -0,0 +1,75 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+namespace JSC {
+
+// We track collector phase in order to allow either the collector thread or the mutator thread to
+// jump in and do work. The collector and mutator trade the conn
+// (https://en.wikipedia.org/wiki/Conn_(nautical)) with each other based on who is available to do work,
+// and they use the phase to help each other know what to do once they take the conn.
+//
+// The collector relinquishes the conn whenever it stopTheMutator's and the mutator is running. Then the
+// collector thread goes to sleep.
+//
+// The mutator relinquishes the conn whenever it releaseAccess's. That wakes up the collector thread.
+enum class CollectorPhase : uint8_t {
+    // We use this phase when the collector is not running at all. After this state is Begin.
+    NotRunning,
+    
+    // This is everything from when the collector begins to when it first yields to the mutator for
+    // marking. After this is Fixpoint.
+    Begin,
+    
+    // This means that we should try to make some progress with the world stopped. This usually means
+    // doing an iteration of MarkingConstraintSet::executeConvergence, but it could also mean marking
+    // with the world stopped. After this is either Concurrent or End.
+    Fixpoint,
+    
+    // In this state the collector is relying on the parallel helpers and incremental mutator work to
+    // make progress. After this is Reloop, once marking stalls.
+    Concurrent,
+        
+    // We did some concurrent marking and now we ran out of work. This phase prepares the GC for another
+    // Fixpoint. After this is Fixpoint.
+    Reloop,
+    
+    // The collector is trying to finish up. After this state is NotRunning.
+    End
+};
+
+bool worldShouldBeSuspended(CollectorPhase phase);
+
+} // namespace JSC
+
+namespace WTF {
+
+class PrintStream;
+
+void printInternal(PrintStream&, JSC::CollectorPhase);
+
+} // namespace WTF
+
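
To make the phase graph described in the comments above concrete, here is a hypothetical successor function (an editorial sketch; in the real patch the transitions happen via changePhase() calls spread through Heap.cpp below):

#include <cassert>

enum class CollectorPhase { NotRunning, Begin, Fixpoint, Concurrent, Reloop, End };

// The two inputs stand in for "constraints converged and mark stacks are
// empty" and "the scheduler says it is time to resume the mutator".
CollectorPhase nextPhase(CollectorPhase phase, bool convergedAndEmpty, bool timeToResume)
{
    switch (phase) {
    case CollectorPhase::NotRunning:
        return CollectorPhase::Begin;      // A collection was requested.
    case CollectorPhase::Begin:
        return CollectorPhase::Fixpoint;
    case CollectorPhase::Fixpoint:
        if (convergedAndEmpty)
            return CollectorPhase::End;    // Marking terminated.
        if (timeToResume)
            return CollectorPhase::Concurrent;
        return CollectorPhase::Fixpoint;   // Keep draining with the world stopped.
    case CollectorPhase::Concurrent:
        return CollectorPhase::Reloop;     // Concurrent marking stalled.
    case CollectorPhase::Reloop:
        return CollectorPhase::Fixpoint;
    case CollectorPhase::End:
        return CollectorPhase::NotRunning;
    }
    assert(false);
    return CollectorPhase::NotRunning;
}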
index 014905a..07e6c7a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -45,7 +45,7 @@ void EdenGCActivityCallback::doCollection()
 
 double EdenGCActivityCallback::lastGCLength()
 {
-    return m_vm->heap.lastEdenGCLength();
+    return m_vm->heap.lastEdenGCLength().seconds();
 }
 
 double EdenGCActivityCallback::deathRate()
index 3d1545f..e57dd3a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -50,7 +50,7 @@ void FullGCActivityCallback::doCollection()
     double startTime = WTF::monotonicallyIncreasingTime();
     if (heap.isPagedOut(startTime + pagingTimeOut)) {
         cancel();
-        heap.increaseLastFullGCLength(pagingTimeOut);
+        heap.increaseLastFullGCLength(Seconds(pagingTimeOut));
         return;
     }
 #endif
@@ -60,7 +60,7 @@ void FullGCActivityCallback::doCollection()
 
 double FullGCActivityCallback::lastGCLength()
 {
-    return m_vm->heap.lastFullGCLength();
+    return m_vm->heap.lastFullGCLength().seconds();
 }
 
 double FullGCActivityCallback::deathRate()
diff --git a/Source/JavaScriptCore/heap/GCConductor.cpp b/Source/JavaScriptCore/heap/GCConductor.cpp
new file mode 100644
index 0000000..4d24436
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "GCConductor.h"
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+const char* gcConductorShortName(GCConductor conn)
+{
+    switch (conn) {
+    case GCConductor::Mutator:
+        return "M";
+    case GCConductor::Collector:
+        return "C";
+    }
+    
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace JSC
+
+namespace WTF {
+
+using namespace JSC;
+
+void printInternal(PrintStream& out, GCConductor conn)
+{
+    switch (conn) {
+    case GCConductor::Mutator:
+        out.print("Mutator");
+        return;
+    case GCConductor::Collector:
+        out.print("Collector");
+        return;
+    }
+    
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
diff --git a/Source/JavaScriptCore/heap/GCConductor.h b/Source/JavaScriptCore/heap/GCConductor.h
new file mode 100644
index 0000000..bdc18ca
--- /dev/null
@@ -0,0 +1,49 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+namespace JSC {
+
+// Either the mutator has the conn (https://en.wikipedia.org/wiki/Conn_(nautical)), meaning that the
+// mutator will incrementally drive the collector when it calls into slow paths; or the collector has the
+// conn, meaning that the collector thread will drive the collector.
+enum class GCConductor : uint8_t {
+    Mutator,
+    Collector
+};
+
+const char* gcConductorShortName(GCConductor officer);
+
+} // namespace JSC
+
+namespace WTF {
+
+class PrintStream;
+
+void printInternal(PrintStream&, JSC::GCConductor);
+
+} // namespace WTF
+
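
As a mental model, the conn is an exclusive token whose holder is the only party allowed to advance the collector's state machine. A deliberately simplified sketch (the real handoff, in Heap::stopTheMutator() below, folds this flag into a larger atomic word together with access and stop bits):

#include <atomic>
#include <cstdint>

enum class GCConductor : uint8_t { Mutator, Collector };

class ConnModel {
public:
    GCConductor holder() const
    {
        return m_mutatorHasConn.load() ? GCConductor::Mutator : GCConductor::Collector;
    }

    // The collector hands over the conn instead of blocking a running mutator.
    void giveConnToMutator() { m_mutatorHasConn.store(true); }

    // The mutator hands the conn back, e.g. when it releases heap access.
    void giveConnToCollector() { m_mutatorHasConn.store(false); }

private:
    std::atomic<bool> m_mutatorHasConn { false };
};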
index ff55188..4f3aee3 100644
@@ -23,6 +23,7 @@
 
 #include "CodeBlock.h"
 #include "CodeBlockSetInlines.h"
+#include "CollectingScope.h"
 #include "ConservativeRoots.h"
 #include "DFGWorklistInlines.h"
 #include "EdenGCActivityCallback.h"
@@ -37,9 +38,7 @@
 #include "HeapIterationScope.h"
 #include "HeapProfiler.h"
 #include "HeapSnapshot.h"
-#include "HeapStatistics.h"
 #include "HeapVerifier.h"
-#include "HelpingGCScope.h"
 #include "IncrementalSweeper.h"
 #include "Interpreter.h"
 #include "JITStubRoutineSet.h"
@@ -48,6 +47,7 @@
 #include "JSGlobalObject.h"
 #include "JSLock.h"
 #include "JSVirtualMachineInternal.h"
+#include "MachineStackMarker.h"
 #include "MarkedSpaceInlines.h"
 #include "MarkingConstraintSet.h"
 #include "PreventCollectionScope.h"
@@ -57,6 +57,7 @@
 #include "SuperSampler.h"
 #include "StochasticSpaceTimeMutatorScheduler.h"
 #include "StopIfNecessaryTimer.h"
+#include "SweepingScope.h"
 #include "SynchronousStopTheWorldMutatorScheduler.h"
 #include "TypeProfilerLog.h"
 #include "UnlinkedCodeBlock.h"
@@ -206,27 +207,27 @@ private:
 
 class Heap::Thread : public AutomaticThread {
 public:
-    Thread(const LockHolder& locker, Heap& heap)
+    Thread(const AbstractLocker& locker, Heap& heap)
         : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition)
         , m_heap(heap)
     {
     }
     
 protected:
-    PollResult poll(const LockHolder& locker) override
+    PollResult poll(const AbstractLocker& locker) override
     {
         if (m_heap.m_threadShouldStop) {
             m_heap.notifyThreadStopping(locker);
             return PollResult::Stop;
         }
-        if (m_heap.shouldCollectInThread(locker))
+        if (m_heap.shouldCollectInCollectorThread(locker))
             return PollResult::Work;
         return PollResult::Wait;
     }
     
     WorkResult work() override
     {
-        m_heap.collectInThread();
+        m_heap.collectInCollectorThread();
         return WorkResult::Continue;
     }
     
@@ -257,9 +258,9 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_objectSpace(this)
     , m_extraMemorySize(0)
     , m_deprecatedExtraMemorySize(0)
-    , m_machineThreads(this)
-    , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this))
-    , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this))
+    , m_machineThreads(std::make_unique<MachineThreads>(this))
+    , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this, "C"))
+    , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this, "M"))
     , m_mutatorMarkStack(std::make_unique<MarkStackArray>())
     , m_raceMarkStack(std::make_unique<MarkStackArray>())
     , m_constraintSet(std::make_unique<MarkingConstraintSet>())
@@ -333,6 +334,12 @@ bool Heap::isPagedOut(double deadline)
 // Run all pending finalizers now because we won't get another chance.
 void Heap::lastChanceToFinalize()
 {
+    MonotonicTime before;
+    if (Options::logGC()) {
+        before = MonotonicTime::now();
+        dataLog("[GC<", RawPointer(this), ">: shutdown ");
+    }
+    
     RELEASE_ASSERT(!m_vm->entryScope);
     RELEASE_ASSERT(m_mutatorState == MutatorState::Running);
     
@@ -345,25 +352,61 @@ void Heap::lastChanceToFinalize()
         waitForThreadCompletion(m_collectContinuouslyThread);
     }
     
-    // Carefully bring the thread down. We need to use waitForCollector() until we know that there
-    // won't be any other collections.
+    if (Options::logGC())
+        dataLog("1");
+    
+    // Prevent new collections from being started. This is probably not even necessary, since we're not
+    // going to call into anything that starts collections. Still, this makes the algorithm more
+    // obviously sound.
+    m_isSafeToCollect = false;
+    
+    if (Options::logGC())
+        dataLog("2");
+
+    bool isCollecting;
+    {
+        auto locker = holdLock(*m_threadLock);
+        RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+        isCollecting = m_lastServedTicket < m_lastGrantedTicket;
+    }
+    if (isCollecting) {
+        if (Options::logGC())
+            dataLog("...]\n");
+        
+        // Wait for the current collection to finish.
+        waitForCollector(
+            [&] (const AbstractLocker&) -> bool {
+                RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+                return m_lastServedTicket == m_lastGrantedTicket;
+            });
+        
+        if (Options::logGC())
+            dataLog("[GC<", RawPointer(this), ">: shutdown ");
+    }
+    if (Options::logGC())
+        dataLog("3");
+
+    RELEASE_ASSERT(m_requests.isEmpty());
+    RELEASE_ASSERT(m_lastServedTicket == m_lastGrantedTicket);
+    
+    // Carefully bring the thread down.
     bool stopped = false;
     {
         LockHolder locker(*m_threadLock);
         stopped = m_thread->tryStop(locker);
-        if (!stopped) {
-            m_threadShouldStop = true;
+        m_threadShouldStop = true;
+        if (!stopped)
             m_threadCondition->notifyOne(locker);
-        }
     }
-    if (!stopped) {
-        waitForCollector(
-            [&] (const LockHolder&) -> bool {
-                return m_threadIsStopping;
-            });
-        // It's now safe to join the thread, since we know that there will not be any more collections.
+
+    if (Options::logGC())
+        dataLog("4");
+    
+    if (!stopped)
         m_thread->join();
-    }
+    
+    if (Options::logGC())
+        dataLog("5 ");
     
     m_arrayBuffers.lastChanceToFinalize();
     m_codeBlocks->lastChanceToFinalize(*m_vm);
@@ -372,6 +415,9 @@ void Heap::lastChanceToFinalize()
     releaseDelayedReleasedObjects();
 
     sweepAllLogicallyEmptyWeakBlocks();
+    
+    if (Options::logGC())
+        dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n");
 }
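
The served/granted comparison above is a ticket scheme: every collection request is granted a ticket, the collector bumps the served counter as it retires each collection, and "no collection in flight" is exactly served == granted. A standalone sketch of the mechanism (illustrative; the real Heap parks on ParkingLot rather than a condition variable):

#include <condition_variable>
#include <cstdint>
#include <mutex>

class TicketedCollector {
public:
    // Called by the mutator; the returned ticket can be waited on.
    uint64_t requestCollection()
    {
        std::lock_guard<std::mutex> locker(m_lock);
        return ++m_lastGranted;
    }

    // Called by whoever drives a collection to completion.
    void didFinishCollection()
    {
        {
            std::lock_guard<std::mutex> locker(m_lock);
            ++m_lastServed;
        }
        m_condition.notify_all();
    }

    // Shutdown-style wait: block until no collection is in flight.
    void waitUntilIdle()
    {
        std::unique_lock<std::mutex> locker(m_lock);
        m_condition.wait(locker, [&] { return m_lastServed == m_lastGranted; });
    }

private:
    std::mutex m_lock;
    std::condition_variable m_condition;
    uint64_t m_lastServed { 0 };
    uint64_t m_lastGranted { 0 };
};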
 
 void Heap::releaseDelayedReleasedObjects()
@@ -525,186 +571,9 @@ void Heap::assertSharedMarkStacksEmpty()
     RELEASE_ASSERT(ok);
 }
 
-void Heap::markToFixpoint(double gcStartTime)
-{
-    TimingScope markToFixpointTimingScope(*this, "Heap::markToFixpoint");
-    
-    if (m_collectionScope == CollectionScope::Full) {
-        m_opaqueRoots.clear();
-        m_collectorSlotVisitor->clearMarkStacks();
-        m_mutatorMarkStack->clear();
-    }
-
-    RELEASE_ASSERT(m_raceMarkStack->isEmpty());
-
-    beginMarking();
-
-    forEachSlotVisitor(
-        [&] (SlotVisitor& visitor) {
-            visitor.didStartMarking();
-        });
-
-    m_parallelMarkersShouldExit = false;
-
-    m_helperClient.setFunction(
-        [this] () {
-            SlotVisitor* slotVisitor;
-            {
-                LockHolder locker(m_parallelSlotVisitorLock);
-                if (m_availableParallelSlotVisitors.isEmpty()) {
-                    std::unique_ptr<SlotVisitor> newVisitor =
-                        std::make_unique<SlotVisitor>(*this);
-                    
-                    if (Options::optimizeParallelSlotVisitorsForStoppedMutator())
-                        newVisitor->optimizeForStoppedMutator();
-                    
-                    newVisitor->didStartMarking();
-                    
-                    slotVisitor = newVisitor.get();
-                    m_parallelSlotVisitors.append(WTFMove(newVisitor));
-                } else
-                    slotVisitor = m_availableParallelSlotVisitors.takeLast();
-            }
-
-            WTF::registerGCThread(GCThreadType::Helper);
-
-            {
-                ParallelModeEnabler parallelModeEnabler(*slotVisitor);
-                slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
-            }
-
-            {
-                LockHolder locker(m_parallelSlotVisitorLock);
-                m_availableParallelSlotVisitors.append(slotVisitor);
-            }
-        });
-
-    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
-
-    m_constraintSet->didStartMarking();
-    
-    m_scheduler->beginCollection();
-    if (Options::logGC())
-        m_scheduler->log();
-    
-    // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()"
-    // checks because bootstrap would have put things into the visitor. So, we should fall
-    // through to draining.
-    
-    if (!slotVisitor.didReachTermination()) {
-        dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n");
-        dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n");
-        dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n");
-        dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n");
-        dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n");
-        dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n");
-        dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n");
-        dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n");
-        RELEASE_ASSERT_NOT_REACHED();
-    }
-    
-    for (;;) {
-        if (Options::logGC())
-            dataLog("v=", bytesVisited() / 1024, "kb o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");
-        
-        if (slotVisitor.didReachTermination()) {
-            m_scheduler->didReachTermination();
-            
-            assertSharedMarkStacksEmpty();
-            
-            slotVisitor.mergeIfNecessary();
-            for (auto& parallelVisitor : m_parallelSlotVisitors)
-                parallelVisitor->mergeIfNecessary();
-            
-            // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,
-            // we don't have to execute root constraints again unless the mutator did run. At a
-            // minimum, we could use this for work estimates - but it's probably more than just an
-            // estimate.
-            // https://bugs.webkit.org/show_bug.cgi?id=166828
-            
-            // FIXME: We should take advantage of the fact that we could timeout. This only comes
-            // into play if we're executing constraints for the first time. But that will matter
-            // when we have deep stacks or a lot of DOM stuff.
-            // https://bugs.webkit.org/show_bug.cgi?id=166831
-            
-            // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also
-            // add their own using Heap::addMarkingConstraint().
-            bool converged =
-                m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());
-            if (converged && slotVisitor.isEmpty()) {
-                assertSharedMarkStacksEmpty();
-                break;
-            }
-            
-            m_scheduler->didExecuteConstraints();
-        }
-        
-        if (Options::logGC())
-            dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " ");
-        
-        {
-            ParallelModeEnabler enabler(slotVisitor);
-            slotVisitor.drainInParallel(m_scheduler->timeToResume());
-        }
-        
-        m_scheduler->synchronousDrainingDidStall();
-
-        if (slotVisitor.didReachTermination())
-            continue;
-        
-        if (!m_scheduler->shouldResume())
-            continue;
-        
-        m_scheduler->willResume();
-        
-        if (Options::logGC()) {
-            double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();
-            dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");
-        }
-
-        // Forgive the mutator for its past failures to keep up.
-        // FIXME: Figure out if moving this to different places results in perf changes.
-        m_incrementBalance = 0;
-        
-        resumeTheWorld();
-        
-        {
-            ParallelModeEnabler enabler(slotVisitor);
-            slotVisitor.drainInParallelPassively(m_scheduler->timeToStop());
-        }
-
-        stopTheWorld();
-        
-        if (Options::logGC())
-            dataLog("[GC: ");
-        
-        m_scheduler->didStop();
-        
-        if (Options::logGC())
-            m_scheduler->log();
-    }
-    
-    m_scheduler->endCollection();
-
-    {
-        std::lock_guard<Lock> lock(m_markingMutex);
-        m_parallelMarkersShouldExit = true;
-        m_markingConditionVariable.notifyAll();
-    }
-    m_helperClient.finish();
-
-    iterateExecutingAndCompilingCodeBlocks(
-        [&] (CodeBlock* codeBlock) {
-            writeBarrier(codeBlock);
-        });
-        
-    updateObjectCounts(gcStartTime);
-    endMarking();
-}
-
 void Heap::gatherStackRoots(ConservativeRoots& roots)
 {
-    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
+    m_machineThreads->gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, m_currentThreadState);
 }
 
 void Heap::gatherJSStackRoots(ConservativeRoots& roots)
@@ -804,12 +673,8 @@ void Heap::removeDeadHeapSnapshotNodes(HeapProfiler& heapProfiler)
     }
 }
 
-void Heap::updateObjectCounts(double gcStartTime)
+void Heap::updateObjectCounts()
 {
-    if (Options::logGC() == GCLogging::Verbose) {
-        dataLogF("\nNumber of live Objects after GC %lu, took %.6f secs\n", static_cast<unsigned long>(visitCount()), WTF::monotonicallyIncreasingTime() - gcStartTime);
-    }
-    
     if (m_collectionScope == CollectionScope::Full)
         m_totalBytesVisited = 0;
 
@@ -1033,14 +898,14 @@ void Heap::sweepSynchronously()
 {
     double before = 0;
     if (Options::logGC()) {
-        dataLog("[Full sweep: ", capacity() / 1024, "kb ");
+        dataLog("Full sweep: ", capacity() / 1024, "kb ");
         before = currentTimeMS();
     }
     m_objectSpace.sweep();
     m_objectSpace.shrink();
     if (Options::logGC()) {
         double after = currentTimeMS();
-        dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms]");
+        dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms");
     }
 }
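
The next hunk re-adds collectAsync() with a request-deduplication loop whose rule is easy to state: an Eden request is subsumed by any pending request; a Full request only by a pending Full request; and a scopeless "collector's choice" request by a pending scopeless or Full request. A standalone restatement of just that predicate (container and function names are illustrative):

#include <deque>
#include <optional>

enum class CollectionScope { Eden, Full };

bool alreadyRequested(const std::deque<std::optional<CollectionScope>>& requests,
                      std::optional<CollectionScope> scope)
{
    for (std::optional<CollectionScope> request : requests) {
        if (scope == CollectionScope::Eden)
            return true; // Any pending request subsumes an Eden request.
        if (scope == CollectionScope::Full) {
            if (request == CollectionScope::Full)
                return true;
        } else {
            // No particular scope requested: a pending scopeless or Full
            // request already covers it.
            if (!request || request == CollectionScope::Full)
                return true;
        }
    }
    return false;
}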
 
@@ -1051,125 +916,422 @@ void Heap::collectAllGarbage()
     
     collectSync(CollectionScope::Full);
 
-    DeferGCForAWhile deferGC(*this);
-    if (UNLIKELY(Options::useImmortalObjects()))
-        sweeper()->willFinishSweeping();
+    DeferGCForAWhile deferGC(*this);
+    if (UNLIKELY(Options::useImmortalObjects()))
+        sweeper()->stopSweeping();
+
+    bool alreadySweptInCollectSync = Options::sweepSynchronously();
+    if (!alreadySweptInCollectSync) {
+        if (Options::logGC())
+            dataLog("[GC<", RawPointer(this), ">: ");
+        sweepSynchronously();
+        if (Options::logGC())
+            dataLog("]\n");
+    }
+    m_objectSpace.assertNoUnswept();
+
+    sweepAllLogicallyEmptyWeakBlocks();
+}
+
+void Heap::collectAsync(std::optional<CollectionScope> scope)
+{
+    if (!m_isSafeToCollect)
+        return;
+
+    bool alreadyRequested = false;
+    {
+        LockHolder locker(*m_threadLock);
+        for (std::optional<CollectionScope> request : m_requests) {
+            if (scope) {
+                if (scope == CollectionScope::Eden) {
+                    alreadyRequested = true;
+                    break;
+                } else {
+                    RELEASE_ASSERT(scope == CollectionScope::Full);
+                    if (request == CollectionScope::Full) {
+                        alreadyRequested = true;
+                        break;
+                    }
+                }
+            } else {
+                if (!request || request == CollectionScope::Full) {
+                    alreadyRequested = true;
+                    break;
+                }
+            }
+        }
+    }
+    if (alreadyRequested)
+        return;
+
+    requestCollection(scope);
+}
+
+void Heap::collectSync(std::optional<CollectionScope> scope)
+{
+    if (!m_isSafeToCollect)
+        return;
+    
+    waitForCollection(requestCollection(scope));
+}
+
+bool Heap::shouldCollectInCollectorThread(const AbstractLocker&)
+{
+    RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket));
+    RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+    
+    if (false)
+        dataLog("Mutator has the conn = ", !!(m_worldState.load() & mutatorHasConnBit), "\n");
+    
+    return !m_requests.isEmpty() && !(m_worldState.load() & mutatorHasConnBit);
+}
+
+void Heap::collectInCollectorThread()
+{
+    for (;;) {
+        RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Collector, nullptr);
+        switch (result) {
+        case RunCurrentPhaseResult::Finished:
+            return;
+        case RunCurrentPhaseResult::Continue:
+            break;
+        case RunCurrentPhaseResult::NeedCurrentThreadState:
+            RELEASE_ASSERT_NOT_REACHED();
+            break;
+        }
+    }
+}
+
+void Heap::checkConn(GCConductor conn)
+{
+    switch (conn) {
+    case GCConductor::Mutator:
+        RELEASE_ASSERT(m_worldState.load() & mutatorHasConnBit);
+        return;
+    case GCConductor::Collector:
+        RELEASE_ASSERT(!(m_worldState.load() & mutatorHasConnBit));
+        return;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+auto Heap::runCurrentPhase(GCConductor conn, CurrentThreadState* currentThreadState) -> RunCurrentPhaseResult
+{
+    checkConn(conn);
+    m_currentThreadState = currentThreadState;
+    
+    // If the collector transfers the conn to the mutator, it leaves us in between phases.
+    if (!finishChangingPhase(conn)) {
+        // A mischievous mutator could repeatedly relinquish the conn back to us. We try to avoid doing
+        // this, but it's probably not the end of the world if it did happen.
+        if (false)
+            dataLog("Conn bounce-back.\n");
+        return RunCurrentPhaseResult::Finished;
+    }
+    
+    bool result = false;
+    switch (m_currentPhase) {
+    case CollectorPhase::NotRunning:
+        result = runNotRunningPhase(conn);
+        break;
+        
+    case CollectorPhase::Begin:
+        result = runBeginPhase(conn);
+        break;
+        
+    case CollectorPhase::Fixpoint:
+        if (!currentThreadState && conn == GCConductor::Mutator)
+            return RunCurrentPhaseResult::NeedCurrentThreadState;
+        
+        result = runFixpointPhase(conn);
+        break;
+        
+    case CollectorPhase::Concurrent:
+        result = runConcurrentPhase(conn);
+        break;
+        
+    case CollectorPhase::Reloop:
+        result = runReloopPhase(conn);
+        break;
+        
+    case CollectorPhase::End:
+        result = runEndPhase(conn);
+        break;
+    }
+
+    return result ? RunCurrentPhaseResult::Continue : RunCurrentPhaseResult::Finished;
+}
+
+NEVER_INLINE bool Heap::runNotRunningPhase(GCConductor conn)
+{
+    // Check m_requests since the mutator calls this to poll what's going on.
+    {
+        auto locker = holdLock(*m_threadLock);
+        if (m_requests.isEmpty())
+            return false;
+    }
+    
+    return changePhase(conn, CollectorPhase::Begin);
+}
+
+NEVER_INLINE bool Heap::runBeginPhase(GCConductor conn)
+{
+    m_currentGCStartTime = MonotonicTime::now();
+        
+    std::optional<CollectionScope> scope;
+    {
+        LockHolder locker(*m_threadLock);
+        RELEASE_ASSERT(!m_requests.isEmpty());
+        scope = m_requests.first();
+    }
+        
+    if (Options::logGC())
+        dataLog("[GC<", RawPointer(this), ">: START ", gcConductorShortName(conn), " ", capacity() / 1024, "kb ");
+
+    m_beforeGC = MonotonicTime::now();
+
+    if (m_collectionScope) {
+        dataLog("Collection scope already set during GC: ", *m_collectionScope, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+        
+    willStartCollection(scope);
+        
+    if (m_verifier) {
+        // Verify that live objects from the last GC cycle haven't been corrupted by
+        // mutators before we begin this new GC cycle.
+        m_verifier->verify(HeapVerifier::Phase::BeforeGC);
+            
+        m_verifier->initializeGCCycle();
+        m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
+    }
+        
+    prepareForMarking();
+        
+    if (m_collectionScope == CollectionScope::Full) {
+        m_opaqueRoots.clear();
+        m_collectorSlotVisitor->clearMarkStacks();
+        m_mutatorMarkStack->clear();
+    }
+
+    RELEASE_ASSERT(m_raceMarkStack->isEmpty());
+
+    beginMarking();
+
+    forEachSlotVisitor(
+        [&] (SlotVisitor& visitor) {
+            visitor.didStartMarking();
+        });
+
+    m_parallelMarkersShouldExit = false;
+
+    m_helperClient.setFunction(
+        [this] () {
+            SlotVisitor* slotVisitor;
+            {
+                LockHolder locker(m_parallelSlotVisitorLock);
+                if (m_availableParallelSlotVisitors.isEmpty()) {
+                    std::unique_ptr<SlotVisitor> newVisitor = std::make_unique<SlotVisitor>(
+                        *this, toCString("P", m_parallelSlotVisitors.size() + 1));
+                    
+                    if (Options::optimizeParallelSlotVisitorsForStoppedMutator())
+                        newVisitor->optimizeForStoppedMutator();
+                    
+                    newVisitor->didStartMarking();
+                    
+                    slotVisitor = newVisitor.get();
+                    m_parallelSlotVisitors.append(WTFMove(newVisitor));
+                } else
+                    slotVisitor = m_availableParallelSlotVisitors.takeLast();
+            }
+
+            WTF::registerGCThread(GCThreadType::Helper);
+
+            {
+                ParallelModeEnabler parallelModeEnabler(*slotVisitor);
+                slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
+            }
+
+            {
+                LockHolder locker(m_parallelSlotVisitorLock);
+                m_availableParallelSlotVisitors.append(slotVisitor);
+            }
+        });
+
+    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
+
+    m_constraintSet->didStartMarking();
+    
+    m_scheduler->beginCollection();
+    if (Options::logGC())
+        m_scheduler->log();
+    
+    // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()"
+    // checks because bootstrap would have put things into the visitor. So, we should fall
+    // through to draining.
+    
+    if (!slotVisitor.didReachTermination()) {
+        dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n");
+        dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n");
+        dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n");
+        dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n");
+        dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n");
+        dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n");
+        dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n");
+        dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+        
+    return changePhase(conn, CollectorPhase::Fixpoint);
+}
+
+NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn)
+{
+    RELEASE_ASSERT(conn == GCConductor::Collector || m_currentThreadState);
+    
+    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
+    
+    if (Options::logGC()) {
+        HashMap<const char*, size_t> visitMap;
+        forEachSlotVisitor(
+            [&] (SlotVisitor& slotVisitor) {
+                visitMap.add(slotVisitor.codeName(), slotVisitor.bytesVisited() / 1024);
+            });
+        
+        auto perVisitorDump = sortedMapDump(
+            visitMap,
+            [] (const char* a, const char* b) -> bool {
+                return strcmp(a, b) < 0;
+            },
+            ":", " ");
+        
+        dataLog("v=", bytesVisited() / 1024, "kb (", perVisitorDump, ") o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");
+    }
+        
+    if (slotVisitor.didReachTermination()) {
+        m_scheduler->didReachTermination();
+            
+        assertSharedMarkStacksEmpty();
+            
+        slotVisitor.mergeIfNecessary();
+        for (auto& parallelVisitor : m_parallelSlotVisitors)
+            parallelVisitor->mergeIfNecessary();
+            
+        // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,
+        // we don't have to execute root constraints again unless the mutator did run. At a
+        // minimum, we could use this for work estimates - but it's probably more than just an
+        // estimate.
+        // https://bugs.webkit.org/show_bug.cgi?id=166828
+            
+        // FIXME: We should take advantage of the fact that we could timeout. This only comes
+        // into play if we're executing constraints for the first time. But that will matter
+        // when we have deep stacks or a lot of DOM stuff.
+        // https://bugs.webkit.org/show_bug.cgi?id=166831
+            
+        // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also
+        // add their own using Heap::addMarkingConstraint().
+        bool converged =
+            m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());
+        if (converged && slotVisitor.isEmpty()) {
+            assertSharedMarkStacksEmpty();
+            return changePhase(conn, CollectorPhase::End);
+        }
+            
+        m_scheduler->didExecuteConstraints();
+    }
+        
+    if (Options::logGC())
+        dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " ");
+        
+    {
+        ParallelModeEnabler enabler(slotVisitor);
+        slotVisitor.drainInParallel(m_scheduler->timeToResume());
+    }
+        
+    m_scheduler->synchronousDrainingDidStall();
+
+    if (slotVisitor.didReachTermination())
+        return true; // This is like relooping to the top of runFixpointPhase().
+        
+    if (!m_scheduler->shouldResume())
+        return true;
 
-    bool alreadySweptInCollectSync = Options::sweepSynchronously();
-    if (!alreadySweptInCollectSync) {
-        sweepSynchronously();
-        if (Options::logGC())
-            dataLog("\n");
+    m_scheduler->willResume();
+        
+    if (Options::logGC()) {
+        double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();
+        dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");
     }
-    m_objectSpace.assertNoUnswept();
 
-    sweepAllLogicallyEmptyWeakBlocks();
+    // Forgive the mutator for its past failures to keep up.
+    // FIXME: Figure out if moving this to different places results in perf changes.
+    m_incrementBalance = 0;
+        
+    return changePhase(conn, CollectorPhase::Concurrent);
 }
 
-void Heap::collectAsync(std::optional<CollectionScope> scope)
+NEVER_INLINE bool Heap::runConcurrentPhase(GCConductor conn)
 {
-    if (!m_isSafeToCollect)
-        return;
+    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
 
-    bool alreadyRequested = false;
-    {
-        LockHolder locker(*m_threadLock);
-        for (std::optional<CollectionScope> request : m_requests) {
-            if (scope) {
-                if (scope == CollectionScope::Eden) {
-                    alreadyRequested = true;
-                    break;
-                } else {
-                    RELEASE_ASSERT(scope == CollectionScope::Full);
-                    if (request == CollectionScope::Full) {
-                        alreadyRequested = true;
-                        break;
-                    }
-                }
-            } else {
-                if (!request || request == CollectionScope::Full) {
-                    alreadyRequested = true;
-                    break;
-                }
-            }
-        }
+    switch (conn) {
+    case GCConductor::Mutator: {
+        // When the mutator has the conn, we poll runConcurrentPhase() every time someone says
+        // stopIfNecessary(), so on every allocation slow path. When that happens we poll if it's time
+        // to stop and do some work.
+        if (slotVisitor.didReachTermination()
+            || m_scheduler->shouldStop())
+            return changePhase(conn, CollectorPhase::Reloop);
+        
+        // We could be coming from a collector phase that stuffed our SlotVisitor, so make sure we donate
+        // everything. This is super cheap if the SlotVisitor is already empty.
+        slotVisitor.donateAll();
+        return false;
     }
-    if (alreadyRequested)
-        return;
-
-    requestCollection(scope);
-}
-
-void Heap::collectSync(std::optional<CollectionScope> scope)
-{
-    if (!m_isSafeToCollect)
-        return;
+    case GCConductor::Collector: {
+        {
+            ParallelModeEnabler enabler(slotVisitor);
+            slotVisitor.drainInParallelPassively(m_scheduler->timeToStop());
+        }
+        return changePhase(conn, CollectorPhase::Reloop);
+    } }
     
-    waitForCollection(requestCollection(scope));
+    RELEASE_ASSERT_NOT_REACHED();
+    return false;
 }
 
-bool Heap::shouldCollectInThread(const LockHolder&)
+NEVER_INLINE bool Heap::runReloopPhase(GCConductor conn)
 {
-    RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket));
-    RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+    if (Options::logGC())
+        dataLog("[GC<", RawPointer(this), ">: ", gcConductorShortName(conn), " ");
+    
+    m_scheduler->didStop();
+    
+    if (Options::logGC())
+        m_scheduler->log();
     
-    return !m_requests.isEmpty();
+    return changePhase(conn, CollectorPhase::Fixpoint);
 }
 
-void Heap::collectInThread()
+NEVER_INLINE bool Heap::runEndPhase(GCConductor conn)
 {
-    m_currentGCStartTime = MonotonicTime::now();
-    
-    std::optional<CollectionScope> scope;
+    m_scheduler->endCollection();
+        
     {
-        LockHolder locker(*m_threadLock);
-        RELEASE_ASSERT(!m_requests.isEmpty());
-        scope = m_requests.first();
-    }
-    
-    SuperSamplerScope superSamplerScope(false);
-    TimingScope collectImplTimingScope(scope, "Heap::collectInThread");
-    
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC starting collection.\n");
-#endif
-    
-    stopTheWorld();
-    
-    if (false)
-        dataLog("GC START!\n");
-
-    MonotonicTime before;
-    if (Options::logGC()) {
-        dataLog("[GC: START ", capacity() / 1024, "kb ");
-        before = MonotonicTime::now();
-    }
-    
-    double gcStartTime;
-    
-    ASSERT(m_isSafeToCollect);
-    if (m_collectionScope) {
-        dataLog("Collection scope already set during GC: ", *m_collectionScope, "\n");
-        RELEASE_ASSERT_NOT_REACHED();
-    }
-    
-    willStartCollection(scope);
-    collectImplTimingScope.setScope(*this);
-    
-    gcStartTime = WTF::monotonicallyIncreasingTime();
-    if (m_verifier) {
-        // Verify that live objects from the last GC cycle haven't been corrupted by
-        // mutators before we begin this new GC cycle.
-        m_verifier->verify(HeapVerifier::Phase::BeforeGC);
-            
-        m_verifier->initializeGCCycle();
-        m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
+        auto locker = holdLock(m_markingMutex);
+        m_parallelMarkersShouldExit = true;
+        m_markingConditionVariable.notifyAll();
     }
+    m_helperClient.finish();
     
-    prepareForMarking();
-    
-    markToFixpoint(gcStartTime);
-    
+    iterateExecutingAndCompilingCodeBlocks(
+        [&] (CodeBlock* codeBlock) {
+            writeBarrier(codeBlock);
+        });
+        
+    updateObjectCounts();
+    endMarking();
+        
     if (m_verifier) {
         m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
         m_verifier->verify(HeapVerifier::Phase::AfterMarking);
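
The hunk above is the heart of the rewrite: collectInCollectorThread() (and, later, collectInMutatorThread()) just spins runCurrentPhase() until the current conductor has nothing left to do. A standalone sketch of that driver contract, with runCurrentPhase stubbed out as a stand-in for Heap::runCurrentPhase:

#include <cstdlib>

enum class RunCurrentPhaseResult { Finished, Continue, NeedCurrentThreadState };
enum class GCConductor { Mutator, Collector };
struct CurrentThreadState {}; // Would carry stack bounds and register state.

// Stub: the real function runs one phase, possibly changes phase, and
// reports whether the caller should keep driving.
RunCurrentPhaseResult runCurrentPhase(GCConductor, CurrentThreadState*)
{
    return RunCurrentPhaseResult::Finished;
}

void driveAsCollector()
{
    for (;;) {
        switch (runCurrentPhase(GCConductor::Collector, nullptr)) {
        case RunCurrentPhaseResult::Finished:
            return; // Nothing to do, or the conn moved to the mutator.
        case RunCurrentPhaseResult::Continue:
            break; // The phase advanced; keep driving.
        case RunCurrentPhaseResult::NeedCurrentThreadState:
            std::abort(); // Only the mutator can supply its own stack state.
        }
    }
}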
@@ -1195,7 +1357,7 @@ void Heap::collectInThread()
     m_objectSpace.prepareForAllocation();
     updateAllocationLimits();
 
-    didFinishCollection(gcStartTime);
+    didFinishCollection();
     
     if (m_verifier) {
         m_verifier->trimDeadObjects();
@@ -1208,13 +1370,12 @@ void Heap::collectInThread()
     }
     
     if (Options::logGC()) {
-        MonotonicTime after = MonotonicTime::now();
-        double thisPauseMS = (after - m_stopTime).milliseconds();
-        dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (after - before).milliseconds(), "ms END]\n");
+        double thisPauseMS = (m_afterGC - m_stopTime).milliseconds();
+        dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (m_afterGC - m_beforeGC).milliseconds(), "ms END]\n");
     }
     
     {
-        LockHolder locker(*m_threadLock);
+        auto locker = holdLock(*m_threadLock);
         m_requests.removeFirst();
         m_lastServedTicket++;
         clearMutatorWaiting();
@@ -1225,23 +1386,79 @@ void Heap::collectInThread()
         dataLog("GC END!\n");
 
     setNeedFinalize();
-    resumeTheWorld();
-    
+
     m_lastGCStartTime = m_currentGCStartTime;
     m_lastGCEndTime = MonotonicTime::now();
+        
+    return changePhase(conn, CollectorPhase::NotRunning);
+}
+
+bool Heap::changePhase(GCConductor conn, CollectorPhase nextPhase)
+{
+    checkConn(conn);
+
+    m_nextPhase = nextPhase;
+
+    return finishChangingPhase(conn);
+}
+
+NEVER_INLINE bool Heap::finishChangingPhase(GCConductor conn)
+{
+    checkConn(conn);
+    
+    if (m_nextPhase == m_currentPhase)
+        return true;
+
+    if (false)
+        dataLog(conn, ": Going to phase: ", m_nextPhase, " (from ", m_currentPhase, ")\n");
+    
+    bool suspendedBefore = worldShouldBeSuspended(m_currentPhase);
+    bool suspendedAfter = worldShouldBeSuspended(m_nextPhase);
+    
+    if (suspendedBefore != suspendedAfter) {
+        if (suspendedBefore) {
+            RELEASE_ASSERT(!suspendedAfter);
+            
+            resumeThePeriphery();
+            if (conn == GCConductor::Collector)
+                resumeTheMutator();
+            else
+                handleNeedFinalize();
+        } else {
+            RELEASE_ASSERT(!suspendedBefore);
+            RELEASE_ASSERT(suspendedAfter);
+            
+            if (conn == GCConductor::Collector) {
+                waitWhileNeedFinalize();
+                if (!stopTheMutator()) {
+                    if (false)
+                        dataLog("Returning false.\n");
+                    return false;
+                }
+            } else {
+                sanitizeStackForVM(m_vm);
+                handleNeedFinalize();
+            }
+            stopThePeriphery(conn);
+        }
+    }
+    
+    m_currentPhase = m_nextPhase;
+    return true;
 }
 
-void Heap::stopTheWorld()
+void Heap::stopThePeriphery(GCConductor conn)
 {
-    RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped);
-    waitWhileNeedFinalize();
-    stopTheMutator();
+    if (m_collectorBelievesThatTheWorldIsStopped) {
+        dataLog("FATAL: world already stopped.\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
     
     if (m_mutatorDidRun)
         m_mutatorExecutionVersion++;
     
     m_mutatorDidRun = false;
-    
+
     suspendCompilerThreads();
     m_collectorBelievesThatTheWorldIsStopped = true;
 
@@ -1253,7 +1470,8 @@ void Heap::stopTheWorld()
 #if ENABLE(JIT)
     {
         DeferGCForAWhile awhile(*this);
-        if (JITWorklist::instance()->completeAllForVM(*m_vm))
+        if (JITWorklist::instance()->completeAllForVM(*m_vm)
+            && conn == GCConductor::Collector)
             setGCDidJIT();
     }
 #endif // ENABLE(JIT)
@@ -1266,7 +1484,7 @@ void Heap::stopTheWorld()
     m_stopTime = MonotonicTime::now();
 }
 
-void Heap::resumeTheWorld()
+NEVER_INLINE void Heap::resumeThePeriphery()
 {
     // Calling resumeAllocating does the Right Thing depending on whether this is the end of a
     // collection cycle or this is just a concurrent phase within a collection cycle:
@@ -1277,7 +1495,10 @@ void Heap::resumeTheWorld()
     
     m_barriersExecuted = 0;
     
-    RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped);
+    if (!m_collectorBelievesThatTheWorldIsStopped) {
+        dataLog("Fatal: collector does not believe that the world is stopped.\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
     m_collectorBelievesThatTheWorldIsStopped = false;
     
     // FIXME: This could be vastly improved: we want to grab the locks in the order in which they
@@ -1315,59 +1536,72 @@ void Heap::resumeTheWorld()
         slotVisitor->updateMutatorIsStopped();
     
     resumeCompilerThreads();
-    resumeTheMutator();
 }
 
-void Heap::stopTheMutator()
+bool Heap::stopTheMutator()
 {
     for (;;) {
         unsigned oldState = m_worldState.load();
-        if ((oldState & stoppedBit)
-            && (oldState & shouldStopBit))
-            return;
-        
-        // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could
-        // switch to that if it turned out to be less confusing, but then it would not give the
-        // mutator the opportunity to react to the world being stopped.
-        if (oldState & mutatorWaitingBit) {
-            if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
-                ParkingLot::unparkAll(&m_worldState);
-            continue;
+        if (oldState & stoppedBit) {
+            RELEASE_ASSERT(!(oldState & hasAccessBit));
+            RELEASE_ASSERT(!(oldState & mutatorWaitingBit));
+            RELEASE_ASSERT(!(oldState & mutatorHasConnBit));
+            return true;
         }
         
-        if (!(oldState & hasAccessBit)
-            || (oldState & stoppedBit)) {
+        if (oldState & mutatorHasConnBit) {
+            RELEASE_ASSERT(!(oldState & hasAccessBit));
+            RELEASE_ASSERT(!(oldState & stoppedBit));
+            return false;
+        }
+
+        if (!(oldState & hasAccessBit)) {
+            RELEASE_ASSERT(!(oldState & mutatorHasConnBit));
+            RELEASE_ASSERT(!(oldState & mutatorWaitingBit));
             // We can stop the world instantly.
-            if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit))
-                return;
+            if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit))
+                return true;
             continue;
         }
         
+        // Transfer the conn to the mutator and bail.
         RELEASE_ASSERT(oldState & hasAccessBit);
         RELEASE_ASSERT(!(oldState & stoppedBit));
-        m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit);
-        m_stopIfNecessaryTimer->scheduleSoon();
-        ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit);
+        unsigned newState = (oldState | mutatorHasConnBit) & ~mutatorWaitingBit;
+        if (m_worldState.compareExchangeWeak(oldState, newState)) {
+            if (false)
+                dataLog("Handed off the conn.\n");
+            m_stopIfNecessaryTimer->scheduleSoon();
+            ParkingLot::unparkAll(&m_worldState);
+            return false;
+        }
     }
 }
 
-void Heap::resumeTheMutator()
+NEVER_INLINE void Heap::resumeTheMutator()
 {
+    if (false)
+        dataLog("Resuming the mutator.\n");
     for (;;) {
         unsigned oldState = m_worldState.load();
-        RELEASE_ASSERT(oldState & shouldStopBit);
+        if (!!(oldState & hasAccessBit) != !(oldState & stoppedBit)) {
+            dataLog("Fatal: hasAccess = ", !!(oldState & hasAccessBit), ", stopped = ", !!(oldState & stoppedBit), "\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+        if (oldState & mutatorHasConnBit) {
+            dataLog("Fatal: mutator has the conn.\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
         
-        if (!(oldState & hasAccessBit)) {
-            // We can resume the world instantly.
-            if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) {
-                ParkingLot::unparkAll(&m_worldState);
-                return;
-            }
-            continue;
+        if (!(oldState & stoppedBit)) {
+            if (false)
+                dataLog("Returning because not stopped.\n");
+            return;
         }
         
-        // We can tell the world to resume.
-        if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) {
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~stoppedBit)) {
+            if (false)
+                dataLog("CASing and returning.\n");
             ParkingLot::unparkAll(&m_worldState);
             return;
         }
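
stopTheMutator() above no longer blocks a mutator that is inside the heap; it hands over the conn instead. A standalone model of that compare-exchange loop (the bit names mirror the patch, but this omits mutatorWaitingBit, the stop-if-necessary timer, and the ParkingLot wakeups):

#include <atomic>

constexpr unsigned hasAccessBit      = 1u << 0;
constexpr unsigned stoppedBit        = 1u << 1;
constexpr unsigned mutatorHasConnBit = 1u << 2;

// Returns true if the mutator is now stopped, false if it was given the conn.
bool stopTheMutator(std::atomic<unsigned>& worldState)
{
    for (;;) {
        unsigned oldState = worldState.load();
        if (oldState & stoppedBit)
            return true; // Already stopped.
        if (oldState & mutatorHasConnBit)
            return false; // The mutator is already driving the collector.
        if (!(oldState & hasAccessBit)) {
            // The mutator is outside the heap: declare it stopped instantly.
            if (worldState.compare_exchange_weak(oldState, oldState | stoppedBit))
                return true;
            continue;
        }
        // The mutator is running inside the heap: transfer the conn and let
        // its slow paths drive the collector instead of parking it.
        if (worldState.compare_exchange_weak(oldState, oldState | mutatorHasConnBit))
            return false;
    }
}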
@@ -1389,30 +1623,54 @@ void Heap::stopIfNecessarySlow()
 bool Heap::stopIfNecessarySlow(unsigned oldState)
 {
     RELEASE_ASSERT(oldState & hasAccessBit);
+    RELEASE_ASSERT(!(oldState & stoppedBit));
     
     // It's possible for us to wake up with finalization already requested but the world not yet
     // resumed. If that happens, we can't run finalization yet.
-    if (!(oldState & stoppedBit)
-        && handleNeedFinalize(oldState))
+    if (handleNeedFinalize(oldState))
         return true;
+
+    // FIXME: When entering the concurrent phase, we could arrange for this branch not to fire, and then
+    // have the SlotVisitor do things to the m_worldState to make this branch fire again. That would
+    // prevent us from polling this so much. Ideally, stopIfNecessary would ignore the mutatorHasConnBit
+    // and there would be some other bit indicating whether we were in some GC phase other than the
+    // NotRunning or Concurrent ones.
+    if (oldState & mutatorHasConnBit)
+        collectInMutatorThread();
     
-    if (!(oldState & shouldStopBit) && !m_scheduler->shouldStop()) {
-        if (!(oldState & stoppedBit))
-            return false;
-        m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit);
-        return true;
-    }
-    
-    sanitizeStackForVM(m_vm);
+    return false;
+}
 
-    if (verboseStop) {
-        dataLog("Stopping!\n");
-        WTFReportBacktrace();
+NEVER_INLINE void Heap::collectInMutatorThread()
+{
+    CollectingScope collectingScope(*this);
+    for (;;) {
+        RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, nullptr);
+        switch (result) {
+        case RunCurrentPhaseResult::Finished:
+            return;
+        case RunCurrentPhaseResult::Continue:
+            break;
+        case RunCurrentPhaseResult::NeedCurrentThreadState:
+            sanitizeStackForVM(m_vm);
+            auto lambda = [&] (CurrentThreadState& state) {
+                for (;;) {
+                    RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, &state);
+                    switch (result) {
+                    case RunCurrentPhaseResult::Finished:
+                        return;
+                    case RunCurrentPhaseResult::Continue:
+                        break;
+                    case RunCurrentPhaseResult::NeedCurrentThreadState:
+                        RELEASE_ASSERT_NOT_REACHED();
+                        break;
+                    }
+                }
+            };
+            callWithCurrentThreadState(scopedLambda<void(CurrentThreadState&)>(WTFMove(lambda)));
+            return;
+        }
     }
-    m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit);
-    ParkingLot::unparkAll(&m_worldState);
-    ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit);
-    return true;
 }
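
collectInMutatorThread() shows the general driver shape for the collector state machine: step the current phase until it finishes, and if a phase needs the caller's stack and registers, re-enter the loop with the thread state captured exactly once. A minimal sketch of that control flow, where runPhase is a hypothetical stand-in for runCurrentPhase():

#include <cstdio>
#include <cstdlib>

enum class StepResult { Finished, Continue, NeedThreadState };

// Hypothetical stand-in for runCurrentPhase(): counts phases down and asks
// for the thread state once, the way phases that scan stack roots would.
StepResult runPhase(int& phase, bool haveThreadState)
{
    if (phase == 3 && !haveThreadState)
        return StepResult::NeedThreadState; // Must capture the caller's stack first.
    if (phase == 0)
        return StepResult::Finished;
    --phase;
    return StepResult::Continue;
}

void drive(int phase)
{
    for (;;) {
        switch (runPhase(phase, false)) {
        case StepResult::Finished:
            return;
        case StepResult::Continue:
            continue;
        case StepResult::NeedThreadState:
            // Re-enter the loop with the thread state captured; the inner
            // loop must never ask for it a second time.
            for (;;) {
                switch (runPhase(phase, true)) {
                case StepResult::Finished:
                    return;
                case StepResult::Continue:
                    continue;
                case StepResult::NeedThreadState:
                    std::abort(); // Unreachable, as the RELEASE_ASSERT above enforces.
                }
            }
        }
    }
}

int main()
{
    drive(3);
    std::puts("collection finished");
    return 0;
}

Keeping the thread-state capture out of the common path means phases that never scan the stack pay nothing for it.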
 
 template<typename Func>
@@ -1425,18 +1683,23 @@ void Heap::waitForCollector(const Func& func)
             done = func(locker);
             if (!done) {
                 setMutatorWaiting();
+                
                // At this point, the collector knows that we intend to wait, and it will clear the
                // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit
                // prevents us from parking except if there is also a stop-the-world in progress.
                // Unparking after clearing means that if the clearing happens after we park, then
                // we will still unpark.
             }
         }
-
+        
         // If we're in a stop-the-world scenario, we need to wait for that even if done is true.
         unsigned oldState = m_worldState.load();
         if (stopIfNecessarySlow(oldState))
             continue;
         
+        // FIXME: We wouldn't need this if stopIfNecessarySlow() had a mode where it knew to just
+        // do the collection.
+        relinquishConn();
+        
         if (done) {
             clearMutatorWaiting(); // Clean up just in case.
             return;
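
The comment above is the standard recipe for avoiding a lost wakeup: the waiter publishes a waiting flag before sleeping, and the waker clears the flag before notifying, so either ordering of park and clear lets the waiter proceed. A condition-variable analogue of the same protocol (a sketch; the real code uses WTF::ParkingLot rather than std::condition_variable):

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex stateLock;
std::condition_variable cond;
bool mutatorWaiting; // Guarded by stateLock; plays the role of mutatorWaitingBit.

void waiter()
{
    std::unique_lock<std::mutex> locker(stateLock);
    // Sleep only while the flag is still set; if the collector already
    // cleared it, fall through immediately instead of sleeping forever.
    cond.wait(locker, [] { return !mutatorWaiting; });
}

void collectorFinishesCycle()
{
    {
        std::lock_guard<std::mutex> locker(stateLock);
        mutatorWaiting = false; // Clear first...
    }
    cond.notify_all(); // ...then wake, so a late parker still sees the clear.
}

int main()
{
    mutatorWaiting = true; // setMutatorWaiting(): announce intent before parking.
    std::thread mutator(waiter);
    collectorFinishesCycle();
    mutator.join();
    return 0;
}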
@@ -1453,8 +1716,7 @@ void Heap::acquireAccessSlow()
         unsigned oldState = m_worldState.load();
         RELEASE_ASSERT(!(oldState & hasAccessBit));
         
-        if (oldState & shouldStopBit) {
-            RELEASE_ASSERT(oldState & stoppedBit);
+        if (oldState & stoppedBit) {
             if (verboseStop) {
                 dataLog("Stopping in acquireAccess!\n");
                 WTFReportBacktrace();
@@ -1470,6 +1732,7 @@ void Heap::acquireAccessSlow()
             handleGCDidJIT();
             handleNeedFinalize();
             m_mutatorDidRun = true;
+            stopIfNecessary();
             return;
         }
     }
@@ -1479,28 +1742,73 @@ void Heap::releaseAccessSlow()
 {
     for (;;) {
         unsigned oldState = m_worldState.load();
-        RELEASE_ASSERT(oldState & hasAccessBit);
-        RELEASE_ASSERT(!(oldState & stoppedBit));
+        if (!(oldState & hasAccessBit)) {
+            dataLog("FATAL: Attempting to release access but the mutator does not have access.\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+        if (oldState & stoppedBit) {
+            dataLog("FATAL: Attempting to release access but the mutator is stopped.\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
         
         if (handleNeedFinalize(oldState))
             continue;
         
-        if (oldState & shouldStopBit) {
-            unsigned newState = (oldState & ~hasAccessBit) | stoppedBit;
-            if (m_worldState.compareExchangeWeak(oldState, newState)) {
-                ParkingLot::unparkAll(&m_worldState);
-                return;
-            }
-            continue;
-        }
-        
-        RELEASE_ASSERT(!(oldState & shouldStopBit));
+        unsigned newState = oldState & ~(hasAccessBit | mutatorHasConnBit);
         
-        if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit))
+        if ((oldState & mutatorHasConnBit)
+            && m_nextPhase != m_currentPhase) {
+            // This means that the collector thread had given us the conn so that we would do something
+            // for it. Stop ourselves as we release access. This ensures that acquireAccess blocks. In
+            // the meantime, since we're handing the conn over, the collector will be awoken and it is
+            // sure to have work to do.
+            newState |= stoppedBit;
+        }
+
+        if (m_worldState.compareExchangeWeak(oldState, newState)) {
+            if (oldState & mutatorHasConnBit)
+                finishRelinquishingConn();
             return;
+        }
     }
 }
 
+bool Heap::relinquishConn(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+    RELEASE_ASSERT(!(oldState & stoppedBit));
+    
+    if (!(oldState & mutatorHasConnBit))
+        return false; // Done.
+    
+    if (m_threadShouldStop)
+        return false;
+    
+    if (!m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorHasConnBit))
+        return true; // Loop around.
+    
+    finishRelinquishingConn();
+    return true;
+}
+
+void Heap::finishRelinquishingConn()
+{
+    if (false)
+        dataLog("Relinquished the conn.\n");
+    
+    sanitizeStackForVM(m_vm);
+    
+    auto locker = holdLock(*m_threadLock);
+    if (!m_requests.isEmpty())
+        m_threadCondition->notifyOne(locker);
+    ParkingLot::unparkAll(&m_worldState);
+}
+
+void Heap::relinquishConn()
+{
+    while (relinquishConn(m_worldState.load())) { }
+}
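
The "conn" (as in taking the conn on a ship's bridge) is a token naming which thread currently drives the collector; relinquishConn() hands the token back and, via finishRelinquishingConn(), wakes the collector thread if requests are queued. A toy model of that handoff (hypothetical names; in the real code the token is a bit in m_worldState that is additionally protected by m_threadLock):

#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>

std::mutex threadLock;
std::condition_variable threadCondition;
bool mutatorHasConn = true; // The mutator starts out driving the GC.
std::deque<int> requests { 1 };

void collectorThread()
{
    std::unique_lock<std::mutex> locker(threadLock);
    // The collector only runs requests when it holds the conn.
    threadCondition.wait(locker, [] { return !mutatorHasConn && !requests.empty(); });
    std::printf("collector serving request %d\n", requests.front());
    requests.pop_front();
}

void relinquishConn()
{
    std::lock_guard<std::mutex> locker(threadLock);
    mutatorHasConn = false;
    if (!requests.empty())
        threadCondition.notify_one(); // Mirrors finishRelinquishingConn().
}

int main()
{
    std::thread collector(collectorThread);
    relinquishConn();
    collector.join();
    return 0;
}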
+
 bool Heap::handleGCDidJIT(unsigned oldState)
 {
     RELEASE_ASSERT(oldState & hasAccessBit);
@@ -1513,7 +1821,7 @@ bool Heap::handleGCDidJIT(unsigned oldState)
     return true;
 }
 
-bool Heap::handleNeedFinalize(unsigned oldState)
+NEVER_INLINE bool Heap::handleNeedFinalize(unsigned oldState)
 {
     RELEASE_ASSERT(oldState & hasAccessBit);
     RELEASE_ASSERT(!(oldState & stoppedBit));
@@ -1580,7 +1888,7 @@ void Heap::clearMutatorWaiting()
     m_worldState.exchangeAnd(~mutatorWaitingBit);
 }
 
-void Heap::notifyThreadStopping(const LockHolder&)
+void Heap::notifyThreadStopping(const AbstractLocker&)
 {
     m_threadIsStopping = true;
     clearMutatorWaiting();
@@ -1592,11 +1900,11 @@ void Heap::finalize()
     MonotonicTime before;
     if (Options::logGC()) {
         before = MonotonicTime::now();
-        dataLog("[GC: finalize ");
+        dataLog("[GC<", RawPointer(this), ">: finalize ");
     }
     
     {
-        HelpingGCScope helpingGCScope(*this);
+        SweepingScope sweepingScope(*this);
         deleteUnmarkedCompiledCode();
         deleteSourceProviderCaches();
         sweepLargeAllocations();
@@ -1604,7 +1912,7 @@ void Heap::finalize()
     
     if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
         cache->clear();
-
+    
     if (Options::sweepSynchronously())
         sweepSynchronously();
 
@@ -1622,16 +1930,27 @@ Heap::Ticket Heap::requestCollection(std::optional<CollectionScope> scope)
     RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
     
     LockHolder locker(*m_threadLock);
+    // We may be able to steal the conn. That only works if the collector is definitely not running
+    // right now. This is an optimization that prevents the collector thread from ever starting in most
+    // cases.
+    ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+    if (m_lastServedTicket == m_lastGrantedTicket) {
+        if (false)
+            dataLog("Taking the conn.\n");
+        m_worldState.exchangeOr(mutatorHasConnBit);
+    }
+    
     m_requests.append(scope);
     m_lastGrantedTicket++;
-    m_threadCondition->notifyOne(locker);
+    if (!(m_worldState.load() & mutatorHasConnBit))
+        m_threadCondition->notifyOne(locker);
     return m_lastGrantedTicket;
 }
 
 void Heap::waitForCollection(Ticket ticket)
 {
     waitForCollector(
-        [&] (const LockHolder&) -> bool {
+        [&] (const AbstractLocker&) -> bool {
             return m_lastServedTicket >= ticket;
         });
 }
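
requestCollection() and waitForCollection() form a ticket-based rendezvous: each request takes the next ticket, the collector advances lastServedTicket as cycles complete, and waiters block until their ticket is served. A condensed sketch of just the ticket bookkeeping (the conn-stealing branch above is deliberately omitted):

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>

using Ticket = uint64_t;

std::mutex ticketLock;
std::condition_variable served;
Ticket lastGranted = 0;
Ticket lastServed = 0;

Ticket requestCollection()
{
    std::lock_guard<std::mutex> locker(ticketLock);
    return ++lastGranted; // Hand out the next ticket.
}

void serveOne()
{
    {
        std::lock_guard<std::mutex> locker(ticketLock);
        ++lastServed; // A GC cycle finished.
    }
    served.notify_all();
}

void waitForCollection(Ticket ticket)
{
    std::unique_lock<std::mutex> locker(ticketLock);
    served.wait(locker, [&] { return lastServed >= ticket; });
}

int main()
{
    Ticket ticket = requestCollection();
    std::thread collector(serveOne);
    waitForCollection(ticket);
    collector.join();
    return 0;
}

Because tickets are monotonically increasing, several waiters can share one served counter: each wakes as soon as the counter passes its own ticket.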
@@ -1770,9 +2089,6 @@ void Heap::updateAllocationLimits()
     if (verbose)
         dataLog("extraMemorySize() = ", extraMemorySize(), ", currentHeapSize = ", currentHeapSize, "\n");
     
-    if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
-        HeapStatistics::exitWithFailure();
-
     if (m_collectionScope == CollectionScope::Full) {
         // To avoid pathological GC churn in very small and very large heaps, we set
         // the new allocation limit based on the current size of the heap, with a
@@ -1825,25 +2141,19 @@ void Heap::updateAllocationLimits()
         dataLog("=> ", currentHeapSize / 1024, "kb, ");
 }
 
-void Heap::didFinishCollection(double gcStartTime)
+void Heap::didFinishCollection()
 {
-    double gcEndTime = WTF::monotonicallyIncreasingTime();
+    m_afterGC = MonotonicTime::now();
     CollectionScope scope = *m_collectionScope;
     if (scope == CollectionScope::Full)
-        m_lastFullGCLength = gcEndTime - gcStartTime;
+        m_lastFullGCLength = m_afterGC - m_beforeGC;
     else
-        m_lastEdenGCLength = gcEndTime - gcStartTime;
+        m_lastEdenGCLength = m_afterGC - m_beforeGC;
 
 #if ENABLE(RESOURCE_USAGE)
     ASSERT(externalMemorySize() <= extraMemorySize());
 #endif
 
-    if (Options::recordGCPauseTimes())
-        HeapStatistics::recordGCPauseTime(gcStartTime, gcEndTime);
-
-    if (Options::dumpObjectStatistics())
-        HeapStatistics::dumpObjectStatistics(this);
-
     if (HeapProfiler* heapProfiler = m_vm->heapProfiler()) {
         gatherExtraHeapSnapshotData(*heapProfiler);
         removeDeadHeapSnapshotNodes(*heapProfiler);
@@ -2070,8 +2380,14 @@ void Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
 
     if (!m_isSafeToCollect)
         return;
-    if (mutatorState() == MutatorState::HelpingGC)
+    switch (mutatorState()) {
+    case MutatorState::Running:
+    case MutatorState::Allocating:
+        break;
+    case MutatorState::Sweeping:
+    case MutatorState::Collecting:
         return;
+    }
     if (!Options::useGC())
         return;
     
@@ -2080,11 +2396,8 @@ void Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
             deferralContext->m_shouldGC = true;
         else if (isDeferred())
             m_didDeferGCWork = true;
-        else {
+        else
             stopIfNecessary();
-            // FIXME: Check if the scheduler wants us to stop.
-            // https://bugs.webkit.org/show_bug.cgi?id=166827
-        }
     }
     
     if (UNLIKELY(Options::gcMaxHeapSize())) {
@@ -2099,8 +2412,10 @@ void Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
         deferralContext->m_shouldGC = true;
     else if (isDeferred())
         m_didDeferGCWork = true;
-    else
+    else {
         collectAsync();
+        stopIfNecessary(); // This will immediately start the collection if we have the conn.
+    }
 }
 
 void Heap::decrementDeferralDepthAndGCIfNeededSlow()
@@ -2303,6 +2618,12 @@ void Heap::addMarkingConstraint(std::unique_ptr<MarkingConstraint> constraint)
 
 void Heap::notifyIsSafeToCollect()
 {
+    MonotonicTime before;
+    if (Options::logGC()) {
+        before = MonotonicTime::now();
+        dataLog("[GC<", RawPointer(this), ">: starting ");
+    }
+    
     addCoreConstraints();
     
     m_isSafeToCollect = true;
@@ -2337,6 +2658,9 @@ void Heap::notifyIsSafeToCollect()
                 }
             });
     }
+    
+    if (Options::logGC())
+        dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n");
 }
 
 void Heap::preventCollection()
@@ -2349,7 +2673,7 @@ void Heap::preventCollection()
     
     // Wait for all collections to finish.
     waitForCollector(
-        [&] (const LockHolder&) -> bool {
+        [&] (const AbstractLocker&) -> bool {
             ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
             return m_lastServedTicket == m_lastGrantedTicket;
         });
@@ -2402,22 +2726,11 @@ void Heap::performIncrement(size_t bytes)
         return;
     targetBytes = std::min(targetBytes, Options::gcIncrementMaxBytes());
 
-    MonotonicTime before;
-    if (Options::logGC()) {
-        dataLog("[GC: increment t=", targetBytes / 1024, "kb ");
-        before = MonotonicTime::now();
-    }
-
     SlotVisitor& slotVisitor = *m_mutatorSlotVisitor;
     ParallelModeEnabler parallelModeEnabler(slotVisitor);
     size_t bytesVisited = slotVisitor.performIncrementOfDraining(static_cast<size_t>(targetBytes));
     // incrementBalance may go negative here because it'll remember how many bytes we overshot.
     m_incrementBalance -= bytesVisited;
-
-    if (Options::logGC()) {
-        MonotonicTime after = MonotonicTime::now();
-        dataLog("p=", (after - before).milliseconds(), "ms b=", m_incrementBalance / 1024, "kb]\n");
-    }
 }
 
 } // namespace JSC
index d3b7964..8b23b9e 100644
 #include "ArrayBuffer.h"
 #include "CellState.h"
 #include "CollectionScope.h"
+#include "CollectorPhase.h"
 #include "DeleteAllCodeEffort.h"
+#include "GCConductor.h"
 #include "GCIncomingRefCountedSet.h"
 #include "HandleSet.h"
 #include "HandleStack.h"
 #include "HeapObserver.h"
 #include "ListableHandler.h"
-#include "MachineStackMarker.h"
 #include "MarkedBlock.h"
 #include "MarkedBlockSet.h"
 #include "MarkedSpace.h"
@@ -53,6 +54,8 @@ namespace JSC {
 
 class CodeBlock;
 class CodeBlockSet;
+class CollectingScope;
+class ConservativeRoots;
 class GCDeferralContext;
 class EdenGCActivityCallback;
 class ExecutableBase;
@@ -62,23 +65,26 @@ class GCAwareJITStubRoutine;
 class Heap;
 class HeapProfiler;
 class HeapVerifier;
-class HelpingGCScope;
 class IncrementalSweeper;
 class JITStubRoutine;
 class JITStubRoutineSet;
 class JSCell;
 class JSValue;
 class LLIntOffsetsExtractor;
+class MachineThreads;
 class MarkStackArray;
 class MarkedAllocator;
 class MarkedArgumentBuffer;
 class MarkingConstraint;
 class MarkingConstraintSet;
 class MutatorScheduler;
+class RunningScope;
 class SlotVisitor;
 class SpaceTimeMutatorScheduler;
 class StopIfNecessaryTimer;
+class SweepingScope;
 class VM;
+struct CurrentThreadState;
 
 namespace DFG {
 class SpeculativeJIT;
@@ -131,7 +137,7 @@ public:
     VM* vm() const;
 
     MarkedSpace& objectSpace() { return m_objectSpace; }
-    MachineThreads& machineThreads() { return m_machineThreads; }
+    MachineThreads& machineThreads() { return *m_machineThreads; }
 
     SlotVisitor& collectorSlotVisitor() { return *m_collectorSlotVisitor; }
 
@@ -147,7 +153,6 @@ public:
     MutatorState mutatorState() const { return m_mutatorState; }
     std::optional<CollectionScope> collectionScope() const { return m_collectionScope; }
     bool hasHeapAccess() const;
-    bool mutatorIsStopped() const;
     bool collectorBelievesThatTheWorldIsStopped() const;
 
     // We're always busy on the collection threads. On the main thread, this returns true if we're
@@ -229,9 +234,9 @@ public:
     void willStartIterating();
     void didFinishIterating();
 
-    double lastFullGCLength() const { return m_lastFullGCLength; }
-    double lastEdenGCLength() const { return m_lastEdenGCLength; }
-    void increaseLastFullGCLength(double amount) { m_lastFullGCLength += amount; }
+    Seconds lastFullGCLength() const { return m_lastFullGCLength; }
+    Seconds lastEdenGCLength() const { return m_lastEdenGCLength; }
+    void increaseLastFullGCLength(Seconds amount) { m_lastFullGCLength += amount; }
 
     size_t sizeBeforeLastEdenCollection() const { return m_sizeBeforeLastEdenCollect; }
     size_t sizeAfterLastEdenCollection() const { return m_sizeAfterLastEdenCollect; }
@@ -319,6 +324,9 @@ public:
     // already be called for you at the right times.
     void stopIfNecessary();
     
+    // This gives the conn to the collector.
+    void relinquishConn();
+    
     bool mayNeedToStop();
 
     void performIncrement(size_t bytes);
@@ -343,10 +351,11 @@ public:
     CFRunLoopRef runLoop() const { return m_runLoop.get(); }
     JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
 #endif // USE(CF)
-
+    
 private:
     friend class AllocatingScope;
     friend class CodeBlock;
+    friend class CollectingScope;
     friend class DeferGC;
     friend class DeferGCForAWhile;
     friend class GCAwareJITStubRoutine;
@@ -355,15 +364,16 @@ private:
     friend class HandleSet;
     friend class HeapUtil;
     friend class HeapVerifier;
-    friend class HelpingGCScope;
     friend class JITStubRoutine;
     friend class LLIntOffsetsExtractor;
     friend class MarkedSpace;
     friend class MarkedAllocator;
     friend class MarkedBlock;
+    friend class RunningScope;
     friend class SlotVisitor;
     friend class SpaceTimeMutatorScheduler;
     friend class StochasticSpaceTimeMutatorScheduler;
+    friend class SweepingScope;
     friend class IncrementalSweeper;
     friend class HeapStatistics;
     friend class VM;
@@ -382,13 +392,35 @@ private:
     JS_EXPORT_PRIVATE void reportExtraMemoryAllocatedSlowCase(size_t);
     JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t);
     
-    bool shouldCollectInThread(const LockHolder&);
-    void collectInThread();
+    bool shouldCollectInCollectorThread(const AbstractLocker&);
+    void collectInCollectorThread();
+    
+    void checkConn(GCConductor);
+
+    enum class RunCurrentPhaseResult {
+        Finished,
+        Continue,
+        NeedCurrentThreadState
+    };
+    RunCurrentPhaseResult runCurrentPhase(GCConductor, CurrentThreadState*);
+    
+    // Returns true if we should keep doing things.
+    bool runNotRunningPhase(GCConductor);
+    bool runBeginPhase(GCConductor);
+    bool runFixpointPhase(GCConductor);
+    bool runConcurrentPhase(GCConductor);
+    bool runReloopPhase(GCConductor);
+    bool runEndPhase(GCConductor);
+    bool changePhase(GCConductor, CollectorPhase);
+    bool finishChangingPhase(GCConductor);
     
-    void stopTheWorld();
-    void resumeTheWorld();
+    void collectInMutatorThread();
     
-    void stopTheMutator();
+    void stopThePeriphery(GCConductor);
+    void resumeThePeriphery();
+    
+    // Returns true if the mutator is stopped, false if the mutator has the conn now.
+    bool stopTheMutator();
     void resumeTheMutator();
     
     void stopIfNecessarySlow();
@@ -401,17 +433,21 @@ private:
     JS_EXPORT_PRIVATE void releaseAccessSlow();
     
     bool handleGCDidJIT(unsigned);
-    bool handleNeedFinalize(unsigned);
     void handleGCDidJIT();
+    
+    bool handleNeedFinalize(unsigned);
     void handleNeedFinalize();
     
+    bool relinquishConn(unsigned);
+    void finishRelinquishingConn();
+    
     void setGCDidJIT();
     void setNeedFinalize();
     void waitWhileNeedFinalize();
     
     void setMutatorWaiting();
     void clearMutatorWaiting();
-    void notifyThreadStopping(const LockHolder&);
+    void notifyThreadStopping(const AbstractLocker&);
     
     typedef uint64_t Ticket;
     Ticket requestCollection(std::optional<CollectionScope>);
@@ -421,14 +457,13 @@ private:
     void willStartCollection(std::optional<CollectionScope>);
     void prepareForMarking();
     
-    void markToFixpoint(double gcStartTime);
     void gatherStackRoots(ConservativeRoots&);
     void gatherJSStackRoots(ConservativeRoots&);
     void gatherScratchBufferRoots(ConservativeRoots&);
     void beginMarking();
     void visitCompilerWorklistWeakReferences();
     void removeDeadCompilerWorklistEntries();
-    void updateObjectCounts(double gcStartTime);
+    void updateObjectCounts();
     void endMarking();
 
     void reapWeakHandles();
@@ -443,7 +478,7 @@ private:
     void deleteUnmarkedCompiledCode();
     JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*);
     void updateAllocationLimits();
-    void didFinishCollection(double gcStartTime);
+    void didFinishCollection();
     void resumeCompilerThreads();
     void gatherExtraHeapSnapshotData(HeapProfiler&);
     void removeDeadHeapSnapshotNodes(HeapProfiler&);
@@ -510,7 +545,7 @@ private:
     ProtectCountSet m_protectedValues;
     std::unique_ptr<HashSet<MarkedArgumentBuffer*>> m_markListSet;
 
-    MachineThreads m_machineThreads;
+    std::unique_ptr<MachineThreads> m_machineThreads;
     
     std::unique_ptr<SlotVisitor> m_collectorSlotVisitor;
     std::unique_ptr<SlotVisitor> m_mutatorSlotVisitor;
@@ -544,8 +579,8 @@ private:
     unsigned m_barrierThreshold { Options::forceFencedBarrier() ? tautologicalThreshold : blackThreshold };
 
     VM* m_vm;
-    double m_lastFullGCLength;
-    double m_lastEdenGCLength;
+    Seconds m_lastFullGCLength;
+    Seconds m_lastEdenGCLength;
 
     Vector<ExecutableBase*> m_executables;
 
@@ -601,19 +636,23 @@ private:
     
     std::unique_ptr<MutatorScheduler> m_scheduler;
     
-    static const unsigned shouldStopBit = 1u << 0u;
-    static const unsigned stoppedBit = 1u << 1u;
+    static const unsigned mutatorHasConnBit = 1u << 0u; // Must also be protected by threadLock.
+    static const unsigned stoppedBit = 1u << 1u; // Only set when !hasAccessBit.
     static const unsigned hasAccessBit = 1u << 2u;
     static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need to cpuid.
     static const unsigned needFinalizeBit = 1u << 4u;
     static const unsigned mutatorWaitingBit = 1u << 5u; // Allows the mutator to use this as a condition variable.
     Atomic<unsigned> m_worldState;
     bool m_collectorBelievesThatTheWorldIsStopped { false };
+    MonotonicTime m_beforeGC;
+    MonotonicTime m_afterGC;
     MonotonicTime m_stopTime;
     
     Deque<std::optional<CollectionScope>> m_requests;
     Ticket m_lastServedTicket { 0 };
     Ticket m_lastGrantedTicket { 0 };
+    CollectorPhase m_currentPhase { CollectorPhase::NotRunning };
+    CollectorPhase m_nextPhase { CollectorPhase::NotRunning };
     bool m_threadShouldStop { false };
     bool m_threadIsStopping { false };
     bool m_mutatorDidRun { true };
@@ -632,6 +671,8 @@ private:
     MonotonicTime m_currentGCStartTime;
     
     uintptr_t m_barriersExecuted { 0 };
+    
+    CurrentThreadState* m_currentThreadState { nullptr };
 };
 
 } // namespace JSC
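
With six flags packed into the atomic m_worldState word, it can help to decode states by hand when reading the RELEASE_ASSERTs earlier in this patch. A small standalone decoder (bit values copied from the constants above; the function itself is illustrative, not part of Heap):

#include <cstdio>

constexpr unsigned mutatorHasConnBit = 1u << 0;
constexpr unsigned stoppedBit        = 1u << 1;
constexpr unsigned hasAccessBit      = 1u << 2;
constexpr unsigned gcDidJITBit       = 1u << 3;
constexpr unsigned needFinalizeBit   = 1u << 4;
constexpr unsigned mutatorWaitingBit = 1u << 5;

// Print each flag of a world-state word as 0 or 1.
void dumpWorldState(unsigned state)
{
    std::printf("conn=%d stopped=%d access=%d didJIT=%d finalize=%d waiting=%d\n",
        !!(state & mutatorHasConnBit), !!(state & stoppedBit),
        !!(state & hasAccessBit), !!(state & gcDidJITBit),
        !!(state & needFinalizeBit), !!(state & mutatorWaitingBit));
}

int main()
{
    dumpWorldState(hasAccessBit | mutatorHasConnBit); // Mutator running and driving the GC.
    return 0;
}

Note the invariant in the comments above: stoppedBit is only ever set while hasAccessBit is clear, which is exactly what resumeTheMutator() asserts.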
index 6652f4f..21de028 100644
@@ -61,24 +61,6 @@ inline bool Heap::hasHeapAccess() const
     return m_worldState.load() & hasAccessBit;
 }
 
-inline bool Heap::mutatorIsStopped() const
-{
-    unsigned state = m_worldState.load();
-    bool shouldStop = state & shouldStopBit;
-    bool stopped = state & stoppedBit;
-    // I only got it right when I considered all four configurations of shouldStop/stopped:
-    // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we
-    //     should return false.
-    // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested
-    //     that we resume, so we should return false.
-    // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.
-    //     The mutator should be able to do whatever it wants in this state, as if we were not
-    //     stopped. So return false.
-    // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is
-    //     stopped, so return true.
-    return shouldStop & stopped;
-}
-
 inline bool Heap::collectorBelievesThatTheWorldIsStopped() const
 {
     return m_collectorBelievesThatTheWorldIsStopped;
diff --git a/Source/JavaScriptCore/heap/HeapStatistics.cpp b/Source/JavaScriptCore/heap/HeapStatistics.cpp
deleted file mode 100644
index 1afa098..0000000
--- a/Source/JavaScriptCore/heap/HeapStatistics.cpp
+++ /dev/null
@@ -1,251 +0,0 @@
-/*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
- * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
- * THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include "config.h"
-#include "HeapStatistics.h"
-
-#include "Heap.h"
-#include "HeapIterationScope.h"
-#include "JSCInlines.h"
-#include "JSObject.h"
-#include "MarkedSpaceInlines.h"
-#include "Options.h"
-#include <stdlib.h>
-#include <wtf/CurrentTime.h>
-#include <wtf/DataLog.h>
-#include <wtf/StdLibExtras.h>
-
-#if OS(UNIX)
-#include <sys/resource.h>
-#endif
-
-namespace JSC {
-
-double HeapStatistics::s_startTime = 0.0;
-double HeapStatistics::s_endTime = 0.0;
-Vector<double>* HeapStatistics::s_pauseTimeStarts = 0;
-Vector<double>* HeapStatistics::s_pauseTimeEnds = 0;
-
-#if OS(UNIX) 
-
-void HeapStatistics::initialize()
-{
-    ASSERT(Options::recordGCPauseTimes());
-    s_startTime = WTF::monotonicallyIncreasingTime();
-    s_pauseTimeStarts = new Vector<double>();
-    s_pauseTimeEnds = new Vector<double>();
-}
-
-void HeapStatistics::recordGCPauseTime(double start, double end)
-{
-    ASSERT(Options::recordGCPauseTimes());
-    ASSERT(s_pauseTimeStarts);
-    ASSERT(s_pauseTimeEnds);
-    s_pauseTimeStarts->append(start);
-    s_pauseTimeEnds->append(end);
-}
-
-void HeapStatistics::logStatistics()
-{
-    struct rusage usage;
-    getrusage(RUSAGE_SELF, &usage);
-#if USE(CF) || OS(UNIX)
-    char* vmName = getenv("JSVMName");
-    char* suiteName = getenv("JSSuiteName");
-    char* benchmarkName = getenv("JSBenchmarkName");
-#else
-#error "The HeapStatistics module is not supported on this platform."
-#endif
-    if (!vmName || !suiteName || !benchmarkName)
-        dataLogF("HeapStatistics: {\"max_rss\": %ld", usage.ru_maxrss);
-    else
-        dataLogF("HeapStatistics: {\"max_rss\": %ld, \"vm_name\": \"%s\", \"suite_name\": \"%s\", \"benchmark_name\": \"%s\"", 
-            usage.ru_maxrss, vmName, suiteName, benchmarkName); 
-
-    if (Options::recordGCPauseTimes()) {
-        dataLogF(", \"pause_times\": [");
-        Vector<double>::iterator startIt = s_pauseTimeStarts->begin();
-        Vector<double>::iterator endIt = s_pauseTimeEnds->begin();
-        if (startIt != s_pauseTimeStarts->end() && endIt != s_pauseTimeEnds->end()) {
-            dataLogF("[%f, %f]", *startIt, *endIt);
-            ++startIt;
-            ++endIt;
-        }
-        while (startIt != s_pauseTimeStarts->end() && endIt != s_pauseTimeEnds->end()) {
-            dataLogF(", [%f, %f]", *startIt, *endIt);
-            ++startIt;
-            ++endIt;
-        }
-        dataLogF("], \"start_time\": %f, \"end_time\": %f", s_startTime, s_endTime);
-    }
-    dataLogF("}\n");
-}
-
-void HeapStatistics::exitWithFailure()
-{
-    ASSERT(Options::logHeapStatisticsAtExit());
-    s_endTime = WTF::monotonicallyIncreasingTime();
-    logStatistics();
-    exit(-1);
-}
-
-void HeapStatistics::reportSuccess()
-{
-    ASSERT(Options::logHeapStatisticsAtExit());
-    s_endTime = WTF::monotonicallyIncreasingTime();
-    logStatistics();
-}
-
-#else
-
-void HeapStatistics::initialize()
-{
-}
-
-void HeapStatistics::recordGCPauseTime(double, double)
-{
-}
-
-void HeapStatistics::logStatistics()
-{
-}
-
-void HeapStatistics::exitWithFailure()
-{
-    exit(-1);
-}
-
-void HeapStatistics::reportSuccess()
-{
-}
-
-#endif // OS(UNIX)
-
-class StorageStatistics : public MarkedBlock::VoidFunctor {
-public:
-    StorageStatistics();
-
-    IterationStatus operator()(HeapCell*, HeapCell::Kind) const;
-
-    size_t objectWithOutOfLineStorageCount();
-    size_t objectCount();
-
-    size_t storageSize();
-    size_t storageCapacity();
-
-private:
-    void visit(JSCell*);
-
-    size_t m_objectWithOutOfLineStorageCount;
-    size_t m_objectCount;
-    size_t m_storageSize;
-    size_t m_storageCapacity;
-};
-
-inline StorageStatistics::StorageStatistics()
-    : m_objectWithOutOfLineStorageCount(0)
-    , m_objectCount(0)
-    , m_storageSize(0)
-    , m_storageCapacity(0)
-{
-}
-
-inline void StorageStatistics::visit(JSCell* cell)
-{
-    if (!cell->isObject())
-        return;
-
-    JSObject* object = jsCast<JSObject*>(cell);
-    if (hasIndexedProperties(object->indexingType()))
-        return;
-
-    if (object->structure()->isUncacheableDictionary())
-        return;
-
-    ++m_objectCount;
-    if (!object->hasInlineStorage())
-        ++m_objectWithOutOfLineStorageCount;
-    m_storageSize += object->structure()->totalStorageSize() * sizeof(WriteBarrierBase<Unknown>);
-    m_storageCapacity += object->structure()->totalStorageCapacity() * sizeof(WriteBarrierBase<Unknown>); 
-}
-
-inline IterationStatus StorageStatistics::operator()(HeapCell* cell, HeapCell::Kind kind) const
-{
-    if (kind == HeapCell::JSCell) {
-        // FIXME: This const_cast exists because this isn't a C++ lambda.
-        // https://bugs.webkit.org/show_bug.cgi?id=159644
-        const_cast<StorageStatistics*>(this)->visit(static_cast<JSCell*>(cell));
-    }
-    return IterationStatus::Continue;
-}
-
-inline size_t StorageStatistics::objectWithOutOfLineStorageCount()
-{
-    return m_objectWithOutOfLineStorageCount;
-}
-
-inline size_t StorageStatistics::objectCount()
-{
-    return m_objectCount;
-}
-
-inline size_t StorageStatistics::storageSize()
-{
-    return m_storageSize;
-}
-
-inline size_t StorageStatistics::storageCapacity()
-{
-    return m_storageCapacity;
-}
-
-void HeapStatistics::dumpObjectStatistics(Heap* heap)
-{
-    dataLogF("\n=== Heap Statistics: ===\n");
-    dataLogF("size: %ldkB\n", static_cast<long>(heap->m_sizeAfterLastCollect / KB));
-    dataLogF("capacity: %ldkB\n", static_cast<long>(heap->capacity() / KB));
-    dataLogF("pause time: %lfs\n\n", heap->m_lastFullGCLength);
-
-    StorageStatistics storageStatistics;
-    {
-        HeapIterationScope iterationScope(*heap);
-        heap->m_objectSpace.forEachLiveCell(iterationScope, storageStatistics);
-    }
-    long wastedPropertyStorageBytes = 0;
-    long wastedPropertyStoragePercent = 0;
-    long objectWithOutOfLineStorageCount = 0;
-    long objectsWithOutOfLineStoragePercent = 0;
-    if ((storageStatistics.storageCapacity() > 0) && (storageStatistics.objectCount() > 0)) {
-        wastedPropertyStorageBytes = static_cast<long>((storageStatistics.storageCapacity() - storageStatistics.storageSize()) / KB);
-        wastedPropertyStoragePercent = static_cast<long>(
-            (storageStatistics.storageCapacity() - storageStatistics.storageSize()) * 100 / storageStatistics.storageCapacity());
-        objectWithOutOfLineStorageCount = static_cast<long>(storageStatistics.objectWithOutOfLineStorageCount());
-        objectsWithOutOfLineStoragePercent = objectWithOutOfLineStorageCount * 100 / storageStatistics.objectCount();
-    }
-    dataLogF("wasted .property storage: %ldkB (%ld%%)\n", wastedPropertyStorageBytes, wastedPropertyStoragePercent);
-    dataLogF("objects with out-of-line .property storage: %ld (%ld%%)\n", objectWithOutOfLineStorageCount, objectsWithOutOfLineStoragePercent);
-}
-
-} // namespace JSC
index 6df607f..cfe89b8 100644
@@ -97,7 +97,7 @@ void IncrementalSweeper::startSweeping()
     m_currentAllocator = m_vm->heap.objectSpace().firstAllocator();
 }
 
-void IncrementalSweeper::willFinishSweeping()
+void IncrementalSweeper::stopSweeping()
 {
     m_currentAllocator = nullptr;
     if (m_vm)
index a84830b..34fa88c 100644
@@ -37,11 +37,11 @@ class IncrementalSweeper : public HeapTimer {
 public:
     JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*);
 
-    void startSweeping();
+    JS_EXPORT_PRIVATE void startSweeping();
 
     JS_EXPORT_PRIVATE void doWork() override;
     bool sweepNextBlock();
-    void willFinishSweeping();
+    JS_EXPORT_PRIVATE void stopSweeping();
 
 private:
     void doSweep(double startTime);
index e33f3b2..65eb0cf 100644
@@ -1,5 +1,5 @@
 /*
- *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
  *  Copyright (C) 2009 Acision BV. All rights reserved.
  *
@@ -313,6 +313,18 @@ void MachineThreads::removeThreadIfFound(PlatformThread platformThread)
     }
 }
 
+SUPPRESS_ASAN
+void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState& currentThreadState)
+{
+    if (currentThreadState.registerState) {
+        void* registersBegin = currentThreadState.registerState;
+        void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(currentThreadState.registerState + 1)));
+        conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks);
+    }
+
+    conservativeRoots.add(currentThreadState.stackTop, currentThreadState.stackOrigin, jitStubRoutines, codeBlocks);
+}
+
 MachineThreads::Thread::Thread(const PlatformThread& platThread, void* base, void* end)
     : platformThread(platThread)
     , stackBase(base)
@@ -1020,8 +1032,11 @@ static void growBuffer(size_t size, void** buffer, size_t* capacity)
     *buffer = fastMalloc(*capacity);
 }
 
-void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks)
+void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState* currentThreadState)
 {
+    if (currentThreadState)
+        gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, *currentThreadState);
+
     size_t size;
     size_t capacity = 0;
     void* buffer = nullptr;
@@ -1036,4 +1051,11 @@ void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoot
     fastFree(buffer);
 }
 
+NEVER_INLINE int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>& lambda)
+{
+    DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(state);
+    lambda(state);
+    return 42; // Suppress tail call optimization.
+}
+
 } // namespace JSC
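
callWithCurrentThreadState() returns a throwaway int so the compiler cannot tail-call the lambda: a tail call could recycle the very frame whose conservative roots the scan is about to walk. The same trick in generic form (a sketch; the noinline attribute is GCC/Clang-specific, and std::function stands in for WTF's ScopedLambda):

#include <cstdio>
#include <functional>

struct ThreadState {
    void* stackTop;
};

// Equivalent of NEVER_INLINE: we want a real frame whose locals stay live.
__attribute__((noinline))
int callWithState(const std::function<void(ThreadState&)>& lambda)
{
    ThreadState state;
    state.stackTop = &state; // Approximate the top of this frame.
    lambda(state);
    return 42; // Returning a value defeats tail-call optimization of lambda(state).
}

int main()
{
    callWithState([](ThreadState& state) {
        std::printf("scanning down from %p\n", state.stackTop);
    });
    return 0;
}

If the call to the lambda were a tail call, state would be dead before the lambda ran, and &state would no longer bound the live stack.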
index 07b40dc..a5a5087 100644
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Lesser General Public
 
 #pragma once
 
-#include <setjmp.h>
+#include "RegisterState.h"
 #include <wtf/Lock.h>
 #include <wtf/Noncopyable.h>
+#include <wtf/ScopedLambda.h>
 #include <wtf/ThreadSpecific.h>
 
 #if OS(DARWIN)
@@ -57,15 +58,19 @@ class ConservativeRoots;
 class Heap;
 class JITStubRoutineSet;
 
+struct CurrentThreadState {
+    void* stackOrigin { nullptr };
+    void* stackTop { nullptr };
+    RegisterState* registerState { nullptr };
+};
+    
 class MachineThreads {
     WTF_MAKE_NONCOPYABLE(MachineThreads);
 public:
-    typedef jmp_buf RegisterState;
-
     MachineThreads(Heap*);
     ~MachineThreads();
 
-    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&);
+    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState*);
 
     JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads.
 
@@ -145,6 +150,8 @@ public:
     Thread* machineThreadForCurrentThread();
 
 private:
+    void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState&);
+
     void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*);
     bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*);
 
@@ -161,24 +168,15 @@ private:
 #endif
 };
 
-} // namespace JSC
+#define DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(stateName) \
+    CurrentThreadState stateName; \
+    stateName.stackTop = &stateName; \
+    stateName.stackOrigin = wtfThreadData().stack().origin(); \
+    ALLOCATE_AND_GET_REGISTER_STATE(stateName ## _registerState); \
+    stateName.registerState = &stateName ## _registerState
 
-#if COMPILER(GCC_OR_CLANG)
-#define REGISTER_BUFFER_ALIGNMENT __attribute__ ((aligned (sizeof(void*))))
-#else
-#define REGISTER_BUFFER_ALIGNMENT
-#endif
+// The return value is meaningless. We just use it to suppress tail call optimization.
+int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>&);
+
+} // namespace JSC
 
-// ALLOCATE_AND_GET_REGISTER_STATE() is a macro so that it is always "inlined" even in debug builds.
-#if COMPILER(MSVC)
-#pragma warning(push)
-#pragma warning(disable: 4611)
-#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
-    MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \
-    setjmp(registers)
-#pragma warning(pop)
-#else
-#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
-    MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \
-    setjmp(registers)
-#endif
index 952db02..5ee544a 100644
@@ -216,6 +216,14 @@ void* MarkedAllocator::allocateSlowCaseImpl(GCDeferralContext* deferralContext,
 
     m_heap->collectIfNecessaryOrDefer(deferralContext);
     
+    // Goofy corner case: the GC called a callback and now this allocator has a currentBlock. This only
+    // happens when running WebKit tests, which inject a callback into the GC's finalization.
+    if (UNLIKELY(m_currentBlock)) {
+        if (crashOnFailure)
+            return allocate(deferralContext);
+        return tryAllocate(deferralContext);
+    }
+    
     void* result = tryAllocateWithoutCollecting();
     
     if (LIKELY(result != 0))
index 88c631d..3e4aca2 100644
 #include "config.h"
 #include "MarkedBlock.h"
 
-#include "HelpingGCScope.h"
 #include "JSCell.h"
 #include "JSDestructibleObject.h"
 #include "JSCInlines.h"
 #include "MarkedBlockInlines.h"
 #include "SuperSampler.h"
+#include "SweepingScope.h"
 
 namespace JSC {
 
@@ -409,8 +409,7 @@ Subspace* MarkedBlock::Handle::subspace() const
 
 FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode)
 {
-    // FIXME: Maybe HelpingGCScope should just be called SweepScope?
-    HelpingGCScope helpingGCScope(*heap());
+    SweepingScope sweepingScope(*heap());
     
     m_allocator->setIsUnswept(NoLockingNecessary, this, false);
     
index ab2bdc1..0dee44e 100644
@@ -225,7 +225,7 @@ void MarkedSpace::lastChanceToFinalize()
 
 void MarkedSpace::sweep()
 {
-    m_heap->sweeper()->willFinishSweeping();
+    m_heap->sweeper()->stopSweeping();
     forEachAllocator(
         [&] (MarkedAllocator& allocator) -> IterationStatus {
             allocator.sweep();
index 8368f43..5a90b7b 100644
@@ -41,8 +41,11 @@ void printInternal(PrintStream& out, MutatorState state)
     case MutatorState::Allocating:
         out.print("Allocating");
         return;
-    case MutatorState::HelpingGC:
-        out.print("HelpingGC");
+    case MutatorState::Sweeping:
+        out.print("Sweeping");
+        return;
+    case MutatorState::Collecting:
+        out.print("Collecting");
         return;
     }
     RELEASE_ASSERT_NOT_REACHED();
index 65b61f7..38a1981 100644
@@ -34,8 +34,11 @@ enum class MutatorState {
     // The mutator is in an allocation slow path.
     Allocating,
     
-    // The mutator was asked by the GC to do some work.
-    HelpingGC
+    // The mutator is sweeping.
+    Sweeping,
+    
+    // The mutator is collecting.
+    Collecting
 };
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/heap/RegisterState.h b/Source/JavaScriptCore/heap/RegisterState.h
new file mode 100644
index 0000000..6005a91
--- /dev/null
+++ b/Source/JavaScriptCore/heap/RegisterState.h
@@ -0,0 +1,158 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <setjmp.h>
+
+namespace JSC {
+
+#if !OS(WINDOWS)
+
+// ALLOCATE_AND_GET_REGISTER_STATE has to ensure that the GC sees callee-saves. It achieves this by
+// ensuring that the callee-saves are either spilled to the stack or saved in the RegisterState. The code
+// looks like it's achieving only the latter. However, it's possible that the compiler chooses to use
+// a callee-save for one of the caller's variables, which means that the value that we were interested in
+// got spilled. In that case, we will store something bogus into the RegisterState, and that's OK.
+
+#if CPU(X86)
+struct RegisterState {
+    uint32_t ebx;
+    uint32_t edi;
+    uint32_t esi;
+};
+
+#define SAVE_REG(regname, where) \
+    asm volatile ("movl %%" #regname ", %0" : "=m"(where) : : "memory")
+
+#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
+    RegisterState registers; \
+    SAVE_REG(ebx, registers.ebx); \
+    SAVE_REG(edi, registers.edi); \
+    SAVE_REG(esi, registers.esi)
+
+#elif CPU(X86_64)
+struct RegisterState {
+    uint64_t rbx;
+    uint64_t r12;
+    uint64_t r13;
+    uint64_t r14;
+    uint64_t r15;
+};
+
+#define SAVE_REG(regname, where) \
+    asm volatile ("movq %%" #regname ", %0" : "=m"(where) : : "memory")
+
+#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
+    RegisterState registers; \
+    SAVE_REG(rbx, registers.rbx); \
+    SAVE_REG(r12, registers.r12); \
+    SAVE_REG(r13, registers.r13); \
+    SAVE_REG(r14, registers.r14); \
+    SAVE_REG(r15, registers.r15)
+
+#elif CPU(ARM_THUMB2)
+struct RegisterState {
+    uint32_t r4;
+    uint32_t r5;
+    uint32_t r6;
+    uint32_t r8;
+    uint32_t r9;
+    uint32_t r10;
+    uint32_t r11;
+};
+
+#define SAVE_REG(regname, where) \
+    asm volatile ("str " #regname ", %0" : "=m"(where) : : "memory")
+
+#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
+    RegisterState registers; \
+    SAVE_REG(r4, registers.r4); \
+    SAVE_REG(r5, registers.r5); \
+    SAVE_REG(r6, registers.r6); \
+    SAVE_REG(r8, registers.r8); \
+    SAVE_REG(r9, registers.r9); \
+    SAVE_REG(r10, registers.r10); \
+    SAVE_REG(r11, registers.r11)
+
+#elif CPU(ARM64)
+struct RegisterState {
+    uint64_t x19;
+    uint64_t x20;
+    uint64_t x21;
+    uint64_t x22;
+    uint64_t x23;
+    uint64_t x24;
+    uint64_t x25;
+    uint64_t x26;
+    uint64_t x27;
+    uint64_t x28;
+};
+
+#define SAVE_REG(regname, where) \
+    asm volatile ("str " #regname ", %0" : "=m"(where) : : "memory")
+
+#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
+    RegisterState registers; \
+    SAVE_REG(x19, registers.x19); \
+    SAVE_REG(x20, registers.x20); \
+    SAVE_REG(x21, registers.x21); \
+    SAVE_REG(x22, registers.x22); \
+    SAVE_REG(x23, registers.x23); \
+    SAVE_REG(x24, registers.x24); \
+    SAVE_REG(x25, registers.x25); \
+    SAVE_REG(x26, registers.x26); \
+    SAVE_REG(x27, registers.x27); \
+    SAVE_REG(x28, registers.x28)
+
+#endif
+#endif // !OS(WINDOWS)
+
+#ifndef ALLOCATE_AND_GET_REGISTER_STATE
+#if COMPILER(GCC_OR_CLANG)
+#define REGISTER_BUFFER_ALIGNMENT __attribute__ ((aligned (sizeof(void*))))
+#else
+#define REGISTER_BUFFER_ALIGNMENT
+#endif
+
+typedef jmp_buf RegisterState;
+
+// ALLOCATE_AND_GET_REGISTER_STATE() is a macro so that it is always "inlined" even in debug builds.
+#if COMPILER(MSVC)
+#pragma warning(push)
+#pragma warning(disable: 4611)
+#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
+    RegisterState registers REGISTER_BUFFER_ALIGNMENT; \
+    setjmp(registers)
+#pragma warning(pop)
+#else
+#define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
+    RegisterState registers REGISTER_BUFFER_ALIGNMENT; \
+    setjmp(registers)
+#endif
+#endif // ALLOCATE_AND_GET_REGISTER_STATE
+
+} // namespace JSC
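
On platforms without a hand-written SAVE_REG path, the fallback still captures callee-save registers, because setjmp spills them into the jmp_buf. A minimal portable demonstration of that fallback idea (the real macro avoids a helper function so the capture stays in the caller's frame even in debug builds):

#include <csetjmp>
#include <cstdio>

int main()
{
    // setjmp spills enough machine state (including callee-save registers
    // on typical ABIs) into the jmp_buf that a conservative scanner walking
    // this buffer will see any heap pointer currently held in a register.
    jmp_buf registers;
    setjmp(registers);
    std::printf("captured %zu bytes of register state\n", sizeof(registers));
    return 0;
}

The inline-asm SAVE_REG variants above do the same job with less overhead and without the MSVC setjmp warning that the old MachineStackMarker code had to suppress.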
+
similarity index 89%
rename from Source/JavaScriptCore/heap/HelpingGCScope.h
rename to Source/JavaScriptCore/heap/RunningScope.h
index e8f4085..b89d03c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace JSC {
 
-class HelpingGCScope {
+class RunningScope {
 public:
-    HelpingGCScope(Heap& heap)
+    RunningScope(Heap& heap)
         : m_heap(heap)
         , m_oldState(m_heap.m_mutatorState)
     {
-        m_heap.m_mutatorState = MutatorState::HelpingGC;
+        m_heap.m_mutatorState = MutatorState::Running;
     }
     
-    ~HelpingGCScope()
+    ~RunningScope()
     {
         m_heap.m_mutatorState = m_oldState;
     }
index b511e6d..f0260fc 100644
@@ -38,6 +38,7 @@
 #include "JSString.h"
 #include "JSCInlines.h"
 #include "SlotVisitorInlines.h"
+#include "StopIfNecessaryTimer.h"
 #include "SuperSampler.h"
 #include "VM.h"
 #include <wtf/Lock.h>
@@ -75,12 +76,13 @@ static void validate(JSCell* cell)
 }
 #endif
 
-SlotVisitor::SlotVisitor(Heap& heap)
+SlotVisitor::SlotVisitor(Heap& heap, CString codeName)
     : m_bytesVisited(0)
     , m_visitCount(0)
     , m_isInParallelMode(false)
     , m_markingVersion(MarkedSpace::initialVersion)
     , m_heap(heap)
+    , m_codeName(codeName)
 #if !ASSERT_DISABLED
     , m_isCheckingForDefaultMarkViolation(false)
     , m_isDraining(false)
@@ -469,9 +471,12 @@ void SlotVisitor::optimizeForStoppedMutator()
     m_canOptimizeForStoppedMutator = true;
 }
 
-void SlotVisitor::drain(MonotonicTime timeout)
+NEVER_INLINE void SlotVisitor::drain(MonotonicTime timeout)
 {
-    RELEASE_ASSERT(m_isInParallelMode);
+    if (!m_isInParallelMode) {
+        dataLog("FATAL: attempting to drain when not in parallel mode.\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
     
     auto locker = holdLock(m_rightToRun);
     
@@ -581,7 +586,7 @@ bool SlotVisitor::hasWork(const AbstractLocker&)
         || !m_heap.m_sharedMutatorMarkStack->isEmpty();
 }
 
-SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)
+NEVER_INLINE SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)
 {
     ASSERT(m_isInParallelMode);
     
@@ -616,8 +621,22 @@ SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode shar
                 if (hasElapsed(timeout))
                     return SharedDrainResult::TimedOut;
                 
-                if (didReachTermination(locker))
+                if (didReachTermination(locker)) {
                     m_heap.m_markingConditionVariable.notifyAll();
+                    
+                    // If we're in concurrent mode, then we know that the mutator will eventually do
+                    // the right thing because:
+                    // - It's possible that the collector has the conn. In that case, the collector will
+                    //   wake up from the notification above. This will happen if the app released heap
+                    //   access. Native apps can spend a lot of time with heap access released.
+                    // - It's possible that the mutator will allocate soon. Then it will check if we
+                    //   reached termination. This is the most likely outcome in programs that allocate
+                    //   a lot.
+                    // - WebCore never releases access. But WebCore has a runloop. The runloop will check
+                    //   if we reached termination.
+                    // So, this tells the runloop that it's got things to do.
+                    m_heap.m_stopIfNecessaryTimer->scheduleSoon();
+                }
 
                 auto isReady = [&] () -> bool {
                     return hasWork(locker)
@@ -659,7 +678,9 @@ SlotVisitor::SharedDrainResult SlotVisitor::drainInParallelPassively(MonotonicTi
     
     ASSERT(Options::numberOfGCMarkers());
     
-    if (!m_heap.hasHeapAccess()
+    if (Options::numberOfGCMarkers() == 1
+        || (m_heap.m_worldState.load() & Heap::mutatorWaitingBit)
+        || !m_heap.hasHeapAccess()
         || m_heap.collectorBelievesThatTheWorldIsStopped()) {
         // This is an optimization over drainInParallel() when we have a concurrent mutator but
         // otherwise it is not profitable.
@@ -684,6 +705,9 @@ SlotVisitor::SharedDrainResult SlotVisitor::drainInParallelPassively(MonotonicTi
 
 void SlotVisitor::donateAll()
 {
+    if (isEmpty())
+        return;
+    
     donateAll(holdLock(m_heap.m_markingMutex));
 }
 
@@ -755,7 +779,11 @@ void SlotVisitor::mergeOpaqueRootsIfProfitable()
     
 void SlotVisitor::donate()
 {
-    ASSERT(m_isInParallelMode);
+    if (!m_isInParallelMode) {
+        dataLog("FATAL: Attempting to donate when not in parallel mode.\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+    
     if (Options::numberOfGCMarkers() == 1)
         return;
     
index b75a794..83479af 100644
@@ -56,7 +56,7 @@ class SlotVisitor {
     friend class Heap;
 
 public:
-    SlotVisitor(Heap&);
+    SlotVisitor(Heap&, CString codeName);
     ~SlotVisitor();
 
     MarkStackArray& collectorMarkStack() { return m_collectorStack; }
@@ -167,6 +167,8 @@ public:
     void setIgnoreNewOpaqueRoots(bool value) { m_ignoreNewOpaqueRoots = value; }
 
     void donateAll();
+    
+    const char* codeName() const { return m_codeName.data(); }
 
 private:
     friend class ParallelModeEnabler;
@@ -228,6 +230,8 @@ private:
     bool m_canOptimizeForStoppedMutator { false };
     Lock m_rightToRun;
     
+    CString m_codeName;
+    
 public:
 #if !ASSERT_DISABLED
     bool m_isCheckingForDefaultMarkViolation;
index dbd3130..158355a 100644
@@ -76,6 +76,10 @@ void StochasticSpaceTimeMutatorScheduler::beginCollection()
     m_bytesAllocatedThisCycleAtTheEnd = 
         Options::concurrentGCMaxHeadroom() *
         std::max<double>(m_bytesAllocatedThisCycleAtTheBeginning, m_heap.m_maxEdenSize);
+    
+    if (Options::logGC())
+        dataLog("ca=", m_bytesAllocatedThisCycleAtTheBeginning / 1024, "kb h=", (m_bytesAllocatedThisCycleAtTheEnd - m_bytesAllocatedThisCycleAtTheBeginning) / 1024, "kb ");
+    
     m_beforeConstraints = MonotonicTime::now();
 }
 
@@ -117,6 +121,11 @@ void StochasticSpaceTimeMutatorScheduler::synchronousDrainingDidStall()
     Snapshot snapshot(*this);
     
     double resumeProbability = mutatorUtilization(snapshot);
+    if (resumeProbability < Options::epsilonMutatorUtilization()) {
+        m_plannedResumeTime = MonotonicTime::infinity();
+        return;
+    }
+    
     bool shouldResume = m_random.get() < resumeProbability;
     
     if (shouldResume) {
@@ -135,8 +144,10 @@ MonotonicTime StochasticSpaceTimeMutatorScheduler::timeToStop()
     case Stopped:
         return MonotonicTime::now();
     case Resumed: {
-        // Once we're running, we keep going.
-        // FIXME: Maybe force stop when we run out of headroom?
+        // Once we're running, we keep going unless we run out of headroom.
+        Snapshot snapshot(*this);
+        if (mutatorUtilization(snapshot) < Options::epsilonMutatorUtilization())
+            return MonotonicTime::now();
         return MonotonicTime::infinity();
     } }
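
The scheduler policy that emerges from these two hunks reads as: resume the mutator with probability equal to its measured utilization, and never resume once utilization falls below a small epsilon, i.e. once the cycle has run out of headroom. A sketch of just that decision rule (epsilonMutatorUtilization here is an illustrative constant standing in for the Options value):

#include <cstdio>
#include <initializer_list>
#include <random>

// Illustrative stand-in for Options::epsilonMutatorUtilization().
constexpr double epsilonMutatorUtilization = 0.05;

// Decide whether to resume the mutator given its measured utilization.
// Returns false unconditionally when out of headroom.
bool shouldResume(double utilization, std::mt19937& rng)
{
    if (utilization < epsilonMutatorUtilization)
        return false; // No headroom: stay stopped until the cycle ends.
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    return coin(rng) < utilization; // Resume proportionally to utilization.
}

int main()
{
    std::mt19937 rng(42);
    for (double u : { 0.9, 0.5, 0.01 })
        std::printf("utilization=%.2f resume=%d\n", u, shouldResume(u, rng));
    return 0;
}

Sampling the coin against utilization means a heavily loaded mutator almost always resumes, while one starving the collector of headroom is stopped with high probability.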
     
diff --git a/Source/JavaScriptCore/heap/SweepingScope.h b/Source/JavaScriptCore/heap/SweepingScope.h
new file mode 100644
index 0000000..a3f6862
--- /dev/null
+++ b/Source/JavaScriptCore/heap/SweepingScope.h
@@ -0,0 +1,52 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "Heap.h"
+
+namespace JSC {
+
+class SweepingScope {
+public:
+    SweepingScope(Heap& heap)
+        : m_heap(heap)
+        , m_oldState(m_heap.m_mutatorState)
+    {
+        m_heap.m_mutatorState = MutatorState::Sweeping;
+    }
+    
+    ~SweepingScope()
+    {
+        m_heap.m_mutatorState = m_oldState;
+    }
+
+private:
+    Heap& m_heap;
+    MutatorState m_oldState;
+};
+
+} // namespace JSC
+
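SweepingScope is an RAII scope in the same family as the heap's other mutator-state scopes: it saves the current MutatorState, publishes MutatorState::Sweeping for the duration of a lexical scope, and restores the saved state on exit. A hedged usage sketch (the sweep body is illustrative, not taken from this patch):

    // Illustrative only: bracketing a sweep so assertions such as the one in
    // JSCell::classInfo() can distinguish sweeping from normal execution.
    #include "SweepingScope.h"

    void sweepSomething(JSC::Heap& heap)
    {
        JSC::SweepingScope sweepingScope(heap); // mutatorState() is now Sweeping
        // ... sweep dead cells; destructors may run here ...
    } // previous MutatorState restored when the scope ends
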
index a013e1a..f645965 100644
@@ -99,7 +99,7 @@ private:
 
 class JITWorklist::Thread : public AutomaticThread {
 public:
-    Thread(const LockHolder& locker, JITWorklist& worklist)
+    Thread(const AbstractLocker& locker, JITWorklist& worklist)
         : AutomaticThread(locker, worklist.m_lock, worklist.m_condition)
         , m_worklist(worklist)
     {
@@ -107,7 +107,7 @@ public:
     }
     
 protected:
-    PollResult poll(const LockHolder&) override
+    PollResult poll(const AbstractLocker&) override
     {
         RELEASE_ASSERT(m_worklist.m_numAvailableThreads);
         
index be3fea9..aedf42e 100644
@@ -38,7 +38,6 @@
 #include "GetterSetter.h"
 #include "HeapProfiler.h"
 #include "HeapSnapshotBuilder.h"
-#include "HeapStatistics.h"
 #include "InitializeThreading.h"
 #include "Interpreter.h"
 #include "JIT.h"
@@ -1084,6 +1083,7 @@ static EncodedJSValue JSC_HOST_CALL functionDollarAgentGetReport(ExecState*);
 static EncodedJSValue JSC_HOST_CALL functionDollarAgentLeaving(ExecState*);
 static EncodedJSValue JSC_HOST_CALL functionWaitForReport(ExecState*);
 static EncodedJSValue JSC_HOST_CALL functionHeapCapacity(ExecState*);
+static EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState*);
 
 struct Script {
     enum class StrictMode {
@@ -1366,6 +1366,7 @@ protected:
         addFunction(vm, "waitForReport", functionWaitForReport, 0);
 
         addFunction(vm, "heapCapacity", functionHeapCapacity, 0);
+        addFunction(vm, "flashHeapAccess", functionFlashHeapAccess, 0);
     }
     
     void addFunction(VM& vm, JSObject* object, const char* name, NativeFunction function, unsigned arguments)
@@ -2648,6 +2649,21 @@ EncodedJSValue JSC_HOST_CALL functionHeapCapacity(ExecState* exec)
     return JSValue::encode(jsNumber(vm.heap.capacity()));
 }
 
+EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState* exec)
+{
+    VM& vm = exec->vm();
+    auto scope = DECLARE_THROW_SCOPE(vm);
+    
+    vm.heap.releaseAccess();
+    if (exec->argumentCount() >= 1) {
+        double ms = exec->argument(0).toNumber(exec);
+        RETURN_IF_EXCEPTION(scope, encodedJSValue());
+        sleep(Seconds::fromMilliseconds(ms));
+    }
+    vm.heap.acquireAccess();
+    return JSValue::encode(jsUndefined());
+}
+
 template<typename ValueType>
 typename std::enable_if<!std::is_fundamental<ValueType>::value>::type addOption(VM&, JSObject*, Identifier, ValueType) { }
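
flashHeapAccess() lets stress tests simulate an embedder that periodically gives up heap access, which after this change is the main condition under which the collector thread actually starts. A hypothetical embedder-side sketch of the same release/sleep/acquire pattern, assuming JSC's VM and Heap headers:

    // Hypothetical embedder idle loop; the Heap methods are the ones used by
    // functionFlashHeapAccess() above. Dropping heap access lets a concurrent
    // collection proceed on the collector thread instead of the mutator.
    void idleFor(JSC::VM& vm, Seconds duration)
    {
        vm.heap.releaseAccess();  // the collector thread may start in this window
        sleep(duration);          // WTF::sleep, as above
        vm.heap.acquireAccess();  // waits until it is safe to touch the heap again
    }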
 
index f72da99..47ddd61 100644
@@ -31,7 +31,6 @@
 
 #include "ExecutableAllocator.h"
 #include "Heap.h"
-#include "HeapStatistics.h"
 #include "Identifier.h"
 #include "JSDateMath.h"
 #include "JSGlobalObject.h"
@@ -60,8 +59,6 @@ void initializeThreading()
         WTF::initializeThreading();
         WTF::initializeGCThreads();
         Options::initialize();
-        if (Options::recordGCPauseTimes())
-            HeapStatistics::initialize();
 #if ENABLE(WRITE_BARRIER_PROFILING)
         WriteBarrierCounters::initialize();
 #endif
index 775f93a..38a2ecf 100644
@@ -280,7 +280,7 @@ ALWAYS_INLINE const ClassInfo* JSCell::classInfo(VM& vm) const
     // destructing the object. The GC thread or JIT threads, unlike the mutator thread, are able to access classInfo
     // independent of whether the mutator thread is sweeping or not. Hence, we also check for ownerThread() !=
     // std::this_thread::get_id() to allow the GC thread or JIT threads to pass this assertion.
-    ASSERT(vm.heap.mutatorState() == MutatorState::Running || vm.apiLock().ownerThread() != std::this_thread::get_id());
+    ASSERT(vm.heap.mutatorState() != MutatorState::Sweeping || vm.apiLock().ownerThread() != std::this_thread::get_id());
     return structure(vm)->classInfo();
 }
 
index bad5807..f1c2f13 100644
@@ -317,7 +317,11 @@ static void overrideDefaults()
         Options::maximumMutatorUtilization() = 0.6;
         Options::concurrentGCMaxHeadroom() = 1.4;
         Options::minimumGCPauseMS() = 1;
-        Options::gcIncrementScale() = 1;
+        Options::useStochasticMutatorScheduler() = false;
+        if (WTF::numberOfProcessorCores() <= 1)
+            Options::gcIncrementScale() = 1;
+        else
+            Options::gcIncrementScale() = 0;
     }
 }
 
index bbd7096..77e98eb 100644
@@ -200,6 +200,7 @@ typedef const char* optionString;
     v(double, largeHeapGrowthFactor, 1.24, Normal, nullptr) \
     v(double, minimumMutatorUtilization, 0, Normal, nullptr) \
     v(double, maximumMutatorUtilization, 0.7, Normal, nullptr) \
+    v(double, epsilonMutatorUtilization, 0.01, Normal, nullptr) \
     v(double, concurrentGCMaxHeadroom, 1.5, Normal, nullptr) \
     v(double, concurrentGCPeriodMS, 2, Normal, nullptr) \
     v(bool, useStochasticMutatorScheduler, true, Normal, nullptr) \
@@ -344,7 +345,6 @@ typedef const char* optionString;
     v(bool, useZombieMode, false, Normal, "debugging option to scribble over dead objects with 0xbadbeef0") \
     v(bool, useImmortalObjects, false, Normal, "debugging option to keep all objects alive forever") \
     v(bool, sweepSynchronously, false, Normal, "debugging option to sweep all dead objects synchronously at GC end before resuming mutator") \
-    v(bool, dumpObjectStatistics, false, Normal, nullptr) \
     v(unsigned, maxSingleAllocationSize, 0, Configurable, "debugging option to limit individual allocations to a max size (0 = limit not set, N = limit size in bytes)") \
     \
     v(gcLogLevel, logGC, GCLogging::None, Normal, "debugging option to log GC activity (0 = None, 1 = Basic, 2 = Verbose)") \
index edbb0e4..8b85f8e 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,7 +28,6 @@
 
 #include "CodeBlock.h"
 #include "FunctionCodeBlock.h"
-#include "HeapStatistics.h"
 #include "JSCInlines.h"
 #include "LLIntData.h"
 
@@ -160,8 +159,6 @@ JSValue optimizeNextInvocation(ExecState* exec)
 // This is a hook called at the bitter end of some of our tests.
 void finalizeStatsAtEndOfTesting()
 {
-    if (Options::logHeapStatisticsAtExit())
-        HeapStatistics::reportSuccess();
     if (Options::reportLLIntStats())
         LLInt::Data::finalizeStats();
 }
index 6aa713d..0c55163 100644
@@ -1,3 +1,36 @@
+2017-02-20  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+        
+        Extend the use of AbstractLocker so that these APIs work with any
+        locking idiom, not just LockHolder.
+
+        * wtf/AutomaticThread.cpp:
+        (WTF::AutomaticThreadCondition::notifyOne):
+        (WTF::AutomaticThreadCondition::notifyAll):
+        (WTF::AutomaticThreadCondition::add):
+        (WTF::AutomaticThreadCondition::remove):
+        (WTF::AutomaticThreadCondition::contains):
+        (WTF::AutomaticThread::AutomaticThread):
+        (WTF::AutomaticThread::tryStop):
+        (WTF::AutomaticThread::isWaiting):
+        (WTF::AutomaticThread::notify):
+        (WTF::AutomaticThread::start):
+        (WTF::AutomaticThread::threadIsStopping):
+        * wtf/AutomaticThread.h:
+        * wtf/NumberOfCores.cpp:
+        (WTF::numberOfProcessorCores):
+        * wtf/ParallelHelperPool.cpp:
+        (WTF::ParallelHelperClient::finish):
+        (WTF::ParallelHelperClient::claimTask):
+        (WTF::ParallelHelperPool::Thread::Thread):
+        (WTF::ParallelHelperPool::didMakeWorkAvailable):
+        (WTF::ParallelHelperPool::hasClientWithTask):
+        (WTF::ParallelHelperPool::getClientWithTask):
+        * wtf/ParallelHelperPool.h:
+
 2017-02-21  John Wilander  <wilander@apple.com>
 
         Resource Load Statistics: Add alternate classification method
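
The point of the AbstractLocker change: functions that previously demanded a const LockHolder& can now accept any locker type that proves a lock is held. A minimal sketch of the idiom, assuming WTF-style locking types (LockHolder is an AbstractLocker subclass):

    #include <wtf/Lock.h>
    #include <wtf/Locker.h>

    // The callee only cares that *some* lock is held, not which RAII holder
    // type the caller used to take it.
    void notifyUnderLock(const WTF::AbstractLocker&)
    {
        // ... state change guarded by the caller's lock ...
    }

    void example(WTF::Lock& lock)
    {
        WTF::LockHolder locker(lock); // satisfies const AbstractLocker&
        notifyUnderLock(locker);
    }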
index 4d8fb84..387c6e2 100644
@@ -45,7 +45,7 @@ AutomaticThreadCondition::~AutomaticThreadCondition()
 {
 }
 
-void AutomaticThreadCondition::notifyOne(const LockHolder& locker)
+void AutomaticThreadCondition::notifyOne(const AbstractLocker& locker)
 {
     for (AutomaticThread* thread : m_threads) {
         if (thread->isWaiting(locker)) {
@@ -64,7 +64,7 @@ void AutomaticThreadCondition::notifyOne(const LockHolder& locker)
     m_condition.notifyOne();
 }
 
-void AutomaticThreadCondition::notifyAll(const LockHolder& locker)
+void AutomaticThreadCondition::notifyAll(const AbstractLocker& locker)
 {
     m_condition.notifyAll();
 
@@ -81,24 +81,24 @@ void AutomaticThreadCondition::wait(Lock& lock)
     m_condition.wait(lock);
 }
 
-void AutomaticThreadCondition::add(const LockHolder&, AutomaticThread* thread)
+void AutomaticThreadCondition::add(const AbstractLocker&, AutomaticThread* thread)
 {
     ASSERT(!m_threads.contains(thread));
     m_threads.append(thread);
 }
 
-void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread)
+void AutomaticThreadCondition::remove(const AbstractLocker&, AutomaticThread* thread)
 {
     m_threads.removeFirst(thread);
     ASSERT(!m_threads.contains(thread));
 }
 
-bool AutomaticThreadCondition::contains(const LockHolder&, AutomaticThread* thread)
+bool AutomaticThreadCondition::contains(const AbstractLocker&, AutomaticThread* thread)
 {
     return m_threads.contains(thread);
 }
 
-AutomaticThread::AutomaticThread(const LockHolder& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)
+AutomaticThread::AutomaticThread(const AbstractLocker& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)
     : m_lock(lock)
     , m_condition(condition)
 {
@@ -118,7 +118,7 @@ AutomaticThread::~AutomaticThread()
     m_condition->remove(locker, this);
 }
 
-bool AutomaticThread::tryStop(const LockHolder&)
+bool AutomaticThread::tryStop(const AbstractLocker&)
 {
     if (!m_isRunning)
         return true;
@@ -128,12 +128,12 @@ bool AutomaticThread::tryStop(const LockHolder&)
     return true;
 }
 
-bool AutomaticThread::isWaiting(const LockHolder& locker)
+bool AutomaticThread::isWaiting(const AbstractLocker& locker)
 {
     return hasUnderlyingThread(locker) && m_isWaiting;
 }
 
-bool AutomaticThread::notify(const LockHolder& locker)
+bool AutomaticThread::notify(const AbstractLocker& locker)
 {
     ASSERT_UNUSED(locker, hasUnderlyingThread(locker));
     m_isWaiting = false;
@@ -147,7 +147,7 @@ void AutomaticThread::join()
         m_isRunningCondition.wait(*m_lock);
 }
 
-void AutomaticThread::start(const LockHolder&)
+void AutomaticThread::start(const AbstractLocker&)
 {
     RELEASE_ASSERT(m_isRunning);
     
@@ -169,18 +169,18 @@ void AutomaticThread::start(const LockHolder&)
                 ASSERT(m_condition->contains(locker, this));
             }
             
-            auto stopImpl = [&] (const LockHolder& locker) {
+            auto stopImpl = [&] (const AbstractLocker& locker) {
                 thread->threadIsStopping(locker);
                 thread->m_hasUnderlyingThread = false;
             };
             
-            auto stopPermanently = [&] (const LockHolder& locker) {
+            auto stopPermanently = [&] (const AbstractLocker& locker) {
                 m_isRunning = false;
                 m_isRunningCondition.notifyAll();
                 stopImpl(locker);
             };
             
-            auto stopForTimeout = [&] (const LockHolder& locker) {
+            auto stopForTimeout = [&] (const AbstractLocker& locker) {
                 stopImpl(locker);
             };
             
@@ -227,7 +227,7 @@ void AutomaticThread::threadDidStart()
 {
 }
 
-void AutomaticThread::threadIsStopping(const LockHolder&)
+void AutomaticThread::threadIsStopping(const AbstractLocker&)
 {
 }
 
index 6926f62..ec6ba43 100644
@@ -75,8 +75,8 @@ public:
     
     WTF_EXPORT_PRIVATE ~AutomaticThreadCondition();
     
-    WTF_EXPORT_PRIVATE void notifyOne(const LockHolder&);
-    WTF_EXPORT_PRIVATE void notifyAll(const LockHolder&);
+    WTF_EXPORT_PRIVATE void notifyOne(const AbstractLocker&);
+    WTF_EXPORT_PRIVATE void notifyAll(const AbstractLocker&);
     
     // You can reuse this condition for other things, just as you would any other condition.
     // However, since conflating conditions could lead to thundering herd, it's best to avoid it.
@@ -90,9 +90,9 @@ private:
     
     WTF_EXPORT_PRIVATE AutomaticThreadCondition();
 
-    void add(const LockHolder&, AutomaticThread*);
-    void remove(const LockHolder&, AutomaticThread*);
-    bool contains(const LockHolder&, AutomaticThread*);
+    void add(const AbstractLocker&, AutomaticThread*);
+    void remove(const AbstractLocker&, AutomaticThread*);
+    bool contains(const AbstractLocker&, AutomaticThread*);
     
     Condition m_condition;
     Vector<AutomaticThread*> m_threads;
@@ -113,24 +113,24 @@ public:
     virtual ~AutomaticThread();
     
     // Sometimes it's possible to optimize for the case that there is no underlying thread.
-    bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; }
+    bool hasUnderlyingThread(const AbstractLocker&) const { return m_hasUnderlyingThread; }
     
     // This attempts to quickly stop the thread. This will succeed if the thread happens to not be
     // running. Returns true if the thread has been stopped. A good idiom for stopping your automatic
     // thread is to first try this, and if that doesn't work, to tell the thread using your own
     // mechanism (set some flag and then notify the condition).
-    bool tryStop(const LockHolder&);
+    bool tryStop(const AbstractLocker&);
 
-    bool isWaiting(const LockHolder&);
+    bool isWaiting(const AbstractLocker&);
 
-    bool notify(const LockHolder&);
+    bool notify(const AbstractLocker&);
 
     void join();
     
 protected:
     // This logically creates the thread, but in reality the thread won't be created until someone
     // calls AutomaticThreadCondition::notifyOne() or notifyAll().
-    AutomaticThread(const LockHolder&, Box<Lock>, RefPtr<AutomaticThreadCondition>);
+    AutomaticThread(const AbstractLocker&, Box<Lock>, RefPtr<AutomaticThreadCondition>);
     
     // To understand PollResult and WorkResult, imagine that poll() and work() are being called like
     // so:
@@ -159,7 +159,7 @@ protected:
     // }
     
     enum class PollResult { Work, Stop, Wait };
-    virtual PollResult poll(const LockHolder&) = 0;
+    virtual PollResult poll(const AbstractLocker&) = 0;
     
     enum class WorkResult { Continue, Stop };
     virtual WorkResult work() = 0;
@@ -168,12 +168,12 @@ protected:
     // when the thread dies. These methods let you do this. You can override these methods, and you
     // can be sure that the default ones don't do anything (so you don't need a super call).
     virtual void threadDidStart();
-    virtual void threadIsStopping(const LockHolder&);
+    virtual void threadIsStopping(const AbstractLocker&);
     
 private:
     friend class AutomaticThreadCondition;
     
-    void start(const LockHolder&);
+    void start(const AbstractLocker&);
     
     Box<Lock> m_lock;
     RefPtr<AutomaticThreadCondition> m_condition;
index 5805446..e657bb0 100644
@@ -47,6 +47,15 @@ int numberOfProcessorCores()
 
     if (s_numberOfCores > 0)
         return s_numberOfCores;
+    
+    if (const char* coresEnv = getenv("WTF_numberOfProcessorCores")) {
+        unsigned numberOfCores;
+        if (sscanf(coresEnv, "%u", &numberOfCores) == 1) {
+            s_numberOfCores = numberOfCores;
+            return s_numberOfCores;
+        } else
+            fprintf(stderr, "WARNING: failed to parse WTF_numberOfProcessorCores=%s\n", coresEnv);
+    }
 
 #if OS(DARWIN)
     unsigned result;
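
The WTF_numberOfProcessorCores override exists so harnesses can pin the core count before it is computed and cached, which in turn selects core-count-dependent defaults such as the gcIncrementScale choice above. A hypothetical test-driver sketch (POSIX setenv):

    #include <stdlib.h>

    int main()
    {
        // Must run before the first numberOfProcessorCores() call, since the
        // result is cached in s_numberOfCores.
        setenv("WTF_numberOfProcessorCores", "1", 1 /* overwrite */);
        // ... initialize WTF/JSC and run the workload under test ...
        return 0;
    }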
index 9492fdd..8d0f9e4 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -88,14 +88,14 @@ void ParallelHelperClient::runTaskInParallel(RefPtr<SharedTask<void ()>> task)
     finish();
 }
 
-void ParallelHelperClient::finish(const LockHolder&)
+void ParallelHelperClient::finish(const AbstractLocker&)
 {
     m_task = nullptr;
     while (m_numActive)
         m_pool->m_workCompleteCondition.wait(*m_pool->m_lock);
 }
 
-RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const LockHolder&)
+RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const AbstractLocker&)
 {
     if (!m_task)
         return nullptr;
@@ -170,14 +170,14 @@ void ParallelHelperPool::doSomeHelping()
 
 class ParallelHelperPool::Thread : public AutomaticThread {
 public:
-    Thread(const LockHolder& locker, ParallelHelperPool& pool)
+    Thread(const AbstractLocker& locker, ParallelHelperPool& pool)
         : AutomaticThread(locker, pool.m_lock, pool.m_workAvailableCondition)
         , m_pool(pool)
     {
     }
     
 protected:
-    PollResult poll(const LockHolder& locker) override
+    PollResult poll(const AbstractLocker& locker) override
     {
         if (m_pool.m_isDying)
             return PollResult::Stop;
@@ -203,19 +203,19 @@ private:
     RefPtr<SharedTask<void ()>> m_task;
 };
 
-void ParallelHelperPool::didMakeWorkAvailable(const LockHolder& locker)
+void ParallelHelperPool::didMakeWorkAvailable(const AbstractLocker& locker)
 {
     while (m_numThreads > m_threads.size())
         m_threads.append(adoptRef(new Thread(locker, *this)));
     m_workAvailableCondition->notifyAll(locker);
 }
 
-bool ParallelHelperPool::hasClientWithTask(const LockHolder& locker)
+bool ParallelHelperPool::hasClientWithTask(const AbstractLocker& locker)
 {
     return !!getClientWithTask(locker);
 }
 
-ParallelHelperClient* ParallelHelperPool::getClientWithTask(const LockHolder&)
+ParallelHelperClient* ParallelHelperPool::getClientWithTask(const AbstractLocker&)
 {
     // We load-balance by being random.
     unsigned startIndex = m_random.getUint32(m_clients.size());
index 81635bb..21859cb 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -168,8 +168,8 @@ public:
 private:
     friend class ParallelHelperPool;
 
-    void finish(const LockHolder&);
-    RefPtr<SharedTask<void ()>> claimTask(const LockHolder&);
+    void finish(const AbstractLocker&);
+    RefPtr<SharedTask<void ()>> claimTask(const AbstractLocker&);
     void runTask(RefPtr<SharedTask<void ()>>);
     
     RefPtr<ParallelHelperPool> m_pool;
@@ -193,11 +193,11 @@ private:
     class Thread;
     friend class Thread;
 
-    void didMakeWorkAvailable(const LockHolder&);
+    void didMakeWorkAvailable(const AbstractLocker&);
 
-    bool hasClientWithTask(const LockHolder&);
-    ParallelHelperClient* getClientWithTask(const LockHolder&);
-    ParallelHelperClient* waitForClientWithTask(const LockHolder&);
+    bool hasClientWithTask(const AbstractLocker&);
+    ParallelHelperClient* getClientWithTask(const AbstractLocker&);
+    ParallelHelperClient* waitForClientWithTask(const AbstractLocker&);
     
     Box<Lock> m_lock; // AutomaticThread wants this in a box for safety.
     RefPtr<AutomaticThreadCondition> m_workAvailableCondition;
index 96b3bb5..8e463ee 100644
@@ -1,3 +1,33 @@
+2017-02-20  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+
+        Added new tests in JSTests.
+        
+        The WebCore changes involve:
+        
+        - Refactoring around new header discipline.
+        
+        - Adding crazy GC APIs to window.internals to enable us to test the GC's runloop discipline.
+
+        * ForwardingHeaders/heap/GCFinalizationCallback.h: Added.
+        * ForwardingHeaders/heap/IncrementalSweeper.h: Added.
+        * ForwardingHeaders/heap/MachineStackMarker.h: Added.
+        * ForwardingHeaders/heap/RunningScope.h: Added.
+        * bindings/js/CommonVM.cpp:
+        * testing/Internals.cpp:
+        (WebCore::Internals::parserMetaData):
+        (WebCore::Internals::isReadableStreamDisturbed):
+        (WebCore::Internals::isGCRunning):
+        (WebCore::Internals::addGCFinalizationCallback):
+        (WebCore::Internals::stopSweeping):
+        (WebCore::Internals::startSweeping):
+        * testing/Internals.h:
+        * testing/Internals.idl:
+
 2017-02-20  Simon Fraser  <simon.fraser@apple.com>
 
         Add support to PlatformCALayer/GraphicsLayerCA for subpixel-antialiased text, with a Setting and a MiniBrowser switch
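
The internals hooks listed above expose collector state to layout tests; their implementations live in testing/Internals.cpp (not shown in this excerpt). As a hedged illustration of the shape of such a hook, where the Heap accessor name below is an assumption rather than something this diff confirms:

    // Hypothetical sketch of an internals-style GC probe.
    bool Internals::isGCRunning() const
    {
        return commonVM().heap.isCurrentThreadBusy(); // assumed accessor
    }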
diff --git a/Source/WebCore/ForwardingHeaders/heap/GCFinalizationCallback.h b/Source/WebCore/ForwardingHeaders/heap/GCFinalizationCallback.h
new file mode 100644
index 0000000..075ac7e
--- /dev/null
@@ -0,0 +1,3 @@
+#pragma once
+#include <JavaScriptCore/GCFinalizationCallback.h>
+
diff --git a/Source/WebCore/ForwardingHeaders/heap/IncrementalSweeper.h b/Source/WebCore/ForwardingHeaders/heap/IncrementalSweeper.h
new file mode 100644
index 0000000..f52ebde
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/IncrementalSweeper.h>
diff --git a/Source/WebCore/ForwardingHeaders/heap/MachineStackMarker.h b/Source/WebCore/ForwardingHeaders/heap/MachineStackMarker.h
new file mode 100644
index 0000000..fb80b57
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/MachineStackMarker.h>
diff --git a/Source/WebCore/ForwardingHeaders/heap/RunningScope.h b/Source/WebCore/ForwardingHeaders/heap/RunningScope.h
new file mode 100644
index 0000000..05cbb08
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/RunningScope.h>
index 7338d0f..0187e19 100644
@@ -30,6 +30,7 @@
 #include "Settings.h"
 #include "WebCoreJSClientData.h"
 #include <heap/HeapInlines.h>
+#include "heap/MachineStackMarker.h"
 #include <runtime/VM.h>
 #include <wtf/MainThread.h>
 #include <wtf/text/AtomicString.h>
index b388612..fac9067 100644
@@ -1,3 +1,14 @@
+2017-02-20  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+        
+        Make more tests collect continuously.
+
+        * Scripts/run-jsc-stress-tests:
+
 2017-02-20  Simon Fraser  <simon.fraser@apple.com>
 
         Add support to PlatformCALayer/GraphicsLayerCA for subpixel-antialiased text, with a Setting and a MiniBrowser switch
index c35041d..041700e 100755
@@ -1468,11 +1468,11 @@ def runNoisyTestNoFTL
 end
 
 def runNoisyTestNoCJIT
-    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS))
+    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
 end
 
 def runNoisyTestEagerNoCJIT
-    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS))
+    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
 end
 
 def defaultRunNoisyTest