GC constraint solving should be parallel
author:    fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Tue, 5 Dec 2017 17:53:57 +0000 (17:53 +0000)
committer: fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Tue, 5 Dec 2017 17:53:57 +0000 (17:53 +0000)
https://bugs.webkit.org/show_bug.cgi?id=179934

Reviewed by JF Bastien.
PerformanceTests:

Added a version of splay that measures latency in a way that run-jsc-benchmarks groks.

* Octane/splay.js: Added.
(this.Setup.setup.setup):
(this.TearDown.tearDown.tearDown):
(Benchmark):
(BenchmarkResult):
(BenchmarkResult.prototype.valueOf):
(BenchmarkSuite):
(alert):
(Math.random):
(BenchmarkSuite.ResetRNG):
(RunStep):
(BenchmarkSuite.RunSuites):
(BenchmarkSuite.CountBenchmarks):
(BenchmarkSuite.GeometricMean):
(BenchmarkSuite.GeometricMeanTime):
(BenchmarkSuite.AverageAbovePercentile):
(BenchmarkSuite.GeometricMeanLatency):
(BenchmarkSuite.FormatScore):
(BenchmarkSuite.prototype.NotifyStep):
(BenchmarkSuite.prototype.NotifyResult):
(BenchmarkSuite.prototype.NotifyError):
(BenchmarkSuite.prototype.RunSingleBenchmark):
(RunNextSetup):
(RunNextBenchmark):
(RunNextTearDown):
(BenchmarkSuite.prototype.RunStep):
(GeneratePayloadTree):
(GenerateKey):
(SplayUpdateStats):
(InsertNewNode):
(SplaySetup):
(SplayTearDown):
(SplayRun):
(SplayTree):
(SplayTree.prototype.isEmpty):
(SplayTree.prototype.insert):
(SplayTree.prototype.remove):
(SplayTree.prototype.find):
(SplayTree.prototype.findMax):
(SplayTree.prototype.findGreatestLessThan):
(SplayTree.prototype.exportKeys):
(SplayTree.prototype.splay_):
(SplayTree.Node):
(SplayTree.Node.prototype.traverse_):
(report):
(start):

Source/JavaScriptCore:

This makes it possible to do constraint solving in parallel. This looks like a 1% Speedometer
speed-up. It's more than 1% on trunk-Speedometer.

The constraint solver supports running constraints in parallel in two different ways:

- Run multiple constraints in parallel to each other. This only works for constraints that can
  tolerate other constraints running concurrently with them (constraint.concurrency() ==
  ConstraintConcurrency::Concurrent). This is the most basic kind of parallelism that the
  constraint solver supports. All constraints except the JSC SPI constraints are concurrent. We
  could probably make those concurrent too, but I'm playing it safe for now.

- A constraint can create parallel work for itself, which the constraint solver will interleave
  with other stuff. A constraint can report that it has parallel work by returning
  ConstraintParallelism::Parallel from its executeImpl() function. Then the solver will allow that
  constraint's doParallelWorkImpl() function to run on as many GC marker threads as are available,
  for as long as that function wants to run.

It's not possible to have a non-concurrent constraint that creates parallel work.
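
To make the second mode concrete, here is a rough sketch of what a parallel-work constraint looks
like. The class and enum names come from the files in this change; the exact parameter lists are
assumptions, so treat this as an illustration rather than a copy of the real declarations:

    // Sketch only. See MarkingConstraint.h and ConstraintParallelism.h for the real API.
    class MyParallelConstraint : public JSC::MarkingConstraint {
    protected:
        // Called during constraint solving. Returning Parallel tells the solver that this
        // constraint has extra work it wants spread across the marker threads.
        JSC::ConstraintParallelism executeImpl(JSC::SlotVisitor&) override
        {
            // ... seed whatever shared work source doParallelWorkImpl will drain ...
            return JSC::ConstraintParallelism::Parallel;
        }

        // Runs on however many marker threads are available, each with its own SlotVisitor,
        // until the shared work source is exhausted.
        void doParallelWorkImpl(JSC::SlotVisitor& visitor) override
        {
            // ... pull chunks from the shared source and visit them with this visitor ...
        }

        // Runs once after all of the parallel work has completed.
        void finishParallelWorkImpl(JSC::SlotVisitor&) override { }
    };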

The parallelism is implemented in terms of the existing GC marker threads. This turns out to be
most natural for two reasons:

- No need to start any other threads.

- The constraints all want to be passed a SlotVisitor. Running on the marker threads means having
  access to those threads' SlotVisitors. Also, it means less load balancing. The solver will
  create work on each marking thread's SlotVisitor. When the solver is done "stealing" a marker
  thread, that thread will have work it can start doing immediately. Before this change, we had to
  contribute the work found by the constraint solver to the global worklist so that it could be
  distributed to the marker threads by load balancing. This change probably helps to avoid that
  load balancing step.

A lot of this change is about making it easy to iterate GC data structures in parallel. This
change makes almost all constraints parallel-enabled, but only the DOM's output constraint uses
the parallel work API. That constraint iterates the marked cells in two subspaces. This change
makes it very easy to compose parallel iterators over subspaces, allocators, blocks, and cells.
The marked cell parallel iterator is composed out of parallel iterators for the others. A parallel
iterator is just an iterator that can do an atomic next() very quickly. We abstract them using
RefPtr<SharedTask<...()>>, where ... is the type returned from the iterator. We know it's done
when it returns a falsish version of ... (in the current code, that's always a pointer type, so
done is indicated by null).
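
As an example of the abstraction, the simplest possible parallel source over a pre-collected vector
of items could be sketched like this. The real sources (MarkedAllocator::parallelNotEmptyBlockSource,
Subspace::parallelAllocatorSource, ParallelSourceAdapter.h) are more general; this is just meant to
show the shape, and the details are assumptions rather than copied code:

    #include <wtf/Atomics.h>
    #include <wtf/SharedTask.h>
    #include <wtf/Vector.h>

    // A parallel iterator is a SharedTask whose run() is an atomic next(). Returning null
    // (the falsish value) tells every consumer that the source is exhausted.
    template<typename T>
    class VectorParallelSource final : public WTF::SharedTask<T*()> {
    public:
        VectorParallelSource(WTF::Vector<T*>& items) : m_items(items) { }

        T* run() override
        {
            size_t index = m_index.exchangeAdd(1); // Claim the next slot atomically.
            if (index >= m_items.size())
                return nullptr; // Done.
            return m_items[index];
        }

    private:
        WTF::Vector<T*>& m_items;
        WTF::Atomic<size_t> m_index { 0 };
    };

Each marker thread can then hold the same RefPtr<SharedTask<T*()>> and call run() until it returns
null, which is exactly the kind of composition the subspace/allocator/block/cell iterators use.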

* API/JSMarkingConstraintPrivate.cpp:
(JSContextGroupAddMarkingConstraint):
* API/JSVirtualMachine.mm:
(scanExternalObjectGraph):
(scanExternalRememberedSet):
* JavaScriptCore.xcodeproj/project.pbxproj:
* Sources.txt:
* bytecode/AccessCase.cpp:
(JSC::AccessCase::propagateTransitions const):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitWeakly):
(JSC::CodeBlock::shouldJettisonDueToOldAge):
(JSC::shouldMarkTransition):
(JSC::CodeBlock::propagateTransitions):
(JSC::CodeBlock::determineLiveness):
* dfg/DFGWorklist.cpp:
* ftl/FTLCompile.cpp:
(JSC::FTL::compile):
* heap/ConstraintParallelism.h: Added.
(WTF::printInternal):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::addToRememberedSet):
(JSC::Heap::runFixpointPhase):
(JSC::Heap::stopThePeriphery):
(JSC::Heap::resumeThePeriphery):
(JSC::Heap::addCoreConstraints):
(JSC::Heap::setBonusVisitorTask):
(JSC::Heap::runTaskInParallel):
(JSC::Heap::forEachSlotVisitor): Deleted.
* heap/Heap.h:
(JSC::Heap::worldIsRunning const):
(JSC::Heap::runFunctionInParallel):
* heap/HeapInlines.h:
(JSC::Heap::worldIsStopped const):
(JSC::Heap::isMarked):
(JSC::Heap::incrementDeferralDepth):
(JSC::Heap::decrementDeferralDepth):
(JSC::Heap::decrementDeferralDepthAndGCIfNeeded):
(JSC::Heap::forEachSlotVisitor):
(JSC::Heap::collectorBelievesThatTheWorldIsStopped const): Deleted.
(JSC::Heap::isMarkedConcurrently): Deleted.
* heap/HeapSnapshotBuilder.cpp:
(JSC::HeapSnapshotBuilder::appendNode):
* heap/LargeAllocation.h:
(JSC::LargeAllocation::isMarked):
(JSC::LargeAllocation::isMarkedConcurrently): Deleted.
* heap/LockDuringMarking.h:
(JSC::lockDuringMarking):
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::parallelNotEmptyBlockSource):
* heap/MarkedAllocator.h:
* heap/MarkedBlock.h:
(JSC::MarkedBlock::aboutToMark):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::areMarksStaleWithDependency): Deleted.
(JSC::MarkedBlock::isMarkedConcurrently): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::activeWeakSetsBegin):
(JSC::MarkedSpace::activeWeakSetsEnd):
(JSC::MarkedSpace::newActiveWeakSetsBegin):
(JSC::MarkedSpace::newActiveWeakSetsEnd):
* heap/MarkingConstraint.cpp:
(JSC::MarkingConstraint::MarkingConstraint):
(JSC::MarkingConstraint::execute):
(JSC::MarkingConstraint::quickWorkEstimate):
(JSC::MarkingConstraint::workEstimate):
(JSC::MarkingConstraint::doParallelWork):
(JSC::MarkingConstraint::finishParallelWork):
(JSC::MarkingConstraint::doParallelWorkImpl):
(JSC::MarkingConstraint::finishParallelWorkImpl):
* heap/MarkingConstraint.h:
(JSC::MarkingConstraint::lastExecuteParallelism const):
(JSC::MarkingConstraint::parallelism const):
(JSC::MarkingConstraint::quickWorkEstimate): Deleted.
(JSC::MarkingConstraint::workEstimate): Deleted.
* heap/MarkingConstraintSet.cpp:
(JSC::MarkingConstraintSet::MarkingConstraintSet):
(JSC::MarkingConstraintSet::add):
(JSC::MarkingConstraintSet::executeConvergence):
(JSC::MarkingConstraintSet::executeConvergenceImpl):
(JSC::MarkingConstraintSet::executeAll):
(JSC::MarkingConstraintSet::ExecutionContext::ExecutionContext): Deleted.
(JSC::MarkingConstraintSet::ExecutionContext::didVisitSomething const): Deleted.
(JSC::MarkingConstraintSet::ExecutionContext::shouldTimeOut const): Deleted.
(JSC::MarkingConstraintSet::ExecutionContext::drain): Deleted.
(JSC::MarkingConstraintSet::ExecutionContext::didExecute const): Deleted.
(JSC::MarkingConstraintSet::ExecutionContext::execute): Deleted.
(): Deleted.
* heap/MarkingConstraintSet.h:
* heap/MarkingConstraintSolver.cpp: Added.
(JSC::MarkingConstraintSolver::MarkingConstraintSolver):
(JSC::MarkingConstraintSolver::~MarkingConstraintSolver):
(JSC::MarkingConstraintSolver::didVisitSomething const):
(JSC::MarkingConstraintSolver::execute):
(JSC::MarkingConstraintSolver::drain):
(JSC::MarkingConstraintSolver::converge):
(JSC::MarkingConstraintSolver::runExecutionThread):
(JSC::MarkingConstraintSolver::didExecute):
* heap/MarkingConstraintSolver.h: Added.
* heap/OpaqueRootSet.h: Removed.
* heap/ParallelSourceAdapter.h: Added.
(JSC::ParallelSourceAdapter::ParallelSourceAdapter):
(JSC::createParallelSourceAdapter):
* heap/SimpleMarkingConstraint.cpp: Added.
(JSC::SimpleMarkingConstraint::SimpleMarkingConstraint):
(JSC::SimpleMarkingConstraint::~SimpleMarkingConstraint):
(JSC::SimpleMarkingConstraint::quickWorkEstimate):
(JSC::SimpleMarkingConstraint::executeImpl):
* heap/SimpleMarkingConstraint.h: Added.
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::didStartMarking):
(JSC::SlotVisitor::reset):
(JSC::SlotVisitor::appendToMarkStack):
(JSC::SlotVisitor::visitChildren):
(JSC::SlotVisitor::updateMutatorIsStopped):
(JSC::SlotVisitor::mutatorIsStoppedIsUpToDate const):
(JSC::SlotVisitor::drain):
(JSC::SlotVisitor::performIncrementOfDraining):
(JSC::SlotVisitor::didReachTermination):
(JSC::SlotVisitor::hasWork):
(JSC::SlotVisitor::drainFromShared):
(JSC::SlotVisitor::drainInParallelPassively):
(JSC::SlotVisitor::waitForTermination):
(JSC::SlotVisitor::addOpaqueRoot): Deleted.
(JSC::SlotVisitor::containsOpaqueRoot const): Deleted.
(JSC::SlotVisitor::containsOpaqueRootTriState const): Deleted.
(JSC::SlotVisitor::mergeIfNecessary): Deleted.
(JSC::SlotVisitor::mergeOpaqueRootsIfProfitable): Deleted.
(JSC::SlotVisitor::mergeOpaqueRoots): Deleted.
* heap/SlotVisitor.h:
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::addOpaqueRoot):
(JSC::SlotVisitor::containsOpaqueRoot const):
(JSC::SlotVisitor::vm):
(JSC::SlotVisitor::vm const):
* heap/Subspace.cpp:
(JSC::Subspace::parallelAllocatorSource):
(JSC::Subspace::parallelNotEmptyMarkedBlockSource):
* heap/Subspace.h:
* heap/SubspaceInlines.h:
(JSC::Subspace::forEachMarkedCellInParallel):
* heap/VisitCounter.h: Added.
(JSC::VisitCounter::VisitCounter):
(JSC::VisitCounter::visitCount const):
* heap/VisitingTimeout.h: Removed.
* heap/WeakBlock.cpp:
(JSC::WeakBlock::specializedVisit):
* runtime/Structure.cpp:
(JSC::Structure::isCheapDuringGC):
(JSC::Structure::markIfCheap):

Source/WebCore:

No new tests because no change in behavior. This change is best tested using DOM-GC-intensive
benchmarks like Speedometer and Dromaeo.

This parallelizes the DOM's output constraint, and makes some small changes to make this more
scalable.

* ForwardingHeaders/heap/SimpleMarkingConstraint.h: Added.
* ForwardingHeaders/heap/VisitingTimeout.h: Removed.
* Sources.txt:
* WebCore.xcodeproj/project.pbxproj:
* bindings/js/DOMGCOutputConstraint.cpp: Added.
(WebCore::DOMGCOutputConstraint::DOMGCOutputConstraint):
(WebCore::DOMGCOutputConstraint::~DOMGCOutputConstraint):
(WebCore::DOMGCOutputConstraint::executeImpl):
(WebCore::DOMGCOutputConstraint::doParallelWorkImpl):
(WebCore::DOMGCOutputConstraint::finishParallelWorkImpl):
* bindings/js/DOMGCOutputConstraint.h: Added.
* bindings/js/WebCoreJSClientData.cpp:
(WebCore::JSVMClientData::initNormalWorld):
* dom/Node.cpp:
(WebCore::Node::eventTargetDataConcurrently):
(WebCore::Node::ensureEventTargetData):
(WebCore::Node::clearEventTargetData):

Source/WTF:

This makes some changes to make it easier to do parallel constraint solving:

- I finally removed dependencyWith. This was a silly construct whose only purpose was to confuse
  people about what it means to have a dependency chain. I took that as an opportunity to greatly
  simplify the GC's use of dependency chaining.

- Added more logic to Deque<>, since I use it for part of the load balancer.

- Made it possible to profile lock contention. See
  https://bugs.webkit.org/show_bug.cgi?id=180250#c0 for some preliminary measurements.

- Introduced holdLockIf, which makes it easy to perform predicated lock acquisition. We use that
  to pick a lock in WebCore.

- Introduced CountingLock. It's like WTF::Lock except it also enables optimistic read transactions,
  sorta like Java's StampedLock (see the sketch after this list).
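
A minimal sketch of how the last two of these are meant to be used follows. The CountingLock method
names (tryOptimisticRead, validate) and the holdLockIf signature are assumptions in the spirit of
the descriptions above; Locker.h and CountingLock.h are the source of truth:

    #include <wtf/CountingLock.h>
    #include <wtf/Locker.h>

    WTF::CountingLock lock;
    unsigned sharedValue;

    unsigned readMaybeLocked(bool mightRace)
    {
        // Predicated lock acquisition: only actually takes the lock when the predicate is true.
        auto locker = holdLockIf(lock, mightRace);
        return sharedValue;
    }

    unsigned readOptimistically()
    {
        // Optimistic read transaction: read without taking the lock, then validate that no
        // writer intervened; fall back to a real acquisition if the validation fails.
        auto token = lock.tryOptimisticRead();
        if (token) {
            unsigned value = sharedValue;
            if (lock.validate(token))
                return value;
        }
        auto locker = holdLock(lock);
        return sharedValue;
    }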

* WTF.xcodeproj/project.pbxproj:
* wtf/Atomics.h:
(WTF::dependency):
(WTF::DependencyWith::DependencyWith): Deleted.
(WTF::dependencyWith): Deleted.
* wtf/BitVector.h:
(WTF::BitVector::iterator::operator++):
* wtf/CMakeLists.txt:
* wtf/ConcurrentPtrHashSet.cpp: Added.
(WTF::ConcurrentPtrHashSet::ConcurrentPtrHashSet):
(WTF::ConcurrentPtrHashSet::~ConcurrentPtrHashSet):
(WTF::ConcurrentPtrHashSet::deleteOldTables):
(WTF::ConcurrentPtrHashSet::clear):
(WTF::ConcurrentPtrHashSet::initialize):
(WTF::ConcurrentPtrHashSet::addSlow):
(WTF::ConcurrentPtrHashSet::resizeIfNecessary):
(WTF::ConcurrentPtrHashSet::resizeAndAdd):
(WTF::ConcurrentPtrHashSet::Table::create):
* wtf/ConcurrentPtrHashSet.h: Added.
(WTF::ConcurrentPtrHashSet::contains):
(WTF::ConcurrentPtrHashSet::add):
(WTF::ConcurrentPtrHashSet::size const):
(WTF::ConcurrentPtrHashSet::Table::maxLoad const):
(WTF::ConcurrentPtrHashSet::hash):
(WTF::ConcurrentPtrHashSet::cast):
(WTF::ConcurrentPtrHashSet::containsImpl const):
(WTF::ConcurrentPtrHashSet::addImpl):
* wtf/Deque.h:
(WTF::inlineCapacity>::takeFirst):
* wtf/FastMalloc.h:
* wtf/Lock.cpp:
(WTF::LockBase::lockSlow):
* wtf/Locker.h:
(WTF::holdLockIf):
* wtf/ScopedLambda.h:
* wtf/SharedTask.h:
(WTF::SharedTask<PassedResultType):
(WTF::SharedTask<ResultType): Deleted.
* wtf/StackShot.h: Added.
(WTF::StackShot::StackShot):
(WTF::StackShot::operator=):
(WTF::StackShot::array const):
(WTF::StackShot::size const):
(WTF::StackShot::operator bool const):
(WTF::StackShot::operator== const):
(WTF::StackShot::hash const):
(WTF::StackShot::isHashTableDeletedValue const):
(WTF::StackShot::operator> const):
(WTF::StackShot::deletedValueArray):
(WTF::StackShotHash::hash):
(WTF::StackShotHash::equal):
* wtf/StackShotProfiler.h: Added.
(WTF::StackShotProfiler::StackShotProfiler):
(WTF::StackShotProfiler::profile):
(WTF::StackShotProfiler::run):

Tools:

* Scripts/run-jsc-benchmarks: Add splay-latency test, since this change needed to be carefully validated with that benchmark.
* TestWebKitAPI/CMakeLists.txt:
* TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj:
* TestWebKitAPI/Tests/WTF/ConcurrentPtrHashSet.cpp: Added. This has unit tests of the new concurrent data structure. The tests focus on correctness under serial execution, which appears to be enough for now (it's so easy to catch a concurrency bug by just running the GC).
(TestWebKitAPI::TEST):
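
A rough illustration of the kind of serial test this adds (names and return values are assumptions
based on the ConcurrentPtrHashSet methods listed in the WTF section above, not copied from the
actual test file):

    #include <wtf/ConcurrentPtrHashSet.h>

    TEST(WTF_ConcurrentPtrHashSet, AddContainsSize)
    {
        WTF::ConcurrentPtrHashSet set;
        int a, b;

        EXPECT_FALSE(set.contains(&a));
        EXPECT_TRUE(set.add(&a));   // Assumed: add() reports whether the pointer was newly added.
        EXPECT_FALSE(set.add(&a));  // Adding the same pointer again is a no-op.
        EXPECT_TRUE(set.contains(&a));
        EXPECT_FALSE(set.contains(&b));
        EXPECT_EQ(1u, set.size());
    }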

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@225524 268f45cc-cd09-0410-ab3c-d52691b4dbfc

88 files changed:
PerformanceTests/ChangeLog
PerformanceTests/Octane/splay.js [new file with mode: 0644]
Source/JavaScriptCore/API/JSMarkingConstraintPrivate.cpp
Source/JavaScriptCore/API/JSVirtualMachine.mm
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Sources.txt
Source/JavaScriptCore/bytecode/AccessCase.cpp
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/dfg/DFGWorklist.cpp
Source/JavaScriptCore/ftl/FTLCompile.cpp
Source/JavaScriptCore/heap/ConservativeRoots.cpp
Source/JavaScriptCore/heap/ConservativeRoots.h
Source/JavaScriptCore/heap/ConstraintConcurrency.h [new file with mode: 0644]
Source/JavaScriptCore/heap/ConstraintParallelism.h [new file with mode: 0644]
Source/JavaScriptCore/heap/GCSegmentedArrayInlines.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapSnapshotBuilder.cpp
Source/JavaScriptCore/heap/HeapUtil.h
Source/JavaScriptCore/heap/LargeAllocation.h
Source/JavaScriptCore/heap/LockDuringMarking.h
Source/JavaScriptCore/heap/MachineStackMarker.cpp
Source/JavaScriptCore/heap/MachineStackMarker.h
Source/JavaScriptCore/heap/MarkStackMergingConstraint.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/MarkStackMergingConstraint.h [new file with mode: 0644]
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedAllocator.h
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedBlockInlines.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/MarkingConstraint.cpp
Source/JavaScriptCore/heap/MarkingConstraint.h
Source/JavaScriptCore/heap/MarkingConstraintSet.cpp
Source/JavaScriptCore/heap/MarkingConstraintSet.h
Source/JavaScriptCore/heap/MarkingConstraintSolver.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/MarkingConstraintSolver.h [new file with mode: 0644]
Source/JavaScriptCore/heap/ParallelSourceAdapter.h [new file with mode: 0644]
Source/JavaScriptCore/heap/SimpleMarkingConstraint.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/SimpleMarkingConstraint.h [moved from Source/JavaScriptCore/heap/OpaqueRootSet.h with 52% similarity]
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/SlotVisitorInlines.h
Source/JavaScriptCore/heap/Subspace.cpp
Source/JavaScriptCore/heap/Subspace.h
Source/JavaScriptCore/heap/SubspaceInlines.h
Source/JavaScriptCore/heap/VisitCounter.h [moved from Source/JavaScriptCore/heap/VisitingTimeout.h with 65% similarity]
Source/JavaScriptCore/heap/WeakBlock.cpp
Source/JavaScriptCore/runtime/JSObject.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/Structure.cpp
Source/WTF/ChangeLog
Source/WTF/WTF.xcodeproj/project.pbxproj
Source/WTF/wtf/Atomics.h
Source/WTF/wtf/BitVector.h
Source/WTF/wtf/Bitmap.h
Source/WTF/wtf/CMakeLists.txt
Source/WTF/wtf/ConcurrentPtrHashSet.cpp [new file with mode: 0644]
Source/WTF/wtf/ConcurrentPtrHashSet.h [new file with mode: 0644]
Source/WTF/wtf/CountingLock.cpp [new file with mode: 0644]
Source/WTF/wtf/CountingLock.h [new file with mode: 0644]
Source/WTF/wtf/Deque.h
Source/WTF/wtf/FastMalloc.h
Source/WTF/wtf/Lock.cpp
Source/WTF/wtf/LockAlgorithm.h
Source/WTF/wtf/LockAlgorithmInlines.h
Source/WTF/wtf/Locker.h
Source/WTF/wtf/ScopedLambda.h
Source/WTF/wtf/SharedTask.h
Source/WTF/wtf/StackShot.h [new file with mode: 0644]
Source/WTF/wtf/StackShotProfiler.h [new file with mode: 0644]
Source/WebCore/ChangeLog
Source/WebCore/ForwardingHeaders/heap/SimpleMarkingConstraint.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/VisitingTimeout.h [deleted file]
Source/WebCore/Sources.txt
Source/WebCore/WebCore.xcodeproj/project.pbxproj
Source/WebCore/bindings/js/DOMGCOutputConstraint.cpp [new file with mode: 0644]
Source/WebCore/bindings/js/DOMGCOutputConstraint.h [new file with mode: 0644]
Source/WebCore/bindings/js/WebCoreJSClientData.cpp
Source/WebCore/dom/Node.cpp
Tools/ChangeLog
Tools/Scripts/run-jsc-benchmarks
Tools/TestWebKitAPI/CMakeLists.txt
Tools/TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj
Tools/TestWebKitAPI/Tests/WTF/ConcurrentPtrHashSet.cpp [new file with mode: 0644]

index 687a801..cd99ff4 100644 (file)
@@ -1,3 +1,59 @@
+2017-12-01  Filip Pizlo  <fpizlo@apple.com>
+
+        GC constraint solving should be parallel
+        https://bugs.webkit.org/show_bug.cgi?id=179934
+
+        Reviewed by JF Bastien.
+        
+        Added a version of splay that measures latency in a way that run-jsc-benchmarks groks.
+
+        * Octane/splay.js: Added.
+        (this.Setup.setup.setup):
+        (this.TearDown.tearDown.tearDown):
+        (Benchmark):
+        (BenchmarkResult):
+        (BenchmarkResult.prototype.valueOf):
+        (BenchmarkSuite):
+        (alert):
+        (Math.random):
+        (BenchmarkSuite.ResetRNG):
+        (RunStep):
+        (BenchmarkSuite.RunSuites):
+        (BenchmarkSuite.CountBenchmarks):
+        (BenchmarkSuite.GeometricMean):
+        (BenchmarkSuite.GeometricMeanTime):
+        (BenchmarkSuite.AverageAbovePercentile):
+        (BenchmarkSuite.GeometricMeanLatency):
+        (BenchmarkSuite.FormatScore):
+        (BenchmarkSuite.prototype.NotifyStep):
+        (BenchmarkSuite.prototype.NotifyResult):
+        (BenchmarkSuite.prototype.NotifyError):
+        (BenchmarkSuite.prototype.RunSingleBenchmark):
+        (RunNextSetup):
+        (RunNextBenchmark):
+        (RunNextTearDown):
+        (BenchmarkSuite.prototype.RunStep):
+        (GeneratePayloadTree):
+        (GenerateKey):
+        (SplayUpdateStats):
+        (InsertNewNode):
+        (SplaySetup):
+        (SplayTearDown):
+        (SplayRun):
+        (SplayTree):
+        (SplayTree.prototype.isEmpty):
+        (SplayTree.prototype.insert):
+        (SplayTree.prototype.remove):
+        (SplayTree.prototype.find):
+        (SplayTree.prototype.findMax):
+        (SplayTree.prototype.findGreatestLessThan):
+        (SplayTree.prototype.exportKeys):
+        (SplayTree.prototype.splay_):
+        (SplayTree.Node):
+        (SplayTree.Node.prototype.traverse_):
+        (report):
+        (start):
+
 2017-12-04  Antti Koivisto  <antti@apple.com>
 
         Fix StyleBench/InteractiveRunner.html
diff --git a/PerformanceTests/Octane/splay.js b/PerformanceTests/Octane/splay.js
new file mode 100644 (file)
index 0000000..22ed93f
--- /dev/null
@@ -0,0 +1,850 @@
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+// Performance.now is used in latency benchmarks, the fallback is Date.now.
+var performance = performance || {};
+performance.now = () => preciseTime() * 1000;
+
+// Simple framework for running the benchmark suites and
+// computing a score based on the timing measurements.
+
+
+// A benchmark has a name (string) and a function that will be run to
+// do the performance measurement. The optional setup and tearDown
+// arguments are functions that will be invoked before and after
+// running the benchmark, but the running time of these functions will
+// not be accounted for in the benchmark score.
+function Benchmark(name, doWarmup, doDeterministic, run, setup, tearDown, latencyResult, minIterations) {
+  this.name = name;
+  this.doWarmup = doWarmup;
+  this.doDeterministic = doDeterministic;
+  this.run = run;
+  this.Setup = setup ? setup : function() { };
+  this.TearDown = tearDown ? tearDown : function() { };
+  this.latencyResult = latencyResult ? latencyResult : null; 
+  this.minIterations = minIterations ? minIterations : 32;
+}
+
+
+// Benchmark results hold the benchmark and the measured time used to
+// run the benchmark. The benchmark score is computed later once a
+// full benchmark suite has run to completion. If latency is set to 0
+// then there is no latency score for this benchmark.
+function BenchmarkResult(benchmark, time, latency) {
+  this.benchmark = benchmark;
+  this.time = time;
+  this.latency = latency;
+}
+
+
+// Automatically convert results to numbers. Used by the geometric
+// mean computation.
+BenchmarkResult.prototype.valueOf = function() {
+  return this.time;
+}
+
+
+// Suites of benchmarks consist of a name and the set of benchmarks in
+// addition to the reference timing that the final score will be based
+// on. This way, all scores are relative to a reference run and higher
+// scores implies better performance.
+function BenchmarkSuite(name, reference, benchmarks) {
+  this.name = name;
+  this.reference = reference;
+  this.benchmarks = benchmarks;
+  BenchmarkSuite.suites.push(this);
+}
+
+
+// Keep track of all declared benchmark suites.
+BenchmarkSuite.suites = [];
+
+// Scores are not comparable across versions. Bump the version if
+// you're making changes that will affect that scores, e.g. if you add
+// a new benchmark or change an existing one.
+BenchmarkSuite.version = '9';
+
+// Override the alert function to throw an exception instead.
+alert = function(s) {
+  throw "Alert called with argument: " + s;
+};
+
+
+// To make the benchmark results predictable, we replace Math.random
+// with a 100% deterministic alternative.
+BenchmarkSuite.ResetRNG = function() {
+  Math.random = (function() {
+    var seed = 49734321;
+    return function() {
+      // Robert Jenkins' 32 bit integer hash function.
+      seed = ((seed + 0x7ed55d16) + (seed << 12))  & 0xffffffff;
+      seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
+      seed = ((seed + 0x165667b1) + (seed << 5))   & 0xffffffff;
+      seed = ((seed + 0xd3a2646c) ^ (seed << 9))   & 0xffffffff;
+      seed = ((seed + 0xfd7046c5) + (seed << 3))   & 0xffffffff;
+      seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
+      return (seed & 0xfffffff) / 0x10000000;
+    };
+  })();
+}
+
+
+// Runs all registered benchmark suites and optionally yields between
+// each individual benchmark to avoid running for too long in the
+// context of browsers. Once done, the final score is reported to the
+// runner.
+BenchmarkSuite.RunSuites = function(runner) {
+  var continuation = null;
+  var suites = BenchmarkSuite.suites;
+  var length = suites.length;
+  BenchmarkSuite.scores = [];
+  var index = 0;
+  function RunStep() {
+    while (continuation || index < length) {
+      if (continuation) {
+        continuation = continuation();
+      } else {
+        var suite = suites[index++];
+        if (runner.NotifyStart) runner.NotifyStart(suite.name);
+        continuation = suite.RunStep(runner);
+      }
+      if (continuation && typeof window != 'undefined' && window.setTimeout) {
+        window.setTimeout(RunStep, 25);
+        return;
+      }
+    }
+
+    // show final result
+    if (runner.NotifyScore) {
+      var score = BenchmarkSuite.GeometricMean(BenchmarkSuite.scores);
+      var formatted = BenchmarkSuite.FormatScore(100 * score);
+      runner.NotifyScore(formatted);
+    }
+  }
+  RunStep();
+}
+
+
+// Counts the total number of registered benchmarks. Useful for
+// showing progress as a percentage.
+BenchmarkSuite.CountBenchmarks = function() {
+  var result = 0;
+  var suites = BenchmarkSuite.suites;
+  for (var i = 0; i < suites.length; i++) {
+    result += suites[i].benchmarks.length;
+  }
+  return result;
+}
+
+
+// Computes the geometric mean of a set of numbers.
+BenchmarkSuite.GeometricMean = function(numbers) {
+  var log = 0;
+  for (var i = 0; i < numbers.length; i++) {
+    log += Math.log(numbers[i]);
+  }
+  return Math.pow(Math.E, log / numbers.length);
+}
+
+
+// Computes the geometric mean of a set of throughput time measurements.
+BenchmarkSuite.GeometricMeanTime = function(measurements) {
+  var log = 0;
+  for (var i = 0; i < measurements.length; i++) {
+    log += Math.log(measurements[i].time);
+  }
+  return Math.pow(Math.E, log / measurements.length);
+}
+
+
+// Computes the average of the worst samples. For example, if percentile is 99, this will report the
+// average of the worst 1% of the samples.
+BenchmarkSuite.AverageAbovePercentile = function(numbers, percentile) {
+  // Don't change the original array.
+  numbers = numbers.slice();
+  
+  // Sort in ascending order.
+  numbers.sort(function(a, b) { return a - b; });
+  
+  // Now the elements we want are at the end. Keep removing them until the array size shrinks too much.
+  // Examples assuming percentile = 99:
+  //
+  // - numbers.length starts at 100: we will remove just the worst entry and then not remove anymore,
+  //   since then numbers.length / originalLength = 0.99.
+  //
+  // - numbers.length starts at 1000: we will remove the ten worst.
+  //
+  // - numbers.length starts at 10: we will remove just the worst.
+  var numbersWeWant = [];
+  var originalLength = numbers.length;
+  while (numbers.length / originalLength > percentile / 100)
+    numbersWeWant.push(numbers.pop());
+  
+  var sum = 0;
+  for (var i = 0; i < numbersWeWant.length; ++i)
+    sum += numbersWeWant[i];
+  
+  var result = sum / numbersWeWant.length;
+  
+  // Do a sanity check.
+  if (numbers.length && result < numbers[numbers.length - 1]) {
+    throw "Sanity check fail: the worst case result is " + result +
+      " but we didn't take into account " + numbers;
+  }
+  
+  return result;
+}
+
+
+// Computes the geometric mean of a set of latency measurements.
+BenchmarkSuite.GeometricMeanLatency = function(measurements) {
+  var log = 0;
+  var hasLatencyResult = false;
+  for (var i = 0; i < measurements.length; i++) {
+    if (measurements[i].latency != 0) {
+      log += Math.log(measurements[i].latency);
+      hasLatencyResult = true;
+    }
+  }
+  if (hasLatencyResult) {
+    return Math.pow(Math.E, log / measurements.length);
+  } else {
+    return 0;
+  }
+}
+
+
+// Converts a score value to a string with at least three significant
+// digits.
+BenchmarkSuite.FormatScore = function(value) {
+  if (value > 100) {
+    return value.toFixed(0);
+  } else {
+    return value.toPrecision(3);
+  }
+}
+
+// Notifies the runner that we're done running a single benchmark in
+// the benchmark suite. This can be useful to report progress.
+BenchmarkSuite.prototype.NotifyStep = function(result) {
+  this.results.push(result);
+  if (this.runner.NotifyStep) this.runner.NotifyStep(result.benchmark.name);
+}
+
+
+// Notifies the runner that we're done with running a suite and that
+// we have a result which can be reported to the user if needed.
+BenchmarkSuite.prototype.NotifyResult = function() {
+  var mean = BenchmarkSuite.GeometricMeanTime(this.results);
+  var score = this.reference[0] / mean;
+  BenchmarkSuite.scores.push(score);
+  if (this.runner.NotifyResult) {
+    var formatted = BenchmarkSuite.FormatScore(100 * score);
+    this.runner.NotifyResult(this.name, formatted);
+  }
+  if (this.reference.length == 2) {
+    var meanLatency = BenchmarkSuite.GeometricMeanLatency(this.results);
+    if (meanLatency != 0) {
+      var scoreLatency = this.reference[1] / meanLatency;
+      BenchmarkSuite.scores.push(scoreLatency);
+      if (this.runner.NotifyResult) {
+        var formattedLatency = BenchmarkSuite.FormatScore(100 * scoreLatency)
+        this.runner.NotifyResult(this.name + "Latency", formattedLatency);
+      }
+    }
+  }
+}
+
+
+// Notifies the runner that running a benchmark resulted in an error.
+BenchmarkSuite.prototype.NotifyError = function(error) {
+  if (this.runner.NotifyError) {
+    this.runner.NotifyError(this.name, error);
+  }
+  if (this.runner.NotifyStep) {
+    this.runner.NotifyStep(this.name);
+  }
+}
+
+
+// Runs a single benchmark for at least a second and computes the
+// average time it takes to run a single iteration.
+BenchmarkSuite.prototype.RunSingleBenchmark = function(benchmark, data) {
+  function Measure(data) {
+    var elapsed = 0;
+    var start = new Date();
+  
+  // Run either for 1 second or for the number of iterations specified
+  // by minIterations, depending on the config flag doDeterministic.
+    for (var i = 0; (benchmark.doDeterministic ? 
+      i<benchmark.minIterations : elapsed < 1000); i++) {
+      benchmark.run();
+      elapsed = new Date() - start;
+    }
+    if (data != null) {
+      data.runs += i;
+      data.elapsed += elapsed;
+    }
+  }
+
+  // Sets up data in order to skip or not the warmup phase.
+  if (!benchmark.doWarmup && data == null) {
+    data = { runs: 0, elapsed: 0 };
+  }
+
+  if (data == null) {
+    Measure(null);
+    return { runs: 0, elapsed: 0 };
+  } else {
+    Measure(data);
+    // If we've run too few iterations, we continue for another second.
+    if (data.runs < benchmark.minIterations) return data;
+    var usec = (data.elapsed * 1000) / data.runs;
+    var latencySamples = (benchmark.latencyResult != null) ? benchmark.latencyResult() : [0];
+    var percentile = 99.5;
+    var latency = BenchmarkSuite.AverageAbovePercentile(latencySamples, percentile) * 1000;
+    this.NotifyStep(new BenchmarkResult(benchmark, usec, latency));
+    return null;
+  }
+}
+
+
+// This function starts running a suite, but stops between each
+// individual benchmark in the suite and returns a continuation
+// function which can be invoked to run the next benchmark. Once the
+// last benchmark has been executed, null is returned.
+BenchmarkSuite.prototype.RunStep = function(runner) {
+  BenchmarkSuite.ResetRNG();
+  this.results = [];
+  this.runner = runner;
+  var length = this.benchmarks.length;
+  var index = 0;
+  var suite = this;
+  var data;
+
+  // Run the setup, the actual benchmark, and the tear down in three
+  // separate steps to allow the framework to yield between any of the
+  // steps.
+
+  function RunNextSetup() {
+    if (index < length) {
+      try {
+        suite.benchmarks[index].Setup();
+      } catch (e) {
+        suite.NotifyError(e);
+        return null;
+      }
+      return RunNextBenchmark;
+    }
+    suite.NotifyResult();
+    return null;
+  }
+
+  function RunNextBenchmark() {
+    try {
+      data = suite.RunSingleBenchmark(suite.benchmarks[index], data);
+    } catch (e) {
+      suite.NotifyError(e);
+      return null;
+    }
+    // If data is null, we're done with this benchmark.
+    return (data == null) ? RunNextTearDown : RunNextBenchmark();
+  }
+
+  function RunNextTearDown() {
+    try {
+      suite.benchmarks[index++].TearDown();
+    } catch (e) {
+      suite.NotifyError(e);
+      return null;
+    }
+    return RunNextSetup;
+  }
+
+  // Start out running the setup.
+  return RunNextSetup();
+}
+
+// Copyright 2009 the V8 project authors. All rights reserved.
+// Copyright (C) 2015 Apple Inc. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// This benchmark is based on a JavaScript log processing module used
+// by the V8 profiler to generate execution time profiles for runs of
+// JavaScript applications, and it effectively measures how fast the
+// JavaScript engine is at allocating nodes and reclaiming the memory
+// used for old nodes. Because of the way splay trees work, the engine
+// also has to deal with a lot of changes to the large tree object
+// graph.
+
+var Splay = new BenchmarkSuite('Splay', [81491, 2739514], [
+  new Benchmark("Splay", true, false, 
+    SplayRun, SplaySetup, SplayTearDown, SplayLatency)
+]);
+
+
+// Configuration.
+var kSplayTreeSize = 8000;
+var kSplayTreeModifications = 80;
+var kSplayTreePayloadDepth = 5;
+
+var splayTree = null;
+var splaySampleTimeStart = 0.0;
+
+function GeneratePayloadTree(depth, tag) {
+  if (depth == 0) {
+    return {
+      array  : [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ],
+      string : 'String for key ' + tag + ' in leaf node'
+    };
+  } else {
+    return {
+      left:  GeneratePayloadTree(depth - 1, tag),
+      right: GeneratePayloadTree(depth - 1, tag)
+    };
+  }
+}
+
+
+function GenerateKey() {
+  // The benchmark framework guarantees that Math.random is
+  // deterministic; see base.js.
+  return Math.random();
+}
+
+var splaySamples = [];
+
+function SplayLatency() {
+  return splaySamples;
+}
+
+function SplayUpdateStats(time) {
+  var pause = time - splaySampleTimeStart;
+  splaySampleTimeStart = time;
+  splaySamples.push(pause);
+}
+
+function InsertNewNode() {
+  // Insert new node with a unique key.
+  var key;
+  do {
+    key = GenerateKey();
+  } while (splayTree.find(key) != null);
+  var payload = GeneratePayloadTree(kSplayTreePayloadDepth, String(key));
+  splayTree.insert(key, payload);
+  return key;
+}
+
+
+function SplaySetup() {
+  // Check if the platform has the performance.now high resolution timer.
+  // If not, throw exception and quit.
+  if (!performance.now) {
+    throw "PerformanceNowUnsupported";
+  }
+
+  splayTree = new SplayTree();
+  splaySampleTimeStart = performance.now()
+  for (var i = 0; i < kSplayTreeSize; i++) {
+    InsertNewNode();
+    if ((i+1) % 20 == 19) {
+      SplayUpdateStats(performance.now());
+    }
+  }
+}
+
+
+function SplayTearDown() {
+  // Allow the garbage collector to reclaim the memory
+  // used by the splay tree no matter how we exit the
+  // tear down function.
+  var keys = splayTree.exportKeys();
+  splayTree = null;
+
+  splaySamples = [];
+
+  // Verify that the splay tree has the right size.
+  var length = keys.length;
+  if (length != kSplayTreeSize) {
+    throw new Error("Splay tree has wrong size");
+  }
+
+  // Verify that the splay tree has sorted, unique keys.
+  for (var i = 0; i < length - 1; i++) {
+    if (keys[i] >= keys[i + 1]) {
+      throw new Error("Splay tree not sorted");
+    }
+  }
+}
+
+
+function SplayRun() {
+  // Replace a few nodes in the splay tree.
+  for (var i = 0; i < kSplayTreeModifications; i++) {
+    var key = InsertNewNode();
+    var greatest = splayTree.findGreatestLessThan(key);
+    if (greatest == null) splayTree.remove(key);
+    else splayTree.remove(greatest.key);
+  }
+  SplayUpdateStats(performance.now());
+}
+
+
+/**
+ * Constructs a Splay tree.  A splay tree is a self-balancing binary
+ * search tree with the additional property that recently accessed
+ * elements are quick to access again. It performs basic operations
+ * such as insertion, look-up and removal in O(log(n)) amortized time.
+ *
+ * @constructor
+ */
+function SplayTree() {
+};
+
+
+/**
+ * Pointer to the root node of the tree.
+ *
+ * @type {SplayTree.Node}
+ * @private
+ */
+SplayTree.prototype.root_ = null;
+
+
+/**
+ * @return {boolean} Whether the tree is empty.
+ */
+SplayTree.prototype.isEmpty = function() {
+  return !this.root_;
+};
+
+
+/**
+ * Inserts a node into the tree with the specified key and value if
+ * the tree does not already contain a node with the specified key. If
+ * the value is inserted, it becomes the root of the tree.
+ *
+ * @param {number} key Key to insert into the tree.
+ * @param {*} value Value to insert into the tree.
+ */
+SplayTree.prototype.insert = function(key, value) {
+  if (this.isEmpty()) {
+    this.root_ = new SplayTree.Node(key, value);
+    return;
+  }
+  // Splay on the key to move the last node on the search path for
+  // the key to the root of the tree.
+  this.splay_(key);
+  if (this.root_.key == key) {
+    return;
+  }
+  var node = new SplayTree.Node(key, value);
+  if (key > this.root_.key) {
+    node.left = this.root_;
+    node.right = this.root_.right;
+    this.root_.right = null;
+  } else {
+    node.right = this.root_;
+    node.left = this.root_.left;
+    this.root_.left = null;
+  }
+  this.root_ = node;
+};
+
+
+/**
+ * Removes a node with the specified key from the tree if the tree
+ * contains a node with this key. The removed node is returned. If the
+ * key is not found, an exception is thrown.
+ *
+ * @param {number} key Key to find and remove from the tree.
+ * @return {SplayTree.Node} The removed node.
+ */
+SplayTree.prototype.remove = function(key) {
+  if (this.isEmpty()) {
+    throw Error('Key not found: ' + key);
+  }
+  this.splay_(key);
+  if (this.root_.key != key) {
+    throw Error('Key not found: ' + key);
+  }
+  var removed = this.root_;
+  if (!this.root_.left) {
+    this.root_ = this.root_.right;
+  } else {
+    var right = this.root_.right;
+    this.root_ = this.root_.left;
+    // Splay to make sure that the new root has an empty right child.
+    this.splay_(key);
+    // Insert the original right child as the right child of the new
+    // root.
+    this.root_.right = right;
+  }
+  return removed;
+};
+
+
+/**
+ * Returns the node having the specified key or null if the tree doesn't contain
+ * a node with the specified key.
+ *
+ * @param {number} key Key to find in the tree.
+ * @return {SplayTree.Node} Node having the specified key.
+ */
+SplayTree.prototype.find = function(key) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  this.splay_(key);
+  return this.root_.key == key ? this.root_ : null;
+};
+
+
+/**
+ * @return {SplayTree.Node} Node having the maximum key value.
+ */
+SplayTree.prototype.findMax = function(opt_startNode) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  var current = opt_startNode || this.root_;
+  while (current.right) {
+    current = current.right;
+  }
+  return current;
+};
+
+
+/**
+ * @return {SplayTree.Node} Node having the maximum key value that
+ *     is less than the specified key value.
+ */
+SplayTree.prototype.findGreatestLessThan = function(key) {
+  if (this.isEmpty()) {
+    return null;
+  }
+  // Splay on the key to move the node with the given key or the last
+  // node on the search path to the top of the tree.
+  this.splay_(key);
+  // Now the result is either the root node or the greatest node in
+  // the left subtree.
+  if (this.root_.key < key) {
+    return this.root_;
+  } else if (this.root_.left) {
+    return this.findMax(this.root_.left);
+  } else {
+    return null;
+  }
+};
+
+
+/**
+ * @return {Array<*>} An array containing all the keys of tree's nodes.
+ */
+SplayTree.prototype.exportKeys = function() {
+  var result = [];
+  if (!this.isEmpty()) {
+    this.root_.traverse_(function(node) { result.push(node.key); });
+  }
+  return result;
+};
+
+
+/**
+ * Perform the splay operation for the given key. Moves the node with
+ * the given key to the top of the tree.  If no node has the given
+ * key, the last node on the search path is moved to the top of the
+ * tree. This is the simplified top-down splaying algorithm from:
+ * "Self-adjusting Binary Search Trees" by Sleator and Tarjan
+ *
+ * @param {number} key Key to splay the tree on.
+ * @private
+ */
+SplayTree.prototype.splay_ = function(key) {
+  if (this.isEmpty()) {
+    return;
+  }
+  // Create a dummy node.  The use of the dummy node is a bit
+  // counter-intuitive: The right child of the dummy node will hold
+  // the L tree of the algorithm.  The left child of the dummy node
+  // will hold the R tree of the algorithm.  Using a dummy node, left
+  // and right will always be nodes and we avoid special cases.
+  var dummy, left, right;
+  dummy = left = right = new SplayTree.Node(null, null);
+  var current = this.root_;
+  while (true) {
+    if (key < current.key) {
+      if (!current.left) {
+        break;
+      }
+      if (key < current.left.key) {
+        // Rotate right.
+        var tmp = current.left;
+        current.left = tmp.right;
+        tmp.right = current;
+        current = tmp;
+        if (!current.left) {
+          break;
+        }
+      }
+      // Link right.
+      right.left = current;
+      right = current;
+      current = current.left;
+    } else if (key > current.key) {
+      if (!current.right) {
+        break;
+      }
+      if (key > current.right.key) {
+        // Rotate left.
+        var tmp = current.right;
+        current.right = tmp.left;
+        tmp.left = current;
+        current = tmp;
+        if (!current.right) {
+          break;
+        }
+      }
+      // Link left.
+      left.right = current;
+      left = current;
+      current = current.right;
+    } else {
+      break;
+    }
+  }
+  // Assemble.
+  left.right = current.left;
+  right.left = current.right;
+  current.left = dummy.right;
+  current.right = dummy.left;
+  this.root_ = current;
+};
+
+
+/**
+ * Constructs a Splay tree node.
+ *
+ * @param {number} key Key.
+ * @param {*} value Value.
+ */
+SplayTree.Node = function(key, value) {
+  this.key = key;
+  this.value = value;
+};
+
+
+/**
+ * @type {SplayTree.Node}
+ */
+SplayTree.Node.prototype.left = null;
+
+
+/**
+ * @type {SplayTree.Node}
+ */
+SplayTree.Node.prototype.right = null;
+
+
+/**
+ * Performs an ordered traversal of the subtree starting at
+ * this SplayTree.Node.
+ *
+ * @param {function(SplayTree.Node)} f Visitor function.
+ * @private
+ */
+SplayTree.Node.prototype.traverse_ = function(f) {
+  var current = this;
+  while (current) {
+    var left = current.left;
+    if (left) left.traverse_(f);
+    f(current);
+    current = current.right;
+  }
+};
+
+function report(msg)
+{
+}
+
+function start(resultObject)
+{
+    SplaySetup();
+    var samples = [];
+    var before = performance.now();
+    for (var i = 0; i < 10000; ++i) {
+        SplayRun();
+        var after = performance.now();
+        samples.push(after - before);
+        before = after;
+    }
+    SplayTearDown();
+    
+    var scatterData = [];
+    for (var i = 0; i < samples.length; ++i)
+        scatterData.push({x: i + 1, y: samples[i]});
+    
+    report("JetStream-like Latency Score: " + Math.round(4000 / BenchmarkSuite.AverageAbovePercentile(samples, 99.5)));
+    
+    var sumOfSquares = 0;
+    for (var i = 0; i < samples.length; ++i)
+        sumOfSquares += samples[i] * samples[i];
+    report("Octane-like Latency Score: " + Math.round(27395.14 / Math.sqrt(sumOfSquares / samples.length)));
+    
+    for (var percentile of [99.5, 95, 87, 75, 50, 0])
+        report("Average above " + percentile + "%: " + BenchmarkSuite.AverageAbovePercentile(samples, percentile));
+    
+    resultObject.value = BenchmarkSuite.AverageAbovePercentile(samples, 99.5);
+}
+
+start(arguments[0]);
index 17074e4..dd4bab7 100644 (file)
@@ -28,8 +28,7 @@
 
 #include "APICast.h"
 #include "JSCInlines.h"
-#include "MarkingConstraint.h"
-#include "VisitingTimeout.h"
+#include "SimpleMarkingConstraint.h"
 
 using namespace JSC;
 
@@ -72,11 +71,11 @@ void JSContextGroupAddMarkingConstraint(JSContextGroupRef group, JSMarkingConstr
     // else gets marked.
     ConstraintVolatility volatility = ConstraintVolatility::GreyedByMarking;
     
-    auto constraint = std::make_unique<MarkingConstraint>(
+    auto constraint = std::make_unique<SimpleMarkingConstraint>(
         toCString("Amc", constraintIndex, "(", RawPointer(bitwise_cast<void*>(constraintCallback)), ")"),
         toCString("API Marking Constraint #", constraintIndex, " (", RawPointer(bitwise_cast<void*>(constraintCallback)), ", ", RawPointer(userData), ")"),
         [constraintCallback, userData]
-        (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        (SlotVisitor& slotVisitor) {
             Marker marker;
             marker.IsMarked = isMarked;
             marker.Mark = mark;
@@ -84,7 +83,8 @@ void JSContextGroupAddMarkingConstraint(JSContextGroupRef group, JSMarkingConstr
             
             constraintCallback(&marker, userData);
         },
-        volatility);
+        volatility,
+        ConstraintConcurrency::Sequential);
     
     vm.heap.addMarkingConstraint(WTFMove(constraint));
 }
index 5145ad9..e7c9af8 100644 (file)
@@ -283,9 +283,8 @@ static void scanExternalObjectGraph(JSC::VM& vm, JSC::SlotVisitor& visitor, void
         while (!stack.isEmpty()) {
             void* nextRoot = stack.last();
             stack.removeLast();
-            if (visitor.containsOpaqueRootTriState(nextRoot) == TrueTriState)
+            if (!visitor.addOpaqueRoot(nextRoot))
                 continue;
-            visitor.addOpaqueRoot(nextRoot);
 
             auto appendOwnedObjects = [&] {
                 NSMapTable *ownedObjects = [externalObjectGraph objectForKey:static_cast<id>(nextRoot)];
@@ -327,8 +326,6 @@ void scanExternalRememberedSet(JSC::VM& vm, JSC::SlotVisitor& visitor)
         }
         [externalRememberedSet removeAllObjects];
     }
-
-    visitor.mergeIfNecessary();
 }
 
 #endif // JSC_OBJC_API_ENABLED
index 64d9efc..4b42084 100644 (file)
@@ -1,3 +1,204 @@
+2017-12-01  Filip Pizlo  <fpizlo@apple.com>
+
+        GC constraint solving should be parallel
+        https://bugs.webkit.org/show_bug.cgi?id=179934
+
+        Reviewed by JF Bastien.
+        
+        This makes it possible to do constraint solving in parallel. This looks like a 1% Speedometer
+        speed-up. It's more than 1% on trunk-Speedometer.
+        
+        The constraint solver supports running constraints in parallel in two different ways:
+        
+        - Run multiple constraints in parallel to each other. This only works for constraints that can
+          tolerate other constraints running concurrently to them (constraint.concurrency() ==
+          ConstraintConcurrency::Concurrent). This is the most basic kind of parallelism that the
+          constraint solver supports. All constraints except the JSC SPI constraints are concurrent. We
+          could probably make them concurrent, but I'm playing it safe for now.
+        
+        - A constraint can create parallel work for itself, which the constraint solver will interleave
+          with other stuff. A constraint can report that it has parallel work by returning
+          ConstraintParallelism::Parallel from its executeImpl() function. Then the solver will allow that
+          constraint's doParallelWorkImpl() function to run on as many GC marker threads as are available,
+          for as long as that function wants to run.
+        
+        It's not possible to have a non-concurrent constraint that creates parallel work.
+        
+        The parallelism is implemented in terms of the existing GC marker threads. This turns out to be
+        most natural for two reasons:
+        
+        - No need to start any other threads.
+        
+        - The constraints all want to be passed a SlotVisitor. Running on the marker threads means having
+          access to those threads' SlotVisitors. Also, it means less load balancing. The solver will
+          create work on each marking thread's SlotVisitor. When the solver is done "stealing" a marker
+          thread, that thread will have work it can start doing immediately. Before this change, we had to
+          contribute the work found by the constraint solver to the global worklist so that it could be
+          distributed to the marker threads by load balancing. This change probably helps to avoid that
+          load balancing step.
+        
+        A lot of this change is about making it easy to iterate GC data structures in parallel. This
+        change makes almost all constraints parallel-enabled, but only the DOM's output constraint uses
+        the parallel work API. That constraint iterates the marked cells in two subspaces. This change
+        makes it very easy to compose parallel iterators over subspaces, allocators, blocks, and cells.
+        The marked cell parallel iterator is composed out of parallel iterators for the others. A parallel
+        iterator is just an iterator that can do an atomic next() very quickly. We abstract them using
+        RefPtr<SharedTask<...()>>, where ... is the type returned from the iterator. We know it's done
+        when it returns a falsish version of ... (in the current code, that's always a pointer type, so
+        done is indicated by null).
+        
+        * API/JSMarkingConstraintPrivate.cpp:
+        (JSContextGroupAddMarkingConstraint):
+        * API/JSVirtualMachine.mm:
+        (scanExternalObjectGraph):
+        (scanExternalRememberedSet):
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * Sources.txt:
+        * bytecode/AccessCase.cpp:
+        (JSC::AccessCase::propagateTransitions const):
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::visitWeakly):
+        (JSC::CodeBlock::shouldJettisonDueToOldAge):
+        (JSC::shouldMarkTransition):
+        (JSC::CodeBlock::propagateTransitions):
+        (JSC::CodeBlock::determineLiveness):
+        * dfg/DFGWorklist.cpp:
+        * ftl/FTLCompile.cpp:
+        (JSC::FTL::compile):
+        * heap/ConstraintParallelism.h: Added.
+        (WTF::printInternal):
+        * heap/Heap.cpp:
+        (JSC::Heap::Heap):
+        (JSC::Heap::addToRememberedSet):
+        (JSC::Heap::runFixpointPhase):
+        (JSC::Heap::stopThePeriphery):
+        (JSC::Heap::resumeThePeriphery):
+        (JSC::Heap::addCoreConstraints):
+        (JSC::Heap::setBonusVisitorTask):
+        (JSC::Heap::runTaskInParallel):
+        (JSC::Heap::forEachSlotVisitor): Deleted.
+        * heap/Heap.h:
+        (JSC::Heap::worldIsRunning const):
+        (JSC::Heap::runFunctionInParallel):
+        * heap/HeapInlines.h:
+        (JSC::Heap::worldIsStopped const):
+        (JSC::Heap::isMarked):
+        (JSC::Heap::incrementDeferralDepth):
+        (JSC::Heap::decrementDeferralDepth):
+        (JSC::Heap::decrementDeferralDepthAndGCIfNeeded):
+        (JSC::Heap::forEachSlotVisitor):
+        (JSC::Heap::collectorBelievesThatTheWorldIsStopped const): Deleted.
+        (JSC::Heap::isMarkedConcurrently): Deleted.
+        * heap/HeapSnapshotBuilder.cpp:
+        (JSC::HeapSnapshotBuilder::appendNode):
+        * heap/LargeAllocation.h:
+        (JSC::LargeAllocation::isMarked):
+        (JSC::LargeAllocation::isMarkedConcurrently): Deleted.
+        * heap/LockDuringMarking.h:
+        (JSC::lockDuringMarking):
+        * heap/MarkedAllocator.cpp:
+        (JSC::MarkedAllocator::parallelNotEmptyBlockSource):
+        * heap/MarkedAllocator.h:
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::aboutToMark):
+        (JSC::MarkedBlock::isMarked):
+        (JSC::MarkedBlock::areMarksStaleWithDependency): Deleted.
+        (JSC::MarkedBlock::isMarkedConcurrently): Deleted.
+        * heap/MarkedSpace.h:
+        (JSC::MarkedSpace::activeWeakSetsBegin):
+        (JSC::MarkedSpace::activeWeakSetsEnd):
+        (JSC::MarkedSpace::newActiveWeakSetsBegin):
+        (JSC::MarkedSpace::newActiveWeakSetsEnd):
+        * heap/MarkingConstraint.cpp:
+        (JSC::MarkingConstraint::MarkingConstraint):
+        (JSC::MarkingConstraint::execute):
+        (JSC::MarkingConstraint::quickWorkEstimate):
+        (JSC::MarkingConstraint::workEstimate):
+        (JSC::MarkingConstraint::doParallelWork):
+        (JSC::MarkingConstraint::finishParallelWork):
+        (JSC::MarkingConstraint::doParallelWorkImpl):
+        (JSC::MarkingConstraint::finishParallelWorkImpl):
+        * heap/MarkingConstraint.h:
+        (JSC::MarkingConstraint::lastExecuteParallelism const):
+        (JSC::MarkingConstraint::parallelism const):
+        (JSC::MarkingConstraint::quickWorkEstimate): Deleted.
+        (JSC::MarkingConstraint::workEstimate): Deleted.
+        * heap/MarkingConstraintSet.cpp:
+        (JSC::MarkingConstraintSet::MarkingConstraintSet):
+        (JSC::MarkingConstraintSet::add):
+        (JSC::MarkingConstraintSet::executeConvergence):
+        (JSC::MarkingConstraintSet::executeConvergenceImpl):
+        (JSC::MarkingConstraintSet::executeAll):
+        (JSC::MarkingConstraintSet::ExecutionContext::ExecutionContext): Deleted.
+        (JSC::MarkingConstraintSet::ExecutionContext::didVisitSomething const): Deleted.
+        (JSC::MarkingConstraintSet::ExecutionContext::shouldTimeOut const): Deleted.
+        (JSC::MarkingConstraintSet::ExecutionContext::drain): Deleted.
+        (JSC::MarkingConstraintSet::ExecutionContext::didExecute const): Deleted.
+        (JSC::MarkingConstraintSet::ExecutionContext::execute): Deleted.
+        (): Deleted.
+        * heap/MarkingConstraintSet.h:
+        * heap/MarkingConstraintSolver.cpp: Added.
+        (JSC::MarkingConstraintSolver::MarkingConstraintSolver):
+        (JSC::MarkingConstraintSolver::~MarkingConstraintSolver):
+        (JSC::MarkingConstraintSolver::didVisitSomething const):
+        (JSC::MarkingConstraintSolver::execute):
+        (JSC::MarkingConstraintSolver::drain):
+        (JSC::MarkingConstraintSolver::converge):
+        (JSC::MarkingConstraintSolver::runExecutionThread):
+        (JSC::MarkingConstraintSolver::didExecute):
+        * heap/MarkingConstraintSolver.h: Added.
+        * heap/OpaqueRootSet.h: Removed.
+        * heap/ParallelSourceAdapter.h: Added.
+        (JSC::ParallelSourceAdapter::ParallelSourceAdapter):
+        (JSC::createParallelSourceAdapter):
+        * heap/SimpleMarkingConstraint.cpp: Added.
+        (JSC::SimpleMarkingConstraint::SimpleMarkingConstraint):
+        (JSC::SimpleMarkingConstraint::~SimpleMarkingConstraint):
+        (JSC::SimpleMarkingConstraint::quickWorkEstimate):
+        (JSC::SimpleMarkingConstraint::executeImpl):
+        * heap/SimpleMarkingConstraint.h: Added.
+        * heap/SlotVisitor.cpp:
+        (JSC::SlotVisitor::didStartMarking):
+        (JSC::SlotVisitor::reset):
+        (JSC::SlotVisitor::appendToMarkStack):
+        (JSC::SlotVisitor::visitChildren):
+        (JSC::SlotVisitor::updateMutatorIsStopped):
+        (JSC::SlotVisitor::mutatorIsStoppedIsUpToDate const):
+        (JSC::SlotVisitor::drain):
+        (JSC::SlotVisitor::performIncrementOfDraining):
+        (JSC::SlotVisitor::didReachTermination):
+        (JSC::SlotVisitor::hasWork):
+        (JSC::SlotVisitor::drainFromShared):
+        (JSC::SlotVisitor::drainInParallelPassively):
+        (JSC::SlotVisitor::waitForTermination):
+        (JSC::SlotVisitor::addOpaqueRoot): Deleted.
+        (JSC::SlotVisitor::containsOpaqueRoot const): Deleted.
+        (JSC::SlotVisitor::containsOpaqueRootTriState const): Deleted.
+        (JSC::SlotVisitor::mergeIfNecessary): Deleted.
+        (JSC::SlotVisitor::mergeOpaqueRootsIfProfitable): Deleted.
+        (JSC::SlotVisitor::mergeOpaqueRoots): Deleted.
+        * heap/SlotVisitor.h:
+        * heap/SlotVisitorInlines.h:
+        (JSC::SlotVisitor::addOpaqueRoot):
+        (JSC::SlotVisitor::containsOpaqueRoot const):
+        (JSC::SlotVisitor::vm):
+        (JSC::SlotVisitor::vm const):
+        * heap/Subspace.cpp:
+        (JSC::Subspace::parallelAllocatorSource):
+        (JSC::Subspace::parallelNotEmptyMarkedBlockSource):
+        * heap/Subspace.h:
+        * heap/SubspaceInlines.h:
+        (JSC::Subspace::forEachMarkedCellInParallel):
+        * heap/VisitCounter.h: Added.
+        (JSC::VisitCounter::VisitCounter):
+        (JSC::VisitCounter::visitCount const):
+        * heap/VisitingTimeout.h: Removed.
+        * heap/WeakBlock.cpp:
+        (JSC::WeakBlock::specializedVisit):
+        * runtime/Structure.cpp:
+        (JSC::Structure::isCheapDuringGC):
+        (JSC::Structure::markIfCheap):
+
 2017-12-04  JF Bastien  <jfbastien@apple.com>
 
         Math: don't redundantly check for exceptions, just release scope
index cc2cec5..b7a6a14 100644
                0F1E3A471534CBB9000F9456 /* DFGDoubleFormatState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1E3A441534CBAD000F9456 /* DFGDoubleFormatState.h */; };
                0F1E3A67153A21E2000F9456 /* DFGSilentRegisterSavePlan.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1E3A65153A21DF000F9456 /* DFGSilentRegisterSavePlan.h */; };
                0F1FB38F1E173A6700A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */; };
-               0F1FB3931E177A7200A9BE50 /* VisitingTimeout.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3921E177A6F00A9BE50 /* VisitingTimeout.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F1FB3961E1AF7E100A9BE50 /* DFGPlanInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3941E1AF7DF00A9BE50 /* DFGPlanInlines.h */; };
                0F1FB3971E1AF7E300A9BE50 /* DFGWorklistInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3951E1AF7DF00A9BE50 /* DFGWorklistInlines.h */; };
                0F1FB3991E1F65FB00A9BE50 /* MutatorScheduler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3981E1F65F900A9BE50 /* MutatorScheduler.h */; };
                0F40E4A71C497F7400A577FA /* AirOpcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6183321C45F35C0072450B /* AirOpcode.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F40E4A81C497F7400A577FA /* AirOpcodeGenerated.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6183341C45F3B60072450B /* AirOpcodeGenerated.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F40E4A91C497F7400A577FA /* AirOpcodeUtils.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6183351C45F3B60072450B /* AirOpcodeUtils.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F41545B1FD20B22001B58F6 /* ConstraintConcurrency.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F41545A1FD20B1F001B58F6 /* ConstraintConcurrency.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F426A481460CBB300131F8F /* ValueRecovery.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F426A451460CBAB00131F8F /* ValueRecovery.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F426A491460CBB700131F8F /* VirtualRegister.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F426A461460CBAB00131F8F /* VirtualRegister.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F426A4B1460CD6E00131F8F /* DataFormat.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F426A4A1460CD6B00131F8F /* DataFormat.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F4A38FA1C8E13DF00190318 /* SuperSampler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4A38F81C8E13DF00190318 /* SuperSampler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F4B94DC17B9F07500DD03A4 /* TypedArrayInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4B94DB17B9F07500DD03A4 /* TypedArrayInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F4C91661C29F4F2004341A6 /* B3OriginDump.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4C91651C29F4F2004341A6 /* B3OriginDump.h */; };
+               0F4D8C741FC7A97A001D32AC /* VisitCounter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4D8C721FC7A973001D32AC /* VisitCounter.h */; };
+               0F4D8C751FC7A97D001D32AC /* ConstraintParallelism.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4D8C731FC7A974001D32AC /* ConstraintParallelism.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4D8C781FCA3CFA001D32AC /* SimpleMarkingConstraint.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4D8C771FCA3CF3001D32AC /* SimpleMarkingConstraint.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F4DE1CF1C4C1B54004D6C11 /* AirFixObviousSpills.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4DE1CD1C4C1B54004D6C11 /* AirFixObviousSpills.h */; };
                0F4F29E018B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4F29DE18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h */; };
                0F4F82881E2FFDE00075184C /* JSSegmentedVariableObjectHeapCellType.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4F82861E2FFDDB0075184C /* JSSegmentedVariableObjectHeapCellType.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F63945515D07057006A597C /* ArrayProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F63945215D07051006A597C /* ArrayProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F63947815DCE34B006A597C /* DFGStructureAbstractValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F63947615DCE347006A597C /* DFGStructureAbstractValue.h */; };
                0F63948515E4811B006A597C /* DFGArrayMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F63948215E48114006A597C /* DFGArrayMode.h */; };
+               0F6453181FD246A7002432A1 /* MarkStackMergingConstraint.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6453151FD246A0002432A1 /* MarkStackMergingConstraint.h */; };
                0F64B2721A784BAF006E4E66 /* BinarySwitch.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64B2701A784BAF006E4E66 /* BinarySwitch.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F64B27A1A7957B2006E4E66 /* CallEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64B2781A7957B2006E4E66 /* CallEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F64EAF31C4ECD0600621E9B /* AirArgInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64EAF21C4ECD0600621E9B /* AirArgInlines.h */; };
                0F9D36951AE9CC33000D4DFB /* DFGCleanUpPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9D36931AE9CC33000D4DFB /* DFGCleanUpPhase.h */; };
                0F9D4C0D1C3E1C11006CD984 /* FTLExceptionTarget.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9D4C0B1C3E1C11006CD984 /* FTLExceptionTarget.h */; };
                0F9D4C111C3E2C74006CD984 /* FTLPatchpointExceptionHandle.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9D4C0F1C3E2C74006CD984 /* FTLPatchpointExceptionHandle.h */; };
+               0F9DAA091FD1C3CF0079C5B2 /* MarkingConstraintSolver.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9DAA071FD1C3C80079C5B2 /* MarkingConstraintSolver.h */; };
+               0F9DAA0A1FD1C3D30079C5B2 /* ParallelSourceAdapter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9DAA081FD1C3C80079C5B2 /* ParallelSourceAdapter.h */; };
                0F9E32641B05AB0400801ED5 /* DFGStoreBarrierInsertionPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9E32621B05AB0400801ED5 /* DFGStoreBarrierInsertionPhase.h */; };
                0F9FB4F517FCB91700CB67F8 /* DFGStackLayoutPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9FB4F317FCB91700CB67F8 /* DFGStackLayoutPhase.h */; };
                0F9FC8C514E1B60400D52AE0 /* PutKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F9FC8C114E1B5FB00D52AE0 /* PutKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
                AD9E852F1E8A0C7C008DE39E /* JSWebAssemblyCodeBlock.h in Headers */ = {isa = PBXBuildFile; fileRef = AD9E852E1E8A0C6E008DE39E /* JSWebAssemblyCodeBlock.h */; settings = {ATTRIBUTES = (Private, ); }; };
                ADBC54D51DF8EA2B005BF738 /* WebAssemblyToJSCallee.h in Headers */ = {isa = PBXBuildFile; fileRef = ADBC54D31DF8EA00005BF738 /* WebAssemblyToJSCallee.h */; };
                ADD8FA461EB3079700DF542F /* WasmNameSectionParser.h in Headers */ = {isa = PBXBuildFile; fileRef = ADD8FA431EB3077100DF542F /* WasmNameSectionParser.h */; };
-               ADDB1F6318D77DBE009B58A8 /* OpaqueRootSet.h in Headers */ = {isa = PBXBuildFile; fileRef = ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */; settings = {ATTRIBUTES = (Private, ); }; };
                ADE802991E08F1DE0058DE78 /* JSWebAssemblyLinkError.h in Headers */ = {isa = PBXBuildFile; fileRef = ADE802941E08F1C90058DE78 /* JSWebAssemblyLinkError.h */; settings = {ATTRIBUTES = (Private, ); }; };
                ADE8029A1E08F1DE0058DE78 /* WebAssemblyLinkErrorConstructor.h in Headers */ = {isa = PBXBuildFile; fileRef = ADE802951E08F1C90058DE78 /* WebAssemblyLinkErrorConstructor.h */; settings = {ATTRIBUTES = (Private, ); }; };
                ADE8029C1E08F1DE0058DE78 /* WebAssemblyLinkErrorPrototype.h in Headers */ = {isa = PBXBuildFile; fileRef = ADE802971E08F1C90058DE78 /* WebAssemblyLinkErrorPrototype.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F1FB38A1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SynchronousStopTheWorldMutatorScheduler.cpp; sourceTree = "<group>"; };
                0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SynchronousStopTheWorldMutatorScheduler.h; sourceTree = "<group>"; };
                0F1FB38C1E173A6200A9BE50 /* MutatorScheduler.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MutatorScheduler.cpp; sourceTree = "<group>"; };
-               0F1FB3921E177A6F00A9BE50 /* VisitingTimeout.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VisitingTimeout.h; sourceTree = "<group>"; };
                0F1FB3941E1AF7DF00A9BE50 /* DFGPlanInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGPlanInlines.h; path = dfg/DFGPlanInlines.h; sourceTree = "<group>"; };
                0F1FB3951E1AF7DF00A9BE50 /* DFGWorklistInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGWorklistInlines.h; path = dfg/DFGWorklistInlines.h; sourceTree = "<group>"; };
                0F1FB3981E1F65F900A9BE50 /* MutatorScheduler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MutatorScheduler.h; sourceTree = "<group>"; };
                0F3BD1B61B896A0700598AA6 /* DFGInsertionSet.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGInsertionSet.cpp; path = dfg/DFGInsertionSet.cpp; sourceTree = "<group>"; };
                0F3C1F181B868E7900ABB08B /* DFGClobbersExitState.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGClobbersExitState.cpp; path = dfg/DFGClobbersExitState.cpp; sourceTree = "<group>"; };
                0F3C1F191B868E7900ABB08B /* DFGClobbersExitState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGClobbersExitState.h; path = dfg/DFGClobbersExitState.h; sourceTree = "<group>"; };
+               0F41545A1FD20B1F001B58F6 /* ConstraintConcurrency.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConstraintConcurrency.h; sourceTree = "<group>"; };
                0F426A451460CBAB00131F8F /* ValueRecovery.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ValueRecovery.h; sourceTree = "<group>"; };
                0F426A461460CBAB00131F8F /* VirtualRegister.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VirtualRegister.h; sourceTree = "<group>"; };
                0F426A4A1460CD6B00131F8F /* DataFormat.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DataFormat.h; sourceTree = "<group>"; };
                0F4A38F81C8E13DF00190318 /* SuperSampler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SuperSampler.h; sourceTree = "<group>"; };
                0F4B94DB17B9F07500DD03A4 /* TypedArrayInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TypedArrayInlines.h; sourceTree = "<group>"; };
                0F4C91651C29F4F2004341A6 /* B3OriginDump.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3OriginDump.h; path = b3/B3OriginDump.h; sourceTree = "<group>"; };
+               0F4D8C721FC7A973001D32AC /* VisitCounter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VisitCounter.h; sourceTree = "<group>"; };
+               0F4D8C731FC7A974001D32AC /* ConstraintParallelism.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConstraintParallelism.h; sourceTree = "<group>"; };
+               0F4D8C761FCA3CF2001D32AC /* SimpleMarkingConstraint.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SimpleMarkingConstraint.cpp; sourceTree = "<group>"; };
+               0F4D8C771FCA3CF3001D32AC /* SimpleMarkingConstraint.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SimpleMarkingConstraint.h; sourceTree = "<group>"; };
                0F4DE1CC1C4C1B54004D6C11 /* AirFixObviousSpills.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AirFixObviousSpills.cpp; path = b3/air/AirFixObviousSpills.cpp; sourceTree = "<group>"; };
                0F4DE1CD1C4C1B54004D6C11 /* AirFixObviousSpills.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AirFixObviousSpills.h; path = b3/air/AirFixObviousSpills.h; sourceTree = "<group>"; };
                0F4DE1D01C4D764B004D6C11 /* B3OriginDump.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = B3OriginDump.cpp; path = b3/B3OriginDump.cpp; sourceTree = "<group>"; };
                0F63947615DCE347006A597C /* DFGStructureAbstractValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStructureAbstractValue.h; path = dfg/DFGStructureAbstractValue.h; sourceTree = "<group>"; };
                0F63948115E48114006A597C /* DFGArrayMode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGArrayMode.cpp; path = dfg/DFGArrayMode.cpp; sourceTree = "<group>"; };
                0F63948215E48114006A597C /* DFGArrayMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGArrayMode.h; path = dfg/DFGArrayMode.h; sourceTree = "<group>"; };
+               0F6453151FD246A0002432A1 /* MarkStackMergingConstraint.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkStackMergingConstraint.h; sourceTree = "<group>"; };
+               0F6453161FD246A0002432A1 /* MarkStackMergingConstraint.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MarkStackMergingConstraint.cpp; sourceTree = "<group>"; };
                0F64B26F1A784BAF006E4E66 /* BinarySwitch.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BinarySwitch.cpp; sourceTree = "<group>"; };
                0F64B2701A784BAF006E4E66 /* BinarySwitch.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BinarySwitch.h; sourceTree = "<group>"; };
                0F64B2771A7957B2006E4E66 /* CallEdge.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallEdge.cpp; sourceTree = "<group>"; };
                0F9D4C0B1C3E1C11006CD984 /* FTLExceptionTarget.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLExceptionTarget.h; path = ftl/FTLExceptionTarget.h; sourceTree = "<group>"; };
                0F9D4C0E1C3E2C74006CD984 /* FTLPatchpointExceptionHandle.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLPatchpointExceptionHandle.cpp; path = ftl/FTLPatchpointExceptionHandle.cpp; sourceTree = "<group>"; };
                0F9D4C0F1C3E2C74006CD984 /* FTLPatchpointExceptionHandle.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLPatchpointExceptionHandle.h; path = ftl/FTLPatchpointExceptionHandle.h; sourceTree = "<group>"; };
+               0F9DAA061FD1C3C80079C5B2 /* MarkingConstraintSolver.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MarkingConstraintSolver.cpp; sourceTree = "<group>"; };
+               0F9DAA071FD1C3C80079C5B2 /* MarkingConstraintSolver.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkingConstraintSolver.h; sourceTree = "<group>"; };
+               0F9DAA081FD1C3C80079C5B2 /* ParallelSourceAdapter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ParallelSourceAdapter.h; sourceTree = "<group>"; };
                0F9E32611B05AB0400801ED5 /* DFGStoreBarrierInsertionPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStoreBarrierInsertionPhase.cpp; path = dfg/DFGStoreBarrierInsertionPhase.cpp; sourceTree = "<group>"; };
                0F9E32621B05AB0400801ED5 /* DFGStoreBarrierInsertionPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStoreBarrierInsertionPhase.h; path = dfg/DFGStoreBarrierInsertionPhase.h; sourceTree = "<group>"; };
                0F9FB4F217FCB91700CB67F8 /* DFGStackLayoutPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStackLayoutPhase.cpp; path = dfg/DFGStackLayoutPhase.cpp; sourceTree = "<group>"; };
                ADD09AF31F62482E001313C2 /* JSWebAssembly.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = JSWebAssembly.h; path = js/JSWebAssembly.h; sourceTree = "<group>"; };
                ADD8FA431EB3077100DF542F /* WasmNameSectionParser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmNameSectionParser.h; sourceTree = "<group>"; };
                ADD8FA441EB3077100DF542F /* WasmNameSectionParser.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WasmNameSectionParser.cpp; sourceTree = "<group>"; };
-               ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = OpaqueRootSet.h; sourceTree = "<group>"; };
                ADE802931E08F1C90058DE78 /* JSWebAssemblyLinkError.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = JSWebAssemblyLinkError.cpp; path = js/JSWebAssemblyLinkError.cpp; sourceTree = "<group>"; };
                ADE802941E08F1C90058DE78 /* JSWebAssemblyLinkError.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = JSWebAssemblyLinkError.h; path = js/JSWebAssemblyLinkError.h; sourceTree = "<group>"; };
                ADE802951E08F1C90058DE78 /* WebAssemblyLinkErrorConstructor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = WebAssemblyLinkErrorConstructor.h; path = js/WebAssemblyLinkErrorConstructor.h; sourceTree = "<group>"; };
                                0FDCE1281FAFA859006F3901 /* CompleteSubspace.h */,
                                146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */,
                                149DAAF212EB559D0083B12B /* ConservativeRoots.h */,
+                               0F41545A1FD20B1F001B58F6 /* ConstraintConcurrency.h */,
+                               0F4D8C731FC7A974001D32AC /* ConstraintParallelism.h */,
                                0F7DF12F1E2970D50095951B /* ConstraintVolatility.h */,
                                2A7A58EE1808A4C40020BDF7 /* DeferGC.cpp */,
                                0F136D4B174AD69B0075B354 /* DeferGC.h */,
                                0F660E341E0517B70031462C /* MarkingConstraint.h */,
                                0F660E351E0517B70031462C /* MarkingConstraintSet.cpp */,
                                0F660E361E0517B80031462C /* MarkingConstraintSet.h */,
+                               0F9DAA061FD1C3C80079C5B2 /* MarkingConstraintSolver.cpp */,
+                               0F9DAA071FD1C3C80079C5B2 /* MarkingConstraintSolver.h */,
                                142D6F0E13539A4100B02E86 /* MarkStack.cpp */,
                                142D6F0F13539A4100B02E86 /* MarkStack.h */,
+                               0F6453161FD246A0002432A1 /* MarkStackMergingConstraint.cpp */,
+                               0F6453151FD246A0002432A1 /* MarkStackMergingConstraint.h */,
                                0F1FB38C1E173A6200A9BE50 /* MutatorScheduler.cpp */,
                                0F1FB3981E1F65F900A9BE50 /* MutatorScheduler.h */,
                                0FA762021DB9242300B7A2FD /* MutatorState.cpp */,
                                0FA762031DB9242300B7A2FD /* MutatorState.h */,
-                               ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
+                               0F9DAA081FD1C3C80079C5B2 /* ParallelSourceAdapter.h */,
                                0FBB73B61DEF3AAC002C009E /* PreventCollectionScope.h */,
                                0FD0E5EF1E46BF230006AB08 /* RegisterState.h */,
                                0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */,
                                0F2C63A91E4FA42C00C13839 /* RunningScope.h */,
+                               0F4D8C761FCA3CF2001D32AC /* SimpleMarkingConstraint.cpp */,
+                               0F4D8C771FCA3CF3001D32AC /* SimpleMarkingConstraint.h */,
                                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
                                14BA78F013AAB88F005B7C2C /* SlotVisitor.h */,
                                0FCB408515C0A3C30048932B /* SlotVisitorInlines.h */,
                                0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */,
                                141448CC13A1783700F5BA1A /* TinyBloomFilter.h */,
                                0F5F08CE146C762F000472A9 /* UnconditionalFinalizer.h */,
-                               0F1FB3921E177A6F00A9BE50 /* VisitingTimeout.h */,
+                               0F4D8C721FC7A973001D32AC /* VisitCounter.h */,
                                0F952A9F1DF7860700E06FBD /* VisitRaceKey.cpp */,
                                0F952AA01DF7860700E06FBD /* VisitRaceKey.h */,
                                1ACF7376171CA6FB00C9BB1E /* Weak.cpp */,
                                0FEC85881BDACDC70080FF74 /* AirSpecial.h in Headers */,
                                0F5CF9891E9ED65200C18692 /* AirStackAllocation.h in Headers */,
                                0FEC858C1BDACDC70080FF74 /* AirStackSlot.h in Headers */,
+                               0F41545B1FD20B22001B58F6 /* ConstraintConcurrency.h in Headers */,
                                0F2BBD9E1C5FF4050023EF23 /* AirStackSlotKind.h in Headers */,
                                0FEC858E1BDACDC70080FF74 /* AirTmp.h in Headers */,
                                0FEC858F1BDACDC70080FF74 /* AirTmpInlines.h in Headers */,
                                6514F21918B3E1670098FF8B /* Bytecodes.h in Headers */,
                                0F885E111849A3BE00F1E3FA /* BytecodeUseDef.h in Headers */,
                                0F8023EA1613832B00A0BA45 /* ByValInfo.h in Headers */,
+                               0F4D8C751FC7A97D001D32AC /* ConstraintParallelism.h in Headers */,
                                65B8392E1BACAD360044E824 /* CachedRecovery.h in Headers */,
                                0FEC3C601F379F5300F59B6C /* CagedBarrierPtr.h in Headers */,
                                BC18C3ED0E16F5CD00B34460 /* CallData.h in Headers */,
                                A77A423E17A0BBFD00A8DB81 /* DFGAbstractHeap.h in Headers */,
                                A704D90317A0BAA8006BA554 /* DFGAbstractInterpreter.h in Headers */,
                                A704D90417A0BAA8006BA554 /* DFGAbstractInterpreterInlines.h in Headers */,
+                               0F6453181FD246A7002432A1 /* MarkStackMergingConstraint.h in Headers */,
                                0F620177143FCD3F0068B77C /* DFGAbstractValue.h in Headers */,
                                0FD3E4021B618AAF00C80E1E /* DFGAdaptiveInferredPropertyValueWatchpoint.h in Headers */,
                                0F18D3D01B55A6E0002C5C9F /* DFGAdaptiveStructureWatchpoint.h in Headers */,
                                0F2B66E517B6B5AB00A7AE3F /* JSArrayBufferConstructor.h in Headers */,
                                0F2B66E717B6B5AB00A7AE3F /* JSArrayBufferPrototype.h in Headers */,
                                0F2B66E917B6B5AB00A7AE3F /* JSArrayBufferView.h in Headers */,
+                               0F9DAA091FD1C3CF0079C5B2 /* MarkingConstraintSolver.h in Headers */,
                                0F2B66EA17B6B5AB00A7AE3F /* JSArrayBufferViewInlines.h in Headers */,
                                539FB8BA1C99DA7C00940FA1 /* JSArrayInlines.h in Headers */,
                                5B70CFDE1DB69E6600EC23F9 /* JSAsyncFunction.h in Headers */,
                                A7482E93116A7CAD003B0712 /* JSWeakObjectMapRefInternal.h in Headers */,
                                A7482B9311671147003B0712 /* JSWeakObjectMapRefPrivate.h in Headers */,
                                0F0B286B1EB8E6CF000EB5D2 /* JSWeakPrivate.h in Headers */,
+                               0F4D8C741FC7A97A001D32AC /* VisitCounter.h in Headers */,
                                709FB8681AE335C60039D069 /* JSWeakSet.h in Headers */,
                                AD5C36EB1F75AD73000BCAAF /* JSWebAssembly.h in Headers */,
                                AD9E852F1E8A0C7C008DE39E /* JSWebAssemblyCodeBlock.h in Headers */,
                                E3C295DD1ED2CBDA00D3016F /* ObjectPropertyChangeAdaptiveWatchpoint.h in Headers */,
                                0FD3E40A1B618B6600C80E1E /* ObjectPropertyCondition.h in Headers */,
                                0FD3E40C1B618B6600C80E1E /* ObjectPropertyConditionSet.h in Headers */,
+                               0F4D8C781FCA3CFA001D32AC /* SimpleMarkingConstraint.h in Headers */,
                                BC18C4460E16F5CD00B34460 /* ObjectPrototype.h in Headers */,
                                E124A8F70E555775003091F1 /* OpaqueJSString.h in Headers */,
-                               ADDB1F6318D77DBE009B58A8 /* OpaqueRootSet.h in Headers */,
                                969A079B0ED1D3AE00F1F681 /* Opcode.h in Headers */,
                                0F2BDC2C151FDE9100CD8910 /* Operands.h in Headers */,
                                A70447EA17A0BD4600F5898E /* OperandsInlines.h in Headers */,
                                0F952ABD1B487A7700C367C5 /* TrackedReferences.h in Headers */,
                                0F2B670617B6B5AB00A7AE3F /* TypedArrayAdaptors.h in Headers */,
                                0F2B670817B6B5AB00A7AE3F /* TypedArrayController.h in Headers */,
+                               0F9DAA0A1FD1C3D30079C5B2 /* ParallelSourceAdapter.h in Headers */,
                                0F4B94DC17B9F07500DD03A4 /* TypedArrayInlines.h in Headers */,
                                0F2B670917B6B5AB00A7AE3F /* TypedArrays.h in Headers */,
                                0F2B670B17B6B5AB00A7AE3F /* TypedArrayType.h in Headers */,
                                0F6C73511AC9F99F00BE1682 /* VariableWriteFireDetail.h in Headers */,
                                0FE0502D1AA9095600D33B33 /* VarOffset.h in Headers */,
                                0F426A491460CBB700131F8F /* VirtualRegister.h in Headers */,
-                               0F1FB3931E177A7200A9BE50 /* VisitingTimeout.h in Headers */,
                                0F952AA11DF7860900E06FBD /* VisitRaceKey.h in Headers */,
                                BC18C4200E16F5CD00B34460 /* VM.h in Headers */,
                                658D3A5619638268003C45D6 /* VMEntryRecord.h in Headers */,
index a71600e..d821657 100644
@@ -500,13 +500,16 @@ heap/JITStubRoutineSet.cpp
 heap/LargeAllocation.cpp
 heap/MachineStackMarker.cpp
 heap/MarkStack.cpp
+heap/MarkStackMergingConstraint.cpp
 heap/MarkedAllocator.cpp
 heap/MarkedBlock.cpp
 heap/MarkedSpace.cpp
 heap/MarkingConstraint.cpp
 heap/MarkingConstraintSet.cpp
+heap/MarkingConstraintSolver.cpp
 heap/MutatorScheduler.cpp
 heap/MutatorState.cpp
+heap/SimpleMarkingConstraint.cpp
 heap/SlotVisitor.cpp
 heap/SpaceTimeMutatorScheduler.cpp
 heap/StochasticSpaceTimeMutatorScheduler.cpp
index d405696..651a8c9 100644 (file)
@@ -320,7 +320,7 @@ bool AccessCase::propagateTransitions(SlotVisitor& visitor) const
 
     switch (m_type) {
     case Transition:
-        if (Heap::isMarkedConcurrently(m_structure->previousID()))
+        if (Heap::isMarked(m_structure->previousID()))
             visitor.appendUnbarriered(m_structure.get());
         else
             result = false;
index 1fb3e14..7d5e766 100644 (file)
@@ -972,7 +972,7 @@ void CodeBlock::visitWeakly(SlotVisitor& visitor)
     
     m_visitWeaklyHasBeenCalled = true;
 
-    if (Heap::isMarkedConcurrently(this))
+    if (Heap::isMarked(this))
         return;
 
     if (shouldVisitStrongly(locker)) {
@@ -1124,7 +1124,7 @@ static std::chrono::milliseconds timeToLive(JITCode::JITType jitType)
 
 bool CodeBlock::shouldJettisonDueToOldAge(const ConcurrentJSLocker&)
 {
-    if (Heap::isMarkedConcurrently(this))
+    if (Heap::isMarked(this))
         return false;
 
     if (UNLIKELY(Options::forceCodeBlockToJettisonDueToOldAge()))
@@ -1139,10 +1139,10 @@ bool CodeBlock::shouldJettisonDueToOldAge(const ConcurrentJSLocker&)
 #if ENABLE(DFG_JIT)
 static bool shouldMarkTransition(DFG::WeakReferenceTransition& transition)
 {
-    if (transition.m_codeOrigin && !Heap::isMarkedConcurrently(transition.m_codeOrigin.get()))
+    if (transition.m_codeOrigin && !Heap::isMarked(transition.m_codeOrigin.get()))
         return false;
     
-    if (!Heap::isMarkedConcurrently(transition.m_from.get()))
+    if (!Heap::isMarked(transition.m_from.get()))
         return false;
     
     return true;
@@ -1172,7 +1172,7 @@ void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& vis
                     m_vm->heap.structureIDTable().get(oldStructureID);
                 Structure* newStructure =
                     m_vm->heap.structureIDTable().get(newStructureID);
-                if (Heap::isMarkedConcurrently(oldStructure))
+                if (Heap::isMarked(oldStructure))
                     visitor.appendUnbarriered(newStructure);
                 else
                     allAreMarkedSoFar = false;
@@ -1246,14 +1246,14 @@ void CodeBlock::determineLiveness(const ConcurrentJSLocker&, SlotVisitor& visito
     for (unsigned i = 0; i < dfgCommon->weakReferences.size(); ++i) {
         JSCell* reference = dfgCommon->weakReferences[i].get();
         ASSERT(!jsDynamicCast<CodeBlock*>(*reference->vm(), reference));
-        if (!Heap::isMarkedConcurrently(reference)) {
+        if (!Heap::isMarked(reference)) {
             allAreLiveSoFar = false;
             break;
         }
     }
     if (allAreLiveSoFar) {
         for (unsigned i = 0; i < dfgCommon->weakStructureReferences.size(); ++i) {
-            if (!Heap::isMarkedConcurrently(dfgCommon->weakStructureReferences[i].get())) {
+            if (!Heap::isMarked(dfgCommon->weakStructureReferences[i].get())) {
                 allAreLiveSoFar = false;
                 break;
             }
index 4bbcc5d..685f042 100644 (file)
@@ -104,13 +104,13 @@ protected:
             dataLog(m_worklist, ": Compiling ", m_plan->key(), " asynchronously\n");
         
         // There's no way for the GC to be safepointing since we own rightToRun.
-        if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
+        if (m_plan->vm->heap.worldIsStopped()) {
             dataLog("Heap is stoped but here we are! (1)\n");
             RELEASE_ASSERT_NOT_REACHED();
         }
         m_plan->compileInThread(&m_data);
         if (m_plan->stage != Plan::Cancelled) {
-            if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
+            if (m_plan->vm->heap.worldIsStopped()) {
                 dataLog("Heap is stopped but here we are! (2)\n");
                 RELEASE_ASSERT_NOT_REACHED();
             }
@@ -130,7 +130,7 @@ protected:
             
             m_worklist.m_readyPlans.append(m_plan);
             
-            RELEASE_ASSERT(!m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped());
+            RELEASE_ASSERT(!m_plan->vm->heap.worldIsStopped());
             m_worklist.m_planCompiled.notifyAll();
         }
         
index d98f96c..e22b9ad 100644 (file)
@@ -71,7 +71,7 @@ void compile(State& state, Safepoint::Result& safepointResult)
 
     if (safepointResult.didGetCancelled())
         return;
-    RELEASE_ASSERT(!state.graph.m_vm.heap.collectorBelievesThatTheWorldIsStopped());
+    RELEASE_ASSERT(!state.graph.m_vm.heap.worldIsStopped());
     
     if (state.allocationFailed)
         return;
index 595330a..741792a 100644 (file)
@@ -66,12 +66,12 @@ void ConservativeRoots::grow()
 }
 
 template<typename MarkHook>
-inline void ConservativeRoots::genericAddPointer(void* p, HeapVersion markingVersion, TinyBloomFilter filter, MarkHook& markHook)
+inline void ConservativeRoots::genericAddPointer(void* p, HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, TinyBloomFilter filter, MarkHook& markHook)
 {
     markHook.mark(p);
 
     HeapUtil::findGCObjectPointersForMarking(
-        m_heap, markingVersion, filter, p,
+        m_heap, markingVersion, newlyAllocatedVersion, filter, p,
         [&] (void* p) {
             if (m_size == m_capacity)
                 grow();
@@ -95,8 +95,9 @@ void ConservativeRoots::genericAddSpan(void* begin, void* end, MarkHook& markHoo
 
     TinyBloomFilter filter = m_heap.objectSpace().blocks().filter(); // Make a local copy of filter to show the compiler it won't alias, and can be register-allocated.
     HeapVersion markingVersion = m_heap.objectSpace().markingVersion();
+    HeapVersion newlyAllocatedVersion = m_heap.objectSpace().newlyAllocatedVersion();
     for (char** it = static_cast<char**>(begin); it != static_cast<char**>(end); ++it)
-        genericAddPointer(*it, markingVersion, filter, markHook);
+        genericAddPointer(*it, markingVersion, newlyAllocatedVersion, filter, markHook);
 }
 
 class DummyMarkHook {
index e46445b..5448392 100644 (file)
@@ -50,7 +50,7 @@ private:
     static const size_t nonInlineCapacity = 8192 / sizeof(HeapCell*);
     
     template<typename MarkHook>
-    void genericAddPointer(void*, HeapVersion, TinyBloomFilter, MarkHook&);
+    void genericAddPointer(void*, HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, TinyBloomFilter, MarkHook&);
 
     template<typename MarkHook>
     void genericAddSpan(void*, void* end, MarkHook&);
diff --git a/Source/JavaScriptCore/heap/ConstraintConcurrency.h b/Source/JavaScriptCore/heap/ConstraintConcurrency.h
new file mode 100644 (file)
index 0000000..3d97404
--- /dev/null
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+enum class ConstraintConcurrency : uint8_t {
+    Sequential,
+    Concurrent
+};
+    
+} // namespace JSC
+
+namespace WTF {
+
+inline void printInternal(PrintStream& out, JSC::ConstraintConcurrency concurrency)
+{
+    switch (concurrency) {
+    case JSC::ConstraintConcurrency::Sequential:
+        out.print("Sequential");
+        return;
+    case JSC::ConstraintConcurrency::Concurrent:
+        out.print("Concurrent");
+        return;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
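Editor's note (not part of the patch): the WTF::printInternal overload above is what lets ConstraintConcurrency values flow through WTF's printing machinery, so a dump site can pass the enum straight to dataLog(). A minimal, hypothetical use, assuming <wtf/DataLog.h> is included:

    // Hypothetical helper, not in the patch: dataLog() routes the enum value
    // through the printInternal overload defined above.
    void logConcurrency(JSC::ConstraintConcurrency concurrency)
    {
        WTF::dataLog("constraint concurrency = ", concurrency, "\n"); // prints "Sequential" or "Concurrent"
    }

The same pattern applies to the ConstraintParallelism enum added in the next file.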
diff --git a/Source/JavaScriptCore/heap/ConstraintParallelism.h b/Source/JavaScriptCore/heap/ConstraintParallelism.h
new file mode 100644 (file)
index 0000000..1bca5a1
--- /dev/null
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+enum class ConstraintParallelism : uint8_t {
+    Sequential,
+    Parallel
+};
+    
+} // namespace JSC
+
+namespace WTF {
+
+inline void printInternal(PrintStream& out, JSC::ConstraintParallelism parallelism)
+{
+    switch (parallelism) {
+    case JSC::ConstraintParallelism::Sequential:
+        out.print("Sequential");
+        return;
+    case JSC::ConstraintParallelism::Parallel:
+        out.print("Parallel");
+        return;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
index 19aca9b..b4638f9 100644 (file)
@@ -209,12 +209,11 @@ inline const T GCSegmentedArray<T>::removeLast()
 template <typename T>
 inline bool GCSegmentedArray<T>::isEmpty()
 {
+    // This happens to be safe to call concurrently. It's important to preserve that capability.
     if (m_top)
         return false;
-    if (m_segments.head()->next()) {
-        ASSERT(m_segments.head()->next()->m_top == s_segmentCapacity);
+    if (m_segments.head()->next())
         return false;
-    }
     return true;
 }
 
index d4d3fec..cf5ba85 100644 (file)
@@ -49,6 +49,7 @@
 #include "JSVirtualMachineInternal.h"
 #include "JSWebAssemblyCodeBlock.h"
 #include "MachineStackMarker.h"
+#include "MarkStackMergingConstraint.h"
 #include "MarkedAllocatorInlines.h"
 #include "MarkedSpaceInlines.h"
 #include "MarkingConstraintSet.h"
@@ -66,6 +67,7 @@
 #include "TypeProfilerLog.h"
 #include "UnlinkedCodeBlock.h"
 #include "VM.h"
+#include "VisitCounter.h"
 #include "WasmMemory.h"
 #include "WeakSetInlines.h"
 #include <algorithm>
@@ -281,7 +283,7 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this, "M"))
     , m_mutatorMarkStack(std::make_unique<MarkStackArray>())
     , m_raceMarkStack(std::make_unique<MarkStackArray>())
-    , m_constraintSet(std::make_unique<MarkingConstraintSet>())
+    , m_constraintSet(std::make_unique<MarkingConstraintSet>(*this))
     , m_handleSet(vm)
     , m_codeBlocks(std::make_unique<CodeBlockSet>())
     , m_jitStubRoutines(std::make_unique<JITStubRoutineSet>())
@@ -594,7 +596,7 @@ void Heap::iterateExecutingAndCompilingCodeBlocksWithoutHoldingLocks(const Func&
         func(codeBlock);
 }
 
-void Heap::assertSharedMarkStacksEmpty()
+void Heap::assertMarkStacksEmpty()
 {
     bool ok = true;
     
@@ -608,12 +610,21 @@ void Heap::assertSharedMarkStacksEmpty()
         ok = false;
     }
     
+    forEachSlotVisitor(
+        [&] (SlotVisitor& visitor) {
+            if (visitor.isEmpty())
+                return;
+            
+            dataLog("FATAL: Visitor ", RawPointer(&visitor), " is not empty!\n");
+            ok = false;
+        });
+    
     RELEASE_ASSERT(ok);
 }
 
 void Heap::gatherStackRoots(ConservativeRoots& roots)
 {
-    m_machineThreads->gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, m_currentThreadState);
+    m_machineThreads->gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, m_currentThreadState, m_currentThread);
 }
 
 void Heap::gatherJSStackRoots(ConservativeRoots& roots)
@@ -730,7 +741,7 @@ void Heap::endMarking()
             visitor.reset();
         });
 
-    assertSharedMarkStacksEmpty();
+    assertMarkStacksEmpty();
     m_weakReferenceHarvesters.removeAll();
 
     RELEASE_ASSERT(m_raceMarkStack->isEmpty());
@@ -911,7 +922,7 @@ void Heap::addToRememberedSet(const JSCell* constCell)
     m_barriersExecuted++;
     if (m_mutatorShouldBeFenced) {
         WTF::loadLoadFence();
-        if (!isMarkedConcurrently(cell)) {
+        if (!isMarked(cell)) {
             // During a full collection a store into an unmarked object that had survived past
             // collections will manifest as a store to an unmarked PossiblyBlack object. If the
             // object gets marked at some time after this then it will go down the normal marking
@@ -924,15 +935,15 @@ void Heap::addToRememberedSet(const JSCell* constCell)
                 // Now we protect against this race:
                 //
                 //     1) Object starts out black + unmarked.
-                //     --> We do isMarkedConcurrently here.
+                //     --> We do isMarked here.
                 //     2) Object is marked and greyed.
                 //     3) Object is scanned and blacked.
                 //     --> We do atomicCompareExchangeCellStateStrong here.
                 //
                 // In this case we would have made the object white again, even though it should
                 // be black. This check lets us correct our mistake. This relies on the fact that
-                // isMarkedConcurrently converges monotonically to true.
-                if (isMarkedConcurrently(cell)) {
+                // isMarked converges monotonically to true.
+                if (isMarked(cell)) {
                     // It's difficult to work out whether the object should be grey or black at
                     // this point. We say black conservatively.
                     cell->setCellState(CellState::PossiblyBlack);
@@ -948,7 +959,7 @@ void Heap::addToRememberedSet(const JSCell* constCell)
             return;
         }
     } else
-        ASSERT(Heap::isMarkedConcurrently(cell));
+        ASSERT(Heap::isMarked(cell));
     // It could be that the object was *just* marked. This means that the collector may set the
     // state to DefinitelyGrey and then to PossiblyOldOrBlack at any time. It's OK for us to
     // race with the collector here. If we win then this is accurate because the object _will_
@@ -1091,6 +1102,7 @@ auto Heap::runCurrentPhase(GCConductor conn, CurrentThreadState* currentThreadSt
 {
     checkConn(conn);
     m_currentThreadState = currentThreadState;
+    m_currentThread = &WTF::Thread::current();
     
     if (conn == GCConductor::Mutator)
         sanitizeStackForVM(vm());
@@ -1283,13 +1295,11 @@ NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn)
     }
         
     if (slotVisitor.didReachTermination()) {
+        m_opaqueRoots.deleteOldTables();
+        
         m_scheduler->didReachTermination();
-            
-        assertSharedMarkStacksEmpty();
-            
-        slotVisitor.mergeIfNecessary();
-        for (auto& parallelVisitor : m_parallelSlotVisitors)
-            parallelVisitor->mergeIfNecessary();
+        
+        assertMarkStacksEmpty();
             
         // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,
         // we don't have to execute root constraints again unless the mutator did run. At a
             
@@ -1297,17 +1307,14 @@ NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn)
         // estimate.
         // https://bugs.webkit.org/show_bug.cgi?id=166828
             
-        // FIXME: We should take advantage of the fact that we could timeout. This only comes
-        // into play if we're executing constraints for the first time. But that will matter
-        // when we have deep stacks or a lot of DOM stuff.
-        // https://bugs.webkit.org/show_bug.cgi?id=166831
-            
         // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also
         // add their own using Heap::addMarkingConstraint().
-        bool converged =
-            m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());
+        bool converged = m_constraintSet->executeConvergence(slotVisitor);
+        
+        // FIXME: The slotVisitor.isEmpty() check is most likely not needed.
+        // https://bugs.webkit.org/show_bug.cgi?id=180310
         if (converged && slotVisitor.isEmpty()) {
-            assertSharedMarkStacksEmpty();
+            assertMarkStacksEmpty();
             return changePhase(conn, CollectorPhase::End);
         }
             
@@ -1324,6 +1331,15 @@ NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn)
         
     m_scheduler->synchronousDrainingDidStall();
 
+    // This is kinda tricky. The termination check looks at:
+    //
+    // - Whether the marking threads are active. If they are not, this means that the marking threads'
+    //   SlotVisitors are empty.
+    // - Whether the collector's slot visitor is empty.
+    // - Whether the shared mark stacks are empty.
+    //
+    // This doesn't have to check the mutator SlotVisitor because that one becomes empty after every GC
+    // work increment, so it must be empty now.
     if (slotVisitor.didReachTermination())
         return true; // This is like relooping to the top of runFixpointPhase().
         
@@ -1489,6 +1505,8 @@ NEVER_INLINE bool Heap::finishChangingPhase(GCConductor conn)
     if (false)
         dataLog(conn, ": Going to phase: ", m_nextPhase, " (from ", m_currentPhase, ")\n");
     
+    m_phaseVersion++;
+    
     bool suspendedBefore = worldShouldBeSuspended(m_currentPhase);
     bool suspendedAfter = worldShouldBeSuspended(m_nextPhase);
     
@@ -1526,7 +1544,7 @@ NEVER_INLINE bool Heap::finishChangingPhase(GCConductor conn)
 
 void Heap::stopThePeriphery(GCConductor conn)
 {
-    if (m_collectorBelievesThatTheWorldIsStopped) {
+    if (m_worldIsStopped) {
         dataLog("FATAL: world already stopped.\n");
         RELEASE_ASSERT_NOT_REACHED();
     }
@@ -1537,7 +1555,7 @@ void Heap::stopThePeriphery(GCConductor conn)
     m_mutatorDidRun = false;
 
     suspendCompilerThreads();
-    m_collectorBelievesThatTheWorldIsStopped = true;
+    m_worldIsStopped = true;
 
     forEachSlotVisitor(
         [&] (SlotVisitor& slotVisitor) {
@@ -1574,11 +1592,11 @@ NEVER_INLINE void Heap::resumeThePeriphery()
     
     m_barriersExecuted = 0;
     
-    if (!m_collectorBelievesThatTheWorldIsStopped) {
+    if (!m_worldIsStopped) {
         dataLog("Fatal: collector does not believe that the world is stopped.\n");
         RELEASE_ASSERT_NOT_REACHED();
     }
-    m_collectorBelievesThatTheWorldIsStopped = false;
+    m_worldIsStopped = false;
     
     // FIXME: This could be vastly improved: we want to grab the locks in the order in which they
     // become available. We basically want a lockAny() method that will lock whatever lock is available
@@ -2578,7 +2596,11 @@ void Heap::addCoreConstraints()
 {
     m_constraintSet->add(
         "Cs", "Conservative Scan",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this, lastVersion = static_cast<uint64_t>(0)] (SlotVisitor& slotVisitor) mutable {
+            bool shouldNotProduceWork = lastVersion == m_phaseVersion;
+            if (shouldNotProduceWork)
+                return;
+            
             TimingScope preConvergenceTimingScope(*this, "Constraint: conservative scan");
             m_objectSpace.prepareForConservativeScan();
             ConservativeRoots conservativeRoots(*this);
@@ -2587,12 +2609,14 @@ void Heap::addCoreConstraints()
             gatherJSStackRoots(conservativeRoots);
             gatherScratchBufferRoots(conservativeRoots);
             slotVisitor.append(conservativeRoots);
+            
+            lastVersion = m_phaseVersion;
         },
         ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "Msr", "Misc Small Roots",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
 #if JSC_OBJC_API_ENABLED
             scanExternalRememberedSet(*m_vm, slotVisitor);
 #endif
@@ -2613,7 +2637,7 @@ void Heap::addCoreConstraints()
     
     m_constraintSet->add(
         "Sh", "Strong Handles",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
             m_handleSet.visitStrongHandles(slotVisitor);
             m_handleStack.visit(slotVisitor);
         },
@@ -2621,7 +2645,7 @@ void Heap::addCoreConstraints()
     
     m_constraintSet->add(
         "D", "Debugger",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
 #if ENABLE(SAMPLING_PROFILER)
             if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler()) {
                 LockHolder locker(samplingProfiler->getLock());
@@ -2641,21 +2665,21 @@ void Heap::addCoreConstraints()
     
     m_constraintSet->add(
         "Jsr", "JIT Stub Routines",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
             m_jitStubRoutines->traceMarkedStubRoutines(slotVisitor);
         },
         ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "Ws", "Weak Sets",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
             m_objectSpace.visitWeakSets(slotVisitor);
         },
         ConstraintVolatility::GreyedByMarking);
     
     m_constraintSet->add(
         "Wrh", "Weak Reference Harvesters",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
             for (WeakReferenceHarvester* current = m_weakReferenceHarvesters.head(); current; current = current->next())
                 current->visitWeakReferences(slotVisitor);
         },
@@ -2664,7 +2688,7 @@ void Heap::addCoreConstraints()
 #if ENABLE(DFG_JIT)
     m_constraintSet->add(
         "Dw", "DFG Worklists",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
             for (unsigned i = DFG::numberOfWorklists(); i--;)
                 DFG::existingWorklistForIndex(i).visitWeakReferences(slotVisitor);
             
@@ -2684,7 +2708,7 @@ void Heap::addCoreConstraints()
     
     m_constraintSet->add(
         "Cb", "CodeBlocks",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
+        [this] (SlotVisitor& slotVisitor) {
             iterateExecutingAndCompilingCodeBlocksWithoutHoldingLocks(
                 [&] (CodeBlock* codeBlock) {
                     // Visit the CodeBlock as a constraint only if it's black.
@@ -2695,23 +2719,7 @@ void Heap::addCoreConstraints()
         },
         ConstraintVolatility::SeldomGreyed);
     
-    m_constraintSet->add(
-        "Mrms", "Mutator+Race Mark Stack",
-        [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
-            // Indicate to the fixpoint that we introduced work!
-            size_t size = m_mutatorMarkStack->size() + m_raceMarkStack->size();
-            slotVisitor.addToVisitCount(size);
-            
-            if (Options::logGC())
-                dataLog("(", size, ")");
-            
-            m_mutatorMarkStack->transferTo(slotVisitor.mutatorMarkStack());
-            m_raceMarkStack->transferTo(slotVisitor.mutatorMarkStack());
-        },
-        [this] (SlotVisitor&) -> double {
-            return m_mutatorMarkStack->size() + m_raceMarkStack->size();
-        },
-        ConstraintVolatility::GreyedByExecution);
+    m_constraintSet->add(std::make_unique<MarkStackMergingConstraint>(*this));
 }
 
 void Heap::addMarkingConstraint(std::unique_ptr<MarkingConstraint> constraint)
@@ -2794,16 +2802,6 @@ void Heap::allowCollection()
     m_collectContinuouslyLock.unlock();
 }
 
-template<typename Func>
-void Heap::forEachSlotVisitor(const Func& func)
-{
-    auto locker = holdLock(m_parallelSlotVisitorLock);
-    func(*m_collectorSlotVisitor);
-    func(*m_mutatorSlotVisitor);
-    for (auto& slotVisitor : m_parallelSlotVisitors)
-        func(*slotVisitor);
-}
-
 void Heap::setMutatorShouldBeFenced(bool value)
 {
     m_mutatorShouldBeFenced = value;
@@ -2847,4 +2845,26 @@ void Heap::removeHeapFinalizerCallback(const HeapFinalizerCallback& callback)
     m_heapFinalizerCallbacks.removeFirst(callback);
 }
 
+void Heap::setBonusVisitorTask(RefPtr<SharedTask<void(SlotVisitor&)>> task)
+{
+    auto locker = holdLock(m_markingMutex);
+    m_bonusVisitorTask = task;
+    m_markingConditionVariable.notifyAll();
+}
+
+void Heap::runTaskInParallel(RefPtr<SharedTask<void(SlotVisitor&)>> task)
+{
+    unsigned initialRefCount = task->refCount();
+    setBonusVisitorTask(task);
+    task->run(*m_collectorSlotVisitor);
+    setBonusVisitorTask(nullptr);
+    // The constraint solver expects return of this function to imply termination of the task in all
+    // threads. This ensures that property.
+    {
+        auto locker = holdLock(m_markingMutex);
+        while (task->refCount() > initialRefCount)
+            m_markingConditionVariable.wait(m_markingMutex);
+    }
+}
+
 } // namespace JSC
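Editor's sketch (not part of the patch): Heap::runTaskInParallel() above publishes the task as the bonus visitor task, runs it on the collector's own SlotVisitor, and then waits on m_markingConditionVariable until the task's reference count falls back to its initial value, i.e. until every marker thread that picked the task up has finished with it. With the runFunctionInParallel() wrapper declared in Heap.h below, a caller could fan per-thread work out roughly like this; drainSomeSharedWork() is a hypothetical stand-in for whatever work the caller wants each thread to do:

    // Hypothetical caller: each participating thread runs the lambda against its
    // own SlotVisitor until the shared work source is exhausted.
    heap.runFunctionInParallel(
        [&] (SlotVisitor& visitor) {
            drainSomeSharedWork(visitor); // hypothetical helper
        });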
index fe14c27..3646d8c 100644 (file)
@@ -44,6 +44,7 @@
 #include "WeakHandleOwner.h"
 #include "WeakReferenceHarvester.h"
 #include <wtf/AutomaticThread.h>
+#include <wtf/ConcurrentPtrHashSet.h>
 #include <wtf/Deque.h>
 #include <wtf/HashCountedSet.h>
 #include <wtf/HashSet.h>
@@ -73,6 +74,7 @@ class JSValue;
 class LLIntOffsetsExtractor;
 class MachineThreads;
 class MarkStackArray;
+class MarkStackMergingConstraint;
 class MarkedAllocator;
 class MarkedArgumentBuffer;
 class MarkingConstraint;
@@ -114,7 +116,6 @@ public:
     static const unsigned s_timeCheckResolution = 16;
 
     static bool isMarked(const void*);
-    static bool isMarkedConcurrently(const void*);
     static bool testAndSetMarked(HeapVersion, const void*);
     
     static size_t cellSize(const void*);
@@ -154,7 +155,8 @@ public:
     MutatorState mutatorState() const { return m_mutatorState; }
     std::optional<CollectionScope> collectionScope() const { return m_collectionScope; }
     bool hasHeapAccess() const;
-    bool collectorBelievesThatTheWorldIsStopped() const;
+    bool worldIsStopped() const;
+    bool worldIsRunning() const { return !worldIsStopped(); }
 
     // We're always busy on the collection threads. On the main thread, this returns true if we're
     // helping heap.
@@ -349,6 +351,7 @@ public:
     void allowCollection();
     
     uint64_t mutatorExecutionVersion() const { return m_mutatorExecutionVersion; }
+    uint64_t phaseVersion() const { return m_phaseVersion; }
     
     JS_EXPORT_PRIVATE void addMarkingConstraint(std::unique_ptr<MarkingConstraint>);
     
@@ -358,6 +361,17 @@ public:
     
     void addHeapFinalizerCallback(const HeapFinalizerCallback&);
     void removeHeapFinalizerCallback(const HeapFinalizerCallback&);
+    
+    void runTaskInParallel(RefPtr<SharedTask<void(SlotVisitor&)>>);
+    
+    template<typename Func>
+    void runFunctionInParallel(const Func& func)
+    {
+        runTaskInParallel(createSharedTask<void(SlotVisitor&)>(func));
+    }
+
+    template<typename Func>
+    void forEachSlotVisitor(const Func&);
 
 private:
     friend class AllocatingScope;
@@ -373,6 +387,7 @@ private:
     friend class HeapVerifier;
     friend class JITStubRoutine;
     friend class LLIntOffsetsExtractor;
+    friend class MarkStackMergingConstraint;
     friend class MarkedSpace;
     friend class MarkedAllocator;
     friend class MarkedBlock;
@@ -525,8 +540,10 @@ private:
     template<typename Func>
     void iterateExecutingAndCompilingCodeBlocksWithoutHoldingLocks(const Func&);
     
-    void assertSharedMarkStacksEmpty();
+    void assertMarkStacksEmpty();
 
 
+    void setBonusVisitorTask(RefPtr<SharedTask<void(SlotVisitor&)>>);
+    
     const HeapType m_heapType;
     const size_t m_ramSize;
     const size_t m_minBytesPerCycle;
@@ -579,9 +596,6 @@ private:
     Vector<SlotVisitor*> m_availableParallelSlotVisitors;
     Lock m_parallelSlotVisitorLock;
     
-    template<typename Func>
-    void forEachSlotVisitor(const Func&);
-
     HandleSet m_handleSet;
     HandleStack m_handleStack;
     std::unique_ptr<CodeBlockSet> m_codeBlocks;
@@ -633,8 +647,7 @@ private:
     unsigned m_numberOfWaitingParallelMarkers { 0 };
     bool m_parallelMarkersShouldExit { false };
 
-    Lock m_opaqueRootsMutex;
-    HashSet<const void*> m_opaqueRoots;
+    ConcurrentPtrHashSet m_opaqueRoots;
 
     static const size_t s_blockFragmentLength = 32;
 
@@ -642,6 +655,7 @@ private:
     ListableHandler<UnconditionalFinalizer>::List m_unconditionalFinalizers;
 
     ParallelHelperClient m_helperClient;
+    RefPtr<SharedTask<void(SlotVisitor&)>> m_bonusVisitorTask;
 
 #if ENABLE(RESOURCE_USAGE)
     size_t m_blockBytesAllocated { 0 };
@@ -657,7 +671,7 @@ private:
     static const unsigned needFinalizeBit = 1u << 4u;
     static const unsigned mutatorWaitingBit = 1u << 5u; // Allows the mutator to use this as a condition variable.
     Atomic<unsigned> m_worldState;
-    bool m_collectorBelievesThatTheWorldIsStopped { false };
+    bool m_worldIsStopped { false };
     MonotonicTime m_beforeGC;
     MonotonicTime m_afterGC;
     MonotonicTime m_stopTime;
@@ -672,6 +686,7 @@ private:
     bool m_threadIsStopping { false };
     bool m_mutatorDidRun { true };
     uint64_t m_mutatorExecutionVersion { 0 };
+    uint64_t m_phaseVersion { 0 };
     Box<Lock> m_threadLock;
     RefPtr<AutomaticThreadCondition> m_threadCondition; // The mutator must not wait on this. It would cause a deadlock.
     RefPtr<AutomaticThread> m_thread;
@@ -693,6 +708,7 @@ private:
     uintptr_t m_barriersExecuted { 0 };
     
     CurrentThreadState* m_currentThreadState { nullptr };
+    WTF::Thread* m_currentThread { nullptr }; // It's OK if this becomes a dangling pointer.
 };
 
 } // namespace JSC
index 5a6f059..ed0b296 100644 (file)
@@ -63,30 +63,20 @@ inline bool Heap::hasHeapAccess() const
     return m_worldState.load() & hasAccessBit;
 }
 
-inline bool Heap::collectorBelievesThatTheWorldIsStopped() const
+inline bool Heap::worldIsStopped() const
 {
-    return m_collectorBelievesThatTheWorldIsStopped;
+    return m_worldIsStopped;
 }
 
+// FIXME: This should be an instance method, so that it can get the markingVersion() quickly.
+// https://bugs.webkit.org/show_bug.cgi?id=179988
 ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
 {
-    ASSERT(mayBeGCThread() != GCThreadType::Helper);
     HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
     if (cell->isLargeAllocation())
         return cell->largeAllocation().isMarked();
     MarkedBlock& block = cell->markedBlock();
-    return block.isMarked(
-        block.vm()->heap.objectSpace().markingVersion(), cell);
-}
-
-ALWAYS_INLINE bool Heap::isMarkedConcurrently(const void* rawCell)
-{
-    HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
-    if (cell->isLargeAllocation())
-        return cell->largeAllocation().isMarked();
-    MarkedBlock& block = cell->markedBlock();
-    return block.isMarkedConcurrently(
-        block.vm()->heap.objectSpace().markingVersion(), cell);
+    return block.isMarked(block.vm()->heap.objectSpace().markingVersion(), cell);
 }
 
 ALWAYS_INLINE bool Heap::testAndSetMarked(HeapVersion markingVersion, const void* rawCell)
@@ -179,19 +169,19 @@ inline void Heap::releaseSoon(RetainPtr<T>&& object)
 
 inline void Heap::incrementDeferralDepth()
 {
-    ASSERT(!mayBeGCThread() || m_collectorBelievesThatTheWorldIsStopped);
+    ASSERT(!mayBeGCThread() || m_worldIsStopped);
     m_deferralDepth++;
 }
 
 inline void Heap::decrementDeferralDepth()
 {
-    ASSERT(!mayBeGCThread() || m_collectorBelievesThatTheWorldIsStopped);
+    ASSERT(!mayBeGCThread() || m_worldIsStopped);
     m_deferralDepth--;
 }
 
 inline void Heap::decrementDeferralDepthAndGCIfNeeded()
 {
-    ASSERT(!mayBeGCThread() || m_collectorBelievesThatTheWorldIsStopped);
+    ASSERT(!mayBeGCThread() || m_worldIsStopped);
     m_deferralDepth--;
     
     if (UNLIKELY(m_didDeferGCWork)) {
@@ -269,4 +259,14 @@ inline void Heap::stopIfNecessary()
         stopIfNecessarySlow();
 }
 
+template<typename Func>
+void Heap::forEachSlotVisitor(const Func& func)
+{
+    auto locker = holdLock(m_parallelSlotVisitorLock);
+    func(*m_collectorSlotVisitor);
+    func(*m_mutatorSlotVisitor);
+    for (auto& slotVisitor : m_parallelSlotVisitors)
+        func(*slotVisitor);
+}
+
 } // namespace JSC
index 632183d..2345b2d 100644 (file)
@@ -69,7 +69,7 @@ void HeapSnapshotBuilder::buildSnapshot()
 void HeapSnapshotBuilder::appendNode(JSCell* cell)
 {
     ASSERT(m_profiler.activeSnapshotBuilder() == this);
-    ASSERT(Heap::isMarkedConcurrently(cell));
+    ASSERT(Heap::isMarked(cell));
 
     if (hasExistingNodeForCell(cell))
         return;
index 44d14ba..57fcd04 100644 (file)
@@ -46,8 +46,8 @@ public:
     // before liveness data is cleared to be accurate.
     template<typename Func>
     static void findGCObjectPointersForMarking(
-        Heap& heap, HeapVersion markingVersion, TinyBloomFilter filter, void* passedPointer,
-        const Func& func)
+        Heap& heap, HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, TinyBloomFilter filter,
+        void* passedPointer, const Func& func)
     {
         const HashSet<MarkedBlock*>& set = heap.objectSpace().blocks().set();
         
@@ -88,7 +88,7 @@ public:
                 && set.contains(previousCandidate)
                 && previousCandidate->handle().cellKind() == HeapCell::Auxiliary) {
                 previousPointer = static_cast<char*>(previousCandidate->handle().cellAlign(previousPointer));
-                if (previousCandidate->handle().isLiveCell(markingVersion, isMarking, previousPointer))
+                if (previousCandidate->handle().isLiveCell(markingVersion, newlyAllocatedVersion, isMarking, previousPointer))
                     func(previousPointer);
             }
         }
@@ -102,7 +102,7 @@ public:
             return;
         
         auto tryPointer = [&] (void* pointer) {
-            if (candidate->handle().isLiveCell(markingVersion, isMarking, pointer))
+            if (candidate->handle().isLiveCell(markingVersion, newlyAllocatedVersion, isMarking, pointer))
                 func(pointer);
         };
     
index 9bc3a8b..a26d9a5 100644 (file)
@@ -76,7 +76,7 @@ public:
     ALWAYS_INLINE bool isMarked() { return m_isMarked.load(std::memory_order_relaxed); }
     ALWAYS_INLINE bool isMarked(HeapCell*) { return isMarked(); }
     ALWAYS_INLINE bool isMarked(HeapCell*, Dependency) { return isMarked(); }
-    ALWAYS_INLINE bool isMarkedConcurrently(HeapVersion, HeapCell*) { return isMarked(); }
+    ALWAYS_INLINE bool isMarked(HeapVersion, HeapCell*) { return isMarked(); }
     bool isLive() { return isMarked() || isNewlyAllocated(); }
     
     bool hasValidCell() const { return m_hasValidCell; }
@@ -110,7 +110,7 @@ public:
     
     const AllocatorAttributes& attributes() const { return m_attributes; }
     
-    Dependency aboutToMark(HeapVersion) { return nullDependency(); }
+    Dependency aboutToMark(HeapVersion) { return Dependency(); }
     
     ALWAYS_INLINE bool testAndSetMarked()
     {
index 2603787..71da8f0 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -33,14 +33,9 @@ namespace JSC {
 // Use this lock scope like so:
 // auto locker = lockDuringMarking(heap, lock);
 template<typename LockType>
-Locker<LockType> lockDuringMarking(Heap& heap, LockType& passedLock)
+auto lockDuringMarking(Heap& heap, LockType& passedLock)
 {
-    LockType* lock;
-    if (heap.mutatorShouldBeFenced())
-        lock = &passedLock;
-    else
-        lock = nullptr;
-    return Locker<LockType>(lock);
+    return holdLockIf(passedLock, heap.mutatorShouldBeFenced());
 }
 
 } // namespace JSC
index e6af53b..ef7dd44 100644 (file)
@@ -135,7 +135,7 @@ void MachineThreads::tryCopyOtherThreadStack(Thread& thread, void* buffer, size_
     *size += stack.second;
 }
 
-bool MachineThreads::tryCopyOtherThreadStacks(const AbstractLocker& locker, void* buffer, size_t capacity, size_t* size)
+bool MachineThreads::tryCopyOtherThreadStacks(const AbstractLocker& locker, void* buffer, size_t capacity, size_t* size, Thread& currentThreadForGC)
 {
     // Prevent two VMs from suspending each other's threads at the same time,
     // which can cause deadlock: <rdar://problem/20300842>.
@@ -145,13 +145,14 @@ bool MachineThreads::tryCopyOtherThreadStacks(const AbstractLocker& locker, void
     *size = 0;
 
     Thread& currentThread = Thread::current();
-    const auto& threads = m_threadGroup->threads(locker);
+    const ListHashSet<Ref<Thread>>& threads = m_threadGroup->threads(locker);
     BitVector isSuspended(threads.size());
 
     {
         unsigned index = 0;
-        for (auto& thread : threads) {
-            if (thread.ptr() != &currentThread) {
+        for (const Ref<Thread>& thread : threads) {
+            if (thread.ptr() != &currentThread
+                && thread.ptr() != &currentThreadForGC) {
                 auto result = thread->suspend();
                 if (result)
                     isSuspended.set(index);
@@ -199,7 +200,7 @@ static void growBuffer(size_t size, void** buffer, size_t* capacity)
     *buffer = fastMalloc(*capacity);
 }
 
-void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState* currentThreadState)
+void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState* currentThreadState, Thread* currentThread)
 {
     if (currentThreadState)
         gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, *currentThreadState);
@@ -208,7 +209,7 @@ void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoot
     size_t capacity = 0;
     void* buffer = nullptr;
     auto locker = holdLock(m_threadGroup->getLock());
-    while (!tryCopyOtherThreadStacks(locker, buffer, capacity, &size))
+    while (!tryCopyOtherThreadStacks(locker, buffer, capacity, &size, *currentThread))
         growBuffer(size, &buffer, &capacity);
 
     if (!buffer)
index 900b179..e2cba5b 100644 (file)
@@ -44,7 +44,7 @@ class MachineThreads {
 public:
     MachineThreads();
 
-    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState*);
+    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState*, Thread*);
 
     // Only needs to be called by clients that can use the same heap from multiple threads.
     void addCurrentThread() { m_threadGroup->addCurrentThread(); }
@@ -56,7 +56,7 @@ private:
     void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState&);
 
     void tryCopyOtherThreadStack(Thread&, void*, size_t capacity, size_t*);
-    bool tryCopyOtherThreadStacks(const AbstractLocker&, void*, size_t capacity, size_t*);
+    bool tryCopyOtherThreadStacks(const AbstractLocker&, void*, size_t capacity, size_t*, Thread&);
 
     std::shared_ptr<ThreadGroup> m_threadGroup;
 };
diff --git a/Source/JavaScriptCore/heap/MarkStackMergingConstraint.cpp b/Source/JavaScriptCore/heap/MarkStackMergingConstraint.cpp
new file mode 100644 (file)
index 0000000..bab5a43
--- /dev/null
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "MarkStackMergingConstraint.h"
+
+namespace JSC {
+
+MarkStackMergingConstraint::MarkStackMergingConstraint(Heap& heap)
+    : MarkingConstraint("Msm", "Mark Stack Merging", ConstraintVolatility::GreyedByExecution)
+    , m_heap(heap)
+{
+}
+
+MarkStackMergingConstraint::~MarkStackMergingConstraint()
+{
+}
+
+double MarkStackMergingConstraint::quickWorkEstimate(SlotVisitor&)
+{
+    return m_heap.m_mutatorMarkStack->size() + m_heap.m_raceMarkStack->size();
+}
+
+void MarkStackMergingConstraint::prepareToExecuteImpl(const AbstractLocker&, SlotVisitor& visitor)
+{
+    // Logging the work here ensures that the constraint solver knows that it doesn't need to produce
+    // any more work.
+    size_t size = m_heap.m_mutatorMarkStack->size() + m_heap.m_raceMarkStack->size();
+    visitor.addToVisitCount(size);
+    
+    if (Options::logGC())
+        dataLog("(", size, ")");
+}
+
+ConstraintParallelism MarkStackMergingConstraint::executeImpl(SlotVisitor& visitor)
+{
+    m_heap.m_mutatorMarkStack->transferTo(visitor.mutatorMarkStack());
+    m_heap.m_raceMarkStack->transferTo(visitor.mutatorMarkStack());
+    return ConstraintParallelism::Sequential;
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/MarkStackMergingConstraint.h b/Source/JavaScriptCore/heap/MarkStackMergingConstraint.h
new file mode 100644 (file)
index 0000000..ffcac43
--- /dev/null
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "MarkingConstraint.h"
+
+namespace JSC {
+
+class MarkStackMergingConstraint : public MarkingConstraint {
+public:
+    MarkStackMergingConstraint(Heap&);
+    ~MarkStackMergingConstraint();
+    
+    double quickWorkEstimate(SlotVisitor&) override;
+    
+protected:
+    void prepareToExecuteImpl(const AbstractLocker& constraintSolvingLocker, SlotVisitor&) override;
+    ConstraintParallelism executeImpl(SlotVisitor&) override;
+    
+private:
+    Heap& m_heap;
+};
+
+} // namespace JSC
+
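The two files above are the first client of the new subclass-based constraint API. For orientation, a hypothetical constraint written the same way might look like this minimal sketch; the class name and the visitMyRoots() helper are illustrative assumptions, not part of this patch:

    // Hypothetical sketch of a sequential constraint using the new API.
    // Heap::visitMyRoots() is an assumed helper, shown for illustration only.
    class MyRootsConstraint : public MarkingConstraint {
    public:
        MyRootsConstraint(Heap& heap)
            : MarkingConstraint("Myr", "My Roots", ConstraintVolatility::GreyedByExecution)
            , m_heap(heap)
        {
        }
        
    protected:
        ConstraintParallelism executeImpl(SlotVisitor& visitor) override
        {
            // Visit whatever this constraint is responsible for. Returning
            // Sequential promises that no parallel work was queued.
            m_heap.visitMyRoots(visitor);
            return ConstraintParallelism::Sequential;
        }
        
    private:
        Heap& m_heap;
    };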
index 7cbbfc6..f0cf26c 100644 (file)
@@ -61,7 +61,7 @@ bool MarkedAllocator::isPagedOut(double deadline)
     unsigned itersSinceLastTimeCheck = 0;
     for (auto* block : m_blocks) {
         if (block)
-            block->block().updateNeedsDestruction();
+            holdLock(block->block().lock());
         ++itersSinceLastTimeCheck;
         if (itersSinceLastTimeCheck >= Heap::s_timeCheckResolution) {
             double currentTime = WTF::monotonicallyIncreasingTime();
@@ -444,6 +444,33 @@ void MarkedAllocator::assertNoUnswept()
     ASSERT_NOT_REACHED();
 }
 
+RefPtr<SharedTask<MarkedBlock::Handle*()>> MarkedAllocator::parallelNotEmptyBlockSource()
+{
+    class Task : public SharedTask<MarkedBlock::Handle*()> {
+    public:
+        Task(MarkedAllocator& allocator)
+            : m_allocator(allocator)
+        {
+        }
+        
+        MarkedBlock::Handle* run() override
+        {
+            auto locker = holdLock(m_lock);
+            m_index = m_allocator.m_markingNotEmpty.findBit(m_index, true);
+            if (m_index >= m_allocator.m_blocks.size())
+                return nullptr;
+            return m_allocator.m_blocks[m_index++];
+        }
+        
+    private:
+        MarkedAllocator& m_allocator;
+        size_t m_index { 0 };
+        Lock m_lock;
+    };
+    
+    return adoptRef(new Task(*this));
+}
+
 void MarkedAllocator::dump(PrintStream& out) const
 {
     out.print(RawPointer(this), ":", m_cellSize, "/", m_attributes);
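The SharedTask returned by parallelNotEmptyBlockSource() behaves like a thread-safe cursor: each call to run() hands out a distinct not-empty block until the allocator is exhausted, so any number of marker threads can pull from the same source without duplicating work. A parallel constraint could drain it roughly like this (a sketch only; drainNotEmptyBlocks() and visitBlock() are illustrative names, not part of this patch):

    // Illustrative sketch; visitBlock() stands in for whatever per-block work
    // the constraint actually performs with the SlotVisitor.
    void drainNotEmptyBlocks(MarkedAllocator& allocator, SlotVisitor& visitor)
    {
        RefPtr<SharedTask<MarkedBlock::Handle*()>> source = allocator.parallelNotEmptyBlockSource();
        while (MarkedBlock::Handle* handle = source->run())
            visitBlock(*handle, visitor);
    }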
index c169d9d..ba7df46 100644 (file)
@@ -31,6 +31,7 @@
 #include "MarkedBlock.h"
 #include <wtf/DataLog.h>
 #include <wtf/FastBitVector.h>
+#include <wtf/SharedTask.h>
 #include <wtf/Vector.h>
 
 namespace JSC {
@@ -103,6 +104,8 @@ public:
     template<typename Functor> void forEachBlock(const Functor&);
     template<typename Functor> void forEachNotEmptyBlock(const Functor&);
     
+    RefPtr<SharedTask<MarkedBlock::Handle*()>> parallelNotEmptyBlockSource();
+    
     void addBlock(MarkedBlock::Handle*);
     void removeBlock(MarkedBlock::Handle*);
 
index 7d167db..bfb8064 100644 (file)
@@ -86,9 +86,9 @@ MarkedBlock::Handle::~Handle()
 }
 
 MarkedBlock::MarkedBlock(VM& vm, Handle& handle)
-    : m_markingVersion(MarkedSpace::nullVersion)
-    , m_handle(handle)
+    : m_handle(handle)
     , m_vm(&vm)
+    , m_markingVersion(MarkedSpace::nullVersion)
 {
     if (false)
         dataLog(RawPointer(this), ": Allocated.\n");
@@ -200,7 +200,7 @@ void MarkedBlock::Handle::zap(const FreeList& freeList)
 void MarkedBlock::aboutToMarkSlow(HeapVersion markingVersion)
 {
     ASSERT(vm()->heap.objectSpace().isMarking());
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
     
     if (!areMarksStale(markingVersion))
         return;
@@ -223,15 +223,19 @@ void MarkedBlock::aboutToMarkSlow(HeapVersion markingVersion)
             dataLog(RawPointer(this), ": Doing things.\n");
         HeapVersion newlyAllocatedVersion = space()->newlyAllocatedVersion();
         if (handle().m_newlyAllocatedVersion == newlyAllocatedVersion) {
-            // Merge the contents of marked into newlyAllocated. If we get the full set of bits
-            // then invalidate newlyAllocated and set allocated.
-            handle().m_newlyAllocated.mergeAndClear(m_marks);
+            // When do we get here? The block could not have been filled up. The newlyAllocated bits would
+            // have had to be created since the end of the last collection. The only things that create
+            // them are aboutToMarkSlow, lastChanceToFinalize, and stopAllocating. If it had been
+            // aboutToMarkSlow, then we shouldn't be here since the marks wouldn't be stale anymore. It
+            // cannot be lastChanceToFinalize. So it must be stopAllocating. That means that we just
+            // computed the newlyAllocated bits just before the start of an increment. When we are in that
+            // mode, it seems as if newlyAllocated should subsume marks.
+            ASSERT(handle().m_newlyAllocated.subsumes(m_marks));
+            m_marks.clearAll();
         } else {
-            // Replace the contents of newlyAllocated with marked. If we get the full set of
-            // bits then invalidate newlyAllocated and set allocated.
             handle().m_newlyAllocated.setAndClear(m_marks);
+            handle().m_newlyAllocatedVersion = newlyAllocatedVersion;
         }
-        handle().m_newlyAllocatedVersion = newlyAllocatedVersion;
     }
     clearHasAnyMarked();
     WTF::storeStoreFence();
@@ -321,11 +325,6 @@ void MarkedBlock::Handle::removeFromAllocator()
     m_allocator->removeBlock(this);
 }
 
-void MarkedBlock::updateNeedsDestruction()
-{
-    m_needsDestruction = handle().needsDestruction();
-}
-
 void MarkedBlock::Handle::didAddToAllocator(MarkedAllocator* allocator, size_t index)
 {
     ASSERT(m_index == std::numeric_limits<size_t>::max());
@@ -345,8 +344,6 @@ void MarkedBlock::Handle::didAddToAllocator(MarkedAllocator* allocator, size_t i
     if (m_attributes.cellKind != HeapCell::JSCell)
         RELEASE_ASSERT(m_attributes.destruction == DoesNotNeedDestruction);
     
-    block().updateNeedsDestruction();
-    
     double markCountBias = -(Options::minMarkedBlockUtilization() * cellsPerBlock());
     
     // The mark count bias should be comfortably within this range.
@@ -366,16 +363,6 @@ void MarkedBlock::Handle::didRemoveFromAllocator()
     m_allocator = nullptr;
 }
 
-bool MarkedBlock::Handle::isLive(const HeapCell* cell)
-{
-    return isLive(space()->markingVersion(), space()->isMarking(), cell);
-}
-
-bool MarkedBlock::Handle::isLiveCell(const void* p)
-{
-    return isLiveCell(space()->markingVersion(), space()->isMarking(), p);
-}
-
 #if !ASSERT_DISABLED
 void MarkedBlock::assertValidCell(VM& vm, HeapCell* cell) const
 {
index e0f0de2..411fa74 100644 (file)
@@ -29,6 +29,7 @@
 #include <wtf/Atomics.h>
 #include <wtf/Bitmap.h>
 #include <wtf/HashFunctions.h>
+#include <wtf/CountingLock.h>
 #include <wtf/StdLibExtras.h>
 
 namespace JSC {
@@ -161,8 +162,8 @@ public:
         size_t markCount();
         size_t size();
         
-        inline bool isLive(HeapVersion markingVersion, bool isMarking, const HeapCell*);
-        inline bool isLiveCell(HeapVersion markingVersion, bool isMarking, const void*);
+        bool isLive(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const HeapCell*);
+        inline bool isLiveCell(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const void*);
 
         bool isLive(const HeapCell*);
         bool isLiveCell(const void*);
@@ -258,7 +259,6 @@ public:
 
     bool isMarked(const void*);
     bool isMarked(HeapVersion markingVersion, const void*);
-    bool isMarkedConcurrently(HeapVersion markingVersion, const void*);
     bool isMarked(const void*, Dependency);
     bool testAndSetMarked(const void*, Dependency);
         
@@ -280,7 +280,6 @@ public:
 
     JS_EXPORT_PRIVATE bool areMarksStale();
     bool areMarksStale(HeapVersion markingVersion);
-    DependencyWith<bool> areMarksStaleWithDependency(HeapVersion markingVersion);
     
     Dependency aboutToMark(HeapVersion markingVersion);
         
@@ -290,15 +289,12 @@ public:
     JS_EXPORT_PRIVATE void assertMarksNotStale();
 #endif
         
-    bool needsDestruction() const { return m_needsDestruction; }
-    
-    // This is usually a no-op, and we use it as a no-op that touches the page in isPagedOut().
-    void updateNeedsDestruction();
-    
     void resetMarks();
     
     bool isMarkedRaw(const void* p);
     HeapVersion markingVersion() const { return m_markingVersion; }
+    
+    CountingLock& lock() { return m_lock; }
 
 private:
     static const size_t atomAlignmentMask = atomSize - 1;
@@ -314,11 +310,12 @@ private:
     void noteMarkedSlow();
     
     inline bool marksConveyLivenessDuringMarking(HeapVersion markingVersion);
+    inline bool marksConveyLivenessDuringMarking(HeapVersion myMarkingVersion, HeapVersion markingVersion);
         
-    WTF::Bitmap<atomsPerBlock> m_marks;
+    Handle& m_handle;
+    VM* m_vm;
 
-    bool m_needsDestruction;
-    Lock m_lock;
+    CountingLock m_lock;
     
     // The actual mark count can be computed by doing: m_biasedMarkCount - m_markCountBias. Note
     // that this count is racy. It will accurately detect whether or not exactly zero things were
@@ -348,9 +345,8 @@ private:
     int16_t m_markCountBias;
 
     HeapVersion m_markingVersion;
-    
-    Handle& m_handle;
-    VM* m_vm;
+
+    WTF::Bitmap<atomsPerBlock> m_marks;
 };
 
 inline MarkedBlock::Handle& MarkedBlock::handle()
@@ -498,18 +494,12 @@ inline bool MarkedBlock::areMarksStale(HeapVersion markingVersion)
     return markingVersion != m_markingVersion;
 }
 
-ALWAYS_INLINE DependencyWith<bool> MarkedBlock::areMarksStaleWithDependency(HeapVersion markingVersion)
-{
-    HeapVersion version = m_markingVersion;
-    return dependencyWith(dependency(version), version != markingVersion);
-}
-
 inline Dependency MarkedBlock::aboutToMark(HeapVersion markingVersion)
 {
-    auto result = areMarksStaleWithDependency(markingVersion);
-    if (UNLIKELY(result.value))
+    HeapVersion version = m_markingVersion;
+    if (UNLIKELY(version != markingVersion))
         aboutToMarkSlow(markingVersion);
-    return result.dependency;
+    return Dependency::fence(version);
 }
 
 inline void MarkedBlock::Handle::assertMarksNotStale()
@@ -524,15 +514,10 @@ inline bool MarkedBlock::isMarkedRaw(const void* p)
 
 inline bool MarkedBlock::isMarked(HeapVersion markingVersion, const void* p)
 {
-    return areMarksStale(markingVersion) ? false : isMarkedRaw(p);
-}
-
-inline bool MarkedBlock::isMarkedConcurrently(HeapVersion markingVersion, const void* p)
-{
-    auto result = areMarksStaleWithDependency(markingVersion);
-    if (result.value)
+    HeapVersion version = m_markingVersion;
+    if (UNLIKELY(version != markingVersion))
         return false;
-    return m_marks.get(atomNumber(p), result.dependency);
+    return m_marks.get(atomNumber(p), Dependency::fence(version));
 }
 
 inline bool MarkedBlock::isMarked(const void* p, Dependency dependency)
index 4e0b9bc..68772b0 100644 (file)
@@ -67,6 +67,11 @@ inline MarkedSpace* MarkedBlock::Handle::space() const
 
 inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion markingVersion)
 {
+    return marksConveyLivenessDuringMarking(m_markingVersion, markingVersion);
+}
+
+inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion myMarkingVersion, HeapVersion markingVersion)
+{
     // This returns true if any of these is true:
     // - We just created the block and so the bits are clear already.
     // - This block has objects marked during the last GC, and so its version was up-to-date just
@@ -82,39 +87,114 @@ inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion markingVer
     ASSERT(space()->isMarking());
     if (heap()->collectionScope() != CollectionScope::Full)
         return false;
-    return m_markingVersion == MarkedSpace::nullVersion
-        || MarkedSpace::nextVersion(m_markingVersion) == markingVersion;
+    return myMarkingVersion == MarkedSpace::nullVersion
+        || MarkedSpace::nextVersion(myMarkingVersion) == markingVersion;
 }
 
-inline bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, bool isMarking, const HeapCell* cell)
+ALWAYS_INLINE bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const HeapCell* cell)
 {
-    ASSERT(!isFreeListed());
-    
-    if (UNLIKELY(hasAnyNewlyAllocated())) {
-        if (isNewlyAllocated(cell))
-            return true;
-    }
-    
     if (allocator()->isAllocated(NoLockingNecessary, this))
         return true;
     
+    // We need to do this while holding the lock because marks might be stale. In that case, newly
+    // allocated will not yet be valid. Consider this interleaving.
+    // 
+    // One thread is doing this:
+    //
+    // 1) IsLiveChecksNewlyAllocated: We check if newly allocated is valid. If it is valid, and the bit is
+    //    set, we return true. Let's assume that this executes atomically. It doesn't have to in general,
+    //    but we can assume that for the purpose of seeing this bug.
+    //
+    // 2) IsLiveChecksMarks: Having failed that, we check the mark bits. This step implies the rest of
+    //    this function. It happens under a lock so it's atomic.
+    //
+    // Another thread is doing:
+    //
+    // 1) AboutToMarkSlow: This is the entire aboutToMarkSlow function, and let's say it's atomic. It
+    //    sorta is since it holds a lock, but that doesn't actually make it atomic with respect to
+    //    IsLiveChecksNewlyAllocated, since that does not hold a lock in our scenario.
+    //
+    // The harmful interleaving happens if we start out with a block that has stale mark bits that
+    // nonetheless convey liveness during marking (the off-by-one version trick). The interleaving is
+    // just:
+    //
+    // IsLiveChecksNewlyAllocated AboutToMarkSlow IsLiveChecksMarks
+    //
+    // We started with valid marks but invalid newly allocated. So, the first part doesn't think that
+    // anything is live, but dutifully drops down to the marks step. But in the meantime, we clear the
+    // mark bits and transfer their contents into newlyAllocated. So IsLiveChecksMarks also sees nothing
+    // live. Ooops!
+    //
+    // Fortunately, since this is just a read critical section, we can use a CountingLock.
+    //
+    // Probably many users of CountingLock could use its lambda-based and locker-based APIs. But here, we
+    // need to ensure that everything is ALWAYS_INLINE. It's hard to do that when using lambdas. It's
+    // more reliable to write it inline instead. Empirically, it seems like how inline this is has some
+    // impact on perf - around 2% on splay if you get it wrong.
+
     MarkedBlock& block = this->block();
     
-    if (block.areMarksStale()) {
+    auto count = block.m_lock.tryOptimisticFencelessRead();
+    if (count.value) {
+        Dependency fenceBefore = Dependency::fence(count.input);
+        MarkedBlock::Handle* fencedThis = fenceBefore.consume(this);
+        
+        ASSERT(!fencedThis->isFreeListed());
+        
+        HeapVersion myNewlyAllocatedVersion = fencedThis->m_newlyAllocatedVersion;
+        if (myNewlyAllocatedVersion == newlyAllocatedVersion) {
+            bool result = fencedThis->isNewlyAllocated(cell);
+            if (block.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
+                return result;
+        } else {
+            MarkedBlock& fencedBlock = *fenceBefore.consume(&block);
+            
+            HeapVersion myMarkingVersion = fencedBlock.m_markingVersion;
+            if (myMarkingVersion != markingVersion
+                && (!isMarking || !fencedBlock.marksConveyLivenessDuringMarking(myMarkingVersion, markingVersion))) {
+                if (block.m_lock.fencelessValidate(count.value, Dependency::fence(myMarkingVersion)))
+                    return false;
+            } else {
+                bool result = fencedBlock.m_marks.get(block.atomNumber(cell));
+                if (block.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
+                    return result;
+            }
+        }
+    }
+    
+    auto locker = holdLock(block.m_lock);
+
+    ASSERT(!isFreeListed());
+    
+    HeapVersion myNewlyAllocatedVersion = m_newlyAllocatedVersion;
+    if (myNewlyAllocatedVersion == newlyAllocatedVersion)
+        return isNewlyAllocated(cell);
+    
+    if (block.areMarksStale(markingVersion)) {
         if (!isMarking)
             return false;
         if (!block.marksConveyLivenessDuringMarking(markingVersion))
             return false;
     }
-
+    
     return block.m_marks.get(block.atomNumber(cell));
 }
 
-inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, bool isMarking, const void* p)
+inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const void* p)
 {
     if (!m_block->isAtom(p))
         return false;
-    return isLive(markingVersion, isMarking, static_cast<const HeapCell*>(p));
+    return isLive(markingVersion, newlyAllocatedVersion, isMarking, static_cast<const HeapCell*>(p));
+}
+
+inline bool MarkedBlock::Handle::isLive(const HeapCell* cell)
+{
+    return isLive(space()->markingVersion(), space()->newlyAllocatedVersion(), space()->isMarking(), cell);
+}
+
+inline bool MarkedBlock::Handle::isLiveCell(const void* p)
+{
+    return isLiveCell(space()->markingVersion(), space()->newlyAllocatedVersion(), space()->isMarking(), p);
 }
 
 // The following has to be true for specialization to kick in:
@@ -386,6 +466,15 @@ inline MarkedBlock::Handle::MarksMode MarkedBlock::Handle::marksMode()
 template <typename Functor>
 inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& functor)
 {
+    // FIXME: This is not currently efficient to use in the constraint solver because isLive() grabs a
+    // lock to protect itself from concurrent calls to aboutToMarkSlow(). But we could get around this by
+    // having this function grab the lock before and after the iteration, and check if the marking version
+    // changed. If it did, just run again. Inside the loop, we only need to ensure that if a race were to
+    // happen, we will just overlook objects. I think that because of how aboutToMarkSlow() does things,
+    // a race ought to mean that it just returns false when it should have returned true - but this is
+    // something that would have to be verified carefully.
+    // https://bugs.webkit.org/show_bug.cgi?id=180315
+    
     HeapCell::Kind kind = m_attributes.cellKind;
     for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
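The isLive() rewrite above is the motivating use of CountingLock: attempt an optimistic, fenceless read, validate that no writer intervened, and only fall back to holding the lock if validation fails. Stripped of the block-specific details, the idiom looks roughly like this (a sketch of the pattern, not code from this patch):

    // Sketch of the CountingLock read idiom used by isLive() above; the
    // readState() functor stands in for the fields the critical section reads.
    template<typename ReadStateFunc>
    bool optimisticRead(CountingLock& lock, const ReadStateFunc& readState)
    {
        auto count = lock.tryOptimisticFencelessRead();
        if (count.value) {
            // Fenceless fast path: read, then check that no writer ran.
            bool result = readState(Dependency::fence(count.input));
            if (lock.fencelessValidate(count.value, Dependency::fence(result)))
                return result;
        }
        // Contended or invalidated: redo the read while holding the lock.
        auto locker = holdLock(lock);
        return readState(Dependency());
    }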
index fce6070..b7f97c3 100644 (file)
@@ -421,7 +421,7 @@ void MarkedSpace::endMarking()
                 handle->resetAllocated();
             });
     }
-        
+    
     m_newlyAllocatedVersion = nextVersion(m_newlyAllocatedVersion);
     
     for (unsigned i = m_largeAllocationsOffsetForThisCollection; i < m_largeAllocations.size(); ++i)
index a6528b3..8eb0b32 100644 (file)
@@ -165,6 +165,11 @@ public:
     // When this is true it means that we have flipped but the mark bits haven't converged yet.
     bool isMarking() const { return m_isMarking; }
     
+    WeakSet* activeWeakSetsBegin() { return m_activeWeakSets.begin(); }
+    WeakSet* activeWeakSetsEnd() { return m_activeWeakSets.end(); }
+    WeakSet* newActiveWeakSetsBegin() { return m_newActiveWeakSets.begin(); }
+    WeakSet* newActiveWeakSetsEnd() { return m_newActiveWeakSets.end(); }
+    
     void dumpBits(PrintStream& = WTF::dataFile());
     
     JS_EXPORT_PRIVATE static std::array<size_t, numSizeClasses> s_sizeClassForSizeStep;
index 39a3081..64084fe 100644 (file)
 #include "MarkingConstraint.h"
 
 #include "JSCInlines.h"
+#include "VisitCounter.h"
 
 namespace JSC {
 
-MarkingConstraint::MarkingConstraint(
-    CString abbreviatedName, CString name,
-    ::Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
-    ConstraintVolatility volatility)
-    : m_abbreviatedName(abbreviatedName)
-    , m_name(WTFMove(name))
-    , m_executeFunction(WTFMove(executeFunction))
-    , m_volatility(volatility)
-{
-}
+static constexpr bool verboseMarkingConstraint = false;
 
-MarkingConstraint::MarkingConstraint(
-    CString abbreviatedName, CString name,
-    ::Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
-    ::Function<double(SlotVisitor&)> quickWorkEstimateFunction,
-    ConstraintVolatility volatility)
+MarkingConstraint::MarkingConstraint(CString abbreviatedName, CString name, ConstraintVolatility volatility, ConstraintConcurrency concurrency, ConstraintParallelism parallelism)
     : m_abbreviatedName(abbreviatedName)
     , m_name(WTFMove(name))
-    , m_executeFunction(WTFMove(executeFunction))
-    , m_quickWorkEstimateFunction(WTFMove(quickWorkEstimateFunction))
     , m_volatility(volatility)
+    , m_concurrency(concurrency)
+    , m_parallelism(parallelism)
 {
 }
 
@@ -63,14 +51,74 @@ void MarkingConstraint::resetStats()
     m_lastVisitCount = 0;
 }
 
-void MarkingConstraint::execute(SlotVisitor& visitor, bool& didVisitSomething, MonotonicTime timeout)
+ConstraintParallelism MarkingConstraint::execute(SlotVisitor& visitor)
+{
+    VisitCounter visitCounter(visitor);
+    ConstraintParallelism result = executeImpl(visitor);
+    m_lastVisitCount += visitCounter.visitCount();
+    if (verboseMarkingConstraint && visitCounter.visitCount())
+        dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in execute)");
+    if (result == ConstraintParallelism::Parallel) {
+        // It's illegal to produce parallel work if you haven't advertised it upfront because the solver
+        // has optimizations for constraints that promise to never produce parallel work.
+        RELEASE_ASSERT(m_parallelism == ConstraintParallelism::Parallel);
+    }
+    return result;
+}
+
+double MarkingConstraint::quickWorkEstimate(SlotVisitor&)
+{
+    return 0;
+}
+
+double MarkingConstraint::workEstimate(SlotVisitor& visitor)
+{
+    return lastVisitCount() + quickWorkEstimate(visitor);
+}
+
+void MarkingConstraint::prepareToExecute(const AbstractLocker& constraintSolvingLocker, SlotVisitor& visitor)
 {
     if (Options::logGC())
         dataLog(abbreviatedName());
-    VisitingTimeout visitingTimeout(visitor, didVisitSomething, timeout);
-    m_executeFunction(visitor, visitingTimeout);
-    m_lastVisitCount = visitingTimeout.visitCount(visitor);
-    didVisitSomething = visitingTimeout.didVisitSomething(visitor);
+    VisitCounter visitCounter(visitor);
+    prepareToExecuteImpl(constraintSolvingLocker, visitor);
+    m_lastVisitCount = visitCounter.visitCount();
+    if (verboseMarkingConstraint && visitCounter.visitCount())
+        dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in prepareToExecute)");
+}
+
+void MarkingConstraint::doParallelWork(SlotVisitor& visitor)
+{
+    VisitCounter visitCounter(visitor);
+    doParallelWorkImpl(visitor);
+    if (verboseMarkingConstraint && visitCounter.visitCount())
+        dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in doParallelWork)");
+    {
+        auto locker = holdLock(m_lock);
+        m_lastVisitCount += visitCounter.visitCount();
+    }
+}
+
+void MarkingConstraint::finishParallelWork(SlotVisitor& visitor)
+{
+    VisitCounter visitCounter(visitor);
+    finishParallelWorkImpl(visitor);
+    m_lastVisitCount += visitCounter.visitCount();
+    if (verboseMarkingConstraint && visitCounter.visitCount())
+        dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in finishParallelWork)");
+}
+
+void MarkingConstraint::prepareToExecuteImpl(const AbstractLocker&, SlotVisitor&)
+{
+}
+
+void MarkingConstraint::doParallelWorkImpl(SlotVisitor&)
+{
+    UNREACHABLE_FOR_PLATFORM();
+}
+
+void MarkingConstraint::finishParallelWorkImpl(SlotVisitor&)
+{
 }
 
 } // namespace JSC
index d7aa54c..f0fcc18 100644 (file)
 
 #pragma once
 
+#include "ConstraintConcurrency.h"
+#include "ConstraintParallelism.h"
 #include "ConstraintVolatility.h"
-#include "VisitingTimeout.h"
 #include <limits.h>
 #include <wtf/FastMalloc.h>
-#include <wtf/Function.h>
-#include <wtf/MonotonicTime.h>
+#include <wtf/Lock.h>
 #include <wtf/Noncopyable.h>
 #include <wtf/text/CString.h>
 
@@ -44,17 +44,11 @@ class MarkingConstraint {
     WTF_MAKE_FAST_ALLOCATED;
 public:
     JS_EXPORT_PRIVATE MarkingConstraint(
-        CString abbreviatedName, CString name,
-        ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
-        ConstraintVolatility);
+        CString abbreviatedName, CString name, ConstraintVolatility,
+        ConstraintConcurrency = ConstraintConcurrency::Concurrent,
+        ConstraintParallelism = ConstraintParallelism::Sequential);
     
-    JS_EXPORT_PRIVATE MarkingConstraint(
-        CString abbreviatedName, CString name,
-        ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
-        ::Function<double(SlotVisitor&)>,
-        ConstraintVolatility);
-    
-    JS_EXPORT_PRIVATE ~MarkingConstraint();
+    JS_EXPORT_PRIVATE virtual ~MarkingConstraint();
     
     unsigned index() const { return m_index; }
     
@@ -65,32 +59,39 @@ public:
     
     size_t lastVisitCount() const { return m_lastVisitCount; }
     
-    void execute(SlotVisitor&, bool& didVisitSomething, MonotonicTime timeout);
+    ConstraintParallelism execute(SlotVisitor&);
+    
+    JS_EXPORT_PRIVATE virtual double quickWorkEstimate(SlotVisitor& visitor);
     
-    double quickWorkEstimate(SlotVisitor& visitor)
-    {
-        if (!m_quickWorkEstimateFunction)
-            return 0;
-        return m_quickWorkEstimateFunction(visitor);
-    }
+    double workEstimate(SlotVisitor& visitor);
     
-    double workEstimate(SlotVisitor& visitor)
-    {
-        return lastVisitCount() + quickWorkEstimate(visitor);
-    }
+    void prepareToExecute(const AbstractLocker& constraintSolvingLocker, SlotVisitor&);
+    
+    void doParallelWork(SlotVisitor&);
+    void finishParallelWork(SlotVisitor&);
     
     ConstraintVolatility volatility() const { return m_volatility; }
     
+    ConstraintConcurrency concurrency() const { return m_concurrency; }
+    ConstraintParallelism parallelism() const { return m_parallelism; }
+
+protected:
+    virtual ConstraintParallelism executeImpl(SlotVisitor&) = 0;
+    JS_EXPORT_PRIVATE virtual void prepareToExecuteImpl(const AbstractLocker& constraintSolvingLocker, SlotVisitor&);
+    virtual void doParallelWorkImpl(SlotVisitor&);
+    virtual void finishParallelWorkImpl(SlotVisitor&);
+    
 private:
     friend class MarkingConstraintSet; // So it can set m_index.
     
     unsigned m_index { UINT_MAX };
     CString m_abbreviatedName;
     CString m_name;
 private:
     friend class MarkingConstraintSet; // So it can set m_index.
     
     unsigned m_index { UINT_MAX };
     CString m_abbreviatedName;
     CString m_name;
-    ::Function<void(SlotVisitor&, const VisitingTimeout& timeout)> m_executeFunction;
-    ::Function<double(SlotVisitor&)> m_quickWorkEstimateFunction;
     ConstraintVolatility m_volatility;
     ConstraintVolatility m_volatility;
+    ConstraintConcurrency m_concurrency;
+    ConstraintParallelism m_parallelism;
     size_t m_lastVisitCount { 0 };
     size_t m_lastVisitCount { 0 };
+    Lock m_lock;
 };
 
 } // namespace JSC
 };
 
 } // namespace JSC
index c5a6c66..27c907c 100644 (file)
 #include "config.h"
 #include "MarkingConstraintSet.h"
 
 #include "config.h"
+#include "MarkingConstraintSolver.h"
 #include "Options.h"
 #include "Options.h"
+#include "SuperSampler.h"
 #include <wtf/Function.h>
 #include <wtf/TimeWithDynamicClockType.h>
 
 namespace JSC {
 
 #include <wtf/Function.h>
-public:
-    ExecutionContext(MarkingConstraintSet& set, SlotVisitor& visitor, MonotonicTime timeout)
-        : m_set(set)
-        , m_visitor(visitor)
-        , m_timeout(timeout)
-    {
-    }
-    
-    bool didVisitSomething() const
-    {
-        return m_didVisitSomething;
-    }
-    
-    bool shouldTimeOut() const
-    {
-        return didVisitSomething() && hasElapsed(m_timeout);
-    }
-    
-    // Returns false if it times out.
-    bool drain(BitVector& unexecuted)
-    {
-        for (size_t index : unexecuted) {
-            execute(index);
-            unexecuted.clear(index);
-            if (shouldTimeOut())
-                return false;
-        }
-        return true;
-    }
-    
-    bool didExecute(size_t index) const { return m_executed.get(index); }
-
-    void execute(size_t index)
-    {
-        m_set.m_set[index]->execute(m_visitor, m_didVisitSomething, m_timeout);
-        m_executed.set(index);
-    }
-    
-private:
-    MarkingConstraintSet& m_set;
-    SlotVisitor& m_visitor;
-    MonotonicTime m_timeout;
-    BitVector m_executed;
-    bool m_didVisitSomething { false };
-};
-
-MarkingConstraintSet::MarkingConstraintSet()
+MarkingConstraintSet::MarkingConstraintSet(Heap& heap)
+    : m_heap(heap)
 {
 }
 
@@ -107,18 +65,9 @@ void MarkingConstraintSet::didStartMarking()
     m_iteration = 1;
 }
 
-void MarkingConstraintSet::add(CString abbreviatedName, CString name, ::Function<void(SlotVisitor&, const VisitingTimeout&)> function, ConstraintVolatility volatility)
-{
-    add(std::make_unique<MarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(function), volatility));
-}
-
-void MarkingConstraintSet::add(
-    CString abbreviatedName, CString name,
-    ::Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
-    ::Function<double(SlotVisitor&)> quickWorkEstimateFunction,
-    ConstraintVolatility volatility)
+void MarkingConstraintSet::add(CString abbreviatedName, CString name, ::Function<void(SlotVisitor&)> function, ConstraintVolatility volatility, ConstraintConcurrency concurrency)
 {
-    add(std::make_unique<MarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(executeFunction), WTFMove(quickWorkEstimateFunction), volatility));
+    add(std::make_unique<SimpleMarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(function), volatility, concurrency));
 }
 
 void MarkingConstraintSet::add(
@@ -131,9 +80,9 @@ void MarkingConstraintSet::add(
     m_set.append(WTFMove(constraint));
 }
 
-bool MarkingConstraintSet::executeConvergence(SlotVisitor& visitor, MonotonicTime timeout)
+bool MarkingConstraintSet::executeConvergence(SlotVisitor& visitor)
 {
-    bool result = executeConvergenceImpl(visitor, timeout);
+    bool result = executeConvergenceImpl(visitor);
     if (Options::logGC())
         dataLog(" ");
     return result;
@@ -148,27 +97,27 @@ bool MarkingConstraintSet::isWavefrontAdvancing(SlotVisitor& visitor)
     return false;
 }
 
-bool MarkingConstraintSet::executeConvergenceImpl(SlotVisitor& visitor, MonotonicTime timeout)
+bool MarkingConstraintSet::executeConvergenceImpl(SlotVisitor& visitor)
 {
-    ExecutionContext executionContext(*this, visitor, timeout);
+    SuperSamplerScope superSamplerScope(false);
+    MarkingConstraintSolver solver(*this);
     
     unsigned iteration = m_iteration++;
     
     if (Options::logGC())
         dataLog("i#", iteration, ":");
 
-    // If there are any constraints that we have not executed at all during this cycle, then
-    // we should execute those now.
-    if (!executionContext.drain(m_unexecutedRoots))
-        return false;
-    
-    // First iteration is before any visitor draining, so it's unlikely to trigger any constraints other
-    // than roots.
-    if (iteration == 1)
+    if (iteration == 1) {
+        // First iteration is before any visitor draining, so it's unlikely to trigger any constraints
+        // other than roots.
+        solver.drain(m_unexecutedRoots);
         return false;
+    }
     
-    if (!executionContext.drain(m_unexecutedOutgrowths))
+    if (iteration == 2) {
+        solver.drain(m_unexecutedOutgrowths);
         return false;
+    }
     
     // We want to keep preferring the outgrowth constraints - the ones that need to be fixpointed
     // even in a stop-the-world GC - until they stop producing. They have a tendency to go totally
@@ -215,33 +164,16 @@ bool MarkingConstraintSet::executeConvergenceImpl(SlotVisitor& visitor, Monotoni
             return a->volatility() > b->volatility();
         });
     
-    for (MarkingConstraint* constraint : m_ordered) {
-        size_t i = constraint->index();
-        
-        if (executionContext.didExecute(i))
-            continue;
-        executionContext.execute(i);
-        
-        // Once we're in convergence, it makes the most sense to let some marking happen anytime
-        // we find work.
-        // FIXME: Maybe this should execute all constraints until timeout? Not clear if that's
-        // better or worse. Maybe even better is this:
-        // - If the visitor is empty, keep running.
-        // - If the visitor is has at least N things, return.
-        // - Else run until timeout.
-        // https://bugs.webkit.org/show_bug.cgi?id=166832
-        if (executionContext.didVisitSomething())
-            return false;
-    }
+    solver.converge(m_ordered);
     
-    return true;
+    // Return true if we've converged. That happens if we did not visit anything.
+    return !solver.didVisitSomething();
 }
 
 void MarkingConstraintSet::executeAll(SlotVisitor& visitor)
 {
-    bool didVisitSomething = false;
     for (auto& constraint : m_set)
-        constraint->execute(visitor, didVisitSomething, MonotonicTime::infinity());
+        constraint->execute(visitor);
     if (Options::logGC())
         dataLog(" ");
 }
index 40e616f..a7e1849 100644 (file)
 
 namespace JSC {
 
+class Heap;
+class MarkingConstraintSolver;
+
 class MarkingConstraintSet {
 public:
-    MarkingConstraintSet();
+    MarkingConstraintSet(Heap&);
     ~MarkingConstraintSet();
     
     void didStartMarking();
@@ -41,15 +44,9 @@ public:
     void add(
         CString abbreviatedName,
         CString name,
-        ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
-        ConstraintVolatility);
-    
-    void add(
-        CString abbreviatedName,
-        CString name,
-        ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
-        ::Function<double(SlotVisitor&)>,
-        ConstraintVolatility);
+        ::Function<void(SlotVisitor&)>,
+        ConstraintVolatility,
+        ConstraintConcurrency = ConstraintConcurrency::Concurrent);
     
     void add(std::unique_ptr<MarkingConstraint>);
     
@@ -60,21 +57,19 @@ public:
     
     // Returns true if this executed all constraints and none of them produced new work. This
     // assumes that you've already visited roots and drained from there.
-    bool executeConvergence(
-        SlotVisitor&,
-        MonotonicTime timeout = MonotonicTime::infinity());
+    bool executeConvergence(SlotVisitor&);
     
     // Simply runs all constraints without any shenanigans.
     void executeAll(SlotVisitor&);
     
 private:
-    class ExecutionContext;
-    friend class ExecutionContext;
+    friend class MarkingConstraintSolver;
     
-    bool executeConvergenceImpl(SlotVisitor&, MonotonicTime timeout);
+    bool executeConvergenceImpl(SlotVisitor&);
     
-    bool drain(SlotVisitor&, MonotonicTime, BitVector& unexecuted, BitVector& executed, bool& didVisitSomething);
+    bool drain(SlotVisitor&, BitVector& unexecuted, BitVector& executed, bool& didVisitSomething);
     
+    Heap& m_heap;
     BitVector m_unexecutedRoots;
     BitVector m_unexecutedOutgrowths;
     Vector<std::unique_ptr<MarkingConstraint>> m_set;
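Call sites that only need a simple output constraint keep using the lambda-based add(), which now takes just a SlotVisitor& plus an optional concurrency and wraps the lambda in a SimpleMarkingConstraint. A hypothetical registration might look like this (the member names in the body are assumptions, shown for illustration):

    // Illustrative use of the new add() overload.
    m_constraintSet->add(
        "Mr", "My Roots",
        [this] (SlotVisitor& visitor) {
            visitor.appendUnbarriered(m_myRoot); // assumed member holding a cell
        },
        ConstraintVolatility::GreyedByExecution,
        ConstraintConcurrency::Sequential); // omit to default to Concurrent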
diff --git a/Source/JavaScriptCore/heap/MarkingConstraintSolver.cpp b/Source/JavaScriptCore/heap/MarkingConstraintSolver.cpp
new file mode 100644 (file)
index 0000000..2329e6b
--- /dev/null
@@ -0,0 +1,270 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "MarkingConstraintSolver.h"
+
+#include "JSCInlines.h"
+
+namespace JSC { 
+
+MarkingConstraintSolver::MarkingConstraintSolver(MarkingConstraintSet& set)
+    : m_heap(set.m_heap)
+    , m_mainVisitor(m_heap.collectorSlotVisitor())
+    , m_set(set)
+{
+    m_heap.forEachSlotVisitor(
+        [&] (SlotVisitor& visitor) {
+            m_visitCounters.append(VisitCounter(visitor));
+        });
+}
+
+MarkingConstraintSolver::~MarkingConstraintSolver()
+{
+}
+
+bool MarkingConstraintSolver::didVisitSomething() const
+{
+    for (const VisitCounter& visitCounter : m_visitCounters) {
+        if (visitCounter.visitCount())
+            return true;
+    }
+    return false;
+}
+
+void MarkingConstraintSolver::execute(SchedulerPreference preference, ScopedLambda<std::optional<unsigned>()> pickNext)
+{
+    m_pickNextIsStillActive = true;
+    RELEASE_ASSERT(!m_numThreadsThatMayProduceWork);
+    
+    if (Options::useParallelMarkingConstraintSolver()) {
+        if (Options::logGC())
+            dataLog(preference == ParallelWorkFirst ? "P" : "N", "<");
+        
+        m_heap.runFunctionInParallel(
+            [&] (SlotVisitor& visitor) { runExecutionThread(visitor, preference, pickNext); });
+        
+        if (Options::logGC())
+            dataLog(">");
+    } else
+        runExecutionThread(m_mainVisitor, preference, pickNext);
+    
+    RELEASE_ASSERT(!m_pickNextIsStillActive);
+    RELEASE_ASSERT(!m_numThreadsThatMayProduceWork);
+        
+    for (unsigned indexToRun : m_didExecuteInParallel)
+        m_set.m_set[indexToRun]->finishParallelWork(m_mainVisitor);
+    m_didExecuteInParallel.clear();
+    
+    if (!m_toExecuteSequentially.isEmpty()) {
+        for (unsigned indexToRun : m_toExecuteSequentially)
+            execute(*m_set.m_set[indexToRun]);
+        m_toExecuteSequentially.clear();
+    }
+        
+    RELEASE_ASSERT(m_toExecuteInParallel.isEmpty());
+    RELEASE_ASSERT(!m_toExecuteInParallelSet.bitCount());
+}
+
+void MarkingConstraintSolver::drain(BitVector& unexecuted)
+{
+    auto iter = unexecuted.begin();
+    auto end = unexecuted.end();
+    if (iter == end)
+        return;
+    auto pickNext = scopedLambda<std::optional<unsigned>()>(
+        [&] () -> std::optional<unsigned> {
+            if (iter == end)
+                return std::nullopt;
+            return *iter++;
+        });
+    execute(NextConstraintFirst, pickNext);
+    unexecuted.clearAll();
+}
+
+void MarkingConstraintSolver::converge(const Vector<MarkingConstraint*>& order)
+{
+    if (didVisitSomething())
+        return;
+    
+    if (order.isEmpty())
+        return;
+        
+    size_t index = 0;
+
+    // We want to execute the first constraint sequentially if we think it will quickly give us a
+    // result. If we ran it in parallel to other constraints, then we might end up having to wait for
+    // those other constraints to finish, which would be a waste of time since during convergence it's
+    // empirically most optimal to return to draining as soon as a constraint generates work. Most
+    // constraints don't generate any work most of the time, and when they do generate work, they tend
+    // to generate enough of it to feed a decent draining cycle. Therefore, pause times are lowest if
+    // we get the heck out of here as soon as a constraint generates work. I think that part of what
+    // makes this optimal is that we also never abort running a constraint early, so when we do run
+    // one, it has an opportunity to generate as much work as it possibly can.
+    if (order[index]->quickWorkEstimate(m_mainVisitor) > 0.) {
+        execute(*order[index++]);
+        
+        if (m_toExecuteInParallel.isEmpty()
+            && (order.isEmpty() || didVisitSomething()))
+            return;
+    }
+    
+    auto pickNext = scopedLambda<std::optional<unsigned>()>(
+        [&] () -> std::optional<unsigned> {
+            if (didVisitSomething())
+                return std::nullopt;
+            
+            if (index >= order.size())
+                return std::nullopt;
+            
+            MarkingConstraint& constraint = *order[index++];
+            return constraint.index();
+        });
+    
+    execute(ParallelWorkFirst, pickNext);
+}
+
+void MarkingConstraintSolver::execute(MarkingConstraint& constraint)
+{
+    if (m_executed.get(constraint.index()))
+        return;
+    
+    constraint.prepareToExecute(NoLockingNecessary, m_mainVisitor);
+    ConstraintParallelism parallelism = constraint.execute(m_mainVisitor);
+    didExecute(parallelism, constraint.index());
+}
+
+void MarkingConstraintSolver::runExecutionThread(SlotVisitor& visitor, SchedulerPreference preference, ScopedLambda<std::optional<unsigned>()> pickNext)
+{
+    for (;;) {
+        bool doParallelWorkMode;
+        unsigned indexToRun;
+        {
+            auto locker = holdLock(m_lock);
+                        
+            for (;;) {
+                auto tryParallelWork = [&] () -> bool {
+                    if (m_toExecuteInParallel.isEmpty())
+                        return false;
+                    
+                    indexToRun = m_toExecuteInParallel.first();
+                    doParallelWorkMode = true;
+                    return true;
+                };
+                            
+                auto tryNextConstraint = [&] () -> bool {
+                    if (!m_pickNextIsStillActive)
+                        return false;
+                    
+                    for (;;) {
+                        std::optional<unsigned> pickResult = pickNext();
+                        if (!pickResult) {
+                            m_pickNextIsStillActive = false;
+                            return false;
+                        }
+                        
+                        if (m_executed.get(*pickResult))
+                            continue;
+                                    
+                        MarkingConstraint& constraint = *m_set.m_set[*pickResult];
+                        if (constraint.concurrency() == ConstraintConcurrency::Sequential) {
+                            m_toExecuteSequentially.append(*pickResult);
+                            continue;
+                        }
+                        if (constraint.parallelism() == ConstraintParallelism::Parallel)
+                            m_numThreadsThatMayProduceWork++;
+                        indexToRun = *pickResult;
+                        doParallelWorkMode = false;
+                        constraint.prepareToExecute(locker, visitor);
+                        return true;
+                    }
+                };
+                
+                if (preference == ParallelWorkFirst) {
+                    if (tryParallelWork() || tryNextConstraint())
+                        break;
+                } else {
+                    if (tryNextConstraint() || tryParallelWork())
+                        break;
+                }
+                
+                // This means that we have nothing left to run. The only way for us to have more work is
+                // if someone is running a constraint that may produce parallel work.
+                
+                if (!m_numThreadsThatMayProduceWork)
+                    return;
+                
+                // FIXME: Any waiting could be replaced with just running the SlotVisitor.
+                // I wonder if that would be profitable.
+                m_condition.wait(m_lock);
+            }
+        }
+                    
+        ConstraintParallelism parallelism = ConstraintParallelism::Sequential;
+                    
+        MarkingConstraint& constraint = *m_set.m_set[indexToRun];
+                    
+        if (doParallelWorkMode)
+            constraint.doParallelWork(visitor);
+        else
+            parallelism = constraint.execute(visitor);
+                    
+        {
+            auto locker = holdLock(m_lock);
+                        
+            if (doParallelWorkMode) {
+                if (m_toExecuteInParallelSet.get(indexToRun)) {
+                    m_didExecuteInParallel.append(indexToRun);
+                                
+                    m_toExecuteInParallel.takeFirst(
+                        [&] (unsigned value) { return value == indexToRun; });
+                    m_toExecuteInParallelSet.clear(indexToRun);
+                }
+            } else {
+                if (constraint.parallelism() == ConstraintParallelism::Parallel)
+                    m_numThreadsThatMayProduceWork--;
+                m_executed.set(indexToRun);
+                if (parallelism == ConstraintParallelism::Parallel) {
+                    m_toExecuteInParallel.append(indexToRun);
+                    m_toExecuteInParallelSet.set(indexToRun);
+                }
+            }
+                        
+            m_condition.notifyAll();
+        }
+    }
+}
+
+void MarkingConstraintSolver::didExecute(ConstraintParallelism parallelism, unsigned index)
+{
+    m_executed.set(index);
+    if (parallelism == ConstraintParallelism::Parallel) {
+        m_toExecuteInParallel.append(index);
+        m_toExecuteInParallelSet.set(index);
+    }
+}
+
+} // namespace JSC
+
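To make the scheduling above concrete, here is a minimal sketch of a constraint that opts into parallel work. Only the five-argument MarkingConstraint constructor, the executeImpl()/doParallelWorkImpl() overrides, and the ConstraintConcurrency/ConstraintParallelism values reflect what this patch shows; the class name, the ConstraintVolatility value, makeExampleWorkSource(), and m_work are illustrative assumptions.

    class ExampleParallelConstraint : public MarkingConstraint {
    public:
        ExampleParallelConstraint()
            : MarkingConstraint(
                "Ex", "Example", ConstraintVolatility::GreyedByExecution,
                ConstraintConcurrency::Concurrent, ConstraintParallelism::Parallel)
        {
        }
        
    private:
        ConstraintParallelism executeImpl(SlotVisitor&) override
        {
            // Discover a batch of work and ask the solver to fan doParallelWorkImpl()
            // out to the marker threads.
            m_work = makeExampleWorkSource(); // hypothetical work source
            return ConstraintParallelism::Parallel;
        }
        
        void doParallelWorkImpl(SlotVisitor& visitor) override
        {
            // Runs concurrently on every marker thread the solver lends to this
            // constraint, until the shared work source is exhausted.
            m_work->run(visitor);
        }
        
        RefPtr<SharedTask<void(SlotVisitor&)>> m_work;
    };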
diff --git a/Source/JavaScriptCore/heap/MarkingConstraintSolver.h b/Source/JavaScriptCore/heap/MarkingConstraintSolver.h
new file mode 100644 (file)
index 0000000..16c9a56
--- /dev/null
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "VisitCounter.h"
+#include <wtf/BitVector.h>
+#include <wtf/Condition.h>
+#include <wtf/Deque.h>
+#include <wtf/FastMalloc.h>
+#include <wtf/Lock.h>
+#include <wtf/Noncopyable.h>
+#include <wtf/ScopedLambda.h>
+#include <wtf/Vector.h>
+
+namespace JSC {
+
+class Heap;
+class MarkingConstraint;
+class MarkingConstraintSet;
+
+class MarkingConstraintSolver {
+    WTF_MAKE_NONCOPYABLE(MarkingConstraintSolver);
+    WTF_MAKE_FAST_ALLOCATED;
+    
+public:
+    MarkingConstraintSolver(MarkingConstraintSet&);
+    ~MarkingConstraintSolver();
+    
+    bool didVisitSomething() const;
+    
+    enum SchedulerPreference {
+        ParallelWorkFirst,
+        NextConstraintFirst
+    };
+
+    void execute(SchedulerPreference, ScopedLambda<std::optional<unsigned>()> pickNext);
+    
+    void drain(BitVector& unexecuted);
+    
+    void converge(const Vector<MarkingConstraint*>& order);
+    
+    void execute(MarkingConstraint&);
+    
+private:
+    void runExecutionThread(SlotVisitor&, SchedulerPreference, ScopedLambda<std::optional<unsigned>()> pickNext);
+    
+    void didExecute(ConstraintParallelism, unsigned index);
+
+    Heap& m_heap;
+    SlotVisitor& m_mainVisitor;
+    MarkingConstraintSet& m_set;
+    BitVector m_executed;
+    Deque<unsigned, 32> m_toExecuteInParallel;
+    BitVector m_toExecuteInParallelSet;
+    Vector<unsigned, 32> m_didExecuteInParallel;
+    Vector<unsigned, 32> m_toExecuteSequentially;
+    Lock m_lock;
+    Condition m_condition;
+    unsigned m_numThreadsThatMayProduceWork { 0 };
+    bool m_pickNextIsStillActive { true };
+    Vector<VisitCounter, 16> m_visitCounters;
+};
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/ParallelSourceAdapter.h b/Source/JavaScriptCore/heap/ParallelSourceAdapter.h
new file mode 100644 (file)
index 0000000..f4b6456
--- /dev/null
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <wtf/Lock.h>
+#include <wtf/SharedTask.h>
+
+namespace JSC {
+
+template<typename OuterType, typename InnerType, typename UnwrapFunc>
+class ParallelSourceAdapter : public SharedTask<InnerType()> {
+public:
+    ParallelSourceAdapter(RefPtr<SharedTask<OuterType()>> outerSource, const UnwrapFunc& unwrapFunc)
+        : m_outerSource(outerSource)
+        , m_unwrapFunc(unwrapFunc)
+    {
+    }
+    
+    InnerType run() override
+    {
+        auto locker = holdLock(m_lock);
+        do {
+            if (m_innerSource) {
+                if (InnerType result = m_innerSource->run())
+                    return result;
+                m_innerSource = nullptr;
+            }
+            
+            m_innerSource = m_unwrapFunc(m_outerSource->run());
+        } while (m_innerSource);
+        return InnerType();
+    }
+
+private:
+    RefPtr<SharedTask<OuterType()>> m_outerSource;
+    RefPtr<SharedTask<InnerType()>> m_innerSource;
+    UnwrapFunc m_unwrapFunc;
+    Lock m_lock;
+};
+
+template<typename OuterType, typename InnerType, typename UnwrapFunc>
+Ref<ParallelSourceAdapter<OuterType, InnerType, UnwrapFunc>> createParallelSourceAdapter(RefPtr<SharedTask<OuterType()>> outerSource, const UnwrapFunc& unwrapFunc)
+{
+    return adoptRef(*new ParallelSourceAdapter<OuterType, InnerType, UnwrapFunc>(outerSource, unwrapFunc));
+}
+    
+} // namespace JSC
+
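The adapter flattens a source of "outer" items into a source of "inner" items, treating InnerType() as the exhaustion sentinel. A minimal sketch of how it composes with WTF's createSharedTask; the int/batch types are illustrative and not part of this patch (Subspace::parallelNotEmptyMarkedBlockSource() later in the patch is the real client):

    // Outer source: hands out pointers to batches, then nullptr when done.
    Vector<Vector<int>> batches = { { 1, 2 }, { 3 } };
    size_t batchIndex = 0;
    RefPtr<SharedTask<Vector<int>*()>> batchSource = createSharedTask<Vector<int>*()>(
        [&] () -> Vector<int>* {
            return batchIndex < batches.size() ? &batches[batchIndex++] : nullptr;
        });
    
    // Flattened source: any thread may call run(); 0 (i.e. int()) means exhausted.
    auto intSource = createParallelSourceAdapter<Vector<int>*, int>(
        batchSource,
        [] (Vector<int>* batch) -> RefPtr<SharedTask<int()>> {
            if (!batch)
                return nullptr;
            return createSharedTask<int()>(
                [batch, index = static_cast<size_t>(0)] () mutable -> int {
                    return index < batch->size() ? batch->at(index++) : 0;
                });
        });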
diff --git a/Source/JavaScriptCore/heap/SimpleMarkingConstraint.cpp b/Source/JavaScriptCore/heap/SimpleMarkingConstraint.cpp
new file mode 100644 (file)
index 0000000..0d80e3f
--- /dev/null
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "SimpleMarkingConstraint.h"
+
+namespace JSC {
+
+SimpleMarkingConstraint::SimpleMarkingConstraint(
+    CString abbreviatedName, CString name,
+    ::Function<void(SlotVisitor&)> executeFunction,
+    ConstraintVolatility volatility, ConstraintConcurrency concurrency)
+    : MarkingConstraint(WTFMove(abbreviatedName), WTFMove(name), volatility, concurrency, ConstraintParallelism::Sequential)
+    , m_executeFunction(WTFMove(executeFunction))
+{
+}
+
+SimpleMarkingConstraint::~SimpleMarkingConstraint()
+{
+}
+
+ConstraintParallelism SimpleMarkingConstraint::executeImpl(SlotVisitor& visitor)
+{
+    m_executeFunction(visitor);
+    return ConstraintParallelism::Sequential;
+}
+
+} // namespace JSC
+
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 #pragma once
 
-#include <wtf/HashSet.h>
+#include "MarkingConstraint.h"
+#include <wtf/Function.h>
 
 namespace JSC {
 
-class OpaqueRootSet {
-    WTF_MAKE_NONCOPYABLE(OpaqueRootSet);
+// This allows for an informal way to define constraints. Just pass a lambda to the constructor. The only
+// downside is that this makes it hard for constraints to be stateful, which is necessary for them to be
+// parallel. In those cases, it's easier to just subclass MarkingConstraint.
+class SimpleMarkingConstraint : public MarkingConstraint {
 public:
-    OpaqueRootSet()
-        : m_lastQueriedRoot(nullptr)
-        , m_containsLastQueriedRoot(false)
-    {
-    }
-
-    bool contains(void* root) const
-    {
-        if (root != m_lastQueriedRoot) {
-            m_lastQueriedRoot = root;
-            m_containsLastQueriedRoot = m_roots.contains(root);
-        }
-        return m_containsLastQueriedRoot;
-    }
-
-    bool isEmpty() const
-    {
-        return m_roots.isEmpty();
-    }
-
-    void clear()
-    {
-        m_roots.clear();
-        m_lastQueriedRoot = nullptr;
-        m_containsLastQueriedRoot = false;
-    }
-
-    bool add(void* root)
-    {
-        if (root == m_lastQueriedRoot)
-            m_containsLastQueriedRoot = true;
-        return m_roots.add(root).isNewEntry;
-    }
-
-    int size() const
-    {
-        return m_roots.size();
-    }
-
-    HashSet<void*>::const_iterator begin() const
-    {
-        return m_roots.begin();
-    }
-
-    HashSet<void*>::const_iterator end() const
-    {
-        return m_roots.end();
-    }
-
-
+    JS_EXPORT_PRIVATE SimpleMarkingConstraint(
+        CString abbreviatedName, CString name,
+        ::Function<void(SlotVisitor&)>,
+        ConstraintVolatility,
+        ConstraintConcurrency = ConstraintConcurrency::Concurrent);
+    
+    JS_EXPORT_PRIVATE ~SimpleMarkingConstraint();
+    
 private:
-    HashSet<void*> m_roots;
-    mutable void* m_lastQueriedRoot;
-    mutable bool m_containsLastQueriedRoot;
+    ConstraintParallelism executeImpl(SlotVisitor&) override;
+
+    ::Function<void(SlotVisitor&)> m_executeFunction;
 };
 
 } // namespace JSC
+
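A typical registration, sketched: the lambda just visits whatever the constraint is responsible for keeping alive. The set->add(...) call, someNativeRoot, and the ConstraintVolatility value are placeholders rather than APIs introduced here; concurrency defaults to ConstraintConcurrency::Concurrent per the constructor above.

    set->add(std::make_unique<SimpleMarkingConstraint>(
        "Ex", "Example external roots",
        [] (SlotVisitor& visitor) {
            // Publish an opaque root or append cells that must stay alive.
            visitor.addOpaqueRoot(someNativeRoot);
        },
        ConstraintVolatility::GreyedByExecution));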
index 1785af6..40ff95c 100644 (file)
@@ -98,9 +98,7 @@ SlotVisitor::~SlotVisitor()
 
 void SlotVisitor::didStartMarking()
 {
-    if (heap()->collectionScope() == CollectionScope::Full)
-        RELEASE_ASSERT(m_opaqueRoots.isEmpty()); // Should have merged by now.
-    else
+    if (heap()->collectionScope() == CollectionScope::Eden)
         reset();
 
     if (HeapProfiler* heapProfiler = vm().heapProfiler())
@@ -111,7 +109,6 @@ void SlotVisitor::didStartMarking()
 
 void SlotVisitor::reset()
 {
-    RELEASE_ASSERT(!m_opaqueRoots.size());
     m_bytesVisited = 0;
     m_visitCount = 0;
     m_heapSnapshotBuilder = nullptr;
@@ -277,7 +274,7 @@ void SlotVisitor::appendToMarkStack(JSCell* cell)
 template<typename ContainerType>
 ALWAYS_INLINE void SlotVisitor::appendToMarkStack(ContainerType& container, JSCell* cell)
 {
-    ASSERT(Heap::isMarkedConcurrently(cell));
+    ASSERT(Heap::isMarked(cell));
     ASSERT(!cell->isZapped());
     
     container.noteMarked();
@@ -346,7 +343,7 @@ private:
 
 ALWAYS_INLINE void SlotVisitor::visitChildren(const JSCell* cell)
 {
-    ASSERT(Heap::isMarkedConcurrently(cell));
+    ASSERT(Heap::isMarked(cell));
     
     SetCurrentCellScope currentCellScope(*this, cell);
     
@@ -435,7 +432,7 @@ void SlotVisitor::donateKnownParallel()
 
 void SlotVisitor::updateMutatorIsStopped(const AbstractLocker&)
 {
-    m_mutatorIsStopped = (m_heap.collectorBelievesThatTheWorldIsStopped() & m_canOptimizeForStoppedMutator);
+    m_mutatorIsStopped = (m_heap.worldIsStopped() & m_canOptimizeForStoppedMutator);
 }
 
 void SlotVisitor::updateMutatorIsStopped()
@@ -452,7 +449,7 @@ bool SlotVisitor::hasAcknowledgedThatTheMutatorIsResumed() const
 
 bool SlotVisitor::mutatorIsStoppedIsUpToDate() const
 {
-    return m_mutatorIsStopped == (m_heap.collectorBelievesThatTheWorldIsStopped() & m_canOptimizeForStoppedMutator);
+    return m_mutatorIsStopped == (m_heap.worldIsStopped() & m_canOptimizeForStoppedMutator);
 }
 
 void SlotVisitor::optimizeForStoppedMutator()
@@ -490,8 +487,6 @@ NEVER_INLINE void SlotVisitor::drain(MonotonicTime timeout)
         m_rightToRun.safepoint();
         donateKnownParallel();
     }
-    
-    mergeIfNecessary();
 }
 
 size_t SlotVisitor::performIncrementOfDraining(size_t bytesRequested)
@@ -550,7 +545,6 @@ size_t SlotVisitor::performIncrementOfDraining(size_t bytesRequested)
     }
 
     donateAll();
-    mergeIfNecessary();
 
     return bytesVisited();
 }
 
     return bytesVisited();
 }
@@ -561,17 +555,16 @@ bool SlotVisitor::didReachTermination()
     return didReachTermination(locker);
 }
 
-bool SlotVisitor::didReachTermination(const AbstractLocker&)
+bool SlotVisitor::didReachTermination(const AbstractLocker& locker)
 {
-    return isEmpty()
-        && !m_heap.m_numberOfActiveParallelMarkers
-        && m_heap.m_sharedCollectorMarkStack->isEmpty()
-        && m_heap.m_sharedMutatorMarkStack->isEmpty();
+    return !m_heap.m_numberOfActiveParallelMarkers
+        && !hasWork(locker);
 }
 
 bool SlotVisitor::hasWork(const AbstractLocker&)
 {
-    return !m_heap.m_sharedCollectorMarkStack->isEmpty()
+    return !isEmpty()
+        || !m_heap.m_sharedCollectorMarkStack->isEmpty()
         || !m_heap.m_sharedMutatorMarkStack->isEmpty();
 }
 
@@ -583,12 +576,14 @@ NEVER_INLINE SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedD
 
     bool isActive = false;
     while (true) {
+        RefPtr<SharedTask<void(SlotVisitor&)>> bonusTask;
+        
         {
-            LockHolder locker(m_heap.m_markingMutex);
+            auto locker = holdLock(m_heap.m_markingMutex);
             if (isActive)
                 m_heap.m_numberOfActiveParallelMarkers--;
             m_heap.m_numberOfWaitingParallelMarkers++;
-
+            
             if (sharedDrainMode == MasterDrain) {
                 while (true) {
                     if (hasElapsed(timeout))
@@ -629,28 +624,51 @@ NEVER_INLINE SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedD
 
                 auto isReady = [&] () -> bool {
                     return hasWork(locker)
+                        || m_heap.m_bonusVisitorTask
                         || m_heap.m_parallelMarkersShouldExit;
                 };
 
                 m_heap.m_markingConditionVariable.waitUntil(m_heap.m_markingMutex, timeout, isReady);
                 
+                if (!hasWork(locker)
+                    && m_heap.m_bonusVisitorTask)
+                    bonusTask = m_heap.m_bonusVisitorTask;
+                
                 if (m_heap.m_parallelMarkersShouldExit)
                     return SharedDrainResult::Done;
             }
-
-            forEachMarkStack(
-                [&] (MarkStackArray& stack) -> IterationStatus {
-                    stack.stealSomeCellsFrom(
-                        correspondingGlobalStack(stack),
-                        m_heap.m_numberOfWaitingParallelMarkers);
-                    return IterationStatus::Continue;
-                });
+            
+            if (!bonusTask && isEmpty()) {
+                forEachMarkStack(
+                    [&] (MarkStackArray& stack) -> IterationStatus {
+                        stack.stealSomeCellsFrom(
+                            correspondingGlobalStack(stack),
+                            m_heap.m_numberOfWaitingParallelMarkers);
+                        return IterationStatus::Continue;
+                    });
+            }
 
             m_heap.m_numberOfActiveParallelMarkers++;
             m_heap.m_numberOfWaitingParallelMarkers--;
         }
         
-        drain(timeout);
+        if (bonusTask) {
+            bonusTask->run(*this);
+            
+            // The main thread could still be running, and may run for a while. Unless we clear the task
+            // ourselves, we will keep looping around trying to run the task.
+            {
+                auto locker = holdLock(m_heap.m_markingMutex);
+                if (m_heap.m_bonusVisitorTask == bonusTask)
+                    m_heap.m_bonusVisitorTask = nullptr;
+                bonusTask = nullptr;
+                m_heap.m_markingConditionVariable.notifyAll();
+            }
+        } else {
+            RELEASE_ASSERT(!isEmpty());
+            drain(timeout);
+        }
+        
         isActive = true;
     }
 }
@@ -670,15 +688,19 @@ SlotVisitor::SharedDrainResult SlotVisitor::drainInParallelPassively(MonotonicTi
     if (Options::numberOfGCMarkers() == 1
         || (m_heap.m_worldState.load() & Heap::mutatorWaitingBit)
         || !m_heap.hasHeapAccess()
-        || m_heap.collectorBelievesThatTheWorldIsStopped()) {
+        || m_heap.worldIsStopped()) {
         // This is an optimization over drainInParallel() when we have a concurrent mutator but
         // otherwise it is not profitable.
         return drainInParallel(timeout);
     }
 
-    LockHolder locker(m_heap.m_markingMutex);
-    donateAll(locker);
-    
+    donateAll(holdLock(m_heap.m_markingMutex));
+    return waitForTermination(timeout);
+}
+
+SlotVisitor::SharedDrainResult SlotVisitor::waitForTermination(MonotonicTime timeout)
+{
+    auto locker = holdLock(m_heap.m_markingMutex);
     for (;;) {
         if (hasElapsed(timeout))
             return SharedDrainResult::TimedOut;
@@ -711,61 +733,6 @@ void SlotVisitor::donateAll(const AbstractLocker&)
     m_heap.m_markingConditionVariable.notifyAll();
 }
 
-void SlotVisitor::addOpaqueRoot(void* root)
-{
-    if (!root)
-        return;
-    
-    if (m_ignoreNewOpaqueRoots)
-        return;
-    
-    if (Options::numberOfGCMarkers() == 1) {
-        // Put directly into the shared HashSet.
-        m_heap.m_opaqueRoots.add(root);
-        return;
-    }
-    // Put into the local set, but merge with the shared one every once in
-    // a while to make sure that the local sets don't grow too large.
-    mergeOpaqueRootsIfProfitable();
-    m_opaqueRoots.add(root);
-}
-
-bool SlotVisitor::containsOpaqueRoot(void* root) const
-{
-    if (!root)
-        return false;
-    
-    ASSERT(!m_isInParallelMode);
-    return m_heap.m_opaqueRoots.contains(root);
-}
-
-TriState SlotVisitor::containsOpaqueRootTriState(void* root) const
-{
-    if (!root)
-        return FalseTriState;
-    
-    if (m_opaqueRoots.contains(root))
-        return TrueTriState;
-    std::lock_guard<Lock> lock(m_heap.m_opaqueRootsMutex);
-    if (m_heap.m_opaqueRoots.contains(root))
-        return TrueTriState;
-    return MixedTriState;
-}
-
-void SlotVisitor::mergeIfNecessary()
-{
-    if (m_opaqueRoots.isEmpty())
-        return;
-    mergeOpaqueRoots();
-}
-
-void SlotVisitor::mergeOpaqueRootsIfProfitable()
-{
-    if (static_cast<unsigned>(m_opaqueRoots.size()) < Options::opaqueRootMergeThreshold())
-        return;
-    mergeOpaqueRoots();
-}
-    
 void SlotVisitor::donate()
 {
     if (!m_isInParallelMode) {
@@ -785,16 +752,6 @@ void SlotVisitor::donateAndDrain(MonotonicTime timeout)
     drain(timeout);
 }
 
-void SlotVisitor::mergeOpaqueRoots()
-{
-    {
-        std::lock_guard<Lock> lock(m_heap.m_opaqueRootsMutex);
-        for (auto* root : m_opaqueRoots)
-            m_heap.m_opaqueRoots.add(root);
-    }
-    m_opaqueRoots.clear();
-}
-
 void SlotVisitor::addWeakReferenceHarvester(WeakReferenceHarvester* weakReferenceHarvester)
 {
     m_heap.m_weakReferenceHarvesters.addThreadSafe(weakReferenceHarvester);
index fe4a44d..d6f5665 100644 (file)
@@ -28,7 +28,6 @@
 #include "HandleTypes.h"
 #include "IterationStatus.h"
 #include "MarkStack.h"
-#include "OpaqueRootSet.h"
 #include "VisitRaceKey.h"
 #include <wtf/MonotonicTime.h>
 #include <wtf/text/CString.h>
@@ -95,10 +94,9 @@ public:
     void appendHiddenUnbarriered(JSValue);
     void appendHiddenUnbarriered(JSCell*);
 
-    JS_EXPORT_PRIVATE void addOpaqueRoot(void*);
+    bool addOpaqueRoot(void*); // Returns true if the root was new.
     
     
-    JS_EXPORT_PRIVATE bool containsOpaqueRoot(void*) const;
-    TriState containsOpaqueRootTriState(void*) const;
+    bool containsOpaqueRoot(void*) const;
 
     bool isEmpty() { return m_collectorStack.isEmpty() && m_mutatorStack.isEmpty(); }
 
@@ -121,6 +119,8 @@ public:
 
     SharedDrainResult drainInParallel(MonotonicTime timeout = MonotonicTime::infinity());
     SharedDrainResult drainInParallelPassively(MonotonicTime timeout = MonotonicTime::infinity());
+    
+    SharedDrainResult waitForTermination(MonotonicTime timeout = MonotonicTime::infinity());
 
     // Attempts to perform an increment of draining that involves only walking `bytes` worth of data. This
     // is likely to accidentally walk more or less than that. It will usually mark more than bytes. It may
@@ -128,8 +128,6 @@ public:
     // rare cases happen temporarily even if we're not reaching termination).
     size_t performIncrementOfDraining(size_t bytes);
     
-    JS_EXPORT_PRIVATE void mergeIfNecessary();
-
     // This informs the GC about auxiliary of some size that we are keeping alive. If you don't do
     // this then the space will be freed at end of GC.
     void markAuxiliary(const void* base);
@@ -194,10 +192,6 @@ private:
     
     void noteLiveAuxiliaryCell(HeapCell*);
     
-    void mergeOpaqueRoots();
-
-    void mergeOpaqueRootsIfProfitable();
-
     void visitChildren(const JSCell*);
     
     void donateKnownParallel();
@@ -215,7 +209,6 @@ private:
 
     MarkStackArray m_collectorStack;
     MarkStackArray m_mutatorStack;
-    OpaqueRootSet m_opaqueRoots; // Handle-owning data structures not visible to the garbage collector.
     bool m_ignoreNewOpaqueRoots { false }; // Useful as a debugging mode.
     
     size_t m_bytesVisited;
index 95591c3..ab09893 100644 (file)
@@ -47,7 +47,6 @@ ALWAYS_INLINE void SlotVisitor::appendUnbarriered(JSCell* cell)
     
     Dependency dependency;
     if (UNLIKELY(cell->isLargeAllocation())) {
-        dependency = nullDependency();
         if (LIKELY(cell->largeAllocation().isMarked())) {
             if (LIKELY(!m_heapSnapshotBuilder))
                 return;
@@ -86,7 +85,6 @@ ALWAYS_INLINE void SlotVisitor::appendHiddenUnbarriered(JSCell* cell)
     
     Dependency dependency;
     if (UNLIKELY(cell->isLargeAllocation())) {
-        dependency = nullDependency();
         if (LIKELY(cell->largeAllocation().isMarked()))
             return;
     } else {
@@ -136,6 +134,23 @@ ALWAYS_INLINE void SlotVisitor::appendValuesHidden(const WriteBarrierBase<Unknow
         appendHidden(barriers[i]);
 }
 
+inline bool SlotVisitor::addOpaqueRoot(void* ptr)
+{
+    if (!ptr)
+        return false;
+    if (m_ignoreNewOpaqueRoots)
+        return false;
+    if (!heap()->m_opaqueRoots.add(ptr))
+        return false;
+    m_visitCount++;
+    return true;
+}
+
+inline bool SlotVisitor::containsOpaqueRoot(void* ptr) const
+{
+    return heap()->m_opaqueRoots.contains(ptr);
+}
+
 inline void SlotVisitor::reportExtraMemoryVisited(size_t size)
 {
     if (m_isFirstVisit) {
@@ -159,12 +174,12 @@ inline Heap* SlotVisitor::heap() const
 
 inline VM& SlotVisitor::vm()
 {
-    return *m_heap.m_vm;
+    return *m_heap.vm();
 }
 
 inline const VM& SlotVisitor::vm() const
 {
-    return *m_heap.m_vm;
+    return *m_heap.vm();
 }
 
 template<typename Func>
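With opaque roots now living in one shared set on the Heap, the visitor-side protocol reduces to two calls; a small sketch (nativeRoot is a hypothetical void*):

    // During marking, e.g. from a constraint or a visitChildren implementation:
    // publish the root. The visit-count bump on a successful add is what lets the
    // constraint solver observe that progress was made.
    bool wasNew = visitor.addOpaqueRoot(nativeRoot);
    
    // During weak reference processing, e.g. from a WeakHandleOwner: query the set.
    bool keepAlive = visitor.containsOpaqueRoot(nativeRoot);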
index 6247da6..598cf02 100644 (file)
@@ -31,6 +31,7 @@
 #include "JSCInlines.h"
 #include "MarkedAllocatorInlines.h"
 #include "MarkedBlockInlines.h"
+#include "ParallelSourceAdapter.h"
 #include "PreventCollectionScope.h"
 #include "SubspaceInlines.h"
 
@@ -88,5 +89,42 @@ MarkedBlock::Handle* Subspace::findEmptyBlockToSteal()
     return nullptr;
 }
 
+RefPtr<SharedTask<MarkedAllocator*()>> Subspace::parallelAllocatorSource()
+{
+    class Task : public SharedTask<MarkedAllocator*()> {
+    public:
+        Task(MarkedAllocator* allocator)
+            : m_allocator(allocator)
+        {
+        }
+        
+        MarkedAllocator* run() override
+        {
+            auto locker = holdLock(m_lock);
+            MarkedAllocator* result = m_allocator;
+            if (result)
+                m_allocator = result->nextAllocatorInSubspace();
+            return result;
+        }
+        
+    private:
+        MarkedAllocator* m_allocator;
+        Lock m_lock;
+    };
+    
+    return adoptRef(new Task(m_firstAllocator));
+}
+
+RefPtr<SharedTask<MarkedBlock::Handle*()>> Subspace::parallelNotEmptyMarkedBlockSource()
+{
+    return createParallelSourceAdapter<MarkedAllocator*, MarkedBlock::Handle*>(
+        parallelAllocatorSource(),
+        [] (MarkedAllocator* allocator) -> RefPtr<SharedTask<MarkedBlock::Handle*()>> {
+            if (!allocator)
+                return nullptr;
+            return allocator->parallelNotEmptyBlockSource();
+        });
+}
+
 } // namespace JSC
 
index ffcbe85..7aaa1ea 100644 (file)
@@ -69,17 +69,24 @@ public:
     template<typename Func>
     void forEachAllocator(const Func&);
     
+    RefPtr<SharedTask<MarkedAllocator*()>> parallelAllocatorSource();
+    
     template<typename Func>
     void forEachMarkedBlock(const Func&);
     
     template<typename Func>
     void forEachNotEmptyMarkedBlock(const Func&);
     
+    JS_EXPORT_PRIVATE RefPtr<SharedTask<MarkedBlock::Handle*()>> parallelNotEmptyMarkedBlockSource();
+    
     template<typename Func>
     void forEachLargeAllocation(const Func&);
     
     template<typename Func>
     void forEachMarkedCell(const Func&);
+    
+    template<typename Func>
+    RefPtr<SharedTask<void(SlotVisitor&)>> forEachMarkedCellInParallel(const Func&);
 
     template<typename Func>
     void forEachLiveCell(const Func&);
index ce817f8..3210d99 100644 (file)
@@ -84,6 +84,53 @@ void Subspace::forEachMarkedCell(const Func& func)
 }
 
 template<typename Func>
+RefPtr<SharedTask<void(SlotVisitor&)>> Subspace::forEachMarkedCellInParallel(const Func& func)
+{
+    class Task : public SharedTask<void(SlotVisitor&)> {
+    public:
+        Task(Subspace& subspace, const Func& func)
+            : m_subspace(subspace)
+            , m_blockSource(subspace.parallelNotEmptyMarkedBlockSource())
+            , m_func(func)
+        {
+        }
+        
+        void run(SlotVisitor& visitor) override
+        {
+            while (MarkedBlock::Handle* handle = m_blockSource->run()) {
+                handle->forEachMarkedCell(
+                    [&] (HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
+                        m_func(visitor, cell, kind);
+                        return IterationStatus::Continue;
+                    });
+            }
+            
+            {
+                auto locker = holdLock(m_lock);
+                if (!m_needToVisitLargeAllocations)
+                    return;
+                m_needToVisitLargeAllocations = false;
+            }
+            
+            m_subspace.forEachLargeAllocation(
+                [&] (LargeAllocation* allocation) {
+                    if (allocation->isMarked())
+                        m_func(visitor, allocation->cell(), m_subspace.m_attributes.cellKind);
+                });
+        }
+        
+    private:
+        Subspace& m_subspace;
+        RefPtr<SharedTask<MarkedBlock::Handle*()>> m_blockSource;
+        Func m_func;
+        Lock m_lock;
+        bool m_needToVisitLargeAllocations { true };
+    };
+    
+    return adoptRef(new Task(*this, func));
+}
+
+template<typename Func>
 void Subspace::forEachLiveCell(const Func& func)
 {
     forEachMarkedBlock(
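The SharedTask returned above is the kind of payload that can be run from a parallel constraint's doParallelWorkImpl() or installed as the bonus visitor task handled in the SlotVisitor changes earlier. A sketch, assuming a surrounding constraint with a subspace reference:

    // e.g. in executeImpl(): build the task once.
    auto task = subspace.forEachMarkedCellInParallel(
        [] (SlotVisitor& visitor, HeapCell* cell, HeapCell::Kind) {
            // Runs on whichever marker thread pulled this cell's block.
        });
    
    // e.g. in doParallelWorkImpl(): each participating marker thread drains it.
    task->run(visitor);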
similarity index 65%
rename from Source/JavaScriptCore/heap/VisitingTimeout.h
rename to Source/JavaScriptCore/heap/VisitCounter.h
index 6097662..22e9582 100644 (file)
 #pragma once
 
 #include "SlotVisitor.h"
-#include <wtf/TimeWithDynamicClockType.h>
 
 namespace JSC {
 
-class VisitingTimeout {
+class VisitCounter {
 public:
-    VisitingTimeout()
-    {
-    }
+    VisitCounter() { }
     
-    VisitingTimeout(SlotVisitor& visitor, bool didVisitSomething, MonotonicTime timeout)
-        : m_didVisitSomething(didVisitSomething)
-        , m_visitCountBefore(visitor.visitCount())
-        , m_timeout(timeout)
+    VisitCounter(SlotVisitor& visitor)
+        : m_visitor(&visitor)
+        , m_initialVisitCount(visitor.visitCount())
     {
     }
     
-    size_t visitCount(SlotVisitor& visitor) const
-    {
-        return visitor.visitCount() - m_visitCountBefore;
-    }
-
-    bool didVisitSomething(SlotVisitor& visitor) const
-    {
-        return m_didVisitSomething || visitCount(visitor);
-    }
+    SlotVisitor& visitor() const { return *m_visitor; }
     
-    bool shouldTimeOut(SlotVisitor& visitor) const
+    size_t visitCount() const
     {
-        return didVisitSomething(visitor) && hasElapsed(m_timeout);
+        return m_visitor->visitCount() - m_initialVisitCount;
     }
     
 private:
-    bool m_didVisitSomething { false };
-    size_t m_visitCountBefore { 0 };
-    MonotonicTime m_timeout;
+    SlotVisitor* m_visitor { nullptr };
+    size_t m_initialVisitCount { 0 };
 };
 
 } // namespace JSC
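VisitCounter is the helper behind the solver's m_visitCounters (see MarkingConstraintSolver.h above): it snapshots a visitor's visit count so later work can be attributed to it. A minimal usage sketch; doSomeMarkingWork() is hypothetical:

    VisitCounter counter(visitor);
    doSomeMarkingWork(visitor); // anything that may append cells or add opaque roots
    if (counter.visitCount())
        dataLog("generated ", counter.visitCount(), " visits\n");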
index 0ac3180..f328f96 100644 (file)
@@ -111,7 +111,7 @@ void WeakBlock::specializedVisit(ContainerType& container, SlotVisitor& visitor)
             continue;
 
         JSValue jsValue = weakImpl->jsValue();
-        if (container.isMarkedConcurrently(markingVersion, jsValue.asCell()))
+        if (container.isMarked(markingVersion, jsValue.asCell()))
             continue;
         
         if (!weakHandleOwner->isReachableFromOpaqueRoots(Handle<Unknown>::wrapSlot(&const_cast<JSValue&>(jsValue)), weakImpl->context(), visitor))
index bf4dd6a..17b7f03 100644 (file)
@@ -382,7 +382,7 @@ ALWAYS_INLINE Structure* JSObject::visitButterflyImpl(SlotVisitor& visitor)
     structure = vm.getStructure(structureID);
     lastOffset = structure->lastOffset();
     IndexingType indexingType = structure->indexingType();
-    Dependency indexingTypeDependency = dependency(indexingType);
+    Dependency indexingTypeDependency = Dependency::fence(indexingType);
     Locker<JSCell> locker(NoLockingNecessary);
     switch (indexingType) {
     case ALL_CONTIGUOUS_INDEXING_TYPES:
@@ -396,13 +396,13 @@ ALWAYS_INLINE Structure* JSObject::visitButterflyImpl(SlotVisitor& visitor)
     default:
         break;
     }
-    butterfly = consume(this, indexingTypeDependency)->butterfly();
-    Dependency butterflyDependency = dependency(butterfly);
+    butterfly = indexingTypeDependency.consume(this)->butterfly();
+    Dependency butterflyDependency = Dependency::fence(butterfly);
     if (!butterfly)
         return structure;
     if (!butterfly)
         return structure;
-    if (consume(this, butterflyDependency)->structureID() != structureID)
+    if (butterflyDependency.consume(this)->structureID() != structureID)
         return nullptr;
         return nullptr;
-    if (consume(structure, butterflyDependency)->lastOffset() != lastOffset)
+    if (butterflyDependency.consume(structure)->lastOffset() != lastOffset)
         return nullptr;
     
     markAuxiliaryAndVisitOutOfLineProperties(visitor, butterfly, structure, lastOffset);
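For reference, the reshaped Dependency API used here: Dependency::fence(value) folds a loaded value into a dependency token, and token.consume(pointer) threads that token into the next dereference, so the second load is ordered after the first on weakly ordered CPUs. A standalone sketch; Thing, m_header, Payload, and m_payload are hypothetical:

    unsigned header = thing->m_header;
    Dependency dependency = Dependency::fence(header);
    Payload* payload = dependency.consume(thing)->m_payload;
    // payload is now ordered after the m_header load, like the butterfly loads above.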
index 78e77ad..b207799 100644 (file)
@@ -354,6 +354,7 @@ constexpr bool enableAsyncIteration = false;
     \
     v(unsigned, minimumNumberOfScansBetweenRebalance, 100, Normal, nullptr) \
     v(unsigned, numberOfGCMarkers, computeNumberOfGCMarkers(8), Normal, nullptr) \
+    v(bool, useParallelMarkingConstraintSolver, true, Normal, nullptr) \
     v(unsigned, opaqueRootMergeThreshold, 1000, Normal, nullptr) \
     v(double, minHeapUtilization, 0.8, Normal, nullptr) \
     v(double, minMarkedBlockUtilization, 0.9, Normal, nullptr) \
index 4075fd7..1f25aea 100644 (file)
@@ -1101,14 +1101,14 @@ bool Structure::isCheapDuringGC()
     // has any large property names.
     // https://bugs.webkit.org/show_bug.cgi?id=157334
     
-    return (!m_globalObject || Heap::isMarkedConcurrently(m_globalObject.get()))
-        && (hasPolyProto() || !storedPrototypeObject() || Heap::isMarkedConcurrently(storedPrototypeObject()));
+    return (!m_globalObject || Heap::isMarked(m_globalObject.get()))
+        && (hasPolyProto() || !storedPrototypeObject() || Heap::isMarked(storedPrototypeObject()));
 }
 
 bool Structure::markIfCheap(SlotVisitor& visitor)
 {
     if (!isCheapDuringGC())
-        return Heap::isMarkedConcurrently(this);
+        return Heap::isMarked(this);
     
     visitor.appendUnbarriered(this);
     return true;
index c406bd7..215f178 100644 (file)
@@ -1,3 +1,83 @@
+2017-12-01  Filip Pizlo  <fpizlo@apple.com>
+
+        GC constraint solving should be parallel
+        https://bugs.webkit.org/show_bug.cgi?id=179934
+
+        Reviewed by JF Bastien.
+        
+        This makes some changes to make parallel constraint solving easier:
+        
+        - I finally removed dependencyWith. This was a silly construct whose only purpose was to confuse
+          people about what it means to have a dependency chain. I took that as an opportunity to greatly
+          simplify the GC's use of dependency chaining.
+        
+        - Added more logic to Deque<>, since I use it for part of the load balancer.
+        
+        - Made it possible to profile lock contention. See
+          https://bugs.webkit.org/show_bug.cgi?id=180250#c0 for some preliminary measurements.
+        
+        - Introduced holdLockIf, which makes it easy to perform predicated lock acquisition. We use that
+          to pick a lock in WebCore.
+        
+        - Introduced CountingLock. It's like WTF::Lock except it also enables optimistic read transactions
+          sorta like Java's StampedLock.
+
+        * WTF.xcodeproj/project.pbxproj:
+        * wtf/Atomics.h:
+        (WTF::dependency):
+        (WTF::DependencyWith::DependencyWith): Deleted.
+        (WTF::dependencyWith): Deleted.
+        * wtf/BitVector.h:
+        (WTF::BitVector::iterator::operator++):
+        * wtf/CMakeLists.txt:
+        * wtf/ConcurrentPtrHashSet.cpp: Added.
+        (WTF::ConcurrentPtrHashSet::ConcurrentPtrHashSet):
+        (WTF::ConcurrentPtrHashSet::~ConcurrentPtrHashSet):
+        (WTF::ConcurrentPtrHashSet::deleteOldTables):
+        (WTF::ConcurrentPtrHashSet::clear):
+        (WTF::ConcurrentPtrHashSet::initialize):
+        (WTF::ConcurrentPtrHashSet::addSlow):
+        (WTF::ConcurrentPtrHashSet::resizeIfNecessary):
+        (WTF::ConcurrentPtrHashSet::resizeAndAdd):
+        (WTF::ConcurrentPtrHashSet::Table::create):
+        * wtf/ConcurrentPtrHashSet.h: Added.
+        (WTF::ConcurrentPtrHashSet::contains):
+        (WTF::ConcurrentPtrHashSet::add):
+        (WTF::ConcurrentPtrHashSet::size const):
+        (WTF::ConcurrentPtrHashSet::Table::maxLoad const):
+        (WTF::ConcurrentPtrHashSet::hash):
+        (WTF::ConcurrentPtrHashSet::cast):
+        (WTF::ConcurrentPtrHashSet::containsImpl const):
+        (WTF::ConcurrentPtrHashSet::addImpl):
+        * wtf/Deque.h:
+        (WTF::inlineCapacity>::takeFirst):
+        * wtf/FastMalloc.h:
+        * wtf/Lock.cpp:
+        (WTF::LockBase::lockSlow):
+        * wtf/Locker.h:
+        (WTF::holdLockIf):
+        * wtf/ScopedLambda.h:
+        * wtf/SharedTask.h:
+        (WTF::SharedTask<PassedResultType):
+        (WTF::SharedTask<ResultType): Deleted.
+        * wtf/StackShot.h: Added.
+        (WTF::StackShot::StackShot):
+        (WTF::StackShot::operator=):
+        (WTF::StackShot::array const):
+        (WTF::StackShot::size const):
+        (WTF::StackShot::operator bool const):
+        (WTF::StackShot::operator== const):
+        (WTF::StackShot::hash const):
+        (WTF::StackShot::isHashTableDeletedValue const):
+        (WTF::StackShot::operator> const):
+        (WTF::StackShot::deletedValueArray):
+        (WTF::StackShotHash::hash):
+        (WTF::StackShotHash::equal):
+        * wtf/StackShotProfiler.h: Added.
+        (WTF::StackShotProfiler::StackShotProfiler):
+        (WTF::StackShotProfiler::profile):
+        (WTF::StackShotProfiler::run):
+
 2017-12-05  Yusuke Suzuki  <utatane.tea@gmail.com>
 
         [WTF] Use m_suspendCount instead of m_suspended flag in Thread
index d89b2fb..d0c9c73 100644 (file)
@@ -23,6 +23,7 @@
 /* Begin PBXBuildFile section */
                0F0F526B1F421FF8004A452C /* StringMalloc.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0F52691F421FF8004A452C /* StringMalloc.cpp */; };
                0F30BA901E78708E002CA847 /* GlobalVersion.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F30BA8A1E78708E002CA847 /* GlobalVersion.cpp */; };
+               0F30CB5A1FCDF134004B5323 /* ConcurrentPtrHashSet.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F30CB581FCDF133004B5323 /* ConcurrentPtrHashSet.cpp */; };
                0F43D8F11DB5ADDC00108FB6 /* AutomaticThread.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F43D8EF1DB5ADDC00108FB6 /* AutomaticThread.cpp */; };
                0F5BF1761F23D49A0029D91D /* Gigacage.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1741F23D49A0029D91D /* Gigacage.cpp */; };
                0F60F32F1DFCBD1B00416D6C /* LockedPrintStream.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F60F32D1DFCBD1B00416D6C /* LockedPrintStream.cpp */; };
@@ -34,6 +35,7 @@
                0F7075F51FBF53CD00489AF0 /* TimingScope.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7075F41FBF537A00489AF0 /* TimingScope.cpp */; };
                0F7C5FB61D885CF20044F5E2 /* FastBitVector.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7C5FB51D885CF20044F5E2 /* FastBitVector.cpp */; };
                0F824A681B7443A0002E345D /* ParkingLot.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F824A641B7443A0002E345D /* ParkingLot.cpp */; };
+               0F8E85DB1FD485B000691889 /* CountingLock.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F8E85DA1FD485B000691889 /* CountingLock.cpp */; };
                0F8F2B92172E0103007DBDA5 /* CompilationThread.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F8F2B8F172E00F0007DBDA5 /* CompilationThread.cpp */; };
                0F9D3360165DBA73005AD387 /* FilePrintStream.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F9D335B165DBA73005AD387 /* FilePrintStream.cpp */; };
                0F9D3362165DBA73005AD387 /* PrintStream.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F9D335D165DBA73005AD387 /* PrintStream.cpp */; };
                0F30BA8D1E78708E002CA847 /* LoggingHashMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LoggingHashMap.h; sourceTree = "<group>"; };
                0F30BA8E1E78708E002CA847 /* LoggingHashSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LoggingHashSet.h; sourceTree = "<group>"; };
                0F30BA8F1E78708E002CA847 /* LoggingHashTraits.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LoggingHashTraits.h; sourceTree = "<group>"; };
+               0F30CB581FCDF133004B5323 /* ConcurrentPtrHashSet.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ConcurrentPtrHashSet.cpp; sourceTree = "<group>"; };
+               0F30CB591FCDF133004B5323 /* ConcurrentPtrHashSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConcurrentPtrHashSet.h; sourceTree = "<group>"; };
                0F31DD701F1308BC0072EB4A /* LockAlgorithmInlines.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = LockAlgorithmInlines.h; sourceTree = "<group>"; };
                0F348C7D1F47AA9D003CFEF2 /* StringVector.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = StringVector.h; sourceTree = "<group>"; };
                0F3501631BB258C800F0A2A3 /* WeakRandom.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WeakRandom.h; sourceTree = "<group>"; };
                0F824A641B7443A0002E345D /* ParkingLot.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ParkingLot.cpp; sourceTree = "<group>"; };
                0F824A651B7443A0002E345D /* ParkingLot.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ParkingLot.h; sourceTree = "<group>"; };
                0F87105916643F190090B0AD /* RawPointer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RawPointer.h; sourceTree = "<group>"; };
+               0F8E85DA1FD485B000691889 /* CountingLock.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CountingLock.cpp; sourceTree = "<group>"; };
                0F8F2B8F172E00F0007DBDA5 /* CompilationThread.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = CompilationThread.cpp; sourceTree = "<group>"; };
                0F8F2B90172E00F0007DBDA5 /* CompilationThread.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = CompilationThread.h; sourceTree = "<group>"; };
                0F8F2B9B172F2594007DBDA5 /* ConversionMode.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = ConversionMode.h; sourceTree = "<group>"; };
                0F9D335C165DBA73005AD387 /* FilePrintStream.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FilePrintStream.h; sourceTree = "<group>"; };
                0F9D335D165DBA73005AD387 /* PrintStream.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = PrintStream.cpp; sourceTree = "<group>"; };
                0F9D335E165DBA73005AD387 /* PrintStream.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PrintStream.h; sourceTree = "<group>"; };
+               0F9DAA041FD1C37B0079C5B2 /* StackShot.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StackShot.h; sourceTree = "<group>"; };
+               0F9DAA051FD1C37B0079C5B2 /* StackShotProfiler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StackShotProfiler.h; sourceTree = "<group>"; };
                0FB14E18180FA218009B6B4D /* Bag.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Bag.h; sourceTree = "<group>"; };
                0FB14E1A1810E1DA009B6B4D /* BagToHashMap.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = BagToHashMap.h; sourceTree = "<group>"; };
                0FB317C31C488001007E395A /* SystemTracing.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SystemTracing.h; sourceTree = "<group>"; };
                0FED67B51B22D4D80066CE15 /* TinyPtrSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TinyPtrSet.h; sourceTree = "<group>"; };
                0FF4B4C41E88939C00DBBE86 /* Liveness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Liveness.h; sourceTree = "<group>"; };
                0FF860941BCCBD740045127F /* PointerComparison.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PointerComparison.h; sourceTree = "<group>"; };
+               0FFBCBFA1FD37E0F0072AAF0 /* CountingLock.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CountingLock.h; sourceTree = "<group>"; };
                0FFF19DA1BB334EB00886D91 /* ParallelHelperPool.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ParallelHelperPool.cpp; sourceTree = "<group>"; };
                0FFF19DB1BB334EB00886D91 /* ParallelHelperPool.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ParallelHelperPool.h; sourceTree = "<group>"; };
                14022F4018F5C3FC007FF0EB /* libbmalloc.a */ = {isa = PBXFileReference; lastKnownFileType = archive.ar; path = libbmalloc.a; sourceTree = BUILT_PRODUCTS_DIR; };
                                0F8F2B90172E00F0007DBDA5 /* CompilationThread.h */,
                                A8A47270151A825A004123FF /* Compiler.h */,
                                46BA9EAB1F4CD61E009A2BBC /* CompletionHandler.h */,
+                               0F30CB581FCDF133004B5323 /* ConcurrentPtrHashSet.cpp */,
+                               0F30CB591FCDF133004B5323 /* ConcurrentPtrHashSet.h */,
                                0FDB698D1B7C643A000C1078 /* Condition.h */,
+                               0FFBCBFA1FD37E0F0072AAF0 /* CountingLock.h */,
+                               0F8E85DA1FD485B000691889 /* CountingLock.cpp */,
                                E38C41261EB4E0680042957D /* CPUTime.cpp */,
                                E38C41271EB4E0680042957D /* CPUTime.h */,
                                515F794B1CFC9F4A00CCED93 /* CrossThreadCopier.cpp */,
                                A8A4730D151A825B004123FF /* Spectrum.h */,
                                A8A4730E151A825B004123FF /* StackBounds.cpp */,
                                A8A4730F151A825B004123FF /* StackBounds.h */,
+                               0F9DAA041FD1C37B0079C5B2 /* StackShot.h */,
+                               0F9DAA051FD1C37B0079C5B2 /* StackShotProfiler.h */,
                                FEDACD3B1630F83F00C69634 /* StackStats.cpp */,
                                FEDACD3C1630F83F00C69634 /* StackStats.h */,
                                313EDEC9778E49C9BEA91CFC /* StackTrace.cpp */,